WorldWideScience

Sample records for interior reconstruction algorithm

  1. Reconstruction of a uniform star object from interior x-ray data: uniqueness, stability and algorithm

    International Nuclear Information System (INIS)

    Van Gompel, Gert; Batenburg, K Joost; Defrise, Michel

    2009-01-01

    In this paper we consider the problem of reconstructing a two-dimensional star-shaped object of uniform density from truncated projections of the object. In particular, we prove that such an object is uniquely determined by its parallel projections sampled over a full π angular range with a detector that covers only an interior field of view, even if the density of the object is not known a priori. We analyze the stability of this reconstruction problem and propose a reconstruction algorithm. Simulation experiments demonstrate that the algorithm is capable of reconstructing a star-shaped object from interior data, even if the interior region is much smaller than the object itself. In addition, we present results for the recently proposed heuristic reconstruction algorithm DART. The heuristic method is shown to yield accurate reconstructions if the density is known in advance, and to be very stable in the presence of noisy projection data. Finally, the performance of the DBP and DART algorithms is illustrated for the reconstruction of real micro-CT data of a diamond.

  2. SU-E-I-73: Clinical Evaluation of CT Image Reconstructed Using Interior Tomography

    International Nuclear Information System (INIS)

    Zhang, J; Ge, G; Winkler, M; Cong, W; Wang, G

    2014-01-01

    Purpose: Radiation dose reduction has been a long-standing challenge in CT imaging of obese patients. Recent advances in interior tomography (reconstruction of an interior region of interest (ROI) from line integrals associated only with paths through the ROI) promise significant radiation dose reduction without compromising image quality. This study investigates the application of this technique in CT imaging by evaluating the quality of images reconstructed from patient data. Methods: Projection data were obtained directly from patients who underwent CT examinations on a dual source CT scanner (DSCT). The two detectors of the DSCT acquired projection data simultaneously: one detector provided projection data for the full field of view (FOV, 50 cm), while the other provided truncated projection data for a FOV of 26 cm. Full-FOV CT images were reconstructed using both filtered back projection (FBP) and an iterative algorithm, while an interior tomography algorithm was implemented to reconstruct ROI images. For comparison, FBP was also used to reconstruct ROI images. Reconstructed CT images were evaluated by radiologists and compared with the images from the CT scanner. Results: The reconstructed ROI images were in excellent agreement with the truth inside the ROI, obtained from the CT scanner images, and the detailed features in the ROI were quantitatively accurate. The radiologists' evaluation shows that CT images reconstructed with interior tomography meet diagnostic requirements. Radiation dose may be reduced by up to 50% using interior tomography, depending on patient size. Conclusion: This study shows that interior tomography can be readily employed in CT imaging for radiation dose reduction. It may be especially useful in imaging obese patients, whose subcutaneous tissue is less clinically relevant but may significantly increase radiation dose.

  3. Interior tomography in microscopic CT with image reconstruction constrained by full field of view scan at low spatial resolution

    Science.gov (United States)

    Luo, Shouhua; Shen, Tao; Sun, Yi; Li, Jing; Li, Guang; Tang, Xiangyang

    2018-04-01

    In high resolution (microscopic) CT applications, the scan field of view should cover the entire specimen or sample to allow complete data acquisition and image reconstruction. However, truncation may occur in the projection data and results in artifacts in the reconstructed images. In this study, we propose a low resolution image constrained reconstruction algorithm (LRICR) for high resolution interior tomography in microscopic CT. In general, multi-resolution acquisition based methods can solve the data truncation problem if the projection data acquired at low resolution are used to fill in the truncated projection data acquired at high resolution. However, most existing methods place quite strict restrictions on the data acquisition geometry, which greatly limits their utility in practice. In the proposed LRICR algorithm, full and partial data acquisitions (scans) are carried out at low and high resolution, respectively. Using the image reconstructed from the sparse low resolution projection data as the prior, a high resolution microscopic image is reconstructed from the truncated high resolution projection data. Two synthesized digital phantoms, a raw bamboo culm, and a specimen of mouse femur were utilized to evaluate and verify the performance of the proposed LRICR algorithm. Compared with the conventional TV minimization based algorithm and the multi-resolution scout-reconstruction algorithm, the proposed LRICR algorithm significantly reduces the artifacts caused by data truncation, providing a practical solution for high quality and reliable interior tomography in microscopic CT applications.

  4. Arc-Search Infeasible Interior-Point Algorithm for Linear Programming

    OpenAIRE

    Yang, Yaguang

    2014-01-01

    Mehrotra's algorithm has been the most successful infeasible interior-point algorithm for linear programming since 1990. Most popular interior-point software packages for linear programming are based on Mehrotra's algorithm. This paper proposes an alternative, an arc-search infeasible interior-point algorithm. We demonstrate, by testing Netlib problems and comparing the results obtained by the arc-search infeasible interior-point algorithm and Mehrotra's algorithm, that the propo...

  5. An interior-point method for total variation regularized positron emission tomography image reconstruction

    Science.gov (United States)

    Bai, Bing

    2012-03-01

    There has been much recent work on total variation (TV) regularized tomographic image reconstruction. Much of it uses gradient-based optimization algorithms with a differentiable approximation of the TV functional. In this paper we apply TV regularization to positron emission tomography (PET) image reconstruction. We reconstruct the PET image in a Bayesian framework, using a Poisson noise model and a TV prior functional. The original optimization problem is transformed into an equivalent problem with inequality constraints by adding auxiliary variables. We then use an interior point method with logarithmic barrier functions to solve the constrained optimization problem. In this method, a series of points approaching the solution from inside the feasible region is found by solving a sequence of subproblems characterized by an increasing positive parameter. We use a preconditioned conjugate gradient (PCG) algorithm to solve the subproblems directly. The nonnegativity constraint is enforced by a bend line search. The exact expression of the TV functional is used in our calculations. Simulation results show that the algorithm converges quickly and that convergence is insensitive to the values of the regularization and reconstruction parameters.
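
    The log-barrier scheme described in this abstract is straightforward to prototype. The sketch below is a minimal NumPy illustration, not the paper's PET code: it applies the same idea, minimizing a sequence of barrier-augmented subproblems with an increasing parameter t, to nonnegativity-constrained least squares standing in for the Poisson likelihood. The function name, step-size rule, and parameter values are assumptions made for the example.

        import numpy as np

        def log_barrier_nonneg_ls(A, b, t0=1.0, mu=10.0, outer=8, inner=200):
            # Minimize 0.5*||Ax - b||^2 subject to x >= 0 by minimizing
            # phi_t(x) = 0.5*||Ax - b||^2 - (1/t)*sum(log(x)) for growing t.
            n = A.shape[1]
            x = np.ones(n)                            # strictly feasible start
            lr = 1.0 / np.linalg.norm(A, 2) ** 2      # gradient step size
            t = t0
            for _ in range(outer):                    # follow the central path
                for _ in range(inner):
                    grad = A.T @ (A @ x - b) - (1.0 / t) / x
                    x_new = x - lr * grad
                    x = np.where(x_new > 0, x_new, 0.5 * x)  # stay interior
                t *= mu
            return x

        rng = np.random.default_rng(0)
        A = rng.standard_normal((30, 10))
        b = A @ np.abs(rng.standard_normal(10))
        print(log_barrier_nonneg_ls(A, b).round(3))

    Each outer pass tightens the barrier by the factor mu, mirroring the "increasing positive parameter" mentioned in the abstract.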

  6. Scout-view assisted interior micro-CT

    International Nuclear Information System (INIS)

    Sharma, Kriti Sen; Narayanan, Shree; Agah, Masoud; Holzner, Christian; Vasilescu, Dragoş M; Jin, Xin; Hoffman, Eric A; Yu, Hengyong; Wang, Ge

    2013-01-01

    Micro computed tomography (micro-CT) is a widely-used imaging technique. A challenge of micro-CT is to quantitatively reconstruct a sample larger than the field-of-view (FOV) of the detector. This scenario is characterized by truncated projections and associated image artifacts. However, for such truncated scans, a low resolution scout scan with an increased FOV is frequently acquired so as to position the sample properly. This study shows that the otherwise discarded scout scans can provide sufficient additional information to uniquely and stably reconstruct the interior region of interest. Two interior reconstruction methods are designed to utilize the multi-resolution data without significant computational overhead. While most previous studies used numerically truncated global projections as interior data, this study uses truly hybrid scans where global and interior scans were carried out at different resolutions. Additionally, owing to the lack of standard interior micro-CT phantoms, we designed and fabricated novel interior micro-CT phantoms for this study to provide means of validation for our algorithms. Finally, two characteristic samples from separate studies were scanned to show the effect of our reconstructions. The presented methods show significant improvements over existing reconstruction algorithms. (paper)

  7. Interior point algorithms theory and analysis

    CERN Document Server

    Ye, Yinyu

    2011-01-01

    The first comprehensive review of the theory and practice of one of today's most powerful optimization techniques. The explosive growth of research into and development of interior point algorithms over the past two decades has significantly improved the complexity of linear programming and yielded some of today's most sophisticated computing techniques. This book offers a comprehensive and thorough treatment of the theory, analysis, and implementation of this powerful computational tool. Interior Point Algorithms provides detailed coverage of all basic and advanced aspects of the subject.

  8. The Homogeneous Interior-Point Algorithm: Nonsymmetric Cones, Warmstarting, and Applications

    DEFF Research Database (Denmark)

    Skajaa, Anders

    ...algorithms for these problems is still limited. The goal of this thesis is to investigate and shed light on two computational aspects of homogeneous interior-point algorithms for convex conic optimization. The first part studies the possibility of devising a homogeneous interior-point method aimed at solving problems involving constraints that require nonsymmetric cones in their formulation. The second part studies the possibility of warmstarting the homogeneous interior-point algorithm for conic problems. The main outcome of the first part is the introduction of a completely new homogeneous interior-point algorithm designed to solve nonsymmetric convex conic optimization problems. The algorithm is presented in detail and then analyzed. We prove its convergence and complexity. From a theoretical viewpoint, it is fully competitive with other algorithms and from a practical viewpoint, we show that it holds lots...

  9. Optimal Power Flow by Interior Point and Non Interior Point Modern Optimization Algorithms

    Directory of Open Access Journals (Sweden)

    Marcin Połomski

    2013-03-01

    Full Text Available The idea of optimal power flow (OPF) is to determine the optimal settings of control variables while respecting various constraints; in general it is related to power system operation and planning optimization problems. A vast number of optimization methods have been applied to solve the OPF problem, but their performance is highly dependent on the size of the power system being optimized. Recent development of OPF has tracked significant progress both in numerical optimization techniques and in the application of computer techniques. In recent years, the application of interior point methods to the OPF problem has received great attention, because interior point (IP) methods are among the fastest algorithms and are well suited to large-scale nonlinear optimization problems. This paper presents a primal-dual interior point based optimal power flow algorithm and a new variant of a non interior point algorithm applied to the optimal power flow problem. The described algorithms were implemented in custom software. The experiments show the usefulness of the computational software and the implemented algorithms for solving the optimal power flow problem, including system models comparable in size to the National Power System.

  10. Improved iterative image reconstruction algorithm for the exterior problem of computed tomography

    International Nuclear Information System (INIS)

    Guo, Yumeng; Zeng, Li

    2017-01-01

    In industrial applications that are limited by the angle of a fan-beam and the length of a detector, the exterior problem of computed tomography (CT) uses only the projection data that correspond to the external annulus of the objects to reconstruct an image. Because the reconstructions are not affected by the projection data that correspond to the interior of the objects, the exterior problem is widely applied to detect cracks in the outer wall of large-sized objects, such as in-service pipelines. However, image reconstruction in the exterior problem is still a challenging problem due to truncated projection data and beam-hardening, both of which can lead to distortions and artifacts. Thus, developing an effective algorithm and adopting a scanning trajectory suited for the exterior problem may be valuable. In this study, an improved iterative algorithm that combines total variation minimization (TVM) with a region scalable fitting (RSF) model was developed for a unilateral off-centered scanning trajectory and can be utilized to inspect large-sized objects for defects. Experiments involving simulated phantoms and real projection data were conducted to validate the practicality of our algorithm. Furthermore, comparative experiments show that our algorithm outperforms others in suppressing the artifacts caused by truncated projection data and beam-hardening.

  11. Improved iterative image reconstruction algorithm for the exterior problem of computed tomography

    Energy Technology Data Exchange (ETDEWEB)

    Guo, Yumeng [Chongqing University, College of Mathematics and Statistics, Chongqing 401331 (China); Chongqing University, ICT Research Center, Key Laboratory of Optoelectronic Technology and System of the Education Ministry of China, Chongqing 400044 (China); Zeng, Li, E-mail: drlizeng@cqu.edu.cn [Chongqing University, College of Mathematics and Statistics, Chongqing 401331 (China); Chongqing University, ICT Research Center, Key Laboratory of Optoelectronic Technology and System of the Education Ministry of China, Chongqing 400044 (China)

    2017-01-11

    In industrial applications that are limited by the angle of a fan-beam and the length of a detector, the exterior problem of computed tomography (CT) uses only the projection data that correspond to the external annulus of the objects to reconstruct an image. Because the reconstructions are not affected by the projection data that correspond to the interior of the objects, the exterior problem is widely applied to detect cracks in the outer wall of large-sized objects, such as in-service pipelines. However, image reconstruction in the exterior problem is still a challenging problem due to truncated projection data and beam-hardening, both of which can lead to distortions and artifacts. Thus, developing an effective algorithm and adopting a scanning trajectory suited for the exterior problem may be valuable. In this study, an improved iterative algorithm that combines total variation minimization (TVM) with a region scalable fitting (RSF) model was developed for a unilateral off-centered scanning trajectory and can be utilized to inspect large-sized objects for defects. Experiments involving simulated phantoms and real projection data were conducted to validate the practicality of our algorithm. Furthermore, comparative experiments show that our algorithm outperforms others in suppressing the artifacts caused by truncated projection data and beam-hardening.

  12. Meaning of Interior Tomography

    Science.gov (United States)

    Wang, Ge; Yu, Hengyong

    2013-01-01

    The classic imaging geometry for computed tomography is for collection of un-truncated projections and reconstruction of a global image, with the Fourier transform, which is intrinsically non-local, as the theoretical foundation. Recently, interior tomography research has led to theoretically exact relationships between localities in the projection and image spaces and to practically promising reconstruction algorithms. Initially, interior tomography was developed for x-ray computed tomography. Then, it was elevated to a general imaging principle. Finally, a novel framework known as "omni-tomography" is being developed for the grand fusion of multiple imaging modalities, allowing tomographic synchrony of diversified features. PMID:23912256

  13. Interior region-of-interest reconstruction using a small, nearly piecewise constant subregion

    International Nuclear Information System (INIS)

    Taguchi, Katsuyuki; Xu Jingyan; Srivastava, Somesh; Tsui, Benjamin M. W.; Cammin, Jochen; Tang Qiulin

    2011-01-01

    Purpose: To develop a method to reconstruct an interior region-of-interest (ROI) image with sufficient accuracy using differentiated backprojection (DBP) projection onto convex sets (POCS) [H. Kudo et al., "Tiny a priori knowledge solves the interior problem in computed tomography", Phys. Med. Biol. 53, 2207-2231 (2008)] and the tiny knowledge that there exists a nearly piecewise constant subregion. Methods: The proposed method first employs filtered backprojection to reconstruct an image on which a tiny region P with a small variation in pixel values is identified inside the ROI. Total variation minimization [H. Yu and G. Wang, "Compressed sensing based interior tomography", Phys. Med. Biol. 54, 2791-2805 (2009); W. Han et al., "A general total variation minimization theorem for compressed sensing based interior tomography", Int. J. Biomed. Imaging 2009, Article 125871 (2009)] is then employed to obtain the pixel values in the subregion P, which serve as a priori knowledge in the next step. Finally, DBP-POCS is performed to reconstruct f(x,y) inside the ROI. Clinical data and the reconstructed image obtained by an x-ray computed tomography system (SOMATOM Definition; Siemens Healthcare) were used to validate the proposed method. The detector covers an object with a diameter of ∼500 mm. The projection data were truncated either moderately, limiting the detector coverage to a 350 mm diameter region of the object, or severely, covering a 199 mm diameter region. Images were reconstructed using the proposed method. Results: The proposed method provided ROI images with correct pixel values in all areas except near the edge of the ROI. The coefficient of variation, i.e., the root mean square error divided by the mean pixel value, was less than 2.0% in the moderate and 4.5% in the severe truncation case, respectively, except near the boundary of the ROI. Conclusions: The proposed method allows for reconstructing interior ROI images with sufficient accuracy with the tiny knowledge that...

  14. Exact iterative reconstruction for the interior problem

    International Nuclear Information System (INIS)

    Zeng, Gengsheng L; Gullberg, Grant T

    2009-01-01

    There is a trend in single photon emission computed tomography (SPECT) toward small, dedicated imaging systems. For example, many companies are developing small dedicated cardiac SPECT systems with different designs. These dedicated systems have a smaller field of view (FOV) than a full-size clinical system, so data truncation has become the norm rather than the exception. It is therefore important to develop region of interest (ROI) reconstruction algorithms that use truncated data, and this paper is a stepping stone in that direction. It shows that common generic iterative image reconstruction algorithms are able to exactly reconstruct the ROI under the conditions that the convex ROI is fully sampled and the image value in a sub-region within the ROI is known. If the ROI includes a sub-region that is outside the patient body, then these conditions are easily satisfied.

  15. Algorithms for reconstructing images for industrial applications

    International Nuclear Information System (INIS)

    Lopes, R.T.; Crispim, V.R.

    1986-01-01

    Several algorithms for reconstructing objects from their projections are being studied in our laboratory for industrial applications. Such algorithms are useful for locating the position and shape of regions of different material composition in an object. A comparative study of two algorithms is made. The two investigated algorithms are MART (Multiplicative Algebraic Reconstruction Technique) and the convolution method. The comparison is carried out from the point of view of the quality of the reconstructed image, the number of views, and the cost. (Author) [pt

  16. Impact of Reconstruction Algorithms on CT Radiomic Features of Pulmonary Tumors: Analysis of Intra- and Inter-Reader Variability and Inter-Reconstruction Algorithm Variability.

    Science.gov (United States)

    Kim, Hyungjin; Park, Chang Min; Lee, Myunghee; Park, Sang Joon; Song, Yong Sub; Lee, Jong Hyuk; Hwang, Eui Jin; Goo, Jin Mo

    2016-01-01

    To identify the impact of reconstruction algorithms on CT radiomic features of pulmonary tumors and to reveal and compare the intra-reader, inter-reader, and inter-reconstruction-algorithm variability of each feature. Forty-two patients (M:F = 19:23; mean age, 60.43±10.56 years) with 42 pulmonary tumors (22.56±8.51 mm) underwent contrast-enhanced CT scans, which were reconstructed with filtered back projection and a commercial iterative reconstruction algorithm (levels 3 and 5). Two readers independently segmented the whole tumor volume. Fifteen radiomic features were extracted and compared among the reconstruction algorithms. Intra- and inter-reader variability and inter-reconstruction-algorithm variability were calculated using coefficients of variation (CVs) and then compared. Among the 15 features, 5 first-order tumor intensity features and 4 gray level co-occurrence matrix (GLCM)-based features showed statistically significant differences among the reconstruction algorithms. As for the variability, effective diameter, sphericity, entropy, and GLCM entropy were the most robust features (CV≤5%). Inter-reader variability was larger than intra-reader or inter-reconstruction-algorithm variability for 9 features. However, for entropy, homogeneity, and 4 GLCM-based features, inter-reconstruction-algorithm variability was significantly greater than inter-reader variability. Inter-reconstruction-algorithm variability was greater than inter-reader variability for entropy, homogeneity, and GLCM-based features.

  17. UV Reconstruction Algorithm And Diurnal Cycle Variability

    Science.gov (United States)

    Curylo, Aleksander; Litynska, Zenobia; Krzyscin, Janusz; Bogdanska, Barbara

    2009-03-01

    UV reconstruction is a method of estimating surface UV using available actinometric and aerological measurements. UV reconstruction is necessary for the study of long-term UV change: a typical series of UV measurements is not longer than 15 years, which is too short for trend estimation. The essential problem in a reconstruction algorithm is a good parameterization of clouds. In our previous algorithm we used an empirical relation between the Cloud Modification Factor (CMF) in global radiation and the CMF in UV, where the CMF is defined as the ratio between measured and modelled irradiances. Clear-sky irradiance was calculated with a solar radiative transfer model. In the proposed algorithm, the time variability of global radiation during the diurnal cycle is used as an additional source of information. To elaborate the improved reconstruction algorithm, relevant data from Legionowo [52.4 N, 21.0 E, 96 m a.s.l.], Poland were collected with the following instruments: a NILU-UV multi-channel radiometer, a Kipp&Zonen pyranometer, and radiosondes providing profiles of ozone, humidity and temperature. The proposed algorithm has been used to reconstruct UV at four Polish sites, Mikolajki, Kolobrzeg, Warszawa-Bielany and Zakopane, since the early 1960s. Krzyscin's reconstruction of total ozone has been used in the calculations.
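
    The CMF bookkeeping described above is simple to express in code. In this toy sketch the mapping f from the global-radiation CMF to the UV CMF is a hypothetical placeholder, not the authors' fitted empirical relation:

        def reconstruct_uv(glob_measured, glob_clear, uv_clear,
                           f=lambda cmf: cmf ** 1.2):
            # CMF = measured / clear-sky modelled global irradiance; the UV
            # estimate rescales the modelled clear-sky UV by the mapped CMF.
            cmf_global = glob_measured / glob_clear
            return f(cmf_global) * uv_clear

        # e.g. 300 of 500 units of clear-sky global radiation, 40 units of clear-sky UV
        print(reconstruct_uv(300.0, 500.0, 40.0))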

  18. Wind reconstruction algorithm for Viking Lander 1

    Science.gov (United States)

    Kynkäänniemi, Tuomas; Kemppinen, Osku; Harri, Ari-Matti; Schmidt, Walter

    2017-06-01

    The wind measurement sensors of Viking Lander 1 (VL1) were only fully operational for the first 45 sols of the mission. We have developed an algorithm for reconstructing the wind measurement data after the wind measurement sensor failures. The algorithm for wind reconstruction enables the processing of wind data during the complete VL1 mission. The heater element of the quadrant sensor, which provided auxiliary measurement for wind direction, failed during the 45th sol of the VL1 mission. Additionally, one of the wind sensors of VL1 broke down during sol 378. Regardless of the failures, it was still possible to reconstruct the wind measurement data, because the failed components of the sensors did not prevent the determination of the wind direction and speed, as some of the components of the wind measurement setup remained intact for the complete mission. This article concentrates on presenting the wind reconstruction algorithm and methods for validating the operation of the algorithm. The algorithm enables the reconstruction of wind measurements for the complete VL1 mission. The amount of available sols is extended from 350 to 2245 sols.

  19. Wind reconstruction algorithm for Viking Lander 1

    Directory of Open Access Journals (Sweden)

    T. Kynkäänniemi

    2017-06-01

    Full Text Available The wind measurement sensors of Viking Lander 1 (VL1) were only fully operational for the first 45 sols of the mission. We have developed an algorithm for reconstructing the wind measurement data after the wind measurement sensor failures. The algorithm for wind reconstruction enables the processing of wind data during the complete VL1 mission. The heater element of the quadrant sensor, which provided auxiliary measurement for wind direction, failed during the 45th sol of the VL1 mission. Additionally, one of the wind sensors of VL1 broke down during sol 378. Regardless of the failures, it was still possible to reconstruct the wind measurement data, because the failed components of the sensors did not prevent the determination of the wind direction and speed, as some of the components of the wind measurement setup remained intact for the complete mission. This article concentrates on presenting the wind reconstruction algorithm and methods for validating the operation of the algorithm. The algorithm enables the reconstruction of wind measurements for the complete VL1 mission. The amount of available sols is extended from 350 to 2245 sols.

  20. A fast sparse reconstruction algorithm for electrical tomography

    International Nuclear Information System (INIS)

    Zhao, Jia; Xu, Yanbin; Tan, Chao; Dong, Feng

    2014-01-01

    Electrical tomography (ET) has been widely investigated due to its advantages of being non-radiative, low-cost and high-speed. However, image reconstruction in ET is a nonlinear and ill-posed inverse problem and the imaging results are easily affected by measurement noise. A sparse reconstruction algorithm based on L1 regularization is robust to noise and consequently provides high-quality reconstructed images. In this paper, the sparse reconstruction by separable approximation algorithm (SpaRSA) is extended to solve the ET inverse problem. The algorithm is competitive with the fastest state-of-the-art algorithms in solving the standard L2-L1 problem. However, it is computationally expensive when the dimension of the matrix is large. To further improve the calculation speed of solving inverse problems, a projection method based on the Krylov subspace is employed and combined with the SpaRSA algorithm. The proposed algorithm is tested with image reconstruction of electrical resistance tomography (ERT). Both simulation and experimental results demonstrate that the proposed method can reduce the computational time and improve the noise robustness of the image reconstruction. (paper)
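
    SpaRSA accelerates a soft-thresholded gradient step by choosing step sizes with a Barzilai-Borwein rule. The core iteration it builds on is plain ISTA for the L2-L1 objective, sketched here in NumPy as a generic illustration rather than the paper's implementation:

        import numpy as np

        def soft(v, tau):
            # Soft thresholding: the proximal operator of the L1 norm.
            return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

        def ista(A, b, lam, iters=500):
            # Minimize 0.5*||Ax - b||^2 + lam*||x||_1 by proximal gradient steps.
            L = np.linalg.norm(A, 2) ** 2    # Lipschitz constant of the gradient
            x = np.zeros(A.shape[1])
            for _ in range(iters):
                x = soft(x - A.T @ (A @ x - b) / L, lam / L)
            return x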

  1. Selected event reconstruction algorithms for the CBM experiment at FAIR

    International Nuclear Information System (INIS)

    Lebedev, Semen; Höhne, Claudia; Lebedev, Andrey; Ososkov, Gennady

    2014-01-01

    Development of fast and efficient event reconstruction algorithms is an important and challenging task in the Compressed Baryonic Matter (CBM) experiment at the future FAIR facility. The event reconstruction algorithms have to process terabytes of input data produced in particle collisions. In this contribution, several event reconstruction algorithms are presented. Optimization of the algorithms for the following CBM detectors is discussed: the Ring Imaging Cherenkov (RICH) detector, the Transition Radiation Detector (TRD) and the Muon Chamber (MUCH). The ring reconstruction algorithm in the RICH is discussed. In the TRD and MUCH, track reconstruction algorithms are based on track-following and Kalman filter methods. All algorithms were significantly optimized to achieve maximum speed-up and minimum memory consumption. The results obtained show that a significant speed-up factor was achieved for all algorithms while the reconstruction efficiency remains high.

  2. Multichannel algorithm for fast 3D reconstruction

    International Nuclear Information System (INIS)

    Rodet, Thomas; Grangeat, Pierre; Desbat, Laurent

    2002-01-01

    Some recent medical imaging applications such as functional imaging (PET and SPECT) or interventional imaging (CT fluoroscopy) involve increasing amounts of data. In order to reduce the image reconstruction time, we develop a new fast 3D reconstruction algorithm based on a divide-and-conquer approach. The proposed multichannel algorithm performs an indirect frequential subband decomposition of the image f to be reconstructed (f = Σ_j f_j) through filtering of the projections Rf. The subband images f_j are reconstructed on a downsampled grid without loss of information. In order to reduce the computation time, we do not backproject the null filtered projections and we downsample the number of projections according to the Shannon conditions associated with each subband image. Our algorithm is based on filtering and backprojection operators. Using the same implementations of these basic operators, our approach is three and a half times faster than a classical FBP algorithm for a 2D 512x512 image and six times faster for a 3D 32x512x512 image. (author)

  3. A trust region interior point algorithm for optimal power flow problems

    Energy Technology Data Exchange (ETDEWEB)

    Wang Min [Hefei University of Technology (China). Dept. of Electrical Engineering and Automation; Liu Shengsong [Jiangsu Electric Power Dispatching and Telecommunication Company (China). Dept. of Automation

    2005-05-01

    This paper presents a new algorithm that uses the trust region interior point method to solve nonlinear optimal power flow (OPF) problems. The OPF problem is solved by a primal/dual interior point method with multiple centrality corrections as a sequence of linearized trust region sub-problems. It is the trust region that controls the linear step size and ensures the validity of the linear model. The convergence of the algorithm is improved through modification of the trust region sub-problem. Numerical results for standard IEEE systems and two realistic networks ranging in size from 14 to 662 buses are presented. The computational results show that the proposed algorithm is very effective for optimal power flow applications and compares favorably with the successive linear programming (SLP) method. Comparison with the predictor/corrector primal/dual interior point (PCPDIP) method is also made to demonstrate the superiority of the multiple centrality corrections technique. (author)

  4. Advanced reconstruction algorithms for electron tomography: From comparison to combination

    Energy Technology Data Exchange (ETDEWEB)

    Goris, B. [EMAT, University of Antwerp, Groenenborgerlaan 171, B-2020 Antwerp (Belgium); Roelandts, T. [Vision Lab, University of Antwerp, Universiteitsplein 1, B-2610 Wilrijk (Belgium); Batenburg, K.J. [Vision Lab, University of Antwerp, Universiteitsplein 1, B-2610 Wilrijk (Belgium); Centrum Wiskunde and Informatica, Science Park 123, NL-1098XG Amsterdam (Netherlands); Heidari Mezerji, H. [EMAT, University of Antwerp, Groenenborgerlaan 171, B-2020 Antwerp (Belgium); Bals, S., E-mail: sara.bals@ua.ac.be [EMAT, University of Antwerp, Groenenborgerlaan 171, B-2020 Antwerp (Belgium)

    2013-04-15

    In this work, the simultaneous iterative reconstruction technique (SIRT), the total variation minimization (TVM) reconstruction technique and the discrete algebraic reconstruction technique (DART) for electron tomography are compared and their advantages and disadvantages are discussed. Furthermore, we describe how the result of a three dimensional (3D) reconstruction based on TVM can provide objective information that is needed as the input for a DART reconstruction. This approach results in a tomographic reconstruction whose segmentation is carried out in an objective manner. - Highlights: ► A comparative study between different reconstruction algorithms for tomography is performed. ► Reconstruction algorithms that use prior knowledge about the specimen give superior results. ► One reconstruction algorithm can provide the prior knowledge for a second algorithm.

  5. Piecewise-Constant-Model-Based Interior Tomography Applied to Dentin Tubules

    Directory of Open Access Journals (Sweden)

    Peng He

    2013-01-01

    Full Text Available Dentin is a hierarchically structured biomineralized composite material, and dentin's tubules are difficult to study in situ. Nano-CT provides the requisite resolution, but the field of view typically contains only a few tubules. Using a plate-like specimen allows reconstruction of a volume containing specific tubules from a number of truncated projections, typically collected over an angular range of about 140°, which is practically accessible. Classical computed tomography (CT) theory cannot exactly reconstruct an object from truncated projections alone, let alone over a limited angular range. Recently, interior tomography was developed to reconstruct a region-of-interest (ROI) from truncated data in a theoretically exact fashion via total variation (TV) minimization, under the condition that the ROI is piecewise constant. In this paper, we employ a TV minimization interior tomography algorithm to reconstruct interior microstructures in dentin from truncated projections over a limited angular range. Compared to the filtered backprojection (FBP) reconstruction, our reconstruction method reduces noise and suppresses artifacts. Volume rendering confirms the merits of our method in terms of preserving the interior microstructure of the dentin specimen.

  6. A Primal-Dual Interior Point-Linear Programming Algorithm for MPC

    DEFF Research Database (Denmark)

    Edlund, Kristian; Sokoler, Leo Emil; Jørgensen, John Bagterp

    2009-01-01

    Constrained optimal control problems for linear systems with linear constraints and an objective function consisting of linear and l1-norm terms can be expressed as linear programs. We develop an efficient primal-dual interior point algorithm for the solution of such linear programs. The algorithm...
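
    For a standard-form LP (minimize c'x subject to Ax = b, x >= 0), a bare-bones primal-dual path-following iteration can be sketched as below. This is a didactic sketch with a dense normal-equations solve; it is not the thesis' MPC solver, which exploits the structure of the control problem.

        import numpy as np

        def pd_ipm_lp(A, b, c, iters=30, sigma=0.1):
            # Primal-dual path following for: min c@x s.t. A@x = b, x >= 0.
            m, n = A.shape
            x, s, y = np.ones(n), np.ones(n), np.zeros(m)
            for _ in range(iters):
                mu = x @ s / n                   # duality measure
                r_p = A @ x - b                  # primal residual
                r_d = A.T @ y + s - c            # dual residual
                r_c = x * s - sigma * mu         # relaxed complementarity
                d = x / s
                M = A @ (d[:, None] * A.T)       # normal-equations matrix
                dy = np.linalg.solve(M, -r_p - A @ (d * r_d - r_c / s))
                dx = d * (A.T @ dy + r_d) - r_c / s
                ds = -(r_c + s * dx) / x
                alpha = 1.0                      # damped step keeps x, s > 0
                for v, dv in ((x, dx), (s, ds)):
                    neg = dv < 0
                    if neg.any():
                        alpha = min(alpha, 0.9 * np.min(-v[neg] / dv[neg]))
                x, y, s = x + alpha * dx, y + alpha * dy, s + alpha * ds
            return x

        # max x1 + 2*x2 s.t. x1 + x2 <= 1, in standard form with a slack variable:
        A = np.array([[1.0, 1.0, 1.0]]); b = np.array([1.0])
        c = np.array([-1.0, -2.0, 0.0])
        print(pd_ipm_lp(A, b, c).round(4))       # approximately [0, 1, 0]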

  7. Scout-view assisted interior digital tomosynthesis (iDTS) based on compressed-sensing theory

    Science.gov (United States)

    Park, S. Y.; Kim, G. A.; Cho, H. S.; Seo, C. W.; Je, U. K.; Park, C. K.; Lim, H. W.; Kim, K. S.; Lee, D. Y.; Lee, H. W.; Kang, S. Y.; Park, J. E.; Woo, T. H.; Lee, M. S.

    2017-12-01

    Conventional digital tomosynthesis (DTS) based on filtered-backprojection (FBP) reconstruction requires a full field-of-view scan and relatively dense projections, which still results in a high dose for medical imaging purposes. In this work, to overcome these difficulties, we propose a new type of DTS examination, the so-called scout-view assisted interior DTS (iDTS), in which the x-ray beam covers only a small region-of-interest (ROI) containing the diagnostic target, while a few scout views are used in the reconstruction to add the information about the interior ROI that is otherwise absent in conventional iDTS reconstruction methods. We considered an effective iterative algorithm based on compressed-sensing theory, rather than an FBP-based algorithm, for more accurate iDTS reconstruction. We implemented the proposed algorithm, performed systematic simulations and experiments, and investigated the image characteristics. Using the proposed method, we successfully reconstructed iDTS images of substantially high accuracy and no truncation artifacts, preserving superior image homogeneity, edge sharpness, and in-plane spatial resolution.

  8. A combinational fast algorithm for image reconstruction

    International Nuclear Information System (INIS)

    Wu Zhongquan

    1987-01-01

    A combinational fast algorithm has been developed in order to increase the speed of reconstruction. First, an interpolation method based on B-spline functions is used in the image reconstruction. Next, the influence of the boundary conditions assumed here on the interpolation of filtered projections and on the image reconstruction is discussed. It is shown that these boundary conditions have almost no influence on the image in the central region of the image space, because the interpolation error rapidly decreases, by a factor of ten, in shifting two pixels from the edge toward the center. In addition, a fast algorithm for computing the detection angle has been used with the mentioned interpolation algorithm, and the cost of the detection-angle computation is reduced by a factor of two. The implementation results show that, at the same subjective and objective fidelity, the computational cost of interpolation using this algorithm is about one-twelfth that of the conventional algorithm.

  9. A very fast implementation of 2D iterative reconstruction algorithms

    DEFF Research Database (Denmark)

    Toft, Peter Aundal; Jensen, Peter James

    1996-01-01

    ...that iterative reconstruction algorithms can be implemented and run almost as fast as direct reconstruction algorithms. The method has been implemented in a software package that is available for free, providing reconstruction algorithms using ART, EM, and the Least Squares Conjugate Gradient Method...

  10. Research on compressive sensing reconstruction algorithm based on total variation model

    Science.gov (United States)

    Gao, Yu-xuan; Sun, Huayan; Zhang, Tinghua; Du, Lin

    2017-12-01

    Compressed sensing, which breaks through the Nyquist sampling theorem, provides a strong theoretical foundation for carrying out sampling and compression of image signals simultaneously. In imaging procedures that use compressed sensing theory, not only can the storage space be reduced, but the demand on detector resolution can also be greatly reduced. Using the sparsity of the image signal and solving the mathematical model of inverse reconstruction, super-resolution imaging can be realized. The reconstruction algorithm is the most critical part of compressed sensing and to a large extent determines the accuracy of the reconstructed image. A reconstruction algorithm based on the total variation (TV) model is well suited to the compressed reconstruction of two-dimensional images, and better edge information can be obtained. In order to verify the performance of the algorithm, we simulate and analyze the reconstruction results of the TV-based algorithm under different coding modes, to verify the stability of the algorithm. This paper also compares and analyzes typical reconstruction algorithms under the same coding mode. On the basis of the minimum total variation algorithm, an augmented Lagrangian term is added and the optimal value is solved by the alternating direction method. Experimental results show that the reconstruction algorithm has great advantages over traditional classical TV-based algorithms: at low measurement rates the target image can be recovered quickly and accurately.
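
    The "augmented Lagrangian plus alternating direction" step described above has a compact one-dimensional analogue. The sketch below solves TV denoising, minimize 0.5*||x - b||^2 + lam*||Dx||_1, by ADMM; it is an illustration of the technique, not the paper's two-dimensional reconstruction code.

        import numpy as np

        def tv_denoise_1d(b, lam, rho=1.0, iters=200):
            # ADMM for min 0.5*||x - b||^2 + lam*||D x||_1, D = forward differences.
            n = b.size
            D = (np.eye(n, k=1) - np.eye(n))[:-1]    # (n-1) x n difference operator
            z = np.zeros(n - 1)                      # auxiliary variable, z ~ D x
            u = np.zeros(n - 1)                      # scaled dual variable
            Q = np.eye(n) + rho * D.T @ D            # fixed x-update system matrix
            for _ in range(iters):
                x = np.linalg.solve(Q, b + rho * D.T @ (z - u))
                w = D @ x + u
                z = np.sign(w) * np.maximum(np.abs(w) - lam / rho, 0.0)  # shrinkage
                u = u + D @ x - z                    # dual update
            return x

        noisy = np.repeat([0.0, 1.0, 0.3], 50)
        noisy += 0.1 * np.random.default_rng(1).standard_normal(noisy.size)
        print(tv_denoise_1d(noisy, lam=0.5).round(2))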

  11. Versatility of the CFR algorithm for limited angle reconstruction

    International Nuclear Information System (INIS)

    Fujieda, I.; Heiskanen, K.; Perez-Mendez, V.

    1990-01-01

    The constrained Fourier reconstruction (CFR) algorithm and the iterative reconstruction-reprojection (IRR) algorithm are evaluated based on their accuracy for three types of limited angle reconstruction problems. The CFR algorithm performs better for problems such as X-ray CT imaging of a nuclear reactor core with one large data gap due to structural blocking of the source and detector pair. For gated heart imaging by X-ray CT, and for radioisotope distribution imaging by PET or SPECT using a polygonal array of gamma cameras with insensitive gaps between camera boundaries, the IRR algorithm has a slight advantage over the CFR algorithm, but the difference is not significant.

  12. A new iterative algorithm to reconstruct the refractive index.

    Science.gov (United States)

    Liu, Y J; Zhu, P P; Chen, B; Wang, J Y; Yuan, Q X; Huang, W X; Shu, H; Li, E R; Liu, X S; Zhang, K; Ming, H; Wu, Z Y

    2007-06-21

    The latest developments in x-ray imaging are associated with techniques based on phase contrast. However, the image reconstruction procedures demand significant improvements of the traditional methods, and/or new algorithms have to be introduced to take advantage of the high contrast and sensitivity of the new experimental techniques. In this letter, an improved iterative reconstruction algorithm based on the maximum likelihood expectation maximization technique is presented and discussed in order to reconstruct the distribution of the refractive index from data collected by an analyzer-based imaging setup. The technique probes the partial derivative of the refractive index with respect to an axis lying in the meridional plane and perpendicular to the propagation direction. Computer simulations confirm the reliability of the proposed algorithm. In addition, the comparison between an analytical reconstruction algorithm and the iterative method is discussed, together with the convergence characteristics of the latter algorithm. Finally, we show how the proposed algorithm may be applied to reconstruct the distribution of the refractive index of an epoxy cylinder containing small air bubbles about 300 μm in diameter.
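
    The maximum likelihood expectation maximization update underlying such algorithms has a well-known multiplicative form for Poisson data. A generic sketch with an abstract system matrix A follows; the authors' analyzer-based forward model, which probes a derivative of the refractive index, is not reproduced here.

        import numpy as np

        def mlem(A, y, iters=100):
            # Classical MLEM for Poisson data y ~ Poisson(A x):
            #   x <- x * (A^T (y / (A x))) / (A^T 1),  elementwise.
            x = np.ones(A.shape[1])
            sens = np.maximum(A.T @ np.ones(A.shape[0]), 1e-12)  # sensitivity
            for _ in range(iters):
                ratio = y / np.maximum(A @ x, 1e-12)   # guard against division by zero
                x *= (A.T @ ratio) / sens
            return x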

  13. Low dose reconstruction algorithm for differential phase contrast imaging.

    Science.gov (United States)

    Wang, Zhentian; Huang, Zhifeng; Zhang, Li; Chen, Zhiqiang; Kang, Kejun; Yin, Hongxia; Wang, Zhenchang; Stampanoni, Marco

    2011-01-01

    Differential phase contrast imaging computed tomography (DPCI-CT) is a novel x-ray inspection method that reconstructs the distribution of the refractive index rather than the attenuation coefficient in weakly absorbing samples. In this paper, we propose an iterative reconstruction algorithm for DPCI-CT which benefits from the new compressed sensing theory. We first realize a differential algebraic reconstruction technique (DART) by discretizing the projection process of differential phase contrast imaging into a linear partial derivative matrix. In this way the compressed sensing reconstruction problem of DPCI can be transformed into an already-solved problem of transmission CT. Our algorithm has the potential to reconstruct the refractive index distribution of the sample from highly undersampled projection data, and thus can significantly reduce the dose and inspection time. The proposed algorithm has been validated by numerical simulations and actual experiments.

  14. Duality reconstruction algorithm for use in electrical impedance tomography

    International Nuclear Information System (INIS)

    Abdullah, M.Z.; Dickin, F.J.

    1996-01-01

    A duality reconstruction algorithm for solving the inverse problem in electrical impedance tomography (EIT) is described. In this method, an algorithm based on the Geselowitz compensation (GC) theorem is used first to reconstruct an approximate version of the image. This is then fed as first-guess data to the modified Newton-Raphson (MNR) algorithm, which iteratively corrects the image until a final acceptable solution is reached. The implementation of the GC- and MNR-based algorithms using the finite element method will be discussed. Reconstructed images produced by the algorithm will also be presented. Consideration is also given to the most computationally intensive aspects of the algorithm, namely the inversion of the large and sparse matrices. The methods taken to approximately compute the inverse of those matrices will be outlined. (author)

  15. Compressively sampled MR image reconstruction using generalized thresholding iterative algorithm

    Science.gov (United States)

    Elahi, Sana; kaleem, Muhammad; Omer, Hammad

    2018-01-01

    Compressed sensing (CS) is an emerging area of interest in Magnetic Resonance Imaging (MRI). CS is used for the reconstruction of images from a very limited number of samples in k-space, which significantly reduces the MRI data acquisition time. One important requirement for signal recovery in CS is the use of an appropriate non-linear reconstruction algorithm. It is a challenging task to choose a reconstruction algorithm that would accurately reconstruct MR images from under-sampled k-space data. Various algorithms have been used to solve the system of non-linear equations for better image quality and reconstruction speed in CS. In the recent past, the iterative soft thresholding algorithm (ISTA) has been introduced in CS-MRI. This algorithm directly cancels the incoherent artifacts produced by the undersampling in k-space. This paper introduces an improved iterative algorithm based on a p-thresholding technique for CS-MRI image reconstruction. The use of the p-thresholding function promotes sparsity in the image, which is a key factor for CS based image reconstruction. The p-thresholding based iterative algorithm is a modification of ISTA and minimizes non-convex functions. It has been shown that the proposed p-thresholding iterative algorithm can be used effectively to recover the fully sampled image from under-sampled data in MRI. The performance of the proposed method is verified using simulated and actual MRI data taken at St. Mary's Hospital, London. The quality of the reconstructed images is measured in terms of peak signal-to-noise ratio (PSNR), artifact power (AP), and structural similarity index measure (SSIM). The proposed approach shows improved performance when compared to other iterative algorithms based on log thresholding, soft thresholding and hard thresholding techniques at different reduction factors.
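
    One common way to realize a p-thresholding operator is Chartrand-style p-shrinkage, which reduces to soft thresholding at p = 1 and shrinks large coefficients less for p < 1. The sketch below drops it into an ISTA-type loop; this is an illustrative construction, and the paper's exact threshold rule may differ.

        import numpy as np

        def p_shrink(v, tau, p):
            # Generalized shrinkage: sign(v)*max(|v| - tau**(2-p)*|v|**(p-1), 0);
            # at p = 1 this is exactly soft thresholding.
            mag = np.abs(v)
            safe = np.where(mag > 0, mag, 1.0)       # avoid 0**(p-1)
            return np.sign(v) * np.maximum(mag - tau ** (2.0 - p) * safe ** (p - 1.0), 0.0)

        def p_ista(A, b, tau, p=0.5, iters=300):
            # ISTA-type iteration with the generalized threshold in place of soft().
            L = np.linalg.norm(A, 2) ** 2
            x = np.zeros(A.shape[1])
            for _ in range(iters):
                x = p_shrink(x - A.T @ (A @ x - b) / L, tau / L, p)
            return x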

  16. Iterative concurrent reconstruction algorithms for emission computed tomography

    International Nuclear Information System (INIS)

    Brown, J.K.; Hasegawa, B.H.; Lang, T.F.

    1994-01-01

    Direct reconstruction techniques, such as those based on filtered backprojection, are typically used for emission computed tomography (ECT), even though it has been argued that iterative reconstruction methods may produce better clinical images. The major disadvantage of iterative reconstruction algorithms, and a significant reason for their lack of clinical acceptance, is their computational burden. We outline a new class of "concurrent" iterative reconstruction techniques for ECT in which the reconstruction process is reorganized such that a significant fraction of the computational processing occurs concurrently with the acquisition of ECT projection data. These new algorithms use the 10-30 min required for acquisition of a typical SPECT scan to iteratively process the available projection data, significantly reducing the requirements for post-acquisition processing. The algorithms are tested on SPECT projection data from a Hoffman brain phantom acquired with 2 x 10^5 counts in 64 views, each having 64 projections. The SPECT images are reconstructed as 64 x 64 tomograms, starting with six angular views. Further angular views are added to the reconstruction process sequentially, in a manner that reflects their availability in a typical acquisition protocol. The results suggest that if T seconds of concurrent processing are used, the reconstruction time required after completion of the data acquisition can be reduced by at least T/3 seconds. (Author)

  17. Surface Reconstruction and Image Enhancement via $L^1$-Minimization

    KAUST Repository

    Dobrev, Veselin

    2010-01-01

    A surface reconstruction technique based on minimization of the total variation of the gradient is introduced. Convergence of the method is established, and an interior-point algorithm solving the associated linear programming problem is introduced. The reconstruction algorithm is illustrated on various test cases including natural and urban terrain data, and enhancement of low-resolution or aliased images. Copyright © by SIAM.

  18. DART: a practical reconstruction algorithm for discrete tomography.

    Science.gov (United States)

    Batenburg, Kees Joost; Sijbers, Jan

    2011-09-01

    In this paper, we present an iterative reconstruction algorithm for discrete tomography, called discrete algebraic reconstruction technique (DART). DART can be applied if the scanned object is known to consist of only a few different compositions, each corresponding to a constant gray value in the reconstruction. Prior knowledge of the gray values for each of the compositions is exploited to steer the current reconstruction towards a reconstruction that contains only these gray values. Based on experiments with both simulated CT data and experimental μCT data, it is shown that DART is capable of computing more accurate reconstructions from a small number of projection images, or from a small angular range, than alternative methods. It is also shown that DART can deal effectively with noisy projection data and that the algorithm is robust with respect to errors in the estimation of the gray values.
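
    In outline, DART alternates algebraic updates with segmentation to the known gray values, leaving only the boundary pixels free in the next algebraic pass. A compact sketch of that loop follows, with SIRT as the algebraic step and a 4-neighbourhood boundary rule; both are illustrative choices, not the authors' exact settings.

        import numpy as np

        def sirt(A, b, x, free, iters=20):
            # SIRT-type additive updates applied only to the free pixels.
            col = np.maximum(A.sum(axis=0), 1e-12)
            row = np.maximum(A.sum(axis=1), 1e-12)
            for _ in range(iters):
                upd = A.T @ ((b - A @ x) / row) / col
                x[free] += upd[free]
            return x

        def dart(A, b, shape, grays, outer=10):
            # DART-style loop: reconstruct, segment to the known gray values,
            # fix interior pixels, then re-reconstruct only boundary pixels.
            n = A.shape[1]
            grays = np.asarray(grays, dtype=float)
            x = sirt(A, b, np.zeros(n), np.ones(n, dtype=bool))
            for _ in range(outer):
                seg = grays[np.argmin(np.abs(x[:, None] - grays[None, :]), axis=1)]
                img = seg.reshape(shape)
                pad = np.pad(img, 1, mode='edge')
                boundary = np.zeros(shape, dtype=bool)
                for nb in (pad[2:, 1:-1], pad[:-2, 1:-1], pad[1:-1, 2:], pad[1:-1, :-2]):
                    boundary |= (nb != img)      # 4-neighbourhood disagreement
                x = sirt(A, b, seg.copy(), boundary.ravel())
            return x.reshape(shape)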

  19. Image-reconstruction algorithms for positron-emission tomography systems

    International Nuclear Information System (INIS)

    Cheng, S.N.C.

    1982-01-01

    The positional uncertainty in the time-of-flight measurement of a positron-emission tomography system is modelled as a Gaussian distributed random variable, and the image is assumed to be piecewise constant on a rectilinear lattice. A reconstruction algorithm using maximum-likelihood estimation is derived for the situation in which time-of-flight data are sorted as the most-likely-position array. The algorithm is formulated as a linear system described by a nonseparable, block-banded, Toeplitz matrix, and a sine-transform technique is used to implement this algorithm efficiently. The reconstruction algorithms for both the most-likely-position array and the confidence-weighted array are described by similar equations, hence similar linear systems can be used to describe the reconstruction algorithm for a discrete, confidence-weighted array when the matrix and the entries in the data array are properly identified. It is found that the mean square error depends on the ratio of the full width at half maximum of the time-of-flight measurement to the size of a pixel. When other parameters are fixed, the larger the pixel size, the smaller the mean square error. In the study of resolution, parameters that affect the impulse response of time-of-flight reconstruction algorithms are identified. It is found that the larger the pixel size, the larger the standard deviation of the impulse response. This shows that small mean square error and fine resolution are two contradictory requirements.

  20. A filtered backprojection reconstruction algorithm for Compton camera

    Energy Technology Data Exchange (ETDEWEB)

    Lojacono, Xavier; Maxim, Voichita; Peyrin, Francoise; Prost, Remy [Lyon Univ., Villeurbanne (France). CNRS, Inserm, INSA-Lyon, CREATIS, UMR5220; Zoglauer, Andreas [California Univ., Berkeley, CA (United States). Space Sciences Lab.

    2011-07-01

    In this paper we present a filtered backprojection reconstruction algorithm for Compton camera detectors of particles. Compared to iterative methods, widely used for the reconstruction of images from Compton camera data, analytical methods are fast, easy to implement and avoid convergence issues. The method we propose is exact for an idealized Compton camera composed of two parallel plates of infinite dimension. We show that it copes well with the low number of detected photons simulated from a realistic device. Images reconstructed from both synthetic data and realistic data obtained with Monte Carlo simulations demonstrate the efficiency of the algorithm. (orig.)

  1. A reconstruction algorithm for helical cone-beam SPECT

    International Nuclear Information System (INIS)

    Weng, Y.; Zeng, G.L.; Gullberg, G.T.

    1993-01-01

    Cone-beam SPECT provides improved sensitivity for imaging small organs like the brain and heart. However, current cone-beam tomography with the focal point traversing a planar orbit does not acquire sufficient data to give an accurate reconstruction. In this paper, the authors employ a data-acquisition method which obtains complete data for cone-beam SPECT by simultaneously rotating the gamma camera and translating the patient bed, so that cone-beam projections can be obtained with the focal point traversing a helix surrounding the patient. An implementation of Grangeat's algorithm for helical cone-beam projections is developed. The algorithm requires a rebinning step to convert the cone-beam data to parallel-beam data, which are then reconstructed using 3D Radon inversion. A fast new rebinning scheme is developed which uses all of the detected data to reconstruct the image and properly normalizes any multiply scanned data. This algorithm is shown to produce fewer artifacts than the commonly used Feldkamp algorithm when applied to either a circular planar orbit or a helical orbit acquisition. The algorithm can easily be extended to any arbitrary orbit.

  2. Neural network algorithm for image reconstruction using the grid friendly projections

    International Nuclear Information System (INIS)

    Cierniak, R.

    2011-01-01

    Full text: This paper describes the development of an original approach to the reconstruction problem using a recurrent neural network. In particular, the 'grid-friendly' angles of the performed projections are selected according to the discrete Radon transform (DRT) concept to decrease the number of projections required. The methodology of our approach is consistent with analytical reconstruction algorithms. In our approach, the reconstruction problem is reformulated as an optimization problem, which is solved using a method based on the maximum likelihood methodology. The reconstruction algorithm proposed in this work is consequently adapted to the more practical case of discrete fan-beam projections. Computer simulation results show that the neural network reconstruction algorithm designed in this way improves the obtained results and outperforms conventional methods in reconstructed image quality. (author)

  3. Motion tolerant iterative reconstruction algorithm for cone-beam helical CT imaging

    Energy Technology Data Exchange (ETDEWEB)

    Takahashi, Hisashi; Goto, Taiga; Hirokawa, Koichi; Miyazaki, Osamu [Hitachi Medical Corporation, Chiba-ken (Japan). CT System Div.

    2011-07-01

    We have developed a new advanced iterative reconstruction algorithm for cone-beam helical CT. The features of this algorithm are: (a) it uses the separable paraboloidal surrogate (SPS) technique as a foundation for reconstruction to reduce noise and cone-beam artifact, and (b) it uses a view weight in the back-projection process to reduce motion artifact. To confirm the improvement of our proposed algorithm over existing algorithms such as the Feldkamp-Davis-Kress (FDK) or SPS algorithms, we compared motion artifact reduction, image noise reduction (standard deviation of CT number), and cone-beam artifact reduction on simulated and clinical data sets. Our results demonstrate that the proposed algorithm dramatically reduces motion artifacts compared with the SPS algorithm and decreases image noise compared with the FDK algorithm. In addition, the proposed algorithm potentially improves the time resolution of iterative reconstruction. (orig.)

  4. Cone-beam and fan-beam image reconstruction algorithms based on spherical and circular harmonics

    International Nuclear Information System (INIS)

    Zeng, Gengsheng L; Gullberg, Grant T

    2004-01-01

    A cone-beam image reconstruction algorithm using spherical harmonic expansions is proposed. The reconstruction algorithm is in the form of a summation of inner products of two discrete arrays of spherical harmonic expansion coefficients at each cone-beam point of acquisition. This form is different from the common filtered backprojection algorithm and the direct Fourier reconstruction algorithm. There is no re-sampling of the data, and spherical harmonic expansions are used instead of Fourier expansions. As a special case, a new fan-beam image reconstruction algorithm is also derived in terms of a circular harmonic expansion. Computer simulation results for both the cone-beam and fan-beam algorithms are presented for circular planar orbit acquisitions. The algorithms give accurate reconstructions; however, the implementation of the cone-beam reconstruction algorithm is computationally intensive. A relatively efficient algorithm is proposed for reconstructing the central slice of the image when a circular scanning orbit is used.

  5. A Superresolution Image Reconstruction Algorithm Based on Landweber in Electrical Capacitance Tomography

    Directory of Open Access Journals (Sweden)

    Chen Deyun

    2013-01-01

    Full Text Available Because image reconstruction accuracy in electrical capacitance tomography (ECT) is degraded by the "soft field" nature of the sensing field and by the ill-conditioning of the inverse problem, a superresolution image reconstruction algorithm based on Landweber iteration is proposed in this paper, building on the working principle of the ECT system. The method derives a regularized solution in closed form via a fast Fourier transform of the convolution kernel, which ensures the uniqueness of the solution and improves the stability and quality of the image reconstruction results. Simulation results show that the imaging precision and real-time performance of the algorithm are better than those of the standard Landweber algorithm, and this work thus offers a new method for ECT image reconstruction.
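
    For orientation, a minimal numpy sketch of the projected Landweber iteration that such ECT reconstructions build on is given below; the sensitivity matrix S, capacitance vector c and relaxation factor alpha are illustrative stand-ins, not the paper's data, and the closed-form FFT variant described above is not reproduced here.

        import numpy as np

        def landweber(S, c, alpha, n_iter=200):
            # Projected Landweber iteration: g <- g + alpha * S^T (c - S g),
            # clipped to [0, 1] to keep the normalized permittivity physical.
            g = np.zeros(S.shape[1])
            for _ in range(n_iter):
                g = np.clip(g + alpha * S.T @ (c - S @ g), 0.0, 1.0)
            return g

        # toy example (hypothetical data): random sensitivity map, two "objects"
        rng = np.random.default_rng(0)
        S = rng.random((30, 64))
        g_true = np.zeros(64); g_true[[10, 42]] = 1.0
        c = S @ g_true
        alpha = 1.0 / np.linalg.norm(S, 2) ** 2   # stable for alpha < 2 / ||S||^2
        print(landweber(S, c, alpha).round(2))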

  6. A fast iterative soft-thresholding algorithm for few-view CT reconstruction

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Junfeng; Mou, Xuanqin; Zhang, Yanbo [Jiaotong Univ., Xi'an (China). Inst. of Image Processing and Pattern Recognition

    2011-07-01

    Iterative soft-thresholding algorithms with total variation regularization can produce high-quality reconstructions from few views and even in the presence of noise. However, these algorithms are known to converge quite slowly, with a proven theoretical global convergence rate of O(1/k), where k is the iteration number. In this paper, we present a fast iterative soft-thresholding algorithm for few-view fan-beam CT reconstruction with a global convergence rate of O(1/k^2), which is significantly faster than the standard iterative soft-thresholding algorithm. Simulation results demonstrate the superior performance of the proposed algorithm in terms of convergence speed and reconstruction quality. (orig.)
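
    To make the O(1/k) vs O(1/k^2) contrast concrete, here is a hedged numpy sketch of the FISTA-style iteration — soft-thresholded gradient steps plus Nesterov momentum — for an l1-regularized least-squares problem; the l1 penalty stands in for the paper's total variation term, and A, b, lam are illustrative.

        import numpy as np

        def fista(A, b, lam, n_iter=100):
            # Accelerated iterative soft-thresholding (O(1/k^2) objective decay).
            L = np.linalg.norm(A, 2) ** 2       # Lipschitz constant of the gradient
            x = np.zeros(A.shape[1]); z = x.copy(); t = 1.0
            for _ in range(n_iter):
                g = z - A.T @ (A @ z - b) / L   # gradient step at extrapolated point
                x_new = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold
                t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
                z = x_new + ((t - 1.0) / t_new) * (x_new - x)  # momentum extrapolation
                x, t = x_new, t_new
            return x

    Dropping the momentum step (taking z = x_new with t fixed) recovers plain ISTA and its O(1/k) rate.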

  7. Evaluation of reconstruction algorithms in SPECT neuroimaging: Pt. 1

    International Nuclear Information System (INIS)

    Heejoung Kim; Zeeberg, B.R.; Reba, R.C.

    1993-01-01

    In the presence of statistical noise, an iterative reconstruction algorithm (IRA) for the quantitative reconstruction of single-photon-emission computed tomographic (SPECT) brain images overcomes major limitations of applying the standard filtered back projection (FBP) reconstruction algorithm to projection data which have been degraded by convolution of the true radioactivity distribution with a finite-resolution distance-dependent detector response: (a) the non-uniformity within the grey (or white) matter voxels which results even though the true model is uniform within these voxels; (b) a significantly lower ratio of grey/white matter voxel values than in the true model; and (c) an inability to detect an altered radioactivity value within the grey (or white) matter voxels. It is normally expected that an algorithm which improves spatial resolution and quantitative accuracy might also increase the magnitude of the statistical noise in the reconstructed image. However, the noise properties in the IRA images are very similar to those in the FBP images. (Author)

  8. Acceleration of the direct reconstruction of linear parametric images using nested algorithms

    International Nuclear Information System (INIS)

    Wang Guobao; Qi Jinyi

    2010-01-01

    Parametric imaging using dynamic positron emission tomography (PET) provides important information for biological research and clinical diagnosis. Indirect and direct methods have been developed for reconstructing linear parametric images from dynamic PET data. Indirect methods are relatively simple and easy to implement because the image reconstruction and kinetic modeling are performed in two separate steps. Direct methods estimate parametric images directly from raw PET data and are statistically more efficient. However, the convergence rate of direct algorithms can be slow due to the coupling between the reconstruction and kinetic modeling. Here we present two fast gradient-type algorithms for direct reconstruction of linear parametric images. The new algorithms decouple the reconstruction and linear parametric modeling at each iteration by employing the principle of optimization transfer. Convergence speed is accelerated by running more sub-iterations of linear parametric estimation because the computation cost of the linear parametric modeling is much less than that of the image reconstruction. Computer simulation studies demonstrated that the new algorithms converge much faster than the traditional expectation maximization (EM) and the preconditioned conjugate gradient algorithms for dynamic PET.

  9. Reconstruction algorithm in compressed sensing based on maximum a posteriori estimation

    International Nuclear Information System (INIS)

    Takeda, Koujin; Kabashima, Yoshiyuki

    2013-01-01

    We propose a systematic method for constructing a sparse data reconstruction algorithm in compressed sensing at a relatively low computational cost for a general observation matrix. It is known that the cost of ℓ1-norm minimization using a standard linear programming algorithm is O(N^3). We show that this cost can be reduced to O(N^2) by applying the approach of posterior maximization. Furthermore, in principle, the algorithm from our approach is expected to achieve the widest successful reconstruction region, as evaluated from a theoretical argument. We also discuss the relation between the belief propagation-based reconstruction algorithm introduced in preceding works and our approach.

  10. First results of genetic algorithm application in ML image reconstruction in emission tomography

    International Nuclear Information System (INIS)

    Smolik, W.

    1999-01-01

    This paper concerns the application of a genetic algorithm to maximum-likelihood image reconstruction in emission tomography. An example genetic algorithm for image reconstruction is presented, based on the typical genetic scheme modified to suit the nature of the problem. The convergence of the algorithm was examined, and different adaptation functions, selection methods and crossover methods were verified. The algorithm was tested on simulated SPECT data, and the obtained image reconstruction results are discussed. (author)
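
    As a toy illustration of the general idea — a genetic scheme whose fitness is the Poisson log-likelihood of a linear emission model — consider the sketch below; the operators (tournament selection, uniform crossover, Gaussian mutation, elitism) are generic textbook choices, not necessarily those of the paper.

        import numpy as np

        rng = np.random.default_rng(1)

        def loglik(x, A, y):
            # Poisson log-likelihood of image x given system matrix A and counts y
            m = A @ x + 1e-9
            return np.sum(y * np.log(m) - m)

        def ga_reconstruct(A, y, pop=40, gens=300, sigma=0.05):
            n = A.shape[1]
            P = rng.random((pop, n))                    # initial population of images
            for _ in range(gens):
                fit = np.array([loglik(p, A, y) for p in P])
                pairs = rng.integers(0, pop, (pop, 2))  # tournament selection
                winners = np.where(fit[pairs[:, 0]] > fit[pairs[:, 1]],
                                   pairs[:, 0], pairs[:, 1])
                parents = P[winners]
                mask = rng.random((pop, n)) < 0.5       # uniform crossover
                children = np.where(mask, parents, parents[::-1])
                children = np.clip(children + rng.normal(0, sigma, (pop, n)), 0, None)
                children[0] = P[np.argmax(fit)]         # elitism keeps the best image
                P = children
            fit = np.array([loglik(p, A, y) for p in P])
            return P[np.argmax(fit)]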

  11. Quantitative tomography simulations and reconstruction algorithms

    International Nuclear Information System (INIS)

    Martz, H.E.; Aufderheide, M.B.; Goodman, D.; Schach von Wittenau, A.; Logan, C.; Hall, J.; Jackson, J.; Slone, D.

    2000-01-01

    X-ray, neutron and proton transmission radiography and computed tomography (CT) are important diagnostic tools that are at the heart of LLNL's effort to meet the goals of the DOE's Advanced Radiography Campaign. This campaign seeks to improve radiographic simulation and analysis so that radiography can be a useful quantitative diagnostic tool for stockpile stewardship. Current radiographic accuracy does not allow satisfactory separation of experimental effects from the true features of an object's tomographically reconstructed image, which can lead to difficult and sometimes incorrect interpretation of the results. By improving our ability to simulate the whole radiographic and CT system, it will be possible to examine the contribution of system components to various experimental effects, with the goal of removing or reducing them. In this project, we are merging this simulation capability with a maximum-likelihood, constrained-conjugate-gradient (CCG) reconstruction technique, yielding a physics-based, forward-model image-reconstruction code. In addition, we seek to improve the accuracy of computed tomography from transmission radiographs by studying what physics is needed in the forward model. During FY 2000, an improved version of the LLNL ray-tracing code called HADES was coupled with a recently developed LLNL CT algorithm known as CCG. The problem of image reconstruction is expressed as a large matrix equation relating a model for the object being reconstructed to its projections (radiographs). Using a constrained-conjugate-gradient search algorithm, a maximum-likelihood solution is sought. This search continues until the difference between the measured radiographs (projections) and the simulated (calculated) projections is satisfactorily small.

  12. Inverse Monte Carlo: a unified reconstruction algorithm for SPECT

    International Nuclear Information System (INIS)

    Floyd, C.E.; Coleman, R.E.; Jaszczak, R.J.

    1985-01-01

    Inverse Monte Carlo (IMOC) is presented as a unified reconstruction algorithm for emission computed tomography (ECT), providing simultaneous compensation for scatter, attenuation, and the variation of collimator resolution with depth. The technique of inverse Monte Carlo is used to find an inverse solution to the photon transport equation (an integral equation for photon flux from a specified source) for a parameterized source and specific boundary conditions. The system of linear equations so formed is solved to yield the source activity distribution for a set of acquired projections. For the studies presented here, the equations are solved using the EM (maximum-likelihood) algorithm, although other solution algorithms, such as least squares, could be employed. While the present results specifically consider the reconstruction of camera-based single photon emission computed tomographic (SPECT) images, the technique is equally valid for positron emission tomography (PET) if a Monte Carlo model of such a system is used. As a preliminary evaluation, experimentally acquired SPECT phantom studies for imaging Tc-99m (140 keV) are presented which demonstrate the quantitative compensation for scatter and attenuation for a two-dimensional (single-slice) reconstruction. The algorithm may be expanded in a straightforward manner to full three-dimensional reconstruction, including compensation for out-of-plane scatter.
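
    The EM (maximum-likelihood) solver mentioned in the record is the classic ML-EM update; a compact numpy version for a generic system matrix A (which in IMOC would be supplied by the Monte Carlo model, including scatter and attenuation effects) reads:

        import numpy as np

        def mlem(A, y, n_iter=50):
            # ML-EM: x <- x * A^T(y / Ax) / A^T 1; preserves nonnegativity.
            x = np.ones(A.shape[1])
            sens = A.T @ np.ones(A.shape[0])          # per-voxel sensitivity
            for _ in range(n_iter):
                ratio = y / np.maximum(A @ x, 1e-12)
                x *= (A.T @ ratio) / np.maximum(sens, 1e-12)
            return x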

  13. Performances of new reconstruction algorithms for CT-TDLAS (computer tomography-tunable diode laser absorption spectroscopy)

    International Nuclear Information System (INIS)

    Jeon, Min-Gyu; Deguchi, Yoshihiro; Kamimoto, Takahiro; Doh, Deog-Hee; Cho, Gyeong-Rae

    2017-01-01

    Highlights: • The measured data were successfully used for generating absorption spectra. • Four different reconstruction algorithms, ART, MART, SART and SMART, were evaluated. • The convergence of the SMART algorithm was the fastest. • SMART was the most reliable algorithm for reconstructing the multiple signals. - Abstract: The recent advent of tunable lasers has made it possible to measure temperature and concentration fields of gases simultaneously. CT-TDLAS (computed tomography-tunable diode laser absorption spectroscopy) is one of the leading techniques for such measurements. In CT-TDLAS, the accuracy of the measurement results depends strongly on the reconstruction algorithm. In this study, four different reconstruction algorithms were tested numerically using experimental data sets measured by thermocouples in combustion fields. Three reconstruction algorithms, the MART (multiplicative algebraic reconstruction technique), SART (simultaneous algebraic reconstruction technique) and SMART (simultaneous multiplicative algebraic reconstruction technique) algorithms, are newly proposed for CT-TDLAS in this study. The results obtained by the three algorithms were compared with those of the previous algorithm, ART (algebraic reconstruction technique). Phantom data sets were generated from thermocouple data obtained in an actual experiment. The HITRAN database, in which the thermodynamic properties and light spectrum of H2O are listed, was used for the numerical test. The reconstructed temperature and concentration fields were compared with the original HITRAN data, through which the constructed methods were validated. The performances of the four reconstruction algorithms are demonstrated. This method is expected to enhance the practicality of CT-TDLAS.
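
    For orientation, minimal numpy versions of the additive ART and multiplicative SMART updates follow; the system matrix A (nonnegative absorption-path weights) and data b are generic placeholders, and the relaxation settings are illustrative.

        import numpy as np

        def art(A, b, n_sweeps=20, relax=0.5):
            # Additive ART (Kaczmarz): project onto one ray equation at a time.
            x = np.zeros(A.shape[1])
            for _ in range(n_sweeps):
                for i in range(A.shape[0]):
                    a = A[i]
                    x += relax * (b[i] - a @ x) / (a @ a + 1e-12) * a
            return x

        def smart(A, b, n_iter=200):
            # SMART: simultaneous multiplicative update; x stays positive.
            # Assumes the entries of A are nonnegative.
            x = np.ones(A.shape[1])
            col = A.sum(axis=0) + 1e-12
            for _ in range(n_iter):
                ratio = b / np.maximum(A @ x, 1e-12)
                x *= np.exp((A.T @ np.log(np.maximum(ratio, 1e-12))) / col)
            return x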

  14. An Approximate Cone Beam Reconstruction Algorithm for Gantry-Tilted CT Using Tangential Filtering

    Directory of Open Access Journals (Sweden)

    Ming Yan

    2006-01-01

    Full Text Available The FDK algorithm is a well-known 3D (three-dimensional) approximate algorithm for CT (computed tomography) image reconstruction, and is also known to suffer from considerable artifacts when the scanning cone angle is large. Recently, it has been improved by performing the ramp filtering along the tangential direction of the X-ray source helix to deal with the large-cone-angle problem. In this paper, we present an FDK-type approximate reconstruction algorithm for gantry-tilted CT imaging. The proposed method improves the image reconstruction by filtering the projection data along a proper direction, which is determined by the CT parameters and the gantry tilt angle. As a result, the proposed algorithm for gantry-tilted CT reconstruction provides more scanning flexibility in clinical CT scanning and is computationally efficient. The performance of the proposed algorithm is evaluated with the Turbell clock phantom and a thorax phantom, and compared with the FDK algorithm and a popular 2D (two-dimensional) approximate algorithm. The results show that the proposed algorithm can achieve better image quality for gantry-tilted CT image reconstruction.
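
    The filtering step at issue is, at its core, a 1D ramp filter applied along a chosen detector direction; a minimal FFT-based sketch (without the tilt-dependent resampling that the paper introduces) is:

        import numpy as np

        def ramp_filter_rows(proj):
            # Ramp-filter each detector row of a projection stack along its
            # last axis; in the gantry-tilted variant the rows would first be
            # resampled along the tilt-corrected filtering direction.
            n = proj.shape[-1]
            ramp = np.abs(np.fft.fftfreq(n))        # |f| ramp in the Fourier domain
            return np.real(np.fft.ifft(np.fft.fft(proj, axis=-1) * ramp, axis=-1))

        # e.g. 360 views, 64 detector rows, 128 channels (hypothetical sizes)
        filtered = ramp_filter_rows(np.random.rand(360, 64, 128))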

  15. Volume reconstruction optimization for tomo-PIV algorithms applied to experimental data

    Science.gov (United States)

    Martins, Fabio J. W. A.; Foucaut, Jean-Marc; Thomas, Lionel; Azevedo, Luis F. A.; Stanislas, Michel

    2015-08-01

    Tomographic PIV is a three-component volumetric velocity measurement technique based on the tomographic reconstruction of a particle distribution imaged by multiple camera views. In essence, the performance and accuracy of this technique is highly dependent on the parametric adjustment and the reconstruction algorithm used. Although synthetic data have been widely employed to optimize experiments, the resulting reconstructed volumes might not have optimal quality. The purpose of the present study is to offer quality indicators that can be applied to data samples in order to improve the quality of velocity results obtained by the tomo-PIV technique. The methodology proposed can potentially lead to a significant reduction in the time required to optimize a tomo-PIV reconstruction, also leading to better-quality velocity results. Tomo-PIV data provided by a six-camera turbulent boundary-layer experiment were used to optimize the reconstruction algorithms according to this methodology. Velocity statistics measurements obtained by optimized BIMART, SMART and MART algorithms were compared with hot-wire anemometer data and velocity measurement uncertainties were computed. Results indicated that the BIMART and SMART algorithms produced reconstructed volumes with equivalent quality to the standard MART, with the benefit of reduced computational time.

  16. Volume reconstruction optimization for tomo-PIV algorithms applied to experimental data

    International Nuclear Information System (INIS)

    Martins, Fabio J W A; Foucaut, Jean-Marc; Stanislas, Michel; Thomas, Lionel; Azevedo, Luis F A

    2015-01-01

    Tomographic PIV is a three-component volumetric velocity measurement technique based on the tomographic reconstruction of a particle distribution imaged by multiple camera views. In essence, the performance and accuracy of this technique is highly dependent on the parametric adjustment and the reconstruction algorithm used. Although synthetic data have been widely employed to optimize experiments, the resulting reconstructed volumes might not have optimal quality. The purpose of the present study is to offer quality indicators that can be applied to data samples in order to improve the quality of velocity results obtained by the tomo-PIV technique. The methodology proposed can potentially lead to a significant reduction in the time required to optimize a tomo-PIV reconstruction, also leading to better-quality velocity results. Tomo-PIV data provided by a six-camera turbulent boundary-layer experiment were used to optimize the reconstruction algorithms according to this methodology. Velocity statistics measurements obtained by optimized BIMART, SMART and MART algorithms were compared with hot-wire anemometer data and velocity measurement uncertainties were computed. Results indicated that the BIMART and SMART algorithms produced reconstructed volumes with equivalent quality to the standard MART, with the benefit of reduced computational time. (paper)

  17. Accurate 3D reconstruction by a new PDS-OSEM algorithm for HRRT

    International Nuclear Information System (INIS)

    Chen, Tai-Been; Horng-Shing Lu, Henry; Kim, Hang-Keun; Son, Young-Don; Cho, Zang-Hee

    2014-01-01

    State-of-the-art high resolution research tomography (HRRT) provides high resolution PET images with full 3D human brain scanning. However, the short time frames used in dynamic studies cause problems related to low counts in the acquired data. The PDS-OSEM algorithm was proposed to reconstruct HRRT images with a high signal-to-noise ratio, providing accurate information for dynamic data. The new algorithm was evaluated on simulated images, empirical phantoms, and real human brain data. Meanwhile, the time activity curve was adopted to validate the reconstruction performance on dynamic data between the PDS-OSEM and OP-OSEM algorithms. According to the simulated and empirical studies, the PDS-OSEM algorithm reconstructs images with higher quality, higher accuracy, less noise, and a smaller average sum of squared errors than those of OP-OSEM. The presented algorithm is useful for providing quality images under the condition of low count rates in dynamic studies with a short scan time. - Highlights: • PDS-OSEM reconstructs PET images by iteratively compensating random and scatter corrections from the prompt sinogram. • PDS-OSEM can reconstruct PET images from low-count and contaminated data. • PDS-OSEM provides less noise and higher quality of reconstructed images than the OP-OSEM algorithm in a statistical sense.
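
    Both OP-OSEM and the proposed PDS-OSEM are variants of ordered-subsets EM; a bare-bones OSEM loop (without the random/scatter compensation that distinguishes PDS-OSEM) looks like this:

        import numpy as np

        def osem(A, y, n_subsets=4, n_iter=10):
            # OSEM: an ML-EM update restricted to one subset of projections at
            # a time, cycling through all subsets within each iteration.
            x = np.ones(A.shape[1])
            subsets = [np.arange(s, A.shape[0], n_subsets) for s in range(n_subsets)]
            for _ in range(n_iter):
                for idx in subsets:
                    As, ys = A[idx], y[idx]
                    sens = As.T @ np.ones(len(idx))
                    x *= (As.T @ (ys / np.maximum(As @ x, 1e-12))) / np.maximum(sens, 1e-12)
            return x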

  18. Performance comparison between total variation (TV)-based compressed sensing and statistical iterative reconstruction algorithms

    International Nuclear Information System (INIS)

    Tang Jie; Nett, Brian E; Chen Guanghong

    2009-01-01

    Of all available reconstruction methods, statistical iterative reconstruction algorithms appear particularly promising since they enable accurate physical noise modeling. The newly developed compressive sampling/compressed sensing (CS) algorithm has shown the potential to accurately reconstruct images from highly undersampled data. The CS algorithm can be implemented in the statistical reconstruction framework as well. In this study, we compared the performance of two standard statistical reconstruction algorithms (penalized weighted least squares and q-GGMRF) to the CS algorithm. In assessing the image quality using these iterative reconstructions, it is critical to utilize realistic background anatomy as the reconstruction results are object dependent. A cadaver head was scanned on a Varian Trilogy system at different dose levels. Several figures of merit including the relative root mean square error and a quality factor which accounts for the noise performance and the spatial resolution were introduced to objectively evaluate reconstruction performance. A comparison is presented between the three algorithms for a constant undersampling factor comparing different algorithms at several dose levels. To facilitate this comparison, the original CS method was formulated in the framework of the statistical image reconstruction algorithms. Important conclusions of the measurements from our studies are that (1) for realistic neuro-anatomy, over 100 projections are required to avoid streak artifacts in the reconstructed images even with CS reconstruction, (2) regardless of the algorithm employed, it is beneficial to distribute the total dose to more views as long as each view remains quantum noise limited and (3) the total variation-based CS method is not appropriate for very low dose levels because while it can mitigate streaking artifacts, the images exhibit patchy behavior, which is potentially harmful for medical diagnosis.

  19. The influence of image reconstruction algorithms on linear thorax EIT image analysis of ventilation

    International Nuclear Information System (INIS)

    Zhao, Zhanqi; Möller, Knut; Frerichs, Inéz; Pulletz, Sven; Müller-Lisse, Ullrich

    2014-01-01

    Analysis methods of electrical impedance tomography (EIT) images based on different reconstruction algorithms were examined. EIT measurements were performed on eight mechanically ventilated patients with acute respiratory distress syndrome. A maneuver with step increase of airway pressure was performed. EIT raw data were reconstructed offline with (1) filtered back-projection (BP); (2) the Dräger algorithm based on linearized Newton–Raphson (DR); (3) the GREIT (Graz consensus reconstruction algorithm for EIT) reconstruction algorithm with a circular forward model (GR_C) and (4) GREIT with individual thorax geometry (GR_T). Individual thorax contours were automatically determined from the routine computed tomography images. Five indices were calculated on the resulting EIT images respectively: (a) the ratio between tidal and deep inflation impedance changes; (b) tidal impedance changes in the right and left lungs; (c) center of gravity; (d) the global inhomogeneity index and (e) ventilation delay at mid-dorsal regions. No significant differences were found in all examined indices among the four reconstruction algorithms (p > 0.2, Kruskal–Wallis test). The examined algorithms used for EIT image reconstruction do not influence the selected indices derived from the EIT image analysis. Indices that were validated for images with one reconstruction algorithm are also valid for other reconstruction algorithms. (paper)
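
    Indices (c) and (d) are simple functionals of the tidal EIT image; a hedged numpy sketch (the pixel conventions and lung masking are illustrative simplifications) is:

        import numpy as np

        def eit_indices(tidal_image):
            # Center of gravity and global inhomogeneity (GI) index of a 2D
            # tidal impedance-change image; pixels <= 0 are treated as non-lung.
            img = np.maximum(tidal_image, 0.0)
            rows, cols = np.indices(img.shape)
            total = img.sum()
            cog = (np.sum(rows * img) / total, np.sum(cols * img) / total)
            lung = img[img > 0]
            gi = np.sum(np.abs(lung - np.median(lung))) / lung.sum()
            return cog, gi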

  20. The influence of image reconstruction algorithms on linear thorax EIT image analysis of ventilation.

    Science.gov (United States)

    Zhao, Zhanqi; Frerichs, Inéz; Pulletz, Sven; Müller-Lisse, Ullrich; Möller, Knut

    2014-06-01

    Analysis methods of electrical impedance tomography (EIT) images based on different reconstruction algorithms were examined. EIT measurements were performed on eight mechanically ventilated patients with acute respiratory distress syndrome. A maneuver with step increase of airway pressure was performed. EIT raw data were reconstructed offline with (1) filtered back-projection (BP); (2) the Dräger algorithm based on linearized Newton-Raphson (DR); (3) the GREIT (Graz consensus reconstruction algorithm for EIT) reconstruction algorithm with a circular forward model (GR(C)) and (4) GREIT with individual thorax geometry (GR(T)). Individual thorax contours were automatically determined from the routine computed tomography images. Five indices were calculated on the resulting EIT images respectively: (a) the ratio between tidal and deep inflation impedance changes; (b) tidal impedance changes in the right and left lungs; (c) center of gravity; (d) the global inhomogeneity index and (e) ventilation delay at mid-dorsal regions. No significant differences were found in all examined indices among the four reconstruction algorithms (p > 0.2, Kruskal-Wallis test). The examined algorithms used for EIT image reconstruction do not influence the selected indices derived from the EIT image analysis. Indices that were validated for images with one reconstruction algorithm are also valid for other reconstruction algorithms.

  1. Preconditioned alternating projection algorithms for maximum a posteriori ECT reconstruction

    International Nuclear Information System (INIS)

    Krol, Andrzej; Li, Si; Shen, Lixin; Xu, Yuesheng

    2012-01-01

    We propose a preconditioned alternating projection algorithm (PAPA) for solving the maximum a posteriori (MAP) emission computed tomography (ECT) reconstruction problem. Specifically, we formulate the reconstruction problem as a constrained convex optimization problem with total variation (TV) regularization. We then characterize the solution of the constrained convex optimization problem and show that it satisfies a system of fixed-point equations defined in terms of two proximity operators arising from the convex functions that define the TV-norm and the constraint involved in the problem. This characterization of the solution via proximity operators that define two projection operators naturally leads to an alternating projection algorithm for finding the solution. For efficient numerical computation, we introduce into the alternating projection algorithm a preconditioning matrix (the EM-preconditioner) for the dense system matrix involved in the optimization problem. We prove convergence of the PAPA theoretically. In numerical experiments, the performance of our algorithm, with an appropriately selected preconditioning matrix, is compared with that of the conventional MAP expectation-maximization (MAP-EM) algorithm with TV regularizer (EM-TV) and that of the recently developed nested EM-TV algorithm for ECT reconstruction. Based on the numerical experiments performed in this work, we observe that the alternating projection algorithm with the EM-preconditioner significantly outperforms EM-TV in all aspects, including convergence speed, noise in the reconstructed images and image quality. It also outperforms the nested EM-TV in convergence speed while providing comparable image quality. (paper)

  2. Parametric boundary reconstruction algorithm for industrial CT metrology application.

    Science.gov (United States)

    Yin, Zhye; Khare, Kedar; De Man, Bruno

    2009-01-01

    High-energy X-ray computed tomography (CT) systems have recently been used to produce high-resolution images in various nondestructive testing and evaluation (NDT/NDE) applications. The accuracy of the dimensional information extracted from CT images is rapidly approaching the accuracy achieved with a coordinate measuring machine (CMM), the conventional approach to acquiring metrology information directly. CT systems, on the other hand, generate a sinogram which is transformed mathematically into a pixel-based image, and the dimensional information of the scanned object is extracted afterwards by performing edge detection on the reconstructed CT images. The dimensional accuracy of this approach is limited by the grid size of the pixel-based representation of CT images, since the edge detection is performed on the pixel grid. Moreover, reconstructed CT images usually display various artifacts due to the underlying physical process, and the object boundaries resulting from edge detection then fail to represent the true boundaries of the scanned object. In this paper, a novel algorithm to reconstruct the boundaries of an object with uniform material composition and uniform density is presented. There are three major benefits to the proposed approach. First, since boundary parameters are reconstructed instead of image pixels, the complexity of the reconstruction algorithm is significantly reduced; the iterative approach, which can be computationally intensive, becomes practical with parametric boundary reconstruction. Second, the object of interest in metrology can be represented more directly and accurately by the boundary parameters than by image pixels; by eliminating the extra edge detection step, the overall dimensional accuracy and processing time can be improved. Third, since the parametric reconstruction approach shares its boundary representation with other conventional metrology modalities such as CMM, boundary information from other modalities can be directly incorporated.

  3. Search for 'Little Higgs' and reconstruction algorithms developments in Atlas

    International Nuclear Information System (INIS)

    Rousseau, D.

    2007-05-01

    This document summarizes developments of the framework and reconstruction algorithms for the ATLAS detector at the LHC. A library of reconstruction algorithms has been developed in an increasingly complex environment. The reconstruction software, originally designed on an optimistic Monte Carlo simulation, has been confronted with a more detailed 'as-built' simulation. The 'Little Higgs' is an effective theory which can be taken for granted, or as an opportunity to study heavy resonances. In several cases, these resonances can be detected in original channels like tZ, ZH or WH. (author)

  4. A superlinear interior points algorithm for engineering design optimization

    Science.gov (United States)

    Herskovits, J.; Asquier, J.

    1990-01-01

    We present a quasi-Newton interior points algorithm for nonlinear constrained optimization. It is based on a general approach consisting of the iterative solution, in the primal and dual spaces, of the equalities in the Karush-Kuhn-Tucker optimality conditions. This is done in such a way as to have primal and dual feasibility at each iteration, which ensures satisfaction of those optimality conditions at the limit points. This approach is very strong and efficient, since at each iteration it only requires the solution of two linear systems with the same matrix, instead of quadratic programming subproblems. It is also particularly appropriate for engineering design optimization, inasmuch as a feasible design is obtained at each iteration. The present algorithm uses a quasi-Newton approximation of the second derivative of the Lagrangian function in order to achieve superlinear asymptotic convergence. We discuss theoretical aspects of the algorithm and its computer implementation.
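
    To make the "two linear systems with the same matrix" flavor concrete, below is a hedged primal-dual interior-point sketch for the simplest setting, a nonnegativity-constrained quadratic program; it is a generic textbook iteration (one reduced Newton solve per step), not Herskovits' algorithm itself, and Q is assumed positive definite.

        import numpy as np

        def qp_interior_point(Q, c, n_iter=30):
            # Solve min 1/2 x'Qx + c'x s.t. x >= 0 via the perturbed KKT system
            #   Qx + c - lam = 0,  x_i * lam_i = mu,  x, lam > 0.
            n = len(c)
            x = np.ones(n); lam = np.ones(n)
            for _ in range(n_iter):
                mu = 0.1 * (x @ lam) / n              # shrinking barrier parameter
                r = Q @ x + c - lam                   # dual residual
                M = Q + np.diag(lam / x)              # reduced Newton matrix
                dx = np.linalg.solve(M, -r + (mu - x * lam) / x)
                dlam = (mu - x * lam) / x - (lam / x) * dx
                step = 1.0                            # damp to stay strictly feasible
                for v, dv in ((x, dx), (lam, dlam)):
                    neg = dv < 0
                    if neg.any():
                        step = min(step, 0.9 * np.min(-v[neg] / dv[neg]))
                x = x + step * dx; lam = lam + step * dlam
            return x

        # toy QP: min x1^2 - 2*x1 + x2^2 + x2 with x >= 0  ->  approx [1, 0]
        print(qp_interior_point(np.diag([2.0, 2.0]), np.array([-2.0, 1.0])).round(3))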

  5. Research on Image Reconstruction Algorithms for Tuber Electrical Resistance Tomography System

    Directory of Open Access Journals (Sweden)

    Jiang Zili

    2016-01-01

    Full Text Available The application of electrical resistance tomography (ERT) technology has been expanded to the field of agriculture, and the concept of TERT (Tuber Electrical Resistance Tomography) is proposed. On the basis of research on the forward and inverse problems of the TERT system, a hybrid algorithm based on a genetic algorithm is proposed, which can be used in the TERT system to monitor the growth status of plant tubers. Image reconstruction in the TERT system differs from that in the conventional ERT system for two-phase-flow measurement: TERT imaging requires higher measurement precision, whereas conventional ERT cares more about reconstruction speed. A variety of algorithms are analyzed and optimized to make them suitable for the TERT system, for example linear back projection, modified Newton-Raphson and genetic algorithms. Experimental results showed that the novel hybrid algorithm is superior to the other algorithms and can effectively improve image reconstruction quality.

  6. On-line reconstruction algorithms for the CBM and ALICE experiments

    International Nuclear Information System (INIS)

    Gorbunov, Sergey

    2013-01-01

    This thesis presents various algorithms which have been developed for on-line event reconstruction in the CBM experiment at GSI, Darmstadt and the ALICE experiment at CERN, Geneva. Despite the fact that the experiments are different - CBM is a fixed-target experiment with forward geometry, while ALICE has a typical collider geometry - they share common aspects where reconstruction is concerned. The thesis describes: - general modifications to the Kalman filter method, which allow one to accelerate, improve, and simplify existing fit algorithms; - track-fit algorithms developed for the CBM and ALICE experiments, including a new method for track extrapolation in a non-homogeneous magnetic field; - primary and secondary vertex fit algorithms developed for both experiments, in particular a new method for the reconstruction of decayed particles; - a parallel algorithm for on-line tracking in the CBM experiment; - a parallel algorithm for on-line tracking in the High Level Trigger of the ALICE experiment; - the realization of the track finders on modern hardware, such as SIMD CPU registers and GPU accelerators. All the presented methods have been developed by, or with the direct participation of, the author.
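
    As a flavor of the Kalman-filter machinery underlying such track fits, a minimal 1D straight-line fit (constant slope, position-only measurements, no process noise or magnetic field) can be written as:

        import numpy as np

        def kalman_track_fit(hits, dz, sigma_meas):
            # State (position, slope) propagated layer to layer and updated
            # with each measured hit position.
            F = np.array([[1.0, dz], [0.0, 1.0]])     # propagation between layers
            H = np.array([[1.0, 0.0]])                # we measure position only
            R = np.array([[sigma_meas ** 2]])
            x = np.array([hits[0], 0.0])              # seed the state from the first hit
            P = np.diag([sigma_meas ** 2, 1.0])
            for z in hits[1:]:
                x = F @ x; P = F @ P @ F.T            # predict
                S = H @ P @ H.T + R                   # innovation covariance
                K = P @ H.T @ np.linalg.inv(S)        # Kalman gain
                x = x + K @ (np.array([z]) - H @ x)   # update
                P = (np.eye(2) - K @ H) @ P
            return x, P

        # hypothetical hits from a track y = 0.3 + 0.1*z with 0.02 smearing
        state, cov = kalman_track_fit(np.array([0.31, 0.42, 0.49, 0.61, 0.72]),
                                      dz=1.0, sigma_meas=0.02)
        print(state)   # fitted (position at last layer, slope)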

  7. A combination-weighted Feldkamp-based reconstruction algorithm for cone-beam CT

    International Nuclear Information System (INIS)

    Mori, Shinichiro; Endo, Masahiro; Komatsu, Shuhei; Kandatsu, Susumu; Yashiro, Tomoyasu; Baba, Masayuki

    2006-01-01

    The combination-weighted Feldkamp algorithm (CW-FDK) was developed and tested in a phantom in order to reduce cone-beam artefacts and enhance cranio-caudal reconstruction coverage in an attempt to improve image quality when utilizing cone-beam computed tomography (CBCT). Using a 256-slice cone-beam CT (256CBCT), image quality (CT-number uniformity and geometrical accuracy) was quantitatively evaluated in phantom and clinical studies, and the results were compared to those obtained with the original Feldkamp algorithm. A clinical study was done in lung cancer patients under breath holding and free breathing. Image quality for the original Feldkamp algorithm is degraded at the edge of the scan region due to the missing volume, commensurate with the cranio-caudal distance between the reconstruction and central planes. The CW-FDK extended the reconstruction coverage to equal the scan coverage and improved reconstruction accuracy, unaffected by the cranio-caudal distance. The extended reconstruction coverage with good image quality provided by the CW-FDK will be clinically investigated for improving diagnostic and radiotherapy applications. In addition, this algorithm can also be adapted for use in relatively wide cone-angle CBCT such as with a flat-panel detector CBCT

  8. Beard reconstruction: A surgical algorithm.

    Science.gov (United States)

    Ninkovic, M; Heidekrueger, P I; Ehrl, D; von Spiegel, F; Broer, P N

    2016-06-01

    Facial defects with loss of hair-bearing regions can be caused by trauma, infection, tumor excision, or burn injury. The presented analysis evaluates a series of different surgical approaches with a focus on male beard reconstruction, emphasizing the role of tissue expansion of regional and free flaps. Locoregional and free flap reconstructions were performed in 11 male patients with 14 facial defects affecting the hair-bearing bucco-mandibular or perioral region. In order to minimize donor-site morbidity and obtain large amounts of thin, pliable, hair-bearing tissue, pre-expansion was performed in five of 14 patients. Eight of 14 patients were treated with locoregional flap reconstructions and six with free flap reconstructions. Algorithms regarding pre- and intraoperative decision making are discussed and long-term (mean follow-up 1.5 years) results analyzed. Major complications, including tissue expander infection with the need for removal or exchange, partial or full flap loss, occurred in 0% (0/8) of patients with locoregional flaps and in 17% (1/6) of patients undergoing free flap reconstructions. Secondary refinement surgery was performed in 25% (2/8) of locoregional flaps and in 67% (4/6) of free flaps. Both locoregional and distant tissue transfers play a role in beard reconstruction, while pre-expansion remains an invaluable tool. Paying attention to the presented principles and considering the significance of aesthetic facial subunits, range of motion, aesthetics, and patient satisfaction were improved long term in all our patients while minimizing donor-site morbidity. Copyright © 2016 British Association of Plastic, Reconstructive and Aesthetic Surgeons. Published by Elsevier Ltd. All rights reserved.

  9. Surface Reconstruction and Image Enhancement via $L^1$-Minimization

    KAUST Repository

    Dobrev, Veselin; Guermond, Jean-Luc; Popov, Bojan

    2010-01-01

    A surface reconstruction technique based on minimization of the total variation of the gradient is introduced. Convergence of the method is established, and an interior-point algorithm solving the associated linear programming problem is introduced.

  10. A subzone reconstruction algorithm for efficient staggered compatible remapping

    Energy Technology Data Exchange (ETDEWEB)

    Starinshak, D.P., E-mail: starinshak1@llnl.gov; Owen, J.M., E-mail: mikeowen@llnl.gov

    2015-09-01

    Staggered-grid Lagrangian hydrodynamics algorithms frequently make use of subzonal discretization of state variables for the purposes of improved numerical accuracy, generality to unstructured meshes, and exact conservation of mass, momentum, and energy. For Arbitrary Lagrangian–Eulerian (ALE) methods using a geometric overlay, it is difficult to remap subzonal variables in an accurate and efficient manner due to the number of subzone–subzone intersections that must be computed. This becomes prohibitive in the case of 3D, unstructured, polyhedral meshes. A new procedure is outlined in this paper to avoid direct subzonal remapping. The new algorithm reconstructs the spatial profile of a subzonal variable using remapped zonal and nodal representations of the data. The reconstruction procedure is cast as an under-constrained optimization problem. Enforcing conservation at each zone and node on the remapped mesh provides the set of equality constraints; the objective function corresponds to a quadratic variation per subzone between the values to be reconstructed and a set of target reference values. Numerical results for various pure-remapping and hydrodynamics tests are provided. Ideas for extending the algorithm to staggered-grid radiation-hydrodynamics are discussed as well as ideas for generalizing the algorithm to include inequality constraints.
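
    In the linear case, the "conservation as equality constraints, quadratic variation as objective" formulation reduces to a single KKT solve; a small illustrative numpy version (with a toy one-zone constraint, not the paper's mesh machinery) is:

        import numpy as np

        def constrained_reconstruction(ref, C, d, W=None):
            # Minimize (s - ref)' W (s - ref) subject to C s = d by solving
            # the KKT system  [2W  C'; C  0] [s; nu] = [2W ref; d].
            n, m = len(ref), len(d)
            W = np.eye(n) if W is None else W
            K = np.block([[2 * W, C.T], [C, np.zeros((m, m))]])
            rhs = np.concatenate([2 * W @ ref, d])
            return np.linalg.solve(K, rhs)[:n]

        # toy case: four subzone masses must sum to the remapped zone mass
        ref = np.array([1.0, 1.2, 0.9, 1.1])
        C = np.ones((1, 4)); d = np.array([4.4])
        print(constrained_reconstruction(ref, C, d))   # each value shifts by +0.05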

  11. A three-dimensional reconstruction algorithm for an inverse-geometry volumetric CT system

    International Nuclear Information System (INIS)

    Schmidt, Taly Gilat; Fahrig, Rebecca; Pelc, Norbert J.

    2005-01-01

    An inverse-geometry volumetric computed tomography (IGCT) system has been proposed capable of rapidly acquiring sufficient data to reconstruct a thick volume in one circular scan. The system uses a large-area scanned source opposite a smaller detector. The source and detector have the same extent in the axial, or slice, direction, thus providing sufficient volumetric sampling and avoiding cone-beam artifacts. This paper describes a reconstruction algorithm for the IGCT system. The algorithm first rebins the acquired data into two-dimensional (2D) parallel-ray projections at multiple tilt and azimuthal angles, followed by a 3D filtered backprojection. The rebinning step is performed by gridding the data onto a Cartesian grid in a 4D projection space. We present a new method for correcting the gridding error caused by the finite and asymmetric sampling in the neighborhood of each output grid point in the projection space. The reconstruction algorithm was implemented and tested on simulated IGCT data. Results show that the gridding correction reduces the gridding errors to below one Hounsfield unit. With this correction, the reconstruction algorithm does not introduce significant artifacts or blurring when compared to images reconstructed from simulated 2D parallel-ray projections. We also present an investigation of the noise behavior of the method which verifies that the proposed reconstruction algorithm utilizes cross-plane rays as efficiently as in-plane rays and can provide noise comparable to an in-plane parallel-ray geometry for the same number of photons. Simulations of a resolution test pattern and the modulation transfer function demonstrate that the IGCT system, using the proposed algorithm, is capable of 0.4 mm isotropic resolution. The successful implementation of the reconstruction algorithm is an important step in establishing feasibility of the IGCT system

  12. Quantitatively assessed CT imaging measures of pulmonary interstitial pneumonia: Effects of reconstruction algorithms on histogram parameters

    International Nuclear Information System (INIS)

    Koyama, Hisanobu; Ohno, Yoshiharu; Yamazaki, Youichi; Nogami, Munenobu; Kusaka, Akiko; Murase, Kenya; Sugimura, Kazuro

    2010-01-01

    This study assessed the influence of the reconstruction algorithm on quantitative assessments in interstitial pneumonia patients. A total of 25 collagen vascular disease patients (nine male and 16 female; mean age, 57.2 years; age range, 32-77 years) underwent thin-section MDCT examinations, and the MDCT data were reconstructed with three kinds of reconstruction algorithm (two high-frequency [A and B] and one standard [C]). In reconstruction algorithm B, the effect of low- and middle-frequency space was suppressed compared with reconstruction algorithm A. As quantitative CT parameters, kurtosis, skewness, and mean lung density (MLD) were acquired from a frequency histogram of the whole lung parenchyma for each reconstruction algorithm. To determine the differences in quantitative CT parameters caused by the reconstruction algorithms, these parameters were compared statistically. To determine the relationships with disease severity, these parameters were correlated with PFTs. In the results, all histogram parameter values differed significantly from each other (p < 0.0001), and those of reconstruction algorithm C were the highest. All MLDs had fair or moderate correlations with all parameters of PFT (-0.64 < r < -0.45, p < 0.05). Although kurtosis and skewness in high-frequency reconstruction algorithm A had significant correlations with all parameters of PFT (-0.61 < r < -0.45, p < 0.05), there were significant correlations only with the diffusing capacity of carbon monoxide (DLco) and total lung capacity (TLC) in reconstruction algorithm C, and with forced expiratory volume in 1 s (FEV1), DLco and TLC in reconstruction algorithm B. In conclusion, the reconstruction algorithm influences quantitative assessments on chest thin-section MDCT examinations in interstitial pneumonia patients.
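
    The three histogram parameters are standard moments of the lung CT-number distribution; for reference, a short numpy sketch (using the excess-kurtosis convention, which may differ from the paper's) is:

        import numpy as np

        def histogram_parameters(lung_hu):
            # MLD, skewness and kurtosis of segmented lung voxel values (HU).
            x = np.asarray(lung_hu, dtype=float)
            mld = x.mean()
            sd = x.std()
            skew = np.mean(((x - mld) / sd) ** 3)
            kurt = np.mean(((x - mld) / sd) ** 4) - 3.0   # excess kurtosis
            return mld, skew, kurt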

  13. Quantitatively assessed CT imaging measures of pulmonary interstitial pneumonia: Effects of reconstruction algorithms on histogram parameters

    Energy Technology Data Exchange (ETDEWEB)

    Koyama, Hisanobu [Department of Radiology, Hyogo Kaibara Hospital, 5208-1 Kaibara, Kaibara-cho, Tanba 669-3395 (Japan)], E-mail: hisanobu19760104@yahoo.co.jp; Ohno, Yoshiharu [Department of Radiology, Kobe University Graduate School of Medicine, 7-5-2 Kusunoki-cho, Chuo-ku, Kobe 650-0017 (Japan)], E-mail: yosirad@kobe-u.ac.jp; Yamazaki, Youichi [Department of Medical Physics and Engineering, Faculty of Health Sciences, Graduate School of Medicine, Osaka University, 1-7 Yamadaoka, Suita 565-0871 (Japan)], E-mail: y.yamazk@sahs.med.osaka-u.ac.jp; Nogami, Munenobu [Division of PET, Institute of Biomedical Research and Innovation, 2-2 Minamimachi, Minatojima, Chuo-ku, Kobe 650-0047 (Japan)], E-mail: aznogami@fbri.org; Kusaka, Akiko [Division of Radiology, Kobe University Hospital, 7-5-2 Kusunoki-cho, Chuo-ku, Kobe 650-0017 (Japan)], E-mail: a.kusaka@hosp.kobe-u.ac.jp; Murase, Kenya [Department of Medical Physics and Engineering, Faculty of Health Sciences, Graduate School of Medicine, Osaka University, 1-7 Yamadaoka, Suita 565-0871 (Japan)], E-mail: murase@sahs.med.osaka-u.ac.jp; Sugimura, Kazuro [Department of Radiology, Kobe University Graduate School of Medicine, 7-5-2 Kusunoki-cho, Chuo-ku, Kobe 650-0017 (Japan)], E-mail: sugimura@med.kobe-u.ac.jp

    2010-04-15

    This study assessed the influence of the reconstruction algorithm on quantitative assessments in interstitial pneumonia patients. A total of 25 collagen vascular disease patients (nine male and 16 female; mean age, 57.2 years; age range, 32-77 years) underwent thin-section MDCT examinations, and the MDCT data were reconstructed with three kinds of reconstruction algorithm (two high-frequency [A and B] and one standard [C]). In reconstruction algorithm B, the effect of low- and middle-frequency space was suppressed compared with reconstruction algorithm A. As quantitative CT parameters, kurtosis, skewness, and mean lung density (MLD) were acquired from a frequency histogram of the whole lung parenchyma for each reconstruction algorithm. To determine the differences in quantitative CT parameters caused by the reconstruction algorithms, these parameters were compared statistically. To determine the relationships with disease severity, these parameters were correlated with PFTs. In the results, all histogram parameter values differed significantly from each other (p < 0.0001), and those of reconstruction algorithm C were the highest. All MLDs had fair or moderate correlations with all parameters of PFT (-0.64 < r < -0.45, p < 0.05). Although kurtosis and skewness in high-frequency reconstruction algorithm A had significant correlations with all parameters of PFT (-0.61 < r < -0.45, p < 0.05), there were significant correlations only with the diffusing capacity of carbon monoxide (DLco) and total lung capacity (TLC) in reconstruction algorithm C, and with forced expiratory volume in 1 s (FEV1), DLco and TLC in reconstruction algorithm B. In conclusion, the reconstruction algorithm influences quantitative assessments on chest thin-section MDCT examinations in interstitial pneumonia patients.

  14. Performance of the ATLAS primary vertex reconstruction algorithms

    CERN Document Server

    Zhang, Matt

    2017-01-01

    The reconstruction of primary vertices in the busy, high pile-up environment of the LHC is a challenging task. The challenges and novel methods developed by the ATLAS experiment to reconstruct vertices in such environments will be presented. Advances in vertex seeding, including methods taken from medical imaging which allow for the reconstruction of very nearby vertices, will be highlighted. The performance of the current vertexing algorithms using early Run-2 data will be presented and compared to results from simulation.

  15. On Implementing a Homogeneous Interior-Point Algorithm for Nonsymmetric Conic Optimization

    DEFF Research Database (Denmark)

    Skajaa, Anders; Jørgensen, John Bagterp; Hansen, Per Christian

    Based on earlier work by Nesterov, an implementation of a homogeneous infeasible-start interior-point algorithm for solving nonsymmetric conic optimization problems is presented. Starting each iteration from (the vicinity of) the central path, the method computes (nearly) primal-dual symmetric approximate tangent directions, followed by a purely primal centering procedure to locate the next central primal-dual point. Features of the algorithm include that it makes use only of the primal barrier function, that it is able to detect infeasibilities in the problem, and that no phase-I method is needed.

  16. A maximum-likelihood reconstruction algorithm for tomographic gamma-ray nondestructive assay

    International Nuclear Information System (INIS)

    Prettyman, T.H.; Estep, R.J.; Cole, R.A.; Sheppard, G.A.

    1994-01-01

    A new tomographic reconstruction algorithm for nondestructive assay with high resolution gamma-ray spectroscopy (HRGS) is presented. The reconstruction problem is formulated using a maximum-likelihood approach in which the statistical structure of both the gross and continuum measurements used to determine the full-energy response in HRGS is precisely modeled. An accelerated expectation-maximization algorithm is used to determine the optimal solution. The algorithm is applied to safeguards and environmental assays of large samples (for example, 55-gal. drums) in which high continuum levels caused by Compton scattering are routinely encountered. Details of the implementation of the algorithm and a comparative study of the algorithm's performance are presented

  17. Spectral CT metal artifact reduction with an optimization-based reconstruction algorithm

    Science.gov (United States)

    Gilat Schmidt, Taly; Barber, Rina F.; Sidky, Emil Y.

    2017-03-01

    Metal objects cause artifacts in computed tomography (CT) images. This work investigated the feasibility of a spectral CT method to reduce metal artifacts. Spectral CT acquisition combined with optimization-based reconstruction is proposed to reduce artifacts by modeling the physical effects that cause metal artifacts and by providing the flexibility to selectively remove corrupted spectral measurements in the spectral-sinogram space. The proposed Constrained 'One-Step' Spectral CT Image Reconstruction (cOSSCIR) algorithm directly estimates the basis material maps while enforcing convex constraints. The incorporation of constraints on the reconstructed basis material maps is expected to mitigate undersampling effects that occur when corrupted data is excluded from reconstruction. The feasibility of the cOSSCIR algorithm to reduce metal artifacts was investigated through simulations of a pelvis phantom. The cOSSCIR algorithm was investigated with and without the use of a third basis material representing metal. The effects of excluding data corrupted by metal were also investigated. The results demonstrated that the proposed cOSSCIR algorithm reduced metal artifacts and improved CT number accuracy. For example, CT number error in a bright shading artifact region was reduced from 403 HU in the reference filtered backprojection reconstruction to 33 HU using the proposed algorithm in simulation. In the dark shading regions, the error was reduced from 1141 HU to 25 HU. Of the investigated approaches, decomposing the data into three basis material maps and excluding the corrupted data demonstrated the greatest reduction in metal artifacts.

  18. Imaging reconstruction based on improved wavelet denoising combined with parallel-beam filtered back-projection algorithm

    Science.gov (United States)

    Ren, Zhong; Liu, Guodong; Huang, Zhen

    2012-11-01

    Image reconstruction is a key step in medical imaging (MI), and the performance of the reconstruction algorithm determines the quality and resolution of the reconstructed image. Although other algorithms are in use, the filtered back-projection (FBP) algorithm is still the classical and most commonly used algorithm in clinical MI. In the FBP algorithm, filtering of the original projection data is a key step for overcoming artifacts in the reconstructed image. Since the simple use of classical filters, such as the Shepp-Logan (SL) and Ram-Lak (RL) filters, has drawbacks and limitations in practice, especially for projection data polluted by non-stationary random noise, an improved wavelet denoising combined with the parallel-beam FBP algorithm is used in this paper to enhance the quality of the reconstructed image. In the experiments, the reconstruction results were compared between the improved wavelet denoising and other methods (direct FBP, mean-filter FBP and median-filter FBP). To determine the optimum reconstruction, different algorithms and different wavelet bases combined with three filters were tested. Experimental results show that the reconstruction quality of the improved FBP algorithm is better than that of the others. Comparing the results of the different algorithms based on two evaluation standards, mean-square error (MSE) and peak signal-to-noise ratio (PSNR), it was found that the reconstruction based on db2 and the Hanning filter at decomposition scale 2 was best: its MSE value was lower and its PSNR value higher than those of the others. Therefore, this improved FBP algorithm has potential value in medical imaging.
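
    A hedged sketch of the projection-domain wavelet denoising step is given below, assuming the PyWavelets package; the db2 wavelet and decomposition level 2 mirror the best-performing settings reported above, while the universal soft threshold is a generic choice, since the paper's exact thresholding rule is not restated here.

        import numpy as np
        import pywt  # PyWavelets, assumed installed

        def denoise_projection(row, wavelet="db2", level=2):
            # Soft-threshold the detail coefficients of one projection row
            # before ramp filtering and back-projection.
            coeffs = pywt.wavedec(row, wavelet, level=level)
            sigma = np.median(np.abs(coeffs[-1])) / 0.6745   # robust noise estimate
            thr = sigma * np.sqrt(2.0 * np.log(len(row)))    # universal threshold
            coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft")
                                    for c in coeffs[1:]]
            return pywt.waverec(coeffs, wavelet)[:len(row)]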

  19. Fast half-sibling population reconstruction: theory and algorithms.

    Science.gov (United States)

    Dexter, Daniel; Brown, Daniel G

    2013-07-12

    Kinship inference is the task of identifying genealogically related individuals. Kinship information is important for determining mating structures, notably in endangered populations. Although many solutions exist for reconstructing full sibling relationships, few exist for half-siblings. We consider the problem of determining whether a proposed half-sibling population reconstruction is valid under Mendelian inheritance assumptions. We show that this problem is NP-complete and provide a 0/1 integer program that identifies the minimum number of individuals that must be removed from a population in order for the reconstruction to become valid. We also present SibJoin, a heuristic-based clustering approach based on Mendelian genetics, which is strikingly fast. The software is available at http://github.com/ddexter/SibJoin.git+. Our SibJoin algorithm is reasonably accurate and thousands of times faster than existing algorithms. The heuristic is used to infer a half-sibling structure for a population which was, until recently, too large to evaluate.

  20. Well-posedness of the conductivity reconstruction from an interior current density in terms of Schauder theory

    KAUST Repository

    Kim, Yong-Jung

    2015-06-23

    We show the well-posedness of the conductivity image reconstruction problem with a single set of interior electrical current data and boundary conductivity data. Isotropic conductivity is considered in two space dimensions. Uniqueness for similar conductivity reconstruction problems has been known for several cases. However, the existence and the stability are obtained in this paper for the first time. The main tool of the proof is the method of characteristics of a related curl equation.

  1. Well-posedness of the conductivity reconstruction from an interior current density in terms of Schauder theory

    KAUST Repository

    Kim, Yong-Jung; Lee, Min-Gi

    2015-01-01

    We show the well-posedness of the conductivity image reconstruction problem with a single set of interior electrical current data and boundary conductivity data. Isotropic conductivity is considered in two space dimensions. Uniqueness for similar conductivity reconstruction problems has been known for several cases. However, the existence and the stability are obtained in this paper for the first time. The main tool of the proof is the method of characteristics of a related curl equation.

  2. A Total Variation Regularization Based Super-Resolution Reconstruction Algorithm for Digital Video

    Directory of Open Access Journals (Sweden)

    Zhang Liangpei

    2007-01-01

    Full Text Available Super-resolution (SR) reconstruction technique is capable of producing a high-resolution image from a sequence of low-resolution images. In this paper, we study an efficient SR algorithm for digital video. To effectively deal with the intractable problems in SR video reconstruction, such as inevitable motion estimation errors, noise, blurring, missing regions, and compression artifacts, total variation (TV) regularization is employed in the reconstruction model. We use the fixed-point iteration method and preconditioning techniques to efficiently solve the associated nonlinear Euler-Lagrange equations of the corresponding variational problem in SR. The proposed algorithm has been tested in several cases of motion and degradation. It is also compared with the Laplacian regularization-based SR algorithm and other TV-based SR algorithms. Experimental results are presented to illustrate the effectiveness of the proposed algorithm.

  3. Iterative reconstruction of transcriptional regulatory networks: an algorithmic approach.

    Directory of Open Access Journals (Sweden)

    Christian L Barrett

    2006-05-01

    Full Text Available The number of complete, publicly available genome sequences is now greater than 200, and this number is expected to grow rapidly in the near future as metagenomic and environmental sequencing efforts escalate and the cost of sequencing drops. In order to make use of these data for understanding particular organisms and for discerning general principles about how organisms function, it will be necessary to reconstruct their various biochemical reaction networks. Principal among these will be transcriptional regulatory networks. Given the physical and logical complexity of these networks, the various sources of (often noisy) data that can be utilized for their elucidation, the monetary costs involved, and the huge number of potential experiments (approximately 10^12) that can be performed, experiment design algorithms will be necessary for synthesizing the various computational and experimental data to maximize the efficiency of regulatory network reconstruction. This paper presents an algorithm for experimental design to systematically and efficiently reconstruct transcriptional regulatory networks. It is meant to be applied iteratively in conjunction with an experimental laboratory component. The algorithm is presented here in the context of reconstructing transcriptional regulation for metabolism in Escherichia coli, and, through a retrospective analysis of previously performed experiments, we show that the produced experiment designs conform to how a human would design experiments. The algorithm is able to utilize probability estimates based on a wide range of computational and experimental sources to suggest experiments with the highest potential of discovering the greatest amount of new regulatory knowledge.

  4. Noise propagation in iterative reconstruction algorithms with line searches

    International Nuclear Information System (INIS)

    Qi, Jinyi

    2003-01-01

    In this paper we analyze the propagation of noise in iterative image reconstruction algorithms. We derive theoretical expressions for the general form of preconditioned gradient algorithms with line searches. The results are applicable to a wide range of iterative reconstruction problems, such as emission tomography, transmission tomography, and image restoration. A unique contribution of this paper compared with our previous work [1] is that the line search is explicitly modeled and we do not use the approximation that the gradient of the objective function is zero. As a result, the error in the estimate of noise at early iterations is significantly reduced.
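
    For reference, the class of algorithms analyzed here can be written in the generic preconditioned-gradient form below (a sketch in our notation; the paper's exact expressions, including the noise terms, are more detailed):

      x^{(k+1)} = x^{(k)} + \alpha_k \, P \, \nabla\Phi\bigl(x^{(k)}; y\bigr),
      \qquad
      \alpha_k = \arg\max_{\alpha} \Phi\bigl(x^{(k)} + \alpha\, P\, \nabla\Phi(x^{(k)}; y);\; y\bigr),

    where P is the preconditioner and \Phi the objective. Noise propagation then follows by linearizing each iterate with respect to the data: Cov[x^{(k)}] \approx (\partial x^{(k)}/\partial y)\, Cov[y]\, (\partial x^{(k)}/\partial y)^{T}.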

  5. Intermediate view reconstruction using adaptive disparity search algorithm for real-time 3D processing

    Science.gov (United States)

    Bae, Kyung-hoon; Park, Changhan; Kim, Eun-soo

    2008-03-01

    In this paper, an intermediate view reconstruction (IVR) method using an adaptive disparity search algorithm (ADSA) is proposed for real-time three-dimensional (3D) processing. The proposed algorithm reduces the processing time of disparity estimation by adaptively selecting the disparity search range, and it also increases the quality of the 3D imaging. That is, by adaptively predicting the mutual correlation between the stereo image pair, the bandwidth of the stereo input images can be compressed to the level of a conventional 2D image, and a predicted image can be effectively reconstructed from a reference image and disparity vectors. Experiments with the stereo sequences 'Pot Plant' and 'IVO' show that the proposed algorithm improves the PSNR of the reconstructed image by about 4.8 dB and reduces its synthesis time by about 7.02 s compared with conventional algorithms.

  6. Weighted expectation maximization reconstruction algorithms with application to gated megavoltage tomography

    International Nuclear Information System (INIS)

    Zhang Jin; Shi Daxin; Anastasio, Mark A; Sillanpaa, Jussi; Chang Jenghwa

    2005-01-01

    We propose and investigate weighted expectation maximization (EM) algorithms for image reconstruction in x-ray tomography. The development of the algorithms is motivated by the respiratory-gated megavoltage tomography problem, in which the acquired asymmetric cone-beam projections are limited in number and unevenly sampled over view angle. In these cases, images reconstructed by use of the conventional EM algorithm can contain ring- and streak-like artefacts that are attributable to a combination of data inconsistencies and truncation of the projection data. By use of computer-simulated and clinical gated fan-beam megavoltage projection data, we demonstrate that the proposed weighted EM algorithms effectively mitigate such image artefacts. (note)
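
    A minimal sketch of a weighted EM update of this general kind follows; the per-ray weights w (down-weighting, e.g., truncated or unevenly sampled rays) and the function name weighted_mlem are our illustrative assumptions, since the abstract does not spell out the exact weighting scheme. With unit weights the iteration reduces to the conventional ML-EM update.

      import numpy as np

      def weighted_mlem(A, y, w, iters=50, eps=1e-12):
          """Weighted EM/MLEM sketch (our simplified form, not the paper's exact one).

          A : (n_rays, n_pixels) system matrix, y : measured counts,
          w : per-ray weights down-weighting unreliable projections.
          """
          x = np.ones(A.shape[1])
          norm = A.T @ w                       # weighted sensitivity, sum_i w_i a_ij
          for _ in range(iters):
              ratio = y / np.maximum(A @ x, eps)
              x *= (A.T @ (w * ratio)) / np.maximum(norm, eps)
          return x

      rng = np.random.default_rng(0)
      A = rng.random((200, 64))
      x_true = rng.random(64)
      y = rng.poisson(A @ x_true).astype(float)
      x_hat = weighted_mlem(A, y, w=np.ones(200))   # uniform weights = plain MLEM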

  7. Time Reversal Reconstruction Algorithm Based on PSO Optimized SVM Interpolation for Photoacoustic Imaging

    Directory of Open Access Journals (Sweden)

    Mingjian Sun

    2015-01-01

    Full Text Available Photoacoustic imaging is an innovative technique for imaging biomedical tissues. The time reversal reconstruction algorithm, in which a numerical model of the acoustic forward problem is run backwards in time, is widely used. In this paper, a time reversal reconstruction algorithm based on particle swarm optimization (PSO) optimized support vector machine (SVM) interpolation is proposed for photoacoustic imaging. Numerical results show that the reconstructed images of the proposed algorithm are more accurate than those of the time reversal algorithms based on nearest neighbor, linear, and cubic convolution interpolation, and that it can provide higher imaging quality using significantly fewer measurement positions or scanning times.

  8. Comparison of different reconstruction algorithms for three-dimensional ultrasound imaging in a neurosurgical setting.

    Science.gov (United States)

    Miller, D; Lippert, C; Vollmer, F; Bozinov, O; Benes, L; Schulte, D M; Sure, U

    2012-09-01

    Freehand three-dimensional ultrasound imaging (3D-US) is increasingly used in image-guided surgery. During image acquisition, a set of B-scans is acquired that is distributed in a non-parallel manner over the area of interest. Reconstructing these images into a regular array allows 3D visualization. However, the reconstruction process may introduce artefacts and may therefore reduce image quality. The aim of the study is to compare different algorithms with respect to image quality and diagnostic value for image guidance in neurosurgery. 3D-US data sets were acquired during surgery of various intracerebral lesions using an integrated ultrasound-navigation device. They were stored for post-hoc evaluation. Five different reconstruction algorithms, a standard multiplanar reconstruction with interpolation (MPR), a pixel nearest neighbour method (PNN), a voxel nearest neighbour method (VNN) and two voxel based distance-weighted algorithms (VNN2 and DW) were tested with respect to image quality and artefact formation. The capability of the algorithm to fill gaps within the sample volume was investigated and a clinical evaluation with respect to the diagnostic value of the reconstructed images was performed. MPR was significantly worse than the other algorithms in filling gaps. In an image subtraction test, VNN2 and DW reliably reconstructed images even if large amounts of data were missing. However, the quality of the reconstruction improved, if data acquisition was performed in a structured manner. When evaluating the diagnostic value of reconstructed axial, sagittal and coronal views, VNN2 and DW were judged to be significantly better than MPR and VNN. VNN2 and DW could be identified as robust algorithms that generate reconstructed US images with a high diagnostic value. These algorithms improve the utility and reliability of 3D-US imaging during intraoperative navigation. Copyright © 2012 John Wiley & Sons, Ltd.

  9. Backtracking algorithm for lepton reconstruction with HADES

    International Nuclear Information System (INIS)

    Sellheim, P

    2015-01-01

    The High Acceptance Di-Electron Spectrometer (HADES) at the GSI Helmholtzzentrum für Schwerionenforschung investigates dilepton and strangeness production in elementary and heavy-ion collisions. In April-May 2012 HADES recorded 7 billion Au+Au events at a beam energy of 1.23 GeV/u with the highest multiplicities measured so far. Track reconstruction and particle identification in this high track density environment are challenging. The most important detector component for lepton identification is the Ring Imaging Cherenkov detector; its main purpose is the separation of electrons and positrons from the large background of charged hadrons produced in heavy-ion collisions. In order to improve lepton identification, a backtracking algorithm was developed. In this contribution we show the results of the algorithm compared to the currently applied method for e+/- identification. The efficiency and purity of a reconstructed e+/- sample are discussed as well. (paper)

  10. FPGA Hardware Acceleration of a Phylogenetic Tree Reconstruction with Maximum Parsimony Algorithm

    OpenAIRE

    BLOCK, Henry; MARUYAMA, Tsutomu

    2017-01-01

    In this paper, we present an FPGA hardware implementation for a phylogenetic tree reconstruction with a maximum parsimony algorithm. We base our approach on a particular stochastic local search algorithm that uses the Progressive Neighborhood and the Indirect Calculation of Tree Lengths method. This method is widely used for the acceleration of the phylogenetic tree reconstruction algorithm in software. In our implementation, we define a tree structure and accelerate the search by parallel an...

  11. Optimization, evaluation, and comparison of standard algorithms for image reconstruction with the VIP-PET.

    Science.gov (United States)

    Mikhaylova, E; Kolstein, M; De Lorenzo, G; Chmeissani, M

    2014-07-01

    A novel positron emission tomography (PET) scanner design based on a room-temperature pixelated CdTe solid-state detector is being developed within the framework of the Voxel Imaging PET (VIP) Pathfinder project [1]. The simulation results show a great potential of the VIP to produce high-resolution images even in extremely challenging conditions such as the screening of a human head [2]. With an unprecedentedly high channel density (450 channels/cm³), image reconstruction is a challenge. Therefore, optimization is needed to find the best algorithm in order to correctly exploit the promising detector potential. The following reconstruction algorithms are evaluated: 2-D Filtered Backprojection (FBP), Ordered Subset Expectation Maximization (OSEM), List-Mode OSEM (LM-OSEM), and the Origin Ensemble (OE) algorithm. The evaluation is based on the comparison of a true image phantom with a set of reconstructed images obtained by each algorithm. This is achieved by calculating image quality merit parameters such as the bias, the variance and the mean square error (MSE). A systematic optimization of each algorithm is performed by varying the reconstruction parameters, such as the cutoff frequency of the noise filters and the number of iterations. A region of interest (ROI) analysis of the reconstructed phantom is also performed for each algorithm and the results are compared. Additionally, the performance of the image reconstruction methods is compared by calculating the modulation transfer function (MTF). The reconstruction time is also taken into account to choose the optimal algorithm. The analysis is based on GAMOS [3] simulation including the expected CdTe and electronic specifics.
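
    The merit parameters named here are standard; a minimal sketch of one common convention follows (the exact estimators used in the study may differ):

      import numpy as np

      def merit_parameters(recons, truth):
          """Bias, variance and MSE over an ensemble of reconstructions of one phantom.

          recons : (n_realizations, ...) stack of reconstructed images
          truth  : the known phantom
          """
          mean_img = recons.mean(axis=0)
          bias = mean_img - truth
          variance = recons.var(axis=0)
          mse = ((recons - truth) ** 2).mean(axis=0)   # = bias**2 + variance
          return bias, variance, mse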

  12. Computed laminography and reconstruction algorithm

    International Nuclear Information System (INIS)

    Que Jiemin; Cao Daquan; Zhao Wei; Tang Xiao

    2012-01-01

    Computed laminography (CL) is an alternative to computed tomography if large objects are to be inspected with high resolution. This is especially true for planar objects. In this paper, we set up a new scanning geometry for CL and study the algebraic reconstruction technique (ART) for CL imaging. We compare the results of ART with different weighting functions by computer simulation with a digital phantom. The results show that the ART algorithm is a good choice for the CL system. (authors)
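
    For reference, a minimal dense-matrix sketch of the ART (Kaczmarz) iteration studied here, with the relaxation/weighting factor exposed as a parameter; the function name art and the demo system are our illustrative choices:

      import numpy as np

      def art(A, p, iters=10, relax=0.25):
          """Kaczmarz-type ART: sweep the rays, correcting x along each row of A.

          A : (n_rays, n_voxels) projection matrix, p : measured line integrals,
          relax : the relaxation (weighting) factor varied in such studies.
          """
          x = np.zeros(A.shape[1])
          row_norm2 = (A ** 2).sum(axis=1)
          for _ in range(iters):
              for i in range(A.shape[0]):
                  if row_norm2[i] > 0:
                      x += relax * (p[i] - A[i] @ x) / row_norm2[i] * A[i]
          return x

      rng = np.random.default_rng(1)
      A = rng.random((120, 40)); x_true = rng.random(40)
      x_hat = art(A, A @ x_true)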

  13. A fast 4D cone beam CT reconstruction method based on the OSC-TV algorithm.

    Science.gov (United States)

    Mascolo-Fortin, Julia; Matenine, Dmitri; Archambault, Louis; Després, Philippe

    2018-01-01

    Four-dimensional cone beam computed tomography allows for temporally resolved imaging with useful applications in radiotherapy, but raises particular challenges in terms of image quality and computation time. The purpose of this work is to develop a fast and accurate 4D algorithm by adapting a GPU-accelerated ordered subsets convex algorithm (OSC), combined with the total variation minimization regularization technique (TV). Different initialization schemes were studied to adapt the OSC-TV algorithm to 4D reconstruction: each respiratory phase was initialized either with a 3D reconstruction or a blank image. Reconstruction algorithms were tested on a dynamic numerical phantom and on a clinical dataset. 4D iterations were implemented for a cluster of 8 GPUs. All developed methods allowed for an adequate visualization of the respiratory movement and compared favorably to the McKinnon-Bates and adaptive steepest descent projection onto convex sets algorithms, while the 4D reconstructions initialized from a prior 3D reconstruction led to better overall image quality. The most suitable adaptation of OSC-TV to 4D CBCT was found to be a combination of a prior FDK reconstruction and a 4D OSC-TV reconstruction with a reconstruction time of 4.5 minutes. This relatively short reconstruction time could facilitate a clinical use.

  14. Accurate 3D reconstruction by a new PDS-OSEM algorithm for HRRT

    Science.gov (United States)

    Chen, Tai-Been; Horng-Shing Lu, Henry; Kim, Hang-Keun; Son, Young-Don; Cho, Zang-Hee

    2014-03-01

    State-of-the-art high resolution research tomography (HRRT) provides high resolution PET images with full 3D human brain scanning. However, the short time frames used in dynamic studies cause many problems related to the low counts in the acquired data. The PDS-OSEM algorithm was proposed to reconstruct HRRT images with a high signal-to-noise ratio, providing accurate information for dynamic data. The new algorithm was evaluated on simulated images, empirical phantoms, and real human brain data. Meanwhile, the time activity curve was adopted to validate the reconstruction performance on dynamic data between the PDS-OSEM and OP-OSEM algorithms. According to the simulated and empirical studies, the PDS-OSEM algorithm reconstructs images with higher quality, higher accuracy, less noise, and a smaller average sum of squared errors than OP-OSEM. The presented algorithm is useful for providing quality images under the condition of low count rates in dynamic studies with a short scan time.

  15. Algorithm of hadron energy reconstruction for combined calorimeters in the DELPHI detector

    International Nuclear Information System (INIS)

    Gotra, Yu.N.; Tsyganov, E.N.; Zimin, N.I.; Zinchenko, A.I.

    1989-01-01

    An algorithm for reconstructing hadron energy from the responses of the electromagnetic and hadron calorimeters is described. The investigations were carried out using the full-scale prototype of the modules of the hadron calorimeter's cylindrical part. The proposed algorithm improves the energy resolution by 5-7% while preserving the linearity of the reconstructed hadron energy. 5 refs.; 4 figs.; 1 tab

  16. Comparison study of reconstruction algorithms for prototype digital breast tomosynthesis using various breast phantoms.

    Science.gov (United States)

    Kim, Ye-seul; Park, Hye-suk; Lee, Haeng-Hwa; Choi, Young-Wook; Choi, Jae-Gu; Kim, Hak Hee; Kim, Hee-Joung

    2016-02-01

    Digital breast tomosynthesis (DBT) is a recently developed system for three-dimensional imaging that offers the potential to reduce the false positives of mammography by preventing tissue overlap. Many qualitative evaluations of digital breast tomosynthesis were previously performed by using a phantom with an unrealistic model and with heterogeneous background and noise, which is not representative of real breasts. The purpose of the present work was to compare reconstruction algorithms for DBT by using various breast phantoms; validation was also performed by using patient images. DBT was performed by using a prototype unit that was optimized for very low exposures and rapid readout. Three algorithms were compared: a back-projection (BP) algorithm, a filtered BP (FBP) algorithm, and an iterative expectation maximization (EM) algorithm. To compare the algorithms, three types of breast phantoms (homogeneous background phantom, heterogeneous background phantom, and anthropomorphic breast phantom) were evaluated, and clinical images were also reconstructed by using the different reconstruction algorithms. The in-plane image quality was evaluated based on the line profile and the contrast-to-noise ratio (CNR), and out-of-plane artifacts were evaluated by means of the artifact spread function (ASF). Parenchymal texture features of contrast and homogeneity were computed based on reconstructed images of an anthropomorphic breast phantom. The clinical images were studied to validate the effect of reconstruction algorithms. The results showed that the CNRs of masses reconstructed by using the EM algorithm were slightly higher than those obtained by using the BP algorithm, whereas the FBP algorithm yielded much lower CNR due to its high fluctuations of background noise. The FBP algorithm provides the best conspicuity for larger calcifications by enhancing their contrast and sharpness more than the other algorithms; however, in the case of small-size and low
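
    The in-plane CNR metric used here is standard; a minimal sketch of one common definition follows (the study's exact CNR and ASF estimators may differ):

      import numpy as np

      def cnr(image, signal_mask, background_mask):
          """Contrast-to-noise ratio: |mean(signal) - mean(background)| / std(background)."""
          s = image[signal_mask].mean()
          b = image[background_mask].mean()
          return abs(s - b) / image[background_mask].std()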

  17. Statistical image reconstruction for transmission tomography using relaxed ordered subset algorithms

    International Nuclear Information System (INIS)

    Kole, J S

    2005-01-01

    Statistical reconstruction methods offer possibilities for improving image quality as compared to analytical methods, but current reconstruction times prohibit routine clinical application in x-ray computed tomography (CT). To reduce reconstruction times, we have applied (under-)relaxation to ordered subset algorithms. This enables us to use subsets consisting of only a single projection angle, effectively increasing the number of image updates within an entire iteration. A second advantage of applying relaxation is that it can help improve convergence by removing the limit-cycle behaviour of ordered subset algorithms, which normally do not converge to an optimal solution but rather to a suboptimal limit cycle consisting of as many points as there are subsets. Relaxation suppresses the limit-cycle behaviour by decreasing the step size for approaching the solution. A simulation study for a 2D mathematical phantom and three different ordered subset algorithms shows that all three algorithms benefit from relaxation: an equal noise-to-resolution trade-off can be achieved using fewer iterations than with the conventional algorithms, while a lower minimal normalized mean square error (NMSE) clearly indicates better convergence. Two different schemes for setting the relaxation parameter are studied, and both schemes yield approximately the same minimal NMSE.
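
    The subset and relaxation structure described here can be sketched as follows. This stand-in uses a simple SART-type additive update purely to illustrate one-angle subsets and a decaying relaxation factor; the paper's statistical transmission algorithms and its two relaxation schedules differ.

      import numpy as np

      def relaxed_os(A, p, n_views, iters=5, lam0=1.0, decay=20.0):
          """Relaxed ordered-subsets sketch with one projection angle per subset.

          The relaxation lam shrinks with the update count k, damping the
          limit-cycle behaviour of ordered-subset iterations (schedule illustrative).
          """
          n_rays, n_vox = A.shape
          rays_per_view = n_rays // n_views
          x = np.zeros(n_vox)
          k = 0
          for _ in range(iters):
              for v in range(n_views):            # each subset = one projection angle
                  sl = slice(v * rays_per_view, (v + 1) * rays_per_view)
                  Av, pv = A[sl], p[sl]
                  lam = lam0 / (1.0 + k / decay)  # decreasing relaxation
                  col = np.maximum(Av.sum(axis=0), 1e-12)
                  row = np.maximum(Av.sum(axis=1), 1e-12)
                  x += lam * (Av.T @ ((pv - Av @ x) / row)) / col
                  k += 1
          return x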

  18. Image Reconstruction Algorithm For Electrical Capacitance Tomography (ECT)

    International Nuclear Information System (INIS)

    Arko

    2001-01-01

    Most image reconstruction algorithms for electrical capacitance tomography (ECT) use sensitivity maps as weighting factors. The computation is fast, involving a simple multiply-and-accumulate (MAC) operation, but the resulting image suffers from blurring due to the soft-field effect of the sensor. This paper presents a low-cost iterative method employing proportional thresholding, which improves image quality dramatically. The strategy for implementation, the computational cost, and the achievable speed are examined when using a personal computer (PC) and a digital signal processor (DSP). For the PC implementation, the Watcom C++ 10.6 and Visual C++ 5.0 compilers were used. The experimental results are compared to the images reconstructed by commercially available software. The new algorithm improves the image quality significantly at the cost of a few iterations. This technique can be readily exploited for online applications.
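
    A minimal sketch of an iterative refinement of the sensitivity-map back-projection is shown below. The abstract does not spell out the proportional-thresholding rule, so the clip-and-rescale step here is only an illustrative stand-in, and ect_iterative is our own name.

      import numpy as np

      def ect_iterative(S, c, iters=20, alpha=1.0):
          """Iterative ECT sketch: back-projection refined by residual corrections.

          S : (n_measurements, n_pixels) normalized sensitivity-map matrix
          c : normalized capacitance vector
          """
          g = S.T @ c                              # linear back-projection start image
          for _ in range(iters):
              g += alpha * (S.T @ (c - S @ g))     # Landweber-style residual correction
              g = np.clip(g, 0.0, 1.0)             # threshold to the permittivity range
              if g.max() > 0:
                  g /= g.max()                     # rescale (stand-in for prop. threshold)
          return g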

  19. An ART iterative reconstruction algorithm for computed tomography of diffraction enhanced imaging

    International Nuclear Information System (INIS)

    Wang Zhentian; Zhang Li; Huang Zhifeng; Kang Kejun; Chen Zhiqiang; Fang Qiaoguang; Zhu Peiping

    2009-01-01

    X-ray diffraction enhanced imaging (DEI) has extremely high sensitivity for weakly absorbing low-Z samples in medical and biological fields. In this paper, we propose an algebraic reconstruction technique (ART) iterative algorithm for computed tomography of diffraction enhanced imaging (DEI-CT). An ordered subsets (OS) technique is used to accelerate the ART reconstruction. Few-view reconstruction is also studied, and a partial differential equation (PDE) type filter, which has edge-preserving and denoising abilities, is used to improve the image quality and eliminate the artifacts. The proposed algorithm is validated with both numerical simulations and an experiment at the Beijing Synchrotron Radiation Facility (BSRF). (authors)

  20. Linear array implementation of the EM algorithm for PET image reconstruction

    International Nuclear Information System (INIS)

    Rajan, K.; Patnaik, L.M.; Ramakrishna, J.

    1995-01-01

    PET image reconstruction based on the EM algorithm has several attractive advantages over the conventional convolution back projection algorithms. However, PET image reconstruction based on the EM algorithm is computationally burdensome for today's single processor systems. In addition, a large memory is required for the storage of the image, the projection data, and the probability matrix. Since the computations are easily divided into tasks executable in parallel, multiprocessor configurations are the ideal choice for fast execution of the EM algorithms. In this study, the authors attempt to overcome these two problems by parallelizing the EM algorithm on a multiprocessor system. The parallel EM algorithm has been implemented on a linear array topology using commercially available fast floating point digital signal processor (DSP) chips as the processing elements (PEs). The performance of the EM algorithm on a 386/387 machine, an IBM 6000 RISC workstation, and the linear array system is discussed and compared. The results show that the computational speed of a linear array using 8 DSP chips as PEs executing the EM image reconstruction algorithm is about 15.5 times better than that of the IBM 6000 RISC workstation. The novelty of the scheme is its simplicity. The linear array topology is expandable to a larger number of PEs. The architecture is not dependent on the DSP chip chosen, and the substitution of the latest DSP chip is straightforward and could yield better speed performance.

  1. Parameter selection in limited data cone-beam CT reconstruction using edge-preserving total variation algorithms

    Science.gov (United States)

    Lohvithee, Manasavee; Biguri, Ander; Soleimani, Manuchehr

    2017-12-01

    There are a number of powerful total variation (TV) regularization methods that have great promise in limited data cone-beam CT reconstruction with an enhancement of image quality. These promising TV methods require careful selection of the image reconstruction parameters, for which there are no well-established criteria. This paper presents a comprehensive evaluation of parameter selection in a number of major TV-based reconstruction algorithms. An appropriate way of selecting the values for each individual parameter has been suggested. Finally, a new adaptive-weighted projection-controlled steepest descent (AwPCSD) algorithm is presented, which implements the edge-preserving function for CBCT reconstruction with limited data. The proposed algorithm shows significant robustness compared to three other existing algorithms: ASD-POCS, AwASD-POCS and PCSD. The proposed AwPCSD algorithm is able to preserve the edges of the reconstructed images better with fewer sensitive parameters to tune.

  2. A multiresolution approach to iterative reconstruction algorithms in X-ray computed tomography.

    Science.gov (United States)

    De Witte, Yoni; Vlassenbroeck, Jelle; Van Hoorebeke, Luc

    2010-09-01

    In computed tomography, the application of iterative reconstruction methods in practical situations is impeded by their high computational demands. Especially in high resolution X-ray computed tomography, where reconstruction volumes contain a high number of volume elements (several gigavoxels), this computational burden prevents their actual breakthrough. Besides the large amount of calculations, iterative algorithms require the entire volume to be kept in memory during reconstruction, which quickly becomes cumbersome for large data sets. To overcome this obstacle, we present a novel multiresolution reconstruction, which greatly reduces the required amount of memory without significantly affecting the reconstructed image quality. It is shown that, combined with an efficient implementation on a graphics processing unit, the multiresolution approach enables the application of iterative algorithms in the reconstruction of large volumes at an acceptable speed using only limited resources.

  3. Evaluation of digital breast tomosynthesis reconstruction algorithms using synchrotron radiation in standard geometry

    International Nuclear Information System (INIS)

    Bliznakova, K.; Kolitsi, Z.; Speller, R. D.; Horrocks, J. A.; Tromba, G.; Pallikarakis, N.

    2010-01-01

    Purpose: In this article, the image quality of volumes reconstructed by four algorithms for digital tomosynthesis, applied in the case of the breast, is investigated using synchrotron radiation. Methods: An angular data set of 21 images of a complex phantom with a heterogeneous tissue-mimicking background was obtained using the SYRMEP beamline at the ELETTRA Synchrotron Light Laboratory, Trieste, Italy. The irradiated part was reconstructed using the multiple projection algorithm (MPA), filtered backprojection with a ramp filter followed by a Hamming window (FBP-RH), and filtered backprojection with a ramp filter alone (FBP-R). Additionally, an algorithm for reducing the noise in reconstructed planes, based on subtracting a noise mask from the planes of the volume originally reconstructed using MPA (MPA-NM), has been further developed. The reconstruction techniques were evaluated by calculating and comparing the contrast-to-noise ratio (CNR) and the artifact spread function. Results: It was found that MPA-NM resulted in higher CNR, comparable with the CNR of FBP-RH for high contrast details. Low contrast objects are well visualized and characterized by high CNR using the simple MPA and the MPA-NM. In addition, the image quality of the reconstructed features, in terms of CNR and visual appearance, was evaluated as a function of the initial number of projection images and the reconstruction arc. Slices reconstructed from more input projection images exhibit fewer reconstruction artifacts and higher detail CNR, while those reconstructed from projection images acquired over a reduced angular range show pronounced streak artifacts. Conclusions: Of the reconstruction algorithms implemented, MPA-NM and MPA are a good choice for detecting low contrast objects, while FBP-RH, FBP-R, and MPA-NM provide high CNR and well outlined edges in the case of microcalcifications.

  4. Honey bee-inspired algorithms for SNP haplotype reconstruction problem

    Science.gov (United States)

    PourkamaliAnaraki, Maryam; Sadeghi, Mehdi

    2016-03-01

    Reconstructing haplotypes from SNP fragments is an important problem in computational biology. There has been a lot of interest in this field because haplotypes have been shown to contain promising data for disease-association research. It has been proved that haplotype reconstruction in the Minimum Error Correction (MEC) model is an NP-hard problem. Therefore, several methods, such as clustering techniques, evolutionary algorithms, neural networks and swarm intelligence approaches, have been proposed to solve this problem in reasonable time. In this paper, we focus on various evolutionary clustering techniques and try to identify an efficient technique for solving the haplotype reconstruction problem. Our experiments indicate that clustering methods modelled on the behaviour of honey bee colonies in nature, specifically the bees algorithm and the artificial bee colony method, can be expected to yield the most efficient solutions. An application program implementing the methods is available at http://www.bioinf.cs.ipm.ir/software/haprs/.
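
    To make the MEC objective concrete, a minimal sketch follows; the fragment encoding and the function name mec_score are our illustrative choices:

      def mec_score(fragments, h0, h1):
          """Minimum Error Correction objective for a bipartition of SNP fragments.

          fragments : list of dicts {snp_index: allele in {0, 1}} (gaps simply absent)
          h0, h1    : candidate haplotypes as lists of 0/1
          Returns the number of allele flips needed so that every fragment matches
          its closer haplotype.
          """
          def mismatches(frag, hap):
              return sum(1 for i, a in frag.items() if hap[i] != a)
          return sum(min(mismatches(f, h0), mismatches(f, h1)) for f in fragments)

      frags = [{0: 1, 1: 1}, {1: 0, 2: 0}, {0: 0, 1: 1}]
      print(mec_score(frags, h0=[1, 1, 1], h1=[0, 0, 0]))  # 1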

  5. TauFinder: A Reconstruction Algorithm for τ Leptons at Linear Colliders

    CERN Document Server

    Muennich, A

    2010-01-01

    An algorithm to find and reconstruct τ leptons was developed, which targets τs that produce highly energetic, low-multiplicity jets, as can be observed in multi-TeV e+e− collisions. However, it makes no assumption about the decay of the τ candidate, thus finding hadronic as well as leptonic decays. The algorithm delivers a reconstructed τ as seen by the detector. This note provides an overview of the algorithm and the cuts used, and gives some evaluation of the performance. A first implementation is available within the ILC software framework as a MARLIN processor. Appendix A is intended as a short user manual.

  6. Rapid maximum likelihood ancestral state reconstruction of continuous characters: A rerooting-free algorithm.

    Science.gov (United States)

    Goolsby, Eric W

    2017-04-01

    Ancestral state reconstruction is a method used to study the evolutionary trajectories of quantitative characters on phylogenies. Although efficient methods for univariate ancestral state reconstruction under a Brownian motion model have been described for at least 25 years, to date no generalization has been described to allow more complex evolutionary models, such as multivariate trait evolution, non-Brownian models, missing data, and within-species variation. Furthermore, even for simple univariate Brownian motion models, most phylogenetic comparative R packages compute ancestral states via inefficient tree rerooting and full tree traversals at each tree node, making ancestral state reconstruction extremely time-consuming for large phylogenies. Here, a computationally efficient method for fast maximum likelihood ancestral state reconstruction of continuous characters is described. The algorithm has linear complexity relative to the number of species and outperforms the fastest existing R implementations by several orders of magnitude. The described algorithm is capable of performing ancestral state reconstruction on a 1,000,000-species phylogeny in fewer than 2 s using a standard laptop, whereas the next fastest R implementation would take several days to complete. The method is generalizable to more complex evolutionary models, such as phylogenetic regression, within-species variation, non-Brownian evolutionary models, and multivariate trait evolution. Because this method enables fast repeated computations on phylogenies of virtually any size, implementation of the described algorithm can drastically alleviate the computational burden of many otherwise prohibitively time-consuming tasks requiring reconstruction of ancestral states, such as phylogenetic imputation of missing data, bootstrapping procedures, Expectation-Maximization algorithms, and Bayesian estimation. The described ancestral state reconstruction algorithm is implemented in the Rphylopars
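
    For the simple univariate Brownian motion case mentioned here, the root estimate can be obtained in one post-order pass with the classical pruning recursion, which illustrates why no rerooting is needed. The sketch below uses our own toy tree encoding, not the Rphylopars implementation, and computes only the root state; recovering all internal nodes takes an additional pre-order pass.

      def bm_root_state(node):
          """ML root state under Brownian motion via one post-order pass.

          node: ('leaf', value, branch_len) or ('node', left, right, branch_len).
          Returns (estimate, effective_branch_length) for the subtree: children are
          averaged with weights 1/v, and v1*v2/(v1+v2) is passed up the parent edge.
          """
          if node[0] == 'leaf':
              return node[1], node[2]
          x1, v1 = bm_root_state(node[1])
          x2, v2 = bm_root_state(node[2])
          est = (x1 / v1 + x2 / v2) / (1 / v1 + 1 / v2)
          veff = node[3] + v1 * v2 / (v1 + v2)
          return est, veff

      tree = ('node',
              ('leaf', 1.0, 1.0),
              ('node', ('leaf', 3.0, 0.5), ('leaf', 4.0, 0.5), 1.0),
              0.0)
      print(bm_root_state(tree)[0])   # about 2.11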

  7. A reconstruction algorithm for coherent scatter computed tomography based on filtered back-projection

    International Nuclear Information System (INIS)

    Stevendaal, U. van; Schlomka, J.-P.; Harding, A.; Grass, M.

    2003-01-01

    Coherent scatter computed tomography (CSCT) is a reconstructive x-ray imaging technique that yields the spatially resolved coherent-scatter form factor of the investigated object. Reconstruction from coherently scattered x-rays is commonly done using algebraic reconstruction techniques (ART). In this paper, we propose an alternative approach based on filtered back-projection. For the first time, a three-dimensional (3D) filtered back-projection technique using curved 3D back-projection lines is applied to two-dimensional coherent scatter projection data. The proposed algorithm is tested with simulated projection data as well as with projection data acquired with a demonstrator setup similar to a multi-line CT scanner geometry. While yielding comparable image quality as ART reconstruction, the modified 3D filtered back-projection algorithm is about two orders of magnitude faster. In contrast to iterative reconstruction schemes, it has the advantage that subfield-of-view reconstruction becomes feasible. This allows a selective reconstruction of the coherent-scatter form factor for a region of interest. The proposed modified 3D filtered back-projection algorithm is a powerful reconstruction technique to be implemented in a CSCT scanning system. This method gives coherent scatter CT the potential of becoming a competitive modality for medical imaging or nondestructive testing

  8. Microstructure Reconstruction of Sheet Molding Composite Using a Random Chips Packing Algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Huang, Tianyu; Xu, Hongyi; Chen, Wei

    2017-04-06

    Fiber-reinforced polymer composites are strong candidates for structural materials to replace steel and light alloys in lightweight vehicle design because of their low density and relatively high strength. In the integrated computational materials engineering (ICME) development of carbon fiber composites, microstructure reconstruction algorithms are needed to generate a material microstructure representative volume element (RVE) based on the material processing information. The microstructure RVE reconstruction enables material property prediction by finite element analysis (FEA). This paper presents an algorithm to reconstruct the microstructure of a chopped carbon fiber/epoxy laminate material system produced by compression molding, normally known as sheet molding compound (SMC). The algorithm takes results from the material's manufacturing process as inputs, such as the orientation tensor of the fibers, the chopped fiber sheet geometry, and the fiber volume fraction. The chopped fiber sheets are treated as deformable rectangular chips, and a random packing algorithm is developed to pack these chips into a square plate. The RVE is built in a layer-by-layer fashion until the desired number of laminae is reached; a fine-tuning process is then applied to finalize the reconstruction. Compared to previous methods, this new approach is able to model bent fibers by allowing a limited amount of overlap between rectangular chips. Furthermore, the method does not require SMC microstructure images as inputs, for which image-based characterization techniques are not yet mature enough. Case studies are performed and the results show that the statistics of the microstructures generated by the algorithm match well with the target input parameters from processing.

  9. Reconstruction of a digital core containing clay minerals based on a clustering algorithm.

    Science.gov (United States)

    He, Yanlong; Pu, Chunsheng; Jing, Cheng; Gu, Xiaoyu; Chen, Qingdong; Liu, Hongzhi; Khan, Nasir; Dong, Qiaoling

    2017-10-01

    It is difficult to obtain core samples and related information for the digital core reconstruction of mature sandstone reservoirs around the world, especially for unconsolidated sandstone reservoirs. Meanwhile, the reconstruction and division of clay minerals play a vital role in the reconstruction of digital cores, although two-dimensional data-based reconstruction methods are specifically applicable as microstructure simulation methods for sandstone reservoirs. However, the reconstruction of various clay minerals in digital cores remains a research challenge. In the present work, the content of clay minerals was considered on the basis of two-dimensional information about the reservoir. After application of the hybrid method, and compared with the model reconstructed by the process-based method, the output was a digital core containing clay clusters without labels for the clusters' number, size, and texture. The statistics and geometry of the reconstructed model were similar to those of the reference model. In addition, the Hoshen-Kopelman algorithm was used to label the various connected, unclassified clay clusters in the initial model, and the number and size of the clay clusters were recorded. At the same time, the K-means clustering algorithm was applied to divide the labeled, large connected clusters into smaller clusters on the basis of differences in the clusters' characteristics. According to the clay minerals' characteristics, such as types, textures, and distributions, the digital core containing clay minerals was reconstructed by means of the clustering algorithm and a judgment of the clay clusters' structure. The distributions and textures of the clay minerals in the digital core were reasonable. The clustering algorithm improved the digital core reconstruction and provides an alternative method for simulating different clay minerals in digital cores.
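
    The labeling step described here can be sketched as follows, with scipy's connected-component labeling standing in for the Hoshen-Kopelman algorithm and the k-means split of oversized clusters omitted; the function name and the max_size criterion are our illustrative assumptions.

      import numpy as np
      from scipy import ndimage

      def label_clay_clusters(clay_mask, max_size):
          """Label connected clay clusters and flag those needing further division.

          clay_mask : boolean array marking clay voxels in the digital core.
          Returns the label image, the cluster sizes, and the labels of clusters
          larger than max_size (candidates for a k-means split on coordinates).
          """
          labels, n = ndimage.label(clay_mask)
          sizes = ndimage.sum(clay_mask, labels, index=range(1, n + 1))
          too_big = [i + 1 for i, s in enumerate(sizes) if s > max_size]
          return labels, sizes, too_big

      clay = np.random.rand(64, 64) < 0.2          # toy binary clay mask
      labels, sizes, too_big = label_clay_clusters(clay, max_size=50)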

  11. A new simple iterative reconstruction algorithm for SPECT transmission measurement

    International Nuclear Information System (INIS)

    Hwang, D.S.; Zeng, G.L.

    2005-01-01

    This paper proposes a new iterative reconstruction algorithm for transmission tomography and compares it with several other methods. The new algorithm is simple and resembles the emission ML-EM algorithm in form. Due to its simplicity, it is easy to implement and fast to compute a new update at each iteration. The algorithm also always guarantees non-negative solutions. Evaluations are performed using simulation studies and real phantom data. Comparisons with other algorithms, such as convex, gradient, and logMLEM, show that the proposed algorithm is as good as the others and performs better in some cases.

  12. Direct cone-beam cardiac reconstruction algorithm with cardiac banding artifact correction

    International Nuclear Information System (INIS)

    Taguchi, Katsuyuki; Chiang, Beshan S.; Hein, Ilmar A.

    2006-01-01

    Multislice helical computed tomography (CT) is a promising noninvasive technique for coronary artery imaging. Various factors can cause inconsistencies in cardiac CT data, which can result in degraded image quality. These inconsistencies may be the result of the patient physiology (e.g., heart rate variations), the nature of the data (e.g., cone-angle), or the reconstruction algorithm itself. An algorithm which provides the best temporal resolution for each slice, for example, often provides suboptimal image quality for the entire volume since the cardiac temporal resolution (TRc) changes from slice to slice. Such variations in TRc can generate strong banding artifacts in multi-planar reconstruction images or three-dimensional images. Discontinuous heart walls and coronary arteries may compromise the accuracy of the diagnosis. A β-blocker is often used to reduce and stabilize patients' heart rate but cannot eliminate the variation. In order to obtain robust and optimal image quality, a software solution that increases the temporal resolution and decreases the effect of heart rate is highly desirable. This paper proposes an ECG-correlated direct cone-beam reconstruction algorithm (TCOT-EGR) with cardiac banding artifact correction (CBC) and disconnected projections redundancy compensation technique (DIRECT). First the theory and analytical model of the cardiac temporal resolution is outlined. Next, the performance of the proposed algorithms is evaluated by using computer simulations as well as patient data. It will be shown that the proposed algorithms enhance the robustness of the image quality against inconsistencies by guaranteeing smooth transition of heart cycles used in reconstruction

  13. Optimization of reconstruction algorithms using Monte Carlo simulation

    International Nuclear Information System (INIS)

    Hanson, K.M.

    1989-01-01

    A method for optimizing reconstruction algorithms is presented that is based on how well a specified task can be performed using the reconstructed images. Task performance is numerically assessed by a Monte Carlo simulation of the complete imaging process, including the generation of scenes appropriate to the desired application, subsequent data taking, reconstruction, and performance of the stated task based on the final image. The use of this method is demonstrated through the optimization of the Algebraic Reconstruction Technique (ART), which reconstructs images from their projections by an iterative procedure. The optimization is accomplished by varying the relaxation factor employed in the updating procedure. In some of the imaging situations studied, it is found that the optimization of constrained ART, in which a nonnegativity constraint is invoked, can vastly increase the detectability of objects. There is little improvement attained for unconstrained ART. The general method presented may be applied to the problem of designing neutron-diffraction spectrometers. 11 refs., 6 figs., 2 tabs

  15. An accelerated threshold-based back-projection algorithm for Compton camera image reconstruction

    International Nuclear Information System (INIS)

    Mundy, Daniel W.; Herman, Michael G.

    2011-01-01

    Purpose: Compton camera imaging (CCI) systems are currently under investigation for radiotherapy dose reconstruction and verification. The ability of such a system to provide real-time images during dose delivery will be limited by the computational speed of the image reconstruction algorithm. In this work, the authors present a fast and simple method by which to generate an initial back-projected image from acquired CCI data, suitable for use in a filtered back-projection algorithm or as a starting point for iterative reconstruction algorithms, and compare its performance to the current state of the art. Methods: Each detector event in a CCI system describes a conical surface that includes the true point of origin of the detected photon. Numerical image reconstruction algorithms require, as a first step, the back-projection of each of these conical surfaces into an image space. The algorithm presented here first generates a solution matrix for each slice of the image space by solving the intersection of the conical surface with the image plane. Each element of the solution matrix is proportional to the distance of the corresponding voxel from the true intersection curve. A threshold function was developed to extract those pixels sufficiently close to the true intersection to generate a binary intersection curve. This process is repeated for each image plane for each CCI detector event, resulting in a three-dimensional back-projection image. The performance of this algorithm was tested against a marching algorithm known for speed and accuracy. Results: The threshold-based algorithm was found to be approximately four times faster than the current state of the art with minimal deficit to image quality, arising from the fact that a generically applicable threshold function cannot provide perfect results in all situations. The algorithm fails to extract a complete intersection curve in image slices near the detector surface for detector event cones having axes nearly
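
    The slice-by-slice construction described here can be sketched as follows: for each pixel of an image plane, the deviation of its direction (as seen from the cone apex) from the cone's opening angle plays the role of a 'solution matrix' entry, and thresholding it yields the binary intersection curve. This is our simplified stand-in, with a fixed threshold and the hypothetical function name cone_slice_backproject, for the paper's distance-proportional threshold function.

      import numpy as np

      def cone_slice_backproject(apex, axis, cos_theta, z0, grid, thresh=0.02):
          """Binary intersection of a Compton cone with the image plane z = z0.

          apex, axis : cone apex and unit axis from one detector event
          cos_theta  : cosine of the Compton scattering (half-opening) angle
          grid       : 1D array of x (and y) pixel-center coordinates
          """
          xs, ys = np.meshgrid(grid, grid, indexing='ij')
          d = np.stack([xs - apex[0], ys - apex[1], np.full_like(xs, z0 - apex[2])])
          d /= np.linalg.norm(d, axis=0)                       # unit pixel directions
          score = np.abs(np.tensordot(axis, d, axes=1) - cos_theta)
          return score < thresh                                # thresholded curve

      curve = cone_slice_backproject(apex=np.array([0., 0., 10.]),
                                     axis=np.array([0., 0., -1.]),
                                     cos_theta=np.cos(np.radians(30)),
                                     z0=0.0, grid=np.linspace(-10, 10, 128))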

  16. High-speed computation of the EM algorithm for PET image reconstruction

    International Nuclear Information System (INIS)

    Rajan, K.; Patnaik, L.M.; Ramakrishna, J.

    1994-01-01

    PET image reconstruction based on the EM algorithm has several attractive advantages over the conventional convolution backprojection algorithms. However, two major drawbacks have impeded the routine use of the EM algorithm, namely, the long computational time due to slow convergence and the large memory required for the storage of the image, the projection data and the probability matrix. In this study, the authors attempt to solve these two problems by parallelizing the EM algorithm on a multiprocessor system. The authors have implemented an extended hypercube (EH) architecture for the high-speed computation of the EM algorithm, using commercially available fast floating point digital signal processor (DSP) chips as the processing elements (PEs). The authors discuss and compare the performance of the EM algorithm on a 386/387 machine, a CD 4360 mainframe, and the EH system. The results show that the computational speed of an EH system using DSP chips as PEs executing the EM image reconstruction algorithm is about 130 times better than that of the CD 4360 mainframe. The EH topology is expandable to a larger number of PEs.

  17. Angle Statistics Reconstruction: a robust reconstruction algorithm for Muon Scattering Tomography

    Science.gov (United States)

    Stapleton, M.; Burns, J.; Quillin, S.; Steer, C.

    2014-11-01

    Muon Scattering Tomography (MST) is a technique for using the scattering of cosmic ray muons to probe the contents of enclosed volumes. As a muon passes through material it undergoes multiple Coulomb scattering, where the amount of scattering is dependent on the density and atomic number of the material as well as the path length. Hence, MST has been proposed as a means of imaging dense materials, for instance to detect special nuclear material in cargo containers. Algorithms are required to generate an accurate reconstruction of the material density inside the volume from the muon scattering information and some have already been proposed, most notably the Point of Closest Approach (PoCA) and Maximum Likelihood/Expectation Maximisation (MLEM) algorithms. However, whilst PoCA-based algorithms are easy to implement, they perform rather poorly in practice. Conversely, MLEM is a complicated algorithm to implement and computationally intensive and there is currently no published, fast and easily-implementable algorithm that performs well in practice. In this paper, we first provide a detailed analysis of the source of inaccuracy in PoCA-based algorithms. We then motivate an alternative method, based on ideas first laid out by Morris et al, presenting and fully specifying an algorithm that performs well against simulations of realistic scenarios. We argue this new algorithm should be adopted by developers of Muon Scattering Tomography as an alternative to PoCA.
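
    As background for the PoCA baseline analyzed here, the point of closest approach of the incoming and outgoing tracks reduces to the classical closest approach of two skew lines; a minimal sketch, with poca being our own helper rather than code from the paper:

      import numpy as np

      def poca(p1, d1, p2, d2):
          """Point of Closest Approach between incoming and outgoing muon tracks.

          p1, d1 : a point on and the direction of the incoming ray
          p2, d2 : the same for the outgoing ray. Returns the midpoint of the
          shortest segment joining the two lines.
          """
          d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
          r = p1 - p2
          a, b, c = d1 @ d1, d1 @ d2, d2 @ d2      # a = c = 1 after normalization
          denom = a * c - b * b
          if abs(denom) < 1e-12:                   # parallel tracks: no unique PoCA
              return 0.5 * (p1 + p2)
          t1 = (b * (d2 @ r) - c * (d1 @ r)) / denom
          t2 = (a * (d2 @ r) - b * (d1 @ r)) / denom
          return 0.5 * ((p1 + t1 * d1) + (p2 + t2 * d2))

      scatter_point = poca(np.array([0., 0., 5.]), np.array([0., 0.1, -1.]),
                           np.array([0., 1., -5.]), np.array([0., 0.3, -1.]))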

  18. Convergence of Algorithms for Reconstructing Convex Bodies and Directional Measures

    DEFF Research Database (Denmark)

    Gardner, Richard; Kiderlen, Markus; Milanfar, Peyman

    2006-01-01

    We investigate algorithms for reconstructing a convex body K in Rn from noisy measurements of its support function or its brightness function in k directions u1, . . . , uk. The key idea of these algorithms is to construct a convex polytope Pk whose support function (or brightness function) best...

  19. A practical exact maximum compatibility algorithm for reconstruction of recent evolutionary history

    OpenAIRE

    Cherry, Joshua L.

    2017-01-01

    Background: Maximum compatibility is a method of phylogenetic reconstruction that is seldom applied to molecular sequences. It may be ideal for certain applications, such as reconstructing phylogenies of closely-related bacteria on the basis of whole-genome sequencing. Results: Here I present an algorithm that rapidly computes phylogenies according to a compatibility criterion. Although based on solutions to the maximum clique problem, this algorithm deals properly with ambiguities in the data....

  20. A BPF-FBP tandem algorithm for image reconstruction in reverse helical cone-beam CT

    International Nuclear Information System (INIS)

    Cho, Seungryong; Xia, Dan; Pellizzari, Charles A.; Pan Xiaochuan

    2010-01-01

    Purpose: Reverse helical cone-beam computed tomography (CBCT) is a scanning configuration for potential applications in image-guided radiation therapy, in which an accurate anatomic image of the patient is needed for image-guidance procedures. The authors previously developed an algorithm for image reconstruction from nontruncated data of an object that is completely within the reverse helix. The purpose of this work is to develop an image reconstruction approach for reverse helical CBCT of a long object that extends out of the reverse helix and therefore entails data truncation. Methods: The proposed approach comprises two reconstruction steps. In the first step, a chord-based backprojection-filtration (BPF) algorithm reconstructs a volumetric image of the object from the original cone-beam data. Because there exists a chordless region in the middle of the reverse helix, the image obtained in the first step contains an unreconstructed central-gap region. In the second step, the gap region is reconstructed by use of a Pack-Noo-formula-based filtered backprojection (FBP) algorithm from the modified cone-beam data obtained by subtracting from the original cone-beam data the reprojection of the image reconstructed in the first step. Results: The authors have performed numerical studies to validate the proposed approach for image reconstruction from reverse helical cone-beam data. The results confirm that the proposed approach can reconstruct accurate images of a long object without suffering from data-truncation artifacts or cone-angle artifacts. Conclusions: The authors developed and validated a BPF-FBP tandem algorithm to reconstruct images of a long object from reverse helical cone-beam data. The chord-based BPF algorithm was utilized to convert the long-object problem into a short-object problem. The proposed approach is applicable to other scanning configurations such as reduced circular sinusoidal trajectories.

  1. Novel image reconstruction algorithm for multi-phase flow tomography system using γ ray method

    International Nuclear Information System (INIS)

    Hao Kuihong; Wang Huaxiang; Gao Mei

    2007-01-01

    After analyzing why the conventional back-projection (BP) image reconstruction algorithm is prone to producing spurious lines, and considering the characteristics of multi-phase flow tomography, a novel image reconstruction algorithm is proposed that carries out an intersection calculation on the back-projection data. This algorithm can obtain a nearly ideal system point spread function and can better eliminate spurious lines. Simulation results show that the algorithm is effective for identifying multi-phase flow patterns. (authors)

  2. A novel standalone track reconstruction algorithm for the LHCb upgrade

    CERN Multimedia

    Quagliani, Renato

    2018-01-01

    During LHC Run III, starting in 2020, the instantaneous luminosity of LHCb will be increased up to 2×10³³ cm⁻² s⁻¹, five times larger than in Run II. The LHCb detector will therefore be upgraded in 2019. In fact, a full software event reconstruction will be performed by the trigger at the full bunch-crossing rate, in order to profit from the higher instantaneous luminosity provided by the accelerator. In addition, all the tracking devices will be replaced and, in particular, a scintillating fiber tracker (SciFi) will be installed after the magnet, allowing the detector to cope with the higher occupancy. The new running conditions, and the tighter timing constraints in the software trigger, represent a big challenge for track reconstruction. This talk presents the design and performance of a novel algorithm that has been developed to reconstruct track segments using solely hits from the SciFi. This algorithm is crucial for the reconstruction of tracks originating from long-lived particles such as KS and Λ. ...

  3. TV-constrained incremental algorithms for low-intensity CT image reconstruction

    DEFF Research Database (Denmark)

    Rose, Sean D.; Andersen, Martin S.; Sidky, Emil Y.

    2015-01-01

    constraint can be guided by an image reconstructed by filtered backprojection (FBP). We apply our algorithm to low-dose synchrotron X-ray CT data from the Advanced Photon Source (APS) at Argonne National Labs (ANL) to demonstrate its potential utility. We find that the algorithm provides a means of edge-preserving...

  4. A new algorithm for 3D reconstruction from support functions

    DEFF Research Database (Denmark)

    Gardner, Richard; Kiderlen, Markus

    2009-01-01

    We introduce a new algorithm for reconstructing an unknown shape from a finite number of noisy measurements of its support function. The algorithm, based on a least squares procedure, is very easy to program in standard software such as Matlab and allows, for the first time, good 3D reconstructio...

  5. A First-Order Primal-Dual Reconstruction Algorithm for Few-View SPECT

    DEFF Research Database (Denmark)

    Wolf, Paul; Jørgensen, Jakob Heide; Gilat-Schmidt, Taly

    2012-01-01

    A sparsity-exploiting algorithm intended for few-view Single Photon Emission Computed Tomography (SPECT) reconstruction is proposed and characterized. The algorithm models the object as piecewise constant subject to a blurring operation. Monte Carlo simulations were performed to provide more proj...

  6. A Homogeneous and Self-Dual Interior-Point Linear Programming Algorithm for Economic Model Predictive Control

    DEFF Research Database (Denmark)

    Sokoler, Leo Emil; Frison, Gianluca; Skajaa, Anders

    2015-01-01

    We develop an efficient homogeneous and self-dual interior-point method (IPM) for the linear programs arising in economic model predictive control of constrained linear systems with linear objective functions. The algorithm is based on a Riccati iteration procedure, which is adapted to the linear...... system of equations solved in homogeneous and self-dual IPMs. Fast convergence is further achieved using a warm-start strategy. We implement the algorithm in MATLAB and C. Its performance is tested using a conceptual power management case study. Closed loop simulations show that 1) the proposed algorithm...

  7. Level-set reconstruction algorithm for ultrafast limited-angle X-ray computed tomography of two-phase flows.

    Science.gov (United States)

    Bieberle, M; Hampel, U

    2015-06-13

    Tomographic image reconstruction is based on recovering an object distribution from its projections, which have been acquired from all angular views around the object. If the angular range is limited to less than 180° of parallel projections, typical reconstruction artefacts arise when using standard algorithms. To compensate for this, specialized algorithms using a priori information about the object need to be applied. The application behind this work is ultrafast limited-angle X-ray computed tomography of two-phase flows. Here, only a binary distribution of the two phases needs to be reconstructed, which reduces the complexity of the inverse problem. To solve it, a new reconstruction algorithm (LSR) based on the level-set method is proposed. It includes one force function term accounting for matching the projection data and one incorporating a curvature-dependent smoothing of the phase boundary. The algorithm has been validated using simulated as well as measured projections of known structures, and its performance has been compared to the algebraic reconstruction technique and a binary derivative of it. The validation as well as the application of the level-set reconstruction on a dynamic two-phase flow demonstrated its applicability and its advantages over other reconstruction algorithms. © 2015 The Author(s) Published by the Royal Society. All rights reserved.
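
    As a rough illustration of the idea (not the authors' LSR implementation), the sketch below evolves a level-set function with a data force derived from the projection mismatch and a curvature-dependent smoothing term. The dense system matrix A, the image shape and all parameter values are invented for the example, and the force is applied everywhere rather than only near the interface, which is a crude simplification.

        import numpy as np

        def curvature(phi, eps=1e-8):
            # kappa = div( grad(phi) / |grad(phi)| ), via central differences
            gy, gx = np.gradient(phi)
            norm = np.sqrt(gx ** 2 + gy ** 2) + eps
            nyy, _ = np.gradient(gy / norm)
            _, nxx = np.gradient(gx / norm)
            return nxx + nyy

        def level_set_reconstruct(A, b, shape, n_iter=200, dt=0.1, mu=0.5):
            # Toy level-set reconstruction of a binary image from projections b = A @ x.
            phi = np.random.randn(*shape) * 0.1          # random initial interface
            for _ in range(n_iter):
                x = (phi > 0).astype(float).ravel()      # current binary phase map
                residual = A @ x - b                     # projection mismatch
                force = (A.T @ residual).reshape(shape)  # data force (backprojection)
                phi -= dt * (force - mu * curvature(phi))
            return (phi > 0).astype(float)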

  8. Accelerated fast iterative shrinkage thresholding algorithms for sparsity-regularized cone-beam CT image reconstruction

    International Nuclear Information System (INIS)

    Xu, Qiaofeng; Sawatzky, Alex; Anastasio, Mark A.; Yang, Deshan; Tan, Jun

    2016-01-01

    Purpose: The development of iterative image reconstruction algorithms for cone-beam computed tomography (CBCT) remains an active and important research area. Even with hardware acceleration, the overwhelming majority of the available 3D iterative algorithms that implement nonsmooth regularizers remain computationally burdensome and have not been translated for routine use in time-sensitive applications such as image-guided radiation therapy (IGRT). In this work, two variants of the fast iterative shrinkage thresholding algorithm (FISTA) are proposed and investigated for accelerated iterative image reconstruction in CBCT. Methods: Algorithm acceleration was achieved by replacing the original gradient-descent step in the FISTAs by a subproblem that is solved by use of the ordered subset simultaneous algebraic reconstruction technique (OS-SART). Due to the preconditioning matrix adopted in the OS-SART method, two new weighted proximal problems were introduced and corresponding fast gradient projection-type algorithms were developed for solving them. We also provided efficient numerical implementations of the proposed algorithms that exploit the massive data parallelism of multiple graphics processing units. Results: The improved rates of convergence of the proposed algorithms were quantified in computer-simulation studies and by use of clinical projection data corresponding to an IGRT study. The accelerated FISTAs were shown to possess dramatically improved convergence properties as compared to the standard FISTAs. For example, the number of iterations to achieve a specified reconstruction error could be reduced by an order of magnitude. Volumetric images reconstructed from clinical data were produced in under 4 min. Conclusions: The FISTA achieves a quadratic convergence rate and can therefore potentially reduce the number of iterations required to produce an image of a specified image quality as compared to first-order methods. We have proposed and investigated

  9. Accelerated fast iterative shrinkage thresholding algorithms for sparsity-regularized cone-beam CT image reconstruction

    Science.gov (United States)

    Xu, Qiaofeng; Yang, Deshan; Tan, Jun; Sawatzky, Alex; Anastasio, Mark A.

    2016-01-01

    Purpose: The development of iterative image reconstruction algorithms for cone-beam computed tomography (CBCT) remains an active and important research area. Even with hardware acceleration, the overwhelming majority of the available 3D iterative algorithms that implement nonsmooth regularizers remain computationally burdensome and have not been translated for routine use in time-sensitive applications such as image-guided radiation therapy (IGRT). In this work, two variants of the fast iterative shrinkage thresholding algorithm (FISTA) are proposed and investigated for accelerated iterative image reconstruction in CBCT. Methods: Algorithm acceleration was achieved by replacing the original gradient-descent step in the FISTAs by a subproblem that is solved by use of the ordered subset simultaneous algebraic reconstruction technique (OS-SART). Due to the preconditioning matrix adopted in the OS-SART method, two new weighted proximal problems were introduced and corresponding fast gradient projection-type algorithms were developed for solving them. We also provided efficient numerical implementations of the proposed algorithms that exploit the massive data parallelism of multiple graphics processing units. Results: The improved rates of convergence of the proposed algorithms were quantified in computer-simulation studies and by use of clinical projection data corresponding to an IGRT study. The accelerated FISTAs were shown to possess dramatically improved convergence properties as compared to the standard FISTAs. For example, the number of iterations to achieve a specified reconstruction error could be reduced by an order of magnitude. Volumetric images reconstructed from clinical data were produced in under 4 min. Conclusions: The FISTA achieves a quadratic convergence rate and can therefore potentially reduce the number of iterations required to produce an image of a specified image quality as compared to first-order methods. We have proposed and investigated
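
    The momentum mechanism behind FISTA is compact enough to sketch. The following minimal example solves an L1-regularized least-squares problem with plain FISTA; it is meant only to show the shrinkage and momentum steps, not the papers' OS-SART-accelerated, TV-regularized variants.

        import numpy as np

        def fista_l1(A, b, lam, n_iter=100):
            # Minimal FISTA for min_x 0.5*||A x - b||^2 + lam*||x||_1.
            L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
            x = np.zeros(A.shape[1])
            y, t = x.copy(), 1.0
            for _ in range(n_iter):
                grad = A.T @ (A @ y - b)
                z = y - grad / L
                x_new = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
                t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
                y = x_new + ((t - 1.0) / t_new) * (x_new - x)              # momentum step
                x, t = x_new, t_new
            return x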

  10. ADART: an adaptive algebraic reconstruction algorithm for discrete tomography.

    Science.gov (United States)

    Maestre-Deusto, F Javier; Scavello, Giovanni; Pizarro, Joaquín; Galindo, Pedro L

    2011-08-01

    In this paper we suggest an algorithm based on the Discrete Algebraic Reconstruction Technique (DART) which is capable of computing high quality reconstructions from substantially fewer projections than required for conventional continuous tomography. Adaptive DART (ADART) goes a step further than DART on the reduction of the number of unknowns of the associated linear system, achieving a significant reduction in the pixel error rate of reconstructed objects. The proposed methodology automatically adapts the border definition criterion at each iteration, resulting in a reduction of the number of pixels belonging to the border, and consequently of the number of unknowns in the general algebraic reconstruction linear system to be solved, this reduction being especially important at the final stage of the iterative process. Experimental results show that reconstruction errors are considerably reduced using ADART when compared to original DART, both in clean and noisy environments.
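
    A toy sketch of the basic DART loop may help fix ideas: segment the current estimate, keep non-boundary pixels at their segmented values, and re-run ART updates on the boundary pixels only. The dense matrix A, the two grey levels and the fixed threshold are invented for the example; ADART would additionally adapt the border definition criterion at each iteration.

        import numpy as np

        def art_sweep(x, A, b, mask, relax=0.2):
            # relaxed Kaczmarz/ART sweep; only pixels with mask == 1 are updated
            for i in range(A.shape[0]):
                ai = A[i]
                d = ai @ ai
                if d > 0:
                    x += relax * (b[i] - ai @ x) / d * (ai * mask)
            return x

        def dart(A, b, shape, levels=(0.0, 1.0), n_iter=10, thresh=0.5):
            # Basic DART loop for a binary image (toy dense system matrix A).
            x = art_sweep(np.zeros(A.shape[1]), A, b, np.ones(A.shape[1]))
            for _ in range(n_iter):
                seg = np.where(x > thresh, levels[1], levels[0]).reshape(shape)
                pad = np.pad(seg, 1, mode='edge')   # boundary: any 4-neighbour differs
                boundary = ((pad[:-2, 1:-1] != seg) | (pad[2:, 1:-1] != seg) |
                            (pad[1:-1, :-2] != seg) | (pad[1:-1, 2:] != seg))
                x = art_sweep(seg.ravel().copy(), A, b,
                              boundary.ravel().astype(float))
            return x.reshape(shape)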

  11. New reconstruction algorithm in helical-volume CT

    International Nuclear Information System (INIS)

    Toki, Y.; Rifu, T.; Aradate, H.; Hirao, Y.; Ohyama, N.

    1990-01-01

    This paper reports on helical scanning, an application of continuous-scanning CT that acquires volume data in a short time for three-dimensional studies. In a helical scan, the patient couch moves continuously during the rotation of the scanner, and the acquired data are then processed to synthesize a projection data set of a vertical section by interpolation. However, the synthesized section is not thin enough, and the image may have artifacts caused by couch movement. A new reconstruction algorithm that helps resolve such problems has been developed and compared with the ordinary algorithm. The authors constructed a helical scan system based on the TCT-900S, which can perform 1-second rotations continuously for 30 seconds. They measured section thickness using both algorithms on an AAPM phantom and also compared the degree of artifacts on clinical data

  12. High resolution reconstruction of PET images using the iterative OSEM algorithm

    International Nuclear Information System (INIS)

    Doll, J.; Bublitz, O.; Werling, A.; Haberkorn, U.; Semmler, W.; Adam, L.E.; Pennsylvania Univ., Philadelphia, PA; Brix, G.

    2004-01-01

    Aim: Improvement of the spatial resolution in positron emission tomography (PET) by incorporation of the image-forming characteristics of the scanner into the process of iterative image reconstruction. Methods: All measurements were performed at the whole-body PET system ECAT EXACT HR + in 3D mode. The acquired 3D sinograms were sorted into 2D sinograms by means of the Fourier rebinning (FORE) algorithm, which allows the usage of 2D algorithms for image reconstruction. The scanner characteristics were described by a spatially variant line-spread function (LSF), which was determined from activated copper-64 line sources. This information was used to model the physical degradation processes in PET measurements during the course of 2D image reconstruction with the iterative OSEM algorithm. To assess the performance of the high-resolution OSEM algorithm, phantom measurements performed at a cylinder phantom, the hot-spot Jaszczak phantom, and the 3D Hoffman brain phantom as well as different patient examinations were analyzed. Results: Scanner characteristics could be described by a Gaussian-shaped LSF with a full-width at half-maximum increasing from 4.8 mm at the center to 5.5 mm at a radial distance of 10.5 cm. Incorporation of the LSF into the iteration formula resulted in a markedly improved resolution of 3.0 and 3.5 mm, respectively. The evaluation of phantom and patient studies showed that the high-resolution OSEM algorithm not only led to a better contrast resolution in the reconstructed activity distributions but also to an improved accuracy in the quantification of activity concentrations in small structures, without leading to an amplification of image noise or the occurrence of image artifacts. Conclusion: The spatial and contrast resolution of PET scans can be markedly improved by the presented image restoration algorithm, which is of special interest for the examination of both patients with brain disorders and small animals. (orig.)
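
    The idea of folding the scanner's line-spread function into the iterative update can be sketched with a stationary Gaussian blur in projection space. The paper uses a measured, spatially variant LSF and OSEM with subsets; both are simplified away here, and the system matrix A is an invented toy.

        import numpy as np
        from scipy.ndimage import gaussian_filter1d

        def mlem_psf(A, y, n_bins, n_iter=20, fwhm_bins=2.0):
            # MLEM with a stationary Gaussian LSF modelled in projection space.
            # A: (n_rays, n_pixels) toy system matrix, y: measured counts,
            # n_bins: detector bins per view (n_rays must be a multiple of it).
            sigma = fwhm_bins / 2.355
            def blur(p):
                return gaussian_filter1d(p.reshape(-1, n_bins), sigma, axis=1).ravel()
            x = np.ones(A.shape[1])
            sens = A.T @ blur(np.ones(A.shape[0])) + 1e-12   # sensitivity image
            for _ in range(n_iter):
                fp = blur(A @ x) + 1e-12                     # blurred forward projection
                x *= (A.T @ blur(y / fp)) / sens             # multiplicative EM update
            return x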

  13. Reconstruction of sparse-view X-ray computed tomography using adaptive iterative algorithms.

    Science.gov (United States)

    Liu, Li; Lin, Weikai; Jin, Mingwu

    2015-01-01

    In this paper, we propose two reconstruction algorithms for sparse-view X-ray computed tomography (CT). Treating the reconstruction problems as data-fidelity-constrained total variation (TV) minimization, both algorithms adopt an alternating two-stage strategy: projection onto convex sets (POCS) for data fidelity and non-negativity constraints, and steepest descent for TV minimization. The novelty of this work is to determine the iterative parameters automatically from the data, thus avoiding tedious manual parameter tuning. In TV minimization, the step sizes of steepest descent are adaptively adjusted according to the difference from the POCS update in either the projection domain or the image domain, while the step size of the algebraic reconstruction technique (ART) in POCS is determined based on the data noise level. In addition, projection errors are compared with the error bound to decide whether to perform ART, so as to reduce computational costs. The performance of the proposed methods is studied and evaluated using both simulated and physical phantom data. Our methods with automatic parameter tuning achieve similar, if not better, reconstruction performance compared to a representative two-stage algorithm. Copyright © 2014 Elsevier Ltd. All rights reserved.
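
    A minimal sketch of the alternating two-stage scheme, assuming a dense toy system matrix A: one relaxed ART sweep plus non-negativity (the POCS stage), then a few TV steepest-descent steps whose size is tied to the magnitude of the POCS update. This mimics the adaptive idea without reproducing the paper's exact rules.

        import numpy as np

        def tv_gradient(img, eps=1e-8):
            # gradient of a smoothed isotropic TV (periodic boundaries for brevity)
            gx = np.roll(img, -1, axis=1) - img
            gy = np.roll(img, -1, axis=0) - img
            norm = np.sqrt(gx ** 2 + gy ** 2 + eps)
            px, py = gx / norm, gy / norm
            return (np.roll(px, 1, axis=1) - px) + (np.roll(py, 1, axis=0) - py)

        def pocs_tv(A, b, shape, n_iter=50, n_tv=10, relax=0.2):
            x = np.zeros(shape)
            for _ in range(n_iter):
                x_prev = x.copy()
                v = x.ravel().copy()                 # POCS stage: one relaxed ART sweep
                for i in range(A.shape[0]):
                    ai = A[i]
                    d = ai @ ai
                    if d > 0:
                        v += relax * (b[i] - ai @ v) / d * ai
                x = np.maximum(v.reshape(shape), 0.0)   # non-negativity projection
                dp = np.linalg.norm(x - x_prev)      # magnitude of the POCS update
                for _ in range(n_tv):                # TV stage with adapted step size
                    g = tv_gradient(x)
                    gn = np.linalg.norm(g)
                    if gn > 0:
                        x = x - 0.5 * dp * g / gn
            return x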

  14. A theoretically exact reconstruction algorithm for helical cone-beam differential phase-contrast computed tomography

    International Nuclear Information System (INIS)

    Li Jing; Sun Yi; Zhu Peiping

    2013-01-01

    Differential phase-contrast computed tomography (DPC-CT) reconstruction problems are usually solved by using parallel-, fan- or cone-beam algorithms. For rod-shaped objects, the x-ray beams cannot recover all the slices of the sample at the same time. Thus, if a rod-shaped sample is required to be reconstructed by the above algorithms, one should alternately perform translation and rotation on this sample, which leads to lower efficiency. The helical cone-beam CT may significantly improve scanning efficiency for rod-shaped objects over other algorithms. In this paper, we propose a theoretically exact filter-backprojection algorithm for helical cone-beam DPC-CT, which can be applied to reconstruct the refractive index decrement distribution of the samples directly from two-dimensional differential phase-contrast images. Numerical simulations are conducted to verify the proposed algorithm. Our work provides a potential solution for inspecting the rod-shaped samples using DPC-CT, which may be applicable with the evolution of DPC-CT equipments. (paper)

  15. Superiorized algorithm for reconstruction of CT images from sparse-view and limited-angle polyenergetic data

    Science.gov (United States)

    Humphries, T.; Winn, J.; Faridani, A.

    2017-08-01

    Recent work in CT image reconstruction has seen increasing interest in the use of total variation (TV) and related penalties to regularize problems involving reconstruction from undersampled or incomplete data. Superiorization is a recently proposed heuristic which provides an automatic procedure to ‘superiorize’ an iterative image reconstruction algorithm with respect to a chosen objective function, such as TV. Under certain conditions, the superiorized algorithm is guaranteed to find a solution that is as satisfactory as any found by the original algorithm with respect to satisfying the constraints of the problem; this solution is also expected to be superior with respect to the chosen objective. Most work on superiorization has used reconstruction algorithms which assume a linear measurement model, which in the case of CT corresponds to data generated from a monoenergetic x-ray beam. Many CT systems generate x-rays from a polyenergetic spectrum, however, in which the measured data represent an integral of object attenuation over all energies in the spectrum. This inconsistency with the linear model produces the well-known beam hardening artifacts, which impair analysis of CT images. In this work we superiorize an iterative algorithm for reconstruction from polyenergetic data, using both TV and an anisotropic TV (ATV) penalty. We apply the superiorized algorithm in numerical phantom experiments modeling both sparse-view and limited-angle scenarios. In our experiments, the superiorized algorithm successfully finds solutions which are as constraints-compatible as those found by the original algorithm, with significantly reduced TV and ATV values. The superiorized algorithm thus produces images with greatly reduced sparse-view and limited angle artifacts, which are also largely free of the beam hardening artifacts that would be present if a superiorized version of a monoenergetic algorithm were used.
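
    The superiorization pattern itself is simple to sketch: interleave the basic algorithm's iterates with objective-reducing perturbations of diminishing size. Below, ART is the basic algorithm and TV the objective; the matrix A and the geometric step schedule beta_k = beta0 * a**k are illustrative choices, not the paper's polyenergetic algorithm.

        import numpy as np

        def tv_gradient(img, eps=1e-8):
            # gradient of a smoothed isotropic TV (periodic boundaries for brevity)
            gx = np.roll(img, -1, axis=1) - img
            gy = np.roll(img, -1, axis=0) - img
            norm = np.sqrt(gx ** 2 + gy ** 2 + eps)
            px, py = gx / norm, gy / norm
            return (np.roll(px, 1, axis=1) - px) + (np.roll(py, 1, axis=0) - py)

        def superiorized_art(A, b, shape, n_iter=30, beta0=1.0, a=0.9, relax=0.2):
            x, beta = np.zeros(shape), beta0
            for _ in range(n_iter):
                g = tv_gradient(x)                   # perturbation step: reduce TV
                gn = np.linalg.norm(g)
                if gn > 0:
                    x = np.maximum(x - beta * g / gn, 0.0)
                beta *= a                            # shrinking perturbation sizes
                v = x.ravel()                        # basic step: one ART sweep
                for i in range(A.shape[0]):
                    ai = A[i]
                    d = ai @ ai
                    if d > 0:
                        v = v + relax * (b[i] - ai @ v) / d * ai
                x = v.reshape(shape)
            return x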

  16. Greedy algorithms for diffuse optical tomography reconstruction

    Science.gov (United States)

    Dileep, B. P. V.; Das, Tapan; Dutta, Pranab K.

    2018-03-01

    Diffuse optical tomography (DOT) is a noninvasive imaging modality that reconstructs the optical parameters of a highly scattering medium. However, the inverse problem of DOT is ill-posed and highly nonlinear due to the zig-zag propagation of photons diffusing through the cross section of tissue. Conventional DOT imaging methods iteratively compute the solution of a forward diffusion equation solver, which makes the problem computationally expensive, and these methods fail when the geometry is complex. Recently, the theory of compressive sensing (CS) has received considerable attention because of its efficient use in biomedical imaging applications. The objective of this paper is to solve a given DOT inverse problem within the compressive sensing framework; various greedy algorithms, namely orthogonal matching pursuit (OMP), compressive sampling matching pursuit (CoSaMP), stagewise orthogonal matching pursuit (StOMP), regularized orthogonal matching pursuit (ROMP) and simultaneous orthogonal matching pursuit (S-OMP), have been studied to reconstruct the change in the absorption parameter, i.e., Δα, from the boundary data. The greedy algorithms have also been validated experimentally on a paraffin wax rectangular phantom through a well-designed experimental set-up. We have also studied conventional DOT methods, such as the least squares method and truncated singular value decomposition (TSVD), for comparison. One of the main features of this work is the usage of fewer source-detector pairs, which can facilitate the use of DOT in routine screening applications. Performance metrics such as mean square error (MSE), normalized mean square error (NMSE), structural similarity index (SSIM), and peak signal to noise ratio (PSNR) have been used to evaluate the performance of the algorithms mentioned in this paper. Extensive simulation results confirm that CS-based DOT reconstruction outperforms the conventional DOT imaging methods in terms of
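
    Of the greedy algorithms listed, OMP is the simplest to write down. A minimal version follows, with an invented toy problem appended as a usage check.

        import numpy as np

        def omp(A, y, k):
            # Orthogonal matching pursuit: greedy recovery of a k-sparse x from y = A @ x.
            residual, support = y.copy(), []
            for _ in range(k):
                j = int(np.argmax(np.abs(A.T @ residual)))   # most correlated atom
                if j not in support:
                    support.append(j)
                coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
                residual = y - A[:, support] @ coef          # orthogonalized residual
            x_hat = np.zeros(A.shape[1])
            x_hat[support] = coef
            return x_hat

        # toy check: recover a 3-sparse signal from 40 random measurements
        rng = np.random.default_rng(0)
        A = rng.standard_normal((40, 100))
        A /= np.linalg.norm(A, axis=0)
        x = np.zeros(100)
        x[[5, 37, 80]] = [1.0, -2.0, 0.5]
        print(np.allclose(omp(A, A @ x, 3), x, atol=1e-6))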

  17. Fast algorithm of track reconstruction for the DELPHI TPC

    International Nuclear Information System (INIS)

    Maillard, J.

    1984-01-01

    We describe a simple geometrical method (polar inversion) to reconstruct tracks when the magnetic field is constant in magnitude and direction. This method uses geometrical properties of the trajectories. In the case of the DELPHI apparatus, the track reconstruction is done using TPC information. After explaining the algorithm, we give results on ''GEANT'' simulated events using the ''Lund'' generator. Today we get a computer time of the order of 1.2 milliseconds on a CDC 7600 and an efficiency of 98% [fr]

  18. Convex optimization problem prototyping for image reconstruction in computed tomography with the Chambolle–Pock algorithm

    DEFF Research Database (Denmark)

    Sidky, Emil Y.; Jørgensen, Jakob Heide; Pan, Xiaochuan

    2012-01-01

    The primal–dual optimization algorithm developed in Chambolle and Pock (CP) (2011 J. Math. Imag. Vis. 40 1–26) is applied to various convex optimization problems of interest in computed tomography (CT) image reconstruction. This algorithm allows for rapid prototyping of optimization problems...... for the purpose of designing iterative image reconstruction algorithms for CT. The primal–dual algorithm is briefly summarized in this paper, and its potential for prototyping is demonstrated by explicitly deriving CP algorithm instances for many optimization problems relevant to CT. An example application...

  19. Joint-2D-SL0 Algorithm for Joint Sparse Matrix Reconstruction

    Directory of Open Access Journals (Sweden)

    Dong Zhang

    2017-01-01

    Full Text Available Sparse matrix reconstruction has wide applications such as DOA estimation and STAP. However, its performance is usually restricted by the grid mismatch problem. In this paper, we revise the sparse matrix reconstruction model and propose a joint sparse matrix reconstruction model based on a first-order Taylor expansion, which can overcome the grid mismatch problem. We then put forward the Joint-2D-SL0 algorithm, which can solve the joint sparse matrix reconstruction problem efficiently. Compared with the Kronecker compressive sensing method, our proposed method has a higher computational efficiency and acceptable reconstruction accuracy. Finally, simulation results validate the superiority of the proposed method.
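
    For reference, the underlying SL0 idea, approximating the L0 norm by a sum of Gaussians whose width is annealed downward, can be sketched for an ordinary 1D sparse recovery problem. The paper's Joint-2D-SL0 extends this to jointly sparse matrices, which is not shown here.

        import numpy as np

        def sl0(A, y, sigma_min=1e-3, sigma_decrease=0.7, mu=2.0, L=5):
            # Smoothed-L0: anneal the Gaussian width sigma while staying on Ax = y.
            A_pinv = np.linalg.pinv(A)
            x = A_pinv @ y                          # minimum-norm feasible start
            sigma = 2.0 * np.max(np.abs(x))
            while sigma > sigma_min:
                for _ in range(L):
                    delta = x * np.exp(-x ** 2 / (2.0 * sigma ** 2))  # grad of smoothed L0
                    x = x - mu * delta
                    x = x - A_pinv @ (A @ x - y)    # project back onto Ax = y
                sigma *= sigma_decrease
            return x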

  20. An Effective, Robust And Parallel Implementation Of An Interior Point Algorithm For Limit State Optimization

    DEFF Research Database (Denmark)

    Dollerup, Niels; Jepsen, Michael S.; Damkilde, Lars

    2013-01-01

    The article describes a robust and effective implementation of the interior point optimization algorithm. The adopted method includes a precalculation step, which reduces the number of variables by fulfilling the equilibrium equations a priori. This work presents an improved implementation of the ...

  1. A Practical Algorithm for Reconstructing Level-1 Phylogenetic Networks

    NARCIS (Netherlands)

    K.T. Huber; L.J.J. van Iersel (Leo); S.M. Kelk (Steven); R. Suchecki

    2010-01-01

    Recently much attention has been devoted to the construction of phylogenetic networks which generalize phylogenetic trees in order to accommodate complex evolutionary processes. Here we present an efficient, practical algorithm for reconstructing level-1 phylogenetic networks - a type of

  2. A practical algorithm for reconstructing level-1 phylogenetic networks

    NARCIS (Netherlands)

    Huber, K.T.; Iersel, van L.J.J.; Kelk, S.M.; Suchecki, R.

    2011-01-01

    Recently, much attention has been devoted to the construction of phylogenetic networks which generalize phylogenetic trees in order to accommodate complex evolutionary processes. Here, we present an efficient, practical algorithm for reconstructing level-1 phylogenetic networks-a type of network

  3. A general algorithm for the reconstruction of jet events in e+e- annihilation

    International Nuclear Information System (INIS)

    Goddard, M.C.

    1981-01-01

    A general method is described to reconstruct a predetermined number of jets. It can reconstruct the jet axes as accurately as any existing algorithm and is up to one hundred times faster. Results are shown from the reconstruction of 2-jet, 3-jet and 4-jet Monte Carlo events. (author)

  4. Practical algorithms for simulation and reconstruction of digital in-line holograms.

    Science.gov (United States)

    Latychevskaia, Tatiana; Fink, Hans-Werner

    2015-03-20

    Here we present practical methods for the simulation and reconstruction of in-line digital holograms recorded with plane and spherical waves. The algorithms described here are applicable to holographic imaging of an object exhibiting absorption as well as phase-shifting properties. Optimal parameters, related to distances, sampling rate, and other factors for successful simulation and reconstruction of holograms are evaluated, and criteria for the achievable resolution are worked out. Moreover, we show that the numerical procedures for the reconstruction of holograms recorded with plane and spherical waves are identical under certain conditions. Experimental examples of holograms and their reconstructions are also discussed.
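
    For the plane-wave case, the standard angular-spectrum back-propagation step can be sketched in a few lines; the parameter names and the evanescent-wave cutoff are conventional choices, and details such as the optimal sampling criteria discussed in the paper are omitted.

        import numpy as np

        def angular_spectrum_reconstruct(hologram, wavelength, pixel_size, z):
            # Back-propagate a plane-wave in-line hologram by distance z
            # (all lengths in metres); evanescent components are suppressed.
            ny, nx = hologram.shape
            fx = np.fft.fftfreq(nx, d=pixel_size)
            fy = np.fft.fftfreq(ny, d=pixel_size)
            FX, FY = np.meshgrid(fx, fy)
            arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
            kz = 2j * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
            H = np.exp(-kz * z) * (arg > 0)      # back-propagation transfer function
            return np.fft.ifft2(np.fft.fft2(hologram) * H)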

  5. Exact fan-beam image reconstruction algorithm for truncated projection data acquired from an asymmetric half-size detector

    International Nuclear Information System (INIS)

    Leng Shuai; Zhuang Tingliang; Nett, Brian E; Chen Guanghong

    2005-01-01

    In this paper, we present a new algorithm designed for a specific data truncation problem in fan-beam CT. We consider a scanning configuration in which the fan-beam projection data are acquired from an asymmetrically positioned half-sized detector. Namely, the asymmetric detector only covers one half of the scanning field of view. Thus, the acquired fan-beam projection data are truncated at every view angle. If an explicit data rebinning process is not invoked, this data acquisition configuration will wreak havoc on many known fan-beam image reconstruction schemes including the standard filtered backprojection (FBP) algorithm and the super-short-scan FBP reconstruction algorithms. However, we demonstrate that a recently developed fan-beam image reconstruction algorithm which reconstructs an image via filtering a backprojection image of differentiated projection data (FBPD) survives the above fan-beam data truncation problem. Namely, we may exactly reconstruct the whole image object using the truncated data acquired in a full scan mode (2π angular range). We may also exactly reconstruct a small region of interest (ROI) using the truncated projection data acquired in a short-scan mode (less than 2π angular range). The most important characteristic of the proposed reconstruction scheme is that an explicit data rebinning process is not introduced. Numerical simulations were conducted to validate the new reconstruction algorithm

  6. A Fourier reconstruction algorithm with constant attenuation compensation using 180° acquisition data for SPECT

    International Nuclear Information System (INIS)

    Tang Qiulin; Zeng, Gengsheng L; Gullberg, Grant T

    2007-01-01

    In this paper, we develop an approximate analytical reconstruction algorithm that compensates for uniform attenuation in 2D parallel-beam SPECT with a 180° acquisition. This new algorithm is in the form of a direct Fourier reconstruction. The complex variable central slice theorem is used to derive this algorithm. The image is reconstructed with the following steps: first, the attenuated projection data acquired over 180° are extended to 360° and the value for the uniform attenuator is changed to a negative value. The Fourier transform (FT) of the image in polar coordinates is obtained from the Fourier transform of an analytic function interpolated from an extension of the projection data according to the complex central slice theorem. Finally, the image is obtained by performing a 2D inverse Fourier transform. Computer simulations and comparison studies with a 360° full-scan algorithm are provided
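
    The non-attenuated analogue of this direct Fourier approach is easy to sketch and shows where the central slice theorem enters; the paper's contribution, handling uniform attenuation via a complex-variable central slice theorem and data extension, is not reproduced here. Angles are assumed to be in radians and the gridding uses simple linear interpolation.

        import numpy as np
        from scipy.interpolate import griddata

        def direct_fourier_reconstruct(sinogram, angles):
            # Central slice theorem: the 1D FT of each parallel projection is a
            # radial line of the image's 2D FT; grid those lines, then invert.
            n_ang, n_det = sinogram.shape
            P = np.fft.fftshift(
                np.fft.fft(np.fft.ifftshift(sinogram, axes=1), axis=1), axes=1)
            freqs = np.fft.fftshift(np.fft.fftfreq(n_det))
            th = angles[:, None]
            pts_x = (freqs[None, :] * np.cos(th)).ravel()
            pts_y = (freqs[None, :] * np.sin(th)).ravel()
            gx, gy = np.meshgrid(freqs, freqs)
            Fr = griddata((pts_x, pts_y), P.real.ravel(), (gx, gy),
                          method='linear', fill_value=0.0)
            Fi = griddata((pts_x, pts_y), P.imag.ravel(), (gx, gy),
                          method='linear', fill_value=0.0)
            img = np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(Fr + 1j * Fi)))
            return np.real(img)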

  7. The SRT reconstruction algorithm for semiquantification in PET imaging

    Energy Technology Data Exchange (ETDEWEB)

    Kastis, George A., E-mail: gkastis@academyofathens.gr [Research Center of Mathematics, Academy of Athens, Athens 11527 (Greece); Gaitanis, Anastasios [Biomedical Research Foundation of the Academy of Athens (BRFAA), Athens 11527 (Greece); Samartzis, Alexandros P. [Nuclear Medicine Department, Evangelismos General Hospital, Athens 10676 (Greece); Fokas, Athanasios S. [Department of Applied Mathematics and Theoretical Physics, University of Cambridge, Cambridge CB30WA, United Kingdom and Research Center of Mathematics, Academy of Athens, Athens 11527 (Greece)

    2015-10-15

    Purpose: The spline reconstruction technique (SRT) is a new, fast algorithm based on a novel numerical implementation of an analytic representation of the inverse Radon transform. The mathematical details of this algorithm and comparisons with filtered backprojection were presented earlier in the literature. In this study, the authors present a comparison between SRT and the ordered-subsets expectation–maximization (OSEM) algorithm for determining contrast and semiquantitative indices of ¹⁸F-FDG uptake. Methods: The authors implemented SRT in the software for tomographic image reconstruction (STIR) open-source platform and evaluated this technique using simulated and real sinograms obtained from the GE Discovery ST positron emission tomography/computer tomography scanner. All simulations and reconstructions were performed in STIR. For OSEM, the authors used the clinical protocol of their scanner, namely, 21 subsets and two iterations. The authors also examined images at one, four, six, and ten iterations. For the simulation studies, the authors analyzed an image-quality phantom with cold and hot lesions. Two different versions of the phantom were employed at two different hot-sphere lesion-to-background ratios (LBRs), namely, 2:1 and 4:1. For each noiseless sinogram, 20 Poisson realizations were created at five different noise levels. In addition to making visual comparisons of the reconstructed images, the authors determined contrast and bias as a function of the background image roughness (IR). For the real-data studies, sinograms of an image-quality phantom simulating the human torso were employed. The authors determined contrast and LBR as a function of the background IR. Finally, the authors present plots of contrast as a function of IR after smoothing each reconstructed image with Gaussian filters of six different sizes. Statistical significance was determined by employing the Wilcoxon rank-sum test. Results: In both simulated and real studies, SRT

  8. The SRT reconstruction algorithm for semiquantification in PET imaging

    International Nuclear Information System (INIS)

    Kastis, George A.; Gaitanis, Anastasios; Samartzis, Alexandros P.; Fokas, Athanasios S.

    2015-01-01

    Purpose: The spline reconstruction technique (SRT) is a new, fast algorithm based on a novel numerical implementation of an analytic representation of the inverse Radon transform. The mathematical details of this algorithm and comparisons with filtered backprojection were presented earlier in the literature. In this study, the authors present a comparison between SRT and the ordered-subsets expectation–maximization (OSEM) algorithm for determining contrast and semiquantitative indices of ¹⁸F-FDG uptake. Methods: The authors implemented SRT in the software for tomographic image reconstruction (STIR) open-source platform and evaluated this technique using simulated and real sinograms obtained from the GE Discovery ST positron emission tomography/computer tomography scanner. All simulations and reconstructions were performed in STIR. For OSEM, the authors used the clinical protocol of their scanner, namely, 21 subsets and two iterations. The authors also examined images at one, four, six, and ten iterations. For the simulation studies, the authors analyzed an image-quality phantom with cold and hot lesions. Two different versions of the phantom were employed at two different hot-sphere lesion-to-background ratios (LBRs), namely, 2:1 and 4:1. For each noiseless sinogram, 20 Poisson realizations were created at five different noise levels. In addition to making visual comparisons of the reconstructed images, the authors determined contrast and bias as a function of the background image roughness (IR). For the real-data studies, sinograms of an image-quality phantom simulating the human torso were employed. The authors determined contrast and LBR as a function of the background IR. Finally, the authors present plots of contrast as a function of IR after smoothing each reconstructed image with Gaussian filters of six different sizes. Statistical significance was determined by employing the Wilcoxon rank-sum test. Results: In both simulated and real studies, SRT

  9. A Self-Reconstructing Algorithm for Single and Multiple-Sensor Fault Isolation Based on Auto-Associative Neural Networks

    Directory of Open Access Journals (Sweden)

    Hamidreza Mousavi

    2017-01-01

    Full Text Available Recently, different approaches have been developed in the field of sensor fault diagnostics based on the Auto-Associative Neural Network (AANN). In this paper we present a novel algorithm called the Self-reconstructing Auto-Associative Neural Network (S-AANN), which is able to detect and isolate a single faulty sensor via reconstruction. We have also extended the algorithm to be applicable under multiple-fault conditions. The algorithm uses a calibration model based on an AANN. The AANN can reconstruct a faulty sensor from the non-faulty sensors, owing to the correlation between the process variables, and the mean of the difference between the reconstructed and original data determines which sensors are faulty. The algorithms are tested on a dimerization process. The simulation results show that the S-AANN can isolate multiple faulty sensors with a low computational time, which makes the algorithm an appropriate candidate for online applications.
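
    The residual-based isolation logic can be sketched with a PCA reconstruction standing in for the trained AANN, since both exploit inter-sensor correlation to reconstruct each channel from the others; the component count and threshold rule are invented for the example.

        import numpy as np
        from sklearn.decomposition import PCA

        def isolate_faulty_sensors(healthy, test, n_components=3, n_sigma=4.0):
            # healthy: (n_samples, n_sensors) fault-free training data;
            # test:    (n_samples, n_sensors) data to screen for faults.
            model = PCA(n_components=n_components).fit(healthy)
            recon_h = model.inverse_transform(model.transform(healthy))
            thresh = n_sigma * np.std(healthy - recon_h, axis=0)   # per-sensor threshold
            recon_t = model.inverse_transform(model.transform(test))
            mean_resid = np.mean(np.abs(test - recon_t), axis=0)   # per-sensor residual
            return np.flatnonzero(mean_resid > thresh)             # suspect sensor indices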

  10. Blind spectrum reconstruction algorithm with L0-sparse representation

    International Nuclear Information System (INIS)

    Liu, Hai; Zhang, Zhaoli; Liu, Sanyan; Shu, Jiangbo; Liu, Tingting; Zhang, Tianxu

    2015-01-01

    Raman spectra often suffer from band overlap and Poisson noise. This paper presents a new blind Poissonian Raman spectrum reconstruction method, which incorporates the L0-sparse prior together with a total variation constraint into the maximum a posteriori framework. Furthermore, the greedy analysis pursuit algorithm is adopted to solve the L0-based minimization problem. Simulated and real spectrum experimental results show that the proposed method can effectively preserve spectral structure and suppress noise. The reconstructed Raman spectra are easily used for interpreting unknown chemical mixtures. (paper)

  11. Development of Image Reconstruction Algorithms in electrical Capacitance Tomography

    International Nuclear Information System (INIS)

    Fernandez Marron, J. L.; Alberdi Primicia, J.; Barcala Riveira, J. M.

    2007-01-01

    Electrical Capacitance Tomography (ECT) has not yet been developed well enough for use at the industrial level. This is due, first, to the difficulty of measuring very small capacitances (in the range of femtofarads) and, second, to the problem of reconstructing the images on-line. The latter problem is aggravated by the small number of electrodes (at most 16), which causes the usual reconstruction algorithms to make many errors. In this work a new, purely geometrical method that could be used for this purpose is described. (Author) 4 refs

  12. Level-set-based reconstruction algorithm for EIT lung images: first clinical results.

    Science.gov (United States)

    Rahmati, Peyman; Soleimani, Manuchehr; Pulletz, Sven; Frerichs, Inéz; Adler, Andy

    2012-05-01

    We show the first clinical results using the level-set-based reconstruction algorithm for electrical impedance tomography (EIT) data. The level-set-based reconstruction method (LSRM) allows the reconstruction of non-smooth interfaces between image regions, which are typically smoothed by traditional voxel-based reconstruction methods (VBRMs). We develop a time difference formulation of the LSRM for 2D images. The proposed reconstruction method is applied to reconstruct clinical EIT data of a slow flow inflation pressure-volume manoeuvre in lung-healthy and adult lung-injury patients. Images from the LSRM and the VBRM are compared. The results show comparable reconstructed images, but with an improved ability to reconstruct sharp conductivity changes in the distribution of lung ventilation using the LSRM.

  13. Level-set-based reconstruction algorithm for EIT lung images: first clinical results

    International Nuclear Information System (INIS)

    Rahmati, Peyman; Adler, Andy; Soleimani, Manuchehr; Pulletz, Sven; Frerichs, Inéz

    2012-01-01

    We show the first clinical results using the level-set-based reconstruction algorithm for electrical impedance tomography (EIT) data. The level-set-based reconstruction method (LSRM) allows the reconstruction of non-smooth interfaces between image regions, which are typically smoothed by traditional voxel-based reconstruction methods (VBRMs). We develop a time difference formulation of the LSRM for 2D images. The proposed reconstruction method is applied to reconstruct clinical EIT data of a slow flow inflation pressure–volume manoeuvre in lung-healthy and adult lung-injury patients. Images from the LSRM and the VBRM are compared. The results show comparable reconstructed images, but with an improved ability to reconstruct sharp conductivity changes in the distribution of lung ventilation using the LSRM. (paper)

  14. New reconstruction algorithm for digital breast tomosynthesis: better image quality for humans and computers.

    Science.gov (United States)

    Rodriguez-Ruiz, Alejandro; Teuwen, Jonas; Vreemann, Suzan; Bouwman, Ramona W; van Engen, Ruben E; Karssemeijer, Nico; Mann, Ritse M; Gubern-Merida, Albert; Sechopoulos, Ioannis

    2017-01-01

    Background The image quality of digital breast tomosynthesis (DBT) volumes depends greatly on the reconstruction algorithm. Purpose To compare two DBT reconstruction algorithms used by the Siemens Mammomat Inspiration system, filtered back projection (FBP) and FBP with iterative optimizations (EMPIRE), using qualitative analysis by human readers and detection performance of machine learning algorithms. Material and Methods Visual grading analysis was performed by four readers specialized in breast imaging who scored 100 cases reconstructed with both algorithms (70 lesions). Scoring (5-point scale: 1 = poor to 5 = excellent quality) was performed on presence of noise and artifacts, visualization of skin-line and Cooper's ligaments, contrast, and image quality, and, when present, lesion visibility. In parallel, a three-dimensional deep-learning convolutional neural network (3D-CNN) was trained (n = 259 patients, 51 positives with BI-RADS 3, 4, or 5 calcifications) and tested (n = 46 patients, nine positives), separately with FBP and EMPIRE volumes, to discriminate between samples with and without calcifications. The partial area under the receiver operating characteristic curve (pAUC) of each 3D-CNN was used for comparison. Results EMPIRE reconstructions showed better contrast (3.23 vs. 3.10, P = 0.010) and image quality (3.22 vs. 3.03). Conclusion The EMPIRE algorithm provides DBT volumes with better contrast and image quality, fewer artifacts, and improved visibility of calcifications for human observers, as well as improved detection performance with deep-learning algorithms.

  15. Analytical fan-beam and cone-beam reconstruction algorithms with uniform attenuation correction for SPECT

    International Nuclear Information System (INIS)

    Tang Qiulin; Zeng, Gengsheng L; Gullberg, Grant T

    2005-01-01

    In this paper, we developed an analytical fan-beam reconstruction algorithm that compensates for uniform attenuation in SPECT. The new fan-beam algorithm is in the form of backprojection first, then filtering, and is mathematically exact. The algorithm is based on three components. The first one is the established generalized central-slice theorem, which relates the 1D Fourier transform of a set of arbitrary data and the 2D Fourier transform of the backprojected image. The second one is the fact that the backprojection of the fan-beam measurements is identical to the backprojection of the parallel measurements of the same object with the same attenuator. The third one is the stable analytical reconstruction algorithm for uniformly attenuated Radon data, developed by Metz and Pan. The fan-beam algorithm is then extended into a cone-beam reconstruction algorithm, where the orbit of the focal point of the cone-beam imaging geometry is a circle. This orbit geometry does not satisfy Tuy's condition and the obtained cone-beam algorithm is an approximation. In the cone-beam algorithm, the cone-beam data are first backprojected into the 3D image volume; then a slice-by-slice filtering is performed. This slice-by-slice filtering procedure is identical to that of the fan-beam algorithm. Both the fan-beam and cone-beam algorithms are efficient, and computer simulations are presented. The new cone-beam algorithm is compared with Bronnikov's cone-beam algorithm, and it is shown to have better performance with noisy projections

  16. A fully three-dimensional reconstruction algorithm with the nonstationary filter for improved single-orbit cone beam SPECT

    International Nuclear Information System (INIS)

    Cao, Z.J.; Tsui, B.M.

    1993-01-01

    Conventional single-orbit cone beam tomography presents special problems, including incomplete sampling and an inadequate three-dimensional (3D) reconstruction algorithm. The commonly used Feldkamp reconstruction algorithm simply extends the two-dimensional (2D) fan beam algorithm to the 3D cone beam geometry. A truly 3D reconstruction formulation has been derived for single-orbit cone beam SPECT based on the 3D Fourier slice theorem. In this formulation, a nonstationary filter which depends on the distance from the central plane of the cone beam was derived. The filter is applied to the 2D projection data in directions along and normal to the axis of rotation. The 3D reconstruction algorithm with the nonstationary filter was evaluated using both computer simulations and experimental measurements. Significant improvement in image quality was demonstrated in terms of decreased artifacts and distortions in cone beam reconstructed images. However, compared with the Feldkamp algorithm, a five-fold increase in processing time is required. Further improvement in image quality requires complete sampling in frequency space

  17. Three-Dimensional Reconstruction of Sandpile Interiors

    Science.gov (United States)

    Seidler, G. T.

    2001-03-01

    The granular bed, or sandpile, has become one of the condensed matter physicist's favorite systems. In addition to conceptual appeal, the simplest sandpile of monodisperse hard spheres is a valuable model system for understanding powders, liquids, and metallic glasses. Any fundamental approach to the transport and mechanical properties of three-dimensional mesoscale disordered materials must follow from a thorough understanding of their structure. However, in the overwhelming majority of cases, structure measurements have been limited to the mean filling fraction and the structural autocorrelation function. This is particularly unfortunate in the ongoing sandpile renaissance, where some of the most interesting questions concern structure and the relationship between structure and dynamics. I will discuss the combination of synchrotron x-ray microtomography and computer vision algorithms to perform three-dimensional virtual reconstructions of real sandpiles. This technique is rapid and noninvasive, and is applicable to samples large enough to separate bulk and boundary properties. The resulting complete knowledge of structure can be used to calculate otherwise inaccessible correlation functions. I will present results for several measures of the bond-orientational order in three-dimensional sandpiles, including fabric tensors and nematic order parameters.

  18. Incorporation of local dependent reliability information into the Prior Image Constrained Compressed Sensing (PICCS) reconstruction algorithm

    International Nuclear Information System (INIS)

    Vaegler, Sven; Sauer, Otto; Stsepankou, Dzmitry; Hesser, Juergen

    2015-01-01

    The reduction of dose in cone beam computer tomography (CBCT) arises from the decrease of the tube current for each projection as well as from the reduction of the number of projections. In order to maintain good image quality, sophisticated image reconstruction techniques are required. The Prior Image Constrained Compressed Sensing (PICCS) algorithm incorporates prior images into the reconstruction and outperforms the widely used Feldkamp-Davis-Kress (FDK) algorithm when the number of projections is reduced. However, prior images that contain major variations are not appropriately considered so far in PICCS. We therefore propose the partial-PICCS (pPICCS) algorithm. This framework is a problem-specific extension of PICCS and additionally enables the incorporation of the reliability of the prior images. We assumed that the prior images are composed of areas with large and small deviations. Accordingly, a weighting matrix accounts for the assigned areas in the objective function. We applied our algorithm to the problem of image reconstruction from few views by simulations with a computer phantom as well as on clinical CBCT projections from a head-and-neck case. All prior images contained large local variations. The reconstructed images were compared to the reconstruction results by the FDK-algorithm, by Compressed Sensing (CS) and by PICCS. To show the gain in image quality we compared image details with the reference image and used quantitative metrics (root-mean-square error (RMSE), contrast-to-noise-ratio (CNR)). The pPICCS reconstruction framework yields images with substantially improved quality even when the number of projections was very small. The images contained less streaking, blurring and inaccurately reconstructed structures compared to the images reconstructed by FDK, CS and conventional PICCS. The increased image quality is also reflected in large RMSE differences. We proposed a modification of the original PICCS algorithm. The pPICCS algorithm

  19. 3D noise power spectrum applied on clinical MDCT scanners: effects of reconstruction algorithms and reconstruction filters

    Science.gov (United States)

    Miéville, Frédéric A.; Bolard, Gregory; Benkreira, Mohamed; Ayestaran, Paul; Gudinchet, François; Bochud, François; Verdun, Francis R.

    2011-03-01

    The noise power spectrum (NPS) is the reference metric for understanding the noise content in computed tomography (CT) images. To evaluate the noise properties of clinical multidetector CT (MDCT) scanners, local 2D and 3D NPSs were computed for different acquisition and reconstruction parameters. A 64-slice and a 128-slice MDCT scanner were employed. Measurements were performed on a water phantom in axial and helical acquisition modes. The CT dose index was identical for both installations. The influence of parameters such as the pitch, the reconstruction filter (soft, standard and bone) and the reconstruction algorithm (filtered back projection (FBP), adaptive statistical iterative reconstruction (ASIR)) was investigated. Images were also reconstructed in the coronal plane using a reformat process. Then the 2D and 3D NPSs were computed. In axial acquisition mode, the 2D axial NPS showed an important magnitude variation as a function of the z-direction when measured at the phantom center. In helical mode, a directional dependency with lobular shape was observed while the magnitude of the NPS was kept constant. Important effects of the reconstruction filter, pitch and reconstruction algorithm were observed on 3D NPS results for both MDCTs. With ASIR, a reduction of the NPS magnitude and a shift of the NPS peak to the low frequency range were visible. The 2D coronal NPS obtained from the reformat images was impacted by the interpolation when compared to the 2D coronal NPS obtained from 3D measurements. The noise properties of volumes measured on last-generation MDCTs were studied using the local 3D NPS metric. However, the impact of noise non-stationarity may need further investigation.
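
    For reference, the standard estimator for a local 2D NPS from an ensemble of noise-only ROIs is short enough to sketch; simple mean detrending is used here (a 2D polynomial fit is also common), and the normalization follows the usual definition.

        import numpy as np

        def nps_2d(rois, pixel_size):
            # rois: (n_roi, ny, nx) array cut from uniform-phantom images;
            # pixel_size: (dy, dx) in mm. Returns the 2D NPS in HU^2 * mm^2.
            n_roi, ny, nx = rois.shape
            dy, dx = pixel_size
            noise = rois - rois.mean(axis=(1, 2), keepdims=True)   # detrend each ROI
            spectra = np.abs(np.fft.fftshift(np.fft.fft2(noise), axes=(1, 2))) ** 2
            return spectra.mean(axis=0) * dx * dy / (nx * ny)      # ensemble average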

  20. Development of computed tomography system and image reconstruction algorithm

    International Nuclear Information System (INIS)

    Khairiah Yazid; Mohd Ashhar Khalid; Azaman Ahmad; Khairul Anuar Mohd Salleh; Ab Razak Hamzah

    2006-01-01

    Computed tomography is one of the most advanced and powerful nondestructive inspection techniques and is currently used in many different industries. In several CT systems, detection has been performed by a combination of an X-ray image intensifier and a charge-coupled device (CCD) camera, or by using a line array detector. The recent development of the X-ray flat panel detector has made fast CT imaging feasible and practical. Therefore, this paper describes the arrangement of a new detection system using the existing high-resolution (127 μm pixel size) flat panel detector at MINT, and the image reconstruction technique developed. The aim of the project is to develop a prototype flat panel detector based CT imaging system for NDE. The prototype consisted of an X-ray tube, a flat panel detector system, a rotation table and a computer system to control the sample motion and image acquisition. Hence this project is divided into two major tasks: first, to develop the image reconstruction algorithm and, second, to integrate the X-ray imaging components into one CT system. An image reconstruction algorithm using the filtered back-projection method is developed and compared to other techniques. MATLAB is the tool used for the simulations and computations in this project. (Author)
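
    A minimal filtered back-projection for parallel-beam data, of the kind such a project would prototype in MATLAB, can be sketched in Python as follows: the ramp filter is applied in the Fourier domain and the back-projection is done by rotating a smeared projection. Rotation-direction conventions vary between implementations, and the geometry here is a toy.

        import numpy as np
        from scipy.ndimage import rotate

        def fbp(sinogram, angles_deg):
            # sinogram: (n_angles, n_det); returns an (n_det, n_det) image.
            n_ang, n_det = sinogram.shape
            ramp = np.abs(np.fft.fftfreq(n_det))                 # ramp filter |f|
            filtered = np.real(np.fft.ifft(np.fft.fft(sinogram, axis=1) * ramp, axis=1))
            img = np.zeros((n_det, n_det))
            for proj, theta in zip(filtered, angles_deg):
                smear = np.tile(proj, (n_det, 1))                # smear along rays
                img += rotate(smear, theta, reshape=False, order=1)
            return img * np.pi / (2 * n_ang)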

  1. An FDK-like cone-beam SPECT reconstruction algorithm for non-uniform attenuated projections acquired using a circular trajectory

    International Nuclear Information System (INIS)

    Huang, Q; Zeng, G L; You, J; Gullberg, G T

    2005-01-01

    In this paper, Novikov's inversion formula of the attenuated two-dimensional (2D) Radon transform is applied to the reconstruction of attenuated fan-beam projections acquired with equal detector spacing and of attenuated cone-beam projections acquired with a flat planar detector and circular trajectory. The derivation of the fan-beam algorithm is obtained by transformation from parallel-beam coordinates to fan-beam coordinates. The cone-beam reconstruction algorithm is an extension of the fan-beam reconstruction algorithm using Feldkamp-Davis-Kress's (FDK) method. Computer simulations indicate that the algorithm is efficient and is accurate in reconstructing slices close to the central slice of the cone-beam orbit plane. When the attenuation map is set to zero the implementation is equivalent to the FDK method. Reconstructed images are also shown for noise corrupted projections

  2. Enhanced imaging of microcalcifications in digital breast tomosynthesis through improved image-reconstruction algorithms

    International Nuclear Information System (INIS)

    Sidky, Emil Y.; Pan Xiaochuan; Reiser, Ingrid S.; Nishikawa, Robert M.; Moore, Richard H.; Kopans, Daniel B.

    2009-01-01

    Purpose: The authors develop a practical, iterative algorithm for image-reconstruction in undersampled tomographic systems, such as digital breast tomosynthesis (DBT). Methods: The algorithm controls image regularity by minimizing the image total p variation (TpV), a function that reduces to the total variation when p=1.0 or the image roughness when p=2.0. Constraints on the image, such as image positivity and estimated projection-data tolerance, are enforced by projection onto convex sets. The fact that the tomographic system is undersampled translates to the mathematical property that many widely varied resultant volumes may correspond to a given data tolerance. Thus the application of image regularity serves two purposes: (1) Reduction in the number of resultant volumes out of those allowed by fixing the data tolerance, finding the minimum image TpV for fixed data tolerance, and (2) traditional regularization, sacrificing data fidelity for higher image regularity. The present algorithm allows for this dual role of image regularity in undersampled tomography. Results: The proposed image-reconstruction algorithm is applied to three clinical DBT data sets. The DBT cases include one with microcalcifications and two with masses. Conclusions: Results indicate that there may be a substantial advantage in using the present image-reconstruction algorithm for microcalcification imaging.

  3. Performance of 3DOSEM and MAP algorithms for reconstructing low count SPECT acquisitions

    Energy Technology Data Exchange (ETDEWEB)

    Grootjans, Willem [Radboud Univ. Medical Center, Nijmegen (Netherlands). Dept. of Radiology and Nuclear Medicine; Leiden Univ. Medical Center (Netherlands). Dept. of Radiology; Meeuwis, Antoi P.W.; Gotthardt, Martin; Visser, Eric P. [Radboud Univ. Medical Center, Nijmegen (Netherlands). Dept. of Radiology and Nuclear Medicine; Slump, Cornelis H. [Univ. Twente, Enschede (Netherlands). MIRA Inst. for Biomedical Technology and Technical Medicine; Geus-Oei, Lioe-Fee de [Radboud Univ. Medical Center, Nijmegen (Netherlands). Dept. of Radiology and Nuclear Medicine; Univ. Twente, Enschede (Netherlands). MIRA Inst. for Biomedical Technology and Technical Medicine; Leiden Univ. Medical Center (Netherlands). Dept. of Radiology

    2016-07-01

    Low count single photon emission computed tomography (SPECT) is becoming more important in view of whole body SPECT and reduction of radiation dose. In this study, we investigated the performance of several 3D ordered subset expectation maximization (3DOSEM) and maximum a posteriori (MAP) algorithms for reconstructing low count SPECT images. Phantom experiments were conducted using the National Electrical Manufacturers Association (NEMA) NU2 image quality (IQ) phantom. The background compartment of the phantom was filled with varying concentrations of pertechnetate and indium chloride, simulating various clinical imaging conditions. Images were acquired using a hybrid SPECT/CT scanner and reconstructed with 3DOSEM and MAP reconstruction algorithms implemented in Siemens Syngo MI.SPECT (Flash3D) and Hermes Hybrid Recon Oncology (Hybrid Recon 3DOSEM and MAP). Image analysis was performed by calculating the contrast recovery coefficient (CRC), percentage background variability (N%), and contrast-to-noise ratio (CNR), defined as the ratio between CRC and N%. Furthermore, image distortion was characterized by calculating the aspect ratio (AR) of ellipses fitted to the hot spheres. Additionally, the performance of these algorithms in reconstructing clinical images was investigated. Images reconstructed with 3DOSEM algorithms demonstrated superior image quality in terms of contrast and resolution recovery when compared to images reconstructed with filtered back-projection (FBP), OSEM and 2DOSEM. However, the occurrence of correlated noise patterns and image distortions significantly deteriorated the quality of 3DOSEM reconstructed images. The mean AR for the 37, 28, 22, and 17 mm spheres was 1.3, 1.3, 1.6, and 1.7, respectively. The mean N% increased in high and low count Flash3D and Hybrid Recon 3DOSEM from 5.9% and 4.0% to 11.1% and 9.0%, respectively. Similarly, the mean CNR decreased in high and low count Flash3D and Hybrid Recon 3DOSEM from 8.7 and 8.8 to 3.6 and 4
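
    The reported metrics are straightforward to compute from ROI statistics. A sketch with NEMA-style definitions follows; exact definitions vary between studies, and the input names are invented for the example.

        import numpy as np

        def iq_metrics(sphere_roi, bkg_rois, true_ratio):
            # sphere_roi: voxel values inside a hot sphere;
            # bkg_rois:   list of background ROI arrays; true_ratio: true LBR.
            s = np.mean(sphere_roi)
            bkg_means = np.array([np.mean(r) for r in bkg_rois])
            b = bkg_means.mean()
            crc = (s / b - 1.0) / (true_ratio - 1.0) * 100.0   # contrast recovery, %
            n_pct = bkg_means.std() / b * 100.0                # background variability, %
            return crc, n_pct, crc / n_pct                     # CNR = CRC / N%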

  4. Interior point algorithms: guaranteed optimality for fluence map optimization in IMRT

    Energy Technology Data Exchange (ETDEWEB)

    Aleman, Dionne M [Department of Mechanical and Industrial Engineering, University of Toronto, 5 King' s College Road, Toronto, ON M5S 3G8 (Canada); Glaser, Daniel [Division of Optimization and Systems Theory, Department of Mathematics, Royal Institute of Technology, Stockholm (Sweden); Romeijn, H Edwin [Department of Industrial and Operations Engineering, University of Michigan, Ann Arbor, MI 48109-2117 (United States); Dempsey, James F, E-mail: aleman@mie.utoronto.c, E-mail: romeijn@umich.ed, E-mail: jfdempsey@viewray.co [ViewRay, Inc. 2 Thermo Fisher Way, Village of Oakwood, OH 44146 (United States)

    2010-09-21

    One of the most widely studied problems in intensity-modulated radiation therapy (IMRT) treatment planning is the fluence map optimization (FMO) problem: determining the amount of radiation intensity, or fluence, of each beamlet in each beam. For a given set of beams, the fluences of the beamlets can drastically affect the quality of the treatment plan, and thus it is critical to obtain good fluence maps for radiation delivery. Although several approaches have been shown to yield good solutions to the FMO problem, these solutions are not guaranteed to be optimal. This shortcoming can be attributed to either optimization model complexity or properties of the algorithms used to solve the optimization model. We present a convex FMO formulation and an interior point algorithm that yields an optimal treatment plan in seconds, making it a viable option for clinical applications.

  5. Interior point algorithms: guaranteed optimality for fluence map optimization in IMRT

    International Nuclear Information System (INIS)

    Aleman, Dionne M; Glaser, Daniel; Romeijn, H Edwin; Dempsey, James F

    2010-01-01

    One of the most widely studied subproblems of intensity-modulated radiation therapy (IMRT) treatment planning is the fluence map optimization (FMO) problem: determining the amount of radiation intensity, or fluence, of each beamlet in each beam. For a given set of beams, the fluences of the beamlets can drastically affect the quality of the treatment plan, and thus it is critical to obtain good fluence maps for radiation delivery. Although several approaches have been shown to yield good solutions to the FMO problem, these solutions are not guaranteed to be optimal. This shortcoming can be attributed to either optimization model complexity or properties of the algorithms used to solve the optimization model. We present a convex FMO formulation and an interior point algorithm that yields an optimal treatment plan in seconds, making it a viable option for clinical applications.

  6. Pore REconstruction and Segmentation (PORES) method for improved porosity quantification of nanoporous materials

    Energy Technology Data Exchange (ETDEWEB)

    Van Eyndhoven, G., E-mail: geert.vaneyndhoven@uantwerpen.be [iMinds-Vision Lab, University of Antwerp, Universiteitsplein 1, B-2610 Wilrijk (Belgium); Kurttepeli, M. [EMAT, University of Antwerp, Groenenborgerlaan 171, B-2020 Antwerp (Belgium); Van Oers, C.J.; Cool, P. [Laboratory of Adsorption and Catalysis, University of Antwerp, Universiteitsplein 1, B-2610 Wilrijk (Belgium); Bals, S. [EMAT, University of Antwerp, Groenenborgerlaan 171, B-2020 Antwerp (Belgium); Batenburg, K.J. [iMinds-Vision Lab, University of Antwerp, Universiteitsplein 1, B-2610 Wilrijk (Belgium); Centrum Wiskunde and Informatica, Science Park 123, NL-1090 GB Amsterdam (Netherlands); Mathematical Institute, Universiteit Leiden, Niels Bohrweg 1, NL-2333 CA Leiden (Netherlands); Sijbers, J. [iMinds-Vision Lab, University of Antwerp, Universiteitsplein 1, B-2610 Wilrijk (Belgium)

    2015-01-15

    Electron tomography is currently a versatile tool to investigate the connection between the structure and properties of nanomaterials. However, a quantitative interpretation of electron tomography results is still far from straightforward. In particular, accurate quantification of pore space is hampered by artifacts introduced in all steps of the processing chain, i.e., acquisition, reconstruction, segmentation and quantification. Furthermore, most common approaches require subjective manual user input. In this paper, the PORES algorithm “POre REconstruction and Segmentation” is introduced; it is a tailor-made, integral approach for the reconstruction, segmentation, and quantification of porous nanomaterials. The PORES processing chain starts by calculating a reconstruction with a nanoporous-specific reconstruction algorithm: the Simultaneous Update of Pore Pixels by iterative REconstruction and Simple Segmentation algorithm (SUPPRESS). It classifies interior regions as pores during reconstruction, while reconstructing the remaining regions by reducing the error with respect to the acquired electron microscopy data. The SUPPRESS reconstruction can be directly plugged into the remaining processing chain of the PORES algorithm, resulting in accurate individual pore quantification and full sample pore statistics. The proposed approach was extensively validated on both simulated and experimental data, indicating its ability to generate accurate statistics of nanoporous materials. - Highlights: • An electron tomography reconstruction/segmentation method for nanoporous materials. • The method exploits the porous nature of the scanned material. • Validated extensively on both simulation and real data experiments. • Results in increased image resolution and improved porosity quantification.

  7. A novel algorithm of super-resolution image reconstruction based on multi-class dictionaries for natural scene

    Science.gov (United States)

    Wu, Wei; Zhao, Dewei; Zhang, Huan

    2015-12-01

    Super-resolution image reconstruction is an effective method to improve image quality and has important research significance in the field of image processing. However, the choice of dictionary directly affects the efficiency of image reconstruction. Sparse representation theory is introduced into the nearest neighbor selection problem. Building on sparse-representation-based super-resolution reconstruction, a super-resolution image reconstruction algorithm based on multi-class dictionaries is analyzed. This method avoids the redundancy of training a single over-complete dictionary, makes each sub-dictionary more representative, and replaces the traditional Euclidean distance computation to improve the quality of the whole reconstructed image. In addition, non-local self-similarity regularization is introduced to address the ill-posed nature of the problem. Experimental results show that the algorithm achieves much better results than state-of-the-art algorithms in terms of both PSNR and visual perception.
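
    As a rough illustration of the per-patch pipeline described above (class selection followed by sparse coding over a class sub-dictionary), here is a hedged Python sketch; the dictionaries, centroids, and patch sizes are random placeholders, the class selector is a simple nearest-centroid rule standing in for the paper's sparse-representation-based selection, and the OMP routine is a generic textbook implementation.

```python
# Hedged sketch of the per-patch step in multi-class dictionary SR:
# choose a class sub-dictionary for each LR patch, sparse-code it, and
# synthesize the HR patch from the paired HR atoms. Dictionaries here are
# random placeholders; real ones come from training on natural-scene patches.
import numpy as np

def omp(D, y, k):
    """Tiny orthogonal matching pursuit: k-sparse code of y over columns of D."""
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    code = np.zeros(D.shape[1]); code[support] = coef
    return code

def reconstruct_patch(y_lr, lr_dicts, hr_dicts, centroids, k=3):
    # class selection by nearest centroid (a stand-in for the paper's rule)
    c = int(np.argmin([np.linalg.norm(y_lr - m) for m in centroids]))
    return hr_dicts[c] @ omp(lr_dicts[c], y_lr, k)

rng = np.random.default_rng(1)
lr_dicts = [rng.standard_normal((25, 64)) for _ in range(4)]   # 5x5 LR patches
hr_dicts = [rng.standard_normal((100, 64)) for _ in range(4)]  # 10x10 HR patches
centroids = [d.mean(axis=1) for d in lr_dicts]
print(reconstruct_patch(rng.standard_normal(25), lr_dicts, hr_dicts, centroids).shape)
```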

  8. Optimization of CT image reconstruction algorithms for the lung tissue research consortium (LTRC)

    Science.gov (United States)

    McCollough, Cynthia; Zhang, Jie; Bruesewitz, Michael; Bartholmai, Brian

    2006-03-01

    To create a repository of clinical data, CT images and tissue samples and to more clearly understand the pathogenetic features of pulmonary fibrosis and emphysema, the National Heart, Lung, and Blood Institute (NHLBI) launched a cooperative effort known as the Lung Tissue Resource Consortium (LTRC). The CT images for the LTRC effort must contain accurate CT numbers in order to characterize tissues, and must have high spatial resolution to show fine anatomic structures. This study was performed to optimize the CT image reconstruction algorithms to achieve these criteria. Quantitative analyses of phantom and clinical images were conducted. The ACR CT accreditation phantom, containing five regions of distinct CT attenuations (CT numbers of approximately -1000 HU, -80 HU, 0 HU, 130 HU and 900 HU) and a high-contrast spatial resolution test pattern, was scanned using CT systems from two manufacturers (General Electric (GE) Healthcare and Siemens Medical Solutions). Phantom images were reconstructed using all relevant reconstruction algorithms. Mean CT numbers and image noise (standard deviation) were measured and compared for the five materials. Clinical high-resolution chest CT images acquired on a GE CT system for a patient with diffuse lung disease were reconstructed using the BONE and STANDARD algorithms and evaluated by a thoracic radiologist in terms of image quality and disease extent. The clinical BONE images were processed with a 3 x 3 x 3 median filter to simulate a thicker slice reconstructed with smoother algorithms, which have traditionally been proven to provide an accurate estimation of emphysema extent in the lungs. Using a threshold technique, the volume of emphysema (defined as the percentage of lung voxels having a CT number lower than -950 HU) was computed for the STANDARD, BONE, and filtered BONE images. The CT numbers measured in the ACR CT Phantom images were accurate for all reconstruction kernels for both manufacturers. As expected, visual evaluation of the
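
    The threshold-based emphysema index used in this study is straightforward to compute; a minimal sketch, assuming a CT volume in HU and a precomputed lung mask (both placeholders here):

```python
# Minimal sketch of the threshold-based emphysema index described above:
# percentage of lung voxels below -950 HU. `ct_hu` and `lung_mask` are
# hypothetical arrays (the study used segmented clinical chest CT volumes).
import numpy as np

def emphysema_index(ct_hu, lung_mask, threshold=-950.0):
    lung_voxels = ct_hu[lung_mask]
    return 100.0 * np.count_nonzero(lung_voxels < threshold) / lung_voxels.size

rng = np.random.default_rng(2)
ct_hu = rng.normal(-850, 80, size=(64, 64, 64))   # toy CT volume in HU
lung_mask = np.ones(ct_hu.shape, dtype=bool)      # placeholder lung segmentation
print(f"EI = {emphysema_index(ct_hu, lung_mask):.1f}%")
```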

  9. An efficient reconstruction algorithm for differential phase-contrast tomographic images from a limited number of views

    International Nuclear Information System (INIS)

    Sunaguchi, Naoki; Yuasa, Tetsuya; Gupta, Rajiv; Ando, Masami

    2015-01-01

    The main focus of this paper is the reconstruction of tomographic phase-contrast images from a set of projections. We propose an efficient reconstruction algorithm for differential phase-contrast computed tomography that can considerably reduce the number of projections required for reconstruction. The key result underlying this research is a projection theorem stating that the second derivative of the projection set is linearly related to the Laplacian of the tomographic image. The proposed algorithm first reconstructs the Laplacian image of the phase-shift distribution from the second derivative of the projections using total variation regularization. The second step is to obtain the phase-shift distribution by solving a Poisson equation, whose source is the previously reconstructed Laplacian image, under the Dirichlet condition. We demonstrate the efficacy of this algorithm using both synthetically generated simulation data and projection data acquired experimentally at a synchrotron. The experimental phase data were acquired from a human coronary artery specimen using dark-field-imaging optics pioneered by our group. Our results demonstrate that the proposed algorithm can reduce the number of projections to approximately 33% of that required by the conventional filtered backprojection method, without any detrimental effect on image quality
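
    The second step of the algorithm (recovering the phase map from its Laplacian) amounts to a discrete Poisson solve. A minimal sketch under simplifying assumptions: zero Dirichlet boundary, unit grid spacing, and a synthetic Laplacian standing in for the TV-reconstructed one.

```python
# Sketch of the algorithm's second step under simplifying assumptions:
# recover the phase map u from its (reconstructed) Laplacian L by solving
# the discrete Poisson equation with zero Dirichlet boundaries. The first
# step (TV-regularized reconstruction of L) is omitted here.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

def poisson_solve(lap, h=1.0):
    """Solve the 5-point discrete Poisson equation, u = 0 on the boundary."""
    ny, nx = lap.shape
    def d2(n):  # 1D second-difference matrix (Dirichlet)
        return sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n))
    A = sp.kron(sp.identity(ny), d2(nx)) + sp.kron(d2(ny), sp.identity(nx))
    return spsolve(A.tocsc(), (h * h) * lap.ravel()).reshape(ny, nx)

# round-trip check on a smooth toy phase map (zero on the boundary)
y, x = np.mgrid[0:64, 0:64] / 63.0
u_true = np.sin(np.pi * x) * np.sin(np.pi * y)
lap = (np.roll(u_true, 1, 0) + np.roll(u_true, -1, 0) +
       np.roll(u_true, 1, 1) + np.roll(u_true, -1, 1) - 4 * u_true)
print(np.abs(poisson_solve(lap) - u_true).max() < 0.1)
```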

  10. Layer-oriented multigrid wavefront reconstruction algorithms for multi-conjugate adaptive optics

    Science.gov (United States)

    Gilles, Luc; Ellerbroek, Brent L.; Vogel, Curtis R.

    2003-02-01

    Multi-conjugate adaptive optics (MCAO) systems with 10^4-10^5 degrees of freedom have been proposed for future giant telescopes. Using standard matrix methods to compute, optimize, and implement wavefront control algorithms for these systems is impractical, since the number of calculations required to compute and apply the reconstruction matrix scales respectively with the cube and the square of the number of AO degrees of freedom. In this paper, we develop an iterative sparse matrix implementation of minimum variance wavefront reconstruction for telescope diameters up to 32 m with more than 10^4 actuators. The basic approach is the preconditioned conjugate gradient method, using a multigrid preconditioner incorporating a layer-oriented (block) symmetric Gauss-Seidel iterative smoothing operator. We present open-loop numerical simulation results to illustrate algorithm convergence.
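
    The solver structure, preconditioned conjugate gradients on a sparse system, can be sketched as follows; a simple Jacobi preconditioner stands in for the paper's layer-oriented multigrid preconditioner, and the system matrix is a random sparse SPD placeholder rather than a real AO influence model.

```python
# Hedged sketch of the solver structure: preconditioned conjugate gradients
# on a sparse SPD system A x = b. A Jacobi (diagonal) preconditioner stands
# in for the paper's layer-oriented multigrid preconditioner; A and b are
# random placeholders, not a real adaptive-optics model.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg, LinearOperator

rng = np.random.default_rng(3)
n = 2000
B = sp.random(n, n, density=1e-3, random_state=3)
A = (B @ B.T + sp.identity(n) * 1e-2).tocsr()   # sparse SPD system
b = rng.standard_normal(n)

inv_diag = 1.0 / A.diagonal()
M = LinearOperator((n, n), matvec=lambda r: inv_diag * r)  # Jacobi preconditioner

iters = []
x, info = cg(A, b, M=M, callback=lambda xk: iters.append(1))
print(info, len(iters), np.linalg.norm(A @ x - b))
```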

  11. A three-dimensional-weighted cone beam filtered backprojection (CB-FBP) algorithm for image reconstruction in volumetric CT-helical scanning

    International Nuclear Information System (INIS)

    Tang Xiangyang; Hsieh Jiang; Nilsen, Roy A; Dutta, Sandeep; Samsonov, Dmitry; Hagiwara, Akira

    2006-01-01

    Based on the structure of the original helical FDK algorithm, a three-dimensional (3D)-weighted cone beam filtered backprojection (CB-FBP) algorithm is proposed for image reconstruction in volumetric CT under a helical source trajectory. In addition to its dependence on view and fan angles, the 3D weighting utilizes the cone angle dependency of a ray to improve reconstruction accuracy. The 3D weighting is ray-dependent, and the underlying mechanism is to give a favourable weight to the ray with the smaller cone angle of a pair of conjugate rays and an unfavourable weight to the ray with the larger cone angle of the pair. The proposed 3D-weighted helical CB-FBP reconstruction algorithm is implemented in the cone-parallel geometry, which can improve noise uniformity and image generation speed significantly. Under the cone-parallel geometry, the filtering is naturally carried out along the tangential direction of the helical source trajectory. By exploiting the 3D weighting's dependence on cone angle, the proposed helical 3D-weighted CB-FBP reconstruction algorithm can provide significantly improved reconstruction accuracy at moderate cone angles and high helical pitches. The 3D-weighted CB-FBP algorithm is experimentally evaluated with computer-simulated phantoms and phantoms scanned by a diagnostic volumetric CT system with a detector dimension of 64 x 0.625 mm over various helical pitches. The computer simulation study shows that the 3D weighting enables the proposed algorithm to reach reconstruction accuracy comparable to that of exact CB reconstruction algorithms, such as the Katsevich algorithm, under a moderate cone angle (4 deg.) and various helical pitches. Meanwhile, the experimental evaluation using the phantoms scanned by a volumetric CT system shows that the spatial resolution along the z-direction and noise characteristics of the proposed 3D-weighted helical CB-FBP reconstruction algorithm are maintained very well in comparison to the FDK
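
    The paper's weighting function itself is not reproduced in this abstract, but the idea of favouring the smaller cone angle within a conjugate ray pair can be illustrated with a purely hypothetical normalized cos² weight:

```python
# Illustrative stand-in for the 3D-weighting idea: given a conjugate ray
# pair with cone angles a1 and a2, give the smaller-cone-angle ray the
# larger of a pair of normalized weights. The cos^2 form below is only a
# plausible example; the paper's actual weighting is view-, fan- and
# cone-angle dependent and is not reproduced here.
import numpy as np

def conjugate_pair_weights(a1, a2):
    w1, w2 = np.cos(a1) ** 2, np.cos(a2) ** 2
    s = w1 + w2
    return w1 / s, w2 / s   # weights sum to 1; the smaller cone angle wins

print(conjugate_pair_weights(np.radians(1.0), np.radians(4.0)))
```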

  12. A study of reconstruction artifacts in cone beam tomography using filtered backprojection and iterative EM algorithms

    International Nuclear Information System (INIS)

    Zeng, G.L.; Gullberg, G.T.

    1990-01-01

    Reconstruction artifacts in cone beam tomography are studied for filtered backprojection (Feldkamp) and iterative EM algorithms. The filtered backprojection algorithm uses a voxel-driven, interpolated backprojection to reconstruct the cone beam data, whereas the iterative EM algorithm performs ray-driven projection and backprojection operations for each iteration. Two weighting schemes for the projection and backprojection operations in the EM algorithm are studied. One weights each voxel by the length of the ray through the voxel, and the other equates the value of a voxel to the functional value of the midpoint of the line intersecting the voxel, obtained by interpolating between eight neighboring voxels. Cone beam reconstruction artifacts such as rings, bright vertical extremities, and slice-to-slice crosstalk are not found with parallel beam and fan beam geometries
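
    For context, the EM iteration being compared has the familiar multiplicative form; a minimal ML-EM sketch with a generic dense system matrix A (whose entries would encode either of the two weighting schemes studied) is shown below. The matrix and data are random placeholders.

```python
# Minimal ML-EM sketch for emission tomography with a generic system matrix
# A; its entries would encode the projection/backprojection weighting scheme
# being compared (ray-length weights vs. interpolated midpoint weights).
# A and the data are random placeholders.
import numpy as np

def mlem(A, y, n_iter=20, eps=1e-12):
    x = np.ones(A.shape[1])
    sens = A.T @ np.ones(A.shape[0])        # sensitivity (normalization) image
    for _ in range(n_iter):
        ratio = y / np.maximum(A @ x, eps)  # measured / projected
        x *= (A.T @ ratio) / np.maximum(sens, eps)
    return x

rng = np.random.default_rng(4)
A = rng.random((200, 100))
x_true = rng.random(100)
y = rng.poisson(A @ x_true * 50) / 50.0     # noisy projection data
print(np.corrcoef(mlem(A, y), x_true)[0, 1] > 0.5)
```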

  13. A tracking algorithm for the reconstruction of the daughters of long-lived particles in LHCb

    CERN Document Server

    Dendek, Adam Mateusz

    2018-01-01

    The LHCb experiment at CERN operates a high precision and robust tracking system to reach its physics goals, including precise measurements of CP-violation phenomena in the heavy flavour quark sector and searches for New Physics beyond the Standard Model. The track reconstruction procedure is performed by a number of algorithms. One of these, PatLongLivedTracking, is optimised to reconstruct "downstream tracks", which are tracks originating from decays outside the LHCb vertex detector of long-lived particles, such as Ks or Λ0. After an overview of the LHCb tracking system, we provide a detailed description of the LHCb downstream track reconstruction algorithm. Its computational intelligence part is described in detail, including the adaptation of the employed...

  14. An improved cone-beam filtered backprojection reconstruction algorithm based on x-ray angular correction and multiresolution analysis

    International Nuclear Information System (INIS)

    Sun, Y.; Hou, Y.; Yan, Y.

    2004-01-01

    With the extensive application of industrial computed tomography in the field of non-destructive testing, how to improve the quality of the reconstructed image is receiving increasing attention. It is well known that in existing cone-beam filtered backprojection reconstruction algorithms the cone angle is controlled within a narrow range. The reason for this limitation is the incompleteness of the projection data when the cone angle increases, which limits the size of the tested workpiece. Considering the characteristics of the X-ray cone angle, an improved cone-beam filtered backprojection reconstruction algorithm taking account of angular correction is proposed in this paper. The aim of our algorithm is to correct the cone-angle effect resulting from the incompleteness of projection data in the conventional algorithm. The basis of the correction is the angular relationship among the X-ray source, the tested workpiece and the detector. Thus the cone angle is not strictly limited and this algorithm may be used to inspect larger workpieces. Furthermore, an adaptive wavelet filter is used for multiresolution analysis, which can adaptively modify the wavelet decomposition levels according to the resolution demands of the local reconstructed area. Therefore the computation and the reconstruction time can be reduced, and the quality of the reconstructed image can also be improved. (author)

  15. Study on the effects of sample selection on spectral reflectance reconstruction based on the algorithm of compressive sensing

    International Nuclear Information System (INIS)

    Zhang, Leihong; Liang, Dong

    2016-01-01

    To address the low efficiency and precision of spectral reflectance reconstruction, different samples are selected in this paper to reconstruct spectral reflectance, and a new spectral reflectance reconstruction method based on compressive sensing is proposed. Four matte color cards with different numbers of color patches (the ColorChecker Color Rendition Chart, the ColorChecker SG, the Pantone copperplate-paper spot color card, and the Munsell color card) are chosen as training samples; the spectral image is reconstructed with the compressive sensing, pseudo-inverse, and Wiener algorithms; and the results are compared. These spectral reconstruction methods are evaluated by root mean square error and color difference accuracy. The experiments show that, under the same reconstruction conditions, the cumulative contribution rate and color difference of the Munsell color card are better than those of the other three color cards, and that the accuracy of the spectral reconstruction is affected by the number of colors in the training sample. A key conclusion is that the uniformity and representativeness of the training sample selection are of great significance for reconstruction. In this paper, the influence of sample selection on spectral image reconstruction is studied. The precision of spectral reconstruction based on compressive sensing is higher than that of the traditional spectral reconstruction algorithms. The MATLAB simulation results show that spectral reconstruction precision and efficiency are affected by the number of colors in the training sample. (paper)
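
    A minimal sketch of the compressive-sensing recovery step is given below, using ISTA to solve an l1-regularized least-squares problem; the sensor response matrix A, training-derived basis B, and measurements m are all hypothetical placeholders (the paper's experiments were run in MATLAB on real color-card spectra).

```python
# Hedged sketch of the compressive-sensing step: recover sparse coefficients
# c, with reflectance r = B c, from camera measurements m = A r, via ISTA on
#   min_c 0.5 * ||A B c - m||^2 + lam * ||c||_1.
# A (sensor response), B (basis from training samples) and m are placeholders.
import numpy as np

def ista(Phi, m, lam=0.01, n_iter=500):
    L = np.linalg.norm(Phi, 2) ** 2          # Lipschitz constant of the gradient
    c = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        z = c - Phi.T @ (Phi @ c - m) / L    # gradient step
        c = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return c

rng = np.random.default_rng(5)
A = rng.random((6, 31))                      # 6 channels, 31 spectral bands
B = rng.standard_normal((31, 31))            # training-derived basis (placeholder)
c_true = np.zeros(31); c_true[[3, 10, 20]] = [1.0, -0.5, 0.8]
m = A @ (B @ c_true)
reflectance = B @ ista(A @ B, m)
print(reflectance.shape)
```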

  16. A Class of Large-Update and Small-Update Primal-Dual Interior-Point Algorithms for Linear Optimization

    NARCIS (Netherlands)

    Bai, Y.Q.; Lesaja, G.; Roos, C.; Wang, G.Q.; El Ghami, M.

    2008-01-01

    In this paper we present a class of polynomial primal-dual interior-point algorithms for linear optimization based on a new class of kernel functions. This class is fairly general and includes the classical logarithmic function, the prototype self-regular function, and non-self-regular kernel

  17. An efficient algorithm for reconstruction of spect images in the presence of spatially varying attenuation

    International Nuclear Information System (INIS)

    Zeeberg, B.R.; Bacharach, S.; Carson, R.; Green, M.V.; Larson, S.M.; Soucaille, J.F.

    1985-01-01

    An algorithm is presented which permits the reconstruction of SPECT images in the presence of spatially varying attenuation. The algorithm treats the spatially variant attenuation as a perturbation of the constant attenuation case and computes a reconstructed image and a correction image to estimate the effects of this perturbation. The corrected image is computed from these two images and is of comparable quality, both visually and quantitatively, to standard reference images simulated for zero or constant attenuation. In addition, the algorithm is time efficient, in that the time required is approximately 2.5 times that of a standard convolution backprojection algorithm

  18. Identifying Septal Support Reconstructions for Saddle Nose Deformity: The Cakmak Algorithm.

    Science.gov (United States)

    Cakmak, Ozcan; Emre, Ismet Emrah; Ozkurt, Fazil Emre

    2015-01-01

    The saddle nose deformity is one of the most challenging problems in nasal surgery, with a less predictable and reproducible result than other nasal procedures. The main feature of this deformity is loss of septal support, with both functional and aesthetic implications. Most reports on saddle nose have focused on aesthetic improvement and neglected the reestablishment of septal support to improve the airway. To explain how the Cakmak algorithm, an algorithm that describes various fixation techniques and grafts in different types of saddle nose deformities, aids in identifying saddle nose reconstructions that restore the supportive nasal framework and provide the aesthetic improvements typically associated with procedures to correct saddle nose deformities. This algorithm presents septal support reconstruction of patients with saddle nose deformity based on the experience of the senior author in 206 patients with saddle nose deformity. Preoperative examination, intraoperative assessment, reconstruction techniques, graft materials, and patient evaluation of aesthetic success were documented, and 4 different types of saddle nose deformities were defined. The Cakmak algorithm classifies varying degrees of saddle nose deformity from type 0 to type 4 and helps identify the most appropriate surgical procedure to restore the supportive nasal framework and aesthetic dorsum. Among the 206 patients, 110 women and 96 men, the mean (range) age was 39.7 years (15-68 years), and the mean (range) follow-up was 32 months (6-148 months). All but 12 patients had a history of previous nasal surgeries. Application of the Cakmak algorithm resulted in 36 patients categorized with type 0 saddle nose deformities; 79, type 1; 50, type 2; 20, type 3a; 7, type 3b; and 14, type 4. Postoperative photographs showed improvement of deformities, and patient surveys revealed aesthetic improvement in 201 patients and improvement in nasal breathing in 195 patients. Three patients developed postoperative infection

  19. Algorithm for three dimension reconstruction of magnetic resonance tomographs and X-ray images based on Fast Fourier Transform

    International Nuclear Information System (INIS)

    Bueno, Josiane M.; Traina, Agma Juci M.; Cruvinel, Paulo E.

    1995-01-01

    This work presents an algorithm for three-dimensional digital image reconstruction. The algorithm is based on the combination of a Fast Fourier Transform method with a Hamming window and the use of a tri-linear interpolation function. It allows not only the generation of three-dimensional spatial spin distribution maps for Magnetic Resonance Tomography data, but also of X- and gamma-ray linear attenuation coefficient maps for CT scanners. Results demonstrate the usefulness of the algorithm for three-dimensional image reconstruction, which first performs two-dimensional reconstructions and only afterwards interpolates between slices. The algorithm was developed in C++, and two versions are available: one for the DOS environment and the other for the UNIX/Sun environment. (author)
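
    The tri-linear interpolation step mentioned above can be sketched directly; `vol` is a placeholder volume and the query point is given in voxel coordinates:

```python
# Minimal sketch of the tri-linear interpolation used to fill the 3D volume
# between reconstructed 2D slices; `vol` is a placeholder volume.
import numpy as np

def trilinear(vol, z, y, x):
    z0, y0, x0 = int(np.floor(z)), int(np.floor(y)), int(np.floor(x))
    dz, dy, dx = z - z0, y - y0, x - x0
    out = 0.0
    for k in (0, 1):          # blend the 8 surrounding voxels
        for j in (0, 1):
            for i in (0, 1):
                w = ((dz if k else 1 - dz) *
                     (dy if j else 1 - dy) *
                     (dx if i else 1 - dx))
                out += w * vol[z0 + k, y0 + j, x0 + i]
    return out

vol = np.arange(27, dtype=float).reshape(3, 3, 3)
print(trilinear(vol, 0.5, 0.5, 0.5))   # average of the 8 surrounding voxels
```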

  20. Development of Quadratic Programming Algorithm Based on Interior Point Method with Estimation Mechanism of Active Constraints

    Science.gov (United States)

    Hashimoto, Hiroyuki; Takaguchi, Yusuke; Nakamura, Shizuka

    Instability of the calculation process and the increase in calculation time caused by the growing size of continuous optimization problems remain the major issues to be solved before the technique can be applied to practical industrial systems. This paper proposes an enhanced quadratic programming algorithm based on the interior point method, aimed mainly at improving calculation stability. The proposed method has a dynamic estimation mechanism for active constraints on variables, which fixes the variables getting close to their upper/lower limits and afterwards releases the fixed ones as needed during the optimization process. It can be considered an algorithm-level integration of the solution strategy of the active-set method into the interior point method framework. We describe numerical results on the commonly used benchmark problems called “CUTEr” to show the effectiveness of the proposed method. Furthermore, test results on a large-sized ELD problem (Economic Load Dispatching in electric power supply scheduling) are also described as a practical industrial application.

  1. Convergence of iterative image reconstruction algorithms for Digital Breast Tomosynthesis

    DEFF Research Database (Denmark)

    Sidky, Emil; Jørgensen, Jakob Heide; Pan, Xiaochuan

    2012-01-01

    Most iterative image reconstruction algorithms are based on some form of optimization, such as minimization of a data-fidelity term plus an image regularizing penalty term. While achieving the solution of these optimization problems may not directly be clinically relevant, accurate optimization s...

  2. A faster ordered-subset convex algorithm for iterative reconstruction in a rotation-free micro-CT system

    International Nuclear Information System (INIS)

    Quan, E; Lalush, D S

    2009-01-01

    We present a faster iterative reconstruction algorithm based on the ordered-subset convex (OSC) algorithm for transmission CT. The OSC algorithm was modified such that it calculates the normalization term before the iterative process in order to save computational cost. The modified version requires only one backprojection per iteration, as compared to the two required by the original OSC. We applied the modified OSC (MOSC) algorithm to a rotation-free micro-CT system that we proposed previously, observed its performance, and compared it with the OSC algorithm for 3D cone-beam reconstruction. Measurements on the reconstructed images as well as the point spread functions show that MOSC is quite similar to OSC; in the noise-resolution trade-off, MOSC is comparable with OSC in a regular-noise situation and slightly worse than OSC in an extremely high-noise situation. The timing record shows that MOSC saves 25-30% CPU time, depending on the number of iterations used. We conclude that the MOSC algorithm is more efficient than OSC and provides comparable images.

  3. Improved Wallis Dodging Algorithm for Large-Scale Super-Resolution Reconstruction Remote Sensing Images

    Directory of Open Access Journals (Sweden)

    Chong Fan

    2017-03-01

    A sub-block algorithm is usually applied in the super-resolution (SR) reconstruction of images because of limitations in computer memory. However, the sub-block SR images can hardly achieve seamless image mosaicking because of the uneven distribution of brightness and contrast among the sub-blocks. An improved weighted Wallis dodging algorithm is proposed, exploiting the characteristic that SR reconstructed images are gray images of the same size with overlapping regions. This algorithm achieves consistency of image brightness and contrast. Meanwhile, a weighted adjustment sequence is presented to avoid the spatial propagation and accumulation of errors and the loss of image information caused by excessive computation. A seam line elimination method distributes the partial dislocation at the seam line across the entire overlapping region with a smooth transition effect. Subsequently, the improved method is employed to remove uneven illumination from 900 SR reconstructed images of ZY-3, and the overlapping image mosaic method is adopted to accomplish a seamless image mosaic based on the optimal seam line.
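
    The core of a Wallis-type dodging step is a per-block gain/offset that pulls each sub-block's mean and standard deviation toward common targets; the sketch below uses the standard Wallis transform with placeholder blocks and parameters, and omits the paper's weighted adjustment sequence and seam-line handling.

```python
# Illustrative core of a Wallis-type dodging step: move each sub-block's
# brightness (mean) and contrast (std) toward global targets. The paper's
# weighted adjustment sequence and seam-line elimination are not shown.
import numpy as np

def wallis_adjust(block, target_mean, target_std, c=0.8, b=0.8):
    m, s = block.mean(), block.std() + 1e-6
    gain = c * target_std / (c * s + (1 - c) * target_std)
    return (block - m) * gain + b * target_mean + (1 - b) * m

rng = np.random.default_rng(6)
blocks = [rng.normal(mu, sd, (64, 64)) for mu, sd in [(90, 10), (140, 25)]]
t_mean = np.mean([blk.mean() for blk in blocks])
t_std = np.mean([blk.std() for blk in blocks])
out = [wallis_adjust(blk, t_mean, t_std) for blk in blocks]
print([f"{o.mean():.0f}/{o.std():.1f}" for o in out])  # means/stds pulled together
```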

  4. Optimization of Proton CT Detector System and Image Reconstruction Algorithm for On-Line Proton Therapy.

    Directory of Open Access Journals (Sweden)

    Chae Young Lee

    The purposes of this study were to optimize a proton computed tomography (pCT) system for proton range verification and to confirm the pCT image reconstruction algorithm based on projection images generated with optimized parameters. For this purpose, we developed a new pCT scanner using the Geometry and Tracking (GEANT) 4.9.6 simulation toolkit. GEANT4 simulations were performed to optimize the geometric parameters representing the detector thickness and the distance between the detectors for pCT. The system consisted of four silicon strip detectors for particle tracking and a calorimeter to measure the residual energies of the individual protons. The optimized pCT system design was then adjusted to ensure that the solution to a CS-based convex optimization problem would converge to yield the desired pCT images after a reasonable number of iterative corrections. In particular, we used a total variation-based formulation that has been useful in exploiting prior knowledge about the minimal variations of proton attenuation characteristics in the human body. Examinations performed using our CS algorithm showed that high-quality pCT images could be reconstructed using sets of 72 projections within 20 iterations and without any streaks or noise, which can be caused by under-sampling and proton starvation. Moreover, the images yielded by this CS algorithm were of higher quality than those obtained using other reconstruction algorithms. The optimized pCT scanner system demonstrated the potential to perform high-quality pCT during on-line image-guided proton therapy, without increasing the imaging dose, by applying our CS-based proton CT reconstruction algorithm. Our optimized detector system and CS-based proton CT reconstruction algorithm are thus potentially useful for on-line proton therapy.

  5. A reconstruction algorithm for three-dimensional object-space data using spatial-spectral multiplexing

    Science.gov (United States)

    Wu, Zhejun; Kudenov, Michael W.

    2017-05-01

    This paper presents a reconstruction algorithm for the Spatial-Spectral Multiplexing (SSM) optical system. The goal of this algorithm is to recover the three-dimensional spatial and spectral information of a scene, given that a one-dimensional spectrometer array is used to sample the pupil of the spatial-spectral modulator. The challenge of the reconstruction is that the non-parametric representation of the three-dimensional spatial and spectral object requires a large number of variables, leading to an underdetermined linear system that is hard to uniquely recover. We propose to reparameterize the spectrum using B-spline functions to reduce the number of unknown variables. Our reconstruction algorithm then solves the improved linear system via a least-squares optimization of the B-spline coefficients with additional spatial smoothness regularization. The ground truth object and the optical model for the measurement matrix are simulated with both spatial and spectral assumptions according to a realistic field of view. In order to test the robustness of the algorithm, we add Poisson noise to the measurement and test on both two-dimensional and three-dimensional spatial and spectral scenes. Our analysis shows that the root mean square error of the recovered results can be kept within 5.15%.
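
    With the spectrum reparameterized by B-spline coefficients, the recovery reduces to regularized least squares. A minimal sketch, assuming a generic system matrix A that already folds in the optical model and the B-spline basis (placeholders here), with a second-difference smoothness penalty:

```python
# Hedged sketch of the recovery step: with the spectrum reparameterized by
# B-spline coefficients c, the measurement model becomes m = A c, and the
# recovery is min_c ||A c - m||^2 + lam * ||D c||^2 with a second-difference
# smoothness penalty. A and m are random placeholders.
import numpy as np

def smooth_lsq(A, m, lam=0.1):
    n = A.shape[1]
    D = np.diff(np.eye(n), 2, axis=0)        # second-difference operator
    return np.linalg.solve(A.T @ A + lam * (D.T @ D), A.T @ m)

rng = np.random.default_rng(7)
A = rng.random((40, 12))                     # optical model x B-spline basis
c_true = np.sin(np.linspace(0, np.pi, 12))
m = A @ c_true + 0.01 * rng.standard_normal(40)
print(np.round(smooth_lsq(A, m) - c_true, 2))  # small recovery errors
```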

  6. A compressed sensing based reconstruction algorithm for synchrotron source propagation-based X-ray phase contrast computed tomography

    Energy Technology Data Exchange (ETDEWEB)

    Melli, Seyed Ali, E-mail: sem649@mail.usask.ca [Department of Electrical and Computer Engineering, University of Saskatchewan, Saskatoon, SK (Canada); Wahid, Khan A. [Department of Electrical and Computer Engineering, University of Saskatchewan, Saskatoon, SK (Canada); Babyn, Paul [Department of Medical Imaging, University of Saskatchewan, Saskatoon, SK (Canada); Montgomery, James [College of Medicine, University of Saskatchewan, Saskatoon, SK (Canada); Snead, Elisabeth [Western College of Veterinary Medicine, University of Saskatchewan, Saskatoon, SK (Canada); El-Gayed, Ali [College of Medicine, University of Saskatchewan, Saskatoon, SK (Canada); Pettitt, Murray; Wolkowski, Bailey [College of Agriculture and Bioresources, University of Saskatchewan, Saskatoon, SK (Canada); Wesolowski, Michal [Department of Medical Imaging, University of Saskatchewan, Saskatoon, SK (Canada)

    2016-01-11

    Synchrotron source propagation-based X-ray phase contrast computed tomography is increasingly used in pre-clinical imaging. However, it typically requires a large number of projections, and consequently a large radiation dose, to produce high quality images. To improve the applicability of this imaging technique, reconstruction algorithms are needed that can reduce the radiation dose and acquisition time without degrading image quality. The proposed research focused on using a novel combination of Douglas–Rachford splitting and randomized Kaczmarz algorithms to solve large-scale total variation based optimization in a compressed sensing framework to reconstruct 2D images from a reduced number of projections. Visual assessment and quantitative performance evaluations of a synthetic abdomen phantom and a real reconstructed image of an ex-vivo slice of canine prostate tissue demonstrate that the proposed algorithm is competitive with other well-known reconstruction algorithms. An additional potential benefit of reducing the number of projections is a shorter window for motion artifacts to occur if the sample moves during image acquisition. Use of this reconstruction algorithm to reduce the required number of projections in synchrotron source propagation-based X-ray phase contrast computed tomography is an effective form of dose reduction that may pave the way for imaging of in-vivo samples.
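
    One half of the proposed combination, the randomized Kaczmarz solver, is compact enough to sketch on its own; rows of a placeholder system are sampled with probability proportional to their squared norms, as in the standard method:

```python
# Minimal randomized Kaczmarz sketch (one half of the paper's
# Douglas-Rachford + Kaczmarz combination): sweep over projection equations
# a_i . x = b_i, picking rows with probability proportional to ||a_i||^2.
# The system here is a random placeholder, not real projection data.
import numpy as np

def randomized_kaczmarz(A, b, n_iter=5000, seed=0):
    rng = np.random.default_rng(seed)
    row_norms = np.sum(A * A, axis=1)
    probs = row_norms / row_norms.sum()
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        i = rng.choice(A.shape[0], p=probs)
        x += (b[i] - A[i] @ x) / row_norms[i] * A[i]  # project onto row i
    return x

rng = np.random.default_rng(8)
A = rng.standard_normal((300, 50))
x_true = rng.standard_normal(50)
print(np.linalg.norm(randomized_kaczmarz(A, A @ x_true) - x_true) < 1e-3)
```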

  7. Quantitative analysis of emphysema and airway measurements according to iterative reconstruction algorithms: comparison of filtered back projection, adaptive statistical iterative reconstruction and model-based iterative reconstruction

    International Nuclear Information System (INIS)

    Choo, Ji Yung; Goo, Jin Mo; Park, Chang Min; Park, Sang Joon; Lee, Chang Hyun; Shim, Mi-Suk

    2014-01-01

    To evaluate filtered back projection (FBP) and two iterative reconstruction (IR) algorithms and their effects on the quantitative analysis of lung parenchyma and airway measurements on computed tomography (CT) images. Low-dose chest CT scans obtained in 281 adult patients were reconstructed using three algorithms: FBP, adaptive statistical IR (ASIR) and model-based IR (MBIR). Measurements of each dataset were compared: total lung volume, emphysema index (EI), airway measurements of the lumen and wall area, and average wall thickness. The accuracy of the airway measurements of each algorithm was also evaluated using an airway phantom. The EI using a threshold of -950 HU was significantly different among the three algorithms, in decreasing order of FBP (2.30 %), ASIR (1.49 %) and MBIR (1.20 %) (P < 0.01). Wall thickness was also significantly different among the three algorithms, with FBP (2.09 mm) demonstrating thicker walls than ASIR (2.00 mm) and MBIR (1.88 mm) (P < 0.01). Airway phantom analysis revealed that MBIR showed the most accurate values for airway measurements. The three algorithms presented different EIs and wall thicknesses, decreasing in the order of FBP, ASIR and MBIR. Thus, care should be taken in selecting the appropriate IR algorithm for quantitative analysis of the lung. (orig.)

  8. Quantitative analysis of emphysema and airway measurements according to iterative reconstruction algorithms: comparison of filtered back projection, adaptive statistical iterative reconstruction and model-based iterative reconstruction

    Energy Technology Data Exchange (ETDEWEB)

    Choo, Ji Yung [Seoul National University Medical Research Center, Department of Radiology, Seoul National University College of Medicine, and Institute of Radiation Medicine, Seoul (Korea, Republic of); Korea University Ansan Hospital, Ansan-si, Department of Radiology, Gyeonggi-do (Korea, Republic of); Goo, Jin Mo; Park, Chang Min; Park, Sang Joon [Seoul National University Medical Research Center, Department of Radiology, Seoul National University College of Medicine, and Institute of Radiation Medicine, Seoul (Korea, Republic of); Seoul National University, Cancer Research Institute, Seoul (Korea, Republic of); Lee, Chang Hyun; Shim, Mi-Suk [Seoul National University Medical Research Center, Department of Radiology, Seoul National University College of Medicine, and Institute of Radiation Medicine, Seoul (Korea, Republic of)

    2014-04-15

    To evaluate filtered back projection (FBP) and two iterative reconstruction (IR) algorithms and their effects on the quantitative analysis of lung parenchyma and airway measurements on computed tomography (CT) images. Low-dose chest CT scans obtained in 281 adult patients were reconstructed using three algorithms: FBP, adaptive statistical IR (ASIR) and model-based IR (MBIR). Measurements of each dataset were compared: total lung volume, emphysema index (EI), airway measurements of the lumen and wall area, and average wall thickness. The accuracy of the airway measurements of each algorithm was also evaluated using an airway phantom. The EI using a threshold of -950 HU was significantly different among the three algorithms, in decreasing order of FBP (2.30 %), ASIR (1.49 %) and MBIR (1.20 %) (P < 0.01). Wall thickness was also significantly different among the three algorithms, with FBP (2.09 mm) demonstrating thicker walls than ASIR (2.00 mm) and MBIR (1.88 mm) (P < 0.01). Airway phantom analysis revealed that MBIR showed the most accurate values for airway measurements. The three algorithms presented different EIs and wall thicknesses, decreasing in the order of FBP, ASIR and MBIR. Thus, care should be taken in selecting the appropriate IR algorithm for quantitative analysis of the lung. (orig.)

  9. A Spectral Reconstruction Algorithm of Miniature Spectrometer Based on Sparse Optimization and Dictionary Learning.

    Science.gov (United States)

    Zhang, Shang; Dong, Yuhan; Fu, Hongyan; Huang, Shao-Lun; Zhang, Lin

    2018-02-22

    The miniaturization of spectrometers can broaden the application area of spectrometry, which has huge academic and industrial value. Among various miniaturization approaches, filter-based miniaturization is a promising implementation that utilizes broadband filters with distinct transmission functions. Mathematically, filter-based spectral reconstruction can be modeled as solving a system of linear equations. In this paper, we propose an algorithm for spectral reconstruction based on sparse optimization and dictionary learning. To verify the feasibility of the reconstruction algorithm, we design and implement a simple prototype of a filter-based miniature spectrometer. The experimental results demonstrate that sparse optimization is well applicable to spectral reconstruction whether the spectra are directly sparse or not. As for the non-directly sparse spectra, their sparsity can be enhanced by dictionary learning. In conclusion, the proposed approach has a bright application prospect in fabricating a practical miniature spectrometer.

  10. An improved muon reconstruction algorithm for INO-ICAL experiment

    International Nuclear Information System (INIS)

    Bhattacharya, Kolahal; Mandal, Naba K.

    2013-01-01

    The charged current interaction of a neutrino in the INO-ICAL detector will be identified by a muon (μ±) in the detector, whose kinematics are related to the kinematics of the neutrino. Muon reconstruction is therefore a very important step in achieving the INO physics goals. The existing muon reconstruction package for INO-ICAL has poor performance in specific regimes of experimental interest: (a) for larger zenith angles (θ > 50°) and (b) for lower energies (E < 1 GeV), mainly due to a poor error propagation scheme insensitive to energy E, angles (θ, φ) and the inhomogeneous magnetic field along the muon track. Since a significant fraction of muons from atmospheric neutrino interactions will have initial energy < 1 GeV and an almost uniform distribution in cosθ, a robust package for muon reconstruction is essential. We have implemented higher order correction terms in the propagation of the state and error covariance matrices of the Kalman filter. The algorithm ensures track element merging in most cases and also increases reconstruction efficiency. The performance of this package will be presented in comparison with the previous one. (author)

  11. A practical exact maximum compatibility algorithm for reconstruction of recent evolutionary history.

    Science.gov (United States)

    Cherry, Joshua L

    2017-02-23

    Maximum compatibility is a method of phylogenetic reconstruction that is seldom applied to molecular sequences. It may be ideal for certain applications, such as reconstructing phylogenies of closely-related bacteria on the basis of whole-genome sequencing. Here I present an algorithm that rapidly computes phylogenies according to a compatibility criterion. Although based on solutions to the maximum clique problem, this algorithm deals properly with ambiguities in the data. The algorithm is applied to bacterial data sets containing up to nearly 2000 genomes with several thousand variable nucleotide sites. Run times are several seconds or less. Computational experiments show that maximum compatibility is less sensitive than maximum parsimony to the inclusion of nucleotide data that, though derived from actual sequence reads, has been identified as likely to be misleading. Maximum compatibility is a useful tool for certain phylogenetic problems, such as inferring the relationships among closely-related bacteria from whole-genome sequence data. The algorithm presented here rapidly solves fairly large problems of this type, and provides robustness against misleading characters that can pollute large-scale sequencing data.
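
    The pairwise notion of compatibility underlying such methods can be illustrated with the classical four-gamete test for binary characters; the paper's handling of ambiguous states and its maximum-clique machinery are beyond this toy sketch:

```python
# Illustrative pairwise compatibility check (four-gamete test) underlying
# maximum compatibility: two binary characters conflict on a tree iff all
# four state combinations occur among the taxa. Toy data only.
from itertools import combinations

def compatible(col_a, col_b):
    gametes = {(a, b) for a, b in zip(col_a, col_b)}
    return len(gametes) <= 3

# columns = variable nucleotide sites recoded as binary characters
sites = {
    "s1": [0, 0, 1, 1, 1],
    "s2": [0, 1, 1, 1, 0],
    "s3": [0, 0, 0, 1, 1],
}
for a, b in combinations(sites, 2):
    print(a, b, compatible(sites[a], sites[b]))   # s1/s3 compatible, others not
```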

  12. Reference Information Based Remote Sensing Image Reconstruction with Generalized Nonconvex Low-Rank Approximation

    Directory of Open Access Journals (Sweden)

    Hongyang Lu

    2016-06-01

    Because of the contradiction between the spatial and temporal resolution of remote sensing images (RSI) and the quality loss in the process of acquisition, it is of great significance to reconstruct RSI in remote sensing applications. Recent studies have demonstrated that reference image-based reconstruction methods have great potential for higher reconstruction performance, but still lack accuracy and quality of reconstruction. For this application, a new compressed sensing objective function incorporating a reference image as prior information is developed. We resort to the reference prior information inherent in interior and exterior data simultaneously to build a new generalized nonconvex low-rank approximation framework for RSI reconstruction. Specifically, the innovation of this paper consists of the following three respects: (1) we propose a nonconvex low-rank approximation for reconstructing RSI; (2) we inject reference prior information to overcome over-smoothed edges and texture detail losses; (3) on this basis, we combine conjugate gradient algorithms and singular value thresholding (SVT) to solve the proposed algorithm. The performance of the algorithm is evaluated both qualitatively and quantitatively. Experimental results demonstrate that the proposed algorithm improves peak signal-to-noise ratio (PSNR) by several dBs and preserves image details significantly better compared with most current approaches that do not use reference images as priors. In addition, the generalized nonconvex low-rank approximation of our approach is naturally robust to noise, and therefore the proposed algorithm can handle low-resolution, noisy inputs in a more unified framework.
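
    The SVT building block mentioned in point (3) is compact enough to sketch; soft-thresholding of singular values is shown below on toy data, whereas the paper replaces this convex shrinkage with a generalized nonconvex penalty:

```python
# Minimal sketch of the singular value thresholding (SVT) operator used in
# low-rank approximation steps: shrink the singular values of X by tau.
# The paper's generalized nonconvex surrogate replaces this soft shrinkage.
import numpy as np

def svt(X, tau):
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt   # scale columns of U, recombine

rng = np.random.default_rng(9)
low_rank = rng.standard_normal((60, 5)) @ rng.standard_normal((5, 60))
noisy = low_rank + 0.1 * rng.standard_normal((60, 60))
denoised = svt(noisy, tau=2.0)
print(np.linalg.matrix_rank(denoised), np.linalg.matrix_rank(noisy))  # e.g. 5 vs 60
```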

  13. A new hybrid-FBP inversion algorithm with inverse distance backprojection weight for CT reconstruction

    Energy Technology Data Exchange (ETDEWEB)

    Narasimhadhan, A.V.; Rajgopal, Kasi

    2011-07-01

    This paper presents a new hybrid filtered backprojection (FBP) algorithm for fan-beam and cone-beam scans. The hybrid reconstruction kernel is the sum of the ramp and Hilbert filters. We modify the redundancy weighting function to reduce the inverse square distance weighting in the backprojection to an inverse distance weight. The modified weight also eliminates the derivative associated with the Hilbert filter kernel. Thus, the proposed reconstruction algorithm has the advantages of the inverse distance weight in the backprojection. We evaluate the performance of the new algorithm in terms of the magnitude level and uniformity in noise for the fan-beam geometry. The computer simulations show that the spatial resolution is nearly identical to that of the standard fan-beam ramp-filtered algorithm, while the noise is spatially uniform and the noise variance is reduced. (orig.)

  14. Reconstruction algorithm medical imaging DRR; Algoritmo de construccion de imagenes medicas DRR

    Energy Technology Data Exchange (ETDEWEB)

    Estrada Espinosa, J. C.

    2013-07-01

    The digitally reconstructed radiograph (DRR) method is based on two orthogonal images, in the dorsal and lateral decubitus positions of the simulation. DRR images are reconstructed with an algorithm that simulates a conventional X-ray unit whose emitted beam is not divergent; in this case, the rays are considered to be parallel in the DRR image reconstruction. For this purpose, it is necessary to use the Hounsfield unit (HU) values of every voxel in all the axial slices that form the CT study, finally obtaining the reconstructed DRR image by performing a transformation from 3D to 2D. (Author)
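
    Under the parallel-ray assumption described above, a DRR reduces to summing attenuation along one axis of the CT volume; a minimal sketch with placeholder voxel size, water attenuation coefficient and toy volume:

```python
# Hedged sketch of a parallel-ray DRR: convert CT Hounsfield units to linear
# attenuation, integrate along one axis and exponentiate. The voxel size,
# mu_water and the toy volume are stand-in values.
import numpy as np

def drr_parallel(ct_hu, axis=0, voxel_mm=1.0, mu_water=0.02):  # mu in 1/mm
    mu = mu_water * (1.0 + ct_hu / 1000.0)       # HU -> attenuation coefficient
    mu = np.clip(mu, 0.0, None)
    path = mu.sum(axis=axis) * voxel_mm          # line integral per parallel ray
    return np.exp(-path)                         # transmitted fraction I/I0

ct = np.full((64, 64, 64), -1000.0)              # air
ct[16:48, 16:48, 16:48] = 40.0                   # soft-tissue cube
print(drr_parallel(ct, axis=0).min())            # darkest ray through the cube
```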

  15. A new approximate algorithm for image reconstruction in cone-beam spiral CT at small cone-angles

    International Nuclear Information System (INIS)

    Schaller, S.; Flohr, T.; Steffen, P.

    1996-01-01

    This paper presents a new approximate algorithm for image reconstruction with cone-beam spiral CT data at relatively small cone angles. Based on the algorithm of Wang et al., our method combines a special complementary interpolation with filtered backprojection. The presented algorithm has three main advantages over Wang's algorithm: (1) it overcomes the pitch limitation of Wang's algorithm; (2) it significantly improves z-resolution when suitable sampling schemes are applied; (3) it avoids the waste of applied radiation dose inherent to Wang's algorithm. Usage of the total applied dose is an important requirement in medical imaging. Our method has been implemented on a standard workstation. Reconstructions of computer-simulated data of different phantoms, assuming sampling conditions and image quality requirements typical of medical CT, show encouraging results

  16. High-definition computed tomography for coronary artery stents imaging: Initial evaluation of the optimal reconstruction algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Cui, Xiaoming, E-mail: mmayzy2008@126.com; Li, Tao, E-mail: litaofeivip@163.com; Li, Xin, E-mail: lx0803@sina.com.cn; Zhou, Weihua, E-mail: wangxue0606@gmail.com

    2015-05-15

    Highlights: • The high-resolution scan mode is appropriate for imaging coronary stents. • The HD-detail reconstruction algorithm is a stent-dedicated kernel. • Intrastent lumen visibility also depends on stent diameter and material. - Abstract: Objective: The aim of this study was to evaluate the in vivo performance of four image reconstruction algorithms in a high-definition CT (HDCT) scanner with improved spatial resolution for the evaluation of coronary artery stents and intrastent lumina. Materials and methods: Thirty-nine consecutive patients with a total of 71 implanted coronary stents underwent coronary CT angiography (CCTA) on an HDCT (Discovery CT750 HD; GE Healthcare) with the high-resolution scanning mode. Four different reconstruction algorithms (HD-stand; HD-detail; HD-stand-plus; HD-detail-plus) were applied to reconstruct the stented coronary arteries. Image quality for stent characterization was assessed. Image noise and intrastent luminal diameter were measured. The relationships between the measured inner stent diameter (ISD) and the true stent diameter (TSD) and stent type were analysed. Results: The stent-dedicated kernel (HD-detail) offered the highest percentage (53.5%) of good image quality for stent characterization and the highest ratio (68.0 ± 8.4%) of visible stent lumen/true stent lumen for luminal diameter measurement, at the expense of increased overall image noise. The Pearson correlation coefficient between the ISD and TSD measurements and the Spearman correlation coefficient between the ISD measurement and stent type were 0.83 and 0.48, respectively. Conclusions: Compared with standard reconstruction algorithms, the high-definition CT imaging technique with a dedicated high-resolution reconstruction algorithm provides more accurate stent characterization and intrastent luminal diameter measurement.

  17. A GREIT-type linear reconstruction algorithm for EIT using eigenimages

    International Nuclear Information System (INIS)

    Antink, Christoph Hoog; Pikkemaat, Robert; Leonhardt, Steffen

    2013-01-01

    Reconstruction in electrical impedance tomography (EIT) is a nonlinear, ill-posed inverse problem. Based on point-shaped training and evaluation data, the 'Graz consensus Reconstruction algorithm for EIT' (GREIT) constitutes a universal, homogeneous method. While this is a very reasonable approach to the general problem, we ask whether an optimized reconstruction method for a specific application of EIT, i.e. thoracic imaging, can be found. Instead of point-shaped training data, we propose to use spatially extended training data consisting of eigenimages. To evaluate the quality of reconstruction of the proposed approach, figures of merit (FOMs) derived from the ones used in GREIT are developed. For the application of thoracic imaging, lung shapes were segmented from a publicly available CT database (www.dir-lab.com) and used to calculate the novel FOMs. With those, the general feasibility of using eigenimages is demonstrated and compared to the standard approach. In addition, it is shown that by using different sets of training data, the creation of an individually optimized linear method of reconstruction is possible.
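
    A GREIT-type linear reconstruction matrix can be trained from pairs of simulated measurements and desired images; the ridge-regression sketch below is a generic stand-in (random placeholder forward model and training images) for the paper's construction, where the training images would be eigenimages:

```python
# Hedged sketch of training a GREIT-type linear reconstruction matrix R from
# simulated measurements V (columns) and desired images X (columns):
#   R = X V^T (V V^T + lam*I)^-1,   then an image is simply  x = R v.
# The forward model and training columns are random placeholders; in the
# paper the training images would be eigenimages rather than point targets.
import numpy as np

def train_linear_reconstructor(X, V, lam=1e-3):
    n_meas = V.shape[0]
    return X @ V.T @ np.linalg.inv(V @ V.T + lam * np.eye(n_meas))

rng = np.random.default_rng(10)
n_meas, n_pix, n_train = 208, 1024, 300    # 16-electrode EIT-like sizes
F = rng.standard_normal((n_meas, n_pix))   # placeholder forward model
X = rng.standard_normal((n_pix, n_train))  # training images (e.g., eigenimages)
V = F @ X                                  # simulated measurements
R = train_linear_reconstructor(X, V)
print((R @ V[:, 0]).shape)                 # reconstructed image vector
```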

  18. An efficient polyenergetic SART (pSART) reconstruction algorithm for quantitative myocardial CT perfusion

    Energy Technology Data Exchange (ETDEWEB)

    Lin, Yuan, E-mail: yuan.lin@duke.edu; Samei, Ehsan [Carl E. Ravin Advanced Imaging Laboratories, Duke University Medical Center, 2424 Erwin Road, Suite 302, Durham, North Carolina 27705 (United States)

    2014-02-15

    Purpose: In quantitative myocardial CT perfusion imaging, the beam hardening effect due to dense bone and high-concentration iodinated contrast agent can result in visible artifacts and inaccurate CT numbers. In this paper, an efficient polyenergetic Simultaneous Algebraic Reconstruction Technique (pSART) is presented to eliminate beam hardening artifacts and to improve the quantitative imaging ability of CT. Methods: Our algorithm makes three a priori assumptions: (1) the human body is composed of several base materials (e.g., fat, breast, soft tissue, bone, and iodine); (2) images can be coarsely segmented into two types of regions, i.e., non-bone regions and non-iodine regions; and (3) each voxel can be decomposed into a mixture of the two most suitable base materials according to its attenuation value and its corresponding region type information. Based on the above assumptions, energy-independent accumulated effective lengths of all base materials can be computed quickly in the forward ray-tracing process and used repeatedly to obtain accurate polyenergetic projections, with which a SART-based equation can correctly update each voxel in the backward projecting process to iteratively reconstruct artifact-free images. This approach effectively reduces the influence of polyenergetic x-ray sources, and it further enables monoenergetic images to be reconstructed at any arbitrarily preselected target energies. A series of simulation tests were performed on a size-variable cylindrical phantom and a realistic anthropomorphic thorax phantom. In addition, a phantom experiment was also performed on a clinical CT scanner to further quantitatively validate the proposed algorithm. Results: The simulations with the cylindrical phantom and the anthropomorphic thorax phantom showed that the proposed algorithm completely eliminated beam hardening artifacts and enabled quantitative imaging across different materials, phantom sizes, and spectra, as the absolute relative errors were reduced

  19. An efficient polyenergetic SART (pSART) reconstruction algorithm for quantitative myocardial CT perfusion

    International Nuclear Information System (INIS)

    Lin, Yuan; Samei, Ehsan

    2014-01-01

    Purpose: In quantitative myocardial CT perfusion imaging, the beam hardening effect due to dense bone and high-concentration iodinated contrast agent can result in visible artifacts and inaccurate CT numbers. In this paper, an efficient polyenergetic Simultaneous Algebraic Reconstruction Technique (pSART) is presented to eliminate beam hardening artifacts and to improve the quantitative imaging ability of CT. Methods: Our algorithm makes three a priori assumptions: (1) the human body is composed of several base materials (e.g., fat, breast, soft tissue, bone, and iodine); (2) images can be coarsely segmented into two types of regions, i.e., non-bone regions and non-iodine regions; and (3) each voxel can be decomposed into a mixture of the two most suitable base materials according to its attenuation value and its corresponding region type information. Based on the above assumptions, energy-independent accumulated effective lengths of all base materials can be computed quickly in the forward ray-tracing process and used repeatedly to obtain accurate polyenergetic projections, with which a SART-based equation can correctly update each voxel in the backward projecting process to iteratively reconstruct artifact-free images. This approach effectively reduces the influence of polyenergetic x-ray sources, and it further enables monoenergetic images to be reconstructed at any arbitrarily preselected target energies. A series of simulation tests were performed on a size-variable cylindrical phantom and a realistic anthropomorphic thorax phantom. In addition, a phantom experiment was also performed on a clinical CT scanner to further quantitatively validate the proposed algorithm. Results: The simulations with the cylindrical phantom and the anthropomorphic thorax phantom showed that the proposed algorithm completely eliminated beam hardening artifacts and enabled quantitative imaging across different materials, phantom sizes, and spectra, as the absolute relative errors were reduced

  20. SU-E-J-218: Evaluation of CT Images Created Using a New Metal Artifact Reduction Reconstruction Algorithm for Radiation Therapy Treatment Planning

    Energy Technology Data Exchange (ETDEWEB)

    Niemkiewicz, J; Palmiotti, A; Miner, M; Stunja, L; Bergene, J [Lehigh Valley Health Network, Allentown, PA (United States)

    2014-06-01

    Purpose: Metal in patients creates streak artifacts in CT images. When such images are used for radiation treatment planning, these artifacts make it difficult to identify internal structures and affect radiation dose calculations, which depend on HU numbers for inhomogeneity correction. This work quantitatively evaluates a new metal artifact reduction (MAR) CT image reconstruction algorithm (GE Healthcare CT-0521-04.13-EN-US DOC1381483) when metal is present. Methods: A Gammex Model 467 Tissue Characterization phantom was used. CT images were taken of this phantom on a GE Optima580RT CT scanner with and without steel and titanium plugs, using both the standard and MAR reconstruction algorithms. HU values were compared pixel by pixel to determine whether the MAR algorithm altered the HUs of normal tissues when no metal is present, and to evaluate the effect of using the MAR algorithm when metal is present. CT images of patients with internal metal objects reconstructed using the standard and MAR algorithms were also compared. Results: Comparing the standard and MAR reconstructed images of the phantom without metal, 95.0% of pixels were within ±35 HU and 98.0% of pixels were within ±85 HU. The MAR reconstruction algorithm also showed significant improvement in maintaining the HUs of non-metallic regions in the images taken of the phantom with metal. HU gamma analysis (2%, 2 mm) of metal vs. non-metal phantom imaging using standard reconstruction resulted in an 84.8% pass rate, compared to 96.6% for the MAR reconstructed images. CT images of patients with metal show significant artifact reduction when reconstructed with the MAR algorithm. Conclusion: CT imaging using the MAR reconstruction algorithm provides improved visualization of internal anatomy and more accurate HUs when metal is present, compared to the standard reconstruction algorithm. MAR reconstructed CT images provide qualitative and quantitative improvements over current reconstruction algorithms, thus improving radiation

  1. SU-E-J-218: Evaluation of CT Images Created Using a New Metal Artifact Reduction Reconstruction Algorithm for Radiation Therapy Treatment Planning

    International Nuclear Information System (INIS)

    Niemkiewicz, J; Palmiotti, A; Miner, M; Stunja, L; Bergene, J

    2014-01-01

    Purpose: Metal in patients creates streak artifacts in CT images. When used for radiation treatment planning, these artifacts make it difficult to identify internal structures and affect radiation dose calculations, which depend on HU numbers for inhomogeneity correction. This work quantitatively evaluates a new metal artifact reduction (MAR) CT image reconstruction algorithm (GE Healthcare CT-0521-04.13-EN-US DOC1381483) when metal is present. Methods: A Gammex Model 467 Tissue Characterization phantom was used. CT images were taken of this phantom on a GE Optima580RT CT scanner with and without steel and titanium plugs using both the standard and MAR reconstruction algorithms. HU values were compared pixel by pixel to determine if the MAR algorithm altered the HUs of normal tissues when no metal is present, and to evaluate the effect of using the MAR algorithm when metal is present. Also, CT images of patients with internal metal objects using standard and MAR reconstruction algorithms were compared. Results: Comparing the standard and MAR reconstructed images of the phantom without metal, 95.0% of pixels were within ±35 HU and 98.0% of pixels were within ±85 HU. Also, the MAR reconstruction algorithm showed significant improvement in maintaining HUs of non-metallic regions in the images taken of the phantom with metal. HU Gamma analysis (2%, 2mm) of metal vs. non-metal phantom imaging using standard reconstruction resulted in an 84.8% pass rate compared to 96.6% for the MAR reconstructed images. CT images of patients with metal show significant artifact reduction when reconstructed with the MAR algorithm. Conclusion: CT imaging using the MAR reconstruction algorithm provides improved visualization of internal anatomy and more accurate HUs when metal is present compared to the standard reconstruction algorithm. MAR reconstructed CT images provide qualitative and quantitative improvements over current reconstruction algorithms, thus improving radiation
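
    The pixel-by-pixel HU evaluation reported above reduces to counting the fraction of pixels whose difference falls within a tolerance. A tiny sketch with synthetic arrays standing in for the standard and MAR reconstructions:

```python
# Fraction of pixels agreeing within a given HU tolerance between two images.
import numpy as np

def fraction_within(img_a, img_b, tol_hu):
    """Fraction of pixels whose HU difference is within +/- tol_hu."""
    return np.mean(np.abs(img_a.astype(float) - img_b.astype(float)) <= tol_hu)

rng = np.random.default_rng(0)
standard = rng.normal(40, 30, size=(512, 512))           # fake HU map
mar = standard + rng.normal(0, 15, size=(512, 512))      # fake MAR reconstruction

for tol in (35, 85):
    print(f"within +/-{tol} HU: {100 * fraction_within(standard, mar, tol):.1f}%")
```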

  2. A Spectral Reconstruction Algorithm of Miniature Spectrometer Based on Sparse Optimization and Dictionary Learning

    Science.gov (United States)

    Zhang, Shang; Fu, Hongyan; Huang, Shao-Lun; Zhang, Lin

    2018-01-01

    The miniaturization of spectrometers can broaden the application area of spectrometry and has great academic and industrial value. Among various miniaturization approaches, filter-based miniaturization is a promising implementation that utilizes broadband filters with distinct transmission functions. Mathematically, filter-based spectral reconstruction can be modeled as solving a system of linear equations. In this paper, we propose a spectral reconstruction algorithm based on sparse optimization and dictionary learning. To verify the feasibility of the reconstruction algorithm, we design and implement a simple prototype of a filter-based miniature spectrometer. The experimental results demonstrate that sparse optimization is well applicable to spectral reconstruction whether the spectra are directly sparse or not. As for the non-directly sparse spectra, their sparsity can be enhanced by dictionary learning. In conclusion, the proposed approach is promising for the fabrication of practical miniature spectrometers. PMID:29470406
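
    Filter-based spectral reconstruction, as modeled above, is a linear system y = Ax with the spectrum x sparse in some dictionary. A hedged sketch, using a fixed DCT basis as a stand-in for the paper's learned dictionary and scikit-learn's Lasso for the sparse solve; all sizes and data are synthetic:

```python
# Sparse recovery of a spectrum from broadband filter measurements.
import numpy as np
from scipy.fftpack import idct
from sklearn.linear_model import Lasso

n_channels, n_filters = 128, 32
rng = np.random.default_rng(1)
A = rng.uniform(0.0, 1.0, size=(n_filters, n_channels))  # broadband filter responses
D = idct(np.eye(n_channels), norm="ortho")               # fixed analytic dictionary

coeffs = np.zeros(n_channels); coeffs[[3, 17, 40]] = [1.0, 0.6, 0.3]  # sparse truth
x_true = D @ coeffs
y = A @ x_true + 0.001 * rng.standard_normal(n_filters)

lasso = Lasso(alpha=1e-4, max_iter=50000).fit(A @ D, y)  # l1-regularized solve
x_hat = D @ lasso.coef_
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```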

  3. Metal-induced streak artifact reduction using iterative reconstruction algorithms in x-ray computed tomography image of the dentoalveolar region.

    Science.gov (United States)

    Dong, Jian; Hayakawa, Yoshihiko; Kannenberg, Sven; Kober, Cornelia

    2013-02-01

    The objective of this study was to reduce metal-induced streak artifacts in oral and maxillofacial x-ray computed tomography (CT) images by developing a fast statistical image reconstruction system using iterative reconstruction algorithms. Adjacent CT images often depict similar anatomical structures in thin slices, so, first, images were reconstructed using the same projection data as an artifact-free image. Second, images were processed by the successive iterative restoration method, in which projection data were generated from the reconstructed image in sequence. Besides the maximum likelihood-expectation maximization (ML-EM) algorithm, the ordered subset-expectation maximization (OS-EM) algorithm was examined. Small region of interest (ROI) settings and reverse processing were also applied to improve performance. Both algorithms reduced artifacts, at the expense of slightly decreased gray levels. The OS-EM algorithm and small ROI reduced the processing duration without apparent detriment. Sequential and reverse processing did not show apparent effects. Two alternatives in iterative reconstruction methods were effective for artifact reduction; the OS-EM algorithm and the small ROI setting improved the performance. Copyright © 2012 Elsevier Inc. All rights reserved.
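
    The OS-EM update referred to above is standard: each subset of projection rows applies a multiplicative correction to the current image estimate. A compact sketch on a toy system matrix (not a real CT geometry):

```python
# Ordered-subsets EM: cycle through projection subsets, each giving a
# multiplicative correction x <- x * (A_s^T (y_s / A_s x)) / (A_s^T 1).
import numpy as np

def os_em(A, y, n_subsets=4, n_iters=10):
    m, n = A.shape
    x = np.ones(n)
    subsets = np.array_split(np.arange(m), n_subsets)
    for _ in range(n_iters):
        for s in subsets:
            As = A[s]
            ratio = y[s] / np.maximum(As @ x, 1e-12)     # measured / estimated
            x *= (As.T @ ratio) / np.maximum(As.T @ np.ones(len(s)), 1e-12)
    return x

rng = np.random.default_rng(2)
A = rng.uniform(size=(60, 16))
x_true = rng.uniform(size=16)
x_hat = os_em(A, A @ x_true)
print(np.round(x_hat[:4], 3), np.round(x_true[:4], 3))
```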

  4. A framelet-based iterative maximum-likelihood reconstruction algorithm for spectral CT

    Science.gov (United States)

    Wang, Yingmei; Wang, Ge; Mao, Shuwei; Cong, Wenxiang; Ji, Zhilong; Cai, Jian-Feng; Ye, Yangbo

    2016-11-01

    Standard computed tomography (CT) cannot reproduce spectral information of an object. Hardware solutions include dual-energy CT which scans the object twice in different x-ray energy levels, and energy-discriminative detectors which can separate lower and higher energy levels from a single x-ray scan. In this paper, we propose a software solution and give an iterative algorithm that reconstructs an image with spectral information from just one scan with a standard energy-integrating detector. The spectral information obtained can be used to produce color CT images, spectral curves of the attenuation coefficient μ (r,E) at points inside the object, and photoelectric images, which are all valuable imaging tools in cancerous diagnosis. Our software solution requires no change on hardware of a CT machine. With the Shepp-Logan phantom, we have found that although the photoelectric and Compton components were not perfectly reconstructed, their composite effect was very accurately reconstructed as compared to the ground truth and the dual-energy CT counterpart. This means that our proposed method has an intrinsic benefit in beam hardening correction and metal artifact reduction. The algorithm is based on a nonlinear polychromatic acquisition model for x-ray CT. The key technique is a sparse representation of iterations in a framelet system. Convergence of the algorithm is studied. This is believed to be the first application of framelet imaging tools to a nonlinear inverse problem.

  5. An evolutionary algorithm for tomographic reconstructions in limited data sets problems

    International Nuclear Information System (INIS)

    Turcanu, Catrinel; Craciunescu, Teddy

    2000-01-01

    The paper proposes a new method for tomographic reconstruction. Unlike nuclear medicine applications, physical science problems are often confronted with limited data sets: constraints on the number of projections or limited angular views. The problem of image reconstruction from projections may be considered as the problem of finding an image (solution) whose projections match the experimental ones. In our approach, a statistical correlation coefficient is chosen to evaluate the fitness of any potential solution, and the optimization process is carried out by an evolutionary algorithm. Our algorithm has some problem-oriented characteristics. One of them is that a chromosome, representing a potential solution, is not linear but coded as a matrix of pixels corresponding to a two-dimensional image. This internal representation mirrors the original problem space: slight differences between two points in the problem space give rise to similarly slight differences once the points are coded. Another particular feature is a newly built crossover operator, the grid-based crossover, suitable for large two-dimensional chromosomes. Except for the population size and the dimension of the cutting grid for the grid-based crossover, all other parameters of the algorithm are independent of the geometry of the tomographic reconstruction. The performance of the method is evaluated in comparison with a traditional tomographic method based on maximization of the entropy of the image, which has proved to work well with limited data sets. The test phantom is typical of an application with limited data sets: the determination of neutron energy spectra with time resolution in the case of short-pulsed neutron emission. Both qualitative judgement and quantitative evaluation, based on several figures of merit, show that the proposed method ensures an improved reconstruction of shapes, sizes and resolution in the image, even in the presence of noise
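
    To make the evolutionary scheme concrete, here is a toy sketch in which candidate images are scored by the correlation coefficient between their projections and the measured ones. The paper's grid-based crossover is omitted for brevity, so this mutation-only version is an illustration of the idea, not the authors' algorithm:

```python
# Evolve candidate images so that their projections correlate with the
# measured projections; fitness = Pearson correlation coefficient.
import numpy as np

rng = np.random.default_rng(3)
x_true = np.zeros((8, 8)); x_true[2:6, 3:5] = 1.0
measured = np.concatenate([x_true.sum(0), x_true.sum(1)])   # two projection views

def fitness(img):
    proj = np.concatenate([img.sum(0), img.sum(1)])
    return np.corrcoef(proj, measured)[0, 1]

pop = [rng.uniform(size=(8, 8)) for _ in range(40)]
for gen in range(200):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]                                      # elitist selection
    pop = parents + [np.clip(p + rng.normal(0, 0.1, p.shape), 0, 1)
                     for p in parents for _ in range(3)]    # mutation only
print("best fitness:", fitness(max(pop, key=fitness)))
```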

  6. High performance graphics processor based computed tomography reconstruction algorithms for nuclear and other large scale applications.

    Energy Technology Data Exchange (ETDEWEB)

    Jimenez, Edward S. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Orr, Laurel J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Thompson, Kyle R. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2013-09-01

    The goal of this work is to develop a fast computed tomography (CT) reconstruction algorithm based on graphics processing units (GPU) that achieves significant improvement over traditional central processing unit (CPU) based implementations. The main challenge in developing a CT algorithm that is capable of handling very large datasets is parallelizing the algorithm in such a way that data transfer does not hinder performance of the reconstruction algorithm. General-purpose graphics processing unit (GPGPU) computing is a new technology that the Science and Technology (S&T) community is starting to adopt in many fields where CPU-based computing is the norm. GPGPU programming requires a new approach to algorithm development that utilizes massively multi-threaded environments. Multi-threaded algorithms in general are difficult to optimize since performance bottlenecks occur, such as memory latencies, that are non-existent in single-threaded algorithms. If an efficient GPU-based CT reconstruction algorithm can be developed, computational times could be improved by a factor of 20. Additionally, cost benefits will be realized as commodity graphics hardware could potentially replace expensive supercomputers and high-end workstations. This project will take advantage of the CUDA programming environment and attempt to parallelize the task in such a way that multiple slices of the reconstruction volume are computed simultaneously. This work will also take advantage of the GPU memory by utilizing asynchronous memory transfers, GPU texture memory, and (when possible) pinned host memory so that the memory transfer bottleneck inherent to GPGPU computing is amortized. Additionally, this work will take advantage of GPU-specific hardware (i.e., fast texture memory, pixel pipelines, hardware interpolators, and a varying memory hierarchy) that will allow for additional performance improvements.

  7. Reconstruction algorithms in the Super-Kamiokande large water Cherenkov detector

    CERN Document Server

    Shiozawa, M

    1999-01-01

    The Super-Kamiokande experiment, using a large underground water Cherenkov detector, began operation on 1 April 1996. One of the main physics goals of this experiment is to measure the atmospheric neutrinos; the proton decay search is also an important topic. For these analyses, all physical quantities of an event, such as vertex position, the number of Cherenkov rings, momentum, particle type and the number of decay electrons, are measured automatically by reconstruction algorithms. These algorithms provide sufficient quality for the analyses, and several impressive results have been reported.

  8. Reconstruction algorithms in the Super-Kamiokande large water Cherenkov detector

    International Nuclear Information System (INIS)

    Shiozawa, M.

    1999-01-01

    The Super-Kamiokande experiment, using a large underground water Cherenkov detector, began operation on 1 April 1996. One of the main physics goals of this experiment is to measure the atmospheric neutrinos; the proton decay search is also an important topic. For these analyses, all physical quantities of an event, such as vertex position, the number of Cherenkov rings, momentum, particle type and the number of decay electrons, are measured automatically by reconstruction algorithms. These algorithms provide sufficient quality for the analyses, and several impressive results have been reported

  9. An Algorithmic Approach for the Reconstruction of Nasal Skin Defects: Retrospective Analysis of 130 Cases

    Directory of Open Access Journals (Sweden)

    Berrak Akşam

    2016-06-01

    Objective: Most malignant cutaneous carcinomas are seen in the nasal region. Reconstruction of nasal defects is challenging because of the unique anatomic properties and complex structure of this region. In this study, we present our algorithm for nasal skin defects that occurred after malignant skin tumor excisions. Material and Methods: Patients whose nasal skin was reconstructed after malignant skin tumor excision were included in the study. These patients were evaluated by age, gender, comorbidities, tumor location, tumor size, reconstruction type, histopathological diagnosis, and tumor recurrence. Results: A total of 130 patients (70 female, 60 male) were evaluated. The average age of the patients was 67.8 years. Tumors were located mostly at the dorsum, alar region, and tip of the nose. When reconstruction methods were evaluated, primary closure was preferred in 14.6% of patients, full-thickness skin grafts were used in 25.3% of patients, and flap reconstruction was the choice in 60% of patients. Different flaps were used according to the subunits: mostly dorsal nasal flaps, bilobed flaps, nasolabial flaps, and forehead flaps. Conclusion: The defect-only reconstruction principle was accepted in this study. Previously described subunits of the nose, such as the dorsum, tip, alar region, lateral wall, columella, and soft triangles, were further divided into subregions by their anatomical relations, and an algorithm was planned with these subregions. In nasal skin reconstruction, this algorithm helps select the method that gives the best results and minimizes complications.

  10. Performance Comparison of Reconstruction Algorithms in Discrete Blind Multi-Coset Sampling

    DEFF Research Database (Denmark)

    Grigoryan, Ruben; Arildsen, Thomas; Tandur, Deepaknath

    2012-01-01

    This paper investigates the performance of different reconstruction algorithms in discrete blind multi-coset sampling. The multi-coset scheme is a promising compressed sensing architecture that can replace traditional Nyquist-rate sampling in applications with multi-band frequency-sparse signals...

  11. Dense Matching Comparison Between Census and a Convolutional Neural Network Algorithm for Plant Reconstruction

    Science.gov (United States)

    Xia, Y.; Tian, J.; d'Angelo, P.; Reinartz, P.

    2018-05-01

    3D reconstruction of plants is hard to implement, as the complex leaf distribution greatly increases the difficulty of dense matching. Semi-Global Matching has been successfully applied to recover the depth information of a scene, but may perform variably when different matching cost algorithms are used. In this paper two matching cost computation algorithms, the Census transform and an algorithm using a convolutional neural network, are tested for plant reconstruction based on Semi-Global Matching. High resolution close-range photogrammetric images from a handheld camera are used for the experiment. The disparity maps generated based on the two selected matching cost methods are comparable, with acceptable quality, which shows the good performance of Census and the potential of neural networks to improve dense matching.
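
    The Census cost mentioned above is easy to state precisely: each pixel is encoded by a bit string recording which neighbours are darker than the centre, and the matching cost between two patches is the Hamming distance of their bit strings. A short sketch (window radius is a free parameter):

```python
# Census transform and Hamming-distance matching cost.
import numpy as np

def census(img, r=2):
    """Census transform with a (2r+1)x(2r+1) window; returns per-pixel bit arrays."""
    h, w = img.shape
    centre = img[r:h - r, r:w - r]
    bits = []
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            if dy == 0 and dx == 0:
                continue
            shifted = img[r + dy:h - r + dy, r + dx:w - r + dx]
            bits.append(shifted < centre)               # neighbour darker than centre
    return np.stack(bits, axis=-1)                      # (H-2r, W-2r, (2r+1)^2 - 1)

def hamming_cost(census_a, census_b):
    return np.sum(census_a != census_b, axis=-1)        # per-pixel matching cost

rng = np.random.default_rng(4)
left = rng.uniform(size=(32, 32))
right = left + rng.normal(0, 0.02, left.shape)          # slightly perturbed view
print(hamming_cost(census(left), census(right)).mean())
```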

  12. X-ray dose reduction in abdominal computed tomography using advanced iterative reconstruction algorithms.

    Directory of Open Access Journals (Sweden)

    Peigang Ning

    OBJECTIVE: This work aims to explore the effects of the adaptive statistical iterative reconstruction (ASiR) and model-based iterative reconstruction (MBIR) algorithms in reducing computed tomography (CT) radiation dosage in abdominal imaging. METHODS: CT scans of a standard male phantom were performed at different tube currents. Images at the different tube currents were reconstructed with the filtered back-projection (FBP), 50% ASiR and MBIR algorithms and compared. The CT value, image noise and contrast-to-noise ratios (CNRs) of the reconstructed abdominal images were measured. Volumetric CT dose indexes (CTDIvol) were recorded. RESULTS: At different tube currents, 50% ASiR and MBIR significantly reduced image noise and increased the CNR when compared with FBP. The minimal tube current values required by FBP, 50% ASiR, and MBIR to achieve acceptable image quality using this phantom were 200, 140, and 80 mA, respectively. At identical image quality, 50% ASiR and MBIR reduced the radiation dose by 35.9% and 59.9% respectively when compared with FBP. CONCLUSIONS: Advanced iterative reconstruction techniques are able to reduce image noise and increase image CNRs. Compared with FBP, 50% ASiR and MBIR reduced radiation doses by 35.9% and 59.9%, respectively.
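
    The noise and CNR figures of merit used above have simple definitions: noise is the standard deviation within a uniform ROI, and CNR relates object-to-background contrast to that noise. A tiny sketch with arbitrary ROI coordinates on a synthetic image:

```python
# Image noise and contrast-to-noise ratio from rectangular ROIs.
import numpy as np

def noise(img, roi):                 # roi = (rows, cols) slices over a uniform area
    return img[roi].std()

def cnr(img, roi_obj, roi_bg):
    return abs(img[roi_obj].mean() - img[roi_bg].mean()) / img[roi_bg].std()

rng = np.random.default_rng(5)
img = rng.normal(0, 10, (256, 256))
img[100:140, 100:140] += 50          # fake low-contrast object
obj = (slice(100, 140), slice(100, 140))
bg = (slice(10, 50), slice(10, 50))
print("noise:", noise(img, bg), "CNR:", cnr(img, obj, bg))
```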

  13. Inverse Heat Conduction Methods in the CHAR Code for Aerothermal Flight Data Reconstruction

    Science.gov (United States)

    Oliver, A. Brandon; Amar, Adam J.

    2016-01-01

    Reconstruction of flight aerothermal environments often requires the solution of an inverse heat transfer problem, which is an ill-posed problem of determining boundary conditions from discrete measurements in the interior of the domain. This paper will present the algorithms implemented in the CHAR code for use in reconstruction of EFT-1 flight data and future testing activities. Implementation details will be discussed, and alternative hybrid-methods that are permitted by the implementation will be described. Results will be presented for a number of problems.

  14. Strategies of reconstruction algorithms for computerized tomography

    International Nuclear Information System (INIS)

    Garderet, P.

    1984-10-01

    Image reconstruction from projections has progressively spread over all fields of medical imaging. As the mathematical aspects of the problem have been explored more and more comprehensively, a great variety of numerical solutions has been developed, each best suited to a particular medical imaging application and taking into account the physical phenomena related to data collection (a priori properties of signal and noise). The purpose of this survey is to present the general mathematical frame and the fundamental assumptions of the various strategies. Fourier methods approximate an explicit deterministic inversion formula for the Radon transform. Algebraic reconstruction techniques set up an a priori discrete model through a series-expansion approach to the solution; the numerical system to be solved is huge when a fine grid of pixels is to be reconstructed, so iterative solutions may be used. Recently, some least-squares procedures that avoid the use of iterative methods have been shown to be tractable. Finally, the maximum likelihood approach accurately incorporates the Poisson nature of photon noise and is well adapted to emission computed tomography. The various strategies are analysed from the aspects both of the theoretical assumptions needed for suitable use and of computing facilities, actual performance and cost. In the end we take a glimpse at the extension of the algorithms from two-dimensional imaging to fully three-dimensional volume analysis in preparation for future medical imaging technologies

  15. Atomic force microscopy imaging and 3-D reconstructions of serial thin sections of a single cell and its interior structures

    International Nuclear Information System (INIS)

    Chen Yong; Cai Jiye; Zhao Tao; Wang Chenxi; Dong Shuo; Luo Shuqian; Chen, Zheng W.

    2005-01-01

    Thin sectioning has been widely applied in electron microscopy (EM) and successfully used for in situ observation of the inner ultrastructure of cells. This powerful technique has recently been extended to the research field of atomic force microscopy (AFM). However, there have been no reports describing AFM imaging of serial thin sections and three-dimensional (3-D) reconstruction of cells and their inner structures. In the present study, we used AFM to scan serial thin sections, approximately 60 nm thick, of a mouse embryonic stem (ES) cell and to observe the in situ inner ultrastructure, including the cell membrane, cytoplasm, mitochondria, nuclear membrane, and linear chromatin. High-magnification AFM imaging of single mitochondria clearly demonstrated the outer membrane, inner boundary membrane and cristal membrane of mitochondria in the cellular compartment. Importantly, AFM imaging of six serial thin sections of a single mouse ES cell showed that mitochondria underwent sequential changes in number, morphology and distribution. These nanoscale images allowed us to perform 3-D surface reconstruction of the interior structures of interest in cells. Based on the serial in situ images, 3-D models of the morphological characteristics, numbers and distributions of the interior structures of single ES cells were reconstructed and validated. Our results suggest that the combined AFM and serial-thin-section technique is useful for nanoscale imaging and 3-D reconstruction of single cells and their inner structures. This technique may facilitate studies of the proliferating and differentiating stages of stem cells or somatic cells at the nanoscale.

  16. Low dose CT reconstruction via L1 norm dictionary learning using alternating minimization algorithm and balancing principle.

    Science.gov (United States)

    Wu, Junfeng; Dai, Fang; Hu, Gang; Mou, Xuanqin

    2018-04-18

    Excessive radiation exposure in computed tomography (CT) scans increases the chance of developing cancer and has become a major clinical concern. Recently, statistical iterative reconstruction (SIR) with l0-norm dictionary learning regularization has been developed to reconstruct CT images from low-dose and few-view datasets in order to reduce the radiation dose. Nonetheless, the sparse regularization term adopted in this approach is the l0-norm, which cannot guarantee the global convergence of the algorithm. To address this problem, in this study we introduced the l1-norm dictionary learning penalty into the SIR framework for low-dose CT image reconstruction, and developed an alternating minimization algorithm to minimize the associated objective function, which transforms the CT image reconstruction problem into a sparse coding subproblem and an image updating subproblem. During the image updating process, an efficient model function approach based on the balancing principle is applied to choose the regularization parameters. The proposed alternating minimization algorithm was evaluated first using real projection data of a sheep lung CT perfusion and then using numerical simulations based on sheep lung and chest CT images. Both visual assessment and quantitative comparison in terms of the root mean square error (RMSE) and the structural similarity (SSIM) index demonstrated that the new image reconstruction algorithm yielded performance similar to the l0-norm dictionary learning penalty and outperformed the conventional filtered backprojection (FBP) and total variation (TV) minimization algorithms.
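
    The alternating scheme described above splits into a sparse coding subproblem and an image updating subproblem. A schematic sketch follows, using an orthonormal DCT as a stand-in for the learned dictionary (so the l1 sparse code has a closed-form soft-thresholding solution) and a plain gradient step for the image update; A, y and all sizes are synthetic:

```python
# Alternating minimization: (1) sparse-code the image in a dictionary,
# (2) gradient step on data fidelity plus proximity to the sparse code.
import numpy as np
from scipy.fftpack import dct, idct

def soft(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def dct2(x):
    return dct(dct(x, axis=0, norm="ortho"), axis=1, norm="ortho")

def idct2(c):
    return idct(idct(c, axis=0, norm="ortho"), axis=1, norm="ortho")

def reconstruct(A, y, shape, lam=0.02, mu=1.0, n_iters=200):
    x = np.zeros(shape)
    step = 1.0 / (np.linalg.norm(A, 2) ** 2 + mu)    # safe step for the smooth part
    for _ in range(n_iters):
        z = idct2(soft(dct2(x), lam))                # sparse coding subproblem
        grad = (A.T @ (A @ x.ravel() - y)).reshape(shape) + mu * (x - z)
        x = x - step * grad                          # image updating subproblem
    return x

rng = np.random.default_rng(6)
x_true = np.zeros((16, 16)); x_true[4:12, 6:10] = 1.0
A = rng.normal(size=(300, 256)) / 16                 # toy projection operator
x_hat = reconstruct(A, A @ x_true.ravel(), (16, 16))
print("RMSE:", np.sqrt(np.mean((x_hat - x_true) ** 2)))
```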

  17. A fast method to emulate an iterative POCS image reconstruction algorithm.

    Science.gov (United States)

    Zeng, Gengsheng L

    2017-10-01

    Iterative image reconstruction algorithms are commonly used to optimize an objective function, especially when the objective function is nonquadratic. Generally speaking, iterative algorithms are computationally inefficient. This paper presents a fast algorithm that has one backprojection and no forward projection. The paper derives a new method to solve an optimization problem in which a nonquadratic constraint, for example an edge-preserving denoising constraint, is implemented as a nonlinear filter. The algorithm is derived based on the POCS (projections onto convex sets) approach. A windowed FBP (filtered backprojection) algorithm enforces the data fidelity. An iterative procedure, divided into segments, enforces edge-enhancement denoising; each segment performs nonlinear filtering. The derived iterative algorithm is computationally efficient: it contains only one backprojection and no forward projection. Low-dose CT data are used for algorithm feasibility studies, with the nonlinearity implemented as an edge-enhancing noise-smoothing filter. The patient study results demonstrate its effectiveness in processing low-dose x-ray CT data. This fast algorithm can be used to replace many iterative algorithms. © 2017 American Association of Physicists in Medicine.
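
    The structure of the fast algorithm above, data fidelity enforced once by an FBP-type reconstruction followed by image-domain-only segments of nonlinear filtering, can be sketched generically. Here a median filter stands in for the edge-enhancing noise-smoothing filter, and the relaxation weight `blend` is a made-up parameter:

```python
# One FBP-type reconstruction (no further projections), then segments of
# nonlinear filtering relaxed toward the data-fidelity image.
import numpy as np
from scipy.ndimage import median_filter

def fast_pocs_emulation(fbp_image, n_segments=3, iters_per_segment=5, blend=0.7):
    x = fbp_image.copy()
    for _ in range(n_segments):
        for _ in range(iters_per_segment):
            # nonlinear denoising step, pulled back toward the FBP image
            x = blend * median_filter(x, size=3) + (1.0 - blend) * fbp_image
    return x

rng = np.random.default_rng(7)
phantom = np.zeros((64, 64)); phantom[20:44, 20:44] = 1.0
fbp = phantom + rng.normal(0, 0.2, phantom.shape)    # noisy "FBP" stand-in
print("std before/after:", fbp.std(), fast_pocs_emulation(fbp).std())
```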

  18. Missing texture reconstruction method based on error reduction algorithm using Fourier transform magnitude estimation scheme.

    Science.gov (United States)

    Ogawa, Takahiro; Haseyama, Miki

    2013-03-01

    A missing texture reconstruction method based on an error reduction (ER) algorithm, including a novel estimation scheme for Fourier transform magnitudes, is presented in this brief. In our method, the Fourier transform magnitude is estimated for a target patch including missing areas, and the missing intensities are estimated by retrieving its phase based on the ER algorithm. Specifically, by monitoring errors converged in the ER algorithm, known patches whose Fourier transform magnitudes are similar to that of the target patch are selected from the target image. Then, the Fourier transform magnitude of the target patch is estimated from those of the selected known patches and their corresponding errors. Consequently, by using the ER algorithm, we can estimate both the Fourier transform magnitudes and phases to reconstruct the missing areas.
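
    An ER iteration of the kind described alternates a Fourier-magnitude constraint with an image-domain constraint on the known pixels. A minimal sketch; for illustration the target magnitude is taken from the ground truth, whereas the method above estimates it from similar known patches:

```python
# Error-reduction inpainting: impose the Fourier magnitude, then re-impose
# the known pixels, and repeat.
import numpy as np

def er_inpaint(patch, known_mask, target_mag, n_iters=200):
    x = patch.copy()
    x[~known_mask] = patch[known_mask].mean()      # initial guess for missing area
    for _ in range(n_iters):
        F = np.fft.fft2(x)
        F = target_mag * np.exp(1j * np.angle(F))  # magnitude constraint
        x = np.real(np.fft.ifft2(F))
        x[known_mask] = patch[known_mask]          # image-domain constraint
    return x

truth = (np.cos(np.arange(32)[:, None] / 3.0)
         + np.cos(np.arange(32)[None, :] / 5.0))   # smooth "texture"
mask = np.ones((32, 32), bool); mask[12:20, 12:20] = False   # missing block
rec = er_inpaint(truth * mask, mask, np.abs(np.fft.fft2(truth)))
print("mean error on missing area:", np.abs((rec - truth)[~mask]).mean())
```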

  19. A multicenter evaluation of seven commercial ML-EM algorithms for SPECT image reconstruction using simulation data

    International Nuclear Information System (INIS)

    Matsumoto, Keiichi; Ohnishi, Hideo; Niida, Hideharu; Nishimura, Yoshihiro; Wada, Yasuhiro; Kida, Tetsuo

    2003-01-01

    The maximum likelihood expectation maximization (ML-EM) algorithm has become available as an alternative to filtered back projection in SPECT. The actual physical performance may differ depending on the manufacturer and model because of differences in computational details. The purpose of this study was to investigate the characteristics of seven different ML-EM algorithms using simple simulation data. Seven ML-EM algorithm programs were used: Genie (GE), esoft (Siemens), HARP-III (Hitachi), GMS-5500UI (Toshiba), Pegasys (ADAC), ODYSSEY-FX (Marconi), and Windows-PC (original software). Projection data of a 2-pixel-wide line source in the center of the field of view were simulated without attenuation or scatter. Images were reconstructed with ML-EM by changing the number of iterations from 1 to 45 for each algorithm. Image quality was evaluated after reconstruction using the full width at half maximum (FWHM), the full width at tenth maximum (FWTM), and the total counts of the reconstructed images. At the maximum number of iterations, the difference in the FWHM value was up to 1.5 pixels, and that in the FWTM no less than 2.0 pixels. The total counts of the reconstructed images in the first few iterations were larger or smaller than the converged value, depending on the initial values. Our results, even for the simplest simulation data, suggest that each ML-EM implementation yields its own characteristic image. We should keep in mind which algorithm is being used and its computational details when physical and clinical usefulness are compared. (author)
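
    The FWHM/FWTM figures compared above are measured on the reconstructed line-source profile. A small sketch that finds both widths by linear interpolation of the level crossings:

```python
# FWHM and FWTM of a 1D profile via sub-pixel interpolation of the
# half-maximum and tenth-maximum crossings.
import numpy as np

def full_width(profile, fraction):
    """Width (in pixels) of `profile` at `fraction` of its maximum."""
    level = fraction * profile.max()
    above = np.where(profile >= level)[0]
    left, right = above[0], above[-1]
    # interpolate both edges for sub-pixel accuracy
    l = left - (profile[left] - level) / (profile[left] - profile[left - 1])
    r = right + (profile[right] - level) / (profile[right] - profile[right + 1])
    return r - l

x = np.arange(64)
profile = np.exp(-0.5 * ((x - 32) / 2.0) ** 2)   # Gaussian line-source response
print("FWHM:", full_width(profile, 0.5), "FWTM:", full_width(profile, 0.1))
```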

  20. Algorithmic procedures for Bayesian MEG/EEG source reconstruction in SPM☆

    Science.gov (United States)

    López, J.D.; Litvak, V.; Espinosa, J.J.; Friston, K.; Barnes, G.R.

    2014-01-01

    The MEG/EEG inverse problem is ill-posed, giving different source reconstructions depending on the initial assumption sets. Parametric Empirical Bayes allows one to implement most popular MEG/EEG inversion schemes (Minimum Norm, LORETA, etc.) within the same generic Bayesian framework. It also provides a cost-function in terms of the variational Free energy—an approximation to the marginal likelihood or evidence of the solution. In this manuscript, we revisit the algorithm for MEG/EEG source reconstruction with a view to providing a didactic and practical guide. The aim is to promote and help standardise the development and consolidation of other schemes within the same framework. We describe the implementation in the Statistical Parametric Mapping (SPM) software package, carefully explaining each of its stages with the help of a simple simulated data example. We focus on the Multiple Sparse Priors (MSP) model, which we compare with the well-known Minimum Norm and LORETA models, using the negative variational Free energy for model comparison. The manuscript is accompanied by Matlab scripts to allow the reader to test and explore the underlying algorithm. PMID:24041874

  1. High-definition computed tomography for coronary artery stents imaging: Initial evaluation of the optimal reconstruction algorithm.

    Science.gov (United States)

    Cui, Xiaoming; Li, Tao; Li, Xin; Zhou, Weihua

    2015-05-01

    The aim of this study was to evaluate the in vivo performance of four image reconstruction algorithms on a high-definition CT (HDCT) scanner with improved spatial resolution for the evaluation of coronary artery stents and intrastent lumina. Thirty-nine consecutive patients with a total of 71 implanted coronary stents underwent coronary CT angiography (CCTA) on an HDCT (Discovery CT 750 HD; GE Healthcare) with the high-resolution scanning mode. Four different reconstruction algorithms (HD-stand, HD-detail, HD-stand-plus, HD-detail-plus) were applied to reconstruct the stented coronary arteries. Image quality for stent characterization was assessed. Image noise and intrastent luminal diameter were measured. The relationship between the measured inner stent diameter (ISD), the true stent diameter (TSD), and stent type was analysed. The stent-dedicated kernel (HD-detail) offered the highest percentage (53.5%) of good image quality for stent characterization and the highest ratio (68.0±8.4%) of visible stent lumen/true stent lumen for luminal diameter measurement, at the expense of increased overall image noise. The Pearson correlation coefficient between the ISD and TSD measurements and the Spearman correlation coefficient between the ISD measurement and stent type were 0.83 and 0.48, respectively. Compared with standard reconstruction algorithms, the high-definition CT imaging technique with a dedicated high-resolution reconstruction algorithm provides more accurate stent characterization and intrastent luminal diameter measurement. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  2. Development of a new prior knowledge based image reconstruction algorithm for the cone-beam-CT in radiation therapy

    International Nuclear Information System (INIS)

    Vaegler, Sven

    2016-01-01

    The treatment of cancer in radiation therapy is achievable today by techniques that enable highly conformal dose distributions and steep dose gradients. To avoid mistreatment, these irradiation techniques have necessitated enhanced patient localization techniques. With an x-ray tube integrated into a modern linear accelerator, kV projections can be acquired over a sufficiently large angular range and reconstructed into a volumetric image data set reflecting the current situation of the patient prior to irradiation. This so-called Cone-Beam CT (CBCT) allows precise verification of patient positioning as well as adaptive radiotherapy. The benefit of improved patient positioning due to daily CBCT scans is offset by an increased, non-negligible radiation exposure of the patient. To decrease the radiation exposure, substantial research effort is focused on various dose reduction strategies. Prominent strategies are decreasing the charge per projection, reducing the number of projections, and reducing the acquisition space. Unfortunately, these acquisition schemes lead to images of degraded quality with the widely used Feldkamp-Davis-Kress (FDK) image reconstruction algorithm. More sophisticated image reconstruction techniques can deal with these dose-reduction strategies without degrading the image quality. A frequently investigated method is image reconstruction by minimizing the total variation (TV), also known as Compressed Sensing (CS). A Compressed-Sensing-based reconstruction framework that includes prior images in the reconstruction algorithm is the Prior-Image-Constrained-Compressed-Sensing (PICCS) algorithm. The images reconstructed by PICCS outperform the reconstruction results of the conventional FDK-based method if only a small number of projections are available. However, a drawback of PICCS is that major deviations between prior image data sets and the
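
    For reference, the PICCS objective mentioned above is commonly written as the following constrained total-variation problem (a sketch of the usual formulation, with notation assumed here: x is the image, x_p the prior image, A the system matrix, y the projection data, and alpha in [0,1] the balance parameter):

```latex
% Commonly cited PICCS objective (sketch):
\min_{x \ge 0} \; \alpha \, \mathrm{TV}(x - x_p) + (1 - \alpha) \, \mathrm{TV}(x)
\quad \text{subject to} \quad A x = y
```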

  3. DENSE MATCHING COMPARISON BETWEEN CENSUS AND A CONVOLUTIONAL NEURAL NETWORK ALGORITHM FOR PLANT RECONSTRUCTION

    Directory of Open Access Journals (Sweden)

    Y. Xia

    2018-05-01

    3D reconstruction of plants is hard to implement, as the complex leaf distribution greatly increases the difficulty of dense matching. Semi-Global Matching has been successfully applied to recover the depth information of a scene, but may perform variably when different matching cost algorithms are used. In this paper two matching cost computation algorithms, the Census transform and an algorithm using a convolutional neural network, are tested for plant reconstruction based on Semi-Global Matching. High resolution close-range photogrammetric images from a handheld camera are used for the experiment. The disparity maps generated based on the two selected matching cost methods are comparable, with acceptable quality, which shows the good performance of Census and the potential of neural networks to improve dense matching.

  4. Array diagnostics, spatial resolution, and filtering of undesired radiation with the 3D reconstruction algorithm

    DEFF Research Database (Denmark)

    Cappellin, C.; Pivnenko, Sergey; Jørgensen, E.

    2013-01-01

    This paper focuses on three important features of the 3D reconstruction algorithm of DIATOOL: the identification of improperly functioning and failed array elements, the obtainable spatial resolution of the reconstructed fields and currents, and the filtering of undesired radiation and scattering...

  5. A stand-alone track reconstruction algorithm for the scintillating fibre tracker at the LHCb upgrade

    CERN Multimedia

    Quagliani, Renato

    2017-01-01

    The LHCb upgrade detector project foresees the presence of a scintillating fibre tracker (SciFi) to be used during LHC Run III, starting in 2020. The instantaneous luminosity will be increased up to $2\times10^{33}\,\mathrm{cm}^{-2}\mathrm{s}^{-1}$, five times larger than in Run II, and a full software event reconstruction will be performed at the full bunch crossing rate by the trigger. The new running conditions, and the tighter timing constraints in the software trigger, represent a big challenge for track reconstruction. This poster presents the design and performance of a novel algorithm that has been developed to reconstruct track segments using solely hits from the SciFi. This algorithm is crucial for the reconstruction of tracks originating from long-lived particles such as $K_{S}^{0}$ and $\Lambda$, and greatly enhances the physics potential and capabilities of the LHCb upgrade compared to its previous implementation.

  6. Improved adaptive genetic algorithm with sparsity constraint applied to thermal neutron CT reconstruction of two-phase flow

    Science.gov (United States)

    Yan, Mingfei; Hu, Huasi; Otake, Yoshie; Taketani, Atsushi; Wakabayashi, Yasuo; Yanagimachi, Shinzo; Wang, Sheng; Pan, Ziheng; Hu, Guang

    2018-05-01

    Thermal neutron computed tomography (CT) is a useful tool for visualizing two-phase flow due to its high imaging contrast and the strong penetration of neutrons through tube walls constructed of metallic material. A novel approach for two-phase flow CT reconstruction based on an improved adaptive genetic algorithm with sparsity constraint (IAGA-SC) is proposed in this paper. In the algorithm, the neighborhood mutation operator is used to ensure the continuity of the reconstructed object. The adaptive crossover probability P_c and mutation probability P_m are improved to help the adaptive genetic algorithm (AGA) achieve the global optimum. The reconstructed results for projection data obtained from Monte Carlo simulation indicate that the comprehensive performance of the IAGA-SC algorithm exceeds that of the adaptive steepest descent-projection onto convex sets (ASD-POCS) algorithm in restoring typical and complex flow regimes. It shows particular advantages in restoring simply connected flow regimes and the shapes of objects. In addition, a CT experiment on two-phase flow phantoms was conducted on an accelerator-driven neutron source to verify the performance of the developed IAGA-SC algorithm.

  7. Shape reconstruction from apparent contours theory and algorithms

    CERN Document Server

    Bellettini, Giovanni; Paolini, Maurizio

    2015-01-01

    Motivated by a variational model concerning the depth of the objects in a picture and the problem of hidden and illusory contours, this book investigates one of the central problems of computer vision: the topological and algorithmic reconstruction of a smooth three dimensional scene starting from the visible part of an apparent contour. The authors focus their attention on the manipulation of apparent contours using a finite set of elementary moves, which correspond to diffeomorphic deformations of three dimensional scenes. A large part of the book is devoted to the algorithmic part, with implementations, experiments, and computed examples. The book is intended also as a user's guide to the software code appcontour, written for the manipulation of apparent contours and their invariants. This book is addressed to theoretical and applied scientists working in the field of mathematical models of image segmentation.

  8. Numerical simulation and optimized design of cased telescoped ammunition interior ballistic

    Directory of Open Access Journals (Sweden)

    Jia-gang Wang

    2018-04-01

    In order to achieve an optimized cased telescoped ammunition (CTA) interior ballistic design, a genetic algorithm was coupled with the CTA interior ballistic model. Given the interior ballistic characteristics of a CTA gun, the goal of the design is to obtain a projectile muzzle velocity as large as possible. The optimal design of the CTA interior ballistics is carried out with the genetic algorithm by fixing the peak pressure and varying the chamber volume and gunpowder charge density. A numerical simulation of the interior ballistics based on a 35 mm CTA firing experimental scheme was conducted, and the genetic algorithm was then used for numerical optimization. The projectile muzzle velocity of the optimized scheme is increased from 1168 m/s for the initial experimental scheme to 1182 m/s. Four optimized schemes were then obtained from several independent optimization runs; the differences between the schemes are small, and their peak pressures and muzzle velocities are almost the same. The result shows that the genetic algorithm is effective in the optimal design of CTA interior ballistics. This work lays the foundation for further CTA interior ballistic design. Keywords: Cased telescoped ammunition, Interior ballistics, Gunpowder, Optimization genetic algorithm

  9. Track reconstruction algorithms for the CBM experiment at FAIR

    International Nuclear Information System (INIS)

    Lebedev, Andrey; Hoehne, Claudia; Kisel, Ivan; Ososkov, Gennady

    2010-01-01

    The Compressed Baryonic Matter (CBM) experiment at the future FAIR accelerator complex at Darmstadt is being designed for a comprehensive measurement of hadron and lepton production in heavy-ion collisions at 8-45 AGeV beam energy, producing events with large track multiplicity and high hit density. The setup consists of several detectors, including, as tracking detectors, the silicon tracking system (STS) and the muon detector (MUCH), or alternatively a set of Transition Radiation Detectors (TRD). In this contribution, the status of the track reconstruction software, including track finding, fitting and propagation, is presented for the MUCH and TRD detectors. The track propagation algorithm takes into account an inhomogeneous magnetic field and includes accurate calculation of multiple scattering and energy losses in the detector material. Track parameters and covariance matrices are estimated using the Kalman filter method and a Kalman filter modification that assigns weights to hits and uses simulated annealing. Three different track finder algorithms based on track following have been developed, which either allow for track branches, simply select the nearest hits, or use the mentioned weighting method. The track reconstruction efficiency for central Au+Au collisions at 25 AGeV beam energy using events from the UrQMD model is at the level of 93-95% for both detectors.
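
    The Kalman filter step underlying the track fit above is generic. In the real tracker the propagation runs through an inhomogeneous magnetic field and the process noise includes multiple scattering and energy loss; the sketch below uses simple constant stand-ins for F, H, Q and R:

```python
# One linear Kalman filter predict/update step for track-parameter estimation.
import numpy as np

def kalman_step(x, P, z, F, Q, H, R):
    # prediction (track propagation to the next detector plane)
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q            # Q models process noise, e.g. scattering
    # update with the measured hit z
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

F = np.array([[1.0, 1.0], [0.0, 1.0]])  # position-slope propagation between planes
H = np.array([[1.0, 0.0]])              # detectors measure position only
x, P = np.zeros(2), np.eye(2) * 10.0
for z in [0.9, 2.1, 2.9, 4.2]:          # fake hit positions, plane by plane
    x, P = kalman_step(x, P, np.array([z]), F, np.eye(2) * 1e-3, H, np.eye(1) * 0.1)
print("fitted position/slope:", x)
```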

  10. Implementation and evaluation of an ordered subsets reconstruction algorithm for transmission PET studies using median root prior and inter-update median filtering

    International Nuclear Information System (INIS)

    Bettinardi, V.; Gilardi, M.C.; Fazio, F.; Alenius, S.; Ruotsalainen, U.; Numminen, P.; Teraes, M.

    2003-01-01

    An ordered subsets (OS) reconstruction algorithm based on the median root prior (MRP) and inter-update median filtering was implemented for the reconstruction of low count statistics transmission (TR) scans. The OS-MRP-TR algorithm was evaluated using an experimental phantom, simulating positron emission tomography (PET) whole-body (WB) studies, as well as patient data. Various experimental conditions, in terms of TR scan time (from 1 h to 1 min), covering a wide range of TR count statistics were evaluated. The performance of the algorithm was assessed by comparing the mean value of the attenuation coefficient (MVAC) of known tissue types and the coefficient of variation (CV) for low-count TR images, reconstructed with the OS-MRP-TR algorithm, with reference values obtained from high-count TR images reconstructed with a filtered back-projection (FBP) algorithm. The reconstructed OS-MRP-TR images were then used for attenuation correction of the corresponding emission (EM) data. EM images reconstructed with attenuation correction generated from OS-MRP-TR images of low count statistics were compared with EM images corrected for attenuation using the reference (high statistics) TR data. In all the experimental situations considered, the OS-MRP-TR algorithm showed: (1) a tendency towards a stable solution in terms of MVAC; (2) a difference in the MVAC within 5% between a TR scan of 1 min reconstructed with OS-MRP-TR and a TR scan of 1 h reconstructed with the FBP algorithm; (3) effectiveness in noise reduction, particularly for low count statistics data [using a specific parameter configuration the TR images reconstructed with OS-MRP-TR(1 min) had a lower CV than the corresponding TR images of a 1-h scan reconstructed with the FBP algorithm]; (4) a difference within 3% between the mean counts in the EM images attenuation-corrected using the OS-MRP-TR images of 1 min and the mean counts in the EM images attenuation-corrected using the OS-MRP-TR images of 1 h; (5
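
    The MRP regularization used above admits a compact one-step-late form: the EM-type update is divided by a penalty term that pulls each voxel toward the median of its neighbourhood, which suppresses noise while preserving locally monotonic structures. A condensed sketch with a synthetic geometry (A, y and beta are stand-ins, not a real transmission model):

```python
# One-step-late median root prior: x <- x_EM / (1 + beta * (x - med(x)) / med(x)).
import numpy as np
from scipy.ndimage import median_filter

def mrp_em_iteration(x_img, A, y, beta=0.3):
    x = x_img.ravel()
    ratio = y / np.maximum(A @ x, 1e-12)
    em = x * (A.T @ ratio) / np.maximum(A.T @ np.ones(len(y)), 1e-12)  # EM update
    med = median_filter(x_img, size=3).ravel()                         # local median
    mrp = em / (1.0 + beta * (x - med) / np.maximum(med, 1e-12))
    return mrp.reshape(x_img.shape)

rng = np.random.default_rng(11)
x_true = np.ones((8, 8)) * 0.095; x_true[2:6, 2:6] = 0.15   # attenuation map
A = rng.uniform(size=(200, 64))                             # toy system matrix
x = np.ones((8, 8)) * 0.1
for _ in range(30):
    x = mrp_em_iteration(x, A, A @ x_true.ravel())
print("mean attenuation error:", np.abs(x - x_true).mean())
```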

  11. Conditional probability distribution associated to the E-M image reconstruction algorithm for neutron stimulated emission tomography

    International Nuclear Information System (INIS)

    Viana, R.S.; Yoriyaz, H.; Santos, A.

    2011-01-01

    The Expectation-Maximization (E-M) algorithm is an iterative computational method for maximum likelihood (M-L) estimates, useful in a variety of incomplete-data problems. Due to its stochastic nature, one of the most relevant applications of the E-M algorithm is the reconstruction of emission tomography images. In this paper, the statistical formulation of the E-M algorithm was applied to the in vivo spectrographic imaging of stable isotopes called Neutron Stimulated Emission Computed Tomography (NSECT). In the E-M iteration process, the conditional probability distribution plays a very important role in achieving high image quality. The present work proposes an alternative methodology for generating the conditional probability distribution associated with the E-M reconstruction algorithm, using the Monte Carlo code MCNP5 and applying the reciprocity theorem. (author)

  12. Conditional probability distribution associated to the E-M image reconstruction algorithm for neutron stimulated emission tomography

    Energy Technology Data Exchange (ETDEWEB)

    Viana, R.S.; Yoriyaz, H.; Santos, A., E-mail: rodrigossviana@gmail.com, E-mail: hyoriyaz@ipen.br, E-mail: asantos@ipen.br [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil)

    2011-07-01

    The Expectation-Maximization (E-M) algorithm is an iterative computational method for maximum likelihood (M-L) estimates, useful in a variety of incomplete-data problems. Due to its stochastic nature, one of the most relevant applications of the E-M algorithm is the reconstruction of emission tomography images. In this paper, the statistical formulation of the E-M algorithm was applied to the in vivo spectrographic imaging of stable isotopes called Neutron Stimulated Emission Computed Tomography (NSECT). In the E-M iteration process, the conditional probability distribution plays a very important role in achieving high image quality. The present work proposes an alternative methodology for generating the conditional probability distribution associated with the E-M reconstruction algorithm, using the Monte Carlo code MCNP5 and applying the reciprocity theorem. (author)

  13. A parallel implementation of a maximum entropy reconstruction algorithm for PET images in a visual language

    International Nuclear Information System (INIS)

    Bastiens, K.; Lemahieu, I.

    1994-01-01

    The application of a maximum entropy reconstruction algorithm to PET images requires considerable computing resources. A parallel implementation could seriously reduce the execution time. However, programming a parallel application is still a non-trivial task requiring specialized skills. In this paper a programming environment based on a visual programming language is used for a parallel implementation of the reconstruction algorithm. This programming environment allows less experienced programmers to exploit the performance of multiprocessor systems. (authors)

  14. A hybrid reconstruction algorithm for fast and accurate 4D cone-beam CT imaging.

    Science.gov (United States)

    Yan, Hao; Zhen, Xin; Folkerts, Michael; Li, Yongbao; Pan, Tinsu; Cervino, Laura; Jiang, Steve B; Jia, Xun

    2014-07-01

    4D cone beam CT (4D-CBCT) has been utilized in radiation therapy to provide 4D image guidance in the lung and upper abdomen area. However, clinical application of 4D-CBCT is currently limited by the long scan time and low image quality. The purpose of this paper is to develop a new 4D-CBCT reconstruction method that restores volumetric images based on the 1-min scan data acquired with a standard 3D-CBCT protocol. The model optimizes a deformation vector field that deforms a patient-specific planning CT (p-CT) so that the calculated 4D-CBCT projections match the measurements. A forward-backward splitting (FBS) method is devised to solve the optimization problem. It splits the original problem into two well-studied subproblems, i.e., image reconstruction and deformable image registration. By iteratively solving the two subproblems, FBS gradually yields correct deformation information while maintaining high image quality. The whole workflow is implemented on a graphics processing unit to improve efficiency. Comprehensive evaluations have been conducted on a moving phantom and three real patient cases regarding the accuracy and quality of the reconstructed images, as well as the algorithm's robustness and efficiency. The proposed algorithm reconstructs 4D-CBCT images from highly undersampled projection data acquired with 1-min scans. Regarding anatomical structure location accuracy, an average difference of 0.204 mm and a maximum difference of 0.484 mm are found for the phantom case, and maximum differences of 0.3-0.5 mm are observed for patients 1-3. As for image quality, intensity errors below 5 and 20 HU compared with the planning CT are achieved for the phantom and the patient cases, respectively. Signal-to-noise-ratio values are improved by 12.74 and 5.12 times compared with results from the FDK algorithm using the 1-min data and 4-min data, respectively. The computation time of the algorithm on an NVIDIA GTX590 card is 1-1.5 min per phase. High-quality 4D-CBCT imaging based

  15. A hybrid reconstruction algorithm for fast and accurate 4D cone-beam CT imaging

    Energy Technology Data Exchange (ETDEWEB)

    Yan, Hao; Folkerts, Michael; Jiang, Steve B., E-mail: xun.jia@utsouthwestern.edu, E-mail: steve.jiang@UTSouthwestern.edu; Jia, Xun, E-mail: xun.jia@utsouthwestern.edu, E-mail: steve.jiang@UTSouthwestern.edu [Department of Radiation Oncology, The University of Texas, Southwestern Medical Center, Dallas, Texas 75390 (United States); Zhen, Xin [Department of Biomedical Engineering, Southern Medical University, Guangzhou, Guangdong 510515 (China); Li, Yongbao [Department of Radiation Oncology, The University of Texas, Southwestern Medical Center, Dallas, Texas 75390 and Department of Engineering Physics, Tsinghua University, Beijing 100084 (China); Pan, Tinsu [Department of Imaging Physics, The University of Texas, MD Anderson Cancer Center, Houston, Texas 77030 (United States); Cervino, Laura [Department of Radiation Medicine and Applied Sciences, University of California San Diego, La Jolla, California 92093 (United States)

    2014-07-15

    Purpose: 4D cone beam CT (4D-CBCT) has been utilized in radiation therapy to provide 4D image guidance in the lung and upper abdomen area. However, clinical application of 4D-CBCT is currently limited by the long scan time and low image quality. The purpose of this paper is to develop a new 4D-CBCT reconstruction method that restores volumetric images based on the 1-min scan data acquired with a standard 3D-CBCT protocol. Methods: The model optimizes a deformation vector field that deforms a patient-specific planning CT (p-CT) so that the calculated 4D-CBCT projections match the measurements. A forward-backward splitting (FBS) method is devised to solve the optimization problem. It splits the original problem into two well-studied subproblems, i.e., image reconstruction and deformable image registration. By iteratively solving the two subproblems, FBS gradually yields correct deformation information while maintaining high image quality. The whole workflow is implemented on a graphics processing unit to improve efficiency. Comprehensive evaluations have been conducted on a moving phantom and three real patient cases regarding the accuracy and quality of the reconstructed images, as well as the algorithm's robustness and efficiency. Results: The proposed algorithm reconstructs 4D-CBCT images from highly undersampled projection data acquired with 1-min scans. Regarding anatomical structure location accuracy, an average difference of 0.204 mm and a maximum difference of 0.484 mm are found for the phantom case, and maximum differences of 0.3–0.5 mm are observed for patients 1–3. As for image quality, intensity errors below 5 and 20 HU compared with the planning CT are achieved for the phantom and the patient cases, respectively. Signal-to-noise-ratio values are improved by 12.74 and 5.12 times compared with results from the FDK algorithm using the 1-min data and 4-min data, respectively. The computation time of the algorithm on an NVIDIA GTX590 card is 1–1.5 min per phase
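
    Forward-backward splitting itself is a generic two-step iteration: a gradient (forward) step on the smooth data-fidelity term followed by a proximal (backward) step on the non-smooth term. The sketch below illustrates it with soft-thresholding as the prox, which is the standard ISTA example; in the paper above, the backward step is instead a deformable registration subproblem:

```python
# Generic forward-backward splitting for min ||Ax - y||^2 / 2 + lam * ||x||_1.
import numpy as np

def soft(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def fbs(A, y, lam=0.05, n_iters=1000):
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        x = x - step * (A.T @ (A @ x - y))   # forward step: data-fidelity gradient
        x = soft(x, step * lam)              # backward step: prox of the regularizer
    return x

rng = np.random.default_rng(9)
A = rng.normal(size=(80, 120))
x_true = np.zeros(120); x_true[[5, 50, 99]] = 1.0
x_hat = fbs(A, A @ x_true)
print("largest entries at:", np.sort(np.argsort(-np.abs(x_hat))[:3]))
```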

  16. A wavelet-based regularized reconstruction algorithm for SENSE parallel MRI with applications to neuroimaging

    International Nuclear Information System (INIS)

    Chaari, L.; Pesquet, J.Ch.; Chaari, L.; Ciuciu, Ph.; Benazza-Benyahia, A.

    2011-01-01

    To reduce scanning time and/or improve spatial/temporal resolution in some Magnetic Resonance Imaging (MRI) applications, parallel MRI acquisition techniques with multiple-coil acquisition have emerged since the early 1990s as powerful imaging methods that allow a faster acquisition process. In these techniques, the full-FOV image has to be reconstructed from the resulting undersampled k-space data. To this end, several reconstruction techniques have been proposed, such as the widely used Sensitivity Encoding (SENSE) method. However, the reconstructed image generally presents artifacts when perturbations occur in both the measured data and the estimated coil sensitivity profiles. In this paper, we aim at achieving accurate image reconstruction under degraded experimental conditions (low magnetic field and high reduction factor), in which neither the SENSE method nor Tikhonov regularization in the image domain gives convincing results. To this end, we present a novel method for SENSE-based reconstruction which proceeds with regularization in the complex wavelet domain by promoting sparsity. The proposed approach relies on a fast algorithm that enables the minimization of regularized non-differentiable criteria including more general penalties than a classical l1 term. To further enhance the reconstructed image quality, local convex constraints are added to the regularization process. In vivo human brain experiments carried out on Gradient-Echo (GRE) anatomical and Echo Planar Imaging (EPI) functional MRI data at 1.5 T indicate that our algorithm provides reconstructed images with reduced artifacts for high reduction factors. (authors)
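
    A basic wavelet-regularized SENSE-type reconstruction amounts to an ISTA-style loop: a gradient step on the coil-wise data fidelity followed by soft-thresholding of the wavelet coefficients. A schematic sketch with a toy undersampled-Fourier-times-sensitivity encoding (two fake coils, not a real SENSE implementation), using PyWavelets for the transform:

```python
# ISTA with a wavelet-domain l1 prox for a toy multi-coil MRI model.
import numpy as np
import pywt

def soft(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def wavelet_prox(img, t, wavelet="db2", level=2):
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    out = [coeffs[0]] + [tuple(soft(d, t) for d in band) for band in coeffs[1:]]
    return pywt.waverec2(out, wavelet)

rng = np.random.default_rng(10)
n = 32
img = np.zeros((n, n)); img[8:24, 10:22] = 1.0
sens = [np.linspace(0.5, 1.0, n)[None, :] * np.ones((n, n)),
        np.linspace(1.0, 0.5, n)[None, :] * np.ones((n, n))]   # two fake coil maps
mask = rng.uniform(size=(n, n)) < 0.4                          # k-space undersampling
data = [mask * np.fft.fft2(s * img) for s in sens]

x = np.zeros((n, n))
for _ in range(100):
    grad = np.zeros((n, n))
    for s, d in zip(sens, data):                               # coil-wise fidelity
        grad += s * np.real(np.fft.ifft2(mask * np.fft.fft2(s * x) - d))
    x = wavelet_prox(x - 0.5 * grad, 0.002)
print("RMSE:", np.sqrt(np.mean((x - img) ** 2)))
```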

  17. Application aspects of advanced antenna diagnostics with the 3D reconstruction algorithm

    DEFF Research Database (Denmark)

    Cappellin, Cecilia; Pivnenko, Sergey

    2015-01-01

    This paper focuses on two important applications of the 3D reconstruction algorithm of the commercial software DIATOOL for antenna diagnostics. The first one is the accurate and detailed identification of array malfunctioning, thanks to the available enhanced spatial resolution of the reconstructed fields and currents. The second one is the filtering of the scattering from support structures and feed network leakage. Representative experimental results are presented and guidelines on the recommended measurement parameters for obtaining the best diagnostics results are provided.

  18. Analytical algorithm for the generation of polygonal projection data for tomographic reconstruction

    International Nuclear Information System (INIS)

    Davis, G.R.

    1996-01-01

    Tomographic reconstruction algorithms and filters can be tested using a mathematical phantom, that is, a computer program which takes numerical data as its input and outputs derived projection data. The input data is usually in the form of pixel "densities" over a regular grid, or the positions and dimensions of simple geometrical objects. The former technique allows a greater variety of objects to be simulated, but is less suitable in the case when very small (relative to the ray-spacing) features are to be simulated. The second technique is normally used to simulate biological specimens, typically a human skull, modelled as a number of ellipses. This is not suitable for simulating non-biological specimens with features such as straight edges and fine cracks. We have therefore devised an algorithm for simulating objects described as a series of polygons. These polygons, or parts of them, may be smaller than the ray-spacing and there is no limit, except that imposed by computing resources, on the complexity, number or superposition of polygons. A simple test of such a phantom, reconstructed using the filtered back-projection method, revealed reconstruction artefacts not normally seen with "biological" phantoms. (orig.)
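
    The central operation of such a polygonal phantom is the line integral of a ray through a set of polygons: the intersection length with each polygon, weighted by its density. A brief sketch using shapely for the geometry (vertex lists and densities below are made up); note that overlapping polygons simply add, matching the superposition allowed above:

```python
# Projection value of one ray through density-weighted polygons.
from shapely.geometry import LineString, Polygon

def ray_projection(ray_pts, polygons):
    """polygons: list of (vertex_list, density); returns the line integral."""
    ray = LineString(ray_pts)
    return sum(density * ray.intersection(Polygon(verts)).length
               for verts, density in polygons)

square = ([(0, 0), (4, 0), (4, 4), (0, 4)], 1.0)          # unit-density square
sliver = ([(1, 0), (1.01, 0), (1.01, 4), (1, 4)], 2.0)    # crack-like thin polygon
print(ray_projection([(-1, 2), (5, 2)], [square, sliver]))  # horizontal ray: 4.02
```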

  19. A parallel algorithm for 3D particle tracking and Lagrangian trajectory reconstruction

    International Nuclear Information System (INIS)

    Barker, Douglas; Zhang, Yuanhui; Lifflander, Jonathan; Arya, Anshu

    2012-01-01

    Particle-tracking methods are widely used in fluid mechanics and multi-target tracking research because of their unique ability to reconstruct long trajectories with high spatial and temporal resolution. Researchers have recently demonstrated 3D tracking of several objects in real time, but as the number of objects is increased, real-time tracking becomes impossible due to data transfer and processing bottlenecks. This problem may be solved by using parallel processing. In this paper, a parallel-processing framework has been developed based on frame decomposition and is programmed using the asynchronous object-oriented Charm++ paradigm. This framework can be a key step in achieving a scalable Lagrangian measurement system for particle-tracking velocimetry and may lead to real-time measurement capabilities. The parallel tracking algorithm was evaluated with three data sets including the particle image velocimetry standard 3D images data set #352, a uniform data set for optimal parallel performance and a computational-fluid-dynamics-generated non-uniform data set to test trajectory reconstruction accuracy, consistency with the sequential version and scalability to more than 500 processors. The algorithm showed strong scaling up to 512 processors and no inherent limits of scalability were seen. Ultimately, up to a 200-fold speedup is observed compared to the serial algorithm when 256 processors were used. The parallel algorithm is adaptable and could be easily modified to use any sequential tracking algorithm, which inputs frames of 3D particle location data and outputs particle trajectories
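
    The record does not spell out the tracking step itself; purely to illustrate the kind of sequential per-frame work such a framework distributes, here is a toy greedy nearest-neighbour linker between two frames (the function name and the max_disp parameter are hypothetical, and real particle-tracking velocimetry uses far more robust cost functions).

      import numpy as np

      def link_frames(prev_pts, next_pts, max_disp):
          # Greedily link each particle to its nearest neighbour in the next
          # frame, most confident (closest) candidates first; a particle is
          # skipped if its nearest neighbour has already been claimed.
          d = np.linalg.norm(prev_pts[:, None, :] - next_pts[None, :, :], axis=2)
          links, taken = [], set()
          for i in np.argsort(d.min(axis=1)):
              j = int(np.argmin(d[i]))
              if d[i, j] <= max_disp and j not in taken:
                  links.append((int(i), j))
                  taken.add(j)
          return links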

  20. Implementation techniques and acceleration of DBPF reconstruction algorithm based on GPGPU for helical cone beam CT

    International Nuclear Information System (INIS)

    Shen Le; Xing Yuxiang

    2010-01-01

    The derivative back-projection filtered (DBPF) algorithm for helical cone-beam CT is a newly developed exact reconstruction method. Due to its large computational complexity, the reconstruction is rather slow for practical use. A general-purpose graphics processing unit (GPGPU) is a SIMD parallel hardware architecture with powerful floating-point capability. In this paper, we propose a new method for PI-line choice and sampling grid, and a parallelized PI-line reconstruction algorithm implemented on NVIDIA's Compute Unified Device Architecture (CUDA). Numerical simulation studies are carried out to validate our method. Compared with a conventional CPU implementation, the CUDA-accelerated method provides images of the same quality with a speedup factor of 318. Optimization strategies for the GPU acceleration are presented. Finally, the influence of the parameters of the PI-line samples on the reconstruction speed and image quality is discussed. (authors)

  1. A parallel implementation of a maximum entropy reconstruction algorithm for PET images in a visual language

    Energy Technology Data Exchange (ETDEWEB)

    Bastiens, K; Lemahieu, I [University of Ghent - ELIS Department, St. Pietersnieuwstraat 41, B-9000 Ghent (Belgium)

    1994-12-31

    The application of a maximum entropy reconstruction algorithm to PET images requires substantial computing resources. A parallel implementation could considerably reduce the execution time. However, programming a parallel application is still a non-trivial task, requiring specialized skills. In this paper a programming environment based on a visual programming language is used for a parallel implementation of the reconstruction algorithm. This programming environment allows less experienced programmers to exploit the performance of multiprocessor systems. (authors). 8 refs, 3 figs, 1 tab.

  2. Evaluation of noise and blur effects with SIRT-FISTA-TV reconstruction algorithm: Application to fast environmental transmission electron tomography.

    Science.gov (United States)

    Banjak, Hussein; Grenier, Thomas; Epicier, Thierry; Koneti, Siddardha; Roiban, Lucian; Gay, Anne-Sophie; Magnin, Isabelle; Peyrin, Françoise; Maxim, Voichita

    2018-06-01

    Fast tomography in Environmental Transmission Electron Microscopy (ETEM) is of great interest for in situ experiments, where it allows one to observe the real-time 3D evolution of nanomaterials under operating conditions. In this context, we are working on speeding up the acquisition step to a few seconds, mainly with applications on nanocatalysts. In order to accomplish such rapid acquisition of the required tilt series of projections, a modern 4K high-speed camera is used that can capture up to 100 images per second in 2K binning mode. However, due to the fast rotation of the sample during the tilt procedure, noise and blur effects may occur in many projections, which in turn would lead to poor-quality reconstructions. Blurred projections make classical reconstruction algorithms inappropriate and require the use of prior information. In this work, a regularized algebraic reconstruction algorithm named SIRT-FISTA-TV is proposed. The performance of this algorithm using blurred data is studied by means of a numerical blur introduced into simulated image series to mimic possible mechanical instabilities/drifts during fast acquisitions. We also present reconstruction results from noisy data to show the robustness of the algorithm to noise. Finally, we show reconstructions with experimental datasets and we demonstrate the interest of fast tomography with an ultra-fast acquisition performed under environmental conditions, i.e., gas and temperature, in the ETEM. Compared to the classically used SIRT and SART approaches, our proposed SIRT-FISTA-TV reconstruction algorithm provides higher-quality tomograms, allowing easier segmentation of the reconstructed volume for better final processing and analysis. Copyright © 2018 Elsevier B.V. All rights reserved.
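
    The paper's implementation is not reproduced in the abstract; the following is a schematic dense-matrix rendering of a SIRT-FISTA-TV iteration, with a SIRT-normalized data update, FISTA momentum, and an off-the-shelf TV denoiser standing in for the TV proximal step. The system matrix, weights and iteration counts are placeholder assumptions.

      import numpy as np
      from skimage.restoration import denoise_tv_chambolle  # TV proximal surrogate

      def sirt_fista_tv(A, b, shape, n_iter=100, tv_weight=0.05):
          # A: (n_rays, n_pix) projection matrix; b: measured projections.
          row = 1.0 / np.maximum(A.sum(axis=1), 1e-12)   # SIRT row normalization
          col = 1.0 / np.maximum(A.sum(axis=0), 1e-12)   # SIRT column normalization
          x = np.zeros(A.shape[1]); z = x.copy(); t = 1.0
          for _ in range(n_iter):
              x_new = z + col * (A.T @ (row * (b - A @ z)))          # SIRT step
              x_new = denoise_tv_chambolle(x_new.reshape(shape),
                                           weight=tv_weight).ravel() # TV step
              t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
              z = x_new + ((t - 1.0) / t_new) * (x_new - x)          # FISTA momentum
              x, t = x_new, t_new
          return x.reshape(shape)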

  3. Energy reconstruction and calibration algorithms for the ATLAS electromagnetic calorimeter

    CERN Document Server

    Delmastro, M

    2003-01-01

    The work of this thesis is devoted to the study, development and optimization of the algorithms of energy reconstruction and calibration for the electromagnetic calorimeter (EMC) of the ATLAS experiment, presently under installation and commissioning at the CERN Large Hadron Collider in Geneva (Switzerland). A deep study of the electrical characteristics of the detector and of the formation and propagation of the signals is conducted: an electrical model of the detector is developed and analyzed through simulations; a hardware model (mock-up) of a group of the EMC readout cells has been built, allowing the direct collection and study of the properties of the signals emerging from the EMC cells. We analyze the existing multiple-sampled signal reconstruction strategy, showing the need for an improvement in order to reach the advertised performance of the detector. The optimal filtering reconstruction technique is studied and implemented, taking into account the differences between the ionization and calibration waveforms as e...

  4. Algorithmic procedures for Bayesian MEG/EEG source reconstruction in SPM.

    Science.gov (United States)

    López, J D; Litvak, V; Espinosa, J J; Friston, K; Barnes, G R

    2014-01-01

    The MEG/EEG inverse problem is ill-posed, giving different source reconstructions depending on the initial assumption sets. Parametric Empirical Bayes allows one to implement most popular MEG/EEG inversion schemes (Minimum Norm, LORETA, etc.) within the same generic Bayesian framework. It also provides a cost-function in terms of the variational Free energy, an approximation to the marginal likelihood or evidence of the solution. In this manuscript, we revisit the algorithm for MEG/EEG source reconstruction with a view to providing a didactic and practical guide. The aim is to promote and help standardise the development and consolidation of other schemes within the same framework. We describe the implementation in the Statistical Parametric Mapping (SPM) software package, carefully explaining each of its stages with the help of a simple simulated data example. We focus on the Multiple Sparse Priors (MSP) model, which we compare with the well-known Minimum Norm and LORETA models, using the negative variational Free energy for model comparison. The manuscript is accompanied by Matlab scripts to allow the reader to test and explore the underlying algorithm. © 2013. Published by Elsevier Inc. All rights reserved.

  5. Interior near-field acoustical holography in flight.

    Science.gov (United States)

    Williams, E G; Houston, B H; Herdic, P C; Raveendra, S T; Gardner, B

    2000-10-01

    In this paper, boundary element methods (BEM) are mated with near-field acoustical holography (NAH) in order to determine the normal velocity over a large area of a fuselage of a turboprop airplane from a measurement of the pressure (hologram) on a concentric surface in the interior of the aircraft. This work represents the first time NAH has been applied in situ, in-flight. The normal fuselage velocity was successfully reconstructed at the blade passage frequency (BPF) of the propeller and its first two harmonics. This reconstructed velocity reveals structure-borne and airborne sound-transmission paths from the engine to the interior space.

  6. Quantitative analysis of emphysema in 3D using MDCT: Influence of different reconstruction algorithms

    International Nuclear Information System (INIS)

    Ley-Zaporozhan, Julia; Ley, Sebastian; Weinheimer, Oliver; Iliyushenko, Svitlana; Erdugan, Serap; Eberhardt, Ralf; Fuxa, Adelheid; Mews, Juergen; Kauczor, Hans-Ulrich

    2008-01-01

    Purpose: The aim of the study was to compare the influence of different reconstruction algorithms on quantitative emphysema analysis in patients with severe emphysema. Material and methods: Twenty-five patients suffering from severe emphysema were included in the study. All patients underwent inspiratory MDCT (Aquilion-16, slice thickness 1/0.8 mm). The raw data were reconstructed using six different algorithms: bone kernel with beam hardening correction (BHC), soft tissue kernel with BHC, standard soft tissue kernel, smooth soft tissue kernel (internal reference standard), standard lung kernel, and high-convolution kernel. The only difference between image data sets was the algorithm employed to reconstruct the raw data; no additional radiation was required. CT data were analysed using self-written emphysema detection and quantification software providing lung volume, emphysema volume (EV), emphysema index (EI) and mean lung density (MLD). Results: The use of kernels with BHC led to a significant decrease in MLD (5%) and EI (61-79%) in comparison with kernels without BHC. The absolute differences (from the smooth soft tissue kernel) in MLD ranged from -0.6 to -6.1 HU and were significantly different for all kernels. The EV showed absolute differences between -0.05 and -0.4 L and was significantly different for all kernels. The EI showed absolute differences between -0.8 and -5.1 and was significantly different for all kernels. Conclusion: The use of kernels with BHC led to a significant decrease in MLD and EI. The absolute differences between different kernels without BHC were small, but they were larger than the known interscan variation in patients. Thus, for follow-up examinations the same reconstruction algorithm has to be used and the use of BHC has to be avoided
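
    The paper's self-written software is not public; the snippet below simply states the standard definitions of the reported indices, computed from a segmented lung in Hounsfield units. The -950 HU emphysema threshold is a common convention assumed here, not a value taken from the paper.

      import numpy as np

      def emphysema_metrics(hu, lung_mask, voxel_volume_ml, threshold=-950.0):
          lung = hu[lung_mask]
          mld = lung.mean()                              # mean lung density [HU]
          emph = lung < threshold                        # emphysematous voxels
          ev = emph.sum() * voxel_volume_ml / 1000.0     # emphysema volume [L]
          ei = 100.0 * emph.mean()                       # emphysema index [%]
          lv = lung.size * voxel_volume_ml / 1000.0      # lung volume [L]
          return lv, ev, ei, mld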

  7. A reconstruction algorithm for electrical impedance tomography based on sparsity regularization

    KAUST Repository

    Jin, Bangti

    2011-08-24

    This paper develops a novel sparse reconstruction algorithm for the electrical impedance tomography problem of determining a conductivity parameter from boundary measurements. The sparsity of the 'inhomogeneity' with respect to a certain basis is a priori assumed. The proposed approach is motivated by a Tikhonov functional incorporating a sparsity-promoting ℓ1-penalty term, and it allows us to obtain quantitative results when the assumption is valid. A novel iterative algorithm of soft shrinkage type is proposed. Numerical results for several two-dimensional problems with both single and multiple convex and nonconvex inclusions are presented to illustrate the features of the proposed algorithm and are compared with a conventional approach based on smoothness regularization. © 2011 John Wiley & Sons, Ltd.

  8. Influence of radiation dose and reconstruction algorithm in MDCT assessment of airway wall thickness: A phantom study

    International Nuclear Information System (INIS)

    Gomez-Cardona, Daniel; Nagle, Scott K.; Li, Ke; Chen, Guang-Hong; Robinson, Terry E.

    2015-01-01

    Purpose: Wall thickness (WT) is an airway feature of great interest for the assessment of morphological changes in the lung parenchyma. Multidetector computed tomography (MDCT) has recently been used to evaluate airway WT, but the potential risk of radiation-induced carcinogenesis—particularly in younger patients—might limit a wider use of this imaging method in clinical practice. The recent commercial implementation of the statistical model-based iterative reconstruction (MBIR) algorithm, instead of the conventional filtered back projection (FBP) algorithm, has enabled considerable radiation dose reduction in many other clinical applications of MDCT. The purpose of this work was to study the impact of radiation dose and MBIR in the MDCT assessment of airway WT. Methods: An airway phantom was scanned using a clinical MDCT system (Discovery CT750 HD, GE Healthcare) at 4 kV levels and 5 mAs levels. Both FBP and a commercial implementation of MBIR (Veo™, GE Healthcare) were used to reconstruct CT images of the airways. For each kV–mAs combination and each reconstruction algorithm, the contrast-to-noise ratio (CNR) of the airways was measured, and the WT of each airway was measured and compared with the nominal value; the relative bias and the angular standard deviation in the measured WT were calculated. For each airway and reconstruction algorithm, the overall performance of WT quantification across all of the 20 kV–mAs combinations was quantified by the sum of squares (SSQs) of the difference between the measured and nominal WT values. Finally, the particular kV–mAs combination and reconstruction algorithm that minimized radiation dose while still achieving a reference WT quantification accuracy level was chosen as the optimal acquisition and reconstruction settings. Results: The wall thicknesses of seven airways of different sizes were analyzed in the study. Compared with FBP, MBIR improved the CNR of the airways, particularly at low radiation dose

  9. Influence of radiation dose and reconstruction algorithm in MDCT assessment of airway wall thickness: A phantom study

    Energy Technology Data Exchange (ETDEWEB)

    Gomez-Cardona, Daniel [Department of Medical Physics, University of Wisconsin-Madison School of Medicine and Public Health, 1111 Highland Avenue, Madison, Wisconsin 53705 (United States); Nagle, Scott K. [Department of Medical Physics, University of Wisconsin-Madison School of Medicine and Public Health, 1111 Highland Avenue, Madison, Wisconsin 53705 (United States); Department of Radiology, University of Wisconsin-Madison School of Medicine and Public Health, 600 Highland Avenue, Madison, Wisconsin 53792 (United States); Department of Pediatrics, University of Wisconsin-Madison School of Medicine and Public Health, 600 Highland Avenue, Madison, Wisconsin 53792 (United States); Li, Ke; Chen, Guang-Hong, E-mail: gchen7@wisc.edu [Department of Medical Physics, University of Wisconsin-Madison School of Medicine and Public Health, 1111 Highland Avenue, Madison, Wisconsin 53705 (United States); Department of Radiology, University of Wisconsin-Madison School of Medicine and Public Health, 600 Highland Avenue, Madison, Wisconsin 53792 (United States); Robinson, Terry E. [Department of Pediatrics, Stanford School of Medicine, 770 Welch Road, Palo Alto, California 94304 (United States)

    2015-10-15

    Purpose: Wall thickness (WT) is an airway feature of great interest for the assessment of morphological changes in the lung parenchyma. Multidetector computed tomography (MDCT) has recently been used to evaluate airway WT, but the potential risk of radiation-induced carcinogenesis—particularly in younger patients—might limit a wider use of this imaging method in clinical practice. The recent commercial implementation of the statistical model-based iterative reconstruction (MBIR) algorithm, instead of the conventional filtered back projection (FBP) algorithm, has enabled considerable radiation dose reduction in many other clinical applications of MDCT. The purpose of this work was to study the impact of radiation dose and MBIR in the MDCT assessment of airway WT. Methods: An airway phantom was scanned using a clinical MDCT system (Discovery CT750 HD, GE Healthcare) at 4 kV levels and 5 mAs levels. Both FBP and a commercial implementation of MBIR (Veo™, GE Healthcare) were used to reconstruct CT images of the airways. For each kV–mAs combination and each reconstruction algorithm, the contrast-to-noise ratio (CNR) of the airways was measured, and the WT of each airway was measured and compared with the nominal value; the relative bias and the angular standard deviation in the measured WT were calculated. For each airway and reconstruction algorithm, the overall performance of WT quantification across all of the 20 kV–mAs combinations was quantified by the sum of squares (SSQs) of the difference between the measured and nominal WT values. Finally, the particular kV–mAs combination and reconstruction algorithm that minimized radiation dose while still achieving a reference WT quantification accuracy level was chosen as the optimal acquisition and reconstruction settings. Results: The wall thicknesses of seven airways of different sizes were analyzed in the study. Compared with FBP, MBIR improved the CNR of the airways, particularly at low radiation dose

  10. Design of 4D x-ray tomography experiments for reconstruction using regularized iterative algorithms

    Science.gov (United States)

    Mohan, K. Aditya

    2017-10-01

    4D X-ray computed tomography (4D-XCT) is widely used to perform non-destructive characterization of time-varying physical processes in various materials. The conventional approach to improving temporal resolution in 4D-XCT involves the development of expensive and complex instrumentation that acquires data faster with reduced noise. It is customary to acquire data with many tomographic views at a high signal-to-noise ratio. Instead, temporal resolution can be improved using regularized iterative algorithms that are less sensitive to noise and limited views. These algorithms benefit from optimization of other parameters, such as the view sampling strategy, while improving temporal resolution by reducing the total number of views or the detector exposure time. This paper presents the design principles of 4D-XCT experiments when using regularized iterative algorithms derived within the framework of model-based reconstruction. A strategy for performing 4D-XCT experiments is presented that improves the temporal resolution by progressively reducing the number of views or the detector exposure time. A theoretical analysis of the effect of the data acquisition parameters on the detector signal-to-noise ratio, spatial reconstruction resolution, and temporal reconstruction resolution is also presented.
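
    The paper's specific sampling scheme is not given in the abstract; one concrete example of a view-sampling strategy compatible with progressively reducing the number of views is golden-ratio angular sampling, since any prefix of the sequence covers the angular range almost uniformly.

      import numpy as np

      def golden_angle_views(n_views, angular_range=180.0):
          # Successive views separated by the golden angle (about 111.25
          # degrees over a 180-degree range) stay near-uniform for any prefix,
          # so views can be dropped after the fact to raise temporal resolution.
          golden = angular_range * (np.sqrt(5.0) - 1.0) / 2.0
          return np.mod(np.arange(n_views) * golden, angular_range)

      print(np.sort(golden_angle_views(8)))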

  11. Quantum noise properties of CT images with anatomical textured backgrounds across reconstruction algorithms: FBP and SAFIRE

    Energy Technology Data Exchange (ETDEWEB)

    Solomon, Justin, E-mail: justin.solomon@duke.edu [Carl E. Ravin Advanced Imaging Laboratories, Department of Radiology, Duke University Medical Center, Durham, North Carolina 27705 (United States); Samei, Ehsan [Carl E. Ravin Advanced Imaging Laboratories, Department of Radiology, Duke University Medical Center, Durham, North Carolina 27705 and Departments of Biomedical Engineering and Electrical and Computer Engineering, Pratt School of Engineering, Duke University, Durham, North Carolina 27705 (United States)

    2014-09-15

    Purpose: Quantum noise properties of CT images are generally assessed using simple geometric phantoms with uniform backgrounds. Such phantoms may be inadequate when assessing nonlinear reconstruction or postprocessing algorithms. The purpose of this study was to design anatomically informed textured phantoms and use the phantoms to assess quantum noise properties across two clinically available reconstruction algorithms, filtered back projection (FBP) and sinogram affirmed iterative reconstruction (SAFIRE). Methods: Two phantoms were designed to represent lung and soft-tissue textures. The lung phantom included intricate vessel-like structures along with embedded nodules (spherical, lobulated, and spiculated). The soft tissue phantom was designed based on a three-dimensional clustered lumpy background with included low-contrast lesions (spherical and anthropomorphic). The phantoms were built using rapid prototyping (3D printing) technology and, along with a uniform phantom of similar size, were imaged on a Siemens SOMATOM Definition Flash CT scanner and reconstructed with FBP and SAFIRE. Fifty repeated acquisitions were acquired for each background type and noise was assessed by estimating pixel-value statistics, such as standard deviation (i.e., noise magnitude), autocorrelation, and noise power spectrum. Noise stationarity was also assessed by examining the spatial distribution of noise magnitude. The noise properties were compared across background types and between the two reconstruction algorithms. Results: In FBP and SAFIRE images, noise was globally nonstationary for all phantoms. In FBP images of all phantoms, and in SAFIRE images of the uniform phantom, noise appeared to be locally stationary (within a reasonably small region of interest). Noise was locally nonstationary in SAFIRE images of the textured phantoms with edge pixels showing higher noise magnitude compared to pixels in more homogenous regions. For pixels in uniform regions, noise magnitude was

  12. An electron tomography algorithm for reconstructing 3D morphology using surface tangents of projected scattering interfaces

    Science.gov (United States)

    Petersen, T. C.; Ringer, S. P.

    2010-03-01

    Upon discerning the mere shape of an imaged object, as portrayed by projected perimeters, the full three-dimensional scattering density may not be of particular interest. In this situation considerable simplifications to the reconstruction problem are possible, allowing calculations based upon geometric principles. Here we describe and provide an algorithm which reconstructs the three-dimensional morphology of specimens from tilt series of images for application to electron tomography. Our algorithm uses a differential approach to infer the intersection of projected tangent lines with surfaces which define boundaries between regions of different scattering densities within and around the perimeters of specimens. Details of the algorithm implementation are given and explained using reconstruction calculations from simulations, which are built into the code. An experimental application of the algorithm to a nano-sized aluminium tip is also presented to demonstrate practical analysis for a real specimen.
    Program summary
    Program title: STOMO version 1.0
    Catalogue identifier: AEFS_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEFS_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 2988
    No. of bytes in distributed program, including test data, etc.: 191 605
    Distribution format: tar.gz
    Programming language: C/C++
    Computer: PC
    Operating system: Windows XP
    RAM: depends upon the size of experimental data as input, ranging from 200 Mb to 1.5 Gb
    Supplementary material: sample output files, for the test run provided, are available
    Classification: 7.4, 14
    External routines: Dev-C++ (http://www.bloodshed.net/devcpp.html)
    Nature of problem: electron tomography of specimens for which conventional back projection may fail and/or data for which there is a limited angular

  13. Quantum noise properties of CT images with anatomical textured backgrounds across reconstruction algorithms: FBP and SAFIRE

    International Nuclear Information System (INIS)

    Solomon, Justin; Samei, Ehsan

    2014-01-01

    Purpose: Quantum noise properties of CT images are generally assessed using simple geometric phantoms with uniform backgrounds. Such phantoms may be inadequate when assessing nonlinear reconstruction or postprocessing algorithms. The purpose of this study was to design anatomically informed textured phantoms and use the phantoms to assess quantum noise properties across two clinically available reconstruction algorithms, filtered back projection (FBP) and sinogram affirmed iterative reconstruction (SAFIRE). Methods: Two phantoms were designed to represent lung and soft-tissue textures. The lung phantom included intricate vessel-like structures along with embedded nodules (spherical, lobulated, and spiculated). The soft tissue phantom was designed based on a three-dimensional clustered lumpy background with included low-contrast lesions (spherical and anthropomorphic). The phantoms were built using rapid prototyping (3D printing) technology and, along with a uniform phantom of similar size, were imaged on a Siemens SOMATOM Definition Flash CT scanner and reconstructed with FBP and SAFIRE. Fifty repeated acquisitions were acquired for each background type and noise was assessed by estimating pixel-value statistics, such as standard deviation (i.e., noise magnitude), autocorrelation, and noise power spectrum. Noise stationarity was also assessed by examining the spatial distribution of noise magnitude. The noise properties were compared across background types and between the two reconstruction algorithms. Results: In FBP and SAFIRE images, noise was globally nonstationary for all phantoms. In FBP images of all phantoms, and in SAFIRE images of the uniform phantom, noise appeared to be locally stationary (within a reasonably small region of interest). Noise was locally nonstationary in SAFIRE images of the textured phantoms with edge pixels showing higher noise magnitude compared to pixels in more homogenous regions. For pixels in uniform regions, noise magnitude was

  14. Novel automated inversion algorithm for temperature reconstruction using gas isotopes from ice cores

    Directory of Open Access Journals (Sweden)

    M. Döring

    2018-06-01

    Full Text Available Greenland's past temperature history can be reconstructed by forcing the output of a firn-densification and heat-diffusion model to fit multiple gas-isotope data (δ15N or δ40Ar or δ15Nexcess) extracted from ancient air in Greenland ice cores, using published accumulation-rate (Acc) datasets. We present here a novel methodology to solve this inverse problem, by designing a fully automated algorithm. To demonstrate the performance of this novel approach, we begin by intentionally constructing synthetic temperature histories and associated δ15N datasets, mimicking real Holocene data, that we use as true values (targets) to be compared to the output of the algorithm. This allows us to quantify uncertainties originating from the algorithm itself. The presented approach is completely automated and therefore minimizes the subjective impact of manual parameter tuning, leading to reproducible temperature estimates. In contrast to many other ice-core-based temperature reconstruction methods, the presented approach is completely independent of ice-core stable-water isotopes, providing the opportunity to validate water-isotope-based reconstructions or reconstructions where water isotopes are used together with δ15N or δ40Ar. We solve the inverse problem T(δ15N, Acc) by using a combination of a Monte Carlo based iterative approach and the analysis of remaining mismatches between modelled and target data, based on cubic-spline filtering of random numbers and the laboratory-determined temperature sensitivity for nitrogen isotopes. Additionally, the presented reconstruction approach was tested by fitting measured δ40Ar and δ15Nexcess data, which likewise led to a robust agreement between modelled and measured data. The obtained final mismatches follow a symmetric standard distribution function. For the study on synthetic data, 95 % of the mismatches compared to the synthetic target data are in an envelope between 3.0 and 6.3 permeg for δ15N and 0.23 to 0

  15. Optimization of the muon reconstruction algorithms for LHCb Run 2

    CERN Document Server

    Aaij, Roel; Dettori, Francesco; Dungs, Kevin; Lopes, Helder; Martinez Santos, Diego; Prisciandaro, Jessica; Sciascia, Barbara; Syropoulos, Vasileios; Stahl, Sascha; Vazquez Gomez, Ricardo

    2017-01-01

    The muon identification algorithm in the LHCb HLT software trigger and offline reconstruction has been revisited in view of the LHC Run 2. This software has undergone a significant refactorisation, resulting in a modularized common code base between the HLT and offline event processing. Because of the latter, the muon identification is now identical in HLT and offline. The HLT1 algorithm sequence has been updated given the new rate and timing constraints. Also, information from the TT subdetector is used in order to reduce ghost tracks and optimize for low $p_T$ muons. The current software is presented here together with performance studies showing improved efficiencies and reduced timing.

  16. The reconstruction algorithm used for [⁶⁸Ga]PSMA-HBED-CC PET/CT reconstruction significantly influences the number of detected lymph node metastases and coeliac ganglia

    International Nuclear Information System (INIS)

    Krohn, Thomas; Birmes, Anita; Winz, Oliver H.; Drude, Natascha I.; Mottaghy, Felix M.; Behrendt, Florian F.; Verburg, Frederik A.

    2017-01-01

    To investigate whether the numbers of lymph node metastases and coeliac ganglia delineated on [⁶⁸Ga]PSMA-HBED-CC PET/CT scans differ among datasets generated using different reconstruction algorithms. Data were constructed using the BLOB-OS-TF, BLOB-OS and 3D-RAMLA algorithms. All reconstructions were assessed by two nuclear medicine physicians for the number of pelvic/paraaortal lymph node metastases as well as the number of coeliac ganglia. Standardized uptake values (SUV) were also calculated in different regions. At least one [⁶⁸Ga]PSMA-HBED-CC PET/CT-positive pelvic or paraaortal lymph node metastasis was found in 49 and 35 patients using the BLOB-OS-TF algorithm, in 42 and 33 patients using the BLOB-OS algorithm, and in 41 and 31 patients using the 3D-RAMLA algorithm, respectively, and a positive ganglion was found in 92, 59 and 24 of 100 patients using the three algorithms, respectively. Quantitatively, the SUVmean and SUVmax were significantly higher with the BLOB-OS algorithm than with either the BLOB-OS-TF or the 3D-RAMLA algorithm in all measured regions (p < 0.001 for all comparisons). The differences between the SUVs with the BLOB-OS-TF- and 3D-RAMLA algorithms were not significant in the aorta (SUVmean, p = 0.93; SUVmax, p = 0.97) but were significant in all other regions (p < 0.001 in all cases). The SUVmean ganglion/gluteus ratio was significantly higher with the BLOB-OS-TF algorithm than with either the BLOB-OS or the 3D-RAMLA algorithm and was significantly higher with the BLOB-OS than with the 3D-RAMLA algorithm (p < 0.001 in all cases). The results of [⁶⁸Ga]PSMA-HBED-CC PET/CT are affected by the reconstruction algorithm used. The highest number of lesions and physiological structures will be visualized using a modern algorithm employing time-of-flight information. (orig.)

  17. The reconstruction algorithm used for [⁶⁸Ga]PSMA-HBED-CC PET/CT reconstruction significantly influences the number of detected lymph node metastases and coeliac ganglia

    Energy Technology Data Exchange (ETDEWEB)

    Krohn, Thomas [RWTH University Hospital Aachen, Department of Nuclear Medicine, Aachen (Germany); Ulm University, Department of Nuclear Medicine, Ulm (Germany); Birmes, Anita; Winz, Oliver H.; Drude, Natascha I. [RWTH University Hospital Aachen, Department of Nuclear Medicine, Aachen (Germany); Mottaghy, Felix M. [RWTH University Hospital Aachen, Department of Nuclear Medicine, Aachen (Germany); Maastricht UMC+, Department of Nuclear Medicine, Maastricht (Netherlands); Behrendt, Florian F. [RWTH University Hospital Aachen, Department of Nuclear Medicine, Aachen (Germany); Radiology Institute 'Aachen Land', Wuerselen (Germany); Verburg, Frederik A. [RWTH University Hospital Aachen, Department of Nuclear Medicine, Aachen (Germany); University Hospital Giessen and Marburg, Department of Nuclear Medicine, Marburg (Germany)

    2017-04-15

    To investigate whether the numbers of lymph node metastases and coeliac ganglia delineated on [⁶⁸Ga]PSMA-HBED-CC PET/CT scans differ among datasets generated using different reconstruction algorithms. Data were constructed using the BLOB-OS-TF, BLOB-OS and 3D-RAMLA algorithms. All reconstructions were assessed by two nuclear medicine physicians for the number of pelvic/paraaortal lymph node metastases as well as the number of coeliac ganglia. Standardized uptake values (SUV) were also calculated in different regions. At least one [⁶⁸Ga]PSMA-HBED-CC PET/CT-positive pelvic or paraaortal lymph node metastasis was found in 49 and 35 patients using the BLOB-OS-TF algorithm, in 42 and 33 patients using the BLOB-OS algorithm, and in 41 and 31 patients using the 3D-RAMLA algorithm, respectively, and a positive ganglion was found in 92, 59 and 24 of 100 patients using the three algorithms, respectively. Quantitatively, the SUVmean and SUVmax were significantly higher with the BLOB-OS algorithm than with either the BLOB-OS-TF or the 3D-RAMLA algorithm in all measured regions (p < 0.001 for all comparisons). The differences between the SUVs with the BLOB-OS-TF- and 3D-RAMLA algorithms were not significant in the aorta (SUVmean, p = 0.93; SUVmax, p = 0.97) but were significant in all other regions (p < 0.001 in all cases). The SUVmean ganglion/gluteus ratio was significantly higher with the BLOB-OS-TF algorithm than with either the BLOB-OS or the 3D-RAMLA algorithm and was significantly higher with the BLOB-OS than with the 3D-RAMLA algorithm (p < 0.001 in all cases). The results of [⁶⁸Ga]PSMA-HBED-CC PET/CT are affected by the reconstruction algorithm used. The highest number of lesions and physiological structures will be visualized using a modern algorithm employing time-of-flight information. (orig.)

  18. An Algorithmic Approach to Total Breast Reconstruction with Free Tissue Transfer

    Directory of Open Access Journals (Sweden)

    Seong Cheol Yu

    2013-05-01

    Full Text Available As microvascular techniques continue to improve, perforator flap free tissue transfer is now the gold standard for autologous breast reconstruction. Various options are available for breast reconstruction with autologous tissue. These include the free transverse rectus abdominis myocutaneous (TRAM flap, deep inferior epigastric perforator flap, superficial inferior epigastric artery flap, superior gluteal artery perforator flap, and transverse/vertical upper gracilis flap. In addition, pedicled flaps can be very successful in the right hands and the right patient, such as the pedicled TRAM flap, latissimus dorsi flap, and thoracodorsal artery perforator. Each flap comes with its own advantages and disadvantages related to tissue properties and donor-site morbidity. Currently, the problem is how to determine the most appropriate flap for a particular patient among those potential candidates. Based on a thorough review of the literature and accumulated experiences in the author’s institution, this article provides a logical approach to autologous breast reconstruction. The algorithms presented here can be helpful to customize breast reconstruction to individual patient needs.

  19. A Compressed Sensing-based Image Reconstruction Algorithm for Solar Flare X-Ray Observations

    Energy Technology Data Exchange (ETDEWEB)

    Felix, Simon; Bolzern, Roman; Battaglia, Marina, E-mail: simon.felix@fhnw.ch, E-mail: roman.bolzern@fhnw.ch, E-mail: marina.battaglia@fhnw.ch [University of Applied Sciences and Arts Northwestern Switzerland FHNW, 5210 Windisch (Switzerland)

    2017-11-01

    One way of imaging X-ray emission from solar flares is to measure Fourier components of the spatial X-ray source distribution. We present a new compressed sensing-based algorithm named VIS-CS, which reconstructs the spatial distribution from such Fourier components. We demonstrate the application of the algorithm on synthetic and observed solar flare X-ray data from the Reuven Ramaty High Energy Solar Spectroscopic Imager satellite and compare its performance with existing algorithms. VIS-CS produces competitive results with accurate photometry and morphology, without requiring any algorithm- and X-ray-source-specific parameter tuning. Its robustness and performance make this algorithm ideally suited for the generation of quicklook images or large image cubes without user intervention, such as for imaging spectroscopy analysis.

  20. A Compressed Sensing-based Image Reconstruction Algorithm for Solar Flare X-Ray Observations

    Science.gov (United States)

    Felix, Simon; Bolzern, Roman; Battaglia, Marina

    2017-11-01

    One way of imaging X-ray emission from solar flares is to measure Fourier components of the spatial X-ray source distribution. We present a new compressed sensing-based algorithm named VIS_CS, which reconstructs the spatial distribution from such Fourier components. We demonstrate the application of the algorithm on synthetic and observed solar flare X-ray data from the Reuven Ramaty High Energy Solar Spectroscopic Imager satellite and compare its performance with existing algorithms. VIS_CS produces competitive results with accurate photometry and morphology, without requiring any algorithm- and X-ray-source-specific parameter tuning. Its robustness and performance make this algorithm ideally suited for the generation of quicklook images or large image cubes without user intervention, such as for imaging spectroscopy analysis.

  1. Evaluation of imaging protocol for ECT based on CS image reconstruction algorithm

    International Nuclear Information System (INIS)

    Zhou Xiaolin; Yun Mingkai; Cao Xuexiang; Liu Shuangquan; Wang Lu; Huang Xianchao; Wei Long

    2014-01-01

    Single-photon emission computerized tomography and positron emission tomography are essential medical imaging tools, for which the sampling angle number and scan time should be carefully chosen to give a good compromise between image quality and radiopharmaceutical dose. In this study, the image quality of different acquisition protocols was evaluated via varied angle number and count number per angle with Monte Carlo simulation data. It was shown that, when similar imaging counts were used, the factor of acquisition counts was more important than that of the sampling number in emission computerized tomography. To further reduce the activity requirement and the scan duration, an iterative image reconstruction algorithm for limited-view and low-dose tomography based on compressed sensing theory has been developed. Total variation regularization was added to the reconstruction process to improve the signal-to-noise ratio and reduce artifacts caused by the limited-angle sampling. Maximization of the likelihood of the estimated image given the measured data and minimization of the total variation of the image are alternately implemented. By using this advanced algorithm, the reconstruction process is able to achieve image quality matching or exceeding that of normal scans with only half of the injected radiopharmaceutical dose. (authors)
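
    A minimal sketch of the alternation described above, one MLEM (likelihood) update followed by a total-variation denoising step, is given below; the dense system matrix, weight and iteration count are toy assumptions rather than the authors' settings.

      import numpy as np
      from skimage.restoration import denoise_tv_chambolle

      def em_tv(A, y, shape, n_iter=40, tv_weight=0.05):
          # A: (n_bins, n_vox) system matrix; y: measured counts.
          x = np.ones(A.shape[1])
          sens = np.maximum(A.sum(axis=0), 1e-12)        # per-voxel sensitivity
          for _ in range(n_iter):
              proj = A @ x
              ratio = np.where(proj > 0, y / proj, 0.0)
              x *= (A.T @ ratio) / sens                  # MLEM update
              x = denoise_tv_chambolle(x.reshape(shape),
                                       weight=tv_weight).ravel()  # TV update
              x = np.maximum(x, 0.0)                     # restore non-negativity
          return x.reshape(shape)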

  2. The effect of 18F-FDG-PET image reconstruction algorithms on the expression of characteristic metabolic brain network in Parkinson's disease.

    Science.gov (United States)

    Tomše, Petra; Jensterle, Luka; Rep, Sebastijan; Grmek, Marko; Zaletel, Katja; Eidelberg, David; Dhawan, Vijay; Ma, Yilong; Trošt, Maja

    2017-09-01

    To evaluate the reproducibility of the expression of Parkinson's Disease Related Pattern (PDRP) across multiple sets of 18F-FDG-PET brain images reconstructed with different reconstruction algorithms. 18F-FDG-PET brain imaging was performed in two independent cohorts of Parkinson's disease (PD) patients and normal controls (NC). The Slovenian cohort (20 PD patients, 20 NC) was scanned with a Siemens Biograph mCT camera and reconstructed using FBP, FBP+TOF, OSEM, OSEM+TOF, OSEM+PSF and OSEM+PSF+TOF. The American cohort (20 PD patients, 7 NC) was scanned with a GE Advance camera and reconstructed using 3DRP, FORE-FBP and FORE-Iterative. Expressions of two previously validated PDRP patterns (PDRP-Slovenia and PDRP-USA) were calculated. We compared the ability of PDRP to discriminate PD patients from NC, differences and correlation between the corresponding subject scores, and ROC analysis results across the different reconstruction algorithms. The expression of the PDRP-Slovenia and PDRP-USA networks was significantly elevated in PD patients compared to NC with all studied algorithms. PDRP expression strongly correlated between all studied algorithms and the reference algorithm (r⩾0.993). Subject scores across algorithms varied within 0.73 and 0.08 of the reference value for PDRP-Slovenia and PDRP-USA, respectively. ROC analysis confirmed high similarity in sensitivity, specificity and AUC among all studied reconstruction algorithms. These results show that the expression of PDRP is reproducible across a variety of reconstruction algorithms of 18F-FDG-PET brain images. PDRP is capable of providing a robust metabolic biomarker of PD for multicenter 18F-FDG-PET images acquired in the context of differential diagnosis or clinical trials. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
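
    Subject scores for a covariance pattern such as the PDRP are conventionally obtained by topographic profile rating: the log-transformed, demeaned scan is projected onto the pattern vector and the result is z-scored against the normal-control scores. A schematic version of that recipe follows (an assumed outline, not the study's pipeline):

      import numpy as np

      def pattern_expression(scan, pattern, mask):
          # Project a log-transformed, demeaned scan onto the pattern vector
          # (scan values are assumed positive within the brain mask).
          v = np.log(scan[mask])
          v = v - v.mean()
          return float(v @ pattern[mask])

      def z_scored_scores(scans, pattern, mask, control_scores):
          # Offset and scale raw scores by the normal-control distribution.
          raw = np.array([pattern_expression(s, pattern, mask) for s in scans])
          return (raw - np.mean(control_scores)) / np.std(control_scores)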

  3. Effect of different reconstruction algorithms on computer-aided diagnosis (CAD) performance in ultra-low dose CT colonography

    International Nuclear Information System (INIS)

    Lee, Eun Sun; Kim, Se Hyung; Im, Jong Pil; Kim, Sang Gyun; Shin, Cheong-il; Han, Joon Koo; Choi, Byung Ihn

    2015-01-01

    Highlights: •We assessed the effect of reconstruction algorithms on CAD in ultra-low dose CTC. •30 patients underwent ultra-low dose CTC using 120 and 100 kVp with 10 mAs. •CT was reconstructed with FBP, ASiR and Veo and then, we applied a CAD system. •Per-polyp sensitivity of CAD in ULD CT can be improved with the IR algorithms. •Despite of an increase in the number of FPs with IR, it was still acceptable. -- Abstract: Purpose: To assess the effect of different reconstruction algorithms on computer-aided diagnosis (CAD) performance in ultra-low-dose CT colonography (ULD CTC). Materials and methods: IRB approval and informed consents were obtained. Thirty prospectively enrolled patients underwent non-contrast CTC at 120 kVp/10 mAs in supine and 100 kVp/10 mAs in prone positions, followed by same-day colonoscopy. Images were reconstructed with filtered back projection (FBP), 80% adaptive statistical iterative reconstruction (ASIR80), and model-based iterative reconstruction (MBIR). A commercial CAD system was applied and per-polyp sensitivities and numbers of false-positives (FPs) were compared among algorithms. Results: Mean effective radiation dose of CTC was 1.02 mSv. Of 101 polyps detected and removed by colonoscopy, 61 polyps were detected on supine and on prone CTC datasets on consensus unblinded review, resulting in 122 visible polyps (32 polyps <6 mm, 52 6–9.9 mm, and 38 ≥ 10 mm). Per-polyp sensitivity of CAD for all polyps was highest with MBIR (56/122, 45.9%), followed by ASIR80 (54/122, 44.3%) and FBP (43/122, 35.2%), with significant differences between FBP and IR algorithms (P < 0.017). Per-polyp sensitivity for polyps ≥ 10 mm was also higher with MBIR (25/38, 65.8%) and ASIR80 (24/38, 63.2%) than with FBP (20/38, 58.8%), albeit without statistical significance (P > 0.017). Mean number of FPs was significantly different among algorithms (FBP, 1.4; ASIR, 2.1; MBIR, 2.4) (P = 0.011). Conclusion: Although the performance of stand-alone CAD

  4. Effect of different reconstruction algorithms on computer-aided diagnosis (CAD) performance in ultra-low dose CT colonography

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Eun Sun [Department of Radiology, Seoul National University Hospital (Korea, Republic of); Institute of Radiation Medicine, Seoul National University Hospital (Korea, Republic of); Kim, Se Hyung, E-mail: shkim7071@gmail.com [Department of Radiology, Seoul National University Hospital (Korea, Republic of); Institute of Radiation Medicine, Seoul National University Hospital (Korea, Republic of); Im, Jong Pil; Kim, Sang Gyun [Department of Internal Medicine, Seoul National University Hospital (Korea, Republic of); Shin, Cheong-il; Han, Joon Koo; Choi, Byung Ihn [Department of Radiology, Seoul National University Hospital (Korea, Republic of); Institute of Radiation Medicine, Seoul National University Hospital (Korea, Republic of)

    2015-04-15

    Highlights: •We assessed the effect of reconstruction algorithms on CAD in ultra-low dose CTC. •30 patients underwent ultra-low dose CTC using 120 and 100 kVp with 10 mAs. •CT was reconstructed with FBP, ASiR and Veo and then, we applied a CAD system. •Per-polyp sensitivity of CAD in ULD CT can be improved with the IR algorithms. •Despite of an increase in the number of FPs with IR, it was still acceptable. -- Abstract: Purpose: To assess the effect of different reconstruction algorithms on computer-aided diagnosis (CAD) performance in ultra-low-dose CT colonography (ULD CTC). Materials and methods: IRB approval and informed consents were obtained. Thirty prospectively enrolled patients underwent non-contrast CTC at 120 kVp/10 mAs in supine and 100 kVp/10 mAs in prone positions, followed by same-day colonoscopy. Images were reconstructed with filtered back projection (FBP), 80% adaptive statistical iterative reconstruction (ASIR80), and model-based iterative reconstruction (MBIR). A commercial CAD system was applied and per-polyp sensitivities and numbers of false-positives (FPs) were compared among algorithms. Results: Mean effective radiation dose of CTC was 1.02 mSv. Of 101 polyps detected and removed by colonoscopy, 61 polyps were detected on supine and on prone CTC datasets on consensus unblinded review, resulting in 122 visible polyps (32 polyps <6 mm, 52 6–9.9 mm, and 38 ≥ 10 mm). Per-polyp sensitivity of CAD for all polyps was highest with MBIR (56/122, 45.9%), followed by ASIR80 (54/122, 44.3%) and FBP (43/122, 35.2%), with significant differences between FBP and IR algorithms (P < 0.017). Per-polyp sensitivity for polyps ≥ 10 mm was also higher with MBIR (25/38, 65.8%) and ASIR80 (24/38, 63.2%) than with FBP (20/38, 58.8%), albeit without statistical significance (P > 0.017). Mean number of FPs was significantly different among algorithms (FBP, 1.4; ASIR, 2.1; MBIR, 2.4) (P = 0.011). Conclusion: Although the performance of stand-alone CAD

  5. Effect of filters and reconstruction algorithms on I-124 PET in Siemens Inveon PET scanner

    Science.gov (United States)

    Ram Yu, A.; Kim, Jin Su

    2015-10-01

    Purpose: To assess the effects of filtering and reconstruction on Siemens I-124 PET data. Methods: A Siemens Inveon PET scanner was used. The spatial resolution of I-124 was measured up to a transverse offset of 50 mm from the center. FBP, 2D ordered-subset expectation maximization (OSEM2D), the 3D re-projection algorithm (3DRP), and maximum a posteriori (MAP) methods were tested. Non-uniformity (NU), recovery coefficient (RC), and spillover ratio (SOR) parameterized image quality. Mini deluxe phantom data of I-124 were also assessed. Results: Volumetric resolution was 7.3 mm3 at the transverse FOV center when the FBP reconstruction algorithm with a ramp filter was used. MAP yielded minimal NU with β = 1.5. OSEM2D yielded maximal RC. SOR was below 4% for FBP with ramp, Hamming, Hanning, or Shepp-Logan filters. Based on the mini deluxe phantom results, FBP with Hanning or Parzen filters, or 3DRP with a Hanning filter, yielded feasible I-124 PET data. Conclusions: Reconstruction algorithms and filters were compared. FBP with Hanning or Parzen filters, or 3DRP with a Hanning filter, yielded feasible data for quantifying I-124 PET.
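
    The figures of merit named above are ratios over phantom regions of interest; the snippet below states one common set of definitions, assumed here since the abstract does not restate them (conventions vary, e.g., percent standard deviation is also used for NU).

      import numpy as np

      def non_uniformity(uniform_roi):
          # Percent NU over the uniform region of the phantom
          # (integral-uniformity convention; assumed).
          mx, mn = float(np.max(uniform_roi)), float(np.min(uniform_roi))
          return 100.0 * (mx - mn) / (mx + mn)

      def recovery_coefficient(rod_mean, true_concentration):
          # RC: measured hot-rod mean over the true activity concentration.
          return rod_mean / true_concentration

      def spillover_ratio(cold_mean, uniform_mean):
          # SOR: apparent activity in a cold insert over the warm background.
          return cold_mean / uniform_mean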

  6. Algorithmic aspects for the reconstruction of spatio-spectral data cubes in the perspective of the SKA

    Science.gov (United States)

    Mary, D.; Ferrari, A.; Ferrari, C.; Deguignet, J.; Vannier, M.

    2016-12-01

    With millions of receivers leading to terabyte-scale data cubes, the story of the giant SKA telescope is also that of collaborative efforts from radioastronomy, signal processing, optimization and computer science. Reconstructing SKA cubes poses two challenges. First, the majority of existing algorithms work in 2D and cannot be directly translated into 3D. Second, the reconstruction implies solving an inverse problem and it is not clear what ultimate limit we can expect on the error of this solution. This study addresses (of course partially) both challenges. We consider an extremely simple data acquisition model, and we focus on strategies making it possible to implement 3D reconstruction algorithms that use state-of-the-art image/spectral regularization. The proposed approach has two main features: (i) reduced memory storage with respect to a previous approach; (ii) efficient parallelization and ventilation of the computational load over the spectral bands. This work will make it possible to implement and compare various 3D reconstruction approaches in a large-scale framework.

  7. Principle and Reconstruction Algorithm for Atomic-Resolution Holography

    Science.gov (United States)

    Matsushita, Tomohiro; Muro, Takayuki; Matsui, Fumihiko; Happo, Naohisa; Hosokawa, Shinya; Ohoyama, Kenji; Sato-Tomita, Ayana; Sasaki, Yuji C.; Hayashi, Kouichi

    2018-06-01

    Atomic-resolution holography makes it possible to obtain the three-dimensional (3D) structure around a target atomic site. Translational symmetry of the atomic arrangement of the sample is not necessary, and the 3D atomic image can be measured when the local structure of the target atomic site is oriented. Therefore, 3D local atomic structures such as dopants and adsorbates are observable. Here, atomic-resolution holography, comprising photoelectron holography, X-ray fluorescence holography, neutron holography, and their inverse modes, is treated. Although the measurement methods differ, they can be handled with a unified theory. The algorithm for reconstructing 3D atomic images from holograms plays an important role. Although Fourier-transform-based methods have been proposed, they require multiple-energy holograms. In addition, they cannot be directly applied to photoelectron holography because of the phase shift problem. We have developed fitting-based methods for reconstruction from single-energy and photoelectron holograms. The developed methods are applicable to all types of atomic-resolution holography.

  8. A system for the 3D reconstruction of retracted-septa PET data using the EM algorithm

    International Nuclear Information System (INIS)

    Johnson, C.A.; Yan, Y.; Carson, R.E.; Martino, R.L.; Daube-Witherspoon, M.E.

    1995-01-01

    The authors have implemented the EM reconstruction algorithm for volume acquisition from current-generation retracted-septa PET scanners. Although the software was designed for a GE Advance scanner, it is easily adaptable to other 3D scanners. The reconstruction software was written for an Intel iPSC/860 parallel computer with 128 compute nodes. Running on 32 processors, the algorithm requires approximately 55 minutes per iteration to reconstruct a 128 x 128 x 35 image. No projection data compression schemes or other approximations were used in the implementation. Extensive use of EM system matrix (C_ij) symmetries (including the 8-fold in-plane symmetries, 2-fold axial symmetries, and axial parallel-line redundancies) reduces the storage cost by a factor of 188. The parallel algorithm operates on distributed projection data which are decomposed by base-symmetry angles. Symmetry operators copy and index the C_ij chord to the form required for the particular symmetry. The use of asynchronous reads, lookup tables, and optimized image indexing improves computational performance
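
    The record's key idea on the parallel side is operating on distributed projection data and reducing partial results. A toy process-pool analogue of one distributed EM iteration is sketched below; the dense matrix, chunking scheme and pool size are illustrative assumptions, not the iPSC/860 implementation.

      import numpy as np
      from concurrent.futures import ProcessPoolExecutor

      def _partial_backprojection(args):
          # One worker's share: backproject its chunk of the ratio sinogram.
          A_chunk, y_chunk, x = args
          proj = A_chunk @ x
          ratio = np.where(proj > 0, y_chunk / proj, 0.0)
          return A_chunk.T @ ratio

      def distributed_em_step(A, y, x, n_workers=4):
          # Split LORs into chunks, map partial backprojections, then reduce.
          # Call from under "if __name__ == '__main__':" on spawning platforms.
          chunks = np.array_split(np.arange(len(y)), n_workers)
          jobs = [(A[c], y[c], x) for c in chunks]
          with ProcessPoolExecutor(max_workers=n_workers) as pool:
              back = sum(pool.map(_partial_backprojection, jobs))
          return x * back / np.maximum(A.sum(axis=0), 1e-12)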

  9. A sparsity-based iterative algorithm for reconstruction of micro-CT images from highly undersampled projection datasets obtained with a synchrotron X-ray source

    Science.gov (United States)

    Melli, S. Ali; Wahid, Khan A.; Babyn, Paul; Cooper, David M. L.; Gopi, Varun P.

    2016-12-01

    Synchrotron X-ray Micro Computed Tomography (Micro-CT) is an imaging technique which is increasingly used for non-invasive in vivo preclinical imaging. However, it often requires a large number of projections from many different angles to reconstruct high-quality images leading to significantly high radiation doses and long scan times. To utilize this imaging technique further for in vivo imaging, we need to design reconstruction algorithms that reduce the radiation dose and scan time without reduction of reconstructed image quality. This research is focused on using a combination of gradient-based Douglas-Rachford splitting and discrete wavelet packet shrinkage image denoising methods to design an algorithm for reconstruction of large-scale reduced-view synchrotron Micro-CT images with acceptable quality metrics. These quality metrics are computed by comparing the reconstructed images with a high-dose reference image reconstructed from 1800 equally spaced projections spanning 180°. Visual and quantitative-based performance assessment of a synthetic head phantom and a femoral cortical bone sample imaged in the biomedical imaging and therapy bending magnet beamline at the Canadian Light Source demonstrates that the proposed algorithm is superior to the existing reconstruction algorithms. Using the proposed reconstruction algorithm to reduce the number of projections in synchrotron Micro-CT is an effective way to reduce the overall radiation dose and scan time which improves in vivo imaging protocols.
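
    Douglas-Rachford splitting itself has a simple reflect-average structure for minimizing a sum f + g; in the setting above, prox_f would play the role of the data-fidelity step and prox_g the wavelet packet shrinkage denoiser. The skeleton and demo below are textbook material, not the authors' code.

      import numpy as np

      def douglas_rachford(prox_f, prox_g, z0, gamma=1.0, n_iter=200):
          # Minimize f(x) + g(x) given the proximal operators of f and g.
          z = z0
          for _ in range(n_iter):
              x = prox_g(z, gamma)           # e.g., wavelet-packet shrinkage
              y = prox_f(2 * x - z, gamma)   # e.g., data-fidelity proximal step
              z = z + y - x                  # reflection/averaging update
          return prox_g(z, gamma)

      # Demo on min 0.5*||x - b||^2 + lam*||x||_1, whose solution is soft(b, lam):
      b, lam = np.array([3.0, -0.2, 0.1, -4.0]), 0.5
      prox_f = lambda z, g: (z + g * b) / (1.0 + g)
      prox_g = lambda z, g: np.sign(z) * np.maximum(np.abs(z) - g * lam, 0.0)
      print(douglas_rachford(prox_f, prox_g, np.zeros_like(b)))  # ~[2.5 0 0 -3.5]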

  10. Development of imaging and reconstructions algorithms on parallel processing architectures for applications in non-destructive testing

    International Nuclear Information System (INIS)

    Pedron, Antoine

    2013-01-01

    This thesis work is placed between the scientific domain of ultrasound non-destructive testing and algorithm-architecture matching. Ultrasound non-destructive testing includes a group of analysis techniques used in science and industry to evaluate the properties of a material, component, or system without causing damage. In order to characterise possible defects, determining their position, size and shape, imaging and reconstruction tools have been developed at CEA-LIST, within the CIVA software platform. The evolution of acquisition sensors implies a continuous growth of datasets, and consequently more and more computing power is needed to maintain interactive reconstructions. General-purpose processors (GPP) evolving towards parallelism and emerging architectures such as GPUs offer large acceleration possibilities that can be applied to these algorithms. The main goal of the thesis is to evaluate the acceleration that can be obtained for two reconstruction algorithms on these architectures. These two algorithms differ in their parallelization scheme. The first one can be properly parallelized on GPP, whereas on GPU an intensive use of atomic instructions is required. Within the second algorithm, parallelism is easier to express, but loop ordering on GPP, as well as thread scheduling and a good use of shared memory on GPU, are necessary in order to obtain efficient results. Different APIs or libraries, such as OpenMP, CUDA and OpenCL, are evaluated through chosen benchmarks. An integration of both algorithms in the CIVA software platform is proposed and different issues related to code maintenance and durability are discussed. (author)

  11. Evaluation of the OSC-TV iterative reconstruction algorithm for cone-beam optical CT.

    Science.gov (United States)

    Matenine, Dmitri; Mascolo-Fortin, Julia; Goussard, Yves; Després, Philippe

    2015-11-01

    The present work evaluates an iterative reconstruction approach, namely, the ordered subsets convex (OSC) algorithm with regularization via total variation (TV) minimization in the field of cone-beam optical computed tomography (optical CT). One of the uses of optical CT is gel-based 3D dosimetry for radiation therapy, where it is employed to map dose distributions in radiosensitive gels. Model-based iterative reconstruction may improve optical CT image quality and contribute to a wider use of optical CT in clinical gel dosimetry. This algorithm was evaluated using experimental data acquired by a cone-beam optical CT system, as well as complementary numerical simulations. A fast GPU implementation of OSC-TV was used to achieve reconstruction times comparable to those of conventional filtered backprojection. Images obtained via OSC-TV were compared with the corresponding filtered backprojections. Spatial resolution and uniformity phantoms were scanned and respective reconstructions were subject to evaluation of the modulation transfer function, image uniformity, and accuracy. The artifacts due to refraction and total signal loss from opaque objects were also studied. The cone-beam optical CT data reconstructions showed that OSC-TV outperforms filtered backprojection in terms of image quality, thanks to a model-based simulation of the photon attenuation process. It was shown to significantly improve the image spatial resolution and reduce image noise. The accuracy of the estimation of linear attenuation coefficients remained similar to that obtained via filtered backprojection. Certain image artifacts due to opaque objects were reduced. Nevertheless, the common artifact due to the gel container walls could not be eliminated. The use of iterative reconstruction improves cone-beam optical CT image quality in many ways. The comparisons between OSC-TV and filtered backprojection presented in this paper demonstrate that OSC-TV can potentially improve the rendering of
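
    The TV-minimization half of such a hybrid can be sketched as a single explicit descent step on a smoothed total-variation objective; this is a generic formulation, not the paper's exact regularization pass, and the step size and smoothing constant are illustrative.

```python
import numpy as np

def tv_step(img, step=0.1, eps=1e-8):
    # One explicit descent step on smoothed TV, the kind of regularization
    # pass OSC-TV interleaves between ordered-subsets updates.
    gx = np.roll(img, -1, axis=0) - img          # forward differences
    gy = np.roll(img, -1, axis=1) - img
    mag = np.sqrt(gx ** 2 + gy ** 2 + eps)
    px, py = gx / mag, gy / mag
    div = (px - np.roll(px, 1, axis=0)) + (py - np.roll(py, 1, axis=1))
    return img + step * div        # -div(grad/|grad|) is the TV gradient
```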

  12. Influence of radiation dose and iterative reconstruction algorithms for measurement accuracy and reproducibility of pulmonary nodule volumetry: A phantom study.

    Science.gov (United States)

    Kim, Hyungjin; Park, Chang Min; Song, Yong Sub; Lee, Sang Min; Goo, Jin Mo

    2014-05-01

    To evaluate the influence of radiation dose settings and reconstruction algorithms on the measurement accuracy and reproducibility of semi-automated pulmonary nodule volumetry. CT scans were performed on a chest phantom containing various nodules (10 and 12 mm; +100, -630 and -800 HU) at 120 kVp with tube current-time settings of 10, 20, 50, and 100 mAs. Each CT was reconstructed using filtered back projection (FBP), iDose(4) and iterative model reconstruction (IMR). Semi-automated volumetry was performed by two radiologists using commercial volumetry software for nodules at each CT dataset. Noise, contrast-to-noise ratio and signal-to-noise ratio of CT images were also obtained. The absolute percentage measurement errors and differences were then calculated for volume and mass. The influence of radiation dose and reconstruction algorithm on measurement accuracy, reproducibility and objective image quality metrics was analyzed using generalized estimating equations. Measurement accuracy and reproducibility of nodule volume and mass were not significantly associated with CT radiation dose settings or reconstruction algorithms (p > 0.05). Objective image quality metrics of CT images were superior in IMR than in FBP or iDose(4) at all radiation dose settings (p < 0.05). Semi-automated nodule volumetry can be applied to low- or ultralow-dose chest CT with usage of a novel iterative reconstruction algorithm without losing measurement accuracy and reproducibility. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
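
    For readers unfamiliar with the statistical model, a generalized-estimating-equations fit of this design might look like the hedged sketch below; the data frame is synthetic and all variable names are hypothetical, not the study's actual dataset.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Repeated measurements of the same nodule are correlated, so the nodule id
# serves as the GEE grouping variable. Synthetic toy data:
rng = np.random.default_rng(1)
df = pd.DataFrame({
    "abs_pct_error": rng.gamma(2.0, 2.0, 240),
    "dose_mAs": np.tile([10, 20, 50, 100], 60),
    "algorithm": np.tile(["FBP", "iDose4", "IMR"], 80),
    "nodule_id": np.repeat(np.arange(20), 12),
})
fit = smf.gee("abs_pct_error ~ C(dose_mAs) + C(algorithm)",
              groups="nodule_id", data=df,
              family=sm.families.Gaussian()).fit()
print(fit.summary())
```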

  13. A spectral image processing algorithm for evaluating the influence of the illuminants on the reconstructed reflectance

    Science.gov (United States)

    Toadere, Florin

    2017-12-01

    A spectral image processing algorithm is presented that allows the illumination of a scene with different illuminants together with the reconstruction of the scene's reflectance. A color checker spectral image is used as the scene, and CIE A (warm light, 2700 K), D65 (cold light, 6500 K) and Cree TW Series LED T8 (4000 K) illuminants are employed for scene illumination. The illuminants used in the simulations have different spectra and, as a result of their illumination, the colors of the scene change. The influence of the illuminants on the reconstruction of the scene's reflectance is estimated. Demonstrative images and reflectance curves illustrating the operation of the algorithm are shown.
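
    The core of the illumination step reduces to a per-wavelength product, as the minimal sketch below shows; the SPD array is a flat placeholder, and real CIE A, D65 or LED tables would have to be substituted.

```python
import numpy as np

# The color signal reaching the camera is the per-wavelength product of scene
# reflectance and illuminant spectral power distribution (SPD).
wavelengths = np.arange(400, 701, 10)          # 400-700 nm, 10 nm steps
reflectance = 0.5 * np.ones(len(wavelengths))  # one gray patch (toy data)
illuminant_spd = np.ones(len(wavelengths))     # placeholder SPD, not CIE data
color_signal = reflectance * illuminant_spd
# Reflectance recovery inverts the product wherever the SPD is nonzero:
recovered_reflectance = color_signal / np.maximum(illuminant_spd, 1e-9)
```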

  14. Computed Tomography Image Quality Evaluation of a New Iterative Reconstruction Algorithm in the Abdomen (Adaptive Statistical Iterative Reconstruction-V) a Comparison With Model-Based Iterative Reconstruction, Adaptive Statistical Iterative Reconstruction, and Filtered Back Projection Reconstructions.

    Science.gov (United States)

    Goodenberger, Martin H; Wagner-Bartak, Nicolaus A; Gupta, Shiva; Liu, Xinming; Yap, Ramon Q; Sun, Jia; Tamm, Eric P; Jensen, Corey T

    The purpose of this study was to compare abdominopelvic computed tomography images reconstructed with adaptive statistical iterative reconstruction-V (ASIR-V) with model-based iterative reconstruction (Veo 3.0), ASIR, and filtered back projection (FBP). Abdominopelvic computed tomography scans for 36 patients (26 males and 10 females) were reconstructed using FBP, ASIR (80%), Veo 3.0, and ASIR-V (30%, 60%, 90%). Mean ± SD patient age was 32 ± 10 years with mean ± SD body mass index of 26.9 ± 4.4 kg/m². Images were reviewed by 2 independent readers in a blinded, randomized fashion. Hounsfield unit, noise, and contrast-to-noise ratio (CNR) values were calculated for each reconstruction algorithm for further comparison. Phantom evaluation of low-contrast detectability (LCD) and high-contrast resolution was performed. Adaptive statistical iterative reconstruction-V 30%, ASIR-V 60%, and ASIR 80% were generally superior qualitatively compared with ASIR-V 90%, Veo 3.0, and FBP (P < 0.05). [...] and ASIR-V 60%, with respective CNR values of 5.54 ± 2.39, 8.78 ± 3.15, and 3.49 ± 1.77 (P < 0.05). [...] and ASIR 80% had the best and worst spatial resolution, respectively. Adaptive statistical iterative reconstruction-V 30% and ASIR-V 60% provided the best combination of qualitative and quantitative performance. Adaptive statistical iterative reconstruction 80% was equivalent qualitatively, but demonstrated inferior spatial resolution and LCD.

  15. Reconstructing Unrooted Phylogenetic Trees from Symbolic Ternary Metrics.

    Science.gov (United States)

    Grünewald, Stefan; Long, Yangjing; Wu, Yaokun

    2018-03-09

    Böcker and Dress (Adv Math 138:105-125, 1998) presented a 1-to-1 correspondence between symbolically dated rooted trees and symbolic ultrametrics. We consider the corresponding problem for unrooted trees. More precisely, given a tree T with leaf set X and a proper vertex coloring of its interior vertices, we can map every triple of three different leaves to the color of its median vertex. We characterize all ternary maps that can be obtained in this way in terms of 4- and 5-point conditions, and we show that the corresponding tree and its coloring can be reconstructed from a ternary map that satisfies those conditions. Further, we give an additional condition that characterizes whether the tree is binary, and we describe an algorithm that reconstructs general trees in a bottom-up fashion.
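
    The ternary map itself is easy to compute: in a tree, the three pairwise paths between leaves share exactly one vertex, their median. A minimal sketch (function and argument names illustrative) using networkx:

```python
import networkx as nx

def median_color(tree, coloring, x, y, z):
    # In a tree, the paths x-y, x-z and y-z intersect in exactly one vertex,
    # the median; the ternary map sends {x, y, z} to that vertex's color.
    paths = [set(nx.shortest_path(tree, a, b))
             for a, b in ((x, y), (x, z), (y, z))]
    (median,) = paths[0] & paths[1] & paths[2]   # unique common vertex
    return coloring[median]
```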

  16. Research on Modified Root-MUSIC Algorithm of DOA Estimation Based on Covariance Matrix Reconstruction

    Directory of Open Access Journals (Sweden)

    Changgan SHU

    2014-09-01

    In the standard root multiple signal classification (root-MUSIC) algorithm, the performance of direction-of-arrival estimation degrades, and can even fail, under conditions of low signal-to-noise ratio and small angular separation between signals. By reconstructing and weighting the covariance matrix of the received signal, the modified algorithm can provide more accurate estimation results. Computer simulation and performance analysis show that, under conditions of lower signal-to-noise ratio and stronger correlation between signals, the proposed modified algorithm provides better azimuth estimation performance than the standard method.
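
    Since the paper's covariance reconstruction and weighting are not reproduced here, the sketch below shows a standard root-MUSIC estimator with forward-backward averaging standing in as a simple covariance-matrix reconstruction; the steering-vector sign convention is an assumption.

```python
import numpy as np

def root_music_doa(X, n_sources, d=0.5):
    # X: (n_sensors, n_snapshots) uniform-linear-array snapshots;
    # d: element spacing in wavelengths.
    m, n = X.shape
    R = X @ X.conj().T / n                      # sample covariance
    J = np.eye(m)[::-1]
    R = 0.5 * (R + J @ R.conj() @ J)            # forward-backward averaging
    _, V = np.linalg.eigh(R)                    # eigenvalues ascending
    En = V[:, : m - n_sources]                  # noise subspace
    C = En @ En.conj().T
    # Polynomial coefficients are the sums of the diagonals of C.
    coeffs = [C.diagonal(k).sum() for k in range(m - 1, -m, -1)]
    roots = np.roots(coeffs)
    roots = roots[np.abs(roots) < 1.0]          # keep roots inside unit circle
    roots = roots[np.argsort(np.abs(roots))][::-1][:n_sources]
    return np.degrees(np.arcsin(np.angle(roots) / (2.0 * np.pi * d)))
```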

  17. Comparison of the effects of model-based iterative reconstruction and filtered back projection algorithms on software measurements in pulmonary subsolid nodules.

    Science.gov (United States)

    Cohen, Julien G; Kim, Hyungjin; Park, Su Bin; van Ginneken, Bram; Ferretti, Gilbert R; Lee, Chang Hyun; Goo, Jin Mo; Park, Chang Min

    2017-08-01

    To evaluate the differences between filtered back projection (FBP) and model-based iterative reconstruction (MBIR) algorithms on semi-automatic measurements in subsolid nodules (SSNs). Unenhanced CT scans of 73 SSNs obtained using the same protocol and reconstructed with both FBP and MBIR algorithms were evaluated by two radiologists. Diameter, mean attenuation, mass and volume of whole nodules and their solid components were measured. Intra- and interobserver variability and differences between FBP and MBIR were then evaluated using the Bland-Altman method and Wilcoxon tests. Longest diameter, volume and mass of nodules and those of their solid components were significantly higher using MBIR (p < 0.05), although the differences remained small between the two algorithms with respect to the diameter, volume and mass of nodules and their solid components. There were no significant differences in intra- or interobserver variability between FBP and MBIR (p > 0.05). Semi-automatic measurements of SSNs significantly differed between FBP and MBIR; however, the differences were within the range of measurement variability. • Intra- and interobserver reproducibility of measurements did not differ between FBP and MBIR. • Differences in SSNs' semi-automatic measurement induced by reconstruction algorithms were not clinically significant. • Semi-automatic measurement may be conducted regardless of reconstruction algorithm. • SSNs' semi-automated classification agreement (pure vs. part-solid) did not significantly differ between algorithms.
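
    The Bland-Altman analysis referred to above amounts to computing the bias and 95% limits of agreement of paired differences; a minimal sketch:

```python
import numpy as np

def bland_altman(m1, m2):
    # Bias and 95% limits of agreement between two sets of paired
    # measurements (e.g., nodule volumes from FBP and MBIR images).
    diff = np.asarray(m1, float) - np.asarray(m2, float)
    bias = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)
    return bias, bias - half_width, bias + half_width
```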

  18. Comparison between different tomographic reconstruction algorithms in nuclear medicine imaging; Comparacion entre distintos algoritmos de reconstruccion tomografica en imagenes de medicina nuclear

    Energy Technology Data Exchange (ETDEWEB)

    Llacer Martos, S.; Herraiz Lablanca, M. D.; Puchal Ane, R.

    2011-07-01

    This paper compares several tomographic reconstruction algorithms for nuclear medicine imaging. The image quality obtained with each algorithm and its running time are evaluated in order to optimize the choice of algorithm, taking into account both the quality of the reconstructed image and the time spent on the reconstruction.

  19. Influence of radiation dose and iterative reconstruction algorithms for measurement accuracy and reproducibility of pulmonary nodule volumetry: A phantom study

    International Nuclear Information System (INIS)

    Kim, Hyungjin; Park, Chang Min; Song, Yong Sub; Lee, Sang Min; Goo, Jin Mo

    2014-01-01

    Purpose: To evaluate the influence of radiation dose settings and reconstruction algorithms on the measurement accuracy and reproducibility of semi-automated pulmonary nodule volumetry. Materials and methods: CT scans were performed on a chest phantom containing various nodules (10 and 12 mm; +100, −630 and −800 HU) at 120 kVp with tube current–time settings of 10, 20, 50, and 100 mAs. Each CT was reconstructed using filtered back projection (FBP), iDose 4 and iterative model reconstruction (IMR). Semi-automated volumetry was performed by two radiologists using commercial volumetry software for nodules at each CT dataset. Noise, contrast-to-noise ratio and signal-to-noise ratio of CT images were also obtained. The absolute percentage measurement errors and differences were then calculated for volume and mass. The influence of radiation dose and reconstruction algorithm on measurement accuracy, reproducibility and objective image quality metrics was analyzed using generalized estimating equations. Results: Measurement accuracy and reproducibility of nodule volume and mass were not significantly associated with CT radiation dose settings or reconstruction algorithms (p > 0.05). Objective image quality metrics of CT images were superior in IMR than in FBP or iDose 4 at all radiation dose settings (p < 0.05). Conclusion: Semi-automated nodule volumetry can be applied to low- or ultralow-dose chest CT with usage of a novel iterative reconstruction algorithm without losing measurement accuracy and reproducibility

  20. Influence of radiation dose and iterative reconstruction algorithms for measurement accuracy and reproducibility of pulmonary nodule volumetry: A phantom study

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Hyungjin, E-mail: khj.snuh@gmail.com [Department of Radiology, Seoul National University College of Medicine, Institute of Radiation Medicine, Seoul National University Medical Research Center, 101, Daehangno, Jongno-gu, Seoul 110-744 (Korea, Republic of); Park, Chang Min, E-mail: cmpark@radiol.snu.ac.kr [Department of Radiology, Seoul National University College of Medicine, Institute of Radiation Medicine, Seoul National University Medical Research Center, 101, Daehangno, Jongno-gu, Seoul 110-744 (Korea, Republic of); Cancer Research Institute, Seoul National University, 101, Daehangno, Jongno-gu, Seoul 110-744 (Korea, Republic of); Song, Yong Sub, E-mail: terasong@gmail.com [Department of Radiology, Seoul National University College of Medicine, Institute of Radiation Medicine, Seoul National University Medical Research Center, 101, Daehangno, Jongno-gu, Seoul 110-744 (Korea, Republic of); Lee, Sang Min, E-mail: sangmin.lee.md@gmail.com [Department of Radiology, Seoul National University College of Medicine, Institute of Radiation Medicine, Seoul National University Medical Research Center, 101, Daehangno, Jongno-gu, Seoul 110-744 (Korea, Republic of); Goo, Jin Mo, E-mail: jmgoo@plaza.snu.ac.kr [Department of Radiology, Seoul National University College of Medicine, Institute of Radiation Medicine, Seoul National University Medical Research Center, 101, Daehangno, Jongno-gu, Seoul 110-744 (Korea, Republic of); Cancer Research Institute, Seoul National University, 101, Daehangno, Jongno-gu, Seoul 110-744 (Korea, Republic of)

    2014-05-15

    Purpose: To evaluate the influence of radiation dose settings and reconstruction algorithms on the measurement accuracy and reproducibility of semi-automated pulmonary nodule volumetry. Materials and methods: CT scans were performed on a chest phantom containing various nodules (10 and 12 mm; +100, −630 and −800 HU) at 120 kVp with tube current–time settings of 10, 20, 50, and 100 mAs. Each CT was reconstructed using filtered back projection (FBP), iDose{sup 4} and iterative model reconstruction (IMR). Semi-automated volumetry was performed by two radiologists using commercial volumetry software for nodules at each CT dataset. Noise, contrast-to-noise ratio and signal-to-noise ratio of CT images were also obtained. The absolute percentage measurement errors and differences were then calculated for volume and mass. The influence of radiation dose and reconstruction algorithm on measurement accuracy, reproducibility and objective image quality metrics was analyzed using generalized estimating equations. Results: Measurement accuracy and reproducibility of nodule volume and mass were not significantly associated with CT radiation dose settings or reconstruction algorithms (p > 0.05). Objective image quality metrics of CT images were superior in IMR than in FBP or iDose{sup 4} at all radiation dose settings (p < 0.05). Conclusion: Semi-automated nodule volumetry can be applied to low- or ultralow-dose chest CT with usage of a novel iterative reconstruction algorithm without losing measurement accuracy and reproducibility.

  1. A trial to reduce cardiac motion artifact on HR-CT images of the lung with the use of subsecond scan and special cine reconstruction algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Sakai, Fumikazu; Tsuuchi, Yasuhiko; Suzuki, Keiko; Ueno, Keiko; Yamada, Takayuki; Okawa, Tomohiko [Tokyo Women's Medical Coll. (Japan); Yun, Shen; Horiuchi, Tetsuya; Kimura, Fumiko

    1998-05-01

    We describe our trial to reduce cardiac motion artifacts on HR-CT images caused by cardiac pulsation by combining the use of subsecond CT (scan time 0.8 s) and a special cine reconstruction algorithm (cine reconstruction algorithm with 180-degree helical interpolation). Eleven to 51 HR-CT images were reconstructed with the special cine reconstruction algorithm at a pitch of 0.1 (0.08 s) from the data obtained by two to six contiguous rotation scans at the same level. Images with the fewest cardiac motion artifacts were selected for evaluation. These images were compared with those reconstructed with a conventional cine reconstruction algorithm and step-by-step scan. In spite of its increased radiation exposure, technical complexity and slight degradation of spatial resolution, our method was useful in reducing cardiac motion artifacts on HR-CT images in regions adjacent to the heart. (author)

  2. Effect of reconstruction algorithm on image quality and identification of ground-glass opacities and partly solid nodules on low-dose thin-section CT: Experimental study using chest phantom

    International Nuclear Information System (INIS)

    Koyama, Hisanobu; Ohno, Yoshiharu; Kono, Atsushi A.; Kusaka, Akiko; Konishi, Minoru; Yoshii, Masaru; Sugimura, Kazuro

    2010-01-01

    Purpose: The purpose of this study was to assess the influence of reconstruction algorithm on identification and image quality of ground-glass opacities (GGOs) and partly solid nodules on low-dose thin-section CT. Materials and methods: A chest CT phantom including simulated GGOs and partly solid nodules was scanned with five different tube currents and reconstructed using standard (A) and newly developed (B) high-resolution reconstruction algorithms, followed by visual assessment of identification and image quality of GGOs and partly solid nodules by two chest radiologists. Inter-observer agreement, ROC analysis and ANOVA were performed to compare identification and image quality of each data set with those of the standard reference, which used 120 mAs in conjunction with reconstruction algorithm A. Results: Kappa values (κ) of overall identification and image qualities were substantial or almost perfect (0.60 < κ). Assessment of identification showed that the area under the curve for 25 mAs reconstructed with reconstruction algorithm A was significantly lower than that of the standard reference (p < 0.05), while assessment of image quality indicated that 50 mAs reconstructed with reconstruction algorithm A and 25 mAs reconstructed with both reconstruction algorithms were significantly lower than the standard reference (p < 0.05). Conclusion: Reconstruction algorithm may be an important factor for identification and image quality of ground-glass opacities and partly solid nodules on low-dose CT examination.

  3. Mean-variance analysis of block-iterative reconstruction algorithms modeling 3D detector response in SPECT

    Science.gov (United States)

    Lalush, D. S.; Tsui, B. M. W.

    1998-06-01

    We study the statistical convergence properties of two fast iterative reconstruction algorithms, the rescaled block-iterative (RBI) and ordered subset (OS) EM algorithms, in the context of cardiac SPECT with 3D detector response modeling. The Monte Carlo method was used to generate nearly noise-free projection data modeling the effects of attenuation, detector response, and scatter from the MCAT phantom. One thousand noise realizations were generated with an average count level approximating a typical Tl-201 cardiac study. Each noise realization was reconstructed using the RBI and OS algorithms for cases with and without detector response modeling. For each iteration up to twenty, we generated mean and variance images, as well as covariance images for six specific locations. Both OS and RBI converged in the mean to results that were close to the noise-free ML-EM result using the same projection model. When detector response was not modeled in the reconstruction, RBI exhibited considerably lower noise variance than OS for the same resolution. When 3D detector response was modeled, RBI-EM provided a small improvement in the tradeoff between noise level and resolution recovery, primarily in the axial direction, while OS required about half the number of iterations of RBI to reach the same resolution. We conclude that OS is faster than RBI, but may be sensitive to errors in the projection model. Both OS-EM and RBI-EM are effective alternatives to the ML-EM algorithm, but noise level and speed of convergence depend on the projection model used.
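
    A minimal dense-matrix sketch of the ordered-subsets EM update studied above; the toy system matrix and simple angular split stand in for a real SPECT system model, and RBI differs mainly in how each block update is rescaled.

```python
import numpy as np

def os_em(A, y, n_subsets=4, n_iter=5, eps=1e-12):
    # A: (n_bins, n_voxels) system matrix; y: measured projection counts.
    x = np.ones(A.shape[1])
    subsets = np.array_split(np.arange(A.shape[0]), n_subsets)
    for _ in range(n_iter):
        for rows in subsets:                 # one sub-iteration per block
            As = A[rows]
            ratio = y[rows] / np.maximum(As @ x, eps)
            x = x * (As.T @ ratio) / np.maximum(As.sum(axis=0), eps)
    return x
```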

  4. Fast parallel MR image reconstruction via B1-based, adaptive restart, iterative soft thresholding algorithms (BARISTA).

    Science.gov (United States)

    Muckley, Matthew J; Noll, Douglas C; Fessler, Jeffrey A

    2015-02-01

    Sparsity-promoting regularization is useful for combining compressed sensing assumptions with parallel MRI for reducing scan time while preserving image quality. Variable splitting algorithms are the current state-of-the-art algorithms for SENSE-type MR image reconstruction with sparsity-promoting regularization. These methods are very general and have been observed to work with almost any regularizer; however, the tuning of associated convergence parameters is a commonly-cited hindrance in their adoption. Conversely, majorize-minimize algorithms based on a single Lipschitz constant have been observed to be slow in shift-variant applications such as SENSE-type MR image reconstruction since the associated Lipschitz constants are loose bounds for the shift-variant behavior. This paper bridges the gap between the Lipschitz constant and the shift-variant aspects of SENSE-type MR imaging by introducing majorizing matrices in the range of the regularizer matrix. The proposed majorize-minimize methods (called BARISTA) converge faster than state-of-the-art variable splitting algorithms when combined with momentum acceleration and adaptive momentum restarting. Furthermore, the tuning parameters associated with the proposed methods are unitless convergence tolerances that are easier to choose than the constraint penalty parameters required by variable splitting algorithms.
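
    The acceleration ingredients can be sketched generically: momentum-accelerated proximal gradient descent with the adaptive gradient-scheme restart test. The B1-map-based majorizers that give BARISTA its name are abstracted here into user-supplied grad and prox callables, so this is a sketch of the generic machinery, not the paper's solver.

```python
import numpy as np

def fista_restart(grad, prox, x0, step, n_iter=100):
    # grad: gradient of the data-fit term; prox: proximal map of the
    # regularizer; step: a valid inverse-Lipschitz step size.
    x, z, t = x0.copy(), x0.copy(), 1.0
    for _ in range(n_iter):
        x_new = prox(z - step * grad(z), step)
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        if np.vdot(z - x_new, x_new - x).real > 0.0:  # momentum went uphill
            t_new, z = 1.0, x_new.copy()              # restart the momentum
        else:
            z = x_new + ((t - 1.0) / t_new) * (x_new - x)
        x, t = x_new, t_new
    return x
```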

  5. Reconstructing Genetic Regulatory Networks Using Two-Step Algorithms with the Differential Equation Models of Neural Networks.

    Science.gov (United States)

    Chen, Chi-Kan

    2017-07-26

    The identification of genetic regulatory networks (GRNs) provides insights into complex cellular processes. A class of recurrent neural networks (RNNs) captures the dynamics of GRNs. Algorithms combining the RNN and machine learning schemes were proposed to reconstruct small-scale GRNs using gene expression time series. We present new GRN reconstruction methods with neural networks. The RNN is extended to a class of recurrent multilayer perceptrons (RMLPs) with latent nodes. Our methods contain two steps: the edge rank assignment step and the network construction step. The former assigns ranks to all possible edges by a recursive procedure based on the estimated weights of wires of the RNN/RMLP (RE_RNN/RE_RMLP), and the latter constructs a network consisting of top-ranked edges under which the optimized RNN simulates the gene expression time series. Particle swarm optimization (PSO) is applied to optimize the parameters of the RNNs and RMLPs in a two-step algorithm. The proposed RE_RNN-RNN and RE_RMLP-RNN algorithms are tested on synthetic and experimental gene expression time series of small GRNs of about 10 genes. The experimental time series are from studies of yeast cell cycle regulated genes and E. coli DNA repair genes. The unstable estimation of the RNN using experimental time series having limited data points can lead to fairly arbitrary predicted GRNs. Our methods incorporate the RNN and RMLP into a two-step structure learning procedure. Results show that RE_RMLP, using the RMLP with a suitable number of latent nodes to reduce the parameter dimension, often results in more accurate edge ranks than RE_RNN using the regularized RNN on short simulated time series. By combining, through a weighted majority voting rule, the networks derived by RE_RMLP-RNN using different numbers of latent nodes in step one, the method performs consistently and outperforms published algorithms for GRN reconstruction on most benchmark time series. The framework of two

  6. Computed tomography imaging with the Adaptive Statistical Iterative Reconstruction (ASIR) algorithm: dependence of image quality on the blending level of reconstruction.

    Science.gov (United States)

    Barca, Patrizio; Giannelli, Marco; Fantacci, Maria Evelina; Caramella, Davide

    2018-06-01

    Computed tomography (CT) is a useful and widely employed imaging technique, which represents the largest source of population exposure to ionizing radiation in industrialized countries. Adaptive Statistical Iterative Reconstruction (ASIR) is an iterative reconstruction algorithm with the potential to allow reduction of radiation exposure while preserving diagnostic information. The aim of this phantom study was to assess the performance of ASIR, in terms of a number of image quality indices, when different reconstruction blending levels are employed. CT images of the Catphan-504 phantom were reconstructed using conventional filtered back-projection (FBP) and ASIR with reconstruction blending levels of 20, 40, 60, 80, and 100%. Noise, noise power spectrum (NPS), contrast-to-noise ratio (CNR) and modulation transfer function (MTF) were estimated for different scanning parameters and contrast objects. Noise decreased and CNR increased non-linearly up to 50 and 100%, respectively, with increasing blending level of reconstruction. Also, ASIR has proven to modify the NPS curve shape. The MTF of ASIR reconstructed images depended on tube load/contrast and decreased with increasing blending level of reconstruction. In particular, for low radiation exposure and low contrast acquisitions, ASIR showed lower performance than FBP, in terms of spatial resolution for all blending levels of reconstruction. CT image quality varies substantially with the blending level of reconstruction. ASIR has the potential to reduce noise whilst maintaining diagnostic information in low radiation exposure CT imaging. Given the opposite variation of CNR and spatial resolution with the blending level of reconstruction, it is recommended to use an optimal value of this parameter for each specific clinical application.
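
    The blending level itself is commonly described as a linear mix of the FBP and fully iterative images, which the hedged one-liner below makes explicit; function and argument names are illustrative, not GE's API.

```python
def blend_asir(fbp_img, iterative_img, level_percent):
    # A p% blending level weights the fully iterative reconstruction by
    # p/100 and the FBP image by the remainder (illustrative description).
    alpha = level_percent / 100.0
    return (1.0 - alpha) * fbp_img + alpha * iterative_img
```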

  7. Applicability of a set of tomographic reconstruction algorithms for quantitative SPECT on irradiated nuclear fuel assemblies

    Energy Technology Data Exchange (ETDEWEB)

    Jacobsson Svärd, Staffan, E-mail: staffan.jacobsson_svard@physics.uu.se; Holcombe, Scott; Grape, Sophie

    2015-05-21

    A fuel assembly operated in a nuclear power plant typically contains 100–300 fuel rods, depending on fuel type, which become strongly radioactive during irradiation in the reactor core. For operational and security reasons, it is of interest to experimentally deduce rod-wise information from the fuel, preferably by means of non-destructive measurements. The tomographic SPECT technique offers such possibilities through its two-step application: (1) recording the gamma-ray flux distribution around the fuel assembly, and (2) reconstructing the assembly's internal source distribution, based on the recorded radiation field. In this paper, algorithms for performing the latter step and extracting quantitative relative rod-by-rod data are presented. As compared to application of SPECT in nuclear medicine, nuclear fuel assemblies present a much more heterogeneous distribution of internal attenuation to gamma radiation than the human body, typically with rods containing pellets of heavy uranium dioxide surrounded by cladding of a zirconium alloy placed in water or air. This inhomogeneity severely complicates the tomographic quantification of the rod-wise relative source content, and the deduction of conclusive data requires detailed modelling of the attenuation to be introduced in the reconstructions. However, as shown in this paper, simplified models may still produce valuable information about the fuel. Here, a set of reconstruction algorithms for SPECT on nuclear fuel assemblies is described and discussed in terms of quantitative performance for two applications: verification of fuel assemblies' completeness in nuclear safeguards, and rod-wise fuel characterization. It is argued that a request not to base the former assessment on any a priori information constrains which reconstruction methods may be used in that case, whereas the use of a priori information on geometry and material content enables highly accurate quantitative

  8. Development of an image reconstruction algorithm for a few number of projection data

    International Nuclear Information System (INIS)

    Vieira, Wilson S.; Brandao, Luiz E.; Braz, Delson

    2007-01-01

    An image reconstruction algorithm was developed for specific cases of radiotracer applications in industry (rotating cylindrical mixers) involving a very small number of projections. The algorithm was designed for imaging radioactive isotope distributions around the center of circular planes. The method consists of adapting the original expectation maximization (EM) algorithm to solve the ill-posed emission tomography inverse problem in order to reconstruct transversal 2D images of an object from only four projections. To achieve this aim, counts of photons emitted by selected radioactive sources in the plane, simulated using the commercial software MICROSHIELD 5.05, constituted the projections, and a computational code (SPECTEM) was developed to generate activity vectors or images related to those sources. SPECTEM is flexible enough to support simultaneous changes of the detector geometry, the medium under investigation and the properties of the gamma radiation. Because the code faithfully implemented the proposed method, good results were obtained, and they encouraged us to continue to the next step of the research: the validation of SPECTEM against experimental data to check its real performance. We expect this code to improve radiotracer methodology considerably, making it easier to diagnose failures in industrial processes. (author)

  9. Development of an image reconstruction algorithm for a few number of projection data

    Energy Technology Data Exchange (ETDEWEB)

    Vieira, Wilson S.; Brandao, Luiz E. [Instituto de Engenharia Nuclear (IEN-CNEN/RJ), Rio de Janeiro , RJ (Brazil)]. E-mails: wilson@ien.gov.br; brandao@ien.gov.br; Braz, Delson [Universidade Federal do Rio de Janeiro (UFRJ), RJ (Brazil). Coordenacao dos Programa de Pos-graduacao de Engenharia (COPPE). Lab. de Instrumentacao Nuclear]. E-mail: delson@mailhost.lin.ufrj.br

    2007-07-01

    An image reconstruction algorithm was developed for specific cases of radiotracer applications in industry (rotating cylindrical mixers) involving a very small number of projections. The algorithm was designed for imaging radioactive isotope distributions around the center of circular planes. The method consists of adapting the original expectation maximization (EM) algorithm to solve the ill-posed emission tomography inverse problem in order to reconstruct transversal 2D images of an object from only four projections. To achieve this aim, counts of photons emitted by selected radioactive sources in the plane, simulated using the commercial software MICROSHIELD 5.05, constituted the projections, and a computational code (SPECTEM) was developed to generate activity vectors or images related to those sources. SPECTEM is flexible enough to support simultaneous changes of the detector geometry, the medium under investigation and the properties of the gamma radiation. Because the code faithfully implemented the proposed method, good results were obtained, and they encouraged us to continue to the next step of the research: the validation of SPECTEM against experimental data to check its real performance. We expect this code to improve radiotracer methodology considerably, making it easier to diagnose failures in industrial processes. (author)

  10. The use of anatomical information for molecular image reconstruction algorithms: Attenuation/Scatter correction, motion compensation, and noise reduction

    Energy Technology Data Exchange (ETDEWEB)

    Chun, Se Young [School of Electrical and Computer Engineering, Ulsan National Institute of Science and Technology (UNIST), Ulsan (Korea, Republic of)

    2016-03-15

    PET and SPECT are important tools for providing valuable molecular information about patients to clinicians. Advances in nuclear medicine hardware technologies and statistical image reconstruction algorithms enabled significantly improved image quality. Sequentially or simultaneously acquired anatomical images such as CT and MRI from hybrid scanners are also important ingredients for improving the image quality of PET or SPECT further. High-quality anatomical information has been used and investigated for attenuation and scatter corrections, motion compensation, and noise reduction via post-reconstruction filtering and regularization in inverse problems. In this article, we will review works using anatomical information for molecular image reconstruction algorithms for better image quality by describing mathematical models, discussing sources of anatomical information for different cases, and showing some examples.

  11. Image-based point spread function implementation in a fully 3D OSEM reconstruction algorithm for PET.

    Science.gov (United States)

    Rapisarda, E; Bettinardi, V; Thielemans, K; Gilardi, M C

    2010-07-21

    The interest in positron emission tomography (PET) and particularly in hybrid integrated PET/CT systems has significantly increased in the last few years due to the improved quality of the obtained images. Nevertheless, one of the most important limits of the PET imaging technique is still its poor spatial resolution due to several physical factors originating both at the emission (e.g. positron range, photon non-collinearity) and at detection levels (e.g. scatter inside the scintillating crystals, finite dimensions of the crystals and depth of interaction). To improve the spatial resolution of the images, a possible way consists of measuring the point spread function (PSF) of the system and then accounting for it inside the reconstruction algorithm. In this work, the system response of the GE Discovery STE operating in 3D mode has been characterized by acquiring (22)Na point sources in different positions of the scanner field of view. An image-based model of the PSF was then obtained by fitting asymmetric two-dimensional Gaussians on the (22)Na images reconstructed with small pixel sizes. The PSF was then incorporated, at the image level, in a three-dimensional ordered subset maximum likelihood expectation maximization (OS-MLEM) reconstruction algorithm. A qualitative and quantitative validation of the algorithm accounting for the PSF has been performed on phantom and clinical data, showing improved spatial resolution, higher contrast and lower noise compared with the corresponding images obtained using the standard OS-MLEM algorithm.
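
    Image-based PSF modeling of this kind can be sketched by blurring before forward projection and after backprojection inside an MLEM loop; here an isotropic Gaussian stands in for the measured asymmetric PSF, and forward/back are hypothetical, scanner-specific projector callables.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def mlem_image_psf(forward, back, y, shape, sigma, n_iter=20, eps=1e-12):
    # Image-space resolution model: blur the estimate before forward
    # projection and the correction image after backprojection.
    x = np.ones(shape)
    sens = gaussian_filter(back(np.ones_like(y)), sigma)
    for _ in range(n_iter):
        est = forward(gaussian_filter(x, sigma))
        corr = gaussian_filter(back(y / np.maximum(est, eps)), sigma)
        x = x * corr / np.maximum(sens, eps)
    return x
```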

  12. Experimental study of stochastic noise propagation in SPECT images reconstructed using the conjugate gradient algorithm.

    Science.gov (United States)

    Mariano-Goulart, D; Fourcade, M; Bernon, J L; Rossi, M; Zanca, M

    2003-01-01

    Thanks to an experimental study based on simulated and physical phantoms, the propagation of the stochastic noise in slices reconstructed using the conjugate gradient algorithm has been analysed versus iterations. After a first increase corresponding to the reconstruction of the signal, the noise stabilises before increasing linearly with iterations. The level of the plateau as well as the slope of the subsequent linear increase depends on the noise in the projection data.

  13. A joint Richardson–Lucy deconvolution algorithm for the reconstruction of multifocal structured illumination microscopy data

    International Nuclear Information System (INIS)

    Ströhl, Florian; Kaminski, Clemens F

    2015-01-01

    We demonstrate the reconstruction of images obtained by multifocal structured illumination microscopy, MSIM, using a joint Richardson–Lucy, jRL-MSIM, deconvolution algorithm, which is based on an underlying widefield image-formation model. The method is efficient in the suppression of out-of-focus light and greatly improves image contrast and resolution. Furthermore, it is particularly well suited for the processing of noise corrupted data. The principle is verified on simulated as well as experimental data and a comparison of the jRL-MSIM approach with the standard reconstruction procedure, which is based on image scanning microscopy, ISM, is made. Our algorithm is efficient and freely available in a user friendly software package. (paper)
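
    A joint Richardson-Lucy scheme asks one estimate to explain every multifocal frame through its own PSF. The sketch below combines the per-frame multiplicative updates by a geometric mean, which is one simple choice and not necessarily the paper's exact update rule.

```python
import numpy as np
from scipy.signal import fftconvolve

def joint_richardson_lucy(images, psfs, n_iter=30, eps=1e-12):
    # images: list of 2D frames; psfs: matching list of 2D PSFs.
    est = np.full_like(images[0], np.mean(images[0]))
    for _ in range(n_iter):
        update = np.ones_like(est)
        for img, psf in zip(images, psfs):
            blur = fftconvolve(est, psf, mode="same")
            update *= fftconvolve(img / np.maximum(blur, eps),
                                  psf[::-1, ::-1], mode="same")
        est *= update ** (1.0 / len(images))   # geometric mean of updates
    return est
```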

  14. A low-count reconstruction algorithm for Compton-based prompt gamma imaging

    Science.gov (United States)

    Huang, Hsuan-Ming; Liu, Chih-Chieh; Jan, Meei-Ling; Lee, Ming-Wei

    2018-04-01

    The Compton camera is an imaging device which has been proposed to detect prompt gammas (PGs) produced by proton–nuclear interactions within tissue during proton beam irradiation. Compton-based PG imaging has been developed to verify proton ranges because PG rays, particularly characteristic ones, have strong correlations with the distribution of the proton dose. However, accurate image reconstruction from characteristic PGs is challenging because the detector efficiency and resolution are generally low. Our previous study showed that point spread functions can be incorporated into the reconstruction process to improve image resolution. In this study, we proposed a low-count reconstruction algorithm to improve the image quality of a characteristic PG emission by pooling information from other characteristic PG emissions. PGs were simulated from a proton beam irradiated on a water phantom, and a two-stage Compton camera was used for PG detection. The results show that the image quality of the reconstructed characteristic PG emission is improved with our proposed method in contrast to the standard reconstruction method using events from only one characteristic PG emission. For the 4.44 MeV PG rays, both methods can be used to predict the positions of the peak and the distal falloff with a mean accuracy of 2 mm. Moreover, only the proposed method can improve the estimated positions of the peak and the distal falloff of 5.25 MeV PG rays, and a mean accuracy of 2 mm can be reached.

  15. A Super-resolution Reconstruction Algorithm for Surveillance Video

    Directory of Open Access Journals (Sweden)

    Jian Shao

    2017-01-01

    Recent technological developments have made surveillance video a primary method of preserving public security. Many city crimes are observed in surveillance video, and the most abundant evidence collected by the police is also acquired through surveillance video sources. Surveillance video footage offers very strong support for solving criminal cases; therefore, creating an effective policy and applying useful methods for the retrieval of additional evidence is becoming increasingly important. However, surveillance video has its failings, namely footage captured at low resolution (LR) and with poor visual quality. In this paper, we discuss the characteristics of surveillance video and describe a super-resolution reconstruction method built on manual feature registration and maximum a posteriori estimation with projection onto convex sets, which improves the quality of surveillance video. With this method we can make optimal use of the information contained in the LR video frames while keeping image edges sharp and ensuring convergence of the algorithm. Finally, we suggest how to adjust the adaptability of the algorithm by analyzing prior information about the target image.

  16. Convergence of SART + OS + TV iterative reconstruction algorithm for optical CT imaging of gel dosimeters

    International Nuclear Information System (INIS)

    Du, Yi; Yu, Gongyi; Xiang, Xincheng; Wang, Xiangang; De Deene, Yves

    2017-01-01

    Computational simulations are used to investigate the convergence of a hybrid iterative algorithm for optical CT reconstruction, i.e. the simultaneous algebraic reconstruction technique (SART) integrated with ordered subsets (OS) iteration and total variation (TV) minimization regularization, or SART+OS+TV for short. The influence of parameter selection on convergence, spatial dose gradient integrity, MTF and convergence speed is discussed. It is shown that the results of the SART+OS+TV algorithm converge to the true values without significant bias, and that MTF and convergence speed are affected by the parameter sets used for the iterative calculation. In conclusion, the performance of SART+OS+TV depends on parameter selection, which also implies that careful parameter tuning is required and necessary for proper spatial performance and fast convergence. (paper)
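
    For reference, one SART pass over an ordered subset of rays looks as follows; in SART+OS+TV such passes alternate with a TV-minimization step like the one sketched earlier in this section. The dense matrix A is a toy stand-in for a real projector.

```python
import numpy as np

def sart_subset_update(A, y, x, rows, relax=0.5, eps=1e-12):
    # A: (n_rays, n_voxels) system matrix; rows: indices of one subset.
    As = A[rows]
    resid = (y[rows] - As @ x) / np.maximum(As.sum(axis=1), eps)
    return x + relax * (As.T @ resid) / np.maximum(As.sum(axis=0), eps)
```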

  17. Optimization of image quality and acquisition time for lab-based X-ray microtomography using an iterative reconstruction algorithm

    Science.gov (United States)

    Lin, Qingyang; Andrew, Matthew; Thompson, William; Blunt, Martin J.; Bijeljic, Branko

    2018-05-01

    Non-invasive laboratory-based X-ray microtomography has been widely applied in many industrial and research disciplines. However, the main barrier to the use of laboratory systems compared to a synchrotron beamline is its much longer image acquisition time (hours per scan compared to seconds to minutes at a synchrotron), which results in limited application for dynamic in situ processes. Therefore, the majority of existing laboratory X-ray microtomography is limited to static imaging; relatively fast imaging (tens of minutes per scan) can only be achieved by sacrificing imaging quality, e.g. reducing exposure time or number of projections. To alleviate this barrier, we introduce an optimized implementation of a well-known iterative reconstruction algorithm that allows users to reconstruct tomographic images with reasonable image quality, but requires lower X-ray signal counts and fewer projections than conventional methods. Quantitative analysis and comparison between the iterative and the conventional filtered back-projection reconstruction algorithm was performed using a sandstone rock sample with and without liquid phases in the pore space. Overall, by implementing the iterative reconstruction algorithm, the required image acquisition time for samples such as this, with sparse object structure, can be reduced by a factor of up to 4 without measurable loss of sharpness or signal to noise ratio.

  18. A constrained conjugate gradient algorithm for computed tomography

    Energy Technology Data Exchange (ETDEWEB)

    Azevedo, S.G.; Goodman, D.M. [Lawrence Livermore National Lab., CA (United States)

    1994-11-15

    Image reconstruction from projections of x-rays, gamma rays, protons and other penetrating radiation is a well-known problem in a variety of fields, and is commonly referred to as computed tomography (CT). Various analytical and series expansion methods of reconstruction have been used in the past to provide three-dimensional (3D) views of some interior quantity. The difficulties of these approaches lie in the cases where (a) the number of views attainable is limited, (b) the Poisson (or other) uncertainties are significant, (c) quantifiable knowledge of the object is available, but not implementable, or (d) other limitations of the data exist. We have adapted a novel nonlinear optimization procedure developed at LLNL to address limited-data image reconstruction problems. The technique, known as nonlinear least squares with general constraints or constrained conjugate gradients (CCG), has been successfully applied to a number of signal and image processing problems, and is now of great interest to the image reconstruction community. Previous applications of this algorithm to deconvolution problems and x-ray diffraction images for crystallography have shown great promise.
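
    The flavor of constraint-aware reconstruction can be conveyed with a toy nonnegativity-bounded least-squares problem; scipy's lsq_linear stands in for the LLNL CCG solver, and the data are synthetic.

```python
import numpy as np
from scipy.optimize import lsq_linear

# Limited-view toy problem: fewer rays (40) than voxels (60); the bound
# constraint x >= 0 regularizes the otherwise underdetermined system.
rng = np.random.default_rng(0)
A = rng.random((40, 60))
x_true = np.maximum(rng.normal(size=60), 0.0)
y = A @ x_true + 0.01 * rng.normal(size=40)
x_hat = lsq_linear(A, y, bounds=(0.0, np.inf)).x
```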

  19. Evaluation of 3D reconstruction algorithms for a small animal PET camera

    International Nuclear Information System (INIS)

    Johnson, C.A.; Gandler, W.R.; Seidel, J.

    1996-01-01

    The use of paired, opposing position-sensitive phototube scintillation cameras (SCs) operating in coincidence for small animal imaging with positron emitters is currently under study. Because of the low sensitivity of the system even in 3D mode and the need to produce images with high resolution, it was postulated that a 3D expectation maximization (EM) reconstruction algorithm might be well suited for this application. We investigated four reconstruction algorithms for the 3D SC PET camera: 2D filtered back-projection (FBP), 2D ordered subset EM (OSEM), 3D reprojection (3DRP), and 3D OSEM. Noise was assessed for all slices by the coefficient of variation in a simulated uniform cylinder. Resolution was assessed from a simulation of 15 point sources in the warm background of the uniform cylinder. At comparable noise levels, the resolution achieved with OSEM (0.9 mm to 1.2 mm) is significantly better than that obtained with FBP or 3DRP (1.5 mm to 2.0 mm). Images of a rat skull labeled with 18F-fluoride suggest that 3D OSEM can improve the image quality of a small animal PET camera.

  20. A Novel 2D Image Compression Algorithm Based on Two Levels DWT and DCT Transforms with Enhanced Minimize-Matrix-Size Algorithm for High Resolution Structured Light 3D Surface Reconstruction

    Science.gov (United States)

    Siddeq, M. M.; Rodrigues, M. A.

    2015-09-01

    Image compression techniques are widely used on 2D images, 2D video, 3D images and 3D video. There are many types of compression techniques, among which the most popular are JPEG and JPEG2000. In this research, we introduce a new compression method based on applying a two-level discrete cosine transform (DCT) and a two-level discrete wavelet transform (DWT) in connection with novel compression steps for high-resolution images. The proposed image compression algorithm consists of four steps: (1) transform an image by a two-level DWT followed by a DCT to produce two matrices, the DC-Matrix and the AC-Matrix, containing the low and high frequencies respectively; (2) apply a second-level DCT to the DC-Matrix to generate two arrays, namely a nonzero-array and a zero-array; (3) apply the Minimize-Matrix-Size algorithm to the AC-Matrix and to the other high frequencies generated by the second-level DWT; (4) apply arithmetic coding to the output of the previous steps. A novel decompression algorithm, the Fast-Match-Search algorithm (FMS), is used to reconstruct all high-frequency matrices. The FMS algorithm computes all compressed data probabilities using a table of data, and then uses a binary search algorithm to find the decompressed data inside the table. Thereafter, all decoded DC-values are combined with the decoded AC-coefficients in one matrix, followed by an inverse two-level DCT and an inverse two-level DWT. The technique is tested by compression and reconstruction of 3D surface patches. Additionally, this technique is compared with the JPEG and JPEG2000 algorithms through the 2D and 3D root-mean-square error following reconstruction. The results demonstrate that the proposed compression method has better visual properties than JPEG and JPEG2000 and is able to more accurately reconstruct surface patches in 3D.
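
    Step (1) of the pipeline can be sketched with standard transforms; the entropy-coding and Minimize-Matrix-Size stages are omitted, and the wavelet choice is illustrative rather than the authors'.

```python
import numpy as np
import pywt
from scipy.fft import dctn, idctn

# Two-level DWT, then a DCT on the low-frequency approximation band to form
# a "DC-Matrix"; the detail bands are flattened into an "AC-Matrix".
img = np.random.rand(128, 128)
cA2, detail2, detail1 = pywt.wavedec2(img, "db2", level=2)
dc_matrix = dctn(cA2, norm="ortho")
ac_matrix = np.hstack([band.ravel() for band in detail2 + detail1])
restored_cA2 = idctn(dc_matrix, norm="ortho")   # inverse of the DCT stage
```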

  1. L{sub 1/2} regularization based numerical method for effective reconstruction of bioluminescence tomography

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Xueli, E-mail: xlchen@xidian.edu.cn, E-mail: jimleung@mail.xidian.edu.cn; Yang, Defu; Zhang, Qitan; Liang, Jimin, E-mail: xlchen@xidian.edu.cn, E-mail: jimleung@mail.xidian.edu.cn [School of Life Science and Technology, Xidian University, Xi' an 710071 (China); Engineering Research Center of Molecular and Neuro Imaging, Ministry of Education (China)

    2014-05-14

    Even though bioluminescence tomography (BLT) exhibits significant potential and wide applications in macroscopic imaging of small animals in vivo, the inverse reconstruction is still a tough problem that has plagued researchers in a related area. The ill-posedness of inverse reconstruction arises from insufficient measurements and modeling errors, so that the inverse reconstruction cannot be solved directly. In this study, an l{sub 1/2} regularization based numerical method was developed for effective reconstruction of BLT. In the method, the inverse reconstruction of BLT was constrained into an l{sub 1/2} regularization problem, and then the weighted interior-point algorithm (WIPA) was applied to solve the problem through transforming it into obtaining the solution of a series of l{sub 1} regularizers. The feasibility and effectiveness of the proposed method were demonstrated with numerical simulations on a digital mouse. Stability verification experiments further illustrated the robustness of the proposed method for different levels of Gaussian noise.
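
    The reduction of the l{sub 1/2} regularizer to a sequence of weighted l{sub 1} problems can be sketched as follows; ISTA stands in for the paper's weighted interior-point algorithm (WIPA), and all parameter values are illustrative.

```python
import numpy as np

def soft(v, t):
    # Soft-thresholding, the proximal map of a (weighted) l1 penalty.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def l_half_via_reweighted_l1(A, y, lam=0.01, n_outer=5, n_inner=50, eps=1e-3):
    # min ||Ax - y||^2 + lam * sum |x_i|^(1/2), handled as a sequence of
    # weighted l1 problems whose weights come from the current iterate.
    x = np.zeros(A.shape[1])
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    for _ in range(n_outer):
        w = 0.5 / np.sqrt(np.abs(x) + eps)
        for _ in range(n_inner):           # ISTA on the weighted l1 problem
            x = soft(x - (A.T @ (A @ x - y)) / L, lam * w / L)
    return x
```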

  2. Comparison of phase unwrapping algorithms for topography reconstruction based on digital speckle pattern interferometry

    Science.gov (United States)

    Li, Yuanbo; Cui, Xiaoqian; Wang, Hongbei; Zhao, Mengge; Ding, Hongbin

    2017-10-01

    Digital speckle pattern interferometry (DSPI) can diagnose topography evolution in a real-time, continuous and non-destructive manner, and has been considered one of the most promising techniques for Plasma-Facing Components (PFCs) topography diagnostics under the complicated environment of a tokamak. For digital speckle pattern interferometry it is important to enhance speckle patterns and obtain the real topography of the ablated crater. In this paper, two numerical models based on the flood-fill algorithm have been developed to obtain the real profile by unwrapping the wrapped phase in the speckle interference pattern, which can be calculated from four intensity images by means of the 4-step phase-shifting technique. During phase unwrapping by means of the flood-fill algorithm, noise pollution and other inevitable factors can lead to poor-quality reconstruction results, which affects the authenticity of the restored topography. The calculation of quality parameters was therefore introduced to obtain a quality map from the wrapped phase map, and this work presents two different methods to calculate the quality parameters. The quality parameters are then used to guide the path of the flood-fill algorithm, with pixels having good quality parameters processed first, so that the quality of the speckle interference pattern reconstruction results is improved. A comparison between the flood-fill algorithm suited to speckle pattern interferometry and the quality-guided flood-fill algorithm (with the two different calculation approaches) shows that the errors caused by noise pollution and by the discontinuity of the fringes were successfully reduced.
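
    As a reference point for such quality-guided unwrappers, scikit-image ships a reliability-sorted 2D phase unwrapper that plays the same role; the phase ramp below is synthetic test data, not DSPI measurements.

```python
import numpy as np
from skimage.restoration import unwrap_phase

yy, xx = np.mgrid[0:256, 0:256]
true_phase = 0.2 * xx + 0.1 * yy                  # smooth synthetic ramp
wrapped = np.angle(np.exp(1j * true_phase))       # wrap into (-pi, pi]
recovered = unwrap_phase(wrapped)                 # true phase up to a constant
```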

  3. Efficient operator splitting algorithm for joint sparsity-regularized SPIRiT-based parallel MR imaging reconstruction.

    Science.gov (United States)

    Duan, Jizhong; Liu, Yu; Jing, Peiguang

    2018-02-01

    Self-consistent parallel imaging (SPIRiT) is an auto-calibrating model for the reconstruction of parallel magnetic resonance imaging, which can be formulated as a regularized SPIRiT problem. The Projections Onto Convex Sets (POCS) method has been used to solve the formulated regularized SPIRiT problem; however, the quality of the reconstructed image still needs to be improved. Though methods such as NonLinear Conjugate Gradients (NLCG) can achieve higher spatial resolution, they demand very complex computation and converge slowly. In this paper, we propose a new algorithm to solve the formulated Cartesian SPIRiT problem with JTV and JL1 regularization terms. The proposed algorithm uses the operator splitting (OS) technique to decompose the problem into a gradient problem and a denoising problem with two regularization terms, the latter solved by our proposed split Bregman based denoising algorithm, and adopts the Barzilai-Borwein method to update the step size. Simulation experiments on two in vivo data sets demonstrate that the proposed algorithm is 1.3 times faster than ADMM for datasets with 8 channels, and 2 times faster than ADMM for the dataset with 32 channels. Copyright © 2017 Elsevier Inc. All rights reserved.

  4. Median prior constrained TV algorithm for sparse view low-dose CT reconstruction.

    Science.gov (United States)

    Liu, Yi; Shangguan, Hong; Zhang, Quan; Zhu, Hongqing; Shu, Huazhong; Gui, Zhiguo

    2015-05-01

    It is known that lowering the X-ray tube current (mAs) or tube voltage (kVp) while simultaneously reducing the total number of X-ray views (sparse view) is an effective means of achieving low-dose computed tomography (CT) scans. However, the associated image quality from conventional filtered back-projection (FBP) reconstruction usually degrades due to excessive quantum noise. Although sparse-view CT reconstruction via total variation (TV), under the scanning protocol of reduced X-ray tube current, has been demonstrated to achieve significant radiation dose reduction while maintaining image quality, noticeable patchy artifacts still exist in the reconstructed images. In this study, to address the problem of patchy artifacts, we proposed a median prior constrained TV regularization that retains image quality by introducing an auxiliary vector m in register with the object. Specifically, the approximate action of m is to draw, in each iteration, an object voxel toward its own local median, aiming to improve low-dose image quality with sparse-view projection measurements. Subsequently, an alternating optimization algorithm is adopted to optimize the associated objective function. We refer to the median prior constrained TV regularization as "TV_MP" for simplicity. Experimental results on digital phantoms and a clinical phantom demonstrated that the proposed TV_MP with appropriate control parameters can ensure not only a higher signal-to-noise ratio (SNR) of the reconstructed image but also better resolution compared with the original TV method. Copyright © 2015 Elsevier Ltd. All rights reserved.
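
    The "pull toward the local median" action can be sketched in one line with a median filter; the blending-weight name beta is hypothetical, and in TV_MP this step would be interleaved with the TV-regularized update rather than applied alone.

```python
import numpy as np
from scipy.ndimage import median_filter

def median_prior_step(img, beta=0.3, size=3):
    # Draw each voxel toward its own local median; beta sets the pull
    # strength and size the neighborhood of the median (illustrative values).
    return (1.0 - beta) * img + beta * median_filter(img, size=size)
```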

  5. Fast parallel algorithm for three-dimensional distance-driven model in iterative computed tomography reconstruction

    International Nuclear Information System (INIS)

    Chen Jian-Lin; Li Lei; Wang Lin-Yuan; Cai Ai-Long; Xi Xiao-Qi; Zhang Han-Ming; Li Jian-Xin; Yan Bin

    2015-01-01

    The projection matrix model is used to describe the physical relationship between the reconstructed object and the projections. Such a model has a strong influence on projection and backprojection, two vital operations in iterative computed tomographic reconstruction. The distance-driven model (DDM) is a state-of-the-art technology that simulates forward and back projections. This model has a low computational complexity and a relatively high spatial resolution; however, few of its implementations support parallel operation with a matched projector/backprojector scheme. This study introduces a fast and parallelizable algorithm to improve the traditional DDM for computing the parallel projection and backprojection operations. Our proposed model has been implemented on a GPU (graphics processing unit) platform and has achieved satisfactory computational efficiency with no approximation. The runtime for the projection and backprojection operations with our model is approximately 4.5 s and 10.5 s per loop, respectively, with an image size of 256×256×256 and 360 projections with a size of 512×512. We compare several general algorithms that have been proposed for maximizing GPU efficiency by using unmatched projection/backprojection models in a parallel computation. The imaging resolution is not sacrificed and remains accurate during computed tomographic reconstruction. (paper)

  6. Derivation and implementation of a cone-beam reconstruction algorithm for nonplanar orbits

    International Nuclear Information System (INIS)

    Kudo, Hiroyuki; Saito, Tsuneo

    1994-01-01

    Smith and Grangeat each derived a cone-beam inversion formula that can be applied when a nonplanar orbit satisfying the completeness condition is used. Although Grangeat's inversion formula is mathematically different from Smith's, the two have similar overall structures. The contribution of this paper is two-fold. First, based on the derivation of Smith, the authors point out that Grangeat's inversion formula and Smith's can be conveniently described by a single formula (the Smith-Grangeat inversion formula) that takes the form of space-variant filtering followed by cone-beam backprojection. Furthermore, the resulting formula is reformulated for data acquisition systems with a planar detector to obtain a new reconstruction algorithm. Second, the authors make two significant modifications to the new algorithm to reduce artifacts and numerical errors encountered in its direct implementation. As for the exactness of the new algorithm, the following can be stated: the algorithm based on Grangeat's intermediate function is exact for any complete orbit, whereas that based on Smith's intermediate function should be considered an approximate inverse, except in the special case where almost every plane in 3-D space meets the orbit. The validity of the new algorithm is demonstrated by simulation studies
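
    For reference, one common statement of the relation underlying such intermediate functions is Grangeat's formula, which links the cone-beam data g(a, β) = ∫ f(a + tβ) dt taken from vertex a to the radial derivative of the 3-D Radon transform Rf (a sketch of the standard result, not this paper's reformulation):

        \frac{\partial}{\partial \rho} Rf(\boldsymbol{n}, \rho)\Big|_{\rho = \boldsymbol{a}\cdot\boldsymbol{n}}
        = \int_{S^2} g(\boldsymbol{a}, \boldsymbol{\beta})\,\delta'(\boldsymbol{\beta}\cdot\boldsymbol{n})\,d\boldsymbol{\beta},

    where Rf(n, ρ) = ∫ f(x) δ(x·n − ρ) dx and the prime denotes the derivative of the Dirac distribution.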

  7. Assessment of dedicated low-dose cardiac micro-CT reconstruction algorithms using the left ventricular volume of small rodents as a performance measure

    Energy Technology Data Exchange (ETDEWEB)

    Maier, Joscha, E-mail: joscha.maier@dkfz.de [Medical Physics in Radiology, German Cancer Research Center (DKFZ), Im Neuenheimer Feld 280, 69120 Heidelberg (Germany); Sawall, Stefan; Kachelrieß, Marc [Medical Physics in Radiology, German Cancer Research Center (DKFZ), Im Neuenheimer Feld 280, 69120 Heidelberg, Germany and Institute of Medical Physics, University of Erlangen–Nürnberg, 91052 Erlangen (Germany)

    2014-05-15

    Purpose: Phase-correlated microcomputed tomography (micro-CT) imaging plays an important role in the assessment of mouse models of cardiovascular diseases and the determination of functional parameters such as the left ventricular volume. As the current gold standard, the phase-correlated Feldkamp reconstruction (PCF), shows poor performance for low-dose scans, more sophisticated reconstruction algorithms have been proposed to enable low-dose imaging. In this study, the authors focus on the McKinnon-Bates (MKB) algorithm, the low-dose phase-correlated (LDPC) reconstruction, and the high-dimensional total variation minimization reconstruction (HDTV), and investigate their potential to accurately determine the left ventricular volume at dose levels from 50 to 500 mGy. The results were verified in phantom studies of a five-dimensional (5D) mathematical mouse phantom. Methods: Micro-CT data of eight mice, each administered an x-ray dose of 500 mGy, were acquired, retrospectively gated for cardiac and respiratory motion, and reconstructed using PCF, MKB, LDPC, and HDTV. Dose levels down to 50 mGy were simulated by using only a fraction of the projections. Contrast-to-noise ratio (CNR) was evaluated as a measure of image quality. Left ventricular volume was determined using different segmentation algorithms (Otsu, level sets, region growing). Forward projections of the 5D mouse phantom were performed to simulate a micro-CT scan. The simulated data were processed in the same way as the real mouse data sets. Results: Compared to the conventional PCF reconstruction, the MKB, LDPC, and HDTV algorithms yield images of increased quality in terms of CNR. While the MKB reconstruction provides only small improvements, a significant increase of the CNR is observed in LDPC and HDTV reconstructions. The phantom studies demonstrate that left ventricular volumes can be determined accurately at 500 mGy. For lower dose levels which were simulated for real mouse data sets, the

  8. Assessment of dedicated low-dose cardiac micro-CT reconstruction algorithms using the left ventricular volume of small rodents as a performance measure

    International Nuclear Information System (INIS)

    Maier, Joscha; Sawall, Stefan; Kachelrieß, Marc

    2014-01-01

    Purpose: Phase-correlated microcomputed tomography (micro-CT) imaging plays an important role in the assessment of mouse models of cardiovascular diseases and the determination of functional parameters such as the left ventricular volume. As the current gold standard, the phase-correlated Feldkamp reconstruction (PCF), shows poor performance for low-dose scans, more sophisticated reconstruction algorithms have been proposed to enable low-dose imaging. In this study, the authors focus on the McKinnon-Bates (MKB) algorithm, the low-dose phase-correlated (LDPC) reconstruction, and the high-dimensional total variation minimization reconstruction (HDTV), and investigate their potential to accurately determine the left ventricular volume at dose levels from 50 to 500 mGy. The results were verified in phantom studies of a five-dimensional (5D) mathematical mouse phantom. Methods: Micro-CT data of eight mice, each administered an x-ray dose of 500 mGy, were acquired, retrospectively gated for cardiac and respiratory motion, and reconstructed using PCF, MKB, LDPC, and HDTV. Dose levels down to 50 mGy were simulated by using only a fraction of the projections. Contrast-to-noise ratio (CNR) was evaluated as a measure of image quality. Left ventricular volume was determined using different segmentation algorithms (Otsu, level sets, region growing). Forward projections of the 5D mouse phantom were performed to simulate a micro-CT scan. The simulated data were processed in the same way as the real mouse data sets. Results: Compared to the conventional PCF reconstruction, the MKB, LDPC, and HDTV algorithms yield images of increased quality in terms of CNR. While the MKB reconstruction provides only small improvements, a significant increase of the CNR is observed in LDPC and HDTV reconstructions. The phantom studies demonstrate that left ventricular volumes can be determined accurately at 500 mGy. For lower dose levels which were simulated for real mouse data sets, the

  9. Assessment of dedicated low-dose cardiac micro-CT reconstruction algorithms using the left ventricular volume of small rodents as a performance measure.

    Science.gov (United States)

    Maier, Joscha; Sawall, Stefan; Kachelrieß, Marc

    2014-05-01

    Phase-correlated microcomputed tomography (micro-CT) imaging plays an important role in the assessment of mouse models of cardiovascular diseases and the determination of functional parameters such as the left ventricular volume. As the current gold standard, the phase-correlated Feldkamp reconstruction (PCF), shows poor performance for low-dose scans, more sophisticated reconstruction algorithms have been proposed to enable low-dose imaging. In this study, the authors focus on the McKinnon-Bates (MKB) algorithm, the low-dose phase-correlated (LDPC) reconstruction, and the high-dimensional total variation minimization reconstruction (HDTV), and investigate their potential to accurately determine the left ventricular volume at dose levels from 50 to 500 mGy. The results were verified in phantom studies of a five-dimensional (5D) mathematical mouse phantom. Micro-CT data of eight mice, each administered an x-ray dose of 500 mGy, were acquired, retrospectively gated for cardiac and respiratory motion, and reconstructed using PCF, MKB, LDPC, and HDTV. Dose levels down to 50 mGy were simulated by using only a fraction of the projections. Contrast-to-noise ratio (CNR) was evaluated as a measure of image quality. Left ventricular volume was determined using different segmentation algorithms (Otsu, level sets, region growing). Forward projections of the 5D mouse phantom were performed to simulate a micro-CT scan. The simulated data were processed in the same way as the real mouse data sets. Compared to the conventional PCF reconstruction, the MKB, LDPC, and HDTV algorithms yield images of increased quality in terms of CNR. While the MKB reconstruction provides only small improvements, a significant increase of the CNR is observed in LDPC and HDTV reconstructions. The phantom studies demonstrate that left ventricular volumes can be determined accurately at 500 mGy. For lower dose levels which were simulated for real mouse data sets, the HDTV algorithm shows the

  10. PCX, Interior-Point Linear Programming Solver

    International Nuclear Information System (INIS)

    Czyzyk, J.

    2004-01-01

    1 - Description of program or function: PCX solves linear programming problems using the Mehrotra predictor-corrector interior-point algorithm. PCX can be called as a subroutine or used in stand-alone mode, with data supplied from an MPS file. The software incorporates modules that can be used separately from the linear programming solver, including a pre-solve routine and data structure definitions. 2 - Methods: The Mehrotra predictor-corrector method is a primal-dual interior-point method for linear programming. The starting point is determined from a modified least squares heuristic. Linear systems of equations are solved at each interior-point iteration via a sparse Cholesky algorithm native to the code. A pre-solver is incorporated in the code to eliminate inefficiencies in the user's formulation of the problem. 3 - Restriction on the complexity of the problem: There are no size limitations built into the program. The size of the problems that can be solved is limited by the RAM and swap space on the user's computer
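
    As a reminder of the method's structure (the standard textbook form, not anything PCX-specific): for the LP min cᵀx subject to Ax = b, x ≥ 0, with duals (y, s) and X = diag(x), S = diag(s), the predictor solves the affine-scaling Newton system and the corrector re-solves it with a centering and second-order term, reusing the same sparse Cholesky factorization:

        S\,\Delta x_{\mathrm{aff}} + X\,\Delta s_{\mathrm{aff}} = -XSe, \qquad
        \mu = \frac{x^{\top}s}{n}, \qquad
        \sigma = \left(\frac{\mu_{\mathrm{aff}}}{\mu}\right)^{3},

        S\,\Delta x + X\,\Delta s = -XSe - \Delta X_{\mathrm{aff}}\,\Delta S_{\mathrm{aff}}\,e + \sigma\mu e.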

  11. Fully 3D PET image reconstruction using a Fourier preconditioned conjugate-gradient algorithm

    International Nuclear Information System (INIS)

    Fessler, J.A.; Ficaro, E.P.

    1996-01-01

    Since the data sizes in fully 3D PET imaging are very large, iterative image reconstruction algorithms must converge in very few iterations to be useful. One can improve the convergence rate of the conjugate-gradient (CG) algorithm by incorporating preconditioning operators that approximate the inverse of the Hessian of the objective function. If the 3D cylindrical PET geometry were not truncated at the ends, then the Hessian of the penalized least-squares objective function would be approximately shift-invariant, i.e. G'G would be nearly block-circulant, where G is the system matrix. We propose a Fourier preconditioner based on this shift-invariant approximation to the Hessian. Results show that this preconditioner significantly accelerates the convergence of the CG algorithm with only a small increase in computation
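
    A sketch of how such a preconditioner could be applied, assuming kernel holds one column of the block-circulant approximation to G'G reshaped to the image grid (the names and the eigenvalue floor are illustrative):

        import numpy as np

        def make_fourier_precond(kernel):
            # FFT of the circulant kernel gives the (approximate) eigenvalues of G'G.
            H = np.fft.fftn(kernel)
            Hinv = 1.0 / np.maximum(np.abs(H), 1e-6)  # floor tiny eigenvalues

            def apply(r):
                # M^{-1} r implemented as a Fourier-domain division
                return np.real(np.fft.ifftn(np.fft.fftn(r) * Hinv))

            return apply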

  12. Segmentation-Driven Tomographic Reconstruction

    DEFF Research Database (Denmark)

    Kongskov, Rasmus Dalgas

    The tomographic reconstruction problem is concerned with creating a model of the interior of an object from some measured data, typically projections of the object. After reconstructing an object it is often desired to segment it, either automatically or manually. For computed tomography (CT)... the reconstruction can be carried out such that the segmentation subsequently can be performed by use of a simple segmentation method, for instance just a thresholding method. We tested the advantages of going from a two-stage reconstruction method to a one-stage segmentation-driven reconstruction method for the phase contrast tomography reconstruction......

  13. Enhanced temporal resolution at cardiac CT with a novel CT image reconstruction algorithm: Initial patient experience

    Energy Technology Data Exchange (ETDEWEB)

    Apfaltrer, Paul, E-mail: paul.apfaltrer@medma.uni-heidelberg.de [Department of Radiology and Radiological Science, Medical University of South Carolina, Charleston, PO Box 250322, 169 Ashley Avenue, Charleston, SC 29425 (United States); Institute of Clinical Radiology and Nuclear Medicine, Medical Faculty Mannheim, Heidelberg University, Theodor-Kutzer-Ufer 1-3, D-68167 Mannheim (Germany); Schoendube, Harald, E-mail: harald.schoendube@siemens.com [Siemens Healthcare, CT Division, Forchheim Siemens, Siemensstr. 1, 91301 Forchheim (Germany); Schoepf, U. Joseph, E-mail: schoepf@musc.edu [Department of Radiology and Radiological Science, Medical University of South Carolina, Charleston, PO Box 250322, 169 Ashley Avenue, Charleston, SC 29425 (United States); Allmendinger, Thomas, E-mail: thomas.allmendinger@siemens.com [Siemens Healthcare, CT Division, Forchheim Siemens, Siemensstr. 1, 91301 Forchheim (Germany); Tricarico, Francesco, E-mail: francescotricarico82@gmail.com [Department of Radiology and Radiological Science, Medical University of South Carolina, Charleston, PO Box 250322, 169 Ashley Avenue, Charleston, SC 29425 (United States); Department of Bioimaging and Radiological Sciences, Catholic University of the Sacred Heart, “A. Gemelli” Hospital, Largo A. Gemelli 8, Rome (Italy); Schindler, Andreas, E-mail: andreas.schindler@campus.lmu.de [Department of Radiology and Radiological Science, Medical University of South Carolina, Charleston, PO Box 250322, 169 Ashley Avenue, Charleston, SC 29425 (United States); Vogt, Sebastian, E-mail: sebastian.vogt@siemens.com [Siemens Healthcare, CT Division, Forchheim Siemens, Siemensstr. 1, 91301 Forchheim (Germany); Sunnegårdh, Johan, E-mail: johan.sunnegardh@siemens.com [Siemens Healthcare, CT Division, Forchheim Siemens, Siemensstr. 1, 91301 Forchheim (Germany); and others

    2013-02-15

    Objective: To evaluate the effect of a temporal resolution improvement method (TRIM) for cardiac CT on diagnostic image quality for coronary artery assessment. Materials and methods: The TRIM algorithm employs an iterative approach to reconstruct images from less than 180° of projections and uses a histogram constraint to prevent the occurrence of limited-angle artifacts. This algorithm was applied in 11 obese patients (7 men, 67.2 ± 9.8 years) who had undergone second-generation dual-source cardiac CT with 120 kV, 175–426 mAs, and 500 ms gantry rotation. All data were reconstructed with a temporal resolution of 250 ms using traditional filtered back-projection (FBP) and of 200 ms using the TRIM algorithm. Contrast attenuation and contrast-to-noise ratio (CNR) were measured in the ascending aorta. The presence and severity of coronary motion artifacts were rated on a 4-point Likert scale. Results: All scans were considered of diagnostic quality. Mean BMI was 36 ± 3.6 kg/m². Average heart rate was 60 ± 9 bpm. Mean effective dose was 13.5 ± 4.6 mSv. When comparing FBP- and TRIM-reconstructed series, the attenuation within the ascending aorta (392 ± 70.7 vs. 396.8 ± 70.1 HU, p > 0.05) and the CNR (13.2 ± 3.2 vs. 11.7 ± 3.1, p > 0.05) were not significantly different. A total of 110 coronary segments were evaluated. All studies were deemed diagnostic; however, there was a significant (p < 0.05) difference in the severity score distribution of coronary motion artifacts between FBP (median = 2.5) and TRIM (median = 2.0) reconstructions. Conclusion: The algorithm evaluated here delivers diagnostic image quality of the coronary arteries despite 500 ms gantry rotation. Possible applications include improvement of cardiac imaging on systems with slower gantry rotation or mitigation of the trade-off between temporal resolution and CNR in obese patients.

  14. Research on non-uniform strain profile reconstruction along fiber Bragg grating via genetic programming algorithm and interrelated experimental verification

    Science.gov (United States)

    Zheng, Shijie; Zhang, Nan; Xia, Yanjun; Wang, Hongtao

    2014-03-01

    A new heuristic strategy for non-uniform strain profile reconstruction along fiber Bragg gratings is proposed in this paper, based on the modified transfer matrix and a Genetic Programming (GP) algorithm. The present method uses Genetic Programming to determine the applied strain field as a function of position along the fiber length. The structures that undergo adaptation in genetic programming are hierarchical structures, unlike the strings operated on by a conventional genetic algorithm. GP regresses the strain-profile function that best matches the 'measured' spectrum, making the spatial resolution of the strain reconstruction arbitrarily high, or even infinite in principle. This paper also presents an experimental verification of the reconstruction of non-homogeneous strain fields using GP. The results are compared with numerical calculations by the finite element method. Both the simulation examples and the experimental results demonstrate that Genetic Programming can effectively reconstruct a continuous profile expression along the whole FBG and greatly improves computational efficiency and accuracy.

  15. A New Track Reconstruction Algorithm suitable for Parallel Processing based on Hit Triplets and Broken Lines

    Directory of Open Access Journals (Sweden)

    Schöning André

    2016-01-01

    Track reconstruction in high track multiplicity environments at current and future high-rate particle physics experiments is a big challenge and very time consuming. The search for track seeds and the fitting of track candidates are usually the most time consuming steps in track reconstruction. Here, a new and fast track reconstruction method based on hit triplets is proposed which exploits a three-dimensional fit model including multiple scattering and hit uncertainties from the very start, including the search for track seeds. The hit-triplet-based reconstruction method assumes a homogeneous magnetic field, which allows an analytical solution for the triplet fit. This method is highly parallelizable, needs fewer operations than other standard track reconstruction methods and is therefore ideal for implementation on parallel computing architectures. The proposed track reconstruction algorithm has been studied in the context of the Mu3e experiment and a typical LHC experiment.

  16. Study of a reconstruction algorithm for electrons in the ATLAS experiment in LHC

    International Nuclear Information System (INIS)

    Kerschen, N.

    2006-09-01

    The ATLAS experiment is a general-purpose particle physics experiment mainly aimed at discovering the origin of mass through the search for the Higgs boson. In order to achieve this, the Large Hadron Collider at CERN will accelerate two proton beams and make them collide at the centre of the experiment. ATLAS will discover new particles through the measurement of their decay products. Electrons are such decay products: they produce an electromagnetic shower in the calorimeter in which they lose all their energy. The calorimeter is divided into cells, and the deposited energy is reconstructed using an algorithm that assembles the cells into clusters. The purpose of this thesis is to study a new kind of algorithm that adapts the cluster to the shower topology. In order to reconstruct the energy of the initially created electron, the cluster has to be calibrated by taking into account the energy lost in the dead material in front of the calorimeter. Therefore, a Monte-Carlo simulation of the ATLAS detector has been used to correct for effects of response modulation in position and in energy and to optimise the energy resolution as well as the linearity. An analysis of test beam data has been performed to study the behaviour of the algorithm in a more realistic environment. We show that the requirements of the experiment can be met for the linearity and resolution. The improvement of this new algorithm, compared to a fixed-size cluster, is the better recovery of bremsstrahlung photons emitted by the electron in the material in front of the calorimeter. A Monte-Carlo analysis of the Higgs boson decay into four electrons confirms this result. (author)

  17. Iterative image reconstruction algorithms in coronary CT angiography improve the detection of lipid-core plaque - a comparison with histology

    International Nuclear Information System (INIS)

    Puchner, Stefan B.; Ferencik, Maros; Maurovich-Horvat, Pal; Nakano, Masataka; Otsuka, Fumiyuki; Virmani, Renu; Kauczor, Hans-Ulrich; Hoffmann, Udo; Schlett, Christopher L.

    2015-01-01

    To evaluate whether iterative reconstruction algorithms improve the diagnostic accuracy of coronary CT angiography (CCTA) for the detection of lipid-core plaque (LCP) compared to histology. CCTA and histological data were acquired from three ex vivo hearts. CCTA images were reconstructed using filtered back projection (FBP), adaptive-statistical (ASIR) and model-based (MBIR) iterative algorithms. Vessel cross-sections were co-registered between FBP/ASIR/MBIR and histology. Plaque area <60 HU was larger in cross-sections with LCP than in those without (mm²: 5.78 ± 2.29 vs. 3.39 ± 1.68 for FBP; 5.92 ± 1.87 vs. 3.43 ± 1.62 for ASIR; 6.40 ± 1.55 vs. 3.49 ± 1.50 for MBIR; all p < 0.0001). The AUC for detecting LCP was 0.803/0.850/0.903 for FBP/ASIR/MBIR and was significantly higher for MBIR compared to FBP (p = 0.01). MBIR increased the sensitivity for detection of LCP by CCTA. Plaque area <60 HU in CCTA was associated with LCP in histology regardless of the reconstruction algorithm. However, MBIR demonstrated higher accuracy for detecting LCP, which may improve vulnerable plaque detection by CCTA. (orig.)

  18. A parallel stereo reconstruction algorithm with applications in entomology (APSRA)

    Science.gov (United States)

    Bhasin, Rajesh; Jang, Won Jun; Hart, John C.

    2012-03-01

    We propose a fast parallel algorithm for the reconstruction of three-dimensional point clouds of insects from binocular stereo image pairs, using a hierarchical approach for disparity estimation. Entomologists study various features of insects to classify them, build their distribution maps, and discover genetic links between specimens, among various other essential tasks. This information is important to the pesticide and pharmaceutical industries, among others. When considering the large collections of insects entomologists analyze, it becomes difficult to physically handle the entire collection and share the data with researchers across the world. With the method presented in our work, entomologists can create an image database for their collections and use the 3D models for studying the shape and structure of the insects, thus making the collections easier to maintain and share. Initial feedback shows that the reconstructed 3D models preserve the shape and size of the specimen. We further optimize our results to incorporate multiview stereo, which produces better overall structure of the insects. Our main contribution is applying stereoscopic vision techniques to entomology to solve the problems faced by entomologists.

  19. Sparse Reconstruction of Regional Gravity Signal Based on Stabilized Orthogonal Matching Pursuit (SOMP)

    Science.gov (United States)

    Saadat, S. A.; Safari, A.; Needell, D.

    2016-06-01

    The main role of gravity field recovery is the study of dynamic processes in the interior of the Earth, especially in exploration geophysics. In this paper, the Stabilized Orthogonal Matching Pursuit (SOMP) algorithm is introduced for sparse reconstruction of regional gravity signals of the Earth. In practical applications, ill-posed problems may be encountered with respect to unknown parameters that are sensitive to data perturbations. Therefore, an appropriate regularization method needs to be applied to find a stabilized solution. The SOMP algorithm regularizes the norm of the solution vector while also minimizing the norm of the corresponding residual vector. In this procedure, a convergence point of the algorithm, which specifies the optimal sparsity level of the problem, is determined. The results show that the SOMP algorithm finds the stabilized solution for the ill-posed problem at the optimal sparsity level, improving upon existing sparsity-based approaches.
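
    The greedy core that SOMP builds on can be sketched as plain orthogonal matching pursuit (assuming unit-norm columns of A); the stabilization, i.e. choosing the sparsity level at the algorithm's convergence point, is indicated here only by a hypothetical residual-tolerance stopping rule:

        import numpy as np

        def omp(A, y, k_max, tol=1e-6):
            # Plain orthogonal matching pursuit over dictionary A.
            support = []
            r = y.copy()
            coef = np.zeros(0)
            for _ in range(k_max):
                j = int(np.argmax(np.abs(A.T @ r)))   # atom most correlated with residual
                if j not in support:
                    support.append(j)
                coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
                r = y - A[:, support] @ coef          # orthogonalized residual
                if np.linalg.norm(r) < tol:
                    break
            x = np.zeros(A.shape[1])
            x[support] = coef
            return x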

  20. SU-G-TeP2-09: Evaluation of the MaxFOV Extended Field of View (EFOV) Reconstruction Algorithm On a GE RT590 CT Scanner

    International Nuclear Information System (INIS)

    Grzetic, S; Weldon, M; Noa, K; Gupta, N

    2016-01-01

    Purpose: This study compares the newly released MaxFOV Revision 1 EFOV reconstruction algorithm for the GE RT590 to the older WideView EFOV algorithm. Two radiotherapy overlays, from Q-fix and Diacor, are included in our analysis. Hounsfield Units (HU) generated with the WideView algorithm varied in the extended field (beyond 50 cm), and the scanned object's border varied from slice to slice. A validation of HU consistency between the two reconstruction algorithms is performed. Methods: A CatPhan 504 and a CIRS062 Electron Density Phantom were scanned on a GE RT590 CT simulator. The phantoms were positioned in multiple locations within the scan field of view so that some of the density plugs were outside the 50 cm reconstruction circle. Images were reconstructed using both the WideView and MaxFOV algorithms. The HU for each scan were characterized both as averages over a volume and in profile. Results: HU values are consistent between the two algorithms. Low-density material shows a slight increase in HU value, and high-density material a slight decrease, as the distance from the sweet spot increases. Border inconsistencies and shading artifacts are still present with the MaxFOV reconstruction on the Q-fix overlay but not the Diacor overlay (it should be noted that the Q-fix overlay is not currently GE-certified). HU values for water outside the 50 cm FOV are within 40 HU of reconstructions at the sweet spot of the scanner. CatPhan HU profiles show improvement with the MaxFOV algorithm as they approach the scanner edge. Conclusion: The new MaxFOV algorithm improves the contour border for objects outside of the standard FOV when using a GE-approved tabletop. Air cavities outside of the standard FOV create inconsistent object borders. HU consistency is within GE specifications, and the accuracy of the phantom edge improves. Further adjustments to the algorithm are being investigated by GE.

  1. A cone-beam reconstruction algorithm using shift-variant filtering and cone-beam backprojection

    International Nuclear Information System (INIS)

    Defrise, M.; Clack, R.

    1994-01-01

    An exact inversion formula written in the form of shift-variant filtered-backprojection (FBP) is given for reconstruction from cone-beam data taken from any orbit satisfying Tuy's sufficiency conditions. The method is based on a result of Grangeat, involving the derivative of the three-dimensional (3-D) Radon transform, but unlike Grangeat's algorithm, no 3D rebinning step is required. Data redundancy, which occurs when several cone-beam projections supply the same values in the Radon domain, is handled using an elegant weighting function and without discarding data. The algorithm is expressed in a convenient cone-beam detector reference frame, and a specific example for the case of a dual orthogonal circular orbit is presented. When the method is applied to a single circular orbit, it is shown to be equivalent to the well-known algorithm of Feldkamp et al.

  2. Evaluation of iterative algorithms for tomography image reconstruction: A study using a third generation industrial tomography system

    Energy Technology Data Exchange (ETDEWEB)

    Velo, Alexandre F.; Carvalho, Diego V.; Alvarez, Alexandre G.; Hamada, Margarida M.; Mesquita, Carlos H., E-mail: afvelo@usp.br [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil)

    2017-07-01

    The greatest impact of tomography technology currently occurs in medicine. This success is due to the fact that the human body presents standardized dimensions with well-established composition. These conditions are not found in industrial objects. In industry, there is much interest in using tomography in order to know the interior of (1) manufactured industrial objects or (2) the machines and their means of production. In these cases, the purpose of the tomography is (a) to control the quality of the final product and (b) to optimize production, contributing to the pilot phase of projects and analyzing the quality of the means of production. This scanning system is a non-destructive, efficient and fast method for providing sectional images of industrial objects and is able to show the dynamic processes and the dispersion of material structures within these objects. In this context, it is important that the reconstructed image presents high spatial resolution with satisfactory temporal resolution, so the image reconstruction algorithm has to meet these requirements. This work analyzes three different iterative algorithms: the Maximum Likelihood Estimation Method (MLEM), the Maximum Likelihood Transmitted Method (MLTR) and the Simultaneous Iterative Reconstruction Technique (SIRT). The analysis consists of measuring the contrast-to-noise ratio (CNR), the root mean square error (RMSE) and the Modulation Transfer Function (MTF) to determine which algorithm best fits the conditions for optimizing the system. The algorithms and the image quality analysis were performed with Matlab® 2013b. (author)

  3. Evaluation of iterative algorithms for tomography image reconstruction: A study using a third generation industrial tomography system

    International Nuclear Information System (INIS)

    Velo, Alexandre F.; Carvalho, Diego V.; Alvarez, Alexandre G.; Hamada, Margarida M.; Mesquita, Carlos H.

    2017-01-01

    The greatest impact of tomography technology currently occurs in medicine. This success is due to the fact that the human body presents standardized dimensions with well-established composition. These conditions are not found in industrial objects. In industry, there is much interest in using tomography in order to know the interior of (1) manufactured industrial objects or (2) the machines and their means of production. In these cases, the purpose of the tomography is (a) to control the quality of the final product and (b) to optimize production, contributing to the pilot phase of projects and analyzing the quality of the means of production. This scanning system is a non-destructive, efficient and fast method for providing sectional images of industrial objects and is able to show the dynamic processes and the dispersion of material structures within these objects. In this context, it is important that the reconstructed image presents high spatial resolution with satisfactory temporal resolution, so the image reconstruction algorithm has to meet these requirements. This work analyzes three different iterative algorithms: the Maximum Likelihood Estimation Method (MLEM), the Maximum Likelihood Transmitted Method (MLTR) and the Simultaneous Iterative Reconstruction Technique (SIRT). The analysis consists of measuring the contrast-to-noise ratio (CNR), the root mean square error (RMSE) and the Modulation Transfer Function (MTF) to determine which algorithm best fits the conditions for optimizing the system. The algorithms and the image quality analysis were performed with Matlab® 2013b. (author)
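
    For reference, the MLEM update has the standard multiplicative form below (with system matrix elements a_ij and measured counts y_i); this is the textbook statement, not anything specific to this implementation:

        x_j^{(k+1)} = \frac{x_j^{(k)}}{\sum_i a_{ij}} \sum_i a_{ij} \, \frac{y_i}{\sum_l a_{il}\, x_l^{(k)}}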

  4. Statistical analysis of nonlinearly reconstructed near-infrared tomographic images: Part I--Theory and simulations.

    Science.gov (United States)

    Pogue, Brian W; Song, Xiaomei; Tosteson, Tor D; McBride, Troy O; Jiang, Shudong; Paulsen, Keith D

    2002-07-01

    Near-infrared (NIR) diffuse tomography is an emerging method for imaging the interior of tissues to quantify concentrations of hemoglobin and exogenous chromophores non-invasively in vivo. It often exploits an optical diffusion model-based image reconstruction algorithm to estimate spatial property values from measurements of the light flux at the surface of the tissue. In this study, mean-squared error (MSE) over the image is used to evaluate methods for regularizing the ill-posed inverse image reconstruction problem in NIR tomography. Estimates of image bias and image standard deviation were calculated based upon 100 repeated reconstructions of a test image with randomly distributed noise added to the light flux measurements. It was observed that the bias error dominates at high regularization parameter values while variance dominates as the algorithm is allowed to approach the optimal solution. This optimum does not necessarily correspond to the minimum projection error solution, but typically requires further iteration with a decreasing regularization parameter to reach the lowest image error. Increasing measurement noise causes a need to constrain the minimum regularization parameter to higher values in order to achieve a minimum in the overall image MSE.
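
    The per-image error measure used here decomposes in the usual way. Writing μ̂_j for the reconstructed value at pixel j over repeated noise realizations and μ_j for the truth, the standard identity below is consistent with the bias/variance trade-off described above:

        \mathrm{MSE} = \frac{1}{N}\sum_{j=1}^{N}\Big[\underbrace{\big(\mathbb{E}[\hat\mu_j]-\mu_j\big)^2}_{\text{bias}^2}
        + \underbrace{\mathbb{E}\big[(\hat\mu_j-\mathbb{E}[\hat\mu_j])^2\big]}_{\text{variance}}\Big]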

  5. Fast cross-projection algorithm for reconstruction of seeds in prostate brachytherapy

    International Nuclear Information System (INIS)

    Narayanan, Sreeram; Cho, Paul S.; Marks, Robert J. II

    2002-01-01

    A fast method of seed matching and reconstruction in prostate brachytherapy is proposed. Previous approaches have required all seeds to be matched with all other seeds in the other projections. The fast cross-projection algorithm for the reconstruction of seeds (Fast-CARS) allows a given seed to be matched with only a subset of seeds in other projections. This subset lies in a proximal region centered about the projection, onto the other projection planes, of the line connecting the seed to its source. The proposed technique permits a significant reduction in computational overhead, as measured by the required number of matching tests. The number of multiplications and additions is also vastly reduced, with no trade-off in accuracy. Because of its speed, Fast-CARS can be used in applications requiring real-time performance, such as intraoperative dosimetry of prostate brachytherapy. Furthermore, the proposed method makes practical the use of a larger number of views, as opposed to previous techniques limited to a maximum of three views

  6. Virtual patient 3D dose reconstruction using in air EPID measurements and a back-projection algorithm for IMRT and VMAT treatments.

    Science.gov (United States)

    Olaciregui-Ruiz, Igor; Rozendaal, Roel; van Oers, René F M; Mijnheer, Ben; Mans, Anton

    2017-05-01

    At our institute, a transit back-projection algorithm is used clinically to reconstruct in vivo patient and in-phantom 3D dose distributions using EPID measurements behind a patient or a polystyrene slab phantom, respectively. In this study, an extension to this algorithm is presented whereby in-air EPID measurements are used in combination with CT data to reconstruct 'virtual' 3D dose distributions. By combining virtual and in vivo patient verification data for the same treatment, patient-related errors can be separated from machine, planning and model errors. The virtual back-projection algorithm is described and verified against the transit algorithm with measurements made behind a slab phantom, against dose measurements made with an ionization chamber and with the OCTAVIUS 4D system, as well as against TPS patient data. Virtual and in vivo patient dose verification results are also compared. Virtual dose reconstructions agree within 1% with ionization chamber measurements. The average γ-pass rate values (3% global dose/3 mm) in the 3D dose comparison with the OCTAVIUS 4D system and the TPS patient data are 98.5 ± 1.9% (1 SD) and 97.1 ± 2.9% (1 SD), respectively. For virtual patient dose reconstructions, the differences with the TPS in median dose to the PTV remain within 4%. Virtual patient dose reconstruction makes pre-treatment verification based on deviations of DVH parameters feasible and eliminates the need for phantom positioning and re-planning. Virtual patient dose reconstructions have additional value in the inspection of in vivo deviations, particularly in situations where CBCT data are not available (or not conclusive). Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.

  7. An Adaptive Threshold Image Reconstruction Algorithm of Oil-Water Two-Phase Flow in Electrical Capacitance Tomography System

    International Nuclear Information System (INIS)

    Qin, M; Chen, D Y; Wang, L L; Yu, X Y

    2006-01-01

    The subject investigated in this paper is an 8-electrode electrical capacitance tomography (ECT) system for oil-water two-phase flow, and its measuring principle is analysed. Building on the ART image-reconstruction algorithm, an adaptive-threshold image reconstruction method is presented to improve the quality of the reconstructed image and the accuracy of the calculated concentration; the measurement error is generally about 1%. This method avoids several drawbacks of other measurement methods, such as slow speed, high cost and poor safety. It therefore offers a new method for concentration measurement of oil-water two-phase flow.

  8. 3-D image reconstruction in radiology

    International Nuclear Information System (INIS)

    Grangeat, P.

    1999-01-01

    In this course, we present highlights of fully 3-D image reconstruction algorithms used in 3-D X-ray Computed Tomography (3-D-CT) and 3-D Rotational Radiography (3-D-RR). We first consider the case of spiral CT with a one-row detector. Starting from the 2-D fan-beam inversion formula for a circular trajectory, we introduce the spiral CT 3-D image reconstruction algorithm using axial interpolation for each transverse slice. In order to improve X-ray detection efficiency and to speed up the acquisition process, the future lies in multi-row detectors associated with a small-angle cone-beam geometry. The generalization of the 2-D fan-beam image reconstruction algorithm to cone-beam geometry yields direct inversion formulas, referred to as Feldkamp's algorithm for a circular trajectory and Wang's algorithm for a spiral trajectory. However, large-area detectors do exist, such as radiological image intensifiers or, in the near future, solid-state detectors. To get a larger zoom effect, these define a cone-beam geometry associated with a large aperture angle. For this case, we introduce an indirect image reconstruction algorithm based on plane rebinning in the Radon domain. We will present some results from a prototype MORPHOMETER device using the RADON reconstruction software. Lastly, we consider the special case of 3-D Rotational Digital Subtraction Angiography with a restricted number of views. We introduce constrained optimization algorithms using quadratic, entropic or half-quadratic constraints. A generalized ART (Algebraic Reconstruction Technique) iterative reconstruction algorithm can be derived from the Bregman algorithm. We present reconstructed vascular trees from a prototype MORPHOMETER device. (author)

  9. Unveiling the development of intracranial injury using dynamic brain EIT: an evaluation of current reconstruction algorithms.

    Science.gov (United States)

    Li, Haoting; Chen, Rongqing; Xu, Canhua; Liu, Benyuan; Tang, Mengxing; Yang, Lin; Dong, Xiuzhen; Fu, Feng

    2017-08-21

    Dynamic brain electrical impedance tomography (EIT) is a promising technique for continuously monitoring the development of cerebral injury. While many reconstruction algorithms are available for brain EIT, there is still a lack of studies comparing their performance in the context of dynamic brain monitoring. To address this problem, we develop a framework for evaluating current algorithms on their ability to correctly identify small intracranial conductivity changes. First, a simulated 3D head phantom with a realistic layered structure and impedance distribution is developed. Next, several reconstruction algorithms, such as back projection (BP), damped least-squares (DLS), Bayesian, split Bregman (SB) and GREIT, are introduced. We investigate their temporal response, noise performance, and location and shape errors with respect to different noise levels on the simulation phantom. The results show that the SB algorithm demonstrates superior performance in reducing image error. To further improve the location accuracy, we optimize SB by incorporating brain structure-based conductivity distribution priors, in which differences between the conductivities of different brain tissues and the inhomogeneous conductivity distribution of the skull are considered. We compare this novel algorithm (called SB-IBCD) with SB and DLS using anatomically correct head-shaped phantoms with spatially varying skull conductivity. Main results and significance: The results showed that SB-IBCD is the most effective in unveiling small intracranial conductivity changes, reducing the image error by an average of 30.0% compared to DLS.

  10. Advancements to the planogram frequency–distance rebinning algorithm

    International Nuclear Information System (INIS)

    Champley, Kyle M; Kinahan, Paul E; Raylman, Raymond R

    2010-01-01

    In this paper we consider the task of image reconstruction in positron emission tomography (PET) with the planogram frequency–distance rebinning (PFDR) algorithm. The PFDR algorithm is a rebinning algorithm for PET systems with panel detectors. The algorithm is derived in the planogram coordinate system which is a native data format for PET systems with panel detectors. A rebinning algorithm averages over the redundant four-dimensional set of PET data to produce a three-dimensional set of data. Images can be reconstructed from this rebinned three-dimensional set of data. This process enables one to reconstruct PET images more quickly than reconstructing directly from the four-dimensional PET data. The PFDR algorithm is an approximate rebinning algorithm. We show that implementing the PFDR algorithm followed by the (ramp) filtered backprojection (FBP) algorithm in linogram coordinates from multiple views reconstructs a filtered version of our image. We develop an explicit formula for this filter which can be used to achieve exact reconstruction by means of a modified FBP algorithm applied to the stack of rebinned linograms and can also be used to quantify the errors introduced by the PFDR algorithm. This filter is similar to the filter in the planogram filtered backprojection algorithm derived by Brasse et al. The planogram filtered backprojection and exact reconstruction with the PFDR algorithm require complete projections which can be completed with a reprojection algorithm. The PFDR algorithm is similar to the rebinning algorithm developed by Kao et al. By expressing the PFDR algorithm in detector coordinates, we provide a comparative analysis between the two algorithms. Numerical experiments using both simulated data and measured data from a positron emission mammography/tomography (PEM/PET) system are performed. Images are reconstructed by PFDR+FBP (PFDR followed by 2D FBP reconstruction), PFDRX (PFDR followed by the modified FBP algorithm for exact

  11. Simulation of 4-turn algorithms for reconstructing lattice optic functions from orbit measurements

    International Nuclear Information System (INIS)

    Koscielniak, S.; Iliev, A.

    1994-06-01

    We describe algorithms for reconstructing tune, closed-orbit, beta-function and phase advance from four individual turns of beam orbit acquisition data, under the assumption of coherent, almost linear and uncoupled betatron oscillations. To estimate the beta-function at, and phase advance between, position monitors, we require at least one anchor location consisting of two monitors separated by a drift. The algorithms were submitted to a Monte Carlo analysis to find the likely measurement accuracy of the optics functions in the KAON Factory Booster ring racetrack lattice, assuming beam position monitors with surveying and reading errors, and assuming an imperfect lattice with gradient and surveying errors. Some of the results of this study are reported. (author)
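
    To see how four turns suffice, note that for a linear uncoupled oscillation x_n = x_co + A cos(2πνn + φ) consecutive turns obey a three-term recurrence, so turns x_1..x_4 give two equations in the two unknowns cos(2πν) and the closed orbit x_co. This is a sketch of the principle, not the authors' full derivation:

        x_{n+1} + x_{n-1} - 2x_{\mathrm{co}} = 2\cos(2\pi\nu)\,(x_n - x_{\mathrm{co}})
        \quad\Rightarrow\quad
        \cos(2\pi\nu) = \frac{(x_4 - x_3) + (x_2 - x_1)}{2\,(x_3 - x_2)}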

  12. The complexity of interior point methods for solving discounted turn-based stochastic games

    DEFF Research Database (Denmark)

    Hansen, Thomas Dueholm; Ibsen-Jensen, Rasmus

    2013-01-01

    for general 2TBSGs. This implies that a number of interior point methods can be used to solve 2TBSGs. We consider two such algorithms: the unified interior point method of Kojima, Megiddo, Noma, and Yoshise, and the interior point potential reduction algorithm of Kojima, Megiddo, and Ye. The algorithms run...... For a game with n states and discount factor γ we get κ = Θ(n/(1−γ)²), −δ = Θ(n/√(1−γ)), and 1/θ = Θ(n/(1−γ)²) in the worst case. The lower bounds for κ, −δ, and 1/θ are all obtained using the same family of deterministic games....

  13. Dendroclimatic transfer functions revisited: Little Ice Age and Medieval Warm Period summer temperatures reconstructed using artificial neural networks and linear algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Helama, S.; Holopainen, J.; Eronen, M. [Department of Geology, University of Helsinki, (Finland); Makarenko, N.G. [Russian Academy of Sciences, St. Petersburg (Russian Federation). Pulkovo Astronomical Observatory; Karimova, L.M.; Kruglun, O.A. [Institute of Mathematics, Almaty (Kazakhstan); Timonen, M. [Finnish Forest Research Institute, Rovaniemi Research Unit (Finland); Merilaeinen, J. [SAIMA Unit of the Savonlinna Department of Teacher Education, University of Joensuu (Finland)

    2009-07-01

    Tree rings tell of past climates. To do so, tree-ring chronologies comprising numerous climate-sensitive living-tree and subfossil time series need to be 'transferred' into palaeoclimate estimates using transfer functions. The purpose of this study is to compare different types of transfer functions, especially linear and nonlinear algorithms. Accordingly, multiple linear regression (MLR), linear scaling (LSC) and artificial neural networks (ANN, a nonlinear algorithm) were compared. Transfer functions were built using a regional tree-ring chronology and instrumental temperature observations from Lapland (northern Finland and Sweden). In addition, conventional MLR was compared with a hybrid model whereby climate was reconstructed separately for short- and long-period timescales prior to combining the bands of timescales into a single hybrid model. The fidelity of the different reconstructions was validated against instrumental climate data. The reconstructions by MLR and ANN showed reliable reconstruction capabilities over the instrumental period (AD 1802-1998). LSC failed to reach reasonable verification statistics and did not qualify as a reliable reconstruction, mainly due to exaggeration of the low-frequency climatic variance. Over this instrumental period, the reconstructed low-frequency amplitudes of climate variability were rather similar for MLR and ANN. Notably greater differences between the models were found over the actual reconstruction period (AD 802-1801). A marked temperature decline, as reconstructed by MLR, from the Medieval Warm Period (AD 931-1180) to the Little Ice Age (AD 1601-1850) was evident in all the models. This decline was approximately 0.5 °C as reconstructed by MLR. Different ANN-based palaeotemperatures showed a simultaneous cooling of 0.2 to 0.5 °C, depending on the algorithm. The hybrid MLR did not seem to provide further benefit over conventional MLR in our sample. The robustness of the conventional MLR over the calibration

  14. Photoacoustic image reconstruction via deep learning

    Science.gov (United States)

    Antholzer, Stephan; Haltmeier, Markus; Nuster, Robert; Schwab, Johannes

    2018-02-01

    Applying standard algorithms to sparse data problems in photoacoustic tomography (PAT) yields low-quality images containing severe under-sampling artifacts. To some extent, these artifacts can be reduced by iterative image reconstruction algorithms, which allow the inclusion of prior knowledge such as smoothness, total variation (TV) or sparsity constraints. These algorithms tend to be time consuming, as the forward and adjoint problems have to be solved repeatedly. Furthermore, iterative algorithms have additional drawbacks; for example, the reconstruction quality strongly depends on a priori model assumptions about the objects to be recovered, which are often not strictly satisfied in practical applications. To overcome these issues, in this paper, we develop direct and efficient reconstruction algorithms based on deep learning. As opposed to iterative algorithms, we apply a convolutional neural network whose parameters are trained on a set of training data before the reconstruction process. For actual image reconstruction, a single evaluation of the trained network yields the desired result. Our numerical results (using two different network architectures) demonstrate that the proposed deep learning approach reconstructs images with a quality comparable to state-of-the-art iterative reconstruction methods.
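
    A minimal residual CNN in the spirit of this direct approach, mapping an artifact-laden initial reconstruction to a cleaned image in a single forward pass (the architecture, sizes and names are illustrative, not the authors' networks):

        import torch
        import torch.nn as nn

        class ArtifactRemovalCNN(nn.Module):
            # Residual design: the network learns the artifact pattern,
            # while the skip connection preserves the input image.
            def __init__(self, channels=32):
                super().__init__()
                self.body = nn.Sequential(
                    nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(channels, 1, 3, padding=1),
                )

            def forward(self, x):
                return x + self.body(x)

        net = ArtifactRemovalCNN()
        cleaned = net(torch.randn(1, 1, 128, 128))  # one evaluation per reconstruction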

  15. A regularized relaxed ordered subset list-mode reconstruction algorithm and its preliminary application to undersampling PET imaging

    International Nuclear Information System (INIS)

    Cao, Xiaoqing; Xie, Qingguo; Xiao, Peng

    2015-01-01

    The list-mode format is commonly used in modern positron emission tomography (PET) for image reconstruction due to certain special advantages. In this work, we proposed a list-mode-based regularized relaxed ordered subset (LMROS) algorithm for static PET imaging. LMROS is able to work with regularization terms that can be formulated as twice-differentiable convex functions. Such versatility makes LMROS a convenient and general framework for implementing different regularized list-mode reconstruction methods. LMROS was applied to two simulated undersampling PET imaging scenarios to verify its effectiveness. Convex quadratic function, total variation constraint, non-local means and dictionary learning based regularization methods were successfully realized for the different cases. The results showed that the LMROS algorithm is effective and that some regularization methods greatly reduce the distortions and artifacts caused by undersampling. (paper)

  16. A comparison of semiglobal and local dense matching algorithms for surface reconstruction

    Directory of Open Access Journals (Sweden)

    E. Dall'Asta

    2014-06-01

    Encouraged by the growing interest in automatic 3D image-based reconstruction, the development and improvement of robust stereo matching techniques has been one of the most investigated research topics of recent years in photogrammetry and computer vision. This paper focuses on the comparison of some stereo matching algorithms (local and global) that are very popular in both photogrammetry and computer vision. In particular, the Semi-Global Matching (SGM) method, which performs pixel-wise matching and relies on the application of consistency constraints during the matching cost aggregation, is discussed. The results of some tests performed on real and simulated stereo image datasets, evaluating in particular the accuracy of the obtained digital surface models, are presented. Several algorithms and different implementations are considered in the comparison, using freeware software codes such as MICMAC and OpenCV, commercial software (e.g. Agisoft PhotoScan) and proprietary codes implementing Least Squares and Semi-Global Matching algorithms. The comparisons also consider the completeness and the level of detail within fine structures, and the reliability and repeatability of the obtainable data.

  17. A comparison of semiglobal and local dense matching algorithms for surface reconstruction

    Science.gov (United States)

    Dall'Asta, E.; Roncella, R.

    2014-06-01

    Encouraged by the growing interest in automatic 3D image-based reconstruction, the development and improvement of robust stereo matching techniques has been one of the most investigated research topics of recent years in photogrammetry and computer vision. This paper focuses on the comparison of some stereo matching algorithms (local and global) that are very popular in both photogrammetry and computer vision. In particular, the Semi-Global Matching (SGM) method, which performs pixel-wise matching and relies on the application of consistency constraints during the matching cost aggregation, is discussed. The results of some tests performed on real and simulated stereo image datasets, evaluating in particular the accuracy of the obtained digital surface models, are presented. Several algorithms and different implementations are considered in the comparison, using freeware software codes such as MICMAC and OpenCV, commercial software (e.g. Agisoft PhotoScan) and proprietary codes implementing Least Squares and Semi-Global Matching algorithms. The comparisons also consider the completeness and the level of detail within fine structures, and the reliability and repeatability of the obtainable data.
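
    For reference, the SGM cost aggregation along a path direction r that both records describe has the standard form below (Hirschmüller's formulation; C is the pixel-wise matching cost, and P1 and P2 are the small and large disparity-change penalties):

        L_r(\mathbf{p}, d) = C(\mathbf{p}, d)
        + \min\Big( L_r(\mathbf{p}-\mathbf{r}, d),\; L_r(\mathbf{p}-\mathbf{r}, d\pm 1) + P_1,\; \min_{k} L_r(\mathbf{p}-\mathbf{r}, k) + P_2 \Big)
        - \min_{k} L_r(\mathbf{p}-\mathbf{r}, k)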

  18. Performance of the reconstruction algorithms of the FIRST experiment pixel sensors vertex detector

    Energy Technology Data Exchange (ETDEWEB)

    Rescigno, R., E-mail: regina.rescigno@iphc.cnrs.fr [Institut Pluridisciplinaire Hubert Curien, 23 rue du Loess, 67037 Strasbourg Cedex 2 (France); Finck, Ch.; Juliani, D. [Institut Pluridisciplinaire Hubert Curien, 23 rue du Loess, 67037 Strasbourg Cedex 2 (France); Spiriti, E. [Istituto Nazionale di Fisica Nucleare - Laboratori Nazionali di Frascati (Italy); Istituto Nazionale di Fisica Nucleare - Sezione di Roma 3 (Italy); Baudot, J. [Institut Pluridisciplinaire Hubert Curien, 23 rue du Loess, 67037 Strasbourg Cedex 2 (France); Abou-Haidar, Z. [CNA, Sevilla (Spain); Agodi, C. [Istituto Nazionale di Fisica Nucleare - Laboratori Nazionali del Sud (Italy); Alvarez, M.A.G. [CNA, Sevilla (Spain); Aumann, T. [GSI Helmholtzzentrum für Schwerionenforschung, Darmstadt (Germany); Battistoni, G. [Istituto Nazionale di Fisica Nucleare - Sezione di Milano (Italy); Bocci, A. [CNA, Sevilla (Spain); Böhlen, T.T. [European Organization for Nuclear Research CERN, Geneva (Switzerland); Medical Radiation Physics, Karolinska Institutet and Stockholm University, Stockholm (Sweden); Boudard, A. [CEA-Saclay, IRFU/SPhN, Gif sur Yvette Cedex (France); Brunetti, A.; Carpinelli, M. [Istituto Nazionale di Fisica Nucleare - Sezione di Cagliari (Italy); Università di Sassari (Italy); Cirrone, G.A.P. [Istituto Nazionale di Fisica Nucleare - Laboratori Nazionali del Sud (Italy); Cortes-Giraldo, M.A. [Departamento de Fisica Atomica, Molecular y Nuclear, University of Sevilla, 41080-Sevilla (Spain); Cuttone, G.; De Napoli, M. [Istituto Nazionale di Fisica Nucleare - Laboratori Nazionali del Sud (Italy); Durante, M. [GSI Helmholtzzentrum für Schwerionenforschung, Darmstadt (Germany); and others

    2014-12-11

    Hadrontherapy treatments use charged particles (e.g. protons and carbon ions) to treat tumours. During a therapeutic treatment with carbon ions, the beam undergoes nuclear fragmentation processes giving rise to significant yields of secondary charged particles. An accurate prediction of these production rates is necessary to estimate precisely the dose deposited in the tumours and the surrounding healthy tissues. Nowadays, only a limited set of double-differential carbon fragmentation cross-sections is available. Experimental data are necessary to benchmark Monte Carlo simulations for their use in hadrontherapy. The purpose of the FIRST experiment is to study nuclear fragmentation processes of ions with kinetic energies in the range from 100 to 1000 MeV/u. Tracks are reconstructed using information from a pixel silicon detector based on CMOS technology. The performance achieved using this device for hadrontherapy purposes is discussed. For each reconstruction step (clustering, tracking and vertexing), different methods are implemented. The algorithm performance and the accuracy of the reconstructed observables are evaluated on the basis of simulated and experimental data.

  19. Optimising Aesthetic Reconstruction of Scalp Soft Tissue by an Algorithm Based on Defect Size and Location.

    Science.gov (United States)

    Ooi, Adrian Sh; Kanapathy, Muholan; Ong, Yee Siang; Tan, Kok Chai; Tan, Bien Keem

    2015-11-01

    Scalp soft tissue defects are common and result from a variety of causes. Reconstructive methods should maximise cosmetic outcomes by maintaining hair-bearing tissue and aesthetic hairlines. This article outlines an algorithm based on a diverse clinical case series to optimise scalp soft tissue coverage. A retrospective analysis of scalp soft tissue reconstruction cases performed at the Singapore General Hospital between January 2004 and December 2013 was conducted. Forty-one patients were included in this study. The majority of defects aesthetic outcome while minimising complications and repeat procedures.

  20. Study on beam geometry and image reconstruction algorithm in fast neutron computerized tomography at NECTAR facility

    Science.gov (United States)

    Guo, J.; Bücherl, T.; Zou, Y.; Guo, Z.

    2011-09-01

    Investigations on the fast neutron beam geometry for the NECTAR facility are presented. The results of MCNP simulations and experimental measurements of the beam distributions at NECTAR are compared. Boltzmann functions are used to describe the beam profile in the detection plane, assuming the area source to be made up of a large number of single neutron point sources. An iterative algebraic reconstruction algorithm is developed, realized and verified using both simulated and measured projection data. The feasibility of improved reconstruction in fast neutron computerized tomography at the NECTAR facility is demonstrated.
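
    The record above is built around an iterative algebraic reconstruction technique (ART). As a minimal illustration of that algorithm family, the Python sketch below applies relaxed Kaczmarz sweeps to a linear system A x = b assembled from projection data; the system matrix A, the data vector b and the relaxation factor lam are illustrative assumptions, not the NECTAR implementation.

        import numpy as np

        def art_reconstruct(A, b, n_iter=20, lam=0.5):
            """Sweep the rows of A x = b with relaxed Kaczmarz updates."""
            x = np.zeros(A.shape[1])
            row_norms = (A ** 2).sum(axis=1)
            for _ in range(n_iter):
                for i in range(A.shape[0]):
                    if row_norms[i] == 0.0:
                        continue  # this ray misses every pixel
                    x += lam * (b[i] - A[i] @ x) / row_norms[i] * A[i]
            return x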

  1. Study on beam geometry and image reconstruction algorithm in fast neutron computerized tomography at NECTAR facility

    International Nuclear Information System (INIS)

    Guo, J.; Buecherl, T.; Zou, Y.; Guo, Z.

    2011-01-01

    Investigations on the fast neutron beam geometry for the NECTAR facility are presented. The results of MCNP simulations and experimental measurements of the beam distributions at NECTAR are compared. Boltzmann functions are used to describe the beam profile in the detection plane, assuming the area source to be made up of a large number of single neutron point sources. An iterative algebraic reconstruction algorithm is developed, realized and verified using both simulated and measured projection data. The feasibility of improved reconstruction in fast neutron computerized tomography at the NECTAR facility is demonstrated.

  2. Study on beam geometry and image reconstruction algorithm in fast neutron computerized tomography at NECTAR facility

    Energy Technology Data Exchange (ETDEWEB)

    Guo, J. [State Key Laboratory of Nuclear Physics and Technology and School of Physics, Peking University, 5 Yiheyuan Lu, Beijing 100871 (China); Lehrstuhl fuer Radiochemie, Technische Universitaet Muenchen, Garching 80748 (Germany); Buecherl, T. [Lehrstuhl fuer Radiochemie, Technische Universitaet Muenchen, Garching 80748 (Germany); Zou, Y., E-mail: zouyubin@pku.edu.cn [State Key Laboratory of Nuclear Physics and Technology and School of Physics, Peking University, 5 Yiheyuan Lu, Beijing 100871 (China); Guo, Z. [State Key Laboratory of Nuclear Physics and Technology and School of Physics, Peking University, 5 Yiheyuan Lu, Beijing 100871 (China)

    2011-09-21

    Investigations on the fast neutron beam geometry for the NECTAR facility are presented. The results of MCNP simulations and experimental measurements of the beam distributions at NECTAR are compared. Boltzmann functions are used to describe the beam profile in the detection plane, assuming the area source to be made up of a large number of single neutron point sources. An iterative algebraic reconstruction algorithm is developed, realized and verified using both simulated and measured projection data. The feasibility of improved reconstruction in fast neutron computerized tomography at the NECTAR facility is demonstrated.

  3. Non-Interior Continuation Method for Solving the Monotone Semidefinite Complementarity Problem

    International Nuclear Information System (INIS)

    Huang, Z.H.; Han, J.

    2003-01-01

    Recently, Chen and Tseng extended non-interior continuation smoothing methods for solving linear/nonlinear complementarity problems to semidefinite complementarity problems (SDCP). In this paper we propose a non-interior continuation method for solving the monotone SDCP based on the smoothed Fischer-Burmeister function, which is shown to be globally linearly and locally quadratically convergent under suitable assumptions. Our algorithm needs to solve at most one linear system of equations at each iteration. In addition, in our analysis of the global linear convergence of the algorithm, we do not need the assumption that the Frechet derivative of the function involved in the SDCP is Lipschitz continuous. For non-interior continuation/smoothing methods for solving the nonlinear complementarity problem, such an assumption has been used widely in the literature in order to achieve global linear convergence results.
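
    For orientation, the scalar Fischer-Burmeister function and one common smoothed variant are sketched below in Python. The paper works with a matrix-valued extension for the SDCP, and the exact smoothing constant used there is not reproduced here; the 2*mu**2 term follows one standard convention and is an assumption.

        import numpy as np

        def fb(a, b):
            """Fischer-Burmeister: zero iff a >= 0, b >= 0 and a * b = 0."""
            return a + b - np.sqrt(a ** 2 + b ** 2)

        def fb_smoothed(a, b, mu):
            """Smooth approximation; recovers fb(a, b) as mu -> 0."""
            return a + b - np.sqrt(a ** 2 + b ** 2 + 2.0 * mu ** 2)

        # the smoothed value approaches the nonsmooth one as mu shrinks:
        for mu in (1.0, 0.1, 0.01):
            print(mu, fb_smoothed(0.3, 0.0, mu), fb(0.3, 0.0))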

  4. Incomplete projection reconstruction of computed tomography based on the modified discrete algebraic reconstruction technique

    Science.gov (United States)

    Yang, Fuqiang; Zhang, Dinghua; Huang, Kuidong; Gao, Zongzhao; Yang, YaFei

    2018-02-01

    Based on the discrete algebraic reconstruction technique (DART), this study develops and tests an improved algorithm for incomplete projection data, aiming to generate high-quality reconstructions with reduced artifacts and noise in computed tomography. For the incomplete projections, an augmented Lagrangian method based on compressed sensing is first used to produce the initial reconstruction that DART segments, yielding higher contrast for boundary and non-boundary pixels. Then, a block-matching 3D filtering operator is used to suppress the noise and to improve the gray-level distribution of the reconstructed image. Finally, simulation studies with a polychromatic spectrum were performed to test the performance of the new algorithm. The results show a significant improvement in the signal-to-noise ratios (SNRs) and average gradients (AGs) of the images reconstructed from incomplete data: the SNRs and AGs of images reconstructed by DART-ALBM were on average 30%-40% and 10% higher, respectively, than those of images reconstructed by the standard DART algorithm. The improved DART-ALBM algorithm is more robust in limited-view reconstruction, producing clear image edges and a better gray-level distribution for non-boundary pixels, and therefore has the potential to improve image quality from incomplete or sparse projections.
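
    The core DART step, snapping the current reconstruction onto a small set of admissible grey levels and then confining further updates to boundary pixels, can be sketched as follows; the grey levels and the dilation-based boundary definition are illustrative assumptions, not the DART-ALBM code of the paper.

        import numpy as np
        from scipy.ndimage import binary_dilation

        def segment(img, levels):
            """Snap every pixel to the nearest admissible grey level."""
            levels = np.asarray(levels)
            idx = np.abs(img[..., None] - levels).argmin(axis=-1)
            return levels[idx]

        def boundary_mask(seg):
            """Mark pixels adjacent to a region of a different grey level."""
            mask = np.zeros(seg.shape, dtype=bool)
            for lv in np.unique(seg):
                region = seg == lv
                mask |= binary_dilation(region) & ~region
            return mask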

  5. An objective algorithm for reconstructing the three-dimensional ocean temperature field based on Argo profiles and SST data

    Science.gov (United States)

    Zhou, Chaojie; Ding, Xiaohua; Zhang, Jie; Yang, Jungang; Ma, Qiang

    2017-12-01

    While global oceanic surface information with large-scale, real-time, high-resolution data is collected by satellite remote sensing instrumentation, three-dimensional (3D) observations are usually obtained from in situ measurements, with limited coverage and spatial resolution. To meet the needs of 3D ocean investigations, we have developed a new algorithm to reconstruct the 3D ocean temperature field based on Array for Real-time Geostrophic Oceanography (Argo) profiles and sea surface temperature (SST) data. The Argo temperature profiles are first optimally fitted to generate a series of temperature functions of depth, so that the vertical temperature structure is represented continuously. By differentiating the fitted functions, the vertical temperature gradient of the Argo profiles can be evaluated at an arbitrary depth. A gridded 3D temperature gradient field is then obtained by applying inverse distance weighting interpolation in the horizontal direction. Combined with the processed SST, the 3D temperature field below the surface is reconstructed using the gridded temperature gradient. Finally, to confirm the effectiveness of the algorithm, an experiment in the Pacific Ocean south of Japan is conducted, for which a 3D temperature field is generated. Compared with other similar gridded products, the reconstructed 3D temperature field achieves satisfactory accuracy, with correlation coefficients of 0.99, and a higher spatial resolution (0.25° × 0.25°) that captures smaller-scale characteristics, validating both the accuracy and the advantages of the algorithm.
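
    The horizontal gridding step uses inverse distance weighting (IDW). A minimal Python sketch of IDW for scattered profile-derived values is given below; the power p = 2 and the input arrays are illustrative assumptions.

        import numpy as np

        def idw(xy_obs, v_obs, xy_grid, p=2.0, eps=1e-12):
            """Interpolate scattered values v_obs at xy_obs onto grid points xy_grid."""
            d = np.linalg.norm(xy_grid[:, None, :] - xy_obs[None, :, :], axis=-1)
            w = 1.0 / (d ** p + eps)  # closer profiles get larger weights
            return (w * v_obs).sum(axis=1) / w.sum(axis=1)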

  6. Structural algorithm to reservoir reconstruction using passive seismic data (synthetic example)

    Energy Technology Data Exchange (ETDEWEB)

    Smaglichenko, Tatyana A.; Volodin, Igor A.; Lukyanitsa, Andrei A.; Smaglichenko, Alexander V.; Sayankina, Maria K. [Oil and Gas Research Institute, Russian Academy of Science, Gubkina str. 3, 119333, Moscow (Russian Federation); Faculty of Computational Mathematics and Cybernetics, M.V. Lomonosov Moscow State University, Leninskie gory, 1, str. 52, Second Teaching Building, 119991 Moscow (Russian Federation); Schmidt Institute of Physics of the Earth, Russian Academy of Science, Bolshaya Gruzinskaya str. 10, str. 1, 123995 Moscow (Russian Federation); Oil and Gas Research Institute, Russian Academy of Science, Gubkina str. 3, 119333, Moscow (Russian Federation)

    2012-09-26

    The use of passive seismic observations to detect a reservoir is a new direction in the prospecting and exploration of hydrocarbons. In order to identify a thin-reservoir model, we applied a modification of the Gaussian elimination method under conditions of incomplete synthetic data. Because of the singularity of the matrix, the conventional method does not work. A structural algorithm has therefore been developed by analyzing the given model as a complex model. Numerical results demonstrate its advantage compared with the usual way of solution. We conclude that the gas reservoir is reconstructed by retrieving the image of the encasing shale beneath it.

  7. Tomographic Reconstruction of Tracer Gas Concentration Profiles in a Room with the Use of a Single OP-FTIR and Two Iterative Algorithms: ART and PWLS.

    Science.gov (United States)

    Park, Doo Y; Fessler, Jeffrey A; Yost, Michael G; Levine, Steven P

    2000-03-01

    Computed tomographic (CT) reconstructions of air contaminant concentration fields were conducted in a room-sized chamber employing a single open-path Fourier transform infrared (OP-FTIR) instrument and a combination of 52 flat mirrors and 4 retroreflectors. A total of 56 beam path data were repeatedly collected for around 1 hr while maintaining a stable concentration gradient. The plane of the room was divided into 195 pixels (13 × 15) for reconstruction. The algebraic reconstruction technique (ART) failed to reconstruct the original concentration gradient patterns for most cases. These poor results were caused by the "highly underdetermined condition" in which the number of unknown values (195 pixels) exceeds that of known data (56 path-integral concentrations) in the experimental setting. A new CT algorithm, called penalized weighted least-squares (PWLS), was applied to remedy this condition. The peak locations were correctly positioned in the PWLS-CT reconstructions. A notable feature of the PWLS-CT reconstructions was a significant reduction of the highly irregular noise peaks found in the ART-CT reconstructions. However, the peak heights were slightly reduced in the PWLS-CT reconstructions due to the nature of the PWLS algorithm. PWLS could converge on the original concentration gradient even when a fairly high error was embedded in some experimentally measured path-integral concentrations. It was also found in the simulation tests that the PWLS algorithm was very robust with respect to random errors in the path-integral concentrations. This beam geometry and the use of a single OP-FTIR scanning system, in combination with the PWLS algorithm, form a system applicable to both environmental and industrial settings.
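
    A minimal sketch of a PWLS objective and a plain gradient-descent solver is given below; the system matrix A, measurement weights w, first-difference roughness penalty and step size are illustrative assumptions, not the iteration used in the study.

        import numpy as np

        def pwls(A, y, w, beta, n_iter=500, step=1e-4):
            """Minimise (y - Ax)' W (y - Ax) + beta * ||D x||^2 over pixel values x."""
            n = A.shape[1]
            D = np.eye(n) - np.eye(n, k=1)  # simple first-difference roughness penalty
            x = np.zeros(n)
            for _ in range(n_iter):
                grad = -2.0 * A.T @ (w * (y - A @ x)) + 2.0 * beta * D.T @ (D @ x)
                x -= step * grad
            return x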

  8. Acoustical source reconstruction from non-synchronous sequential measurements by Fast Iterative Shrinkage Thresholding Algorithm

    Science.gov (United States)

    Yu, Liang; Antoni, Jerome; Leclere, Quentin; Jiang, Weikang

    2017-11-01

    Acoustical source reconstruction is a typical inverse problem whose minimum reconstruction frequency hinges on the size of the array and whose maximum frequency depends on the spacing between the microphones. To enlarge the frequency range of the reconstruction and reduce the cost of the acquisition system, Cyclic Projection (CP), a method of sequential measurements without references, was recently investigated (JSV, 2016, 372:31-49). In this paper, the Propagation-based Fast Iterative Shrinkage Thresholding Algorithm (Propagation-FISTA) is introduced, which improves CP in two aspects: (1) the number of acoustic sources is no longer needed, the only assumption being a "weakly sparse" eigenvalue spectrum; (2) the construction of the spatial basis is much easier and adapts to practical acoustical measurement scenarios, thanks to the introduction of a propagation-based spatial basis. The proposed Propagation-FISTA is first investigated with different simulation and experimental setups and is then illustrated with an industrial case.
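
    As background, a generic FISTA iteration for the l1-regularized least-squares problem min (1/2)||Ax - b||^2 + lam*||x||_1 is sketched below in Python; the dictionary A stands in for the propagation-based basis of the paper, so this shows the algorithm family only, not the authors' implementation.

        import numpy as np

        def soft_threshold(v, t):
            return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

        def fista(A, b, lam, n_iter=200):
            L = np.linalg.norm(A, 2) ** 2  # Lipschitz constant of the data-fit gradient
            x = z = np.zeros(A.shape[1])
            t = 1.0
            for _ in range(n_iter):
                x_new = soft_threshold(z - A.T @ (A @ z - b) / L, lam / L)
                t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
                z = x_new + (t - 1.0) / t_new * (x_new - x)  # Nesterov momentum step
                x, t = x_new, t_new
            return x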

  9. An interior-point method for the Cartesian P*(κ)-linear complementarity problem over symmetric cones

    Directory of Open Access Journals (Sweden)

    B Kheirfam

    2014-06-01

    A novel primal-dual path-following interior-point algorithm for the Cartesian P*(κ)-linear complementarity problem over symmetric cones is presented. The algorithm is based on a reformulation of the central path for finding the search directions. For a full Nesterov-Todd step feasible interior-point algorithm based on the new search directions, the complexity bound of the small-update approach is the best available bound.

  10. Reproducibility of F18-FDG PET radiomic features for different cervical tumor segmentation methods, gray-level discretization, and reconstruction algorithms.

    Science.gov (United States)

    Altazi, Baderaldeen A; Zhang, Geoffrey G; Fernandez, Daniel C; Montejo, Michael E; Hunt, Dylan; Werner, Joan; Biagioli, Matthew C; Moros, Eduardo G

    2017-11-01

    Site-specific investigations of the role of radiomics in cancer diagnosis and therapy are emerging. We evaluated the reproducibility of radiomic features extracted from Fluorine-18 fluorodeoxyglucose (18F-FDG) PET images for three parameters: manual versus computer-aided segmentation methods, gray-level discretization, and PET image reconstruction algorithms. Our cohort consisted of pretreatment PET/CT scans from 88 cervical cancer patients. Two board-certified radiation oncologists manually segmented the metabolic tumor volume (MTV1 and MTV2) for each patient. For comparison, we used a graphical-based method to generate semiautomated segmented volumes (GBSV). To address any perturbations in radiomic feature values, we down-sampled the tumor volumes into three gray levels: 32, 64, and 128, from the original gray level of 256. Finally, we analyzed the effect on radiomic features on PET images of eight patients due to four PET 3D-reconstruction algorithms: the maximum-likelihood ordered-subset expectation maximization (OSEM) iterative reconstruction (IR) method, Fourier rebinning-ML-OSEM (FOREIR), FORE-filtered back projection (FOREFBP), and the 3D-reprojection (3DRP) analytical method. We extracted 79 features for every segmentation method, gray level of the down-sampled volumes, and PET reconstruction algorithm. The features were extracted using gray-level co-occurrence matrices (GLCM), gray-level size zone matrices (GLSZM), gray-level run-length matrices (GLRLM), neighborhood gray-tone difference matrices (NGTDM), shape-based features (SF), and intensity histogram features (IHF). We computed the Dice coefficient between each MTV and GBSV to measure segmentation accuracy. Coefficient values close to one indicate high agreement, and values close to zero indicate low agreement. We evaluated the effect on radiomic features by calculating the mean percentage differences (d̄) between feature values measured from each pair of parameter elements (i.e. segmentation methods: MTV
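
    To make the texture-feature terminology concrete, the following Python sketch builds a gray-level co-occurrence matrix (GLCM) for one 2D slice and a single pixel offset, and derives one classic feature from it; the integer-discretised input and the (0, 1) offset are illustrative assumptions.

        import numpy as np

        def glcm(img, n_levels, offset=(0, 1)):
            """Count co-occurrences of integer grey levels at the given pixel offset."""
            m = np.zeros((n_levels, n_levels))
            dr, dc = offset
            rows, cols = img.shape
            for r in range(rows - dr):
                for c in range(cols - dc):
                    m[img[r, c], img[r + dr, c + dc]] += 1
            return m / m.sum()  # normalise to a joint probability table

        def glcm_contrast(p):
            """Classic GLCM contrast feature: sum of (i - j)^2 * p(i, j)."""
            i, j = np.indices(p.shape)
            return ((i - j) ** 2 * p).sum()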

  11. An algorithm for the reconstruction of high-energy neutrino-induced particle showers and its application to the ANTARES neutrino telescope

    NARCIS (Netherlands)

    Albert, A.; André, M.; Anghinolfi, M.; Anton, G.; Ardid, M.; Aubert, J.-J.; Avgitas, T.; Baret, B.; Barrios-Martí, J.; Basa, S.; Bertin, V.; Biagi, S.; Bormuth, R.; Bourret, S.; Bouwhuis, M.C.; Bruijn, R.; Brunner, J.; Busto, J.; Capone, A.; Caramete, L.; Carr, J.; Celli, S.; Chiarusi, T.; Circella, M.; Coelho, C.O.A.; Coleiro, A.; Coniglione, R.; Costantini, H.; Coyle, P.; Creusot, A.; Deschamps, A.; De Bonis, G.; Distefano, C.; Di Palma, I.; Domi, A.; Donzaud, C.; Dornic, D.; Drouhin, D.; Eberl, T.; El Bojaddaini, I.; Elsässer, D.; Enzenhofer, A.; Felis, I.; Folger, F.; Fusco, L.A.; Galata, S.; Gay, P.; Giordano, V.; Glotin, H.; Grégoire, T.; Gracia-Ruiz, R.; Graf, K.; Hallmann, S.; van Haren, H.; Heijboer, A.J.; Hello, Y.; Hernandez-Rey, J.J.; Hößl, J.; Hofestädt, J.; Hugon, C.; Illuminati, G.; James, C.W.; de Jong, M.; Jongen, M.; Kadler, M.; Kalekin, O.; Katz, U.; Kießling, D.; Kouchner, A.; Kreter, M.; Kreykenbohm, I.; Kulikovskiy, V.; Lachaud, C.; Lahmann, R.; Lefevre, D.; Leonora, E.; Lotze, M.; Loucatos, S.; Marcelin, M.; Margiotta, A.; Marinelli, A.; Martinez-Mora, J.A.; Mele, R.; Melis, K.; Michael, T.; Migliozzi, P.; Moussa, A.; Nezri, E.; Organokov, M.; Pavalas, G.E.; Pellegrino, C.; Perrina, C.; Piattelli, P.; Popa, V.; Pradier, T.; Quinn, L.; Racca, C.; Riccobene, G.; Sanchez-Losa, A.; Saldaña, M.; Salvadori, I.; Samtleben, D.F.E.; Sanguineti, M.; Sapienza, P.; Schussler, F.; Sieger, C.; Spurio, M.; Stolarczyk, T.; Taiuti, M.; Tayalati, Y.; Trovato, A.; Turpin, D.; Tönnis, C.; Vallage, B.; Van Elewyck, V.; Versari, F.; Vivolo, D.; Vizzoca, A.; Wilms, J.; Zornoza, J.D.; Zuniga, J.

    2017-01-01

    A novel algorithm to reconstruct neutrino-induced particle showers within the ANTARES neutrino telescope is presented. The method achieves a median angular resolution of 6° for shower energies below 100 TeV. Applying this algorithm to 6 years of data taken with the ANTARES detector, 8 events with

  12. Real time equilibrium reconstruction algorithm in EAST tokamak

    International Nuclear Information System (INIS)

    Wang Huazhong; Luo Jiarong; Huang Qinchao

    2004-01-01

    The EAST (HT-7U) superconducting tokamak is a national project of China on fusion research, with a capability of long-pulse (∼1000 s) operation. In order to realize long-duration steady-state operation of EAST, significant real-time control capability is required, and it is crucial to obtain the current profile parameters and the plasma shape in real time through a flexible control system. As those discharge parameters cannot be measured directly, a current profile consistent with the magnetohydrodynamic equilibrium must be evaluated from external magnetic measurements, based on a linearized iterative least-squares method that can meet the measurement requirements. The algorithm, which takes the EFIT (equilibrium fitting) code as a reference, is given in this paper, and the computational effort is reduced by parameterizing the current profile linearly in terms of a number of physical parameters. In order to introduce this reconstruction algorithm clearly, the main hardware design is also described. (authors)
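
    A minimal sketch of the kind of linearized iterative least-squares update described above is given below in Python; the measurement vector m, the forward model predict(p) and its Jacobian jacobian(p) are illustrative assumptions, not the EAST/EFIT implementation.

        import numpy as np

        def fit_profile(m, predict, jacobian, p0, n_iter=10):
            """Gauss-Newton style iteration: linearize, solve least squares, update."""
            p = np.asarray(p0, dtype=float)
            for _ in range(n_iter):
                J = jacobian(p)  # sensitivity of predicted signals to the parameters
                dp, *_ = np.linalg.lstsq(J, m - predict(p), rcond=None)
                p = p + dp
            return p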

  13. Improvement of the temporal resolution of cardiac CT reconstruction algorithms using an optimized filtering step

    International Nuclear Information System (INIS)

    Roux, S.; Desbat, L.; Koenig, A.; Grangeat, P.

    2005-01-01

    In this paper we study a property of the filtering step of multi-cycle reconstruction algorithms used in cardiac CT. We show that the common filtering procedure is not optimal in the case of divergent geometry and slightly decreases the temporal resolution. We propose to use the filtering procedure related to the work of Noo et al. (F. Noo, M. Defrise, R. Clackdoyle, and H. Kudo, Image reconstruction from fan-beam projections on less than a short scan, Phys. Med. Biol., 47:2525-2546, July 2002) and show that this alternative reaches the optimal temporal resolution with the same computational effort. (N.C.)

  14. Data-parallel tomographic reconstruction: A comparison of filtered backprojection and direct Fourier reconstruction

    NARCIS (Netherlands)

    Roerdink, J.B.T.M.; Westenberg, M.A

    1998-01-01

    We consider the parallelization of two standard 2D reconstruction algorithms, filtered backprojection and direct Fourier reconstruction, using the data-parallel programming style. The algorithms are implemented on a Connection Machine CM-5 with 16 processors and a peak performance of 2 Gflop/s.

  15. High resolution x-ray CMT: Reconstruction methods

    Energy Technology Data Exchange (ETDEWEB)

    Brown, J.K.

    1997-02-01

    This paper qualitatively discusses the primary characteristics of methods for reconstructing tomographic images from a set of projections. These reconstruction methods can be categorized as either "analytic" or "iterative" techniques. Analytic algorithms are derived from the formal inversion of equations describing the imaging process, while iterative algorithms incorporate a model of the imaging process and provide a mechanism to iteratively improve image estimates. Analytic reconstruction algorithms are typically computationally more efficient than iterative methods; however, analytic algorithms are available for a relatively limited set of imaging geometries and situations. Thus, the framework of iterative reconstruction methods is better suited for high-accuracy tomographic reconstruction codes.

  16. A new algorithm for $H\\rightarrow\\tau\\bar{\\tau}$ invariant mass reconstruction using Deep Neural Networks

    CERN Document Server

    Dietrich, Felix

    2017-01-01

    Reconstructing the invariant mass in a Higgs boson decay event containing tau leptons turns out to be a challenging endeavour. The aim of this summer student project is to implement a new algorithm for this task, using deep neural networks and machine learning. The results are compared to SVFit, an existing algorithm that uses dynamical likelihood techniques. A neural network is found that reaches the accuracy of SVFit at low masses and even surpasses it at higher masses, while at the same time providing results a thousand times faster.

  17. Implementation of a cone-beam reconstruction algorithm for the single-circle source orbit with embedded misalignment correction using homogeneous coordinates

    International Nuclear Information System (INIS)

    Karolczak, Marek; Schaller, Stefan; Engelke, Klaus; Lutz, Andreas; Taubenreuther, Ulrike; Wiesent, Karl; Kalender, Willi

    2001-01-01

    We present an efficient implementation of an approximate cone-beam image reconstruction algorithm for application in tomography, which accounts for scanner mechanical misalignment. The implementation is based on the algorithm proposed by Feldkamp et al. [J. Opt. Soc. Am. A 1, 612-619 (1984)] and is directed at circular scan paths. The algorithm has been developed for the purpose of reconstructing volume data from projections acquired in an experimental x-ray microtomography (μCT) scanner [Engelke et al., Der Radiologe 39, 203-212 (1999)]. To model misalignment mathematically, we use matrix notation with homogeneous coordinates to describe the scanner geometry, its misalignment, and the acquisition process. For convenience, the analysis is carried out for x-ray CT scanners, but it is applicable to any tomographic modality in which two-dimensional projections are acquired in cone-beam geometry, e.g., single-photon emission computed tomography. We derive an algorithm that assumes the misalignment errors to be small enough to weight and filter the original projections, and embeds the compensation for misalignment in the backprojection. We verify the algorithm on simulations of virtual phantoms and scans of a physical multidisk (Defrise) phantom.

  18. Adaptive algorithms of position and energy reconstruction in Anger-camera type detectors: experimental data processing in ANTS

    Energy Technology Data Exchange (ETDEWEB)

    Morozov, A; Fraga, F A F; Fraga, M M F R; Margato, L M S; Pereira, L [LIP-Coimbra and Departamento de Física, Universidade de Coimbra, Rua Larga, Coimbra (Portugal); Defendi, I; Jurkovic, M [Forschungs-Neutronenquelle Heinz Maier-Leibnitz (FRM II), TUM, Lichtenbergstr. 1, Garching (Germany); Engels, R; Kemmerling, G [Zentralinstitut für Elektronik, Forschungszentrum Jülich GmbH, Wilhelm-Johnen-Straße, Jülich (Germany); Gongadze, A; Guerard, B; Manzin, G; Niko, H; Peyaud, A; Piscitelli, F [Institut Laue Langevin, 6 Rue Jules Horowitz, Grenoble (France); Petrillo, C; Sacchetti, F [Istituto Nazionale per la Fisica della Materia, Unità di Perugia, Via A. Pascoli, Perugia (Italy); Raspino, D; Rhodes, N J; Schooneveld, E M, E-mail: andrei@coimbra.lip.pt [Science and Technology Facilities Council, Rutherford Appleton Laboratory, Harwell Oxford, Didcot (United Kingdom); and others

    2013-05-01

    The software package ANTS (Anger-camera type Neutron detector: Toolkit for Simulations), developed for the simulation of Anger-type gaseous detectors for thermal neutron imaging, was extended to include a module for experimental data processing. Data recorded with a sensor array containing up to 100 photomultiplier tubes (PMTs) or silicon photomultipliers (SiPMs) in a custom configuration can be loaded, and the positions and energies of the events can be reconstructed using the Center-of-Gravity, Maximum Likelihood or Least Squares algorithm. A particular strength of the new module is the ability to reconstruct the light response functions and relative gains of the photomultipliers from flood-field illumination data using adaptive algorithms. The performance of the module is demonstrated with simulated data generated in ANTS and experimental data recorded with a 19-PMT neutron detector. The package executables are publicly available at http://coimbra.lip.pt/~andrei/.
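
    The simplest of the three reconstruction options, the centre-of-gravity estimate, can be sketched in a few lines of Python; the PMT coordinate table and the signal values are illustrative assumptions.

        import numpy as np

        def centre_of_gravity(signals, pmt_xy):
            """Weight each PMT position by its signal to estimate the event position."""
            signals = np.asarray(signals, dtype=float)
            return (signals[:, None] * pmt_xy).sum(axis=0) / signals.sum()

        # e.g. four PMTs on a unit square; the estimate leans toward the bright tubes:
        pmt_xy = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
        print(centre_of_gravity([10.0, 40.0, 10.0, 40.0], pmt_xy))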

  19. Feasibility study for application of the compressed-sensing framework to interior computed tomography (ICT) for low-dose, high-accurate dental x-ray imaging

    Science.gov (United States)

    Je, U. K.; Cho, H. M.; Cho, H. S.; Park, Y. O.; Park, C. K.; Lim, H. W.; Kim, K. S.; Kim, G. A.; Park, S. Y.; Woo, T. H.; Choi, S. I.

    2016-02-01

    In this paper, we propose a new, next-generation type of CT examination, the so-called Interior Computed Tomography (ICT), which may reduce the dose to the patient outside the target region of interest (ROI) in dental x-ray imaging. Here, the x-ray beam from each projection position covers only a relatively small ROI containing the diagnostic target within the examined structure, leading to imaging benefits such as reduced scatter and system cost as well as a reduced imaging dose. We considered the compressed-sensing (CS) framework, rather than common filtered-backprojection (FBP)-based algorithms, for more accurate ICT reconstruction. We implemented a CS-based ICT algorithm and performed a systematic simulation to investigate the imaging characteristics. Two ROI-to-phantom size ratios (0.28 and 0.14) and four projection numbers (360, 180, 90, and 45) were tested. We successfully reconstructed ICT images of substantially high image quality using the CS framework even with few-view projection data, while preserving sharp edges in the images.
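
    A common way to realize such a CS reconstruction is to alternate a data-consistency sweep with total-variation (TV) descent, in the spirit of ASD-POCS; the Python sketch below follows that pattern with an ART sweep for data consistency. All step sizes, iteration counts and the system matrix are illustrative assumptions, not the authors' implementation.

        import numpy as np

        def art_sweep(A, b, x, lam=0.5):
            """One relaxed Kaczmarz pass over the measured rays."""
            for i in range(A.shape[0]):
                n2 = A[i] @ A[i]
                if n2 > 0.0:
                    x = x + lam * (b[i] - A[i] @ x) / n2 * A[i]
            return x

        def tv_grad(img, eps=1e-8):
            """(Sub)gradient of a smoothed isotropic total variation of a 2D image."""
            gx = np.diff(img, axis=0, append=img[-1:, :])
            gy = np.diff(img, axis=1, append=img[:, -1:])
            mag = np.sqrt(gx ** 2 + gy ** 2 + eps)
            div = (np.diff(gx / mag, axis=0, prepend=(gx / mag)[:1, :])
                   + np.diff(gy / mag, axis=1, prepend=(gy / mag)[:1, :]))
            return -div

        def cs_interior_recon(A, b, shape, n_outer=30, tv_steps=10, alpha=0.02):
            x = np.zeros(A.shape[1])
            for _ in range(n_outer):
                x = art_sweep(A, b, x)        # enforce consistency with the few views
                img = x.reshape(shape)
                for _ in range(tv_steps):     # push toward a sparse-gradient image
                    img = img - alpha * tv_grad(img)
                x = img.ravel()
            return x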

  20. Fast volume reconstruction in positron emission tomography: Implementation of four algorithms on a high-performance scalable parallel platform

    International Nuclear Information System (INIS)

    Egger, M.L.; Scheurer, A.H.; Joseph, C.

    1996-01-01

    The issue of long reconstruction times in PET has been addressed from several points of view, resulting in an affordable dedicated system capable of handling routine 3D reconstruction in a few minutes per frame: on the hardware side using fast processors and a parallel architecture, and on the software side, using efficient implementations of computationally less intensive algorithms. Execution times obtained for the PRT-1 data set on a parallel system of five hybrid nodes, each combining an Alpha processor for computation and a transputer for communication, are the following (256 sinograms of 96 views by 128 radial samples): Ramp algorithm 56 s, Favor 81 s and reprojection algorithm of Kinahan and Rogers 187 s. The implementation of fast rebinning algorithms has shown our hardware platform to become communications-limited; they execute faster on a conventional single-processor Alpha workstation: single-slice rebinning 7 s, Fourier rebinning 22 s, 2D filtered backprojection 5 s. The scalability of the system has been demonstrated, and a saturation effect at network sizes above ten nodes has become visible; new T9000-based products lifting most of the constraints on network topology and link throughput are expected to result in improved parallel efficiency and scalability properties

  1. Tomographic reconstruction of binary fields

    International Nuclear Information System (INIS)

    Roux, Stéphane; Leclerc, Hugo; Hild, François

    2012-01-01

    A novel algorithm is proposed for reconstructing binary images from their projections along a set of different orientations. Based on a nonlinear transformation of the projection data, classical back-projection procedures can be used iteratively to converge to the sought image. A multiscale implementation allows for faster convergence. The algorithm is tested on images of up to 1 Mb definition, and an error-free reconstruction is achieved with a very limited number of projections, saving a factor of about 100 on the number of projections required by classical reconstruction algorithms.

  2. Parallel performances of three 3D reconstruction methods on MIMD computers: Feldkamp, block ART and SIRT algorithms

    International Nuclear Information System (INIS)

    Laurent, C.; Chassery, J.M.; Peyrin, F.; Girerd, C.

    1996-01-01

    This paper deals with the parallel implementation of reconstruction methods in 3D tomography. 3D tomography requires voluminous data and long computation times. Parallel computing on MIMD computers seems to be a good approach to manage this problem. In this study, we present the different steps of the parallelization on an abstract parallel computer. Depending on the method, we use two main approaches to parallelize the algorithms: the local approach and the global approach. Experimental results on MIMD computers are presented. Two 3D images reconstructed from realistic data are shown.

  3. A compressed sensing based 3D resistivity inversion algorithm for hydrogeological applications

    Science.gov (United States)

    Ranjan, Shashi; Kambhammettu, B. V. N. P.; Peddinti, Srinivasa Rao; Adinarayana, J.

    2018-04-01

    Image reconstruction from discrete electrical responses poses a number of computational and mathematical challenges. Application of smoothness-constrained regularized inversion from limited measurements may fail to detect resistivity anomalies and sharp interfaces separating hydrostratigraphic units. Under favourable conditions, compressed sensing (CS) can be thought of as an alternative that reconstructs the image features by finding sparse solutions to highly underdetermined linear systems. This paper deals with the development of a CS-assisted, 3D resistivity inversion algorithm for use by hydrogeologists and groundwater scientists. A CS-based l1-regularized least-squares algorithm was applied to solve the resistivity inversion problem. Sparseness in the model update vector is introduced through a block-oriented discrete cosine transform, with recovery of the signal achieved through convex optimization. The equivalent quadratic program was solved using a primal-dual interior-point method. The applicability of the proposed algorithm was demonstrated using synthetic and field examples drawn from hydrogeology. The proposed algorithm outperformed the conventional (smoothness-constrained) least-squares method in recovering the model parameters with far fewer data, while preserving the sharp resistivity fronts separating geologic layers. Resistivity anomalies represented by discrete homogeneous blocks embedded in contrasting geologic layers were better imaged using the proposed algorithm. In comparison to the conventional algorithm, CS resulted in a more efficient (R² increased from 0.62 to 0.78; RMSE decreased from 125.14 Ω-m to 72.46 Ω-m), reliable, and fast-converging (run time decreased by about 25%) solution.

  4. Image reconstruction for an electrical capacitance tomography system based on a least-squares support vector machine and a self-adaptive particle swarm optimization algorithm

    International Nuclear Information System (INIS)

    Chen, Xia; Hu, Hong-li; Liu, Fei; Gao, Xiang Xiang

    2011-01-01

    The task of image reconstruction for an electrical capacitance tomography (ECT) system is to determine the permittivity distribution, and hence the phase distribution, in a pipeline by measuring the electrical capacitances between sets of electrodes placed around its periphery. In view of the nonlinear relationship between the permittivity distribution and the capacitances, and the limited number of independent capacitance measurements, image reconstruction for ECT is a nonlinear and ill-posed inverse problem. To solve this problem, a new image reconstruction method for ECT based on a least-squares support vector machine (LS-SVM) combined with a self-adaptive particle swarm optimization (PSO) algorithm is presented. Built on small-sample statistical learning theory, the SVM avoids issues that arise in artificial neural network methods, such as the difficult determination of a network structure, over-learning and under-learning. However, the SVM performs differently with different parameters. As a relatively new population-based evolutionary optimization technique, PSO is adopted for effective parameter selection, with the advantages of global optimization and rapid convergence. This paper builds a 12-electrode ECT system and a pneumatic conveying platform to verify this image reconstruction algorithm. Experimental results indicate that the algorithm has good generalization ability and high image reconstruction quality.

  5. 3D Reconstruction in the Presence of Glass and Mirrors by Acoustic and Visual Fusion.

    Science.gov (United States)

    Zhang, Yu; Ye, Mao; Manocha, Dinesh; Yang, Ruigang

    2017-07-06

    We present a practical and inexpensive method to reconstruct 3D scenes that include transparent and mirror objects. Our work is motivated by the need for automatically generating 3D models of interior scenes, which commonly include glass. These large structures are often invisible to cameras. Existing 3D reconstruction methods for transparent objects are usually not applicable in such a room-sized reconstruction setting. Our simple hardware setup augments a regular depth camera with a single ultrasonic sensor, which is able to measure the distance to any object, including transparent surfaces. The key technical challenge is the sparse sampling rate from the acoustic sensor, which only takes one point measurement per frame. To address this challenge, we take advantage of the fact that the large scale glass structures in indoor environments are usually either piece-wise planar or simple parametric surfaces. Based on these assumptions, we have developed a novel sensor fusion algorithm that first segments the (hybrid) depth map into different categories such as opaque/transparent/infinity (e.g., too far to measure) and then updates the depth map based on the segmentation outcome. We validated our algorithms with a number of challenging cases, including multiple panes of glass, mirrors, and even a curved glass cabinet.

  6. Support-free interior carving for 3D printing

    Directory of Open Access Journals (Sweden)

    Yue Xie

    2017-03-01

    Recent interior carving methods for functional design necessitate a cumbersome cut-and-glue process in fabrication. We propose a method to generate interior voids which not only satisfy the functional purposes but are also support-free during the 3D printing process. We introduce a support-free unit structure for voxelization and derive a wall-thickness parametrization for continuous optimization. We also design a discrete dithering algorithm to ensure the printability of ghost voxels. The interior voids are iteratively carved by alternating the optimization and dithering. We apply our method to optimize static and rotational stability, and print various results to evaluate the efficacy. Keywords: Interior carving, Support-free, Voxels dithering, Shape optimization, 3D printing

  7. Time-of-flight PET image reconstruction using origin ensembles

    Science.gov (United States)

    Wülker, Christian; Sitek, Arkadiusz; Prevrhal, Sven

    2015-03-01

    The origin ensemble (OE) algorithm is a novel statistical method for minimum-mean-square-error (MMSE) reconstruction of emission tomography data. This method allows one to perform reconstruction entirely in the image domain, i.e. without the use of forward and backprojection operations. We have investigated the OE algorithm in the context of list-mode (LM) time-of-flight (TOF) PET reconstruction. In this paper, we provide a general introduction to MMSE reconstruction, and a statistically rigorous derivation of the OE algorithm. We show how to efficiently incorporate TOF information into the reconstruction process, and how to correct for random coincidences and scattered events. To examine the feasibility of LM-TOF MMSE reconstruction with the OE algorithm, we applied MMSE-OE and standard maximum-likelihood expectation-maximization (ML-EM) reconstruction to LM-TOF phantom data with a count number typically registered in clinical PET examinations. We analyzed the convergence behavior of the OE algorithm, and compared reconstruction time and image quality to that of the EM algorithm. In summary, during the reconstruction process, MMSE-OE contrast recovery (CRV) remained approximately the same, while background variability (BV) gradually decreased with an increasing number of OE iterations. The final MMSE-OE images exhibited lower BV and a slightly lower CRV than the corresponding ML-EM images. The reconstruction time of the OE algorithm was approximately 1.3 times longer. At the same time, the OE algorithm can inherently provide a comprehensive statistical characterization of the acquired data. This characterization can be utilized for further data processing, e.g. in kinetic analysis and image registration, making the OE algorithm a promising approach in a variety of applications.

  8. Comparison of low-contrast detectability between two CT reconstruction algorithms using voxel-based 3D printed textured phantoms.

    Science.gov (United States)

    Solomon, Justin; Ba, Alexandre; Bochud, François; Samei, Ehsan

    2016-12-01

    To use novel voxel-based 3D printed textured phantoms in order to compare low-contrast detectability between two reconstruction algorithms, FBP (filtered backprojection) and SAFIRE (sinogram affirmed iterative reconstruction), and determine what impact background texture (i.e., anatomical noise) has on estimating the dose reduction potential of SAFIRE. Liver volumes were segmented from 23 abdominal CT cases. The volumes were characterized in terms of texture features from gray-level co-occurrence and run-length matrices. Using a 3D clustered lumpy background (CLB) model, a fitting technique based on a genetic optimization algorithm was used to find CLB textures that were reflective of the liver textures, accounting for CT system factors of spatial blurring and noise. With the modeled background texture as a guide, four cylindrical phantoms (Textures A-C and uniform, 165 mm in diameter and 30 mm in height) were designed, each containing 20 low-contrast spherical signals (6 mm diameter at nominal contrast levels of ∼3.2, 5.2, 7.2, 10, and 14 HU, with four repeats per signal). The phantoms were voxelized and input into a commercial multimaterial 3D printer (Objet Connex 350), with custom software for voxel-based printing (using principles of digital dithering). Images of the textured phantoms and a corresponding uniform phantom were acquired at six radiation dose levels (SOMATOM Flash, Siemens Healthcare), and observer model detection performance (detectability index of a multislice channelized Hotelling observer) was estimated for each condition (5 contrasts × 6 doses × 2 reconstructions × 4 backgrounds = 240 total conditions). A multivariate generalized regression analysis was performed (linear terms, no interactions, random error term, log link function) to assess whether dose, reconstruction algorithm, signal contrast, and background type have statistically significant effects on detectability. Also, fitted curves of detectability (averaged across contrast levels

  9. Reconstruction of convex bodies from surface tensors

    DEFF Research Database (Denmark)

    Kousholt, Astrid; Kiderlen, Markus

    We present two algorithms for reconstruction of the shape of convex bodies in the two-dimensional Euclidean space. The first reconstruction algorithm requires knowledge of the exact surface tensors of a convex body up to rank s for some natural number s. The second algorithm uses harmonic intrinsic volumes, which are certain values of the surface tensors, and allows for noisy measurements. From a generalized version of Wirtinger's inequality, we derive stability results that are utilized to ensure consistency of both reconstruction procedures.

  10. Parallel CT image reconstruction based on GPUs

    International Nuclear Information System (INIS)

    Flores, Liubov A.; Vidal, Vicent; Mayo, Patricia; Rodenas, Francisco; Verdú, Gumersindo

    2014-01-01

    In X-ray computed tomography (CT), iterative methods are more suitable for the reconstruction of images with high contrast and precision from a small number of projections under noisy conditions. However, in practice, these methods are not widely used due to the high computational cost of their implementation. Current technology makes it possible to reduce this drawback effectively. The goal of this work is to develop a fast GPU-based algorithm to reconstruct high-quality images from undersampled and noisy projection data. - Highlights: • We developed a GPU-based iterative algorithm to reconstruct images. • Iterative algorithms are capable of reconstructing images from an undersampled set of projections. • The computational cost of the implementation of the developed algorithm is low. • The efficiency of the algorithm increases for large-scale problems.

  11. Reconstruction Algorithms in Undersampled AFM Imaging

    DEFF Research Database (Denmark)

    Arildsen, Thomas; Oxvig, Christian Schou; Pedersen, Patrick Steffen

    2016-01-01

    This paper provides a study of spatial undersampling in atomic force microscopy (AFM) imaging, followed by different image reconstruction techniques based on sparse approximation as well as interpolation. The main reasons for using undersampling are that it reduces the path length, and thereby the scanning time, as well as the amount of interaction between the AFM probe and the specimen. It can easily be applied on conventional AFM hardware. Due to undersampling, it is then necessary to further process the acquired image in order to reconstruct an approximation of the image. Based on real AFM cell images, our simulations reveal that using a simple raster scanning pattern in combination with conventional image interpolation performs very well. Moreover, this combination enables a reduction of the scanning time by a factor of 10 while retaining an average reconstruction quality around 36 dB PSNR.

  12. Development of tomographic reconstruction algorithms for the PIXE analysis of biological samples

    International Nuclear Information System (INIS)

    Nguyen, D.T.

    2008-05-01

    The development of 3-dimensional microscopy techniques offering a spatial resolution of 1 μm or less has opened a large field of investigation in cell biology. Amongst them, an interesting advantage of ion beam micro-tomography is its ability to give quantitative results in terms of local concentrations in a direct way, using Particle Induced X-ray Emission Tomography (PIXET) combined with Scanning Transmission Ion Microscopy Tomography (STIMT). After a brief introduction to existing reconstruction techniques, we present the principle of the DISRA code, the most complete code written so far, which is the basis of the present work. We have modified and extended the DISRA algorithm by considering the specific aspects of biological specimens. Moreover, correction procedures were added to the code to reduce noise in the tomograms. For portability purposes, a Windows graphical interface was designed to easily enter and modify the experimental parameters used in the reconstruction, and to control the several steps of the data reduction. Results of STIMT and PIXET experiments on reference specimens and on human cancer cells are also presented. (author)

  13. Regularized non-stationary morphological reconstruction algorithm for weak signal detection in microseismic monitoring: methodology

    Science.gov (United States)

    Huang, Weilin; Wang, Runqiu; Chen, Yangkang

    2018-05-01

    The microseismic signal is typically weak compared with the strong background noise. In order to effectively detect the weak signal in microseismic data, we propose a mathematical-morphology-based approach. We decompose the initial data into several morphological multiscale components. For detection of the weak signal, a non-stationary weighting operator is proposed and introduced into the reconstruction of the data from its morphological multiscale components. The non-stationary weighting operator can be obtained by solving an inversion problem. The regularized non-stationary method can be understood as a non-stationary matching-filtering method, where the matching filter has the same size as the data to be filtered. In this paper, we provide detailed algorithmic descriptions and analysis. The detailed algorithm framework, parameter selection and computational issues for the regularized non-stationary morphological reconstruction (RNMR) method are presented. We validate the presented method through comprehensive analysis of different data examples. We first test the proposed technique using a synthetic data set. Then the proposed technique is applied to a field project, where the signals induced by hydraulic fracturing are recorded by 12 three-component geophones in a monitoring well. The result demonstrates that RNMR can improve the detectability of weak microseismic signals. Using the processed data, the short-term-average over long-term-average (STA/LTA) picking algorithm and Geiger's method are applied to obtain new locations of microseismic events. In addition, we show that the proposed RNMR method can be used not only on microseismic data but also on reflection seismic data to detect weak signals. We also discuss the extension of RNMR from 1D to 2D and higher-dimensional versions.
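
    A minimal STA/LTA picker of the kind applied to the filtered traces is sketched below in Python; the window lengths and the detection threshold are illustrative assumptions.

        import numpy as np

        def sta_lta(trace, n_sta, n_lta):
            """Short-term over long-term mean energy ratio, aligned at window ends."""
            energy = np.abs(trace) ** 2
            csum = np.concatenate(([0.0], np.cumsum(energy)))
            sta = (csum[n_sta:] - csum[:-n_sta]) / n_sta
            lta = (csum[n_lta:] - csum[:-n_lta]) / n_lta
            k = len(lta)
            return sta[-k:] / np.maximum(lta, 1e-12)

        def pick(trace, n_sta=20, n_lta=200, thresh=3.0):
            """Return the first sample whose STA/LTA ratio exceeds the threshold."""
            ratio = sta_lta(np.asarray(trace, dtype=float), n_sta, n_lta)
            hits = np.flatnonzero(ratio > thresh)
            return None if hits.size == 0 else int(hits[0]) + n_lta - 1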

  14. Application of the FDK algorithm for multi-slice tomographic image reconstruction

    Energy Technology Data Exchange (ETDEWEB)

    Costa, Paulo Roberto, E-mail: pcosta@if.usp.b [Universidade de Sao Paulo (IFUSP), SP (Brazil). Inst. de Fisica. Dept. de Fisica Nuclear; Araujo, Ericky Caldas de Almeida [Fine Image Technology, Sao Paulo, SP (Brazil)

    2010-08-15

    This work consisted of the study and application of the FDK (Feldkamp-Davis-Kress) algorithm for tomographic image reconstruction using cone-beam geometry, resulting in the implementation of an adapted multi-slice computed tomography (MSCT) system. For the acquisition of the projections, a rotating platform coupled to a goniometer, X-ray equipment and a charge-coupled-device (CCD) digital image detector were used. The FDK algorithm was implemented on a computer with a Pentium XEON 3.0 processor, which was used for the reconstruction process. Initially, the original FDK algorithm was applied considering only ideal physical conditions in the measurement process. Then, corrections for artifacts related to the projection measurement process were incorporated. The implemented MSCT system was calibrated; a specially designed and manufactured object with a known linear attenuation coefficient distribution μ(r) was used for this purpose. Finally, the implemented MSCT system was used for multi-slice tomographic reconstruction of an inhomogeneous object whose distribution μ(r) was unknown. Some aspects of the reconstructed images were analyzed to assess the robustness and reproducibility of the system. During the system calibration, a linear relationship between CT number and the linear attenuation coefficients of materials was verified, which validates the application of the implemented multi-slice tomographic system for the characterization of the linear attenuation coefficients of several distinct objects. (author)

  15. Pediatric chest HRCT using the iDose4 Hybrid Iterative Reconstruction Algorithm: Which iDose level to choose?

    International Nuclear Information System (INIS)

    Smarda, M; Alexopoulou, E; Mazioti, A; Kordolaimi, S; Ploussi, A; Efstathopoulos, E; Priftis, K

    2015-01-01

    The purpose of the study is to determine the appropriate iterative reconstruction (IR) algorithm level that combines image quality and diagnostic confidence for pediatric patients undergoing high-resolution computed tomography (HRCT). During the last 2 years, a total of 20 children up to 10 years old with a clinical presentation of chronic bronchitis underwent HRCT on our department's 64-detector-row CT scanner using the iDose IR algorithm, with similar image settings (80 kVp, 40-50 mAs). CT images were reconstructed with all iDose levels (levels 1 to 7) as well as with the filtered-back-projection (FBP) algorithm. Subjective image quality was evaluated by 2 experienced radiologists in terms of image noise, sharpness, contrast and diagnostic acceptability using a 5-point scale (1 = excellent image, 5 = non-acceptable image). The existence of artifacts was also noted. All mean scores from both radiologists corresponded to satisfactory image quality (score ≤3), even with the FBP algorithm. Almost excellent (score <2) overall image quality was achieved with iDose levels 5 to 7, but oversmoothing artifacts appearing with iDose levels 6 and 7 affected diagnostic confidence. In conclusion, the use of iDose level 5 enables almost excellent image quality without considerable artifacts affecting the diagnosis. Further evaluation is needed in order to draw more precise conclusions. (paper)

  16. Direct cone beam SPECT reconstruction with camera tilt

    International Nuclear Information System (INIS)

    Jianying Li; Jaszczak, R.J.; Greer, K.L.; Coleman, R.E.; Zongjian Cao; Tsui, B.M.W.

    1993-01-01

    A filtered backprojection (FBP) algorithm is derived to perform cone beam (CB) single-photon emission computed tomography (SPECT) reconstruction with camera tilt using circular orbits. This algorithm reconstructs the tilted-angle CB projection data directly by incorporating the tilt angle into it. When the tilt angle becomes zero, the algorithm reduces to that of Feldkamp. Experimental phantom studies using both a two-point source and the three-dimensional Hoffman brain phantom have been performed. The transaxial tilted cone beam brain images and profiles obtained using the new algorithm are compared with those obtained without camera tilt. For those slices which have approximately the same distance from the detector in both the tilted and non-tilted set-ups, the two transaxial reconstructions have similar profiles. The two-point source images reconstructed with the new algorithm and the tilted cone beam brain images are also compared with those reconstructed with the existing tilted cone beam algorithm. (author)

  17. A fast image reconstruction technique based on ART

    International Nuclear Information System (INIS)

    Zhang Shunli; Zhang Dinghua; Wang Kai; Huang Kuidong; Li Weibin

    2007-01-01

    Algebraic Reconstruction Technique (ART) is an iterative method for image reconstruction. Improving its reconstruction speed has been one of the important research topics for ART. For the simplified weight-coefficient reconstruction model of ART, a fast grid traversal algorithm is proposed, which can determine the grid index using simple operations such as addition, subtraction and comparison. Since the weight coefficients are calculated in real time during the iteration, a large amount of storage is saved and the reconstruction speed is greatly increased. Experimental results show that the new algorithm is very effective, improving the reconstruction speed by about 10 times compared with the traditional algorithm. (authors)
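
    The kind of fast grid-index stepping described above can be illustrated with a 2D ray traversal in the Amanatides-Woo style, which advances through cells using only additions and comparisons per step; the unit grid and the ray parameters below are illustrative assumptions, not the authors' exact algorithm.

        def traverse(x0, y0, dx, dy, nx, ny):
            """Yield the (ix, iy) cells crossed by a ray from (x0, y0) in a unit grid."""
            ix, iy = int(x0), int(y0)
            step_x, step_y = (1 if dx > 0 else -1), (1 if dy > 0 else -1)
            # parametric distance to the next vertical / horizontal grid line
            t_max_x = ((ix + (step_x > 0)) - x0) / dx if dx else float('inf')
            t_max_y = ((iy + (step_y > 0)) - y0) / dy if dy else float('inf')
            t_dx = abs(1.0 / dx) if dx else float('inf')
            t_dy = abs(1.0 / dy) if dy else float('inf')
            while 0 <= ix < nx and 0 <= iy < ny:
                yield ix, iy
                if t_max_x < t_max_y:         # next crossing is a vertical line
                    ix += step_x; t_max_x += t_dx
                else:                         # next crossing is a horizontal line
                    iy += step_y; t_max_y += t_dy

        print(list(traverse(0.5, 0.5, 1.0, 0.7, 4, 4)))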

  18. Parallelization of the model-based iterative reconstruction algorithm DIRA

    International Nuclear Information System (INIS)

    Oertenberg, A.; Sandborg, M.; Alm Carlsson, G.; Malusek, A.; Magnusson, M.

    2016-01-01

    New paradigms for parallel programming have been devised to simplify software development on multi-core processors and many-core graphics processing units (GPUs). Despite their obvious benefits, the parallelization of existing computer programs is not an easy task. In this work, the use of the Open Multiprocessing (OpenMP) and Open Computing Language (OpenCL) frameworks is considered for the parallelization of the model-based iterative reconstruction algorithm DIRA, with the aim of significantly shortening the code's execution time. Selected routines were parallelized using the OpenMP and OpenCL libraries; some routines were converted from MATLAB to C and optimised. Parallelization of the code with OpenMP was easy and resulted in an overall speedup of 15 on a 16-core computer. Parallelization with OpenCL was more difficult owing to differences between the central processing unit and GPU architectures. The resulting speedup was substantially lower than the theoretical peak performance of the GPU; the cause is explained. (authors)

  19. Sparse deconvolution for the large-scale ill-posed inverse problem of impact force reconstruction

    Science.gov (United States)

    Qiao, Baijie; Zhang, Xingwu; Gao, Jiawei; Liu, Ruonan; Chen, Xuefeng

    2017-01-01

    Most previous regularization methods for solving the inverse problem of force reconstruction minimize the l2-norm of the desired force. However, these traditional regularization methods, such as Tikhonov regularization and truncated singular value decomposition, commonly fail to solve large-scale ill-posed inverse problems at moderate computational cost. In this paper, taking into account the sparse characteristic of impact forces, the idea of sparse deconvolution is first introduced to the field of impact force reconstruction and a general sparse deconvolution model of the impact force is constructed. Second, a novel impact force reconstruction method based on the primal-dual interior-point method (PDIPM) is proposed to solve such a large-scale sparse deconvolution model, where minimizing the l2-norm is replaced by minimizing the l1-norm. Meanwhile, the preconditioned conjugate gradient algorithm is used to compute the search direction of PDIPM with high computational efficiency. Finally, two experiments, covering small- and medium-scale single impact force reconstruction and relatively large-scale consecutive impact force reconstruction, are conducted on a composite wind turbine blade and a shell structure to illustrate the advantage of PDIPM. Compared with Tikhonov regularization, PDIPM is more efficient, accurate and robust in both single and consecutive impact force reconstruction.

  20. Performance of algorithms that reconstruct missing transverse momentum in $\\sqrt{s}$= 8 TeV proton--proton collisions in the ATLAS detector

    CERN Document Server

    Aad, G.; Abdallah, Jalal; Abdinov, Ovsat; Abeloos, Baptiste; Aben, Rosemarie; Abolins, Maris; AbouZeid, Ossama; Abramowicz, Halina; Abreu, Henso; Abreu, Ricardo; Abulaiti, Yiming; Acharya, Bobby Samir; Adamczyk, Leszek; Adams, David; Adelman, Jahred; Adomeit, Stefanie; Adye, Tim; Affolder, Tony; Agatonovic-Jovin, Tatjana; Agricola, Johannes; Aguilar-Saavedra, Juan Antonio; Ahlen, Steven; Ahmadov, Faig; Aielli, Giulio; Akerstedt, Henrik; Åkesson, Torsten Paul Ake; Akimov, Andrei; Alberghi, Gian Luigi; Albert, Justin; Albrand, Solveig; Alconada Verzini, Maria Josefina; Aleksa, Martin; Aleksandrov, Igor; Alexa, Calin; Alexander, Gideon; Alexopoulos, Theodoros; Alhroob, Muhammad; Alimonti, Gianluca; Alio, Lion; Alison, John; Alkire, Steven Patrick; Allbrooke, Benedict; Allen, Benjamin William; Allport, Phillip; Aloisio, Alberto; Alonso, Alejandro; Alonso, Francisco; Alpigiani, Cristiano; Alvarez Gonzalez, Barbara; Άlvarez Piqueras, Damián; Alviggi, Mariagrazia; Amadio, Brian Thomas; Amako, Katsuya; Amaral Coutinho, Yara; Amelung, Christoph; Amidei, Dante; Amor Dos Santos, Susana Patricia; Amorim, Antonio; Amoroso, Simone; Amram, Nir; Amundsen, Glenn; Anastopoulos, Christos; Ancu, Lucian Stefan; Andari, Nansi; Andeen, Timothy; Anders, Christoph Falk; Anders, Gabriel; Anders, John Kenneth; Anderson, Kelby; Andreazza, Attilio; Andrei, George Victor; Angelidakis, Stylianos; Angelozzi, Ivan; Anger, Philipp; Angerami, Aaron; Anghinolfi, Francis; Anisenkov, Alexey; Anjos, Nuno; Annovi, Alberto; Antonelli, Mario; Antonov, Alexey; Antos, Jaroslav; Anulli, Fabio; Aoki, Masato; Aperio Bella, Ludovica; Arabidze, Giorgi; Arai, Yasuo; Araque, Juan Pedro; Arce, Ayana; Arduh, Francisco Anuar; Arguin, Jean-Francois; Argyropoulos, Spyridon; Arik, Metin; Armbruster, Aaron James; Arnaez, Olivier; Arnold, Hannah; Arratia, Miguel; Arslan, Ozan; Artamonov, Andrei; Artoni, Giacomo; Artz, Sebastian; Asai, Shoji; Asbah, Nedaa; Ashkenazi, Adi; Åsman, Barbro; Asquith, Lily; Assamagan, Ketevi; Astalos, Robert; Atkinson, Markus; Atlay, Naim Bora; Augsten, Kamil; Avolio, Giuseppe; Axen, Bradley; Ayoub, Mohamad Kassem; Azuelos, Georges; Baak, Max; Baas, Alessandra; Baca, Matthew John; Bachacou, Henri; Bachas, Konstantinos; Backes, Moritz; Backhaus, Malte; Bagiacchi, Paolo; Bagnaia, Paolo; Bai, Yu; Baines, John; Baker, Oliver Keith; Baldin, Evgenii; Balek, Petr; Balestri, Thomas; Balli, Fabrice; Balunas, William Keaton; Banas, Elzbieta; Banerjee, Swagato; Bannoura, Arwa A E; Barak, Liron; Barberio, Elisabetta Luigia; Barberis, Dario; Barbero, Marlon; Barillari, Teresa; Barklow, Timothy; Barlow, Nick; Barnes, Sarah Louise; Barnett, Bruce; Barnett, Michael; Barnovska, Zuzana; Baroncelli, Antonio; Barone, Gaetano; Barr, Alan; Barranco Navarro, Laura; Barreiro, Fernando; Barreiro Guimarães da Costa, João; Bartoldus, Rainer; Barton, Adam Edward; Bartos, Pavol; Basalaev, Artem; Bassalat, Ahmed; Basye, Austin; Bates, Richard; Batista, Santiago Juan; Batley, Richard; Battaglia, Marco; Bauce, Matteo; Bauer, Florian; Bawa, Harinder Singh; Beacham, James; Beattie, Michael David; Beau, Tristan; Beauchemin, Pierre-Hugues; Beccherle, Roberto; Bechtle, Philip; Beck, Hans~Peter; Becker, Kathrin; Becker, Maurice; Beckingham, Matthew; Becot, Cyril; Beddall, Andrew; Beddall, Ayda; Bednyakov, Vadim; Bedognetti, Matteo; Bee, Christopher; Beemster, Lars; Beermann, Thomas; Begel, Michael; Behr, Janna Katharina; Belanger-Champagne, Camille; Bella, Gideon; Bellagamba, Lorenzo; Bellerive, Alain; Bellomo, Massimiliano; Belotskiy, Konstantin; 
Beltramello, Olga; Benary, Odette; Benchekroun, Driss; Bender, Michael; Bendtz, Katarina; Benekos, Nektarios; Benhammou, Yan; Benhar Noccioli, Eleonora; Benitez Garcia, Jorge-Armando; Benjamin, Douglas; Bensinger, James; Bentvelsen, Stan; Beresford, Lydia; Beretta, Matteo; Berge, David; Bergeaas Kuutmann, Elin; Berger, Nicolas; Berghaus, Frank; Beringer, Jürg; Bernard, Clare; Bernard, Nathan Rogers; Bernius, Catrin; Bernlochner, Florian Urs; Berry, Tracey; Berta, Peter; Bertella, Claudia; Bertoli, Gabriele; Bertolucci, Federico; Bertsche, Carolyn; Bertsche, David; Besjes, Geert-Jan; Bessidskaia Bylund, Olga; Bessner, Martin Florian; Besson, Nathalie; Betancourt, Christopher; Bethke, Siegfried; Bevan, Adrian John; Bhimji, Wahid; Bianchi, Riccardo-Maria; Bianchini, Louis; Bianco, Michele; Biebel, Otmar; Biedermann, Dustin; Biesuz, Nicolo Vladi; Biglietti, Michela; Bilbao De Mendizabal, Javier; Bilokon, Halina; Bindi, Marcello; Binet, Sebastien; Bingul, Ahmet; Bini, Cesare; Biondi, Silvia; Bjergaard, David Martin; Black, Curtis; Black, James; Black, Kevin; Blackburn, Daniel; Blair, Robert; Blanchard, Jean-Baptiste; Blanco, Jacobo Ezequiel; Blazek, Tomas; Bloch, Ingo; Blocker, Craig; Blum, Walter; Blumenschein, Ulrike; Blunier, Sylvain; Bobbink, Gerjan; Bobrovnikov, Victor; Bocchetta, Simona Serena; Bocci, Andrea; Bock, Christopher; Boehler, Michael; Boerner, Daniela; Bogaerts, Joannes Andreas; Bogavac, Danijela; Bogdanchikov, Alexander; Bohm, Christian; Boisvert, Veronique; Bold, Tomasz; Boldea, Venera; Boldyrev, Alexey; Bomben, Marco; Bona, Marcella; Boonekamp, Maarten; Borisov, Anatoly; Borissov, Guennadi; Bortfeldt, Jonathan; Bortolotto, Valerio; Bos, Kors; Boscherini, Davide; Bosman, Martine; Boudreau, Joseph; Bouffard, Julian; Bouhova-Thacker, Evelina Vassileva; Boumediene, Djamel Eddine; Bourdarios, Claire; Bousson, Nicolas; Boutle, Sarah Kate; Boveia, Antonio; Boyd, James; Boyko, Igor; Bracinik, Juraj; Brandt, Andrew; Brandt, Gerhard; Brandt, Oleg; Bratzler, Uwe; Brau, Benjamin; Brau, James; Braun, Helmut; Breaden Madden, William Dmitri; Brendlinger, Kurt; Brennan, Amelia Jean; Brenner, Lydia; Brenner, Richard; Bressler, Shikma; Bristow, Timothy Michael; Britton, Dave; Britzger, Daniel; Brochu, Frederic; Brock, Ian; Brock, Raymond; Brooijmans, Gustaaf; Brooks, Timothy; Brooks, William; Brosamer, Jacquelyn; Brost, Elizabeth; Bruckman de Renstrom, Pawel; Bruncko, Dusan; Bruneliere, Renaud; Bruni, Alessia; Bruni, Graziano; Brunt, Benjamin; Bruschi, Marco; Bruscino, Nello; Bryant, Patrick; Bryngemark, Lene; Buanes, Trygve; Buat, Quentin; Buchholz, Peter; Buckley, Andrew; Budagov, Ioulian; Buehrer, Felix; Bugge, Lars; Bugge, Magnar Kopangen; Bulekov, Oleg; Bullock, Daniel; Burckhart, Helfried; Burdin, Sergey; Burgard, Carsten Daniel; Burghgrave, Blake; Burke, Stephen; Burmeister, Ingo; Busato, Emmanuel; Büscher, Daniel; Büscher, Volker; Bussey, Peter; Butler, John; Butt, Aatif Imtiaz; Buttar, Craig; Butterworth, Jonathan; Butti, Pierfrancesco; Buttinger, William; Buzatu, Adrian; Buzykaev, Aleksey; Cabrera Urbán, Susana; Caforio, Davide; Cairo, Valentina; Cakir, Orhan; Calace, Noemi; Calafiura, Paolo; Calandri, Alessandro; Calderini, Giovanni; Calfayan, Philippe; Caloba, Luiz; Calvet, David; Calvet, Samuel; Calvet, Thomas Philippe; Camacho Toro, Reina; Camarda, Stefano; Camarri, Paolo; Cameron, David; Caminal Armadans, Roger; Camincher, Clement; Campana, Simone; Campanelli, Mario; Campoverde, Angel; Canale, Vincenzo; Canepa, Anadi; Cano Bret, Marc; Cantero, Josu; Cantrill, Robert; Cao, 
Tingting; Capeans Garrido, Maria Del Mar; Caprini, Irinel; Caprini, Mihai; Capua, Marcella; Caputo, Regina; Carbone, Ryne Michael; Cardarelli, Roberto; Cardillo, Fabio; Carli, Ina; Carli, Tancredi; Carlino, Gianpaolo; Carminati, Leonardo; Caron, Sascha; Carquin, Edson; Carrillo-Montoya, German D; Carter, Janet; Carvalho, João; Casadei, Diego; Casado, Maria Pilar; Casolino, Mirkoantonio; Casper, David William; Castaneda-Miranda, Elizabeth; Castelli, Angelantonio; Castillo Gimenez, Victoria; Castro, Nuno Filipe; Catinaccio, Andrea; Catmore, James; Cattai, Ariella; Caudron, Julien; Cavaliere, Viviana; Cavalli, Donatella; Cavalli-Sforza, Matteo; Cavasinni, Vincenzo; Ceradini, Filippo; Cerda Alberich, Leonor; Cerio, Benjamin; Santiago Cerqueira, Augusto; Cerri, Alessandro; Cerrito, Lucio; Cerutti, Fabio; Cerv, Matevz; Cervelli, Alberto; Cetin, Serkant Ali; Chafaq, Aziz; Chakraborty, Dhiman; Chan, Yat Long; Chang, Philip; Chapman, John Derek; Charlton, Dave; Chau, Chav Chhiv; Chavez Barajas, Carlos Alberto; Che, Siinn; Cheatham, Susan; Chegwidden, Andrew; Chekanov, Sergei; Chekulaev, Sergey; Chelkov, Gueorgui; Chelstowska, Magda Anna; Chen, Chunhui; Chen, Hucheng; Chen, Karen; Chen, Shenjian; Chen, Shion; Chen, Xin; Chen, Ye; Cheng, Hok Chuen; Cheng, Yangyang; Cheplakov, Alexander; Cheremushkina, Evgenia; Cherkaoui El Moursli, Rajaa; Chernyatin, Valeriy; Cheu, Elliott; Chevalier, Laurent; Chiarella, Vitaliano; Chiarelli, Giorgio; Chiodini, Gabriele; Chisholm, Andrew; Chislett, Rebecca Thalatta; Chitan, Adrian; Chizhov, Mihail; Choi, Kyungeon; Chouridou, Sofia; Chow, Bonnie Kar Bo; Christodoulou, Valentinos; Chromek-Burckhart, Doris; Chudoba, Jiri; Chuinard, Annabelle Julia; Chwastowski, Janusz; Chytka, Ladislav; Ciapetti, Guido; Ciftci, Abbas Kenan; Cinca, Diane; Cindro, Vladimir; Cioara, Irina Antonela; Ciocio, Alessandra; Cirotto, Francesco; Citron, Zvi Hirsh; Ciubancan, Mihai; Clark, Allan G; Clark, Brian Lee; Clark, Philip James; Clarke, Robert; Clement, Christophe; Coadou, Yann; Cobal, Marina; Coccaro, Andrea; Cochran, James H; Coffey, Laurel; Colasurdo, Luca; Cole, Brian; Cole, Stephen; Colijn, Auke-Pieter; Collot, Johann; Colombo, Tommaso; Compostella, Gabriele; Conde Muiño, Patricia; Coniavitis, Elias; Connell, Simon Henry; Connelly, Ian; Consorti, Valerio; Constantinescu, Serban; Conta, Claudio; Conti, Geraldine; Conventi, Francesco; Cooke, Mark; Cooper, Ben; Cooper-Sarkar, Amanda; Cornelissen, Thijs; Corradi, Massimo; Corriveau, Francois; Corso-Radu, Alina; Cortes-Gonzalez, Arely; Cortiana, Giorgio; Costa, Giuseppe; Costa, María José; Costanzo, Davide; Cottin, Giovanna; Cowan, Glen; Cox, Brian; Cranmer, Kyle; Crawley, Samuel Joseph; Cree, Graham; Crépé-Renaudin, Sabine; Crescioli, Francesco; Cribbs, Wayne Allen; Crispin Ortuzar, Mireia; Cristinziani, Markus; Croft, Vince; Crosetti, Giovanni; Cuhadar Donszelmann, Tulay; Cummings, Jane; Curatolo, Maria; Cúth, Jakub; Cuthbert, Cameron; Czirr, Hendrik; Czodrowski, Patrick; D'Auria, Saverio; D'Onofrio, Monica; Da Cunha Sargedas De Sousa, Mario Jose; Da Via, Cinzia; Dabrowski, Wladyslaw; Dafinca, Alexandru; Dai, Tiesheng; Dale, Orjan; Dallaire, Frederick; Dallapiccola, Carlo; Dam, Mogens; Dandoy, Jeffrey Rogers; Dang, Nguyen Phuong; Daniells, Andrew Christopher; Danninger, Matthias; Dano Hoffmann, Maria; Dao, Valerio; Darbo, Giovanni; Darmora, Smita; Dassoulas, James; Dattagupta, Aparajita; Davey, Will; David, Claire; Davidek, Tomas; Davies, Eleanor; Davies, Merlin; Davison, Peter; Davygora, Yuriy; Dawe, Edmund; Dawson, Ian; 
Daya-Ishmukhametova, Rozmin; De, Kaushik; de Asmundis, Riccardo; De Benedetti, Abraham; De Castro, Stefano; De Cecco, Sandro; De Groot, Nicolo; de Jong, Paul; De la Torre, Hector; De Lorenzi, Francesco; De Pedis, Daniele; De Salvo, Alessandro; De Sanctis, Umberto; De Santo, Antonella; De Vivie De Regie, Jean-Baptiste; Dearnaley, William James; Debbe, Ramiro; Debenedetti, Chiara; Dedovich, Dmitri; Deigaard, Ingrid; Del Peso, Jose; Del Prete, Tarcisio; Delgove, David; Deliot, Frederic; Delitzsch, Chris Malena; Deliyergiyev, Maksym; Dell'Acqua, Andrea; Dell'Asta, Lidia; Dell'Orso, Mauro; Della Pietra, Massimo; della Volpe, Domenico; Delmastro, Marco; Delsart, Pierre-Antoine; Deluca, Carolina; DeMarco, David; Demers, Sarah; Demichev, Mikhail; Demilly, Aurelien; Denisov, Sergey; Denysiuk, Denys; Derendarz, Dominik; Derkaoui, Jamal Eddine; Derue, Frederic; Dervan, Paul; Desch, Klaus Kurt; Deterre, Cecile; Dette, Karola; Deviveiros, Pier-Olivier; Dewhurst, Alastair; Dhaliwal, Saminder; Di Ciaccio, Anna; Di Ciaccio, Lucia; Di Donato, Camilla; Di Girolamo, Alessandro; Di Girolamo, Beniamino; Di Micco, Biagio; Di Nardo, Roberto; Di Simone, Andrea; Di Sipio, Riccardo; Di Valentino, David; Diaconu, Cristinel; Diamond, Miriam; Dias, Flavia; Diaz, Marco Aurelio; Diehl, Edward; Dietrich, Janet; Diglio, Sara; Dimitrievska, Aleksandra; Dingfelder, Jochen; Dita, Petre; Dita, Sanda; Dittus, Fridolin; Djama, Fares; Djobava, Tamar; Djuvsland, Julia Isabell; Barros do Vale, Maria Aline; Dobos, Daniel; Dobre, Monica; Doglioni, Caterina; Dohmae, Takeshi; Dolejsi, Jiri; Dolezal, Zdenek; Dolgoshein, Boris; Donadelli, Marisilvia; Donati, Simone; Dondero, Paolo; Donini, Julien; Dopke, Jens; Doria, Alessandra; Dova, Maria-Teresa; Doyle, Tony; Drechsler, Eric; Dris, Manolis; Du, Yanyan; Duarte-Campderros, Jorge; Dubreuil, Emmanuelle; Duchovni, Ehud; Duckeck, Guenter; Ducu, Otilia Anamaria; Duda, Dominik; Dudarev, Alexey; Duflot, Laurent; Duguid, Liam; Dührssen, Michael; Dunford, Monica; Duran Yildiz, Hatice; Düren, Michael; Durglishvili, Archil; Duschinger, Dirk; Dutta, Baishali; Dyndal, Mateusz; Eckardt, Christoph; Ecker, Katharina Maria; Edgar, Ryan Christopher; Edson, William; Edwards, Nicholas Charles; Eifert, Till; Eigen, Gerald; Einsweiler, Kevin; Ekelof, Tord; El Kacimi, Mohamed; Ellajosyula, Venugopal; Ellert, Mattias; Elles, Sabine; Ellinghaus, Frank; Elliot, Alison; Ellis, Nicolas; Elmsheuser, Johannes; Elsing, Markus; Emeliyanov, Dmitry; Enari, Yuji; Endner, Oliver Chris; Endo, Masaki; Ennis, Joseph Stanford; Erdmann, Johannes; Ereditato, Antonio; Ernis, Gunar; Ernst, Jesse; Ernst, Michael; Errede, Steven; Ertel, Eugen; Escalier, Marc; Esch, Hendrik; Escobar, Carlos; Esposito, Bellisario; Etienvre, Anne-Isabelle; Etzion, Erez; Evans, Hal; Ezhilov, Alexey; Fabbri, Laura; Facini, Gabriel; Fakhrutdinov, Rinat; Falciano, Speranza; Falla, Rebecca Jane; Faltova, Jana; Fang, Yaquan; Fanti, Marcello; Farbin, Amir; Farilla, Addolorata; Farina, Christian; Farooque, Trisha; Farrell, Steven; Farrington, Sinead; Farthouat, Philippe; Fassi, Farida; Fassnacht, Patrick; Fassouliotis, Dimitrios; Faucci Giannelli, Michele; Favareto, Andrea; Fayard, Louis; Fedin, Oleg; Fedorko, Wojciech; Feigl, Simon; Feligioni, Lorenzo; Feng, Cunfeng; Feng, Eric; Feng, Haolu; Fenyuk, Alexander; Feremenga, Last; Fernandez Martinez, Patricia; Fernandez Perez, Sonia; Ferrando, James; Ferrari, Arnaud; Ferrari, Pamela; Ferrari, Roberto; Ferreira de Lima, Danilo Enoque; Ferrer, Antonio; Ferrere, Didier; Ferretti, Claudio; Ferretto Parodi, Andrea; 
Fiedler, Frank; Filipčič, Andrej; Filipuzzi, Marco; Filthaut, Frank; Fincke-Keeler, Margret; Finelli, Kevin Daniel; Fiolhais, Miguel; Fiorini, Luca; Firan, Ana; Fischer, Adam; Fischer, Cora; Fischer, Julia; Fisher, Wade Cameron; Flaschel, Nils; Fleck, Ivor; Fleischmann, Philipp; Fletcher, Gareth Thomas; Fletcher, Gregory; Fletcher, Rob Roy MacGregor; Flick, Tobias; Floderus, Anders; Flores Castillo, Luis; Flowerdew, Michael; Forcolin, Giulio Tiziano; Formica, Andrea; Forti, Alessandra; Fournier, Daniel; Fox, Harald; Fracchia, Silvia; Francavilla, Paolo; Franchini, Matteo; Francis, David; Franconi, Laura; Franklin, Melissa; Frate, Meghan; Fraternali, Marco; Freeborn, David; Fressard-Batraneanu, Silvia; Friedrich, Felix; Froidevaux, Daniel; Frost, James; Fukunaga, Chikara; Fullana Torregrosa, Esteban; Fusayasu, Takahiro; Fuster, Juan; Gabaldon, Carolina; Gabizon, Ofir; Gabrielli, Alessandro; Gabrielli, Andrea; Gach, Grzegorz; Gadatsch, Stefan; Gadomski, Szymon; Gagliardi, Guido; Gagnon, Pauline; Galea, Cristina; Galhardo, Bruno; Gallas, Elizabeth; Gallop, Bruce; Gallus, Petr; Galster, Gorm Aske Gram Krohn; Gan, KK; Gao, Jun; Gao, Yanyan; Gao, Yongsheng; Garay Walls, Francisca; García, Carmen; García Navarro, José Enrique; Garcia-Sciveres, Maurice; Gardner, Robert; Garelli, Nicoletta; Garonne, Vincent; Gatti, Claudio; Gaudiello, Andrea; Gaudio, Gabriella; Gaur, Bakul; Gauthier, Lea; Gavrilenko, Igor; Gay, Colin; Gaycken, Goetz; Gazis, Evangelos; Gecse, Zoltan; Gee, Norman; Geich-Gimbel, Christoph; Geisler, Manuel Patrice; Gemme, Claudia; Genest, Marie-Hélène; Geng, Cong; Gentile, Simonetta; George, Simon; Gerbaudo, Davide; Gershon, Avi; Ghasemi, Sara; Ghazlane, Hamid; Giacobbe, Benedetto; Giagu, Stefano; Giannetti, Paola; Gibbard, Bruce; Gibson, Stephen; Gignac, Matthew; Gilchriese, Murdock; Gillam, Thomas; Gillberg, Dag; Gilles, Geoffrey; Gingrich, Douglas; Giokaris, Nikos; Giordani, MarioPaolo; Giorgi, Filippo Maria; Giorgi, Francesco Michelangelo; Giraud, Pierre-Francois; Giromini, Paolo; Giugni, Danilo; Giuliani, Claudia; Giulini, Maddalena; Gjelsten, Børge Kile; Gkaitatzis, Stamatios; Gkialas, Ioannis; Gkougkousis, Evangelos Leonidas; Gladilin, Leonid; Glasman, Claudia; Glatzer, Julian; Glaysher, Paul; Glazov, Alexandre; Goblirsch-Kolb, Maximilian; Goddard, Jack Robert; Godlewski, Jan; Goldfarb, Steven; Golling, Tobias; Golubkov, Dmitry; Gomes, Agostinho; Gonçalo, Ricardo; Goncalves Pinto Firmino Da Costa, Joao; Gonella, Laura; González de la Hoz, Santiago; Gonzalez Parra, Garoe; Gonzalez-Sevilla, Sergio; Goossens, Luc; Gorbounov, Petr Andreevich; Gordon, Howard; Gorelov, Igor; Gorini, Benedetto; Gorini, Edoardo; Gorišek, Andrej; Gornicki, Edward; Goshaw, Alfred; Gössling, Claus; Gostkin, Mikhail Ivanovitch; Goudet, Christophe Raymond; Goujdami, Driss; Goussiou, Anna; Govender, Nicolin; Gozani, Eitan; Graber, Lars; Grabowska-Bold, Iwona; Gradin, Per Olov Joakim; Grafström, Per; Gramling, Johanna; Gramstad, Eirik; Grancagnolo, Sergio; Gratchev, Vadim; Gray, Heather; Graziani, Enrico; Greenwood, Zeno Dixon; Grefe, Christian; Gregersen, Kristian; Gregor, Ingrid-Maria; Grenier, Philippe; Grevtsov, Kirill; Griffiths, Justin; Grillo, Alexander; Grimm, Kathryn; Grinstein, Sebastian; Gris, Philippe Luc Yves; Grivaz, Jean-Francois; Groh, Sabrina; Grohs, Johannes Philipp; Gross, Eilam; Grosse-Knetter, Joern; Grossi, Giulio Cornelio; Grout, Zara Jane; Guan, Liang; Guenther, Jaroslav; Guescini, Francesco; Guest, Daniel; Gueta, Orel; Guido, Elisa; Guillemin, Thibault; Guindon, Stefan; Gul, Umar; 
Gumpert, Christian; Guo, Jun; Guo, Yicheng; Gupta, Shaun; Gustavino, Giuliano; Gutierrez, Phillip; Gutierrez Ortiz, Nicolas Gilberto; Gutschow, Christian; Guyot, Claude; Gwenlan, Claire; Gwilliam, Carl; Haas, Andy; Haber, Carl; Hadavand, Haleh Khani; Haddad, Nacim; Hadef, Asma; Haefner, Petra; Hageböck, Stephan; Hajduk, Zbigniew; Hakobyan, Hrachya; Haleem, Mahsana; Haley, Joseph; Hall, David; Halladjian, Garabed; Hallewell, Gregory David; Hamacher, Klaus; Hamal, Petr; Hamano, Kenji; Hamilton, Andrew; Hamity, Guillermo Nicolas; Hamnett, Phillip George; Han, Liang; Hanagaki, Kazunori; Hanawa, Keita; Hance, Michael; Haney, Bijan; Hanke, Paul; Hanna, Remie; Hansen, Jørgen Beck; Hansen, Jorn Dines; Hansen, Maike Christina; Hansen, Peter Henrik; Hara, Kazuhiko; Hard, Andrew; Harenberg, Torsten; Hariri, Faten; Harkusha, Siarhei; Harrington, Robert; Harrison, Paul Fraser; Hartjes, Fred; Hasegawa, Makoto; Hasegawa, Yoji; Hasib, A; Hassani, Samira; Haug, Sigve; Hauser, Reiner; Hauswald, Lorenz; Havranek, Miroslav; Hawkes, Christopher; Hawkings, Richard John; Hawkins, Anthony David; Hayashi, Takayasu; Hayden, Daniel; Hays, Chris; Hays, Jonathan Michael; Hayward, Helen; Haywood, Stephen; Head, Simon; Heck, Tobias; Hedberg, Vincent; Heelan, Louise; Heim, Sarah; Heim, Timon; Heinemann, Beate; Heinrich, Lukas; Hejbal, Jiri; Helary, Louis; Hellman, Sten; Helsens, Clement; Henderson, James; Henderson, Robert; Heng, Yang; Henkelmann, Steffen; Henriques Correia, Ana Maria; Henrot-Versille, Sophie; Herbert, Geoffrey Henry; Hernández Jiménez, Yesenia; Herten, Gregor; Hertenberger, Ralf; Hervas, Luis; Hesketh, Gavin Grant; Hessey, Nigel; Hetherly, Jeffrey Wayne; Hickling, Robert; Higón-Rodriguez, Emilio; Hill, Ewan; Hill, John; Hiller, Karl Heinz; Hillier, Stephen; Hinchliffe, Ian; Hines, Elizabeth; Hinman, Rachel Reisner; Hirose, Minoru; Hirschbuehl, Dominic; Hobbs, John; Hod, Noam; Hodgkinson, Mark; Hodgson, Paul; Hoecker, Andreas; Hoeferkamp, Martin; Hoenig, Friedrich; Hohlfeld, Marc; Hohn, David; Holmes, Tova Ray; Homann, Michael; Hong, Tae Min; Hooberman, Benjamin Henry; Hopkins, Walter; Horii, Yasuyuki; Horton, Arthur James; Hostachy, Jean-Yves; Hou, Suen; Hoummada, Abdeslam; Howard, Jacob; Howarth, James; Hrabovsky, Miroslav; Hristova, Ivana; Hrivnac, Julius; Hryn'ova, Tetiana; Hrynevich, Aliaksei; Hsu, Catherine; Hsu, Pai-hsien Jennifer; Hsu, Shih-Chieh; Hu, Diedi; Hu, Qipeng; Huang, Yanping; Hubacek, Zdenek; Hubaut, Fabrice; Huegging, Fabian; Huffman, Todd Brian; Hughes, Emlyn; Hughes, Gareth; Huhtinen, Mika; Hülsing, Tobias Alexander; Huseynov, Nazim; Huston, Joey; Huth, John; Iacobucci, Giuseppe; Iakovidis, Georgios; Ibragimov, Iskander; Iconomidou-Fayard, Lydia; Ideal, Emma; Idrissi, Zineb; Iengo, Paolo; Igonkina, Olga; Iizawa, Tomoya; Ikegami, Yoichi; Ikeno, Masahiro; Ilchenko, Iurii; Iliadis, Dimitrios; Ilic, Nikolina; Ince, Tayfun; Introzzi, Gianluca; Ioannou, Pavlos; Iodice, Mauro; Iordanidou, Kalliopi; Ippolito, Valerio; Irles Quiles, Adrian; Isaksson, Charlie; Ishino, Masaya; Ishitsuka, Masaki; Ishmukhametov, Renat; Issever, Cigdem; Istin, Serhat; Iturbe Ponce, Julia Mariana; Iuppa, Roberto; Ivarsson, Jenny; Iwanski, Wieslaw; Iwasaki, Hiroyuki; Izen, Joseph; Izzo, Vincenzo; Jabbar, Samina; Jackson, Brett; Jackson, Matthew; Jackson, Paul; Jain, Vivek; Jakobi, Katharina Bianca; Jakobs, Karl; Jakobsen, Sune; Jakoubek, Tomas; Jamin, David Olivier; Jana, Dilip; Jansen, Eric; Jansky, Roland; Janssen, Jens; Janus, Michel; Jarlskog, Göran; Javadov, Namig; Javůrek, Tomáš; Jeanneau, Fabien; Jeanty, 
Laura; Jejelava, Juansher; Jeng, Geng-yuan; Jennens, David; Jenni, Peter; Jentzsch, Jennifer; Jeske, Carl; Jézéquel, Stéphane; Ji, Haoshuang; Jia, Jiangyong; Jiang, Hai; Jiang, Yi; Jiggins, Stephen; Jimenez Pena, Javier; Jin, Shan; Jinaru, Adam; Jinnouchi, Osamu; Johansson, Per; Johns, Kenneth; Johnson, William Joseph; Jon-And, Kerstin; Jones, Graham; Jones, Roger; Jones, Sarah; Jones, Tim; Jongmanns, Jan; Jorge, Pedro; Jovicevic, Jelena; Ju, Xiangyang; Juste Rozas, Aurelio; Köhler, Markus Konrad; Kaci, Mohammed; Kaczmarska, Anna; Kado, Marumi; Kagan, Harris; Kagan, Michael; Kahn, Sebastien Jonathan; Kajomovitz, Enrique; Kalderon, Charles William; Kaluza, Adam; Kama, Sami; Kamenshchikov, Andrey; Kanaya, Naoko; Kaneti, Steven; Kantserov, Vadim; Kanzaki, Junichi; Kaplan, Benjamin; Kaplan, Laser Seymour; Kapliy, Anton; Kar, Deepak; Karakostas, Konstantinos; Karamaoun, Andrew; Karastathis, Nikolaos; Kareem, Mohammad Jawad; Karentzos, Efstathios; Karnevskiy, Mikhail; Karpov, Sergey; Karpova, Zoya; Karthik, Krishnaiyengar; Kartvelishvili, Vakhtang; Karyukhin, Andrey; Kasahara, Kota; Kashif, Lashkar; Kass, Richard; Kastanas, Alex; Kataoka, Yousuke; Kato, Chikuma; Katre, Akshay; Katzy, Judith; Kawagoe, Kiyotomo; Kawamoto, Tatsuo; Kawamura, Gen; Kazama, Shingo; Kazanin, Vassili; Keeler, Richard; Kehoe, Robert; Keller, John; Kempster, Jacob Julian; Kentaro, Kawade; Keoshkerian, Houry; Kepka, Oldrich; Kerševan, Borut Paul; Kersten, Susanne; Keyes, Robert; Khalil-zada, Farkhad; Khandanyan, Hovhannes; Khanov, Alexander; Kharlamov, Alexey; Khoo, Teng Jian; Khovanskiy, Valery; Khramov, Evgeniy; Khubua, Jemal; Kido, Shogo; Kim, Hee Yeun; Kim, Shinhong; Kim, Young-Kee; Kimura, Naoki; Kind, Oliver Maria; King, Barry; King, Matthew; King, Samuel Burton; Kirk, Julie; Kiryunin, Andrey; Kishimoto, Tomoe; Kisielewska, Danuta; Kiss, Florian; Kiuchi, Kenji; Kivernyk, Oleh; Kladiva, Eduard; Klein, Matthew Henry; Klein, Max; Klein, Uta; Kleinknecht, Konrad; Klimek, Pawel; Klimentov, Alexei; Klingenberg, Reiner; Klinger, Joel Alexander; Klioutchnikova, Tatiana; Kluge, Eike-Erik; Kluit, Peter; Kluth, Stefan; Knapik, Joanna; Kneringer, Emmerich; Knoops, Edith; Knue, Andrea; Kobayashi, Aine; Kobayashi, Dai; Kobayashi, Tomio; Kobel, Michael; Kocian, Martin; Kodys, Peter; Koffas, Thomas; Koffeman, Els; Kogan, Lucy Anne; Kohlmann, Simon; Koi, Tatsumi; Kolanoski, Hermann; Kolb, Mathis; Koletsou, Iro; Komar, Aston; Komori, Yuto; Kondo, Takahiko; Kondrashova, Nataliia; Köneke, Karsten; König, Adriaan; Kono, Takanori; Konoplich, Rostislav; Konstantinidis, Nikolaos; Kopeliansky, Revital; Koperny, Stefan; Köpke, Lutz; Kopp, Anna Katharina; Korcyl, Krzysztof; Kordas, Kostantinos; Korn, Andreas; Korol, Aleksandr; Korolkov, Ilya; Korolkova, Elena; Kortner, Oliver; Kortner, Sandra; Kosek, Tomas; Kostyukhin, Vadim; Kotov, Vladislav; Kotwal, Ashutosh; Kourkoumeli-Charalampidi, Athina; Kourkoumelis, Christine; Kouskoura, Vasiliki; Koutsman, Alex; Kowalewski, Robert Victor; Kowalski, Tadeusz; Kozanecki, Witold; Kozhin, Anatoly; Kramarenko, Viktor; Kramberger, Gregor; Krasnopevtsev, Dimitriy; Krasny, Mieczyslaw Witold; Krasznahorkay, Attila; Kraus, Jana; Kravchenko, Anton; Kretz, Moritz; Kretzschmar, Jan; Kreutzfeldt, Kristof; Krieger, Peter; Krizka, Karol; Kroeninger, Kevin; Kroha, Hubert; Kroll, Joe; Kroseberg, Juergen; Krstic, Jelena; Kruchonak, Uladzimir; Krüger, Hans; Krumnack, Nils; Kruse, Amanda; Kruse, Mark; Kruskal, Michael; Kubota, Takashi; Kucuk, Hilal; Kuday, Sinan; Kuechler, Jan Thomas; Kuehn, Susanne; Kugel, Andreas; Kuger, 
Fabian; Kuhl, Andrew; Kuhl, Thorsten; Kukhtin, Victor; Kukla, Romain; Kulchitsky, Yuri; Kuleshov, Sergey; Kuna, Marine; Kunigo, Takuto; Kupco, Alexander; Kurashige, Hisaya; Kurochkin, Yurii; Kus, Vlastimil; Kuwertz, Emma Sian; Kuze, Masahiro; Kvita, Jiri; Kwan, Tony; Kyriazopoulos, Dimitrios; La Rosa, Alessandro; La Rosa Navarro, Jose Luis; La Rotonda, Laura; Lacasta, Carlos; Lacava, Francesco; Lacey, James; Lacker, Heiko; Lacour, Didier; Lacuesta, Vicente Ramón; Ladygin, Evgueni; Lafaye, Remi; Laforge, Bertrand; Lagouri, Theodota; Lai, Stanley; Lambourne, Luke; Lammers, Sabine; Lampen, Caleb; Lampl, Walter; Lançon, Eric; Landgraf, Ulrich; Landon, Murrough; Lang, Valerie Susanne; Lange, J örn Christian; Lankford, Andrew; Lanni, Francesco; Lantzsch, Kerstin; Lanza, Agostino; Laplace, Sandrine; Lapoire, Cecile; Laporte, Jean-Francois; Lari, Tommaso; Lasagni Manghi, Federico; Lassnig, Mario; Laurelli, Paolo; Lavrijsen, Wim; Law, Alexander; Laycock, Paul; Lazovich, Tomo; Le Dortz, Olivier; Le Guirriec, Emmanuel; Le Menedeu, Eve; LeBlanc, Matthew Edgar; LeCompte, Thomas; Ledroit-Guillon, Fabienne Agnes Marie; Lee, Claire Alexandra; Lee, Shih-Chang; Lee, Lawrence; Lefebvre, Guillaume; Lefebvre, Michel; Legger, Federica; Leggett, Charles; Lehan, Allan; Lehmann Miotto, Giovanna; Lei, Xiaowen; Leight, William Axel; Leisos, Antonios; Leister, Andrew Gerard; Leite, Marco Aurelio Lisboa; Leitner, Rupert; Lellouch, Daniel; Lemmer, Boris; Leney, Katharine; Lenz, Tatjana; Lenzi, Bruno; Leone, Robert; Leone, Sandra; Leonidopoulos, Christos; Leontsinis, Stefanos; Leroy, Claude; Lester, Christopher; Levchenko, Mikhail; Levêque, Jessica; Levin, Daniel; Levinson, Lorne; Levy, Mark; Lewis, Adrian; Leyko, Agnieszka; Leyton, Michael; Li, Bing; Li, Haifeng; Li, Ho Ling; Li, Lei; Li, Liang; Li, Shu; Li, Xingguo; Li, Yichen; Liang, Zhijun; Liao, Hongbo; Liberti, Barbara; Liblong, Aaron; Lichard, Peter; Lie, Ki; Liebal, Jessica; Liebig, Wolfgang; Limbach, Christian; Limosani, Antonio; Lin, Simon; Lin, Tai-Hua; Lindquist, Brian Edward; Lipeles, Elliot; Lipniacka, Anna; Lisovyi, Mykhailo; Liss, Tony; Lissauer, David; Lister, Alison; Litke, Alan; Liu, Bo; Liu, Dong; Liu, Hao; Liu, Hongbin; Liu, Jian; Liu, Jianbei; Liu, Kun; Liu, Lulu; Liu, Miaoyuan; Liu, Minghui; Liu, Yanlin; Liu, Yanwen; Livan, Michele; Lleres, Annick; Llorente Merino, Javier; Lloyd, Stephen; Lo Sterzo, Francesco; Lobodzinska, Ewelina; Loch, Peter; Lockman, William; Loebinger, Fred; Loevschall-Jensen, Ask Emil; Loew, Kevin Michael; Loginov, Andrey; Lohse, Thomas; Lohwasser, Kristin; Lokajicek, Milos; Long, Brian Alexander; Long, Jonathan David; Long, Robin Eamonn; Looper, Kristina Anne; Lopes, Lourenco; Lopez Mateos, David; Lopez Paredes, Brais; Lopez Paz, Ivan; Lopez Solis, Alvaro; Lorenz, Jeanette; Lorenzo Martinez, Narei; Losada, Marta; Lösel, Philipp Jonathan; Lou, XinChou; Lounis, Abdenour; Love, Jeremy; Love, Peter; Lu, Haonan; Lu, Nan; Lubatti, Henry; Luci, Claudio; Lucotte, Arnaud; Luedtke, Christian; Luehring, Frederick; Lukas, Wolfgang; Luminari, Lamberto; Lundberg, Olof; Lund-Jensen, Bengt; Lynn, David; Lysak, Roman; Lytken, Else; Ma, Hong; Ma, Lian Liang; Maccarrone, Giovanni; Macchiolo, Anna; Macdonald, Calum Michael; Maček, Boštjan; Machado Miguens, Joana; Madaffari, Daniele; Madar, Romain; Maddocks, Harvey Jonathan; Mader, Wolfgang; Madsen, Alexander; Maeda, Junpei; Maeland, Steffen; Maeno, Tadashi; Maevskiy, Artem; Magradze, Erekle; Mahlstedt, Joern; Maiani, Camilla; Maidantchik, Carmen; Maier, Andreas Alexander; Maier, Thomas; Maio, 
Amélia; Majewski, Stephanie; Makida, Yasuhiro; Makovec, Nikola; Malaescu, Bogdan; Malecki, Pawel; Maleev, Victor; Malek, Fairouz; Mallik, Usha; Malon, David; Malone, Caitlin; Maltezos, Stavros; Malyukov, Sergei; Mamuzic, Judita; Mancini, Giada; Mandelli, Beatrice; Mandelli, Luciano; Mandić, Igor; Maneira, José; Manhaes de Andrade Filho, Luciano; Manjarres Ramos, Joany; Mann, Alexander; Mansoulie, Bruno; Mantifel, Rodger; Mantoani, Matteo; Manzoni, Stefano; Mapelli, Livio; March, Luis; Marchiori, Giovanni; Marcisovsky, Michal; Marjanovic, Marija; Marley, Daniel; Marroquim, Fernando; Marsden, Stephen Philip; Marshall, Zach; Marti, Lukas Fritz; Marti-Garcia, Salvador; Martin, Brian Thomas; Martin, Tim; Martin, Victoria Jane; Martin dit Latour, Bertrand; Martinez, Mario; Martin-Haugh, Stewart; Martoiu, Victor Sorin; Martyniuk, Alex; Marx, Marilyn; Marzano, Francesco; Marzin, Antoine; Masetti, Lucia; Mashimo, Tetsuro; Mashinistov, Ruslan; Masik, Jiri; Maslennikov, Alexey; Massa, Ignazio; Massa, Lorenzo; Mastrandrea, Paolo; Mastroberardino, Anna; Masubuchi, Tatsuya; Mättig, Peter; Mattmann, Johannes; Maurer, Julien; Maxfield, Stephen; Maximov, Dmitriy; Mazini, Rachid; Mazza, Simone Michele; Mc Fadden, Neil Christopher; Mc Goldrick, Garrin; Mc Kee, Shawn Patrick; McCarn, Allison; McCarthy, Robert; McCarthy, Tom; McFarlane, Kenneth; Mcfayden, Josh; Mchedlidze, Gvantsa; McMahon, Steve; McPherson, Robert; Medinnis, Michael; Meehan, Samuel; Mehlhase, Sascha; Mehta, Andrew; Meier, Karlheinz; Meineck, Christian; Meirose, Bernhard; Mellado Garcia, Bruce Rafael; Meloni, Federico; Mengarelli, Alberto; Menke, Sven; Meoni, Evelin; Mercurio, Kevin Michael; Mergelmeyer, Sebastian; Mermod, Philippe; Merola, Leonardo; Meroni, Chiara; Merritt, Frank; Messina, Andrea; Metcalfe, Jessica; Mete, Alaettin Serhan; Meyer, Carsten; Meyer, Christopher; Meyer, Jean-Pierre; Meyer, Jochen; Meyer Zu Theenhausen, Hanno; Middleton, Robin; Miglioranzi, Silvia; Mijović, Liza; Mikenberg, Giora; Mikestikova, Marcela; Mikuž, Marko; Milesi, Marco; Milic, Adriana; Miller, David; Mills, Corrinne; Milov, Alexander; Milstead, David; Minaenko, Andrey; Minami, Yuto; Minashvili, Irakli; Mincer, Allen; Mindur, Bartosz; Mineev, Mikhail; Ming, Yao; Mir, Lluisa-Maria; Mistry, Khilesh; Mitani, Takashi; Mitrevski, Jovan; Mitsou, Vasiliki A; Miucci, Antonio; Miyagawa, Paul; Mjörnmark, Jan-Ulf; Moa, Torbjoern; Mochizuki, Kazuya; Mohapatra, Soumya; Mohr, Wolfgang; Molander, Simon; Moles-Valls, Regina; Monden, Ryutaro; Mondragon, Matthew Craig; Mönig, Klaus; Monk, James; Monnier, Emmanuel; Montalbano, Alyssa; Montejo Berlingen, Javier; Monticelli, Fernando; Monzani, Simone; Moore, Roger; Morange, Nicolas; Moreno, Deywis; Moreno Llácer, María; Morettini, Paolo; Mori, Daniel; Mori, Tatsuya; Morii, Masahiro; Morinaga, Masahiro; Morisbak, Vanja; Moritz, Sebastian; Morley, Anthony Keith; Mornacchi, Giuseppe; Morris, John; Mortensen, Simon Stark; Morvaj, Ljiljana; Mosidze, Maia; Moss, Josh; Motohashi, Kazuki; Mount, Richard; Mountricha, Eleni; Mouraviev, Sergei; Moyse, Edward; Muanza, Steve; Mudd, Richard; Mueller, Felix; Mueller, James; Mueller, Ralph Soeren Peter; Mueller, Thibaut; Muenstermann, Daniel; Mullen, Paul; Mullier, Geoffrey; Munoz Sanchez, Francisca Javiela; Murillo Quijada, Javier Alberto; Murray, Bill; Musheghyan, Haykuhi; Myagkov, Alexey; Myska, Miroslav; Nachman, Benjamin Philip; Nackenhorst, Olaf; Nadal, Jordi; Nagai, Koichi; Nagai, Ryo; Nagai, Yoshikazu; Nagano, Kunihiro; Nagasaka, Yasushi; Nagata, Kazuki; Nagel, Martin; Nagy, Elemer; 
Nairz, Armin Michael; Nakahama, Yu; Nakamura, Koji; Nakamura, Tomoaki; Nakano, Itsuo; Namasivayam, Harisankar; Naranjo Garcia, Roger Felipe; Narayan, Rohin; Narrias Villar, Daniel Isaac; Naryshkin, Iouri; Naumann, Thomas; Navarro, Gabriela; Nayyar, Ruchika; Neal, Homer; Nechaeva, Polina; Neep, Thomas James; Nef, Pascal Daniel; Negri, Andrea; Negrini, Matteo; Nektarijevic, Snezana; Nellist, Clara; Nelson, Andrew; Nemecek, Stanislav; Nemethy, Peter; Nepomuceno, Andre Asevedo; Nessi, Marzio; Neubauer, Mark; Neumann, Manuel; Neves, Ricardo; Nevski, Pavel; Newman, Paul; Nguyen, Duong Hai; Nickerson, Richard; Nicolaidou, Rosy; Nicquevert, Bertrand; Nielsen, Jason; Nikiforov, Andriy; Nikolaenko, Vladimir; Nikolic-Audit, Irena; Nikolopoulos, Konstantinos; Nilsen, Jon Kerr; Nilsson, Paul; Ninomiya, Yoichi; Nisati, Aleandro; Nisius, Richard; Nobe, Takuya; Nodulman, Lawrence; Nomachi, Masaharu; Nomidis, Ioannis; Nooney, Tamsin; Norberg, Scarlet; Nordberg, Markus; Novgorodova, Olga; Nowak, Sebastian; Nozaki, Mitsuaki; Nozka, Libor; Ntekas, Konstantinos; Nurse, Emily; Nuti, Francesco; O'grady, Fionnbarr; O'Neil, Dugan; O'Shea, Val; Oakham, Gerald; Oberlack, Horst; Obermann, Theresa; Ocariz, Jose; Ochi, Atsuhiko; Ochoa, Ines; Ochoa-Ricoux, Juan Pedro; Oda, Susumu; Odaka, Shigeru; Ogren, Harold; Oh, Alexander; Oh, Seog; Ohm, Christian; Ohman, Henrik; Oide, Hideyuki; Okawa, Hideki; Okumura, Yasuyuki; Okuyama, Toyonobu; Olariu, Albert; Oleiro Seabra, Luis Filipe; Olivares Pino, Sebastian Andres; Oliveira Damazio, Denis; Olszewski, Andrzej; Olszowska, Jolanta; Onofre, António; Onogi, Kouta; Onyisi, Peter; Oram, Christopher; Oreglia, Mark; Oren, Yona; Orestano, Domizia; Orlando, Nicola; Orr, Robert; Osculati, Bianca; Ospanov, Rustem; Otero y Garzon, Gustavo; Otono, Hidetoshi; Ouchrif, Mohamed; Ould-Saada, Farid; Ouraou, Ahmimed; Oussoren, Koen Pieter; Ouyang, Qun; Ovcharova, Ana; Owen, Mark; Owen, Rhys Edward; Ozcan, Veysi Erkcan; Ozturk, Nurcan; Pachal, Katherine; Pacheco Pages, Andres; Padilla Aranda, Cristobal; Pagáčová, Martina; Pagan Griso, Simone; Paige, Frank; Pais, Preema; Pajchel, Katarina; Palacino, Gabriel; Palestini, Sandro; Palka, Marek; Pallin, Dominique; Palma, Alberto; Panagiotopoulou, Evgenia; Pandini, Carlo Enrico; Panduro Vazquez, William; Pani, Priscilla; Panitkin, Sergey; Pantea, Dan; Paolozzi, Lorenzo; Papadopoulou, Theodora; Papageorgiou, Konstantinos; Paramonov, Alexander; Paredes Hernandez, Daniela; Parker, Michael Andrew; Parker, Kerry Ann; Parodi, Fabrizio; Parsons, John; Parzefall, Ulrich; Pascuzzi, Vincent; Pasqualucci, Enrico; Passaggio, Stefano; Pastore, Fernanda; Pastore, Francesca; Pásztor, Gabriella; Pataraia, Sophio; Patel, Nikhul; Pater, Joleen; Pauly, Thilo; Pearce, James; Pearson, Benjamin; Pedersen, Lars Egholm; Pedersen, Maiken; Pedraza Lopez, Sebastian; Pedro, Rute; Peleganchuk, Sergey; Pelikan, Daniel; Penc, Ondrej; Peng, Cong; Peng, Haiping; Penning, Bjoern; Penwell, John; Perepelitsa, Dennis; Perez Codina, Estel; Perini, Laura; Pernegger, Heinz; Perrella, Sabrina; Peschke, Richard; Peshekhonov, Vladimir; Peters, Krisztian; Peters, Yvonne; Petersen, Brian; Petersen, Troels; Petit, Elisabeth; Petridis, Andreas; Petridou, Chariclia; Petroff, Pierre; Petrolo, Emilio; Petrucci, Fabrizio; Pettersson, Nora Emilia; Peyaud, Alan; Pezoa, Raquel; Phillips, Peter William; Piacquadio, Giacinto; Pianori, Elisabetta; Picazio, Attilio; Piccaro, Elisa; Piccinini, Maurizio; Pickering, Mark Andrew; Piegaia, Ricardo; Pilcher, James; Pilkington, Andrew; Pin, Arnaud Willy J; Pina, João 
Antonio; Pinamonti, Michele; Pinfold, James; Pingel, Almut; Pires, Sylvestre; Pirumov, Hayk; Pitt, Michael; Pizio, Caterina; Plazak, Lukas; Pleier, Marc-Andre; Pleskot, Vojtech; Plotnikova, Elena; Plucinski, Pawel; Pluth, Daniel; Poettgen, Ruth; Poggioli, Luc; Pohl, David-leon; Polesello, Giacomo; Poley, Anne-luise; Policicchio, Antonio; Polifka, Richard; Polini, Alessandro; Pollard, Christopher Samuel; Polychronakos, Venetios; Pommès, Kathy; Pontecorvo, Ludovico; Pope, Bernard; Popeneciu, Gabriel Alexandru; Popovic, Dragan; Poppleton, Alan; Pospisil, Stanislav; Potamianos, Karolos; Potrap, Igor; Potter, Christina; Potter, Christopher; Poulard, Gilbert; Poveda, Joaquin; Pozdnyakov, Valery; Pozo Astigarraga, Mikel Eukeni; Pralavorio, Pascal; Pranko, Aliaksandr; Prell, Soeren; Price, Darren; Price, Lawrence; Primavera, Margherita; Prince, Sebastien; Proissl, Manuel; Prokofiev, Kirill; Prokoshin, Fedor; Protopapadaki, Eftychia-sofia; Protopopescu, Serban; Proudfoot, James; Przybycien, Mariusz; Puddu, Daniele; Puldon, David; Purohit, Milind; Puzo, Patrick; Qian, Jianming; Qin, Gang; Qin, Yang; Quadt, Arnulf; Quarrie, David; Quayle, William; Queitsch-Maitland, Michaela; Quilty, Donnchadha; Raddum, Silje; Radeka, Veljko; Radescu, Voica; Radhakrishnan, Sooraj Krishnan; Radloff, Peter; Rados, Pere; Ragusa, Francesco; Rahal, Ghita; Rajagopalan, Srinivasan; Rammensee, Michael; Rangel-Smith, Camila; Rauscher, Felix; Rave, Stefan; Ravenscroft, Thomas; Raymond, Michel; Read, Alexander Lincoln; Readioff, Nathan Peter; Rebuzzi, Daniela; Redelbach, Andreas; Redlinger, George; Reece, Ryan; Reeves, Kendall; Rehnisch, Laura; Reichert, Joseph; Reisin, Hernan; Rembser, Christoph; Ren, Huan; Rescigno, Marco; Resconi, Silvia; Rezanova, Olga; Reznicek, Pavel; Rezvani, Reyhaneh; Richter, Robert; Richter, Stefan; Richter-Was, Elzbieta; Ricken, Oliver; Ridel, Melissa; Rieck, Patrick; Riegel, Christian Johann; Rieger, Julia; Rifki, Othmane; Rijssenbeek, Michael; Rimoldi, Adele; Rinaldi, Lorenzo; Ristić, Branislav; Ritsch, Elmar; Riu, Imma; Rizatdinova, Flera; Rizvi, Eram; Robertson, Steven; Robichaud-Veronneau, Andree; Robinson, Dave; Robinson, James; Robson, Aidan; Roda, Chiara; Rodina, Yulia; Rodriguez Perez, Andrea; Roe, Shaun; Rogan, Christopher Sean; Røhne, Ole; Romaniouk, Anatoli; Romano, Marino; Romano Saez, Silvestre Marino; Romero Adam, Elena; Rompotis, Nikolaos; Ronzani, Manfredi; Roos, Lydia; Ros, Eduardo; Rosati, Stefano; Rosbach, Kilian; Rose, Peyton; Rosenthal, Oliver; Rossetti, Valerio; Rossi, Elvira; Rossi, Leonardo Paolo; Rosten, Jonatan; Rosten, Rachel; Rotaru, Marina; Roth, Itamar; Rothberg, Joseph; Rousseau, David; Royon, Christophe; Rozanov, Alexandre; Rozen, Yoram; Ruan, Xifeng; Rubbo, Francesco; Rubinskiy, Igor; Rud, Viacheslav; Rudolph, Matthew Scott; Rühr, Frederik; Ruiz-Martinez, Aranzazu; Rurikova, Zuzana; Rusakovich, Nikolai; Ruschke, Alexander; Russell, Heather; Rutherfoord, John; Ruthmann, Nils; Ryabov, Yury; Rybar, Martin; Rybkin, Grigori; Ryder, Nick; Ryzhov, Andrey; Saavedra, Aldo; Sabato, Gabriele; Sacerdoti, Sabrina; Sadrozinski, Hartmut; Sadykov, Renat; Safai Tehrani, Francesco; Saha, Puja; Sahinsoy, Merve; Saimpert, Matthias; Saito, Tomoyuki; Sakamoto, Hiroshi; Sakurai, Yuki; Salamanna, Giuseppe; Salamon, Andrea; Salazar Loyola, Javier Esteban; Salek, David; Sales De Bruin, Pedro Henrique; Salihagic, Denis; Salnikov, Andrei; Salt, José; Salvatore, Daniela; Salvatore, Pasquale Fabrizio; Salvucci, Antonio; Salzburger, Andreas; Sammel, Dirk; Sampsonidis, Dimitrios; Sanchez, Arturo; 
Sánchez, Javier; Sanchez Martinez, Victoria; Sandaker, Heidi; Sandbach, Ruth Laura; Sander, Heinz Georg; Sanders, Michiel; Sandhoff, Marisa; Sandoval, Carlos; Sandstroem, Rikard; Sankey, Dave; Sannino, Mario; Sansoni, Andrea; Santoni, Claudio; Santonico, Rinaldo; Santos, Helena; Santoyo Castillo, Itzebelt; Sapp, Kevin; Sapronov, Andrey; Saraiva, João; Sarrazin, Bjorn; Sasaki, Osamu; Sasaki, Yuichi; Sato, Koji; Sauvage, Gilles; Sauvan, Emmanuel; Savage, Graham; Savard, Pierre; Sawyer, Craig; Sawyer, Lee; Saxon, James; Sbarra, Carla; Sbrizzi, Antonio; Scanlon, Tim; Scannicchio, Diana; Scarcella, Mark; Scarfone, Valerio; Schaarschmidt, Jana; Schacht, Peter; Schaefer, Douglas; Schaefer, Ralph; Schaeffer, Jan; Schaepe, Steffen; Schaetzel, Sebastian; Schäfer, Uli; Schaffer, Arthur; Schaile, Dorothee; Schamberger, R Dean; Scharf, Veit; Schegelsky, Valery; Scheirich, Daniel; Schernau, Michael; Schiavi, Carlo; Schillo, Christian; Schioppa, Marco; Schlenker, Stefan; Schmieden, Kristof; Schmitt, Christian; Schmitt, Sebastian; Schmitt, Stefan; Schmitz, Simon; Schneider, Basil; Schnellbach, Yan Jie; Schnoor, Ulrike; Schoeffel, Laurent; Schoening, Andre; Schoenrock, Bradley Daniel; Schopf, Elisabeth; Schorlemmer, Andre Lukas; Schott, Matthias; Schouten, Doug; Schovancova, Jaroslava; Schramm, Steven; Schreyer, Manuel; Schuh, Natascha; Schultens, Martin Johannes; Schultz-Coulon, Hans-Christian; Schulz, Holger; Schumacher, Markus; Schumm, Bruce; Schune, Philippe; Schwanenberger, Christian; Schwartzman, Ariel; Schwarz, Thomas Andrew; Schwegler, Philipp; Schweiger, Hansdieter; Schwemling, Philippe; Schwienhorst, Reinhard; Schwindling, Jerome; Schwindt, Thomas; Sciolla, Gabriella; Scuri, Fabrizio; Scutti, Federico; Searcy, Jacob; Seema, Pienpen; Seidel, Sally; Seiden, Abraham; Seifert, Frank; Seixas, José; Sekhniaidze, Givi; Sekhon, Karishma; Sekula, Stephen; Seliverstov, Dmitry; Semprini-Cesari, Nicola; Serfon, Cedric; Serin, Laurent; Serkin, Leonid; Sessa, Marco; Seuster, Rolf; Severini, Horst; Sfiligoj, Tina; Sforza, Federico; Sfyrla, Anna; Shabalina, Elizaveta; Shaikh, Nabila Wahab; Shan, Lianyou; Shang, Ruo-yu; Shank, James; Shapiro, Marjorie; Shatalov, Pavel; Shaw, Kate; Shaw, Savanna Marie; Shcherbakova, Anna; Shehu, Ciwake Yusufu; Sherwood, Peter; Shi, Liaoshan; Shimizu, Shima; Shimmin, Chase Owen; Shimojima, Makoto; Shiyakova, Mariya; Shmeleva, Alevtina; Shoaleh Saadi, Diane; Shochet, Mel; Shojaii, Seyedruhollah; Shrestha, Suyog; Shulga, Evgeny; Shupe, Michael; Sicho, Petr; Sidebo, Per Edvin; Sidiropoulou, Ourania; Sidorov, Dmitri; Sidoti, Antonio; Siegert, Frank; Sijacki, Djordje; Silva, José; Silverstein, Samuel; Simak, Vladislav; Simard, Olivier; Simic, Ljiljana; Simion, Stefan; Simioni, Eduard; Simmons, Brinick; Simon, Dorian; Simon, Manuel; Simoniello, Rosa; Sinervo, Pekka; Sinev, Nikolai; Sioli, Maximiliano; Siragusa, Giovanni; Sivoklokov, Serguei; Sjölin, Jörgen; Sjursen, Therese; Skinner, Malcolm Bruce; Skottowe, Hugh Philip; Skubic, Patrick; Slater, Mark; Slavicek, Tomas; Slawinska, Magdalena; Sliwa, Krzysztof; Smakhtin, Vladimir; Smart, Ben; Smestad, Lillian; Smirnov, Sergei; Smirnov, Yury; Smirnova, Lidia; Smirnova, Oxana; Smith, Matthew; Smith, Russell; Smizanska, Maria; Smolek, Karel; Snesarev, Andrei; Snidero, Giacomo; Snyder, Scott; Sobie, Randall; Socher, Felix; Soffer, Abner; Soh, Dart-yin; Sokhrannyi, Grygorii; Solans Sanchez, Carlos; Solar, Michael; Soldatov, Evgeny; Soldevila, Urmila; Solodkov, Alexander; Soloshenko, Alexei; Solovyanov, Oleg; Solovyev, Victor; Sommer, Philip; 
Song, Hong Ye; Soni, Nitesh; Sood, Alexander; Sopczak, Andre; Sopko, Vit; Sorin, Veronica; Sosa, David; Sotiropoulou, Calliope Louisa; Soualah, Rachik; Soukharev, Andrey; South, David; Sowden, Benjamin; Spagnolo, Stefania; Spalla, Margherita; Spangenberg, Martin; Spanò, Francesco; Sperlich, Dennis; Spettel, Fabian; Spighi, Roberto; Spigo, Giancarlo; Spiller, Laurence Anthony; Spousta, Martin; St Denis, Richard Dante; Stabile, Alberto; Stahlman, Jonathan; Stamen, Rainer; Stamm, Soren; Stanecka, Ewa; Stanek, Robert; Stanescu, Cristian; Stanescu-Bellu, Madalina; Stanitzki, Marcel Michael; Stapnes, Steinar; Starchenko, Evgeny; Stark, Giordon; Stark, Jan; Staroba, Pavel; Starovoitov, Pavel; Stärz, Steffen; Staszewski, Rafal; Steinberg, Peter; Stelzer, Bernd; Stelzer, Harald Joerg; Stelzer-Chilton, Oliver; Stenzel, Hasko; Stewart, Graeme; Stillings, Jan Andre; Stockton, Mark; Stoebe, Michael; Stoicea, Gabriel; Stolte, Philipp; Stonjek, Stefan; Stradling, Alden; Straessner, Arno; Stramaglia, Maria Elena; Strandberg, Jonas; Strandberg, Sara; Strandlie, Are; Strauss, Michael; Strizenec, Pavol; Ströhmer, Raimund; Strom, David; Stroynowski, Ryszard; Strubig, Antonia; Stucci, Stefania Antonia; Stugu, Bjarne; Styles, Nicholas Adam; Su, Dong; Su, Jun; Subramaniam, Rajivalochan; Suchek, Stanislav; Sugaya, Yorihito; Suk, Michal; Sulin, Vladimir; Sultansoy, Saleh; Sumida, Toshi; Sun, Siyuan; Sun, Xiaohu; Sundermann, Jan Erik; Suruliz, Kerim; Susinno, Giancarlo; Sutton, Mark; Suzuki, Shota; Svatos, Michal; Swiatlowski, Maximilian; Sykora, Ivan; Sykora, Tomas; Ta, Duc; Taccini, Cecilia; Tackmann, Kerstin; Taenzer, Joe; Taffard, Anyes; Tafirout, Reda; Taiblum, Nimrod; Takai, Helio; Takashima, Ryuichi; Takeda, Hiroshi; Takeshita, Tohru; Takubo, Yosuke; Talby, Mossadek; Talyshev, Alexey; Tam, Jason; Tan, Kong Guan; Tanaka, Junichi; Tanaka, Reisaburo; Tanaka, Shuji; Tannenwald, Benjamin Bordy; Tapia Araya, Sebastian; Tapprogge, Stefan; Tarem, Shlomit; Tartarelli, Giuseppe Francesco; Tas, Petr; Tasevsky, Marek; Tashiro, Takuya; Tassi, Enrico; Tavares Delgado, Ademar; Tayalati, Yahya; Taylor, Aaron; Taylor, Geoffrey; Taylor, Pierre Thor Elliot; Taylor, Wendy; Teischinger, Florian Alfred; Teixeira-Dias, Pedro; Temming, Kim Katrin; Temple, Darren; Ten Kate, Herman; Teng, Ping-Kun; Teoh, Jia Jian; Tepel, Fabian-Phillipp; Terada, Susumu; Terashi, Koji; Terron, Juan; Terzo, Stefano; Testa, Marianna; Teuscher, Richard; Theveneaux-Pelzer, Timothée; Thomas, Juergen; Thomas-Wilsker, Joshuha; Thompson, Emily; Thompson, Paul; Thompson, Ray; Thompson, Stan; Thomsen, Lotte Ansgaard; Thomson, Evelyn; Thomson, Mark; Tibbetts, Mark James; Ticse Torres, Royer Edson; Tikhomirov, Vladimir; Tikhonov, Yury; Timoshenko, Sergey; Tiouchichine, Elodie; Tipton, Paul; Tisserant, Sylvain; Todome, Kazuki; Todorov, Theodore; Todorova-Nova, Sharka; Tojo, Junji; Tokár, Stanislav; Tokushuku, Katsuo; Tolley, Emma; Tomlinson, Lee; Tomoto, Makoto; Tompkins, Lauren; Toms, Konstantin; Tong, Baojia(Tony); Torrence, Eric; Torres, Heberth; Torró Pastor, Emma; Toth, Jozsef; Touchard, Francois; Tovey, Daniel; Trefzger, Thomas; Tricoli, Alessandro; Trigger, Isabel Marian; Trincaz-Duvoid, Sophie; Tripiana, Martin; Trischuk, William; Trocmé, Benjamin; Trofymov, Artur; Troncon, Clara; Trottier-McDonald, Michel; Trovatelli, Monica; Truong, Loan; Trzebinski, Maciej; Trzupek, Adam; Tseng, Jeffrey; Tsiareshka, Pavel; Tsipolitis, Georgios; Tsirintanis, Nikolaos; Tsiskaridze, Shota; Tsiskaridze, Vakhtang; Tskhadadze, Edisher; Tsui, Ka Ming; Tsukerman, Ilya; Tsulaia, 
Vakhtang; Tsuno, Soshi; Tsybychev, Dmitri; Tudorache, Alexandra; Tudorache, Valentina; Tuna, Alexander Naip; Tupputi, Salvatore; Turchikhin, Semen; Turecek, Daniel; Turgeman, Daniel; Turra, Ruggero; Turvey, Andrew John; Tuts, Michael; Tylmad, Maja; Tyndel, Mike; Ueda, Ikuo; Ueno, Ryuichi; Ughetto, Michael; Ukegawa, Fumihiko; Unal, Guillaume; Undrus, Alexander; Unel, Gokhan; Ungaro, Francesca; Unno, Yoshinobu; Unverdorben, Christopher; Urban, Jozef; Urquijo, Phillip; Urrejola, Pedro; Usai, Giulio; Usanova, Anna; Vacavant, Laurent; Vacek, Vaclav; Vachon, Brigitte; Valderanis, Chrysostomos; Valencic, Nika; Valentinetti, Sara; Valero, Alberto; Valery, Loic; Valkar, Stefan; Vallecorsa, Sofia; Valls Ferrer, Juan Antonio; Van Den Wollenberg, Wouter; Van Der Deijl, Pieter; van der Geer, Rogier; van der Graaf, Harry; van Eldik, Niels; van Gemmeren, Peter; Van Nieuwkoop, Jacobus; van Vulpen, Ivo; van Woerden, Marius Cornelis; Vanadia, Marco; Vandelli, Wainer; Vanguri, Rami; Vaniachine, Alexandre; Vardanyan, Gagik; Vari, Riccardo; Varnes, Erich; Varol, Tulin; Varouchas, Dimitris; Vartapetian, Armen; Varvell, Kevin; Vazeille, Francois; Vazquez Schroeder, Tamara; Veatch, Jason; Veloce, Laurelle Maria; Veloso, Filipe; Veneziano, Stefano; Ventura, Andrea; Venturi, Manuela; Venturi, Nicola; Venturini, Alessio; Vercesi, Valerio; Verducci, Monica; Verkerke, Wouter; Vermeulen, Jos; Vest, Anja; Vetterli, Michel; Viazlo, Oleksandr; Vichou, Irene; Vickey, Trevor; Vickey Boeriu, Oana Elena; Viehhauser, Georg; Viel, Simon; Vigne, Ralph; Villa, Mauro; Villaplana Perez, Miguel; Vilucchi, Elisabetta; Vincter, Manuella; Vinogradov, Vladimir; Vivarelli, Iacopo; Vlachos, Sotirios; Vladoiu, Dan; Vlasak, Michal; Vogel, Marcelo; Vokac, Petr; Volpi, Guido; Volpi, Matteo; von der Schmitt, Hans; von Toerne, Eckhard; Vorobel, Vit; Vorobev, Konstantin; Vos, Marcel; Voss, Rudiger; Vossebeld, Joost; Vranjes, Nenad; Vranjes Milosavljevic, Marija; Vrba, Vaclav; Vreeswijk, Marcel; Vuillermet, Raphael; Vukotic, Ilija; Vykydal, Zdenek; Wagner, Peter; Wagner, Wolfgang; Wahlberg, Hernan; Wahrmund, Sebastian; Wakabayashi, Jun; Walder, James; Walker, Rodney; Walkowiak, Wolfgang; Wallangen, Veronica; Wang, Chao; Wang, Chao; Wang, Fuquan; Wang, Haichen; Wang, Hulin; Wang, Jike; Wang, Jin; Wang, Kuhan; Wang, Rui; Wang, Song-Ming; Wang, Tan; Wang, Tingting; Wang, Xiaoxiao; Wanotayaroj, Chaowaroj; Warburton, Andreas; Ward, Patricia; Wardrope, David Robert; Washbrook, Andrew; Watkins, Peter; Watson, Alan; Watson, Ian; Watson, Miriam; Watts, Gordon; Watts, Stephen; Waugh, Ben; Webb, Samuel; Weber, Michele; Weber, Stefan Wolf; Webster, Jordan S; Weidberg, Anthony; Weinert, Benjamin; Weingarten, Jens; Weiser, Christian; Weits, Hartger; Wells, Phillippa; Wenaus, Torre; Wengler, Thorsten; Wenig, Siegfried; Wermes, Norbert; Werner, Matthias; Werner, Per; Wessels, Martin; Wetter, Jeffrey; Whalen, Kathleen; Wharton, Andrew Mark; White, Andrew; White, Martin; White, Ryan; White, Sebastian; Whiteson, Daniel; Wickens, Fred; Wiedenmann, Werner; Wielers, Monika; Wienemann, Peter; Wiglesworth, Craig; Wiik-Fuchs, Liv Antje Mari; Wildauer, Andreas; Wilkens, Henric George; Williams, Hugh; Williams, Sarah; Willis, Christopher; Willocq, Stephane; Wilson, John; Wingerter-Seez, Isabelle; Winklmeier, Frank; Winter, Benedict Tobias; Wittgen, Matthias; Wittkowski, Josephine; Wollstadt, Simon Jakob; Wolter, Marcin Wladyslaw; Wolters, Helmut; Wosiek, Barbara; Wotschack, Jorg; Woudstra, Martin; Wozniak, Krzysztof; Wu, Mengqing; Wu, Miles; Wu, Sau Lan; Wu, Xin; Wu, 
Yusheng; Wyatt, Terry Richard; Wynne, Benjamin; Xella, Stefania; Xu, Da; Xu, Lailin; Yabsley, Bruce; Yacoob, Sahal; Yakabe, Ryota; Yamaguchi, Daiki; Yamaguchi, Yohei; Yamamoto, Akira; Yamamoto, Shimpei; Yamanaka, Takashi; Yamauchi, Katsuya; Yamazaki, Yuji; Yan, Zhen; Yang, Haijun; Yang, Hongtao; Yang, Yi; Yang, Zongchang; Yao, Weiming; Yap, Yee Chinn; Yasu, Yoshiji; Yatsenko, Elena; Yau Wong, Kaven Henry; Ye, Jingbo; Ye, Shuwei; Yeletskikh, Ivan; Yen, Andy L; Yildirim, Eda; Yorita, Kohei; Yoshida, Rikutaro; Yoshihara, Keisuke; Young, Charles; Young, Christopher John; Youssef, Saul; Yu, David Ren-Hwa; Yu, Jaehoon; Yu, Jiaming; Yu, Jie; Yuan, Li; Yuen, Stephanie P; Yusuff, Imran; Zabinski, Bartlomiej; Zaidan, Remi; Zaitsev, Alexander; Zakharchuk, Nataliia; Zalieckas, Justas; Zaman, Aungshuman; Zambito, Stefano; Zanello, Lucia; Zanzi, Daniele; Zeitnitz, Christian; Zeman, Martin; Zemla, Andrzej; Zeng, Jian Cong; Zeng, Qi; Zengel, Keith; Zenin, Oleg; Ženiš, Tibor; Zerwas, Dirk; Zhang, Dongliang; Zhang, Fangzhou; Zhang, Guangyi; Zhang, Huijun; Zhang, Jinlong; Zhang, Lei; Zhang, Rui; Zhang, Ruiqi; Zhang, Xueyao; Zhang, Zhiqing; Zhao, Xiandong; Zhao, Yongke; Zhao, Zhengguo; Zhemchugov, Alexey; Zhong, Jiahang; Zhou, Bing; Zhou, Chen; Zhou, Lei; Zhou, Li; Zhou, Mingliang; Zhou, Ning; Zhu, Cheng Guang; Zhu, Hongbo; Zhu, Junjie; Zhu, Yingchun; Zhuang, Xuai; Zhukov, Konstantin; Zibell, Andre; Zieminska, Daria; Zimine, Nikolai; Zimmermann, Christoph; Zimmermann, Stephanie; Zinonos, Zinonas; Zinser, Markus; Ziolkowski, Michael; Živković, Lidija; Zobernig, Georg; Zoccoli, Antonio; zur Nedden, Martin; Zurzolo, Giovanni; Zwalinski, Lukasz

    2017-04-13

    The reconstruction and calibration algorithms used to calculate missing transverse momentum ($E_{\rm T}^{\rm miss}$) with the ATLAS detector exploit energy deposits in the calorimeter and tracks reconstructed in the inner detector as well as the muon spectrometer. Various strategies are used to suppress effects arising from additional proton--proton interactions, called pileup, concurrent with the hard-scatter processes. Tracking information is used to distinguish contributions from the pileup interactions using their vertex separation along the beam axis. The performance of the $E_{\rm T}^{\rm miss}$ reconstruction algorithms, especially with respect to the amount of pileup, is evaluated using data collected in proton--proton collisions at a centre-of-mass energy of 8 TeV during 2012, and results are shown for a data sample corresponding to an integrated luminosity of 20.3 fb$^{-1}$. The results of simulation modelling of $E_{\rm T}^{\rm miss}$ in events containing a $Z$ boson decaying to two charged leptons ...
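
    As a concrete illustration of the quantity just described, the following minimal Python sketch computes missing transverse momentum as the negative vector sum of calibrated hard-object transverse momenta plus a track-based soft term; the object lists, the vertex-matching cut dz_max and all numbers are illustrative assumptions, not the ATLAS implementation.

        import math

        def met(hard_objects, soft_tracks, z_pv, dz_max=2.0):
            """hard_objects: list of (pt, phi); soft_tracks: list of (pt, phi, z0)."""
            px = py = 0.0
            for pt, phi in hard_objects:
                px -= pt * math.cos(phi)
                py -= pt * math.sin(phi)
            # Soft term: keep only tracks compatible with the hard-scatter vertex,
            # suppressing contributions from pileup interactions via their vertex
            # separation along the beam axis, as described above.
            for pt, phi, z0 in soft_tracks:
                if abs(z0 - z_pv) < dz_max:
                    px -= pt * math.cos(phi)
                    py -= pt * math.sin(phi)
            return math.hypot(px, py), math.atan2(py, px)

        # Toy event: two hard objects plus one hard-scatter and one pileup track.
        met_value, met_phi = met([(45.0, 0.3), (38.0, 2.9)],
                                 [(1.2, 1.0, 0.1), (0.8, -2.0, 35.0)], z_pv=0.0)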

  1. Reconstruction of core axial power shapes using the alternating conditional expectation algorithm

    International Nuclear Information System (INIS)

    Lee, Eun Ki; Kim, Yong Hee; Cha, Kune Ho; Park, Moon Ghu

    1999-01-01

    We have introduced the alternating conditional expectation (ACE) algorithm to reconstruct 20-node axial core power shapes from five-level in-core detector powers. The core design code, Reactor Operation and Control Simulation (ROCS), calculates 3-dimensional power distributions for various core states, and the reference core-averaged axial power shapes and the corresponding simulated detector powers are used to synthesize the axial power shape. By using the ACE algorithm, the optimal relationship between a dependent variable, the plane power, and the independent variables, the five detector powers, is determined without any preprocessing. A total of ∼3490 data sets per cycle of YongGwang Nuclear (YGN) power plant units 3 and 4 is used for the regression. A continuous analytic function corresponding to each optimal transformation is then obtained by a simple regression model. The reconstructed axial power shapes of ∼21,200 cases are compared to the original ROCS axial power shapes. Also, to test the validity and accuracy of the new method, its performance is compared with that of the Fourier fitting method (FFM), a typical deterministic approach. For a total of 21,204 data cases, the averages of the root mean square (rms) error, axial peak error (ΔF_z), and axial shape index error (ΔASI) of the new method are 0.81%, 0.51% and 0.00204, while those of FFM are 2.29%, 2.37% and 0.00264, respectively. We also evaluated a wide range of axial power profiles from xenon oscillations. The results show that the newly developed method is far superior to FFM; its average rms and axial peak errors are only ∼35% and ∼20% of those of FFM, respectively
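
    The ACE idea can be sketched in a few lines: alternately replace the response transformation theta(y) and the predictor transformations phi_j(x_j) by smoothed conditional expectations until the fit stops improving. Below is a hedged Python sketch in which binned conditional means stand in for a proper smoother; the function names, binning and iteration counts are illustrative, not taken from the paper.

        import numpy as np

        def binned_cond_mean(x, target, bins=20):
            # Crude smoother: conditional mean of `target` within quantile bins of x.
            edges = np.quantile(x, np.linspace(0, 1, bins + 1)[1:-1])
            idx = np.digitize(x, edges)
            out = np.empty_like(target, dtype=float)
            for b in np.unique(idx):
                out[idx == b] = target[idx == b].mean()
            return out

        def ace(X, y, iters=50):
            # Alternate between updating the predictor transforms phi_j (backfitting
            # on the current residual) and the response transform theta, which is
            # re-standardized on each pass as in the ACE formulation.
            n, p = X.shape
            theta = (y - y.mean()) / y.std()
            phi = np.zeros((n, p))
            for _ in range(iters):
                for j in range(p):
                    resid = theta - phi.sum(axis=1) + phi[:, j]
                    phi[:, j] = binned_cond_mean(X[:, j], resid)
                theta = binned_cond_mean(y, phi.sum(axis=1))
                theta = (theta - theta.mean()) / theta.std()
            return theta, phi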

  2. Variability and accuracy of coronary CT angiography including use of iterative reconstruction algorithms for plaque burden assessment as compared with intravascular ultrasound - an ex vivo study

    Energy Technology Data Exchange (ETDEWEB)

    Stolzmann, Paul [Massachusetts General Hospital and Harvard Medical School, Cardiac MR PET CT Program, Boston, MA (United States); University Hospital Zurich, Institute of Diagnostic and Interventional Radiology, Zurich (Switzerland); Schlett, Christopher L.; Maurovich-Horvat, Pal; Scheffel, Hans; Engel, Leif-Christopher; Karolyi, Mihaly; Hoffmann, Udo [Massachusetts General Hospital and Harvard Medical School, Cardiac MR PET CT Program, Boston, MA (United States); Maehara, Akiko; Ma, Shixin; Mintz, Gary S. [Columbia University Medical Center, Cardiovascular Research Foundation, New York, NY (United States)

    2012-10-15

    To systematically assess inter-technique and inter-/intra-reader variability of coronary CT angiography (CTA) to measure plaque burden compared with intravascular ultrasound (IVUS) and to determine whether iterative reconstruction algorithms affect variability. IVUS and CTA data were acquired from nine human coronary arteries ex vivo. CT images were reconstructed using filtered back projection (FBPR) and iterative reconstruction algorithms: adaptive-statistical (ASIR) and model-based (MBIR). After co-registration of 284 cross-sections between IVUS and CTA, two readers manually delineated the cross-sectional plaque area in all images presented in random order. Average plaque burden by IVUS was 63.7 ± 10.7% and correlated significantly with all CTA measurements (r = 0.45–0.52; P < 0.001), while CTA overestimated the burden by 10 ± 10%. There were no significant differences among FBPR, ASIR and MBIR (P > 0.05). Increased overestimation was associated with smaller plaques, eccentricity and calcification (P < 0.001). Reproducibility of plaque burden by CTA and IVUS datasets was excellent, with a low mean intra-/inter-reader variability of <1%/<4% for CTA and <0.5%/<1% for IVUS, respectively (P < 0.05), with no significant difference between CT reconstruction algorithms (P > 0.05). In ex vivo coronary arteries, plaque burden by coronary CTA had extremely low inter-/intra-reader variability and correlated significantly with IVUS measurements. Accuracy as well as reader reliability were independent of the CT image reconstruction algorithm. (orig.)
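
    For reference, plaque burden in this context is the plaque (wall) area expressed as a percentage of the total vessel cross-sectional area; a one-function Python sketch follows, where the example areas are invented for illustration.

        def plaque_burden(vessel_area_mm2, lumen_area_mm2):
            # Plaque (wall) area as a percentage of the total vessel cross-section.
            return 100.0 * (vessel_area_mm2 - lumen_area_mm2) / vessel_area_mm2

        print(plaque_burden(15.2, 5.5))  # ~63.8%, of the order of the mean IVUS value above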

  3. Effects of acquisition time and reconstruction algorithm on image quality, quantitative parameters, and clinical interpretation of myocardial perfusion imaging

    DEFF Research Database (Denmark)

    Enevoldsen, Lotte H; Menashi, Changez A K; Andersen, Ulrik B

    2013-01-01

    … half-time (HT) protocols and Evolution for Cardiac Software. METHODS: We studied 45 consecutive, non-selected patients referred for a clinically indicated routine 2-day stress/rest (99m)Tc-Sestamibi myocardial perfusion SPECT. All patients underwent an FT and an HT scan. Both FT and HT scans were processed … -RR) and for quantitative analysis (FT-FBP, HT-FBP, and HT-RR). The datasets were analyzed using commercially available QGS/QPS software and read by two observers evaluating image quality and clinical interpretation. Image quality was assessed on a 10-cm visual analog scale score. RESULTS: HT imaging was associated … CONCLUSION: Use of RR reconstruction algorithms compensates for the loss of image quality associated with reduced scan time. Both HT acquisition and the RR reconstruction algorithm had significant effects on motion and perfusion parameters obtained with standard software, but these effects were relatively small …

  4. Reconstruction of convex bodies from surface tensors

    DEFF Research Database (Denmark)

    Kousholt, Astrid; Kiderlen, Markus

    … that are translates of each other. An algorithm for reconstructing an unknown convex body in R^2 from its surface tensors up to a certain rank is presented. Using the reconstruction algorithm, the shape of an unknown convex body can be approximated when only a finite number s of surface tensors are available. The output of the reconstruction algorithm is a polytope P, where the surface tensors of P and K are identical up to rank s. We establish a stability result, based on a generalization of Wirtinger's inequality, showing that for large s two convex bodies are close in shape when they have identical surface tensors up to rank s. This is used to establish consistency of the developed reconstruction algorithm.
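
    For orientation, the surface tensors referred to here are the tensor-valued moments of the surface area measure of the body on the unit sphere; up to a normalization constant (conventions vary, so the constant below should be treated as an assumption), the rank-s surface tensor can be written in LaTeX as:

        % Rank-s surface tensor of a convex body K in R^d: c_s is a normalization
        % constant and u^s denotes the s-fold symmetric tensor power of u.
        \Phi^{0,s}(K) \;=\; c_s \int_{S^{d-1}} u^{s} \, S_{d-1}(K, \mathrm{d}u)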

  5. Tau reconstruction and identification algorithm

    Indian Academy of Sciences (India)

    CMS has developed sophisticated tau identification algorithms for hadronic tau decay modes. The production of tau leptons decaying to hadrons is studied at 7 TeV centre-of-mass energy with 2011 collision data collected by the CMS detector, and this sample has been used to measure the performance of the tau identification algorithms by ...

  6. Assessing image quality and dose reduction of a new x-ray computed tomography iterative reconstruction algorithm using model observers

    International Nuclear Information System (INIS)

    Tseng, Hsin-Wu; Kupinski, Matthew A.; Fan, Jiahua; Sainath, Paavana; Hsieh, Jiang

    2014-01-01

    Purpose: A number of different techniques have been developed to reduce radiation dose in x-ray computed tomography (CT) imaging. In this paper, the authors compare task-based measures of image quality for CT images reconstructed by two algorithms: conventional filtered back projection (FBP) and a new iterative reconstruction algorithm (IR). Methods: To assess image quality, the authors used the performance of a channelized Hotelling observer acting on reconstructed image slices. The selected channels are dense difference-of-Gaussian (DDOG) channels. A body phantom and a head phantom were imaged 50 times at different dose levels to obtain the data needed to assess image quality. The phantoms consisted of uniform backgrounds with low-contrast signals embedded at various locations. The tasks the observer model performed included (1) detection of a signal of known location and shape, and (2) detection and localization of a signal of known shape. The employed DDOG channels are based on the response of the human visual system. Performance was assessed using the areas under ROC curves and areas under localization ROC curves. Results: For the signal-known-exactly (SKE) and location-unknown/signal-shape-known tasks with circular signals of different sizes and contrasts, the authors' task-based measures showed that image quality equivalent to FBP can be achieved at lower dose levels using the IR algorithm. For the SKE case, the dose reduction range is 50%–67% (head phantom) and 68%–82% (body phantom); for the location-unknown/signal-shape-known tasks, it is 67%–75% for the head phantom and 67%–77% for the body phantom. These results suggest that IR images at lower dose settings can reach the same image quality as full-dose conventional FBP images. Conclusions: The work presented provides an objective way to quantitatively assess the image quality of a newly introduced CT IR algorithm. The performance of the ...
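
    The channelized Hotelling observer at the core of this study can be sketched compactly: channelize each image, build the Hotelling template from training statistics, and score detection performance as the area under the ROC curve. In the hedged Python sketch below the channel matrix and data are random stand-ins; a real study would substitute the DDOG channel profiles and reconstructed CT slices.

        import numpy as np

        rng = np.random.default_rng(0)
        npix, nchan, ntrain = 64 * 64, 10, 200
        U = rng.normal(size=(npix, nchan))                  # stand-in channel profiles

        signal = np.zeros(npix); signal[2000:2010] = 0.5    # toy low-contrast signal
        g_absent = rng.normal(size=(ntrain, npix))          # signal-absent images
        g_present = g_absent + signal                       # signal-present images

        v0, v1 = g_absent @ U, g_present @ U                # channelized data
        S = 0.5 * (np.cov(v0.T) + np.cov(v1.T))             # pooled channel covariance
        w = np.linalg.solve(S, v1.mean(0) - v0.mean(0))     # Hotelling template

        t0, t1 = v0 @ w, v1 @ w                             # test statistics
        auc = (t1[:, None] > t0[None, :]).mean()            # Wilcoxon estimate of AUC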

  7. Final Technical Report: Sparse Grid Scenario Generation and Interior Algorithms for Stochastic Optimization in a Parallel Computing Environment

    Energy Technology Data Exchange (ETDEWEB)

    Mehrotra, Sanjay [Northwestern Univ., Evanston, IL (United States)

    2016-09-07

    The support from this grant resulted in seven published papers and a technical report. Two papers are published in SIAM J. on Optimization [87, 88]; two papers are published in IEEE Transactions on Power Systems [77, 78]; one paper is published in Smart Grid [79]; one paper is published in Computational Optimization and Applications [44] and one in INFORMS J. on Computing [67]. The works in [44, 67, 87, 88] were funded primarily by this DOE grant. The applied papers in [77, 78, 79] were also supported through a subcontract from the Argonne National Lab. We start by presenting our main research results on the scenario generation problem in Sections 1–2. We present our algorithmic results on interior point methods for convex optimization problems in Section 3. We describe a new ‘central’ cutting surface algorithm developed for solving large scale convex programming problems (as is the case with our proposed research) with a semi-infinite number of constraints in Section 4. In Sections 5–6 we present our work on two application problems of interest to DOE.

  8. The Retina Algorithm

    CERN Multimedia

    CERN. Geneva; PUNZI, Giovanni

    2015-01-01

    Charged particle reconstruction is one of the most demanding computational tasks found in HEP, and it becomes increasingly important to perform it in real time. We envision that HEP would greatly benefit from achieving a long-term goal of making track reconstruction happen transparently as part of the detector readout ("detector-embedded tracking"). We describe here a track-reconstruction approach based on a massively parallel pattern-recognition algorithm, inspired by studies of the processing of visual images by the brain as it happens in nature ('RETINA algorithm'). It turns out that high-quality tracking in large HEP detectors is possible with very small latencies, when this algorithm is implemented in specialized processors, based on current state-of-the-art, high-speed/high-bandwidth digital devices.

  9. Complex amplitude reconstruction by iterative amplitude-phase retrieval algorithm with reference

    Science.gov (United States)

    Shen, Cheng; Guo, Cheng; Tan, Jiubin; Liu, Shutian; Liu, Zhengjun

    2018-06-01

    Multi-image iterative phase retrieval methods have been successfully applied in plenty of research fields due to their simple but efficient implementation. However, there is a mismatch between the measurement of the first long imaging distance and the sequential interval. In this paper, an amplitude-phase retrieval algorithm with reference is put forward without additional measurements or a priori knowledge. It avoids measuring the first imaging distance. With a designed update formula, it significantly raises the convergence speed and the reconstruction fidelity, especially in phase retrieval. Its superiority over the original amplitude-phase retrieval (APR) method is validated by numerical analysis and experiments. Furthermore, it provides a conceptual design of a compact holographic image sensor, which can achieve numerical refocusing easily.
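
    For readers unfamiliar with this family of methods, the sketch below shows the generic multi-plane amplitude-replacement loop with angular-spectrum propagation on which APR-type algorithms are built. It is a minimal sketch under assumed physical parameters; the paper's reference-assisted update formula is not reproduced here.

```python
# Generic multi-plane amplitude-phase retrieval sketch (plain amplitude
# replacement, not the authors' reference-assisted update). Wavelength,
# pixel pitch, and plane positions z_list are assumed inputs.
import numpy as np

def angular_spectrum(field, wavelength, pitch, z):
    """Propagate a complex field a distance z in free space."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=pitch)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    # Transfer function; evanescent components are suppressed.
    H = np.exp(2j * np.pi * z / wavelength * np.sqrt(np.maximum(arg, 0))) * (arg > 0)
    return np.fft.ifft2(np.fft.fft2(field) * H)

def multi_plane_apr(amplitudes, z_list, wavelength, pitch, n_iter=50):
    """Recover the complex field at plane 0 from intensity-only measurements."""
    field = amplitudes[0].astype(complex)
    for _ in range(n_iter):
        for k in range(1, len(z_list)):
            field = angular_spectrum(field, wavelength, pitch,
                                     z_list[k] - z_list[k - 1])
            field = amplitudes[k] * np.exp(1j * np.angle(field))  # amplitude constraint
        field = angular_spectrum(field, wavelength, pitch, z_list[0] - z_list[-1])
        field = amplitudes[0] * np.exp(1j * np.angle(field))
    return field
```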

  10. A platform for Image Reconstruction in X-ray Imaging: Medical Applications using CBCT and DTS algorithms

    Directory of Open Access Journals (Sweden)

    Zacharias Kamarianakis

    2014-07-01

    This paper presents the architecture of a software platform, implemented in C++, for the testing and evaluation of reconstruction algorithms in X-ray imaging. The fundamental elements of the platform are classes, tied together in a logical hierarchy. Real-world objects such as an X-ray source or a flat detector can be defined and implemented as instances of corresponding classes. Various operations (e.g. 3D transformations, loading, saving, filtering of images, creation of planar or curved objects of various dimensions) have been incorporated in the software tool as class methods as well. The user can easily set up any arrangement of the imaging-chain objects in 3D space and experiment with many different trajectories and configurations. Selected 3D volume reconstructions using simulated data acquired in specific scanning trajectories are used as a demonstration of the tool. The platform is considered a basic tool for future investigations of new reconstruction methods in combination with various scanning configurations.

  11. Parallel Algorithm for Reconstruction of TAC Images

    International Nuclear Information System (INIS)

    Vidal Gimeno, V.

    2012-01-01

    The algebraic reconstruction methods are based on solving a system of linear equations. In a previous study, the PETSc library, a scientific computing toolkit, was used and shown to facilitate and enable the optimal use of a computer system in the image reconstruction process.

  12. Algorithms of CT value correction for reconstructing a radiotherapy simulation image through axial CT images

    International Nuclear Information System (INIS)

    Ogino, Takashi; Egawa, Sunao

    1991-01-01

    New algorithms of CT value correction for reconstructing a radiotherapy simulation image from axial CT images were developed. One, designated the plane weighting method, corrects the CT value in proportion to the position of the beam element passing through the voxel. The other, designated the solid weighting method, corrects the CT value in proportion to the length of the beam element passing through the voxel and the volume of the voxel. Phantom experiments showed fair spatial resolution in the transverse direction. In the longitudinal direction, however, spatial resolution finer than the slice thickness could not be obtained. Contrast resolution was equivalent for both methods. In patient studies, the reconstructed radiotherapy simulation image was similar in visually perceived density resolution to a simulation film taken by an X-ray simulator. (author)

  13. Image quality in thoracic 4D cone-beam CT: A sensitivity analysis of respiratory signal, binning method, reconstruction algorithm, and projection angular spacing

    International Nuclear Information System (INIS)

    Shieh, Chun-Chien; Kipritidis, John; O’Brien, Ricky T.; Keall, Paul J.; Kuncic, Zdenka

    2014-01-01

    Purpose: Respiratory signal, binning method, and reconstruction algorithm are three major controllable factors affecting image quality in thoracic 4D cone-beam CT (4D-CBCT), which is widely used in image guided radiotherapy (IGRT). Previous studies have investigated each of these factors individually, but no integrated sensitivity analysis has been performed. In addition, projection angular spacing is also a key factor in reconstruction, but how it affects image quality is not obvious. An investigation of the impacts of these four factors on image quality can help determine the most effective strategy in improving 4D-CBCT for IGRT. Methods: Fourteen 4D-CBCT patient projection datasets with various respiratory motion features were reconstructed with the following controllable factors: (i) respiratory signal (real-time position management, projection image intensity analysis, or fiducial marker tracking), (ii) binning method (phase, displacement, or equal-projection-density displacement binning), and (iii) reconstruction algorithm [Feldkamp–Davis–Kress (FDK), McKinnon–Bates (MKB), or adaptive-steepest-descent projection-onto-convex-sets (ASD-POCS)]. The image quality was quantified using signal-to-noise ratio (SNR), contrast-to-noise ratio, and edge-response width in order to assess noise/streaking and blur. The SNR values were also analyzed with respect to the maximum, mean, and root-mean-squared-error (RMSE) projection angular spacing to investigate how projection angular spacing affects image quality. Results: The choice of respiratory signals was found to have no significant impact on image quality. Displacement-based binning was found to be less prone to motion artifacts compared to phase binning in more than half of the cases, but was shown to suffer from large interbin image quality variation and large projection angular gaps. Both MKB and ASD-POCS resulted in noticeably improved image quality almost 100% of the time relative to FDK. In addition, SNR
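
    The three image-quality metrics used above are simple to compute once regions of interest and an edge profile have been chosen. A minimal sketch follows; the ROI selection and the 25%-75% edge-width convention are our assumptions, not necessarily the authors' definitions.

```python
# Sketches of the three reported metrics; ROI coordinates are hypothetical.
import numpy as np

def snr(roi):
    """Signal-to-noise ratio of a nominally uniform region of interest."""
    return roi.mean() / roi.std()

def cnr(roi_signal, roi_background):
    """Contrast-to-noise ratio between two regions of interest."""
    return abs(roi_signal.mean() - roi_background.mean()) / roi_background.std()

def edge_response_width(profile, lo=0.25, hi=0.75):
    """Pixels over which a normalized, monotonic edge profile rises lo -> hi."""
    p = (profile - profile.min()) / (profile.max() - profile.min())
    return abs(int(np.argmin(np.abs(p - hi))) - int(np.argmin(np.abs(p - lo))))
```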

  14. Image quality in thoracic 4D cone-beam CT: A sensitivity analysis of respiratory signal, binning method, reconstruction algorithm, and projection angular spacing

    Energy Technology Data Exchange (ETDEWEB)

    Shieh, Chun-Chien [Radiation Physics Laboratory, Sydney Medical School, University of Sydney, NSW 2006, Australia and Institute of Medical Physics, School of Physics, University of Sydney, NSW 2006 (Australia); Kipritidis, John; O’Brien, Ricky T.; Keall, Paul J., E-mail: paul.keall@sydney.edu.au [Radiation Physics Laboratory, Sydney Medical School, University of Sydney, NSW 2006 (Australia); Kuncic, Zdenka [Institute of Medical Physics, School of Physics, University of Sydney, NSW 2006 (Australia)

    2014-04-15

    Purpose: Respiratory signal, binning method, and reconstruction algorithm are three major controllable factors affecting image quality in thoracic 4D cone-beam CT (4D-CBCT), which is widely used in image guided radiotherapy (IGRT). Previous studies have investigated each of these factors individually, but no integrated sensitivity analysis has been performed. In addition, projection angular spacing is also a key factor in reconstruction, but how it affects image quality is not obvious. An investigation of the impacts of these four factors on image quality can help determine the most effective strategy in improving 4D-CBCT for IGRT. Methods: Fourteen 4D-CBCT patient projection datasets with various respiratory motion features were reconstructed with the following controllable factors: (i) respiratory signal (real-time position management, projection image intensity analysis, or fiducial marker tracking), (ii) binning method (phase, displacement, or equal-projection-density displacement binning), and (iii) reconstruction algorithm [Feldkamp–Davis–Kress (FDK), McKinnon–Bates (MKB), or adaptive-steepest-descent projection-onto-convex-sets (ASD-POCS)]. The image quality was quantified using signal-to-noise ratio (SNR), contrast-to-noise ratio, and edge-response width in order to assess noise/streaking and blur. The SNR values were also analyzed with respect to the maximum, mean, and root-mean-squared-error (RMSE) projection angular spacing to investigate how projection angular spacing affects image quality. Results: The choice of respiratory signals was found to have no significant impact on image quality. Displacement-based binning was found to be less prone to motion artifacts compared to phase binning in more than half of the cases, but was shown to suffer from large interbin image quality variation and large projection angular gaps. Both MKB and ASD-POCS resulted in noticeably improved image quality almost 100% of the time relative to FDK. In addition, SNR

  15. WE-D-18A-04: How Iterative Reconstruction Algorithms Affect the MTFs of Variable-Contrast Targets in CT Images

    Energy Technology Data Exchange (ETDEWEB)

    Dodge, C.T.; Rong, J. [MD Anderson Cancer Center, Houston, TX (United States); Dodge, C.W. [Methodist Hospital, Houston, TX (United States)

    2014-06-15

    Purpose: To determine how filtered back-projection (FBP), adaptive statistical (ASiR), and model-based (MBIR) iterative reconstruction algorithms affect the measured modulation transfer functions (MTFs) of variable-contrast targets over a wide range of clinically applicable dose levels. Methods: The Catphan 600 CTP401 module, surrounded by an oval, fat-equivalent ring to mimic patient size/shape, was scanned on a GE HD750 CT scanner at 1, 2, 3, 6, 12 and 24 mGy CTDIvol levels with typical patient scan parameters: 120kVp, 0.8s, 40mm beam width, large SFOV, 2.5mm thickness, 0.984 pitch. The images were reconstructed using GE's Standard kernel with FBP; 20%, 40% and 70% ASiR; and MBIR. A task-based MTF (MTFtask) was computed for six cylindrical targets: 2 low-contrast (Polystyrene, LDPE), 2 medium-contrast (Delrin, PMP), and 2 high-contrast (Teflon, air). MTFtask was used to compare the performance of reconstruction algorithms with decreasing CTDIvol from 24mGy, which is currently used in the clinic. Results: For the air target and 75% dose savings (6 mGy), MBIR MTFtask at 5 lp/cm measured 0.24, compared to 0.20 for 70% ASiR and 0.11 for FBP. Overall, for both high-contrast targets, MBIR MTFtask improved with increasing CTDIvol and consistently outperformed ASiR and FBP near the system's Nyquist frequency. Conversely, for Polystyrene at 6 mGy, MBIR (0.10) and 70% ASiR (0.07) MTFtask was lower than for FBP (0.18). For medium- and low-contrast targets, FBP remains the best overall algorithm for improved resolution at low CTDIvol (1–6 mGy) levels, whereas MBIR is comparable at higher dose levels (12–24 mGy). Conclusion: MBIR improved the MTF of small, high-contrast targets compared to FBP and ASiR at doses of 50%–12.5% of those currently used in the clinic. However, for imaging low- and medium-contrast targets, FBP performed the best across all dose levels. For assessing MTF from different reconstruction algorithms, task-based MTF measurements are necessary.
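
    One plausible estimator of a task-based MTF for a cylindrical target is the ratio of radially averaged Fourier magnitudes of the measured (background-subtracted) disk image and the ideal disk. The sketch below follows that general definition of a task transfer function; it is not claimed to be the authors' exact estimator.

```python
# Hedged sketch of a task-based MTF from a disk target.
import numpy as np

def radial_average(img):
    """Average a centered 2-D array over integer radii from the center."""
    n = img.shape[0]
    y, x = np.mgrid[:n, :n] - n // 2
    r = np.hypot(x, y).astype(int)
    total = np.bincount(r.ravel(), weights=img.ravel())
    count = np.bincount(r.ravel())
    return total / np.maximum(count, 1)

def task_mtf(measured, ideal):
    """measured, ideal: background-subtracted disk images of equal size."""
    F_meas = np.abs(np.fft.fftshift(np.fft.fft2(measured)))
    F_ideal = np.abs(np.fft.fftshift(np.fft.fft2(ideal)))
    mtf = radial_average(F_meas) / np.maximum(radial_average(F_ideal), 1e-12)
    return mtf / mtf[0]   # normalize to unity at zero frequency
```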

  16. WE-D-18A-04: How Iterative Reconstruction Algorithms Affect the MTFs of Variable-Contrast Targets in CT Images

    International Nuclear Information System (INIS)

    Dodge, C.T.; Rong, J.; Dodge, C.W.

    2014-01-01

    Purpose: To determine how filtered back-projection (FBP), adaptive statistical (ASiR), and model-based (MBIR) iterative reconstruction algorithms affect the measured modulation transfer functions (MTFs) of variable-contrast targets over a wide range of clinically applicable dose levels. Methods: The Catphan 600 CTP401 module, surrounded by an oval, fat-equivalent ring to mimic patient size/shape, was scanned on a GE HD750 CT scanner at 1, 2, 3, 6, 12 and 24 mGy CTDIvol levels with typical patient scan parameters: 120kVp, 0.8s, 40mm beam width, large SFOV, 2.5mm thickness, 0.984 pitch. The images were reconstructed using GE's Standard kernel with FBP; 20%, 40% and 70% ASiR; and MBIR. A task-based MTF (MTFtask) was computed for six cylindrical targets: 2 low-contrast (Polystyrene, LDPE), 2 medium-contrast (Delrin, PMP), and 2 high-contrast (Teflon, air). MTFtask was used to compare the performance of reconstruction algorithms with decreasing CTDIvol from 24mGy, which is currently used in the clinic. Results: For the air target and 75% dose savings (6 mGy), MBIR MTFtask at 5 lp/cm measured 0.24, compared to 0.20 for 70% ASiR and 0.11 for FBP. Overall, for both high-contrast targets, MBIR MTFtask improved with increasing CTDIvol and consistently outperformed ASiR and FBP near the system's Nyquist frequency. Conversely, for Polystyrene at 6 mGy, MBIR (0.10) and 70% ASiR (0.07) MTFtask was lower than for FBP (0.18). For medium- and low-contrast targets, FBP remains the best overall algorithm for improved resolution at low CTDIvol (1–6 mGy) levels, whereas MBIR is comparable at higher dose levels (12–24 mGy). Conclusion: MBIR improved the MTF of small, high-contrast targets compared to FBP and ASiR at doses of 50%–12.5% of those currently used in the clinic. However, for imaging low- and medium-contrast targets, FBP performed the best across all dose levels. For assessing MTF from different reconstruction algorithms, task-based MTF measurements are necessary.

  17. Contrast improvement of continuous wave diffuse optical tomography reconstruction by hybrid approach using least square and genetic algorithm

    Science.gov (United States)

    Patra, Rusha; Dutta, Pranab K.

    2015-07-01

    Reconstruction of the absorption coefficient of tissue with good contrast is of key importance in functional diffuse optical imaging. A hybrid approach using model-based iterative image reconstruction and a genetic algorithm is proposed to enhance the contrast of the reconstructed image. The proposed method yields an observed contrast of 98.4%, mean square error of 0.638×10⁻³, and object centroid error of (0.001 to 0.22) mm. Experimental validation of the proposed method has also been provided with tissue-like phantoms which shows a significant improvement in image quality and thus establishes the potential of the method for functional diffuse optical tomography reconstruction with continuous wave setup. A case study of finger joint imaging is illustrated as well to show the prospect of the proposed method in clinical diagnosis. The method can also be applied to the concentration measurement of a region of interest in a turbid medium.
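
    The hybrid strategy can be caricatured as: seed with a regularized least-squares solution, then let a small genetic algorithm refine it against a data-fidelity fitness. The sketch below makes that loop concrete; the fitness function, population size, and mutation scale are assumptions, and J stands in for a linearized forward model.

```python
# Conceptual hybrid least-squares + genetic-algorithm refinement sketch.
import numpy as np

def hybrid_ls_ga(J, y, lam=1e-3, pop=20, gens=100, sigma=0.01, rng=None):
    rng = rng if rng is not None else np.random.default_rng(0)
    # Step 1: regularized least-squares seed (model-based iterative step).
    x0 = np.linalg.solve(J.T @ J + lam * np.eye(J.shape[1]), J.T @ y)
    # Step 2: GA refinement around the seed.
    population = x0 + sigma * rng.normal(size=(pop, x0.size))
    fitness = lambda x: -np.linalg.norm(J @ x - y)    # data fidelity as fitness
    for _ in range(gens):
        scores = np.array([fitness(x) for x in population])
        parents = population[np.argsort(scores)[-pop // 2:]]          # selection
        children = parents[rng.integers(len(parents), size=pop - len(parents))]
        children = children + sigma * rng.normal(size=children.shape) # mutation
        population = np.vstack([parents, children])
    return population[np.argmax([fitness(x) for x in population])]
```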

  18. Linearized image reconstruction method for ultrasound modulated electrical impedance tomography based on power density distribution

    International Nuclear Information System (INIS)

    Song, Xizi; Xu, Yanbin; Dong, Feng

    2017-01-01

    Electrical resistance tomography (ERT) is a promising measurement technique with important industrial and clinical applications. However, with limited effective measurements, it suffers from poor spatial resolution due to the ill-posedness of the inverse problem. Recently, there has been an increasing research interest in hybrid imaging techniques, utilizing couplings of physical modalities, because these techniques obtain much more effective measurement information and promise high resolution. Ultrasound modulated electrical impedance tomography (UMEIT) is one of the newly developed hybrid imaging techniques, which combines electric and acoustic modalities. A linearized image reconstruction method based on power density is proposed for UMEIT. The interior data, the power density distribution, is adopted to reconstruct the conductivity distribution with the proposed image reconstruction method. At the same time, relating the power density change to the change in conductivity, the Jacobian matrix is employed to linearize the nonlinear problem. The analytic formulation of this Jacobian matrix is derived and its effectiveness is also verified. In addition, different excitation patterns are tested and analyzed, and opposite excitation provides the best performance with the proposed method. Also, multiple power density distributions are combined to implement image reconstruction. Finally, image reconstruction is implemented with the linear back-projection (LBP) algorithm. Compared with ERT, with the proposed image reconstruction method, UMEIT can produce reconstructed images with higher quality and better quantitative evaluation results. (paper)
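
    The linearization described above reduces each update to a regularized linear solve built from the Jacobian. A minimal sketch, with the Jacobian J and the power-density perturbation as placeholders:

```python
# Placeholder Jacobian-based reconstruction: conductivity change from a
# power-density change. J and d_power are assumed, pre-computed inputs.
import numpy as np

def linearized_step(J, d_power, lam=1e-2):
    """One Tikhonov-regularized solve of the linearized problem."""
    return np.linalg.solve(J.T @ J + lam * np.eye(J.shape[1]), J.T @ d_power)

def linear_back_projection(J, d_power):
    """LBP: single-step, column-sum-normalized transpose of the sensitivity matrix."""
    return (J.T @ d_power) / np.maximum(np.abs(J).sum(axis=0), 1e-12)
```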

  19. Compositional-prior-guided image reconstruction algorithm for multi-modality imaging

    Science.gov (United States)

    Fang, Qianqian; Moore, Richard H.; Kopans, Daniel B.; Boas, David A.

    2010-01-01

    The development of effective multi-modality imaging methods typically requires an efficient information fusion model, particularly when combining structural images with a complementary imaging modality that provides functional information. We propose a composition-based image segmentation method for X-ray digital breast tomosynthesis (DBT) and a structural-prior-guided image reconstruction for a combined DBT and diffuse optical tomography (DOT) breast imaging system. Using the 3D DBT images from 31 clinically measured healthy breasts, we create an empirical relationship between the X-ray intensities for adipose and fibroglandular tissue. We use this relationship to then segment another 58 healthy breast DBT images from 29 subjects into compositional maps of different tissue types. For each breast, we build a weighted graph in the compositional space and construct a regularization matrix to incorporate the structural priors into a finite-element-based DOT image reconstruction. Use of the compositional priors enables us to fuse tissue anatomy into optical images with less restriction than when using a binary segmentation. This allows us to recover the image contrast captured by DOT but not by DBT. We show that it is possible to fine-tune the strength of the structural priors by changing a single regularization parameter. By estimating the optical properties for adipose and fibroglandular tissue using the proposed algorithm, we found the results are comparable or superior to those estimated with expert segmentations, but do not involve the time-consuming manual selection of regions-of-interest. PMID:21258460

  20. Ant-Based Phylogenetic Reconstruction (ABPR): A new distance algorithm for phylogenetic estimation based on ant colony optimization

    Directory of Open Access Journals (Sweden)

    Karla Vittori

    2008-12-01

    We propose a new distance algorithm for phylogenetic estimation based on Ant Colony Optimization (ACO), named Ant-Based Phylogenetic Reconstruction (ABPR). ABPR joins two taxa iteratively based on evolutionary distance among sequences, while also accounting for the quality of the phylogenetic tree built according to the total length of the tree. Similar to optimization algorithms for phylogenetic estimation, the algorithm allows exploration of a larger set of nearly optimal solutions. We applied the algorithm to four empirical data sets of mitochondrial DNA ranging from 12 to 186 sequences, and from 898 to 16,608 base pairs, and covering taxonomic levels from populations to orders. We show that ABPR performs better than the commonly used Neighbor-Joining algorithm, except when sequences are too closely related (e.g., population-level sequences). The phylogenetic relationships recovered at and above species level by ABPR agree with conventional views. However, like other algorithms of phylogenetic estimation, the proposed algorithm failed to recover expected relationships when distances are too similar or when rates of evolution are very variable, leading to the problem of long-branch attraction. ABPR, as well as other ACO-based algorithms, is emerging as a fast and accurate alternative method of phylogenetic estimation for large data sets.

  1. Jini service to reconstruct tomographic data

    Science.gov (United States)

    Knoll, Peter; Mirzaei, S.; Koriska, K.; Koehn, H.

    2002-06-01

    A number of imaging systems rely on the reconstruction of a 3-dimensional model from its projections through the process of computed tomography (CT). In medical imaging, for example, magnetic resonance imaging (MRI), positron emission tomography (PET), and single photon emission computed tomography (SPECT) acquire two-dimensional projections of a three-dimensional object. In order to calculate the 3-dimensional representation of the object, i.e. its voxel distribution, several reconstruction algorithms have been developed. Currently, mainly two reconstruction approaches are in use: filtered back projection (FBP) and iterative methods. Although the quality of iteratively reconstructed SPECT slices is better than that of FBP slices, such iterative algorithms are rarely used for clinical routine studies because of their low availability and increased reconstruction time. We used Jini and a self-developed iterative reconstruction algorithm to design and implement a Jini reconstruction service. With this service, the physician selects the patient study from a database and a Jini client automatically discovers the registered Jini reconstruction services in the department's Intranet. After downloading the proxy object of this Jini service, the SPECT acquisition data are reconstructed. The resulting transaxial slices are visualized using a Jini slice viewer, which can be used for various imaging modalities.

  2. Iterative image reconstruction algorithms in coronary CT angiography improve the detection of lipid-core plaque - a comparison with histology

    Energy Technology Data Exchange (ETDEWEB)

    Puchner, Stefan B. [Massachusetts General Hospital, Harvard Medical School, Cardiac MR PET CT Program, Department of Radiology, Boston, MA (United States); Medical University of Vienna, Department of Biomedical Imaging and Image-Guided Therapy, Vienna (Austria); Ferencik, Maros [Massachusetts General Hospital, Harvard Medical School, Cardiac MR PET CT Program, Department of Radiology, Boston, MA (United States); Harvard Medical School, Division of Cardiology, Massachusetts General Hospital, Boston, MA (United States); Maurovich-Horvat, Pal [Massachusetts General Hospital, Harvard Medical School, Cardiac MR PET CT Program, Department of Radiology, Boston, MA (United States); Semmelweis University, MTA-SE Lenduelet Cardiovascular Imaging Research Group, Heart and Vascular Center, Budapest (Hungary); Nakano, Masataka; Otsuka, Fumiyuki; Virmani, Renu [CV Path Institute Inc., Gaithersburg, MD (United States); Kauczor, Hans-Ulrich [University Hospital Heidelberg, Ruprecht-Karls-University of Heidelberg, Department of Diagnostic and Interventional Radiology, Heidelberg (Germany); Hoffmann, Udo [Massachusetts General Hospital, Harvard Medical School, Cardiac MR PET CT Program, Department of Radiology, Boston, MA (United States); Schlett, Christopher L. [Massachusetts General Hospital, Harvard Medical School, Cardiac MR PET CT Program, Department of Radiology, Boston, MA (United States); University Hospital Heidelberg, Ruprecht-Karls-University of Heidelberg, Department of Diagnostic and Interventional Radiology, Heidelberg (Germany)

    2015-01-15

    To evaluate whether iterative reconstruction algorithms improve the diagnostic accuracy of coronary CT angiography (CCTA) for detection of lipid-core plaque (LCP) compared to histology. CCTA and histological data were acquired from three ex vivo hearts. CCTA images were reconstructed using filtered back projection (FBP), adaptive-statistical (ASIR) and model-based (MBIR) iterative algorithms. Vessel cross-sections were co-registered between FBP/ASIR/MBIR and histology. Plaque area <60 HU was semiautomatically quantified in CCTA. LCP was defined by histology as fibroatheroma with a large lipid/necrotic core. Area under the curve (AUC) was derived from logistic regression analysis as a measure of diagnostic accuracy. Overall, 173 CCTA triplets (FBP/ASIR/MBIR) were co-registered with histology. LCP was present in 26 cross-sections. Average measured plaque area <60 HU was significantly larger in LCP compared to non-LCP cross-sections (mm²: 5.78 ± 2.29 vs. 3.39 ± 1.68 FBP; 5.92 ± 1.87 vs. 3.43 ± 1.62 ASIR; 6.40 ± 1.55 vs. 3.49 ± 1.50 MBIR; all p < 0.0001). AUC for detecting LCP was 0.803/0.850/0.903 for FBP/ASIR/MBIR and was significantly higher for MBIR compared to FBP (p = 0.01). MBIR increased sensitivity for detection of LCP by CCTA. Plaque area <60 HU in CCTA was associated with LCP in histology regardless of the reconstruction algorithm. However, MBIR demonstrated higher accuracy for detecting LCP, which may improve vulnerable plaque detection by CCTA. (orig.)

  3. SU-E-I-39: Combining Conventional Tomographic Imaging Strategy and Interior Tomography for Low Dose Dual-Energy CT (DECT)

    Energy Technology Data Exchange (ETDEWEB)

    Xu, Q [School of Electronic and Information Engineering, Xi’an Jiaotong University, Xi’an, Shaanxi 710049 (China); Department of Radiation Oncology, Stanford University, Stanford, CA (United States); Xing, L [Department of Radiation Oncology, Stanford University, Stanford, CA (United States); Xiong, G; Elmore, K; Min, J [Dalio Institute of Cardiovascular Imaging, New York- Presbyterian Hospital and Weill Cornell Medical College, New York, NY (United States)

    2015-06-15

    Purpose: Dual-energy CT (DECT) affords quantitative information of tissue density and provides a new dimension for disease diagnosis and treatment planning. The technique, however, increases the imaging dose because of the doubled scans, and thus hinders its widespread clinical applications. The purpose of this work is to develop a novel hybrid DECT image acquisition and reconstruction strategy, in which one of the energies is dealt with by interior tomography while the other is obtained using a conventional tomography approach. Methods: In the proposed hybrid imaging strategy, the projection data of one of the energies (e.g., high-energy) were acquired and processed in an interior scanning model, whereas the other energy in the conventional tomographic approach. It is known that, if the ROI is piecewise constant or polynomial, the interior ROI can be reconstructed with TV or HOT minimization. Here we extend the TV-based interior reconstruction method to the dual-energy situation. The ROI images so obtained were overlaid in the context of conventional CT of the companion energy. A material-based composition in the ROI was used in the proposed reconstruction framework. Results: In the simulation experiment with a diagnostic DECT geometry and energies, we were able to derive the densities of soft tissues and bones in the ROI with high fidelity. In the experimental CBCT study, both kV and MV data were collected using the on-board kV and MV imaging system. The MV data were truncated only across the ROI. Using the interior tomography reconstruction above, we were able to obtain the same ROI images as those obtained using un-truncated MV data, with known tissue densities. Conclusion: The proposed DECT imaging strategy provides an effective way to extract tissue density information in the ROI in the context of anatomical CT images, with much reduced imaging dose.

  4. Distributed 3-D iterative reconstruction for quantitative SPECT

    International Nuclear Information System (INIS)

    Ju, Z.W.; Frey, E.C.; Tsui, B.M.W.

    1995-01-01

    The authors describe a distributed three dimensional (3-D) iterative reconstruction library for quantitative single-photon emission computed tomography (SPECT). This library includes 3-D projector-backprojector pairs (PBPs) and distributed 3-D iterative reconstruction algorithms. The 3-D PBPs accurately and efficiently model various combinations of the image degrading factors including attenuation, detector response and scatter response. These PBPs were validated by comparing projection data computed using the projectors with that from direct Monte Carlo (MC) simulations. The distributed 3-D iterative algorithms spread the projection-backprojection operations for all the projection angles over a heterogeneous network of single or multi-processor computers to reduce the reconstruction time. Based on a master/slave paradigm, these distributed algorithms provide dynamic load balancing and fault tolerance. The distributed algorithms were verified by comparing images reconstructed using both the distributed and non-distributed algorithms. Computation times for distributed 3-D reconstructions running on up to 4 identical processors were reduced by a factor of approximately 80–90% of the number of processors participating, compared to those for non-distributed 3-D reconstructions running on a single processor. When combined with faster affordable computers, this library provides an efficient means for implementing accurate reconstruction and compensation methods to improve quality and quantitative accuracy in SPECT images.
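
    The projector-backprojector pairs enter such libraries through iterations like MLEM, whose core update is sketched below with a toy dense system matrix; the master/slave distribution across projection angles and the image-degradation modeling are omitted here.

```python
# Core MLEM iteration sketch; A is an assumed dense system matrix.
import numpy as np

def mlem(A, proj, n_iter=20):
    """A: (n_bins, n_voxels) system matrix; proj: measured projection data."""
    x = np.ones(A.shape[1])
    sens = A.sum(axis=0)                       # sensitivity: backprojected ones
    for _ in range(n_iter):
        fwd = A @ x                            # forward projection
        ratio = proj / np.maximum(fwd, 1e-12)  # measured / estimated
        x *= (A.T @ ratio) / np.maximum(sens, 1e-12)
    return x
```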

  5. Infeasible Interior-Point Methods for Linear Optimization Based on Large Neighborhood

    NARCIS (Netherlands)

    Asadi, A.R.; Roos, C.

    2015-01-01

    In this paper, we design a class of infeasible interior-point methods for linear optimization based on large neighborhood. The algorithm is inspired by a full-Newton step infeasible algorithm with a linear convergence rate in problem dimension that was recently proposed by the second author.

  6. Interior point decoding for linear vector channels

    International Nuclear Information System (INIS)

    Wadayama, T

    2008-01-01

    In this paper, a novel decoding algorithm for low-density parity-check (LDPC) codes based on convex optimization is presented. The decoding algorithm, called interior point decoding, is designed for linear vector channels. The linear vector channels include many practically important channels such as inter-symbol interference channels and partial response channels. It is shown that the maximum likelihood decoding (MLD) rule for a linear vector channel can be relaxed to a convex optimization problem, which is called a relaxed MLD problem

  7. Interior point decoding for linear vector channels

    Energy Technology Data Exchange (ETDEWEB)

    Wadayama, T [Nagoya Institute of Technology, Gokiso, Showa-ku, Nagoya, Aichi, 466-8555 (Japan)], E-mail: wadayama@nitech.ac.jp

    2008-01-15

    In this paper, a novel decoding algorithm for low-density parity-check (LDPC) codes based on convex optimization is presented. The decoding algorithm, called interior point decoding, is designed for linear vector channels. The linear vector channels include many practically important channels such as inter-symbol interference channels and partial response channels. It is shown that the maximum likelihood decoding (MLD) rule for a linear vector channel can be relaxed to a convex optimization problem, which is called a relaxed MLD problem.

  8. Detector independent cellular automaton algorithm for track reconstruction

    Energy Technology Data Exchange (ETDEWEB)

    Kisel, Ivan; Kulakov, Igor; Zyzak, Maksym [Goethe Univ. Frankfurt am Main (Germany); Frankfurt Institute for Advanced Studies, Frankfurt am Main (Germany); GSI Helmholtzzentrum fuer Schwerionenforschung GmbH (Germany); Collaboration: CBM-Collaboration

    2013-07-01

    Track reconstruction is one of the most challenging problems of data analysis in modern high energy physics (HEP) experiments, which have to process of the order of 10⁷ events per second with high track multiplicity and density, registered by detectors of different types and, in many cases, located in a non-homogeneous magnetic field. Creation of a reconstruction package common to all experiments is considered important in order to consolidate efforts. The cellular automaton (CA) track reconstruction approach has been used successfully in many HEP experiments. It is very simple, efficient, local and parallel. Meanwhile, it is intrinsically independent of detector geometry and a good candidate for common track reconstruction. The CA implementation for the CBM experiment has been generalized and applied to the ALICE ITS and STAR HFT detectors. Tests with simulated collisions have been performed. The track reconstruction efficiencies are at the level of 95% for the majority of the signal tracks for all detectors.

  9. Tilted cone-beam reconstruction with row-wise fan-to-parallel rebinning

    International Nuclear Information System (INIS)

    Hsieh Jiang; Tang Xiangyang

    2006-01-01

    Reconstruction algorithms for cone-beam CT have been the focus of many studies. Several exact and approximate reconstruction algorithms were proposed for step-and-shoot and helical scanning trajectories to combat cone-beam related artefacts. In this paper, we present a new closed-form cone-beam reconstruction formula for tilted gantry data acquisition. Although several algorithms were proposed in the past to combat errors induced by the gantry tilt, none of the algorithms addresses the scenario in which the cone-beam geometry is first rebinned to a set of parallel beams prior to the filtered backprojection. We show that the image quality advantages of the rebinned parallel-beam reconstruction are significant, which makes the development of such an algorithm necessary. Because of the rebinning process, the reconstruction algorithm becomes more complex and the amount of iso-centre adjustment depends not only on the projection and tilt angles, but also on the reconstructed pixel location. In this paper, we first demonstrate the advantages of the row-wise fan-to-parallel rebinning and derive a closed-form solution for the reconstruction algorithm for the step-and-shoot and constant-pitch helical scans. The proposed algorithm requires the 'warping' of the reconstruction matrix on a view-by-view basis prior to the backprojection step. We further extend the algorithm to the variable-pitch helical scans in which the patient table travels at non-constant speeds. The algorithm was tested extensively on both the 16- and 64-slice CT scanners. The efficacy of the algorithm is clearly demonstrated by multiple experiments
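
    Row-wise fan-to-parallel rebinning rests on the standard fan-beam identities theta = beta + gamma and t = R sin(gamma) (source angle beta, fan angle gamma, source-to-isocenter distance R). A hedged single-row sketch using SciPy interpolation; the uniform, ascending output grids are assumptions.

```python
# Single-row fan-to-parallel rebinning sketch (2-D, no cone angle).
import numpy as np
from scipy.interpolate import RegularGridInterpolator

def fan_to_parallel(fan_sino, betas, gammas, R, n_theta=None, n_t=None):
    """fan_sino[i, j]: sample at source angle betas[i], fan angle gammas[j].

    betas and gammas must be strictly ascending for the interpolator.
    Returns (parallel_sino, thetas, t)."""
    interp = RegularGridInterpolator((betas, gammas), fan_sino,
                                     bounds_error=False, fill_value=0.0)
    thetas = np.linspace(betas.min(), betas.max(), n_theta or len(betas))
    t = np.linspace(R * np.sin(gammas.min()), R * np.sin(gammas.max()),
                    n_t or len(gammas))
    gam = np.arcsin(t / R)                    # fan angle of each parallel channel
    TH, GAM = np.meshgrid(thetas, gam, indexing="ij")
    pts = np.stack([TH - GAM, GAM], axis=-1)  # beta = theta - gamma
    par = interp(pts.reshape(-1, 2)).reshape(TH.shape)
    return par, thetas, t
```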

  10. Reconstruction of convex bodies from surface tensors

    DEFF Research Database (Denmark)

    Kousholt, Astrid; Kiderlen, Markus

    2016-01-01

    We present two algorithms for reconstruction of the shape of convex bodies in the two-dimensional Euclidean space. The first reconstruction algorithm requires knowledge of the exact surface tensors of a convex body up to rank s for some natural number s. When only measurements subject to noise of surface tensors are available for reconstruction, we recommend using certain values of the surface tensors, namely harmonic intrinsic volumes, instead of the surface tensors evaluated at the standard basis. The second algorithm we present is based on harmonic intrinsic volumes and allows for noisy measurements. From a generalized version of Wirtinger's inequality, we derive stability results that are utilized to ensure consistency of both reconstruction procedures. Consistency of the reconstruction procedure based on measurements subject to noise is established under certain assumptions on the noise.

  11. Diagnostic performance of reduced-dose CT with a hybrid iterative reconstruction algorithm for the detection of hypervascular liver lesions: a phantom study

    Energy Technology Data Exchange (ETDEWEB)

    Nakamoto, Atsushi; Tanaka, Yoshikazu; Juri, Hiroshi; Nakai, Go; Narumi, Yoshifumi [Osaka Medical College, Department of Radiology, Takatsuki, Osaka (Japan); Yoshikawa, Shushi [Osaka Medical College Hospital, Central Radiology Department, Takatsuki, Osaka (Japan)

    2017-07-15

    To investigate the diagnostic performance of reduced-dose CT with a hybrid iterative reconstruction (IR) algorithm for the detection of hypervascular liver lesions. Thirty liver phantoms with or without simulated hypervascular lesions were scanned with a 320-slice CT scanner with control-dose (40 mAs) and reduced-dose (30 and 20 mAs) settings. Control-dose images were reconstructed with filtered back projection (FBP), and reduced-dose images were reconstructed with FBP and a hybrid IR algorithm. Objective image noise and the lesion to liver contrast-to-noise ratio (CNR) were evaluated quantitatively. Images were interpreted independently by 2 blinded radiologists, and jackknife alternative free-response receiver-operating characteristic (JAFROC) analysis was performed. Hybrid IR images with reduced-dose settings (both 30 and 20 mAs) yielded significantly lower objective image noise and higher CNR than control-dose FBP images (P < .05). However, hybrid IR images with reduced-dose settings had a lower JAFROC1 figure of merit than control-dose FBP images, although only the difference between the 20 mAs images and control-dose FBP images was significant for both readers (P < .01). An aggressive reduction of the radiation dose would impair the detectability of hypervascular liver lesions, although objective image noise and CNR would be preserved by a hybrid IR algorithm. (orig.)

  12. Resilient algorithms for reconstructing and simulating gappy flow fields in CFD

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Seungjoon; Karniadakis, George Em [Division of Applied Mathematics, Brown University, Providence, RI 02912 (United States); Kevrekidis, Ioannis G, E-mail: george_karniadakis@brown.edu [Department of Chemical and Biological Engineering, Brown University, Providence, RI 02912 (United States); PACM, Princeton University, Princeton, NJ 08544 (United States)

    2015-10-15

    It is anticipated that in future generations of massively parallel computer systems a significant portion of processors may suffer from hardware or software faults rendering large-scale computations useless. In this work we address this problem from the algorithmic side, proposing resilient algorithms that can recover from such faults irrespective of their fault origin. In particular, we set the foundations of a new class of algorithms that will combine numerical approximations with machine learning methods. To this end, we consider three types of fault scenarios: (1) a gappy region but with no previous gaps and no contamination of surrounding simulation data, (2) a space–time gappy region but with full spatiotemporal information and no contamination, and (3) previous gaps with contamination of surrounding data. To recover from such faults we employ different reconstruction and simulation methods, namely the projective integration, the co-Kriging interpolation, and the resimulation method. In order to compare the effectiveness of these methods for the different processor faults and to quantify the error propagation in each case, we perform simulations of two benchmark flows, flow in a cavity and flow past a circular cylinder. In general, the projective integration seems to be the most effective method when the time gaps are small, and the resimulation method is the best when the time gaps are big while the co-Kriging method is independent of time gaps. Furthermore, the projective integration method and the co-Kriging method are found to be good estimation methods for the initial and boundary conditions of the resimulation method in scenario (3). (paper)

  13. Molecular Infectious Disease Epidemiology: Survival Analysis and Algorithms Linking Phylogenies to Transmission Trees

    Science.gov (United States)

    Kenah, Eben; Britton, Tom; Halloran, M. Elizabeth; Longini, Ira M.

    2016-01-01

    Recent work has attempted to use whole-genome sequence data from pathogens to reconstruct the transmission trees linking infectors and infectees in outbreaks. However, transmission trees from one outbreak do not generalize to future outbreaks. Reconstruction of transmission trees is most useful to public health if it leads to generalizable scientific insights about disease transmission. In a survival analysis framework, estimation of transmission parameters is based on sums or averages over the possible transmission trees. A phylogeny can increase the precision of these estimates by providing partial information about who infected whom. The leaves of the phylogeny represent sampled pathogens, which have known hosts. The interior nodes represent common ancestors of sampled pathogens, which have unknown hosts. Starting from assumptions about disease biology and epidemiologic study design, we prove that there is a one-to-one correspondence between the possible assignments of interior node hosts and the transmission trees simultaneously consistent with the phylogeny and the epidemiologic data on person, place, and time. We develop algorithms to enumerate these transmission trees and show these can be used to calculate likelihoods that incorporate both epidemiologic data and a phylogeny. A simulation study confirms that this leads to more efficient estimates of hazard ratios for infectiousness and baseline hazards of infectious contact, and we use these methods to analyze data from a foot-and-mouth disease virus outbreak in the United Kingdom in 2001. These results demonstrate the importance of data on individuals who escape infection, which is often overlooked. The combination of survival analysis and algorithms linking phylogenies to transmission trees is a rigorous but flexible statistical foundation for molecular infectious disease epidemiology. PMID:27070316

  14. Frequency-domain optical tomographic image reconstruction algorithm with the simplified spherical harmonics (SP3) light propagation model.

    Science.gov (United States)

    Kim, Hyun Keol; Montejo, Ludguier D; Jia, Jingfei; Hielscher, Andreas H

    2017-06-01

    We introduce here the finite volume formulation of the frequency-domain simplified spherical harmonics model with n-th order absorption coefficients (FD-SPN) that approximates the frequency-domain equation of radiative transfer (FD-ERT). We then present the FD-SPN based reconstruction algorithm that recovers absorption and scattering coefficients in biological tissue. The FD-SPN model with 3rd order absorption coefficients (i.e., FD-SP3) is used as a forward model to solve the inverse problem. The FD-SP3 is discretized with a node-centered finite volume scheme and solved with a restarted generalized minimum residual (GMRES) algorithm. The absorption and scattering coefficients are retrieved using a limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) algorithm. Finally, the forward and inverse algorithms are evaluated using numerical phantoms with optical properties and size that mimic small-volume tissue such as finger joints and small animals. The forward results show that the FD-SP3 model approximates the FD-ERT (S12) solution with relatively high accuracy; the average errors in the phase (<3.7%) and the amplitude (<7.1%) of the partial current at the boundary are reported. From the inverse results we find that the absorption and scattering coefficient maps are more accurately reconstructed with the SP3 model than with the SP1 model. Therefore, this work shows that the FD-SP3 is an efficient model for optical tomographic imaging of small-volume media with non-diffuse properties, both in terms of computational time and accuracy, as it requires significantly lower CPU time than the FD-ERT (S12) and is also more accurate than the FD-SP1.
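
    The inverse step, recovering coefficients by L-BFGS minimization of a data misfit, can be sketched as follows. The forward model here is a generic differentiable placeholder standing in for the FD-SP3 solver, and the bounds and starting values are assumptions.

```python
# L-BFGS recovery of absorption coefficients against a placeholder forward model.
import numpy as np
from scipy.optimize import minimize

def reconstruct_mua(forward, measured, mua0):
    """forward(mua) -> predicted boundary data (flattened array)."""
    def objective(mua):
        resid = forward(mua) - measured
        return 0.5 * np.sum(np.abs(resid) ** 2)   # least-squares data misfit
    res = minimize(objective, mua0, method="L-BFGS-B",
                   bounds=[(1e-4, 1.0)] * mua0.size)
    return res.x

# Toy linear stand-in for the forward solver:
rng = np.random.default_rng(1)
A = rng.normal(size=(40, 10))
mua_true = np.full(10, 0.05); mua_true[3] = 0.2   # one absorbing inclusion
data = A @ mua_true
mua_rec = reconstruct_mua(lambda m: A @ m, data, np.full(10, 0.05))
```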

  15. Reconstruction of an InAs nanowire using geometric tomography

    DEFF Research Database (Denmark)

    Pennington, Robert S.; König, Stefan; Alpers, Andreas

    Geometric tomography and conventional algebraic tomography algorithms are used to reconstruct cross-sections of an InAs nanowire from a tilt series of experimental annular dark-field images. Both algorithms are also applied to a test object to assess what factors affect the reconstruction quality. When using the present algorithms, geometric tomography is faster, but artifacts in the reconstruction may be difficult to recognize.

  16. Algorithm Development for Multi-Energy SXR based Electron Temperature Profile Reconstruction

    Science.gov (United States)

    Clayton, D. J.; Tritz, K.; Finkenthal, M.; Kumar, D.; Stutman, D.

    2012-10-01

    New techniques utilizing computational tools such as neural networks and genetic algorithms are being developed to infer plasma electron temperature profiles on fast time scales (> 10 kHz) from multi-energy soft-x-ray (ME-SXR) diagnostics. Traditionally, a two-foil SXR technique, using the ratio of filtered continuum emission measured by two SXR detectors, has been employed on fusion devices as an indirect method of measuring electron temperature. However, these measurements can be susceptible to large errors due to uncertainties in time-evolving impurity density profiles, leading to unreliable temperature measurements. To correct this problem, measurements using ME-SXR diagnostics, which use three or more filtered SXR arrays to distinguish line and continuum emission from various impurities, in conjunction with constraints from spectroscopic diagnostics, can be used to account for unknown or time evolving impurity profiles [K. Tritz et al, Bull. Am. Phys. Soc. Vol. 56, No. 12 (2011), PP9.00067]. On NSTX, ME-SXR diagnostics can be used for fast (10-100 kHz) temperature profile measurements, using a Thomson scattering diagnostic (60 Hz) for periodic normalization. The use of more advanced algorithms, such as neural network processing, can decouple the reconstruction of the temperature profile from spectral modeling.

  17. Improvement of Quality of Reconstructed Images in Multi-Frame Fresnel Digital Holography

    International Nuclear Information System (INIS)

    Xiao-Wei, Lu; Jing-Zhen, Li; Hong-Yi, Chen

    2010-01-01

    A modified reconstruction algorithm to improve the quality of reconstructed images in multi-frame Fresnel digital holography is presented. When the reference beams are plane or spherical waves with azimuth encoding, by introducing two spherical wave factors, images can be reconstructed with only a single Fourier transform. In numerical simulation, this algorithm simplifies the reconstruction process and improves the signal-to-noise ratio of the reconstructed images. In single-frame reconstruction experiments, an accurate reconstructed image is obtained with this simplified algorithm.

  18. MR image reconstruction via guided filter.

    Science.gov (United States)

    Huang, Heyan; Yang, Hang; Wang, Kang

    2018-04-01

    Magnetic resonance imaging (MRI) reconstruction from the smallest possible set of Fourier samples has been a difficult problem in the medical imaging field. In this paper, we present a new approach to efficient MRI recovery based on a guided filter. The guided filter is an edge-preserving smoothing operator and has better behavior near edges than the bilateral filter. Our reconstruction method consists of two steps. First, we propose two cost functions which can be computed efficiently, yielding two different images. Second, the guided filter is used with these two images for efficient edge-preserving filtering: one image is used as the guidance image, and the other is used as the filtered input. In our reconstruction algorithm, we can recover more detail by introducing the guided filter. We compare our reconstruction algorithm with some competitive MRI reconstruction techniques in terms of PSNR and visual quality. Simulation results are given to show the performance of our new method.
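
    For reference, the standard gray-scale guided filter (He et al.) that such methods build on fits a local linear model of the input to the guidance image. A minimal sketch with assumed radius and regularization values; this is the generic operator, not the paper's full two-step pipeline.

```python
# Generic gray-scale guided filter sketch (box-filter implementation).
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(I, p, r=4, eps=1e-3):
    """I: guidance image, p: filtering input; both float arrays in [0, 1]."""
    box = lambda x: uniform_filter(x, size=2 * r + 1)   # local window means
    mean_I, mean_p = box(I), box(p)
    cov_Ip = box(I * p) - mean_I * mean_p
    var_I = box(I * I) - mean_I ** 2
    a = cov_Ip / (var_I + eps)          # per-pixel linear coefficient
    b = mean_p - a * mean_I
    return box(a) * I + box(b)          # q = mean(a) * I + mean(b)
```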

  19. Optimization-based reconstruction for reduction of CBCT artifact in IGRT

    Science.gov (United States)

    Xia, Dan; Zhang, Zheng; Paysan, Pascal; Seghers, Dieter; Brehm, Marcus; Munro, Peter; Sidky, Emil Y.; Pelizzari, Charles; Pan, Xiaochuan

    2016-04-01

    Kilo-voltage cone-beam computed tomography (CBCT) plays an important role in image guided radiation therapy (IGRT) by providing 3D spatial information of the tumor potentially useful for optimizing treatment planning. In the current IGRT CBCT system, reconstructed images obtained with analytic algorithms, such as the FDK algorithm and its variants, may contain artifacts. In an attempt to compensate for the artifacts, we investigate optimization-based reconstruction algorithms such as the ASD-POCS algorithm for potentially reducing artifacts in IGRT CBCT images. In this study, using data acquired with a physical phantom and a patient subject, we demonstrate that ASD-POCS reconstruction can significantly reduce artifacts observed in clinical reconstructions. Moreover, patient images reconstructed by use of the ASD-POCS algorithm indicate a soft-tissue contrast level improved over that of the clinical reconstruction. We have also performed reconstructions from sparse-view data, and observe that, for current clinical imaging conditions, ASD-POCS reconstructions from data collected at one half of the current clinical projection views appear to show image quality, in terms of spatial and soft-tissue-contrast resolution, higher than that of the corresponding clinical reconstructions.
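
    A schematic of the ASD-POCS idea, alternating a gradient step on data fidelity (with a positivity projection) against steepest descent on the image total variation, is sketched below with a toy system matrix; the step sizes and iteration counts are illustrative assumptions rather than the published parameter schedule.

```python
# Schematic ASD-POCS-style loop with a toy dense system matrix A.
import numpy as np

def tv_gradient(img, eps=1e-8):
    """Gradient of the (smoothed) total variation of a 2-D image."""
    gx = np.diff(img, axis=1, append=img[:, -1:])
    gy = np.diff(img, axis=0, append=img[-1:, :])
    mag = np.sqrt(gx**2 + gy**2 + eps)
    div_x = np.diff(gx / mag, axis=1, prepend=(gx / mag)[:, :1])
    div_y = np.diff(gy / mag, axis=0, prepend=(gy / mag)[:1, :])
    return -(div_x + div_y)

def asd_pocs(A, proj, shape, n_outer=30, n_tv=10, alpha=0.2):
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the data term
    x = np.zeros(A.shape[1])
    img = x.reshape(shape)
    for _ in range(n_outer):
        x = x + A.T @ (proj - A @ x) / L     # data-fidelity gradient step
        x = np.clip(x, 0, None)              # positivity projection (POCS)
        img = x.reshape(shape)
        for _ in range(n_tv):                # TV steepest descent
            g = tv_gradient(img)
            img = img - alpha * g / max(np.linalg.norm(g), 1e-12)
        x = img.ravel()
    return img
```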

  20. Arterial cross-section reconstruction from bi-plane X-ray shadowgraphs

    International Nuclear Information System (INIS)

    Fenster, P.; Barba, J.; Suardiaz, M.; Wong, K.K.; Herrold, E.M.; Kenet, R.O.; Borer, J.S.

    1987-01-01

    An iterative algorithm was used to reconstruct the cross-section of a crescent-shaped vessel lumen. The accuracy of this method was evaluated by comparing reconstruction error in the presence of Gaussian noise of different magnitudes. The sensitivity of the reconstruction algorithm to initial boundary constraint error was also evaluated. Using this algorithm, the reconstruction error was 1.7% when applied to the crescent-shaped lumen. This error was found to increase significantly with increasing Gaussian noise (error range: 5.75–20.75% for noise variance of 1–8 pixels). Median filtering of the noise-degraded density profiles improved the reconstruction error (error range: 3.25–10.5% for noise variance of 1–8 pixels). The mean reconstruction error was not strongly influenced by initial boundary constraint error. The algorithm provided reasonable reconstruction results when applied to X-ray-image-derived density profiles of known luminal shapes.

  1. Computed Tomography Image Origin Identification Based on Original Sensor Pattern Noise and 3-D Image Reconstruction Algorithm Footprints.

    Science.gov (United States)

    Duan, Yuping; Bouslimi, Dalel; Yang, Guanyu; Shu, Huazhong; Coatrieux, Gouenou

    2017-07-01

    In this paper, we focus on the "blind" identification of the computed tomography (CT) scanner that has produced a CT image. To do so, we propose a set of noise features derived from the image chain acquisition and which can be used as CT-scanner footprint. Basically, we propose two approaches. The first one aims at identifying a CT scanner based on an original sensor pattern noise (OSPN) that is intrinsic to the X-ray detectors. The second one identifies an acquisition system based on the way this noise is modified by its three-dimensional (3-D) image reconstruction algorithm. As these reconstruction algorithms are manufacturer dependent and kept secret, our features are used as input to train a support vector machine (SVM) based classifier to discriminate acquisition systems. Experiments conducted on images issued from 15 different CT-scanner models of 4 distinct manufacturers demonstrate that our system identifies the origin of one CT image with a detection rate of at least 94% and that it achieves better performance than sensor pattern noise (SPN) based strategy proposed for general public camera devices.
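
    The classification stage can be prototyped with scikit-learn as below. The simple high-pass residual statistics are a stand-in for the paper's OSPN and reconstruction-footprint features, and the data are random placeholders rather than CT images.

```python
# SVM classification of scanner origin from crude noise features (sketch).
import numpy as np
from scipy.ndimage import median_filter
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def noise_features(img):
    residual = img - median_filter(img, size=3)   # crude noise estimate
    return [residual.std(),
            np.abs(np.fft.fft2(residual)).mean(),
            np.percentile(residual, 95)]

# X: feature vectors per slice, y: scanner-model labels (placeholders).
rng = np.random.default_rng(0)
X = [noise_features(rng.normal(size=(64, 64))) for _ in range(40)]
y = [i % 4 for i in range(40)]
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf")).fit(X, y)
```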

  2. Dual Super-Systolic Core for Real-Time Reconstructive Algorithms of High-Resolution Radar/SAR Imaging Systems

    Science.gov (United States)

    Atoche, Alejandro Castillo; Castillo, Javier Vázquez

    2012-01-01

    A high-speed dual super-systolic core for reconstructive signal processing (SP) operations consists of a double parallel systolic array (SA) machine in which each processing element of the array is also conceptualized as another SA in a bit-level fashion. In this study, we addressed the design of a high-speed dual super-systolic array (SSA) core for the enhancement/reconstruction of remote sensing (RS) imaging of radar/synthetic aperture radar (SAR) sensor systems. The selected reconstructive SP algorithms are efficiently transformed in their parallel representation and then, they are mapped into an efficient high performance embedded computing (HPEC) architecture in reconfigurable Xilinx field programmable gate array (FPGA) platforms. As an implementation test case, the proposed approach was aggregated in a HW/SW co-design scheme in order to solve the nonlinear ill-posed inverse problem of nonparametric estimation of the power spatial spectrum pattern (SSP) from a remotely sensed scene. We show how such dual SSA core, drastically reduces the computational load of complex RS regularization techniques achieving the required real-time operational mode. PMID:22736964

  3. Linear programming mathematics, theory and algorithms

    CERN Document Server

    1996-01-01

    Linear Programming provides an in-depth look at simplex based as well as the more recent interior point techniques for solving linear programming problems. Starting with a review of the mathematical underpinnings of these approaches, the text provides details of the primal and dual simplex methods with the primal-dual, composite, and steepest edge simplex algorithms. This then is followed by a discussion of interior point techniques, including projective and affine potential reduction, primal and dual affine scaling, and path following algorithms. Also covered is the theory and solution of the linear complementarity problem using both the complementary pivot algorithm and interior point routines. A feature of the book is its early and extensive development and use of duality theory. Audience: The book is written for students in the areas of mathematics, economics, engineering and management science, and professionals who need a sound foundation in the important and dynamic discipline of linear programming.
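
    To make the interior point family concrete, here is a compact primal-dual path-following sketch for the standard-form LP min c'x subject to Ax = b, x >= 0. It uses dense normal equations, a fixed centering parameter, and no presolve, so it is illustrative rather than production code.

```python
# Primal-dual path-following interior point sketch for standard-form LP.
import numpy as np

def ipm_lp(A, b, c, n_iter=50, sigma=0.1, tol=1e-8):
    m, n = A.shape
    x, s, y = np.ones(n), np.ones(n), np.zeros(m)   # strictly interior start
    for _ in range(n_iter):
        mu = x @ s / n                               # duality measure
        if mu < tol:
            break
        r_p = b - A @ x                              # primal residual
        r_d = c - A.T @ y - s                        # dual residual
        r_c = sigma * mu - x * s                     # centering residual
        d = x / s
        M = A @ (d[:, None] * A.T)                   # normal equations A D A^T
        dy = np.linalg.solve(M, r_p + A @ (d * (r_d - r_c / x)))
        dx = d * (A.T @ dy - r_d + r_c / x)
        ds = (r_c - s * dx) / x
        # Damped step keeping x and s strictly positive.
        frac = lambda v, dv: min((v[dv < 0] / -dv[dv < 0]).min(initial=np.inf), 1.0)
        a = 0.99 * min(frac(x, dx), frac(s, ds))
        x, y, s = x + a * dx, y + a * dy, s + a * ds
    return x

# Example: min -x1 - 2*x2 s.t. x1 + x2 + x3 = 1 (x3 slack), x >= 0 -> x2 = 1.
A = np.array([[1.0, 1.0, 1.0]]); b = np.array([1.0]); c = np.array([-1.0, -2.0, 0.0])
print(ipm_lp(A, b, c).round(4))
```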

  4. Evaluation of the image quality in digital breast tomosynthesis (DBT) employed with a compressed-sensing (CS)-based reconstruction algorithm by using the mammographic accreditation phantom

    Energy Technology Data Exchange (ETDEWEB)

    Park, Yeonok; Cho, Heemoon; Je, Uikyu; Cho, Hyosung, E-mail: hscho1@yonsei.ac.kr; Park, Chulkyu; Lim, Hyunwoo; Kim, Kyuseok; Kim, Guna; Park, Soyoung; Woo, Taeho; Choi, Sungil

    2015-12-21

    In this work, we have developed a prototype digital breast tomosynthesis (DBT) system which mainly consists of an x-ray generator (28 kVp, 7 mA·s), a CMOS-type flat-panel detector (70-μm pixel size, 230.5 × 339 mm² active area), and a rotational arm to move the x-ray generator in an arc. We employed a compressed-sensing (CS)-based reconstruction algorithm, rather than a common filtered-backprojection (FBP) one, for more accurate DBT reconstruction. Here CS is a state-of-the-art mathematical framework for solving inverse problems, which exploits the sparsity of the image to achieve substantially high accuracy. We evaluated the reconstruction quality in terms of detectability, the contrast-to-noise ratio (CNR), and the slice-sensitivity profile (SSP) by using the mammographic accreditation phantom (Model 015, CIRS Inc.) and compared it to the FBP-based quality. The CS-based algorithm yielded much better image quality, preserving superior image homogeneity, edge sharpness, and cross-plane resolution, compared to the FBP-based one. - Highlights: • A prototype digital breast tomosynthesis (DBT) system is developed. • A compressed-sensing (CS) based reconstruction framework is employed. • We reconstructed high-quality DBT images by using the proposed reconstruction framework.
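
    The abstract does not spell out the CS solver; one minimal member of this algorithm class is iterative soft-thresholding (ISTA), which alternates a gradient step on the data-fit term with a sparsity-promoting shrinkage. The sketch below runs on a synthetic system and is a stand-in for illustration, not the authors' implementation:

        import numpy as np

        def ista(A, y, lam=0.05, n_iter=1000):
            """Solve min_x 0.5*||Ax - y||^2 + lam*||x||_1 by iterative
            soft-thresholding (a basic compressed-sensing reconstruction)."""
            L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of gradient
            x = np.zeros(A.shape[1])
            for _ in range(n_iter):
                g = x - A.T @ (A @ x - y) / L        # gradient step on data term
                x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # shrink
            return x

        rng = np.random.default_rng(1)
        A = rng.normal(size=(80, 200))               # underdetermined system
        x_true = np.zeros(200)
        x_true[[5, 50, 120]] = [1.0, -2.0, 1.5]
        x_hat = ista(A, A @ x_true)
        print(np.flatnonzero(np.abs(x_hat) > 0.5))   # should sit near the true support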

  5. Activity concentration measurements using a conjugate gradient (Siemens xSPECT) reconstruction algorithm in SPECT/CT.

    Science.gov (United States)

    Armstrong, Ian S; Hoffmann, Sandra A

    2016-11-01

    Interest in quantitative single photon emission computed tomography (SPECT) is growing in a number of clinical applications, and several vendors now provide software and hardware solutions that allow 'SUV-SPECT' to mirror metrics used in PET imaging. This brief technical report assesses the accuracy of activity concentration measurements using a new algorithm, 'xSPECT', from Siemens Healthcare. SPECT/CT data were acquired from a uniform cylinder with 5, 10, 15 and 20 s/projection and from a NEMA image quality phantom with 25 s/projection. The NEMA phantom had hot spheres filled with an 8:1 activity concentration relative to the background compartment. Reconstructions were performed using parameters defined by the manufacturer presets available with the algorithm, and the accuracy of activity concentration measurements was assessed. A dose calibrator-camera cross-calibration factor (CCF) was derived from the uniform phantom data. In uniform phantom images, a positive bias was observed, ranging from ∼6% in the lower-count images to ∼4% in the higher-count images. On the basis of the higher-count data, a CCF of 0.96 was derived. As expected, considerable negative bias was measured in the NEMA spheres using region mean values, whereas positive bias was measured in the four largest NEMA spheres. Nonmonotonically increasing recovery curves for the hot spheres suggested the presence of Gibbs edge enhancement from resolution modelling. Sufficiently accurate activity concentration measurements can be obtained from images reconstructed with the xSPECT algorithm without a CCF, although the use of a CCF is likely to improve accuracy further. A manual conversion of voxel values into SUV should be possible, provided that the patient weight, injected activity and time between injection and imaging are all known accurately.
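
    The manual SUV conversion mentioned at the end is simple arithmetic once those three quantities are known. A sketch of body-weight SUV with the injected activity decay-corrected to scan time (isotope and numbers are illustrative only):

        def suv_bw(conc_bq_per_ml, injected_bq, weight_kg, delay_s, half_life_s):
            """Body-weight SUV from an activity concentration in Bq/mL;
            assumes ~1 g of tissue per mL, the usual convention."""
            activity_at_scan = injected_bq * 0.5 ** (delay_s / half_life_s)
            return conc_bq_per_ml * weight_kg * 1000.0 / activity_at_scan

        # e.g. 99mTc (half-life ~6.01 h), 700 MBq injected, 75 kg, 2 h delay
        print(suv_bw(5.0e3, 700e6, 75.0, 2 * 3600, 6.01 * 3600))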

  6. Effective noise-suppressed and artifact-reduced reconstruction of SPECT data using a preconditioned alternating projection algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Li, Si; Xu, Yuesheng, E-mail: yxu06@syr.edu [Guangdong Provincial Key Laboratory of Computational Science, School of Mathematics and Computational Sciences, Sun Yat-sen University, Guangzhou 510275 (China); Zhang, Jiahan; Lipson, Edward [Department of Physics, Syracuse University, Syracuse, New York 13244 (United States); Krol, Andrzej; Feiglin, David [Department of Radiology, SUNY Upstate Medical University, Syracuse, New York 13210 (United States); Schmidtlein, C. Ross [Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, New York 10065 (United States); Vogelsang, Levon [Carestream Health, Rochester, New York 14608 (United States); Shen, Lixin [Guangdong Provincial Key Laboratory of Computational Science, School of Mathematics and Computational Sciences, Sun Yat-sen University, Guangzhou 510275, China and Department of Mathematics, Syracuse University, Syracuse, New York 13244 (United States)

    2015-08-15

    Purpose: The authors have recently developed a preconditioned alternating projection algorithm (PAPA) with a total variation (TV) regularizer for solving the penalized-likelihood optimization model for single-photon emission computed tomography (SPECT) reconstruction. This algorithm belongs to a novel class of fixed-point proximity methods. The goal of this work is to investigate how PAPA performs when dealing with realistic noisy SPECT data, to compare its performance with more conventional methods, and to address issues with TV artifacts by proposing a novel form of the algorithm invoking high-order TV regularization, denoted as HOTV-PAPA, which is explored and studied extensively in the present work. Methods: Using Monte Carlo methods, the authors simulate noisy SPECT data from two water cylinders; one contains a lumpy “warm” background and “hot” lesions of various sizes with Gaussian activity distribution, and the other is a reference cylinder without hot lesions. The authors study the performance of HOTV-PAPA and compare it with PAPA using first-order TV regularization (TV-PAPA), the Panin–Zeng–Gullberg one-step-late method with TV regularization (TV-OSL), and an expectation–maximization algorithm with Gaussian postfilter (GPF-EM). The authors select penalty-weights (hyperparameters) by qualitatively balancing the trade-off between resolution and image noise separately for TV-PAPA and TV-OSL, arriving at the same penalty-weight value for both. The authors set the first penalty-weight in HOTV-PAPA equal to the optimal penalty-weight found for TV-PAPA. The second penalty-weight needed for HOTV-PAPA is tuned by balancing resolution and the severity of staircase artifacts. The authors adjust the Gaussian postfilter to approximately match the local point spread function of GPF-EM and HOTV-PAPA. The authors examine hot lesion detectability, study local spatial resolution, analyze background noise properties, estimate mean

  7. Effective noise-suppressed and artifact-reduced reconstruction of SPECT data using a preconditioned alternating projection algorithm

    International Nuclear Information System (INIS)

    Li, Si; Xu, Yuesheng; Zhang, Jiahan; Lipson, Edward; Krol, Andrzej; Feiglin, David; Schmidtlein, C. Ross; Vogelsang, Levon; Shen, Lixin

    2015-01-01

    Purpose: The authors have recently developed a preconditioned alternating projection algorithm (PAPA) with a total variation (TV) regularizer for solving the penalized-likelihood optimization model for single-photon emission computed tomography (SPECT) reconstruction. This algorithm belongs to a novel class of fixed-point proximity methods. The goal of this work is to investigate how PAPA performs when dealing with realistic noisy SPECT data, to compare its performance with more conventional methods, and to address issues with TV artifacts by proposing a novel form of the algorithm invoking high-order TV regularization, denoted as HOTV-PAPA, which is explored and studied extensively in the present work. Methods: Using Monte Carlo methods, the authors simulate noisy SPECT data from two water cylinders; one contains a lumpy “warm” background and “hot” lesions of various sizes with Gaussian activity distribution, and the other is a reference cylinder without hot lesions. The authors study the performance of HOTV-PAPA and compare it with PAPA using first-order TV regularization (TV-PAPA), the Panin–Zeng–Gullberg one-step-late method with TV regularization (TV-OSL), and an expectation–maximization algorithm with Gaussian postfilter (GPF-EM). The authors select penalty-weights (hyperparameters) by qualitatively balancing the trade-off between resolution and image noise separately for TV-PAPA and TV-OSL, arriving at the same penalty-weight value for both. The authors set the first penalty-weight in HOTV-PAPA equal to the optimal penalty-weight found for TV-PAPA. The second penalty-weight needed for HOTV-PAPA is tuned by balancing resolution and the severity of staircase artifacts. The authors adjust the Gaussian postfilter to approximately match the local point spread function of GPF-EM and HOTV-PAPA. The authors examine hot lesion detectability, study local spatial resolution, analyze background noise properties, estimate mean
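
    One of the baselines above, EM with a Gaussian postfilter, is built on the classical MLEM update, which fits in a few lines. A toy sketch with a dense random system matrix (the postfilter and all of the PAPA machinery are omitted):

        import numpy as np

        def mlem(A, y, n_iter=50):
            """Classical EM/MLEM for emission tomography:
            x <- x / (A^T 1) * A^T (y / (A x))."""
            x = np.ones(A.shape[1])
            sens = A.T @ np.ones(A.shape[0])             # sensitivity image
            for _ in range(n_iter):
                ratio = y / np.maximum(A @ x, 1e-12)
                x *= (A.T @ ratio) / np.maximum(sens, 1e-12)
            return x

        rng = np.random.default_rng(2)
        A = rng.uniform(size=(120, 64))                  # toy system matrix
        x_true = rng.uniform(size=64)
        y = rng.poisson(50 * (A @ x_true)) / 50.0        # noisy "projections"
        print(np.round(mlem(A, y)[:8], 2))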

  8. Automated reconstruction of 3D models from real environments

    Science.gov (United States)

    Sequeira, V.; Ng, K.; Wolfart, E.; Gonçalves, J. G. M.; Hogg, D.

    This paper describes an integrated approach to the construction of textured 3D scene models of building interiors from laser range data and visual images. The approach has been implemented as a collection of algorithms and sensors within a prototype device for 3D reconstruction, known as the EST (Environmental Sensor for Telepresence). The EST can take the form of a push trolley or of an autonomous mobile platform. The Autonomous EST (AEST) has been designed to provide an integrated solution for automating the creation of complete models. Embedded software performs several functions, including triangulation of the range data, registration of video texture, and registration and integration of data acquired from different capture points. Potential applications include facilities management for the construction industry and the creation of reality models for general areas of virtual reality, for example, virtual studios, virtualised reality for content-related applications (e.g., CD-ROMs), social telepresence, and architecture. The paper presents the main components of the EST/AEST and gives some example results obtained from the prototypes. The reconstructed model is encoded in VRML format so that it is possible to access and view the model via the World Wide Web.

  9. Influence of adaptive statistical iterative reconstruction algorithm on image quality in coronary computed tomography angiography.

    Science.gov (United States)

    Precht, Helle; Thygesen, Jesper; Gerke, Oke; Egstrup, Kenneth; Waaler, Dag; Lambrechtsen, Jess

    2016-12-01

    Coronary computed tomography angiography (CCTA) requires high spatial and temporal resolution, increased low contrast resolution for the assessment of coronary artery stenosis, plaque detection, and/or non-coronary pathology. Therefore, new reconstruction algorithms, particularly iterative reconstruction (IR) techniques, have been developed in an attempt to improve image quality with no cost in radiation exposure. To evaluate whether adaptive statistical iterative reconstruction (ASIR) enhances perceived image quality in CCTA compared to filtered back projection (FBP). Thirty patients underwent CCTA due to suspected coronary artery disease. Images were reconstructed using FBP, 30% ASIR, and 60% ASIR. Ninety image sets were evaluated by five observers using the subjective visual grading analysis (VGA) and assessed by proportional odds modeling. Objective quality assessment (contrast, noise, and the contrast-to-noise ratio [CNR]) was analyzed with linear mixed effects modeling on log-transformed data. The need for ethical approval was waived by the local ethics committee as the study only involved anonymously collected clinical data. VGA showed significant improvements in sharpness by comparing FBP with ASIR, resulting in odds ratios of 1.54 for 30% ASIR and 1.89 for 60% ASIR (P = 0.004). The objective measures showed significant differences between FBP and 60% ASIR (P < 0.0001) for noise, with an estimated ratio of 0.82, and for CNR, with an estimated ratio of 1.26. ASIR improved the subjective image quality of parameter sharpness and, objectively, reduced noise and increased CNR.
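
    The objective CNR used in comparisons like this is a one-line computation once ROIs are drawn; a sketch with synthetic ROI samples (exact ROI placement and the CNR definition vary between studies):

        import numpy as np

        def cnr(roi_vessel, roi_background):
            """Contrast between vessel and background ROI means, divided by
            background noise (one common CNR definition among several)."""
            return (roi_vessel.mean() - roi_background.mean()) / roi_background.std()

        rng = np.random.default_rng(3)
        print(cnr(rng.normal(400, 25, 500), rng.normal(50, 20, 2000)))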

  10. Accelerating image reconstruction in three-dimensional optoacoustic tomography on graphics processing units.

    Science.gov (United States)

    Wang, Kun; Huang, Chao; Kao, Yu-Jiun; Chou, Cheng-Ying; Oraevsky, Alexander A; Anastasio, Mark A

    2013-02-01

    Optoacoustic tomography (OAT) is inherently a three-dimensional (3D) inverse problem. However, most studies of OAT image reconstruction still employ two-dimensional imaging models, one important reason being that 3D image reconstruction is computationally burdensome. The aim of this work is to accelerate existing image reconstruction algorithms for 3D OAT by use of parallel programming techniques. Parallelization strategies are proposed to accelerate a filtered backprojection (FBP) algorithm and two different pairs of projection/backprojection operations that correspond to two different numerical imaging models. The algorithms are designed to fully exploit the parallel computing power of graphics processing units (GPUs). In order to evaluate the parallelization strategies for the projection/backprojection pairs, an iterative image reconstruction algorithm is implemented. Computer simulation and experimental studies are conducted to investigate the computational efficiency and numerical accuracy of the developed algorithms. The GPU implementations improve the computational efficiency by factors of 1000, 125, and 250 for the FBP algorithm and the two pairs of projection/backprojection operators, respectively. Accurate images are reconstructed by use of the FBP and iterative image reconstruction algorithms from both computer-simulated and experimental data. Parallelization strategies for 3D OAT image reconstruction are proposed for the first time. These GPU-based implementations significantly reduce the computational time for 3D image reconstruction, complementing our earlier work on 3D OAT iterative image reconstruction.
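
    The operators being accelerated are embarrassingly parallel: within one view of a backprojection, every pixel (or voxel) is updated independently. A toy 2D version, vectorized per view with numpy; for these array operations cupy is a drop-in replacement that moves the same code onto a GPU. This illustrates the parallel structure only, not the authors' 3D OAT operators:

        import numpy as np   # "import cupy as np" runs the same sketch on a GPU

        def backproject(sinogram, angles, n):
            """Naive parallel-beam backprojection, vectorized over all pixels."""
            xs = np.arange(n) - n / 2.0
            X, Y = np.meshgrid(xs, xs)
            img = np.zeros((n, n))
            for a, proj in zip(angles, sinogram):
                t = X * np.cos(a) + Y * np.sin(a) + n / 2.0   # detector coordinate
                idx = np.clip(t.astype(int), 0, proj.size - 1)
                img += proj[idx]                              # independent gathers
            return img * np.pi / len(angles)

        sino = np.ones((180, 128))
        print(backproject(sino, np.deg2rad(np.arange(180.0)), 128).shape)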

  11. SU-F-J-74: High Z Geometric Integrity and Beam Hardening Artifact Assessment Using a Retrospective Metal Artifact Reduction (MAR) Reconstruction Algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Woods, K; DiCostanzo, D; Gupta, N [Ohio State University Columbus, OH (United States)

    2016-06-15

    Purpose: To test the efficacy of a retrospective metal artifact reduction (MAR) reconstruction algorithm for a commercial computed tomography (CT) scanner for radiation therapy purposes. Methods: High Z geometric integrity and artifact reduction analysis was performed with three phantoms using General Electric’s (GE) Discovery CT. The three phantoms included: a Computerized Imaging Reference Systems (CIRS) electron density phantom (Model 062) with a 6.5 mm diameter titanium rod insert, a custom spine phantom using Synthes Spine hardware submerged in water, and a dental phantom with various high Z fillings submerged in water. Each phantom was reconstructed using MAR and compared against the original scan. Furthermore, each scenario was tested using standard and extended Hounsfield Unit (HU) ranges. The high Z geometric integrity analysis was performed using the CIRS phantom, while the artifact reduction analysis used all three phantoms. Results: The geometry of the 6.5 mm diameter rod was slightly overestimated for non-MAR scans for both standard and extended HU. With MAR reconstruction, the rod was underestimated for both standard and extended HU. For artifact reduction, the mean and standard deviation were compared in a volume of interest (VOI) in the surrounding material (water and water-equivalent material, ∼0 HU). Overall, the mean value of the VOI was closer to 0 HU for the MAR reconstruction than for the non-MAR scan for most phantoms. Additionally, the standard deviations for all phantoms were greatly reduced using MAR reconstruction. Conclusion: GE’s MAR reconstruction algorithm improves image quality in the presence of high Z material with minimal degradation of its geometric integrity. High Z delineation can be carried out with proper contouring techniques. The effects of beam hardening artifacts are greatly reduced with MAR reconstruction. Tissue corrections due to these artifacts can be eliminated for simple high Z geometries and greatly

  12. SU-F-J-74: High Z Geometric Integrity and Beam Hardening Artifact Assessment Using a Retrospective Metal Artifact Reduction (MAR) Reconstruction Algorithm

    International Nuclear Information System (INIS)

    Woods, K; DiCostanzo, D; Gupta, N

    2016-01-01

    Purpose: To test the efficacy of a retrospective metal artifact reduction (MAR) reconstruction algorithm for a commercial computed tomography (CT) scanner for radiation therapy purposes. Methods: High Z geometric integrity and artifact reduction analysis was performed with three phantoms using General Electric’s (GE) Discovery CT. The three phantoms included: a Computerized Imaging Reference Systems (CIRS) electron density phantom (Model 062) with a 6.5 mm diameter titanium rod insert, a custom spine phantom using Synthes Spine hardware submerged in water, and a dental phantom with various high Z fillings submerged in water. Each phantom was reconstructed using MAR and compared against the original scan. Furthermore, each scenario was tested using standard and extended Hounsfield Unit (HU) ranges. The high Z geometric integrity analysis was performed using the CIRS phantom, while the artifact reduction analysis used all three phantoms. Results: The geometry of the 6.5 mm diameter rod was slightly overestimated for non-MAR scans for both standard and extended HU. With MAR reconstruction, the rod was underestimated for both standard and extended HU. For artifact reduction, the mean and standard deviation were compared in a volume of interest (VOI) in the surrounding material (water and water-equivalent material, ∼0 HU). Overall, the mean value of the VOI was closer to 0 HU for the MAR reconstruction than for the non-MAR scan for most phantoms. Additionally, the standard deviations for all phantoms were greatly reduced using MAR reconstruction. Conclusion: GE’s MAR reconstruction algorithm improves image quality in the presence of high Z material with minimal degradation of its geometric integrity. High Z delineation can be carried out with proper contouring techniques. The effects of beam hardening artifacts are greatly reduced with MAR reconstruction. Tissue corrections due to these artifacts can be eliminated for simple high Z geometries and greatly

  13. Multiperiod hydrothermal economic dispatch by an interior point method

    Directory of Open Access Journals (Sweden)

    Kimball L. M.

    2002-01-01

    Full Text Available This paper presents an interior point algorithm to solve the multiperiod hydrothermal economic dispatch (HTED). The multiperiod HTED is a large-scale nonlinear programming problem. Various optimization methods have been applied to the multiperiod HTED, but most neglect important network characteristics or require decomposition into thermal and hydro subproblems. The algorithm described here exploits the special bordered block diagonal structure and sparsity of the Newton system for the first-order necessary conditions, resulting in a fast, efficient algorithm that can account for all network aspects. Applying this new algorithm challenges a conventional heuristic for the use of available hydro resources known as peak shaving.
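
    The structure being exploited looks like the toy system below: one Newton block per time period plus a thin border of linking (e.g. reservoir coupling) rows. A general sparse LU already profits from this sparsity, though the paper's dedicated block elimination goes further. Sizes and matrices are invented for illustration:

        import numpy as np
        from scipy.sparse import block_diag, bmat, identity, random as sprandom
        from scipy.sparse.linalg import splu

        blocks = [sprandom(20, 20, 0.2, random_state=i) + 20 * identity(20)
                  for i in range(6)]                       # one block per period
        H = block_diag(blocks, format="csc")
        B = sprandom(4, 120, 0.05, random_state=9)         # bordering linking rows
        K = bmat([[H, B.T], [B, -1e-8 * identity(4)]], format="csc")
        rhs = np.random.default_rng(4).normal(size=124)
        step = splu(K).solve(rhs)                          # sparse LU Newton step
        print(step.shape)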

  14. General surface reconstruction for cone-beam multislice spiral computed tomography

    International Nuclear Information System (INIS)

    Chen Laigao; Liang Yun; Heuscher, Dominic J.

    2003-01-01

    A new family of cone-beam reconstruction algorithms, General Surface Reconstruction (GSR), is proposed and formulated in this paper for multislice spiral computed tomography (CT) reconstruction. It provides a general framework that allows the reconstruction of planar or nonplanar surfaces from a set of rebinned short-scan parallel-beam projection data. An iterative surface formation method is proposed as an example to show how nonplanar reconstruction surfaces can be formed to minimize the inconsistency between the collected cone-beam projection data and the reconstruction surfaces. The improvement in accuracy of nonplanar surfaces over planar surfaces in two-dimensional approximate cone-beam reconstruction is mathematically proved and demonstrated using numerical simulations. The proposed GSR algorithm is evaluated by computer simulation of cone-beam spiral scanning geometry and various mathematical phantoms. The results demonstrate that the GSR algorithm generates much better image quality compared to conventional multislice reconstruction algorithms. For a table speed up to 100 mm per rotation, GSR demonstrates good image quality for both the low-contrast ball phantom and the thorax phantom. All other performance parameters are comparable to the single-slice 180 deg. LI (linear interpolation) algorithm, which is considered the 'gold standard'. GSR also achieves high computing efficiency and good temporal resolution, making it an attractive alternative for the reconstruction of next-generation multislice spiral CT data.

  15. MR fingerprinting reconstruction with Kalman filter.

    Science.gov (United States)

    Zhang, Xiaodi; Zhou, Zechen; Chen, Shiyang; Chen, Shuo; Li, Rui; Hu, Xiaoping

    2017-09-01

    Magnetic resonance fingerprinting (MR fingerprinting or MRF) is a newly introduced quantitative magnetic resonance imaging technique, which enables simultaneous multi-parameter mapping in a single acquisition with improved time efficiency. The current MRF reconstruction method is based on dictionary matching, which may be limited by the discrete and finite nature of the dictionary and by the computational cost associated with dictionary construction, storage and matching. In this paper, we describe a reconstruction method for MRF based on the Kalman filter, which avoids the use of a dictionary and yields continuous MR parameter measurements. Within this Kalman filter framework, the Bloch equation of the inversion-recovery balanced steady-state free precession (IR-bSSFP) MRF sequence was derived to predict signal evolution, and the acquired signal was entered to update the prediction. The algorithm gradually estimates the accurate MR parameters during the recursive calculation. Single-pixel and numerical brain phantom simulations were implemented with the Kalman filter, and the results were compared with those from the dictionary matching reconstruction algorithm to demonstrate the feasibility and assess the performance of the Kalman filter algorithm. The results demonstrated that the Kalman filter algorithm is applicable to MRF reconstruction, eliminating the need for a predefined dictionary and yielding continuous MR parameters, in contrast to the dictionary matching algorithm. Copyright © 2017 Elsevier Inc. All rights reserved.
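
    The recursive predict/update structure the paper relies on is the textbook Kalman filter; a scalar toy version (constant-state model, invented noise levels) shows the skeleton that the MRF method fills in with a Bloch-equation signal prediction:

        import numpy as np

        def kalman_1d(zs, q=1e-4, r=0.05):
            """Scalar Kalman filter: predict the state, then update with each
            new measurement; x is the estimate, p its variance."""
            x, p = 0.0, 1.0
            for z in zs:
                p = p + q                             # predict (identity state model)
                k = p / (p + r)                       # Kalman gain
                x, p = x + k * (z - x), (1 - k) * p   # update
            return x

        rng = np.random.default_rng(7)
        print(kalman_1d(0.8 + rng.normal(0, 0.2, 500)))   # converges near 0.8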

  16. Comparison of frequency difference reconstruction algorithms for the detection of acute stroke using EIT in a realistic head-shaped tank

    International Nuclear Information System (INIS)

    Packham, B; Koo, H; Romsauerova, A; Holder, D S; Ahn, S; Jun, S C; McEwan, A

    2012-01-01

    Imaging of acute stroke might be possible using multi-frequency electrical impedance tomography (MFEIT), but this requires absolute or frequency difference imaging. Simple linear frequency difference reconstruction has been shown to be ineffective in imaging with a frequency-dependent background conductivity; this has been overcome with a weighted frequency difference approach with correction for the background, but that approach has only been validated for cylindrical and hemispherical tanks. The feasibility of MFEIT for imaging of acute stroke in a realistic head geometry was examined by imaging a potato perturbation against a saline background and a carrot-saline frequency-dependent background conductivity, in a head-shaped tank with the UCLH Mk2.5 MFEIT system. Reconstruction was performed with time difference (TD), frequency difference (FD), FD adjacent (FDA), weighted FD (WFD) and weighted FDA (WFDA) linear algorithms. The perturbation in the reconstructed images corresponded to the true position to within 9.5% of the image diameter, with an image SNR of >5.4, for all algorithms in saline, but only for TD, WFDA and WFD in the carrot-saline background. No reliable imaging was possible with FD and FDA. This indicates that the WFD approach is also effective for a realistic head geometry and supports its use for human imaging in the future. (paper)
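
    The weighted frequency-difference idea can be stated in a few lines: scale the low-frequency frame so that it best explains the high-frequency frame, then subtract, so that a frequency-dependent but spatially uniform background largely cancels. A sketch on toy boundary-voltage vectors (the least-squares projection weight is one common choice and may differ in detail from the paper's):

        import numpy as np

        def weighted_fd(v_lo, v_hi):
            """Weighted frequency-difference data: remove the component of the
            high-frequency frame explained by the low-frequency frame."""
            alpha = np.dot(v_hi, v_lo) / np.dot(v_lo, v_lo)
            return v_hi - alpha * v_lo

        v_lo = np.array([1.0, 2.0, 3.0])
        v_hi = 1.1 * v_lo + np.array([0.0, 0.01, -0.02])  # scaled background + anomaly
        print(weighted_fd(v_lo, v_hi))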

  17. Experimental scheme and restoration algorithm of block compression sensing

    Science.gov (United States)

    Zhang, Linxia; Zhou, Qun; Ke, Jun

    2018-01-01

    Compressed sensing (CS) can exploit the sparseness of a target to obtain its image from much less data than the Nyquist sampling theorem requires. In this paper, we study the hardware implementation of a block compressed sensing system and its reconstruction algorithms. Different block sizes are used. Two algorithms, orthogonal matching pursuit (OMP) and total variation minimization (TV), are used to obtain good reconstructions. The influence of block size on reconstruction is also discussed.
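
    Of the two solvers, OMP is the easier to sketch: greedily pick the dictionary column most correlated with the current residual, then refit by least squares on the selected support. A self-contained toy example (sizes and sparsity invented):

        import numpy as np

        def omp(Phi, y, k):
            """Orthogonal matching pursuit for a k-sparse signal."""
            support, coef = [], None
            residual = y.copy()
            for _ in range(k):
                support.append(int(np.argmax(np.abs(Phi.T @ residual))))
                coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
                residual = y - Phi[:, support] @ coef
            x = np.zeros(Phi.shape[1])
            x[support] = coef
            return x

        rng = np.random.default_rng(8)
        Phi = rng.normal(size=(40, 100))
        Phi /= np.linalg.norm(Phi, axis=0)                # unit-norm columns
        x_true = np.zeros(100)
        x_true[[3, 30, 77]] = [2.0, -1.0, 1.5]
        print(np.flatnonzero(omp(Phi, Phi @ x_true, 3)))  # recovers {3, 30, 77}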

  18. An algorithm for the reconstruction of high-energy neutrino-induced particle showers and its application to the ANTARES neutrino telescope

    International Nuclear Information System (INIS)

    Albert, A.; Drouhin, D.; Racca, C.; Andre, M.; Anghinolfi, M.; Anton, G.; Folger, F.; Graf, K.; Hallmann, S.; Hoessl, J.; Hofestaedt, J.; James, C.W.; Kalekin, O.; Katz, U.; Kiessling, D.; Lahmann, R.; Sieger, C.; Ardid, M.; Felis, I.; Martinez-Mora, J.A.; Saldana, M.; Aubert, J.J.; Bertin, V.; Brunner, J.; Busto, J.; Carr, J.; Costantini, H.; Coyle, P.; Dornic, D.; Enzenhoefer, A.; Quinn, L.; Salvadori, I.; Turpin, D.; Avgitas, T.; Baret, B.; Bourret, S.; Coelho, J.A.B.; Creusot, A.; Galata, S.; Gregoire, T.; Gracia Ruiz, R.; Lachaud, C.; Barrios-Marti, J.; Hernandez-Rey, J.J.; Illuminati, G.; Lotze, M.; Toennis, C.; Zornoza, J.D.; Zuniga, J.; Basa, S.; Marcelin, M.; Nezri, E.; Biagi, S.; Coniglione, R.; Distefano, C.; Piattelli, P.; Riccobene, G.; Sapienza, P.; Trovato, A.; Bormuth, R.; Jong, M. de; Samtleben, D.F.E.; Bouwhuis, M.C.; Heijboer, A.J.; Jongen, M.; Michael, T.; Bruijn, R.; Melis, K.; Capone, A.; De Bonis, G.; Di Palma, I.; Perrina, C.; Vizzoca, A.; Caramete, L.; Pavalas, G.E.; Popa, V.; Celli, S.; Chiarusi, T.; Circella, M.; Sanchez-Losa, A.; Coleiro, A.; Deschamps, A.; Hello, Y.; Domi, A.; Hugon, C.; Sanguineti, M.; Taiuti, M.; Donzaud, C.; Eberl, T.; El Bojaddaini, I.; Moussa, A.; Elsaesser, D.; Kadler, M.; Kreter, M.; Fusco, L.A.; Margiotta, A.; Pellegrino, C.; Spurio, M.; Versari, F.; Gay, P.; Giordano, V.; Glotin, H.; Haren, H. van; Kouchner, A.; Van Elewyck, V.; Kreykenbohm, I.; Wilms, J.; Kulikovskiy, V.; Lefevre, D.; Leonora, E.; Loucatos, S.; Vallage, B.; Marinelli, A.; Mele, R.; Vivolo, D.; Migliozzi, P.; Organokov, M.; Pradier, T.; Schuessler, F.; Stolarczyk, T.; Tayalati, Y.

    2017-01-01

    A novel algorithm to reconstruct neutrino-induced particle showers within the ANTARES neutrino telescope is presented. The method achieves a median angular resolution of 6° for shower energies below 100 TeV. Applying this algorithm to 6 years of data taken with the ANTARES detector, 8 events with reconstructed shower energies above 10 TeV are observed. This is consistent with the expectation of about 5 events from atmospheric backgrounds, but also compatible with diffuse astrophysical flux measurements by the IceCube collaboration, from which 2-4 additional events are expected. A 90% C.L. upper limit on the diffuse astrophysical neutrino flux with a value per neutrino flavour of E² · Φ90% = 4.9 × 10⁻⁸ GeV cm⁻² s⁻¹ sr⁻¹ is set, applicable to the energy range from 23 TeV to 7.8 PeV, assuming an unbroken E⁻² spectrum and neutrino flavour equipartition at Earth. (orig.)

  19. An algorithm for the reconstruction of high-energy neutrino-induced particle showers and its application to the ANTARES neutrino telescope

    Energy Technology Data Exchange (ETDEWEB)

    Albert, A.; Drouhin, D.; Racca, C. [GRPHE, Universite de Haute Alsace, Institut universitaire de technologie de Colmar, 34 rue du Grillenbreit, BP 50568, Colmar (France); Andre, M. [Technical University of Catalonia, Laboratory of Applied Bioacoustics, Rambla Exposicio, Barcelona (Spain); Anghinolfi, M. [INFN-Sezione di Genova, Genoa (Italy); Anton, G.; Folger, F.; Graf, K.; Hallmann, S.; Hoessl, J.; Hofestaedt, J.; James, C.W.; Kalekin, O.; Katz, U.; Kiessling, D.; Lahmann, R.; Sieger, C. [Friedrich-Alexander-Universitaet Erlangen-Nuernberg, Erlangen Centre for Astroparticle Physics, Erlangen (Germany); Ardid, M.; Felis, I.; Martinez-Mora, J.A.; Saldana, M. [Universitat Politecnica de Valencia, Institut d' Investigacio per a la Gestio Integrada de les Zones Costaneres (IGIC), Gandia (Spain); Aubert, J.J.; Bertin, V.; Brunner, J.; Busto, J.; Carr, J.; Costantini, H.; Coyle, P.; Dornic, D.; Enzenhoefer, A.; Quinn, L.; Salvadori, I.; Turpin, D. [Aix Marseille Univ, CNRS/IN2P3, CPPM, Marseille (France); Avgitas, T.; Baret, B.; Bourret, S.; Coelho, J.A.B.; Creusot, A.; Galata, S.; Gregoire, T.; Gracia Ruiz, R.; Lachaud, C. [APC, Univ Paris Diderot, CNRS/IN2P3, CEA/Irfu, Obs de Paris, Sorbonne Paris Cite, Paris (France); Barrios-Marti, J.; Hernandez-Rey, J.J.; Illuminati, G.; Lotze, M.; Toennis, C.; Zornoza, J.D.; Zuniga, J. [IFIC, Instituto de Fisica Corpuscular (CSIC-Universitat de Valencia) c/Catedratico Jose Beltran, 2, 46980, Paterna, Valencia (Spain); Basa, S.; Marcelin, M.; Nezri, E. [LAM, Laboratoire d' Astrophysique de Marseille, Pole de l' Etoile Site de Chateau-Gombert, Marseille Cedex 13 (France); Biagi, S.; Coniglione, R.; Distefano, C.; Piattelli, P.; Riccobene, G.; Sapienza, P.; Trovato, A. [INFN, Laboratori Nazionali del Sud (LNS), Catania (Italy); Bormuth, R.; Jong, M. de; Samtleben, D.F.E. [Nikhef, Amsterdam (Netherlands); Universiteit Leiden, Huygens-Kamerlingh Onnes Laboratorium, Leiden (Netherlands); Bouwhuis, M.C.; Heijboer, A.J.; Jongen, M.; Michael, T. [Nikhef, Amsterdam (Netherlands); Bruijn, R.; Melis, K. [Nikhef, Amsterdam (Netherlands); Universiteit van Amsterdam, Instituut voor Hoge-Energie Fysica, Amsterdam (Netherlands); Capone, A.; De Bonis, G.; Di Palma, I.; Perrina, C.; Vizzoca, A. [INFN, Sezione di Roma, Rome (Italy); Dipartimento di Fisica dell' Universita La Sapienza, Rome (Italy); Caramete, L.; Pavalas, G.E.; Popa, V. [Institute for Space Science, 077125, Bucharest, Magurele (Romania); Celli, S. [INFN, Sezione di Roma, Rome (Italy); Dipartimento di Fisica dell' Universita La Sapienza, Rome (Italy); Gran Sasso Science Institute, L' Aquila (Italy); Chiarusi, T. [INFN, Sezione di Bologna, Bologna (Italy); Circella, M.; Sanchez-Losa, A. [INFN, Sezione di Bari, Bari (Italy); Coleiro, A. [APC, Univ Paris Diderot, CNRS/IN2P3, CEA/Irfu, Obs de Paris, Sorbonne Paris Cite, Paris (France); IFIC, Instituto de Fisica Corpuscular (CSIC-Universitat de Valencia) c/Catedratico Jose Beltran, 2, 46980, Paterna, Valencia (Spain); Deschamps, A.; Hello, Y. [CNRS, IRD, Observatoire de la Cote d' Azur, Geoazur, UCA, Sophia Antipolis (France); Domi, A.; Hugon, C.; Sanguineti, M.; Taiuti, M. [INFN-Sezione di Genova, Genoa (Italy); Dipartimento di Fisica dell' Universita, Genoa (Italy); Donzaud, C. [APC, Univ Paris Diderot, CNRS/IN2P3, CEA/Irfu, Obs de Paris, Sorbonne Paris Cite, Paris (France); Universite Paris-Sud, Orsay Cedex (France); Eberl, T. 
[Friedrich-Alexander-Universitaet Erlangen-Nuernberg, Erlangen Centre for Astroparticle Physics, Erlangen (Germany); El Bojaddaini, I.; Moussa, A. [University Mohammed I, Laboratory of Physics of Matter and Radiations, B.P.717, Oujda (Morocco); Elsaesser, D.; Kadler, M.; Kreter, M. [Universitaet Wuerzburg, Institut fuer Theoretische Physik und Astrophysik, Wuerzburg (Germany); Fusco, L.A.; Margiotta, A.; Pellegrino, C.; Spurio, M.; Versari, F. [INFN, Sezione di Bologna, Bologna (Italy); Dipartimento di Fisica e Astronomia dell' Universita, Bologna (Italy); Gay, P. [APC, Univ Paris Diderot, CNRS/IN2P3, CEA/Irfu, Obs de Paris, Sorbonne Paris Cite, Paris (France); Universite Blaise Pascal, CNRS/IN2P3, Laboratoire de Physique Corpusculaire, Clermont Universite, BP 10448, Clermont-Ferrand (France); Giordano, V. [INFN, Sezione di Catania, Catania (Italy); Glotin, H. [LSIS, Aix Marseille Universite CNRS ENSAM LSIS UMR 7296, Marseille (France); Universite de Toulon CNRS LSIS UMR 7296, La Garde (France); Institut Universitaire de France, Paris (France); Haren, H. van [Royal Netherlands Institute for Sea Research (NIOZ), ' t Horntje (Texel) (Netherlands); Kouchner, A.; Van Elewyck, V. [APC, Univ Paris Diderot, CNRS/IN2P3, CEA/Irfu, Obs de Paris, Sorbonne Paris Cite (France); Institut Universitaire de France, Paris (France); Kreykenbohm, I.; Wilms, J. [Universitaet Erlangen-Nuernberg, Dr. Remeis-Sternwarte and ECAP, Bamberg (Germany); Kulikovskiy, V. [Aix Marseille Univ, CNRS/IN2P3, CPPM, Marseille (France); Skobeltsyn Institute of Nuclear Physics, Moscow State University, Moscow (RU); Lefevre, D. [Aix-Marseille University, Mediterranean Institute of Oceanography (MIO), Marseille Cedex 9 (FR); Universite du Sud Toulon-Var, CNRS-INSU/IRD UM 110, La Garde Cedex (FR); Leonora, E. [INFN, Sezione di Catania, Catania (IT); Dipartimento di Fisica ed Astronomia dell' Universita, Catania (IT); Loucatos, S.; Vallage, B. [APC, Univ Paris Diderot, CNRS/IN2P3, CEA/Irfu, Obs de Paris, Sorbonne Paris Cite, Paris (FR); Direction des Sciences de la Matiere, Institut de Recherche sur les Lois Fondamentales de l' Univers, Service de Physique des Particules, CEA Saclay, Gif-sur-Yvette (FR); Marinelli, A. [INFN, Sezione di Pisa, Pisa (IT); Dipartimento di Fisica dell' Universita, Pisa (IT); Mele, R.; Vivolo, D. [INFN, Sezione di Napoli, Naples (IT); Dipartimento di Fisica dell' Universita Federico II di Napoli, Naples (IT); Migliozzi, P. [INFN, Sezione di Napoli, Naples (IT); Organokov, M.; Pradier, T. [Universite de Strasbourg, CNRS, IPHC UMR 7178, Strasbourg (FR); Schuessler, F.; Stolarczyk, T. [Direction des Sciences de la Matiere, Institut de Recherche sur les Lois Fondamentales de l' Univers, Service de Physique des Particules, CEA Saclay, Gif-sur-Yvette (FR); Tayalati, Y. [University Mohammed V in Rabat, Faculty of Sciences, Rabat (MA)

    2017-06-15

    A novel algorithm to reconstruct neutrino-induced particle showers within the ANTARES neutrino telescope is presented. The method achieves a median angular resolution of 6° for shower energies below 100 TeV. Applying this algorithm to 6 years of data taken with the ANTARES detector, 8 events with reconstructed shower energies above 10 TeV are observed. This is consistent with the expectation of about 5 events from atmospheric backgrounds, but also compatible with diffuse astrophysical flux measurements by the IceCube collaboration, from which 2-4 additional events are expected. A 90% C.L. upper limit on the diffuse astrophysical neutrino flux with a value per neutrino flavour of E² · Φ90% = 4.9 × 10⁻⁸ GeV cm⁻² s⁻¹ sr⁻¹ is set, applicable to the energy range from 23 TeV to 7.8 PeV, assuming an unbroken E⁻² spectrum and neutrino flavour equipartition at Earth. (orig.)

  20. Vertex reconstruction in CMS

    International Nuclear Information System (INIS)

    Chabanat, E.; D'Hondt, J.; Estre, N.; Fruehwirth, R.; Prokofiev, K.; Speer, T.; Vanlaer, P.; Waltenberger, W.

    2005-01-01

    Due to the high track multiplicity in the final states expected in proton collisions at the LHC experiments, novel vertex reconstruction algorithms are required. The vertex reconstruction problem can be decomposed into a pattern recognition problem ('vertex finding') and an estimation problem ('vertex fitting'). Starting from least-squares methods, robustifications of the classical algorithms are discussed and the statistical properties of the novel methods are shown. A whole set of different approaches for the vertex finding problem is presented and compared in relevant physics channels

  1. Decision making in double-pedicled DIEP and SIEA abdominal free flap breast reconstructions: An algorithmic approach and comprehensive classification.

    Directory of Open Access Journals (Sweden)

    Charles M Malata

    2015-10-01

    Full Text Available Introduction: The deep inferior epigastric artery perforator (DIEP) free flap is the gold standard for autologous breast reconstruction. However, using a single vascular pedicle may not yield sufficient tissue in patients with midline scars or insufficient lower abdominal pannus. Double-pedicled free flaps overcome this problem using different vascular arrangements to harvest the entire lower abdominal flap. The literature is, however, sparse regarding technique selection. We therefore reviewed our experience in order to formulate an algorithm and comprehensive classification for this purpose. Methods: All patients undergoing unilateral double-pedicled abdominal perforator free flap breast reconstruction (AFFBR) by a single surgeon (CMM) over 40 months were reviewed from a prospectively collected database. Results: Of the 112 consecutive breast free flaps performed, 25 (22%) utilised two vascular pedicles. The mean patient age was 45 years (range = 27-54). All flaps but one (which used the thoracodorsal system) were anastomosed to the internal mammary vessels using the rib-preservation technique. The surgical duration was 656 minutes (range = 468-690 min). The median flap weight was 618 g (range = 432-1275 g) and the mastectomy weight was 445 g (range = 220-896 g). All flaps were successful, and only three patients requested minor liposuction to reduce and reshape their reconstructed breasts. Conclusion: Bipedicled free abdominal perforator flaps, employed in a fifth of all our AFFBRs, are a reliable and safe option for unilateral breast reconstruction. They, however, necessitate clear indications to justify the additional technical complexity and surgical duration. Our algorithm and comprehensive classification facilitate technique selection for the anastomotic permutations and the successful execution of these operations.

  2. Study on the algorithm of computational ghost imaging based on discrete fourier transform measurement matrix

    Science.gov (United States)

    Zhang, Leihong; Liang, Dong; Li, Bei; Kang, Yi; Pan, Zilan; Zhang, Dawei; Gao, Xiumin; Ma, Xiuhua

    2016-07-01

    Based on an analysis of the cosine light field with a determined analytic expression and of the pseudo-inverse method, the object is illuminated by a preset light field with a determined discrete Fourier transform measurement matrix, and the object image is reconstructed by the pseudo-inverse method. The analytic expression of the algorithm of computational ghost imaging based on a discrete Fourier transform measurement matrix is deduced theoretically and compared with the algorithm of compressive computational ghost imaging based on a random measurement matrix. The reconstruction process and the reconstruction error are analyzed, and simulations are performed to verify the theoretical analysis. When the number of sampling measurements is similar to the number of object pixels, the rank of the discrete Fourier transform matrix is the same as that of the random measurement matrix, the PSNRs of images reconstructed by the FGI and PGI algorithms are similar, and the reconstruction error of the traditional CGI algorithm is lower than that of the FGI and PGI algorithms. As the number of sampling measurements decreases, the PSNR of images reconstructed by the FGI algorithm decreases slowly, while the PSNRs of images reconstructed by the PGI and CGI algorithms decrease sharply. The reconstruction time of the FGI algorithm is lower than that of the other algorithms and is not affected by the number of sampling measurements. The FGI algorithm can effectively filter out random white noise through a low-pass filter and realize denoising in reconstruction, with a higher denoising capability than the CGI algorithm. The FGI algorithm can thus improve both the reconstruction accuracy and the reconstruction speed of computational ghost imaging.
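
    The core of the scheme fits in a few lines: the rows of the measurement matrix are the preset illumination patterns, each bucket measurement is one inner product with the object, and the image is recovered with the Moore-Penrose pseudo-inverse. A toy version with real-valued patterns built from a DFT matrix (sizes invented):

        import numpy as np

        n = 16                                    # object with n pixels
        F = np.fft.fft(np.eye(n))                 # discrete Fourier transform matrix
        Phi = np.vstack([F.real, F.imag])         # real-valued illumination patterns
        x = np.zeros(n)
        x[[2, 7, 9]] = 1.0                        # toy object
        y = Phi @ x                               # simulated bucket measurements
        x_hat = np.linalg.pinv(Phi) @ y           # pseudo-inverse reconstruction
        print(np.round(x_hat, 3))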

  3. Comparison of SeaWinds Backscatter Imaging Algorithms

    Science.gov (United States)

    Long, David G.

    2017-01-01

    This paper compares the performance and tradeoffs of various backscatter imaging algorithms for the SeaWinds scatterometer when multiple passes over a target are available. Reconstruction methods are compared with conventional gridding algorithms. In particular, the performance and tradeoffs of conventional 'drop in the bucket' (DIB) gridding at the intrinsic sensor resolution are compared to high-spatial-resolution imaging algorithms such as fine-resolution DIB (fDIB) and scatterometer image reconstruction (SIR), which generate enhanced-resolution backscatter images. Various options for each algorithm are explored, including computation in both linear and dB space. The effects of sampling density and of reconstruction quality versus time are explored. Both simulated and actual data results are considered. The results demonstrate the effectiveness of high-resolution reconstruction using SIR as well as its limitations and the limitations of DIB and fDIB. PMID:28828143
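
    DIB gridding is the baseline the reconstruction methods are measured against: each sigma-0 measurement is dropped into the grid cell containing its footprint centre, and cell values are averaged. A minimal sketch (toy coordinates, linear-space averaging):

        import numpy as np

        def drop_in_bucket(lat, lon, sigma0, grid_shape, extent):
            """Average measurements falling into each cell of a lat/lon grid."""
            (lat0, lat1, lon0, lon1), (nr, nc) = extent, grid_shape
            r = ((lat - lat0) / (lat1 - lat0) * nr).astype(int).clip(0, nr - 1)
            c = ((lon - lon0) / (lon1 - lon0) * nc).astype(int).clip(0, nc - 1)
            acc, cnt = np.zeros(grid_shape), np.zeros(grid_shape)
            np.add.at(acc, (r, c), sigma0)                # accumulate per cell
            np.add.at(cnt, (r, c), 1)
            return acc / np.maximum(cnt, 1)               # empty cells stay 0

        rng = np.random.default_rng(9)
        lat, lon = rng.uniform(0, 10, 1000), rng.uniform(0, 10, 1000)
        img = drop_in_bucket(lat, lon, rng.uniform(size=1000), (20, 20), (0, 10, 0, 10))
        print(img.shape)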

  4. Interior Space: Representation, Occupation, Well-Being and Interiority

    OpenAIRE

    Power, Jacqueline

    2014-01-01

    This article will provide an overview of space as it is understood and engaged with from within the discipline of interior design/interior architecture. Firstly, the term interior will be described. Secondly, the paper will discuss space as a general concept, before exploring what space is specifically for the interior design/interior architecture discipline. How is space understood? What does space "look" like for interior designers/interior architects?

  5. Algorithms for Reconstruction of Undersampled Atomic Force Microscopy Images Supplementary Material

    DEFF Research Database (Denmark)

    2017-01-01

    Two Jupyter Notebooks showcasing reconstructions of undersampled atomic force microscopy images. The reconstructions were obtained using a variety of interpolation and reconstruction methods.

  6. Cone-beam local reconstruction based on a Radon inversion transformation

    International Nuclear Information System (INIS)

    Wang Xian-Chao; Yan Bin; Li Lei; Hu Guo-En

    2012-01-01

    The local reconstruction from truncated projection data is one area of interest in image reconstruction for computed tomography (CT), as it creates the possibility of dose reduction. In this paper, a filtered-backprojection (FBP) algorithm based on the Radon inversion transform is presented to deal with three-dimensional (3D) local reconstruction in the circular geometry. The algorithm achieves the data filtering in two steps. The first step is the derivative of the projections, which acts locally on the data and can thus be carried out accurately even in the presence of data truncation. The second step is nonlocal Hilbert filtering. Numerical simulations and real data reconstructions have been conducted to validate the new reconstruction algorithm. Compared with the approximate truncation resistant algorithm for computed tomography (ATRACT), it not only has a comparable ability to restrain truncation artifacts but also improved reconstruction efficiency: it is about twice as fast as ATRACT. This work therefore provides a simple and efficient approach for approximate reconstruction from truncated projections in circular cone-beam CT.
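
    The two filtering steps are easy to see in 1D: a derivative that remains accurate under truncation because it acts locally, followed by a Hilbert transform that does not. A sketch using scipy's analytic signal (sign and scaling conventions differ between derivations, so this shows structure only):

        import numpy as np
        from scipy.signal import hilbert

        def two_step_filter(projection_row, pitch=1.0):
            """Step 1: local derivative; step 2: nonlocal Hilbert filtering.
            The imaginary part of the analytic signal is the Hilbert transform."""
            dp = np.gradient(projection_row, pitch)
            return np.imag(hilbert(dp))

        row = np.exp(-np.linspace(-3, 3, 256) ** 2)   # toy detector row
        print(two_step_filter(row).shape)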

  7. A combined reconstruction-classification method for diffuse optical tomography

    Energy Technology Data Exchange (ETDEWEB)

    Hiltunen, P [Department of Biomedical Engineering and Computational Science, Helsinki University of Technology, PO Box 3310, FI-02015 TKK (Finland); Prince, S J D; Arridge, S [Department of Computer Science, University College London, Gower Street London, WC1E 6B (United Kingdom)], E-mail: petri.hiltunen@tkk.fi, E-mail: s.prince@cs.ucl.ac.uk, E-mail: s.arridge@cs.ucl.ac.uk

    2009-11-07

    We present a combined classification and reconstruction algorithm for diffuse optical tomography (DOT). DOT is a nonlinear ill-posed inverse problem. Therefore, some regularization is needed. We present a mixture of Gaussians prior, which regularizes the DOT reconstruction step. During each iteration, the parameters of a mixture model are estimated. These associate each reconstructed pixel with one of several classes based on the current estimate of the optical parameters. This classification is exploited to form a new prior distribution to regularize the reconstruction step and update the optical parameters. The algorithm can be described as an iteration between an optimization scheme with zeroth-order variable mean and variance Tikhonov regularization and an expectation-maximization scheme for estimation of the model parameters. We describe the algorithm in a general Bayesian framework. Results from simulated test cases and phantom measurements show that the algorithm enhances the contrast of the reconstructed images with good spatial accuracy. The probabilistic classifications of each image contain only a few misclassified pixels.
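
    The classification half of each iteration can be sketched with an off-the-shelf mixture-of-Gaussians fit: cluster the current pixel estimates, then hand each pixel its class mean as the prior mean for the next regularized reconstruction step. Values below are toy numbers, and the paper's EM scheme is interleaved with the reconstruction rather than this black-box call:

        import numpy as np
        from sklearn.mixture import GaussianMixture

        rng = np.random.default_rng(10)
        mu_est = np.concatenate([rng.normal(0.010, 0.002, 800),   # background class
                                 rng.normal(0.030, 0.003, 200)])  # inclusion class
        gmm = GaussianMixture(n_components=2, random_state=0).fit(mu_est[:, None])
        labels = gmm.predict(mu_est[:, None])
        prior_mean = gmm.means_[labels, 0]   # per-pixel prior for the next iteration
        print(gmm.means_.ravel())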

  8. Vertex Reconstruction in CMS

    CERN Document Server

    Chabanat, E; D'Hondt, J; Vanlaer, P; Prokofiev, K; Speer, T; Frühwirth, R; Waltenberger, W

    2005-01-01

    Because of the high track multiplicity in the final states expected in proton collisions at the LHC experiments, novel vertex reconstruction algorithms are required. The vertex reconstruction problem can be decomposed into a pattern recognition problem ("vertex finding") and an estimation problem ("vertex fitting"). Starting from least-squares methods, ways to render the classical algorithms more robust are discussed and the statistical properties of the novel methods are shown. A whole set of different approaches for the vertex finding problem is presented and compared in relevant physics channels.

  9. Metal artifact reduction image reconstruction algorithm for CT of implanted metal orthopedic devices: a work in progress

    International Nuclear Information System (INIS)

    Liu, Patrick T.; Pavlicek, William P.; Peter, Mary B.; Roberts, Catherine C.; Paden, Robert G.; Spangehl, Mark J.

    2009-01-01

    Despite recent advances in CT technology, metal orthopedic implants continue to cause significant artifacts on many CT exams, often obscuring diagnostic information. We performed this prospective study to evaluate the effectiveness of an experimental metal artifact reduction (MAR) image reconstruction program for CT. We examined image quality on CT exams performed in patients with hip arthroplasties as well as other types of implanted metal orthopedic devices. The exam raw data were reconstructed using two different methods, the standard filtered backprojection (FBP) program and the MAR program. Images were evaluated for quality of the metal-cement-bone interfaces, trabeculae ≤1 cm from the metal, trabeculae 5 cm away from the metal, streak artifact, and overall soft tissue detail. The Wilcoxon rank sum test was used to compare the image scores from the large and small prostheses, and interobserver agreement was calculated. When all patients were grouped together, the MAR images showed mild to moderate improvement over the FBP images. However, when the cases were divided by implant size, the MAR images consistently received higher image quality scores than the FBP images for large metal implants (total hip prostheses). For small metal implants (screws, plates, staples), conversely, the MAR images received lower image quality scores than the FBP images due to blurring artifact. The difference in image scores between the large and small implants was significant (p=0.002). Interobserver agreement was high for all measures of image quality (κ > 0.9). The experimental MAR reconstruction algorithm significantly improved CT image quality for patients with large metal implants. However, the MAR algorithm introduced blurring artifact that reduced image quality with small metal implants. (orig.)

  10. MO-FG-204-04: How Iterative Reconstruction Algorithms Affect the NPS of CT Images

    International Nuclear Information System (INIS)

    Li, G; Liu, X; Dodge, C; Jensen, C; Rong, J

    2015-01-01

    Purpose: To evaluate how the third-generation model-based iterative reconstruction (MBIR) compares with filtered back-projection (FBP), adaptive statistical iterative reconstruction (ASiR), and the second-generation MBIR, based on noise power spectrum (NPS) analysis over a wide range of clinically applicable dose levels. Methods: The Catphan 600 CTP515 module, surrounded by an oval, fat-equivalent ring to mimic patient size/shape, was scanned on a GE HD750 CT scanner at 1, 2, 3, 6, 12 and 19 mGy CTDIvol levels with typical patient scan parameters: 120 kVp, 0.8 s, 40 mm beam width, large SFOV, 0.984 pitch and reconstructed thickness 2.5 mm (VEO3.0: Abd/Pelvis with Texture and NR05). At each CTDIvol level, 10 repeated scans were acquired to achieve sufficient data sampling. The images were reconstructed using the Standard kernel with FBP; 20%, 40% and 70% ASiR; and two versions of MBIR (VEO2.0 and 3.0). To evaluate the effect of ROI spatial location on the resulting NPS, 4 ROI groups were categorized based on their distances from the center of the phantom. Results: VEO3.0 performed worse than VEO2.0 at all dose levels. On the other hand, at low dose levels (less than 3 mGy), it clearly outperformed ASiR and FBP in NPS values; the lower the dose level, the better the relative performance of MBIR. However, the shapes of the NPS show substantial differences in the horizontal and vertical sampling dimensions. These differences may determine the characteristics of the noise/texture features in images and, hence, play an important role in image interpretation. Conclusion: The third-generation MBIR did not improve over the second-generation MBIR in terms of NPS analysis. The overall performance of both versions of MBIR improved relative to the other reconstruction algorithms as dose was reduced. The shapes of the NPS curves provide additional value for future characterization of image noise/texture features.

  11. MO-FG-204-04: How Iterative Reconstruction Algorithms Affect the NPS of CT Images

    Energy Technology Data Exchange (ETDEWEB)

    Li, G; Liu, X; Dodge, C; Jensen, C; Rong, J [UT MD Anderson Cancer Center, Houston, TX (United States)

    2015-06-15

    Purpose: To evaluate how the third-generation model-based iterative reconstruction (MBIR) compares with filtered back-projection (FBP), adaptive statistical iterative reconstruction (ASiR), and the second-generation MBIR, based on noise power spectrum (NPS) analysis over a wide range of clinically applicable dose levels. Methods: The Catphan 600 CTP515 module, surrounded by an oval, fat-equivalent ring to mimic patient size/shape, was scanned on a GE HD750 CT scanner at 1, 2, 3, 6, 12 and 19 mGy CTDIvol levels with typical patient scan parameters: 120 kVp, 0.8 s, 40 mm beam width, large SFOV, 0.984 pitch and reconstructed thickness 2.5 mm (VEO3.0: Abd/Pelvis with Texture and NR05). At each CTDIvol level, 10 repeated scans were acquired to achieve sufficient data sampling. The images were reconstructed using the Standard kernel with FBP; 20%, 40% and 70% ASiR; and two versions of MBIR (VEO2.0 and 3.0). To evaluate the effect of ROI spatial location on the resulting NPS, 4 ROI groups were categorized based on their distances from the center of the phantom. Results: VEO3.0 performed worse than VEO2.0 at all dose levels. On the other hand, at low dose levels (less than 3 mGy), it clearly outperformed ASiR and FBP in NPS values; the lower the dose level, the better the relative performance of MBIR. However, the shapes of the NPS show substantial differences in the horizontal and vertical sampling dimensions. These differences may determine the characteristics of the noise/texture features in images and, hence, play an important role in image interpretation. Conclusion: The third-generation MBIR did not improve over the second-generation MBIR in terms of NPS analysis. The overall performance of both versions of MBIR improved relative to the other reconstruction algorithms as dose was reduced. The shapes of the NPS curves provide additional value for future characterization of image noise/texture features.
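
    The NPS estimate behind such comparisons is straightforward once repeated scans are available: subtract the ensemble mean image to isolate noise, Fourier transform each noise ROI, and average the squared magnitudes. A minimal 2D sketch (normalization conventions vary between groups):

        import numpy as np

        def nps_2d(repeated_rois, pixel_mm=0.5):
            """2D noise power spectrum from a stack of repeated ROIs:
            NPS = (dx*dy / (Nx*Ny)) * mean |FFT of noise-only image|^2."""
            noise = repeated_rois - repeated_rois.mean(axis=0)   # remove structure
            ny, nx = noise.shape[1:]
            spectra = np.abs(np.fft.fft2(noise, axes=(1, 2))) ** 2
            return spectra.mean(axis=0) * pixel_mm ** 2 / (nx * ny)

        rois = np.random.default_rng(11).normal(0, 10, (10, 64, 64))
        print(nps_2d(rois).shape)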

  12. A smartphone photogrammetry method for digitizing prosthetic socket interiors.

    Science.gov (United States)

    Hernandez, Amaia; Lemaire, Edward

    2017-04-01

    Prosthetic CAD/CAM systems require accurate 3D limb models; however, difficulties arise when working from the person's socket, since current 3D scanners have difficulty scanning socket interiors. While dedicated scanners exist, they are expensive and their cost may be prohibitive for a limited number of scans per year. A low-cost and accessible photogrammetry method for socket interior digitization is proposed, using a smartphone camera and cloud-based photogrammetry services. Fifteen two-dimensional images of the socket's interior are captured with a smartphone camera, and a 3D model is generated using cloud-based software. Linear measurements were compared between the sockets and the corresponding 3D models. 3D reconstruction accuracy averaged 2.6 ± 2.0 mm and 0.086 ± 0.078 L, which was less accurate than models obtained by high-quality 3D scanners. However, after processing in prosthetic CAD software, this method would provide a viable, accessible and low-cost 3D digital socket reproduction. Clinical relevance: The described method provides a low-cost and accessible means to digitize a socket interior for use in prosthetic CAD/CAM systems, employing a smartphone camera and cloud-based photogrammetry software.

  13. Influence of adaptive statistical iterative reconstruction algorithm on image quality in coronary computed tomography angiography

    Directory of Open Access Journals (Sweden)

    Helle Precht

    2016-12-01

    Full Text Available Background: Coronary computed tomography angiography (CCTA) requires high spatial and temporal resolution, increased low contrast resolution for the assessment of coronary artery stenosis, plaque detection, and/or non-coronary pathology. Therefore, new reconstruction algorithms, particularly iterative reconstruction (IR) techniques, have been developed in an attempt to improve image quality with no cost in radiation exposure. Purpose: To evaluate whether adaptive statistical iterative reconstruction (ASIR) enhances perceived image quality in CCTA compared to filtered back projection (FBP). Material and Methods: Thirty patients underwent CCTA due to suspected coronary artery disease. Images were reconstructed using FBP, 30% ASIR, and 60% ASIR. Ninety image sets were evaluated by five observers using subjective visual grading analysis (VGA) and assessed by proportional odds modeling. Objective quality assessment (contrast, noise, and the contrast-to-noise ratio [CNR]) was analyzed with linear mixed effects modeling on log-transformed data. The need for ethical approval was waived by the local ethics committee as the study only involved anonymously collected clinical data. Results: VGA showed significant improvements in sharpness by comparing FBP with ASIR, resulting in odds ratios of 1.54 for 30% ASIR and 1.89 for 60% ASIR (P = 0.004). The objective measures showed significant differences between FBP and 60% ASIR (P < 0.0001) for noise, with an estimated ratio of 0.82, and for CNR, with an estimated ratio of 1.26. Conclusion: ASIR improved the subjective image quality of parameter sharpness and, objectively, reduced noise and increased CNR.

  14. MR-guided dynamic PET reconstruction with the kernel method and spectral temporal basis functions

    Science.gov (United States)

    Novosad, Philip; Reader, Andrew J.

    2016-06-01

    Recent advances in dynamic positron emission tomography (PET) reconstruction have demonstrated that it is possible to achieve markedly improved end-point kinetic parameter maps by incorporating a temporal model of the radiotracer directly into the reconstruction algorithm. In this work we have developed a highly constrained, fully dynamic PET reconstruction algorithm incorporating both spectral analysis temporal basis functions and spatial basis functions derived from the kernel method applied to a co-registered T1-weighted magnetic resonance (MR) image. The dynamic PET image is modelled as a linear combination of spatial and temporal basis functions, and a maximum likelihood estimate for the coefficients can be found using the expectation-maximization (EM) algorithm. Following reconstruction, kinetic fitting using any temporal model of interest can be applied. Based on a BrainWeb T1-weighted MR phantom, we performed a realistic dynamic [18F]FDG simulation study with two noise levels, and investigated the quantitative performance of the proposed reconstruction algorithm, comparing it with reconstructions incorporating either spectral analysis temporal basis functions alone or kernel spatial basis functions alone, as well as with conventional frame-independent reconstruction. Compared to the other reconstruction algorithms, the proposed algorithm achieved superior performance, offering a decrease in spatially averaged pixel-level root-mean-square-error on post-reconstruction kinetic parametric maps in the grey/white matter, as well as in the tumours when they were present on the co-registered MR image. When the tumours were not visible in the MR image, reconstruction with the proposed algorithm performed similarly to reconstruction with spectral temporal basis functions and was superior to both conventional frame-independent reconstruction and frame-independent reconstruction with kernel spatial basis functions. Furthermore, we demonstrate that a joint spectral
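
    The coefficient estimation at the core of such basis-function approaches can be illustrated with a generic MLEM update for a Poisson model y ~ Poisson(Ac), where the matrix A stands in for the product of the PET system matrix with the spatial and temporal basis functions. The sketch below (Python, toy dimensions) illustrates the update rule only, not the authors' implementation:

      import numpy as np

      rng = np.random.default_rng(0)
      n_bins, n_coeffs = 200, 20
      A = rng.random((n_bins, n_coeffs))          # system matrix combined with bases
      y = rng.poisson(A @ rng.random(n_coeffs))   # measured counts

      c = np.ones(n_coeffs)                       # non-negative initialisation
      sens = A.sum(axis=0)                        # sensitivity term A^T 1
      for _ in range(100):
          ybar = np.maximum(A @ c, 1e-12)         # expected data under current estimate
          c *= (A.T @ (y / ybar)) / sens          # multiplicative EM update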

  15. Image quality in thoracic 4D cone-beam CT: A sensitivity analysis of respiratory signal, binning method, reconstruction algorithm, and projection angular spacing

    OpenAIRE

    Shieh, Chun-Chien; Kipritidis, John; O’Brien, Ricky T.; Kuncic, Zdenka; Keall, Paul J.

    2014-01-01

    Purpose: Respiratory signal, binning method, and reconstruction algorithm are three major controllable factors affecting image quality in thoracic 4D cone-beam CT (4D-CBCT), which is widely used in image guided radiotherapy (IGRT). Previous studies have investigated each of these factors individually, but no integrated sensitivity analysis has been performed. In addition, projection angular spacing is also a key factor in reconstruction, but how it affects image quality is not obvious. An inv...

  16. Vertex Reconstruction in ATLAS Run II

    CERN Document Server

    Zhang, Matt; The ATLAS collaboration

    2016-01-01

    Vertex reconstruction is the process of taking reconstructed tracks and using them to determine the locations of proton collisions. In this poster we present the performance of our current vertex reconstruction algorithm, and look at investigations into potential improvements from a new seed finding method.

  17. MREIT experiments with 200 µA injected currents: a feasibility study using two reconstruction algorithms, SMM and harmonic BZ

    International Nuclear Information System (INIS)

    Arpinar, V E; Muftuler, L T; Hamamura, M J; Degirmenci, E

    2012-01-01

    Magnetic resonance electrical impedance tomography (MREIT) is a technique that produces images of conductivity in tissues and phantoms. In this technique, electrical currents are applied to an object, the resulting magnetic flux density is measured using magnetic resonance imaging (MRI), and the conductivity distribution is reconstructed from these MRI data. Currently, the technique is used in research environments, primarily studying phantoms and animals. In order to translate MREIT to clinical applications, strict safety standards need to be established, especially for safe current limits. However, there are currently no standards for safe current limits specific to MREIT. Until such standards are established, human MREIT applications need to conform to existing electrical safety standards in medical instrumentation, such as IEC601. This protocol limits patient auxiliary currents to 100 µA for low frequencies. However, published MREIT studies have utilized currents 10–400 times larger than this limit, bringing into question whether the clinical applications of MREIT are attainable under current standards. In this study, we investigated the feasibility of MREIT to accurately reconstruct the relative conductivity of a simple agarose phantom using 200 µA total injected current and tested the performance of two MREIT reconstruction algorithms. The reconstruction algorithms used are the iterative sensitivity matrix method (SMM) by Ider and Birgul (1998 Elektrik 6 215–25) with Tikhonov regularization and the harmonic BZ method proposed by Oh et al (2003 Magn. Reson. Med. 50 875–8). The reconstruction techniques were tested at both 200 µA and 5 mA injected currents to investigate their noise sensitivity under low and high current conditions. It should be noted that a 200 µA total injected current into a cylindrical phantom generates only 14.7 µA of current in the imaging slice. Similarly, a 5 mA total injected current results in 367 µA in the imaging slice. Total acquisition

  18. Development of Image Reconstruction Algorithms in Electrical Capacitance Tomography; Desarrollo de algoritmos de reconstruccion de imagenes en tomografia de capacitancia electrica

    Energy Technology Data Exchange (ETDEWEB)

    Fernandez Marron, J. L.; Alberdi Primicia, J.; Barcala Riveira, J. M.

    2007-12-28

    Electrical capacitance tomography (ECT) has not yet been developed to the point of routine industrial use. This is due, first, to the difficulty of measuring very small capacitances (in the range of femtofarads) and, second, to the problem of reconstructing the images on-line. The latter problem is aggravated by the small number of electrodes (at most 16), which causes the usual reconstruction algorithms to produce many errors. This work describes a new, purely geometrical method that could be used for this purpose. (Author) 4 refs.

  19. Interior thermal insulation systems for historical building envelopes

    Science.gov (United States)

    Jerman, Miloš; Solař, Miloš; Černý, Robert

    2017-11-01

    The design specifics of interior thermal insulation systems applied to historical building envelopes are described. Vapor-tight systems and systems based on capillary thermal insulation materials are considered as the two basic options, differing in their building-physical behavior. The possibilities for hygrothermal analysis of renovated historical envelopes, including laboratory methods, computer simulation techniques, and in-situ tests, are discussed. It is concluded that the application of computational models for the hygrothermal assessment of interior thermal insulation systems should always be performed with particular care. On the one hand, they present a very effective tool for both service life assessment and the planning of possible subsequent reconstructions. On the other hand, the hygrothermal analysis of any historical building can involve quite a few potential uncertainties, which may negatively affect the accuracy of the obtained results.

  20. Interior Design of the “Interior World Center” in Surabaya

    OpenAIRE

    Handojo, Renita Olivia

    2014-01-01

    Interior World Center is a place that gathers all needs and matters related to the world of interior design. Its main purpose is to give people access to, and to make it easier for them to fulfill, their interior-related needs, by combining and integrating several interior activities in one place. Inside the building there are several commercial spaces related to the interior world. The commercial spaces will support each other in order ...

  1. Reconstruction of an InAs nanowire using geometric and algebraic tomography

    International Nuclear Information System (INIS)

    Pennington, R S; Boothroyd, C B; König, S; Alpers, A; Dunin-Borkowski, R E

    2011-01-01

    Geometric tomography and conventional algebraic tomography algorithms are used to reconstruct cross-sections of an InAs nanowire from a tilt series of experimental annular dark-field images. Both algorithms are also applied to a test object to assess what factors affect the reconstruction quality. When using the present algorithms, geometric tomography is faster, but artifacts in the reconstruction may be difficult to recognize.

  2. Reconstruction of an InAs nanowire using geometric and algebraic tomography

    DEFF Research Database (Denmark)

    Pennington, Robert S.; König, S.; Alpers, A.

    2011-01-01

    Geometric tomography and conventional algebraic tomography algorithms are used to reconstruct cross-sections of an InAs nanowire from a tilt series of experimental annular dark-field images. Both algorithms are also applied to a test object to assess what factors affect the reconstruction quality....... When using the present algorithms, geometric tomography is faster, but artifacts in the reconstruction may be difficult to recognize....

  3. Statistical reconstruction for cosmic ray muon tomography.

    Science.gov (United States)

    Schultz, Larry J; Blanpied, Gary S; Borozdin, Konstantin N; Fraser, Andrew M; Hengartner, Nicolas W; Klimenko, Alexei V; Morris, Christopher L; Orum, Chris; Sossong, Michael J

    2007-08-01

    Highly penetrating cosmic ray muons constantly shower the earth at a rate of about 1 muon per cm² per minute. We have developed a technique which exploits the multiple Coulomb scattering of these particles to perform nondestructive inspection without the use of artificial radiation. In prior work [1]-[3], we have described heuristic methods for processing muon data to create reconstructed images. In this paper, we present a maximum likelihood/expectation maximization tomographic reconstruction algorithm designed for the technique. This algorithm borrows much from techniques used in medical imaging, particularly emission tomography, but the statistics of muon scattering dictates differences. We describe the statistical model for multiple scattering, derive the reconstruction algorithm, and present simulated examples. We also propose methods to improve the robustness of the algorithm to experimental errors and events departing from the statistical model.
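
    As a toy illustration of the scattering statistics exploited here (not the paper's reconstruction algorithm), the sketch below simulates Gaussian multiple-scattering angles through a single voxel and recovers the scattering density by maximum likelihood; all numbers are hypothetical:

      import numpy as np

      rng = np.random.default_rng(1)
      lam_true = 2.5e-3          # scattering density (angle^2 per unit length)
      L = 100.0                  # muon path length through the voxel
      # For small angles, theta ~ N(0, lam * L) under the Gaussian model
      theta = rng.normal(0.0, np.sqrt(lam_true * L), size=10_000)

      lam_hat = np.mean(theta ** 2) / L      # ML estimate of the scattering density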

  4. Jet Vertex Charge Reconstruction

    CERN Document Server

    Nektarijevic, Snezana; The ATLAS collaboration

    2015-01-01

    A newly developed algorithm called the jet vertex charge tagger, aimed at identifying the sign of the charge of jets containing $b$-hadrons, referred to as $b$-jets, is presented. In addition to the well established track-based jet charge determination, this algorithm introduces the so-called 'jet vertex charge' reconstruction, which exploits the charge information associated with the displaced vertices within the jet. Furthermore, the charge of a soft muon contained in the jet is taken into account when available. All available information is combined into a multivariate discriminator. The algorithm has been developed on jets matched to generator-level $b$-hadrons provided by $t\bar{t}$ events simulated at $\sqrt{s}$ = 13 TeV using the full ATLAS detector simulation and reconstruction.

  5. Model-based image reconstruction in X-ray computed tomography

    NARCIS (Netherlands)

    Zbijewski, Wojciech Bartosz

    2006-01-01

    The thesis investigates the applications of iterative, statistical reconstruction (SR) algorithms in X-ray Computed Tomography. Emphasis is put on various aspects of system modeling in statistical reconstruction. Fundamental issues such as effects of object discretization and algorithm

  6. Industrial Computed Tomography using Proximal Algorithm

    KAUST Repository

    Zang, Guangming

    2016-04-14

    In this thesis, we present ProxiSART, a flexible proximal framework for robust 3D cone beam tomographic reconstruction based on the Simultaneous Algebraic Reconstruction Technique (SART). We derive the proximal operator for the SART algorithm and use it for minimizing the data term in a proximal algorithm. We show the flexibility of the framework by plugging in different powerful regularizers, and show its robustness in achieving better reconstruction results in the presence of noise and using fewer projections. We compare our framework to state-of-the-art methods and existing popular software tomography reconstruction packages, on both synthetic and real datasets, and show superior reconstruction quality, especially from noisy data and a small number of projections.

  7. Fully automated reconstruction of three-dimensional vascular tree structures from two orthogonal views using computational algorithms and production rules

    Science.gov (United States)

    Liu, Iching; Sun, Ying

    1992-10-01

    A system for reconstructing 3-D vascular structure from two orthogonally projected images is presented. The formidable problem of matching segments between two views is solved using knowledge of the epipolar constraint and the similarity of segment geometry and connectivity. The knowledge is represented in a rule-based system, which also controls the operation of several computational algorithms for tracking segments in each image, representing 2-D segments with directed graphs, and reconstructing 3-D segments from matching 2-D segment pairs. Uncertain reasoning governs the interaction between segmentation and matching; it also provides a framework for resolving the matching ambiguities in an iterative way. The system was implemented in the C language and the C Language Integrated Production System (CLIPS) expert system shell. Using video images of a tree model, the standard deviation of reconstructed centerlines was estimated to be 0.8 mm (1.7 mm) when the view direction was parallel (perpendicular) to the epipolar plane. Feasibility of clinical use was shown using x-ray angiograms of a human chest phantom. The correspondence of vessel segments between two views was accurate. Computational time for the entire reconstruction process was under 30 s on a workstation. A fully automated system for two-view reconstruction that does not require the a priori knowledge of vascular anatomy is demonstrated.

  8. MM Algorithms for Geometric and Signomial Programming.

    Science.gov (United States)

    Lange, Kenneth; Zhou, Hua

    2014-02-01

    This paper derives new algorithms for signomial programming, a generalization of geometric programming. The algorithms are based on a generic principle for optimization called the MM algorithm. In this setting, one can apply the geometric-arithmetic mean inequality and a supporting hyperplane inequality to create a surrogate function with parameters separated. Thus, unconstrained signomial programming reduces to a sequence of one-dimensional minimization problems. Simple examples demonstrate that the MM algorithm derived can converge to a boundary point or to one point of a continuum of minimum points. Conditions under which the minimum point is unique or occurs in the interior of parameter space are proved for geometric programming. Convergence to an interior point occurs at a linear rate. Finally, the MM framework easily accommodates equality and inequality constraints of signomial type. For the most important special case, constrained quadratic programming, the MM algorithm involves very simple updates.
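
    A one-dimensional toy run of the MM principle (using a quadratic majorizer of |x|, not the paper's geometric/signomial derivation) shows the characteristic monotone but sometimes slow descent:

      # Toy MM: minimise f(x) = (x - 1)^2 + 2|x|, whose minimiser is x = 0.
      # For x_k != 0, majorise 2|x| by x^2/|x_k| + |x_k| (touches |x| at x_k).
      f = lambda x: (x - 1.0) ** 2 + 2.0 * abs(x)

      x = 5.0
      for k in range(50):
          # Surrogate minimiser: 2(x - 1) + 2x/|x_k| = 0  =>  x = |x_k|/(|x_k| + 1)
          x = abs(x) / (abs(x) + 1.0)

      print(x, f(x))   # x -> 0 sublinearly, illustrating guaranteed MM descent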

  9. On the Evaluation of Computational Results Obtained from Solving Systems of Linear Equations with Matlab: The Dual Affine Scaling Interior Point Method

    International Nuclear Information System (INIS)

    Murfi, Hendri; Basaruddin, T.

    2001-01-01

    The interior point method for linear programming has gained extraordinary interest as an alternative to the simplex method since Karmarkar presented a polynomial-time algorithm for linear programming based on interior point methods. In implementations of this method, two factors heavily impact the performance of the algorithm: the data structure and the method used to solve the linear equation system arising in the algorithm. This paper describes the solution of the linear equation system in a variant of the method called the dual affine scaling algorithm. We then experimentally evaluate the results of several methods, both direct and iterative. The experimental evaluation used Matlab.
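
    Each dual affine scaling iteration reduces to a symmetric positive-definite system of normal equations, A D^2 A^T dy = r, where D is a diagonal scaling built from the current iterate. The sketch below (Python with NumPy/SciPy rather than the paper's Matlab; random stand-in data) contrasts the direct and iterative options the paper evaluates:

      import numpy as np
      from scipy.sparse.linalg import cg

      rng = np.random.default_rng(2)
      m, n = 50, 200
      A = rng.random((m, n))
      d = rng.random(n) + 0.1            # diagonal of the scaling matrix D
      r = rng.random(m)                  # current right-hand side

      M = (A * d ** 2) @ A.T             # normal-equations matrix A D^2 A^T

      dy_direct = np.linalg.solve(M, r)  # direct method (dense factorisation)
      dy_cg, info = cg(M, r)             # iterative method (conjugate gradients)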

  10. Speeding up image reconstruction in computed tomography

    CERN Multimedia

    CERN. Geneva

    2018-01-01

    Computed tomography (CT) is a technique for imaging cross-sections of an object using X-ray measurements taken from different angles. In recent decades significant progress has been made in this area: today, advanced algorithms allow fast image reconstruction and high-quality images even with missing or dirty data, modern detectors provide high resolution without increasing the radiation dose, and high-performance multi-core computing devices help us solve such tasks even faster. I will start with CT basics, then briefly present the existing classes of reconstruction algorithms and their differences. After that I will proceed to employing the distinctive architectural features of modern multi-core devices (CPUs and GPUs) and popular programming interfaces (OpenMP, MPI, CUDA, OpenCL) for developing efficient parallel realizations of image reconstruction algorithms. Decreasing the full reconstruction time from long hours to minutes or even seconds has a revolutionary impact in diagnostic medicine and industria...

  11. Three-dimensional dictionary-learning reconstruction of (23)Na MRI data.

    Science.gov (United States)

    Behl, Nicolas G R; Gnahm, Christine; Bachert, Peter; Ladd, Mark E; Nagel, Armin M

    2016-04-01

    To reduce noise and artifacts in (23)Na MRI with a Compressed Sensing reconstruction and a learned dictionary as sparsifying transform. A three-dimensional dictionary-learning compressed sensing reconstruction algorithm (3D-DLCS) for the reconstruction of undersampled 3D radial (23)Na data is presented. The dictionary used as the sparsifying transform is learned with a K-singular-value-decomposition (K-SVD) algorithm. The reconstruction parameters are optimized on simulated data, and the quality of the reconstructions is assessed with peak signal-to-noise ratio (PSNR) and structural similarity (SSIM). The performance of the algorithm is evaluated in phantom and in vivo (23)Na MRI data of seven volunteers and compared with nonuniform fast Fourier transform (NUFFT) and other Compressed Sensing reconstructions. The reconstructions of simulated data have maximal PSNR and SSIM for an undersampling factor (USF) of 10 with numbers of averages equal to the USF. For 10-fold undersampling, the PSNR is increased by 5.1 dB compared with the NUFFT reconstruction, and the SSIM by 24%. These results are confirmed by phantom and in vivo (23)Na measurements in the volunteers that show markedly reduced noise and undersampling artifacts in the case of 3D-DLCS reconstructions. The 3D-DLCS algorithm enables precise reconstruction of undersampled (23)Na MRI data with markedly reduced noise and artifact levels compared with NUFFT reconstruction. Small structures are well preserved. © 2015 Wiley Periodicals, Inc.
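
    The patch-based sparsifying step can be sketched with scikit-learn's online dictionary learning standing in for K-SVD (an illustration only; the paper learns a 3D dictionary inside its 3D-DLCS reconstruction):

      import numpy as np
      from sklearn.decomposition import MiniBatchDictionaryLearning
      from sklearn.feature_extraction.image import extract_patches_2d

      rng = np.random.default_rng(3)
      image = rng.random((64, 64))                  # stand-in for one image slice
      patches = extract_patches_2d(image, (8, 8), max_patches=2000, random_state=0)
      X = patches.reshape(len(patches), -1)

      dico = MiniBatchDictionaryLearning(n_components=100, alpha=1.0,
                                         transform_algorithm='omp',
                                         transform_n_nonzero_coefs=5,
                                         random_state=0)
      dico.fit(X)                                   # learn the dictionary atoms
      codes = dico.transform(X)                     # sparse codes via OMP
      approx = codes @ dico.components_             # sparse patch approximations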

  12. Image reconstruction from limited angle Compton camera data

    International Nuclear Information System (INIS)

    Tomitani, T.; Hirasawa, M.

    2002-01-01

    The Compton camera is used for imaging the directional distribution of γ rays in γ-ray telescopes for astrophysics and for imaging radioisotope distributions in nuclear medicine, without the need for collimators. The camera measures the integral of the γ rays over a cone, so that some sort of inversion method is needed. Parra found an analytical inversion algorithm based on a spherical harmonics expansion of the projection data. His algorithm is applicable to the full set of projection data. In this paper, six possible reconstruction algorithms that allow image reconstruction from projections with a finite range of scattering angles are investigated. Four algorithms have instability problems and the two others are practical. However, the variance of the reconstructed image diverges in these two cases, so window functions are introduced with which the variance becomes finite at a cost of spatial resolution. These two algorithms are compared in terms of variance. The algorithm based on the inversion of the summed back-projection is superior to the algorithm based on the inversion of the summed projection. (author)

  13. Exact Reconstruction From Uniformly Attenuated Helical Cone-Beam Projections in SPECT

    International Nuclear Information System (INIS)

    Gullberg, Grant T.; Huang, Qiu; You, Jiangsheng; Zeng, Gengsheng L.

    2008-01-01

    In recent years the development of cone-beam reconstruction algorithms has been an active research area in x-ray computed tomography (CT), and significant progress has been made in the advancement of algorithms. Theoretically exact and computationally efficient analytical algorithms can be found in the literature. However, in single photon emission computed tomography (SPECT), published cone-beam reconstruction algorithms are either approximate or involve iterative methods. The SPECT reconstruction problem is more complicated due to degradations in the imaging detection process, one of which is the effect of attenuation of gamma ray photons. Attenuation should be compensated for to obtain quantitative results. In this paper, an analytical reconstruction algorithm for uniformly attenuated cone-beam projection data is presented for SPECT imaging. The algorithm adopts the DBH method, a procedure consisting of differentiation and backprojection followed by a finite inverse cosh-weighted Hilbert transform. The significance of the proposed approach is that a selected region of interest can be reconstructed even with a detector with a reduced field of view. The algorithm is designed for a general trajectory. However, to validate the algorithm, a numerical study was performed using a helical trajectory. The implementation is efficient and the simulation result is promising

  14. Computer-assisted lung nodule volumetry from multi-detector row CT: Influence of image reconstruction parameters

    International Nuclear Information System (INIS)

    Honda, Osamu; Sumikawa, Hiromitsu; Johkoh, Takeshi; Tomiyama, Noriyuki; Mihara, Naoki; Inoue, Atsuo; Tsubamoto, Mitsuko; Natsag, Javzandulam; Hamada, Seiki; Nakamura, Hironobu

    2007-01-01

    Purpose: To investigate differences in the volumetric measurement of pulmonary nodules caused by changing the reconstruction parameters for multi-detector row CT. Materials and methods: Thirty-nine pulmonary nodules less than 2 cm in diameter were examined by multi-slice CT. All nodules were solid and located in the peripheral part of the lungs. Images were reconstructed with the resulting 48 parameter combinations by changing slice thickness (1.25, 2.5, 3.75, or 5 mm), field of view (FOV: 10, 20, or 30 cm), algorithm (high-spatial-frequency or low-spatial-frequency algorithm) and reconstruction interval (reconstruction with 50% overlapping of the reconstructed slices or non-overlapping reconstruction). Volumetric measurements were calculated using commercially available software. The differences between nodule volumes were analyzed by the Kruskal-Wallis test and the Wilcoxon signed-ranks test. Results: The diameter of the nodules was 8.7 ± 2.7 mm on average, ranging from 4.3 to 16.4 mm. Pulmonary nodule volume did not change significantly with changes in slice thickness or FOV (p > 0.05), but was significantly larger with the high-spatial-frequency algorithm than with the low-spatial-frequency algorithm (p < 0.05), except for one reconstruction parameter. The volumes determined by non-overlapping reconstruction were significantly larger than those of overlapping reconstruction (p < 0.05), except for the 1.25 mm thickness with 10 cm FOV with the high-spatial-frequency algorithm, and the 5 mm thickness. The maximum difference in measured volume was 16% on average, between the 1.25 mm slice thickness/10 cm FOV/high-spatial-frequency algorithm parameters with overlapping reconstruction and the other settings. Conclusion: Volumetric measurements of pulmonary nodules differ with changes in the reconstruction parameters, with a tendency toward larger volumes with the high-spatial-frequency algorithm and non-overlapping reconstruction compared to the low-spatial-frequency algorithm and overlapping reconstruction.

  15. Interval-based reconstruction for uncertainty quantification in PET

    Science.gov (United States)

    Kucharczak, Florentin; Loquin, Kevin; Buvat, Irène; Strauss, Olivier; Mariano-Goulart, Denis

    2018-02-01

    A new directed interval-based tomographic reconstruction algorithm, called non-additive interval-based expectation maximization (NIBEM), is presented. It uses non-additive modeling of the forward operator, which provides intervals instead of single-valued projections. The approach is an extension of the maximum-likelihood expectation-maximization algorithm to intervals. The main motivation for this extension is that the resulting intervals have appealing properties for estimating the statistical uncertainty associated with the reconstructed activity values. After reviewing previously published theoretical concepts related to interval-based projectors, this paper describes the NIBEM algorithm and gives examples that highlight the properties and advantages of this interval-valued reconstruction.

  16. Ray tracing reconstruction investigation for C-arm tomosynthesis

    Science.gov (United States)

    Malalla, Nuhad A. Y.; Chen, Ying

    2016-04-01

    C-arm tomosynthesis is a three-dimensional imaging technique. Both the x-ray source and the detector are mounted on a C-arm wheeled structure to provide a wide variety of movement around the object. In this paper, C-arm tomosynthesis was introduced to provide three-dimensional information over a limited view angle (less than 180°) to reduce radiation exposure and examination time. Reconstruction algorithms based on the ray tracing method, such as ray tracing back projection (BP), the simultaneous algebraic reconstruction technique (SART) and maximum likelihood expectation maximization (MLEM), were developed for C-arm tomosynthesis. C-arm tomosynthesis projection images of a simulated spherical object were generated with a virtual geometric configuration with a total view angle of 40 degrees. This study demonstrated the sharpness of the in-plane reconstructed structure and the effectiveness of removing out-of-plane blur for each reconstruction algorithm. Results showed the ability of ray tracing based reconstruction algorithms to provide three-dimensional information with limited-angle C-arm tomosynthesis.
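
    A minimal SART iteration for a generic linear ray model y = Ax (one row of A per ray, one column per voxel; the matrix, relaxation factor and iteration count are illustrative assumptions, not the paper's implementation) can be sketched as:

      import numpy as np

      def sart(A, y, n_iter=20, relax=0.5):
          x = np.zeros(A.shape[1])
          row_sums = np.maximum(A.sum(axis=1), 1e-12)   # ray lengths through volume
          col_sums = np.maximum(A.sum(axis=0), 1e-12)
          for _ in range(n_iter):
              residual = (y - A @ x) / row_sums         # row-normalised residual
              x += relax * (A.T @ residual) / col_sums  # back-project and update
          return x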

  17. Vertex Reconstruction in the ATLAS Experiment at the LHC

    CERN Document Server

    Bouhova-Thacker, E; The ATLAS collaboration; Kostyukhin, V; Liebig, W; Limper, M; Piacquadio, G; Lichard, P; Weiser, C; Wildauer, A

    2009-01-01

    In the harsh environment of the Large Hadron Collider at CERN (design luminosity of $10^{34}$ cm$^{-2}$ s$^{-1}$), efficient reconstruction of vertices is crucial for many physics analyses. Described in this paper are the strategies for vertex reconstruction used in the ATLAS experiment and their implementation in the software framework Athena. The algorithms for the reconstruction of primary and secondary vertices, as well as for the finding of photon conversions and vertex reconstruction in jets, are described. Special emphasis is placed on vertex fitting with the application of additional constraints. The implementation of the mentioned algorithms follows a very modular design based on object-oriented C++ and the use of abstract interfaces. The user-friendly concept allows event reconstruction and physics analyses to compare and optimize their choice among different vertex reconstruction strategies. The performance of the implemented algorithms has been studied on a variety of Monte Carlo samples and results are presented.

  18. Fast parallel algorithm for CT image reconstruction.

    Science.gov (United States)

    Flores, Liubov A; Vidal, Vicent; Mayo, Patricia; Rodenas, Francisco; Verdú, Gumersindo

    2012-01-01

    In X-ray computed tomography (CT), X-rays are used to obtain the projection data needed to generate an image of the inside of an object. The image can be generated with different techniques. Iterative methods are more suitable for the reconstruction of images with high contrast and precision in noisy conditions and from a small number of projections. Their use may be important in portable scanners for their functionality in emergency situations. However, in practice, these methods are not widely used due to the high computational cost of their implementation. In this work we analyze iterative parallel image reconstruction with the Portable, Extensible Toolkit for Scientific Computation (PETSc).

  19. Optimal data replication: A new approach to optimizing parallel EM algorithms on a mesh-connected multiprocessor for 3D PET image reconstruction

    International Nuclear Information System (INIS)

    Chen, C.M.; Lee, S.Y.

    1995-01-01

    The EM algorithm promises an estimated image with the maximal likelihood for 3D PET image reconstruction. However, due to its long computation time, the EM algorithm has not been widely used in practice. While several parallel implementations of the EM algorithm have been developed to make the EM algorithm feasible, they do not guarantee an optimal parallelization efficiency. In this paper, the authors propose a new parallel EM algorithm which maximizes the performance by optimizing data replication on a mesh-connected message-passing multiprocessor. To optimize data replication, the authors have formally derived the optimal allocation of shared data, group sizes, integration and broadcasting of replicated data as well as the scheduling of shared data accesses. The proposed parallel EM algorithm has been implemented on an iPSC/860 with 16 PEs. The experimental and theoretical results, which are consistent with each other, have shown that the proposed parallel EM algorithm could improve performance substantially over those using unoptimized data replication

  20. Interior Point Methods for Large-Scale Nonlinear Programming

    Czech Academy of Sciences Publication Activity Database

    Lukšan, Ladislav; Matonoha, Ctirad; Vlček, Jan

    2005-01-01

    Roč. 20, č. 4-5 (2005), s. 569-582 ISSN 1055-6788 R&D Projects: GA AV ČR IAA1030405 Institutional research plan: CEZ:AV0Z10300504 Keywords : nonlinear programming * interior point methods * KKT systems * indefinite preconditioners * filter methods * algorithms Subject RIV: BA - General Mathematics Impact factor: 0.477, year: 2005

  1. Development of regularized expectation maximization algorithms for fan-beam SPECT data

    International Nuclear Information System (INIS)

    Kim, Soo Mee; Lee, Jae Sung; Lee, Dong Soo; Lee, Soo Jin; Kim, Kyeong Min

    2005-01-01

    SPECT using a fan-beam collimator improves spatial resolution and sensitivity. For the reconstruction from fan-beam projections, it is necessary to implement direct fan-beam reconstruction methods without transforming the data into the parallel geometry. In this study, various fan-beam reconstruction algorithms were implemented and their performances were compared. The projector for fan-beam SPECT was implemented using a ray-tracing method. The direct reconstruction algorithms implemented for fan-beam projection data were FBP (filtered backprojection), EM (expectation maximization), OS-EM (ordered subsets EM) and MAP-EM OSL (maximum a posteriori EM using the one-step late method) with membrane and thin-plate models as priors. For comparison, the fan-beam projection data were also rebinned into the parallel data using various interpolation methods, such as the nearest neighbor, bilinear and bicubic interpolations, and reconstructed using the conventional EM algorithm for parallel data. Noiseless and noisy projection data from the digital Hoffman brain and Shepp/Logan phantoms were reconstructed using the above algorithms. The reconstructed images were compared in terms of a percent error metric. For the fan-beam data with Poisson noise, the MAP-EM OSL algorithm with the thin-plate prior showed the best result in both percent error and stability. Bilinear interpolation was the most effective method for rebinning from the fan-beam to parallel geometry when the accuracy and computation load were considered. Direct fan-beam EM reconstructions were more accurate than the standard EM reconstructions obtained from rebinned parallel data. Direct fan-beam reconstruction algorithms were implemented, which provided significantly improved reconstructions
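
    The EM and OS-EM updates compared above have a compact generic form for Poisson data y ~ Poisson(Ax); the sketch below (Python, dense toy system matrix standing in for the paper's ray-tracing projector) cycles EM-like updates over interleaved projection subsets:

      import numpy as np

      def os_em(A, y, n_subsets=4, n_iter=10):
          n_bins, n_pix = A.shape
          x = np.ones(n_pix)                                  # positive start
          subsets = [np.arange(s, n_bins, n_subsets) for s in range(n_subsets)]
          for _ in range(n_iter):
              for idx in subsets:                             # one update per subset
                  As, ys = A[idx], y[idx]
                  ybar = np.maximum(As @ x, 1e-12)
                  x *= (As.T @ (ys / ybar)) / np.maximum(As.sum(axis=0), 1e-12)
          return x

    With n_subsets = 1 this reduces to ordinary MLEM; larger subset counts trade some convergence guarantees for roughly proportional acceleration.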

  2. Photoelectron holography with improved image reconstruction

    Energy Technology Data Exchange (ETDEWEB)

    Matsushita, Tomohiro, E-mail: matusita@spring8.or.j [Japan Synchrotron Radiation Research Institute (JASRI), SPring-8, 1-1-1 Kouto, Sayo-cho, Sayo-gun Hyogo 679-5198 (Japan); Matsui, Fumihiko; Daimon, Hiroshi [Nara Institute of Science and Technology (NAIST), 8916-5 Takayama, Ikoma, Nara 630-0192 (Japan); Hayashi, Kouichi [Institute for Materials Research, Tohoku University, Sendai 980-8577 (Japan)

    2010-05-15

    Electron holography is a type of atomic structural analysis, and it has unique features such as element selectivity and the ability to analyze the structure around an impurity in a crystal. In this paper, we introduce the measurement system, electron holograms, a theory for the recording process of an electron hologram, and a theory for the reconstruction algorithm. We describe photoelectron holograms, Auger electron holograms, and the inverse mode of an electron hologram. The reconstruction algorithm, scattering pattern extraction algorithm (SPEA), the SPEA with maximum entropy method (SPEA-MEM), and SPEA-MEM with translational operation are also described.

  3. Photoelectron holography with improved image reconstruction

    International Nuclear Information System (INIS)

    Matsushita, Tomohiro; Matsui, Fumihiko; Daimon, Hiroshi; Hayashi, Kouichi

    2010-01-01

    Electron holography is a type of atomic structural analysis, and it has unique features such as element selectivity and the ability to analyze the structure around an impurity in a crystal. In this paper, we introduce the measurement system, electron holograms, a theory for the recording process of an electron hologram, and a theory for the reconstruction algorithm. We describe photoelectron holograms, Auger electron holograms, and the inverse mode of an electron hologram. The reconstruction algorithm, scattering pattern extraction algorithm (SPEA), the SPEA with maximum entropy method (SPEA-MEM), and SPEA-MEM with translational operation are also described.

  4. Geometric reconstruction methods for electron tomography

    Energy Technology Data Exchange (ETDEWEB)

    Alpers, Andreas, E-mail: alpers@ma.tum.de [Zentrum Mathematik, Technische Universität München, D-85747 Garching bei München (Germany); Gardner, Richard J., E-mail: Richard.Gardner@wwu.edu [Department of Mathematics, Western Washington University, Bellingham, WA 98225-9063 (United States); König, Stefan, E-mail: koenig@ma.tum.de [Zentrum Mathematik, Technische Universität München, D-85747 Garching bei München (Germany); Pennington, Robert S., E-mail: robert.pennington@uni-ulm.de [Center for Electron Nanoscopy, Technical University of Denmark, DK-2800 Kongens Lyngby (Denmark); Boothroyd, Chris B., E-mail: ChrisBoothroyd@cantab.net [Ernst Ruska-Centre for Microscopy and Spectroscopy with Electrons and Peter Grünberg Institute, Forschungszentrum Jülich, D-52425 Jülich (Germany); Houben, Lothar, E-mail: l.houben@fz-juelich.de [Ernst Ruska-Centre for Microscopy and Spectroscopy with Electrons and Peter Grünberg Institute, Forschungszentrum Jülich, D-52425 Jülich (Germany); Dunin-Borkowski, Rafal E., E-mail: rdb@fz-juelich.de [Ernst Ruska-Centre for Microscopy and Spectroscopy with Electrons and Peter Grünberg Institute, Forschungszentrum Jülich, D-52425 Jülich (Germany); Joost Batenburg, Kees, E-mail: Joost.Batenburg@cwi.nl [Centrum Wiskunde and Informatica, NL-1098XG, Amsterdam, The Netherlands and Vision Lab, Department of Physics, University of Antwerp, B-2610 Wilrijk (Belgium)

    2013-05-15

    Electron tomography is becoming an increasingly important tool in materials science for studying the three-dimensional morphologies and chemical compositions of nanostructures. The image quality obtained by many current algorithms is seriously affected by the problems of missing wedge artefacts and non-linear projection intensities due to diffraction effects. The former refers to the fact that data cannot be acquired over the full 180° tilt range; the latter implies that for some orientations, crystalline structures can show strong contrast changes. To overcome these problems we introduce and discuss several algorithms from the mathematical fields of geometric and discrete tomography. The algorithms incorporate geometric prior knowledge (mainly convexity and homogeneity), which also in principle considerably reduces the number of tilt angles required. Results are discussed for the reconstruction of an InAs nanowire. - Highlights: ► Four algorithms for electron tomography are introduced that utilize prior knowledge. ► Objects are assumed to be homogeneous; convexity and regularity is also discussed. ► We are able to reconstruct slices of a nanowire from as few as four projections. ► Algorithms should be selected based on the specific reconstruction task at hand.

  5. Geometric reconstruction methods for electron tomography

    International Nuclear Information System (INIS)

    Alpers, Andreas; Gardner, Richard J.; König, Stefan; Pennington, Robert S.; Boothroyd, Chris B.; Houben, Lothar; Dunin-Borkowski, Rafal E.; Joost Batenburg, Kees

    2013-01-01

    Electron tomography is becoming an increasingly important tool in materials science for studying the three-dimensional morphologies and chemical compositions of nanostructures. The image quality obtained by many current algorithms is seriously affected by the problems of missing wedge artefacts and non-linear projection intensities due to diffraction effects. The former refers to the fact that data cannot be acquired over the full 180° tilt range; the latter implies that for some orientations, crystalline structures can show strong contrast changes. To overcome these problems we introduce and discuss several algorithms from the mathematical fields of geometric and discrete tomography. The algorithms incorporate geometric prior knowledge (mainly convexity and homogeneity), which also in principle considerably reduces the number of tilt angles required. Results are discussed for the reconstruction of an InAs nanowire. - Highlights: ► Four algorithms for electron tomography are introduced that utilize prior knowledge. ► Objects are assumed to be homogeneous; convexity and regularity is also discussed. ► We are able to reconstruct slices of a nanowire from as few as four projections. ► Algorithms should be selected based on the specific reconstruction task at hand

  6. Super resolution reconstruction of μ-CT image of rock sample using neighbour embedding algorithm

    Science.gov (United States)

    Wang, Yuzhu; Rahman, Sheik S.; Arns, Christoph H.

    2018-03-01

    X-ray micro-computed tomography (μ-CT) is considered to be the most effective way to obtain the inner structure of a rock sample without destroying it. However, its limited resolution hampers its ability to probe sub-micron structures, which are critical for flow transport in rock samples. In this study, we propose a methodology to improve the resolution of a μ-CT image using a neighbour embedding algorithm, where low-frequency information is provided by the μ-CT image itself while high-frequency information is supplemented by a high-resolution scanning electron microscopy (SEM) image. To obtain a prior for the reconstruction, a large number of image patch pairs containing high- and low-resolution patches are extracted from the Gaussian image pyramid generated from the SEM image. These image patch pairs contain abundant information about the appearance of local porous structures across resolution spaces. Relying on the assumption of self-similarity of the porous structure, this prior information can be used to effectively supervise the reconstruction of the high-resolution μ-CT image. The experimental results show that the proposed method achieves state-of-the-art performance.
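
    The core of the neighbour embedding step can be sketched as follows (Python; the patch extraction and the paired training sets are assumed to be given, and the local weights are the usual LLE-style least-squares weights):

      import numpy as np
      from sklearn.neighbors import NearestNeighbors

      def ne_super_resolve(lr_patches, train_lr, train_hr, k=5, reg=1e-6):
          """Map each LR patch to an HR estimate via k paired training patches."""
          nn = NearestNeighbors(n_neighbors=k).fit(train_lr)
          _, idx = nn.kneighbors(lr_patches)
          hr_out = np.zeros((len(lr_patches), train_hr.shape[1]))
          for i, (p, nbrs) in enumerate(zip(lr_patches, idx)):
              Z = train_lr[nbrs] - p                 # neighbours centred on query
              G = Z @ Z.T + reg * np.eye(k)          # regularised local Gram matrix
              w = np.linalg.solve(G, np.ones(k))
              w /= w.sum()                           # reconstruction weights, sum 1
              hr_out[i] = w @ train_hr[nbrs]         # combine paired HR patches
          return hr_out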

  7. Current constrained voltage scaled reconstruction (CCVSR) algorithm for MR-EIT and its performance with different probing current patterns

    International Nuclear Information System (INIS)

    Birguel, Oezlem; Eyueboglu, B Murat; Ider, Y Ziya

    2003-01-01

    Conventional injected-current electrical impedance tomography (EIT) and magnetic resonance imaging (MRI) techniques can be combined to reconstruct high resolution true conductivity images. The magnetic flux density distribution generated by the internal current density distribution is extracted from MR phase images. This information is used to form a fine detailed conductivity image using an Ohm's law based update equation. The reconstructed conductivity image is assumed to differ from the true image by a scale factor. EIT surface potential measurements are then used to scale the reconstructed image in order to find the true conductivity values. This process is iterated until a stopping criterion is met. Several simulations are carried out for opposite and cosine current injection patterns to select the best current injection pattern for a 2D thorax model. The contrast resolution and accuracy of the proposed algorithm are also studied. In all simulation studies, realistic noise models for voltage and magnetic flux density measurements are used. It is shown that, in contrast to the conventional EIT techniques, the proposed method has the capability of reconstructing conductivity images with uniform and high spatial resolution. The spatial resolution is limited by the larger element size of the finite element mesh and twice the magnetic resonance image pixel size

  8. Acceleration of iterative tomographic reconstruction using graphics processors

    International Nuclear Information System (INIS)

    Belzunce, M.A.; Osorio, A.; Verrastro, C.A.

    2009-01-01

    Using iterative algorithms for image reconstruction in 3D positron emission tomography has been shown to produce images of better quality than analytical methods. However, these algorithms are computationally expensive. New graphics processing units (GPUs) provide high performance at low cost, together with programming tools that make it possible to execute parallel algorithms easily in scientific applications. In this work, we try to achieve an acceleration of image reconstruction algorithms in 3D PET by using a GPU. A parallel implementation of the 3D ML-EM algorithm was developed using the Siddon algorithm as projector and back-projector. Results show that accelerations of more than one order of magnitude can be achieved while keeping similar image quality. (author)

  9. Computational acceleration for MR image reconstruction in partially parallel imaging.

    Science.gov (United States)

    Ye, Xiaojing; Chen, Yunmei; Huang, Feng

    2011-05-01

    In this paper, we present a fast numerical algorithm for solving total variation and l1 (TVL1) based image reconstruction with application in partially parallel magnetic resonance imaging. Our algorithm uses a variable splitting method to reduce the computational cost. Moreover, the Barzilai-Borwein step size selection method is adopted in our algorithm for much faster convergence. Experimental results on clinical partially parallel imaging data demonstrate that the proposed algorithm requires many fewer iterations and/or less computational cost than the recently developed operator splitting and Bregman operator splitting methods, which can deal with a general sensing matrix in the reconstruction framework, to get similar or even better quality of reconstructed images.
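
    The Barzilai-Borwein rule mentioned above sets the step size from the two most recent iterates and gradients; a minimal sketch on a generic smooth objective follows (Python; the gradient function is a hypothetical placeholder, not the paper's TVL1 solver):

      import numpy as np

      def bb_gradient_descent(grad, x0, n_iter=100, alpha0=1e-3):
          x_prev, g_prev = x0, grad(x0)
          x = x_prev - alpha0 * g_prev                 # one plain step to start
          for _ in range(n_iter):
              g = grad(x)
              s, yv = x - x_prev, g - g_prev           # iterate/gradient changes
              alpha = (s @ s) / max(s @ yv, 1e-12)     # BB1 step size
              x_prev, g_prev = x, g
              x = x - alpha * g
          return x

      # usage: minimise (x - 3)^2 via its gradient; one BB step lands on x = 3
      x_star = bb_gradient_descent(lambda x: 2.0 * (x - 3.0), np.array([0.0]))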

  10. GEODESIC RECONSTRUCTION, SADDLE ZONES & HIERARCHICAL SEGMENTATION

    Directory of Open Access Journals (Sweden)

    Serge Beucher

    2011-05-01

    Full Text Available The morphological reconstruction based on geodesic operators is a powerful tool in mathematical morphology. The general definition of this reconstruction supposes the use of a marker function f which is not necessarily related to the function g to be built. However, this paper deals with operations where the marker function is defined from given characteristic regions of the initial function f, as is the case, for instance, for the extrema (maxima or minima) but also for the saddle zones. Firstly, we show that the intuitive definition of a saddle zone is not easy to handle, especially when digitised images are involved. However, some of these saddle zones (regional ones, also called overflow zones) can be defined, and this definition provides a simple algorithm to extract them. The second part of the paper is devoted to the use of these overflow zones as markers in image reconstruction. This reconstruction provides a new function which exhibits a new hierarchy of extrema. This hierarchy is equivalent to the hierarchy produced by the so-called waterfall algorithm. We explain why the waterfall algorithm can be achieved by performing a watershed transform of the function reconstructed by its initial watershed lines. Finally, some examples of the use of this hierarchical segmentation are described.
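
    The geodesic reconstruction operator itself is available off the shelf; as a brief illustration (scikit-image, with a random stand-in image; the paper's overflow-zone extraction is not reproduced here), the classic h-dome construction reconstructs f from the marker f - h:

      import numpy as np
      from skimage.morphology import reconstruction

      rng = np.random.default_rng(4)
      image = rng.random((128, 128))        # stand-in grey-level function f
      h = 0.3                               # dome height

      seed = image - h                      # marker g = f - h (g <= f everywhere)
      rec = reconstruction(seed, image, method='dilation')   # geodesic reconstruction
      h_domes = image - rec                 # regional maxima with dynamic >= h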

  11. Edge-oriented dual-dictionary guided enrichment (EDGE) for MRI-CT image reconstruction.

    Science.gov (United States)

    Li, Liang; Wang, Bigong; Wang, Ge

    2016-01-01

    In this paper, we formulate the joint/simultaneous X-ray CT and MRI image reconstruction problem. In particular, a novel algorithm is proposed for MRI image reconstruction from highly under-sampled MRI data and CT images. It consists of two steps. First, a training dataset is generated from a series of well-registered MRI and CT images of the same patients. Then, an initial MRI image of a patient can be reconstructed via edge-oriented dual-dictionary guided enrichment (EDGE) based on the training dataset and a CT image of the patient. Second, an MRI image is reconstructed using the dictionary learning (DL) algorithm from highly under-sampled k-space data and the initial MRI image. Our algorithm can establish a one-to-one correspondence between the two imaging modalities and obtain a good initial MRI estimate. Both noise-free and noisy simulation studies were performed to evaluate and validate the proposed algorithm. The results with different under-sampling factors show that the proposed algorithm performed significantly better than the DL algorithm applied to MRI data alone.

  12. Application of iterative reconstruction in dynamic studies

    International Nuclear Information System (INIS)

    Meikle, S.R.

    1998-01-01

    Full text: The conventional approach to analysing dynamic tomographic data (SPECT or PET) is to reconstruct projections corresponding to each time interval separately and then fit a suitable tracer kinetic model to the dynamic sequence (method 1). This approach assumes that the tracer distribution remains static during any given time interval and, for practical reasons, filtered back-projection (FBP) is the preferred reconstruction algorithm. However, alternative approaches exist which lend themselves to iterative algorithms, such as EM. One approach is to fit the model directly to the projection data, followed by EM reconstruction of the parameter estimates (method 2). This requires that the tracer model can be expressed as a linear function of the unknown model parameters. A third alternative is to incorporate the tracer model into the reconstruction algorithm (method 3). Such an extension was described during the early development of the EM algorithm, referred to as the EM parametric image reconstruction algorithm (EM-PIRA). We have investigated these various strategies for analysing dynamic data and their relative pros and cons. Tracer modelling was performed using a general model, referred to as spectral analysis, which makes no restriction on the number of physiological compartments and satisfies the linearity requirement of method 2. A kinetic software phantom was created and used to test the convergence and noise properties of the different approaches. In summary, method 2 is the most practical as it reduces the number of reconstructions by at least an order of magnitude and provides improved signal-to-noise ratios compared with method 1. EM-PIRA allows greater flexibility in the choice of parametric images and appears to have a regularising effect on convergence. Methods 2 and 3 are also better suited to dynamic scanning with a rotating camera, as they can potentially account for changes in tracer distribution between projections.

  13. Update on the non-prewhitening model observer in computed tomography for the assessment of the adaptive statistical and model-based iterative reconstruction algorithms

    Science.gov (United States)

    Ott, Julien G.; Becce, Fabio; Monnin, Pascal; Schmidt, Sabine; Bochud, François O.; Verdun, Francis R.

    2014-08-01

    The state of the art in describing image quality in medical imaging is to assess the performance of an observer conducting a task of clinical interest. This can be done by using a model observer leading to a figure of merit such as the signal-to-noise ratio (SNR). Using the non-prewhitening (NPW) model observer, we objectively characterised the evolution of its figure of merit under various acquisition conditions. The NPW model observer usually requires the use of the modulation transfer function (MTF) as well as noise power spectra. However, although the computation of the MTF poses no problem when dealing with the traditional filtered back-projection (FBP) algorithm, this is not the case when using iterative reconstruction (IR) algorithms, such as adaptive statistical iterative reconstruction (ASIR) or model-based iterative reconstruction (MBIR). Given that the target transfer function (TTF) had already been shown to accurately express the system resolution even with non-linear algorithms, we decided to tune the NPW model observer, replacing the standard MTF by the TTF. The TTF was estimated using a custom-made phantom containing cylindrical inserts surrounded by water. The contrast differences between the inserts and water were plotted for each acquisition condition, and mathematical transformations were performed leading to the TTF. As expected, the first results showed a dependency of the image contrast and noise levels on the TTF for both ASIR and MBIR. Moreover, FBP also proved to be dependent on the contrast and noise when using the lung kernel. Those results were then introduced into the NPW model observer. We observed an enhancement of SNR every time we switched from FBP to ASIR to MBIR. IR algorithms greatly improve image quality, especially in low-dose conditions. Based on our results, the use of MBIR could lead to further dose reduction in several clinical applications.
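
    For reference, the NPW figure of merit used above is commonly written as follows (here with the TTF substituted for the MTF, per the modification described in the abstract; \Delta S denotes the task function, i.e. the frequency content of the signal to be detected):

      \mathrm{SNR}^2_{\mathrm{NPW}} =
        \frac{\left[ \iint \Delta S^2(u,v)\, \mathrm{TTF}^2(u,v)\, du\, dv \right]^2}
             {\iint \Delta S^2(u,v)\, \mathrm{TTF}^2(u,v)\, \mathrm{NPS}(u,v)\, du\, dv}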

  14. An efficient algorithm for MR image reconstruction and compression

    International Nuclear Information System (INIS)

    Wang, Hang; Rosenfeld, D.; Braun, M.; Yan, Hong

    1992-01-01

    In magnetic resonance imaging (MRI), the original data are sampled in the spatial frequency domain. The sampled data thus constitute a set of discrete Fourier transform (DFT) coefficients. The image is usually reconstructed by taking inverse DFT. The image data may then be efficiently compressed using the discrete cosine transform (DCT). A method of using DCT to treat the sampled data is presented which combines two procedures, image reconstruction and data compression. This method may be particularly useful in medical picture archiving and communication systems where both image reconstruction and compression are important issues. 11 refs., 3 figs
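
    The compression half of the idea can be sketched directly (Python/SciPy; the retained coefficient fraction is an arbitrary illustration, not a value from the paper):

      import numpy as np
      from scipy.fft import dctn, idctn

      image = np.random.rand(256, 256)          # stand-in reconstructed MR image
      coeffs = dctn(image, norm='ortho')        # 2D DCT of the image

      keep = 0.05                               # keep the largest 5% of coefficients
      thresh = np.quantile(np.abs(coeffs), 1 - keep)
      compressed = np.where(np.abs(coeffs) >= thresh, coeffs, 0.0)

      restored = idctn(compressed, norm='ortho')    # decompressed image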

  15. Characterization of adaptive statistical iterative reconstruction algorithm for dose reduction in CT: A pediatric oncology perspective

    International Nuclear Information System (INIS)

    Brady, S. L.; Yee, B. S.; Kaufman, R. A.

    2012-01-01

    Purpose: This study demonstrates a means of implementing an adaptive statistical iterative reconstruction (ASiR™) technique for dose reduction in computed tomography (CT) while maintaining similar noise levels in the reconstructed image. The effects of image quality and noise texture were assessed at all implementation levels of ASiR™. Empirically derived dose reduction limits were established for ASiR™ for imaging of the trunk for a pediatric oncology population ranging from 1 yr old through adolescence/adulthood. Methods: Image quality was assessed using metrics established by the American College of Radiology (ACR) CT accreditation program. Each image quality metric was tested using the ACR CT phantom with 0%–100% ASiR™ blended with filtered back projection (FBP) reconstructed images. Additionally, the noise power spectrum (NPS) was calculated for three common reconstruction filters of the trunk. The empirically derived limitations on ASiR™ implementation for dose reduction were assessed using (1, 5, 10) yr old and adolescent/adult anthropomorphic phantoms. To assess dose reduction limits, the phantoms were scanned in increments of increased noise index (decrementing mA using automatic tube current modulation) balanced with ASiR™ reconstruction to maintain noise equivalence of the 0% ASiR™ image. Results: The ASiR™ algorithm did not produce any unfavorable effects on image quality as assessed by ACR criteria. Conversely, low-contrast resolution was found to improve due to the reduction of noise in the reconstructed images. NPS calculations demonstrated that images with lower frequency noise had lower noise variance and coarser graininess at progressively higher percentages of ASiR™ reconstruction; and in spite of the similar magnitudes of noise, the image reconstructed with 50% or more ASiR™ presented a more smoothed appearance than the pre-ASiR™ 100% FBP image. Finally, relative to non-ASiR™ images with 100% of standard dose across the

  16. Performance analysis of a cellular automaton algorithm for the solution of the track reconstruction problem on a many-core server at LIT, JINR

    International Nuclear Information System (INIS)

    Kulakov, I.S.; Baginyan, S.A.; Ivanov, V.V.; Kisel', P.I.

    2013-01-01

    The results of tests of the track reconstruction efficiency, the speed of the algorithm, and its scalability with respect to the number of cores of a server with two Intel Xeon E5640 CPUs (in total 8 physical or 16 logical cores) are presented and discussed.

  17. Few-view image reconstruction with dual dictionaries

    International Nuclear Information System (INIS)

    Lu Yang; Zhao Jun; Wang Ge

    2012-01-01

    In this paper, we formulate the problem of computed tomography (CT) under sparsity and few-view constraints, and propose a novel algorithm for image reconstruction from few-view data utilizing the simultaneous algebraic reconstruction technique (SART) coupled with dictionary learning, sparse representation and total variation (TV) minimization on two interconnected levels. The main feature of our algorithm is the use of two dictionaries: a transitional dictionary for atom matching and a global dictionary for image updating. The atoms in the global and transitional dictionaries represent the image patches from high-quality and low-quality CT images, respectively. Experiments with simulated and real projections were performed to evaluate and validate the proposed algorithm. The results reconstructed using the proposed approach are significantly better than those using either SART or SART–TV. (paper)
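
    The TV half of such two-level schemes is often applied as a few explicit descent steps on a smoothed TV term between the algebraic updates; a minimal sketch follows (Python; the step sizes, weight and periodic boundary handling are simplifications, not the paper's scheme):

      import numpy as np

      def tv_step(x, weight=0.1, eps=1e-8, n_steps=20, tau=0.125):
          """Gradient descent on smoothed TV; alternate with SART updates."""
          for _ in range(n_steps):
              gx = np.diff(x, axis=0, append=x[-1:, :])   # forward differences
              gy = np.diff(x, axis=1, append=x[:, -1:])
              norm = np.sqrt(gx ** 2 + gy ** 2 + eps)
              px, py = gx / norm, gy / norm
              div = (px - np.roll(px, 1, axis=0)) + (py - np.roll(py, 1, axis=1))
              x = x + tau * weight * div                  # descend on the TV gradient
          return x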

  18. Performance of algorithms that reconstruct missing transverse momentum in √(s) = 8 TeV proton-proton collisions in the ATLAS detector

    Energy Technology Data Exchange (ETDEWEB)

    Aad, G. [CPPM, Aix-Marseille Univ. et CNRS/IN2P3, Marseille (France); Abbott, B. [Oklahoma Univ., Norman, OK (United States). Homer L. Dodge Dept. of Physics and Astronomy; Abdallah, J. [Iowa Univ., Iowa City, IA (United States); Collaboration: ATLAS Collaboration; and others

    2017-04-15

    The reconstruction and calibration algorithms used to calculate missing transverse momentum ($E_T^{miss}$) with the ATLAS detector exploit energy deposits in the calorimeter and tracks reconstructed in the inner detector as well as the muon spectrometer. Various strategies are used to suppress effects arising from additional proton-proton interactions, called pileup, concurrent with the hard-scatter processes. Tracking information is used to distinguish contributions from the pileup interactions using their vertex separation along the beam axis. The performance of the $E_T^{miss}$ reconstruction algorithms, especially with respect to the amount of pileup, is evaluated using data collected in proton-proton collisions at a centre-of-mass energy of 8 TeV during 2012, and results are shown for a data sample corresponding to an integrated luminosity of 20.3 fb$^{-1}$. The simulation and modelling of $E_T^{miss}$ in events containing a Z boson decaying to two charged leptons (electrons or muons) or a W boson decaying to a charged lepton and a neutrino are compared to data. The acceptance for different event topologies, with and without high transverse momentum neutrinos, is shown for a range of threshold criteria for $E_T^{miss}$, and estimates of the systematic uncertainties in the $E_T^{miss}$ measurements are presented. (orig.)

  19. Structure Assisted Compressed Sensing Reconstruction of Undersampled AFM Images

    DEFF Research Database (Denmark)

    Oxvig, Christian Schou; Arildsen, Thomas; Larsen, Torben

    2017-01-01

    reconstruction algorithms that enables the use of our proposed structure model in the reconstruction process. Through a large set of reconstructions, the general reconstruction capability improvement achievable using our structured model is shown both quantitatively and qualitatively. Specifically, our...

  20. Reconstruction of Single-Grain Orientation Distribution Functions for Crystalline Materials

    DEFF Research Database (Denmark)

    Hansen, Per Christian; Sørensen, Henning Osholm; Sükösd, Zsuzsanna

    2009-01-01

    for individual grains of the material in consideration. We study two iterative large-scale reconstruction algorithms, the algebraic reconstruction technique (ART) and conjugate gradients for least squares (CGLS), and demonstrate that right preconditioning is necessary in both algorithms to provide satisfactory...