Estimating the Effective Permittivity for Reconstructing Accurate Microwave-Radar Images
Lavoie, Benjamin R.; Okoniewski, Michal; Fear, Elise C.
2016-01-01
We present preliminary results from a method for estimating the optimal effective permittivity for reconstructing microwave-radar images. Using knowledge of how microwave-radar images are formed, we identify characteristics that are typical of good images, and define a fitness function to measure relative image quality. We build a polynomial interpolant of the fitness function in order to identify the most likely permittivity values of the tissue. To make the estimation process more efficient, the polynomial interpolant is constructed using a locally and dimensionally adaptive sampling method that is a novel combination of stochastic collocation and polynomial chaos. Examples are presented using simulated, experimental and patient data collected with the Tissue Sensing Adaptive Radar system, which is under development at the University of Calgary. These examples show how, using our method, accurate images can be reconstructed starting with only a broad estimate of the permittivity range. PMID:27611785
Jørgensen, Jakob Heide; Sidky, Emil Y.; Pan, Xiaochuan
2011-01-01
Breast X-ray CT imaging is being considered in screening as an extension to mammography. As a large fraction of the population will be exposed to radiation, low-dose imaging is essential. Iterative image reconstruction based on solving an optimization problem, such as Total-Variation minimization, shows potential for reconstruction from sparse-view data. For iterative methods it is important to ensure convergence to an accurate solution, since important diagnostic image features, such as the presence of microcalcifications indicating breast cancer, may not be visible in a non-converged reconstruction, and this can have clinical significance. To prevent excessively long computational times, which is a practical concern for the large image arrays in CT, it is desirable to keep the number of iterations low while still ensuring a sufficiently accurate reconstruction for the specific imaging task. This motivates the study of accurate convergence criteria for iterative image reconstruction. In simulation studies with a realistic breast phantom with microcalcifications we investigate the issue of ensuring a sufficiently converged solution for reliable reconstruction. Our results show that it can...
In Situ Casting and Imaging of the Rat Airway Tree for Accurate 3D Reconstruction
Jacob, Richard E.; Colby, Sean M.; Kabilan, Senthil; Einstein, Daniel R.; Carson, James P.
2014-01-01
The use of anatomically accurate, animal-specific airway geometries is important for understanding and modeling the physiology of the respiratory system. One approach for acquiring detailed airway architecture is to create a bronchial cast of the conducting airways. However, typical casting procedures either do not faithfully preserve the in vivo branching angles or produce rigid casts that when removed for imaging are fragile and thus easily damaged. We address these problems by creating an in situ bronchial cast of the conducting airways in rats that can be subsequently imaged in situ using 3D micro-CT imaging. We also demonstrate that deformations in airway branch angles resulting from the casting procedure are small, and that these angle deformations can be reversed through an interactive adjustment of the segmented cast geometry. Animal work was approved by the Institutional Animal Care and Use Committee of Pacific Northwest National Laboratory. PMID:23786464
Giannoglou, George D.; Chatzizisis, Yiannis S.; Sianos, George; Tsikaderis, Dimitrios; Matakos, Antonis; Koutkias, Vassilios; Diamantopoulos, Panagiotis; Maglaveras, Nicos; Parcharidis, George E.; Louridas, George E.
2006-12-01
In conventional intravascular ultrasound (IVUS)-based three-dimensional (3D) reconstruction of human coronary arteries, IVUS images are arranged linearly generating a straight vessel volume. However, with this approach real vessel curvature is neglected. To overcome this limitation an imaging method was developed based on integration of IVUS and biplane coronary angiography (BCA). In 17 coronary arteries from nine patients, IVUS and BCA were performed. From each angiographic projection, a single end-diastolic frame was selected and in each frame the IVUS catheter was interactively detected for the extraction of 3D catheter path. Ultrasound data was obtained with a sheath-based catheter and recorded on S-VHS videotape. S-VHS data was digitized and lumen and media-adventitia contours were semi-automatically detected in end-diastolic IVUS images. Each pair of contours was aligned perpendicularly to the catheter path and rotated in space by implementing an algorithm based on Frenet-Serret rules. Lumen and media-adventitia contours were interpolated through generation of intermediate contours creating a real 3D lumen and vessel volume, respectively. The absolute orientation of the reconstructed lumen was determined by back-projecting it onto both angiographic planes and comparing the projected lumen with the actual angiographic lumen. In conclusion, our method is capable of performing rapid and accurate 3D reconstruction of human coronary arteries in vivo. This technique can be utilized for reliable plaque morphometric, geometrical and hemodynamic analyses.
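The Frenet-Serret alignment step described above orients each contour in the local frame (tangent, normal, binormal) of the 3D catheter path. The following is an illustrative NumPy sketch of computing those frames for a discrete curve; the finite-difference scheme and the helix test curve are our own choices, not the authors' implementation.

```python
import numpy as np

def frenet_frames(path):
    """Approximate Frenet-Serret frames (T, N, B) along a discrete 3D curve.

    path: (n, 3) array of points. Returns unit tangent, normal and binormal
    vectors per point (normals are zero where curvature vanishes).
    """
    # Unit tangents from central differences of the positions
    d = np.gradient(path, axis=0)
    T = d / np.linalg.norm(d, axis=1, keepdims=True)
    # Normal: normalized derivative of the tangent (guard zero curvature)
    dT = np.gradient(T, axis=0)
    norms = np.linalg.norm(dT, axis=1, keepdims=True)
    N = np.divide(dT, norms, out=np.zeros_like(dT), where=norms > 1e-12)
    # Binormal completes the right-handed frame
    B = np.cross(T, N)
    return T, N, B

# A helix has constant nonzero curvature, so the frame is defined everywhere
t = np.linspace(0, 4 * np.pi, 200)
helix = np.stack([np.cos(t), np.sin(t), 0.1 * t], axis=1)
T, N, B = frenet_frames(helix)
```

Each IVUS contour would then be placed in the plane spanned by N and B at its position along the path.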
Use of a ray-based reconstruction algorithm to accurately quantify preclinical microSPECT images
Vandeghinste, Bert; Van Holen, Roel; Vanhove, Christian; De Vos, Filip; Vandenberghe, Stefaan; Staelens, Steven
2014-01-01
This work aimed to measure the in vivo quantification errors obtained when ray-based iterative reconstruction is used in micro-single-photon emission computed tomography (SPECT). This was investigated with an extensive phantom-based evaluation and two typical in vivo studies using 99mTc and 111In, measured on a commercially available cadmium zinc telluride (CZT)-based small-animal scanner. Iterative reconstruction was implemented on the GPU using ray tracing, including (1) scatter correction, (2) computed tomography-based attenuation correction, (3) resolution recovery, and (4) edge-preserving smoothing. It was validated using a National Electrical Manufacturers Association (NEMA) phantom. The in vivo quantification error was determined for two radiotracers: [99mTc]DMSA in naive mice (n = 10 kidneys) and [111In]octreotide in mice (n = 6) inoculated with a xenograft neuroendocrine tumor (NCI-H727). The measured energy resolution is 5.3% for 140.51 keV (99mTc), 4.8% for 171.30 keV, and 3.3% for 245.39 keV (111In). For 99mTc, an uncorrected quantification error of 28 ± 3% is reduced to 8 ± 3%. For 111In, the error reduces from 26 ± 14% to 6 ± 22%. The in vivo error obtained with 99mTc-dimercaptosuccinic acid ([99mTc]DMSA) is reduced from 16.2 ± 2.8% to -0.3 ± 2.1% and from 16.7 ± 10.1% to 2.2 ± 10.6% with [111In]octreotide. Absolute quantitative in vivo SPECT is possible without explicit system matrix measurements. An absolute in vivo quantification error smaller than 5% was achieved and exemplified for both [99mTc]DMSA and [111In]octreotide.
Hui-Hui, Xia; Rui-Feng, Kan; Jian-Guo, Liu; Zhen-Yu, Xu; Ya-Bai, He
2016-06-01
An improved algebraic reconstruction technique (ART) combined with tunable diode laser absorption spectroscopy (TDLAS) is presented in this paper for determining the two-dimensional (2D) distribution of H2O concentration and temperature in a simulated combustion flame. This work simulates the reconstruction of spectroscopic measurements with a multi-view parallel-beam scanning geometry and analyzes the effect of the number of projection rays on reconstruction accuracy. Reconstruction quality increases markedly as the number of projection rays grows, up to about 180 rays for a 20 × 20 grid; beyond that point, additional rays have little influence on accuracy. The temperature reconstructions are more accurate than the water vapor concentrations obtained with the traditional concentration calculation method. The present study also proposes an innovative way to reduce the error of the concentration reconstruction and thereby greatly improve reconstruction quality, and the capability of this new method is evaluated using appropriate assessment parameters. The new approach not only greatly improves the concentration reconstruction accuracy but also suggests a parallel-beam arrangement that combines high reconstruction accuracy with simplicity of experimental validation. Finally, a bimodal structure of the combustion region is assumed to demonstrate the robustness and universality of the proposed method. Numerical investigation indicates that the proposed TDLAS tomographic algorithm is capable of recovering accurate temperature and concentration profiles, and the approach is expected to resolve several key issues in practical combustion devices. Project supported by the Young Scientists Fund of the National Natural Science Foundation of China (Grant No. 61205151), the National Key Scientific Instrument and Equipment Development Project of China (Grant
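The core of an algebraic reconstruction technique is the Kaczmarz sweep: each projection ray contributes one linear equation, and the image estimate is successively projected onto the hyperplane of each equation. A minimal generic sketch (a toy 2 × 2 "image" with invented rays, not the paper's improved variant):

```python
import numpy as np

def art(A, b, n_sweeps=200, relax=1.0):
    """Algebraic reconstruction technique (cyclic Kaczmarz sweeps).

    A: (m, n) projection matrix, one row per ray; b: (m,) measurements.
    """
    x = np.zeros(A.shape[1])
    row_norms = np.sum(A * A, axis=1)
    for _ in range(n_sweeps):
        for i in range(A.shape[0]):
            if row_norms[i] > 0:
                # Project x onto the hyperplane A[i] . x = b[i]
                x += relax * (b[i] - A[i] @ x) / row_norms[i] * A[i]
    return x

# Toy 2x2 image probed by row sums, column sums and one diagonal ray
x_true = np.array([1.0, 2.0, 3.0, 4.0])
A = np.array([[1, 1, 0, 0],
              [0, 0, 1, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 0, 1]], dtype=float)
b = A @ x_true
x_rec = art(A, b)
```

For consistent data and enough independent rays, the sweeps converge to the exact image, which is why reconstruction quality in the paper improves with the number of projection rays until the system is sufficiently determined.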
Accurate image reconstruction from few-views and limited-angle data in divergent-beam CT
Sidky, Emil Y; Pan, Xiaochuan
2009-01-01
In practical applications of tomographic imaging, there are often challenges for image reconstruction due to under-sampling and insufficient data. In computed tomography (CT), for example, image reconstruction from few views would enable rapid scanning with a reduced x-ray dose delivered to the patient. Limited-angle problems are also of practical significance in CT. In this work, we develop and investigate an iterative image reconstruction algorithm based on the minimization of the image total variation (TV) that applies to divergent-beam CT. Numerical demonstrations of our TV algorithm are performed with various insufficient data problems in fan-beam CT. The TV algorithm can be generalized to cone-beam CT as well as other tomographic imaging modalities.
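Total-variation minimization penalizes the sum of gradient magnitudes, which favors piecewise-constant images and is what makes reconstruction from few views possible. The following is a minimal 1D denoising sketch with a smoothed TV term, illustrating the principle only; the parameters (lam, eps, step) are our own choices and this is not the authors' divergent-beam algorithm.

```python
import numpy as np

def tv_denoise_1d(y, lam=0.5, eps=1e-3, step=0.02, n_iter=2000):
    """Gradient descent on 0.5||x - y||^2 + lam * sum_i sqrt((x_{i+1}-x_i)^2 + eps)."""
    x = y.copy()
    for _ in range(n_iter):
        d = np.diff(x)
        w = d / np.sqrt(d * d + eps)   # derivative of the smoothed |.|
        tv_grad = np.zeros_like(x)
        tv_grad[:-1] -= w              # each term depends on x_i and x_{i+1}
        tv_grad[1:] += w
        x -= step * ((x - y) + lam * tv_grad)
    return x

rng = np.random.default_rng(0)
clean = np.concatenate([np.zeros(50), np.ones(50)])  # piecewise-constant truth
y = clean + 0.3 * rng.standard_normal(100)
x_rec = tv_denoise_1d(y)
```

The TV penalty flattens the noise within each constant region while largely preserving the jump, the same behavior the full CT algorithm exploits on 2D images with a data-fidelity constraint instead of the simple quadratic term used here.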
Accurate reconstruction of digital holography using frequency domain zero padding
Shin, Jun Geun; Kim, Ju Wan; Lee, Jae Hwi; Lee, Byeong Ha
2017-04-01
We propose an image reconstruction method for digital holography that yields more accurate reconstructions. Digital holography provides both the amplitude and the phase of the light from a specimen by recording an interferogram. Since the Fresnel diffraction can be efficiently implemented with the Fourier transform, a zero padding technique can be applied to obtain more accurate information. In this work, we report the method of frequency domain zero padding (FDZP). Both in computer simulation and in experiments made with a USAF 1951 resolution chart and target, FDZP gave more accurate reconstruction images. Although FDZP requires more processing time, with the help of a graphics processing unit (GPU) it can find good applications in digital holography for 3-D profile imaging.
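Zero padding in the frequency domain is equivalent to sinc interpolation in the spatial domain, which is why it refines the reconstruction grid without inventing new frequency content. A minimal 1D sketch of the idea (the holographic case applies the same trick to the 2D spectra used in Fresnel propagation):

```python
import numpy as np

def fdzp_upsample(signal, factor):
    """Upsample a 1D signal by zero padding its spectrum (sinc interpolation).

    For simplicity the Nyquist bin is not split; this is exact for signals
    with no energy at the Nyquist frequency, like the test tone below.
    """
    n = len(signal)
    spec = np.fft.fft(signal)
    padded = np.zeros(n * factor, dtype=complex)
    half = n // 2
    padded[:half] = spec[:half]          # positive frequencies
    padded[-(n - half):] = spec[half:]   # negative frequencies
    # Rescale so amplitudes survive the longer inverse FFT
    return np.real(np.fft.ifft(padded)) * factor

# Band-limited tone: upsampled samples land exactly on the continuous signal
t = np.arange(16)
x = np.cos(2 * np.pi * 2 * t / 16)
y = fdzp_upsample(x, 4)
t_fine = np.arange(64) / 4
```

Here y agrees with cos(2*pi*2*t_fine/16) to machine precision, the discrete analogue of the "more accurate information" the abstract refers to.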
Speed-of-sound compensated photoacoustic tomography for accurate imaging
Jose, Jithin; Steenbergen, Wiendelt; Slump, Cornelis H; van Leeuwen, Ton G; Manohar, Srirang
2012-01-01
In most photoacoustic (PA) measurements, variations in speed-of-sound (SOS) of the subject are neglected under the assumption of acoustic homogeneity. Biological tissue with spatially heterogeneous SOS cannot be accurately reconstructed under this assumption. We present experimental and image reconstruction methods with which 2-D SOS distributions can be accurately acquired and reconstructed, and with which the SOS map can be used subsequently to reconstruct highly accurate PA tomograms. We begin with a 2-D iterative reconstruction approach in an ultrasound transmission tomography (UTT) setting, which uses ray refracted paths instead of straight ray paths to recover accurate SOS images of the subject. Subsequently, we use the SOS distribution in a new 2-D iterative approach, where refraction of rays originating from PA sources are accounted for in accurately retrieving the distribution of these sources. Both the SOS reconstruction and SOS-compensated PA reconstruction methods utilize the Eikonal equation to m...
Augmented Likelihood Image Reconstruction.
Stille, Maik; Kleine, Matthias; Hägele, Julian; Barkhausen, Jörg; Buzug, Thorsten M
2016-01-01
The presence of high-density objects remains an open problem in medical CT imaging. Data of projections passing through objects of high density, such as metal implants, are dominated by noise and are highly affected by beam hardening and scatter. Reconstructed images become less diagnostically conclusive because of pronounced artifacts that manifest as dark and bright streaks. A new reconstruction algorithm is proposed with the aim to reduce these artifacts by incorporating information about the shape and known attenuation coefficients of a metal implant. Image reconstruction is considered as a variational optimization problem. The aforementioned prior knowledge is introduced in terms of equality constraints. An augmented Lagrangian approach is adapted in order to minimize the associated log-likelihood function for transmission CT. During the iterations, artifacts that appear temporarily are reduced with a bilateral filter, and new projection values are calculated, which are used later on for the reconstruction. A detailed evaluation in cooperation with radiologists is performed on software and hardware phantoms, as well as on clinically relevant patient data of subjects with various metal implants. Results show that the proposed reconstruction algorithm is able to outperform contemporary metal artifact reduction methods such as normalized metal artifact reduction.
Consistent Reconstruction of Cortical Surfaces from Longitudinal Brain MR Images
Li, Gang; Nie, Jingxin; Shen, Dinggang
2011-01-01
Accurate and consistent reconstruction of cortical surfaces from longitudinal human brain MR images is of great importance in studying subtle morphological changes of the cerebral cortex. This paper presents a new deformable surface method for consistent and accurate reconstruction of inner, central and outer cortical surfaces from longitudinal MR images. Specifically, the cortical surfaces of the group-mean image of all aligned longitudinal images of the same subject are first reconstructed ...
IMAGE RECONSTRUCTION AND OBJECT CLASSIFICATION IN CT IMAGING SYSTEM
张晓明; 蒋大真; et al.
1995-01-01
By deriving a feasible filter function, reconstructed images can be obtained with linear interpolation and filtered backprojection techniques. Taking into account the gray-level and spatial correlation information in the neighbourhood of each pixel, a new supervised classification method is put forward for the reconstructed images. An experiment with a noisy image shows that the method is feasible and accurate when compared with ideal phantoms.
Exercises in PET Image Reconstruction
Nix, Oliver
These exercises are complementary to the theoretical lectures about positron emission tomography (PET) image reconstruction. They aim at providing some hands-on experience in PET image reconstruction and focus on demonstrating the different data preprocessing steps and reconstruction algorithms needed to obtain high quality PET images. Normalisation, geometric, attenuation and scatter corrections are introduced. To explain the necessity of these, some basics about PET scanner hardware, data acquisition and organisation are reviewed. During the course the students use a software application based on the STIR (software for tomographic image reconstruction) library [1,2], which allows them to dynamically select or deselect corrections and reconstruction methods as well as to modify their most important parameters. Following the guided tutorial, the students get an impression of the effect the individual data precorrections have on image quality and what happens if they are forgotten. Several data sets in sinogram format are provided, such as line source data, Jaszczak phantom data sets with high and low statistics, and NEMA whole body phantom data. The two most frequently used reconstruction algorithms in PET image reconstruction, filtered back projection (FBP) and the iterative OSEM (ordered subset expectation maximization) approach, are used to reconstruct images. The exercise should help the students gain an understanding of the reasons for inferior image quality and artefacts, and of how to improve quality by a clever choice of reconstruction parameters.
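OSEM is the ordered-subset acceleration of the MLEM multiplicative update; with a single subset the two coincide. A toy MLEM sketch on a 3-pixel "image" (the 3 × 3 system matrix is invented purely for illustration):

```python
import numpy as np

def mlem(A, y, n_iter=2000):
    """Maximum-likelihood EM for emission tomography.

    Single-subset form; OSEM applies the same multiplicative update
    cyclically over subsets of the projection data.
    """
    x = np.ones(A.shape[1])
    sens = A.sum(axis=0)                   # sensitivity image A^T 1
    for _ in range(n_iter):
        proj = A @ x                       # forward projection
        ratio = y / np.maximum(proj, 1e-12)
        x *= (A.T @ ratio) / sens          # multiplicative EM update
    return x

A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])
x_true = np.array([2.0, 1.0, 3.0])
y = A @ x_true                             # noise-free toy "sinogram"
x_rec = mlem(A, y)
```

Note that the update is a fixed point exactly when the forward projection matches the data, and it automatically keeps the image non-negative, two properties that make EM-type methods the default for PET.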
Precision Pointing Reconstruction and Geometric Metadata Generation for Cassini Images
French, R. S.; Showalter, M. R.; Gordon, M. K.
2017-06-01
We are reconstructing accurate pointing for 400,000 images taken by Cassini at Saturn. The results will be provided to the public along with per-pixel metadata describing precise image contents such as geographical location and viewing geometry.
Image Interpolation Through Surface Reconstruction
ZHANG Ling; LI Xue-mei
2013-01-01
Reconstructing an HR (high-resolution) image which preserves the image intrinsic structures from its LR (low-resolution) counterpart is highly challenging. This paper proposes a new surface reconstruction algorithm applied to image interpolation. The interpolation surface for the whole image is generated by putting all the quadratic polynomial patches together. In order to eliminate the jaggies of the edge, a new weight function containing edge information is incorporated into the patch reconstruction procedure as a constraint. Extensive experimental results demonstrate that our method produces better results across a wide range of scenes in terms of both quantitative evaluation and subjective visual quality.
Robust and accurate multi-view reconstruction by prioritized matching
Ylimaki, Markus; Kannala, Juho; Holappa, Jukka;
2012-01-01
... a prioritized matching method which expands the most promising seeds first. The output of the method is a three-dimensional point cloud. Unlike previous correspondence growing approaches, our method allows use of the best-first matching principle in the generic multi-view stereo setting with an arbitrary number of input images. Our experiments show that matching the most promising seeds first provides very robust point cloud reconstructions efficiently with just a single expansion step. A comparison to the current state-of-the-art shows that our method produces reconstructions of similar quality but significantly...
Convergence of iterative image reconstruction algorithms for Digital Breast Tomosynthesis
Sidky, Emil; Jørgensen, Jakob Heide; Pan, Xiaochuan
2012-01-01
Most iterative image reconstruction algorithms are based on some form of optimization, such as minimization of a data-fidelity term plus an image regularizing penalty term. While achieving the solution of these optimization problems may not directly be clinically relevant, accurate optimization solutions can aid in iterative image reconstruction algorithm design. This issue is particularly acute for iterative image reconstruction in Digital Breast Tomosynthesis (DBT), where the corresponding data model is particularly poorly conditioned. The impact of this poor conditioning is that iterative ... Math. Imag. Vol. 40, pgs 120-145) and apply it to iterative image reconstruction in DBT.
GLIMPSE: Accurate 3D weak lensing reconstructions using sparsity
Leonard, Adrienne; Starck, Jean-Luc
2013-01-01
We present GLIMPSE - Gravitational Lensing Inversion and MaPping with Sparse Estimators - a new algorithm to generate density reconstructions in three dimensions from photometric weak lensing measurements. This is an extension of earlier work in one dimension aimed at applying compressive sensing theory to the inversion of gravitational lensing measurements to recover 3D density maps. Using the assumption that the density can be represented sparsely in our chosen basis - 2D transverse wavelets and 1D line-of-sight Dirac functions - we show that clusters of galaxies can be identified and accurately localised and characterised using this method. Throughout, we use simulated data consistent with the quality currently attainable in large surveys. We present a thorough statistical analysis of the errors and biases in both the redshifts of detected structures and their amplitudes. The GLIMPSE method is able to produce reconstructions at significantly higher resolution than the input data; in this paper we show reco...
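Sparse recovery of this kind typically alternates a gradient step on the data fidelity with soft thresholding in the sparsity basis. A generic ISTA sketch on a toy compressed-sensing problem of our own construction (GLIMPSE's actual operators are lensing-specific and its basis is wavelets times line-of-sight Diracs):

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of the l1 norm, the workhorse of sparse estimators."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, y, lam=0.01, n_iter=2000):
    """Iterative shrinkage-thresholding for min 0.5||Ax - y||^2 + lam*||x||_1."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x + step * A.T @ (y - A @ x), step * lam)
    return x

# 3-sparse signal in 60 dimensions, observed through 30 random projections
rng = np.random.default_rng(1)
A = rng.standard_normal((30, 60)) / np.sqrt(30)
x_true = np.zeros(60)
x_true[[3, 17, 42]] = [2.0, -1.5, 3.0]
y = A @ x_true
x_rec = ista(A, y)
```

Despite having only half as many measurements as unknowns, the l1 penalty recovers the support of the sparse signal, the same mechanism that lets GLIMPSE resolve structure at higher resolution than the input data.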
A fast and accurate algorithm for diploid individual haplotype reconstruction.
Wu, Jingli; Liang, Binbin
2013-08-01
Haplotypes can provide significant information in many research fields, including molecular biology and medical therapy. However, haplotyping is much more difficult than genotyping by using only biological techniques. With the development of sequencing technologies, it becomes possible to obtain haplotypes by combining sequence fragments. The haplotype reconstruction problem for a diploid individual has received considerable attention in recent years. It assembles the two haplotypes for a chromosome given the collection of fragments coming from the two haplotypes. Fragment errors significantly increase the difficulty of the problem, which has been shown to be NP-hard. In this paper, a fast and accurate algorithm, named FAHR, is proposed for haplotyping a single diploid individual. Algorithm FAHR reconstructs the SNP sites of a pair of haplotypes one after another. The SNP fragments that cover a given SNP site are partitioned into two groups according to the alleles at that site, and the SNP values of the pair of haplotypes are ascertained by using the fragments in the group that contains more SNP fragments. Experimental comparisons were conducted among the FAHR, Fast Hare and DGS algorithms by using the haplotypes on chromosome 1 of 60 individuals in CEPH samples, which were released by the International HapMap Project. Experimental results under different parameter settings indicate that the reconstruction rate of the FAHR algorithm is higher, and its running time shorter, than those of the Fast Hare and DGS algorithms. Moreover, the FAHR algorithm remains efficient even for the reconstruction of long haplotypes and is very practical for realistic applications.
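The site-by-site partitioning idea can be sketched as a greedy assembly: fragments are assigned to one of two groups as sites are processed, and at each site the group votes decide the allele of the first haplotype (the second takes the complement). This is our simplified reading of the approach for error-free fragments, not the published FAHR code:

```python
def fahr_like(fragments, n_sites):
    """Greedy site-by-site haplotype assembly (sketch inspired by FAHR).

    fragments: list of dicts mapping SNP site -> allele (0 or 1), each
    fragment assumed to come from one of the two haplotypes.
    Returns the two reconstructed haplotypes (phase is arbitrary).
    """
    h1 = []
    group = {}  # fragment index -> 1 or 2, assigned at first covered site
    for s in range(n_sites):
        votes = {1: [0, 0], 2: [0, 0]}  # per group: counts of allele 0 / 1
        unassigned = []
        for i, f in enumerate(fragments):
            if s not in f:
                continue
            if i in group:
                votes[group[i]][f[s]] += 1
            else:
                unassigned.append(i)
        # Group 1 carries h1; group 2 carries the complementary haplotype
        total0 = votes[1][0] + votes[2][1]
        total1 = votes[1][1] + votes[2][0]
        a = 1 if total1 > total0 else 0   # ties default to 0 by convention
        h1.append(a)
        # New fragments join the group whose haplotype matches their allele
        for i in unassigned:
            group[i] = 1 if fragments[i][s] == a else 2
    return h1, [1 - a for a in h1]

# Fragments sampled without error from h1 = 01101 and its complement
frags = [{0: 0, 1: 1, 2: 1}, {2: 1, 3: 0, 4: 1},
         {0: 1, 1: 0, 2: 0}, {1: 0, 2: 0, 3: 1}, {3: 1, 4: 0}]
h1, h2 = fahr_like(frags, 5)
```

With sequencing errors the real algorithm's majority vote is what provides robustness; here the vote is trivially unanimous.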
Tomographic image reconstruction using training images
Soltani, Sara; Andersen, Martin Skovgaard; Hansen, Per Christian
2017-01-01
We describe and examine an algorithm for tomographic image reconstruction where prior knowledge about the solution is available in the form of training images. We first construct a non-negative dictionary based on prototype elements from the training images; this problem is formulated within the framework of sparse learning as a regularized non-negative matrix factorization. Incorporating the dictionary as a prior in a convex reconstruction problem, we then find an approximate solution with a sparse representation in the dictionary. The dictionary is applied to non-overlapping patches of the image...
Spatially adaptive regularized iterative high-resolution image reconstruction algorithm
Lim, Won Bae; Park, Min K.; Kang, Moon Gi
2000-12-01
High resolution images are often required in applications such as remote sensing, frame freeze in video, and military and medical imaging. Digital image sensor arrays, which are used for image acquisition in many imaging systems, are not dense enough to prevent aliasing, so the acquired images will be degraded by aliasing effects. To prevent aliasing without loss of resolution, a dense detector array is required. But it may be very costly or unavailable; thus, many imaging systems are designed to allow some level of aliasing during image acquisition. The purpose of our work is to reconstruct an unaliased high resolution image from the acquired aliased image sequence. In this paper, we propose a spatially adaptive regularized iterative high resolution image reconstruction algorithm for blurred, noisy and down-sampled image sequences. The proposed approach is based on a Constrained Least Squares (CLS) high resolution reconstruction algorithm, with spatially adaptive regularization operators and parameters. These regularization terms are shown to improve the reconstructed image quality by forcing smoothness while preserving edges in the reconstructed high resolution image. Accurate sub-pixel motion registration is key to the success of the high resolution image reconstruction algorithm. However, sub-pixel motion registration may have some level of registration error. Therefore, a reconstruction algorithm which is robust against registration error is required. The registration algorithm uses a gradient based sub-pixel motion estimator which provides shift information for each of the recorded frames. The proposed algorithm is based on a technique of high resolution image reconstruction, and it solves spatially adaptive regularized constrained least squares minimization functionals. In this paper, we show that the reconstruction algorithm gives dramatic improvements in the resolution of the reconstructed image and is effective in handling the aliased information. The
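The constrained least squares core of such algorithms solves argmin_x ||Ax − y||² + λ||Lx||², which for a fixed λ reduces to one linear system via the normal equations. A 1D toy sketch (the paper's spatially adaptive λ, multi-frame model and motion registration are omitted; the blur operator and parameters below are our own choices):

```python
import numpy as np

def cls_restore(A, y, L, lam):
    """Regularized least squares: argmin ||Ax - y||^2 + lam * ||Lx||^2.

    L is a smoothness operator (here a circular first difference); the
    solution comes from the normal equations (A^T A + lam L^T L) x = A^T y.
    """
    return np.linalg.solve(A.T @ A + lam * (L.T @ L), A.T @ y)

# 1D deblurring toy: A is a circulant 3-tap averaging blur
n = 40
I = np.eye(n)
A = (np.roll(I, 1, axis=1) + I + np.roll(I, -1, axis=1)) / 3.0
L = I - np.roll(I, 1, axis=1)          # circular finite difference
rng = np.random.default_rng(0)
x_true = np.sin(2 * np.pi * np.arange(n) / n)
y = A @ x_true + 0.01 * rng.standard_normal(n)
x_rec = cls_restore(A, y, L, lam=0.1)
```

The λ||Lx||² term tames the noise amplification of plain deconvolution; making λ and L vary spatially, as in the paper, preserves edges while still smoothing flat regions.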
Image reconstruction techniques for high resolution human brain PET imaging
Comtat, C.; Bataille, F.; Sureau, F. [Service Hospitalier Frederic Joliot (CEA/DSV/DRM), 91 - Orsay (France)
2006-07-01
High resolution PET imaging is now a well established technique not only for small animal but also for human brain studies. The ECAT HRRT brain PET scanner (Siemens Molecular Imaging) is characterized by an effective isotropic spatial resolution of 2.5 mm, about a factor of 2 better than state-of-the-art whole-body clinical PET scanners. Although the absolute sensitivity of the HRRT (6.5%) for a point source in the center of the field-of-view is increased relative to whole-body scanners (typically 4.5%) thanks to a larger co-polar aperture, the sensitivity in terms of volumetric resolution (75 mm³ at best for whole-body scanners and 16 mm³ for the HRRT) is much lower. This constraint has an impact on the performance of image reconstruction techniques, in particular for dynamic studies. Standard reconstruction methods used with clinical whole-body PET scanners are not optimal for this application. Specific methods had to be developed, based on fully 3D iterative techniques. Different refinements can be added in the reconstruction process to improve image quality: more accurate modeling of the acquisition system, more accurate modeling of the statistical properties of the acquired data, and anatomical side information to guide the reconstruction. We will present the performance of these added developments for neuronal imaging in humans. (author)
Modern methods of image reconstruction.
Puetter, R. C.
The author reviews the image restoration or reconstruction problem in its general setting. He first discusses linear methods for solving the problem of image deconvolution, i.e. the case in which the data are a convolution of a point-spread function and an underlying unblurred image. Next, non-linear methods are introduced in the context of Bayesian estimation, including maximum likelihood and maximum entropy methods. Then, the author discusses the role of language and information theory concepts for data compression and solving the inverse problem. The concept of algorithmic information content (AIC) is introduced and is shown to be crucial to achieving optimal data compression and optimized Bayesian priors for image reconstruction. The dependence of the AIC on the selection of language then suggests how efficient coordinate systems for the inverse problem may be selected. The author also introduces pixon-based image restoration and reconstruction methods. The relation between image AIC and the Bayesian incarnation of Occam's Razor is discussed, as well as the relation of multiresolution pixon languages and image fractal dimension. Also discussed is the relation of pixons to the role played by the Heisenberg uncertainty principle in statistical physics and how pixon-based image reconstruction provides a natural extension to the Akaike information criterion for maximum likelihood. The author presents practical applications of pixon-based Bayesian estimation to the restoration of astronomical images. He discusses the effects of noise, effects of finite sampling on resolution, and special problems associated with spatially correlated noise introduced by mosaicing. Comparisons to other methods demonstrate the significant improvements afforded by pixon-based methods and illustrate the science that such performance improvements allow.
Computational Imaging for VLBI Image Reconstruction
Bouman, Katherine L; Zoran, Daniel; Fish, Vincent L; Doeleman, Sheperd S; Freeman, William T
2015-01-01
Very long baseline interferometry (VLBI) is a technique for imaging celestial radio emissions by simultaneously observing a source from telescopes distributed across Earth. The challenges in reconstructing images from fine angular resolution VLBI data are immense. The data is extremely sparse and noisy, thus requiring statistical image models such as those designed in the computer vision community. In this paper we present a novel Bayesian approach for VLBI image reconstruction. While other methods require careful tuning and parameter selection for different types of images, our method is robust and produces good results under different settings such as low SNR or extended emissions. The success of our method is demonstrated on realistic synthetic experiments as well as publicly available real data. We present this problem in a way that is accessible to members of the computer vision community, and provide a dataset website (vlbiimaging.csail.mit.edu) to allow for controlled comparisons across algorithms. Thi...
Accurately approximating algebraic tomographic reconstruction by filtered backprojection
Pelt, D.M.; Batenburg, K.J.; King, M.; Glick, S.; Mueller, K.
2015-01-01
In computed tomography, algebraic reconstruction methods tend to produce reconstructions with higher quality than analytical methods when presented with limited and noisy projection data. The high computational requirements of algebraic methods, however, limit their usefulness in
Spectral Reconstruction for Obtaining Virtual Hyperspectral Images
Perez, G. J. P.; Castro, E. C.
2016-12-01
Hyperspectral sensors have demonstrated their capabilities in identifying materials and detecting processes in a satellite scene. However, the availability of hyperspectral images is limited due to the high development cost of these sensors. Currently, most of the readily available data are from multi-spectral instruments. Spectral reconstruction is an alternative method to address the need for hyperspectral information. The spectral reconstruction technique has been shown to provide quick and accurate detection of defects in an integrated circuit, to recover damaged parts of frescoes, and to aid in converting a microscope into an imaging spectrometer. By using several spectral bands together with a spectral library, a spectrum acquired by a sensor can be expressed as a linear superposition of elementary signals. In this study, spectral reconstruction is used to estimate the spectra of different surfaces imaged by Landsat 8. Atmospherically corrected surface reflectance from four Landsat 8 bands, three visible (499 nm, 585 nm, 670 nm) and one near-infrared (872 nm), and a spectral library of ground elements acquired from the United States Geological Survey (USGS) are used. The spectral library is limited to the 420-1020 nm spectral range, and is interpolated at one nanometer resolution. Singular Value Decomposition (SVD) is used to calculate the basis spectra, which are then applied to reconstruct the spectrum. The spectral reconstruction is applied for test cases within the library consisting of vegetation communities. This technique was successful in reconstructing a hyperspectral signal with an error of less than 12% for most of the test cases. Hence, this study demonstrates the potential of simulating information at any desired wavelength, creating a virtual hyperspectral sensor without the need for additional satellite bands.
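The superposition-plus-SVD step described above can be sketched numerically. This is a minimal illustration, not the authors' code: a tiny synthetic library of smooth spectra stands in for the USGS library, the basis spectra come from an SVD of that library, and the coefficients are fitted by least squares at the four Landsat 8 band centers quoted in the abstract. The library spectra and the choice of retaining as many basis vectors as bands are assumptions of the demo.

```python
import numpy as np

# Hypothetical stand-in for the USGS library: four smooth reference
# spectra sampled at 1 nm over 420-1020 nm (invented for the demo).
wl = np.arange(420.0, 1021.0, 1.0)
library = np.stack([np.exp(-((wl - c) / 120.0) ** 2)
                    for c in (500.0, 650.0, 800.0, 950.0)])

# SVD of the library yields orthonormal basis spectra (rows of Vt).
_, _, Vt = np.linalg.svd(library, full_matrices=False)
basis = Vt                      # retain as many basis vectors as bands

# The four Landsat 8 band centers quoted in the abstract.
band_idx = np.searchsorted(wl, [499.0, 585.0, 670.0, 872.0])

truth = 0.6 * library[0] + 0.4 * library[2]   # "unknown" surface spectrum
measured = truth[band_idx]                    # the 4-band observation

# Least-squares fit of the basis coefficients to the four band values,
# then superpose the basis to obtain the full virtual spectrum.
coef, *_ = np.linalg.lstsq(basis[:, band_idx].T, measured, rcond=None)
reconstructed = coef @ basis
```

With a realistic, much larger library one would retain fewer basis vectors than library entries; here the toy library is small enough that the fit at four bands determines the full 601-sample spectrum.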
A FIB-nanotomography method for accurate 3D reconstruction of open nanoporous structures
Mangipudi, K.R., E-mail: mangipudi@ump.gwdg.de [Institut für Materialphysik, Georg-August-Universität Göttingen, Friedrich-Hund-Platz 1, D-37077 Göttingen (Germany); Radisch, V., E-mail: vradisch@ump.gwdg.de [Institut für Materialphysik, Georg-August-Universität Göttingen, Friedrich-Hund-Platz 1, D-37077 Göttingen (Germany); Holzer, L., E-mail: holz@zhaw.ch [Züricher Hochschule für Angewandte Wissenschaften, Institute of Computational Physics, Wildbachstrasse 21, CH-8400 Winterthur (Switzerland); Volkert, C.A., E-mail: volkert@ump.gwdg.de [Institut für Materialphysik, Georg-August-Universität Göttingen, Friedrich-Hund-Platz 1, D-37077 Göttingen (Germany)
2016-04-15
We present an automated focused ion beam nanotomography method for nanoporous microstructures with open porosity, and apply it to reconstruct nanoporous gold (np-Au) structures with ligament sizes on the order of a few tens of nanometers. This method uses serial sectioning of a well-defined wedge-shaped geometry to determine the thickness of individual slices from the changes in the sample width in successive cross-sectional images. The pore space of a selected region of the np-Au is infiltrated with ion-beam-deposited Pt composite before serial sectioning. The cross-sectional images are binarized and stacked according to the individual slice thicknesses, and then processed using standard reconstruction methods. For the image conditions and sample geometry used here, we are able to determine the thickness of individual slices with an accuracy much smaller than a pixel. The accuracy of the new method based on actual slice thickness is assessed by comparing it with (i) a reconstruction using the same cross-sectional images but assuming a constant slice thickness, and (ii) a reconstruction using the traditional FIB-tomography method employing a constant slice thickness. The morphology and topology of the structures are characterized using ligament and pore size distributions, interface shape distribution functions, interface normal distributions, and genus. The results suggest that the morphology and topology of the final reconstructions are significantly influenced when a constant slice thickness is assumed. The study reveals grain-to-grain variations in the morphology and topology of np-Au. - Highlights: • FIB nanotomography of nanoporous structures with feature sizes of ∼40 nm or less. • Accurate determination of individual slice thickness with subpixel precision. • The method preserves surface topography. • Quantitative 3D microstructural analysis of materials with open porosity.
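The wedge-based slice-thickness idea reduces to a simple geometric relation: for a symmetric wedge of known half-angle, each slice of thickness t widens the cross section by 2·t·tan(half-angle), so the thickness follows from the width change between successive images. The sketch below is a hypothetical illustration of that relation, not the authors' calibration code; the half-angle, widths, and units are invented.

```python
import numpy as np

def slice_thicknesses(widths_nm, half_angle_deg):
    """Per-slice thickness from successive cross-section widths of a
    symmetric wedge: each slice of thickness t widens the section by
    2 * t * tan(half_angle)."""
    alpha = np.deg2rad(half_angle_deg)
    return np.diff(widths_nm) / (2.0 * np.tan(alpha))

# Synthetic check: build widths from known thicknesses, then invert.
t_true = np.array([10.0, 12.0, 9.0, 11.0])            # nm, hypothetical
widths = 100.0 + np.concatenate(
    ([0.0], np.cumsum(2.0 * np.tan(np.deg2rad(30.0)) * t_true)))
t_est = slice_thicknesses(widths, 30.0)
```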
Towards an accurate volume reconstruction in atom probe tomography.
Beinke, Daniel; Oberdorfer, Christian; Schmitz, Guido
2016-06-01
An alternative concept for the reconstruction of atom probe data is outlined. It is based on the calculation of realistic trajectories of the evaporated ions in a recursive refinement process. To this end, the electrostatic problem is solved on a Delaunay tessellation. To enable the trajectory calculation, the order of reconstruction is inverted with respect to previous reconstruction schemes: the last atom detected is reconstructed first. In this way, the emitter shape, which controls the trajectory, can be defined throughout the duration of the reconstruction. A proof of concept is presented for 3D model tips, containing spherical precipitates or embedded layers of strongly contrasting evaporation thresholds. While the traditional method following Bas et al. generates serious distortions in these cases, a reconstruction with the proposed electrostatically informed approach improves the geometry of layers and particles significantly.
Comprehensive Use of Curvature For Robust And Accurate Online Surface Reconstruction.
Lefloch, Damien; Kluge, Markus; Sarbolandi, Hamed; Weyrich, Tim; Kolb, Andreas
2017-01-05
Interactive real-time scene acquisition from hand-held depth cameras has recently developed much momentum, enabling applications in ad-hoc object acquisition, augmented reality and other fields. A key challenge to online reconstruction remains error accumulation in the reconstructed camera trajectory, due to drift-inducing instabilities in the range scan alignments of the underlying iterative-closest-point (ICP) algorithm. Various strategies have been proposed to mitigate that drift, including SIFT-based pre-alignment, color-based weighting of ICP pairs, stronger weighting of edge features, and so on. In our work, we focus on surface curvature as a feature that is detectable on range scans alone and hence does not depend on accurate multi-sensor alignment. In contrast to previous work that took curvature into consideration, however, we treat curvature as an independent quantity that we consistently incorporate into every stage of the real-time reconstruction pipeline, including densely curvature-weighted ICP, range image fusion, local surface reconstruction, and rendering. Using multiple benchmark sequences, and in direct comparison to other state-of-the-art online acquisition systems, we show that our approach significantly reduces drift, both when analyzing individual pipeline stages in isolation, as well as seen across the online reconstruction pipeline as a whole.
Speed-of-sound compensated photoacoustic tomography for accurate imaging
Jose, J.; Willemink, G.H.; Steenbergen, W.; Leeuwen, van A.G.J.M.; Manohar, S.
2012-01-01
Purpose: In most photoacoustic (PA) tomographic reconstructions, variations in speed-of-sound (SOS) of the subject are neglected under the assumption of acoustic homogeneity. Biological tissue with spatially heterogeneous SOS cannot be accurately reconstructed under this assumption. The authors pres
Method for positron emission mammography image reconstruction
Smith, Mark Frederick
2004-10-12
An image reconstruction method comprising accepting coincidence data from either a data file or in real time from a pair of detector heads, culling event data that is outside a desired energy range, optionally saving the desired data for each detector position or for each pair of detector pixels on the two detector heads, and then reconstructing the image either by backprojection image reconstruction or by iterative image reconstruction. In the backprojection image reconstruction mode, rays are traced between centers of lines of response (LORs), counts are then either allocated by nearest-pixel interpolation or allocated by an overlap method, and then corrected for geometric effects and attenuation and the data file updated. If the iterative image reconstruction option is selected, one implementation is to compute a grid Siddon ray tracing, and to perform maximum likelihood expectation maximization (MLEM) computed by either: a) tracing parallel rays between subpixels on opposite detector heads; or b) tracing rays between randomized endpoint locations on opposite detector heads.
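The backprojection mode described above, tracing a ray between LOR endpoints and allocating counts by nearest-pixel interpolation, can be sketched in a few lines. This is an illustrative reading of that step, not the patented implementation; the sampling density and detector geometry are arbitrary choices for the demo.

```python
import numpy as np

def backproject_lor(image, p0, p1, counts=1.0, n_samples=64):
    """Allocate `counts` along the line of response from detector point
    p0 to p1 (both in (x, y) pixel coordinates) by nearest-pixel
    interpolation: sample the line uniformly and round each sample."""
    t = (np.arange(n_samples) + 0.5) / n_samples
    pts = np.outer(1.0 - t, p0) + np.outer(t, p1)     # points on the LOR
    ij = np.round(pts).astype(int)                    # nearest pixels
    # Accumulate (np.add.at handles repeated pixel indices correctly).
    np.add.at(image, (ij[:, 1], ij[:, 0]), counts / n_samples)

img = np.zeros((16, 16))
# One horizontal LOR at y = 8, carrying a single count.
backproject_lor(img, np.array([0.0, 8.0]), np.array([15.0, 8.0]))
```

The total allocated intensity equals the event count, and for this horizontal LOR all of it lands in image row 8.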
Reconstruction Formulas for Photoacoustic Sectional Imaging
Elbau, Peter; Schulze, Rainer
2011-01-01
The literature on reconstruction formulas for photoacoustic tomography (PAT) is vast. The various reconstruction formulas differ by used measurement devices and geometry on which the data are sampled. In standard photoacoustic imaging (PAI), the object under investigation is illuminated uniformly. Recently, sectional photoacoustic imaging techniques, using focusing techniques for initializing and measuring the pressure along a plane, appeared in the literature. This paper surveys existing and provides novel exact reconstruction formulas for sectional photoacoustic imaging.
Reconstructing HST Images of Asteroids
Storrs, A. D.; Bank, S.; Gerhardt, H.; Makhoul, K.
2003-12-01
We present reconstructions of images of 22 large main belt asteroids that were observed by the Hubble Space Telescope with the Wide-Field/Planetary cameras. All images were restored with the MISTRAL program (Mugnier, Fusco, and Conan 2003) at enhanced spatial resolution. This is possible thanks to the well-studied and stable point spread function (PSF) on HST. We present some modeling of this process and determine the improvement in Strehl ratio attainable for WF/PC (aberrated) images. We will report sizes, shapes, and albedos for these objects, as well as any surface features. Images taken with the WFPC-2 instrument were made in a variety of filters so that it should be possible to investigate changes in mineralogy across the surface of the larger asteroids in a manner similar to that done on 4 Vesta by Binzel et al. (1997). Of particular interest are a possible water of hydration feature on 1 Ceres, and the non-observation of a constriction or gap between the components of 216 Kleopatra. Reduction of these data was aided by grant HST-GO-08583.08A from the Space Telescope Science Institute. References: Mugnier, L.M., T. Fusco, and J.-M. Conan, 2003. JOSA A (submitted). Binzel, R.P., Gaffey, M.J., Thomas, P.C., Zellner, B.H., Storrs, A.D., and Wells, E.N. 1997. Icarus 128, pp. 95-103.
Accurate reconstruction of insertion-deletion histories by statistical phylogenetics.
Oscar Westesson
The Multiple Sequence Alignment (MSA) is a computational abstraction that represents a partial summary either of indel history or of structural similarity. Taking the former view (indel history), it is possible to use formal automata theory to generalize the phylogenetic likelihood framework for finite substitution models (Dayhoff's probability matrices and Felsenstein's pruning algorithm) to arbitrary-length sequences. In this paper, we report results of a simulation-based benchmark of several methods for reconstruction of indel history. The methods tested include a relatively new algorithm for statistical marginalization of MSAs that sums over a stochastically-sampled ensemble of the most probable evolutionary histories. For mammalian evolutionary parameters on several different trees, the single most likely history sampled by our algorithm appears less biased than histories reconstructed by other MSA methods. The algorithm can also be used for alignment-free inference, where the MSA is explicitly summed out of the analysis. As an illustration of our method, we discuss reconstruction of the evolutionary histories of human protein-coding genes.
Fast and accurate generation method of PSF-based system matrix for PET reconstruction
Sun, Xiao-Li; Liu, Shuang-Quan; Yun, Ming-Kai; Li, Dao-Wu; Gao, Juan; Li, Mo-Han; Chai, Pei; Tang, Hao-Hui; Zhang, Zhi-Ming; Wei, Long
2017-04-01
This work investigates the positional single photon incidence response (P-SPIR) to provide an accurate point spread function (PSF)-contained system matrix and its incorporation within the image reconstruction framework. Based on the Geant4 Application for Emission Tomography (GATE) simulation, P-SPIR theory takes both incidence angle and incidence position of the gamma photon into account during crystal subdivision, instead of only taking the former into account, as in single photon incidence response (SPIR). The response distribution obtained in this fashion was validated using Monte Carlo simulations. In addition, two-block penetration and normalization of the response probability are introduced to improve the accuracy of the PSF. With the incorporation of the PSF, the homogenization model is then analyzed to calculate the spread distribution of each line-of-response (LOR). A primate PET scanner, Eplus-260, developed by the Institute of High Energy Physics, Chinese Academy of Sciences (IHEP), was employed to evaluate the proposed method. The reconstructed images indicate that the P-SPIR method can effectively mitigate the depth-of-interaction (DOI) effect, especially at the peripheral area of field-of-view (FOV). Furthermore, the method can be applied to PET scanners with any other structures and list-mode data format with high flexibility and efficiency. Supported by National Natural Science Foundation of China (81301348) and China Postdoctoral Science Foundation (2015M570154)
Studies on image compression and image reconstruction
Sayood, Khalid; Nori, Sekhar; Araj, A.
1994-01-01
During this six-month period our work concentrated on three somewhat different areas. We looked at and developed a number of error concealment schemes for use in a variety of video coding environments. This work is described in an accompanying (draft) Masters thesis. In the thesis we describe the application of these techniques to the MPEG video coding scheme. We felt that the unique frame ordering approach used in the MPEG scheme would be a challenge to any error concealment/error recovery technique. We continued with our work in the vector quantization area. We have also developed a new type of vector quantizer, which we call a scan predictive vector quantizer. The scan predictive VQ was tested on data processed at Goddard to approximate Landsat 7 HRMSI resolution and compared favorably with existing VQ techniques. A paper describing this work is included. The third area is concerned more with reconstruction than compression. While there is a variety of efficient lossless image compression schemes, they all have the common property that they use past data to encode future data. This is done either by taking differences, by context modeling, or by building dictionaries. When encoding large images, this common property becomes a common flaw. When the user wishes to decode just a portion of the image, the requirement that the past history be available forces the decoding of a significantly larger portion of the image than desired by the user. Even with intelligent partitioning of the image dataset, the number of pixels decoded may be four times the number of pixels requested. We have developed an adaptive scanning strategy which can be used with any lossless compression scheme and which lowers the additional number of pixels to be decoded to about 7 percent of the number of pixels requested. A paper describing these results is included.
DCT and DST Based Image Compression for 3D Reconstruction
Siddeq, Mohammed M.; Rodrigues, Marcos A.
2017-03-01
This paper introduces a new method for 2D image compression whose quality is demonstrated through accurate 3D reconstruction using structured light techniques and 3D reconstruction from multiple viewpoints. The method is based on two discrete transforms: (1) a one-dimensional Discrete Cosine Transform (DCT) is applied to each row of the image; (2) the output from the previous step is transformed again by a one-dimensional Discrete Sine Transform (DST), which is applied to each column of data, generating new sets of high-frequency components followed by quantization of the higher frequencies. The output is then divided into two parts, where the low-frequency components are compressed by arithmetic coding and the high-frequency ones by an efficient minimization encoding algorithm. At the decompression stage, a binary search algorithm is used to recover the original high-frequency components. The technique is demonstrated by compressing 2D images at up to a 99% compression ratio. The decompressed images, which include images with structured light patterns for 3D reconstruction and from multiple viewpoints, are of high perceptual quality, yielding accurate 3D reconstruction. Perceptual assessment and objective quality of compression are compared with JPEG and JPEG2000 through 2D and 3D RMSE. Results show that the proposed compression method is superior to both JPEG and JPEG2000 concerning 3D reconstruction, and with equivalent perceptual quality to JPEG2000.
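The two-transform front end of steps (1) and (2) can be sketched with explicit orthonormal DCT-II and DST-II matrices; the quantization, coding, and binary-search recovery stages are omitted, so this is a minimal sketch of the transform pair only, not the full method.

```python
import numpy as np

def dct2_matrix(n):
    # Orthonormal DCT-II matrix (rows are basis vectors).
    k, i = np.arange(n)[:, None], np.arange(n)[None, :]
    m = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    m[0] /= np.sqrt(2.0)
    return m

def dst2_matrix(n):
    # Orthonormal DST-II matrix.
    k, i = np.arange(1, n + 1)[:, None], np.arange(n)[None, :]
    m = np.sqrt(2.0 / n) * np.sin(np.pi * (2 * i + 1) * k / (2 * n))
    m[-1] /= np.sqrt(2.0)
    return m

def forward(img):
    # Step 1: DCT along each row; step 2: DST along each column.
    C, S = dct2_matrix(img.shape[1]), dst2_matrix(img.shape[0])
    return S @ (img @ C.T)

def inverse(coeffs):
    # Undo the column DST, then the row DCT (orthonormal: transpose).
    C, S = dct2_matrix(coeffs.shape[1]), dst2_matrix(coeffs.shape[0])
    return S.T @ coeffs @ C

img = np.random.default_rng(0).random((8, 12))
coeffs = forward(img)
restored = inverse(coeffs)
```

Because both matrices are orthonormal, the transform pair is exactly invertible and energy-preserving; all of the compression in the paper's method comes from the subsequent quantization and coding of `coeffs`.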
Sparse Image Reconstruction in Computed Tomography
Jørgensen, Jakob Sauer
In recent years, increased focus on the potentially harmful effects of x-ray computed tomography (CT) scans, such as radiation-induced cancer, has motivated research on new low-dose imaging techniques. Sparse image reconstruction methods, as studied for instance in the field of compressed sensing… and limitations of sparse reconstruction methods in CT, in particular in a quantitative sense. For example, relations between image properties such as contrast, structure and sparsity, tolerable noise levels, sufficient sampling levels, the choice of sparse reconstruction formulation and the achievable image…
Sheng, Qiwei; Matthews, Thomas P; Xia, Jun; Zhu, Liren; Wang, Lihong V; Anastasio, Mark A
2015-01-01
Photoacoustic computed tomography (PACT) is an emerging computed imaging modality that exploits optical contrast and ultrasonic detection principles to form images of the absorbed optical energy density within tissue. When the imaging system employs conventional piezoelectric ultrasonic transducers, the ideal photoacoustic (PA) signals are degraded by the transducers' acousto-electric impulse responses (EIRs) during the measurement process. If unaccounted for, this can degrade the accuracy of the reconstructed image. In principle, the effect of the EIRs on the measured PA signals can be ameliorated via deconvolution; images can be reconstructed subsequently by application of a reconstruction method that assumes an idealized EIR. Alternatively, the effect of the EIR can be incorporated into an imaging model and implicitly compensated for during reconstruction. In either case, the efficacy of the correction can be limited by errors in the assumed EIRs. In this work, a joint optimization approach to PACT image r...
IIR GRAPPA for parallel MR image reconstruction.
Chen, Zhaolin; Zhang, Jingxin; Yang, Ran; Kellman, Peter; Johnston, Leigh A; Egan, Gary F
2010-02-01
Accelerated parallel MRI has advantage in imaging speed, and its image quality has been improved continuously in recent years. This paper introduces a two-dimensional infinite impulse response model of inverse filter to replace the finite impulse response model currently used in generalized autocalibrating partially parallel acquisitions class image reconstruction methods. The infinite impulse response model better characterizes the correlation of k-space data points and better approximates the perfect inversion of parallel imaging process, resulting in a novel generalized image reconstruction method for accelerated parallel MRI. This k-space-based reconstruction method includes the conventional generalized autocalibrating partially parallel acquisitions class methods as special cases and has a new infinite impulse response data estimation mechanism for effective improvement of image quality. The experiments on in vivo MRI data show that the proposed method significantly reduces reconstruction errors compared with the conventional two-dimensional generalized autocalibrating partially parallel acquisitions method, particularly at the high acceleration rates.
Analytic image concept combined to SENSE reconstruction
Yankam Njiwa, J; Baltes, C.; Rudin, M.
2011-01-01
Two approaches to reconstructing undersampled partial k-space data acquired with multiple coils are compared: homodyne detection combined with SENSE (HM_SENSE) and analytic image reconstruction combined with SENSE (AI_SENSE). The latter overcomes limitations of HM_SENSE by considering aliased images as analytic, thus avoiding the need for the phase correction required for HM_SENSE. MATERIALS AND METHODS: In vivo imaging experiments were carried out in male Lewis rats using both gradient echo...
Quantitative photoacoustic image reconstruction improves accuracy in deep tissue structures.
Mastanduno, Michael A; Gambhir, Sanjiv S
2016-10-01
Photoacoustic imaging (PAI) is emerging as a potentially powerful imaging tool with multiple applications. Image reconstruction for PAI has been relatively limited because of limited or no modeling of light delivery to deep tissues. This work demonstrates a numerical approach to quantitative photoacoustic image reconstruction that minimizes depth- and spectrally-derived artifacts. We present the first time-domain quantitative photoacoustic image reconstruction algorithm that models optical sources through acoustic data to create quantitative images of absorption coefficients. We demonstrate quantitative accuracy of less than 5% error in large 3 cm diameter 2D geometries with multiple targets, and within 22% error in the largest quantitative photoacoustic studies to date (6 cm diameter). We extend the algorithm to spectral data, reconstructing 6 varying chromophores to within 17% of the true values. This quantitative PA tomography method was able to improve considerably on filtered backprojection from the standpoint of image quality and absolute and relative quantification in all our simulation geometries. We characterize the effects of time step size, initial guess, and source configuration on final accuracy. This work could help to generate accurate quantitative images from both endogenous absorbers and exogenous photoacoustic dyes in both preclinical and clinical work, thereby increasing the information content obtained especially from deep-tissue photoacoustic imaging studies.
Tianyun Wang
2014-03-01
In recent years, various applications regarding sparse continuous signal recovery, such as source localization, radar imaging, and communication channel estimation, have been addressed from the perspective of compressive sensing (CS) theory. However, there are two major defects that need to be tackled when considering any practical utilization. The first issue is the off-grid problem, caused by the basis mismatch between arbitrarily located unknowns and the pre-specified dictionary, which makes conventional CS reconstruction methods degrade considerably. The second important issue is the urgent demand for low-complexity algorithms, especially when faced with the requirement of real-time implementation. In this paper, to deal with these two problems, we present three fast and accurate sparse reconstruction algorithms, termed HR-DCD, Hlog-DCD and Hlp-DCD, which are based on homotopy, dichotomous coordinate descent (DCD) iterations and non-convex regularizations, combined with the grid refinement technique. Experimental results are provided to demonstrate the effectiveness of the proposed algorithms and related analysis.
Accurate particle position measurement from images
Feng, Yan; Liu, Bin; DOI: 10.1063/1.2735920
2011-01-01
The moment method is an image analysis technique for sub-pixel estimation of particle positions. The total error in the calculated particle position includes effects of pixel locking and random noise in each pixel. Pixel locking, also known as peak locking, is an artifact where calculated particle positions are concentrated at certain locations relative to pixel edges. We report simulations to gain an understanding of the sources of error and their dependence on parameters the experimenter can control. We suggest an algorithm, and we find optimal parameters an experimenter can use to minimize total error and pixel locking. Simulating a dusty plasma experiment, we find that a sub-pixel accuracy of 0.017 pixel or better can be attained. These results are also useful for improving particle position measurement and particle tracking velocimetry (PTV) using video microscopy, in fields including colloids, biology, and fluid mechanics.
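The moment method itself is compact: the sub-pixel position is the intensity-weighted centroid of the pixels above a threshold. A minimal numpy sketch follows, using a synthetic Gaussian spot; the spot parameters and threshold are illustrative choices, and the paper's pixel-locking and noise analysis is not reproduced.

```python
import numpy as np

def moment_centroid(img, threshold=0.0):
    """Sub-pixel particle position as the intensity-weighted first
    moment (centroid) of pixels above a threshold."""
    img = np.asarray(img, dtype=float)
    w = np.where(img > threshold, img, 0.0)
    rows, cols = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    total = w.sum()
    return (w * cols).sum() / total, (w * rows).sum() / total

# Synthetic Gaussian spot centered at (x, y) = (4.3, 5.7), sigma = 1 px.
rows, cols = np.mgrid[0:11, 0:11]
spot = np.exp(-((cols - 4.3) ** 2 + (rows - 5.7) ** 2) / 2.0)
cx, cy = moment_centroid(spot)
```

On this noise-free, well-sampled spot the centroid lands within a small fraction of a pixel of the true center; the paper's point is that with noise and coarse sampling, the threshold and window choices control the residual pixel-locking bias.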
Thermographic image reconstruction using ultrasound reconstruction from virtual waves
Burgholzer, Peter; Gruber, Jürgen; Mayr, Günther
2016-01-01
Reconstruction of subsurface features from ultrasound signals measured on the surface is widely used in medicine and non-destructive testing. In this work, we introduce a concept for applying image reconstruction methods known from ultrasonic imaging to thermographic signals, i.e. to the measured temperature evolution on a sample surface. Before using these imaging methods, a virtual signal is calculated by applying a transformation to the measured temperature evolution. The virtual signal is calculated locally for every detection point and has the same initial temperature distribution as the measured signal, but is a solution of the wave equation. The introduced transformation can be used for every shape of the detection surface and in every dimension. It describes all the irreversibility of the heat diffusion, which is responsible for the spatial resolution getting worse with increasing depth. Up to now, mostly one-dimensional methods, e.g. for depth-profiling, have been used for thermographic imaging, which are sui...
4D image reconstruction for emission tomography
Reader, Andrew J.; Verhaeghe, Jeroen
2014-11-01
An overview of the theory of 4D image reconstruction for emission tomography is given along with a review of the current state of the art, covering both positron emission tomography and single photon emission computed tomography (SPECT). By viewing 4D image reconstruction as a matter of either linear or non-linear parameter estimation for a set of spatiotemporal functions chosen to approximately represent the radiotracer distribution, the areas of so-called ‘fully 4D’ image reconstruction and ‘direct kinetic parameter estimation’ are unified within a common framework. Many choices of linear and non-linear parameterization of these functions are considered (including the important case where the parameters have direct biological meaning), along with a review of the algorithms which are able to estimate these often non-linear parameters from emission tomography data. The other crucial components to image reconstruction (the objective function, the system model and the raw data format) are also covered, but in less detail due to the relatively straightforward extension from their corresponding components in conventional 3D image reconstruction. The key unifying concept is that maximum likelihood or maximum a posteriori (MAP) estimation of either linear or non-linear model parameters can be achieved in image space after carrying out a conventional expectation maximization (EM) update of the dynamic image series, using a Kullback-Leibler distance metric (comparing the modeled image values with the EM image values), to optimize the desired parameters. For MAP, an image-space penalty for regularization purposes is required. The benefits of 4D and direct reconstruction reported in the literature are reviewed, and furthermore demonstrated with simple simulation examples. It is clear that the future of reconstructing dynamic or functional emission tomography images, which often exhibit high levels of spatially correlated noise, should ideally exploit these 4D
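The EM update referred to above is, for a linear system model, the classic ML-EM multiplicative step. A toy numpy sketch on a static (non-dynamic) problem with a made-up system matrix, to show the form of the update only; real emission reconstruction uses a physical system matrix and noisy Poisson data.

```python
import numpy as np

def mlem(A, y, n_iter=2000):
    """ML-EM for Poisson data y ~ A @ x: multiplicative update
    x <- x * (A^T (y / (A x))) / (A^T 1)."""
    x = np.ones(A.shape[1])
    sens = A.sum(axis=0)                       # sensitivity image, A^T 1
    for _ in range(n_iter):
        x *= (A.T @ (y / (A @ x))) / sens
    return x

# Toy, well-conditioned nonnegative system matrix (hypothetical scanner).
A = np.vstack([np.eye(4), np.eye(4)[::-1], 0.25 * np.ones((2, 4))]) + 0.05
x_true = np.array([1.0, 3.0, 0.5, 2.0])
y = A @ x_true                                 # noise-free data
x_hat = mlem(A, y)
```

With consistent noise-free data and a full-column-rank A the iteration recovers x_true; with real Poisson data one stops early or adds the image-space MAP penalty discussed in the abstract.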
Reconstruction of Undersampled Atomic Force Microscopy Images
Jensen, Tobias Lindstrøm; Arildsen, Thomas; Østergaard, Jan
2013-01-01
. Moreover, it is often required to take several images before a relevant observation region is identified. In this paper we show how to significantly reduce the image acquisition time by undersampling. The reconstruction of an undersampled AFM image can be viewed as an inpainting, interpolating problem...
Jayet, Baptiste; Ahmad, Junaid; Taylor, Shelley L.; Hill, Philip J.; Dehghani, Hamid; Morgan, Stephen P.
2017-03-01
Bioluminescence imaging (BLI) is a commonly used imaging modality in biology to study cancer in vivo in small animals. Images are generated using a camera to map the optical fluence emerging from the studied animal; a numerical reconstruction algorithm is then used to locate the sources and estimate their sizes. However, due to the strong light scattering properties of biological tissues, the resolution is very limited (around a few millimetres), and obtaining accurate information about the pathology is therefore complicated. We propose a combined ultrasound/optics approach to improve the accuracy of these techniques. In addition to the BLI data, an ultrasound probe driven by a scanner is used for two main objectives: first, to obtain a pure acoustic image, which provides structural information about the sample; and second, to alter the light emission by the bioluminescent sources embedded inside the sample, which is monitored using a high speed optical detector (e.g. a photomultiplier tube). We will show that this last measurement, used in conjunction with the ultrasound data, can provide accurate localisation of the bioluminescent sources. This can be used as a priori information by the numerical reconstruction algorithm, greatly increasing the accuracy of the BLI image reconstruction as compared to the image generated using only BLI data.
Scattering Correction For Image Reconstruction In Flash Radiography
Cao, Liangzhi; Wang, Mengqi; Wu, Hongchun; Liu, Zhouyu; Cheng, Yuxiong; Zhang, Hongbo [Xi'an Jiaotong Univ., Xi'an (China)]
2013-08-15
Scattered photons cause blurring and distortions in flash radiography, significantly reducing the accuracy of image reconstruction. Here the effect of the scattered photons is taken into account, and an iterative subtraction of the scattered photons is proposed to correct the scattering effect for image restoration. To remove the scattering contribution, the flux of scattered photons is estimated as the sum of two components: the single-scattered component is calculated accurately together with the uncollided flux along the characteristic ray, while the multiple-scattered component is evaluated using correction coefficients pre-obtained from Monte Carlo simulations. The arbitrary-geometry pretreatment and ray tracing are carried out based on the customization of AutoCAD. With the above model, an Iterative Procedure for image restORation code, IPOR, is developed. Numerical results demonstrate that the IPOR code is much more accurate than the direct reconstruction solution without scattering correction and has a very high computational efficiency.
Image reconstruction for robot assisted ultrasound tomography
Aalamifar, Fereshteh; Zhang, Haichong K.; Rahmim, Arman; Boctor, Emad M.
2016-04-01
An investigation of several image reconstruction methods for robot-assisted ultrasound (US) tomography setup is presented. In the robot-assisted setup, an expert moves the US probe to the location of interest, and a robotic arm automatically aligns another US probe with it. The two aligned probes can then transmit and receive US signals which are subsequently used for tomographic reconstruction. This study focuses on reconstruction of the speed of sound. In various simulation evaluations as well as in an experiment with a millimeter-range inaccuracy, we demonstrate that the limited data provided by two probes can be used to reconstruct pixel-wise images differentiating between media with different speeds of sound. Combining the results of this investigation with the developed robot-assisted US tomography setup, we envision feasibility of this setup for tomographic imaging in applications beyond breast imaging, with potentially significant efficacy in cancer diagnosis.
Li, Hechao; Kaira, Shashank; Mertens, James; Chawla, Nikhilesh; Jiao, Yang
2016-12-01
An accurate knowledge of the complex microstructure of a heterogeneous material is crucial for its performance prediction, prognosis and optimization. X-ray tomography has provided a nondestructive means for microstructure characterization in 3D and 4D (i.e. structural evolution over time), in which a material is typically reconstructed from a large number of tomographic projections using the filtered back-projection (FBP) method or algebraic reconstruction techniques (ART). Here, we present in detail a stochastic optimization procedure that enables one to accurately reconstruct material microstructure from a small number of absorption-contrast x-ray tomographic projections. This discrete tomography reconstruction procedure is in contrast to the commonly used FBP and ART, which usually require thousands of projections for accurate microstructure rendition. The utility of our stochastic procedure is first demonstrated by reconstructing a wide class of two-phase heterogeneous materials, including sandstone and hard-particle packing, from simulated limited-angle projections in both cone-beam and parallel-beam projection geometry. It is then applied to reconstruct tailored Sn-sphere-clay-matrix systems from limited-angle cone-beam data obtained via a lab-scale tomography facility at Arizona State University and parallel-beam synchrotron data obtained at the Advanced Photon Source, Argonne National Laboratory. In addition, we examine the information content of tomography data by successively incorporating a larger number of projections and quantifying the accuracy of the reconstructions. We show that only a small number of projections (e.g. 20-40, depending on the complexity of the microstructure of interest and the desired resolution) are necessary for accurate material reconstruction via our stochastic procedure, which indicates its high efficiency in using limited structural information. The ramifications of the stochastic reconstruction procedure in 4D materials science are also...
Image Reconstruction for Prostate Specific Nuclear Medicine imagers
Mark Smith
2007-01-11
There is increasing interest in the design and construction of nuclear medicine detectors for dedicated prostate imaging. These include detectors designed for imaging the biodistribution of radiopharmaceuticals labeled with single gamma as well as positron-emitting radionuclides. New detectors and acquisition geometries present challenges and opportunities for image reconstruction. In this contribution various strategies for image reconstruction for these special purpose imagers are reviewed. Iterative statistical algorithms provide a framework for reconstructing prostate images from a wide variety of detectors and acquisition geometries for PET and SPECT. The key to their success is modeling the physics of photon transport and data acquisition and the Poisson statistics of nuclear decay. Analytic image reconstruction methods can be fast and are useful for favorable acquisition geometries. Future perspectives on algorithm development and data analysis for prostate imaging are presented.
3D Reconstruction of NMR Images
Peter Izak
2007-01-01
This paper introduces an experiment in 3D reconstruction of NMR images scanned from a magnetic resonance device. Methods that can be used for 3D reconstruction of magnetic resonance images in biomedical applications are described. The main idea is based on the marching cubes algorithm. For this task, the Vision Assistant package, which is part of LabVIEW, was chosen.
Investigation of iterative image reconstruction in three-dimensional optoacoustic tomography
wang, Kun; Oraevsky, Alexander A; Anastasio, Mark A
2012-01-01
Iterative image reconstruction algorithms for optoacoustic tomography (OAT), also known as photoacoustic tomography, can improve image quality over analytic algorithms by incorporating accurate models of the imaging physics, instrument response, and measurement noise. However, to date, there have been few reported attempts to employ advanced iterative image reconstruction algorithms for improving image quality in three-dimensional (3D) OAT. In this work, we implement and investigate two iterative image reconstruction methods for use with a 3D OAT small animal imager: a penalized least-squares (PLS) method employing a quadratic smoothness penalty and a PLS method employing a total variation norm penalty. The reconstruction algorithms employ accurate models of the ultrasonic transducer impulse responses. Experimental data sets are employed to compare the performances of the iterative reconstruction algorithms to that of a 3D filtered backprojection (FBP) algorithm. By ...
Bayesian image reconstruction: Application to emission tomography
Nunez, J.; Llacer, J.
1989-02-01
In this paper we propose a Maximum a Posteriori (MAP) method of image reconstruction in the Bayesian framework for the Poisson noise case. We use entropy to define the prior probability and the likelihood to define the conditional probability. The method uses sharpness parameters which can be theoretically computed or adjusted, allowing us to obtain MAP reconstructions without the problem of the "grey" reconstructions associated with pre-Bayesian methods. We have developed several ways to solve the reconstruction problem and propose a new iterative algorithm which is stable, maintains positivity and converges to feasible images faster than the Maximum Likelihood Estimate method. We have successfully applied the new method to the case of Emission Tomography, both with simulated and real data. 41 refs., 4 figs., 1 tab.
Image reconstruction from incomplete convolution data via total variation regularization
Zhida Shen
2015-02-01
Variational models with Total Variation (TV) regularization have long been known to preserve image edges and produce high-quality reconstructions. On the other hand, recent theory on compressive sensing has shown that it is feasible to accurately reconstruct images from a few linear measurements via TV regularization. In general, however, TV models are difficult to solve due to their nondifferentiability and the universal coupling of variables. In this paper, we propose the use of an alternating direction method for image reconstruction from highly incomplete convolution data, where an image is reconstructed as a minimizer of an energy function that sums a TV term for image regularity and a least-squares term for data fitting. Our algorithm, called RecPK, takes advantage of problem structure and has an extremely low per-iteration cost. To demonstrate the efficiency of RecPK, we compare it with TwIST, a state-of-the-art algorithm for minimizing TV models. Moreover, we also demonstrate the usefulness of RecPK in image zooming.
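The TV-plus-least-squares energy described above can be illustrated with a minimal sketch. RecPK itself uses an alternating direction method; this stand-in instead applies plain gradient descent to a smoothed TV term on a 1-D toy signal with an identity forward operator, and all parameter values here are assumptions:

```python
import numpy as np

# Minimize  E(u) = sum_i sqrt((u[i+1]-u[i])^2 + eps) + (lam/2)||u - f||^2
# by gradient descent: a smoothed stand-in for the TV + least-squares
# energy the abstract describes (not the RecPK algorithm itself).

def tv_denoise(f, lam=2.0, eps=1e-2, step=1e-2, n_iter=3000):
    u = f.copy()
    for _ in range(n_iter):
        du = np.diff(u)
        w = du / np.sqrt(du**2 + eps)   # derivative of smoothed |du|
        tv_grad = np.zeros_like(u)
        tv_grad[:-1] -= w               # each difference pulls its endpoints
        tv_grad[1:] += w
        u -= step * (tv_grad + lam * (u - f))   # TV term + data-fit term
    return u

rng = np.random.default_rng(1)
clean = np.concatenate([np.zeros(50), np.ones(50)])  # piecewise-constant signal
noisy = clean + 0.2 * rng.standard_normal(100)
denoised = tv_denoise(noisy)
print(np.linalg.norm(denoised - clean) < np.linalg.norm(noisy - clean))
```

The piecewise-constant test signal is exactly the regime where TV regularization shines: noise in flat regions is smoothed away while the jump is largely preserved.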
Image reconstruction under non-Gaussian noise
Sciacchitano, Federica
During acquisition and transmission, images are often blurred and corrupted by noise. One of the fundamental tasks of image processing is to reconstruct the clean image from a degraded version. The process of recovering the original image from the data is an example of an inverse problem. … This PhD thesis intends to solve some of the many open questions for image restoration under non-Gaussian noise. The two main kinds of noise studied in this PhD project are impulse noise and Cauchy noise. Impulse noise is due, for instance, to malfunctioning pixel elements in camera sensors, errors … that the CM estimate outperforms the MAP estimate when the error depends on Bregman distances. This PhD project can have many applications in modern society; in fact, the reconstruction of high-quality images with less noise and more details enhances image processing operations such as edge detection …
Elasticity reconstructive imaging by means of stimulated echo MRI.
Chenevert, T L; Skovoroda, A R; O'Donnell, M; Emelianov, S Y
1998-03-01
A method is introduced to measure internal mechanical displacement and strain by means of MRI. Such measurements are needed to reconstruct an image of the elastic Young's modulus. A stimulated echo acquisition sequence with additional gradient pulses encodes internal displacements in response to an externally applied differential deformation. The sequence provides an accurate measure of static displacement by limiting the mechanical transitions to the mixing period of the stimulated echo. Elasticity reconstruction involves definition of a region of interest having uniform Young's modulus along its boundary and subsequent solution of the discretized elasticity equilibrium equations. Data acquisition and reconstruction were performed on a urethane rubber phantom of known elastic properties and an ex vivo canine kidney phantom; the elastic properties are well represented on the Young's modulus images. The long-term objective of this work is to provide a means for remote palpation and elasticity quantitation in deep tissues otherwise inaccessible to manual palpation.
Reconstruction Algorithms in Undersampled AFM Imaging
Arildsen, Thomas; Oxvig, Christian Schou; Pedersen, Patrick Steffen
2016-01-01
This paper provides a study of spatial undersampling in atomic force microscopy (AFM) imaging followed by different image reconstruction techniques based on sparse approximation as well as interpolation. The main reason for using undersampling is that it reduces the path length and thereby the s...
Tomographic image reconstruction from continuous projections
Cant, J.; Palenstijn, W.J.; Behiels, G.; Sijbers, J.
2014-01-01
An important design aspect in tomographic image reconstruction is the choice between a step-and-shoot protocol versus continuous X-ray tube movement for image acquisition. A step-and-shoot protocol implies a perfectly still tube during X-ray exposure, and hence involves moving the tube to its next p
Iterative Reconstruction for Differential Phase Contrast Imaging
Koehler, T.; Brendel, B.; Roessl, E.
2011-01-01
Purpose: The purpose of this work is to combine two areas of active research in tomographic x-ray imaging. The first one is the use of iterative reconstruction techniques. The second one is differential phase contrast imaging (DPCI). Method: We derive an SPS type maximum likelihood (ML) reconstructi
Monte-Carlo simulations and image reconstruction for novel imaging scenarios in emission tomography
Gillam, John E. [The University of Sydney, Faculty of Health Sciences and The Brain and Mind Centre, Camperdown (Australia); Rafecas, Magdalena, E-mail: rafecas@imt.uni-luebeck.de [University of Lubeck, Institute of Medical Engineering, Ratzeburger Allee 160, 23538 Lübeck (Germany)
2016-02-11
Emission imaging incorporates both the development of dedicated devices for data acquisition as well as algorithms for recovering images from that data. Emission tomography is an indirect approach to imaging. The effect of device modification on the final image can be understood through both the way in which data are gathered, using simulation, and the way in which the image is formed from that data, or image reconstruction. When developing novel devices, systems and imaging tasks, accurate simulation and image reconstruction allow performance to be estimated, and in some cases optimized, using computational methods before or during the process of physical construction. However, there are a vast range of approaches, algorithms and pre-existing computational tools that can be exploited and the choices made will affect the accuracy of the in silico results and quality of the reconstructed images. On the one hand, should important physical effects be neglected in either the simulation or reconstruction steps, specific enhancements provided by novel devices may not be represented in the results. On the other hand, over-modeling of device characteristics in either step leads to large computational overheads that can confound timely results. Here, a range of simulation methodologies and toolkits are discussed, as well as reconstruction algorithms that may be employed in emission imaging. The relative advantages and disadvantages of a range of options are highlighted using specific examples from current research scenarios.
Focusing criterion in DHM image reconstruction
Mihailescu, M.; Mihale, N.; Popescu, R. C.; Acasandrei, A.; Paun, I. A.; Dinescu, M.; Scarlat, E.
2015-02-01
This study presents the theoretical approach and practical results of the precise procedure involved in hologram reconstruction for finding the optimally focused image of MG63 osteoblast-like cells cultivated on flat polymeric substrates. The morphology and dynamics of the cells are investigated by the digital holographic microscopy (DHM) technique. The reconstruction is performed digitally using an algorithm based on the scalar theory of diffraction in the Fresnel approximation. The quality of the 3D images of the cells depends crucially on the ability of the reconstruction chain to fit the parameters of the optical recording, particularly the focusing value. Our proposal for finding the focused image is based on decomposing the images into gray levels and analysing their histograms. More precisely, the focusing criterion is based on evaluating the form of this distribution.
Techniques in Iterative Proton CT Image Reconstruction
Penfold, Scott
2015-01-01
This is a review paper on some of the physics, modeling, and iterative algorithms in proton computed tomography (pCT) image reconstruction. The primary challenge in pCT image reconstruction lies in the degraded spatial resolution resulting from multiple Coulomb scattering within the imaged object. Analytical models such as the most likely path (MLP) have been proposed to predict the scattered trajectory from measurements of individual proton location and direction before and after the object. Iterative algorithms provide a flexible tool with which to incorporate these models into image reconstruction. The modeling leads to a large and sparse linear system of equations that can efficiently be solved by projection-method-based iterative algorithms. Such algorithms perform projections of the iterates onto the hyperplanes that are represented by the linear equations of the system. They perform these projections in possibly various algorithmic structures, such as block-iterative projections (BIP), string-averaging...
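The hyperplane-projection step these algorithms share can be sketched in a few lines. This is the classic sequential Kaczmarz (ART) sweep on a toy linear system, an assumed stand-in rather than the specific BIP or string-averaging schemes the review covers:

```python
import numpy as np

# Kaczmarz sweep: project the current iterate onto the hyperplane of
# each equation a_i . x = b_i in turn. Block-iterative and
# string-averaging methods reorganize these same projections.

def kaczmarz(A, b, n_sweeps=100):
    x = np.zeros(A.shape[1])
    for _ in range(n_sweeps):
        for a_i, b_i in zip(A, b):
            # orthogonal projection of x onto {z : a_i . z = b_i}
            x += (b_i - a_i @ x) / (a_i @ a_i) * a_i
    return x

# Toy consistent system (assumption): Kaczmarz converges to its solution.
A = np.array([[2.0, 1.0], [1.0, 3.0], [1.0, -1.0]])
x_true = np.array([1.0, 2.0])
b = A @ x_true
print(np.allclose(kaczmarz(A, b), x_true, atol=1e-6))
```

For consistent systems the iterates converge to a solution; in pCT the rows correspond to individual proton histories along their most likely paths.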
Mingjian Sun
2015-01-01
Photoacoustic imaging is an innovative technique for imaging biomedical tissues. The time reversal reconstruction algorithm, in which a numerical model of the acoustic forward problem is run backwards in time, is widely used. In this paper, a time reversal reconstruction algorithm based on particle swarm optimization (PSO) optimized support vector machine (SVM) interpolation is proposed for photoacoustic imaging. Numerical results show that the reconstructed images of the proposed algorithm are more accurate than those of time reversal algorithms based on nearest neighbor, linear, and cubic convolution interpolation, providing higher imaging quality with significantly fewer measurement positions or scanning times.
Bayesian Image Reconstruction Based on Voronoi Diagrams
Cabrera, G F; Hitschfeld, N
2007-01-01
We present a Bayesian Voronoi image reconstruction technique (VIR) for interferometric data. Bayesian analysis applied to the inverse problem allows us to derive the a-posteriori probability of a novel parameterization of interferometric images. We use a variable Voronoi diagram as our model in place of the usual fixed pixel grid. A quantization of the intensity field allows us to calculate the likelihood function and a-priori probabilities. The Voronoi image is optimized including the number of polygons as free parameters. We apply our algorithm to deconvolve simulated interferometric data. Residuals, restored images and chi^2 values are used to compare our reconstructions with fixed grid models. VIR has the advantage of modeling the image with few parameters, obtaining a better image from a Bayesian point of view.
Proton computed tomography images with algebraic reconstruction
Bruzzi, M.; Civinini, C.; Scaringella, M.; Bonanno, D.; Brianzi, M.; Carpinelli, M.; Cirrone, G. A. P.; Cuttone, G.; Presti, D. Lo; Maccioni, G.; Pallotta, S.; Randazzo, N.; Romano, F.; Sipala, V.; Talamonti, C.; Vanzi, E.
2017-02-01
A prototype proton Computed Tomography (pCT) system for hadron-therapy has been manufactured and tested in a 175 MeV proton beam with a non-homogeneous phantom designed to simulate high-contrast material. BI-SART reconstruction algorithms have been implemented with GPU parallelism, taking into account the most likely paths of protons in matter. Tomographic images were reconstructed with r.m.s. density resolutions down to 1% and spatial resolutions that demonstrate the potential of pCT in hadron-therapy.
Terahertz Imaging for Biomedical Applications Pattern Recognition and Tomographic Reconstruction
Yin, Xiaoxia; Abbott, Derek
2012-01-01
Terahertz Imaging for Biomedical Applications: Pattern Recognition and Tomographic Reconstruction presents the necessary algorithms needed to assist screening, diagnosis, and treatment, and these algorithms will play a critical role in the accurate detection of abnormalities present in biomedical imaging. Terahertz biomedical imaging has become an area of interest due to its ability to simultaneously acquire both image and spectral information. Terahertz imaging systems are being commercialized, with an increasing number of trials performed in a biomedical setting. Terahertz tomographic imaging and detection technology contributes to the ability to identify opaque objects with clear boundaries, and would be useful in both in vivo and ex vivo environments. This book also: Introduces terahertz radiation techniques and provides a number of topical examples of signal and image processing, as well as machine learning Presents the most recent developments in an emerging field, terahertz radiation Utilizes new methods...
Superresolution images reconstructed from aliased images
Vandewalle, Patrick; Susstrunk, Sabine E.; Vetterli, Martin
2003-06-01
In this paper, we present a simple method to almost quadruple the spatial resolution of aliased images. From a set of four low resolution, undersampled and shifted images, a new image is constructed with almost twice the resolution in each dimension. The resulting image is aliasing-free. A small aliasing-free part of the frequency domain of the images is used to compute the exact subpixel shifts. When the relative image positions are known, a higher resolution image can be constructed using the Papoulis-Gerchberg algorithm. The proposed method is tested in a simulation where all simulation parameters are well controlled, and where the resulting image can be compared with its original. The algorithm is also applied to real, noisy images from a digital camera. Both experiments show very good results.
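The phase-based shift estimation underlying this approach can be illustrated in 1-D: for a shift s, the spectra of the two signals differ by a linear phase exp(-2*pi*i*k*s/n), so the phase difference at a low, alias-free frequency bin recovers s. The helper below is a hypothetical 1-D simplification of the paper's 2-D procedure:

```python
import numpy as np

# Estimate a subpixel shift from the phase difference of the spectra
# at a single low (alias-free) frequency bin, as in frequency-domain
# registration. 1-D toy; the paper works on 2-D images.

def estimate_shift(x, y):
    X, Y = np.fft.fft(x), np.fft.fft(y)
    k = 1                                  # low, alias-free frequency bin
    phase = np.angle(Y[k] / X[k])          # equals -2*pi*k*s/n for shift s
    return -phase * len(x) / (2 * np.pi * k)

n = 64
t = np.arange(n)
x = np.sin(2 * np.pi * t / n)
shift = 0.25                               # quarter-sample shift (assumption)
y = np.sin(2 * np.pi * (t - shift) / n)
print(round(estimate_shift(x, y), 3))      # prints 0.25
```

Once the relative shifts of all low-resolution frames are known to subpixel accuracy, their samples can be merged onto a finer grid (e.g. with the Papoulis-Gerchberg algorithm the paper uses).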
3D Lunar Terrain Reconstruction from Apollo Images
Broxton, Michael J.; Nefian, Ara V.; Moratto, Zachary; Kim, Taemin; Lundy, Michael; Segal, Alkeksandr V.
2009-01-01
Generating accurate three dimensional planetary models is becoming increasingly important as NASA plans manned missions to return to the Moon in the next decade. This paper describes a 3D surface reconstruction system called the Ames Stereo Pipeline that is designed to produce such models automatically by processing orbital stereo imagery. We discuss two important core aspects of this system: (1) refinement of satellite station positions and pose estimates through least squares bundle adjustment; and (2) a stochastic plane fitting algorithm that generalizes the Lucas-Kanade method for optimal matching between stereo pair images. These techniques allow us to automatically produce seamless, highly accurate digital elevation models from multiple stereo image pairs while significantly reducing the influence of image noise. Our technique is demonstrated on a set of 71 high resolution scanned images from the Apollo 15 mission.
Fu, Jian; Tan, Renbo; Chen, Liyuan
2014-01-01
X-ray differential phase-contrast computed tomography (DPC-CT) is a powerful physical and biochemical analysis tool. In practical applications, there are often challenges for DPC-CT due to insufficient data caused by few-view, bad or missing detector channels, or limited scanning angular range. They occur quite frequently because of experimental constraints from imaging hardware, scanning geometry, and the exposure dose delivered to living specimens. In this work, we analyze the influence of incomplete data on DPC-CT image reconstruction. Then, a reconstruction method is developed and investigated for incomplete data DPC-CT. It is based on an algebraic iteration reconstruction technique, which minimizes the image total variation and permits accurate tomographic imaging with less data. This work comprises a numerical study of the method and its experimental verification using a dataset measured at the W2 beamline of the storage ring DORIS III equipped with a Talbot-Lau interferometer. The numerical and experimental results demonstrate that the presented method can handle incomplete data. It will be of interest for a wide range of DPC-CT applications in medicine, biology, and nondestructive testing.
3-D Reconstruction From Satellite Images
Denver, Troelz
1999-01-01
The aim of this project has been to implement a software system that is able to create a 3-D reconstruction from two or more 2-D photographic images taken from different positions. The height is determined from the disparity difference of the images. The general purpose of the system is mapping o…, where various methods have been tested in order to optimize the performance. The match results are used in the reconstruction part to establish a 3-D digital representation and, finally, different presentation forms are discussed…
Wang, Kun; Huang, Chao; Kao, Yu-Jiun; Chou, Cheng-Ying; Oraevsky, Alexander A; Anastasio, Mark A
2013-02-01
Optoacoustic tomography (OAT) is inherently a three-dimensional (3D) inverse problem. However, most studies of OAT image reconstruction still employ two-dimensional imaging models. One important reason is because 3D image reconstruction is computationally burdensome. The aim of this work is to accelerate existing image reconstruction algorithms for 3D OAT by use of parallel programming techniques. Parallelization strategies are proposed to accelerate a filtered backprojection (FBP) algorithm and two different pairs of projection/backprojection operations that correspond to two different numerical imaging models. The algorithms are designed to fully exploit the parallel computing power of graphics processing units (GPUs). In order to evaluate the parallelization strategies for the projection/backprojection pairs, an iterative image reconstruction algorithm is implemented. Computer simulation and experimental studies are conducted to investigate the computational efficiency and numerical accuracy of the developed algorithms. The GPU implementations improve the computational efficiency by factors of 1000, 125, and 250 for the FBP algorithm and the two pairs of projection/backprojection operators, respectively. Accurate images are reconstructed by use of the FBP and iterative image reconstruction algorithms from both computer-simulated and experimental data. Parallelization strategies for 3D OAT image reconstruction are proposed for the first time. These GPU-based implementations significantly reduce the computational time for 3D image reconstruction, complementing our earlier work on 3D OAT iterative image reconstruction.
Diwakar, Mithun; Tal, Omer; Liu, Thomas T; Harrington, Deborah L; Srinivasan, Ramesh; Muzzatti, Laura; Song, Tao; Theilmann, Rebecca J; Lee, Roland R; Huang, Ming-Xiong
2011-06-15
Beamformer spatial filters are commonly used to explore the active neuronal sources underlying magnetoencephalography (MEG) recordings at low signal-to-noise ratio (SNR). Conventional beamformer techniques are successful in localizing uncorrelated neuronal sources under poor SNR conditions. However, the spatial and temporal features from conventional beamformer reconstructions suffer when sources are correlated, which is a common and important property of real neuronal networks. Dual-beamformer techniques, originally developed by Brookes et al. to deal with this limitation, successfully localize highly-correlated sources and determine their orientations and weightings, but their performance degrades at low correlations. They also lack the capability to produce individual time courses and therefore cannot quantify source correlation. In this paper, we present an enhanced formulation of our earlier dual-core beamformer (DCBF) approach that reconstructs individual source time courses and their correlations. Through computer simulations, we show that the enhanced DCBF (eDCBF) consistently and accurately models dual-source activity regardless of the correlation strength. Simulations also show that a multi-core extension of eDCBF effectively handles the presence of additional correlated sources. In a human auditory task, we further demonstrate that eDCBF accurately reconstructs left and right auditory temporal responses and their correlations. Spatial resolution and source localization strategies corresponding to different measures within the eDCBF framework are also discussed. In summary, eDCBF accurately reconstructs source spatio-temporal behavior, providing a means for characterizing complex neuronal networks and their communication.
Image reconstruction by using local inverse for full field of view
Yang, Kang; Yang, Xintie; Zhao, Shuang-Ren
2015-01-01
The iterative refinement method (IRM) has been applied very successfully in many different fields, for example modern quantum chemical calculations and CT image reconstruction. It has been proved that the refinement method can create an exact inverse from an approximate inverse within a few iterations. The IRM has been used in CT image reconstruction to lower the radiation dose. The IRM utilizes the errors between the original measured data and the recalculated data to correct the reconstructed images. However, if the interior of the object is not smooth, there is often an over-correction along the boundaries of the organs in the reconstructed images. The over-correction increases the noise, especially on edges inside the image. One solution to reduce this noise is to use some kind of filter, filtering the noise before, after, or between the image reconstruction steps. However, filtering the noise also means reducing the resolution of the reconstructed images. The filtered image is often applied to the imag...
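The correction loop described above can be sketched generically: given an approximate inverse B of the system operator A, the iteration x ← x + B(y − Ax) drives the recalculated data Ax toward the measured data y. The toy matrices below are assumptions for illustration, not a CT system model:

```python
import numpy as np

# Iterative refinement with an approximate inverse B ~ A^{-1}:
# repeatedly correct the image x with B applied to the data residual.

def iterative_refinement(A, B, y, n_iter=50):
    x = B @ y                       # first pass with the approximate inverse
    for _ in range(n_iter):
        x += B @ (y - A @ x)        # correct using the measured-data residual
    return x

A = np.array([[4.0, 1.0], [2.0, 3.0]])   # toy forward operator (assumption)
B = np.diag(1.0 / np.diag(A))            # crude approximate inverse
y = np.array([6.0, 8.0])
x = iterative_refinement(A, B, y)
print(np.allclose(A @ x, y, atol=1e-8))
```

The loop converges when the spectral radius of (I − BA) is below 1, i.e. when B is a good enough approximate inverse; the over-correction discussed in the abstract arises when this correction is applied across sharp organ boundaries.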
Poulin, Eric; Racine, Emmanuel; Beaulieu, Luc, E-mail: Luc.Beaulieu@phy.ulaval.ca [Département de physique, de génie physique et d’optique et Centre de recherche sur le cancer de l’Université Laval, Université Laval, Québec, Québec G1V 0A6, Canada and Département de radio-oncologie et Axe Oncologie du Centre de recherche du CHU de Québec, CHU de Québec, 11 Côte du Palais, Québec, Québec G1R 2J6 (Canada); Binnekamp, Dirk [Integrated Clinical Solutions and Marketing, Philips Healthcare, Veenpluis 4-6, Best 5680 DA (Netherlands)
2015-03-15
Purpose: In high dose rate brachytherapy (HDR-B), current catheter reconstruction protocols are relatively slow and error prone. The purpose of this technical note is to evaluate the accuracy and the robustness of an electromagnetic (EM) tracking system for automated and real-time catheter reconstruction. Methods: For this preclinical study, a total of ten catheters were inserted in gelatin phantoms with different trajectories. Catheters were reconstructed using an 18G biopsy needle, used as an EM stylet and equipped with a miniaturized sensor, and the second generation Aurora{sup ®} Planar Field Generator from Northern Digital Inc. The Aurora EM system provides position and orientation values with precisions of 0.7 mm and 0.2°, respectively. Phantoms were also scanned using a μCT (GE Healthcare) and a Philips Big Bore clinical computed tomography (CT) system with spatial resolutions of 89 μm and 2 mm, respectively. Reconstructions using the EM stylet were compared to μCT and CT. To assess the robustness of the EM reconstruction, five catheters were reconstructed twice and compared. Results: Reconstruction time for one catheter was 10 s, leading to a total reconstruction time of less than 3 min for a typical 17-catheter implant. When compared to the μCT, the mean EM tip identification error was 0.69 ± 0.29 mm while the CT error was 1.08 ± 0.67 mm. The mean 3D distance error was found to be 0.66 ± 0.33 mm and 1.08 ± 0.72 mm for the EM and CT, respectively. EM 3D catheter trajectories were found to be more accurate. A maximum difference of less than 0.6 mm was found between successive EM reconstructions. Conclusions: The EM reconstruction was found to be more accurate and precise than the conventional methods used for catheter reconstruction in HDR-B. This approach can be applied to any type of catheters and applicators.
Approximate Sparsity and Nonlocal Total Variation Based Compressive MR Image Reconstruction
Chengzhi Deng
2014-01-01
Recent developments in compressive sensing (CS) show that it is possible to accurately reconstruct the magnetic resonance (MR) image from undersampled k-space data by solving nonsmooth convex optimization problems, thereby significantly reducing the scanning time. In this paper, we propose a new MR image reconstruction method based on a compound regularization model combining nonlocal total variation (NLTV) with wavelet approximate sparsity. Nonlocal total variation can restore periodic textures and local geometric information better than total variation. Wavelet approximate sparsity achieves more accurate sparse reconstruction than fixed wavelet l0 and l1 norms. Furthermore, a variable splitting and augmented Lagrangian algorithm is presented to solve the proposed minimization problem. Experimental results on MR image reconstruction demonstrate that the proposed method outperforms many existing MR image reconstruction methods in both quantitative and visual quality assessment.
Anthropomorphic image reconstruction via hypoelliptic diffusion
Boscain, Ugo; Gauthier, Jean-Paul; Rossi, Francesco
2010-01-01
In this paper we present a model of the geometry of vision which generalizes one due to Petitot, Citti and Sarti. One of its main features is that the primary visual cortex V1 lifts the image from $R^2$ to the bundle of directions of the plane $PTR^2=R^2\times P^1$. Neurons are grouped into orientation columns, each corresponding to a point of the bundle $PTR^2$. In this model a corrupted image is reconstructed by minimizing the energy necessary for the activation of the orientation columns corresponding to the regions in which the image is corrupted. The minimization process gives rise to a hypoelliptic heat equation on $PTR^2$. The hypoelliptic heat equation is studied using the generalized Fourier transform, which turns it into a 1D heat equation with a Mathieu potential that can be solved numerically. Preliminary examples of image reconstruction are provided.
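For reference, the lifted diffusion mentioned above is usually written, in coordinates $(x,y,\theta)$ on $PTR^2$, in the following form (the relative weight $\beta$ between spatial and angular diffusion is a modeling choice, not taken from this abstract):

```latex
\partial_t u \;=\; \bigl(\cos\theta\,\partial_x + \sin\theta\,\partial_y\bigr)^2 u
\;+\; \beta^2\,\partial_\theta^2 u,
\qquad u\big|_{t=0} = f(x,y,\theta).
```

The operator is not elliptic, since no second derivative acts transversally to the two generating vector fields, but the generators together with their Lie bracket span the tangent space, so Hörmander's condition gives hypoellipticity.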
Stochastic image reconstruction for a dual-particle imaging system
Hamel, M.C., E-mail: mchamel@umich.edu [Department of Nuclear Engineering and Radiological Sciences, University of Michigan, 2355 Bonisteel Blvd, Ann Arbor, MI 48109 (United States); Polack, J.K., E-mail: kpolack@umich.edu [Department of Nuclear Engineering and Radiological Sciences, University of Michigan, 2355 Bonisteel Blvd, Ann Arbor, MI 48109 (United States); Poitrasson-Rivière, A., E-mail: alexispr@umich.edu [Department of Nuclear Engineering and Radiological Sciences, University of Michigan, 2355 Bonisteel Blvd, Ann Arbor, MI 48109 (United States); Flaska, M., E-mail: mflaska@psu.edu [Department of Nuclear Engineering and Radiological Sciences, University of Michigan, 2355 Bonisteel Blvd, Ann Arbor, MI 48109 (United States); Department of Mechanical and Nuclear Engineering, Pennsylvania State University, 137 Reber Building, University Park, PA 16802 (United States); Clarke, S.D., E-mail: clarkesd@umich.edu [Department of Nuclear Engineering and Radiological Sciences, University of Michigan, 2355 Bonisteel Blvd, Ann Arbor, MI 48109 (United States); Pozzi, S.A., E-mail: pozzisa@umich.edu [Department of Nuclear Engineering and Radiological Sciences, University of Michigan, 2355 Bonisteel Blvd, Ann Arbor, MI 48109 (United States); Tomanin, A., E-mail: alice.tomanin@jrc.ec.europa.eu [European Commission, Joint Research Centre, Institute for Transuranium Elements, 21027 Ispra, VA (Italy); Lainsa-Italia S.R.L., via E. Fermi 2749, 21027 Ispra, VA (Italy); Peerani, P., E-mail: paolo.peerani@jrc.ec.europa.eu [European Commission, Joint Research Centre, Institute for Transuranium Elements, 21027 Ispra, VA (Italy)
2016-02-21
Stochastic image reconstruction has been applied to a dual-particle imaging system being designed for nuclear safeguards applications. The dual-particle imager (DPI) is a combined Compton-scatter and neutron-scatter camera capable of producing separate neutron and photon images. The stochastic origin ensembles (SOE) method was investigated as an imaging method for the DPI because only a minimal estimation of system response is required to produce images with quality that is comparable to common maximum-likelihood methods. This work contains neutron and photon SOE image reconstructions for a {sup 252}Cf point source, two mixed-oxide (MOX) fuel canisters representing point sources, and the MOX fuel canisters representing a distributed source. Simulation of the DPI using MCNPX-PoliMi is validated by comparison of simulated and measured results. Because image quality is dependent on the number of counts and iterations used, the relationship between these quantities is investigated.
Image reconstruction for brain CT slices
吴建明; 施鹏飞
2004-01-01
Different modalities in biomedical imaging, such as CT, MRI and PET scanners, provide detailed cross-sectional views of human anatomy. This paper introduces three-dimensional brain reconstruction based on CT slices. It covers filtering, fuzzy segmentation, a contour-matching method, a cell array structure and image animation. Experimental results have shown its validity. The innovations are the contour-matching method and the fuzzy segmentation algorithm for CT slices.
Implementation of efficient image reconstruction for CT
Jie Liu; Guangfei Wang
2005-01-01
The operational procedures for efficiently reconstructing the two-dimensional image of a body by filtered backprojection are described in this paper. The projections are interpolated to four times the original sampling density by zero-padding the original projections in the frequency domain and taking the inverse fast Fourier transform (FFT), which improves accuracy.
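The zero-padding interpolation step described above can be sketched in one dimension as follows (the abstract's four-fold factor is the default here; this is an illustration, not the authors' implementation):

```python
import numpy as np

def fft_interpolate(p, factor=4):
    """Interpolate a 1D projection to `factor` times its sampling
    density by zero-padding its spectrum and taking the inverse FFT.
    For even n a fastidious version would split the Nyquist bin between
    the positive and negative halves; either way the original samples
    are reproduced exactly at their positions."""
    n = len(p)
    P = np.fft.fft(p)
    padded = np.zeros(factor * n, dtype=complex)
    half = n // 2
    padded[:half] = P[:half]          # non-negative frequencies
    padded[-(n - half):] = P[half:]   # negative frequencies
    # rescale so amplitudes are preserved after the longer inverse FFT
    return np.fft.ifft(padded).real * factor
```

This is trigonometric interpolation: the interpolant passes through every original sample, and the new samples fall on the band-limited curve between them.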
Investigation of iterative image reconstruction in three-dimensional optoacoustic tomography
Wang, Kun; Su, Richard; Oraevsky, Alexander A.; Anastasio, Mark A.
2012-09-01
Iterative image reconstruction algorithms for optoacoustic tomography (OAT), also known as photoacoustic tomography, can improve image quality over analytic algorithms because they incorporate accurate models of the imaging physics, instrument response and measurement noise. However, to date, there have been few reported attempts to employ advanced iterative image reconstruction algorithms for improving image quality in three-dimensional (3D) OAT. In this work, we implement and investigate two iterative image reconstruction methods for use with a 3D OAT small animal imager: a penalized least-squares (PLS) method employing a quadratic smoothness penalty and a PLS method employing a total variation norm penalty. The reconstruction algorithms employ accurate models of the ultrasonic transducer impulse responses. Experimental data sets are employed to compare the performances of the iterative reconstruction algorithms to that of a 3D filtered backprojection (FBP) algorithm. By the use of quantitative measures of image quality, we demonstrate that the iterative reconstruction algorithms can mitigate image artifacts and preserve spatial resolution more effectively than FBP algorithms. These features suggest that the use of advanced image reconstruction algorithms can improve the effectiveness of 3D OAT while reducing the amount of data required for biomedical applications.
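The quadratically penalized least-squares variant can be sketched generically: minimize ||Hf − g||² + β||Df||² for a roughness operator D. The small dense matrices and plain gradient descent below are stand-ins for illustration; the paper's transducer model and optimizer are more elaborate:

```python
import numpy as np

def pls_quadratic(H, g, beta=0.1, n_iter=5000):
    """Penalised least squares with a quadratic smoothness penalty:
        min_f ||H f - g||^2 + beta * ||D f||^2,
    where D is a first-difference roughness operator.  The quadratic
    objective is minimised by plain gradient descent on the
    normal-equation form."""
    n = H.shape[1]
    D = np.eye(n) - np.eye(n, k=1)      # simple finite differences
    A = H.T @ H + beta * D.T @ D        # normal-equation matrix
    b = H.T @ g
    step = 1.0 / np.linalg.norm(A, 2)   # safe step size (1 / Lipschitz)
    f = np.zeros(n)
    for _ in range(n_iter):
        f -= step * (A @ f - b)
    return f
```

Because the objective is strictly convex, gradient descent converges to the unique regularized solution, which equals the direct solve of the normal equations.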
Mirror Surface Reconstruction from a Single Image.
Liu, Miaomiao; Hartley, Richard; Salzmann, Mathieu
2015-04-01
This paper tackles the problem of reconstructing the shape of a smooth mirror surface from a single image. In particular, we consider the case where the camera is observing the reflection of a static reference target in the unknown mirror. We first study the reconstruction problem given dense correspondences between 3D points on the reference target and image locations. In such conditions, our differential geometry analysis provides a theoretical proof that the shape of the mirror surface can be recovered if the pose of the reference target is known. We then relax our assumptions by considering the case where only sparse correspondences are available. In this scenario, we formulate reconstruction as an optimization problem, which can be solved using a nonlinear least-squares method. We demonstrate the effectiveness of our method on both synthetic and real images. We then provide a theoretical analysis of the potential degenerate cases with and without prior knowledge of the pose of the reference target. Finally we show that our theory can be similarly applied to the reconstruction of the surface of transparent object.
A fast and accurate method for echocardiography strain rate imaging
Tavakoli, Vahid; Sahba, Nima; Hajebi, Nima; Nambakhsh, Mohammad Saleh
2009-02-01
Strain and strain rate imaging have recently proved superior to classical motion-estimation methods in myocardial evaluation, as a novel technique for quantitative analysis of myocardial function. In this paper, we propose a novel strain rate imaging algorithm using a new optical flow technique that is faster and more accurate than previous correlation-based methods. The new method presumes spatiotemporal constancy of the intensity and magnitude of the image, and makes use of spline moments in a multiresolution approach. The cardiac central point is obtained using a combination of the center of mass and endocardial tracking. The proposed method is shown to overcome the intensity variations of ultrasound texture while preserving the ability of the motion estimation technique for different motions and orientations. Evaluation is performed on simulated, phantom (a contractile rubber balloon) and real sequences, and shows that this technique is more accurate and faster than previous methods.
Reconstructing light curves from HXMT imaging observations
Huo, Zhuo-Xi; Li, Yi-Ming; Zhou, Jian-Feng
2014-01-01
The Hard X-ray Modulation Telescope (HXMT) is a Chinese space telescope mission. It is scheduled for launch in 2015. The telescope will perform an all-sky survey in the hard X-ray band (1-250 keV), a series of deep imaging observations of small sky regions, as well as pointed observations. In this work we present a conceptual method to reconstruct light curves directly from HXMT imaging observations, in order to monitor time-varying objects such as GRBs, AXPs and SGRs in the hard X-ray band.
Sparse image reconstruction for molecular imaging
Ting, Michael; Hero, Alfred O
2008-01-01
The application that motivates this paper is molecular imaging at the atomic level. When discretized at sub-atomic distances, the volume is inherently sparse. Noiseless measurements from an imaging technology can be modeled by convolution of the image with the system point spread function (psf). Such is the case with magnetic resonance force microscopy (MRFM), an emerging technology with which imaging of an individual tobacco mosaic virus was recently demonstrated at nanometer resolution. We also consider additive white Gaussian noise (AWGN) in the measurements. Much prior work on sparse estimators has focused on the case where H has low coherence; however, the system matrix H in our application is the convolution matrix for the system psf, and a typical convolution matrix has high coherence. The paper therefore does not assume a low-coherence H. A discrete-continuous form of the Laplacian and atom at zero (LAZE) p.d.f. used by Johnstone and Silverman is formulated, and two sparse estimators derived by maximizing t...
Jørgensen, Jakob H; Pan, Xiaochuan
2011-01-01
Discrete-to-discrete imaging models for computed tomography (CT) are becoming increasingly ubiquitous as the interest in iterative image reconstruction algorithms has heightened. Despite this trend, all the intuition for algorithm and system design derives from analysis of continuous-to-continuous models such as the X-ray and Radon transform. While the similarity between these models justifies some crossover, questions such as what are sufficient sampling conditions can be quite different for the two models. This sampling issue is addressed extensively in the first half of the article using singular value decomposition analysis for determining sufficient number of views and detector bins. The question of full sampling for CT is particularly relevant to current attempts to adapt compressive sensing (CS) motivated methods to application in CT image reconstruction. The second half goes in depth on this subject and discusses the link between object sparsity and sufficient sampling for accurate reconstruction. Par...
Optimized 3D Street Scene Reconstruction from Driving Recorder Images
Yongjun Zhang
2015-07-01
The paper presents an automatic region-detection-based method to reconstruct street scenes from driving recorder images. The driving recorder in this paper is a dashboard camera that collects images while the motor vehicle is moving. An enormous number of moving vehicles are included in the collected data because the typical recorders are often mounted in the front of moving vehicles and face the forward direction, which can make matching points on vehicles and guardrails unreliable. Believing that utilizing these image data can reduce street scene reconstruction and updating costs because of their low price, wide use, and extensive shooting coverage, we propose a new method, called the Mask automatic detecting method, to improve the structure-from-motion reconstruction results. Note that we define vehicle and guardrail regions as “mask” in this paper, since the features on them should be masked out to avoid poor matches. After removing these feature points, the camera poses and sparse 3D points are reconstructed with the remaining matches. Our contrast experiments with typical structure-from-motion (SfM) reconstruction pipelines, such as Photosynth and VisualSFM, demonstrated that the Mask decreased the root-mean-square error (RMSE) of the pairwise matching results, which led to more accurate recovery of the camera-relative poses. Removing features with the Mask also increased the accuracy of point clouds by nearly 30%–40% and corrected the problem of the typical methods repeatedly reconstructing several buildings when there was only one target building.
Calibration of time-of-flight cameras for accurate intraoperative surface reconstruction
Mersmann, Sven; Seitel, Alexander; Maier-Hein, Lena [Division of Medical and Biological Informatics, Junior Group Computer-assisted Interventions, German Cancer Research Center (DKFZ), Heidelberg, Baden-Wurttemberg 69120 (Germany); Erz, Michael; Jähne, Bernd [Heidelberg Collaboratory for Image Processing (HCI), University of Heidelberg, Baden-Wurttemberg 69115 (Germany); Nickel, Felix; Mieth, Markus; Mehrabi, Arianeb [Department of General, Visceral and Transplant Surgery, University of Heidelberg, Baden-Wurttemberg 69120 (Germany)
2013-08-15
Purpose: In image-guided surgery (IGS), intraoperative image acquisition of tissue shape, motion, and morphology is one of the main challenges. Recently, time-of-flight (ToF) cameras have emerged as a new means for fast range image acquisition that can be used for multimodal registration of the patient anatomy during surgery. The major drawbacks of ToF cameras are systematic errors in the image acquisition technique that compromise the quality of the measured range images. In this paper, we propose a calibration concept that, for the first time, accounts for all known systematic errors affecting the quality of ToF range images. Laboratory and in vitro experiments assess its performance in the context of IGS. Methods: For calibration, the camera-related error sources depending on the sensor, the sensor temperature and the set integration time are corrected first, followed by the scene-specific errors, which are modeled as functions of the measured distance, the amplitude and the radial distance to the principal point of the camera. Accounting for the high accuracy demands in IGS, we use a custom-made calibration device to provide the reference distance data against which the cameras are calibrated. To evaluate the mitigation of the error, the residual error remaining after ToF depth calibration was compared with that arising from using the manufacturer routines for several state-of-the-art ToF cameras. The accuracy of reconstructed ToF surfaces was investigated after multimodal registration with computed tomography (CT) data of liver models by assessment of the target registration error (TRE) of markers introduced in the livers. Results: For the inspected distance range of up to 2 m, our calibration approach yielded a mean residual error to reference data ranging from 1.5 ± 4.3 mm for the best camera to 7.2 ± 11.0 mm. When compared to the data obtained from the manufacturer routines, the residual error was reduced by at least 78% from worst calibration result to most accurate
3D reconstruction, visualization, and measurement of MRI images
Pandya, Abhijit S.; Patel, Pritesh P.; Desai, Mehul B.; Desai, Paramtap
1999-03-01
This paper primarily focuses on manipulating 2D medical image data, which often come in as Magnetic Resonance images, and reconstructing them into 3D volumetric images. Clinical diagnosis and therapy planning using 2D medical images can become a torturous problem for a physician. For example, our 2D breast images of a patient mimic a breast carcinoma; in reality, the patient has 'fat necrosis', a benign breast lump. Physicians need powerful, accurate and interactive 3D visualization systems to extract anatomical details and examine the root cause of the problem. Our proposal overcomes the above-mentioned limitations through the development of volume rendering algorithms and extensive use of parallel, distributed and neural-network computing strategies. MRI coupled with 3D imaging provides a reliable method for quantifying 'fat necrosis' characteristics and progression. Our 3D interactive application enables a physician to compute spatial measurements and quantitative evaluations and, from a general point of view, use all 3D interactive tools that can help to plan a complex surgical operation. The capability of our medical imaging application can be extended to reconstruct and visualize 3D volumetric brain images. Our application promises to be an important tool in neurological surgery planning, time and cost reduction.
Propagation phasor approach for holographic image reconstruction
Luo, Wei; Zhang, Yibo; Göröcs, Zoltán; Feizi, Alborz; Ozcan, Aydogan
2016-03-01
To achieve high resolution and a wide field of view, digital holographic imaging techniques need to tackle two major challenges: phase recovery and spatial undersampling. Previously, these challenges were separately addressed using phase retrieval and pixel super-resolution algorithms, which utilize the diversity of different imaging parameters. Although existing holographic imaging methods can achieve large space-bandwidth products by performing pixel super-resolution and phase retrieval sequentially, they require large amounts of data, which might be a limitation in high-speed or cost-effective imaging applications. Here we report a propagation phasor approach, which for the first time combines phase retrieval and pixel super-resolution into a unified mathematical framework and enables the synthesis of new holographic image reconstruction methods with significantly improved data efficiency. In this approach, twin image and spatial aliasing signals, along with other digital artifacts, are interpreted as noise terms that are modulated by phasors that analytically depend on the lateral displacement between hologram and sensor planes, the sample-to-sensor distance, the wavelength, and the illumination angle. Compared to previous holographic reconstruction techniques, this new framework results in a five- to seven-fold reduction in the number of raw measurements, while still achieving a competitive resolution and space-bandwidth product. We also demonstrate the success of this approach by imaging biological specimens including Papanicolaou and blood smears.
Passeri, A. [Dipartimento di Fisiopatologia Clinica - Sezione di Medicina Nucleare, Università di Firenze (Italy); Formiconi, A.R. [Dipartimento di Fisiopatologia Clinica - Sezione di Medicina Nucleare, Università di Firenze (Italy); De Cristofaro, M.T.E.R. [Dipartimento di Fisiopatologia Clinica - Sezione di Medicina Nucleare, Università di Firenze (Italy); Pupi, A. [Dipartimento di Fisiopatologia Clinica - Sezione di Medicina Nucleare, Università di Firenze (Italy); Meldolesi, U. [Dipartimento di Fisiopatologia Clinica - Sezione di Medicina Nucleare, Università di Firenze (Italy)]
1997-04-01
It is well known that the quantitative potential of emission computed tomography (ECT) relies on the ability to compensate for resolution, attenuation and scatter effects. Reconstruction algorithms which are able to take these effects into account are highly demanding in terms of computing resources. The reported work aimed to investigate the use of a parallel high-performance computing platform for ECT reconstruction taking into account an accurate model of the acquisition of single-photon emission tomographic (SPET) data. An iterative algorithm with an accurate model of the variable system response was ported on the MIMD (Multiple Instruction Multiple Data) parallel architecture of a 64-node Cray T3D massively parallel computer. The system was organized to make it easily accessible even from low-cost PC-based workstations through standard TCP/IP networking. A complete brain study of 30 (64 x 64) slices could be reconstructed from a set of 90 (64 x 64) projections with ten iterations of the conjugate gradients algorithm in 9 s, corresponding to an actual speed-up factor of 135. This work demonstrated the possibility of exploiting remote high-performance computing and networking resources from hospital sites by means of low-cost workstations using standard communication protocols without particular problems for routine use. The achievable speed-up factors allow the assessment of the clinical benefit of advanced reconstruction techniques which require a heavy computational burden for the compensation effects such as variable spatial resolution, scatter and attenuation. The possibility of using the same software on the same hardware platform with data acquired in different laboratories with various kinds of SPET instrumentation is appealing for software quality control and for the evaluation of the clinical impact of the reconstruction methods. (orig.). With 4 figs., 1 tab.
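The conjugate-gradients iteration named above is a standard solver for symmetric positive-definite systems; a textbook sketch (a generic CG, not the parallel Cray T3D implementation) looks like this:

```python
import numpy as np

def conjugate_gradient(A, b, n_iter=50, tol=1e-12):
    """Plain conjugate gradients for the SPD system A x = b.  In exact
    arithmetic CG converges in at most dim(b) iterations; in practice a
    residual tolerance is used as the stopping rule."""
    x = np.zeros_like(b)
    r = b - A @ x          # residual
    p = r.copy()           # search direction
    rs = r @ r
    for _ in range(n_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)        # exact line search along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p    # A-conjugate update of direction
        rs = rs_new
    return x
```

In tomographic use the matrix-vector products with A are implemented as projection and backprojection operators rather than as explicit matrices, which is what makes the algorithm amenable to the massively parallel mapping the abstract describes.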
Molecular Histopathology by Spectrally Reconstructed Nonlinear Interferometric Vibrational Imaging
Chowdary, Praveen D.; Jiang, Zhi; Chaney, Eric J.; Benalcazar, Wladimir A.; Marks, Daniel L.; Gruebele, Martin; Boppart, Stephen A.
2011-01-01
Sensitive assays for rapid quantitative analysis of histologic sections, resected tissue specimens, or in situ tissue are highly desired for early disease diagnosis. Stained histopathology is the gold standard but remains a subjective practice on processed tissue, taking from hours to days. We describe a microscopy technique that obtains a sensitive and accurate color-coded image from intrinsic molecular markers. Spectrally reconstructed nonlinear interferometric vibrational imaging can differentiate cancer versus normal tissue sections with greater than 99% confidence interval in a preclinical rat breast cancer model and define cancer boundaries to ±100 μm with greater than 99% confidence interval, using fresh unstained tissue sections imaged in less than 5 minutes. By optimizing optical sources and beam delivery, this technique can potentially enable real-time point-of-care optical molecular imaging and diagnosis. PMID:21098699
Performance-based assessment of reconstructed images
Hanson, Kenneth [Los Alamos National Laboratory
2009-01-01
During the early 90s, I engaged in a productive and enjoyable collaboration with Robert Wagner and his colleague, Kyle Myers. We explored the ramifications of the principle that the quality of an image should be assessed on the basis of how well it facilitates the performance of appropriate visual tasks. We applied this principle to algorithms used to reconstruct scenes from incomplete and/or noisy projection data. For binary visual tasks, we used both the conventional disk-detection task and a new, challenging task, inspired by the Rayleigh resolution criterion, of deciding whether an object was a blurred version of two dots or a bar. The results of human and machine observer tests were summarized with the detectability index based on the area under the ROC curve. We investigated a variety of reconstruction algorithms, including ART, with and without a nonnegativity constraint, and the MEMSYS3 algorithm. We concluded that performance of the Rayleigh task was optimized when the strength of the prior was near MEMSYS's default 'classic' value for both human and machine observers. A notable result was that the most often used metric of rms error in the reconstruction was not necessarily indicative of the value of a reconstructed image for the purpose of performing visual tasks.
Probabilistic image reconstruction for radio interferometers
Sutter, P M; McEwen, Jason D; Bunn, Emory F; Karakci, Ata; Korotkov, Andrei; Timbie, Peter; Tucker, Gregory S; Zhang, Le
2013-01-01
We present a novel, general-purpose method for deconvolving and denoising images from gridded radio interferometric visibilities using Bayesian inference based on a Gaussian process model. The method automatically takes into account incomplete coverage of the uv-plane and mode coupling due to the beam. Our method uses Gibbs sampling to efficiently explore the full posterior distribution of the underlying signal image given the data. We use a set of widely diverse mock images with a realistic interferometer setup and level of noise to assess the method. Compared to results from a proxy for the CLEAN method, we find that in terms of RMS error and signal-to-noise ratio our approach performs better than traditional deconvolution techniques, regardless of the structure of the source image in our test suite. Our implementation scales as O(np log np), provides full statistical and uncertainty information about the reconstructed image, requires no supervision, and provides a robust, consistent framework for incorporating...
Reconstructing building mass models from UAV images
Li, Minglei
2015-07-26
We present an automatic reconstruction pipeline for large scale urban scenes from aerial images captured by a camera mounted on an unmanned aerial vehicle. Using state-of-the-art Structure from Motion and Multi-View Stereo algorithms, we first generate a dense point cloud from the aerial images. Based on the statistical analysis of the footprint grid of the buildings, the point cloud is classified into different categories (i.e., buildings, ground, trees, and others). Roof structures are extracted for each individual building using Markov random field optimization. Then, a contour refinement algorithm based on pivot point detection is utilized to refine the contour of patches. Finally, polygonal mesh models are extracted from the refined contours. Experiments on various scenes as well as comparisons with state-of-the-art reconstruction methods demonstrate the effectiveness and robustness of the proposed method.
Automated Image-Based Procedures for Accurate Artifacts 3D Modeling and Orthoimage Generation
Marc Pierrot-Deseilligny
2011-12-01
The accurate 3D documentation of architecture and heritage is becoming very common and required in different application contexts. The potential of the image-based approach is nowadays very well known, but there is a lack of reliable, precise and flexible solutions, possibly open-source, that could be used for metric and accurate documentation or digital conservation and not only for simple visualization or web-based applications. The article presents a set of photogrammetric tools developed in order to derive accurate 3D point clouds and orthoimages for the digitization of archaeological and architectural objects. The aim is also to distribute free solutions (software, methodologies, guidelines, best practices, etc.) based on 3D surveying and modeling experiences, useful in different application contexts (architecture, excavations, museum collections, heritage documentation, etc.) and according to several representation needs (2D technical documentation, 3D reconstruction, web visualization, etc.).
Image-reconstruction methods in positron tomography
Townsend, David W; CERN. Geneva
1993-01-01
Physics and mathematics for medical imaging. In the two decades since the introduction of the X-ray scanner into radiology, medical imaging techniques have become widely established as essential tools in the diagnosis of disease. As a consequence of recent technological and mathematical advances, the non-invasive, three-dimensional imaging of internal organs such as the brain and the heart is now possible, not only for anatomical investigations using X-rays but also for studies which explore the functional status of the body using positron-emitting radioisotopes and nuclear magnetic resonance. Mathematical methods which enable three-dimensional distributions to be reconstructed from projection data acquired by radiation detectors suitably positioned around the patient will be described in detail. The lectures will trace the development of medical imaging from simple radiographs to the present-day non-invasive measurement of in vivo biochemistry. Powerful techniques to correlate anatomy and function that are cur...
Building Reconstruction Using DSM and Orthorectified Images
Peter Reinartz
2013-04-01
High resolution Digital Surface Models (DSMs) produced from airborne laser scanning or stereo satellite images provide a very useful source of information for automated 3D building reconstruction. In this paper an investigation is reported on the extraction of 3D building models from high resolution DSMs and orthorectified images produced from Worldview-2 stereo satellite imagery. The focus is on the generation of 3D models of parametric building roofs, which is the basis for creating Level of Detail 2 (LOD2) models according to the CityGML standard. In particular, building blocks containing several connected buildings with tilted roofs are investigated, and the potentials and limitations of the modeling approach are discussed. The edge information extracted from the orthorectified images has been employed as an additional source of information in the 3D reconstruction algorithm. A model-driven approach based on the analysis of the 3D points of DSMs in a 2D projection plane is proposed. Accordingly, a building block is divided into smaller parts according to the direction and number of existing ridge lines for parametric building reconstruction. The 3D model is derived for each building part, and finally, a complete parametric model is formed by merging the 3D models of the individual building parts and adjusting the nodes after the merging step. For the remaining building parts that do not contain ridge lines, a prismatic model using polygon approximation of the corresponding boundary pixels is derived and merged with the parametric models to shape the final model of the building. A qualitative and quantitative assessment of the proposed method for the automatic reconstruction of buildings with parametric roofs is then provided by comparing the final model with the existing surface model as well as some field measurements.
An FBP image reconstruction algorithm for x-ray differential phase contrast CT
Qi, Zhihua; Chen, Guang-Hong
2008-03-01
Most recently, a novel data acquisition method has been proposed and experimentally implemented for x-ray differential phase contrast computed tomography (DPC-CT), in which a conventional x-ray tube and a Talbot-Lau type interferometer are utilized in data acquisition. The divergent nature of the data acquisition system requires a divergent-beam image reconstruction algorithm for DPC-CT. This paper focuses on addressing this image reconstruction issue. We developed a filtered backprojection algorithm to directly reconstruct the DPC-CT images from acquired projection data. The developed algorithm allows one to directly reconstruct the decrement of the real part of the refractive index from the measured data. In order to accurately reconstruct an image, the data need to be acquired over an angular range of at least 180° plus the fan angle. Unlike parallel-beam data acquisition and reconstruction methods, a 180° rotation of the data acquisition system does not provide sufficient data for an accurate reconstruction of the entire field of view. Numerical simulations have been conducted to validate the image reconstruction algorithm.
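For intuition, the filter-then-backproject structure can be shown in its simplest parallel-beam form; the fan-beam DPC-CT algorithm of the paper differs in its weighting and filter, so the following is only a generic illustration (grid size, filter discretization and normalization are assumptions):

```python
import numpy as np

def fbp(sinogram, thetas):
    """Minimal parallel-beam filtered backprojection.
    sinogram[i, :] holds the projection at angle thetas[i]; detector
    bins are assumed to have unit spacing, centred on the origin."""
    n_ang, n_det = sinogram.shape
    ramp = np.abs(np.fft.fftfreq(n_det))  # Ram-Lak (ramp) filter
    filtered = np.real(np.fft.ifft(np.fft.fft(sinogram, axis=1) * ramp, axis=1))
    c = (n_det - 1) / 2.0
    xs = np.arange(n_det) - c
    X, Y = np.meshgrid(xs, xs)
    recon = np.zeros((n_det, n_det))
    for proj, th in zip(filtered, thetas):
        t = X * np.cos(th) + Y * np.sin(th) + c  # detector coordinate
        recon += np.interp(t, np.arange(n_det), proj, left=0.0, right=0.0)
    return recon * np.pi / n_ang
```

Applied to the analytic sinogram of a uniform disk (chord lengths), this recovers a flat region of roughly the disk's density, which is the sanity check one would use before moving to divergent-beam weighting.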
Roelandts, T.; Batenburg, K.J.; Dekker, A.J. den; Sijbers, J.
2014-01-01
In this paper, we present the reconstructed residual error, which evaluates the quality of a given segmentation of a reconstructed image in tomography. This novel evaluation method, which is independent of the methods that were used to reconstruct and segment the image, is applicable to segmentation
Qiao, Yao-Bin; Qi, Hong; Zhao, Fang-Zhou; Ruan, Li-Ming
2016-12-01
Reconstructing the distribution of optical parameters in a participating medium based on the frequency-domain radiative transfer equation (FD-RTE), in order to probe the internal structure of the medium, is investigated in the present work. The forward model of the FD-RTE is solved via the finite volume method (FVM). A regularization term formulated with the generalized Gaussian Markov random field model is used in the objective function to overcome the ill-posed nature of the inverse problem. The multi-start conjugate gradient (MCG) method is employed to search for the minimum of the objective function and increase the efficiency of convergence. A modified adjoint differentiation technique using the collimated radiative intensity is developed to calculate the gradient of the objective function with respect to the optical parameters. All simulation results show that the proposed reconstruction algorithm based on the FD-RTE can obtain accurate distributions of the absorption and scattering coefficients. The reconstructed images of the scattering coefficient have smaller errors than those of the absorption coefficient, which indicates that the former is more suitable for probing the inner structure. Project supported by the National Natural Science Foundation of China (Grant No. 51476043), the Major National Scientific Instruments and Equipment Development Special Foundation of China (Grant No. 51327803), and the Foundation for Innovative Research Groups of the National Natural Science Foundation of China (Grant No. 51121004).
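As a sketch of the optimization machinery, the snippet below implements a standard linear conjugate gradient solver wrapped in a multi-start loop on a toy quadratic objective. It is an illustration only: the sizes, seeds and function names are invented, and the paper's actual objective (an FD-RTE data-fit term with a Markov random field regularizer) is far more complex than this stand-in.

```python
import numpy as np

def cg_minimize(A, b, x0, n_iter=100, tol=1e-18):
    # Linear conjugate gradient: minimizes f(x) = 0.5 x^T A x - b^T x
    # (A symmetric positive definite), i.e. solves A x = b.
    x = x0.copy()
    r = b - A @ x                        # residual = negative gradient
    d = r.copy()
    rs = r @ r
    for _ in range(n_iter):
        Ad = A @ d
        alpha = rs / (d @ Ad)
        x += alpha * d
        r -= alpha * Ad
        rs_new = r @ r
        if rs_new < tol:
            break
        d = r + (rs_new / rs) * d
        rs = rs_new
    return x

def multi_start_cg(A, b, starts):
    # "Multi-start": run the search from several initial guesses and keep the
    # best minimizer found (redundant for a convex quadratic, but this is the
    # pattern used to escape local minima of a non-convex objective).
    f = lambda v: 0.5 * v @ A @ v - b @ v
    return min((cg_minimize(A, b, s) for s in starts), key=f)

rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5))
A = M.T @ M + 5 * np.eye(5)              # SPD test matrix
b = rng.standard_normal(5)
x = multi_start_cg(A, b, [rng.standard_normal(5) for _ in range(3)])
```

For a quadratic, every start reaches the same minimizer; the multi-start wrapper only pays off when the objective has several basins.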
A new Level-set based Protocol for Accurate Bone Segmentation from CT Imaging
Pinheiro, Manuel
2015-01-01
In this work, a medical image segmentation pipeline for accurate bone segmentation from CT imaging is proposed. It is a two-step methodology, with a pre-segmentation step and a segmentation refinement step. First, the user performs a rough segmentation of the desired region of interest. Next, a fully automatic refinement step is applied to the pre-segmented data. The automatic segmentation refinement is composed of several sub-steps, namely image deconvolution, image cropping and interpolation. The user-defined pre-segmentation is then refined over the deconvolved, cropped, and up-sampled version of the image. The algorithm is applied to the segmentation of CT images of a composite femur bone, reconstructed with different reconstruction protocols. Segmentation outcomes are validated against a gold standard model obtained with a Nikon Metris LK V20 coordinate measuring machine with an LC60-D digital line scanner that guarantees an accuracy of 28 $\mu m$. High sub-pixel accuracy models were obtained for all tested...
Analysis of Galileo Style Geostationary Satellite Imaging: Image Reconstruction
2012-09-01
obtained using only baselines longer than 8 m does not sample the short spatial frequencies, and the image reconstruction is not able to recover the...the long spatial frequencies sampled in a shorter baseline overlap the short spatial frequencies sampled in a longer baseline. This technique will
2016-01-01
Compressive Sensing (CS) theory has great potential for reconstructing Computed Tomography (CT) images from sparse-view projection data, and Total Variation (TV)-based CT reconstruction methods are very popular. However, they do not directly incorporate prior images into the reconstruction. To improve the quality of reconstructed images, this paper proposes an improved TV minimization method for CT reconstruction using prior images and the Split-Bregman method, which uses prior images to obtain valuable prior information and promote the subsequent imaging process. The images obtained asynchronously were registered via Locally Linear Embedding (LLE). To validate the method, two studies were performed. A numerical simulation using an abdomen phantom demonstrates that the proposed method enables accurate reconstruction of image objects from sparse projection data. A real dataset was used to further validate the method. PMID:27689076
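To make the role of the prior image concrete, here is a minimal sketch of a prior-constrained TV objective minimized by plain gradient descent on a toy denoising problem. The paper's actual solver is Split-Bregman with LLE-based registration; everything below (smoothing parameter, weights, phantom) is an invented simplification.

```python
import numpy as np

def tv_grad(x, eps=1e-2):
    # Gradient of a smoothed TV term, forward differences with Neumann
    # boundaries (boundary handling is only approximate in this sketch).
    dx = np.diff(x, axis=1, append=x[:, -1:])
    dy = np.diff(x, axis=0, append=x[-1:, :])
    mag = np.sqrt(dx ** 2 + dy ** 2 + eps)
    px, py = dx / mag, dy / mag
    div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
    return -div

def tv_denoise_with_prior(y, prior, lam=0.1, mu=0.1, step=0.1, n_iter=300):
    # Gradient descent on 0.5||x - y||^2 + 0.5*mu*||x - prior||^2 + lam*TV(x):
    # the prior image enters as an extra quadratic pull toward known structure.
    x = y.copy()
    for _ in range(n_iter):
        x -= step * ((x - y) + mu * (x - prior) + lam * tv_grad(x))
    return x

rng = np.random.default_rng(5)
clean = np.zeros((32, 32))
clean[8:24, 8:24] = 1.0                      # piecewise-constant phantom
noisy = clean + 0.3 * rng.standard_normal(clean.shape)
recon = tv_denoise_with_prior(noisy, prior=clean)
```

In a real sparse-view setting the data term would compare projections rather than pixels, but the structure of the objective is the same.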
Fast, accurate reconstruction of cell lineages from large-scale fluorescence microscopy data.
Amat, Fernando; Lemon, William; Mossing, Daniel P; McDole, Katie; Wan, Yinan; Branson, Kristin; Myers, Eugene W; Keller, Philipp J
2014-09-01
The comprehensive reconstruction of cell lineages in complex multicellular organisms is a central goal of developmental biology. We present an open-source computational framework for the segmentation and tracking of cell nuclei with high accuracy and speed. We demonstrate its (i) generality by reconstructing cell lineages in four-dimensional, terabyte-sized image data sets of fruit fly, zebrafish and mouse embryos acquired with three types of fluorescence microscopes, (ii) scalability by analyzing advanced stages of development with up to 20,000 cells per time point at 26,000 cells min⁻¹ on a single computer workstation and (iii) ease of use by adjusting only two parameters across all data sets and providing visualization and editing tools for efficient data curation. Our approach achieves on average 97.0% linkage accuracy across all species and imaging modalities. Using our system, we performed the first cell lineage reconstruction of early Drosophila melanogaster nervous system development, revealing neuroblast dynamics throughout an entire embryo.
Fast contrast enhanced imaging with projection reconstruction
Peters, Dana Ceceilia
The use of contrast agents has led to great advances in magnetic resonance angiography (MRA). Here we present the first application of projection reconstruction to contrast-enhanced MRA. In this research the limited-angle projection reconstruction (PR) trajectory is implemented to acquire higher resolution images per unit time than with conventional Fourier transform (FT) imaging. It is well known that as the FOV is reduced in conventional spin-warp imaging, higher resolution per unit time can be obtained, but aliasing may appear as a replication of outside material within the FOV. The limited-angle PR acquisition also produces aliasing artifacts. This method produced artifacts which were unacceptable in X-ray CT but which appear to be tolerable in MR angiography. Resolution throughout the FOV is determined by the projection readout resolution and not by the number of projections. As the number of projections is reduced, the resolution is unchanged, but low intensity artifacts appear. Here are presented the results of using limited-angle PR in phantoms and contrast-enhanced angiograms of humans.
An improved image reconstruction method for optical intensity correlation imaging
Gao, Xin; Feng, Lingjie; Li, Xiyu
2016-12-01
The intensity correlation imaging method is a novel kind of interference imaging, and it has favorable prospects in deep-space object recognition. However, restricted by the low detection signal-to-noise ratio (SNR), it is usually very difficult to obtain high-quality images of deep-space objects such as high-Earth-orbit (HEO) satellites with existing phase retrieval methods. In this paper, based on a priori intensity statistical distribution model of the object and the characteristics of the measurement noise distribution, an improved method of Prior Information Optimization (PIO) is proposed to reduce the ambiguous images and accelerate the phase retrieval procedure, thus realizing fine image reconstruction. As the simulations and experiments show, compared to previous methods, our method can acquire higher-resolution images with less error in low-SNR conditions.
Prior image constrained image reconstruction in emerging computed tomography applications
Brunner, Stephen T.
Advances have been made in computed tomography (CT), especially in the past five years, by incorporating prior images into the image reconstruction process. In this dissertation, we investigate prior image constrained image reconstruction in three emerging CT applications: dual-energy CT, multi-energy photon-counting CT, and cone-beam CT in image-guided radiation therapy. First, we investigate the application of Prior Image Constrained Compressed Sensing (PICCS) in dual-energy CT, which has been called "one of the hottest research areas in CT." Phantom and animal studies are conducted using a state-of-the-art 64-slice GE Discovery 750 HD CT scanner to investigate the extent to which PICCS can enable radiation dose reduction in material density and virtual monochromatic imaging. Second, we extend the application of PICCS from dual-energy CT to multi-energy photon-counting CT, which has been called "one of the 12 topics in CT to be critical in the next decade." Numerical simulations are conducted to generate multiple energy bin images for a photon-counting CT acquisition and to investigate the extent to which PICCS can enable radiation dose efficiency improvement. Third, we investigate the performance of a newly proposed prior image constrained scatter correction technique to correct scatter-induced shading artifacts in cone-beam CT, which, when used in image-guided radiation therapy procedures, can assist in patient localization, and potentially, dose verification and adaptive radiation therapy. Phantom studies are conducted using a Varian 2100 EX system with an on-board imager to investigate the extent to which the prior image constrained scatter correction technique can mitigate scatter-induced shading artifacts in cone-beam CT. Results show that these prior image constrained image reconstruction techniques can reduce radiation dose in dual-energy CT by 50% in phantom and animal studies in material density and virtual monochromatic imaging, can lead to radiation
Accelerated gradient methods for total-variation-based CT image reconstruction
Jørgensen, Jakob Heide; Jensen, Tobias Lindstrøm; Hansen, Per Christian
2011-01-01
Total-variation (TV)-based CT image reconstruction has shown experimentally to be capable of producing accurate reconstructions from sparse-view data. In particular TV-based reconstruction is very well suited for images with piecewise nearly constant regions. Computationally, however, TV-based....... In the present work we address the question of how to reduce the number of gradient method iterations needed to achieve a high-accuracy TV reconstruction. We consider the use of two accelerated gradient-based methods, GPBB and UPN, to solve the 3D-TV minimization problem in CT image reconstruction. The former...... incorporates several heuristics from the optimization literature such as Barzilai-Borwein (BB) step size selection and nonmonotone line search. The latter uses a cleverly chosen sequence of auxiliary points to achieve a better convergence rate. The methods are memory efficient and equipped with a stopping...
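The Barzilai-Borwein step-size heuristic mentioned above can be sketched in a few lines. This toy version runs on a small quadratic objective; the names and sizes are invented, and the GPBB method of the paper additionally includes projections and a nonmonotone line search that are omitted here.

```python
import numpy as np

def bb_gradient(grad, x0, n_iter=100, alpha0=1e-3):
    # Gradient descent with the Barzilai-Borwein (BB) step size
    #   alpha_k = (s . s) / (s . y),  s = x_k - x_{k-1},  y = g_k - g_{k-1},
    # which mimics a quasi-Newton scaling at the cost of one extra vector.
    x = x0.copy()
    g = grad(x)
    x_new = x - alpha0 * g               # one small fixed step to get started
    for _ in range(n_iter):
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        sy = s @ y
        if sy <= 1e-30:                  # converged (no usable curvature)
            break
        x, g = x_new, g_new
        x_new = x - (s @ s) / sy * g
    return x_new

# Toy smooth objective: f(x) = 0.5 x^T A x - b^T x with SPD A.
rng = np.random.default_rng(6)
M = rng.standard_normal((5, 5))
A = M.T @ M + np.eye(5)
b = rng.standard_normal(5)
x_star = bb_gradient(lambda v: A @ v - b, np.zeros(5))
```

The iteration is famously nonmonotone, which is why the paper pairs BB steps with a nonmonotone line search rather than a plain descent condition.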
Image reconstruction using projections from a few views by discrete steering combined with DART
Kwon, Junghyun; Song, Samuel M.; Kauke, Brian; Boyd, Douglas P.
2012-03-01
In this paper, we propose an algebraic reconstruction technique (ART) based discrete tomography method to reconstruct an image accurately using projections from only a few views. We specifically consider the problem of reconstructing an image of bottles filled with various types of liquids from X-ray projections. By exploiting the fact that bottles are usually filled with homogeneous material, we show that it is possible to obtain accurate reconstructions from only a few projections with an ART based algorithm. In order to deal with various types of liquids, we first introduce our discrete steering method, which generalizes the binary steering approach to our proposed multi-valued discrete reconstruction. The main idea of the steering approach is to use slowly varying thresholds instead of fixed thresholds. We further improve reconstruction accuracy by reducing the number of variables in ART, combining our discrete steering with the discrete ART (DART), which fixes the values of interior pixels of segmented regions considered reliable. Through simulation studies, we show that our proposed discrete steering combined with DART yields superior reconstructions compared with both the discrete-steering-only and DART-only cases. The resulting reconstructions are quite accurate even with projections from only four views.
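A minimal sketch of the steering idea, under invented parameters: interleave Kaczmarz-style ART sweeps with a soft snap toward the admissible grey values, with the snap strength growing over the iterations (the "slowly varying thresholds"). The DART-style fixing of reliable interior pixels is omitted for brevity.

```python
import numpy as np

def art_sweep(A, b, x, relax=1.0):
    # One Kaczmarz sweep: successively project x onto each row's hyperplane.
    for ai, bi in zip(A, b):
        x = x + relax * (bi - ai @ x) / (ai @ ai) * ai
    return x

def discrete_steering(A, b, values, n_outer=30):
    # Sketch of discrete steering: after each ART sweep, pull every pixel
    # toward its nearest admissible grey value, with a pull strength t that
    # grows slowly from ~0 to 1 over the outer iterations.
    x = np.zeros(A.shape[1])
    vals = np.asarray(values, dtype=float)
    for k in range(n_outer):
        x = art_sweep(A, b, x)
        nearest = vals[np.abs(x[:, None] - vals[None, :]).argmin(axis=1)]
        t = (k + 1) / n_outer
        x = (1 - t) * x + t * nearest
    return x

rng = np.random.default_rng(7)
vals = (0.0, 0.5, 1.0)                       # air plus two liquid types
x_true = rng.choice(vals, size=25)           # a tiny discrete "image"
A = rng.standard_normal((120, 25))           # dense stand-in system matrix
b = A @ x_true                               # noiseless projections
x_rec = discrete_steering(A, b, vals)
```

With noiseless, well-conditioned data the final full snap lands exactly on the discrete solution; real few-view systems are far more ill-posed, which is where the prior of discreteness does the real work.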
Homotopy Based Reconstruction from Acoustic Images
Sharma, Ojaswa
This thesis presents work in the direction of generating smooth surfaces from linear cross sections embedded in R2 and R3 using homotopy continuation. The methods developed in this research are generic and can be applied to higher dimensions as well. Two types of problems addressed in this research...... of the inherent arrangement. The problem of reconstruction from arbitrary cross sections is a generic problem and is also shown to be solved here using the mathematical tool of continuous deformations. As part of a complete processing, segmentation using level set methods is explored for acoustic images and fast...... GPU (Graphics Processing Unit) based methods are suggested for a streaming computation on large volumes of data. Validation of results for acoustic images is not straightforward due to unavailability of ground truth. Accuracy figures for the suggested methods are provided using phantom object......
Singular value decomposition-based 2D image reconstruction for computed tomography.
Liu, Rui; He, Lu; Luo, Yan; Yu, Hengyong
2017-01-01
Singular value decomposition (SVD)-based 2D image reconstruction methods are developed and evaluated for a broad class of inverse problems for which there are no analytical solutions. The proposed methods are fast and accurate for reconstructing images in a non-iterative fashion. The multi-resolution strategy is adopted to reduce the size of the system matrix to reconstruct large images using limited memory capacity. A modified high-contrast Shepp-Logan phantom, a low-contrast FORBILD head phantom, and a physical phantom are employed to evaluate the proposed methods with different system configurations. The results show that the SVD methods can accurately reconstruct images from standard scan and interior scan projections and that they outperform other benchmark methods. The general SVD method outperforms the other SVD methods. The truncated SVD and Tikhonov regularized SVD methods accurately reconstruct a region-of-interest (ROI) from an internal scan with a known sub-region inside the ROI. Furthermore, the SVD methods are much faster and more flexible than the benchmark algorithms, especially in the ROI reconstructions in our experiments.
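A small sketch of the SVD-based non-iterative inversion, with both truncation and Tikhonov filtering of the singular values (the multi-resolution machinery of the paper is omitted; matrix sizes and tolerances are invented):

```python
import numpy as np

def svd_reconstruct(A, y, rtol=1e-10, tik=0.0):
    # SVD-based non-iterative inversion of y = A x.  Singular values below
    # rtol * s_max are truncated (TSVD); tik > 0 applies the Tikhonov filter
    # s / (s^2 + tik^2) instead of the raw inverse 1 / s.
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    keep = s > rtol * s[0]
    U, s, Vt = U[:, keep], s[keep], Vt[keep]
    filt = s / (s ** 2 + tik ** 2) if tik > 0 else 1.0 / s
    return Vt.T @ (filt * (U.T @ y))

rng = np.random.default_rng(1)
A = rng.standard_normal((60, 40))        # stand-in for a small system matrix
x_true = rng.standard_normal(40)
x_rec = svd_reconstruct(A, A @ x_true)
```

The decomposition is computed once per system geometry, after which each reconstruction is just two matrix-vector products, which is what makes the approach non-iterative.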
Images from Bits: Non-Iterative Image Reconstruction for Quanta Image Sensors.
Chan, Stanley H; Elgendy, Omar A; Wang, Xiran
2016-11-22
A quanta image sensor (QIS) is a class of single-photon imaging devices that measure light intensity using oversampled binary observations. Because of the stochastic nature of the photon arrivals, data acquired by QIS is a massive stream of random binary bits. The goal of image reconstruction is to recover the underlying image from these bits. In this paper, we present a non-iterative image reconstruction algorithm for QIS. Unlike existing reconstruction methods that formulate the problem from an optimization perspective, the new algorithm directly recovers the images through a pair of nonlinear transformations and an off-the-shelf image denoising algorithm. By skipping the usual optimization procedure, we achieve orders of magnitude improvement in speed and even better image reconstruction quality. We validate the new algorithm on synthetic datasets, as well as real videos collected by one-bit single-photon avalanche diode (SPAD) cameras.
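The one-bit measurement model admits a simple closed-form inversion that conveys why a non-iterative pipeline is possible. The sketch below simulates Poisson photon arrivals on one-photon-threshold jots and recovers the exposure from the bit average; it shows only a nonlinear-transform step, not the denoising stage of the paper, and all numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(2)

# Ground-truth per-jot exposure (photons per binary frame) for four "pixels".
theta = np.array([0.25, 0.5, 1.0, 2.0])
K = 200_000                              # oversampling factor (frames/pixel)

# QIS model: each one-bit jot fires iff it receives at least one photon.
bits = rng.poisson(theta, size=(K, theta.size)) >= 1

# Closed-form inversion: P(bit = 1) = 1 - exp(-theta), so the maximum-
# likelihood estimate from the observed bit fraction p_hat is -log(1 - p_hat).
p_hat = bits.mean(axis=0)
theta_hat = -np.log1p(-p_hat)
```

Because the inversion is a fixed pointwise nonlinearity, the remaining work is ordinary image denoising, which the paper delegates to an off-the-shelf denoiser.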
Regularized Image Reconstruction for Ultrasound Attenuation Transmission Tomography
I. Peterlik
2008-06-01
The paper is focused on ultrasonic transmission tomography as a potential medical imaging modality, namely for breast cancer diagnosis. The ultrasound attenuation coefficient is one of the tissue parameters related to the pathological state of the tissue. A technique to reconstruct images of the attenuation distribution is presented. Furthermore, an alternative to the commonly used filtered backprojection or algebraic reconstruction techniques is proposed. It is based on a regularization of the image reconstruction problem which imposes smoothness in the resulting images while preserving edges. The approach is analyzed on synthetic data sets. The results show that it stabilizes the image restoration by compensating for the main sources of estimation errors in this imaging modality.
Gibbs artifact reduction for POCS super-resolution image reconstruction
Chuangbai XIAO; Jing YU; Kaina SU
2008-01-01
The topic of super-resolution image reconstruction has recently received considerable attention in the research community. Super-resolution image reconstruction methods attempt to create a single high-resolution image from a number of low-resolution images (or a video sequence). The method of projections onto convex sets (POCS) for super-resolution image reconstruction has attracted many researchers' attention. In this paper, we propose an improvement to reduce the amount of Gibbs artifacts present on the edges of the high-resolution image reconstructed by the POCS method. The proposed method weights the blur PSF centered at an edge pixel with an exponential function, and consequently decreases the coefficients of the PSF in the direction orthogonal to the edge. Experimental results show that the modification effectively reduces the visibility of Gibbs artifacts on edges and noticeably improves the quality of the reconstructed high-resolution image.
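As background for the POCS framework itself, here is a minimal alternating-projection sketch with two convex sets, a data-consistency affine set and an amplitude box (the actual super-resolution constraint sets, and the PSF re-weighting proposed above, are not modeled; sizes and seeds are invented):

```python
import numpy as np

def project_affine(x, A, b):
    # Projection onto the data-consistency set {x : A x = b}.
    return x - A.T @ np.linalg.solve(A @ A.T, A @ x - b)

def project_box(x, lo=0.0, hi=1.0):
    # Projection onto the amplitude-constraint set [lo, hi]^n.
    return np.clip(x, lo, hi)

def pocs(A, b, n_iter=2000):
    # Alternating projections converge to a point in the intersection of the
    # two convex sets (when that intersection is non-empty).
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = project_box(project_affine(x, A, b))
    return x

rng = np.random.default_rng(4)
x_feas = 0.1 + 0.8 * rng.random(20)      # a feasible point inside [0, 1]^20
A = rng.standard_normal((15, 20))        # stand-in observation operator
b = A @ x_feas
x_pocs = pocs(A, b)
```

In super-resolution each low-resolution frame contributes one data-consistency set, and the iterate cycles through all of them in the same fashion.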
Optimization of CT image reconstruction algorithms for the lung tissue research consortium (LTRC)
McCollough, Cynthia; Zhang, Jie; Bruesewitz, Michael; Bartholmai, Brian
2006-03-01
To create a repository of clinical data, CT images and tissue samples, and to more clearly understand the pathogenetic features of pulmonary fibrosis and emphysema, the National Heart, Lung, and Blood Institute (NHLBI) launched a cooperative effort known as the Lung Tissue Research Consortium (LTRC). The CT images for the LTRC effort must contain accurate CT numbers in order to characterize tissues, and must have high spatial resolution to show fine anatomic structures. This study was performed to optimize the CT image reconstruction algorithms to achieve these criteria. Quantitative analyses of phantom and clinical images were conducted. The ACR CT accreditation phantom, containing five regions of distinct CT attenuations (CT numbers of approximately -1000 HU, -80 HU, 0 HU, 130 HU and 900 HU) and a high-contrast spatial resolution test pattern, was scanned using CT systems from two manufacturers (General Electric (GE) Healthcare and Siemens Medical Solutions). Phantom images were reconstructed using all relevant reconstruction algorithms. Mean CT numbers and image noise (standard deviation) were measured and compared for the five materials. Clinical high-resolution chest CT images acquired on a GE CT system for a patient with diffuse lung disease were reconstructed using the BONE and STANDARD algorithms and evaluated by a thoracic radiologist in terms of image quality and disease extent. The clinical BONE images were processed with a 3 × 3 × 3 median filter to simulate a thicker slice reconstructed with smoother algorithms, which have traditionally been proven to provide an accurate estimation of emphysema extent in the lungs. Using a threshold technique, the volume of emphysema (defined as the percentage of lung voxels having a CT number lower than -950 HU) was computed for the STANDARD, BONE, and filtered BONE images. The CT numbers measured in the ACR CT phantom images were accurate for all reconstruction kernels for both manufacturers. As expected, visual evaluation of the
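The threshold technique described above reduces to a one-line computation. A toy sketch (the function name and volume are invented; the -950 HU cutoff is from the text):

```python
import numpy as np

def emphysema_fraction(hu, lung_mask, threshold=-950.0):
    # Percentage of lung voxels with CT number below the emphysema threshold.
    lung = hu[lung_mask]
    return 100.0 * np.count_nonzero(lung < threshold) / lung.size

# Toy volume: 2 of 20 slices set to emphysema-like attenuation -> 10%.
hu = np.full((20, 20, 20), -800.0)
mask = np.ones_like(hu, dtype=bool)
hu[:2] = -980.0
print(emphysema_fraction(hu, mask))          # → 10.0
```

The sensitivity of this percentage to the reconstruction kernel is exactly why the study compares STANDARD, BONE, and filtered BONE images.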
3D Reconstruction of Human Motion from Monocular Image Sequences.
Wandt, Bastian; Ackermann, Hanno; Rosenhahn, Bodo
2016-08-01
This article tackles the problem of estimating non-rigid human 3D shape and motion from image sequences taken by uncalibrated cameras. Similar to other state-of-the-art solutions, we factorize 2D observations into camera parameters, base poses and mixing coefficients. Existing methods require sufficient camera motion during the sequence to achieve a correct 3D reconstruction. To obtain convincing 3D reconstructions from arbitrary camera motion, our method is based on a priori trained base poses. We show that strong periodicity assumptions on the coefficients can be used to define an efficient and accurate algorithm for estimating periodic motion such as walking patterns. For the extension to non-periodic motion, we propose a novel regularization term based on temporal bone length constancy. In contrast to other works, the proposed method does not use a predefined skeleton or anthropometric constraints and can handle arbitrary camera motion. We achieve convincing 3D reconstructions, even under the influence of noise and occlusions. Multiple experiments based on a 3D error metric demonstrate the stability of the proposed method. Compared to other state-of-the-art methods, our algorithm shows a significant improvement.
Neural net classification and LMS reconstruction to halftone images
Chang, Pao-Chi; Yu, Che-Sheng
1998-01-01
The objective of this work is to reconstruct high quality gray-level images from halftone images, i.e., the inverse halftoning process. We develop high performance halftone reconstruction methods for several commonly used halftone techniques. For better reconstruction quality, image classification based on halftone techniques is placed before the reconstruction process so that the halftone reconstruction can be fine-tuned for each halftone technique. The classification is based on enhanced 1D correlation of halftone images and processed with a three-layer back-propagation neural network. This classification method reached 100 percent accuracy with a limited set of images processed by dispersed-dot ordered dithering, clustered-dot ordered dithering, constrained average, and error diffusion methods in our experiments. For image reconstruction, we apply the least-mean-square (LMS) adaptive filtering algorithm, which discovers the optimal filter weights and mask shapes. As a result, it yields very good reconstruction image quality. Error diffusion yields the best reconstructed quality among the halftone methods. In addition, the LMS method generates optimal image masks which are significantly different for each halftone method. These optimal masks can also be applied to more sophisticated reconstruction methods as the default filter masks.
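The LMS update at the heart of the reconstruction stage can be sketched on a toy linear problem. Here the "halftone window" inputs are random vectors and the target mask is known, which lets the recovery be checked; real training would use halftone/original image pairs, and all names and sizes below are invented.

```python
import numpy as np

def lms_train(X, d, mu=0.02, n_epochs=50):
    # Least-mean-square adaptation: w <- w + mu * e * x with e = d - w.x.
    # In inverse halftoning, x would be a window of halftone pixels and d the
    # corresponding grey-level pixel of a training original.
    w = np.zeros(X.shape[1])
    for _ in range(n_epochs):
        for x, t in zip(X, d):
            w += mu * (t - w @ x) * x
    return w

rng = np.random.default_rng(3)
w_true = rng.standard_normal(9)              # stand-in for a 3x3 filter mask
X = rng.standard_normal((500, 9))            # invented training windows
d = X @ w_true                               # noiseless training targets
w = lms_train(X, d)
```

Training one weight vector per halftone class is what makes the upstream classifier worthwhile: each class gets a mask adapted to its dot structure.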
Photogrammetric 3D reconstruction using mobile imaging
Fritsch, Dieter; Syll, Miguel
2015-03-01
In our paper we demonstrate the development of an Android application (AndroidSfM) for photogrammetric 3D reconstruction that works on smartphones and tablets alike. The photos are taken with mobile devices and can thereafter be directly calibrated on the device using standard calibration algorithms of photogrammetry and computer vision. Due to the still limited computing resources on mobile devices, a client-server handshake using Dropbox transfers the photos to the server, which runs AndroidSfM for the pose estimation of all photos by Structure-from-Motion and thereafter uses the oriented set of photos for dense point cloud estimation by dense image matching algorithms. The result is transferred back to the mobile device for visualization and ad-hoc on-screen measurements.
ROI-image Reconstruction for a Saddle Trajectory
夏丹; 余立锋; 邹宇
2005-01-01
Recently, we have developed a general formula for 3D cone-beam CT reconstruction, which can accommodate general, smooth trajectories. From the formula, algorithms can be derived for image reconstruction within a region of interest (ROI) from truncated data. In this work, we apply the derived backprojection filtration (BPF) algorithm and the minimum-data filtered backprojection (MD-FBP) algorithm to reconstructing ROI images from cone-beam projection data acquired with a saddle trajectory. Our numerical results in these studies demonstrate that the BPF and MD-FBP algorithms can accurately reconstruct ROI images from truncated data.
Model-Based Reconstructive Elasticity Imaging Using Ultrasound
Salavat R. Aglyamov
2007-01-01
Elasticity imaging is a reconstructive imaging technique where tissue motion in response to mechanical excitation is measured using modern imaging systems, and the estimated displacements are then used to reconstruct the spatial distribution of Young's modulus. Here we present an ultrasound elasticity imaging method that utilizes a model-based technique for Young's modulus reconstruction. Based on the geometry of the imaged object, only one axial component of the strain tensor is used. The numerical implementation of the method is highly efficient because the reconstruction is based on an analytic solution of the forward elastic problem. The model-based approach is illustrated using two potential clinical applications: differentiation of liver hemangiomas and staging of deep venous thrombosis. Overall, these studies demonstrate that model-based reconstructive elasticity imaging can be used in applications where the geometry of the object and the surrounding tissue is somewhat known and certain assumptions about the pathology can be made.
CDP mapping and image reconstruction by using offset VSP data
Anonymous
2000-01-01
Because zero-offset VSP (Vertical Seismic Profile) data can only provide information on rock properties and structure in the area around the Fresnel zone within the well, the scheme of offset VSP was developed to acquire reflection information away from the borehole, in order to widen the range of the VSP survey and to improve the precision of imaging. In this paper, we present a new CDP (Common Depth Point) mapping approach to image the reflecting structure by using offset VSP data. For the processing of offset VSP data, we first separate the up-going and down-going wave-fields from the VSP data by means of an F-K filtering technique; we can then calculate the mapping conditions (position and reflection traveltime for each CDP point) in homogeneous media, and reconstruct the inner structure of the earth. The method is tested on offset VSP data simulating the case of a super-deep borehole by means of the finite-difference method. The imaged structure matches the real model very well. The results show that the method presented here can accurately image the inner structure of the earth if the deviation of the initial velocity model from the true model is less than 10%. Finally, we present the imaged results for real offset data using this method.
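The F-K wavefield separation step can be illustrated on a synthetic two-plane-wave section. The sketch below is a simplification (real VSP data need tapering and careful handling of the f = 0 and k = 0 lines, which carry no energy in this toy example):

```python
import numpy as np

nt, nz = 64, 64
t = np.arange(nt)[:, None] / nt          # time axis (rows)
z = np.arange(nz)[None, :] / nz          # depth axis (columns)

# Two plane waves with integer cycle counts (keeps DFT peaks on the grid):
down = np.cos(2 * np.pi * (8 * t - 4 * z))   # moves toward increasing depth
up = np.cos(2 * np.pi * (8 * t + 4 * z))     # moves toward decreasing depth
data = down + up                             # the recorded VSP wavefield

# In the F-K domain (numpy fft2 sign convention), exp(+i*2pi*(f0*t - k0*z))
# peaks at (f0, -k0): down-going energy occupies the quadrants where
# f * k < 0 and up-going energy the quadrants where f * k > 0.
D = np.fft.fft2(data)
f = np.fft.fftfreq(nt)[:, None]
k = np.fft.fftfreq(nz)[None, :]
down_rec = np.fft.ifft2(np.where(f * k < 0, D, 0)).real
up_rec = np.fft.ifft2(np.where(f * k > 0, D, 0)).real
```

Quadrant masking in the 2D spectrum is the essence of the dip-based separation; production processing replaces the hard mask with tapered fans to avoid ringing.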
Tree STEM Reconstruction Using Vertical Fisheye Images: a Preliminary Study
Berveglieri, A.; Tommaselli, A. M. G.
2016-06-01
A preliminary study was conducted to assess a tree stem reconstruction technique using panoramic images taken with fisheye lenses. The concept is similar to the Structure from Motion (SfM) technique, but the acquisition and data preparation rely on fisheye cameras to generate a vertical image sequence with height variations of the camera station. Each vertical image is rectified to four vertical planes, producing horizontal lateral views. The stems in the lateral views are rectified to the same scale across the image sequence to facilitate image matching. Using bundle adjustment, the stems are reconstructed, enabling later measurement and extraction of several attributes. The 3D reconstruction was performed with the proposed technique and compared with SfM. The preliminary results showed that the stems were correctly reconstructed using the lateral virtual images generated from the vertical fisheye images, with the advantage of using fewer images, taken from a single station.
Undersampled MR Image Reconstruction with Data-Driven Tight Frame
Jianbo Liu; Shanshan Wang; Xi Peng; Dong Liang
2015-01-01
Undersampled magnetic resonance image reconstruction employing sparsity regularization has fascinated many researchers in recent years under the support of compressed sensing theory. Nevertheless, most existing sparsity-regularized reconstruction methods either lack adaptability to capture the structure information or suffer from high computational load. With the aim of further improving image reconstruction accuracy without introducing too much computation, this paper proposes a data-driven ...
3D Reconstruction of NMR Images by LabVIEW
Peter IZAK
2007-01-01
This paper introduces an experiment in 3D reconstruction of NMR images via virtual instrumentation (LabVIEW). The main idea is based on the marching cubes algorithm and image processing implemented with the Vision Assistant module. The two-dimensional images acquired by the magnetic resonance device provide information about the surface properties of the human body. An algorithm is implemented which can be used for 3D reconstruction of magnetic resonance images in biomedical applications.
Reconstruction of Optical Thickness from Hoffman Modulation Contrast Images
Olsen, Niels Holm; Sporring, Jon; Nielsen, Mads;
2003-01-01
Hoffman microscopy imaging systems are part of numerous fertility clinics worldwide. We discuss the physics of the Hoffman imaging system from optical thickness to image intensity, implement a simple, yet fast, reconstruction algorithm using the Fast Fourier Transform, and discuss the usability...... of the method on a number of cells from a human embryo. The novelty lies in identifying the non-linearity of a typical Hoffman imaging system, and in the application of the Fourier transform to reconstruct the optical thickness....
Tan, Tien Jin, E-mail: tien_jin_tan@cgh.com.sg [Department of Radiology, Vancouver General Hospital, Vancouver, BC (Canada); Aljefri, Ahmad M. [Department of Radiology, Vancouver General Hospital, Vancouver, BC (Canada); Clarkson, Paul W.; Masri, Bassam A. [Department of Orthopaedics, University of British Columbia, Vancouver, BC (Canada); Ouellette, Hugue A.; Munk, Peter L.; Mallinson, Paul I. [Department of Radiology, Vancouver General Hospital, Vancouver, BC (Canada)
2015-09-15
Highlights: • Advances in reconstructive orthopaedic techniques now allow for limb salvage and prosthetic reconstruction procedures to be performed on patients who would otherwise be required to undergo debilitating limb amputations for malignant bone tumours. • The resulting post-operative imaging of such cases can be daunting for the radiologist to interpret, particularly in the presence of distorted anatomy and unfamiliar hardware. • This article reviews the indications for limb salvage surgery, prosthetic reconstruction devices involved, expected post-operative imaging findings, as well as the potential hardware related complications that may be encountered in the management of such cases. • By being aware of the various types of reconstructive techniques used in limb salvage surgery as well as the potential complications, the reporting radiologist should possess greater confidence in making an accurate assessment of the expected post-operative imaging findings in the management of such cases. - Abstract: Advances in reconstructive orthopaedic techniques now allow for limb salvage and prosthetic reconstruction procedures to be performed on patients who would otherwise be required to undergo debilitating limb amputations for malignant bone tumours. The resulting post-operative imaging of such cases can be daunting for the radiologist to interpret, particularly in the presence of distorted anatomy and unfamiliar hardware. This article reviews the indications for limb salvage surgery, prosthetic reconstruction devices involved, expected post-operative imaging findings, as well as the potential hardware related complications that may be encountered in the management of such cases.
Image Reconstruction Using a Genetic Algorithm for Electrical Capacitance Tomography
MOU Changhua; PENG Lihui; YAO Danya; XIAO Deyun
2005-01-01
Electrical capacitance tomography (ECT) has been used for more than a decade for imaging dielectric processes. However, because of its ill-posedness and non-linearity, ECT image reconstruction has always been a challenge. A new genetic algorithm (GA) developed for ECT image reconstruction uses initial results from a linear back-projection, which is widely used for ECT image reconstruction to optimize the threshold and the maximum and minimum gray values for the image. The procedure avoids optimizing the gray values pixel by pixel and significantly reduces the search space dimension. Both simulations and static experimental results show that the method is efficient and capable of reconstructing high quality images. Evaluation criteria show that the GA-based method has smaller image error and greater correlation coefficients. In addition, the GA-based method converges quickly with a small number of iterations.
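The search-space reduction described above (an LBP starting image, then a GA over only the threshold and the two grey levels) can be sketched as follows; the grid size, the random sensitivity matrix, and the GA settings are illustrative stand-ins, not the paper's actual setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy ECT forward model (illustrative): 16x16 pixel grid, 28 capacitance
# measurements, random sensitivity matrix S.
n_pix, n_meas = 16 * 16, 28
S = rng.random((n_meas, n_pix))
x_true = np.zeros(n_pix)
x_true[:60] = 1.0                    # binary permittivity distribution
c_meas = S @ x_true                  # noise-free capacitance data

# Linear back-projection (LBP) provides the initial grey-level image.
g = S.T @ c_meas
g = (g - g.min()) / (g.max() - g.min())

def render(params):
    """Threshold the LBP image into a two-level image."""
    thr, lo, hi = params
    return np.where(g > thr, hi, lo)

def fitness(params):
    """Negative capacitance residual: larger is better."""
    return -np.linalg.norm(S @ render(params) - c_meas)

# Tiny GA over just three genes (threshold, min grey, max grey), which is
# what avoids optimizing the grey values pixel by pixel.
pop = rng.random((40, 3))
for _ in range(60):
    scores = np.array([fitness(p) for p in pop])
    parents = pop[np.argsort(scores)[::-1][:20]]      # truncation selection
    mates = parents[rng.integers(0, 20, 20)]
    kids = 0.5 * (parents + mates)                    # blend crossover
    kids += 0.02 * rng.standard_normal((20, 3))       # Gaussian mutation
    pop = np.clip(np.vstack([parents, kids]), 0.0, 1.0)

best = pop[np.argmax([fitness(p) for p in pop])]
x_hat = render(best)
```

Because only three genes are optimized, each generation costs a handful of matrix-vector products rather than a per-pixel search.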
Three-dimensional microscopy and sectional image reconstruction using optical scanning holography.
Lam, Edmund Y; Zhang, Xin; Vo, Huy; Poon, Ting-Chung; Indebetouw, Guy
2009-12-01
Fast acquisition and high axial resolution are two primary requirements for three-dimensional microscopy. However, they are sometimes conflicting: imaging modalities such as confocal imaging can deliver superior resolution at the expense of sequential acquisition at different axial planes, which is a time-consuming process. Optical scanning holography (OSH) promises to deliver a good trade-off between these two goals. With just a single scan, we can capture the entire three-dimensional volume in a digital hologram; the data can then be processed to obtain the individual sections. An accurate modeling of the imaging system is key to devising an appropriate image reconstruction algorithm, especially for real data where random noise and other imaging imperfections must be taken into account. In this paper we demonstrate sectional image reconstruction by applying an inverse imaging sectioning technique to experimental OSH data of biological specimens and visualizing the sections using the OSA Interactive Science Publishing software.
Accelerated gradient methods for total-variation-based CT image reconstruction
Joergensen, Jakob H.; Hansen, Per Christian [Technical Univ. of Denmark, Lyngby (Denmark). Dept. of Informatics and Mathematical Modeling; Jensen, Tobias L.; Jensen, Soeren H. [Aalborg Univ. (Denmark). Dept. of Electronic Systems; Sidky, Emil Y.; Pan, Xiaochuan [Chicago Univ., Chicago, IL (United States). Dept. of Radiology
2011-07-01
Total-variation (TV)-based CT image reconstruction has been shown experimentally to be capable of producing accurate reconstructions from sparse-view data. In particular TV-based reconstruction is well suited for images with piecewise nearly constant regions. Computationally, however, TV-based reconstruction is demanding, especially for 3D imaging, and the reconstruction from clinical data sets is far from being close to real-time. This is undesirable from a clinical perspective, and thus there is an incentive to accelerate the solution of the underlying optimization problem. The TV reconstruction can in principle be found by any optimization method, but in practice the large scale of the systems arising in CT image reconstruction precludes the use of memory-intensive methods such as Newton's method. The simple gradient method has much lower memory requirements, but exhibits prohibitively slow convergence. In the present work we address the question of how to reduce the number of gradient method iterations needed to achieve a high-accuracy TV reconstruction. We consider the use of two accelerated gradient-based methods, GPBB and UPN, to solve the 3D-TV minimization problem in CT image reconstruction. The former incorporates several heuristics from the optimization literature such as Barzilai-Borwein (BB) step size selection and nonmonotone line search. The latter uses a cleverly chosen sequence of auxiliary points to achieve a better convergence rate. The methods are memory efficient and equipped with a stopping criterion to ensure that the TV reconstruction has indeed been found. An implementation of the methods (in C with interface to Matlab) is available for download from http://www2.imm.dtu.dk/~pch/TVReg/. We compare the proposed methods with the standard gradient method, applied to a 3D test problem with synthetic few-view data. We find experimentally that for realistic parameters the proposed methods significantly outperform the standard gradient method. (orig.)
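The BB step-size idea behind GPBB can be sketched on a 1-D denoising stand-in for the 3-D CT problem; the quadratic data term, the smoothed-TV gradient, and the step clipping safeguard are assumptions of this sketch, not the paper's exact algorithm:

```python
import numpy as np

rng = np.random.default_rng(1)

# Minimise f(x) = 0.5*||x - b||^2 + lam * sum_i sqrt((x_{i+1}-x_i)^2 + eps),
# a smoothed-TV denoising problem, with Barzilai-Borwein (BB) step sizes.
n, lam, eps = 200, 2.0, 1e-3
x_true = np.repeat([0.0, 1.0, 0.3], [70, 80, 50])   # piecewise constant
b = x_true + 0.05 * rng.standard_normal(n)

def grad(x):
    d = np.diff(x)
    w = d / np.sqrt(d * d + eps)     # derivative of the smoothed |d|
    g_tv = np.zeros(n)
    g_tv[:-1] -= w
    g_tv[1:] += w
    return (x - b) + lam * g_tv

x, g, step = b.copy(), grad(b), 1e-2
for _ in range(200):
    x_new = x - step * g
    g_new = grad(x_new)
    s, y = x_new - x, g_new - g
    # BB1 step, clipped as a simple safeguard against instability
    step = float(np.clip((s @ s) / (s @ y), 1e-3, 1.0))
    x, g = x_new, g_new

err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
```

The BB step adapts to the local curvature along the last displacement, which is what recovers near-second-order behavior at first-order memory cost; GPBB additionally adds a nonmonotone line search.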
Reconstruction of biofilm images: combining local and global structural parameters
Resat, Haluk; Renslow, Ryan S.; Beyenal, Haluk
2014-10-20
Digitized images can be used for quantitative comparison of biofilms grown under different conditions. Using biofilm image reconstruction, it was previously found that biofilms with a completely different look can have nearly identical structural parameters and that the most commonly utilized global structural parameters were not sufficient to uniquely define these biofilms. Here, additional local and global parameters are introduced to show that these parameters considerably increase the reliability of the image reconstruction process. Assessment using human evaluators indicated that the correct identification rate of the reconstructed images increased from 50% to 72% with the introduction of the new parameters into the reconstruction procedure. An expanded set of parameters especially improved the identification of biofilm structures with internal orientational features and of structures in which colony sizes and spatial locations varied. Hence, the newly introduced structural parameter sets helped to better classify the biofilms by incorporating finer local structural details into the reconstruction process.
Undersampled MR Image Reconstruction with Data-Driven Tight Frame
Jianbo Liu
2015-01-01
Full Text Available Undersampled magnetic resonance image reconstruction employing sparsity regularization has fascinated many researchers in recent years under the support of compressed sensing theory. Nevertheless, most existing sparsity-regularized reconstruction methods either lack adaptability to capture the structure information or suffer from high computational load. With the aim of further improving image reconstruction accuracy without introducing too much computation, this paper proposes a data-driven tight frame magnetic resonance image reconstruction (DDTF-MRI) method. By taking advantage of the efficiency and effectiveness of data-driven tight frame, DDTF-MRI trains an adaptive tight frame to sparsify the to-be-reconstructed MR image. Furthermore, a two-level Bregman iteration algorithm has been developed to solve the proposed model. The proposed method has been compared to two state-of-the-art methods on four datasets and encouraging performances have been achieved by DDTF-MRI.
Imaging tests for accurate diagnosis of acute biliary pancreatitis.
Şurlin, Valeriu; Săftoiu, Adrian; Dumitrescu, Daniela
2014-11-28
Gallstones represent the most frequent aetiology of acute pancreatitis in many statistics all over the world, estimated between 40%-60%. Accurate diagnosis of acute biliary pancreatitis (ABP) is of utmost importance because clearance of lithiasis [gallbladder and common bile duct (CBD)] rules out recurrences. Confirmation of biliary lithiasis is done by imaging. The sensitivity of ultrasonography (US) in the detection of gallstones is over 95% in uncomplicated cases, but in ABP, sensitivity for gallstone detection is lower, being less than 80% due to the ileus and bowel distension. Sensitivity of transabdominal ultrasonography (TUS) for choledocholithiasis varies between 50%-80%, but the specificity is high, reaching 95%. The diameter of the bile duct may be suggestive for diagnosis. Endoscopic ultrasonography (EUS) seems to be a more effective tool to diagnose ABP than endoscopic retrograde cholangiopancreatography (ERCP), which should be performed only for therapeutic purposes. As the sensitivity and specificity of computerized tomography are lower compared to state-of-the-art magnetic resonance cholangiopancreatography (MRCP) or EUS, especially for small stones and small diameter of the CBD, the latter techniques are nowadays preferred for the evaluation of ABP patients. ERCP has the highest accuracy for the diagnosis of choledocholithiasis and is used as a reference standard in many studies, especially after sphincterotomy and balloon extraction of CBD stones. Laparoscopic ultrasonography is a useful tool for the intraoperative diagnosis of choledocholithiasis. Routine exploration of the CBD in patients scheduled for cholecystectomy after an attack of ABP was not proven useful. A significant rate of so-called idiopathic pancreatitis is actually caused by microlithiasis and/or biliary sludge. In conclusion, the general algorithm for CBD stone detection starts with anamnesis, serum biochemistry and then TUS, followed by EUS or MRCP. In the end
Rumple, C.; Richter, J.; Craven, B. A.; Krane, M.
2012-11-01
A summary of the research being carried out by our multidisciplinary team to better understand the form and function of the nose in different mammalian species that include humans, carnivores, ungulates, rodents, and marine animals will be presented. The mammalian nose houses a convoluted airway labyrinth, where two hallmark features of mammals occur, endothermy and olfaction. Because of the complexity of the nasal cavity, the anatomy and function of these upper airways remain poorly understood in most mammals. However, recent advances in high-resolution medical imaging, computational modeling, and experimental flow measurement techniques are now permitting the study of airflow and respiratory and olfactory transport phenomena in anatomically-accurate reconstructions of the nasal cavity. Here, we focus on efforts to manufacture transparent, anatomically-accurate models for stereo particle image velocimetry (SPIV) measurements of nasal airflow. Challenges in the design and manufacture of index-matched anatomical models are addressed and preliminary SPIV measurements are presented. Such measurements will constitute a validation database for concurrent computational fluid dynamics (CFD) simulations of mammalian respiration and olfaction. Supported by the National Science Foundation.
Zwick, D.; Sakhaee, E.; Balachandar, S.; Entezari, A.
2017-10-01
Multiphase flow simulation serves a vital purpose in applications as diverse as engineering design, natural disaster prediction, and even the study of astrophysical phenomena. In these scenarios, it can be very difficult, expensive, or even impossible to fully represent the physical system under consideration. Still, many such real-world applications can be modeled as a two-phase flow containing both continuous and dispersed phases. Consequently, the continuous phase is thought of as a fluid and the dispersed phase as particles. The continuous phase is typically treated in the Eulerian frame of reference and represented on a fixed grid, while the dispersed phase is treated in the Lagrangian frame and represented by a sample distribution of Lagrangian particles that approximate a cloud. Coupling between the phases requires interpolation of the continuous phase properties at the locations of the Lagrangian particles. This interpolation step is straightforward and can be performed at higher order accuracy. The reverse process of projecting the Lagrangian particle properties from the sample points to the Eulerian grid is complicated by the time-dependent non-uniform distribution of the Lagrangian particles. In this paper we numerically examine three reconstruction, or projection, methods: (i) direct summation (DS), (ii) least-squares, and (iii) sparse approximation. We choose a continuous representation of the dispersed phase property that is systematically varied from a simple single-mode periodic signal to a more complex artificially constructed turbulent signal to see how each method performs in reconstruction. In these experiments, we show that there is a link between the number of dispersed Lagrangian sample points and the number of structured grid points needed to accurately represent the underlying functional representation to machine accuracy. The least-squares method outperforms the other methods in most cases, while the sparse approximation method is able to
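The least-squares projection (method ii) can be illustrated in 1-D: scattered particle values are fit with grid-based basis functions and compared against plain direct summation (method i). The periodic hat basis and all sizes below are illustrative choices, not the paper's configuration:

```python
import numpy as np

rng = np.random.default_rng(2)

# Scattered Lagrangian particles carrying f(x) = sin(2*pi*x), projected
# onto a periodic 1-D Eulerian grid of linear "hat" basis functions.
n_grid, n_part = 16, 400
xp = rng.random(n_part)              # particle positions in [0, 1)
fp = np.sin(2 * np.pi * xp)          # carried particle property

h = 1.0 / n_grid
i = np.floor(xp / h).astype(int)     # cell containing each particle
t = xp / h - i                       # local coordinate within the cell
B = np.zeros((n_part, n_grid))       # B[p, j] = hat_j(xp[p])
B[np.arange(n_part), i % n_grid] = 1.0 - t
B[np.arange(n_part), (i + 1) % n_grid] = t

# (ii) least-squares projection: nodal values minimising ||B u - fp||^2
u_ls = np.linalg.lstsq(B, fp, rcond=None)[0]

# (i) direct summation: weight-normalised average at each node
u_ds = (B.T @ fp) / (B.T @ np.ones(n_part))

xg = np.arange(n_grid) * h
err_ls = np.max(np.abs(u_ls - np.sin(2 * np.pi * xg)))
err_ds = np.max(np.abs(u_ds - np.sin(2 * np.pi * xg)))
```

With enough particles per cell both approaches work; the least-squares fit removes the smoothing bias of direct summation, which is one reason it outperforms the other methods in the text's experiments.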
Muralidhar Mupparapu
2013-01-01
Full Text Available This review is the first of a series on CBCT multiplanar reconstructions for continuing education in three-dimensional head and neck anatomy. It gives the reader the needed anatomical references and clinical relevance for accurate interpretation of CBCT anatomy. The information is useful to all dental clinicians. All images are labeled and complete with legends. Only bone window settings are used for display of the CBCT images. The selected slices are displayed at a resolution of 300 micrometers.
Krings, Thomas; Mauerhofer, Eric
2011-06-01
This work improves the reliability and accuracy of the reconstruction of the total isotope activity content in heterogeneous nuclear waste drums containing point sources. The method is based on χ(2)-fits of the angular dependent count rate distribution measured during a drum rotation in segmented gamma scanning. A new description of the analytical calculation of the angular count rate distribution is introduced, based on a more precise model of the collimated detector. The new description is validated and compared to the old description using MCNP5 simulations of angular dependent count rate distributions of Co-60 and Cs-137 point sources. It is shown that the new model describes the angular dependent count rate distribution significantly more accurately than the old model. Hence, the reconstruction of the activity is more accurate and the errors are considerably reduced, leading to more reliable results. Furthermore, the results are compared to the conventional reconstruction method, which assumes a homogeneous matrix and activity distribution.
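The χ²-fitting step can be sketched with a simplified 1/d² point-source model standing in for the paper's collimated-detector model; the detector distance, search grids, and activity scale are invented for the example:

```python
import numpy as np

# Chi-square fit of the angular count-rate distribution seen while the
# drum rotates past a fixed detector on the x-axis.
R_det = 50.0                                  # detector distance (cm, assumed)
phi = np.linspace(0.0, 2 * np.pi, 72, endpoint=False)

def count_rate(phi, A, r, phi0):
    # point source at polar position (r, phi0) inside the drum
    d2 = R_det**2 + r**2 - 2 * R_det * r * np.cos(phi - phi0)
    return A / d2

rng = np.random.default_rng(3)
true = (4.0e5, 15.0, 1.2)                     # activity factor, radius, angle
data = rng.poisson(count_rate(phi, *true))    # counting statistics

# Coarse grid search over (r, phi0); the amplitude A is solved
# analytically for each candidate, then chi^2 with sigma^2 ~ counts.
sigma2 = np.maximum(data, 1.0)
best, best_chi2 = None, np.inf
for r in np.linspace(0.0, 25.0, 51):
    for p0 in np.linspace(0.0, 2 * np.pi, 180, endpoint=False):
        shape = count_rate(phi, 1.0, r, p0)
        A = np.sum(data * shape / sigma2) / np.sum(shape**2 / sigma2)
        chi2 = np.sum((data - A * shape) ** 2 / sigma2)
        if chi2 < best_chi2:
            best, best_chi2 = (A, r, p0), chi2

A_hat, r_hat, phi0_hat = best
```

The fitted radial position is what lets the activity be corrected for source eccentricity instead of assuming a homogeneous distribution.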
Sub-Angstrom microscopy through incoherent imaging and image reconstruction
Pennycook, S.J.; Jesson, D.E.; Chisholm, M.F. (Oak Ridge National Lab., TN (United States)); Ferridge, A.G.; Seddon, M.J. (Wellcome Research Lab., Beckenham (United Kingdom))
1992-03-01
Z-contrast scanning transmission electron microscopy (STEM) with a high-angle annular detector breaks the coherence of the imaging process, and provides an incoherent image of a crystal projection. Even in the presence of strong dynamical diffraction, the image can be accurately described as a convolution between an object function, sharply peaked at the projected atomic sites, and the probe intensity profile. Such an image can be inverted intuitively without the need for model structures, and therefore provides the important capability to reveal unanticipated interfacial arrangements. It represents a direct image of the crystal projection, revealing the location of the atomic columns and their relative high-angle scattering power. Since no phase is associated with a peak in the object function or the contrast transfer function, extension to higher resolution is also straightforward. Image restoration techniques such as maximum entropy, in conjunction with the 1.3 Angstrom probe anticipated for a 300 kV STEM, appear to provide a simple and robust route to the achievement of sub-Angstrom resolution electron microscopy.
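The incoherent-imaging model stated in the abstract (image = object function convolved with probe intensity) is simple to sketch in 1-D; the column positions, scattering powers, and Gaussian probe shape are illustrative, with only the 1.3 Angstrom probe width taken from the text:

```python
import numpy as np

# Incoherent Z-contrast image as a convolution of a sharply peaked
# object function with the probe intensity profile (1-D sketch).
n = 512
x = np.arange(n) * 0.2                        # position in Angstrom (assumed)

obj = np.zeros(n)
for col, z_power in [(150, 1.0), (170, 1.0), (300, 2.5)]:
    obj[col] = z_power                        # columns and their HA scattering power

fwhm = 1.3                                    # ~1.3 Angstrom probe from the text
sig = fwhm / 2.355
probe = np.exp(-0.5 * ((x - x[n // 2]) / sig) ** 2)
probe /= probe.sum()                          # unit-sum probe intensity

# Circular convolution via the FFT; ifftshift centres the kernel at index 0.
image = np.real(np.fft.ifft(np.fft.fft(obj) * np.fft.fft(np.fft.ifftshift(probe))))
```

Because the image is a positive convolution with no phase, peak positions map directly to column positions, which is why the inversion is intuitive and model-free.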
Photoacoustic image reconstruction based on Bayesian compressive sensing algorithm
Mingjian Sun; Naizhang Feng; Yi Shen; Jiangang Li; Liyong Ma; Zhenghua Wu
2011-01-01
The photoacoustic tomography (PAT) method, based on compressive sensing (CS) theory, requires that, for the CS reconstruction, the desired image should have a sparse representation in a known transform domain. However, the sparsity of photoacoustic signals is destroyed because noise is always present. Therefore, the original sparse signal cannot be effectively recovered using a general reconstruction algorithm. In this study, Bayesian compressive sensing (BCS) is employed to obtain highly sparse representations of photoacoustic images based on a set of noisy CS measurements. Simulation results demonstrate that the BCS-reconstructed image achieves superior performance compared with other state-of-the-art CS-reconstruction algorithms.
Sparse Reconstruction Schemes for Nonlinear Electromagnetic Imaging
Desmal, Abdulla
2016-03-01
Numerical results, obtained using synthetically generated or actually measured scattered fields, show that the images recovered by these sparsity-regularized methods are sharper and more accurate than those produced by existing methods. The methods developed in this work have potential application areas ranging from oil/gas reservoir engineering to biological imaging where sparse domains naturally exist.
Quantitative image quality evaluation for cardiac CT reconstructions
Tseng, Hsin-Wu; Fan, Jiahua; Kupinski, Matthew A.; Balhorn, William; Okerlund, Darin R.
2016-03-01
Maintaining image quality in the presence of motion is always desirable and challenging in clinical Cardiac CT imaging. Different image-reconstruction algorithms are available on current commercial CT systems that attempt to achieve this goal. It is widely accepted that image-quality assessment should be task-based and involve specific tasks, observers, and associated figures of merit. In this work, we developed an observer model that performed the task of estimating the percentage of plaque in a vessel from CT images. We compared task performance of Cardiac CT image data reconstructed using a conventional FBP reconstruction algorithm and the SnapShot Freeze (SSF) algorithm, each at default and optimal reconstruction cardiac phases. The purpose of this work is to design an approach for quantitative image-quality evaluation of temporal resolution for Cardiac CT systems. To simulate heart motion, a moving coronary-type phantom synchronized with an ECG signal was used. Three different percentage plaques embedded in a 3 mm vessel phantom were imaged multiple times under motion-free, 60 bpm, and 80 bpm heart rates. Static (motion-free) images of this phantom were taken as reference images for image template generation. Independent ROIs from the 60 bpm and 80 bpm images were generated by vessel tracking. The observer performed estimation tasks using these ROIs. Ensemble mean square error (EMSE) was used as the figure of merit. Results suggest that the quality of SSF images is superior to the quality of FBP images in higher heart-rate scans.
Basis Functions in Image Reconstruction From Projections: A Tutorial Introduction
Herman, Gabor T.
2015-11-01
The series expansion approaches to image reconstruction from projections assume that the object to be reconstructed can be represented as a linear combination of fixed basis functions, and the task of the reconstruction algorithm is to estimate the coefficients in such a linear combination based on the measured projection data. It is demonstrated that using spherically symmetric basis functions (blobs), instead of ones based on the more traditional pixels, yields superior reconstructions of medically relevant objects. The demonstration uses simulated computerized tomography projection data of head cross-sections and the series expansion method ART for the reconstruction. In addition to showing the results of one anecdotal example, the relative efficacy of using pixel and blob basis functions in image reconstruction from projections is also evaluated using a statistical hypothesis testing based task oriented comparison methodology. The superiority of the efficacy of blob basis functions over that of pixel basis functions is found to be statistically significant.
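The series-expansion setup (estimate the coefficient vector x with A x = b from measured projection data) pairs naturally with ART, i.e. Kaczmarz sweeps over the ray sums. The toy system below uses a random matrix rather than real CT geometry:

```python
import numpy as np

rng = np.random.default_rng(4)

# Series expansion: ray sums b = A x, where column j of A holds the
# projections of basis function j (pixel or blob alike); ART estimates x.
m, n = 200, 64
A = rng.standard_normal((m, n))      # stand-in projection matrix
x_true = rng.random(n)
b = A @ x_true                       # consistent, noise-free data

x = np.zeros(n)
relax = 0.8                          # relaxation parameter
for sweep in range(50):
    for i in range(m):               # one ART update per measured ray sum
        a = A[i]
        x += relax * (b[i] - a @ x) / (a @ a) * a

resid = np.linalg.norm(A @ x - b) / np.linalg.norm(b)
```

Switching from pixels to blobs changes only how the columns of A are generated; the update itself is identical.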
Kudrolli, Haris A.
2001-04-01
A three dimensional (3D) reconstruction procedure for Positron Emission Tomography (PET) based on inverse Monte Carlo analysis is presented. PET is a medical imaging modality which employs a positron emitting radio-tracer to give functional images of an organ's metabolic activity. This makes PET an invaluable tool in the detection of cancer and for in-vivo biochemical measurements. There are a number of analytical and iterative algorithms for image reconstruction of PET data. Analytical algorithms are computationally fast, but the assumptions intrinsic in the line integral model limit their accuracy. Iterative algorithms can apply accurate models for reconstruction and give improvements in image quality, but at an increased computational cost. These algorithms require the explicit calculation of the system response matrix, which may not be easy to calculate. This matrix gives the probability that a photon emitted from a certain source element will be detected in a particular detector line of response. The ``Three Dimensional Stochastic Sampling'' (SS3D) procedure implements iterative algorithms in a manner that does not require the explicit calculation of the system response matrix. It uses Monte Carlo techniques to simulate the process of photon emission from a source distribution and interaction with the detector. This technique has the advantage of being able to model complex detector systems and also take into account the physics of gamma ray interaction within the source and detector systems, which leads to an accurate image estimate. A series of simulation studies was conducted to validate the method using the Maximum Likelihood - Expectation Maximization (ML-EM) algorithm. The accuracy of the reconstructed images was improved by using an algorithm that required a priori knowledge of the source distribution. Means to reduce the computational time for reconstruction were explored by using parallel processors and algorithms that had faster convergence rates
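The ML-EM algorithm used for validation has a compact multiplicative update, x ← x / (Pᵀ1) · Pᵀ(y / (P x)); a small synthetic sketch follows, with a random system response matrix P in place of a Monte Carlo-derived one:

```python
import numpy as np

rng = np.random.default_rng(8)

# ML-EM for emission tomography on a toy system: P[i, j] is the
# probability that a photon from source element j is detected in line
# of response i (here synthetic and normalised per source element).
n_src, n_det = 30, 90
P = rng.random((n_det, n_src))
P /= P.sum(axis=0)                   # detection probabilities per column
x_true = rng.random(n_src) * 100
y = rng.poisson(P @ x_true)          # measured counts

x = np.ones(n_src)                   # positive initial estimate
sens = P.sum(axis=0)                 # sensitivity image, P^T 1
for _ in range(200):
    ratio = y / np.maximum(P @ x, 1e-12)
    x = x / sens * (P.T @ ratio)     # multiplicative ML-EM update

rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
```

The SS3D procedure's contribution is precisely to avoid forming P explicitly, replacing the P x and Pᵀ r products with Monte Carlo simulation of emission and detection. A useful sanity check is that ML-EM preserves total counts and positivity at every iteration.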
Sparse Reconstruction for Micro Defect Detection in Acoustic Micro Imaging.
Zhang, Yichun; Shi, Tielin; Su, Lei; Wang, Xiao; Hong, Yuan; Chen, Kepeng; Liao, Guanglan
2016-10-24
Acoustic micro imaging has been proven to be sufficiently sensitive for micro defect detection. In this study, we propose a sparse reconstruction method for acoustic micro imaging. A finite element model with a micro defect is developed to emulate the physical scanning. Then we obtain the point spread function, a blur kernel for sparse reconstruction. We reconstruct deblurred images from the oversampled C-scan images based on l₁-norm regularization, which can enhance the signal-to-noise ratio and improve the accuracy of micro defect detection. The method is further verified by experimental data. The results demonstrate that the sparse reconstruction is effective for micro defect detection in acoustic micro imaging.
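A 1-D sketch of the l1-regularised deblurring step: with the point spread function as the blur kernel, ISTA (a standard solver assumed here; the paper does not name its solver) recovers a sparse defect map from the blurred scan:

```python
import numpy as np

rng = np.random.default_rng(5)

# Recover a sparse defect map x from a blurred, noisy scan y = k * x + n.
n = 256
x_true = np.zeros(n)
x_true[[40, 46, 180]] = [1.0, 0.6, 0.8]       # isolated micro defects

t = np.arange(-8, 9)
k = np.exp(-0.5 * (t / 2.0) ** 2)             # Gaussian stand-in for the PSF
k /= k.sum()
K = np.fft.fft(np.pad(k, (0, n - k.size)))    # circular blur operator

def blur(v):
    return np.real(np.fft.ifft(np.fft.fft(v) * K))

def blurT(v):                                  # adjoint of the blur
    return np.real(np.fft.ifft(np.fft.fft(v) * np.conj(K)))

y = blur(x_true) + 0.001 * rng.standard_normal(n)

lam, L = 0.005, 1.0                            # max|K| <= 1, so step 1/L works
x = np.zeros(n)
for _ in range(1000):
    z = x - blurT(blur(x) - y) / L             # gradient step on ||Kx - y||^2
    x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold

peaks = np.flatnonzero(x > 0.1)
```

The soft-threshold line is where the l1 penalty enters; it suppresses noise while keeping the isolated defect peaks, which is the deblurring behavior the abstract credits with improving the signal-to-noise ratio.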
Chang, L.T.
1976-05-01
Two techniques for radionuclide imaging and reconstruction have been studied; both are used for improvement of depth resolution. The first technique is called coded aperture imaging, which is a technique of tomographic imaging. The second technique is a special 3-D image reconstruction method which is introduced as an improvement to so-called focal-plane tomography. (auth)
Rau, Urvashi; Owen, Frazer N
2016-01-01
Many deep wide-band wide-field radio interferometric surveys are being designed to accurately measure intensities, spectral indices and polarization properties of faint source populations. In this paper we compare various wideband imaging methods to evaluate the accuracy to which intensities and spectral indices of sources close to the confusion limit can be reconstructed. We simulated a wideband single-pointing (C-array, L-Band (1-2GHz)) and 46-pointing mosaic (D-array, C-Band (4-8GHz)) JVLA observation using a realistic brightness distribution ranging from $1\mu$Jy to $100m$Jy and time-, frequency-, polarization- and direction-dependent instrumental effects. The main results from these comparisons are (a) errors in the reconstructed intensities and spectral indices are larger for weaker sources even in the absence of simulated noise, (b) errors are systematically lower for joint reconstruction methods (such as MT-MFS) along with A-Projection for accurate primary beam correction, and (c) use of MT-MFS for image ...
Image Reconstruction Algorithm for Electrical Charge Tomography System
M. F. Rahmat
2010-01-01
Full Text Available Problem statement: Many problems in scientific computing can be formulated as inverse problems. A vast majority of these problems are ill-posed. In Electrical Charge Tomography (EChT), the sensitivity matrix generated from forward modeling is normally very ill-conditioned. This condition poses difficulties for the inverse problem solution, especially for the accuracy and stability of the image being reconstructed. The objective of this study is to reconstruct the image cross-section of the material in a pipeline gravity dropped mode conveyor, as well as to address the ill-conditioning of the sensitivity matrix. Approach: A Least Square with Regularization (LSR) method was introduced to reconstruct the image, and electrodynamic sensors installed around the pipe were used to capture the data. Results: The images were validated using a digital imaging technique and the Singular Value Decomposition (SVD) method. The results showed that the images reconstructed by this method show good promise in terms of accuracy and stability. Conclusion: This implies that the LSR method provides good and promising results in terms of accuracy and stability of the image being reconstructed. As a result, an efficient method for electrical charge tomography image reconstruction has been introduced.
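One standard way to realise the LSR idea is Tikhonov regularisation computed through the SVD, which directly exposes the ill-conditioning of the sensitivity matrix. The matrix below is synthetic, with log-spaced singular values, and the regularisation weight is an illustrative choice, not a tuned value:

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic ill-conditioned sensitivity system S x = c (condition ~ 1e8).
m, n = 32, 64
U, _ = np.linalg.qr(rng.standard_normal((m, m)))
V, _ = np.linalg.qr(rng.standard_normal((n, m)))
s = 10.0 ** -np.linspace(0, 8, m)             # rapidly decaying singular values
S = (U * s) @ V.T

x_true = V @ rng.standard_normal(m)
c = S @ x_true + 1e-6 * rng.standard_normal(m)

def lsr(S, c, alpha):
    """Tikhonov-regularised least squares via the SVD."""
    Uf, sf, Vtf = np.linalg.svd(S, full_matrices=False)
    f = sf / (sf**2 + alpha)                  # Tikhonov filter factors
    return Vtf.T @ (f * (Uf.T @ c))

x_naive = lsr(S, c, 0.0)                      # unregularised: noise blows up
x_reg = lsr(S, c, 1e-8)

err_naive = np.linalg.norm(x_naive - x_true) / np.linalg.norm(x_true)
err_reg = np.linalg.norm(x_reg - x_true) / np.linalg.norm(x_true)
```

The filter factors s/(s² + α) leave well-determined components intact and damp those below √α, which is what stabilises the reconstruction against the ill-conditioning.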
Application of particle filtering algorithm in image reconstruction of EMT
Wang, Jingwen; Wang, Xu
2015-07-01
To improve the image quality of electromagnetic tomography (EMT), a new EMT image reconstruction method based on a particle filtering algorithm is presented. Firstly, the principle of EMT image reconstruction is analyzed. The search for the optimal image reconstruction solution is then described as a system state estimation process, and the state space model is established. Secondly, to obtain the minimum-variance estimate of the reconstructed image, the optimal weights of random samples drawn from the state space are calculated from the measured information. Finally, simulation experiments with five different flow regimes are performed. The experimental results show that the average image error of the reconstructions obtained by the proposed method is 42.61%, and the average correlation coefficient with the original image is 0.8706, both much better than the corresponding indicators obtained by the LBP, Landweber and Kalman filter algorithms. Thus, this EMT image reconstruction method offers high efficiency and accuracy, and provides a new approach for EMT research.
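The weighted-sample state estimation described above can be conveyed with a minimal bootstrap particle filter; the scalar random-walk model below is a stand-in assumption for illustration, not the actual EMT forward model:

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter(ys, n_particles=500, q=0.1, r=0.5):
    """Bootstrap particle filter for a random-walk state
    x_k = x_{k-1} + N(0, q^2), observed as y_k = x_k + N(0, r^2).
    Returns the posterior-mean (minimum-variance) estimate at each
    step, analogous to the weighted-sample estimate used for EMT."""
    particles = rng.normal(0.0, 1.0, n_particles)
    estimates = []
    for y in ys:
        particles = particles + rng.normal(0.0, q, n_particles)  # predict
        w = np.exp(-0.5 * ((y - particles) / r) ** 2)            # likelihood weights
        w /= w.sum()
        estimates.append(np.sum(w * particles))                  # posterior mean
        idx = rng.choice(n_particles, n_particles, p=w)          # resample
        particles = particles[idx]
    return np.array(estimates)

ys = np.full(50, 2.0)     # constant observations of a hidden state near 2
est = particle_filter(ys)
```

With repeated observations near 2.0, the particle cloud concentrates around that value, illustrating how the weighted samples track the state.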
Visions of Reconstruction: Layers of Moving Images
Paalman, Floris Jan Willem
2015-01-01
After WWII, films accompanied the reconstruction of Europe's destroyed cities. Many contained historical footage. How was this material used to articulate visions of reconstruction, what happened to the material later on, and how do the films relate to municipal film archives? This question
Fully Automatic 3D Reconstruction of Histological Images
Bagci, Ulas
2009-01-01
In this paper, we propose a computational framework for 3D volume reconstruction from 2D histological slices using registration algorithms in feature space. To improve the quality of reconstructed 3D volume, first, intensity variations in images are corrected by an intensity standardization process which maps image intensity scale to a standard scale where similar intensities correspond to similar tissues. Second, a subvolume approach is proposed for 3D reconstruction by dividing standardized slices into groups. Third, in order to improve the quality of the reconstruction process, an automatic best reference slice selection algorithm is developed based on an iterative assessment of image entropy and mean square error of the registration process. Finally, we demonstrate that the choice of the reference slice has a significant impact on registration quality and subsequent 3D reconstruction.
Rahmim, Arman; Tang, Jing; Zaidi, Habib
2009-08-01
In this article, the authors review novel techniques in the emerging field of spatiotemporal four-dimensional (4D) positron emission tomography (PET) image reconstruction. The conventional approach to dynamic PET imaging, involving independent reconstruction of individual PET frames, can suffer from limited temporal resolution, high noise (especially when higher frame sampling is introduced to better capture fast dynamics), as well as complex reconstructed image noise distributions that can be very difficult and time consuming to model in kinetic parameter estimation tasks. Various approaches that seek to address some or all of these limitations are described, including techniques that utilize (a) iterative temporal smoothing, (b) advanced temporal basis functions, (c) principal components transformation of the dynamic data, (d) wavelet-based techniques, as well as (e) direct kinetic parameter estimation methods. Future opportunities and challenges with regards to the adoption of 4D and higher dimensional image reconstruction techniques are also outlined.
MREJ: MRE elasticity reconstruction on ImageJ.
Xiang, Kui; Zhu, Xia Li; Wang, Chang Xin; Li, Bing Nan
2013-08-01
Magnetic resonance elastography (MRE) is a promising method for health evaluation and disease diagnosis. It makes use of elastic waves as a virtual probe to quantify soft tissue elasticity. The wave actuator, imaging modality and elasticity interpreter are all essential components for an MRE system. Efforts have been made to develop more effective actuating mechanisms, imaging protocols and reconstructing algorithms. However, translating MRE wave images into soft tissue elasticity is a nontrivial issue for health professionals. This study contributes an open-source platform - MREJ - for MRE image processing and elasticity reconstruction. It is established on the widespread image-processing program ImageJ. Two algorithms for elasticity reconstruction were implemented with spatiotemporal directional filtering. The usability of the method is shown through virtual palpation on different phantoms and patients. Based on the results, we conclude that MREJ offers the MRE community a convenient and well-functioning program for image processing and elasticity interpretation.
Sparsity-constrained PET image reconstruction with learned dictionaries
Tang, Jing; Yang, Bao; Wang, Yanhua; Ying, Leslie
2016-09-01
PET imaging plays an important role in scientific and clinical measurement of biochemical and physiological processes. Model-based PET image reconstruction such as the iterative expectation maximization algorithm seeking the maximum likelihood solution leads to increased noise. The maximum a posteriori (MAP) estimate removes divergence at higher iterations. However, a conventional smoothing prior or a total-variation (TV) prior in a MAP reconstruction algorithm causes over smoothing or blocky artifacts in the reconstructed images. We propose to use dictionary learning (DL) based sparse signal representation in the formation of the prior for MAP PET image reconstruction. The dictionary to sparsify the PET images in the reconstruction process is learned from various training images including the corresponding MR structural image and a self-created hollow sphere. Using simulated and patient brain PET data with corresponding MR images, we study the performance of the DL-MAP algorithm and compare it quantitatively with a conventional MAP algorithm, a TV-MAP algorithm, and a patch-based algorithm. The DL-MAP algorithm achieves improved bias and contrast (or regional mean values) at comparable noise to what the other MAP algorithms acquire. The dictionary learned from the hollow sphere leads to similar results as the dictionary learned from the corresponding MR image. Achieving robust performance in various noise-level simulation and patient studies, the DL-MAP algorithm with a general dictionary demonstrates its potential in quantitative PET imaging.
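The sparse-coding step that a learned-dictionary prior relies on can be illustrated with a tiny orthogonal matching pursuit routine (a generic sketch; the DL-MAP algorithm alternates such coding with a MAP image update, and the identity dictionary here is purely for demonstration):

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal matching pursuit: greedy k-sparse code of y in
    dictionary D (columns are unit-norm atoms). Toy stand-in for the
    sparse-representation step of a dictionary-learning prior."""
    residual, support = y.copy(), []
    for _ in range(k):
        # pick the atom most correlated with the current residual
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x

D = np.eye(4)                       # trivial orthonormal dictionary
y = np.array([0.0, 3.0, 0.0, 1.0])  # a 2-sparse "patch"
x = omp(D, y, k=2)
```

With an orthonormal dictionary and a truly 2-sparse signal, the greedy code recovers the patch exactly; real learned dictionaries are overcomplete, and recovery is only approximate.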
An adaptive reconstruction algorithm for spectral CT regularized by a reference image
Wang, Miaoshi; Zhang, Yanbo; Liu, Rui; Guo, Shuxu; Yu, Hengyong
2016-12-01
The photon counting detector based spectral CT system is attracting increasing attention in the CT field. However, spectral CT is still immature in terms of both hardware and software. To reconstruct high quality spectral images from low-dose projections, an adaptive image reconstruction algorithm is proposed that assumes a known reference image (RI). The idea is motivated by the fact that the reconstructed images from different spectral channels are highly correlated. If a high quality image of the same object is known, it can be used to improve the low-dose reconstruction of each individual channel. This is implemented by maximizing the patch-wise correlation between the object image and the RI. Extensive numerical simulations and a preclinical mouse study demonstrate the feasibility and merits of the proposed algorithm. It also performs well for truncated local projections, and the surrounding area of the region-of-interest (ROI) can be more accurately reconstructed. Furthermore, a method is introduced to adaptively choose the step length, making the algorithm more feasible and easier to apply.
Surface Reconstruction and Image Enhancement via $L^1$-Minimization
Dobrev, Veselin
2010-01-01
A surface reconstruction technique based on minimization of the total variation of the gradient is introduced. Convergence of the method is established, and an interior-point algorithm solving the associated linear programming problem is introduced. The reconstruction algorithm is illustrated on various test cases including natural and urban terrain data, and enhancement of low-resolution or aliased images. Copyright © by SIAM.
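As a rough illustration of the total-variation idea, a smoothed-TV gradient-descent denoiser is sketched below; note the paper itself solves an L¹ linear program with an interior-point method, so this is only an analogous formulation:

```python
import numpy as np

def tv_denoise(g, lam=0.1, tau=0.1, iters=200, eps=1e-4):
    """Smoothed total-variation denoising by gradient descent on
    0.5*||u - g||^2 + lam * sum sqrt(|grad u|^2 + eps),
    with periodic boundary handling via np.roll."""
    u = g.copy()
    for _ in range(iters):
        gx = np.roll(u, -1, 0) - u          # forward differences
        gy = np.roll(u, -1, 1) - u
        mag = np.sqrt(gx**2 + gy**2 + eps)
        px, py = gx / mag, gy / mag
        # divergence of the normalized gradient (negative TV gradient)
        div = (px - np.roll(px, 1, 0)) + (py - np.roll(py, 1, 1))
        u -= tau * ((u - g) - lam * div)
    return u

f = np.zeros((16, 16)); f[4:12, 4:12] = 1.0   # blocky ground truth
rng = np.random.default_rng(1)
noisy = f + 0.2 * rng.standard_normal(f.shape)
clean = tv_denoise(noisy)
```

TV regularization favors piecewise-constant results, so the blocky phantom is recovered with lower error than the noisy input.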
Rau, U.; Bhatnagar, S.; Owen, F. N.
2016-11-01
Many deep wideband wide-field radio interferometric surveys are being designed to accurately measure intensities, spectral indices, and polarization properties of faint source populations. In this paper, we compare various wideband imaging methods to evaluate the accuracy to which intensities and spectral indices of sources close to the confusion limit can be reconstructed. We simulated a wideband single-pointing (C-array, L-Band (1-2 GHz)) and 46-pointing mosaic (D-array, C-Band (4-8 GHz)) JVLA observation using a realistic brightness distribution ranging from 1 μJy to 100 mJy and time-, frequency-, polarization-, and direction-dependent instrumental effects. The main results from these comparisons are (a) errors in the reconstructed intensities and spectral indices are larger for weaker sources even in the absence of simulated noise, (b) errors are systematically lower for joint reconstruction methods (such as Multi-Term Multi-Frequency-Synthesis (MT-MFS)) along with A-Projection for accurate primary beam correction, and (c) use of MT-MFS for image reconstruction eliminates Clean-bias (which is present otherwise). Auxiliary tests include solutions for deficiencies of data partitioning methods (e.g., the use of masks to remove clean bias and hybrid methods to remove sidelobes from sources left un-deconvolved), the effect of sources not at pixel centers, and the consequences of various other numerical approximations within software implementations. This paper also demonstrates the level of detail at which such simulations must be done in order to reflect reality, enable one to systematically identify specific reasons for every trend that is observed, and to estimate scientifically defensible imaging performance metrics and the associated computational complexity of the algorithms/analysis procedures. The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc.
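The spectral index whose reconstruction accuracy is being evaluated follows from a power-law model S(ν) = S₀ν^α; a two-frequency estimate can be computed as:

```python
import numpy as np

def spectral_index(s1, s2, nu1, nu2):
    """Spectral index alpha of a power-law source S(nu) = S0 * nu**alpha,
    estimated from flux densities at two frequencies."""
    return np.log(s1 / s2) / np.log(nu1 / nu2)

# A source with alpha = -0.7 observed at the L-Band edges, 1 and 2 GHz
s_1ghz = 10e-3                               # 10 mJy
s_2ghz = s_1ghz * (2.0 / 1.0) ** -0.7
alpha = spectral_index(s_1ghz, s_2ghz, 1.0, 2.0)
```

In practice, wideband imagers such as MT-MFS fit this power law jointly across many channels; the two-point formula just shows why intensity errors at band edges propagate into spectral-index errors.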
Onyango, F. A.; Nex, F.; Peter, M. S.; Jende, P.
2017-05-01
Unmanned Aerial Vehicles (UAVs) have gained popularity in acquiring geotagged, low cost and high resolution images. However, the images acquired by UAV-borne cameras often have poor georeferencing information, because of the low quality on-board Global Navigation Satellite System (GNSS) receiver. In addition, lightweight UAVs have a limited payload capacity to host a high quality on-board Inertial Measurement Unit (IMU). Thus, orientation parameters of images acquired by UAV-borne cameras may not be very accurate. Poorly georeferenced UAV images can be correctly oriented using accurately oriented airborne images capturing a similar scene by finding correspondences between the images. This is not a trivial task considering the image pairs have huge variations in scale, perspective and illumination conditions. This paper presents a procedure to successfully register UAV and aerial oblique imagery. The proposed procedure implements the use of the AKAZE interest operator for feature extraction in both images. Brute force is implemented to find putative correspondences and later on Lowe's ratio test (Lowe, 2004) is used to discard a significant number of wrong matches. In order to filter out the remaining mismatches, the putative correspondences are used in the computation of multiple homographies, which aid in the reduction of outliers significantly. In order to increase the number and improve the quality of correspondences, the impact of pre-processing the images using the Wallis filter (Wallis, 1974) is investigated. This paper presents the test results of different scenarios and the respective accuracies compared to a manual registration of the finally computed fundamental and essential matrices that encode the orientation parameters of the UAV images with respect to the aerial images.
Dynamic Data Updating Algorithm for Image Superresolution Reconstruction
TAN Bing; XU Qing; ZHANG Yan; XING Shuai
2006-01-01
A dynamic data updating algorithm for image superresolution is proposed. Based on Delaunay triangulation and its local updating property, this algorithm can directly update the changed region when only a part of the source images has changed. Owing to its high efficiency and adaptability, this algorithm can serve as a fast algorithm for image superresolution reconstruction.
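The local-updating property of Delaunay triangulation that the algorithm exploits can be demonstrated with SciPy's incremental mode (illustrative only; this is not the authors' implementation):

```python
import numpy as np
from scipy.spatial import Delaunay

# In incremental mode, adding points re-triangulates only the affected
# region instead of rebuilding the whole mesh from scratch.
rng = np.random.default_rng(0)
pts = rng.random((20, 2))                 # initial low-resolution samples
tri = Delaunay(pts, incremental=True)
n_before = len(tri.simplices)

tri.add_points(rng.random((5, 2)))        # new samples arrive later
n_after = len(tri.simplices)
```

Each inserted interior point in 2D adds triangles only in its neighborhood, which is what makes region-local updates cheap compared with a full re-triangulation.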
Hellebust, Taran Paulsen [Department of Medical Physics, Rikshospital-Radiumhospital Medical Center, Oslo (Norway); Tanderup, Kari [Department of Oncology, Aarhus University Hospital, Aarhus (Denmark); Bergstrand, Eva Stabell [Department of Medical Physics, Rikshospital-Radiumhospital Medical Center, Oslo (Norway); Knutsen, Bjoern Helge [Department of Medical Physics, Rikshospital-Radiumhospital Medical Center, Oslo (Norway); Roeislien, Jo [Section of Biostatistics, Rikshospital-Radiumhospital Medical Center, Oslo (Norway); Olsen, Dag Rune [Institute for Cancer Research, Rikshospital-Radiumhospital Medical Center, Oslo (Norway)
2007-08-21
The purpose of this study is to investigate whether the method of applicator reconstruction and/or the applicator orientation influence the dose calculation to points around the applicator for brachytherapy of cervical cancer with CT-based treatment planning. A phantom, containing a fixed ring applicator set and six lead pellets representing dose points, was used. The phantom was CT scanned with the ring applicator at four different angles related to the image plane. In each scan the applicator was reconstructed by three methods: (1) direct reconstruction in each image (DR), (2) reconstruction in multiplanar reconstructed images (MPR), and (3) library plans, using pre-defined applicator geometry (LIB). The doses to the lead pellets were calculated. The relative standard deviation (SD) for all reconstruction methods was less than 3.7% in the dose points. The relative SD for the LIB method was significantly lower (p < 0.05) than for the DR and MPR methods for all but two points. All applicator orientations had similar dose calculation reproducibility. Using library plans for applicator reconstruction gives the most reproducible dose calculation. However, with restrictive guidelines for applicator reconstruction the uncertainties for all methods are low compared to other factors influencing the accuracy of brachytherapy.
Park, Sang Joon; Kim, Tae Jung; Kim, Kwang Gi; Lee, Sang Ho; Goo, Jin Mo; Kim, Jong Hyo
2008-03-01
Airway wall thickness (AWT) is an important bio-marker for evaluation of pulmonary diseases such as chronic bronchitis and bronchiectasis. While an image-based analysis of the airway tree can provide precise and valuable airway size information, quantitative measurement of AWT in Multidetector-Row Computed Tomography (MDCT) images involves various sources of error and uncertainty. We have therefore developed an accurate AWT measurement technique for small airways using a three-dimensional (3-D) approach. To evaluate the performance of this technique, an acrylic tube phantom was made to mimic small airways, with three different wall diameters (4.20, 1.79, 1.24 mm) and wall thicknesses (1.84, 1.22, 0.67 mm). The phantom was imaged with MDCT using a standard reconstruction kernel (Sensation 16, Siemens, Erlangen). The pixel size was 0.488 mm × 0.488 mm × 0.75 mm in the x, y, and z directions, respectively. The images were magnified 5 times using cubic B-spline interpolation, and line profiles were obtained for each tube. To recover faithful line profiles from the blurred images, the line profiles were deconvolved with the point spread kernel of the MDCT, which was estimated using the ideal tube profile and the image line profile. The inner diameter, outer diameter, and wall thickness of each tube were obtained with the full-width-half-maximum (FWHM) method for the line profiles before and after deconvolution processing. Results show that significant improvement was achieved over the conventional FWHM method in the measurement of AWT.
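The FWHM measurement applied to the line profiles can be sketched as follows (a simplified version with linear interpolation at the half-maximum crossings; the deconvolution step is omitted):

```python
import numpy as np

def fwhm(x, y):
    """Full width at half maximum of a sampled line profile, with
    linear interpolation at the two half-maximum crossings."""
    half = y.max() / 2.0
    above = np.where(y >= half)[0]
    i0, i1 = above[0], above[-1]
    # interpolate the left and right crossing positions
    xl = np.interp(half, [y[i0 - 1], y[i0]], [x[i0 - 1], x[i0]])
    xr = np.interp(half, [y[i1 + 1], y[i1]], [x[i1 + 1], x[i1]])
    return xr - xl

x = np.linspace(-5, 5, 1001)
sigma = 1.0
profile = np.exp(-x**2 / (2 * sigma**2))   # Gaussian-blurred thin wall
w = fwhm(x, profile)                       # analytic FWHM = 2.3548 * sigma
```

For a Gaussian of standard deviation σ the analytic FWHM is 2√(2 ln 2) σ ≈ 2.3548 σ, which the sampled estimate matches closely; blur from the scanner PSF widens this value, which is why the deconvolution step matters for thin walls.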
Accurate Image Retrieval Algorithm Based on Color and Texture Feature
Chunlai Yan
2013-06-01
Full Text Available Content-Based Image Retrieval (CBIR) is one of the most active hot spots in the current research field of multimedia retrieval. Based on the description and extraction of visual content (features) of the image, CBIR aims to find images that contain the specified content (features) in an image database. In this paper, several key technologies of CBIR, e.g. the extraction of the color and texture features of the image, as well as similarity measures, are investigated. On the basis of the theoretical research, an image retrieval system based on color and texture features is designed. In this system, a Weighted Color Feature based on HSV space is adopted as the color feature vector, four features of the Co-occurrence Matrix, namely Energy, Entropy, Inertia Quadrature and Correlation, are used to construct texture vectors, and the Euclidean distance is employed as the similarity measure. Experimental results show that this CBIR system is efficient in image retrieval.
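The color-feature half of such a system can be sketched with a quantized HSV histogram compared by Euclidean distance (the paper's channel weighting and co-occurrence texture features are omitted; the bin counts here are arbitrary choices):

```python
import colorsys
import numpy as np

def hsv_hist(img_rgb, bins=(8, 4, 4)):
    """Quantized HSV color histogram of an RGB image with values in [0, 1],
    normalized by pixel count so images of different sizes are comparable."""
    h, w, _ = img_rgb.shape
    hsv = np.array([colorsys.rgb_to_hsv(*px) for px in img_rgb.reshape(-1, 3)])
    hist, _ = np.histogramdd(hsv, bins=bins, range=((0, 1), (0, 1), (0, 1)))
    return (hist / (h * w)).ravel()

def euclidean(f1, f2):
    return float(np.linalg.norm(f1 - f2))

red = np.zeros((8, 8, 3)); red[..., 0] = 1.0    # pure red image
blue = np.zeros((8, 8, 3)); blue[..., 2] = 1.0  # pure blue image
d_same = euclidean(hsv_hist(red), hsv_hist(red))
d_diff = euclidean(hsv_hist(red), hsv_hist(blue))
```

Retrieval then amounts to ranking database images by this distance to the query's feature vector, smallest first.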
Park, Min K.; Lee, Eun S.; Park, Jin Y.; Kang, Moon Gi; Kim, Jaihie
2002-02-01
The demand for high-resolution images is gradually increasing, whereas many imaging systems have been designed to allow a certain level of aliasing during image acquisition. In this sense, digital image processing approaches have recently been investigated to reconstruct a high-resolution image from aliased low-resolution images. However, since the subpixel motion information is assumed to be accurate in most conventional approaches, a satisfactory high-resolution image cannot be obtained when the subpixel motion information is inaccurate. Hence, we propose a new algorithm to reduce the distortion in the reconstructed high-resolution image due to inaccurate subpixel motion information. For this purpose, we analyze the effect of inaccurate subpixel motion information on high-resolution image reconstruction, and model it as zero-mean additive Gaussian errors added to each low-resolution image. To reduce the distortion, we apply a modified multichannel image deconvolution approach to the problem. The validity of the proposed algorithm is demonstrated both theoretically and experimentally.
Acoustic imaging for temperature distribution reconstruction
Jia, Ruixi; Xiong, Qingyu; Liang, Shan
2016-12-01
For several industrial processes, such as burning and drying, temperature distribution is important because it can reflect the internal running state of industrial equipment, assist in developing control strategy, and help ensure safe operation of the equipment. The principle of this technique is mainly based on the relationship between acoustic velocity and temperature. In this paper, an algorithm for temperature distribution reconstruction is considered. Simulation experiments comparing the least-squares algorithm with the proposed one show that the latter better reflects the temperature distribution information and achieves relatively higher reconstruction accuracy.
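The velocity-temperature relationship underlying acoustic pyrometry is c = √(γRT) for an ideal gas, so a single-path time-of-flight measurement yields a path-averaged temperature (the tomographic algorithm combines many such paths; the constants below are for air):

```python
GAMMA, R_AIR = 1.4, 287.05  # heat-capacity ratio and specific gas constant of air, J/(kg*K)

def temperature_from_tof(path_len_m, tof_s):
    """Acoustic pyrometry principle: c = sqrt(gamma * R * T) for an
    ideal gas, so a measured time of flight over a known path length
    gives the path-averaged temperature T = c^2 / (gamma * R)."""
    c = path_len_m / tof_s
    return c**2 / (GAMMA * R_AIR)

# At 20 C (293.15 K) the speed of sound in air is about 343.2 m/s
tof = 1.0 / 343.2                    # time of flight over a 1 m path
T = temperature_from_tof(1.0, tof)   # kelvin
```

Reconstruction algorithms such as least squares invert many of these path-averaged values into a 2-D temperature field.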
Image reconstruction in optical interferometry: Benchmarking the regularization
Renard, Stéphanie; Malbet, Fabien
2011-01-01
With the advent of infrared long-baseline interferometers with more than two telescopes, both the size and the completeness of interferometric data sets have significantly increased, allowing images based on models with no a priori assumptions to be reconstructed. Our main objective is to analyze the multiple parameters of the image reconstruction process with particular attention to the regularization term and the study of their behavior in different situations. The secondary goal is to derive practical rules for the users. Using the Multi-aperture image Reconstruction Algorithm (MiRA), we performed multiple systematic tests, analyzing 11 regularization terms commonly used. The tests are made on different astrophysical objects, different (u,v) plane coverages and several signal-to-noise ratios to determine the minimal configuration needed to reconstruct an image. We establish a methodology and we introduce the mean-square errors (MSE) to discuss the results. From the ~24000 simulations performed for the benc...
Compressed sensing sparse reconstruction for coherent field imaging
Bei, Cao; Xiu-Juan, Luo; Yu, Zhang; Hui, Liu; Ming-Lai, Chen
2016-04-01
Return signal processing and reconstruction plays a pivotal role in coherent field imaging, having a significant influence on the quality of the reconstructed image. To reduce the required samples and accelerate the sampling process, we propose a genuine sparse reconstruction scheme based on compressed sensing theory. By analyzing the sparsity of the received signal in the Fourier spectrum domain, we accomplish an effective random projection and then reconstruct the return signal from as few as 10% of the traditional samples, finally acquiring the target image precisely. The results of the numerical simulations and practical experiments verify the correctness of the proposed method, providing an efficient processing approach for imaging fast-moving targets in the future. Project supported by the National Natural Science Foundation of China (Grant No. 61505248) and the Fund from the Chinese Academy of Sciences, the Light of "Western" Talent Cultivation Plan "Dr. Western Fund Project" (Grant No. Y429621213).
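A basic CS recovery routine in the spirit described above is iterative soft thresholding (ISTA) for the ℓ₁-regularized least-squares problem; the random-projection setup below is a generic sketch under assumed dimensions, not the paper's Fourier-domain scheme:

```python
import numpy as np

def ista(A, y, lam=0.05, tau=None, iters=500):
    """Iterative soft-thresholding for min_x 0.5*||Ax - y||^2 + lam*||x||_1,
    a basic compressed-sensing recovery routine."""
    if tau is None:
        tau = 1.0 / np.linalg.norm(A, 2) ** 2     # step size <= 1/L
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        z = x - tau * A.T @ (A @ x - y)           # gradient step on data term
        x = np.sign(z) * np.maximum(np.abs(z) - tau * lam, 0.0)  # shrinkage
    return x

rng = np.random.default_rng(0)
n, m = 64, 24                                     # well under n measurements
A = rng.standard_normal((m, n)) / np.sqrt(m)      # random projection
x_true = np.zeros(n); x_true[[5, 20, 40]] = [1.0, -0.8, 0.6]  # 3-sparse signal
y = A @ x_true
x_hat = ista(A, y)
```

Because the signal is sparse, far fewer random measurements than unknowns suffice for an accurate recovery, which is the mechanism that lets the sampling process be shortened.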
Compressed Sensing Inspired Image Reconstruction from Overlapped Projections
Lin Yang
2010-01-01
Full Text Available The key idea discussed in this paper is to reconstruct an image from overlapped projections so that the data acquisition process can be shortened while the image quality remains essentially uncompromised. To perform image reconstruction from overlapped projections, conventional reconstruction approaches (e.g., filtered backprojection (FBP) algorithms) cannot be directly used because of two problems. First, overlapped projections represent an imaging system in terms of summed exponentials, which cannot be transformed into a linear form. Second, the overlapped measurement carries less information than the traditional line integrals. To meet these challenges, we propose a compressed-sensing (CS) based iterative algorithm for reconstruction from overlapped data. This algorithm starts with a good initial guess, relies on adaptive linearization, and minimizes the total variation (TV). We then demonstrate the feasibility of this algorithm in numerical tests.
Robust sparse image reconstruction of radio interferometric observations with PURIFY
Pratley, Luke; d'Avezac, Mayeul; Carrillo, Rafael E; Onose, Alexandru; Wiaux, Yves
2016-01-01
Next-generation radio interferometers, such as the Square Kilometre Array (SKA), will revolutionise our understanding of the universe through their unprecedented sensitivity and resolution. However, to realise these goals significant challenges in image and data processing need to be overcome. The standard methods in radio interferometry for reconstructing images, such as CLEAN and its variants, have served the community well over the last few decades and have survived largely because they are pragmatic. However, they produce reconstructed interferometric images that are limited in quality and they are not scalable for big data. In this work we apply and evaluate alternative interferometric reconstruction methods that make use of state-of-the-art sparse image reconstruction algorithms motivated by compressive sensing, which have been implemented in the PURIFY software package. In particular, we implement and apply the proximal alternating direction method of multipliers (P-ADMM) algorithm presented in a recen...
Electromagnetic tomography (EMT): image reconstruction based on the inverse problem
Anonymous
2003-01-01
Starting from Maxwell's equations for inhomogeneous media, nonlinear integral equations of the inverse problem of electromagnetic tomography (EMT) are derived, whose kernel is the dyadic Green's function for the EMT sensor with a homogeneous medium in the object space. Then, owing to the ill-posedness of the inverse problem, a Tikhonov-type regularization model is established based on a linearized approximation of the nonlinear inverse problem. Finally, an iterative image reconstruction algorithm based on the inverse problem is given, together with reconstructed images of some object flows for a simplified sensor. Initial results show that the algorithm based on the inverse problem is superior to those based on linear back-projection in image reconstruction quality.
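A classic iterative scheme for such linearized ill-posed problems is the Landweber iteration, where early stopping plays the regularizing role (shown here as a generic stand-in on a toy system; the paper's model is Tikhonov-type with the dyadic Green's function kernel):

```python
import numpy as np

def landweber(A, b, iters=200, tau=None):
    """Landweber iteration x_{k+1} = x_k + tau * A^T (b - A x_k),
    a classic scheme for linearized ill-posed problems; stopping
    the iteration early acts as regularization."""
    if tau is None:
        tau = 1.0 / np.linalg.norm(A, 2) ** 2   # ensures convergence
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = x + tau * A.T @ (b - A @ x)
    return x

# Toy overdetermined linear model standing in for the linearized kernel
A = np.array([[2.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
b = A @ np.array([1.0, -1.0])   # noiseless synthetic data
x = landweber(A, b)
```

On noiseless, well-conditioned data the iteration converges to the least-squares solution; on noisy ill-posed data one would stop well before convergence.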
Online reconstruction of 3D magnetic particle imaging data
Knopp, T.; Hofmann, M.
2016-06-01
Magnetic particle imaging is a quantitative functional imaging technique that allows imaging of the spatial distribution of super-paramagnetic iron oxide particles at high temporal resolution. The raw data acquisition can be performed at frame rates of more than 40 volumes s⁻¹. However, to date image reconstruction has been performed in an offline step, and thus no direct feedback is available during the experiment. For potential interventional applications, such direct feedback would be mandatory. In this work, an online reconstruction framework is implemented that allows direct visualization of the particle distribution on the screen of the acquisition computer with a latency of about 2 s. The reconstruction process is adaptive and performs block-averaging in order to optimize the signal quality for a given amount of reconstruction time.
Wahbeh, W.; Nebiker, S.; Fangi, G.
2016-06-01
This paper exploits the potential of dense multi-image 3d reconstruction of destroyed cultural heritage monuments by either using public domain touristic imagery only or by combining the public domain imagery with professional panoramic imagery. The focus of our work is placed on the reconstruction of the temple of Bel, one of the Syrian heritage monuments, which was destroyed in September 2015 by the so-called "Islamic State". The great temple of Bel is considered one of the most important religious buildings of the 1st century AD in the East, with a unique design. The investigations and the reconstruction were carried out using two types of imagery. The first are freely available generic touristic photos collected from the web. The second are panoramic images captured in 2010 for documenting those monuments. In the paper we present a 3d reconstruction workflow for both types of imagery using state-of-the-art dense image matching software, addressing the non-trivial challenges of combining uncalibrated public domain imagery with panoramic images with very wide baselines. We subsequently investigate the aspects of accuracy and completeness obtainable from the public domain touristic images alone and from their combination with spherical panoramas. We furthermore discuss the challenges of co-registering the weakly connected 3d point cloud fragments resulting from the limited coverage of the touristic photos. We then describe an approach using spherical photogrammetry as a virtual topographic survey, allowing the co-registration of a detailed and accurate single 3d model of the temple interior and exterior.
Loomis, E N; Grim, G P; Wilde, C; Wilson, D C; Morgan, G; Wilke, M; Tregillis, I; Merrill, F; Clark, D; Finch, J; Fittinghoff, D; Bower, D
2010-10-01
Development of analysis techniques for neutron imaging at the National Ignition Facility is an important and difficult task for the detailed understanding of high-neutron-yield inertial confinement fusion implosions. Once developed, these methods must provide accurate images of the hot and cold fuels so that information about the implosion, such as symmetry and areal density, can be extracted. One method under development involves the numerical inversion of the pinhole image using knowledge of neutron transport through the pinhole aperture from Monte Carlo simulations. In this article we present results of source reconstructions based on simulated images that test the method's effectiveness with regard to pinhole misalignment.
Advanced photoacoustic image reconstruction using the k-Wave toolbox
Treeby, B. E.; Jaros, J.; Cox, B. T.
2016-03-01
Reconstructing images from measured time domain signals is an essential step in tomography-mode photoacoustic imaging. However, in practice, there are many complicating factors that make it difficult to obtain high-resolution images. These include incomplete or undersampled data, filtering effects, acoustic and optical attenuation, and uncertainties in the material parameters. Here, the processing and image reconstruction steps routinely used by the Photoacoustic Imaging Group at University College London are discussed. These include correction for acoustic and optical attenuation, spatial resampling, material parameter selection, image reconstruction, and log compression. The effect of each of these steps is demonstrated using a representative in vivo dataset. All of the algorithms discussed form part of the open-source k-Wave toolbox (available from http://www.k-wave.org).
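The log-compression step mentioned at the end of the pipeline can be written generically as a dB mapping with a clipped dynamic range (a common display formulation, not the k-Wave function itself):

```python
import numpy as np

def log_compress(img, dr_db=40.0):
    """Map image magnitudes to decibels relative to the image maximum
    and clip to a chosen display dynamic range, so that weak features
    remain visible alongside strong ones."""
    db = 20.0 * np.log10(np.abs(img) / np.abs(img).max() + 1e-12)
    return np.clip(db, -dr_db, 0.0)

img = np.array([1.0, 0.1, 0.001])   # three orders of magnitude apart
out = log_compress(img)             # values below -40 dB are clipped
```

The small additive constant guards against log10(0); the clip floor sets how much of the weak background is shown.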
Application of mathematical modelling methods for acoustic images reconstruction
Bolotina, I.; Kazazaeva, A.; Kvasnikov, K.; Kazazaev, A.
2016-04-01
The article considers the reconstruction of images by the Synthetic Aperture Focusing Technique (SAFT). The work compares additive and multiplicative methods for processing signals received from an antenna array. We have demonstrated that the multiplicative method gives better resolution. The study includes the estimation of beam trajectories for antenna arrays using analytical and numerical methods. We have shown that the analytical estimation method allows decreasing the image reconstruction time in the case of a linear antenna array implementation.
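The additive/multiplicative distinction can be illustrated on delay-corrected signals at a single image point. The numbers below are invented, and the geometric mean stands in for one common form of multiplicative combination; it is a toy, not the paper's processing chain:

```python
import numpy as np

# Delay-corrected samples from three array elements at one image point: the
# middle sample is the coherent echo, the others are uncorrelated clutter.
aligned = np.array([
    [0.0, 1.0, 0.1],   # element 1
    [0.1, 0.9, 0.0],   # element 2
    [0.0, 1.1, 0.2],   # element 3
])

additive = aligned.sum(axis=0)                               # plain delay-and-sum
multiplicative = np.prod(np.abs(aligned), axis=0) ** (1 / 3)  # geometric mean

# The multiplicative rule zeroes any sample that vanishes on one element,
# suppressing uncorrelated contributions far more strongly than summation.
```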
Beyond maximum entropy: Fractal Pixon-based image reconstruction
Puetter, Richard C.; Pina, R. K.
1994-01-01
We have developed a new Bayesian image reconstruction method that has been shown to be superior to the best implementations of other competing methods, including Goodness-of-Fit methods such as Least-Squares fitting and Lucy-Richardson reconstruction, as well as Maximum Entropy (ME) methods such as those embodied in the MEMSYS algorithms. Our new method is based on the concept of the pixon, the fundamental, indivisible unit of picture information. Use of the pixon concept provides an improved image model, resulting in an image prior which is superior to that of standard ME. Our past work has shown how uniform information content pixons can be used to develop a 'Super-ME' method in which entropy is maximized exactly. Recently, however, we have developed a superior pixon basis for the image, the Fractal Pixon Basis (FPB). Unlike the Uniform Pixon Basis (UPB) of our 'Super-ME' method, the FPB basis is selected by employing fractal dimensional concepts to assess the inherent structure in the image. The Fractal Pixon Basis results in the best image reconstructions to date, superior to both UPB and the best ME reconstructions. In this paper, we review the theory of the UPB and FPB pixon and apply our methodology to the reconstruction of far-infrared imaging of the galaxy M51. The results of our reconstruction are compared to published reconstructions of the same data using the Lucy-Richardson algorithm, the Maximum Correlation Method developed at IPAC, and the MEMSYS ME algorithms. The results show that our reconstructed image has a spatial resolution a factor of two better than the best previous methods (and a factor of 20 finer than the width of the point response function), and detects sources two orders of magnitude fainter than other methods.
Landweber Iterative Methods for Angle-limited Image Reconstruction
Gang-rong Qu; Ming Jiang
2009-01-01
We introduce a general iterative scheme for angle-limited image reconstruction based on Landweber's method. We derive a representation formula for this scheme and consequently establish its convergence conditions. Our results suggest certain relaxation strategies for accelerated convergence for angle-limited image reconstruction in the L2-norm, compared with alternating projection methods. The convolution-backprojection algorithm is given for this iterative process.
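For reference, the classical Landweber iteration underlying such schemes is x_{k+1} = x_k + lam * A^T (b - A x_k), which converges for 0 < lam < 2 / ||A^T A||. A minimal numerical sketch (the matrix, data, and iteration count are illustrative, not from the paper):

```python
import numpy as np

# Landweber iteration for the linear system A x ~= b.
def landweber(A, b, lam, n_iter):
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = x + lam * A.T @ (b - A @ x)   # gradient step on ||Ax - b||^2 / 2
    return x

rng = np.random.default_rng(0)
A = rng.normal(size=(20, 5))
x_true = rng.normal(size=5)
b = A @ x_true
lam = 1.0 / np.linalg.norm(A.T @ A, 2)    # a safe step size
x_rec = landweber(A, b, lam, 5000)        # x_rec approaches x_true
```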
Three-dimensional surface reconstruction from multistatic SAR images.
Rigling, Brian D; Moses, Randolph L
2005-08-01
This paper discusses reconstruction of three-dimensional surfaces from multiple bistatic synthetic aperture radar (SAR) images. Techniques for surface reconstruction from multiple monostatic SAR images already exist, including interferometric processing and stereo SAR. We generalize these methods to obtain algorithms for bistatic interferometric SAR and bistatic stereo SAR. We also propose a framework for predicting the performance of our multistatic stereo SAR algorithm, and, from this framework, we suggest a metric for use in planning strategic deployment of multistatic assets.
Reconstruction of images from radiofrequency electron paramagnetic resonance spectra.
Smith, C M; Stevens, A D
1994-12-01
This paper discusses methods for obtaining image reconstructions from electron paramagnetic resonance (EPR) spectra which constitute object projections. An automatic baselining technique is described which treats each spectrum consistently, rotating the non-horizontal baselines caused by stray magnetic effects onto the horizontal axis. The convolved backprojection method is described for both two- and three-dimensional reconstruction, and the effect of cut-off frequency on the reconstruction is illustrated. A slower, indirect, iterative method, which performs a non-linear fit to the projection data, is shown to give a far smoother reconstructed image when the method of maximum entropy is used to determine the value of the final residual sum of squares. Although this requires more computing time than the convolved backprojection method, it is more flexible and overcomes the problem of numerical instability encountered in deconvolution. Images from phantom samples in vitro are discussed. The spectral data for these have been accumulated quickly and have a low signal-to-noise ratio. The results show that as few as 16 spectra can still be processed to give an image. Artifacts in the image due to a small number of projections using the convolved backprojection reconstruction method can be removed by applying a threshold, i.e. only plotting contours higher than a given value. These artifacts are not present in an image which has been reconstructed by the maximum entropy technique. At present these techniques are being applied directly to in vivo studies.
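The convolved (filtered) backprojection method can be sketched for a parallel-beam geometry. This toy version (Ram-Lak ramp filter, linear interpolation, a point object) is a generic illustration, not the EPR-specific implementation:

```python
import numpy as np

# Ram-Lak (ramp) filter in the frequency domain.
def ramp_kernel(n_det):
    return np.abs(np.fft.fftfreq(n_det))

# Convolved backprojection: filter each projection, then smear it back
# across the image along its acquisition angle.
def fbp(sino, angles, n):
    recon = np.zeros((n, n))
    c = (n - 1) / 2.0
    xs, ys = np.meshgrid(np.arange(n) - c, np.arange(n) - c)
    H = ramp_kernel(sino.shape[1])
    for proj, th in zip(sino, angles):
        filt = np.fft.ifft(np.fft.fft(proj) * H).real      # convolve with ramp
        t = xs * np.cos(th) + ys * np.sin(th) + c          # detector coordinate
        recon += np.interp(t.ravel(), np.arange(n), filt).reshape(n, n)
    return recon * np.pi / len(angles)

# A point object at the center projects to a centered spike at every angle.
n = 33
angles = np.linspace(0, np.pi, 16, endpoint=False)
sino = np.zeros((len(angles), n))
sino[:, n // 2] = 1.0
rec = fbp(sino, angles, n)   # brightest pixel lands at the image center
```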
Adaptive Deep Supervised Autoencoder Based Image Reconstruction for Face Recognition
Rongbing Huang
2016-01-01
Based on a special type of denoising autoencoder (DAE) and image reconstruction, we present a novel supervised deep learning framework for face recognition (FR). Unlike existing deep autoencoders, which are unsupervised face recognition methods, the proposed method takes class label information from training samples into account in the deep learning procedure and can automatically discover the underlying nonlinear manifold structures. Specifically, we define an Adaptive Deep Supervised Network Template (ADSNT) with the supervised autoencoder, which is trained to extract characteristic features from corrupted/clean facial images and reconstruct the corresponding similar facial images. The reconstruction is realized by a so-called "bottleneck" neural network that learns to map face images into a low-dimensional vector and reconstruct the respective corresponding face images from the mapping vectors. Having trained the ADSNT, a new face image can then be recognized by comparing its reconstruction image with individual gallery images. Extensive experiments on three databases, including AR, PubFig, and Extended Yale B, demonstrate that the proposed method can significantly improve the accuracy of face recognition under severe illumination variation, pose change, and partial occlusion.
Wang, A. S.; Stayman, J. W.; Otake, Y.; Khanna, A. J.; Gallia, G. L.; Siewerdsen, J. H.
2014-03-01
Purpose: A new method for accurately portraying the impact of low-dose imaging techniques in C-arm cone-beam CT (CBCT) is presented and validated, allowing identification of minimum-dose protocols suitable to a given imaging task on a patient-specific basis in scenarios that require repeat intraoperative scans. Method: To accurately simulate lower-dose techniques and account for object-dependent noise levels (x-ray quantum noise and detector electronics noise) and correlations (detector blur), noise of the proper magnitude and correlation was injected into the projections from an initial CBCT acquired at the beginning of a procedure. The resulting noisy projections were then reconstructed to yield low-dose preview (LDP) images that accurately depict the image quality at any level of reduced dose in both filtered backprojection and statistical image reconstruction. Validation studies were conducted on a mobile C-arm, with the noise injection method applied to images of an anthropomorphic head phantom and cadaveric torso across a range of lower-dose techniques. Results: Comparison of preview and real CBCT images across a full range of techniques demonstrated accurate noise magnitude (within ~5%) and correlation (matching noise-power spectrum, NPS). Other image quality characteristics (e.g., spatial resolution, contrast, and artifacts associated with beam hardening and scatter) were also realistically presented at all levels of dose and across reconstruction methods, including statistical reconstruction. Conclusion: Generating low-dose preview images for a broad range of protocols gives a useful method to select minimum-dose techniques that accounts for complex factors of imaging task, patient-specific anatomy, and observer preference. The ability to accurately simulate the influence of low-dose acquisition in statistical reconstruction provides an especially valuable means of identifying low-dose limits in a manner that does not rely on a model for the nonlinear
Fast MR Spectroscopic Imaging Technologies and Data Reconstruction Methods
HUANG Min; LU Song-tao; LIN Jia-rui; ZHAN Ying-jian
2004-01-01
MRSI plays an increasingly important role in clinical applications. In this paper, we compare several fast MRSI technologies and data reconstruction methods. For conventional phase-encoding MRSI, data reconstruction using the FFT is simple, but the data acquisition is very time consuming and thus prohibitive in clinical settings. To date, the MRSI technologies based on echo-planar and spiral trajectories and on sensitivity encoding are the fastest in data acquisition, but their data reconstruction is complex. EPSI reconstruction uses the shift of odd and even echoes. Spiral SI uses gridding FFT. SENSE-SI, a new approach to reducing the acquisition time, uses the distinct spatial sensitivities of the individual coil elements to recover the missing encoding information. These improvements in data acquisition and image reconstruction suggest the potential value of metabolic imaging as a clinical tool.
A novel building boundary reconstruction method based on lidar data and images
Chen, Yiming; Zhang, Wuming; Zhou, Guoqing; Yan, Guangjian
2013-09-01
Building boundaries are important for urban mapping and real estate industry applications. The reconstruction of building boundaries is also a significant but difficult step in generating city building models. As light detection and ranging (lidar) systems can acquire large and dense point cloud data quickly and easily, they have great advantages for building reconstruction. In this paper, we combine lidar data and images to develop a novel building boundary reconstruction method. We use only one scan of lidar data and one image to do the reconstruction. The process consists of a sequence of three steps: project boundary lidar points to the image; extract an accurate boundary from the image; and reconstruct the boundary in the lidar points. We define a relationship between 3D points and pixel coordinates. Then we extract the boundary in the image and use the relationship to obtain the boundary in the point cloud. The method presented here reduces the difficulty of data acquisition effectively. The theory is not complex, so it has low computational complexity. It can also be widely used on data acquired by other 3D scanning devices to improve accuracy. Results of the experiment demonstrate that this method has a clear advantage and high efficiency over others, particularly for data with large point spacing.
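The first step, projecting lidar points into the image, amounts to applying a pinhole camera model x = K [R|t] X. A minimal sketch with assumed calibration values (the intrinsics, pose, and point below are invented for illustration, not the paper's):

```python
import numpy as np

# Assumed pinhole intrinsics: focal length 800 px, principal point (320, 240).
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

# Assumed extrinsics: identity rotation, zero translation (lidar = camera frame).
Rt = np.hstack([np.eye(3), np.zeros((3, 1))])

X = np.array([1.0, 0.5, 4.0, 1.0])   # a homogeneous 3D boundary point
x = K @ Rt @ X                        # project into the image plane
u, v = x[0] / x[2], x[1] / x[2]       # dehomogenize to pixel coordinates
print(u, v)                           # -> 520.0 340.0
```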
Program package for accurate 3D field reconstruction from boundary measurements
Artyukh, A G; Belyakova, T V
2002-01-01
The problem of the magnetic field reconstruction inside a subregion of R^3 from magnetic measurements on the closed boundary of this subregion is considered. The efficiency of the proposed method, algorithm and associated software for the precision magnet system is discussed. The results of the software verification and numerical experiments, as well as those of the field reconstruction using boundary measurements in the magnet M1 of the separator COMBAS, are given. Requirements on the position accuracy of the sensors consistent with the required accuracy of the magnetic field reconstruction are defined. Recommendations on the magnetic scheme design for the field mapping are given.
Gibson, Eli; Gaed, Mena; Gómez, José A.; Moussa, Madeleine; Pautler, Stephen; Chin, Joseph L.; Crukley, Cathie; Bauman, Glenn S.; Fenster, Aaron; Ward, Aaron D.
2013-01-01
Background: Guidelines for localizing prostate cancer on imaging are ideally informed by registered post-prostatectomy histology. 3D histology reconstruction methods can support this by reintroducing 3D spatial information lost during histology processing. The need to register small, high-grade foci drives a need for high accuracy. Accurate 3D reconstruction method design is impacted by the answers to the following central questions of this work. (1) How does prostate tissue deform during histology processing? (2) What spatial misalignment of the tissue sections is induced by microtome cutting? (3) How does the choice of reconstruction model affect histology reconstruction accuracy? Materials and Methods: Histology, paraffin block face and magnetic resonance images were acquired for 18 whole mid-gland tissue slices from six prostates. 7-15 homologous landmarks were identified on each image. Tissue deformation due to histology processing was characterized using the target registration error (TRE) after landmark-based registration under four deformation models (rigid, similarity, affine and thin-plate-spline [TPS]). The misalignment of histology sections from the front faces of tissue slices was quantified using manually identified landmarks. The impact of reconstruction models on the TRE after landmark-based reconstruction was measured under eight reconstruction models comprising one of four deformation models with and without constraining histology images to the tissue slice front faces. Results: Isotropic scaling improved the mean TRE by 0.8-1.0 mm (all results reported as 95% confidence intervals), while skew or TPS deformation improved the mean TRE by <0.1 mm. The mean misalignment was 1.1-1.9° (angle) and 0.9-1.3 mm (depth). Using isotropic scaling, the front face constraint raised the mean TRE by 0.6-0.8 mm. Conclusions: For sub-millimeter accuracy, 3D reconstruction models should not constrain histology images to the tissue slice front faces and should be
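The landmark-based registration and TRE evaluation described above can be sketched with a standard similarity (Procrustes/Umeyama) fit. The landmarks below are synthetic and 2D for brevity; this illustrates the evaluation machinery, not the study's data:

```python
import numpy as np

# Fit s, R, t minimizing ||s R p_i + t - q_i||^2 over homologous landmarks
# (rows of P and Q), i.e. a similarity transform via Procrustes analysis.
def similarity_fit(P, Q):
    mp, mq = P.mean(axis=0), Q.mean(axis=0)
    Pc, Qc = P - mp, Q - mq
    U, S, Vt = np.linalg.svd(Qc.T @ Pc)
    d = np.sign(np.linalg.det(U @ Vt))          # guard against reflections
    D = np.diag([1.0, d])
    R = U @ D @ Vt
    s = np.trace(np.diag(S) @ D) / (Pc ** 2).sum()
    t = mq - s * R @ mp
    return s, R, t

rng = np.random.default_rng(3)
P = rng.normal(size=(10, 2))                    # synthetic landmarks
theta, scale, shift = 0.4, 1.2, np.array([2.0, -1.0])
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
Q = scale * P @ R_true.T + shift                # noiseless homologous landmarks

s, R, t = similarity_fit(P, Q)
# Target registration error: mean residual distance after registration.
tre = np.linalg.norm((s * P @ R.T + t) - Q, axis=1).mean()
```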
Application of Super-Resolution Image Reconstruction to Digital Holography
Zhang Shuqun
2006-01-01
We describe a new application of super-resolution image reconstruction to digital holography which is a technique for three-dimensional information recording and reconstruction. Digital holography has suffered from the low resolution of CCD sensors, which significantly limits the size of objects that can be recorded. The existing solution to this problem is to use optics to bandlimit the object to be recorded, which can cause the loss of details. Here super-resolution image reconstruction is proposed to be applied in enhancing the spatial resolution of digital holograms. By introducing a global camera translation before sampling, a high-resolution hologram can be reconstructed from a set of undersampled hologram images. This permits the recording of larger objects and reduces the distance between the object and the hologram. Practical results from real and simulated holograms are presented to demonstrate the feasibility of the proposed technique.
Compensation for air voids in photoacoustic computed tomography image reconstruction
Matthews, Thomas P.; Li, Lei; Wang, Lihong V.; Anastasio, Mark A.
2016-03-01
Most image reconstruction methods in photoacoustic computed tomography (PACT) assume that the acoustic properties of the object and the surrounding medium are homogeneous. This can lead to strong artifacts in the reconstructed images when there are significant variations in sound speed or density. Air voids represent a particular challenge due to the severity of the differences between the acoustic properties of air and water. In whole-body small animal imaging, the presence of air voids in the lungs, stomach, and gastrointestinal system can limit image quality over large regions of the object. Iterative reconstruction methods based on the photoacoustic wave equation can account for these acoustic variations, leading to improved resolution, improved contrast, and a reduction in the number of imaging artifacts. However, the strong acoustic heterogeneities can lead to instability or errors in the numerical wave solver. Here, the impact of air voids on PACT image reconstruction is investigated, and procedures for their compensation are proposed. The contributions of sound speed and density variations to the numerical stability of the wave solver are considered, and a novel approach for mitigating the impact of air voids while reducing the computational burden of image reconstruction is identified. These results are verified by application to an experimental phantom.
Iterative image reconstruction and its role in cardiothoracic computed tomography.
Singh, Sarabjeet; Khawaja, Ranish Deedar Ali; Pourjabbar, Sarvenaz; Padole, Atul; Lira, Diego; Kalra, Mannudeep K
2013-11-01
Revolutionary developments in multidetector-row computed tomography (CT) scanner technology offer several advantages for imaging of cardiothoracic disorders. As a result, expanding applications of CT now account for >85 million CT examinations annually in the United States alone. Given the large number of CT examinations performed, concerns over an increase in population-based risk for radiation-induced carcinogenesis have made CT radiation dose a top safety concern in health care. In response to this concern, several technologies have been developed to reduce the dose through more efficient use of scan parameters and the use of "newer" image reconstruction techniques. Although iterative image reconstruction algorithms were first introduced in the 1970s, filtered back projection was chosen as the conventional image reconstruction technique because of its simplicity and faster reconstruction times. With subsequent advances in computational speed and power, iterative reconstruction techniques have reemerged and have shown the potential of radiation dose optimization without adversely influencing diagnostic image quality. In this article, we review the basic principles of different iterative reconstruction algorithms and their implementation for various clinical applications in cardiothoracic CT examinations for reducing radiation dose.
Efficient and Accurate Gaussian Image Filtering Using Running Sums
Elboher, Elhanan
2011-01-01
This paper presents a simple and efficient method to convolve an image with a Gaussian kernel. The computation is performed in a constant number of operations per pixel using running sums along the image rows and columns. We investigate the error function used for kernel approximation and its relation to the properties of the input signal. Based on natural image statistics, we propose a quadratic-form kernel error function so that the output image l2 error is minimized. We apply the proposed approach to approximate the Gaussian kernel by a linear combination of constant functions. This results in a very efficient Gaussian filtering method. Our experiments show that the proposed technique is faster than state-of-the-art methods while preserving a similar accuracy.
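The running-sum idea can be sketched as follows: a box filter computed from one cumulative sum costs a constant number of operations per pixel, and repeated box passes approximate a Gaussian. This is a simplified stand-in for the paper's optimized kernel approximation:

```python
import numpy as np

# Mean over a window of radius r, computed from a single cumulative sum,
# so the cost per sample is constant regardless of r.
def box_filter_1d(x, r):
    n = len(x)
    c = np.cumsum(np.concatenate(([0.0], x)))
    lo = np.clip(np.arange(n) - r, 0, n)
    hi = np.clip(np.arange(n) + r + 1, 0, n)
    return (c[hi] - c[lo]) / (hi - lo)

# Three box passes give a smooth, approximately Gaussian response.
def approx_gaussian_1d(x, r, passes=3):
    for _ in range(passes):
        x = box_filter_1d(x, r)
    return x

sig = np.zeros(101)
sig[50] = 1.0                       # unit impulse
out = approx_gaussian_1d(sig, r=4)  # bell-shaped response, total mass preserved
```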
Chris L. de Korte
2013-03-01
Atherosclerotic plaque rupture can initiate stroke or myocardial infarction. Lipid-rich plaques with thin fibrous caps have a higher risk of rupture than fibrotic plaques. Elastic moduli differ for lipid-rich and fibrous tissue and can be reconstructed using tissue displacements estimated from intravascular ultrasound radiofrequency (RF) data acquisitions. This study investigated whether modulus reconstruction is possible for noninvasive RF acquisitions of vessels in transverse imaging planes using an iterative 2D cross-correlation based displacement estimation algorithm. Furthermore, since it is known that displacements can be improved by compounding of displacements estimated at various beam steering angles, we compared the performance of the modulus reconstruction with and without compounding. For the comparison, simulated and experimental RF data were generated for various vessel-mimicking phantoms. Reconstruction errors were less than 10%, which seems adequate for distinguishing lipid-rich from fibrous tissue. Compounding outperformed single-angle reconstruction: the interquartile range of the reconstructed moduli for the various homogeneous phantom layers was approximately two times smaller. Additionally, the estimated lateral displacements were a factor of 2–3 better matched to the displacements corresponding to the reconstructed modulus distribution. Thus, noninvasive elastic modulus reconstruction is possible for transverse vessel cross sections using this cross-correlation method and is more accurate with compounding.
Improving JWST Coronagraphic Performance with Accurate Image Registration
Van Gorkom, Kyle; Pueyo, Laurent; Lajoie, Charles-Philippe; JWST Coronagraphs Working Group
2016-06-01
The coronagraphs on the James Webb Space Telescope (JWST) will enable high-contrast observations of faint objects at small separations from bright hosts, such as circumstellar disks, exoplanets, and quasar disks. Despite attenuation by the coronagraphic mask, bright speckles in the host’s point spread function (PSF) remain, effectively washing out the signal from the faint companion. Suppression of these bright speckles is typically accomplished by repeating the observation with a star that lacks a faint companion, creating a reference PSF that can be subtracted from the science image to reveal any faint objects. Before this reference PSF can be subtracted, however, the science and reference images must be aligned precisely, typically to 1/20 of a pixel. Here, we present several such algorithms for performing image registration on JWST coronagraphic images. Using both simulated and pre-flight test data (taken in cryovacuum), we assess (1) the accuracy of each algorithm at recovering misaligned scenes and (2) the impact of image registration on achievable contrast. Proper image registration, combined with post-processing techniques such as KLIP or LOCI, will greatly improve the performance of the JWST coronagraphs.
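One standard algorithm in this family (not necessarily the JWST pipeline's) estimates the integer shift by FFT-based cross-correlation and refines it to sub-pixel precision with a quadratic fit around the correlation peak. A 1D sketch on synthetic Gaussian profiles:

```python
import numpy as np

# Estimate s such that img[x] ~= ref[x - s], to sub-pixel precision.
def register_1d(ref, img):
    n = len(ref)
    # Cross-correlation via the FFT; its peak sits near the integer shift.
    xc = np.fft.ifft(np.fft.fft(img) * np.conj(np.fft.fft(ref))).real
    k = int(np.argmax(xc))
    # Quadratic fit to the log of the three samples around the peak.
    ym, y0, yp = np.log(xc[[(k - 1) % n, k, (k + 1) % n]])
    delta = 0.5 * (ym - yp) / (ym - 2.0 * y0 + yp)
    s = k + delta
    return s - n if s > n / 2 else s   # map to a signed shift

x = np.arange(256)
ref = np.exp(-((x - 100.0) ** 2) / (2 * 6.0 ** 2))
img = np.exp(-((x - 103.3) ** 2) / (2 * 6.0 ** 2))   # ref shifted by 3.3 px
shift = register_1d(ref, img)                        # close to 3.3
```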
Visions of Reconstruction: Layers of Moving Images
Floris Jan Willem Paalman
2015-12-01
After WWII, films accompanied the reconstruction of Europe’s destroyed cities. Many contained historical footage. How was this material used to articulate visions of reconstruction, what happened to the material later on, and how do the films relate to municipal film archives? This question is approached in terms of collective cognitive functions, applied to a media archaeological case study of Rotterdam. In focus are two audiovisual landmarks, from 1950 and 1966, and their historical footage, all with different temporal horizons. This study attempts to position the city film archive in media history.
Proposal of fault-tolerant tomographic image reconstruction
Kudo, Hiroyuki; Yamazaki, Fukashi; Nemoto, Takuya
2016-01-01
This paper deals with tomographic image reconstruction under the situation where some of the projection data bins are contaminated with abnormal data. Such situations occur in various instances of tomography. We propose a new reconstruction algorithm called the Fault-Tolerant reconstruction, outlined as follows. The least-squares (L2-norm) error function ||Ax-b||_2^2 used in ordinary iterative reconstructions is sensitive to the existence of abnormal data. The proposed algorithm utilizes the L1-norm error function ||Ax-b||_1 instead of the L2-norm, and we develop a row-action-type iterative algorithm using the proximal splitting framework from convex optimization. We also propose an improved version of the L1-norm reconstruction called the L1-TV reconstruction, in which a weak Total Variation (TV) penalty is added to the cost function. Simulation results demonstrate that reconstructed images with the L2-norm were severely damaged by the effect of abnormal bins, whereas images with the L1-norm and L1-TV reco...
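The robustness argument can be demonstrated on a toy system: one grossly abnormal data bin pulls the least-squares solution away, while an L1 fit is barely affected. Here the L1 problem is solved by simple iteratively reweighted least squares rather than the paper's row-action proximal algorithm:

```python
import numpy as np

# L1 (least absolute deviations) fit via iteratively reweighted least squares:
# each row is reweighted by the inverse of its current residual magnitude.
def l1_fit(A, b, n_iter=100, eps=1e-8):
    x = np.linalg.lstsq(A, b, rcond=None)[0]
    for _ in range(n_iter):
        w = 1.0 / np.maximum(np.abs(b - A @ x), eps)
        sw = np.sqrt(w)
        x = np.linalg.lstsq(sw[:, None] * A, sw * b, rcond=None)[0]
    return x

rng = np.random.default_rng(1)
A = rng.normal(size=(30, 4))
x_true = rng.normal(size=4)
b = A @ x_true
b[7] += 50.0                                      # one grossly abnormal bin

x_l2 = np.linalg.lstsq(A, b, rcond=None)[0]       # pulled off by the outlier
x_l1 = l1_fit(A, b)                               # essentially ignores it
```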
Accurate measurement of curvilinear shapes by Virtual Image Correlation
Semin, B.; Auradou, H.; François, M. L. M.
2011-10-01
The proposed method allows the detection and the measurement, in the sense of metrology, of smooth elongated curvilinear shapes. Such measurements are required in many fields of physics, for example: mechanical engineering, biology or medicine (deflection of beams, fibers or filaments), fluid mechanics or chemistry (detection of fronts). Unlike existing methods, the result is given in an analytical form of class C∞ (and not as a finite set of locations or pixels), so curvatures and slopes, often of great interest in science, are given with good confidence. The proposed Virtual Image Correlation (VIC) method uses a virtual beam, an image which consists of a lateral expansion of the curve with a bell-shaped gray level. This figure is deformed until it best fits the physical image, with an approach derived from the Digital Image Correlation method used in solid mechanics. The precision of the identification is studied in a benchmark and successfully compared to two state-of-the-art methods. Three practical examples are given: a bar bending under its own weight, a thin fiber transported by a flow within a fracture, and a thermal front. The first allows a comparison with the theoretical solution, the second shows the ability of the method to deal with complex shapes and crossings, and the third deals with an ill-defined image.
Chvetsov, Alevei V.; Sandison, George A.; Schwartz, Jeffrey L.; Rengan, Ramesh
2015-11-01
The main objective of this article is to improve the stability of reconstruction algorithms for estimation of radiobiological parameters using serial tumor imaging data acquired during radiation therapy. Serial images of tumor response to radiation therapy represent a complex summation of several exponential processes, such as treatment-induced cell inactivation, tumor growth rates, and the rate of cell loss. Accurate assessment of treatment response would require separation of these processes because they define radiobiological determinants of treatment response and, correspondingly, tumor control probability. However, the estimation of radiobiological parameters using imaging data can be considered an inverse ill-posed problem because a sum of several exponentials produces the Fredholm integral equation of the first kind, which is ill posed. Therefore, the stability of reconstruction of radiobiological parameters presents a problem even for the simplest models of tumor response. To study the stability of the parameter reconstruction problem, we used a set of serial CT imaging data for head and neck cancer and the simplest case of a two-level cell population model of tumor response. Inverse reconstruction was performed using a simulated annealing algorithm to minimize a least-squares objective function. Results show that the reconstructed values of cell surviving fractions and cell doubling time exhibit significant nonphysical fluctuations if no stabilization algorithms are applied. However, after applying a stabilization algorithm based on variational regularization, the reconstruction produces statistical distributions for survival fractions and doubling time that are comparable to published in vitro data. This algorithm is an advance over our previous work, where only cell surviving fractions were reconstructed. We conclude that variational regularization allows for an increase in the number of free parameters in our model, which enables development of more
Super-Resolution Reconstruction of Image Sequence Using Multiple Motion Estimation Fusion
Cheng Wang; Run-Sheng Wang
2004-01-01
A super-resolution reconstruction algorithm produces a high-resolution image from a low-resolution image sequence. The accuracy and the stability of the motion estimation (ME) are essential for the whole restoration. In this paper, a new super-resolution reconstruction algorithm is developed using a robust ME method, which fuses multiple estimated motion vectors within the sequence. The new algorithm has two major improvements compared with previous research. First, instead of only two frames, the whole sequence is used to obtain a more accurate and stable estimation of the motion vector of each frame; second, the reliability of the ME is quantitatively measured and introduced into the cost function of the reconstruction algorithm. The algorithm is applied to both synthetic and real sequences, and the results are presented in the paper.
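The reliability-weighted fusion of motion vectors can be sketched in a few lines. The candidate vectors and reliability scores below are invented; this illustrates the fusion idea only, not the paper's cost function:

```python
import numpy as np

# Three candidate (dx, dy) motion estimates for the same frame, the last one
# unreliable, fused with reliability weights so it contributes little.
est = np.array([[1.0, 0.50],
                [1.1, 0.45],
                [3.0, 2.00]])          # an outlier estimate
rel = np.array([0.9, 0.85, 0.1])       # reliability score per estimate

fused = (rel[:, None] * est).sum(axis=0) / rel.sum()
# The fused vector stays close to the two reliable estimates, unlike the
# unweighted mean, which is dragged toward the outlier.
```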
Fast dictionary-based reconstruction for diffusion spectrum imaging.
Bilgic, Berkin; Chatnuntawech, Itthi; Setsompop, Kawin; Cauley, Stephen F; Yendiki, Anastasia; Wald, Lawrence L; Adalsteinsson, Elfar
2013-11-01
Diffusion spectrum imaging reveals detailed local diffusion properties at the expense of substantially long imaging times. It is possible to accelerate acquisition by undersampling in q-space, followed by image reconstruction that exploits prior knowledge on the diffusion probability density functions (pdfs). Previously proposed methods impose this prior in the form of sparsity under wavelet and total variation transforms, or under adaptive dictionaries that are trained on example datasets to maximize the sparsity of the representation. These compressed sensing (CS) methods require full-brain processing times on the order of hours using MATLAB running on a workstation. This work presents two dictionary-based reconstruction techniques that use analytical solutions, and are two orders of magnitude faster than the previously proposed dictionary-based CS approach. The first method generates a dictionary from the training data using principal component analysis (PCA), and performs the reconstruction in the PCA space. The second proposed method applies reconstruction using pseudoinverse with Tikhonov regularization with respect to a dictionary. This dictionary can either be obtained using the K-SVD algorithm, or it can simply be the training dataset of pdfs without any training. All of the proposed methods achieve reconstruction times on the order of seconds per imaging slice, and have reconstruction quality comparable to that of dictionary-based CS algorithm.
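The second method reduces, after a one-time precomputation, to a single matrix product per voxel. A hedged sketch with illustrative dimensions, sampling mask, and regularization weight (none of these are the paper's settings):

```python
import numpy as np

rng = np.random.default_rng(2)
n, k, m = 64, 16, 40              # signal length, dictionary atoms, q-space samples
D = rng.normal(size=(n, k))       # stand-in for a trained (or raw training) dictionary
M = np.eye(n)[rng.choice(n, size=m, replace=False)]   # undersampling operator
lam = 1e-2                        # Tikhonov regularization weight

# Precompute once:  x_hat = D (D^T M^T M D + lam I)^(-1) D^T M^T y,
# so each voxel's reconstruction is a single matrix-vector product.
MD = M @ D
Recon = D @ np.linalg.solve(MD.T @ MD + lam * np.eye(k), MD.T)

x = D @ rng.normal(size=k)        # a signal lying in the dictionary's span
y = M @ x                         # undersampled measurement
x_hat = Recon @ y                 # small relative reconstruction error
```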
Pose Reconstruction of Flexible Instruments from Endoscopic Images using Markers
Reilink, Rob; Stramigioli, Stefano; Misra, Sarthak
2012-01-01
A system is developed that can reconstruct the pose of flexible endoscopic instruments that are used in advanced flexible endoscopes using solely the endoscopic images. Four markers are placed on the instrument, whose positions are measured in the image. These measurements are compared to a
Distributed image reconstruction for very large arrays in radio astronomy
Ferrari, André; Flamary, Rémi; Richard, Cédric
2015-01-01
Current and future radio interferometric arrays such as LOFAR and SKA are characterized by a paradox. Their large number of receptors (up to millions) theoretically allows unprecedented imaging resolution. At the same time, the massive amount of samples makes the data transfer and computational loads (correlation and calibration) orders of magnitude too high to allow any currently existing image reconstruction algorithm to achieve, or even approach, the theoretical resolution. We investigate here decentralized and distributed image reconstruction strategies which select, transfer and process only a fraction of the total data. The loss in MSE incurred by the proposed approach is evaluated theoretically and numerically on simple test cases.
3D Image Reconstruction: Determination of Pattern Orientation
Blankenbecler, Richard
2003-03-13
The problem of determining the Euler angles of a randomly oriented 3-D object from its 2-D Fraunhofer diffraction patterns is discussed. This problem arises in the reconstruction of a positive semi-definite 3-D object using oversampling techniques. In such a problem, the data consists of a measured set of magnitudes from 2-D tomographic images of the object at several unknown orientations. After the orientation angles are determined, the object itself can then be reconstructed by a variety of methods using oversampling, the magnitude data from the 2-D images, physical constraints on the image, and iteration to determine the phases.
Analysis Operator Learning and Its Application to Image Reconstruction
Hawe, Simon; Diepold, Klaus
2012-01-01
Exploiting a priori known structural information lies at the core of many image reconstruction methods that can be stated as inverse problems. The synthesis model, which assumes that images can be decomposed into a linear combination of very few atoms of some dictionary, is now a well-established tool for the design of image reconstruction algorithms. An interesting alternative is the analysis model, where the signal is multiplied by an analysis operator and the outcome is assumed to be sparse. This approach has only recently gained increasing interest. The quality of reconstruction methods based on an analysis model depends critically on the choice of a suitable operator. In this work, we present an algorithm for learning an analysis operator from training images. Our method is based on an $\ell_p$-norm minimization on the set of full-rank matrices with normalized columns. We carefully introduce the employed conjugate gradient method on manifolds, and explain the underlying geometry of the constrai...
Pino, Francisco [Unitat de Biofísica, Facultat de Medicina, Universitat de Barcelona, Barcelona 08036, Spain and Servei de Física Mèdica i Protecció Radiològica, Institut Català d’Oncologia, L’Hospitalet de Llobregat 08907 (Spain); Roé, Nuria [Unitat de Biofísica, Facultat de Medicina, Universitat de Barcelona, Barcelona 08036 (Spain); Aguiar, Pablo, E-mail: pablo.aguiar.fernandez@sergas.es [Fundación Ramón Domínguez, Complexo Hospitalario Universitario de Santiago de Compostela 15706, Spain and Grupo de Imagen Molecular, Instituto de Investigacións Sanitarias de Santiago de Compostela (IDIS), Galicia 15782 (Spain); Falcon, Carles; Ros, Domènec [Institut d’Investigacions Biomèdiques August Pi i Sunyer (IDIBAPS), Barcelona 08036, Spain and CIBER en Bioingeniería, Biomateriales y Nanomedicina (CIBER-BBN), Barcelona 08036 (Spain); Pavía, Javier [Institut d’Investigacions Biomèdiques August Pi i Sunyer (IDIBAPS), Barcelona 08036 (Spain); CIBER en Bioingeniería, Biomateriales y Nanomedicina (CIBER-BBN), Barcelona 08036 (Spain); and Servei de Medicina Nuclear, Hospital Clínic, Barcelona 08036 (Spain)
2015-02-15
Purpose: Single photon emission computed tomography (SPECT) has become an important noninvasive imaging technique in small-animal research. Due to the high resolution required in small-animal SPECT systems, the spatially variant system response needs to be included in the reconstruction algorithm. Accurate modeling of the system response should result in a major improvement in the quality of reconstructed images. The aim of this study was to quantitatively assess the impact that an accurate modeling of spatially variant collimator/detector response has on image-quality parameters, using a low magnification SPECT system equipped with a pinhole collimator and a small gamma camera. Methods: Three methods were used to model the point spread function (PSF). For the first, only the geometrical pinhole aperture was included in the PSF. For the second, the septal penetration through the pinhole collimator was added. In the third method, the measured intrinsic detector response was incorporated. Tomographic spatial resolution was evaluated and contrast, recovery coefficients, contrast-to-noise ratio, and noise were quantified using a custom-built NEMA NU 4–2008 image-quality phantom. Results: A high correlation was found between the experimental data corresponding to intrinsic detector response and the fitted values obtained by means of an asymmetric Gaussian distribution. For all PSF models, resolution improved as the distance from the point source to the center of the field of view increased and when the acquisition radius diminished. An improvement of resolution was observed after a minimum of five iterations when the PSF modeling included more corrections. Contrast, recovery coefficients, and contrast-to-noise ratio were better for the same level of noise in the image when more accurate models were included. Ring-type artifacts were observed when the number of iterations exceeded 12. Conclusions: Accurate modeling of the PSF improves resolution, contrast, and recovery
Milani, Simone; Tronca, Enrico
2017-01-01
During the past years, research has focused on the reconstruction of three-dimensional point cloud models from unordered and uncalibrated sets of images. Most of the proposed solutions rely on the structure-from-motion algorithm, and their performance degrades significantly whenever exchangeable image file format information about focal lengths is missing or corrupted. We propose a preprocessing strategy that permits estimating the focal lengths of a camera more accurately. The basic idea is to cluster the input images into separate subsets according to an array of interpolation-related multimedia forensic clues. This operation yields a more robust estimate and improves the accuracy of the final model.
Zhang Wentao; Li Xiaofeng; Li Zaiming
2001-01-01
The paper first discusses the shortcomings of classical adjacent-frame differencing. Secondly, based on image energy and higher-order statistics (HOS) theory, background reconstruction constraints are set up. With the help of block-processing technology, the background is reconstructed quickly. Finally, background differencing is used to detect motion regions instead of adjacent-frame differencing. Tests on a DSP-based platform indicate that the background can be recovered losslessly in about one second, and that moving regions are not influenced by the speed of the moving targets. The algorithm has important uses both in theory and in applications.
Sparsity-constrained three-dimensional image reconstruction for C-arm angiography.
Rashed, Essam A; al-Shatouri, Mohammad; Kudo, Hiroyuki
2015-07-01
The X-ray C-arm is an important imaging tool in interventional radiology, road-mapping and radiation therapy because it provides accurate descriptions of vascular anatomy and therapeutic end point. In common interventional radiology, the C-arm scanner produces a set of two-dimensional (2D) X-ray projection data obtained with a detector by rotating the scanner gantry around the patient. Unlike conventional fluoroscopic imaging, three-dimensional (3D) C-arm computed tomography (CT) provides more accurate cross-sectional images, which are helpful for therapy planning, guidance and evaluation in interventional radiology. However, 3D vascular imaging using conventional C-arm fluoroscopy encounters some geometry challenges. Inspired by the theory of compressed sensing, we developed an image reconstruction algorithm for conventional angiography C-arm scanners. The main challenge in this image reconstruction problem is the projection data limitations. We consider a small number of views acquired from a short rotation orbit with offset scan geometry. The proposed method, called sparsity-constrained angiography (SCAN), is developed using the alternating direction method of multipliers, and the results obtained from simulated and real data are encouraging. The SCAN algorithm provides a framework to generate 3D vascular images using conventional C-arm scanners at a lower cost than conventional 3D imaging scanners.
Distributed multi-frequency image reconstruction for radio-interferometry
Deguignet, Jérémy; Mary, David; Ferrari, Chiara
2016-01-01
The advent of enhanced technologies in radio interferometry and the perspective of the SKA telescope bring new challenges in image reconstruction. One of these challenges is the spatio-spectral reconstruction of large (terabyte) data cubes with high fidelity. This contribution proposes an alternative implementation of one such 3D prototype algorithm, MUFFIN (MUlti-Frequency image reconstruction For radio INterferometry), which combines spatial and spectral analysis priors. Using a recently proposed primal-dual algorithm, this new version of MUFFIN allows a parallel implementation where computationally intensive steps are split by spectral channels. This parallelization makes it possible to implement computationally demanding translation-invariant wavelet transforms (IUWT), as opposed to the union of bases used previously. This alternative implementation is important as it opens the possibility of comparing these efficient dictionaries, and others, in spatio-spectral reconstruction. Numerical results show that the IUWT-...
Probability of correct reconstruction in compressive spectral imaging
Samuel Eduardo Pinilla
2016-08-01
Coded Aperture Snapshot Spectral Imaging (CASSI) systems capture the 3-dimensional (3D) spatio-spectral information of a scene using a set of 2-dimensional (2D) random coded Focal Plane Array (FPA) measurements. A compressed sensing reconstruction algorithm is then used to recover the underlying spatio-spectral 3D data cube. The quality of the reconstructed spectral images depends exclusively on the CASSI sensing matrix, which is determined by the statistical structure of the coded apertures. The Restricted Isometry Property (RIP) of the CASSI sensing matrix is used to determine the probability of correct image reconstruction and provides guidelines for the minimum number of FPA measurement shots needed for image reconstruction. Further, the RIP can be used to determine the optimal structure of the coded projections in CASSI. This article describes the CASSI optical architecture and develops the RIP for the sensing matrix in this system. Simulations show the higher quality of spectral image reconstructions when the RIP property is satisfied. Simulations also illustrate the higher performance of the optimal structured projections in CASSI.
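RIP constants are intractable to compute exactly; a common tractable surrogate for gauging how well a coded sensing matrix supports sparse recovery is its mutual coherence (the largest correlation between distinct normalized columns). A small sketch for a random +/-1 coded matrix; the sizes are illustrative and this is not the structured CASSI system matrix:

```python
import numpy as np

def mutual_coherence(A):
    """Largest absolute inner product between distinct normalized columns of A."""
    An = A / np.linalg.norm(A, axis=0)   # normalize each column
    G = np.abs(An.T @ An)                # Gram matrix of column correlations
    np.fill_diagonal(G, 0.0)            # ignore self-correlations
    return G.max()

rng = np.random.default_rng(2)
A = rng.choice([-1.0, 1.0], size=(64, 128))  # random binary code (illustrative)
mu = mutual_coherence(A)
```

Lower coherence loosely corresponds to a better-conditioned sensing matrix, which is the same design goal the RIP analysis formalizes for the coded apertures.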
Tomographic Image Reconstruction Using Training Images with Matrix and Tensor Formulations
Soltani, Sara
…the image resolution compared to a classical reconstruction method such as Filtered Back Projection (FBP). Some priors for the tomographic reconstruction take the form of cross-section images of similar objects, providing a set of so-called training images that hold the key to the structural information about the solution. The training images must be reliable and application-specific. This PhD project aims at providing a mathematical and computational framework for the use of training sets as non-parametric priors for the solution in tomographic image reconstruction. Through an unsupervised machine learning technique (here, dictionary learning), prototype elements from the training images are extracted and then incorporated in the tomographic reconstruction problem with both matrix and tensor representations of the training images. First, an algorithm for the tomographic image…
Parallel hyperspectral image reconstruction using random projections
Sevilla, Jorge; Martín, Gabriel; Nascimento, José M. P.
2016-10-01
Spaceborne sensor systems are characterized by scarce onboard computing and storage resources and by communication links with reduced bandwidth. Random projection techniques have been demonstrated as an effective and very lightweight way to reduce the number of measurements in hyperspectral data, thus reducing the data to be transmitted to the Earth station. However, the reconstruction of the original data from the random projections may be computationally expensive. SpeCA is a blind hyperspectral reconstruction technique that exploits the fact that hyperspectral vectors often belong to a low-dimensional subspace. SpeCA has shown promising results in the task of recovering hyperspectral data from a reduced number of random measurements. In this manuscript we focus on the implementation of the SpeCA algorithm for graphics processing units (GPU) using the compute unified device architecture (CUDA). Experimental results conducted using synthetic and real hyperspectral datasets on the NVIDIA GeForce GTX 980 GPU reveal that the use of GPUs can provide real-time reconstruction. The achieved speedup is up to 22 times when compared with the processing time of SpeCA running on one core of the Intel i7-4790K CPU (3.4 GHz) with 32 GB of memory.
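The core subspace idea behind SpeCA-style recovery (not the full blind SpeCA algorithm, which also estimates the subspace) can be sketched in a few lines: if spectra lie in a low-dimensional subspace with known basis E, their coefficients can be recovered from far fewer random measurements by least squares. All sizes below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
bands, dim = 100, 5            # spectral bands, subspace dimension (illustrative)
E, _ = np.linalg.qr(rng.standard_normal((bands, dim)))  # orthonormal subspace basis

x = E @ rng.standard_normal(dim)        # a pixel spectrum lying in the subspace
R = rng.standard_normal((12, bands))    # 12 random measurements << 100 bands

y = R @ x                               # compressed measurement sent to ground
c, *_ = np.linalg.lstsq(R @ E, y, rcond=None)  # solve for subspace coefficients
x_hat = E @ c                           # reconstructed full spectrum
```

Because the least-squares system is only 12 x 5, this per-pixel solve is cheap and embarrassingly parallel across pixels, which is why the reconstruction maps well onto a GPU.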
High resolution image reconstruction with constrained, total-variation minimization
Sidky, Emil Y; Duchin, Yuval; Ullberg, Christer; Pan, Xiaochuan
2011-01-01
This work is concerned with applying iterative image reconstruction, based on constrained total-variation minimization, to low-intensity X-ray CT systems that have a high sampling rate. Such systems pose a challenge for iterative image reconstruction, because a very fine image grid is needed to realize the resolution inherent in such scanners. These image arrays lead to under-determined imaging models whose inversion is unstable and can result in undesirable artifacts and noise patterns. There are many possibilities to stabilize the imaging model, and this work proposes a method which may have an advantage in terms of algorithm efficiency. The proposed method introduces additional constraints in the optimization problem; these constraints set to zero high spatial frequency components which are beyond the sensing capability of the detector. The method is demonstrated with an actual CT data set and compared with another method based on projection up-sampling.
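The added constraint, zeroing spatial-frequency components beyond the detector's sensing capability, acts as a simple projection that can be applied inside each iteration. A numpy sketch with an assumed circular band limit (the paper's actual constraint geometry and cutoff may differ):

```python
import numpy as np

def zero_high_frequencies(img, cutoff):
    """Project an image onto the set of images with no energy above `cutoff`
    (cutoff in cycles/pixel; circular band limit is an illustrative choice)."""
    F = np.fft.fft2(img)
    fy = np.fft.fftfreq(img.shape[0])[:, None]
    fx = np.fft.fftfreq(img.shape[1])[None, :]
    F[fy**2 + fx**2 > cutoff**2] = 0.0   # kill frequencies the detector cannot sense
    return np.fft.ifft2(F).real

rng = np.random.default_rng(4)
img = rng.standard_normal((64, 64))
smooth = zero_high_frequencies(img, cutoff=0.15)
```

Being a projection, the operation is idempotent, so enforcing it once per iteration keeps the iterates inside the constraint set without fighting the data-fidelity updates.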
Reconstructing flaw image using dataset of full matrix capture technique
Lee, Tae Hun; Kim, Yong Sik; Lee, Jeong Seok [KHNP Central Research Institute, Daejeon (Korea, Republic of)
2017-02-15
A conventional phased array ultrasonic system offers the ability to steer an ultrasonic beam by applying independent time delays to the individual elements in the array and produce an ultrasonic image. In contrast, full matrix capture (FMC) is a data acquisition process that collects a complete matrix of A-scans from every possible independent transmit-receive combination in a phased array transducer. Through post-processing, it makes it possible to reconstruct various images that cannot be produced by a conventional phased array, as well as images equivalent to a conventional phased array image. In this paper, a basic algorithm based on the LLL-mode total focusing method (TFM) that can image crack-type flaws is described. This technique was then applied to reconstruct flaw images from FMC datasets obtained from experiments and ultrasonic simulation.
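The TFM evaluates, at every image pixel, the sum over all transmit-receive pairs of the FMC A-scan sampled at the round-trip delay. Below is a direct-path delay-and-sum sketch with an illustrative linear array and a synthetic point scatterer; the paper's LLL mode involves a specific multi-leg ray path that is not modeled here:

```python
import numpy as np

def tfm(fmc, elem_x, grid_x, grid_z, c, fs):
    """Total focusing method: delay-and-sum over all tx/rx pairs.
    fmc[tx, rx, t] is the full-matrix-capture dataset (illustrative layout)."""
    n_el, _, n_t = fmc.shape
    img = np.zeros((len(grid_z), len(grid_x)))
    for iz, z in enumerate(grid_z):
        for ix, x in enumerate(grid_x):
            d = np.hypot(elem_x - x, z)                 # element-to-pixel distances
            idx = np.rint((d[:, None] + d[None, :]) / c * fs).astype(int)
            valid = idx < n_t                           # round-trip sample per tx/rx pair
            tx, rx = np.nonzero(valid)
            img[iz, ix] = np.abs(fmc[tx, rx, idx[valid]].sum())
    return img

# Synthetic check: a single point scatterer at (x=0, z=10 mm)
c, fs = 6000.0, 25e6                   # wave speed (m/s) and sample rate (assumed)
elem_x = np.linspace(-5e-3, 5e-3, 8)   # 8-element linear array
fmc = np.zeros((8, 8, 2000))
d = np.hypot(elem_x - 0.0, 10e-3)
for t in range(8):
    for r in range(8):
        fmc[t, r, int(round((d[t] + d[r]) / c * fs))] = 1.0  # unit echo per pair

grid_x = np.linspace(-4e-3, 4e-3, 9)
grid_z = np.linspace(6e-3, 14e-3, 9)
img = tfm(fmc, elem_x, grid_x, grid_z, c, fs)
```

At the true scatterer location all 64 transmit-receive echoes add coherently, so the image peaks there; elsewhere the delays disagree and the sum stays small.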
A fast converging sparse reconstruction algorithm in ghost imaging
Li Enrong; Chen Mingliang; Gong Wenlin; Wang Hui; Han Shensheng
2012-01-01
A fast converging sparse reconstruction algorithm in ghost imaging is presented. It utilizes total variation regularization and its formulation is based on the Karush-Kuhn-Tucker (KKT) theorem in the theory of convex optimization. Tests using experimental data show that, compared with the algorithm of Gradient Projection for Sparse Reconstruction (GPSR), the proposed algorithm yields better results with less computational work.
Shape-based image reconstruction using linearized deformations
Öktem, Ozan; Chen, Chong; Onur Domaniç, Nevzat; Ravikumar, Pradeep; Bajaj, Chandrajit
2017-03-01
We introduce a reconstruction framework that can account for shape related prior information in imaging-related inverse problems. It is a variational scheme that uses a shape functional, whose definition is based on deformable template machinery from computational anatomy. We prove existence and, as a proof of concept, we apply the proposed shape-based reconstruction to 2D tomography with very sparse and/or highly noisy measurements.
An adaptive filtered back-projection for photoacoustic image reconstruction
Huang, He; Bustamante, Gilbert; Peterson, Ralph; Ye, Jing Yong, E-mail: jingyong.ye@utsa.edu [Department of Biomedical Engineering, University of Texas at San Antonio, San Antonio, Texas 78249 (United States)
2015-05-15
Purpose: The purpose of this study is to develop an improved filtered-back-projection (FBP) algorithm for photoacoustic tomography (PAT), which allows image reconstruction with higher quality compared to images reconstructed through traditional algorithms. Methods: A rigorous expression of a weighting function has been derived directly from a photoacoustic wave equation and used as a ramp filter in Fourier domain. The authors’ new algorithm utilizes this weighting function to precisely calculate each photoacoustic signal’s contribution and then reconstructs the image based on the retarded potential generated from the photoacoustic sources. In addition, an adaptive criterion has been derived for selecting the cutoff frequency of a low pass filter. Two computational phantoms were created to test the algorithm. The first phantom contained five spheres with each sphere having different absorbances. The phantom was used to test the capability for correctly representing both the geometry and the relative absorbed energy in a planar measurement system. The authors also used another phantom containing absorbers of different sizes with overlapping geometry to evaluate the performance of the new method for complicated geometry. In addition, random noise background was added to the simulated data, which were obtained by using an arc-shaped array of 50 evenly distributed transducers that spanned 160° over a circle with a radius of 65 mm. A normalized factor between the neighbored transducers was applied for correcting measurement signals in PAT simulations. The authors assumed that the scanned object was mounted on a holder that rotated over the full 360° and the scans were set to a sampling rate of 20.48 MHz. Results: The authors have obtained reconstructed images of the computerized phantoms by utilizing the new FBP algorithm. From the reconstructed image of the first phantom, one can see that this new approach allows not only obtaining a sharp image but also showing
Jones, Jasmine; Zhang, Rui; Heins, David; Castle, Katherine
In postmastectomy radiotherapy, an increasing number of patients have tissue expanders inserted subpectorally when receiving immediate breast reconstruction. These tissue expanders are composed of silicone and are inflated with saline through an internal metallic port; this serves the purpose of stretching the muscle and skin tissue over time, in order to house a permanent implant. The issue with administering radiation therapy in the presence of a tissue expander is that the port's magnetic core can potentially perturb the dose delivered to the Planning Target Volume, causing significant artifacts in CT images. Several studies have explored this problem, and suggest that density corrections must be accounted for in treatment planning. However, very few studies accurately calibrated commercial TP systems for the high density material used in the port, and no studies employed fusion imaging to yield a more accurate contour of the port in treatment planning. We compared depth dose values in the water phantom between measurement and TPS calculations, and we were able to overcome some of the inhomogeneities presented by the image artifact by fusing the KVCT and MVCT images of the tissue expander together, resulting in a more precise comparison of dose calculations at discrete locations. We expect this method to be pivotal in the quantification of dose distribution in the PTV. Research funded by the LS-AMP Award.
Task-based data-acquisition optimization for sparse image reconstruction systems
Chen, Yujia; Lou, Yang; Kupinski, Matthew A.; Anastasio, Mark A.
2017-03-01
Conventional wisdom dictates that imaging hardware should be optimized by use of an ideal observer (IO) that exploits full statistical knowledge of the class of objects to be imaged, without consideration of the reconstruction method to be employed. However, accurate and tractable models of the complete object statistics are often difficult to determine in practice. Moreover, in imaging systems that employ compressive sensing concepts, imaging hardware and (sparse) image reconstruction are innately coupled technologies. We have previously proposed a sparsity-driven ideal observer (SDIO) that can be employed to optimize hardware by use of a stochastic object model that describes object sparsity. The SDIO and sparse reconstruction method can therefore be "matched" in the sense that they both utilize the same statistical information regarding the class of objects to be imaged. To efficiently compute SDIO performance, the posterior distribution is estimated by use of computational tools developed recently for variational Bayesian inference. Subsequently, the SDIO test statistic can be computed semi-analytically. The advantages of employing the SDIO instead of a Hotelling observer are systematically demonstrated in case studies in which magnetic resonance imaging (MRI) data acquisition schemes are optimized for signal detection tasks.
Image reconstruction technique using projection data from neutron tomography system
Waleed Abd el Bar
2015-12-01
Neutron tomography is a very powerful technique for nondestructive evaluation of heavy industrial components, as well as for soft hydrogenous materials enclosed in heavy metals, which are usually difficult to image using X-rays. Due to the properties of the image acquisition system, the projection images are distorted by several artifacts, and these reduce the quality of the reconstruction. In order to eliminate these harmful effects, the projection images should be corrected before reconstruction. This paper gives a description of a filtered back projection (FBP) technique, which is used for reconstruction from projection data obtained from transmission measurements by a neutron tomography system. We demonstrate the use of the spatial Discrete Fourier Transform (DFT) and the 2D inverse DFT in the formulation of the method, and outline the theory of reconstruction of a 2D neutron image from a sequence of 1D projections taken at different angles between 0 and π in the MATLAB environment. Projections are generated by applying the Radon transform to the original image at different angles.
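The described pipeline (1D DFT of each projection, ramp filtering, inverse DFT, backprojection) can be sketched directly in numpy. The check below exploits the fact that a centered disk has the same analytic parallel projection, 2*sqrt(r^2 - s^2), at every angle; the grid size, angle count and radius are illustrative:

```python
import numpy as np

def fbp(sinogram, angles_deg):
    """Filtered back projection: sinogram[n_angles, n_det] -> square image.
    Ramp filter applied via the 1D DFT of each projection (minimal sketch,
    no apodization window)."""
    n_ang, n_det = sinogram.shape
    freqs = np.fft.fftfreq(n_det)
    filtered = np.fft.ifft(np.fft.fft(sinogram, axis=1) * np.abs(freqs), axis=1).real
    # Backproject onto an n_det x n_det grid centered on the detector midpoint.
    xs = np.arange(n_det) - n_det / 2
    X, Y = np.meshgrid(xs, xs)
    img = np.zeros((n_det, n_det))
    for a, proj in zip(np.deg2rad(angles_deg), filtered):
        t = X * np.cos(a) + Y * np.sin(a) + n_det / 2   # detector coordinate per pixel
        img += np.interp(t, np.arange(n_det), proj, left=0.0, right=0.0)
    return img * np.pi / len(angles_deg)                # approximate the angle integral

# Analytic sinogram of a unit disk of radius 15 px centered in a 64-px field.
n_det, r = 64, 15.0
s = np.arange(n_det) - n_det / 2
proj = 2.0 * np.sqrt(np.clip(r**2 - s**2, 0.0, None))
sino = np.tile(proj, (60, 1))                           # identical at all 60 angles
img = fbp(sino, np.arange(60) * 3.0)
```

The reconstruction should be close to 1 inside the disk and close to 0 outside, up to the discretization bias of the bare ramp filter.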
Ultra-Fast Image Reconstruction of Tomosynthesis Mammography Using GPU
Arefan D
2015-06-01
Digital Breast Tomosynthesis (DBT) is a technology that creates three-dimensional (3D) images of breast tissue. Tomosynthesis mammography detects lesions that are not detectable with other imaging systems. If image reconstruction time is on the order of seconds, Tomosynthesis systems can be used to perform Tomosynthesis-guided interventional procedures. This research was designed to study an ultra-fast image reconstruction technique for Tomosynthesis mammography systems using the Graphics Processing Unit (GPU). First, projections of Tomosynthesis mammography were simulated. In order to produce the Tomosynthesis projections, a 3D breast phantom was designed from empirical data, based on MRI data in its natural form. Projections were then created from the 3D breast phantom. The image reconstruction algorithm, based on FBP, was programmed in C++ in two versions: one using the central processing unit (CPU) and one using the Graphics Processing Unit (GPU). The image reconstruction time was measured for both implementations.
Reconstruction of CT images by the Bayes- back projection method
Haruyama, M; Takase, M; Tobita, H
2002-01-01
In the course of research on quantitative assay for non-destructive measurement of radioactive waste, we have developed a unique program based on Bayesian theory for reconstruction of transmission computed tomography (TCT) images. The reconstruction of cross-section images in CT technology usually employs the Filtered Back Projection method. The new image reconstruction program reported here is based on the Bayesian Back Projection method, and it iteratively improves the image at every step of the measurement. Namely, this method can promptly display a cross-section image corresponding to the angled projection data from each measurement. Hence, it is possible to observe an improved cross-section view reflecting each projection dataset in almost real time. From the basic theory of the Bayesian Back Projection method, it can be applied to first-, second-, and third-generation CT systems. This report deals with a reconstruction program of cross-section images in the CT of ...
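The abstract does not spell out the Bayesian update itself, so as a generic stand-in for projection-by-projection iterative refinement, the additive ART (Kaczmarz) update below likewise improves the image each time a new measurement row is folded in (toy dense system, not a real CT geometry):

```python
import numpy as np

def art_update(x, A_rows, y_rows, relax=1.0):
    """One ART (Kaczmarz) sweep: refine the image x with each measurement row
    in turn, mirroring per-projection iterative improvement."""
    for a, yi in zip(A_rows, y_rows):
        x = x + relax * (yi - a @ x) / (a @ a) * a   # project onto row's hyperplane
    return x

rng = np.random.default_rng(5)
A = rng.standard_normal((60, 30))    # toy system matrix (illustrative)
x_true = rng.standard_normal(30)
y = A @ x_true                       # consistent, noiseless measurements

x = np.zeros(30)
for _ in range(200):                 # repeated sweeps converge to the solution
    x = art_update(x, A, y)
```

Each row update is cheap and usable immediately, which is what enables displaying an improved cross-section after every new projection rather than waiting for a full dataset.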
Sparse representation for the ISAR image reconstruction
Hu, Mengqi; Montalbo, John; Li, Shuxia; Sun, Ligang; Qiao, Zhijun G.
2016-05-01
In this paper, a sparse representation of the data for an inverse synthetic aperture radar (ISAR) system is provided in two dimensions. The proposed sparse representation motivates the use of convex optimization to recover the image from far fewer samples than required by the Nyquist-Shannon sampling theorem, which increases the efficiency and decreases the cost of computation in radar imaging.
Super-resolution reconstruction of hyperspectral images
Elbakary, Mohamed; Alam, Mohammad S.
2007-04-01
Hyperspectral imagery is used for a wide variety of applications, including target detection, tracking, agricultural monitoring and natural resources exploration. The main reason for using hyperspectral imagery is that these images reveal spectral information about the scene that is not available in a single band. Unfortunately, many factors, such as sensor noise and atmospheric scattering, degrade the spatial quality of these images. Recently, many algorithms have been introduced in the literature to improve the resolution of hyperspectral images [7]. In this paper, we propose a new method to produce high-resolution bands from low-resolution bands that are strongly correlated with the corresponding high-resolution panchromatic image. The proposed method is based on using local correlation instead of global correlation to improve the estimated interpolation used to construct the high-resolution image. The utilization of local correlation significantly improved the resolution of the reconstructed images when compared to the corresponding results obtained using traditional algorithms. The local correlation is computed over predefined small windows across the low-resolution image. In addition, numerous experiments were conducted to investigate the effect of the chosen window size on image quality. Experimental results obtained using real-life hyperspectral imagery are presented to verify the effectiveness of the proposed algorithm.
Block Compressed Sensing of Images Using Adaptive Granular Reconstruction
Ran Li
2016-01-01
In the framework of block Compressed Sensing (CS), the reconstruction algorithm based on the Smoothed Projected Landweber (SPL) iteration can achieve better rate-distortion performance with a low computational complexity, especially when using Principal Components Analysis (PCA) to perform the adaptive hard-thresholding shrinkage. However, when learning the PCA matrix, neglecting the stationary local structural characteristics of the image degrades the reconstruction performance of the Landweber iteration. To solve this problem, this paper first uses Granular Computing (GrC) to decompose an image into several granules depending on the structural features of patches. Then, we perform PCA to learn the sparse representation basis corresponding to each granule. Finally, hard-thresholding shrinkage is employed to remove the noise in patches. The patches in a granule share a stationary local structural characteristic, so our method can effectively improve the performance of the hard-thresholding shrinkage. Experimental results indicate that images reconstructed by the proposed algorithm have better objective quality than those of several traditional algorithms. The edge and texture details in the reconstructed image are better preserved, which guarantees better visual quality. Besides, our method still has a low computational complexity of reconstruction.
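The PCA hard-thresholding shrinkage step amounts to: transform patches into the PCA basis learned from them, zero the small coefficients, and transform back. A sketch on a single synthetic "granule" of structurally identical noisy patches; the patch size, noise level and threshold are illustrative:

```python
import numpy as np

def pca_hard_threshold(patches, tau):
    """Denoise patches by hard thresholding in their own PCA basis.
    patches: (n_patches, patch_dim); tau: threshold (illustrative)."""
    mean = patches.mean(axis=0)
    centered = patches - mean
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)  # PCA basis rows
    coeffs = centered @ Vt.T
    coeffs[np.abs(coeffs) < tau] = 0.0      # hard-thresholding shrinkage
    return coeffs @ Vt + mean

rng = np.random.default_rng(6)
base = rng.standard_normal((1, 16))
clean = np.repeat(base, 50, axis=0)         # 50 identical patches (one granule)
noisy = clean + 0.05 * rng.standard_normal((50, 16))
denoised = pca_hard_threshold(noisy, tau=0.5)
```

Because all patches in the granule share the same structure, the signal concentrates in the mean while the noise spreads thinly over the PCA coefficients, so thresholding removes it effectively; this is exactly why grouping structurally similar patches first helps.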
Flame Reconstruction Using Synthetic Aperture Imaging
Murray, Preston; Tree, Dale; Truscott, Tadd
2011-01-01
Flames can be formed by burning methane (CH4). When oxygen is scarce, carbon particles nucleate into solid particles called soot. These particles emit photons, making the flame yellow. Later, methane is pre-mixed with air, forming a blue flame that burns more efficiently, producing less soot and light. Imaging flames and knowing their temperature are vital to maximizing efficiency and validating numerical models. Most temperature probes disrupt the flame and create differences leading to an inaccurate measurement of the flame temperature. We seek to image the flame in three dimensions using synthetic aperture imaging. This technique has already successfully measured velocity fields of a vortex ring [1]. Synthetic aperture imaging is a technique that views one scene from multiple cameras set at different angles, allowing some cameras to view objects that are obscured by others. As the resulting images are overlapped, different depths of the scene come into and out of focus, known as focal planes, similar to tomogr...
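Shift-and-average refocusing is the simplest form of the synthetic aperture idea: warp each camera's view so that the chosen focal plane aligns, then average, so in-plane content reinforces while off-plane content blurs out. The sketch below uses a pure-translation parallax model, a strong simplification of real multi-camera geometry:

```python
import numpy as np

def refocus(images, shifts):
    """Shift-and-average refocusing: roll each camera image by its parallax
    shift for the chosen focal plane, then average (translation-only model)."""
    acc = np.zeros_like(images[0], dtype=float)
    for img, (dy, dx) in zip(images, shifts):
        acc += np.roll(np.roll(img, dy, axis=0), dx, axis=1)
    return acc / len(images)

rng = np.random.default_rng(7)
scene = np.zeros((32, 32)); scene[16, 16] = 1.0   # point source on the focal plane
# Each "camera" sees the point displaced by its own parallax shift.
shifts = [(0, 0), (0, 3), (3, 0), (3, 3)]
views = [np.roll(np.roll(scene, -dy, axis=0), -dx, axis=1) for dy, dx in shifts]
refocused = refocus(views, shifts)
```

Sweeping the shift magnitudes moves the focal plane through the volume, which is how a stack of refocused slices approximates a 3D reconstruction of the flame.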
Azhar, N.; Saad, W. H. M.; Manap, N. A.; Saad, N. M.; Syafeeza, A. R.
2017-06-01
This study presents an approach to 3D image reconstruction using an autonomous robotic arm for the image acquisition process. A low-cost automated imaging platform is created using a pair of G15 servo motors connected in series to an Arduino UNO as the main microcontroller. Two sets of sequential images were obtained using different projection angles of the camera. A silhouette-based approach is used in this study for 3D reconstruction from the sequential images captured from several different angles of the object. In addition, an analysis of the effect of the number of sequential images on the accuracy of the 3D model reconstruction was carried out with a fixed projection angle of the camera. The factors affecting the 3D reconstruction are discussed, and the overall result of the analysis is summarized for the prototype imaging platform.
Murase, Kenya
2016-01-01
The purpose of this study was to present image reconstruction methods for magnetic particle imaging (MPI) with a field-free-line (FFL) encoding scheme and to propose the use of the maximum likelihood-expectation maximization (ML-EM) algorithm for improving the image quality of MPI. The feasibility of these methods was investigated by computer simulations, in which the projection data were generated by summing up the Fourier harmonics obtained from the MPI signals based on the Langevin function. Images were reconstructed from the generated projection data using the filtered backprojection (FBP) method and the ML-EM algorithm. The effects of the gradient of selection magnetic field (SMF), the strength of drive magnetic field (DMF), the diameter of magnetic nanoparticles (MNPs), and the number of projection data on the image quality of the reconstructed images were investigated. The spatial resolution of the reconstructed images became better with increasing gradient of SMF and with increasing diameter of MNPs u...
Dong, Yuefu; Mou, Zhifang; Huang, Zhenyu; Hu, Guanghong; Dong, Yinghai; Xu, Qingrong
2013-10-01
Three-dimensional reconstruction of human body from a living subject can be considered as the first step toward promoting virtual human project as a tool in clinical applications. This study proposes a detailed protocol for building subject-specific three-dimensional model of knee joint from a living subject. The computed tomography and magnetic resonance imaging image data of knee joint were used to reconstruct knee structures, including bones, skin, muscles, cartilages, menisci, and ligaments. They were fused to assemble the complete three-dimensional knee joint. The procedure was repeated three times with respect to three different methods of reference landmarks. The accuracy of image fusion in accordance with different landmarks was evaluated and compared with each other. The complete three-dimensional knee joint, which included 21 knee structures, was accurately developed. The choice of external or anatomical landmarks was not crucial to improve image fusion accuracy for three-dimensional reconstruction. Further work needs to be done to explore the value of the reconstructed three-dimensional knee joint for its biomechanics and kinematics.
Hofmann, Christian [Institute of Medical Physics, Friedrich-Alexander University (FAU), Erlangen 91052 (Germany); Sawall, Stefan; Knaup, Michael [Medical Physics in Radiology, German Cancer Research Center (DKFZ), Heidelberg 69120 (Germany); Kachelrieß, Marc, E-mail: marc.kachelriess@dkfz-heidelberg [Institute of Medical Physics, Friedrich-Alexander University (FAU), Erlangen 91052, Germany and Medical Physics in Radiology, German Cancer Research Center (DKFZ), Heidelberg 69120 (Germany)
2014-06-15
Purpose: Iterative image reconstruction gains more and more interest in clinical routine, as it promises to reduce image noise (and thereby patient dose), to reduce artifacts, or to improve spatial resolution. Among vendors and researchers, however, there is no consensus of how to best achieve these aims. The general approach is to incorporatea priori knowledge into iterative image reconstruction, for example, by adding additional constraints to the cost function, which penalize variations between neighboring voxels. However, this approach to regularization in general poses a resolution noise trade-off because the stronger the regularization, and thus the noise reduction, the stronger the loss of spatial resolution and thus loss of anatomical detail. The authors propose a method which tries to improve this trade-off. The proposed reconstruction algorithm is called alpha image reconstruction (AIR). One starts with generating basis images, which emphasize certain desired image properties, like high resolution or low noise. The AIR algorithm reconstructs voxel-specific weighting coefficients that are applied to combine the basis images. By combining the desired properties of each basis image, one can generate an image with lower noise and maintained high contrast resolution thus improving the resolution noise trade-off. Methods: All simulations and reconstructions are performed in native fan-beam geometry. A water phantom with resolution bar patterns and low contrast disks is simulated. A filtered backprojection (FBP) reconstruction with a Ram-Lak kernel is used as a reference reconstruction. The results of AIR are compared against the FBP results and against a penalized weighted least squares reconstruction which uses total variation as regularization. The simulations are based on the geometry of the Siemens Somatom Definition Flash scanner. To quantitatively assess image quality, the authors analyze line profiles through resolution patterns to define a contrast
Three dimensional reconstruction of conventional stereo optic disc image.
Kong, H J; Kim, S K; Seo, J M; Park, K H; Chung, H; Park, K S; Kim, H C
2004-01-01
Stereo disc photograph was analyzed and reconstructed as 3 dimensional contour image to evaluate the status of the optic nerve head for the early detection of glaucoma and the evaluation of the efficacy of treatment. Stepwise preprocessing was introduced to detect the edge of the optic nerve head and retinal vessels and reduce noises. Paired images were registered by power cepstrum method and zero-mean normalized cross-correlation. After Gaussian blurring, median filter application and disparity pair searching, depth information in the 3 dimensionally reconstructed image was calculated by the simple triangulation formula. Calculated depth maps were smoothed through cubic B-spline interpolation and retinal vessels were visualized more clearly by adding reference image. Resulted 3 dimensional contour image showed optic cups, retinal vessels and the notching of the neural rim of the optic disc clearly and intuitively, helping physicians in understanding and interpreting the stereo disc photograph.
AN IMAGE-BASED TECHNIQUE FOR 3D BUILDING RECONSTRUCTION USING MULTI-VIEW UAV IMAGES
F. Alidoost
2015-12-01
Full Text Available Nowadays, with the development of the urban areas, the automatic reconstruction of the buildings, as an important objects of the city complex structures, became a challenging topic in computer vision and photogrammetric researches. In this paper, the capability of multi-view Unmanned Aerial Vehicles (UAVs images is examined to provide a 3D model of complex building façades using an efficient image-based modelling workflow. The main steps of this work include: pose estimation, point cloud generation, and 3D modelling. After improving the initial values of interior and exterior parameters at first step, an efficient image matching technique such as Semi Global Matching (SGM is applied on UAV images and a dense point cloud is generated. Then, a mesh model of points is calculated using Delaunay 2.5D triangulation and refined to obtain an accurate model of building. Finally, a texture is assigned to mesh in order to create a realistic 3D model. The resulting model has provided enough details of building based on visual assessment.
Local fingerprint image reconstruction based on gabor filtering
Bakhtiari, Somayeh; Agaian, Sos S.; Jamshidi, Mo
2012-06-01
In this paper, we propose two solutions for fingerprint local image reconstruction based on Gabor filtering. Gabor filtering is a popular method for fingerprint image enhancement. However, the reliability of the information in the output image suffers, when the input image has a poor quality. This is the result of the spurious estimates of frequency and orientation by classical approaches, particularly in the scratch regions. In both techniques of this paper, the scratch marks are recognized initially using reliability image which is calculated using the gradient images. The first algorithm is based on an inpainting technique and the second method employs two different kernels for the scratch and the non-scratch parts of the image to calculate the gradient images. The simulation results show that both approaches allow the actual information of the image to be preserved while connecting discontinuities correctly by approximating the orientation matrix more genuinely.
Blockwise conjugate gradient methods for image reconstruction in volumetric CT.
Qiu, W; Titley-Peloquin, D; Soleimani, M
2012-11-01
Cone beam computed tomography (CBCT) enables volumetric image reconstruction from 2D projection data and plays an important role in image guided radiation therapy (IGRT). Filtered back projection is still the most frequently used algorithm in applications. The algorithm discretizes the scanning process (forward projection) into a system of linear equations, which must then be solved to recover images from measured projection data. The conjugate gradients (CG) algorithm and its variants can be used to solve (possibly regularized) linear systems of equations Ax=b and linear least squares problems minx∥b-Ax∥2, especially when the matrix A is very large and sparse. Their applications can be found in a general CT context, but in tomography problems (e.g. CBCT reconstruction) they have not widely been used. Hence, CBCT reconstruction using the CG-type algorithm LSQR was implemented and studied in this paper. In CBCT reconstruction, the main computational challenge is that the matrix A usually is very large, and storing it in full requires an amount of memory well beyond the reach of commodity computers. Because of these memory capacity constraints, only a small fraction of the weighting matrix A is typically used, leading to a poor reconstruction. In this paper, to overcome this difficulty, the matrix A is partitioned and stored blockwise, and blockwise matrix-vector multiplications are implemented within LSQR. This implementation allows us to use the full weighting matrix A for CBCT reconstruction without further enhancing computer standards. Tikhonov regularization can also be implemented in this fashion, and can produce significant improvement in the reconstructed images.
Statistical iterative reconstruction to improve image quality for digital breast tomosynthesis
Xu, Shiyu, E-mail: shiyu.xu@gmail.com; Chen, Ying, E-mail: adachen@siu.edu [Department of Electrical and Computer Engineering, Southern Illinois University Carbondale, Carbondale, Illinois 62901 (United States); Lu, Jianping; Zhou, Otto [Department of Physics and Astronomy and Curriculum in Applied Sciences and Engineering, University of North Carolina Chapel Hill, Chapel Hill, North Carolina 27599 (United States)
2015-09-15
Purpose: Digital breast tomosynthesis (DBT) is a novel modality with the potential to improve early detection of breast cancer by providing three-dimensional (3D) imaging with a low radiation dose. 3D image reconstruction presents some challenges: cone-beam and flat-panel geometry, and highly incomplete sampling. A promising means to overcome these challenges is statistical iterative reconstruction (IR), since it provides the flexibility of accurate physics modeling and a general description of system geometry. The authors’ goal was to develop techniques for applying statistical IR to tomosynthesis imaging data. Methods: These techniques include the following: a physics model with a local voxel-pair based prior with flexible parameters to fine-tune image quality; a precomputed parameter λ in the prior, to remove data dependence and to achieve a uniform resolution property; an effective ray-driven technique to compute the forward and backprojection; and an oversampled, ray-driven method to perform high resolution reconstruction with a practical region-of-interest technique. To assess the performance of these techniques, the authors acquired phantom data on the stationary DBT prototype system. To solve the estimation problem, the authors proposed an optimization-transfer based algorithm framework that potentially allows fewer iterations to achieve an acceptably converged reconstruction. Results: IR improved the detectability of low-contrast and small microcalcifications, reduced cross-plane artifacts, improved spatial resolution, and lowered noise in reconstructed images. Conclusions: Although the computational load remains a significant challenge for practical development, the superior image quality provided by statistical IR, combined with advancing computational techniques, may bring benefits to screening, diagnostics, and intraoperative imaging in clinical applications.
A methodology to event reconstruction from trace images.
Milliet, Quentin; Delémont, Olivier; Sapin, Eric; Margot, Pierre
2015-03-01
The widespread use of digital imaging devices for surveillance (CCTV) and entertainment (e.g., mobile phones, compact cameras) has increased the number of images recorded and opportunities to consider the images as traces or documentation of criminal activity. The forensic science literature focuses almost exclusively on technical issues and evidence assessment [1]. Earlier steps in the investigation phase have been neglected and must be considered. This article is the first comprehensive description of a methodology to event reconstruction using images. This formal methodology was conceptualised from practical experiences and applied to different contexts and case studies to test and refine it. Based on this practical analysis, we propose a systematic approach that includes a preliminary analysis followed by four main steps. These steps form a sequence for which the results from each step rely on the previous step. However, the methodology is not linear, but it is a cyclic, iterative progression for obtaining knowledge about an event. The preliminary analysis is a pre-evaluation phase, wherein potential relevance of images is assessed. In the first step, images are detected and collected as pertinent trace material; the second step involves organising and assessing their quality and informative potential. The third step includes reconstruction using clues about space, time and actions. Finally, in the fourth step, the images are evaluated and selected as evidence. These steps are described and illustrated using practical examples. The paper outlines how images elicit information about persons, objects, space, time and actions throughout the investigation process to reconstruct an event step by step. We emphasise the hypothetico-deductive reasoning framework, which demonstrates the contribution of images to generating, refining or eliminating propositions or hypotheses. This methodology provides a sound basis for extending image use as evidence and, more generally
Magnetic resonance imaging with nonlinear gradient fields signal encoding and image reconstruction
Schultz, Gerrit
2013-01-01
Within the past few decades magnetic resonance imaging has become one of the most important imaging modalities in medicine. For a reliable diagnosis of pathologies further technological improvements are of primary importance. This text deals with a radically new approach of image encoding: The fundamental principle of gradient linearity is challenged by investigating the possibilities of acquiring anatomical images with the help of nonlinear gradient fields. Besides a thorough theoretical analysis with a focus on signal encoding and image reconstruction, initial hardware implementations are tested using phantom as well as in-vivo measurements. Several applications are presented that give an impression about the implications that this technological advancement may have for future medical diagnostics. Contents n Image Reconstruction in MRI n Nonlinear Gradient Encoding: PatLoc Imaging n Presentation of Initial Hardware Designs n Basics of Signal Encoding and Image Reconstruction in PatLoc Imaging n ...
Parallel Image Reconstruction for New Vacuum Solar Telescope
Li, Xue-Bao; Wang, Feng; Xiang, Yong Yuan; Zheng, Yan Fang; Liu, Ying Bo; Deng, Hui; Ji, Kai Fan
2014-04-01
Many advanced ground-based solar telescopes improve the spatial resolution of observation images using an adaptive optics (AO) system. As any AO correction remains only partial, it is necessary to use post-processing image reconstruction techniques such as speckle masking or shift-and-add (SAA) to reconstruct a high-spatial-resolution image from atmospherically degraded solar images. In the New Vacuum Solar Telescope (NVST), the spatial resolution in solar images is improved by frame selection and SAA. In order to overcome the burden of massive speckle data processing, we investigate the possibility of using the speckle reconstruction program in a real-time application at the telescope site. The code has been written in the C programming language and optimized for parallel processing in a multi-processor environment. We analyze the scalability of the code to identify possible bottlenecks, and we conclude that the presented code is capable of being run in real-time reconstruction applications at NVST and future large aperture solar telescopes if care is taken that the multi-processor environment has low latencies between the computation nodes.
Gadgetron: an open source framework for medical image reconstruction.
Hansen, Michael Schacht; Sørensen, Thomas Sangild
2013-06-01
This work presents a new open source framework for medical image reconstruction called the "Gadgetron." The framework implements a flexible system for creating streaming data processing pipelines where data pass through a series of modules or "Gadgets" from raw data to reconstructed images. The data processing pipeline is configured dynamically at run-time based on an extensible markup language configuration description. The framework promotes reuse and sharing of reconstruction modules and new Gadgets can be added to the Gadgetron framework through a plugin-like architecture without recompiling the basic framework infrastructure. Gadgets are typically implemented in C/C++, but the framework includes wrapper Gadgets that allow the user to implement new modules in the Python scripting language for rapid prototyping. In addition to the streaming framework infrastructure, the Gadgetron comes with a set of dedicated toolboxes in shared libraries for medical image reconstruction. This includes generic toolboxes for data-parallel (e.g., GPU-based) execution of compute-intensive components. The basic framework architecture is independent of medical imaging modality, but this article focuses on its application to Cartesian and non-Cartesian parallel magnetic resonance imaging.
Cryo-EM Structure Determination Using Segmented Helical Image Reconstruction.
Fromm, S A; Sachse, C
2016-01-01
Treating helices as single-particle-like segments followed by helical image reconstruction has become the method of choice for high-resolution structure determination of well-ordered helical viruses as well as flexible filaments. In this review, we will illustrate how the combination of latest hardware developments with optimized image processing routines have led to a series of near-atomic resolution structures of helical assemblies. Originally, the treatment of helices as a sequence of segments followed by Fourier-Bessel reconstruction revealed the potential to determine near-atomic resolution structures from helical specimens. In the meantime, real-space image processing of helices in a stack of single particles was developed and enabled the structure determination of specimens that resisted classical Fourier helical reconstruction and also facilitated high-resolution structure determination. Despite the progress in real-space analysis, the combination of Fourier and real-space processing is still commonly used to better estimate the symmetry parameters as the imposition of the correct helical symmetry is essential for high-resolution structure determination. Recent hardware advancement by the introduction of direct electron detectors has significantly enhanced the image quality and together with improved image processing procedures has made segmented helical reconstruction a very productive cryo-EM structure determination method.
Ukwatta, Eranga, E-mail: eukwatt1@jhu.edu; Arevalo, Hermenegild; Pashakhanloo, Farhad; Prakosa, Adityo; Vadakkumpadan, Fijoy [Institute for Computational Medicine, Johns Hopkins University, Baltimore, Maryland 21205 and Department of Biomedical Engineering, Johns Hopkins University, Baltimore, Maryland 21205 (United States); Rajchl, Martin [Department of Computing, Imperial College London, London SW7 2AZ (United Kingdom); White, James [Stephenson Cardiovascular MR Centre, University of Calgary, Calgary, Alberta T2N 2T9 (Canada); Herzka, Daniel A.; McVeigh, Elliot [Department of Biomedical Engineering, Johns Hopkins University, Baltimore, Maryland 21205 (United States); Lardo, Albert C. [Department of Biomedical Engineering, Johns Hopkins University, Baltimore, Maryland 21205 and Division of Cardiology, Johns Hopkins Institute of Medicine, Baltimore, Maryland 21224 (United States); Trayanova, Natalia A. [Institute for Computational Medicine, Johns Hopkins University, Baltimore, Maryland 21205 (United States); Department of Biomedical Engineering, Johns Hopkins University, Baltimore, Maryland 21205 (United States); Department of Biomedical Engineering, Johns Hopkins Institute of Medicine, Baltimore, Maryland 21205 (United States)
2015-08-15
Purpose: Accurate three-dimensional (3D) reconstruction of myocardial infarct geometry is crucial to patient-specific modeling of the heart aimed at providing therapeutic guidance in ischemic cardiomyopathy. However, myocardial infarct imaging is clinically performed using two-dimensional (2D) late-gadolinium enhanced cardiac magnetic resonance (LGE-CMR) techniques, and a method to build accurate 3D infarct reconstructions from the 2D LGE-CMR images has been lacking. The purpose of this study was to address this need. Methods: The authors developed a novel methodology to reconstruct 3D infarct geometry from segmented low-resolution (Lo-res) clinical LGE-CMR images. Their methodology employed the so-called logarithm of odds (LogOdds) function to implicitly represent the shape of the infarct in segmented image slices as LogOdds maps. These 2D maps were then interpolated into a 3D image, and the result transformed via the inverse of LogOdds to a binary image representing the 3D infarct geometry. To assess the efficacy of this method, the authors utilized 39 high-resolution (Hi-res) LGE-CMR images, including 36 in vivo acquisitions of human subjects with prior myocardial infarction and 3 ex vivo scans of canine hearts following coronary ligation to induce infarction. The infarct was manually segmented by trained experts in each slice of the Hi-res images, and the segmented data were downsampled to typical clinical resolution. The proposed method was then used to reconstruct 3D infarct geometry from the downsampled images, and the resulting reconstructions were compared with the manually segmented data. The method was extensively evaluated using metrics based on geometry as well as results of electrophysiological simulations of cardiac sinus rhythm and ventricular tachycardia in individual hearts. Several alternative reconstruction techniques were also implemented and compared with the proposed method. Results: The accuracy of the LogOdds method in reconstructing 3D
Dynamic PET Image reconstruction for parametric imaging using the HYPR kernel method
Spencer, Benjamin; Qi, Jinyi; Badawi, Ramsey D.; Wang, Guobao
2017-03-01
Dynamic PET image reconstruction is a challenging problem because of the ill-conditioned nature of PET and the lowcounting statistics resulted from short time-frames in dynamic imaging. The kernel method for image reconstruction has been developed to improve image reconstruction of low-count PET data by incorporating prior information derived from high-count composite data. In contrast to most of the existing regularization-based methods, the kernel method embeds image prior information in the forward projection model and does not require an explicit regularization term in the reconstruction formula. Inspired by the existing highly constrained back-projection (HYPR) algorithm for dynamic PET image denoising, we propose in this work a new type of kernel that is simpler to implement and further improves the kernel-based dynamic PET image reconstruction. Our evaluation study using a physical phantom scan with synthetic FDG tracer kinetics has demonstrated that the new HYPR kernel-based reconstruction can achieve a better region-of-interest (ROI) bias versus standard deviation trade-off for dynamic PET parametric imaging than the post-reconstruction HYPR denoising method and the previously used nonlocal-means kernel.
Generalized Fourier slice theorem for cone-beam image reconstruction.
Zhao, Shuang-Ren; Jiang, Dazong; Yang, Kevin; Yang, Kang
2015-01-01
The cone-beam reconstruction theory has been proposed by Kirillov in 1961, Tuy in 1983, Feldkamp in 1984, Smith in 1985, Pierre Grangeat in 1990. The Fourier slice theorem is proposed by Bracewell 1956, which leads to the Fourier image reconstruction method for parallel-beam geometry. The Fourier slice theorem is extended to fan-beam geometry by Zhao in 1993 and 1995. By combining the above mentioned cone-beam image reconstruction theory and the above mentioned Fourier slice theory of fan-beam geometry, the Fourier slice theorem in cone-beam geometry is proposed by Zhao 1995 in short conference publication. This article offers the details of the derivation and implementation of this Fourier slice theorem for cone-beam geometry. Especially the problem of the reconstruction from Fourier domain has been overcome, which is that the value of in the origin of Fourier space is 0/0. The 0/0 type of limit is proper handled. As examples, the implementation results for the single circle and two perpendicular circle source orbits are shown. In the cone-beam reconstruction if a interpolation process is considered, the number of the calculations for the generalized Fourier slice theorem algorithm is O(N^4), which is close to the filtered back-projection method, here N is the image size of 1-dimension. However the interpolation process can be avoid, in that case the number of the calculations is O(N5).
The influence of image reconstruction algorithms on linear thorax EIT image analysis of ventilation.
Zhao, Zhanqi; Frerichs, Inéz; Pulletz, Sven; Müller-Lisse, Ullrich; Möller, Knut
2014-06-01
Analysis methods of electrical impedance tomography (EIT) images based on different reconstruction algorithms were examined. EIT measurements were performed on eight mechanically ventilated patients with acute respiratory distress syndrome. A maneuver with step increase of airway pressure was performed. EIT raw data were reconstructed offline with (1) filtered back-projection (BP); (2) the Dräger algorithm based on linearized Newton-Raphson (DR); (3) the GREIT (Graz consensus reconstruction algorithm for EIT) reconstruction algorithm with a circular forward model (GR(C)) and (4) GREIT with individual thorax geometry (GR(T)). Individual thorax contours were automatically determined from the routine computed tomography images. Five indices were calculated on the resulting EIT images respectively: (a) the ratio between tidal and deep inflation impedance changes; (b) tidal impedance changes in the right and left lungs; (c) center of gravity; (d) the global inhomogeneity index and (e) ventilation delay at mid-dorsal regions. No significant differences were found in all examined indices among the four reconstruction algorithms (p > 0.2, Kruskal-Wallis test). The examined algorithms used for EIT image reconstruction do not influence the selected indices derived from the EIT image analysis. Indices that validated for images with one reconstruction algorithm are also valid for other reconstruction algorithms.
Principles of MR image formation and reconstruction.
Duerk, J L
1999-11-01
This article describes a number of concepts that provide insights into the process of MR imaging. The use of shaped, fixed-bandwidth RF pulses and magnetic field gradients is described to provide an understanding of the methods used for slice selection. Variations in the slice-excitation profile are shown as a function of the RF pulse shape used, the truncation method used, and the tip angle. It should be remembered that although the goal is to obtain uniform excitation across the slice, this goal is never achieved in practice, thus necessitating the use of slice gaps in some cases. Excitation, refocusing, and inversion pulses are described. Excitation pulses nutate the spins from the longitudinal axis into the transverse plane, where their magnetization can be detected. Refocusing pulses are used to flip the magnetization through 180 degrees once it is in the transverse plane, so that the influence of magnetic field inhomogeneities is eliminated. Inversion pulses are used to flip the magnetization from the +z to the -z direction in invesrsion-recovery sequences. Radiofrequency pulses can also be used to eliminate either fat or water protons from the images because of the small differences in resonant frequency between these two types of protons. Selective methods based on chemical shift and binomial methods are described. Once the desired magnetization has been tipped into the transverse plane by the slice-selection process, two imaging axes remain to be spatially encoded. One axis is easily encoded by the application of a second magnetic field gradient that establishes a one-to-one mapping between position and frequency during the time that the signal is converted from analog to digital sampling. This frequency-encoding gradient is used in combination with the Fourier transform to determine the location of the precessing magnetization. The second image axis is encoded by a process known as phase encoding. The collected data can be described as the 2D Fourier
Efficient iterative image reconstruction algorithm for dedicated breast CT
Antropova, Natalia; Sanchez, Adrian; Reiser, Ingrid S.; Sidky, Emil Y.; Boone, John; Pan, Xiaochuan
2016-03-01
Dedicated breast computed tomography (bCT) is currently being studied as a potential screening method for breast cancer. The X-ray exposure is set low to achieve an average glandular dose comparable to that of mammography, yielding projection data that contains high levels of noise. Iterative image reconstruction (IIR) algorithms may be well-suited for the system since they potentially reduce the effects of noise in the reconstructed images. However, IIR outcomes can be difficult to control since the algorithm parameters do not directly correspond to the image properties. Also, IIR algorithms are computationally demanding and have optimal parameter settings that depend on the size and shape of the breast and positioning of the patient. In this work, we design an efficient IIR algorithm with meaningful parameter specifications and that can be used on a large, diverse sample of bCT cases. The flexibility and efficiency of this method comes from having the final image produced by a linear combination of two separately reconstructed images - one containing gray level information and the other with enhanced high frequency components. Both of the images result from few iterations of separate IIR algorithms. The proposed algorithm depends on two parameters both of which have a well-defined impact on image quality. The algorithm is applied to numerous bCT cases from a dedicated bCT prototype system developed at University of California, Davis.
Whole Mouse Brain Image Reconstruction from Serial Coronal Sections Using FIJI (ImageJ).
Paletzki, Ronald; Gerfen, Charles R
2015-10-01
Whole-brain reconstruction of the mouse enables comprehensive analysis of the distribution of neurochemical markers, the distribution of anterogradely labeled axonal projections or retrogradely labeled neurons projecting to a specific brain site, or the distribution of neurons displaying activity-related markers in behavioral paradigms. This unit describes a method to produce whole-brain reconstruction image sets from coronal brain sections with up to four fluorescent markers using the freely available image-processing program FIJI (ImageJ).
Photoacoustic image reconstruction: material detection and acoustical heterogeneities
Schoeder, S.; Kronbichler, M.; Wall, W. A.
2017-05-01
The correct consideration of acoustical heterogeneities in the context of photoacoustic image reconstruction is an open topic. In this publication, a physically motivated algorithm is proposed that reconstructs the optical absorption and diffusion coefficients using a gradient-based scheme. The simultaneous reconstruction of both material properties allows for subsequent material identification and a corresponding update of the acoustical material properties. The algorithm is general in terms of illumination scenarios, detection geometries and applications. No prior knowledge of material distributions needs to be provided; only the expected materials have to be specified. Numerical experiments are performed to gain insight into the complex inverse problem and to validate the proposed method. Results show that acoustical heterogeneities are correctly detected, improving the optical images.
Joint Image Reconstruction and Segmentation Using the Potts Model
Storath, Martin; Frikel, Jürgen; Unser, Michael
2014-01-01
We propose a new algorithmic approach to the non-smooth and non-convex Potts problem (also called the piecewise-constant Mumford-Shah problem) for inverse imaging problems. We derive a suitable splitting into specific subproblems that can all be solved efficiently. Our method does not require a priori knowledge of the gray levels or of the number of segments of the reconstruction. Furthermore, it avoids anisotropic artifacts such as geometric staircasing. We demonstrate the suitability of our method for joint image reconstruction and segmentation from limited data in X-ray and photoacoustic tomography. For instance, our method is able to reconstruct the Shepp-Logan phantom from $7$ angular views only. We demonstrate the practical applicability in an experiment with real PET data.
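To make the Potts model concrete: in one dimension the piecewise-constant Mumford-Shah problem can be solved exactly by dynamic programming. The sketch below illustrates the model the abstract refers to (penalized number of jumps plus squared data fit); it is not the authors' splitting method for inverse problems, which handles a forward operator as well.

```python
import numpy as np

def potts_1d(y, gamma):
    """Exact 1D Potts segmentation: minimize gamma*(#jumps) + ||u - y||^2
    over piecewise-constant u, via O(n^2) dynamic programming."""
    n = len(y)
    B = np.empty(n + 1)
    B[0] = -gamma                     # first segment pays no jump penalty
    jump = np.zeros(n + 1, dtype=int)
    for r in range(1, n + 1):
        best, arg = np.inf, 0
        s1 = s2 = 0.0                 # running sums over candidate segment
        for l in range(r, 0, -1):
            s1 += y[l - 1]
            s2 += y[l - 1] ** 2
            m = r - l + 1
            d = s2 - s1 * s1 / m      # squared error of segment [l, r]
            val = B[l - 1] + gamma + d
            if val < best:
                best, arg = val, l
        B[r], jump[r] = best, arg
    # backtrack: fill each optimal segment with its mean
    u = np.empty(n)
    r = n
    while r > 0:
        l = jump[r]
        u[l - 1:r] = y[l - 1:r].mean()
        r = l - 1
    return u

# step signal with mild oscillation: should segment into exactly two pieces
y = np.concatenate([np.full(20, 1.0), np.full(20, 3.0)]) + 0.05 * np.sin(np.arange(40))
u = potts_1d(y, gamma=1.0)
```

Note that the jump penalty gamma, not any preset number of segments or gray levels, determines the segmentation, which mirrors the "no a priori knowledge" property claimed above.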
Scanning plasmonic microscopy by image reconstruction from the Fourier space
Mollet, O; Drezet, A
2014-01-01
We demonstrate a simple scheme for high-resolution imaging of nanoplasmonic structures that removes most of the resolution-limiting allowed light usually transmitted to the far field. This is achieved by implementing a Fourier lens in a near-field scanning optical microscope (NSOM) operating in the leakage-radiation microscopy (LRM) mode. The method consists of reconstructing optical images solely from the plasmonic `forbidden' light collected in the Fourier space. It is demonstrated using a point-like nanodiamond-based tip that illuminates a thin gold film patterned with a sub-wavelength annular slit. The reconstructed image of the slit shows a spatial resolution enhanced by a factor $\simeq 4$ compared to NSOM images acquired directly in the real space.
Progress Update on Iterative Reconstruction of Neutron Tomographic Images
Hausladen, Paul [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Gregor, Jens [Univ. of Tennessee, Knoxville, TN (United States)
2016-09-15
This report satisfies the fiscal year 2016 technical deliverable to report on progress in the development of fast iterative reconstruction algorithms for project OR16-3DTomography-PD2Jb, "3D Tomography and Image Processing Using Fast Neutrons." This project has two overall goals. The first is to extend associated-particle fast-neutron transmission and, particularly, induced-reaction tomographic imaging algorithms to three dimensions. The second is to automatically segment the resultant tomographic images into constituent parts, and then extract information about the parts, such as the class of shape and potentially shape parameters. This report addresses the component of the project concerned with three-dimensional (3D) image reconstruction.
Ortuno, J E; Kontaxakis, G; Rubio, J L; Santos, A [Departamento de Ingenieria Electronica (DIE), Universidad Politecnica de Madrid, Ciudad Universitaria s/n, 28040 Madrid (Spain); Guerra, P [Networking Research Center on Bioengineering, Biomaterials and Nanomedicine (CIBER-BBN), Madrid (Spain)], E-mail: juanen@die.upm.es
2010-04-07
A fully 3D iterative image reconstruction algorithm has been developed for high-resolution PET cameras composed of pixelated scintillator crystal arrays and rotating planar detectors, based on the ordered subsets approach. The associated system matrix is precalculated with Monte Carlo methods that incorporate physical effects not included in analytical models, such as positron range effects and interaction of the incident gammas with the scintillator material. Custom Monte Carlo methodologies have been developed and optimized for modelling of system matrices for fast iterative image reconstruction adapted to specific scanner geometries, without redundant calculations. According to the methodology proposed here, only one-eighth of the voxels within two central transaxial slices need to be modelled in detail. The rest of the system matrix elements can be obtained with the aid of axial symmetries and redundancies, as well as in-plane symmetries within transaxial slices. Sparse matrix techniques for the non-zero system matrix elements are employed, allowing for fast execution of the image reconstruction process. This 3D image reconstruction scheme has been compared in terms of image quality to a 2D fast implementation of the OSEM algorithm combined with Fourier rebinning approaches. This work confirms the superiority of fully 3D OSEM in terms of spatial resolution, contrast recovery and noise reduction as compared to conventional 2D approaches based on rebinning schemes. At the same time it demonstrates that fully 3D methodologies can be efficiently applied to the image reconstruction problem for high-resolution rotational PET cameras by applying accurate pre-calculated system models and taking advantage of the system's symmetries.
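The OSEM approach mentioned above is built on the MLEM multiplicative update; OSEM simply applies the same update using one subset of the projection rows per sub-iteration. A minimal NumPy sketch of plain MLEM on a toy dense system matrix (no Monte Carlo system model, symmetries, or sparse storage) might look like:

```python
import numpy as np

def mlem(A, y, n_iter=1000):
    """Maximum-likelihood EM for emission tomography:
    x <- x / (A^T 1) * A^T (y / (A x)).
    OSEM applies this same update subset-by-subset over the rows of A."""
    x = np.ones(A.shape[1])                # positive initial image
    sens = A.T @ np.ones(A.shape[0])       # sensitivity image A^T 1
    for _ in range(n_iter):
        proj = A @ x                       # forward projection
        proj[proj == 0] = 1e-12            # guard against divide-by-zero
        x = x / sens * (A.T @ (y / proj))  # multiplicative update
    return x

rng = np.random.default_rng(1)
A = rng.random((40, 10))                   # toy system matrix (40 bins, 10 voxels)
x_true = rng.random(10) + 0.1
y = A @ x_true                             # noise-free projection data
x_hat = mlem(A, y)
```

The multiplicative form preserves non-negativity automatically, which is one reason this update family dominates PET reconstruction; the precalculated sparse system matrix in the paper replaces the dense `A @ x` products here.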
Single Image Super Resolution via Sparse Reconstruction
Kruithof, M.C.; Eekeren, A.W.M. van; Dijk, J.; Schutte, K.
2012-01-01
High resolution sensors are required for recognition purposes. Low resolution sensors, however, are still widely used. Software can be used to increase the resolution of such sensors. One way of increasing the resolution of the images produced is using multi-frame super resolution algorithms. Limita
Computationally efficient algorithm for multifocus image reconstruction
Eltoukhy, Helmy A.; Kavusi, Sam
2003-05-01
A method for synthesizing enhanced depth of field digital still camera pictures using multiple differently focused images is presented. This technique exploits only spatial image gradients in the initial decision process. The image gradient as a focus measure has been shown to be experimentally valid and theoretically sound under weak assumptions with respect to unimodality and monotonicity. Subsequent majority filtering corroborates decisions with those of neighboring pixels, while the use of soft decisions enables smooth transitions across region boundaries. Furthermore, these last two steps add algorithmic robustness for coping with both sensor noise and optics-related effects, such as misregistration or optical flow, and minor intensity fluctuations. The dependence of these optical effects on several optical parameters is analyzed and potential remedies that can allay their impact with regard to the technique's limitations are discussed. Several examples of image synthesis using the algorithm are presented. Finally, leveraging the increasing functionality and emerging processing capabilities of digital still cameras, the method is shown to entail modest hardware requirements and is implementable using a parallel or general purpose processor.
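The gradient-based focus decision at the heart of this technique can be sketched as follows. This is a NumPy-only illustration: the majority filtering and soft-decision steps of the paper are omitted, and the toy "defocus" is a crude amplitude scaling rather than a true optical blur.

```python
import numpy as np

def focus_measure(img):
    # spatial gradient magnitude as the per-pixel focus measure
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy)

def fuse_multifocus(images):
    """Pick, per pixel, the source image with the strongest local gradient
    (hard decision only; the paper refines this with majority filtering
    and soft decisions across region boundaries)."""
    focus = np.stack([focus_measure(im) for im in images])
    choice = np.argmax(focus, axis=0)           # per-pixel decision map
    rows, cols = np.indices(choice.shape)
    return np.stack(images)[choice, rows, cols]

# toy example: left half is sharp in image A, right half is sharp in image B
x = np.linspace(0, 4 * np.pi, 64)
sharp = np.sin(np.outer(x, x / 4))
blurred = 0.25 * sharp                          # crude stand-in for defocus
cols = np.arange(64)[None, :]
img_a = np.where(cols < 32, sharp, blurred)
img_b = np.where(cols < 32, blurred, sharp)
fused = fuse_multifocus([img_a, img_b])
```

Most of the fused image matches the sharp content; the residual errors concentrate near the seam, which is exactly where the paper's majority filtering and soft transitions earn their keep.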
Image Reconstruction Using Pixel Wise Support Vector Machine SVM Classification.
Mohammad Mahmudul Alam Mia
2015-02-01
Image reconstruction using the support vector machine (SVM) has become an important part of image processing. The exactness of a supervised image classification is a function of the training data used in its generation. In this paper, we study the support vector machine for classification and reconstruct an image using an SVM. First, the values of randomly selected pixels are used to form the SVM training set. Then the SVM classifier is trained using those pixel values. Finally, the image is reconstructed after cross-validation with the trained SVM classifier. Matlab results show that training with a support vector machine produces better results with great computational efficiency; only a few minutes of runtime are necessary for training. Support vector machines have high classification accuracy and much faster convergence. Overall classification accuracy is 99.5%. From our experiment it can be seen that classification accuracy depends mostly on the choice of the kernel function, and good estimation of the kernel parameters is critical for a given image.
Undersampled Hyperspectral Image Reconstruction Based on Surfacelet Transform
Lei Liu
2015-01-01
Hyperspectral imaging is a crucial technique for military and environmental monitoring. However, limited equipment hardware resources severely constrain the transmission and storage of the huge amounts of data in hyperspectral images. This limitation can potentially be addressed by compressive sensing (CS), which allows images to be reconstructed from undersampled measurements with low error. Sparsity and incoherence are two essential requirements for CS. In this paper, we introduce the surfacelet, a directional multiresolution transform for 3D data, to sparsify hyperspectral images. In addition, Gram-Schmidt orthogonalization is used in the CS random encoding matrix; two-dimensional and three-dimensional orthogonal CS random encoding matrices and a patch-based CS encoding scheme are designed. The proposed surfacelet-based hyperspectral image reconstruction problem is solved by a fast iterative shrinkage-thresholding algorithm. Experiments demonstrate that reconstruction of spectral lines and spatial images is significantly improved using the proposed method compared with conventional three-dimensional wavelets, and that increasing the randomness of the encoding matrix can further improve the quality of the hyperspectral data. The patch-based CS encoding strategy can be used to handle large data volumes, because data in different patches can be sampled independently.
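The fast iterative shrinkage-thresholding algorithm (FISTA) mentioned above solves the l1-regularized least-squares problem at the core of CS recovery. A generic dense-matrix sketch follows; it recovers a sparse vector from random Gaussian measurements and is not the paper's surfacelet-domain, patch-based implementation.

```python
import numpy as np

def soft_threshold(v, t):
    # proximal operator of t * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def fista(A, y, lam, n_iter=1000):
    """FISTA for min_x 0.5*||Ax - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    z = x.copy()
    t = 1.0
    for _ in range(n_iter):
        grad = A.T @ (A @ z - y)
        x_new = soft_threshold(z - grad / L, lam / L)
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
        z = x_new + (t - 1) / t_new * (x_new - x)   # momentum step
        x, t = x_new, t_new
    return x

rng = np.random.default_rng(0)
n, m, k = 128, 48, 5                     # signal length, measurements, sparsity
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x_true                           # undersampled, noise-free measurements
x_hat = fista(A, y, lam=0.02)
```

With m = 48 measurements of a 5-sparse length-128 signal, the l1 solution recovers the signal up to a small shrinkage bias, illustrating why sparsity plus incoherence makes undersampled reconstruction possible.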
Phase Closure Image Reconstruction for Future VLTI Instrumentation
Filho, Mercedes E; Garcia, Paulo; Duvert, Gilles; Duchene, Gaspard; Thiebaut, Eric; Young, John; Absil, Olivier; Berger, Jean-Phillipe; Beckert, Thomas; Hoenig, Sebastian; Schertl, Dieter; Weigelt, Gerd; Testi, Leonardo; Tatuli, Eric; Borkowski, Virginie; de Becker, Michael; Surdej, Jean; Aringer, Bernard; Hron, Joseph; Lebzelter, Thomas; Chiavassa, Andrea; Corradi, Romano; Harries, Tim
2008-01-01
Classically, optical and near-infrared interferometry have relied on closure phase techniques to produce images. Such techniques allow us to achieve modest dynamic ranges. In order to test the feasibility of next-generation optical interferometers in the context of the VLTI-spectro-imager (VSI), we have embarked on a study of image reconstruction and analysis. Our main aim was to test the influence of the number of telescopes, observing nights and distribution of the visibility points on the quality of the reconstructed images. Our results show that observations using six Auxiliary Telescopes (ATs) during one complete night yield the best results in general; the number of telescopes is the determining factor in the image reconstruction outcome and is critical in most science cases. In terms of imaging capabilities, an optical, six-telescope VLTI-type configuration with a ~200 meter baseline will achieve 4 mas spatial resolution, which is comparable to ALMA and almost 50 times better than JWST will achieve at 2.2...
Projective 3D-reconstruction of Uncalibrated Endoscopic Images
P. Faltin
2010-01-01
The most common medical diagnostic method for urinary bladder cancer is cystoscopy. This inspection of the bladder is performed with a rigid endoscope, which is usually guided close to the bladder wall. This results in a very limited field of view; the difficulty of navigation is aggravated by the use of angled endoscopes. These factors cause difficulties in orientation and visual control. To overcome this problem, this paper presents a method for extracting 3D information from uncalibrated endoscopic image sequences and for reconstructing the scene content. The method uses the SURF algorithm to extract features from the images and relates the images by advanced matching. To stabilize the matching, the epipolar geometry is extracted for each image pair using a modified RANSAC algorithm. These matched point pairs are then used to generate point triplets over three images and to describe the trifocal geometry. The 3D scene points are determined by applying triangulation to the matched image points. These points are then used to generate a projective 3D reconstruction of the scene, providing the first step toward further metric reconstructions.
A novel data processing technique for image reconstruction of penumbral imaging
Xie, Hongwei; Li, Hongyun; Xu, Zeping; Song, Guzhou; Zhang, Faqiang; Zhou, Lin
2011-06-01
The CT image reconstruction technique was applied to the data processing of penumbral imaging. Compared with traditional processing techniques for penumbral coded-pinhole images, such as Wiener, Lucy-Richardson and blind deconvolution, this approach is novel. In this method, the coded-aperture processing was, for the first time, carried out independently of the point spread function of the imaging diagnostic system. In this way, the technical obstacles in traditional coded-pinhole image processing caused by the uncertainty of the point spread function of the imaging diagnostic system were overcome. Based on this theoretical study, simulations of penumbral imaging and image reconstruction were carried out and provided fairly good results. In the visible-light experiment, a point source of light was used to irradiate a 5 mm × 5 mm object after diffuse scattering and volume scattering, and penumbral imaging was performed with an aperture size of ~20 mm. Finally, the CT image reconstruction technique was used for image reconstruction and provided a fairly good reconstruction result.
Three-dimensional reconstruction of functional brain images
Inoue, Masato; Shoji, Kazuhiko; Kojima, Hisayoshi; Hirano, Shigeru; Naito, Yasushi; Honjo, Iwao [Kyoto Univ. (Japan)
1999-08-01
We consider PET (positron emission tomography) measurement with SPM (Statistical Parametric Mapping) analysis to be one of the most useful methods for identifying activated areas of the brain involved in language processing. SPM is an effective analytical method that detects markedly activated areas over the whole brain. However, conventional presentations of these functional brain images, such as horizontal slices, three-directional projections, or brain-surface coloring, make understanding and interpreting the positional relationships among various brain areas difficult. Therefore, we developed three-dimensionally reconstructed images from these functional brain images to improve interpretation. The subjects were 12 normal volunteers. After PET images acquired during daily dialog listening were analyzed by SPM, the following three types of images were constructed: routine SPM images, three-dimensional static images, and three-dimensional dynamic images. The creation of both the three-dimensional static and dynamic images employed the volume-rendering method of VTK (The Visualization Toolkit). Since the functional brain images did not include the original brain anatomy, we synthesized SPM and MRI brain images with custom C++ programs. The three-dimensional dynamic images were made by sequencing static images with available software. Both the three-dimensional static and dynamic images were processed on a personal computer system. Our newly created images showed clearer positional relationships among activated brain areas compared with the conventional method. To date, functional brain images have been employed in fields such as neurology and neurosurgery; however, these images may be useful even in the field of otorhinolaryngology to assess hearing and speech. Exact three-dimensional images based on functional brain images are important for exact and intuitive interpretation, and may lead to new developments in brain science. Currently, the surface
Mareboyana, Manohar; Le Moigne, Jacqueline; Bennett, Jerome
2016-05-01
In this paper, we demonstrate simple algorithms that project low resolution (LR) images differing by subpixel shifts onto a high resolution (HR), also called super resolution (SR), grid. The algorithms are very effective in accuracy as well as time efficiency. A number of spatial interpolation techniques, using nearest neighbor, inverse-distance weighted averages, Radial Basis Functions (RBF), etc., are used in the projection. Reconstructing an SR image by a factor of two with the best accuracy requires four LR images differing by four independent subpixel shifts. The algorithm has two steps: (i) registration of the low resolution images, and (ii) shifting the low resolution images to align with the reference image and projecting them onto the high resolution grid, based on the shifts of each low resolution image, using different interpolation techniques. Experiments are conducted by simulating low resolution images through subpixel shifts and subsampling of an original high resolution image, and then reconstructing the high resolution image from the simulated low resolution images. The accuracy of reconstruction is compared using the mean squared error between the original high resolution image and the reconstructed image. The algorithm was tested on remote sensing images and found to outperform previously proposed techniques such as the Iterative Back Projection (IBP) and Maximum Likelihood (ML) algorithms. The algorithms are robust and are not overly sensitive to registration inaccuracies.
Mareboyana, Manohar; Le Moigne-Stewart, Jacqueline; Bennett, Jerome
2016-01-01
In this paper, we demonstrate a simple algorithm that projects low resolution (LR) images differing by subpixel shifts onto a high resolution (HR), also called super resolution (SR), grid. The algorithm is very effective in accuracy as well as time efficiency. A number of spatial interpolation techniques, using nearest neighbor, inverse-distance weighted averages, Radial Basis Functions (RBF), etc., used in the projection yield comparable results. Reconstructing an SR image by a factor of two with the best accuracy requires four LR images differing by four independent subpixel shifts. The algorithm has two steps: (i) registration of the low resolution images, and (ii) shifting the low resolution images to align with the reference image and projecting them onto the high resolution grid, based on the shifts of each low resolution image, using different interpolation techniques. Experiments are conducted by simulating low resolution images through subpixel shifts and subsampling of an original high resolution image, and then reconstructing the high resolution image from the simulated low resolution images. The accuracy of reconstruction is compared using the mean squared error between the original high resolution image and the reconstructed image. The algorithm was tested on remote sensing images and found to outperform previously proposed techniques such as the Iterative Back Projection (IBP), Maximum Likelihood (ML), and Maximum a Posteriori (MAP) algorithms. The algorithm is robust and is not overly sensitive to registration inaccuracies.
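In the idealized noise-free case with exactly known half-pixel shifts, the projection step reduces to interleaving the LR samples on the HR grid, and nearest-neighbor projection is exact. The NumPy sketch below simulates four shifted LR images from an HR image and reassembles them; the registration step and the RBF/inverse-distance interpolation used for arbitrary shifts are omitted.

```python
import numpy as np

def simulate_lr(hr, shifts):
    # each LR image = HR shifted by (dy, dx) HR pixels, then 2x subsampled
    return [hr[dy::2, dx::2] for dy, dx in shifts]

def project_to_hr(lr_images, shifts, hr_shape):
    """Place each registered LR image's samples at its known subpixel offset
    on the HR grid (nearest-neighbor projection; exact for half-pixel
    shifts and a factor-of-two grid)."""
    hr = np.zeros(hr_shape)
    for (dy, dx), lr in zip(shifts, lr_images):
        hr[dy::2, dx::2] = lr
    return hr

# four independent half-pixel shifts (expressed as HR-pixel offsets)
shifts = [(0, 0), (0, 1), (1, 0), (1, 1)]
rng = np.random.default_rng(2)
hr_true = rng.random((32, 32))
lr_images = simulate_lr(hr_true, shifts)
hr_rec = project_to_hr(lr_images, shifts, hr_true.shape)
```

With the four ideal shifts the four LR grids tile the HR grid exactly, which is why a factor-of-two SR reconstruction needs four LR images; real data with fractional shifts is where the interpolation techniques compared in the paper come in.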
Reconstruction of 3d Digital Image of Weepingforsythia Pollen
Liu, Dongwu; Chen, Zhiwei; Xu, Hongzhi; Liu, Wenqi; Wang, Lina
Confocal microscopy, a major advance over normal light microscopy, has been used in a number of scientific fields. With confocal microscopy techniques, cells and tissues can be visualized at depth and three-dimensional images created. Compared with conventional microscopes, the confocal microscope improves image resolution by eliminating out-of-focus light. Moreover, the confocal microscope has a higher level of sensitivity due to highly sensitive light detectors and the ability to accumulate images captured over time. In the present study, a series of Weeping Forsythia pollen digital images (35 images in total) was acquired with a confocal microscope, and the three-dimensional digital image of the pollen was reconstructed. Our results indicate that it is very easy to analyze the three-dimensional digital image of the pollen with a confocal microscope and the probe acridine orange (AO).
Integrated imaging of neuromagnetic reconstructions and morphological magnetic resonance data.
Kullmann, W H; Fuchs, M
1991-01-01
New neuromagnetic imaging methods provide spatial information about the functional electrical properties of complex current distributions in the human brain. For practical use in medical diagnosis, a combination of the abstract neuromagnetic imaging results with magnetic resonance (MR) or computed tomography (CT) images of the morphology is required. The biomagnetic images can be overlaid onto three-dimensional morphological images with arbitrarily selectable slices, calculated from conventional 2D data. For the current reconstruction, the 3D images furthermore provide a priori information about the conductor geometry. A combination of current source density calculations and linear estimation methods for handling the inverse magnetic problem allows quick imaging of the impressed current source density in arbitrary volume conductors.
Huang, Chao; Oraevsky, Alexander A.; Anastasio, Mark A.
2010-08-01
Optoacoustic tomography (OAT) is an emerging ultrasound-mediated biophotonic imaging modality that has exciting potential for many biomedical imaging applications. There is great interest in conducting B-mode ultrasound and OAT imaging studies for breast cancer detection using a common transducer. In this situation, the range of tomographic view angles is limited, which can result in distortions in the reconstructed OAT image if conventional reconstruction algorithms are applied to limited-view measurement data. In this work, we investigate an image reconstruction method that utilizes information regarding target boundaries to improve the quality of the reconstructed OAT images. This is accomplished by developing a boundary-constrained image reconstruction algorithm for OAT based on Bayesian image reconstruction theory. Computer-simulation studies demonstrate that the Bayesian approach can effectively reduce artifact and noise levels and preserve edges in reconstructed limited-view OAT images, compared with those produced by a conventional OAT reconstruction algorithm.
Statistical reconstruction algorithms for continuous wave electron spin resonance imaging
Kissos, Imry; Levit, Michael; Feuer, Arie; Blank, Aharon
2013-06-01
Electron spin resonance imaging (ESRI) is an important branch of ESR that deals with heterogeneous samples ranging from semiconductor materials to small live animals and even humans. ESRI can produce either spatial images (providing information about the spatially dependent radical concentration) or spectral-spatial images, where an extra dimension is added to describe the absorption spectrum of the sample (which can also be spatially dependent). The mapping of oxygen in biological samples, often referred to as oximetry, is a prime example of an ESRI application. ESRI suffers frequently from a low signal-to-noise ratio (SNR), which results in long acquisition times and poor image quality. A broader use of ESRI is hampered by this slow acquisition, which can also be an obstacle for many biological applications where conditions may change relatively quickly over time. The objective of this work is to develop an image reconstruction scheme for continuous wave (CW) ESRI that would make it possible to reduce the data acquisition time without degrading the reconstruction quality. This is achieved by adapting the so-called "statistical reconstruction" method, recently developed for other medical imaging modalities, to the specific case of CW ESRI. Our new algorithm accounts for unique ESRI aspects such as field modulation, spectral-spatial imaging, and possible limitation on the gradient magnitude (the so-called "limited angle" problem). The reconstruction method shows improved SNR and contrast recovery vs. commonly used back-projection-based methods, for a variety of simulated synthetic samples as well as in actual CW ESRI experiments.
Groussin, Mathieu; Hobbs, Joanne K; Szöllősi, Gergely J; Gribaldo, Simonetta; Arcus, Vickery L; Gouy, Manolo
2015-01-01
The resurrection of ancestral proteins provides direct insight into how natural selection has shaped proteins found in nature. By tracing substitutions along a gene phylogeny, ancestral proteins can be reconstructed in silico and subsequently synthesized in vitro. This elegant strategy reveals the complex mechanisms responsible for the evolution of protein functions and structures. However, to date, all protein resurrection studies have used simplistic approaches for ancestral sequence reconstruction (ASR), including the assumption that a single sequence alignment alone is sufficient to accurately reconstruct the history of the gene family. The impact of such shortcuts on conclusions about ancestral functions has not been investigated. Here, we show with simulations that utilizing information on species history using a model that accounts for the duplication, horizontal transfer, and loss (DTL) of genes statistically increases ASR accuracy. This underscores the importance of the tree topology in the inference of putative ancestors. We validate our in silico predictions using in vitro resurrection of the LeuB enzyme for the ancestor of the Firmicutes, a major and ancient bacterial phylum. With this particular protein, our experimental results demonstrate that information on the species phylogeny results in a biochemically more realistic and kinetically more stable ancestral protein. Additional resurrection experiments with different proteins are necessary to statistically quantify the impact of using species tree-aware gene trees on ancestral protein phenotypes. Nonetheless, our results suggest the need for incorporating both sequence and DTL information in future studies of protein resurrections to accurately define the genotype-phenotype space in which proteins diversify.
Constrain static target kinetic iterative image reconstruction for 4D cardiac CT imaging
Alessio, Adam M.; La Riviere, Patrick J.
2011-03-01
Iterative image reconstruction offers improved signal-to-noise properties for CT imaging. A primary challenge with iterative methods is the substantial computation time. This computation time is even more prohibitive in 4D imaging applications, such as cardiac-gated or dynamic acquisition sequences. In this work, we propose updating only the time-varying elements of a 4D image sequence while constraining the static elements to be fixed or slowly varying in time. We test the method with simulations of 4D acquisitions based on measured cardiac patient data from a) a retrospective cardiac-gated CT acquisition and b) a dynamic perfusion CT acquisition. We target the kinetic elements with one of two methods: 1) position a circular ROI on the heart, assuming the area outside the ROI is essentially static throughout the imaging time; or 2) select varying elements from the coefficient-of-variation image formed from fast analytic reconstruction of all time frames. Targeted kinetic elements are updated with each iteration, while static elements remain fixed at the initial image values formed from the reconstruction of data from all time frames. Results confirm that the computation time is proportional to the number of targeted elements; our simulations suggest a threefold reduction in reconstruction time. The images reconstructed with the proposed method match the mean squared error of the full 4D reconstruction. The proposed method is amenable to most optimization algorithms and offers the potential for significant computation improvements, which could be traded off for more sophisticated system models or penalty terms.
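Restricting iterative updates to the targeted kinetic elements can be sketched with a masked gradient-descent least-squares loop. This is a generic stand-in, not the authors' CT optimization algorithm: the static voxels are initialized from an "all-frame" image and never touched, so per-iteration cost scales with the masked update (plus the forward/backprojection), and the fixed voxels anchor the solution.

```python
import numpy as np

def masked_least_squares(A, y, x_init, mask, n_iter=3000):
    """Gradient-descent iterations on ||Ax - y||^2 that update only the
    targeted 'kinetic' voxels (mask == True); 'static' voxels stay at
    their initial values throughout."""
    x = x_init.astype(float).copy()
    lr = 1.0 / np.linalg.norm(A, 2) ** 2   # safe step size (1/Lipschitz)
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)
        x[mask] -= lr * grad[mask]         # update kinetic elements only
    return x

rng = np.random.default_rng(3)
A = rng.random((60, 20))                   # toy system matrix
x_static = rng.random(20)                  # initial image from all-frame recon
mask = np.zeros(20, dtype=bool)
mask[:5] = True                            # 5 time-varying "kinetic" voxels
x_true = x_static.copy()
x_true[mask] += 0.5                        # kinetic change in one time frame
y = A @ x_true                             # projections of that frame
x_hat = masked_least_squares(A, y, x_static, mask)
```

When the static initialization is accurate, the masked iteration converges to the same answer as a full update over all voxels, mirroring the matched-error result reported above.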
Reconstruction of Cochlea Based on Micro-CT and Histological Images of the Human Inner Ear
Christos Bellos
2014-01-01
The study of the normal function and pathology of the inner ear presents unique difficulties: the inner ear is inaccessible during life, so conventional pathologic techniques such as biopsy and surgical excision are not feasible without further impairing function. Mathematical modelling is therefore particularly attractive as a tool for researching the cochlea and its pathology. The first step towards efficient mathematical modelling is the reconstruction of an accurate three-dimensional (3D) model of the cochlea, which is presented in this paper. The high quality of the histological images is exploited in order to extract several sections of the cochlea that are not visible in the micro-CT (mCT) images (i.e., scala media, spiral ligament, and organ of Corti), as well as other important sections (i.e., basilar membrane, Reissner's membrane, scala vestibuli, and scala tympani). The reconstructed model is projected onto the centerline of the coiled cochlea, extracted from the mCT images, and represented in 3D space. The reconstruction activities are part of the SIFEM project, which will deliver an infrastructure that semantically interlinks various tools and libraries (i.e., segmentation, reconstruction, and visualization tools) with the clinical knowledge represented by existing data, towards the delivery of a robust multiscale model of the inner ear.
Polarimetric ISAR: Simulation and image reconstruction
Chambers, David H. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)
2016-03-21
In polarimetric ISAR the illumination platform, typically airborne, carries a pair of antennas that are directed toward a fixed point on the surface as the platform moves. During platform motion, the antennas maintain their gaze on the point, creating an effective aperture for imaging any targets near that point. The interaction between the transmitted fields and targets (e.g. ships) is complicated since the targets are typically many wavelengths in size. Calculation of the field scattered from the target typically requires solving Maxwell’s equations on a large three-dimensional numerical grid. This is prohibitive to use in any real-world imaging algorithm, so the scattering process is typically simplified by assuming the target consists of a cloud of independent, non-interacting, scattering points (centers). Imaging algorithms based on this scattering model perform well in many applications. Since polarimetric radar is not very common, the scattering model is often derived for a scalar field (single polarization) where the individual scatterers are assumed to be small spheres. However, when polarization is important, we must generalize the model to explicitly account for the vector nature of the electromagnetic fields and its interaction with objects. In this note, we present a scattering model that explicitly includes the vector nature of the fields but retains the assumption that the individual scatterers are small. The response of the scatterers is described by electric and magnetic dipole moments induced by the incident fields. We show that the received voltages in the antennas are linearly related to the transmitting currents through a scattering impedance matrix that depends on the overall geometry of the problem and the nature of the scatterers.
Last, Carsten
2017-01-01
This book proposes a new approach to handle the problem of limited training data. Common approaches to cope with this problem are to model the shape variability independently across predefined segments or to allow artificial shape variations that cannot be explained through the training data, both of which have their drawbacks. The approach presented uses a local shape prior in each element of the underlying data domain and couples all local shape priors via smoothness constraints. The book provides a sound mathematical foundation in order to embed this new shape prior formulation into the well-known variational image segmentation framework. The new segmentation approach so obtained allows accurate reconstruction of even complex object classes with only a few training shapes at hand.
Li, Li; Xiao, Wei; Jian, Weijian
2014-11-20
Three-dimensional (3D) laser imaging combined with compressive sensing (CS) offers lower power consumption and requires fewer imaging sensors; however, it places a heavy computational burden on subsequent processing devices. In this paper we propose a fast 3D image-reconstruction algorithm to deal with time-slice images sampled by single-pixel detectors. The algorithm performs 3D image reconstruction before CS recovery, thereby saving much of the runtime of CS recovery. Several experiments were conducted to verify the performance of the algorithm. Simulation results demonstrate that the proposed algorithm is more efficient than an existing algorithm.
Haim Krissi
OBJECTIVE: To evaluate the differences between the in-office and intraoperative techniques used to evaluate pelvic organ prolapse. MATERIALS AND METHODS: A prospective study included 25 women undergoing vaginal reconstruction surgery, including vaginal hysterectomy, for pelvic organ prolapse. The outpatient pelvic and site-specific vaginal examination was performed in the lithotomy position with the Valsalva maneuver. A repeated intraoperative examination was performed under general anesthesia with standard mild cervical traction. The Pelvic Organ Prolapse Quantification system (POPQ) was used for both measurements and staging. The values found under the two conditions were compared. RESULTS: The intraoperative POPQ measurement values were significantly higher than the outpatient values for apical wall prolapse in 17/25 (68%) women and for anterior wall prolapse in 8/25 (32%) women. There was no significant difference in the posterior wall, where an increase in staging was shown in 3/25 (12%) patients. CONCLUSIONS: Clinicians and patients should be alert to the possibility that pelvic organ measurements performed under general anesthesia with mild traction may differ from the preoperative evaluation.
3D RECONSTRUCTION FROM MULTI-VIEW MEDICAL X-RAY IMAGES – REVIEW AND EVALUATION OF EXISTING METHODS
S. Hosseinian
2015-12-01
The 3D concept is extremely important in clinical studies of the human body. Accurate 3D models of bony structures are currently required in clinical routine for diagnosis, patient follow-up, surgical planning, computer-assisted surgery and biomechanical applications. However, conventional 3D medical imaging techniques such as computed tomography (CT) and magnetic resonance imaging (MRI) have serious limitations, such as acquisition in non-weight-bearing positions, cost, and high radiation dose (for CT). Therefore, 3D reconstruction methods from biplanar X-ray images have been considered as reliable alternatives for achieving accurate 3D models with low radiation dose in weight-bearing positions. Different photogrammetry-based methods have been proposed for 3D reconstruction from X-ray images, and these need to be assessed. In this paper, after presenting the principles of 3D reconstruction from X-ray images, the existing methods for 3D reconstruction of bony structures from radiographs are classified and evaluated with various metrics, and their advantages and disadvantages are discussed. Finally, the presented methods are compared with respect to several metrics such as accuracy, reconstruction time and application. Each method has advantages and disadvantages that should be weighed for a specific application.
Image Reconstruction from Undersampled Fourier Data Using the Polynomial Annihilation Transform
Archibald, Richard K. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Gelb, Anne [Arizona State Univ., Mesa, AZ (United States); Platte, Rodrigo [Arizona State Univ., Mesa, AZ (United States)
2015-09-09
Fourier samples are collected in a variety of applications, including magnetic resonance imaging and synthetic aperture radar. The data are typically under-sampled and noisy. In recent years, l^{1} regularization has received considerable attention in designing image reconstruction algorithms from under-sampled and noisy Fourier data. The underlying image is assumed to have some sparsity features, that is, some measurable features of the image have sparse representation. The reconstruction algorithm is typically designed to solve a convex optimization problem, which consists of a fidelity term penalized by one or more l^{1} regularization terms. The Split Bregman Algorithm provides a fast explicit solution for the case when total variation (TV) is used for the l^{1} regularization terms. Due to its numerical efficiency, it has been widely adopted for a variety of applications. A well-known drawback of using TV as an l^{1} regularization term is that the reconstructed image will tend to default to a piecewise constant image. This issue has been addressed in several ways. Recently, the polynomial annihilation edge detection method was used to generate a higher-order sparsifying transform, coined the “polynomial annihilation (PA) transform.” This paper adapts the Split Bregman Algorithm for the case when the PA transform is used as the l^{1} regularization term. In so doing, we achieve a more accurate image reconstruction method from under-sampled and noisy Fourier data. Our new method compares favorably to the TV Split Bregman Algorithm, as well as to the popular total generalized variation (TGV) combined with shearlet approach.
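The l^{1} subproblem inside each Split Bregman iteration is solved by a soft-thresholding ("shrinkage") step. The following minimal sketch of that operator is our own illustration, not the paper's implementation; the function name is ours:

```python
import numpy as np

def shrink(x, t):
    # Soft-thresholding ("shrinkage") used to solve the l1 subproblem in each
    # Split Bregman iteration: shrink(x, t) = sign(x) * max(|x| - t, 0).
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

v = np.array([-3.0, -0.5, 0.0, 0.5, 3.0])
print(shrink(v, 1.0))
```

Values with magnitude below the threshold are set to zero, which is what promotes sparsity in the transform coefficients.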
Toward 5D image reconstruction for optical interferometry
Baron, Fabien; Kloppenborg, Brian; Monnier, John
2012-07-01
We report on our progress toward flexible image reconstruction software for optical interferometry capable of "5D imaging" of stellar surfaces. 5D imaging is here defined as the capability to image directly one or several stars in three dimensions, with both the time and wavelength dependencies taken into account during the reconstruction process. Our algorithm makes use of the Healpix (Gorski et al., 2005) sphere partition scheme to tessellate the stellar surface, the 3D Open Graphics Language (OpenGL) to model the spheroid geometry, including the Roche gravitational potential equation, and the Open Compute Language (OpenCL) framework for all other computations. We use the Markov chain Monte Carlo software SQUEEZE to solve the image reconstruction problem on the surfaces of these stars. Finally, the Compressed Sensing and Bayesian Evidence paradigms are employed to determine the best regularization for spotted stars.
Image Reconstruction for Invasive ERT in Vertical Oil Well Logging
周海力; 徐立军; 曹章; 胡金海; 刘兴斌
2012-01-01
An invasive electrical resistance tomographic sensor is proposed for production logging in vertical oil wells. The sensor consists of 24 electrodes fixed to the logging tool, which can move in the pipeline to acquire data on the conductivity distribution of oil/water mixture flow at different depths. A sensitivity-based algorithm is introduced to reconstruct the cross-sectional images. An analysis of the sensor's sensitivity to the distribution of oil/water mixture flow was carried out to optimize the position of the imaging cross-section. The imaging results obtained using various boundary conditions at the pipe wall and the logging tool were compared. Eight typical models with various conductivity distributions were created, and the measurement data were obtained by solving the forward problem of the sensor system. Image reconstruction was then implemented using the simulated data for each model. Comparisons between the models and the reconstructed images show that the number and spatial distribution of the oil bubbles can be clearly identified.
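A sensitivity-based reconstruction of this kind linearizes the measurements as g = S f and inverts the sensitivity matrix with regularization. The toy sizes and random matrix below are purely illustrative stand-ins, not the actual 24-electrode sensor model:

```python
import numpy as np

rng = np.random.default_rng(0)
n_meas, n_pix = 80, 64                      # illustrative sizes only
S = rng.standard_normal((n_meas, n_pix))    # stand-in sensitivity matrix
f_true = np.zeros(n_pix)
f_true[10] = 1.0                            # a single "oil bubble" pixel
g = S @ f_true                              # simulated boundary measurements

# One-step Tikhonov-regularized inversion: f = (S^T S + a I)^-1 S^T g
a = 1e-2
f_rec = np.linalg.solve(S.T @ S + a * np.eye(n_pix), S.T @ g)
print(int(np.argmax(f_rec)))                # bubble location recovered
```

Even this one-step linearized inversion locates the conductivity anomaly, which is the level of detail (number and position of bubbles) the abstract reports recovering.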
3D reconstruction of multiple stained histology images
Yi Song
2013-01-01
Context: Three-dimensional (3D) tissue reconstruction from histology images with different stains allows the spatial alignment of structural and functional elements highlighted by different stains for quantitative study of many physiological and pathological phenomena. This has significant potential to improve the understanding of growth patterns and the spatial arrangement of diseased cells, and to enhance the study of the biomechanical behavior of tissue structures towards better treatments (e.g., tissue-engineering applications). Methods: This paper evaluates three strategies for 3D reconstruction from sets of two-dimensional (2D) histological sections with different stains, by combining methods of 2D multi-stain registration and 3D volumetric reconstruction from same-stain sections. Setting and Design: The different strategies have been evaluated on two liver specimens (80 sections in total) stained with Hematoxylin and Eosin (H and E), Sirius Red, and Cytokeratin (CK) 7. Results and Conclusion: A strategy of using multi-stain registration to align images of a second stain to a volume reconstructed by same-stain registration results in the lowest overall error, although an interlaced image registration approach may be more robust to poor section quality.
Proton Computed Tomography: iterative image reconstruction and dose evaluation
Civinini, C.; Bonanno, D.; Brianzi, M.; Carpinelli, M.; Cirrone, G. A. P.; Cuttone, G.; Lo Presti, D.; Maccioni, G.; Pallotta, S.; Randazzo, N.; Scaringella, M.; Romano, F.; Sipala, V.; Talamonti, C.; Vanzi, E.; Bruzzi, M.
2017-01-01
Proton Computed Tomography (pCT) is a medical imaging method with the potential to increase the accuracy of treatment planning and patient positioning in hadron therapy. A pCT system based on a silicon microstrip tracker and a YAG:Ce crystal calorimeter has been developed within the INFN Prima-RDH collaboration. The prototype has been tested with a 175 MeV proton beam at The Svedberg Laboratory (Uppsala, Sweden) with the aim of reconstructing and characterizing a tomographic image. Algebraic iterative reconstruction methods (ART), together with the most-likely-path formalism, have been used to obtain tomographies of an inhomogeneous phantom and to extract density and spatial resolutions. These results are presented and discussed together with an estimate of the average dose delivered to the phantom and the dependence of the image quality on the dose. Owing to the heavy computational load required by the algebraic algorithms, the reconstruction programs have been implemented to fully exploit the high parallelism of graphics processing units. An extended-field-of-view pCT system is at an advanced stage of construction. This apparatus will be able to reconstruct objects the size of a human head, making it possible to characterize this pCT approach in a pre-clinical environment.
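The ART family of algebraic methods can be sketched as cyclic Kaczmarz projections onto the hyperplanes defined by each ray sum. This toy example is our own illustration on a tiny consistent system, not the collaboration's GPU implementation:

```python
import numpy as np

def art(A, b, n_sweeps=50):
    # Kaczmarz / ART: cycle through the rays, projecting the current
    # estimate onto each ray's hyperplane a_i . x = b_i.
    x = np.zeros(A.shape[1])
    for _ in range(n_sweeps):
        for a_i, b_i in zip(A, b):
            x += (b_i - a_i @ x) / (a_i @ a_i) * a_i
    return x

A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])  # toy "projection" matrix
b = A @ np.array([2.0, 3.0])                         # consistent ray sums
print(art(A, b))
```

For a consistent system, ART converges to a solution of A x = b; with noisy data the iterates cycle near the least-squares solution, which is why relaxation and stopping rules matter in practice.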
PET Image Reconstruction Using Information Theoretic Anatomical Priors
Somayajula, Sangeetha; Panagiotou, Christos; Rangarajan, Anand; Li, Quanzheng; Arridge, Simon R.
2011-01-01
We describe a nonparametric framework for incorporating information from co-registered anatomical images into positron emission tomographic (PET) image reconstruction through priors based on information theoretic similarity measures. We compare and evaluate the use of mutual information (MI) and joint entropy (JE) between feature vectors extracted from the anatomical and PET images as priors in PET reconstruction. Scale-space theory provides a framework for the analysis of images at different levels of detail, and we use this approach to define feature vectors that emphasize prominent boundaries in the anatomical and functional images, and attach less importance to detail and noise that is less likely to be correlated in the two images. Through simulations that model the best case scenario of perfect agreement between the anatomical and functional images, and a more realistic situation with a real magnetic resonance image and a PET phantom that has partial volumes and a smooth variation of intensities, we evaluate the performance of MI and JE based priors in comparison to a Gaussian quadratic prior, which does not use any anatomical information. We also apply this method to clinical brain scan data using F18 Fallypride, a tracer that binds to dopamine receptors and therefore localizes mainly in the striatum. We present an efficient method of computing these priors and their derivatives based on fast Fourier transforms that reduce the complexity of their convolution-like expressions. Our results indicate that while sensitive to initialization and choice of hyperparameters, information theoretic priors can reconstruct images with higher contrast and superior quantitation than quadratic priors. PMID:20851790
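As a rough sketch of how an information-theoretic coupling between two images can be computed, the following estimates mutual information from a joint intensity histogram. This is a plain histogram estimate of our own, not the authors' scale-space feature-vector priors:

```python
import numpy as np

def mutual_information(x, y, bins=8):
    # MI estimated from the joint intensity histogram of two images:
    # MI = sum_ij p(i,j) * log( p(i,j) / (p(i) * p(j)) )
    pxy, _, _ = np.histogram2d(x.ravel(), y.ravel(), bins=bins)
    pxy /= pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of x
    py = pxy.sum(axis=0, keepdims=True)   # marginal of y
    nz = pxy > 0                          # avoid log(0)
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(0)
anat = rng.random((64, 64))
# MI of an image with itself exceeds MI with an independent image:
print(mutual_information(anat, anat) > mutual_information(anat, rng.random((64, 64))))
```

A prior built on such a measure rewards PET estimates whose intensity structure is statistically dependent on the co-registered anatomical image, without assuming any particular functional relationship between the two modalities.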
Fast and accurate generation method of PSF-based system matrix for PET reconstruction
Sun, Xiao-Li; Yun, Ming-Kai; Li, Dao-Wu; Gao, Juan; Li, Mo-Han; Chai, Pei; Tang, Hao-Hui; Zhang, Zhi-Ming; Wei, Long
2016-01-01
Positional single-photon incidence response (P-SPIR) theory is investigated in this paper to generate a more accurate PSF-contained system matrix simply and quickly. The method has proved highly effective at improving spatial resolution when applied to the Eplus-260 primate PET designed by the Institute of High Energy Physics of the Chinese Academy of Sciences (IHEP). Simultaneously, to meet clinical needs, GPU acceleration is employed. P-SPIR theory takes into consideration both incidence angle and incidence position, via crystal subdivision, rather than incidence angle alone, based on the Geant4 Application for Emission Tomography (GATE). The simulation conforms to the actual response distribution and can be completed rapidly, within less than 1 s. Furthermore, two-block penetration and normalization of the response probability are introduced to fit reality. With the PSF obtained, the homogenization model is analyzed to calculate the spread distribution of bins within a few minutes for system matrix genera...
LOR-interleaving image reconstruction for PET imaging with fractional-crystal collimation
Li, Yusheng; Matej, Samuel; Karp, Joel S.; Metzler, Scott D.
2015-01-01
Positron emission tomography (PET) has become an important modality in medical and molecular imaging. However, in most PET applications, the resolution is still mainly limited by the physical crystal sizes or the detector’s intrinsic spatial resolution. To achieve images with better spatial resolution in a central region of interest (ROI), we have previously proposed using collimation in PET scanners. The collimator is designed to partially mask detector crystals to detect lines of response (LORs) within fractional crystals. A sequence of collimator-encoded LORs is measured with different collimation configurations. This novel collimated scanner geometry makes the reconstruction problem challenging, as both detector and collimator effects need to be modeled to reconstruct high-resolution images from collimated LORs. In this paper, we present a LOR-interleaving (LORI) algorithm, which incorporates these effects and has the advantage of reusing existing reconstruction software, to reconstruct high-resolution images for PET with fractional-crystal collimation. We also develop a 3D ray-tracing model incorporating both the collimator and crystal penetration for simulations and reconstructions of the collimated PET. By registering the collimator-encoded LORs with the collimator configurations, high-resolution LORs are restored based on the modeled transfer matrices using the non-negative least-squares method and EM algorithm. The resolution-enhanced images are then reconstructed from the high-resolution LORs using the MLEM or OSEM algorithm. For validation, we applied the LORI method to a small-animal PET scanner, A-PET, with a specially designed collimator. We demonstrate through simulated reconstructions with a hot-rod phantom and MOBY phantom that the LORI reconstructions can substantially improve spatial resolution and quantification compared to the uncollimated reconstructions. The LORI algorithm is crucial to improve overall image quality of collimated PET, which
The SRT reconstruction algorithm for semiquantification in PET imaging
Kastis, George A., E-mail: gkastis@academyofathens.gr [Research Center of Mathematics, Academy of Athens, Athens 11527 (Greece); Gaitanis, Anastasios [Biomedical Research Foundation of the Academy of Athens (BRFAA), Athens 11527 (Greece); Samartzis, Alexandros P. [Nuclear Medicine Department, Evangelismos General Hospital, Athens 10676 (Greece); Fokas, Athanasios S. [Department of Applied Mathematics and Theoretical Physics, University of Cambridge, Cambridge CB30WA, United Kingdom and Research Center of Mathematics, Academy of Athens, Athens 11527 (Greece)
2015-10-15
Purpose: The spline reconstruction technique (SRT) is a new, fast algorithm based on a novel numerical implementation of an analytic representation of the inverse Radon transform. The mathematical details of this algorithm and comparisons with filtered backprojection were presented earlier in the literature. In this study, the authors present a comparison between SRT and the ordered-subsets expectation–maximization (OSEM) algorithm for determining contrast and semiquantitative indices of {sup 18}F-FDG uptake. Methods: The authors implemented SRT in the software for tomographic image reconstruction (STIR) open-source platform and evaluated this technique using simulated and real sinograms obtained from the GE Discovery ST positron emission tomography/computer tomography scanner. All simulations and reconstructions were performed in STIR. For OSEM, the authors used the clinical protocol of their scanner, namely, 21 subsets and two iterations. The authors also examined images at one, four, six, and ten iterations. For the simulation studies, the authors analyzed an image-quality phantom with cold and hot lesions. Two different versions of the phantom were employed at two different hot-sphere lesion-to-background ratios (LBRs), namely, 2:1 and 4:1. For each noiseless sinogram, 20 Poisson realizations were created at five different noise levels. In addition to making visual comparisons of the reconstructed images, the authors determined contrast and bias as a function of the background image roughness (IR). For the real-data studies, sinograms of an image-quality phantom simulating the human torso were employed. The authors determined contrast and LBR as a function of the background IR. Finally, the authors present plots of contrast as a function of IR after smoothing each reconstructed image with Gaussian filters of six different sizes. Statistical significance was determined by employing the Wilcoxon rank-sum test. Results: In both simulated and real studies, SRT
List-mode MLEM Image Reconstruction from 3D ML Position Estimates.
Caucci, Luca; Hunter, William C J; Furenlid, Lars R; Barrett, Harrison H
2010-10-01
Current thick detectors used in medical imaging allow recording many attributes, such as the 3D location of interaction within the scintillation crystal and the amount of energy deposited. An efficient way of dealing with these data is by storing them in list-mode (LM). To reconstruct the data, maximum-likelihood expectation-maximization (MLEM) is efficiently applied to the list-mode data, resulting in the list-mode maximum-likelihood expectation-maximization (LMMLEM) reconstruction algorithm. In this work, we consider a PET system consisting of two thick detectors facing each other. PMT outputs are collected for each coincidence event and are used to perform 3D maximum-likelihood (ML) position estimation of location of interaction. The mathematical properties of the ML estimation allow accurate modeling of the detector blur and provide a theoretical framework for the subsequent estimation step, namely the LMMLEM reconstruction. Indeed, a rigorous statistical model for the detector output can be obtained from calibration data and used in the calculation of the conditional probability density functions for the interaction location estimates. Our implementation of the 3D ML position estimation takes advantage of graphics processing unit (GPU) hardware and permits accurate real-time estimates of position of interaction. The LMMLEM algorithm is then applied to the list of position estimates, and the 3D radiotracer distribution is reconstructed on a voxel grid.
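The multiplicative MLEM update underlying such reconstructions can be sketched on a toy binned system. This is illustrative only (the paper's LMMLEM operates on list-mode events, not binned data), and the matrix and sizes are our own stand-ins:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.random((50, 10)) + 0.01         # toy system matrix (all positive)
f_true = 10.0 * rng.random(10)          # toy activity distribution
y = rng.poisson(A @ f_true)             # Poisson projection data

sens = A.sum(axis=0)                    # sensitivity image, A^T 1
f = np.ones(10)                         # positive initial estimate
for _ in range(200):                    # MLEM: f <- f * A^T(y / A f) / A^T 1
    f *= (A.T @ (y / np.maximum(A @ f, 1e-12))) / sens
print(f.min() >= 0.0)                   # the update preserves nonnegativity
```

Two properties visible in the sketch carry over to the real algorithm: the iterates stay nonnegative, and at convergence the total forward-projected counts match the total measured counts.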
Min, Jonghwan; Pua, Rizza; Cho, Seungryong, E-mail: scho@kaist.ac.kr [Department of Nuclear and Quantum Engineering, Korea Advanced Institute of Science and Technology (KAIST), 291 Daehak-ro, Yuseong-gu, Daejeon 305-701 (Korea, Republic of); Kim, Insoo; Han, Bumsoo [EB Tech, Co., Ltd., 550 Yongsan-dong, Yuseong-gu, Daejeon 305-500 (Korea, Republic of)
2015-11-15
Purpose: A beam-blocker composed of multiple strips is a useful gadget for scatter correction and/or dose reduction in cone-beam CT (CBCT). However, the use of such a beam-blocker yields cone-beam data that can be challenging for accurate image reconstruction from a single scan in the filtered-backprojection framework. The focus of this work was to develop an analytic image reconstruction method for CBCT that can be applied directly to partially blocked cone-beam data in conjunction with scatter correction. Methods: The authors developed a rebinned backprojection-filtration (BPF) algorithm for reconstructing images from partially blocked cone-beam data in a circular scan. The authors also proposed a beam-blocking geometry that considers data redundancy, such that an efficient scatter estimate can be acquired and, at the same time, sufficient data for BPF image reconstruction can be secured from a single scan without any blocker motion. Additionally, a scatter correction method and a noise reduction scheme have been developed. The authors performed both simulation and experimental studies to validate the rebinned BPF algorithm for image reconstruction from partially blocked cone-beam data. Quantitative evaluations of the reconstructed image quality were performed in the experimental studies. Results: The simulation study revealed that the developed reconstruction algorithm successfully reconstructs images from the partial cone-beam data. In the experimental study, the proposed method effectively corrected for the scatter in each projection and reconstructed scatter-corrected images from a single scan. A reduction of cupping artifacts and an enhancement of the image contrast were demonstrated. The image contrast increased by a factor of about 2, and the image accuracy in terms of root-mean-square error with respect to the fan-beam CT image improved by more than 30%. Conclusions: The authors have successfully demonstrated that the
Light field display and 3D image reconstruction
Iwane, Toru
2016-06-01
Light field optics and its applications have become popular in recent years. With light field optics, real 3D space can be described on a 2D plane as 4D data, which we call light field data. This process can be divided into two procedures. First, a real 3D scene is optically reduced with an imaging lens. Second, this optically reduced 3D image is encoded into light field data. In the latter procedure, 3D information is encoded onto a plane as 2D data by a lens-array plate. This transformation is reversible, and the acquired light field data can be decoded again into a 3D image with the arrayed lens plate. "Refocusing" (focusing the image on a chosen point after taking a picture), the light-field camera's most popular function, is a kind of sectioning process from the encoded 3D data (light field data) to a 2D image. In this paper I first show our actual light field camera and our 3D display using acquired and computer-simulated light field data, on which a real 3D image is reconstructed. Second, I explain our data-processing method, whose arithmetic operations are performed not in the Fourier domain but in the real domain. Our 3D display system is characterized by a few features: the reconstructed image has finer resolution than the density of the arrayed lenses, and it is not necessary to align the lens-array plate with the flat display on which the light field data are displayed.
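The refocusing "sectioning" step can be sketched as shift-and-add over sub-aperture views: each view is translated in proportion to its position behind the lens array and the views are averaged. This is a schematic of the principle only, not the paper's real-domain pipeline:

```python
import numpy as np

def refocus(lightfield, shift):
    # lightfield[u, v] is the 2D sub-aperture image seen from lens-array
    # position (u, v); shift each view by its offset from the array center
    # (scaled by the chosen focal depth) and average the shifted views.
    U, V, H, W = lightfield.shape
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            du = int(round(shift * (u - U // 2)))
            dv = int(round(shift * (v - V // 2)))
            out += np.roll(lightfield[u, v], (du, dv), axis=(0, 1))
    return out / (U * V)

lf = np.zeros((3, 3, 8, 8))
lf[:, :, 4, 4] = 1.0               # a point that is in focus at shift 0
print(refocus(lf, 0)[4, 4])        # -> 1.0
```

Points at the selected depth add coherently while points at other depths blur out, which is exactly the "focus after capture" behavior described above.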
Accuracy of quantitative reconstructions in SPECT/CT imaging
Shcherbinin, S; Celler, A [Department of Radiology, University of British Columbia, 366-828 West 10th Avenue, Vancouver BC, V5Z 1L8 (Canada); Belhocine, T; Vanderwerf, R; Driedger, A [Department of Nuclear Medicine, London Health Sciences Centre, 375 South Street, PO Box 5375, London ON, N6A 4G5 (Canada)], E-mail: shcher2@interchange.ubc.ca
2008-09-07
The goal of this study was to determine the quantitative accuracy of our OSEM-APDI reconstruction method based on SPECT/CT imaging for the Tc-99m, In-111, I-123, and I-131 isotopes. Phantom studies were performed on a SPECT/low-dose multislice CT system (Infinia-Hawkeye-4 slice, GE Healthcare) using clinical acquisition protocols. Two radioactive sources were centrally and peripherally placed inside an anthropomorphic thorax phantom filled with non-radioactive water. Corrections for attenuation, scatter, collimator blurring and collimator septal penetration were applied, and their contribution to the overall accuracy of the reconstruction was evaluated. Reconstruction with the most comprehensive set of corrections resulted in activity estimation with error levels of 3-5% for all the isotopes.
Complications of anterior cruciate ligament reconstruction: MR imaging.
Papakonstantinou, Olympia; Chung, Christine B; Chanchairujira, Kullanuch; Resnick, Donald L
2003-05-01
Arthroscopic reconstruction of the anterior cruciate ligament (ACL) using autografts or allografts is being performed with increasing frequency, particularly in young athletes. Although the procedure is generally well tolerated, with good success rates, early and late complications have been documented. As clinical manifestations of graft complications are often non-specific and plain radiographs cannot directly visualize the graft and the adjacent soft tissues, MR imaging has a definite role in the diagnosis of complications after ACL reconstruction and may direct subsequent therapeutic management. Our purpose is to review the normal MR imaging of the ACL graft and present the MR imaging findings of a wide spectrum of complications after ACL reconstruction, such as graft impingement, graft rupture, cystic degeneration of the graft, postoperative infection of the knee, diffuse and localized (i.e., cyclops lesion) arthrofibrosis, and associated donor site abnormalities. Awareness of the MR imaging findings of complications as well as the normal appearances of the normal ACL graft is essential for correct interpretation.
Missing data reconstruction using Gaussian mixture models for fingerprint images
Agaian, Sos S.; Yeole, Rushikesh D.; Rao, Shishir P.; Mulawka, Marzena; Troy, Mike; Reinecke, Gary
2016-05-01
One of the most important areas in biometrics is matching partial fingerprints in fingerprint databases. Recently, significant progress has been made in designing fingerprint identification systems for missing fingerprint information. However, a dependable reconstruction of fingerprint images still remains challenging due to the complexity and the ill-posed nature of the problem. In this article, both binary and gray-level images are reconstructed. This paper also presents a new similarity score to evaluate the performance of the reconstructed binary image. The offered fingerprint image identification system can be automated and extended to numerous other security applications such as postmortem fingerprints, forensic science, investigations, artificial intelligence, robotics, all-access control, and financial security, as well as for the verification of firearm purchasers, driver license applicants, etc.
Holographic images reconstructed from GMR-based fringe pattern
Kikuchi Hiroshi
2013-01-01
We have developed a magneto-optical spatial light modulator (MOSLM) using giant magneto-resistance (GMR) structures, aiming to realize a holographic three-dimensional (3D) display. For practical applications, the reconstructed image of a hologram consisting of GMR structures should be investigated in order to study the feasibility of the MOSLM. In this study, we fabricated a hologram with a GMR-based fringe pattern and demonstrated a reconstructed image. A fringe pattern convolving a cross-shaped image was calculated by a conventional binary computer-generated hologram (CGH) technique. The CGH pattern has 2,048 × 2,048 pixels with a 5 μm pixel pitch. The GMR stack, consisting of a Tb-Fe-Co/CoFe pinned layer, a Ag spacer, a Gd-Fe free layer for light modulation, and a Ru capping layer, was deposited by dc-magnetron sputtering. The GMR hologram was formed using photolithography and Kr-ion milling processes, followed by the deposition of a Tb-Fe-Co reference layer with large coercivity and the same Kerr-rotation angle as the free layer, and a lift-off process. The reconstructed image in the ON state was clearly observed and successfully distinguished from the OFF state by switching the magnetization direction of the free layer with an external magnetic field. These results indicate the possibility of realizing a holographic 3D display with a MOSLM using GMR structures.
Breast percent density estimation from 3D reconstructed digital breast tomosynthesis images
Bakic, Predrag R.; Kontos, Despina; Carton, Ann-Katherine; Maidment, Andrew D. A.
2008-03-01
Breast density is an independent factor of breast cancer risk. In mammograms breast density is quantitatively measured as percent density (PD), the percentage of dense (non-fatty) tissue. To date, clinical estimates of PD have varied significantly, in part due to the projective nature of mammography. Digital breast tomosynthesis (DBT) is a 3D imaging modality in which cross-sectional images are reconstructed from a small number of projections acquired at different x-ray tube angles. Preliminary studies suggest that DBT is superior to mammography in tissue visualization, since superimposed anatomical structures present in mammograms are filtered out. We hypothesize that DBT could also provide a more accurate breast density estimation. In this paper, we propose to estimate PD from reconstructed DBT images using a semi-automated thresholding technique. Preprocessing is performed to exclude the image background and the area of the pectoral muscle. Threshold values are selected manually from a small number of reconstructed slices; a combination of these thresholds is applied to each slice throughout the entire reconstructed DBT volume. The proposed method was validated using images of women with recently detected abnormalities or with biopsy-proven cancers; only contralateral breasts were analyzed. The Pearson correlation and kappa coefficients between the breast density estimates from DBT and the corresponding digital mammogram indicate moderate agreement between the two modalities, comparable with our previous results from 2D DBT projections. Percent density appears to be a robust measure for breast density assessment in both 2D and 3D x-ray breast imaging modalities using thresholding.
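The semi-automated thresholding described above reduces, at its core, to counting voxels above an intensity threshold within the segmented breast region. A minimal sketch in Python; the function name, mask handling, and toy data are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def percent_density(volume, breast_mask, threshold):
    """Percent density (PD) from a reconstructed DBT volume.

    volume: 3D array of reconstructed slices
    breast_mask: boolean mask excluding background and pectoral muscle
    threshold: intensity above which tissue is counted as dense
    """
    breast_voxels = volume[breast_mask]
    dense = np.count_nonzero(breast_voxels > threshold)
    return 100.0 * dense / breast_voxels.size

# toy volume: half "dense" tissue (1.0), half "fatty" tissue (0.2)
vol = np.full((4, 8, 8), 0.2)
vol[:, :, 4:] = 1.0
mask = np.ones_like(vol, dtype=bool)
print(percent_density(vol, mask, threshold=0.5))  # → 50.0
```

In the paper's method the threshold is chosen manually on a few slices and then applied throughout the volume; here it is simply passed in as a parameter.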
Rapid 3D dynamic arterial spin labeling with a sparse model-based image reconstruction.
Zhao, Li; Fielden, Samuel W; Feng, Xue; Wintermark, Max; Mugler, John P; Meyer, Craig H
2015-11-01
Dynamic arterial spin labeling (ASL) MRI measures the perfusion bolus at multiple observation times and yields accurate estimates of cerebral blood flow in the presence of variations in arterial transit time. ASL has intrinsically low signal-to-noise ratio (SNR) and is sensitive to motion, so that extensive signal averaging is typically required, leading to long scan times for dynamic ASL. The goal of this study was to develop an accelerated dynamic ASL method with improved SNR and robustness to motion using a model-based image reconstruction that exploits the inherent sparsity of dynamic ASL data. The first component of this method is a single-shot 3D turbo spin echo spiral pulse sequence accelerated using a combination of parallel imaging and compressed sensing. This pulse sequence was then incorporated into a dynamic pseudo continuous ASL acquisition acquired at multiple observation times, and the resulting images were jointly reconstructed enforcing a model of potential perfusion time courses. Performance of the technique was verified using a numerical phantom and it was validated on normal volunteers on a 3-Tesla scanner. In simulation, a spatial sparsity constraint improved SNR and reduced estimation errors. Combined with a model-based sparsity constraint, the proposed method further improved SNR, reduced estimation error and suppressed motion artifacts. Experimentally, the proposed method resulted in significant improvements, with scan times as short as 20s per time point. These results suggest that the model-based image reconstruction enables rapid dynamic ASL with improved accuracy and robustness.
Rozario, T; Bereg, S [University of Texas at Dallas, Richardson, TX (United States); Chiu, T; Liu, H; Kearney, V; Jiang, L; Mao, W [UT Southwestern Medical Center, Dallas, TX (United States)
2014-06-01
Purpose: In order to locate lung tumors on projection images without internal markers, a digitally reconstructed radiograph (DRR) is created and compared with projection images. Since lung tumors move and their locations change on projection images while they are static on DRRs, a special DRR (background DRR) is generated based on modified anatomy from which lung tumors are removed. In addition, global discrepancies exist between DRRs and projections due to their different image originations, scattering, and noise. This adversely affects comparison accuracy. A simple but efficient comparison algorithm is reported. Methods: This method divides global images into a matrix of small tiles, and similarities are evaluated by calculating the normalized cross correlation (NCC) between corresponding tiles on projections and DRRs. The tile configuration (tile locations) is automatically optimized to keep the tumor within a single tile, which matches poorly with the corresponding DRR tile. A pixel-based linear transformation is determined by linear interpolation of the tile transformation results obtained during tile matching. The DRR is transformed to the projection image level and subtracted from it. The resulting subtracted image then contains only the tumor. A DRR of the tumor is registered to the subtracted image to locate the tumor. Results: This method has been successfully applied to kV fluoro images (about 1,000 images) acquired on a Vero (Brainlab) for dynamic tumor tracking in phantom studies. Radiation-opaque markers were implanted and used as ground truth for tumor positions. Although other organs and bony structures introduce strong signals superimposed on tumors at some angles, this method accurately locates tumors on every projection over 12 gantry angles. The maximum error is less than 2.6 mm, while the overall average error is 1.0 mm. Conclusion: This algorithm is capable of detecting tumors without markers despite strong background signals.
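The tile-wise NCC comparison at the heart of the method can be sketched as follows; the tile size, score bookkeeping, and function names are illustrative assumptions rather than the reported implementation:

```python
import numpy as np

def ncc(a, b):
    """Normalized cross correlation between two equally sized tiles."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def tile_scores(projection, drr, tile=64):
    """Score each projection tile against the corresponding DRR tile.

    A tile with a low score is a candidate for containing the (moving)
    tumor, since the background DRR has the tumor removed.
    """
    h, w = projection.shape
    scores = {}
    for y in range(0, h - tile + 1, tile):
        for x in range(0, w - tile + 1, tile):
            scores[(y, x)] = ncc(projection[y:y + tile, x:x + tile],
                                 drr[y:y + tile, x:x + tile])
    return scores
```

Identical tiles score 1.0; in the paper the tile grid is additionally shifted so that the tumor falls inside a single poorly matching tile.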
Sharp, J. H.; Barnard, J. S.; Kaneko, K.; Higashida, K.; Midgley, P. A.
2008-08-01
After previous work producing a successful 3D tomographic reconstruction of dislocations in GaN from conventional weak-beam dark-field (WBDF) images, we have reconstructed a cascade of dislocations in deformed and annealed silicon to a comparable standard using the more experimentally straightforward technique of STEM annular dark-field imaging (STEM ADF). In this mode, image contrast was much more consistent over the specimen tilt range than in conventional weak-beam dark-field imaging. Automatic acquisition software could thus restore the correct dislocation array to the field of view at each tilt angle, though manual focusing was still required. Reconstruction was carried out by sequential iterative reconstruction technique using FEI's Inspect3D software. Dislocations were distributed non-uniformly along cascades, with sparse areas between denser clumps in which individual dislocations of in-plane image width 24 nm could be distinguished in images and reconstruction. Denser areas showed more complicated stacking-fault contrast, hampering tomographic reconstruction. The general three-dimensional form of the denser areas was reproduced well, showing the dislocation array to be planar and not parallel to the foil surfaces.
A dual oxygenation and fluorescence imaging platform for reconstructive surgery
Ashitate, Yoshitomo; Nguyen, John N.; Venugopal, Vivek; Stockdale, Alan; Neacsu, Florin; Kettenring, Frank; Lee, Bernard T.; Frangioni, John V.; Gioux, Sylvain
2013-03-01
There is a pressing clinical need to provide image guidance during surgery. Currently, assessment of tissue that needs to be resected or avoided is performed subjectively, leading to a large number of failures, patient morbidity, and increased healthcare costs. Because near-infrared (NIR) optical imaging is safe, noncontact, inexpensive, and can provide relatively deep information (several mm), it offers unparalleled capabilities for providing image guidance during surgery. These capabilities are well illustrated through the clinical translation of fluorescence imaging during oncologic surgery. In this work, we introduce a novel imaging platform that combines two complementary NIR optical modalities: oxygenation imaging and fluorescence imaging. We validated this platform during facial reconstructive surgery on large animals approaching the size of humans. We demonstrate that NIR fluorescence imaging provides identification of perforator arteries, assesses arterial perfusion, and can detect thrombosis, while oxygenation imaging permits the passive monitoring of tissue vital status, as well as the detection and origin of vascular compromise simultaneously. Together, the two methods provide a comprehensive approach to identifying problems and intervening in real time during surgery before irreparable damage occurs. Taken together, this novel platform provides fully integrated and clinically friendly endogenous and exogenous NIR optical imaging for improved image-guided intervention during surgery.
Arne Vladimir Blackman
2014-07-01
Accurate 3D reconstruction of neurons is vital for applications linking anatomy and physiology. Reconstructions are typically created using Neurolucida after biocytin histology (BH). An alternative inexpensive and fast method is to use freeware such as Neuromantic to reconstruct from fluorescence imaging (FI) stacks acquired using 2-photon laser-scanning microscopy during physiological recording. We compare these two methods with respect to morphometry, cell classification, and multicompartmental modeling in the NEURON simulation environment. Quantitative morphological analysis of the same cells reconstructed using both methods reveals that whilst biocytin reconstructions facilitate tracing of more distal collaterals, both methods are comparable in representing the overall morphology: automated clustering of reconstructions from both methods successfully separates neocortical basket cells from pyramidal cells but not BH from FI reconstructions. BH reconstructions suffer more from tissue shrinkage and compression artifacts than FI reconstructions do. FI reconstructions, on the other hand, consistently have larger process diameters. Consequently, significant differences in NEURON modeling of excitatory post-synaptic potential (EPSP) forward propagation are seen between the two methods, with FI reconstructions exhibiting smaller depolarizations. Simulated action potential backpropagation (bAP), however, is indistinguishable between reconstructions obtained with the two methods. In our hands, BH reconstructions are necessary for NEURON modeling and detailed morphological tracing, and thus remain state of the art, although they are more labor intensive, more expensive, and suffer from a higher failure rate. However, for a subset of anatomical applications such as cell type identification, FI reconstructions are superior, because of indistinguishable classification performance with greater ease of use, essentially 100% success rate, and lower cost.
Shuman, William P; Chan, Keith T; Busey, Janet M; Mitsumori, Lee M; Choi, Eunice; Koprowicz, Kent M; Kanal, Kalpana M
2014-12-01
To investigate whether reduced radiation dose liver computed tomography (CT) images reconstructed with model-based iterative reconstruction (MBIR) might compromise depiction of clinically relevant findings or might have decreased image quality when compared with clinical standard radiation dose CT images reconstructed with adaptive statistical iterative reconstruction (ASIR). With institutional review board approval, informed consent, and HIPAA compliance, 50 patients (39 men, 11 women) who underwent liver CT were prospectively included. After a portal venous pass with ASIR images, a 60% reduced radiation dose pass was added with MBIR images. One reviewer scored ASIR image quality and marked findings. Two additional independent reviewers noted whether marked findings were present on MBIR images and assigned scores for relative conspicuity, spatial resolution, image noise, and image quality. Liver and aorta Hounsfield units and image noise were measured. Volume CT dose index and size-specific dose estimate (SSDE) were recorded. Qualitative reviewer scores were summarized. Formal statistical inference for signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), volume CT dose index, and SSDE was made (paired t tests), with Bonferroni adjustment. Two independent reviewers identified all 136 ASIR image findings (n = 272) on MBIR images, scoring them as equal or better for conspicuity, spatial resolution, and image noise in 94.1% (256 of 272), 96.7% (263 of 272), and 99.3% (270 of 272), respectively. In 50 image sets, two reviewers
An efficient simultaneous reconstruction technique for tomographic particle image velocimetry
Atkinson, Callum; Soria, Julio
2009-10-01
To date, Tomo-PIV has involved the use of the multiplicative algebraic reconstruction technique (MART), where the intensity of each 3D voxel is iteratively corrected to satisfy one recorded projection, or pixel intensity, at a time. This results in reconstruction times of multiple hours for each velocity field and requires considerable computer memory in order to store the associated weighting coefficients and intensity values for each point in the volume. In this paper, a rapid and less memory intensive reconstruction algorithm is presented based on a multiplicative line-of-sight (MLOS) estimation that determines possible particle locations in the volume, followed by simultaneous iterative correction. Reconstructions of simulated images are presented for two simultaneous algorithms (SART and SMART) as well as the now standard MART algorithm, which indicate that the same accuracy as MART can be achieved 5.5 times faster or 77 times faster with 15 times less memory if the processing and storage of the weighting matrix is considered. Application of MLOS-SMART and MART to a turbulent boundary layer at Re θ = 2200 using a 4 camera Tomo-PIV system with a volume of 1,000 × 1,000 × 160 voxels is discussed. Results indicate improvements in reconstruction speed of 15 times that of MART with precalculated weighting matrix, or 65 times if calculation of the weighting matrix is considered. Furthermore the memory needed to store a large weighting matrix and volume intensity is reduced by almost 40 times in this case.
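For reference, the MART baseline that the paper accelerates can be sketched in a few lines. The relaxation parameter `mu`, the zero-measurement handling, and the dense weighting matrix are simplifying assumptions: real Tomo-PIV systems use sparse weighting matrices at a vastly larger scale, which is exactly the memory cost the MLOS approach avoids.

```python
import numpy as np

def mart(W, p, n_iter=50, mu=1.0):
    """Multiplicative ART for tomographic reconstruction.

    W: (rays x voxels) weighting matrix; p: recorded pixel intensities.
    Voxels start uniform and are corrected one projection (row) at a time
    by a multiplicative update, so intensities stay non-negative.
    """
    v = np.ones(W.shape[1])
    for _ in range(n_iter):
        for i in range(W.shape[0]):
            proj = W[i] @ v
            if proj > 0 and p[i] > 0:
                v *= (p[i] / proj) ** (mu * W[i])
            elif p[i] == 0:
                v[W[i] > 0] = 0.0  # a zero measurement zeroes all voxels on the ray
    return v

# trivial two-voxel system: each ray sees one voxel
print(mart(np.eye(2), np.array([2.0, 3.0]), n_iter=5))  # → [2. 3.]
```

The simultaneous variants (SART/SMART) discussed in the paper instead apply one correction per full sweep over all projections, which is what enables their faster, lower-memory implementation.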
Accurate non-invasive image-based cytotoxicity assays for cultured cells
Brouwer Jaap
2010-06-01
Background: The CloneSelect™ Imager system is an image-based visualisation system for cell growth assessment. Traditionally, cell proliferation is measured with the colorimetric MTT assay. Results: Here we show that both the CloneSelect Imager and the MTT approach result in comparable EC50 values when assaying the cytotoxicity of cisplatin and oxaliplatin on various cell lines. However, the image-based technique was found to be non-invasive, considerably quicker, and more accurate than the MTT assay. Conclusions: This new image-based technique has the potential to replace the cumbersome MTT assay when fast, unbiased, and high-throughput cytotoxicity assays are required.
Research on image matching method of big data image of three-dimensional reconstruction
Zhang, Chunsen; Qiu, Zhenguo; Zhu, Shihuan; Wang, Xiqi; Xu, Xiaolei; Zhong, Sidong
2015-12-01
Image matching is the main step of three-dimensional reconstruction. With the development of computer processing technology, finding the images to be matched within large image sets acquired in different formats, at different scales, and at different locations has placed new demands on image matching. To establish three-dimensional reconstruction based on image matching over big data images, this paper puts forward a new, effective matching method based on the visual bag-of-words model. The main technologies include building the bag-of-words model and image matching. First, we extract SIFT feature points from the images in the database and cluster the feature points to generate the bag-of-words model. We then establish inverted files based on the bag of words; the inverted files list all images corresponding to each visual word. We perform image matching only among images sharing the same words, to improve the efficiency of image matching. Finally, we build the three-dimensional model from the matched images. Experimental results indicate that this method is able to improve the matching efficiency and is suitable for the requirements of large-data reconstruction.
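The inverted-file lookup described above can be sketched as follows. The toy word ids and function names are illustrative; a real system would first quantize SIFT descriptors against a large learned visual vocabulary before consulting the index:

```python
from collections import Counter, defaultdict

def build_inverted_index(image_words):
    """image_words: {image_id: list of visual-word ids (quantized SIFT)}.

    Returns {word_id: set of image ids containing that word}, so that
    candidate matches can be found without scanning every image.
    """
    index = defaultdict(set)
    for img, words in image_words.items():
        for w in words:
            index[w].add(img)
    return index

def candidate_matches(query_words, index):
    """Rank database images by the number of shared visual words.

    Full geometric matching then only needs to run on the top candidates.
    """
    votes = Counter()
    for w in set(query_words):
        for img in index.get(w, ()):
            votes[img] += 1
    return votes.most_common()

db = {"a": [1, 2, 3], "b": [3, 4], "c": [5]}
idx = build_inverted_index(db)
print(candidate_matches([1, 3], idx))  # → [('a', 2), ('b', 1)]
```

This captures the efficiency argument in the abstract: only images sharing at least one visual word with the query are ever touched.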
Electromagnetic Model and Image Reconstruction Algorithms Based on EIT System
CAO Zhang; WANG Huaxiang
2006-01-01
An intuitive 2D model of a circular electrical impedance tomography (EIT) sensor with small electrodes is established based on the theory of analytic functions. The validity of the model is proved using the result from the solution of the Laplace equation. Suggestions on electrode optimization and an explanation of the ill-conditioned property of the sensitivity matrix are provided based on the model, which takes electrode distance into account and can be generalized to a sensor with any simply connected region through a conformal transformation. Image reconstruction algorithms based on the model are implemented to show the feasibility of the model, using experimental data collected from the EIT system developed at Tianjin University. In a simulation with a human chest-like configuration, electrical conductivity distributions are reconstructed using equipotential backprojection (EBP) and Tikhonov regularization (TR) based on a conformal transformation of the model. The algorithms based on the model are suitable for online image reconstruction, and the reconstructed results are good in both size and position.
Stokes image reconstruction for two-color microgrid polarization imaging systems.
Lemaster, Daniel A
2011-07-18
The Air Force Research Laboratory has developed a new microgrid polarization imaging system capable of simultaneously reconstructing linear Stokes parameter images in two colors on a single focal plane array. In this paper, an effective method for extracting Stokes images is presented for this type of camera system. It is also shown that correlations between the color bands can be exploited to significantly increase overall spatial resolution. Test data is used to show the advantages of this approach over bilinear interpolation. The bounds (in terms of available reconstruction bandwidth) on image resolution are also provided.
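The linear Stokes parameters that such a microgrid system reconstructs are simple combinations of the four polarizer-orientation measurements. A minimal sketch, assuming each orientation has already been interpolated to a common grid (the interpolation strategy is precisely where the paper's method and the bilinear baseline differ):

```python
import numpy as np

def linear_stokes(i0, i45, i90, i135):
    """Linear Stokes parameters from the four polarizer orientations of a
    microgrid super-pixel, each already interpolated to the full grid."""
    s0 = 0.5 * (i0 + i45 + i90 + i135)  # total intensity
    s1 = i0 - i90                        # horizontal vs. vertical
    s2 = i45 - i135                      # +45 deg vs. -45 deg
    return s0, s1, s2

# fully horizontally polarized light: I0 = 1, I90 = 0, I45 = I135 = 0.5
s0, s1, s2 = linear_stokes(np.array([1.0]), np.array([0.5]),
                           np.array([0.0]), np.array([0.5]))
print(s0[0], s1[0], s2[0])  # → 1.0 1.0 0.0
```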
PHOTOGRAMMETRIC 3D BUILDING RECONSTRUCTION FROM THERMAL IMAGES
Maset, E.; Fusiello, A.; Crosilla, F.; Toldo, R.; Zorzetto, D.
2017-08-01
This paper addresses the problem of 3D building reconstruction from thermal infrared (TIR) images. We show that a commercial Computer Vision software can be used to automatically orient sequences of TIR images taken from an Unmanned Aerial Vehicle (UAV) and to generate 3D point clouds, without requiring any GNSS/INS data about position and attitude of the images nor camera calibration parameters. Moreover, we propose a procedure based on the Iterative Closest Point (ICP) algorithm to create a model that combines the high resolution and geometric accuracy of RGB images with the thermal information deriving from TIR images. The process can be carried out entirely by the aforesaid software in a simple and efficient way.
Helou, E. S.; Zibetti, M. V. W.; Miqueles, E. X.
2017-04-01
We propose the superiorization of incremental algorithms for tomographic image reconstruction. The resulting methods follow a better path on their way to the optimal solution of the maximum likelihood problem, in the sense that they are closer to the Pareto optimal curve than the non-superiorized techniques. A new scaled gradient iteration is proposed and three superiorization schemes are evaluated. Theoretical analysis of the methods, as well as computational experiments with both synthetic and real data, are provided.
Fan beam image reconstruction with generalized Fourier slice theorem.
Zhao, Shuangren; Yang, Kang; Yang, Kevin
2014-01-01
For parallel-beam geometry, Fourier reconstruction works via the Fourier slice theorem (also called the central slice theorem or projection slice theorem). For the fan-beam situation, the Fourier slice theorem can be extended to a generalized Fourier slice theorem (GFST) for fan-beam image reconstruction. We briefly introduced this method in a conference; this paper reintroduces the GFST method for fan-beam geometry in detail. The GFST method can be described as follows: the Fourier plane is filled by adding up the contributions from all fan-beam projections individually; the values in the Fourier plane are thereby calculated directly in Cartesian coordinates, avoiding the interpolation from polar to Cartesian coordinates in the Fourier domain; an inverse fast Fourier transform is then applied to the Fourier-plane image, yielding a reconstructed image in the spatial domain. The reconstructed image is compared between the GFST method and the filtered backprojection (FBP) method. The major differences between the GFST and FBP methods are: (1) the interpolation is applied to different data sets, with the GFST method interpolating the projection data and the FBP method interpolating the filtered projection data; (2) the filtering is done in different places, with the GFST method filtering in the Fourier domain and the FBP method applying the ramp filter to the projections. The resolution of the ramp filter varies with location, whereas the filter in the Fourier domain yields a resolution that is invariant with location. One advantage of the GFST method over the FBP method is that, in the short-scan situation, an exact solution can be obtained with the GFST method but not with the FBP method. The computational cost of both the GFST and FBP methods is O(N^3), where N is the number of pixels in one dimension.
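The classical parallel-beam Fourier slice theorem that the GFST generalizes can be verified numerically in a few lines. This checks only the standard theorem at angle zero, not the fan-beam extension described in the paper:

```python
import numpy as np

# Fourier slice theorem, parallel-beam case: the 1D FFT of a projection
# equals the corresponding central slice of the image's 2D FFT.
rng = np.random.default_rng(0)
image = rng.random((64, 64))

projection = image.sum(axis=0)            # parallel-beam projection at angle 0
slice_1d = np.fft.fft(projection)         # 1D transform of the projection
central_slice = np.fft.fft2(image)[0, :]  # ky = 0 row of the 2D transform

print(np.allclose(slice_1d, central_slice))  # → True
```

The GFST's contribution is to accumulate such slices directly on a Cartesian Fourier grid from fan-beam projections, so the polar-to-Cartesian interpolation step disappears.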
Jing Huang
X-ray computed tomography (CT) iterative image reconstruction from sparse-view projection data has been an important research topic for radiation reduction in the clinic. In this paper, to relieve the requirement for the misalignment-reduction operation of the prior image constrained compressed sensing (PICCS) approach introduced by Chen et al., we present an iterative image reconstruction approach for sparse-view CT using a normal-dose image induced total variation (ndiTV) prior. The associated objective function of the present approach is constructed under the penalized weighted least-squares (PWLS) criterion and contains two terms, i.e., the weighted least-squares (WLS) fidelity and the ndiTV prior; the approach is referred to as "PWLS-ndiTV". Specifically, the WLS fidelity term is built on an accurate relationship between the variance and mean of projection data in the presence of electronic background noise. The ndiTV prior term is designed to reduce the influence of misalignment between the desired and prior images by using a normal-dose image induced non-local means (ndiNLM) filter. Subsequently, a modified steepest descent algorithm is adopted to minimize the associated objective function. Experimental results on two different digital phantoms and an anthropomorphic torso phantom show that the present PWLS-ndiTV approach for sparse-view CT image reconstruction achieves noticeable gains over existing similar approaches in terms of noise reduction, resolution-noise tradeoff, and low-contrast object detection.
Accurate band-to-band registration of AOTF imaging spectrometer using motion detection technology
Zhou, Pengwei; Zhao, Huijie; Jin, Shangzhong; Li, Ningchuan
2016-05-01
This paper concerns the problem of platform-vibration-induced band-to-band misregistration with an acousto-optic imaging spectrometer in spaceborne applications. Registering images of different bands formed at different times or different positions is difficult, especially for hyperspectral images from an acousto-optic tunable filter (AOTF) imaging spectrometer. In this study, a motion detection method is presented that uses the polychromatic undiffracted beam of the AOTF. The factors affecting motion detection accuracy are analyzed theoretically, and calculations show that optical distortion is an easily overlooked factor in achieving accurate band-to-band registration. Hence, a reflective dual-path optical system is proposed for the first time, with reduced distortion and chromatic aberration, indicating the potential for higher registration accuracy. Consequently, a spectra-restoration experiment using an additional motion detection channel is presented for the first time, which demonstrates the accurate spectral image registration capability of this technique.
Swept-source digital holography to reconstruct tomographic images.
Sheoran, Gyanendra; Dubey, Satish; Anand, Arun; Mehta, Dalip Singh; Shakher, Chandra
2009-06-15
We present what we believe to be a new method of swept-source digital holography using a superluminescent diode (SLD) as a broadband light source and an acousto-optic tunable filter (AOTF) as a frequency-tunable device. The swept source consists of the SLD in conjunction with the AOTF as the frequency-tuning device in the wavelength range of 800-870 nm. Since the AOTF is an electronically controlled device, frequency tuning can be achieved without mechanical movement. The angular spectrum approach to scalar diffraction theory is used to reconstruct the images for each wavelength. The use of a broadband source ensures increased axial resolution of the reconstructed images. The proposed swept-source system provides a sufficiently broad range of tunability and can increase the axial range and resolution of reconstructed tomographic images using digital holography. The system was tested using a semireflecting glass substrate on which the character "B" was written in black ink. Experimental results are presented.
Computed tomography image reconstruction from only two projections
Mohammad-Djafari, Ali
2007-01-01
This paper concerns image reconstruction from a few projections in Computed Tomography (CT). The main objective is to show that the problem is so ill posed that no classical method, neither analytical methods based on the inverse Radon transform nor algebraic methods such as least squares (LS) or regularization theory, can give a satisfactory result. As an example, we consider in detail the case of image reconstruction from two projections, horizontal and vertical. We then show how a particular composite Markov model and the Bayesian estimation framework can possibly propose satisfactory solutions to the problem. For demonstration and educational purposes, a set of Matlab programs is given for a live presentation of the results. (A French abstract accompanies the original: this pedagogical work presents the inverse problem of image reconstruction in X-ray tomography when the number of projections is very limited.)
Ou-Yang, Mang; Jeng, Wei-De; Wu, Yin-Yi; Dung, Lan-Rong; Wu, Hsien-Ming; Weng, Ping-Kuo; Huang, Ker-Jer; Chiu, Luan-Jiau
2012-05-01
This study investigates image processing using the radial imaging capsule endoscope (RICE) system. First, an experimental environment is established in which a simulated object has a shape similar to a cylinder, such that a triaxial platform can be used to push the RICE into the sample and capture radial images. Then four algorithms (mean absolute error, mean square error, Pearson correlation coefficient, and deformation processing) are used to stitch the images together. The Pearson correlation coefficient method is the most effective algorithm because it yields the highest peak signal-to-noise ratio, higher than 80.69 compared with the original image. Furthermore, a living-animal experiment is carried out. Finally, the Pearson correlation coefficient method and vector deformation processing are used to stitch the images captured in the living-animal experiment. This method is very attractive because, unlike the other methods, in which two lenses are required to reconstruct the geometrical image, RICE uses only one lens and one mirror.
Jung, Jae-Hyun; Hong, Keehoon; Park, Gilbae; Chung, Indeok; Park, Jae-Hyeung; Lee, Byoungho
2010-12-06
We propose a reconstruction method for the occluded region of a three-dimensional (3D) object using depth extraction based on optical flow and triangular mesh reconstruction in integral imaging. The depth information of sub-images from the acquired elemental image set is extracted using optical flow with sub-pixel accuracy, which alleviates the depth quantization problem. The extracted depth maps of the sub-image array are segmented by a depth threshold from histogram-based segmentation and represented as point clouds. The point clouds are projected to the viewpoint of the center sub-image and reconstructed by triangular mesh reconstruction. The experimental results support the validity of the proposed method, with high accuracy in terms of peak signal-to-noise ratio and normalized cross-correlation in 3D image recognition.
Impact of measurement precision and noise on superresolution image reconstruction.
Wood, Sally L; Lee, Shu-Ting; Yang, Gao; Christensen, Marc P; Rajan, Dinesh
2008-04-01
The performance of uniform and nonuniform detector arrays for application to the PANOPTES (processing arrays of Nyquist-limited observations to produce a thin electro-optic sensor) flat camera design is analyzed for measurement noise environments including quantization noise and Gaussian and Poisson processes. Image data acquired from a commercial camera with 8 bit and 14 bit output options are analyzed, and estimated noise levels are computed. Noise variances estimated from the measurement values are used in the optimal linear estimators for superresolution image reconstruction.
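The "optimal linear estimators" driven by per-measurement noise variances can be illustrated by a weighted least-squares estimate. This is a minimal sketch under assumed names and a linear forward model, not the PANOPTES pipeline itself:

```python
import numpy as np

def wls_estimate(A, y, noise_var):
    """Minimum-variance linear estimate x_hat for y = A x + n, where
    n is zero-mean noise with per-sample variance `noise_var`."""
    W = np.diag(1.0 / np.asarray(noise_var, dtype=float))
    AtW = A.T @ W                       # weight each measurement by 1/variance
    return np.linalg.solve(AtW @ A, AtW @ y)
```

Measurements with larger estimated noise variance (e.g. from Poisson or quantization noise) simply receive smaller weights in the reconstruction.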
A study of image reconstruction algorithms for hybrid intensity interferometers
Crabtree, Peter N.; Murray-Krezan, Jeremy; Picard, Richard H.
2011-09-01
Phase retrieval is explored for image reconstruction using outputs from both a simulated intensity interferometer (II) and a hybrid system that combines the II outputs with partially resolved imagery from a traditional imaging telescope. Partially resolved imagery provides an additional constraint for the iterative phase retrieval process, as well as an improved starting point. The benefits of this additional a priori information are explored and include lower residual phase error for SNR values above 0.01, increased sensitivity, and improved image quality. Results are also presented for image reconstruction from II measurements alone, via current state-of-the-art phase retrieval techniques. These results are based on the standard hybrid input-output (HIO) algorithm, as well as a recent enhancement to HIO that optimizes step lengths in addition to step directions. The additional step length optimization yields a reduction in residual phase error, but only for SNR values greater than about 10. Image quality for all algorithms studied is quite good for SNR>=10, but it should be noted that the studied phase-recovery techniques yield useful information even for SNRs that are much lower.
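The HIO algorithm referenced above alternates a Fourier-magnitude projection with a support-based feedback update. The following is a generic single-iteration sketch of standard HIO (variable names and the fixed feedback parameter beta are our assumptions; the step-length-optimized variant in the paper modifies how the update steps are chosen):

```python
import numpy as np

def hio_step(g, mag, support, beta=0.9):
    """One hybrid input-output (HIO) update.

    g:       current real-space estimate (2D array)
    mag:     measured Fourier magnitudes
    support: boolean mask of where the object may be nonzero
    """
    # Fourier-domain projection: impose measured magnitudes, keep phases
    G = np.fft.fft2(g)
    g_proj = np.fft.ifft2(mag * np.exp(1j * np.angle(G))).real
    # HIO feedback: accept the projection inside the support,
    # push the estimate down outside it
    return np.where(support, g_proj, g - beta * g_proj)
```

In a hybrid II system, the partially resolved telescope image would supply both the initial estimate `g` and a tighter support constraint.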
Xuemiao Xu
2016-04-01
Exterior orientation parameter (EOP) estimation using space resection plays an important role in topographic reconstruction for push-broom scanners. However, existing models of space resection are highly sensitive to errors in the data. Unfortunately, for lunar imagery, the altitude data at the ground control points (GCPs) used for space resection are error-prone, so existing models fail to produce reliable EOPs. Motivated by the finding that, for push-broom scanners, the angular rotations of the EOPs can be estimated independently of the altitude data, using only the geographic data at the GCPs (which are already provided), we divide the modeling of space resection into two phases. First, we estimate the angular rotations from the reliable geographic data using our proposed mathematical model. Then, with accurate angular rotations, the collinearity equations for space resection simplify into a linear problem, and the globally optimal solution for the spatial position of the EOPs can always be achieved. Moreover, a certainty term is integrated to penalize unreliable altitude data, increasing the error tolerance. Experimental results show that our model obtains more accurate EOPs and topographic maps than the existing space resection model, not only for simulated data but also for real data from Chang’E-1.
3D CAD model reconstruction of a human femur from MRI images
Benaissa EL FAHIME
2013-05-01
Medical practice and the life sciences take full advantage of progress in engineering disciplines, in particular the computer-assisted placement technique in hip surgery. This paper describes the three-dimensional model reconstruction of a human femur from MRI images. The developed program produces a digital 3D femur shape recognized by all CAD software and allows accurate placement of the femoral component. This technique provides precise measurement of implant alignment during hip resurfacing or total hip arthroplasty, thereby reducing the risk of component mal-positioning and femoral neck notching.
Yoo, Hoon; Jang, Jae-Young
2017-10-01
We propose a novel approach for intermediate elemental image reconstruction in integral imaging. To reconstruct intermediate elemental images, we introduce a null elemental image whose pixels are all zero. In the proposed method a number of null elemental images are inserted into a given elemental image array. The elemental image array with null elemental images is convolved with the δ-function sequence. The convolution result shows that the proposed method provides an efficient structure to expand an elemental image array. The resulting elemental image array from the proposed method can supply three-dimensional information for an object at a specific depth. In addition, the proposed method provides adjustable parameters, which can be utilized in design of integral imaging systems. The feasibility of the proposed method has been confirmed through preliminary experiments and theoretical analysis.
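The core array-expansion idea, inserting all-zero (null) elemental images between the given ones, can be sketched as follows. This is a toy numpy sketch under our own naming and layout assumptions (a 1-D array of elemental images), not the authors' implementation:

```python
import numpy as np

def insert_null_images(eia, n_null=1):
    """Insert `n_null` all-zero elemental images after each elemental
    image in a 1-D elemental image array (K images of shape h x w)."""
    k, h, w = eia.shape
    out = np.zeros((k * (n_null + 1), h, w), dtype=eia.dtype)
    out[::n_null + 1] = eia   # originals land in every (n_null+1)-th slot
    return out
```

Convolving the expanded array along the array axis with a δ-function sequence then fills the null slots with shifted copies, which is what yields the intermediate elemental images in the proposed method.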
Hierarchical Bayesian sparse image reconstruction with application to MRFM
Dobigeon, Nicolas; Tourneret, Jean-Yves
2008-01-01
This paper presents a hierarchical Bayesian model to reconstruct sparse images when the observations are obtained from linear transformations and corrupted by an additive white Gaussian noise. Our hierarchical Bayes model is well suited to such naturally sparse image applications as it seamlessly accounts for properties such as sparsity and positivity of the image via appropriate Bayes priors. We propose a prior that is based on a weighted mixture of a positive exponential distribution and a mass at zero. The prior has hyperparameters that are tuned automatically by marginalization over the hierarchical Bayesian model. To overcome the complexity of the posterior distribution, a Gibbs sampling strategy is proposed. The Gibbs samples can be used to estimate the image to be recovered, e.g. by maximizing the estimated posterior distribution. In our fully Bayesian approach the posteriors of all the parameters are available. Thus our algorithm provides more information than other previously proposed sparse reconstr...
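The sparsity-inducing prior described above, a weighted mixture of a mass at zero and a positive exponential distribution, is easy to sample directly. This is an illustrative sketch (the weight `w` and scale are hypothetical values, and in the paper these hyperparameters are marginalized rather than fixed):

```python
import numpy as np

def sample_sparse_prior(n, w=0.3, scale=1.0, rng=None):
    """Draw n pixels from a weighted mixture of a mass at zero
    (prob. 1 - w) and a positive exponential distribution (prob. w)."""
    rng = rng or np.random.default_rng(0)
    active = rng.random(n) < w          # Bernoulli indicator: pixel "on"?
    x = np.zeros(n)
    x[active] = rng.exponential(scale, active.sum())
    return x
```

Draws like this are exactly what a Gibbs sampler cycles through when updating each pixel conditional on the rest of the image and the data.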
Isotope specific resolution recovery image reconstruction in high resolution PET imaging.
Kotasidis, Fotis A; Angelis, Georgios I; Anton-Rodriguez, Jose; Matthews, Julian C; Reader, Andrew J; Zaidi, Habib
2014-05-01
Measuring and incorporating a scanner-specific point spread function (PSF) within image reconstruction has been shown to improve spatial resolution in PET. However, due to the short half-life of clinically used isotopes, other long-lived isotopes not used in clinical practice are used to perform the PSF measurements. As such, non-optimal PSF models that do not correspond to those needed for the data to be reconstructed are used within resolution modeling (RM) image reconstruction, usually underestimating the true PSF owing to the difference in positron range. In high resolution brain and preclinical imaging, this effect is of particular importance since the PSFs become more positron range limited and isotope-specific PSFs can help maximize the performance benefit from using resolution recovery image reconstruction algorithms. In this work, the authors used a printing technique to simultaneously measure multiple point sources on the High Resolution Research Tomograph (HRRT), and the authors demonstrated the feasibility of deriving isotope-dependent system matrices from fluorine-18 and carbon-11 point sources. Furthermore, the authors evaluated the impact of incorporating them within RM image reconstruction, using carbon-11 phantom and clinical datasets on the HRRT. The results obtained using these two isotopes illustrate that even small differences in positron range can result in different PSF maps, leading to further improvements in contrast recovery when used in image reconstruction. The difference is more pronounced in the centre of the field-of-view where the full width at half maximum (FWHM) from the positron range has a larger contribution to the overall FWHM compared to the edge where the parallax error dominates the overall FWHM. Based on the proposed methodology, measured isotope-specific and spatially variant PSFs can be reliably derived and used for improved spatial resolution and variance performance in resolution recovery image reconstruction. The
Scheins, J., E-mail: j.scheins@fz-juelich.de [Institute of Neuroscience and Medicine (INM-4), Forschungszentrum Jülich GmbH, Leo-Brandt-Str., 52425 Jülich (Germany); Ullisch, M.; Tellmann, L.; Weirich, C.; Rota Kops, E.; Herzog, H.; Shah, N.J. [Institute of Neuroscience and Medicine (INM-4), Forschungszentrum Jülich GmbH, Leo-Brandt-Str., 52425 Jülich (Germany)
2013-02-21
The BrainPET scanner from Siemens, designed as a hybrid MR/PET system for simultaneous acquisition of both modalities, provides high-resolution PET images with an optimum resolution of 3 mm. However, significant head motion often compromises the achievable image quality, e.g. in neuroreceptor studies of the human brain. This limitation can be overcome by tracking the head motion and accurately correcting the measured Lines-of-Response (LORs). For this purpose, we present a novel method, which advantageously combines MR-guided motion tracking with the capabilities of the reconstruction software PRESTO (PET Reconstruction Software Toolkit) to convert motion-corrected LORs into highly accurate generic projection data. In this way, the high-resolution PET images achievable with PRESTO can also be obtained in the presence of severe head motion.
Kainmueller, Dagmar
2014-01-01
Segmentation of anatomical structures in medical image data is an essential task in clinical practice. Dagmar Kainmueller introduces methods for accurate, fully automatic segmentation of anatomical structures in 3D medical image data. The author's core methodological contribution is a novel deformation model that overcomes limitations of state-of-the-art Deformable Surface approaches, hence allowing for accurate segmentation of tip- and ridge-shaped features of anatomical structures. As for practical contributions, she proposes application-specific segmentation pipelines for a range of anatom
Almeida, Eduardo DeBrito
2012-01-01
This report discusses work completed over the summer at the Jet Propulsion Laboratory (JPL), California Institute of Technology. A system is presented to guide ground or aerial unmanned robots using computer vision. The system performs accurate camera calibration, camera pose refinement and surface extraction from images collected by a camera mounted on the vehicle. The application motivating the research is planetary exploration and the vehicles are typically rovers or unmanned aerial vehicles. The information extracted from imagery is used primarily for navigation, as robot location is the same as the camera location and the surfaces represent the terrain that rovers traverse. The processed information must be very accurate and acquired very fast in order to be useful in practice. The main challenge being addressed by this project is to achieve high estimation accuracy and high computation speed simultaneously, a difficult task due to many technical reasons.
Chuang, Ching-Cheng; Tsai, Jui-che; Chen, Chung-Ming; Yu, Zong-Han; Sun, Chia-Wei
2012-04-01
Diffuse optical tomography (DOT) is an emerging technique for functional biological imaging. The imaging quality of DOT depends on the image reconstruction algorithm. The simultaneous iterative reconstruction technique (SIRT) has been widely used for DOT image reconstruction, but there is no criterion for truncating the iteration based on any residual parameter; the number of iteration loops is usually decided by an experimental rule. This work presents a convergence-rate (CR) calculation that helps optimize SIRT. In this paper, four inhomogeneities with various shapes of absorption distributions are simulated as imaging targets, and the images are reconstructed and analyzed with the SIRT method. To balance time consumption against imaging accuracy in the reconstruction process, the number of iteration loops must be optimized by a criterion in the algorithm, namely that the root mean square error (RMSE) should be minimized within a limited number of iterations. For clinical applications of DOT, the RMSE cannot be obtained because the measured targets are unknown. Thus, the correlations between the RMSE and the CR of the SIRT algorithm are analyzed in this paper. The simulation results show that the CR tracks the RMSE of the reconstructed images, so the CR calculation offers an optimized stopping criterion for the SIRT iteration in DOT imaging, and SIRT can be modified with the CR calculation for self-optimization; CR thus serves as an indicator of SIRT image reconstruction quality in clinical DOT measurement. Based on the comparison between RMSE and CR, a threshold value of CR (CRT) can provide an optimized number of iteration steps for DOT image reconstruction. This paper presents a feasibility study of the CR criterion for SIRT in simulation; the clinical application to DOT measurement requires further investigation.
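The idea of stopping SIRT once the convergence rate flattens can be illustrated with a toy SIRT loop on a linear system. This is our own minimal sketch (the matrix normalizations and the definition of CR as the relative residual drop per iteration are assumptions, not the paper's exact formulation):

```python
import numpy as np

def sirt(A, b, n_iter=300, cr_threshold=1e-4):
    """SIRT iterations with a convergence-rate (CR) stopping rule:
    stop once the relative drop of the residual norm per iteration
    falls below `cr_threshold`.  A is assumed nonnegative."""
    C = 1.0 / np.maximum(A.sum(axis=0), 1e-12)   # column-sum normalization
    R = 1.0 / np.maximum(A.sum(axis=1), 1e-12)   # row-sum normalization
    x = np.zeros(A.shape[1])
    prev = np.linalg.norm(b - A @ x)
    for _ in range(n_iter):
        x = x + C * (A.T @ (R * (b - A @ x)))    # simultaneous update
        res = np.linalg.norm(b - A @ x)
        cr = (prev - res) / max(prev, 1e-12)     # relative residual drop
        if cr < cr_threshold:
            break
        prev = res
    return x, res
```

In the clinical setting the residual against the measurements plays the role of the unobservable RMSE, and a CR threshold replaces a fixed iteration count.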
Task-driven image acquisition and reconstruction in cone-beam CT.
Gang, Grace J; Stayman, J Webster; Ehtiati, Tina; Siewerdsen, Jeffrey H
2015-04-21
This work introduces a task-driven imaging framework that incorporates a mathematical definition of the imaging task, a model of the imaging system, and a patient-specific anatomical model to prospectively design image acquisition and reconstruction techniques to optimize task performance. The framework is applied to joint optimization of tube current modulation, view-dependent reconstruction kernel, and orbital tilt in cone-beam CT. The system model considers a cone-beam CT system incorporating a flat-panel detector and 3D filtered backprojection and accurately describes the spatially varying noise and resolution over a wide range of imaging parameters in the presence of a realistic anatomical model. Task-based detectability index (d') is incorporated as the objective function in a task-driven optimization of image acquisition and reconstruction techniques. The orbital tilt was optimized through an exhaustive search across tilt angles ranging ± 30°. For each tilt angle, the view-dependent tube current and reconstruction kernel (i.e. the modulation profiles) that maximized detectability were identified via an alternating optimization. The task-driven approach was compared with conventional unmodulated and automatic exposure control (AEC) strategies for a variety of imaging tasks and anthropomorphic phantoms. The task-driven strategy outperformed the unmodulated and AEC cases for all tasks. For example, d' for a sphere detection task in a head phantom was improved by 30% compared to the unmodulated case by using smoother kernels for noisy views and distributing mAs across less noisy views (at fixed total mAs) in a manner that was beneficial to task performance. Similarly for detection of a line-pair pattern, the task-driven approach increased d' by 80% compared to no modulation by means of view-dependent mA and kernel selection that yields modulation transfer function and noise-power spectrum optimal to the task. Optimization of orbital tilt identified the tilt
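A detectability index of the kind used as the objective function above can be computed from a task function, the system MTF, and the noise-power spectrum. The sketch below uses the prewhitening-observer form on a 1-D frequency grid for brevity (the paper's d' is evaluated for a specific observer model over 2-D/3-D spectra; treat this as a simplified illustration):

```python
import numpy as np

def detectability_pw(w_task, mtf, nps, df):
    """Prewhitening-observer detectability index on a discrete grid:
    d'^2 = sum( |W_task|^2 * MTF^2 / NPS ) * df."""
    d2 = np.sum((np.abs(w_task) ** 2) * mtf ** 2 / nps) * df
    return np.sqrt(d2)
```

Halving the noise-power spectrum (e.g. by redistributing mAs to less noisy views) raises d' by a factor of sqrt(2), which is the kind of gain the task-driven optimization exploits.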
Kovarik, Libor; Stevens, Andrew J.; Liyu, Andrey V.; Browning, Nigel D.
2016-10-17
Aberration correction for scanning transmission electron microscopes (STEM) has dramatically increased spatial image resolution for beam-stable materials, but it is the sample stability rather than the microscope that often limits the practical resolution of STEM images. To extract physical information from images of beam-sensitive materials it is becoming clear that there is a critical dose/dose-rate below which the images can be interpreted as representative of the pristine material, while above it the observation is dominated by beam effects. Here we describe an experimental approach for sparse sampling in the STEM and in-painting image reconstruction in order to reduce the electron dose/dose-rate to the sample during imaging. By characterizing the induction-limited rise time and hysteresis in the scan coils, we show that a sparse line-hopping approach to scan randomization can be implemented that optimizes both the speed of the scan and the amount of the sample that needs to be illuminated by the beam. The dose and acquisition time for sparse sampling are shown to be effectively decreased by a factor of 5 relative to conventional acquisition, permitting imaging of beam-sensitive materials without changing the microscope operating parameters. The use of a sparse line-hopping scan to acquire STEM images is demonstrated with atomic-resolution aberration-corrected Z-contrast images of CaCO3, a material that is traditionally difficult to image by TEM/STEM because of dose issues.
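A line-hopping sampling mask, acquiring a random subset of full scan lines so only a fraction of the sample is illuminated, can be generated as follows. This is an illustrative sketch (the sampling fraction and row-wise layout are our assumptions; the actual scan pattern is constrained by the coil rise-time characterization described above):

```python
import numpy as np

def line_hop_mask(n_rows, n_cols, fraction=0.2, rng=None):
    """Boolean sampling mask that acquires a random subset of full
    scan lines ('line hopping'), covering ~`fraction` of the pixels."""
    rng = rng or np.random.default_rng(1)
    n_keep = max(1, int(round(fraction * n_rows)))
    rows = rng.choice(n_rows, size=n_keep, replace=False)
    mask = np.zeros((n_rows, n_cols), dtype=bool)
    mask[np.sort(rows)] = True          # whole lines, not scattered pixels
    return mask
```

The unsampled lines are then filled in by the in-painting reconstruction; sampling whole lines (rather than isolated pixels) keeps the scan fast because the beam never has to settle mid-line.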
REGION-BASED 3D SURFACE RECONSTRUCTION USING IMAGES ACQUIRED BY LOW-COST UNMANNED AERIAL SYSTEMS
Lari, Z.; Al-Rawabdeh, A.; He, F.; Habib, A.; El-Sheimy, N.
2015-08-01
Accurate 3D surface reconstruction of our environment has become essential for numerous emerging applications. In the past few years, Unmanned Aerial Systems (UAS) have evolved into low-cost and flexible platforms for geospatial data collection that can meet the needs of these applications and overcome the limitations of traditional airborne and terrestrial mobile mapping systems. Due to their payload restrictions, these systems usually carry consumer-grade imaging and positioning sensors, which negatively impacts the quality of the collected geospatial data and the reconstructed surfaces. Therefore, new surface reconstruction approaches are needed to mitigate the impact of using low-cost sensors on the final products. To date, different approaches have been proposed for 3D surface reconstruction from overlapping images collected by imaging sensors mounted on moving platforms. In these approaches, 3D surfaces are mainly reconstructed with dense matching techniques. However, the generated 3D point clouds might not accurately represent the scanned surfaces due to point density variations and edge preservation problems. To resolve these problems, a new region-based 3D surface reconstruction technique is introduced in this paper. This approach aims to generate a 3D photo-realistic model of the individually scanned surfaces within the captured images. It starts with a Semi-Global dense Matching procedure that generates a 3D point cloud from the scanned area within the collected images. The generated point cloud is then segmented to extract individual planar surfaces. Finally, a novel region-based texturing technique is implemented for photorealistic reconstruction of the extracted planar surfaces. Experimental results using images collected by a camera mounted on a low-cost UAS demonstrate the feasibility of the proposed approach for photorealistic 3D surface reconstruction.
Jha, Abhinav K.; Song, Na; Caffo, Brian; Frey, Eric C.
2015-03-01
Quantitative single-photon emission computed tomography (SPECT) imaging is emerging as an important tool in clinical studies and biomedical research. There is thus a need for optimization and evaluation of systems and algorithms that are being developed for quantitative SPECT imaging. An appropriate objective method to evaluate these systems is by comparing their performance in the end task that is required in quantitative SPECT imaging, such as estimating the mean activity concentration in a volume of interest (VOI) in a patient image. This objective evaluation can be performed if the true value of the estimated parameter is known, i.e. we have a gold standard. However, very rarely is this gold standard known in human studies. Thus, no-gold-standard techniques to optimize and evaluate systems and algorithms in the absence of gold standard are required. In this work, we developed a no-gold-standard technique to objectively evaluate reconstruction methods used in quantitative SPECT when the parameter to be estimated is the mean activity concentration in a VOI. We studied the performance of the technique with realistic simulated image data generated from an object database consisting of five phantom anatomies with all possible combinations of five sets of organ uptakes, where each anatomy consisted of eight different organ VOIs. Results indicate that the method provided accurate ranking of the reconstruction methods. We also demonstrated the application of consistency checks to test the no-gold-standard output.
Use of GMM and SCMS for Accurate Road Centerline Extraction from the Classified Image
Zelang Miao
2015-01-01
The extraction of road centerlines from a classified image is a fundamental image analysis task. Common problems in road centerline extraction include limited ability to cope with the general case, production of undesired objects, and inefficiency. To tackle these limitations, this paper presents a novel accurate centerline extraction method using a Gaussian mixture model (GMM) and subspace-constrained mean shift (SCMS). The proposed method consists of three main steps. First, the GMM is used to partition the classified image into several clusters. The major axis of each cluster's ellipsoid is then extracted and taken as the initial centerline. Finally, the initial result is adjusted using SCMS to produce a precise road centerline. Both simulated and real datasets are used to validate the proposed method. Preliminary results demonstrate that it provides a comparatively robust solution for accurate centerline extraction from a classified image.
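The "major axis of each cluster's ellipsoid" is the leading eigenvector of the cluster's covariance matrix. A minimal sketch of that step for a 2-D point cluster (names are ours; the paper's GMM fitting and SCMS refinement are not shown):

```python
import numpy as np

def major_axis(points):
    """Major axis (unit vector) of a 2-D point cluster: leading
    eigenvector of the covariance matrix (i.e. first PCA direction)."""
    centered = points - points.mean(axis=0)
    cov = centered.T @ centered / len(points)
    vals, vecs = np.linalg.eigh(cov)      # ascending eigenvalues
    return vecs[:, np.argmax(vals)]
```

For an elongated road cluster, this axis gives the initial centerline direction that SCMS then snaps onto the density ridge.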
Superresolution image reconstruction using panchromatic and multispectral image fusion
Elbakary, M. I.; Alam, M. S.
2008-08-01
Hyperspectral imagery is used for a wide variety of applications, including target detection, tracking, agricultural monitoring, and natural resource exploration. The main reason for using hyperspectral imagery is that these images reveal spectral information about the scene that is not available in a single band. Unfortunately, many factors, such as sensor noise and atmospheric scattering, degrade the spatial quality of these images. Recently, many algorithms have been introduced in the literature to improve the resolution of hyperspectral images using co-registered high spatial-resolution imagery such as panchromatic imagery. In this paper, we propose a new algorithm to enhance the spatial resolution of low-resolution hyperspectral bands using strongly correlated and co-registered high spatial-resolution panchromatic imagery. The proposed algorithm constructs the superresolution bands corresponding to the low-resolution bands using a global correlation enhancement technique. The global enhancement is based on least-squares regression and histogram matching to improve the estimated interpolation of the spatial resolution. The introduced algorithm can be considered an improvement over Price's algorithm, which uses only the global correlation for spatial resolution enhancement. Numerous studies are conducted to investigate the effectiveness of the proposed algorithm compared to the traditional superresolution enhancement algorithm. Experimental results obtained using hyperspectral data from an airborne imaging sensor are presented to verify the superiority of the proposed algorithm.
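The histogram-matching component mentioned above can be implemented by rank mapping: sort both images and give each source pixel the reference intensity of the same rank. This is a generic sketch of exact histogram matching for equally sized arrays (not the paper's full fusion pipeline):

```python
import numpy as np

def match_histogram(src, ref):
    """Rank-based histogram matching: give `src` the exact intensity
    distribution of `ref` (arrays of equal size)."""
    flat = src.ravel()
    order = np.argsort(flat, kind="stable")   # pixel ranks in src
    out = np.empty_like(flat, dtype=ref.dtype)
    out[order] = np.sort(ref.ravel())         # assign ref values by rank
    return out.reshape(src.shape)
```

In pan-sharpening, matching the panchromatic band's histogram to each low-resolution band before regression keeps the fused band radiometrically consistent.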
Simultaneous reconstruction and segmentation for dynamic SPECT imaging
Burger, Martin; Rossmanith, Carolin; Zhang, Xiaoqun
2016-10-01
This work deals with the reconstruction of dynamic images that incorporate characteristic dynamics in certain subregions, as arising for the kinetics of many tracers in emission tomography (SPECT, PET). We make use of a basis function approach for the unknown tracer concentration by assuming that the region of interest can be divided into subregions with spatially constant concentration curves. Applying a regularised variational framework reminiscent of the Chan-Vese model for image segmentation we simultaneously reconstruct both the labelling functions of the subregions as well as the subconcentrations within each region. Our particular focus is on applications in SPECT with the Poisson noise model, resulting in a Kullback-Leibler data fidelity in the variational approach. We present a detailed analysis of the proposed variational model and prove existence of minimisers as well as error estimates. The latter apply to a more general class of problems and generalise existing results in literature since we deal with a nonlinear forward operator and a nonquadratic data fidelity. A computational algorithm based on alternating minimisation and splitting techniques is developed for the solution of the problem and tested on appropriately designed synthetic data sets. For those we compare the results to those of standard EM reconstructions and investigate the effects of Poisson noise in the data.
Object segmentation controls image reconstruction from natural scenes
2017-01-01
The structure of the physical world projects images onto our eyes. However, those images are often poorly representative of environmental structure: well-defined boundaries within the eye may correspond to irrelevant features of the physical world, while critical features of the physical world may be nearly invisible at the retinal projection. The challenge for the visual cortex is to sort these two types of features according to their utility in ultimately reconstructing percepts and interpreting the constituents of the scene. We describe a novel paradigm that enabled us to selectively evaluate the relative role played by these two feature classes in signal reconstruction from corrupted images. Our measurements demonstrate that this process is quickly dominated by the inferred structure of the environment, and only minimally controlled by variations of raw image content. The inferential mechanism is spatially global and its impact on early visual cortex is fast. Furthermore, it retunes local visual processing for more efficient feature extraction without altering the intrinsic transduction noise. The basic properties of this process can be partially captured by a combination of small-scale circuit models and large-scale network architectures. Taken together, our results challenge compartmentalized notions of bottom-up/top-down perception and suggest instead that these two modes are best viewed as an integrated perceptual mechanism. PMID:28827801
Reconstruction of Static Black Hole Images Using Simple Geometric Forms
Benkevitch, Leonid; Lu, Rusen; Doeleman, Shepherd; Fish, Vincent
2016-01-01
General Relativity predicts that the emission close to a black hole must be lensed by its strong gravitational field, illuminating the last photon orbit. This results in a dark circular area known as the black hole 'shadow'. The Event Horizon Telescope (EHT) is a (sub)mm VLBI network capable of Schwarzschild-radius resolution on Sagittarius A* (or Sgr A*), the 4 million solar mass black hole at the Galactic Center. The goals of the Sgr A* observations include resolving and measuring the details of its morphology. However, EHT data are sparse in the visibility domain, complicating reliable detailed image reconstruction. Therefore, direct pixel imaging should be complemented by other approaches. Using simulated EHT data from a black hole emission model we consider an approach to Sgr A* image reconstruction based on a simple and computationally efficient analytical model that produces images similar to the synthetic ones. The model consists of an eccentric ring with a brightness gradient and a two-dimensional Ga...
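The analytical model described above, an eccentric ring with a brightness gradient, can be rendered in a few lines. The parameter names and the cosine form of the azimuthal gradient below are our own illustrative assumptions, not the exact parameterization of the paper's model:

```python
import numpy as np

def ring_model(n=128, r_in=0.3, r_out=0.5, dx=0.1, grad=0.5):
    """Render an eccentric ring: an annulus whose inner hole is shifted
    by `dx`, with brightness modulated as 1 + grad*cos(theta)."""
    y, x = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
    r_outer = np.hypot(x, y)
    r_inner = np.hypot(x - dx, y)        # eccentric (shifted) hole
    ring = (r_outer <= r_out) & (r_inner >= r_in)
    theta = np.arctan2(y, x)
    return ring * (1.0 + grad * np.cos(theta))
```

Fitting a handful of such geometric parameters to sparse visibility data is far better conditioned than reconstructing every pixel independently.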
Lauzier, Pascal Theriault; Tang Jie; Speidel, Michael A.; Chen Guanghong [Department of Medical Physics, University of Wisconsin-Madison, Madison, Wisconsin 53705-2275 (United States); Department of Medical Physics and Department of Radiology, University of Wisconsin-Madison, Madison, Wisconsin 53705-2275 (United States)
2012-07-15
Purpose: To achieve high temporal resolution in CT myocardial perfusion imaging (MPI), images are often reconstructed using filtered backprojection (FBP) algorithms from data acquired within a short-scan angular range. However, the variation in the central angle from one time frame to the next in gated short scans has been shown to create detrimental partial scan artifacts when performing quantitative MPI measurements. This study has two main purposes. (1) To demonstrate the existence of a distinct detrimental effect in short-scan FBP, i.e., the introduction of a nonuniform spatial image noise distribution; this nonuniformity can lead to unexpectedly high image noise and streaking artifacts, which may affect CT MPI quantification. (2) To demonstrate that statistical image reconstruction (SIR) algorithms can be a potential solution to address the nonuniform spatial noise distribution problem and can also lead to radiation dose reduction in the context of CT MPI. Methods: Projection datasets from a numerically simulated perfusion phantom and an in vivo animal myocardial perfusion CT scan were used in this study. In the numerical phantom, multiple realizations of Poisson noise were added to projection data at each time frame to investigate the spatial distribution of noise. Images from all datasets were reconstructed using both FBP and SIR reconstruction algorithms. To quantify the spatial distribution of noise, the mean and standard deviation were measured in several regions of interest (ROIs) and analyzed across time frames. In the in vivo study, two low-dose scans at tube currents of 25 and 50 mA were reconstructed using FBP and SIR. Quantitative perfusion metrics, namely, the normalized upslope (NUS), myocardial blood volume (MBV), and first moment transit time (FMT), were measured for two ROIs and compared to reference values obtained from a high-dose scan performed at 500 mA. Results: Images reconstructed using FBP showed a highly nonuniform spatial distribution
Improved proton computed tomography by dual modality image reconstruction
Hansen, David C., E-mail: dch@ki.au.dk; Bassler, Niels [Experimental Clinical Oncology, Aarhus University, 8000 Aarhus C (Denmark); Petersen, Jørgen Breede Baltzer [Medical Physics, Aarhus University Hospital, 8000 Aarhus C (Denmark); Sørensen, Thomas Sangild [Computer Science, Aarhus University, 8000 Aarhus C, Denmark and Clinical Medicine, Aarhus University, 8200 Aarhus N (Denmark)
2014-03-15
Purpose: Proton computed tomography (CT) is a promising image modality for improving the stopping power estimates and dose calculations for particle therapy. However, the finite range of about 33 cm of water of most commercial proton therapy systems limits the sites that can be scanned from a full 360° rotation. In this paper the authors propose a method to overcome the problem using a dual modality reconstruction (DMR) combining the proton data with a cone-beam x-ray prior. Methods: A Catphan 600 phantom was scanned using a cone beam x-ray CT scanner. A digital replica of the phantom was created in the Monte Carlo code Geant4 and a 360° proton CT scan was simulated, storing the entrance and exit position and momentum vector of every proton. Proton CT images were reconstructed using a varying number of angles from the scan. The proton CT images were reconstructed using a constrained nonlinear conjugate gradient algorithm, minimizing total variation and the x-ray CT prior while remaining consistent with the proton projection data. The proton histories were reconstructed along curved cubic-spline paths. Results: The spatial resolution of the cone beam CT prior was retained for the fully sampled case and the 90° interval case, with the MTF = 0.5 (modulation transfer function) ranging from 5.22 to 5.65 linepairs/cm. In the 45° interval case, the MTF = 0.5 dropped to 3.91 linepairs/cm. For the fully sampled DMR, the maximal root mean square (RMS) error was 0.006 in units of relative stopping power. For the limited angle cases the maximal RMS error was 0.18, an almost five-fold improvement over the cone beam CT estimate. Conclusions: Dual modality reconstruction yields the high spatial resolution of cone beam x-ray CT while maintaining the improved stopping power estimation of proton CT. In the case of limited angles, the use of prior image proton CT greatly improves the resolution and stopping power estimate, but does not fully achieve the quality of a 360
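The objective sketched in the abstract above (data fidelity plus total-variation and x-ray-prior penalties) can be illustrated on a 1D toy. Everything here is an assumption for illustration: the operator `A`, the weights, and the prior are invented, and plain gradient descent stands in for the paper's constrained nonlinear conjugate gradient.

```python
import numpy as np

# Toy dual-modality-style reconstruction: least-squares data fit plus a
# total-variation subgradient and a quadratic prior penalty, minimized by
# gradient descent (all sizes and weights are hypothetical).
rng = np.random.default_rng(1)
n = 40
x_true = np.zeros(n); x_true[10:30] = 1.0          # piecewise-constant object
A = rng.normal(size=(60, n)) / np.sqrt(n)          # toy projection operator
b = A @ x_true                                     # noiseless "proton" data
x_prior = x_true + 0.02 * rng.normal(size=n)       # hypothetical CBCT prior

alpha, beta, step = 0.005, 0.2, 0.1
x = np.zeros(n)
for _ in range(500):
    grad_fit = A.T @ (A @ x - b)                   # data-fidelity gradient
    d = np.sign(np.diff(x))                        # TV subgradient
    grad_tv = np.zeros(n)
    grad_tv[:-1] -= d; grad_tv[1:] += d
    x -= step * (grad_fit + alpha * grad_tv + beta * (x - x_prior))
print(round(float(np.linalg.norm(x - x_true)), 3))
```

The prior term pulls the solution toward the x-ray image where the proton data underdetermine it, which is the mechanism the paper exploits for limited-angle scans.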
System calibration and image reconstruction for a new small-animal SPECT system
Chen, Yi-Chun
A novel small-animal SPECT imager, FastSPECT II, was recently developed at the Center for Gamma-Ray Imaging. FastSPECT II consists of two rings of eight modular scintillation cameras and list-mode data-acquisition electronics that enable stationary and dynamic imaging studies. The instrument is equipped with exchangeable aperture assemblies and adjustable camera positions for selections of magnifications, pinhole sizes, and fields of view (FOVs). The purpose of SPECT imaging is to recover the radiotracer distribution in the object from the measured image data. Accurate knowledge of the imaging system matrix (referred to as H) is essential for image reconstruction. To assure that all of the system physics is contained in the matrix, experimental calibration methods for the individual cameras and the whole imaging system were developed and carefully performed. The average spatial resolution over the FOV of FastSPECT II in its low-magnification (2.4X) configuration is around 2.4 mm, computed from the Fourier crosstalk matrix. The system sensitivity measured with a 99mTc point source at the center of the FOV is about 267 cps/MBq. The system detectability was evaluated by computing the ideal-observer performance on SKE/BKE (signal-known-exactly/background-known-exactly) detection tasks. To reduce the system-calibration time and achieve finer reconstruction grids, two schemes for interpolating H were implemented and compared: these are centroid interpolation with Gaussian fitting and Fourier interpolation. Reconstructed phantom and mouse-cardiac images demonstrated the effectiveness of the H-matrix interpolation. Tomographic reconstruction can be formulated as a linear inverse problem and solved using statistical-estimation techniques. Several iterative reconstruction algorithms were introduced, including maximum-likelihood expectation-maximization (ML-EM) and its ordered-subsets (OS) version, and some least-squares (LS) and weighted-least-squares (WLS) algorithms such
Yoon, Jihyung; Jung, Jae Won; Kim, Jong Oh; Yi, Byong Yong; Yeo, Inhwan
2016-07-01
A method is proposed to reconstruct a four-dimensional (4D) dose distribution using phase matching of measured cine images to precalculated images of electronic portal imaging device (EPID). (1) A phantom, designed to simulate a tumor in lung (a polystyrene block with a 3 cm diameter embedded in cork), was placed on a sinusoidally moving platform with an amplitude of 1 cm and a period of 4 s. Ten-phase 4D computed tomography (CT) images of the phantom were acquired. A planning target volume (PTV) was created by adding a margin of 1 cm around the internal target volume of the tumor. (2) Three beams were designed, which included a static beam, a theoretical dynamic beam, and a planning-optimized dynamic beam (PODB). While the theoretical beam was made by manually programming a simplistic sliding leaf motion, the planning-optimized beam was obtained from treatment planning. From the three beams, three-dimensional (3D) doses on the phantom were calculated; 4D dose was calculated by means of the ten phase images (integrated over phases afterward); serving as "reference" images, phase-specific EPID dose images under the lung phantom were also calculated for each of the ten phases. (3) Cine EPID images were acquired while the beams were irradiated to the moving phantom. (4) Each cine image was phase-matched to a phase-specific CT image at which common irradiation occurred by intercomparing the cine image with the reference images. (5) Each cine image was used to reconstruct dose in the phase-matched CT image, and the reconstructed doses were summed over all phases. (6) The summation was compared with forwardly calculated 4D and 3D dose distributions. Accounting for realistic situations, intratreatment breathing irregularity was simulated by assuming an amplitude of 0.5 cm for the phantom during a portion of breathing trace in which the phase matching could not be performed. Intertreatment breathing irregularity between the time of treatment and the time of planning CT was
GF-4 Images Super Resolution Reconstruction Based on POCS
XU Lina
2017-08-01
Super-resolution reconstruction of GF-4 imagery is performed by projection onto convex sets (POCS). The Papoulis-Gerchberg method is used to construct the reference frame, which reduces the number of iterations and improves algorithm efficiency. The Vandewalle method is used to estimate the motion parameters, which benefits block processing. Tests and analysis on real GF-4 image series show that the sharpness of the super-resolution result is positively correlated with the number of frames, while the signal-to-noise ratio (SNR) is negatively correlated with it. After processing with 5 frames, information entropy (IE) changes little; sharpness (average gradient) increases from 7.803 to 14.386; SNR decreases slightly, from 3.411 to 3.336. The experiments show that super-resolution reconstruction can greatly improve the sharpness and detail information of the results.
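The POCS idea above (alternately projecting the high-resolution estimate onto the consistency set of each low-resolution frame) can be sketched on a 1D toy. The assumptions here are mine, not the GF-4 pipeline: two frames, integer sub-sample shifts, and 2x decimation by box averaging.

```python
import numpy as np

# Toy POCS super resolution: each projection adjusts the HR estimate so
# that its degraded version matches one observed LR frame exactly.
hr_true = np.sin(np.linspace(0, 3 * np.pi, 64))

def degrade(hr, shift):                      # shift, then 2x box-average
    return np.roll(hr, shift).reshape(-1, 2).mean(axis=1)

frames = {sh: degrade(hr_true, sh) for sh in (0, 1)}
hr = np.repeat(frames[0], 2)                 # reference-frame initialization
err0 = np.abs(hr - hr_true).mean()
for _ in range(200):                         # cycle through the frames'
    for sh, lr in frames.items():            # data-consistency sets
        s = np.roll(hr, sh)
        s += np.repeat(lr - s.reshape(-1, 2).mean(axis=1), 2)
        hr = np.roll(s, -sh)
err = np.abs(hr - hr_true).mean()
print(err < err0)
```

With sub-pixel (non-integer) shifts, as estimated by the Vandewalle method, the degradation operator becomes an interpolating blur, but the projection structure is the same.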
Medical Image Watermarking Technique for Accurate Tamper Detection in ROI and Exact Recovery of ROI.
Eswaraiah, R; Sreenivasa Reddy, E
2014-01-01
In telemedicine, tampering may be introduced while transferring medical images. Before making any diagnostic decisions, the integrity of the region of interest (ROI) of the received medical image must be verified to avoid misdiagnosis. In this paper, we propose a novel fragile block-based medical image watermarking technique to avoid embedding distortion inside the ROI, verify the integrity of the ROI, accurately detect the tampered blocks inside the ROI, and recover the original ROI with zero loss. In the proposed method, the medical image is segmented into three sets of pixels: ROI pixels, region of noninterest (RONI) pixels, and border pixels. Then, authentication data and information about the ROI are embedded in the border pixels. Recovery data for the ROI are embedded into the RONI. Results of experiments conducted on a number of medical images reveal that the proposed method produces high-quality watermarked medical images, identifies tampering inside the ROI with 100% accuracy, and recovers the original ROI without any loss.
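The fragile-watermarking pattern described above can be sketched minimally: hash the ROI and hide the hash bits in the least significant bits of pixels outside it. This is an assumed simplification, not the paper's scheme (which also embeds recovery data in the RONI); the image, ROI, and hash choice are hypothetical.

```python
import hashlib
import numpy as np

# Sketch: embed a SHA-256 digest of the ROI into the LSBs of the border
# pixels, leaving the ROI itself untouched; verify on receipt.
img = np.arange(64, dtype=np.uint8).reshape(8, 8) * 3
roi = img[2:6, 2:6]                          # region of interest
border = np.concatenate([img[0], img[-1], img[1:-1, 0], img[1:-1, -1]])

digest = hashlib.sha256(roi.tobytes()).digest()
bits = np.unpackbits(np.frombuffer(digest, dtype=np.uint8))[: border.size]
watermarked = (border & 0xFE) | bits         # embed hash bits in LSBs

# Verification: extract LSBs and compare with the recomputed ROI hash.
ok = np.array_equal(watermarked & 1, bits)
print(ok)
```

Because the ROI is never modified, the embedding is distortion-free where it matters diagnostically, and any change to the ROI flips the recomputed hash with overwhelming probability.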
PIVlab – Towards User-friendly, Affordable and Accurate Digital Particle Image Velocimetry in MATLAB
Stamhuis, Eize; Thielicke, William
2014-01-01
Digital particle image velocimetry (DPIV) is a non-intrusive analysis technique that is very popular for mapping flows quantitatively. To get accurate results, in particular in complex flow fields, a number of challenges have to be faced and solved: the quality of the flow measurements is affected by …
Spectral image reconstruction by a tunable LED illumination
Lin, Meng-Chieh; Tsai, Chen-Wei; Tien, Chung-Hao
2013-09-01
Spectral reflectance estimation of an object via a low-dimensional snapshot requires both image acquisition and a post-acquisition numerical estimation analysis. In this study, we set up a system incorporating a homemade cluster of LEDs with spectral modulation for scene illumination, and a multi-channel CCD to acquire multichannel images by means of a fully digital process. Principal component analysis (PCA) and pseudo-inverse transformation were used to reconstruct the spectral reflectance over a constrained training set, such as the Munsell and Macbeth Color Checker sets. The average reflectance spectral RMS error over 34 patches of a standard color checker was 0.234. The purpose is to investigate the use of the system in conjunction with imaging analysis for industrial or medical inspection with fast and acceptable accuracy; the approach was preliminarily validated.
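The PCA + pseudo-inverse estimation step above reduces to simple linear algebra: represent reflectances in a low-dimensional PCA basis and solve for the basis weights from the camera responses. The toy data below stand in for the Munsell/Macbeth training set and the real channel sensitivities.

```python
import numpy as np

# Sketch: camera model c = S @ r, reflectance model r = mean + B @ w.
# Training spectra, channel sensitivities and the test spectrum are all
# synthetic placeholders.
rng = np.random.default_rng(0)
wl = 31                                      # toy: 31 spectral samples
train = rng.random((200, wl))                # hypothetical training spectra
mean = train.mean(axis=0)
_, _, Vt = np.linalg.svd(train - mean, full_matrices=False)
B = Vt[:3].T                                 # first 3 principal components

S = rng.random((6, wl))                      # 6 hypothetical LED/CCD channels
r_true = mean + B @ np.array([0.3, -0.2, 0.1])
c = S @ r_true                               # observed camera responses

# c = S @ (mean + B @ w)  ->  recover w by pseudo-inverse, then r.
w = np.linalg.pinv(S @ B) @ (c - S @ mean)
r_est = mean + B @ w
rms = float(np.sqrt(np.mean((r_est - r_true) ** 2)))
print(round(rms, 6))
```

When the true reflectance lies in the span of the retained components and there are at least as many channels as components, the recovery is exact; real spectra outside that span incur the kind of residual RMS error the abstract reports.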
A generalized Fourier penalty in prior-image-based reconstruction for cross-platform imaging
Pourmorteza, A.; Siewerdsen, J. H.; Stayman, J. W.
2016-03-01
Sequential CT studies present an excellent opportunity to apply prior-image-based reconstruction (PIBR) methods that leverage high-fidelity prior imaging studies to improve image quality and/or reduce x-ray exposure in subsequent studies. One major obstacle in using PIBR is that the initial and subsequent studies are often performed on different scanners (e.g. diagnostic CT followed by CBCT for interventional guidance); this results in mismatch in attenuation values due to hardware and software differences. While improved artifact correction techniques can potentially mitigate such differences, the correction is often incomplete. Here, we present an alternate strategy where the PIBR itself is used to mitigate these differences. We define a new penalty for the previously introduced PIBR called Reconstruction of Difference (RoD). RoD differs from many other PIBRs in that it reconstructs only changes in the anatomy (vs. reconstructing the current anatomy). Direct regularization of the difference image in RoD provides an opportunity to selectively penalize spatial frequencies of the difference image (e.g. low-frequency differences associated with attenuation offsets and shading artifacts) without interfering with the variations in the unchanged background image. We leverage this flexibility and introduce a novel regularization strategy using a generalized Fourier penalty within the RoD framework and develop the modified reconstruction algorithm. We evaluate the performance of the new approach in both simulation studies and in physical CBCT test-bench data. We find that the generalized Fourier penalty can be highly effective in reducing low-frequency x-ray artifacts through selective suppression of spatial frequencies in the reconstructed difference image.
Statistics-based reconstruction method with high random-error tolerance for integral imaging.
Zhang, Juan; Zhou, Liqiu; Jiao, Xiaoxue; Zhang, Lei; Song, Lipei; Zhang, Bo; Zheng, Yi; Zhang, Zan; Zhao, Xing
2015-10-01
A three-dimensional (3D) digital reconstruction method for integral imaging with high random-error tolerance based on statistics is proposed. By statistically analyzing the points reconstructed by triangulation from all corresponding image points in an elemental image array, 3D reconstruction with high random-error tolerance can be realized. To simulate the impact of random errors, random offsets with different error levels were added to different numbers of elemental images in both simulation and optical experiments. The results of the simulation and optical experiments showed that the proposed statistics-based reconstruction method has more stable and better reconstruction accuracy than the conventional reconstruction method. This verifies that the proposed method can effectively reduce the impact of random errors on 3D reconstruction in integral imaging. The method is simple and very helpful to the development of integral imaging technology.
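The statistical aggregation idea above can be illustrated with a toy robust estimator. The per-coordinate median used here is my stand-in; the paper's statistic may differ, and the candidate points, error levels, and contamination fraction are invented.

```python
import numpy as np

# Sketch: many triangulated candidates for one 3D point, a fraction of
# which carry gross random errors; a robust statistic suppresses them
# where plain averaging (the conventional approach) does not.
rng = np.random.default_rng(0)
p_true = np.array([1.0, 2.0, 30.0])
cands = np.tile(p_true, (50, 1)) + 0.01 * rng.normal(size=(50, 3))
cands[:10] += rng.normal(scale=5.0, size=(10, 3))   # gross random errors

naive = cands.mean(axis=0)                   # conventional averaging
robust = np.median(cands, axis=0)            # statistics-based estimate
print(round(float(np.abs(robust - p_true).max()), 3))
```

With 20% of the candidates grossly corrupted, the median stays within the inlier spread while the mean is pulled off by the outliers, which is the qualitative behavior the abstract reports.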
A hybrid ECT image reconstruction based on Tikhonov regularization theory and SIRT algorithm
Lei, Wang; Xiaotong, Du; Xiaoyin, Shao
2007-07-01
Electrical Capacitance Tomography (ECT) image reconstruction is a key problem that is not well solved due to the influence of the soft-field effect in the ECT system. In this paper, a new hybrid ECT image reconstruction algorithm is proposed by combining Tikhonov regularization theory and the Simultaneous Iterative Reconstruction Technique (SIRT) algorithm. Tikhonov regularization theory is used to solve the ill-posed image reconstruction problem and obtain a stable initial reconstructed image in the region of the optimized solution aggregate. Then, the SIRT algorithm is used to improve the quality of the final reconstructed image. In order to satisfy the industrial requirement of real-time computation, the proposed algorithm is further modified to improve its calculation speed. Test results show that the quality of the reconstructed image is better than that of the well-known Filtered Linear Back Projection (FLBP) algorithm, and the time consumption of the new algorithm is less than 0.1 s, which satisfies the online requirements.
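The two-stage composition in the abstract (a Tikhonov-regularized solve for a stable start, then SIRT refinement against the data) can be sketched on a toy linear system. The sensitivity matrix and parameters below are invented, not an ECT model.

```python
import numpy as np

# Stage 1: Tikhonov gives a stable (but biased) initial image for an
# ill-conditioned problem. Stage 2: SIRT iterations with standard
# row/column normalizations refine it toward data consistency.
rng = np.random.default_rng(0)
n, m = 30, 45
x_true = rng.random(n)
A = rng.random((m, n))                       # toy nonnegative sensitivity matrix
b = A @ x_true                               # noiseless measurements

lam = 1.0
x_tik = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

R = 1.0 / A.sum(axis=1)                      # inverse row sums
C = 1.0 / A.sum(axis=0)                      # inverse column sums
x = x_tik.copy()
for _ in range(300):
    x = x + C * (A.T @ (R * (b - A @ x)))    # one SIRT sweep
print(round(float(np.linalg.norm(b - A @ x)), 4))
```

The Tikhonov bias (controlled by `lam`) trades stability for accuracy; the SIRT sweeps then walk the residual down, which mirrors the abstract's rationale for chaining the two.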
Dynamic relaxation in algebraic reconstruction technique (ART) for breast tomosynthesis imaging.
Oliveira, N; Mota, A M; Matela, N; Janeiro, L; Almeida, P
2016-08-01
A major challenge in Digital Breast Tomosynthesis (DBT) is handling image noise, since the 3D reconstructed images are obtained from low-dose projections over a limited angular range. The use of the iterative Algebraic Reconstruction Technique (ART) in a clinical context depends on two key factors: the number of iterations needed (time consuming) and the image noise after iterations. Both factors depend highly on a relaxation coefficient (λ), which may give rise to slow or noisy reconstructions when a single λ value is used for the entire iterative process. The aim of this work is to present a new implementation of ART that calculates λ dynamically during DBT image reconstruction. A set of initial reconstructions of real phantom data was done using constant λ values. The results were used to choose, for each iteration, a suitable λ value, taking into account the image noise level and the convergence speed. A methodology to optimize λ automatically during the image reconstruction was proposed. Results showed that we can dynamically choose λ values in such a way that the time needed to reconstruct the images can be significantly reduced (up to 70%) while achieving similar image quality. These results were confirmed with one clinical dataset. With this simple methodology we were able to dynamically choose λ in DBT image reconstruction with ART, allowing a shorter image reconstruction time without increasing image noise. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
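The ART update with a varying relaxation coefficient can be sketched as follows. The decay schedule below is a placeholder for the paper's noise/convergence-based rule, which is not reproduced; the system and sizes are toys.

```python
import numpy as np

# ART (Kaczmarz) sweeps on a consistent toy system; lambda starts large
# for fast convergence and decays toward a floor to damp noise
# amplification in late iterations (hypothetical schedule).
rng = np.random.default_rng(0)
A = rng.random((80, 25))                     # toy projection matrix
x_true = rng.random(25)
b = A @ x_true

x = np.zeros(25)
lam = 1.0
for sweep in range(30):
    for i in range(A.shape[0]):              # one ART update per ray
        ai = A[i]
        x += lam * (b[i] - ai @ x) / (ai @ ai) * ai
    lam = max(0.1, lam * 0.9)                # dynamic relaxation schedule
print(round(float(np.linalg.norm(x - x_true)), 3))
```

Each row update projects the estimate toward one ray's hyperplane; smaller λ takes a shorter step, trading convergence speed for noise suppression, which is exactly the trade-off the dynamic rule tunes.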
Cardiac motion correction based on partial angle reconstructed images in x-ray CT
Kim, Seungeon; Chang, Yongjin; Ra, Jong Beom, E-mail: jbra@kaist.ac.kr [Department of Electrical Engineering, KAIST, Daejeon 305-701 (Korea, Republic of)
2015-05-15
Purpose: Cardiac x-ray CT imaging is still challenging due to heart motion, which cannot be ignored even with the current rotation speed of the equipment. In response, many algorithms have been developed to compensate remaining motion artifacts by estimating the motion using projection data or reconstructed images. In these algorithms, accurate motion estimation is critical to the compensated image quality. In addition, since the scan range is directly related to the radiation dose, it is preferable to minimize the scan range in motion estimation. In this paper, the authors propose a novel motion estimation and compensation algorithm using a sinogram with a rotation angle of less than 360°. The algorithm estimates the motion of the whole heart area using two opposite 3D partial angle reconstructed (PAR) images and compensates the motion in the reconstruction process. Methods: A CT system scans the thoracic area including the heart over an angular range of 180° + α + β, where α and β denote the detector fan angle and an additional partial angle, respectively. The obtained cone-beam projection data are converted into cone-parallel geometry via row-wise fan-to-parallel rebinning. Two conjugate 3D PAR images, whose center projection angles are separated by 180°, are then reconstructed with an angular range of β, which is considerably smaller than a short scan range of 180° + α. Although these images include limited view angle artifacts that disturb accurate motion estimation, they have considerably better temporal resolution than a short scan image. Hence, after preprocessing these artifacts, the authors estimate a motion model during a half rotation for a whole field of view via nonrigid registration between the images. Finally, motion-compensated image reconstruction is performed at a target phase by incorporating the estimated motion model. The target phase is selected as that corresponding to a view angle that is orthogonal to the center view angles of
LIRA: Low-Count Image Reconstruction and Analysis
Stein, Nathan; van Dyk, David; Connors, Alanna; Siemiginowska, Aneta; Kashyap, Vinay
2009-09-01
LIRA is a new software package for the R statistical computing language. The package is designed for multi-scale non-parametric image analysis for use in high-energy astrophysics. The code implements an MCMC sampler that simultaneously fits the image and the necessary tuning/smoothing parameters in the model (an advance from `EMC2' of Esch et al. 2004). The model-based approach allows for quantification of the standard error of the fitted image and can be used to assess the statistical significance of features in the image or to evaluate the goodness-of-fit of a proposed model. The method does not rely on Gaussian approximations, instead modeling image counts as Poisson data, making it suitable for images with extremely low counts. LIRA can include a null (or background) model and fit the departure between the observed data and the null model via a wavelet-like multi-scale component. The technique is therefore suited for problems in which some aspect of an observation is well understood (e.g., a point source), but questions remain about observed departures. To quantitatively test for the presence of diffuse structure unaccounted for by a point source null model, first, the observed image is fit with the null model. Second, multiple simulated images, generated as Poisson realizations of the point source model, are fit using the same null model. MCMC samples from the posterior distributions of the parameters of the fitted models can be compared and can be used to calibrate the misfit between the observed data and the null model. Additionally, output from LIRA includes the MCMC draws of the multi-scale component images, so that the departure of the (simulated or observed) data from the point source null model can be examined visually. To demonstrate LIRA, an example of reconstructing Chandra images of high redshift quasars with jets is presented.
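The calibration recipe above (compare the observed misfit against misfits from Poisson realizations of the null model) is a parametric bootstrap, which can be sketched without the MCMC machinery. The 1D "image", null rate, signal, and deviance statistic below are illustrative assumptions, not LIRA's model.

```python
import numpy as np

# Sketch: fit-free toy version of the null-model calibration. The misfit
# statistic is the Poisson deviance against a known null rate.
rng = np.random.default_rng(0)
null_rate = np.full(50, 5.0)                 # hypothetical null (no source)
obs = rng.poisson(null_rate + 5.0 * (np.arange(50) > 40))  # extra structure

def misfit(counts, rate):
    # Poisson deviance: 2 * sum(rate - y + y*log(y/rate)), with 0*log0 = 0.
    return 2.0 * np.sum(rate - counts
                        + counts * np.log(np.maximum(counts, 1) / rate))

t_obs = misfit(obs, null_rate)
t_sim = np.array([misfit(rng.poisson(null_rate), null_rate)
                  for _ in range(500)])      # Poisson realizations of null
p_value = float((t_sim >= t_obs).mean())
print(round(p_value, 3))
```

A small p-value indicates structure the null model cannot absorb; LIRA performs the analogous comparison on posterior summaries of its multi-scale component rather than on a fixed deviance.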
Oxvig, Christian Schou; Pedersen, Patrick Steffen; Arildsen, Thomas
2014-01-01
Magni is an open source Python package that embraces compressed sensing and Atomic Force Microscopy (AFM) imaging techniques. It provides AFM-specific functionality for undersampling and reconstructing images from AFM equipment, thereby accelerating the acquisition of AFM images. Magni also provides researchers in compressed sensing with a selection of algorithms for reconstructing undersampled general images, and offers a consistent and rigorous way to efficiently evaluate the researchers' own developed reconstruction algorithms in terms of phase transitions. The package also serves …
Local Surface Reconstruction from MER images using Stereo Workstation
Shin, Dongjoe; Muller, Jan-Peter
2010-05-01
The authors present a semi-automatic workflow that reconstructs the 3D shape of the Martian surface from local stereo images delivered by PanCam or NavCam on systems such as the NASA Mars Exploration Rover (MER) Mission and in the future the ESA-NASA ExoMars rover PanCam. The process is initiated with manually selected tiepoints on a stereo workstation, which is then followed by tiepoint refinement, stereo-matching using region growing, and Levenberg-Marquardt Algorithm (LMA)-based bundle adjustment processing. The stereo workstation, which is being developed by UCL in collaboration with colleagues at the Jet Propulsion Laboratory (JPL) within the EU FP7 ProVisG project, includes a set of practical GUI-based tools that enable an operator to define a visually correct tiepoint via a stereo display. To achieve platform and graphics hardware independence, the stereo application has been implemented using JPL's JADIS graphics library, which is written in JAVA, and the remaining processing blocks used in the reconstruction workflow have also been developed as a JAVA package to increase code re-usability, portability and compatibility. Although initial tiepoints from the stereo workstation are reasonably acceptable as true correspondences, it is often required to employ an optional validity check and/or quality enhancing process. To meet this requirement, the workflow has been designed to include a tiepoint refinement process based on the Adaptive Least Square Correlation (ALSC) matching algorithm so that the initial tiepoints can be further enhanced to sub-pixel precision or rejected if they fail to pass the ALSC matching threshold. Apart from the accuracy of reconstruction, the other criterion to assess the quality of reconstruction is the density (or completeness) of reconstruction, which is not attained in the refinement process. Thus, we re-implemented a stereo region growing process, which is a core matching algorithm within the UCL
Reconstruction method for x-ray imaging capsule
Rubin, Daniel; Lifshitz, Ronen; Bar-Ilan, Omer; Weiss, Noam; Shapiro, Yoel; Kimchy, Yoav
2017-03-01
A colon imaging capsule has been developed by Check-Cap Ltd (C-Scan® Cap). For the procedure, the patient swallows a small amount of standard iodinated contrast agent. To create images, three rotating X-ray beams are emitted towards the colon wall. Some of the X-ray photons are backscattered from the contrast medium and the colon. These photons are collected by an omnidirectional array of energy-discriminating photon-counting detectors (CdTe/CZT) within the capsule. X-ray fluorescence (XRF) and Compton backscattering photons carry different energies and are counted separately by the detection electronics. The current work examines a new statistical approach for the algorithm that reconstructs the lining of the colon wall from the X-ray detector readings. The algorithm performs numerical optimization to find the solution to the inverse problem applied to a physical forward model reflecting the behavior of the system. The forward model that was employed accounts for the following major factors: the two mechanisms of dependence between the distance to the colon wall and the number of photons, the directional scatter distributions, and the relative orientations between beams and detectors. A calibration procedure has been put in place to adjust the coefficients of the forward model for the specific capsule geometry, radiation source characteristics, and detector response. The performance of the algorithm was examined in phantom experiments and demonstrated high correlation between the actual phantom shape and the x-ray image reconstruction. Evaluation is underway to assess the algorithm's performance in a clinical setting.
Filtered gradient reconstruction algorithm for compressive spectral imaging
Mejia, Yuri; Arguello, Henry
2017-04-01
Compressive sensing matrices are traditionally based on random Gaussian and Bernoulli entries. Nevertheless, they are subject to physical constraints, and their structure rarely follows a dense matrix distribution, as in the case of the matrix related to compressive spectral imaging (CSI). The CSI matrix represents the integration of coded and shifted versions of the spectral bands. A spectral image can be recovered from CSI measurements by using iterative algorithms for linear inverse problems that minimize an objective function including a quadratic error term combined with a sparsity regularization term. However, current algorithms are slow because they do not exploit the structure and sparse characteristics of the CSI matrices. A gradient-based CSI reconstruction algorithm, which introduces a filtering step in each iteration of a conventional CSI reconstruction algorithm that yields improved image quality, is proposed. Motivated by the structure of the CSI matrix, Φ, this algorithm modifies the iterative solution such that it is forced to converge to a filtered version of the residual Φᵀy, where y is the compressive measurement vector. We show that the filtered algorithm converges to better quality performance results than the unfiltered version. Simulation results highlight the relative performance gain over the existing iterative algorithms.
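The iteration family described above (a gradient step on the quadratic data term followed by a per-iteration filtering step) can be sketched generically. A soft-threshold shrinkage stands in for the paper's filter, and a dense Gaussian `Phi` stands in for the structured CSI matrix, both as assumptions.

```python
import numpy as np

# Sketch: iterative shrinkage-thresholding as an instance of
# "gradient step + filtering step" for sparse recovery (toy problem).
rng = np.random.default_rng(0)
n, m = 100, 60
x_true = np.zeros(n)
x_true[rng.choice(n, 8, replace=False)] = 1.0      # 8-sparse spectral image
Phi = rng.normal(size=(m, n)) / np.sqrt(m)         # stand-in sensing matrix
y = Phi @ x_true

x, step, thr = np.zeros(n), 0.3, 0.05
for _ in range(300):
    x = x + step * Phi.T @ (y - Phi @ x)           # gradient step on ||y - Phi x||^2
    x = np.sign(x) * np.maximum(np.abs(x) - thr, 0)  # filtering (shrinkage)
print(int(np.count_nonzero(x)))
```

The paper's contribution is choosing a filter matched to the structure of Φ, which is what makes the filtered iterates converge to better-quality images than this generic shrinkage.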
Reconstructing the open-field magnetic geometry of solar corona using coronagraph images
Uritsky, Vadim M.; Davila, Joseph M.; Jones, Shaela; Burkepile, Joan
2015-04-01
The upcoming Solar Probe Plus and Solar Orbiter missions will provide a new insight into the inner heliosphere, which is magnetically connected with the topologically complex and eruptive solar corona. Physical interpretation of these observations will depend on accurate reconstruction of the large-scale coronal magnetic field. We argue that such reconstruction can be performed using photospheric extrapolation codes constrained by white-light coronagraph images. The field extrapolation component of this project is featured in a related presentation by S. Jones et al. Here, we focus on our image-processing algorithms conducting an automated segmentation of coronal loop structures. In contrast to previously proposed segmentation codes designed for detecting small-scale closed loops in the vicinity of active regions, our technique focuses on the large-scale geometry of the open-field coronal features observed at significant radial distances from the solar surface. Coronagraph images are transformed into a polar coordinate system and undergo radial detrending and initial noise reduction followed by an adaptive angular differentiation. An adjustable threshold is applied to identify candidate coronagraph features associated with the large-scale coronal field. A blob detection algorithm is used to identify valid features against a noisy background. The extracted coronal features are used to derive empirical directional constraints for magnetic field extrapolation procedures based on photospheric magnetograms. Two versions of the method, optimized for processing ground-based (Mauna Loa Solar Observatory) and satellite-based (STEREO Cor1 and Cor2) coronagraph images, are being developed.
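The detrend-and-threshold stage of the pipeline above can be sketched on synthetic data. The polar transform is assumed already done; the array shape, background profile, and the faint "ray" feature are invented for illustration.

```python
import numpy as np

# Sketch: on an (angle x radius) array, remove the steep radial
# background by subtracting the per-radius mean over angles, then apply
# a per-radius adaptive threshold to pick up a faint radial feature.
rng = np.random.default_rng(0)
ang, rad = 90, 50
img = 1.0 / (1.0 + np.arange(rad))           # steep radial background
img = np.tile(img, (ang, 1)) + 0.01 * rng.normal(size=(ang, rad))
img[40] += 0.05                              # faint open-field ray at one angle

detr = img - img.mean(axis=0)                # radial detrending
mask = detr > 3 * detr.std(axis=0)           # adaptive threshold per radius
print(int(mask[40].sum()), int(np.delete(mask, 40, axis=0).sum()))
```

Dividing out the radial brightness profile first is what lets a fixed sigma-multiple threshold work at all radii; the real pipeline follows this with angular differentiation and blob detection to reject the remaining noise hits.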
Maiti, Abhik; Chakravarty, Debashish
2016-01-01
3D reconstruction of geo-objects from their digital images is a time-efficient and convenient way of studying the structural features of the object being modelled. This paper presents a 3D reconstruction methodology which can be used to generate a photo-realistic, watertight 3D surface of irregularly shaped objects from digital image sequences. The 3D reconstruction approach described here is robust and simple, and can be readily used to reconstruct a watertight 3D surface of any object from its digital image sequence. Here, digital images of different objects are used to build sparse, and then dense, 3D point clouds of the objects. These image-derived point clouds are then used to generate photo-realistic 3D surfaces using different surface reconstruction algorithms, such as Poisson reconstruction and the Ball-Pivoting algorithm. Different control parameters of these algorithms are identified which affect the quality and computation time of the reconstructed 3D surface. The effects of these control parameters on the generation of 3D surfaces from point clouds of different densities are studied. It is shown that the surface quality of Poisson reconstruction depends significantly on the Samples per node (SN) value, with greater SN values resulting in better quality surfaces. Also, the quality of the 3D surface generated using the Ball-Pivoting algorithm is found to be highly dependent upon the Clustering radius and Angle threshold values. The results obtained from this study give the reader a valuable insight into the effects of different control parameters on the reconstructed surface quality.
Deformable Surface 3D Reconstruction from Monocular Images
Salzmann, Matthieu
2010-01-01
Being able to recover the shape of 3D deformable surfaces from a single video stream would make it possible to field reconstruction systems that run on widely available hardware without requiring specialized devices. However, because many different 3D shapes can have virtually the same projection, such monocular shape recovery is inherently ambiguous. In this survey, we will review the two main classes of techniques that have proved most effective so far: the template-based methods that rely on establishing correspondences with a reference image in which the shape is already known, and non-rigid …
Improved proton computed tomography by dual modality image reconstruction
Hansen, David Christoffer; Bassler, Niels; Petersen, Jørgen B.B.;
2014-01-01
Purpose: Proton computed tomography (CT) is a promising image modality for improving the stopping power estimates and dose calculations for particle therapy. However, the finite range of about 33 cm of water of most commercial proton therapy systems limits the sites that can be scanned from a full...... nonlinear conjugate gradient algorithm, minimizing total variation and the x-ray CT prior while remaining consistent with the proton projection data. The proton histories were reconstructed along curved cubic-spline paths. Results: The spatial resolution of the cone beam CT prior was retained for the fully...
Data reconstruction of an ultraviolet spatially modulated imaging spectrometer
Yuan, Xiaochun; Yu, Chunchao; Yang, Zhixiong; Yan, Min; Zeng, Yi
2016-10-01
With the simultaneous advantages of fluorescence excitation and environmental adaptability, ultraviolet imaging spectroscopy has shown irreplaceable features in the field of latent-target detection and has become a current research focus. A design for a Large Aperture Ultraviolet Spatially Modulated Imaging Spectrometer (LAUV-SMIS), based on an image-plane interferometer and an Offner system, is first proposed in this paper. The data-processing technology of temporally and spatially modulated FTIS in the UV band has been studied. A latent fingerprint could be recognized clearly in the resulting image, showing that the system is capable of meeting the needs of latent-target detection. The spectral curve of the target could distinguish the emission peaks at 253.7 nm and 365 nm when low-pressure and high-pressure mercury lamps were used as illuminators. Accurate spectral data of the target can be collected at both the short- and long-wave ends of the working band.
Transaxial system models for jPET-D4 image reconstruction
Yamaya, Taiga; Hagiwara, Naoki; Obi, Takashi; Yamaguchi, Masahiro; Ohyama, Nagaaki; Kitamura, Keishi; Hasegawa, Tomoyuki; Haneishi, Hideaki; Yoshida, Eiji; Inadama, Naoko; Murayama, Hideo
2005-11-01
A high-performance brain PET scanner, jPET-D4, which provides four-layer depth-of-interaction (DOI) information, is being developed to achieve not only high spatial resolution, but also high scanner sensitivity. One technical issue to be dealt with is the size of the data, which increases in proportion to the square of the number of DOI layers. It is, therefore, difficult to apply algebraic or statistical image reconstruction methods directly to DOI-PET, though they improve image quality through accurate system modelling. The process that requires the most computational time and storage space is the calculation of the huge number of system matrix elements. The DOI compression (DOIC) method, which we have previously proposed, reduces data dimensions by a factor of 1/5. In this paper, we propose a transaxial imaging system model optimized for jPET-D4 with the DOIC method. The proposed model assumes that detector response functions (DRFs) are uniform along lines-of-response (LORs). Each element of the system matrix is then calculated as the summed intersection lengths between a pixel and sub-LORs, weighted by a value from the DRF look-up-table. 2D numerical simulation results showed that the proposed model cut the calculation time by a factor of several hundred while maintaining image quality, compared with the accurate system model. A 3D image reconstruction with on-the-fly calculation of the system matrix becomes practical by incorporating the proposed model and the DOIC method with one-pass accelerated iterative methods.
Zhang, Xuezhu; Zhou, Jian; Cherry, Simon R; Badawi, Ramsey D; Qi, Jinyi
2017-03-21
The EXPLORER project aims to build a 2-meter-long total-body PET scanner, which will provide extremely high sensitivity for imaging the entire human body. It will possess a range of capabilities currently unavailable to state-of-the-art clinical PET scanners with a limited axial field-of-view. The huge number of lines-of-response (LORs) of the EXPLORER poses a challenge to the data handling and image reconstruction. The objective of this study is to develop a quantitative image reconstruction method for the EXPLORER and compare its performance with current whole-body scanners. Fully 3D image reconstruction was performed using time-of-flight list-mode data with parallel computation. To recover the resolution loss caused by the parallax error between crystal pairs at a large axial ring difference or transaxial radial offset, we applied an image-domain resolution model estimated from point source data. To evaluate the image quality, we conducted computer simulations using the SimSET Monte-Carlo toolkit and the XCAT 2.0 anthropomorphic phantom to mimic a 20 min whole-body PET scan with an injection of 25 MBq (18)F-FDG. We compare the performance of the EXPLORER with a current clinical scanner that has an axial FOV of 22 cm. The comparison results demonstrated superior image quality from the EXPLORER, with a 6.9-fold reduction in noise standard deviation compared with multi-bed imaging using the clinical scanner.
A validated methodology for the 3D reconstruction of cochlea geometries using human microCT images
Sakellarios, A. I.; Tachos, N. S.; Rigas, G.; Bibas, T.; Ni, G.; Böhnke, F.; Fotiadis, D. I.
2017-05-01
Accurate reconstruction of the inner ear is a prerequisite for the modelling and understanding of inner ear mechanics. In this study, we present a semi-automated methodology for accurate reconstruction of the major inner ear structures (scalae, basilar membrane, stapes and semicircular canals). For this purpose, high resolution microCT images of a human specimen were used. The segmentation methodology is based on an iterative level set algorithm which provides the borders of the structures of interest. An enhanced coupled level set method, which allows simultaneous labeling of multiple structures without any overlapping regions, has been developed for this purpose. The marching cubes algorithm was applied in order to extract the surface from the segmented volume. The reconstructed geometries are then post-processed to improve the basilar membrane geometry so that it realistically represents physiologic dimensions. The final reconstructed model is compared to the available data from the literature. The results show that our generated inner ear structures are in good agreement with the published ones, while our approach is the most realistic in terms of the reconstruction of the basilar membrane thickness and width.
Holographic microscopy reconstruction in both object and image half spaces with undistorted 3D grid
Verrier, Nicolas; Tessier, Gilles; Gross, Michel
2015-01-01
We propose a holographic microscopy reconstruction method which propagates the hologram, in the object half space, into the vicinity of the object. The calibration yields reconstructions with an undistorted reconstruction grid, i.e. with orthogonal x, y and z axes and constant pixel pitch. The method is validated with a USAF target imaged by a 60× microscope objective, whose holograms are recorded and reconstructed for different USAF locations along the longitudinal axis: −75 to +75 µm. Since the numerical reconstruction phase mask, the reference phase curvature and the microscope objective (MO) form an afocal device, the reconstruction can be interpreted as occurring equivalently in the object or the image half space.
Reconstruction algorithm medical imaging DRR; Algoritmo de construccion de imagenes medicas DRR
Estrada Espinosa, J. C.
2013-07-01
The reconstruction method for digitally reconstructed radiographs (DRR) is based on two orthogonal images, in the dorsal and lateral decubitus positions of the simulation. DRR images are reconstructed with an algorithm that simulates a conventional X-ray unit whose emitted beam is not divergent; in this case, the rays are considered to be parallel in the DRR image reconstruction. For this purpose, it is necessary to use the Hounsfield unit (HU) values of every voxel in all the axial slices that form the CT study, finally obtaining the reconstructed DRR image by performing a transformation from 3D to 2D. (Author)
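The parallel-ray projection described above (collapsing the 3D CT study into a 2D image by summing the HU value of every voxel along the beam direction) can be sketched in a few lines. The function name and toy volume below are illustrative, not the author's implementation:

```python
# Minimal sketch of a parallel-ray DRR: project a 3D volume of HU values
# to 2D by summing voxels along one axis (non-divergent, parallel rays).

def drr_parallel(volume, axis=0):
    """Sum a 3D volume (nested lists of HU values) along `axis`
    (0 = slice-stacking direction) to produce a 2D projection image."""
    nz, ny, nx = len(volume), len(volume[0]), len(volume[0][0])
    if axis == 0:  # rays travel along z: output is ny x nx
        return [[sum(volume[z][y][x] for z in range(nz))
                 for x in range(nx)] for y in range(ny)]
    raise NotImplementedError("sketch handles axis=0 only")

# toy 2x2x2 "CT study" of HU values
vol = [[[0, 100], [200, 300]],
       [[10, 110], [210, 310]]]
print(drr_parallel(vol))  # [[10, 210], [410, 610]]
```

A real DRR would additionally map the summed attenuation through an exponential intensity model; the summation step is the 3D-to-2D transformation the abstract refers to.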
Gui-Song Xia
2015-11-01
It is a challenging problem to efficiently interpret the large volumes of remotely sensed image data being collected in the current age of remote sensing “big data”. Although human visual interpretation can yield accurate annotation of remote sensing images, it demands considerable expert knowledge and is time-consuming, which strongly hinders its efficiency. Alternatively, intelligent approaches (e.g., supervised classification and unsupervised clustering) can speed up the annotation process through the application of advanced image analysis and data mining technologies. However, high-quality expert-annotated samples are still a prerequisite for intelligent approaches to achieve accurate results. Thus, how to efficiently annotate remote sensing images with little expert knowledge is an important and unavoidable problem. To address this issue, this paper introduces a novel active clustering method for the annotation of high-resolution remote sensing images. More precisely, given a set of remote sensing images, we first build a graph based on these images and then gradually optimize the structure of the graph using a cut-collect process, which relies on a graph-based spectral clustering algorithm and pairwise constraints that are incrementally added via active learning. The pairwise constraints are simply similarity/dissimilarity relationships between the most uncertain pairwise nodes on the graph, which can be easily determined by non-expert human oracles. Furthermore, we also propose a strategy to adaptively update the number of classes in the clustering algorithm. In contrast with existing methods, our approach can achieve high accuracy in the task of remote sensing image annotation with relatively little expert knowledge, thereby greatly lightening the workload burden and reducing the requirements regarding expert knowledge. Experiments on several datasets of remote sensing images show that our algorithm achieves state
Cui, Xiaoming, E-mail: mmayzy2008@126.com; Li, Tao, E-mail: litaofeivip@163.com; Li, Xin, E-mail: lx0803@sina.com.cn; Zhou, Weihua, E-mail: wangxue0606@gmail.com
2015-05-15
Highlights: • High-resolution scan mode is appropriate for imaging coronary stent. • HD-detail reconstruction algorithm is stent-dedicated kernel. • The intrastent lumen visibility also depends on stent diameter and material. - Abstract: Objective: The aim of this study was to evaluate the in vivo performance of four image reconstruction algorithms in a high-definition CT (HDCT) scanner with improved spatial resolution for the evaluation of coronary artery stents and intrastent lumina. Materials and methods: Thirty-nine consecutive patients with a total of 71 implanted coronary stents underwent coronary CT angiography (CCTA) on a HDCT (Discovery CT 750 HD; GE Healthcare) with the high-resolution scanning mode. Four different reconstruction algorithms (HD-stand, HD-detail, HD-stand-plus, HD-detail-plus) were applied to reconstruct the stented coronary arteries. Image quality for stent characterization was assessed. Image noise and intrastent luminal diameter were measured. The relationship between the measurement of inner stent diameter (ISD) and the true stent diameter (TSD) and stent type were analysed. Results: The stent-dedicated kernel (HD-detail) offered the highest percentage (53.5%) of good image quality for stent characterization and the highest ratio (68.0 ± 8.4%) of visible stent lumen/true stent lumen for luminal diameter measurement at the expense of an increased overall image noise. The Pearson correlation coefficient between the ISD and TSD measurements and the Spearman correlation coefficient between the ISD measurement and stent type were 0.83 and 0.48, respectively. Conclusions: Compared with standard reconstruction algorithms, the high-definition CT imaging technique with a dedicated high-resolution reconstruction algorithm provides more accurate stent characterization and intrastent luminal diameter measurement.
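The ISD-versus-TSD figure of 0.83 quoted above is the standard Pearson sample correlation. A minimal sketch, with hypothetical diameter values rather than the study's data:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# hypothetical inner/true stent diameter pairs in mm (illustrative only)
isd = [2.1, 2.4, 2.9, 3.3]
tsd = [2.5, 2.75, 3.0, 3.5]
print(round(pearson_r(isd, tsd), 3))
```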
Guo, En-Yu [Key Laboratory for Advanced Materials Processing Technology, School of Materials Science and Engineering, Tsinghua University, Beijing 100084 (China); Materials Science and Engineering, School for Engineering of Matter, Transport, and Energy, Arizona State University, Tempe, AZ 85287 (United States); Chawla, Nikhilesh [Materials Science and Engineering, School for Engineering of Matter, Transport, and Energy, Arizona State University, Tempe, AZ 85287 (United States); Jing, Tao [Key Laboratory for Advanced Materials Processing Technology, School of Materials Science and Engineering, Tsinghua University, Beijing 100084 (China); Torquato, Salvatore [Department of Chemistry, Princeton University, Princeton, NJ 08544 (United States); Department of Physics, Princeton University, Princeton, NJ 08544 (United States); Princeton Institute for the Science and Technology of Materials, Princeton University, Princeton, NJ 08544 (United States); Program in Applied and Computational Mathematics, Princeton University, Princeton, NJ 08544 (United States); Jiao, Yang, E-mail: yang.jiao.2@asu.edu [Materials Science and Engineering, School for Engineering of Matter, Transport, and Energy, Arizona State University, Tempe, AZ 85287 (United States)
2014-03-01
Heterogeneous materials are ubiquitous in nature and synthetic situations and have a wide range of important engineering applications. Accurately modeling and reconstructing the three-dimensional (3D) microstructure of topologically complex materials from limited morphological information, such as a two-dimensional (2D) micrograph, is crucial to the assessment and prediction of effective material properties and performance under extreme conditions. Here, we extend a recently developed dilation–erosion method and employ the Yeong–Torquato stochastic reconstruction procedure to model and generate 3D austenitic–ferritic cast duplex stainless steel microstructure containing percolating filamentary ferrite phase from 2D optical micrographs of the material sample. Specifically, the ferrite phase is dilated to produce a modified target 2D microstructure and the resulting 3D reconstruction is eroded to recover the percolating ferrite filaments. The dilation–erosion reconstruction is compared with the actual 3D microstructure, obtained from serial sectioning (polishing), as well as the standard stochastic reconstructions incorporating topological connectedness information. The fact that the former can achieve the same level of accuracy as the latter suggests that the dilation–erosion procedure is tantamount to incorporating appreciably more topological and geometrical information into the reconstruction while being much more computationally efficient. - Highlights: • Spatial correlation functions used to characterize filamentary ferrite phase • Clustering information assessed from 3D experimental structure via serial sectioning • Stochastic reconstruction used to generate 3D virtual structure from 2D micrograph • Dilation–erosion method to improve accuracy of 3D reconstruction.
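The dilation–erosion idea (thicken the thin filamentary phase before reconstruction so a final erosion recovers it rather than destroys it) can be illustrated with elementary binary morphology. The 4-neighbourhood structuring element, border handling, and the toy "filament" below are illustrative assumptions, not the authors' code:

```python
# Binary dilation/erosion with a 3x3 cross (4-neighbourhood) element.
# Out-of-image neighbours are treated as foreground during erosion.

CROSS = ((0, 0), (1, 0), (-1, 0), (0, 1), (0, -1))

def dilate(img):
    h, w = len(img), len(img[0])
    return [[1 if any(0 <= y+dy < h and 0 <= x+dx < w and img[y+dy][x+dx]
                      for dy, dx in CROSS) else 0
             for x in range(w)] for y in range(h)]

def erode(img):
    h, w = len(img), len(img[0])
    return [[1 if all(not (0 <= y+dy < h and 0 <= x+dx < w) or img[y+dy][x+dx]
                      for dy, dx in CROSS) else 0
             for x in range(w)] for y in range(h)]

# a one-pixel-wide "ferrite filament" across a 5x5 image
line = [[0]*5, [0]*5, [1]*5, [0]*5, [0]*5]
print(erode(line))          # plain erosion wipes out the thin filament
print(erode(dilate(line)))  # dilating first lets erosion recover it exactly
```

This mirrors the paper's rationale: without the dilation step, thin percolating filaments are lost; with it, the eroded result preserves them.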
4D reconstruction of the past: the image retrieval and 3D model construction pipeline
Hadjiprocopis, Andreas; Ioannides, Marinos; Wenzel, Konrad; Rothermel, Mathias; Johnsons, Paul S.; Fritsch, Dieter; Doulamis, Anastasios; Protopapadakis, Eftychios; Kyriakaki, Georgia; Makantasis, Kostas; Weinlinger, Guenther; Klein, Michael; Fellner, Dieter; Stork, Andre; Santos, Pedro
2014-08-01
One of the main characteristics of the Internet era we are living in, is the free and online availability of a huge amount of data. This data is of varied reliability and accuracy and exists in various forms and formats. Often, it is cross-referenced and linked to other data, forming a nexus of text, images, animation and audio enabled by hypertext and, recently, by the Web3.0 standard. Our main goal is to enable historians, architects, archaeologists, urban planners and affiliated professionals to reconstruct views of historical monuments from thousands of images floating around the web. This paper aims to provide an update of our progress in designing and implementing a pipeline for searching, filtering and retrieving photographs from Open Access Image Repositories and social media sites and using these images to build accurate 3D models of archaeological monuments as well as enriching multimedia of cultural / archaeological interest with metadata and harvesting the end products to EUROPEANA. We provide details of how our implemented software searches and retrieves images of archaeological sites from Flickr and Picasa repositories as well as strategies on how to filter the results, on two levels: a) based on their built-in metadata including geo-location information and b) based on image processing and clustering techniques. We also describe our implementation of a Structure from Motion pipeline designed for producing 3D models using the large collection of 2D input images (>1000) retrieved from Internet Repositories.
3D reconstruction of concave surfaces using polarisation imaging
Sohaib, A.; Farooq, A. R.; Ahmed, J.; Smith, L. N.; Smith, M. L.
2015-06-01
This paper presents a novel algorithm for improved shape recovery using polarisation-based photometric stereo. The majority of previous research using photometric stereo involves 3D reconstruction using both the diffuse and specular components of light; however, this paper suggests the use of the specular component only as it is the only form of light that comes directly off the surface without subsurface scattering or interreflections. Experiments were carried out on both real and synthetic surfaces. Real images were obtained using a polarisation-based photometric stereo device while synthetic images were generated using PovRay® software. The results clearly demonstrate that the proposed method can extract three-dimensional (3D) surface information effectively even for concave surfaces with complex texture and surface reflectance.
Image reconstruction with acoustic radiation force induced shear waves
McAleavey, Stephen A.; Nightingale, Kathryn R.; Stutz, Deborah L.; Hsu, Stephen J.; Trahey, Gregg E.
2003-05-01
Acoustic radiation force may be used to induce localized displacements within tissue. This phenomenon is used in Acoustic Radiation Force Impulse Imaging (ARFI), where short bursts of ultrasound deliver an impulsive force to a small region. The application of this transient force launches shear waves which propagate normally to the ultrasound beam axis. Measurements of the displacements induced by the propagating shear wave allow reconstruction of the local shear modulus, by wave tracking and inversion techniques. Here we present in vitro, ex vivo and in vivo measurements and images of shear modulus. Data were obtained with a single transducer, a conventional ultrasound scanner and specialized pulse sequences. Young's modulus values of 4 kPa, 13 kPa and 14 kPa were observed for fat, breast fibroadenoma, and skin. Shear modulus anisotropy in beef muscle was observed.
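The modulus reconstruction described above rests on standard relations between tracked shear-wave speed and elasticity: μ = ρc² for the shear modulus, and E ≈ 3μ for nearly incompressible soft tissue (Poisson ratio ≈ 0.5). A minimal sketch with illustrative numbers, not values taken from the study's inversion:

```python
def shear_modulus(rho, c):
    """Shear modulus mu = rho * c^2 (Pa) from tissue density (kg/m^3)
    and shear wave propagation speed (m/s)."""
    return rho * c ** 2

def youngs_modulus(mu):
    """E ~= 3*mu for nearly incompressible soft tissue."""
    return 3.0 * mu

# Illustrative: a 1.15 m/s shear wave in tissue of density 1000 kg/m^3
mu = shear_modulus(1000.0, 1.15)
print(mu, youngs_modulus(mu))  # ~1.3 kPa shear modulus, ~4 kPa Young's modulus
```

With these relations, a tracked wave speed of roughly 1.15 m/s corresponds to the ~4 kPa Young's modulus the abstract reports for fat.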
Reconstruction of mechanically recorded sound by image processing
Fadeyev, Vitaliy; Haber, Carl
2003-03-26
Audio information stored in the undulations of grooves in a medium such as a phonograph record may be reconstructed, with no or minimal contact, by measuring the groove shape using precision metrology methods and digital image processing. The effects of damage, wear, and contamination may be compensated, in many cases, through image processing and analysis methods. The speed and data handling capacity of available computing hardware make this approach practical. Various aspects of this approach are discussed. A feasibility test is reported which used a general purpose optical metrology system to study a 50 year old 78 r.p.m. phonograph record. Comparisons are presented with stylus playback of the record and with a digitally re-mastered version of the original magnetic recording. A more extensive implementation of this approach, with dedicated hardware and software, is considered.
Multiple View Reconstruction of Calibrated Images using Singular Value Decomposition
Chaudhury, Ayan; Manna, Sumita; Mukherjee, Subhadeep; Chakrabarti, Amlan
2010-01-01
Calibration in a multi-camera network has been widely studied for many years, starting from the early days of photogrammetry. Many authors have presented calibration algorithms with their relative advantages and disadvantages. In a stereovision system, multiple-view reconstruction is a challenging task; however, the total computational procedure has not previously been presented in detail. In this work, we deal with the problem that, when a world-coordinate point is fixed in space, the image coordinates of that 3D point vary with camera position and orientation. From a computer-vision perspective, this situation is undesirable: the system has to be designed in such a way that the image coordinate of the world-coordinate point is fixed irrespective of the position and orientation of the cameras. We achieve this as follows. First, camera parameters are calculated in the local coordinate system. Then, we use global coordinate data to transfer all local coordinate d...
An Optimized Method for Terrain Reconstruction Based on Descent Images
Xu Xinchao
2016-02-01
An optimization method is proposed to perform high-accuracy terrain reconstruction of the landing area of Chang’e III. First, feature matching is conducted using geometric model constraints. Then, the initial terrain is obtained and the initial normal vector of each point is solved on the basis of the initial terrain. By changing the vector around the initial normal vector in small steps, a set of new vectors is obtained. By combining these vectors with the directions of light and camera, functions are set up on the basis of a surface reflection model. Then, a series of gray values is derived by solving the equations. The new optimized vector is recorded when the obtained gray value is closest to the corresponding pixel. Finally, the optimized terrain is obtained after iteration of the vector field. Experiments were conducted using laboratory images and descent images of Chang’e III. The results showed that the performance of the proposed method was better than that of the classical feature matching method. It can provide a reference for terrain reconstruction of the landing area in subsequent moon exploration missions.
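The inner loop above (perturb each normal in small steps, predict a gray value from the reflection model, and keep the vector whose prediction is closest to the observed pixel) can be sketched with a Lambertian model. The Lambertian choice and the toy one-dimensional candidate search are assumptions for illustration; the paper's exact reflection model is not specified here:

```python
import math

def normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return tuple(x / n for x in v)

def lambert_gray(normal, light, albedo=1.0):
    """Predicted gray value under a Lambertian reflection model:
    I = albedo * max(0, n . l) with unit normal n and light direction l."""
    n, l = normalize(normal), normalize(light)
    return albedo * max(0.0, sum(a * b for a, b in zip(n, l)))

# Perturb the normal in small steps along one axis and keep the candidate
# whose predicted gray value is closest to the observed pixel (toy search).
observed = 0.9
candidates = [(0.0, 0.1 * k, 1.0) for k in range(11)]
best = min(candidates, key=lambda nv: abs(lambert_gray(nv, (0, 0, 1)) - observed))
print(best)
```

In the actual method this comparison drives an iterative update of the whole normal-vector field before the terrain is re-integrated.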
钟志威
2016-01-01
To address the problem of CT image reconstruction from sparse-angle projection data, the TV-ART algorithm introduces gradient-sparsity prior knowledge of the image into the algebraic reconstruction technique (ART) and achieves good reconstruction of piecewise-smooth images. However, the algorithm produces a staircase effect when boundaries are reconstructed, degrading reconstruction quality. This paper therefore proposes a reconstruction algorithm combining an adaptive kernel regression function with the algebraic reconstruction technique (LAKR-ART), which does not produce the staircase effect at boundary reconstruction and gives better reconstruction of detailed texture. Finally, simulation experiments are performed on the standard Shepp-Logan CT image and a real head CT image, and the results are compared with those of the ART and TV-ART algorithms. The experimental results show that the proposed algorithm is effective.
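The ART iteration that TV-ART and LAKR-ART build on is the Kaczmarz row-action update: each projection equation aᵢ·x = bᵢ is enforced in turn by projecting the current estimate onto its hyperplane. A minimal sketch on a toy two-pixel system, without the TV or kernel-regression terms:

```python
def art(A, b, n_sweeps=100, lam=1.0):
    """Algebraic Reconstruction Technique (Kaczmarz): cycle over the rows
    a_i of A, moving the estimate x toward the hyperplane a_i . x = b_i
    with relaxation parameter lam."""
    n = len(A[0])
    x = [0.0] * n
    for _ in range(n_sweeps):
        for ai, bi in zip(A, b):
            norm2 = sum(a * a for a in ai)
            if norm2 == 0:
                continue
            r = (bi - sum(a * xi for a, xi in zip(ai, x))) / norm2
            x = [xi + lam * r * a for xi, a in zip(x, ai)]
    return x

# toy 2-pixel "image" observed by two rays: x1 + x2 = 3, x1 - x2 = 1
A = [[1.0, 1.0], [1.0, -1.0]]
b = [3.0, 1.0]
print(art(A, b))  # converges to [2.0, 1.0]
```

TV-ART interleaves such sweeps with a total-variation gradient step; LAKR-ART replaces that regularizer with an adaptive kernel regression step.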
Robust Compressive Phase Retrieval via L1 Minimization With Application to Image Reconstruction
Yang, Zai; Xie, Lihua
2013-01-01
Phase retrieval refers to a classical nonconvex problem of recovering a signal from its Fourier magnitude measurements. Inspired by the compressed sensing technique, signal sparsity is exploited in recent studies of phase retrieval to reduce the required number of measurements, known as compressive phase retrieval (CPR). In this paper, l1 minimization problems are formulated for CPR to exploit the signal sparsity, and alternating direction algorithms are presented for problem solving. For real-valued, nonnegative image reconstruction, the image of interest is shown to be an optimal solution of the formulated l1 minimization in the noise-free case. Numerical simulations demonstrate that the proposed approach is fast, accurate and robust to measurement noise.
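Alternating-direction l1 solvers of this kind typically rely on the soft-thresholding (shrinkage) operator, the proximal map of the l1 norm, as the sparsity-enforcing substep. A minimal sketch of this generic building block, not the authors' full CPR algorithm:

```python
def soft_threshold(x, t):
    """Proximal operator of t*||.||_1, applied elementwise:
    shrink each entry toward zero by t, zeroing entries with |v| <= t."""
    return [max(abs(v) - t, 0.0) * (1 if v > 0 else -1 if v < 0 else 0)
            for v in x]

print(soft_threshold([3.0, -0.5, 1.2, 0.0], 1.0))
```

In an alternating-direction scheme, this shrinkage step alternates with a step that re-imposes consistency with the Fourier magnitude measurements.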
Nishimaru, Eiji; Ichikawa, Katsuhiro; Hara, Takanori; Terakawa, Shoichi; Yokomachi, Kazushi; Fujioka, Chikako; Kiguchi, Masao; Ishifuro, Minoru
2012-01-01
Adaptive iterative reconstruction techniques (IRs) can decrease image noise in computed tomography (CT) and are expected to contribute to reduction of the radiation dose. To evaluate the performance of IRs, the conventional two-dimensional (2D) noise power spectrum (NPS) is widely used. However, when an IR produces an NPS drop at all spatial frequencies (which is similar to the NPS change produced by a dose increase), the conventional method cannot evaluate the noise property correctly, because it does not account for the volumetric nature of CT image data. The purpose of our study was to develop a new method for NPS measurement that can be adapted to IRs. Our method utilizes thick multi-planar reconstruction (MPR) images. Thick images are generally made by averaging CT volume data in the direction perpendicular to the MPR plane (e.g. the z-direction for an axial MPR plane). By using this averaging technique to extract a section of the 3D NPS, we can obtain an adequate 2D-extracted NPS (eNPS) from the 3D NPS. We applied this method to IR images generated with adaptive iterative dose reduction 3D (AIDR-3D, Toshiba) to investigate its validity. A water phantom of 24 cm diameter was scanned at 120 kV and 200 mAs with a 320-row CT (Aquilion ONE, Toshiba). The results showed that the adequate thickness of MPR images for eNPS was more than 25.0 mm. Our new NPS measurement method utilizing thick MPR images was accurate and effective for evaluating the noise reduction effects of IRs.
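The thick-MPR averaging step has a simple statistical signature: averaging N independent noise slices scales the noise variance by roughly 1/N, which is why the extracted NPS must be interpreted against slice thickness. A toy sketch with synthetic Gaussian noise standing in for CT volume noise (the 2D FFT step of the actual eNPS computation is omitted):

```python
import random

random.seed(0)
N_SLICES, N_PIX = 25, 10000

# independent zero-mean, unit-variance noise "slices"
slices = [[random.gauss(0.0, 1.0) for _ in range(N_PIX)]
          for _ in range(N_SLICES)]

# a thick-MPR pixel is the average over the slice direction
thick = [sum(s[i] for s in slices) / N_SLICES for i in range(N_PIX)]

def var(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

print(var(slices[0]), var(thick))  # thin-slice variance ~1, thick-slice ~1/25
```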
3D Image Reconstruction from X-Ray Measurements with Overlap
Klodt, Maria
2016-01-01
3D image reconstruction from a set of X-ray projections is an important image reconstruction problem, with applications in medical imaging, industrial inspection and airport security. The innovation of X-ray emitter arrays allows for a novel type of X-ray scanners with multiple simultaneously emitting sources. However, two or more sources emitting at the same time can yield measurements from overlapping rays, imposing a new type of image reconstruction problem based on nonlinear constraints. Using traditional linear reconstruction methods, respective scanner geometries have to be implemented such that no rays overlap, which severely restricts the scanner design. We derive a new type of 3D image reconstruction model with nonlinear constraints, based on measurements with overlapping X-rays. Further, we show that the arising optimization problem is partially convex, and present an algorithm to solve it. Experiments show highly improved image reconstruction results from both simulated and real-world measurements.
3D Dose reconstruction: Banding artefacts in cine mode EPID images during VMAT delivery
Woodruff, H. C.; Greer, P. B.
2013-06-01
Cine (continuous) mode images obtained during VMAT delivery are heavily degraded by banding artefacts. We have developed a method to reconstruct the pulse sequence (and hence dose deposited) from open field images. For clinical VMAT fields we have devised a frame averaging strategy that greatly improves image quality and dosimetric information for three-dimensional dose reconstruction.
High-resolution Image Reconstruction by Neural Network and Its Application in Infrared Imaging
ZHANG Nan; JIN Wei-qi; SU Bing-hua
2005-01-01
As digital image techniques have become widely used, the requirements for high-resolution images have become increasingly stringent. Traditional single-frame interpolation techniques cannot add new high-frequency information to the expanded images, and therefore cannot truly improve resolution. Multiframe-based techniques are effective for high-resolution image reconstruction, but their computational complexity and the difficulty of acquiring image sequences limit their applications. An original method using an artificial neural network is proposed in this paper. Using the inherent merits of a neural network, we establish a mapping between the high-frequency components of low-resolution images and high-resolution images. Example applications and their results demonstrate that the images reconstructed by our method are aesthetically and quantitatively (using the criteria of MSE and MAE) superior to images acquired by common methods. Even for infrared images this method gives satisfactory results with high definition. In addition, since a single-layer linear neural network is used, the computational complexity is very low, and the method can be realized in real time.
Liu, Jiulong; Ding, Huanjun; Molloi, Sabee; Zhang, Xiaoqun; Gao, Hao
2016-12-01
This work develops a material reconstruction method for spectral CT, namely Total Image Constrained Material Reconstruction (TICMR), to maximize the utility of projection data in terms of both spectral information and high signal-to-noise ratio (SNR). This is motivated by the following fact: when viewed as a spectrally-integrated measurement, the projection data can be used to reconstruct a total image without spectral information, which however has a relatively high SNR; when viewed as a spectrally-resolved measurement, the projection data can be utilized to reconstruct the material composition, which however has a relatively low SNR. The material reconstruction synergizes material decomposition and image reconstruction, i.e., the direct reconstruction of material compositions instead of a two-step procedure that first reconstructs images and then decomposes images. For material reconstruction with high SNR, we propose TICMR with nonlocal total variation (NLTV) regularization. That is, first we reconstruct a total image using spectrally-integrated measurement without spectral binning, and build the NLTV weights from this image that characterize nonlocal image features; then the NLTV weights are incorporated into a NLTV-based iterative material reconstruction scheme using spectrally-binned projection data, so that these weights serve as a high-SNR reference to regularize material reconstruction. Note that the nonlocal property of NLTV is essential for material reconstruction, since material compositions may have significant local intensity variations although their structural information is often similar. In terms of solution algorithm, TICMR is formulated as an iterative reconstruction method with the NLTV regularization, in which the nonlocal divergence is utilized based on the adjoint relationship. The alternating direction method of multipliers is developed to solve this sparsity optimization problem. The proposed TICMR method was validated using both simulated
Das, Marco; Muehlenbruch, Georg; Mahnken, Andreas Horst; Guenther, Rolf W.; Wildberger, Joachim Ernst [University Hospital, University of Technology (RWTH), Department of Diagnostic Radiology, Aachen (Germany); Weiss, Claudia [RWTH Aachen, Institute of Medical Statistics, Aachen (Germany); Schoepf, U. Joseph [Medical University of South Carolina, Department of Radiology, Charleston, SC (United States); Leidecker, Christianne [Institute of Medical Physics, University of Erlangen, Erlangen (Germany)
2006-02-01
The aims of this study were to optimize image quality for indirect CT venography (sequential versus spiral), and to evaluate different image reconstruction parameters for patients with suspected deep venous thrombosis (DVT). Fifty-one patients (26/25 with/without DVT) were prospectively evaluated for pulmonary embolism (PE) with standard multidetector-row computed tomography (MDCT) protocols. Retrospective image reconstruction was done with different slice thicknesses and reconstruction increments in sequential and spiral modes. All reconstructions were read for depiction of DVT and to evaluate best reconstruction parameters in comparison with the thinnest reconstruction ("gold standard"). Image noise and venous enhancement were measured as objective criteria for image quality. Subjective image quality was rated on a four-point scale. Effective dose was estimated for all reconstructions. In sequential 10/50 reconstruction DVT was completely detected in 13/26 cases, partially in 10/26 cases and was not detected at all in 3/26 cases, and 15/26, 9/26 and 2/26 cases for the 10/20 reconstruction, respectively. DVT was completely detected in all spiral reconstructions. Image noise ranged between 14.8-29.1 HU. Median image quality was 2. Estimated effective dose ranged between 2.3 mSv and 11.8 mSv. Gaps in sequential protocols may lead to false negative results. Therefore, spiral scanning protocols for complete depiction of DVT are mandatory. (orig.)
Luís F Seoane
2015-04-01
We provide a proof of concept for an EEG-based reconstruction of a visual image which is on a user's mind. Our approach is based on the Rapid Serial Visual Presentation (RSVP) of polygon primitives and Brain-Computer Interface (BCI) technology. In an experimental setup, subjects were presented bursts of polygons: some of them contributed to building a target image (because they matched the shape and/or color of the target) while some of them did not. The presentation of the contributing polygons triggered attention-related EEG patterns. These Event Related Potentials (ERPs) could be determined using BCI classification and could be matched to the stimuli that elicited them. These stimuli (i.e., the ERP-correlated polygons) were accumulated in the display until a satisfactory reconstruction of the target image was reached. As more polygons were accumulated, finer visual details were attained, resulting in more challenging classification tasks. In our experiments, we observe an average classification accuracy of around 75%. An in-depth investigation suggests that many of the misclassifications were not misinterpretations of the BCI concerning the users' intent, but rather caused by ambiguous polygons that could contribute to reconstruct several different images. When we put our BCI-image reconstruction in perspective with other RSVP BCI paradigms, there is large room for improvement both in speed and accuracy. These results invite us to be optimistic. They open a plethora of possibilities to explore non-invasive BCIs for image reconstruction both in healthy and impaired subjects and, accordingly, suggest interesting recreational and clinical applications.
O'Halloran, M.; Lohfeld, S.; Ruvio, G.; Browne, J.; Krewer, F.; Ribeiro, C. O.; Inacio Pita, V. C.; Conceicao, R. C.; Jones, E.; Glavin, M.
2014-05-01
Breast cancer is one of the most common cancers in women. In the United States alone, it accounts for 31% of new cancer cases, and is second only to lung cancer as the leading cause of deaths in American women. More than 184,000 new cases of breast cancer are diagnosed each year resulting in approximately 41,000 deaths. Early detection and intervention is one of the most significant factors in improving the survival rates and quality of life experienced by breast cancer sufferers, since this is the time when treatment is most effective. One of the most promising breast imaging modalities is microwave imaging. The physical basis of active microwave imaging is the dielectric contrast between normal and malignant breast tissue that exists at microwave frequencies. The dielectric contrast is mainly due to the increased water content present in the cancerous tissue. Microwave imaging is non-ionizing, does not require breast compression, is less invasive than X-ray mammography, and is potentially low cost. While several prototype microwave breast imaging systems are currently in various stages of development, the design and fabrication of anatomically and dielectrically representative breast phantoms to evaluate these systems is often problematic. While some existing phantoms are composed of dielectrically representative materials, they rarely accurately represent the shape and size of a typical breast. Conversely, several phantoms have been developed to accurately model the shape of the human breast, but have inappropriate dielectric properties. This study will briefly review existing phantoms before describing the development of a more accurate and practical breast phantom for the evaluation of microwave breast imaging systems.
GUO Qiang; YANG Xin
2006-01-01
A statistical algorithm for the reconstruction from time sequence echocardiographic images is proposed in this paper. The ability to jointly restore the images and reconstruct the 3D images without blurring the boundary is the main innovation of this algorithm. First, a Bayesian model based on MAP-MRF is used to reconstruct the 3D volume, and is extended to deal with the images acquired by the rotation scanning method. Then, the spatiotemporal nature of ultrasound images is taken into account for the parameters of the energy function, which makes this statistical model anisotropic. Hence not only can this method reconstruct 3D ultrasound images, but it can also remove the speckle noise anisotropically. Finally, we illustrate the experiments of our method on synthetic and medical images and compare it with the isotropic reconstruction method.
Chen, Jianlin; Wang, Linyuan; Yan, Bin; Zhang, Hanming; Cheng, Genyang
2015-01-01
Iterative reconstruction algorithms for computed tomography (CT) through total variation regularization based on piecewise constant assumption can produce accurate, robust, and stable results. Nonetheless, this approach is often subject to staircase artefacts and the loss of fine details. To overcome these shortcomings, we introduce a family of novel image regularization penalties called total generalized variation (TGV) for the effective production of high-quality images from incomplete or noisy projection data for 3D reconstruction. We propose a new, fast alternating direction minimization algorithm to solve CT image reconstruction problems through TGV regularization. Based on the theory of sparse-view image reconstruction and the framework of augmented Lagrange function method, the TGV regularization term has been introduced in the computed tomography and is transformed into three independent variables of the optimization problem by introducing auxiliary variables. This new algorithm applies a local linearization and proximity technique to make the FFT-based calculation of the analytical solutions in the frequency domain feasible, thereby significantly reducing the complexity of the algorithm. Experiments with various 3D datasets corresponding to incomplete projection data demonstrate the advantage of our proposed algorithm in terms of preserving fine details and overcoming the staircase effect. The computation cost also suggests that the proposed algorithm is applicable to and is effective for CBCT imaging. Theoretical and technical optimization should be investigated carefully in terms of both computation efficiency and high resolution of this algorithm in application-oriented research.
Su, Kuan-Hao; Yen, Tzu-Chen; Fang, Yu-Hua Dean
2013-10-01
The aim of this study is to develop and evaluate a novel direct reconstruction method to improve the signal-to-noise ratio (SNR) of parametric images in dynamic positron-emission tomography (PET), especially for applications in myocardial perfusion studies. Simulation studies were used to test the performance in SNR and computational efficiency for different methods. The NCAT phantom was used to generate simulated dynamic data. Noise realization was performed in the sinogram domain and repeated for 30 times with four different noise levels by varying the injection dose (ID) from standard ID to 1/8 of it. The parametric images were calculated by (1) three direct methods that compute the kinetic parameters from the sinogram and (2) an indirect method, which computes the kinetic parameter with pixel-by-pixel curve fitting in image space using weighted least-squares. The first direct reconstruction maximizes the likelihood function using trust-region-reflective (TRR) algorithm. The second approach uses tabulated parameter sets to generate precomputed time-activity curves for maximizing the likelihood functions. The third approach, as a newly proposed method, assumes separable complete data to derive the M-step for maximizing the likelihood. The proposed method with the separable complete data performs similarly to the other two direct reconstruction methods in terms of the SNR, providing a 5%-10% improvement as compared to the indirect parametric reconstruction under the standard ID. The improvement of SNR becomes more obvious as the noise level increases, reaching more than 30% improvement under 1/8 ID. Advantage of the proposed method lies in the computation efficiency by shortening the time requirement to 25% of the indirect approach and 3%-6% of other direct reconstruction methods. With results provided from this simulation study, direct reconstruction of myocardial blood flow shows a high potential for improving the parametric image quality for clinical use.
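The "indirect" baseline described above, pixel-by-pixel curve fitting in image space, can be sketched with a hypothetical mono-exponential time-activity model. The abstract does not specify the kinetic model; `fit_monoexp` is our simplification, using a log-linear least-squares fit rather than the study's weighted least-squares.

```python
import numpy as np

def fit_monoexp(t, c):
    """Fit c(t) = k1 * exp(-k2 * t) by linear least squares on
    ln c = ln k1 - k2 * t (an illustrative stand-in for the study's
    weighted least-squares kinetic fit)."""
    slope, intercept = np.polyfit(t, np.log(c), 1)
    return np.exp(intercept), -slope

def indirect_parametric(images, t):
    """Pixel-by-pixel fitting over a dynamic image stack of shape
    (time, height, width): the 'indirect' approach the direct
    reconstruction methods are compared against."""
    T, H, W = images.shape
    k1 = np.zeros((H, W))
    k2 = np.zeros((H, W))
    for i in range(H):
        for j in range(W):
            k1[i, j], k2[i, j] = fit_monoexp(t, images[:, i, j])
    return k1, k2
```

The per-pixel loop is exactly why the indirect route is slow at scale; the direct methods in the abstract fold this estimation into the tomographic update instead.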
A Layered Approach for Robust Spatial Virtual Human Pose Reconstruction Using a Still Image
Chengyu Guo
2016-02-01
Pedestrian detection and human pose estimation are instructive for reconstructing a three-dimensional scenario and for robot navigation, particularly when large amounts of vision data are captured using various data-recording techniques. Using an unrestricted capture scheme, which produces occlusions or breezing, the information describing each part of a human body and the relationship between each part or even different pedestrians must be present in a still image. Using this framework, a multi-layered, spatial, virtual, human pose reconstruction framework is presented in this study to recover any deficient information in planar images. In this framework, a hierarchical parts-based deep model is used to detect body parts by using the available restricted information in a still image and is then combined with spatial Markov random fields to re-estimate the accurate joint positions in the deep network. Then, the planar estimation results are mapped onto a virtual three-dimensional space using multiple constraints to recover any deficient spatial information. The proposed approach can be viewed as a general pre-processing method to guide the generation of continuous, three-dimensional motion data. The experiment results of this study are used to describe the effectiveness and usability of the proposed approach.
PIVlab – Towards User-friendly, Affordable and Accurate Digital Particle Image Velocimetry in MATLAB
William Thielicke
2014-10-01
Digital particle image velocimetry (DPIV) is a non-intrusive analysis technique that is very popular for mapping flows quantitatively. To get accurate results, in particular in complex flow fields, a number of challenges have to be faced and solved: The quality of the flow measurements is affected by computational details such as image pre-conditioning, sub-pixel peak estimators, data validation procedures, interpolation algorithms and smoothing methods. The accuracy of several algorithms was determined and the best performing methods were implemented in a user-friendly open-source tool for performing DPIV flow analysis in Matlab.
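One core DPIV building block named above, a sub-pixel peak estimator applied to an FFT-based cross-correlation, can be sketched as follows. This is our minimal illustration, not PIVlab code; it assumes square interrogation windows with the correlation peak away from the window border.

```python
import numpy as np

def _gauss3(cm, c0, cp):
    """Three-point Gaussian sub-pixel estimator; values are clamped to
    avoid taking the log of non-positive correlation samples."""
    cm, c0, cp = (max(float(v), 1e-12) for v in (cm, c0, cp))
    denom = 2 * np.log(cm) - 4 * np.log(c0) + 2 * np.log(cp)
    return 0.0 if denom == 0 else (np.log(cm) - np.log(cp)) / denom

def piv_displacement(a, b):
    """Displacement of interrogation window b relative to a, via circular
    FFT cross-correlation plus a Gaussian peak fit on each axis."""
    a = a - a.mean()
    b = b - b.mean()
    corr = np.fft.fftshift(
        np.real(np.fft.ifft2(np.conj(np.fft.fft2(a)) * np.fft.fft2(b))))
    n = a.shape[0]
    iy, ix = np.unravel_index(np.argmax(corr), corr.shape)
    dy = iy - n // 2 + _gauss3(corr[iy - 1, ix], corr[iy, ix], corr[iy + 1, ix])
    dx = ix - n // 2 + _gauss3(corr[iy, ix - 1], corr[iy, ix], corr[iy, ix + 1])
    return dy, dx
```

The Gaussian fit is what pushes accuracy below one pixel; real DPIV codes additionally pre-condition the images, validate vectors against neighbours, and interpolate over rejected vectors, as the abstract lists.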
Meaney, Paul M.; Fanning, Margaret W.; Li, Dun; Fang, Qianqian; Pendergrass, Sarah; Paulsen, Keith D.
2003-06-01
Microwave imaging has been investigated as a method of non-invasively estimating tissue electrical properties, especially the conductivity, which is highly temperature dependent, as a means of monitoring thermal therapy. The technique we have chosen utilizes an iterative Gauss-Newton approach to converge on the correct property distribution. A previous implementation utilizing the complex form (CF) of the electric fields along with a sub-optimal phantom experimental configuration resulted in imaging temperature accuracy of only 1.6°C. Applying the log-magnitude/phase form (LMPF) of the algorithm has resulted in imaging accuracy on the order of 0.3°C, which is a significant advance for the area of treatment monitoring. The LMPF algorithm was originally introduced as a way to reconstruct images of large, high-contrast scatterers as is the case in breast imaging. However, recent analysis of the Jacobian matrices for the comparable implementations has shown that the reconstruction problem in the new formulation more closely resembles a linear task as is the case in x-ray computed tomography. The comparisons were performed by examining plots of the Jacobian matrix terms for fixed transmit and receive antennas which demonstrated higher sensitivity in the center of the imaging zone along with narrower paths of sensitivity between the antenna pair for the LMPF algorithm. Animal model experiments have also been performed to validate these capabilities in a more realistic setting. Finally, the overall computational efficiency has been significantly enhanced through the use of the adjoint image reconstruction approach. This enables us to reconstruct images in roughly one minute, which is essential if the approach is to be used as a therapy feedback mechanism.
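The CF-to-LMPF change of variables amounts to re-expressing each measured complex field value by its log-magnitude and unwrapped phase before the Gauss-Newton update. A minimal sketch of that transformation (our illustration, not the authors' reconstruction code; unwrapping is applied along the measurement axis because the formulation needs continuous phase rather than its principal value):

```python
import numpy as np

def to_lmpf(field):
    """Convert a 1D array of complex field measurements to
    log-magnitude/phase form."""
    return np.log(np.abs(field)), np.unwrap(np.angle(field))
```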
Accurate color synthesis of three-dimensional objects in an image
Xin, John H.; Shen, Hui-Liang
2004-05-01
Our study deals with color synthesis of a three-dimensional object in an image; i.e., given a single image, a target color can be accurately mapped onto the object such that the color appearance of the synthesized object closely resembles that of the actual one. As it is almost impossible to acquire the complete geometric description of the surfaces of an object in an image, this study attempted to recover the implicit description of geometry for the color synthesis. The description was obtained from either a series of spectral reflectances or the RGB signals at different surface positions on the basis of the dichromatic reflection model. The experimental results showed that this implicit image-based representation is related to the object geometry and is sufficient for accurate color synthesis of three-dimensional objects in an image. The method established is applicable to the color synthesis of both rigid and deformable objects and should contribute to color fidelity in virtual design, manufacturing, and retailing.
Isotope specific resolution recovery image reconstruction in high resolution PET imaging
Kotasidis, Fotis A.; Angelis, Georgios I.; Anton-Rodriguez, Jose; Matthews, Julian C.; Reader, Andrew J.; Zaidi, Habib
Purpose: Measuring and incorporating a scanner-specific point spread function (PSF) within image reconstruction has been shown to improve spatial resolution in PET. However, due to the short half-life of clinically used isotopes, other long-lived isotopes not used in clinical practice are used to
Reza Saadat Mostafavi
2010-05-01
Background/Objective: Liver volume has a great effect on the diagnosis and management of different diseases such as lymphoproliferative conditions. Patients and Methods: Abdominal CT scans of 100 patients without any findings of liver disease (in history and imaging) were subjected to volumetry and reconstruction. Along with the liver volume, in axial series, the AP diameters of the left lobe (in the midline) and right lobe (mid-clavicular), the lateral maximum diameter of the liver in the mid-axillary line, and the maximum diameter to the IVC were calculated. In the coronal mid-axillary and sagittal mid-clavicular planes, maximum superior-inferior dimensions were calculated, along with their various combinations (products). Regression analysis between dimensions and volume was performed. Results: The most accurate combination was the superior-inferior sagittal dimension multiplied by the AP diameter of the right lobe (R squared 0.78, P-value < 0.001), and the best single dimension was the lateral dimension to the IVC in the axial plane (R squared 0.57, P-value < 0.001), with an interval of 9-11 cm for 68% of normals. Conclusion: We recommend the lateral maximum diameter of the liver from surface to IVC in the axial plane in ultrasound for liver volume prediction, with an interval of 9-11 cm for 68% of normals; out of this range is regarded as abnormal.
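The regression ranking in the Results can be reproduced in outline: fit a line from each candidate dimension (or product of dimensions) to measured volume and compare coefficients of determination. A minimal sketch; any data passed in would be the study's measurements, which are not reproduced here.

```python
import numpy as np

def r_squared(x, y):
    """Coefficient of determination for a simple linear regression of y
    on x, as used to rank dimension combinations against liver volume."""
    slope, intercept = np.polyfit(x, y, 1)
    pred = slope * x + intercept
    ss_res = np.sum((y - pred) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_res / ss_tot
```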
Electron Trajectory Reconstruction for Advanced Compton Imaging of Gamma Rays
Plimley, Brian Christopher
Gamma-ray imaging is useful for detecting, characterizing, and localizing sources in a variety of fields, including nuclear physics, security, nuclear accident response, nuclear medicine, and astronomy. Compton imaging in particular provides sensitivity to weak sources and good angular resolution in a large field of view. However, the photon origin in a single event sequence is normally only limited to the surface of a cone. If the initial direction of the Compton-scattered electron can be measured, the cone can be reduced to a cone segment with width depending on the uncertainty in the direction measurement, providing a corresponding increase in imaging sensitivity. Measurement of the electron's initial direction in an efficient detection material requires very fine position resolution due to the electron's short range and tortuous path. A thick (650 µm), fully-depleted charge-coupled device (CCD) developed for infrared astronomy has 10.5-µm position resolution in two dimensions, enabling the initial trajectory measurement of electrons of energy as low as 100 keV. This is the first time the initial trajectories of electrons of such low energies have been measured in a solid material. In this work, the CCD's efficacy as a gamma-ray detector is demonstrated experimentally, using a reconstruction algorithm to measure the initial electron direction from the CCD track image. In addition, models of fast electron interaction physics, charge transport and readout were used to generate modeled tracks with known initial direction. These modeled tracks allowed the development and refinement of the reconstruction algorithm. The angular sensitivity of the reconstruction algorithm is evaluated extensively with models for tracks below 480 keV, showing a FWHM as low as 20° in the pixel plane, and 30° RMS sensitivity to the magnitude of the out-of-plane angle. The measurement of the trajectories of electrons with energies as low as 100 keV have the potential to make electron
Je, U. K.; Cho, H. M.; Cho, H. S.; Park, Y. O.; Park, C. K.; Lim, H. W.; Kim, K. S.; Kim, G. A.; Park, S. Y.; Woo, T. H.; Choi, S. I.
2016-02-01
In this paper, we propose a new/next-generation type of CT examinations, the so-called Interior Computed Tomography (ICT), which may presumably lead to dose reduction to the patient outside the target region-of-interest (ROI), in dental x-ray imaging. Here an x-ray beam from each projection position covers only a relatively small ROI containing a target of diagnosis from the examined structure, leading to imaging benefits such as decreasing scatters and system cost as well as reducing imaging dose. We considered the compressed-sensing (CS) framework, rather than common filtered-backprojection (FBP)-based algorithms, for more accurate ICT reconstruction. We implemented a CS-based ICT algorithm and performed a systematic simulation to investigate the imaging characteristics. Simulation conditions of two ROI ratios of 0.28 and 0.14 between the target and the whole phantom sizes and four projection numbers of 360, 180, 90, and 45 were tested. We successfully reconstructed ICT images of substantially high image quality by using the CS framework even with few-view projection data, still preserving sharp edges in the images.
Information extraction and CT reconstruction of liver images based on diffraction enhanced imaging
Chunhong Hu; Tao Zhao; Lu Zhang; Hui Li; Xinyan Zhao; Shuqian Luo
2009-01-01
X-ray phase-contrast imaging (PCI) is a new emerging imaging technique that generates a high spatial resolution and high contrast of biological soft tissues compared to conventional radiography. Herein a biomedical application of diffraction enhanced imaging (DEI) is presented. As one of the PCI methods, DEI derives contrast from many different kinds of sample information, such as the sample's X-ray absorption, refraction gradient and ultra-small-angle X-ray scattering (USAXS) properties, and the sample information is expressed by three parametric images. Combined with computed tomography (CT), DEI-CT can produce 3D volumetric images of the sample and can be used for investigating micro-structures of biomedical samples. Our DEI experiments for liver samples were implemented at the topography station of Beijing Synchrotron Radiation Facility (BSRF). The results show that by using our provided information extraction method and DEI-CT reconstruction approach, the obtained parametric images clearly display the inner structures of liver tissues and the morphology of blood vessels. Furthermore, the reconstructed 3D view of the liver blood vessels exhibits the micro blood vessels whose minimum diameter is on the order of about tens of microns, much better than its conventional CT reconstruction at a millimeter resolution. In conclusion, both the information extraction method and DEI-CT have the potential for use in biomedical micro-structures analysis.
Patch-based image reconstruction for PET using prior-image derived dictionaries
Tahaei, Marzieh S.; Reader, Andrew J.
2016-09-01
In PET image reconstruction, regularization is often needed to reduce the noise in the resulting images. Patch-based image processing techniques have recently been successfully used for regularization in medical image reconstruction through a penalized likelihood framework. Re-parameterization within reconstruction is another powerful regularization technique in which the object in the scanner is re-parameterized using coefficients for spatially-extensive basis vectors. In this work, a method for extracting patch-based basis vectors from the subject’s MR image is proposed. The coefficients for these basis vectors are then estimated using the conventional MLEM algorithm. Furthermore, using the alternating direction method of multipliers, an algorithm for optimizing the Poisson log-likelihood while imposing sparsity on the parameters is also proposed. This novel method is then utilized to find sparse coefficients for the patch-based basis vectors extracted from the MR image. The results indicate the superiority of the proposed methods to patch-based regularization using the penalized likelihood framework.
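The conventional MLEM algorithm that the abstract builds on has a compact multiplicative form. The sketch below reconstructs a generic nonnegative coefficient vector from a known system matrix; in the paper the unknowns would be coefficients of MR-derived patch basis vectors, so this is a simplification, not the authors' method.

```python
import numpy as np

def mlem(A, y, n_iter=500):
    """MLEM for the Poisson model y ~ Poisson(A x).
    A: (n_bins, n_coeffs) nonnegative system matrix.
    y: measured counts. Returns nonnegative coefficient estimates."""
    x = np.ones(A.shape[1])
    sens = A.sum(axis=0)  # sensitivity (back-projection of ones)
    for _ in range(n_iter):
        proj = A @ x
        ratio = np.where(proj > 0, y / proj, 0.0)
        x = x * (A.T @ ratio) / sens  # multiplicative EM update
    return x
```

The multiplicative update preserves nonnegativity automatically, which is one reason MLEM pairs naturally with spatially-extensive basis vectors whose coefficients must stay nonnegative.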
Pesavento, J B; Morgan, D; Bermingham, R; Zamora, D; Chromy, B; Segelke, B; Coleman, M; Xing, L; Cheng, H; Bench, G; Hoeprich, P
2007-06-07
Nanolipoprotein particles (NLPs) are small 10-20 nm diameter assemblies of apolipoproteins and lipids. At Lawrence Livermore National Laboratory (LLNL), they have constructed multiple variants of these assemblies. NLPs have been generated from a variety of lipoproteins, including apolipoprotein AI, apolipophorin III, apolipoprotein E4 22K, and MSP1T2 (Nanodisc, Inc.). Lipids used included DMPC (bulk of the bilayer material), DMPE (in various amounts), and DPPC. NLPs were made in either the absence or presence of the detergent cholate. They have collected electron microscopy data as a part of the characterization component of this research. Although purified by size exclusion chromatography (SEC), samples are somewhat heterogeneous when analyzed at the nanoscale by negative stained cryo-EM. Images reveal a broad range of shape heterogeneity, suggesting variability in conformational flexibility; in fact, modeling studies point to dynamics of inter-helical loop regions within apolipoproteins as being a possible source for observed variation in NLP size. Initial attempts at three-dimensional reconstructions have proven to be challenging due to this size and shape disparity. They are pursuing a strategy of computational size exclusion to group particles into subpopulations based on average particle diameter. They show here results from their ongoing efforts at statistically and computationally subdividing NLP populations to realize greater homogeneity and then generate 3D reconstructions.
Chong Fan; Xushuai Chen; Lei Zhong; Min Zhou; Yun Shi; Yulin Duan
2017-01-01
A sub-block algorithm is usually applied in the super-resolution (SR) reconstruction of images because of limitations in computer memory. However, the sub-block SR images can hardly achieve a seamless image mosaicking because of the uneven distribution of brightness and contrast among these sub-blocks. An effectively improved weighted Wallis dodging algorithm is proposed, aiming at the characteristic that SR reconstructed images are gray images with the same size and overlapping region. This ...
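The Wallis dodging idea, mapping each sub-block's local mean and standard deviation toward common targets so that mosaicked SR sub-blocks match in brightness and contrast, can be sketched with the classic transform. The paper's improved weighted variant adds blending weights across overlapping sub-blocks, which this illustration omits.

```python
import numpy as np

def wallis(block, target_mean, target_std, c=1.0, b=1.0):
    """Classic Wallis transform. With c = b = 1 the output's mean and
    standard deviation match the targets exactly; smaller c and b blend
    toward the block's own statistics."""
    m, s = block.mean(), block.std()
    gain = c * target_std / (c * s + (1 - c) * target_std + 1e-12)
    return (block - m) * gain + b * target_mean + (1 - b) * m
```

Applying this with shared targets to every sub-block is what removes the visible seams: after the transform, adjacent sub-blocks agree in both brightness (mean) and contrast (standard deviation).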
Hu, E; Lasio, G; Lee, M; Chen, S; Yi, B [Univ. of Maryland School Of Medicine, Baltimore, MD (United States)
2015-06-15
Purpose: Only a part of a treatment couch is reconstructed in CBCT due to the limited field of view (FOV). This often generates inaccurate results in the delivered dose evaluation with CBCT and more noise in the CBCT reconstruction. Full reconstruction of the couch at treatment setup can be used for more accurate exit beam dosimetry. The goal of this study is to develop a method to reconstruct a full treatment couch using a pre-scanned couch image and rigid registration. Methods: A full couch (Exact Couch, Varian) model image was reconstructed by rigidly registering and combining two sets of partial CBCT images. The full couch model includes three parts: two side rails and a couch top. A patient CBCT was reconstructed with reconstruction grid size larger than the physical field of view to include the full couch. The image quality of the couch is not good due to data truncation, but good enough to allow rigid registration of the couch. A composite CBCT image of the patient plus couch has been generated from the original reconstruction by replacing couch portion with the pre-acquired model couch, rigidly registered to the original scan. We evaluated the clinical usefulness of this method by comparing treatment plans generated on the original and on the modified scans. Results: The full couch model could be attached to a patient CBCT image set via rigid image registration. Plan DVHs showed 1∼2% difference between plans with and without full couch modeling. Conclusion: The proposed method generated a full treatment couch CBCT model, which can be successfully registered to the original patient image. This method was also shown to be useful in generating more accurate dose distributions, by lowering 1∼2% dose in PTV and a few other critical organs. Part of this study is supported by NIH R01CA133539.
[Research on maize multispectral image accurate segmentation and chlorophyll index estimation].
Wu, Qian; Sun, Hong; Li, Min-zan; Song, Yuan-yuan; Zhang, Yan-e
2015-01-01
In order to rapidly acquire maize growing information in the field, a non-destructive method of maize chlorophyll content index measurement was conducted based on multi-spectral imaging technique and imaging processing technology. The experiment was conducted at Yangling in Shaanxi province of China and the crop was Zheng-dan 958 planted in an about 1 000 m × 600 m experiment field. Firstly, a 2-CCD multi-spectral image monitoring system was used to acquire the canopy images. The system was based on a dichroic prism, allowing precise separation of the visible (Blue (B), Green (G), Red (R): 400-700 nm) and near-infrared (NIR, 760-1 000 nm) bands. The multispectral images were output as RGB and NIR images via the system vertically fixed to the ground with a vertical distance of 2 m and an angular field of 50°. The SPAD index of each sample was measured synchronously to provide the chlorophyll content index. Secondly, after the image smoothing using an adaptive smooth filtering algorithm, the NIR maize image was selected to segment the maize leaves from background, because there was a big difference shown in the gray histogram between plant and soil background. The NIR image segmentation algorithm was conducted following steps of preliminary and accurate segmentation: (1) The results of the OTSU image segmentation method and the variable threshold algorithm were discussed. It was revealed that the latter was the better one in corn plant and weed segmentation. As a result, the variable threshold algorithm based on local statistics was selected for the preliminary image segmentation. The expansion and corrosion were used to optimize the segmented image. (2) The region labeling algorithm was used to segment corn plants from soil and weed background with an accuracy of 95.59%. And then, the multi-spectral image of maize canopy was accurately segmented in R, G and B bands separately. Thirdly, the image parameters were abstracted based on the segmented visible and NIR images. The average gray
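The global Otsu method, the baseline the study compared its local variable-threshold algorithm against, can be sketched compactly. This is a generic implementation of the standard between-class-variance criterion, not code from the study.

```python
import numpy as np

def otsu_threshold(gray):
    """Global Otsu threshold for an 8-bit grayscale image: pick the level
    that maximizes the between-class variance of the gray histogram."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    omega = np.cumsum(p)                       # class-0 probability
    mu = np.cumsum(p * np.arange(256))         # class-0 cumulative mean
    mu_t = mu[-1]                              # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1 - omega))
    sigma_b[~np.isfinite(sigma_b)] = 0.0
    return int(np.argmax(sigma_b))
```

A single global threshold like this struggles under uneven field illumination, which is exactly why the study preferred a variable threshold based on local statistics.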
MO-C-18A-01: Advances in Model-Based 3D Image Reconstruction
Chen, G [University of Wisconsin, Madison, WI (United States); Pan, X [University Chicago, Chicago, IL (United States); Stayman, J [Johns Hopkins University, Baltimore, MD (United States); Samei, E [Duke University Medical Center, Durham, NC (United States)
2014-06-15
Recent years have seen the emergence of CT image reconstruction techniques that exploit physical models of the imaging system, photon statistics, and even the patient to achieve improved 3D image quality and/or reduction of radiation dose. With numerous advantages in comparison to conventional 3D filtered backprojection, such techniques bring a variety of challenges as well, including: a demanding computational load associated with sophisticated forward models and iterative optimization methods; nonlinearity and nonstationarity in image quality characteristics; a complex dependency on multiple free parameters; and the need to understand how best to incorporate prior information (including patient-specific prior images) within the reconstruction process. The advantages, however, are even greater – for example: improved image quality; reduced dose; robustness to noise and artifacts; task-specific reconstruction protocols; suitability to novel CT imaging platforms and noncircular orbits; and incorporation of known characteristics of the imager and patient that are conventionally discarded. This symposium features experts in 3D image reconstruction, image quality assessment, and the translation of such methods to emerging clinical applications. Dr. Chen will address novel methods for the incorporation of prior information in 3D and 4D CT reconstruction techniques. Dr. Pan will show recent advances in optimization-based reconstruction that enable potential reduction of dose and sampling requirements. Dr. Stayman will describe a “task-based imaging” approach that leverages models of the imaging system and patient in combination with a specification of the imaging task to optimize both the acquisition and reconstruction process. Dr. Samei will describe the development of methods for image quality assessment in such nonlinear reconstruction techniques and the use of these methods to characterize and optimize image quality and dose in a spectrum of clinical
Image reconstruction for a Positron Emission Tomograph optimized for breast cancer imaging
Virador, Patrick R.G. [Univ. of California, Berkeley, CA (United States)]
2000-04-01
The author performs image reconstruction for a novel Positron Emission Tomography camera that is optimized for breast cancer imaging. This work addresses, for the first time, the problem of fully-3D tomographic reconstruction using a septa-less, stationary (i.e., no rotation or linear motion), rectangular camera whose Field of View (FOV) encompasses the entire volume enclosed by detector modules capable of measuring Depth of Interaction (DOI) information. The camera is rectangular in shape in order to accommodate breasts of varying sizes while allowing for soft compression of the breast during the scan. This non-standard geometry of the camera exacerbates two problems: (a) radial elongation due to crystal penetration and (b) reconstructing images from irregularly sampled data. Packing considerations also give rise to regions in projection space that are not sampled, which leads to missing information. The author presents new Fourier-method-based image reconstruction algorithms that incorporate DOI information and accommodate the irregular sampling of the camera in a consistent manner by defining lines of response (LORs) between the measured interaction points, instead of rebinning the events into predefined crystal-face LORs, which is the only other method proposed thus far to handle DOI information. The new procedures maximize the use of the increased sampling provided by the DOI while minimizing interpolation in the data. The new algorithms use fixed-width, evenly spaced radial bins in order to take advantage of the speed of the Fast Fourier Transform (FFT), which necessitates the use of irregular angular sampling in order to minimize the number of unnormalizable Zero-Efficiency Bins (ZEBs). In order to address the persisting ZEBs and the missing information originating from packing considerations, the algorithms (a) perform nearest-neighbor smoothing in 2D in the radial bins and (b) employ a semi-iterative procedure to estimate the unsampled data.
Accurate Image Search using Local Descriptors into a Compact Image Representation
Soumia Benkrama
2013-01-01
Despite progress in image retrieval using low-level features such as color, texture and shape, performance remains unsatisfactory, as gaps exist between low-level features and high-level semantic concepts. In this work, we present an improved implementation of the bag-of-visual-words approach. We propose an image retrieval system based on the bag-of-features (BoF) model using the scale-invariant feature transform (SIFT) and speeded-up robust features (SURF). In the literature, SIFT and SURF give good results. Based on this observation, we decided to use a bag-of-features approach over quaternion Zernike moments (QZM). We compare the results of SIFT and SURF with those of QZM. We propose an indexing method for the content-based search task that aims to retrieve a collection of images and return a ranked list of objects in response to a query image. Experimental results with the Coil-100 and Corel-1000 image databases demonstrate that QZM produces better performance than the known representations (SIFT and SURF).
Xin, Fan; Ming-Kai, Yun; Xiao-Li, Sun; Xue-Xiang, Cao; Shuang-Quanm, Liu; Pei, Chai; Dao-Wu, Li; Long, Wei
2014-01-01
In positron emission tomography (PET) imaging, statistical iterative reconstruction (IR) techniques appear particularly promising, since they can provide an accurate physical model and geometric system description. The reconstructed image quality mainly depends on the system matrix model, which describes the relationship between image space and projection space for the IR method. The system matrix can contain physical factors of detection, such as a geometrical component and a blurring component. The point spread function (PSF) is generally used to describe the blurring component. This paper proposes an IR method based on a PSF system matrix, which is derived from the single-photon incidence response function. More specifically, gamma-photon incidence on a crystal array is simulated by Monte Carlo (MC) simulation, and the single-photon incidence response functions are obtained. Subsequently, using the single-photon incidence response functions, the coincidence blurring factor is acquired according to the...
Zhang, Hao; Ma, Jianhua; Lu, Hongbing; Liang, Zhengrong
2014-01-01
Statistical image reconstruction (SIR) methods have shown potential to substantially improve the image quality of low-dose X-ray computed tomography (CT) compared to the conventional filtered back-projection (FBP) method for various clinical tasks. Under maximum a posteriori (MAP) estimation, SIR methods can typically be formulated as an objective function consisting of two terms: (1) a data-fidelity (or equivalently, data-fitting or data-mismatch) term modeling the statistics of the projection measurements, and (2) a regularization (or equivalently, prior or penalty) term reflecting prior knowledge or expectations about the characteristics of the image to be reconstructed. Existing SIR methods for low-dose CT can be divided into two groups: (1) those that use calibrated transmitted photon counts (before log-transform) with a penalized maximum likelihood (pML) criterion, and (2) those that use calibrated line integrals (after log-transform) with a penalized weighted least-squares (PWLS) criterion. Accurate s...
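As a concrete illustration of the two-term objective described above, the following is a minimal sketch of a PWLS cost with a quadratic roughness penalty. The 2-pixel image, 3-ray system matrix, weights, and β value are all toy assumptions, not taken from the paper.

```python
import numpy as np

A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])  # toy system matrix
x_true = np.array([2.0, 3.0])                        # toy 2-pixel image
y = A @ x_true                                       # noiseless line integrals
W = np.diag([1.0, 1.0, 0.5])                         # statistical weights
beta = 0.1                                           # regularization strength

def pwls_objective(x):
    r = y - A @ x
    fidelity = r @ W @ r                             # data-fidelity term
    penalty = np.sum((x[1:] - x[:-1]) ** 2)          # quadratic roughness prior
    return float(fidelity + beta * penalty)
```

At the true image the residual vanishes, so only the scaled penalty term remains, which makes the trade-off controlled by β easy to see.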
Su, Hai; Xing, Fuyong; Yang, Lin
2016-06-01
Successful diagnostic and prognostic stratification, treatment outcome prediction, and therapy planning depend on reproducible and accurate pathology analysis. Computer aided diagnosis (CAD) is a useful tool to help doctors make better decisions in cancer diagnosis and treatment. Accurate cell detection is often an essential prerequisite for subsequent cellular analysis. The major challenge of robust brain tumor nuclei/cell detection is to handle significant variations in cell appearance and to split touching cells. In this paper, we present an automatic cell detection framework using sparse reconstruction and adaptive dictionary learning. The main contributions of our method are: 1) A sparse reconstruction based approach to split touching cells; 2) An adaptive dictionary learning method used to handle cell appearance variations. The proposed method has been extensively tested on a data set with more than 2000 cells extracted from 32 whole slide scanned images. The automatic cell detection results are compared with the manually annotated ground truth and other state-of-the-art cell detection algorithms. The proposed method achieves the best cell detection accuracy with a F1 score = 0.96.
Investigation of optimization-based reconstruction with an image-total-variation constraint in PET
Zhang, Zheng; Ye, Jinghan; Chen, Buxin; Perkins, Amy E.; Rose, Sean; Sidky, Emil Y.; Kao, Chien-Min; Xia, Dan; Tung, Chi-Hua; Pan, Xiaochuan
2016-08-01
Interest remains in reconstruction-algorithm research and development for possible improvement of image quality in current PET imaging and for enabling innovative PET systems to enhance existing, and facilitate new, preclinical and clinical applications. Optimization-based image reconstruction has in recent years been demonstrated to be of potential utility for CT imaging applications. In this work, we investigate tailoring optimization-based techniques to image reconstruction for PET systems with standard and non-standard scan configurations. Specifically, given an image-total-variation (TV) constraint, we investigated how the selection of different data divergences and associated parameters impacts the optimization-based reconstruction of PET images. The reconstruction robustness was also explored with respect to different data conditions and activity uptakes of practical relevance. A study was conducted particularly for image reconstruction from data collected by use of a PET configuration with sparsely populated detectors. Overall, the study demonstrates the robustness of the TV-constrained, optimization-based reconstruction for considerably different data conditions in PET imaging, as well as its potential to enable PET configurations with reduced numbers of detectors. Insights gained in the study may be exploited for developing algorithms for PET-image reconstruction and for enabling PET-configuration designs of practical usefulness in preclinical and clinical applications.
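The constrained-TV formulation above requires a dedicated solver; as a hedged stand-in, the sketch below minimizes an unconstrained least-squares cost with a smoothed TV penalty by gradient descent, which only illustrates the role of the TV term on a toy 1D piecewise-constant "activity". The random system matrix, sizes, and parameter values are illustrative assumptions, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 32
x_true = np.zeros(n)
x_true[10:20] = 1.0                      # piecewise-constant activity
A = rng.normal(size=(24, n)) / np.sqrt(n)  # underdetermined toy "system"
y = A @ x_true

lam, eps, step = 0.05, 1e-3, 0.1
x = np.zeros(n)
for _ in range(2000):
    grad_fid = A.T @ (A @ x - y)         # data-fidelity gradient
    d = np.diff(x)
    w = d / np.sqrt(d * d + eps)         # gradient of smoothed |d|
    grad_tv = np.zeros(n)
    grad_tv[:-1] -= w
    grad_tv[1:] += w
    x -= step * (grad_fid + lam * grad_tv)

rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
```

Even with fewer measurements than unknowns, the TV term steers the iterate toward the piecewise-constant truth, which is the intuition behind the sparse-detector results above.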
Curvelet-based sampling for accurate and efficient multimodal image registration
Safran, M. N.; Freiman, M.; Werman, M.; Joskowicz, L.
2009-02-01
We present a new non-uniform adaptive sampling method for the estimation of mutual information in multi-modal image registration. The method uses the Fast Discrete Curvelet Transform to identify regions along anatomical curves on which the mutual information is computed. Its main advantages over other non-uniform sampling schemes are that it captures the most informative regions, is invariant to feature shapes, orientations and sizes, is efficient, and yields accurate results. Extensive evaluation registering 20 validated clinical brain CT images to Proton Density (PD), T1- and T2-weighted MRI images from the public RIRE database shows the effectiveness of our method. Rigid registration accuracy, measured at 10 clinical targets and compared to ground-truth measurements, yields a mean target registration error of 0.68 mm (std = 0.4 mm) for CT-PD and 0.82 mm (std = 0.43 mm) for CT-T2. This is 0.3 mm (1 mm) more accurate in the average (worst) case than five existing sampling methods. Our method has the lowest registration errors recorded to date for the registration of CT-PD and CT-T2 images on the RIRE website among methods that were tested on at least three patient datasets.
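The quantity the sampling scheme above feeds is mutual information between sampled intensities; a minimal histogram-based estimator can be sketched as follows. The bin count and sample sizes are arbitrary choices for illustration, not values from the paper.

```python
import numpy as np

def mutual_information(a, b, bins=8):
    # Joint histogram of paired intensity samples.
    h, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = h / h.sum()
    px = p.sum(axis=1, keepdims=True)    # marginal of a
    py = p.sum(axis=0, keepdims=True)    # marginal of b
    nz = p > 0
    # MI = sum p(x,y) log( p(x,y) / (p(x) p(y)) ), in nats.
    return float(np.sum(p[nz] * np.log(p[nz] / (px @ py)[nz])))

rng = np.random.default_rng(2)
a = rng.random(1000)
mi_self = mutual_information(a, a)                    # aligned: high MI
mi_unrelated = mutual_information(a, rng.random(1000))  # unrelated: near zero
```

Identical inputs give high mutual information while unrelated inputs give a value near zero, which is the signal a registration optimizer climbs.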
Compressed sensing for reduction of noise and artefacts in direct PET image reconstruction
Richter, Dominik; Israel, Ina; Schneider, Magdalena; Samnick, Samuel [Wuerzburg Univ. (Germany). Dept. of Nuclear Medicine]; Basse-Luesebrink, Thomas C.; Kampf, Thomas; Jakob, Peter M. [Wuerzburg Univ. (Germany). Dept. of Experimental Physics 5]; Fischer, Andre [Wuerzburg Univ. (Germany). Inst. of Radiology]
2014-03-01
Aim: Image reconstruction in positron emission tomography (PET) can be performed using either direct or iterative methods. Direct reconstruction methods require only a short reconstruction time; however, for data containing few counts, they often result in poor visual images with high noise and reconstruction artefacts. Iterative reconstruction methods such as ordered-subset expectation maximization (OSEM) can lead to overestimation of activity in cold regions, distorting quantitative analysis. The present work investigates the possibility of reducing the noise and reconstruction artefacts of direct reconstruction methods using compressed sensing (CS). Materials and methods: Raw data are generated either by Monte Carlo simulations using GATE or taken from PET measurements with a Siemens Inveon small-animal PET scanner. The fully sampled dataset was reconstructed using filtered backprojection (FBP), reduced in Fourier space by multiplication with an incoherent undersampling pattern, and then reconstructed again with CS. Different sampling patterns are used and an average of the reconstructions is taken. The images are compared to the results of an OSEM reconstruction and quantified using the signal-to-noise ratio (SNR). Results: The application of the proposed CS post-processing technique clearly improves the image contrast. Depending on the undersampling factor, noise and artefacts are reduced, resulting in an SNR that is increased up to 3.4-fold. For short acquisition times with low count statistics, the SNR of the CS-reconstructed image exceeds the SNR of the OSEM reconstruction. Conclusion: Especially for low-count data, the proposed CS-based post-processing method applied to FBP-reconstructed PET images enhances the image quality significantly. (orig.)
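The CS post-processing idea (incoherently undersample in Fourier space, then recover with a sparsity-promoting iteration) can be sketched in 1D as below. The signal, sampling density, threshold, and iteration count are toy assumptions chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 64
img = np.zeros(n)
img[20:28] = 2.0                          # sparse "activity" features
img[40] = 3.0
mask = rng.random(n) < 0.5                # incoherent Fourier undersampling
mask[0] = True                            # always keep the DC term
meas = np.fft.fft(img) * mask             # retained Fourier samples

x = np.real(np.fft.ifft(meas))            # zero-filled starting image
for _ in range(200):
    # Soft threshold promotes sparsity in the image domain.
    x = np.sign(x) * np.maximum(np.abs(x) - 0.05, 0.0)
    X = np.fft.fft(x)
    X[mask] = meas[mask]                  # enforce data consistency
    x = np.real(np.fft.ifft(X))

rel_err = np.linalg.norm(x - img) / np.linalg.norm(img)
```

Alternating a sparsity step with a data-consistency step suppresses the incoherent aliasing, which is the mechanism behind the SNR gains reported above.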
Efficient DPCA SAR imaging with fast iterative spectrum reconstruction method
FANG Jian; ZENG JinShan; XU ZongBen; ZHAO Yao
2012-01-01
The displaced phase center antenna (DPCA) technique is an effective strategy for achieving wide-swath synthetic aperture radar (SAR) imaging with high azimuth resolution. Traditionally, however, it requires a strict limitation on the pulse repetition frequency (PRF) to avoid non-uniform sampling; any deviation can introduce serious ambiguity if the data are processed directly with a matched filter. To break this limitation, a recently proposed spectrum reconstruction method can recover the true spectrum from the non-uniform samples, but its performance is sensitive to the selection of the PRF. Sparse-regularization-based imaging may provide a way to overcome this sensitivity. The existing time-domain method, however, requires a large-scale observation matrix to be built, which incurs a high computational cost. In this paper, we propose a frequency-domain method, called the iterative spectrum reconstruction method, by integrating the sparse regularization technique with spectrum analysis of the DPCA signal. By approximately expressing the observation in the frequency domain, realized via a series of decoupled linear operations, the method performs SAR imaging without directly forming the observation matrix, which reduces the computational cost from O(N2) to O(NlogN) (where N is the number of range cells), and it is therefore more efficient than the time-domain method. A sparse regularization scheme, realized via a fast thresholding iteration, is adopted in this method, which makes the imaging process robust to the PRF selection. We provide a series of simulations and ground-based experiments to demonstrate the high efficiency and robustness of the method. The simulations show that the new method is almost as fast as the traditional mono-channel algorithm and works well almost independently of the PRF selection. Consequently, the proposed method can be regarded as a practical and efficient wide-swath SAR imaging technique.
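The fast thresholding iteration mentioned above is, in its simplest form, an ISTA-style update alternating a gradient step with a soft threshold. A generic small-scale sketch follows; the toy sizes, random operator, and parameters stand in for the DPCA-specific frequency-domain operators, which are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.normal(size=(30, 60)) / np.sqrt(30)   # toy observation operator
x_true = np.zeros(60)
x_true[[5, 17, 42]] = [1.0, -2.0, 1.5]        # sparse scene
y = A @ x_true

L = np.linalg.norm(A, 2) ** 2                 # Lipschitz constant of gradient
lam = 0.05                                    # sparsity weight
x = np.zeros(60)
for _ in range(500):
    z = x - (A.T @ (A @ x - y)) / L           # gradient step on the fidelity
    x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold

rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
```

In the paper's setting the dense matrix-vector products are replaced by FFT-based decoupled operations, which is what yields the O(N log N) cost.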
Modulus reconstruction from prostate ultrasound images using finite element modeling
Yan, Zhennan; Zhang, Shaoting; Alam, S. Kaisar; Metaxas, Dimitris N.; Garra, Brian S.; Feleppa, Ernest J.
2012-03-01
In medical diagnosis, elastography is becoming increasingly useful. However, analyses usually assume a planar compression applied to tissue surfaces and measure the resulting deformation. The stress distribution is relatively uniform close to the surface when using a large, flat compressor, but it diverges gradually with tissue depth. In prostate elastography, the transrectal probes used for scanning and compression are generally cylindrical side-fire or rounded end-fire probes, and the force is applied through the rectal wall. This makes it very difficult to detect cancer in the prostate, since the rounded contact surfaces exaggerate the non-uniformity of the applied stress, especially for the distal, anterior prostate. We have developed a preliminary 2D Finite Element Model (FEM) to simulate prostate deformation in elastography. The model includes a homogeneous prostate with a stiffer tumor in the proximal, posterior region of the gland. A force is applied to the rectal wall to deform the prostate; strain and stress distributions can then be computed from the resultant displacements. We then take the displacements as the boundary condition and reconstruct the modulus distribution (the inverse problem) using the linear perturbation method. FEM simulation shows that strain and the strain contrast (of the lesion) decrease very rapidly with increasing depth and lateral distance. Therefore, lesions would not be clearly visible if located far away from the probe. However, the reconstructed modulus image can better depict a relatively stiff lesion wherever the lesion is located.
Poulin, E; Racine, E; Beaulieu, L [CHU de Quebec - Universite Laval, Quebec, Quebec (Canada)]; Binnekamp, D [Integrated Clinical Solutions and Marketing, Philips Healthcare, Best, DA (Netherlands)]
2014-06-15
Purpose: In high-dose-rate brachytherapy (HDR-B), current catheter reconstruction protocols are slow and error-prone. The purpose of this study was to evaluate the accuracy and robustness of an electromagnetic (EM) tracking system for improved catheter reconstruction in HDR-B protocols. Methods: For this proof of principle, a total of 10 catheters were inserted in gelatin phantoms with different trajectories. Catheters were reconstructed using a Philips-design 18G biopsy needle (used as an EM stylet) and the second-generation Aurora Planar Field Generator from Northern Digital Inc. The Aurora EM system exploits alternating-current technology and generates 3D points at 40 Hz. Phantoms were also scanned using a μCT (GE Healthcare) and a Philips Big Bore clinical CT system, with resolutions of 0.089 mm and 2 mm, respectively. Reconstructions using the EM stylet were compared to μCT and CT. To assess the robustness of the EM reconstruction, 5 catheters were reconstructed twice and compared. Results: Reconstruction time for one catheter was 10 seconds or less. This implies that for a typical clinical implant of 17 catheters, the total reconstruction time would be less than 3 minutes. When compared to the μCT, the mean EM tip identification error was 0.69 ± 0.29 mm while the CT error was 1.08 ± 0.67 mm. The mean 3D distance error was found to be 0.92 ± 0.37 mm and 1.74 ± 1.39 mm for the EM and CT, respectively. EM 3D catheter trajectories were found to be significantly more accurate (unpaired t-test, p < 0.05). A mean difference of less than 0.5 mm was found between successive EM reconstructions. Conclusion: The EM reconstruction was found to be faster, more accurate and more robust than the conventional methods used for catheter reconstruction in HDR-B. This approach can be applied to any type of catheter and applicator. We would like to disclose that the equipment used in this study comes from a collaboration with Philips Medical.
Aperture domain model image reconstruction (ADMIRE) with plane wave synthesis
Dei, Kazuyuki; Tierney, Jaime; Byram, Brett
2017-03-01
In our previous studies, we demonstrated that our aperture domain model-based clutter suppression algorithm improved the image quality of in vivo B-mode data obtained from focused transmit beam sequences. Our approach suppresses off-axis clutter and reverberation, and it tackles limitations of related algorithms because it preserves RF channel signals and speckle statistics. We call the algorithm aperture domain model image reconstruction (ADMIRE). We previously focused on reverberation suppression, but ADMIRE is also effective at suppressing off-axis clutter. We are interested in how ADMIRE performs on plane wave sequences and in the impact of ADMIRE applied before and after synthetic beamforming of steered plane wave sequences. We employed simulated phantoms using Field II, as well as tissue-mimicking phantoms, to evaluate ADMIRE applied to plane wave sequencing. We generated images acquired from plane waves with and without synthetic aperture synthesis and measured contrast and contrast-to-noise ratio (CNR). For simulated cyst images formed from single plane waves, the contrasts for delay-and-sum (DAS) and ADMIRE are 15.64 dB and 28.34 dB, respectively, while the CNRs are 1.76 dB and 3.90 dB, respectively. Based on these findings, ADMIRE improves plane wave image quality. We also applied ADMIRE to resolution phantoms having a point target at 3 cm depth on-axis, simulating the point spread functions from data obtained from 1 and 75 steered plane waves, along with linear scans focused at 3 and 4 cm depth. We then examined the outcome of applying ADMIRE before and after synthetic aperture processing. Finally, we applied the method to an in vivo carotid artery.
Scheins, J J; Herzog, H; Shah, N J
2011-03-01
For iterative, fully-3D positron emission tomography (PET) image reconstruction, intrinsic symmetries can be used to significantly reduce the size of the system matrix. The precalculation and beneficial memory-resident storage of all nonzero system matrix elements is possible where sufficient compression exists. Thus, reconstruction times can be minimized independently of the projector used, and more elaborate weighting schemes, e.g. volume-of-intersection (VOI), become applicable. A novel organization of scanner-independent, adaptive 3D projection data is presented which can be advantageously combined with highly rotation-symmetric voxel assemblies. In this way, significant system matrix compression is achieved. Applications taking into account all physical lines of response (LORs) with individual VOI projectors are presented for the Siemens ECAT HR+ whole-body scanner and the Siemens BrainPET, the PET component of a novel hybrid MR/PET imaging system. Measured and simulated data were reconstructed using the new method with ordered-subset expectation maximization (OSEM). Results are compared to those obtained by the sinogram-based OSEM reconstruction provided by the manufacturer. The higher computational effort due to the more accurate image-space sampling provides significantly improved images in terms of resolution and noise.
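For context, the OSEM reconstruction used above builds on the multiplicative MLEM update; a subset-free sketch with a toy nonnegative system matrix is shown below. The paper's contribution is the symmetry-based compression and storage of the system matrix, not this update rule, and the matrix and sizes here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.random((40, 16))                 # toy nonnegative system matrix
x_true = rng.random(16) + 0.5            # strictly positive activity
y = A @ x_true                           # noiseless expected counts

x = np.ones(16)                          # uniform starting image
sens = A.sum(axis=0)                     # sensitivity image, A^T 1
for _ in range(2000):
    # Multiplicative MLEM update: x <- x * A^T(y / Ax) / A^T 1
    x *= (A.T @ (y / (A @ x))) / sens

rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
```

The update is multiplicative, so nonnegativity of the image is preserved automatically at every iteration.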
Iterative reconstruction of images from incomplete spectral data
Rhebergen, Jan B.; van den Berg, Peter M.; Habashy, Tarek M.
1997-06-01
In various branches of engineering and science, one is confronted with measurements resulting in incomplete spectral data. The problem of reconstructing an image from such a data set can be formulated in terms of an integral equation of the first kind. This equation can be converted into an equivalent integral equation of the second kind, which can be solved by a Neumann-type iterative method. It is shown that this Neumann expansion is an error-reducing method and that it is equivalent to the Papoulis-Gerchberg algorithm for band-limited signal extrapolation. The integral equation can also be solved by employing a conjugate gradient iterative scheme. Again, convergence of this scheme is demonstrated. Finally, a number of illustrative numerical examples are presented and discussed.
HYPR: constrained reconstruction for enhanced SNR in dynamic medical imaging
Mistretta, C.; Wieben, O.; Velikina, J.; Wu, Y.; Johnson, K.; Korosec, F.; Unal, O.; Chen, G.; Fain, S.; Christian, B.; Nalcioglu, O.; Kruger, R. A.; Block, W.; Samsonov, A.; Speidel, M.; Van Lysel, M.; Rowley, H.; Supanich, M.; Turski, P.; Wu, Yan; Holmes, J.; Kecskemeti, S.; Moran, C.; O'Halloran, R.; Keith, L.; Alexander, A.; Brodsky, E.; Lee, J. E.; Hall, T.; Zagzebski, J.
2008-03-01
During the last eight years our group has developed radial acquisitions with angular undersampling factors of several hundred that accelerate MRI in selected applications. As with all previous acceleration techniques, SNR typically falls at least as fast as the inverse square root of the undersampling factor. This limits the SNR available to support the small voxels that these methods can image over short time intervals in applications like time-resolved contrast-enhanced MR angiography (CE-MRA). Instead of processing each time interval independently, we have developed constrained reconstruction methods that exploit the significant correlation between temporal sampling points. A broad class of methods, termed HighlY Constrained Back PRojection (HYPR), generalizes this concept to other modalities and sampling dimensions.
Quantitative thermo-acoustic imaging: An exact reconstruction formula
Ammari, Habib; Jing, Wenjia; Nguyen, Loc
2012-01-01
Quantitative thermo-acoustic imaging is considered in this paper. Given several sets of electromagnetic data, we first establish an exact formula for the absorption coefficient, which involves derivatives of the given data up to third order. Because it depends on such derivatives, however, this formula is unstable, in the sense that small measurement noise may cause large errors. Hence, in the presence of noise, the obtained formula, together with noise regularization, provides an initial guess for the true absorption coefficient. We then correct the errors by deriving a reconstruction formula based on the least-squares solution of an optimal control problem, and we show that this optimization step reduces the errors.
A Fast Super-Resolution Reconstruction from Image Sequence
(author not listed)
2006-01-01
Based on the mechanism of image formation, a novel method called the delaminating combining template method, used for the problem of super-resolution reconstruction from an image sequence, is described in this paper. The method consists of two steps: a delaminating strategy and a combining template algorithm. The delaminating strategy divides the original problem into several sub-problems, each connected to only one degrading factor. The combining template algorithm is then used to solve each sub-problem. In addition, to verify the validity of the method, a new index called oriental entropy is presented. The results from theoretical analysis and experiments show this method to be promising and efficient.
Deep, Prakash; Paninjath, Sankaranarayanan; Pereira, Mark; Buck, Peter
2016-05-01
At advanced technology nodes, mask complexity has increased because of the large-scale use of resolution enhancement technologies (RET), which include Optical Proximity Correction (OPC), Inverse Lithography Technology (ILT) and Source Mask Optimization (SMO). The number of defects detected during inspection of such masks has increased drastically, and differentiating critical from non-critical defects has become more challenging, complex and time consuming. Because of the significant defectivity of EUVL masks and the non-availability of actinic inspection, it is important, and also challenging, to predict the criticality of defects for printability on the wafer. This is one of the significant barriers to the adoption of EUVL for semiconductor manufacturing. Until actinic inspection becomes available, techniques are desired that can decide the criticality of defects from images captured using non-actinic inspection. High-resolution inspection of photomask images detects many defects, which are used for process and mask qualification. Repairing all defects is not practical, and probably not required; however, it is imperative to know which defects are severe enough to impact the wafer before repair. Additionally, a wafer printability check is always desired after repairing a defect. AIMS™ review is the industry standard for this, but performing AIMS™ review for all defects is expensive and very time consuming. A fast, accurate and economical mechanism is desired that can predict defect printability on the wafer accurately and quickly from images captured using a high-resolution inspection machine. Predicting defect printability from such images is challenging because the high-resolution images do not correlate with actual mask contours. The challenge is increased by the use of optical conditions during inspection that differ from the actual scanner conditions, so that defects found in such images do not correlate directly with their actual impact on the wafer. Our automated defect simulation tool predicts
Mathew G. Pelletier
2010-09-01
The use of microwave imaging is becoming more prevalent for detection of hidden interior defects in manufactured and packaged materials. In applications for detecting hidden moisture, microwave tomography can be used to image the material and then perform an inverse calculation to estimate the variability of the hidden material, such as internal moisture, thereby alerting personnel to damaging moisture levels before material degradation occurs. One impediment to this type of imaging occurs when nearby objects create strong reflections that produce destructive and constructive interference at the receiver as the material is conveyed past the imaging antenna array. In an effort to remove the influence of such reflectors, for example metal bale ties, research was conducted to develop an algorithm for removing the influence of local proximity reflectors from the microwave images. This research effort produced a technique, based upon the use of ultra-wideband signals, for the removal of spurious reflections created by local proximity reflectors. The improvement enables accurate microwave measurement of moisture in products such as cotton bales, as well as of other physical properties such as density or material composition. The proposed algorithm was shown to reduce errors by a 4:1 ratio and is an enabling technology for imaging applications in the presence of metal bale ties.
Noise-free accurate count of microbial colonies by time-lapse shadow image analysis.
Ogawa, Hiroyuki; Nasu, Senshi; Takeshige, Motomu; Funabashi, Hisakage; Saito, Mikako; Matsuoka, Hideaki
2012-12-01
Microbial colonies in food matrices can be counted accurately by a novel noise-free method based on time-lapse shadow image analysis. An agar plate containing many clusters of microbial colonies and/or meat fragments was trans-illuminated to project their 2-dimensional (2D) shadow images onto a color CCD camera. The 2D shadow images of every cluster distributed within a 3-mm-thick agar layer were captured in focus simultaneously by means of a multiple focusing system, and were then converted to 3-dimensional (3D) shadow images. By time-lapse analysis of the 3D shadow images, it was determined whether each cluster comprised single or multiple colonies or a meat fragment. The analytical precision was high enough to distinguish a microbial colony from a meat fragment, to recognize an oval image as two colonies in contact with each other, and to detect microbial colonies hidden under a food fragment. The detection of hidden colonies is an outstanding capability in comparison with other systems. The present system attained accurate counts of fewer than 5 colonies and is therefore of practical importance.
Fast and Accurate Semiautomatic Segmentation of Individual Teeth from Dental CT Images.
Kang, Ho Chul; Choi, Chankyu; Shin, Juneseuk; Lee, Jeongjin; Shin, Yeong-Gil
2015-01-01
In this paper, we propose a fast and accurate semiautomatic method to effectively distinguish individual teeth from the sockets of teeth in dental CT images. Parameter values of thresholding and shapes of the teeth are propagated to the neighboring slice, based on the separated teeth from reference images. After the propagation of threshold values and shapes of the teeth, the histogram of the current slice was analyzed. The individual teeth are automatically separated and segmented by using seeded region growing. Then, the newly generated separation information is iteratively propagated to the neighboring slice. Our method was validated by ten sets of dental CT scans, and the results were compared with the manually segmented result and conventional methods. The average error of absolute value of volume measurement was 2.29 ± 0.56%, which was more accurate than conventional methods. Boosting up the speed with the multicore processors was shown to be 2.4 times faster than a single core processor. The proposed method identified the individual teeth accurately, demonstrating that it can give dentists substantial assistance during dental surgery.
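The seeded region-growing step described above can be sketched in a few lines: starting from a seed (supplied by the user or propagated from the previous slice), 4-connected pixels are absorbed while their intensity stays within a tolerance of the seed value. A minimal 2D sketch; the tolerance handling and names here are illustrative, not the paper's exact parameterization.

```python
import numpy as np
from collections import deque

def region_grow(image, seed, tol):
    """Grow a region from `seed`, absorbing 4-connected pixels whose
    intensity lies within `tol` of the seed intensity."""
    h, w = image.shape
    seed_val = float(image[seed])
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if (0 <= ny < h and 0 <= nx < w and not mask[ny, nx]
                    and abs(float(image[ny, nx]) - seed_val) <= tol):
                mask[ny, nx] = True
                queue.append((ny, nx))
    return mask
```

In the paper this step runs per slice, with thresholds and tooth shapes propagated from the neighboring, already-segmented slice.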
An accurate and practical method for inference of weak gravitational lensing from galaxy images
Bernstein, Gary M.; Armstrong, Robert; Krawiec, Christina; March, Marisa C.
2016-07-01
We demonstrate highly accurate recovery of weak gravitational lensing shear using an implementation of the Bayesian Fourier Domain (BFD) method proposed by Bernstein & Armstrong, extended to correct for selection biases. The BFD formalism is rigorously correct for Nyquist-sampled, background-limited, uncrowded images of background galaxies. BFD does not assign shapes to galaxies, instead compressing the pixel data D into a vector of moments M, such that we have an analytic expression for the probability P(M|g) of obtaining the observations with gravitational lensing distortion g along the line of sight. We implement an algorithm for conducting BFD's integrations over the population of unlensed source galaxies which measures ≈10 galaxies s⁻¹ core⁻¹ with good scaling properties. Initial tests of this code on ≈10⁹ simulated lensed galaxy images recover the simulated shear to a fractional accuracy of m = (2.1 ± 0.4) × 10⁻³, substantially more accurate than has been demonstrated previously for any generally applicable method. Deep sky exposures generate a sufficiently accurate approximation to the noiseless, unlensed galaxy population distribution assumed as input to BFD. Potential extensions of the method include simultaneous measurement of magnification and shear; multiple-exposure, multiband observations; and joint inference of photometric redshifts and lensing tomography.
Tricarico, Francesco; Hlavacek, Anthony M.; Schoepf, U. Joseph; Ebersberger, Ullrich; Nance, John W.; Vliegenthart, Rozemarijn; Cho, Young Jun; Spears, J. Reid; Secchi, Francesco; Savino, Giancarlo; Marano, Riccardo; Schoenberg, Stefan O.; Bonomo, Lorenzo; Apfaltrer, Paul
2013-01-01
To evaluate image quality (IQ) of low-radiation-dose paediatric cardiovascular CT angiography (CTA), comparing iterative reconstruction in image space (IRIS) and sinogram-affirmed iterative reconstruction (SAFIRE) with filtered back-projection (FBP), and to estimate the potential for further dose reduction.
Lemoigne, Yves
2008-01-01
This volume collects the lectures presented at the ninth ESI School held at Archamps (FR) in November 2006 and is dedicated to nuclear physics applications in molecular imaging. The lectures focus on the multiple facets of image reconstruction processing and management and illustrate the role of digital imaging in clinical practice. Medical computing and image reconstruction are introduced by analysing the underlying physics principles and their implementation, relevant quality aspects, clinical performance and recent advancements in the field. Several stages of the imaging process are specifically addressed, e.g. optimisation of data acquisition and storage, distributed computing, physiology and detector modelling, computer algorithms for image reconstruction and measurement in tomography applications, for both clinical and biomedical research applications. All topics are presented with didactical language and style, making this book an appropriate reference for students and professionals seeking a comprehen...
A method to detect landmark pairs accurately between intra-patient volumetric medical images.
Yang, Deshan; Zhang, Miao; Chang, Xiao; Fu, Yabo; Liu, Shi; Li, Harold H; Mutic, Sasa; Duan, Ye
2017-08-23
An image processing procedure was developed in this study to detect a large quantity of landmark pairs accurately in pairs of volumetric medical images. The detected landmark pairs can be used to evaluate deformable image registration (DIR) methods quantitatively. Landmark detection and pair matching were implemented in a Gaussian pyramid multi-resolution scheme. A 3D scale-invariant feature transform (SIFT) feature detection method and a 3D Harris-Laplacian corner detection method were employed to detect feature points, i.e., landmarks. A novel feature matching algorithm, Multi-Resolution Inverse-Consistent Guided Matching (MRICGM), was developed to allow accurate feature pair matching. MRICGM performs feature matching guided by the feature pairs detected at the lower resolution stage and the higher confidence feature pairs already detected at the same resolution stage, while enforcing inverse consistency. The proposed feature detection and feature pair matching algorithms were optimized to process 3D CT and MRI images. They were successfully applied between the inter-phase abdomen 4DCT images of three patients, between the original and the re-scanned radiation therapy simulation CT images of two head-neck patients, and between inter-fractional treatment MRIs of two patients. The proposed procedure was able to successfully detect and match over 6300 feature pairs on average. The automatically detected landmark pairs were manually verified and the mismatched pairs were rejected. The automatic feature matching accuracy before manual error rejection was 99.4%. Performance of MRICGM was also evaluated using seven digital phantom datasets with known ground truth of tissue deformation. On average, 11855 feature pairs were detected per digital phantom dataset with TRE = 0.77 ± 0.72 mm. A procedure was developed in this study to detect a large number of landmark pairs accurately between two volumetric medical images. It allows a semi-automatic way to generate the
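The inverse-consistency requirement above, keeping a match only if it holds in both directions, can be illustrated with a plain mutual-nearest-neighbour check on feature descriptors. This is a much-simplified stand-in for MRICGM's multi-resolution guided matching; the descriptor arrays and names are hypothetical.

```python
import numpy as np

def mutual_nearest_matches(desc_a, desc_b):
    """Return index pairs (i, j) where descriptor i in A has j as its
    nearest neighbour in B AND j has i as its nearest neighbour in A,
    i.e. a simple inverse-consistency test."""
    # Pairwise Euclidean distances between the two descriptor sets
    d = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=2)
    a_to_b = d.argmin(axis=1)   # best match in B for each A descriptor
    b_to_a = d.argmin(axis=0)   # best match in A for each B descriptor
    return [(i, j) for i, j in enumerate(a_to_b) if b_to_a[j] == i]
```

MRICGM adds two further sources of guidance on top of this idea: matches found at coarser pyramid levels and higher-confidence matches at the same level.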
Sakellarios, Antonis I; Stefanou, Kostas; Siogkas, Panagiotis; Tsakanikas, Vasilis D; Bourantas, Christos V; Athanasiou, Lambros; Exarchos, Themis P; Fotiou, Evangelos; Naka, Katerina K; Papafaklis, Michail I; Patterson, Andrew J; Young, Victoria E L; Gillard, Jonathan H; Michalis, Lampros K; Fotiadis, Dimitrios I
2012-10-01
In this study, we present a novel methodology that allows reliable segmentation of the magnetic resonance images (MRIs) for accurate fully automated three-dimensional (3D) reconstruction of the carotid arteries and semiautomated characterization of plaque type. Our approach uses active contours to detect the luminal borders in the time-of-flight images and the outer vessel wall borders in the T(1)-weighted images. The methodology incorporates the connecting components theory for the automated identification of the bifurcation region and a knowledge-based algorithm for the accurate characterization of the plaque components. The proposed segmentation method was validated in randomly selected MRI frames analyzed offline by two expert observers. The interobserver variability of the method for the lumen and outer vessel wall was -1.60%±6.70% and 0.56%±6.28%, respectively, while the Williams Index for all metrics was close to unity. The methodology implemented to identify the composition of the plaque was also validated in 591 images acquired from 24 patients. The obtained Cohen's k was 0.68 (0.60-0.76) for lipid plaques, while the time needed to process an MRI sequence for 3D reconstruction was only 30 s. The obtained results indicate that the proposed methodology allows reliable and automated detection of the luminal and vessel wall borders and fast and accurate characterization of plaque type in carotid MRI sequences. These features render the currently presented methodology a useful tool in the clinical and research arena.
Optimization-based image reconstruction with artifact reduction in C-arm CBCT
Xia, Dan; Langan, David A.; Solomon, Stephen B.; Zhang, Zheng; Chen, Buxin; Lai, Hao; Sidky, Emil Y.; Pan, Xiaochuan
2016-10-01
We investigate an optimization-based reconstruction, with an emphasis on image-artifact reduction, from data collected in C-arm cone-beam computed tomography (CBCT) employed in image-guided interventional procedures. In the study, an image to be reconstructed is formulated as a solution to a convex optimization program in which a weighted data divergence is minimized subject to a constraint on the image total variation (TV); a data-derivative fidelity is introduced in the program specifically for effectively suppressing dominant, low-frequency data artifact caused by, e.g. data truncation; and the Chambolle-Pock (CP) algorithm is tailored to reconstruct an image through solving the program. Like any other reconstruction, the optimization-based reconstruction considered depends upon numerous parameters. We elucidate the parameters, illustrate their determination, and demonstrate their impact on the reconstruction. The optimization-based reconstruction, when applied to data collected from swine and patient subjects, yields images with visibly reduced artifacts in contrast to the reference reconstruction, and it also appears to exhibit a high degree of robustness against distinctively different anatomies of imaged subjects and scanning conditions of clinical significance. Knowledge and insights gained in the study may be exploited for aiding in the design of practical reconstructions of true clinical-application utility.
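The role of the TV term can be illustrated with the anisotropic total variation and a single (sub)gradient step on a simplified, unconstrained analogue of the program. The actual work minimizes a weighted data divergence under a TV constraint with a tailored Chambolle-Pock algorithm; the denoising form below is only a sketch of how a TV penalty enters an iteration.

```python
import numpy as np

def total_variation(img):
    """Anisotropic total variation: sum of absolute finite differences."""
    return np.abs(np.diff(img, axis=0)).sum() + np.abs(np.diff(img, axis=1)).sum()

def tv_denoise_step(img, data, lam=0.1, step=0.2):
    """One (sub)gradient step on 0.5*||x - data||^2 + lam * TV(x),
    a simplified unconstrained analogue of the paper's program."""
    gx = np.zeros_like(img)
    # Subgradient of anisotropic TV via signs of forward differences
    dy = np.sign(np.diff(img, axis=0))
    dx = np.sign(np.diff(img, axis=1))
    gx[:-1, :] -= dy; gx[1:, :] += dy
    gx[:, :-1] -= dx; gx[:, 1:] += dx
    grad = (img - data) + lam * gx
    return img - step * grad
```

For CBCT the quadratic data term is replaced by the weighted data divergence through the system matrix, and the constraint is handled by the primal-dual CP updates rather than an explicit penalty.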
Photoacoustic image reconstruction from ultrasound post-beamformed B-mode image
Zhang, Haichong K.; Guo, Xiaoyu; Kang, Hyun Jae; Boctor, Emad M.
2016-03-01
A requirement for reconstructing a photoacoustic (PA) image is to have channel data acquisition synchronized with laser firing. Unfortunately, most clinical ultrasound (US) systems do not offer an interface to obtain synchronized channel data. To broaden the impact of clinical PA imaging, we propose a PA image reconstruction algorithm utilizing the US B-mode image, which is readily available from clinical scanners. US B-mode imaging involves a series of signal processing steps: beamforming, followed by envelope detection, ending with log compression. Yet the image will be defocused when PA signals are the input, owing to the incorrect delay function. Our approach is to reverse the order of the image processing steps and recover the original US post-beamformed radio-frequency (RF) data, to which a synthetic aperture based PA rebeamforming algorithm can then be applied. Taking the B-mode image as input, we first recovered the US post-beamformed RF data by applying log decompression and convolving with an acoustic impulse response to restore carrier frequency information. Then, the US post-beamformed RF data is utilized as pre-beamformed RF data for the adaptive PA beamforming algorithm, and the new delay function is applied, taking into account that the focus depth in US beamforming is at half the depth of the PA case. The feasibility of the proposed method was validated through simulation, and was experimentally demonstrated using an acoustic point source. The point source was successfully beamformed from a US B-mode image, and the full width at half maximum (FWHM) of the point improved by a factor of 3.97. Compared to the ground-truth reconstruction using channel data, the FWHM was slightly degraded, by a factor of 1.28, owing to information loss during envelope detection and convolution of the RF information.
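The first reversal step, log decompression, is straightforward to sketch: if the display value is a clipped, normalized dB mapping of the envelope, it can be inverted exactly up to the clipping. The dynamic-range convention below is an assumption for illustration, not the scanner's actual mapping.

```python
import numpy as np

def log_compress(env, dyn_range_db=60.0):
    """Map an envelope to display values in [0, 1]: dB relative to the
    maximum, clipped to the dynamic range."""
    db = 20.0 * np.log10(env / env.max())
    return np.clip(db + dyn_range_db, 0.0, dyn_range_db) / dyn_range_db

def log_decompress(bmode, env_max, dyn_range_db=60.0):
    """Invert the compression to recover the envelope (up to the clip)."""
    db = bmode * dyn_range_db - dyn_range_db
    return env_max * 10.0 ** (db / 20.0)
```

The subsequent step in the paper, convolving the recovered envelope with an acoustic impulse response to restore carrier frequency information, is what turns this envelope back into RF-like data suitable for rebeamforming.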
Real-time maximum a-posteriori image reconstruction for fluorescence microscopy
Jabbar, Anwar A.; Dilipkumar, Shilpa; C K, Rasmi; Rajan, K.; Mondal, Partha P.
2015-08-01
Rapid reconstruction of multidimensional images is crucial for enabling real-time 3D fluorescence imaging. This becomes a key factor for imaging rapidly occurring events in the cellular environment. To facilitate real-time imaging, we have developed a graphics processing unit (GPU) based real-time maximum a-posteriori (MAP) image reconstruction system. The parallel processing capability of the GPU device, which consists of a large number of tiny processing cores, and the adaptability of the image reconstruction algorithm to parallel processing (which employs multiple independent computing modules called threads) result in high temporal resolution. Moreover, the proposed quadratic potential based MAP algorithm effectively deconvolves the images as well as suppresses the noise. The multi-node multi-threaded GPU and the Compute Unified Device Architecture (CUDA) efficiently execute the iterative image reconstruction algorithm, which is ≈200-fold faster (for large datasets) when compared to existing CPU based systems.
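The quadratic-potential MAP idea, a data-fidelity term plus a quadratic smoothness penalty on neighbouring pixels, can be sketched as a single gradient step. The symmetric-PSF assumption and names below are simplifications for illustration; the actual system runs many such iterations in parallel on the GPU.

```python
import numpy as np

def map_deconv_step(x, y, psf_conv, beta=0.1, step=0.5):
    """One gradient-descent step on ||H x - y||^2 + beta * ||D x||^2,
    where H is a user-supplied convolution (assumed symmetric, so
    H^T = H) and D takes differences between neighbouring pixels
    (a quadratic smoothness potential)."""
    resid = psf_conv(x) - y
    data_grad = psf_conv(resid)       # H^T applied to the residual
    # Gradient direction of the quadratic neighbour penalty
    lap = np.zeros_like(x)
    d = np.diff(x, axis=0); lap[:-1, :] -= d; lap[1:, :] += d
    d = np.diff(x, axis=1); lap[:, :-1] -= d; lap[:, 1:] += d
    return x - step * (data_grad + beta * lap)
```

On a GPU each pixel's update is independent given its neighbours, which is what makes the iteration map naturally onto thousands of CUDA threads.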
Sajib, Saurav Z. K.; Kim, Ji Eun; Jeong, Woo Chul; Kim, Hyung Joong; Woo, Eung Je [Department of Biomedical Engineering, Kyung Hee University, Yongin, Gyeonggi (Korea, Republic of); Kwon, Oh In, E-mail: oikwon@konkuk.ac.kr [Department of Mathematics, Konkuk University, Seoul (Korea, Republic of)
2015-03-14
Magnetic resonance electrical impedance tomography visualizes current density and/or conductivity distributions inside an electrically conductive object. Injecting currents into the imaging object along at least two different directions, induced magnetic flux density data can be measured using a magnetic resonance imaging scanner. Without rotating the object inside the scanner, we can measure only one component of the magnetic flux density, denoted as B_z. Since biological tissues such as skeletal muscle and brain white matter show strongly anisotropic properties, the reconstruction of the anisotropic conductivity tensor is indispensable for accurate observations in biological systems. In this paper, we propose a direct method to reconstruct an axial apparent orthotropic conductivity tensor by using multiple B_z data subject to multiple injection currents. To investigate the anisotropic conductivity properties, we first recover the internal current density from the measured B_z data. From the recovered internal current density and the curl-free condition of the electric field, we derive an over-determined matrix system for determining the internal absolute orthotropic conductivity tensor. The over-determined matrix system is designed to use a combination of two loops around each pixel. Numerical simulations and phantom experimental results demonstrate that the proposed algorithm stably determines the orthotropic conductivity tensor.
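The over-determined recovery can be illustrated for a diagonal (orthotropic) tensor: with J = σE component-wise, each injection current contributes one equation per axis, and several injections give a least-squares solvable system. A 2D toy version, not the paper's loop-based 3D construction; names are hypothetical.

```python
import numpy as np

def fit_orthotropic_sigma(E_list, J_list):
    """Least-squares fit of a diagonal (orthotropic) conductivity tensor
    from several electric-field / current-density pairs, using the
    component-wise relation J_k = sigma_k * E_k. The system is
    over-determined when more than one injection current is used."""
    sigma = np.empty(2)
    for k in range(2):
        e = np.array([E[k] for E in E_list])
        j = np.array([J[k] for J in J_list])
        sigma[k] = np.dot(e, j) / np.dot(e, e)   # 1-D least squares per axis
    return sigma
```

In the paper the electric field is not measured directly; it is constrained by the curl-free condition and combined with the current density recovered from the B_z data.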
Toward Optimal Computation of Ultrasound Image Reconstruction Using CPU and GPU.
Techavipoo, Udomchai; Worasawate, Denchai; Boonleelakul, Wittawat; Keinprasit, Rachaporn; Sunpetchniyom, Treepop; Sugino, Nobuhiko; Thajchayapong, Pairash
2016-11-24
An ultrasound image is reconstructed from echo signals received by array elements of a transducer. The time of flight of the echo depends on the distance between the focus and the array elements. The received echo signals have to be delayed to make their wave fronts and phases coherent before summing the signals. In digital beamforming, the delays are not always located at the sampled points. Generally, the values of the delayed signals are estimated by the values of the nearest samples. This method is fast and easy, but inaccurate. Other methods are available for increasing the accuracy of the delayed signals and, consequently, the quality of the beamformed signals; for example, in-phase/quadrature (I/Q) interpolation, which is more time consuming but provides more accurate values than the nearest samples. This paper compares the signals after dynamic receive beamforming, in which the echo signals are delayed using two methods: the nearest sample method and the I/Q interpolation method. The comparisons of the visual qualities of the reconstructed images and the qualities of the beamformed signals are reported. Moreover, the computational speeds of these methods are also optimized by reorganizing the data processing flow and by applying the graphics processing unit (GPU). The use of single and double precision floating-point formats for the intermediate data is also considered. The speeds with and without these optimizations are compared.
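The two delay strategies being compared can be sketched directly. Linear interpolation stands in here for the paper's more elaborate I/Q interpolation; the point is the same: a fractional-sample delay is served better by interpolating between samples than by rounding to the nearest one.

```python
import numpy as np

def delayed_sample(signal, delay, method="nearest"):
    """Value of a sampled signal at a fractional delay (in samples).
    'nearest' rounds to the closest sample; 'linear' interpolates
    between the two neighbouring samples."""
    if method == "nearest":
        return float(signal[int(round(delay))])
    i = int(np.floor(delay))
    frac = delay - i
    return float((1.0 - frac) * signal[i] + frac * signal[i + 1])
```

In receive beamforming this lookup is performed per element and per output sample before summation, which is why its cost and accuracy both matter.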
Converse, Matthew I., E-mail: mconverse85@yahoo.com; Fullwood, David T.
2013-09-15
Current methods of image segmentation and reconstructions from scanning electron micrographs can be inadequate for resolving nanoscale gaps in composite materials (1–20 nm). Such information is critical to both accurate material characterizations and models of piezoresistive response. The current work proposes the use of crystallographic orientation data and machine learning for enhancing this process. It is first shown how a machine learning algorithm can be used to predict the connectivity of nanoscale grains in a Nickel nanostrand/epoxy composite. This results in 71.9% accuracy for a 2D algorithm and 62.4% accuracy in 3D. Finally, it is demonstrated how these algorithms can be used to predict the location of gaps between distinct nanostrands — gaps which would otherwise not be detected with the sole use of a scanning electron microscope. - Highlights: • A method is proposed for enhancing the segmentation/reconstruction of SEM images. • 3D crystallographic orientation data from a nickel nanocomposite is collected. • A machine learning algorithm is used to detect trends in adjacent grains. • This algorithm is then applied to predict likely regions of nanoscale gaps. • These gaps would otherwise be unresolved with the sole use of an SEM.
A MATLAB package for the EIDORS project to reconstruct two-dimensional EIT images.
Vauhkonen, M; Lionheart, W R; Heikkinen, L M; Vauhkonen, P J; Kaipio, J P
2001-02-01
The EIDORS (electrical impedance and diffuse optical reconstruction software) project aims to produce a software system for reconstructing images from electrical or diffuse optical data. MATLAB is used in the EIDORS project for rapid prototyping, graphical user interface construction and image display. We have written a MATLAB package (http://venda.uku.fi/ vauhkon/) which can be used for two-dimensional mesh generation, solving the forward problem, and reconstructing and displaying the reconstructed images (resistivity or admittivity). In this paper we briefly describe the mathematical theory on which the codes are based and also give some examples of the capabilities of the package.
CT Image Reconstruction from Sparse Projections Using Adaptive TpV Regularization
Hongliang Qi
2015-01-01
Radiation dose reduction without losing CT image quality has been an increasing concern. Reducing the number of X-ray projections used to reconstruct CT images, also called sparse-projection reconstruction, can potentially avoid excessive dose delivered to patients in CT examination. To overcome the disadvantages of the total variation (TV) minimization method, in this work we introduce a novel adaptive TpV regularization into sparse-projection image reconstruction and use the FISTA technique to accelerate iterative convergence. The numerical experiments demonstrate that the proposed method suppresses noise and artifacts more efficiently, and preserves structure information better, than other existing reconstruction methods.
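The FISTA acceleration mentioned above follows a standard pattern: a gradient step on the smooth data term, a proximal step on the regulariser, and Nesterov-style momentum. A generic sketch; the paper's proximal operator for the adaptive TpV regulariser is more involved than the placeholder used here.

```python
import numpy as np

def fista(grad_f, prox_g, x0, step, n_iter=50):
    """Generic FISTA loop: minimize f(x) + g(x) where f is smooth
    (gradient `grad_f`) and g has an easy proximal map `prox_g`."""
    x_prev = x0.copy()
    z = x0.copy()
    t = 1.0
    for _ in range(n_iter):
        x = prox_g(z - step * grad_f(z))                  # forward-backward step
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0  # momentum schedule
        z = x + ((t - 1.0) / t_next) * (x - x_prev)        # extrapolation
        x_prev, t = x, t_next
    return x_prev
```

For sparse-projection CT, `grad_f` would involve forward and back projection through the system matrix, and `prox_g` would implement the adaptive TpV shrinkage.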
Sørensen, Thomas Sangild; Atkinson, David; Schaeffter, Tobias; Hansen, Michael Schacht
2009-12-01
A barrier to the adoption of non-Cartesian parallel magnetic resonance imaging for real-time applications has been the time required for the image reconstructions. These times have exceeded the underlying acquisition time, thus preventing real-time display of the acquired images. We present a reconstruction algorithm for commodity graphics hardware (GPUs) to enable real-time reconstruction of sensitivity encoded radial imaging (radial SENSE). We demonstrate that a radial profile order based on the golden ratio facilitates reconstruction from an arbitrary number of profiles. This allows the temporal resolution to be adjusted on the fly. A user adaptable regularization term is also included and, particularly for highly undersampled data, used to interactively improve the reconstruction quality. Each reconstruction is fully self-contained from the profile stream, i.e., the required coil sensitivity profiles, sampling density compensation weights, regularization terms, and noise estimates are computed in real-time from the acquisition data itself. The reconstruction implementation is verified using a steady state free precession (SSFP) pulse sequence and quantitatively evaluated. Three applications are demonstrated: real-time imaging with 1) real-time SENSE or 2) k-t SENSE reconstructions, and 3) offline reconstruction with interactive adjustment of reconstruction settings.
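The golden-ratio profile order is simple to reproduce: successive azimuthal angles advance by π/φ ≈ 111.25°, so any contiguous run of profiles covers k-space nearly uniformly, which is what allows the temporal resolution to be adjusted on the fly. A minimal sketch; the function name is illustrative.

```python
import numpy as np

def golden_angle_profiles(n):
    """Azimuthal angles (radians, in [0, pi)) for n radial profiles
    using the golden-ratio increment pi/phi (~111.25 degrees)."""
    golden = (1.0 + np.sqrt(5.0)) / 2.0
    increment = np.pi / golden
    return np.mod(np.arange(n) * increment, np.pi)
```

A sliding-window reconstruction can then pick any number of the most recent profiles and still obtain a near-uniform angular distribution.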
Reinartz, S.D.; Diefenbach, B.S.; Kuhl, C.K.; Mahnken, A.H. [University Hospital, RWTH Aachen University, Department of Diagnostic and Interventional Radiology, Aachen (Germany); Allmendinger, T. [Siemens Healthcare Sector, Department of Computed Tomography, Forchheim (Germany)
2012-12-15
To compare image quality in coronary artery computed tomography angiography (cCTA) using reconstructions with automated phase detection and Reconstructions computed with Identical Filling of the heart (RIF). Seventy-four patients underwent ECG-gated dual source CT (DSCT) between November 2009 and July 2010 for suspected coronary heart disease (n = 35), planning of transcatheter aortic valve replacement (n = 34) or evaluation of ventricular function (n = 5). Image data sets by the RIF formula and automated phase detection were computed and evaluated with the AHA 15-segment model and a 5-grade Likert scale (1: poor, 5: excellent quality). Subgroups regarding rhythm (sinus rhythm = SR; arrhythmia = ARR) and potential premedication were evaluated by a per-segment, per-vessel and per-patient analysis. RIF significantly improved image quality in 10 of 15 coronary segments (P < 0.05). More diagnostic segments were provided by RIF regarding the entire cohort (n = 693 vs. 590, P < 0.001) and all of the subgroups (e.g. ARR: n = 143 vs. 72, P < 0.001). In arrhythmic patients (n = 19), more diagnostic vessels (e.g. LAD: n = 10 vs. 3; P < 0.014) and complete data sets (n = 7 vs. 1; P < 0.001) were produced. RIF reconstruction is superior to automatic diastolic non-edited reconstructions, especially in arrhythmic patients. RIF theory provides a physiological approach for determining the optimal image reconstruction point in ECG-gated CT angiography. (orig.)
Dang, H.; Wang, A. S.; Sussman, Marc S.; Siewerdsen, J. H.; Stayman, J. W.
2014-09-01
Sequential imaging studies are conducted in many clinical scenarios. Prior images from previous studies contain a great deal of patient-specific anatomical information and can be used in conjunction with subsequent imaging acquisitions to maintain image quality while enabling radiation dose reduction (e.g., through sparse angular sampling, reduction in fluence, etc). However, patient motion between images in such sequences results in misregistration between the prior image and current anatomy. Existing prior-image-based approaches often include only a simple rigid registration step that can be insufficient for capturing complex anatomical motion, introducing detrimental effects in subsequent image reconstruction. In this work, we propose a joint framework that estimates the 3D deformation between an unregistered prior image and the current anatomy (based on a subsequent data acquisition) and reconstructs the current anatomical image using a model-based reconstruction approach that includes regularization based on the deformed prior image. This framework is referred to as deformable prior image registration, penalized-likelihood estimation (dPIRPLE). Central to this framework is the inclusion of a 3D B-spline-based free-form-deformation model into the joint registration-reconstruction objective function. The proposed framework is solved using a maximization strategy whereby alternating updates to the registration parameters and image estimates are applied allowing for improvements in both the registration and reconstruction throughout the optimization process. Cadaver experiments were conducted on a cone-beam CT testbench emulating a lung nodule surveillance scenario. Superior reconstruction accuracy and image quality were demonstrated using the dPIRPLE algorithm as compared to more traditional reconstruction methods including filtered backprojection, penalized-likelihood estimation (PLE), prior image penalized-likelihood estimation (PIPLE) without registration, and
Frandes, M.
2010-09-15
A novel technique for radiotherapy - hadron therapy - irradiates tumors using a beam of protons or carbon ions. Hadron therapy is an effective technique for cancer treatment, since it enables accurate dose deposition due to the existence of a Bragg peak at the end of the particles' range. Precise knowledge of the fall-off position of the dose with millimeter accuracy is critical, since hadron therapy has proved its efficiency in the case of tumors that are deep-seated, close to vital organs, or radio-resistant. A major challenge for hadron therapy is the quality assurance of dose delivery during irradiation. Current systems applying positron emission tomography (PET) technologies exploit gamma rays from the annihilation of positrons emitted during the beta decay of radioactive isotopes. However, the generated PET images allow only post-therapy information about the deposited dose. In addition, they are not in direct coincidence with the Bragg peak. A solution is to image the complete spectrum of the emitted gamma rays, including nuclear gamma rays emitted by inelastic interactions of hadrons with the generated nuclei. This emission is isotropic, and has a spectrum ranging from 100 keV up to 20 MeV. However, the measurement of these energetic gamma rays from nuclear reactions exceeds the capability of all existing medical imaging systems. An advanced Compton scattering detection method with electron tracking capability is proposed, and modeled to reconstruct the high-energy gamma-ray events. This Compton detection technique was initially developed to observe gamma rays for astrophysical purposes. A device illustrating the method was designed and adapted to Hadron Therapy Imaging (HTI). It consists of two main sub-systems: a tracker where Compton recoiled electrons are measured, and a calorimeter where the scattered gamma rays are absorbed via the photoelectric effect. Considering a hadron therapy scenario, the analysis of generated data was performed, passing through the complete
Tiwari, Saumya; Reddy, Vijaya B.; Bhargava, Rohit; Raman, Jaishankar
2015-01-01
Rejection is a common problem after cardiac transplants, leading to a significant number of adverse events and deaths, particularly in the first year of transplantation. The gold standard to identify rejection is endomyocardial biopsy. This technique is complex, cumbersome and requires a lot of expertise in the correct interpretation of stained biopsy sections. Traditional histopathology cannot be used actively or quickly during cardiac interventions or surgery. Our objective was to develop a stain-less approach using an emerging technology, Fourier transform infrared (FT-IR) spectroscopic imaging, to identify different components of cardiac tissue by their chemical and molecular basis, aided by computer recognition rather than visual examination using optical microscopy. We studied this technique in the assessment of cardiac transplant rejection to evaluate its efficacy in an example of complex cardiovascular pathology. We recorded data from human cardiac transplant patients’ biopsies, used a Bayesian classification protocol and developed a visualization scheme to observe chemical differences without the need for stains or human supervision. Using receiver operating characteristic curves, we observed probabilities of detection greater than 95% for four out of five histological classes at 10% probability of false alarm at the cellular level, while correctly identifying samples with the hallmarks of the immune response in all cases. The efficacy of manual examination can be significantly increased by observing the inherent biochemical changes in tissues, which enables us to achieve greater diagnostic confidence in an automated, label-free manner. We developed a computational pathology system that gives high contrast images and appears superior to traditional staining procedures. This study is a prelude to the development of real-time in situ imaging systems, which can assist interventionists and surgeons actively during procedures. PMID:25932912
Influence of iterative image reconstruction on CT-based calcium score measurements
van Osch, Jochen A. C.; Mouden, Mohamed; van Dalen, Jorn A.; Timmer, Jorik R.; Reiffers, Stoffer; Knollema, Siert; Greuter, Marcel J. W.; Ottervanger, Jan Paul; Jager, Piet L.
2014-01-01
Iterative reconstruction techniques for coronary CT angiography have been introduced as an alternative to traditional filtered back projection (FBP) to reduce image noise, allowing improved image quality and a potential for dose reduction. However, the impact of iterative reconstruction on the corona
Rapid Non-Cartesian Parallel Imaging Reconstruction on Commodity Graphics Hardware
Sørensen, Thomas Sangild; Atkinson, David; Boubertakh, Redha;
2008-01-01
time per frame is now below the acquisition time providing non-Cartesian reconstruction with only minimal delay between acquisition and subsequent display of images. This is demonstrated by four-fold and eight-fold undersampled real-time radial imaging reconstructed in 25 ms to 55 ms per frame....
Divya Udayan J; HyungSeok KIM; Jee-In KIM
2015-01-01
The objective of this research is the rapid reconstruction of ancient buildings of historical importance using a single image. The key idea of our approach is to reduce the infinite solutions that might otherwise arise when recovering a 3D geometry from 2D photographs. The main outcome of our research shows that the proposed methodology can be used to reconstruct ancient monuments for use as proxies for digital effects in applications such as tourism, games, and entertainment, which do not require very accurate modeling. In this article, we consider the reconstruction of ancient Mughal architecture including the Taj Mahal. We propose a modeling pipeline that makes an easy reconstruction possible using a single photograph taken from a single view, without the need to create complex point clouds from multiple images or the use of laser scanners. First, an initial model is automatically reconstructed using locally fitted planar primitives along with their boundary polygons and the adjacency relation among parts of the polygons. This approach is faster and more accurate than creating a model from scratch because the initial reconstruction phase provides a set of structural information together with the adjacency relation, which makes it possible to estimate the approximate depth of the entire structural monument. Next, we use manual extrapolation and editing techniques with modeling software to assemble and adjust different 3D components of the model. Thus, this research opens up the opportunity for the present generation to experience remote sites of architectural and cultural importance through virtual worlds and real-time mobile applications. Variations of a recreated 3D monument to represent an amalgam of various cultures are targeted for future work.
Arinilhaq,; Widita, Rena [Department of Physics, Nuclear Physics and Biophysics Research Group, Institut Teknologi Bandung (Indonesia)
2014-09-30
Optical coherence tomography is often used in medical image acquisition to diagnose retinal changes, owing to its ease of use and low price. Unfortunately, this type of examination produces a two-dimensional retinal image at the point of acquisition. Therefore, this study developed a method that combines and reconstructs 2-dimensional retinal images into three-dimensional images to display the macular volume accurately. The system is built with three main stages: data acquisition, data extraction and 3-dimensional reconstruction. At the data acquisition step, optical coherence tomography produced six *.jpg images for each patient, which were further extracted with MATLAB 2010a software into six one-dimensional arrays. The six arrays are combined into a 3-dimensional matrix using a kriging interpolation method with SURFER9, resulting in 3-dimensional graphics of the macula. Finally, the system provides three-dimensional color graphs based on the distribution of normal macula data. The reconstruction system which has been designed produces three-dimensional images with a size of 481 × 481 × h (retinal thickness) pixels.
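The interpolation step can be illustrated with SciPy's `griddata` as a simple stand-in for the SURFER9 kriging described above (kriging itself is not in SciPy). The sample points and thickness values below are synthetic; only the 481 × 481 grid size comes from the abstract.

```python
import numpy as np
from scipy.interpolate import griddata

# Synthetic scattered retinal-thickness samples: (x, y) positions and values.
rng = np.random.default_rng(0)
pts = rng.uniform(0, 480, size=(200, 2))
vals = 0.1 * pts[:, 0] + 0.2 * pts[:, 1]  # a made-up smooth thickness field

# Interpolate the scattered samples onto a 481 x 481 grid, as in the
# reconstruction described above (linear interpolation instead of kriging).
gx, gy = np.mgrid[0:481, 0:481]
surface = griddata(pts, vals, (gx, gy), method="linear")
```

Points outside the convex hull of the samples come back as NaN with `griddata`, which is one practical reason a geostatistical method such as kriging (which also extrapolates and models spatial correlation) was preferred in the original system.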
Multiframe image point matching and 3-d surface reconstruction.
Tsai, R Y
1983-02-01
This paper presents two new methods, the Joint Moment Method (JMM) and the Window Variance Method (WVM), for image matching and 3-D object surface reconstruction using multiple perspective views. The viewing positions and orientations for these perspective views are known a priori, as is usually the case for such applications as robotics and industrial vision as well as close range photogrammetry. Like the conventional two-frame correlation method, the JMM and WVM require finding the extrema of 1-D curves, which are proved to theoretically approach a delta function exponentially as the number of frames increases for the JMM and are much sharper than the two-frame correlation function for both the JMM and the WVM, even when the image point to be matched cannot be easily distinguished from some of the other points. The theoretical findings have been supported by simulations. It is also proved that JMM and WVM are not sensitive to certain radiometric effects. If the same window size is used, the computational complexity for the proposed methods is about n - 1 times that for the two-frame method where n is the number of frames. Simulation results show that the JMM and WVM require smaller windows than the two-frame correlation method with better accuracy, and therefore may even be more computationally feasible than the latter since the computational complexity increases quadratically as a function of the window size.
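The two-frame correlation baseline that the JMM and WVM are compared against can be sketched as a 1-D windowed search along a scanline. The window size, search range and test signal below are illustrative, not values from the paper.

```python
import numpy as np

def match_point(left, right, x, win=5, search=20):
    """Return the disparity d maximizing the normalized correlation of a
    (2*win+1)-sample window around column x of `left` against columns x+d
    of `right` (both 1-D intensity rows)."""
    ref = left[x - win:x + win + 1].astype(float)
    ref = (ref - ref.mean()) / (ref.std() + 1e-12)
    best_d, best_score = 0, -np.inf
    for d in range(-search, search + 1):
        c = x + d
        if c - win < 0 or c + win + 1 > len(right):
            continue  # window would fall off the image
        cand = right[c - win:c + win + 1].astype(float)
        cand = (cand - cand.mean()) / (cand.std() + 1e-12)
        score = float(np.dot(ref, cand)) / len(ref)
        if score > best_score:
            best_d, best_score = d, score
    return best_d

# A synthetic row shifted by 7 pixels should be matched at disparity 7.
row = np.sin(np.linspace(0, 20, 200)) + 0.1 * np.cos(np.linspace(0, 77, 200))
shifted = np.roll(row, 7)
```

The paper's point is that this two-frame correlation curve can be flat or ambiguous for weakly textured points, whereas combining n frames (JMM/WVM) sharpens the extremum toward a delta function at roughly (n - 1) times the cost.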
High Resolution Image Reconstruction Method for a Double-plane PET System with Changeable Spacing
Gu, Xiao-Yue; Li, Lin; Yin, Peng-Fei; Shang, Lei-Min; Yun, Ming-Kai; Lu, Zhen-Rui; Huang, Xian-Chao; Wei, Long
2015-01-01
Positron Emission Mammography (PEM) imaging systems with the ability to detect millimeter-sized tumors have been developed in recent years, and some of them are well established in clinical applications. In consideration of biopsy applications, a double-plane detector configuration is practical for the convenience of breast immobilization. However, the serious blurring effect in a double-plane system with changeable spacing for different breast sizes should be studied. Methods: We study a high resolution reconstruction method applicable to a double-plane PET system with a changeable detector spacing. Geometric and blurring components should be calculated in real time for different detector distances. Accurate geometric sensitivity is obtained with a tube area model. Resolution recovery is achieved by estimating blurring effects derived from simulated single gamma response information. Results: The results show that the new geometric modeling gives a more finite and smooth sensitivity weight in double-plane sy...
A Compton scattering image reconstruction algorithm based on total variation minimization
Li Shou-Peng; Wang Lin-Yuan; Yan Bin; Li Lei; Liu Yong-Jun
2012-01-01
Compton scattering imaging is a novel radiation imaging method using scattered photons. Its main characteristic is that the detectors do not have to be on the opposite side of the source, thus avoiding the rotation process. The reconstruction problem of Compton scattering imaging is the inverse problem of solving for electron densities from nonlinear equations, which is ill-posed. This means the solution exhibits instability and sensitivity to noise or erroneous measurements. Using the theory of sparse image reconstruction, a reconstruction algorithm based on total variation minimization is proposed. The reconstruction problem is described as an optimization problem with a nonlinear data-consistency constraint. The simulated results show that the proposed algorithm can reduce reconstruction error and improve image quality, especially when there are not enough measurements.
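The total-variation objective can be illustrated with a small gradient-descent sketch. A simple quadratic data term stands in for the nonlinear Compton data-consistency constraint of the paper; the step size, weight and test image are illustrative.

```python
import numpy as np

def tv_smooth(u, eps=1e-3):
    """Smoothed isotropic total variation of a 2-D image."""
    gx = np.diff(u, axis=0, append=u[-1:, :])
    gy = np.diff(u, axis=1, append=u[:, -1:])
    return np.sum(np.sqrt(gx**2 + gy**2 + eps**2))

def denoise_tv(y, lam=0.15, step=0.2, iters=200, eps=1e-3):
    """Minimize 0.5*||u - y||^2 + lam*TV(u) by gradient descent; the TV
    gradient is the negative divergence of the normalized image gradient."""
    u = y.copy()
    for _ in range(iters):
        gx = np.diff(u, axis=0, append=u[-1:, :])
        gy = np.diff(u, axis=1, append=u[:, -1:])
        mag = np.sqrt(gx**2 + gy**2 + eps**2)
        px, py = gx / mag, gy / mag
        div = (px - np.roll(px, 1, axis=0)) + (py - np.roll(py, 1, axis=1))
        u = u - step * ((u - y) - lam * div)
    return u

# A noisy square: TV regularization suppresses the noise but keeps the edges.
noisy = np.zeros((32, 32)); noisy[8:24, 8:24] = 1.0
noisy += 0.2 * np.random.default_rng(1).standard_normal((32, 32))
clean = denoise_tv(noisy)
```

The same penalty, paired with the nonlinear scattering equations as the data term, is what favors piecewise-constant electron-density maps when measurements are few.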
Singh, Gurmeet; Raj, Ashish; Kressler, Bryan; Nguyen, Thanh D.; Spincemaille, Pascal; Zabih, Ramin; Wang, Yi
2010-01-01
Among recent parallel MR imaging reconstruction advances, a Bayesian method called Edge-preserving Parallel Imaging with GRAph cut Minimization (EPIGRAM) has been demonstrated to significantly improve signal-to-noise ratio (SNR) compared to the conventional regularized sensitivity encoding (SENSE) method. However, EPIGRAM requires a large number of iterations in proportion to the number of intensity labels in the image, making it computationally expensive for high dynamic range images. The objective of this study is to develop a Fast EPIGRAM reconstruction based on the efficient binary jump move algorithm that provides a logarithmic reduction in reconstruction time while maintaining image quality. Preliminary in vivo validation of the proposed algorithm is presented for 2D cardiac cine MR imaging and 3D coronary MR angiography at acceleration factors of 2-4. Fast EPIGRAM was found to provide similar image quality to EPIGRAM and maintain the previously reported SNR improvement over regularized SENSE, while reducing EPIGRAM reconstruction time by 25-50 times. PMID:20939095
Tanel Pärnamaa
2017-05-01
High-throughput microscopy of many single cells generates high-dimensional data that are far from straightforward to analyze. One important problem is automatically detecting the cellular compartment where a fluorescently-tagged protein resides, a task relatively simple for an experienced human, but difficult to automate on a computer. Here, we train an 11-layer neural network on data from mapping thousands of yeast proteins, achieving a per-cell localization classification accuracy of 91%, and a per-protein accuracy of 99% on held-out images. We confirm that low-level network features correspond to basic image characteristics, while deeper layers separate localization classes. Using this network as a feature calculator, we train standard classifiers that assign proteins to previously unseen compartments after observing only a small number of training examples. Our results are the most accurate subcellular localization classifications to date, and demonstrate the usefulness of deep learning for high-throughput microscopy.
Müller, K; Maier, A K; Schwemmer, C; Lauritsch, G; De Buck, S; Wielandts, J-Y; Hornegger, J; Fahrig, R
2014-06-21
The acquisition of data for cardiac imaging using a C-arm computed tomography system requires several seconds and multiple heartbeats. Hence, incorporation of motion correction in the reconstruction step may improve the resulting image quality. Cardiac motion can be estimated by deformable three-dimensional (3D)/3D registration performed on initial 3D images of different heart phases. This motion information can be used for a motion-compensated reconstruction allowing the use of all acquired data for image reconstruction. However, the result of the registration procedure and hence the estimated deformations are influenced by the quality of the initial 3D images. In this paper, the sensitivity of the 3D/3D registration step to the image quality of the initial images is studied. Different reconstruction algorithms are evaluated for a recently proposed cardiac C-arm CT acquisition protocol. The initial 3D images are all based on retrospective electrocardiogram (ECG)-gated data. ECG-gating of data from a single C-arm rotation provides only a few projections per heart phase for image reconstruction. This view sparsity leads to prominent streak artefacts and a poor signal to noise ratio. Five different initial image reconstructions are evaluated: (1) cone beam filtered-backprojection (FDK), (2) cone beam filtered-backprojection and an additional bilateral filter (FFDK), (3) removal of the shadow of dense objects (catheter, pacing electrode, etc) before reconstruction with a cone beam filtered-backprojection (cathFDK), (4) removal of the shadow of dense objects before reconstruction with a cone beam filtered-backprojection and a bilateral filter (cathFFDK). The last method (5) is an iterative few-view reconstruction (FV), the prior image constrained compressed sensing combined with the improved total variation algorithm. All reconstructions are investigated with respect to the final motion-compensated reconstruction quality. The algorithms were tested on a mathematical
High resolution image reconstruction method for a double-plane PET system with changeable spacing
Gu, Xiao-Yue; Zhou, Wei; Li, Lin; Wei, Long; Yin, Peng-Fei; Shang, Lei-Min; Yun, Ming-Kai; Lu, Zhen-Rui; Huang, Xian-Chao
2016-05-01
Breast-dedicated positron emission tomography (PET) imaging techniques have been developed in recent years. Their capacities to detect millimeter-sized breast tumors have been the subject of many studies. Some of them have been confirmed with good results in clinical applications. With regard to biopsy application, a double-plane detector arrangement is practicable, as it offers the convenience of breast immobilization. However, the serious blurring effect of the double-plane PET, with changeable spacing for different breast sizes, should be studied. We investigated a high resolution reconstruction method applicable for a double-plane PET. The distance between the detector planes is changeable. Geometric and blurring components were calculated in real-time for different detector distances, and accurate geometric sensitivity was obtained with a new tube area model. Resolution recovery was achieved by estimating blurring effects derived from simulated single gamma response information. The results showed that the new geometric modeling gave a more finite and smooth sensitivity weight in the double-plane PET. The blurring component yielded contrast recovery levels that could not be reached without blurring modeling, and improved visual recovery of the smallest spheres and better delineation of the structures in the reconstructed images were achieved with the blurring component. Statistical noise had lower variance at the voxel level with blurring modeling at matched resolution, compared to without blurring modeling. In distance-changeable double-plane PET, finite resolution modeling during reconstruction achieved resolution recovery, without noise amplification. Supported by Knowledge Innovation Project of The Chinese Academy of Sciences (KJCX2-EW-N06)
Fully 3D list-mode time-of-flight PET image reconstruction on GPUs using CUDA.
Cui, Jing-Yu; Pratx, Guillem; Prevrhal, Sven; Levin, Craig S
2011-12-01
List-mode processing is an efficient way of dealing with the sparse nature of positron emission tomography (PET) data sets and is the processing method of choice for time-of-flight (ToF) PET image reconstruction. However, the massive amount of computation involved in forward projection and backprojection limits the application of list-mode reconstruction in practice, and makes it challenging to incorporate accurate system modeling. The authors present a novel formulation for computing line projection operations on graphics processing units (GPUs) using the compute unified device architecture (CUDA) framework, and apply the formulation to list-mode ordered-subsets expectation maximization (OSEM) image reconstruction. Our method overcomes well-known GPU challenges such as divergence of compute threads, limited bandwidth of global memory, and limited size of shared memory, while exploiting GPU capabilities such as fast access to shared memory and efficient linear interpolation of texture memory. Execution time comparison and image quality analysis of the GPU-CUDA method and the central processing unit (CPU) method are performed on several data sets acquired on a preclinical scanner and a clinical ToF scanner. When applied to line projection operations for non-ToF list-mode PET, this new GPU-CUDA method is >200 times faster than a single-threaded reference CPU implementation. For ToF reconstruction, we exploit a ToF-specific optimization to improve the efficiency of our parallel processing method, resulting in GPU reconstruction >300 times faster than the CPU counterpart. For a typical whole-body scan with 75 × 75 × 26 image matrix, 40.7 million LORs, 33 subsets, and 3 iterations, the overall processing time is 7.7 s for GPU and 42 min for a single-threaded CPU. Image quality and accuracy are preserved for multiple imaging configurations and reconstruction parameters, with normalized root mean squared (RMS) deviation less than 1% between CPU and GPU
Rahmat, Mohd Fua'ad; Isa, Mohd Daud; Rahim, Ruzairi Abdul; Hussin, Tengku Ahmad Raja
2009-01-01
Electrical charge tomography (EChT) is a non-invasive imaging technique that aims to reconstruct the image of materials being conveyed based on data measured by an electrodynamics sensor installed around the pipe. Image reconstruction in electrical charge tomography is vital and has not been widely studied before. Three methods have been introduced previously, namely the linear back projection method, the filtered back projection method and the least square method. These methods normally face ill-posed problems, and their solutions are unstable and inaccurate. In order to ensure stability and accuracy, a special solution should be applied to obtain a meaningful image reconstruction result. In this paper, a new image reconstruction method, least squares with regularization (LSR), is introduced to reconstruct the image of material in a gravity-mode conveyor pipeline for electrical charge tomography. Numerical analysis results based on simulation data indicated that this algorithm efficiently overcomes the numerical instability. The results show that the accuracy of the reconstruction images obtained using the proposed algorithm was enhanced and similar to the image captured by a CCD camera. As a result, an efficient method for electrical charge tomography image reconstruction has been introduced.
Multi-view Multi-sparsity Kernel Reconstruction for Multi-class Image Classification
Zhu, Xiaofeng
2015-05-28
This paper addresses the problem of multi-class image classification by proposing a novel multi-view multi-sparsity kernel reconstruction (MMKR for short) model. Given images (including test images and training images) represented by multiple visual features, the MMKR first maps them into a high-dimensional space, e.g., a reproducing kernel Hilbert space (RKHS), where test images are then linearly reconstructed by some representative training images, rather than all of them. Furthermore, a classification rule is proposed to classify test images. Experimental results on real datasets show the effectiveness of the proposed MMKR in comparison with state-of-the-art algorithms.
The research of Digital Holographic Object Wave Field Reconstruction in Image and Object Space
LI Jun-Chang; PENG Zu-Jie; FU Yun-Chang
2011-01-01
For conveniently detecting objects of different sizes using digital holography, usual measurements employ the object wave transformed by an optical system with different magnifications to fit charge coupled devices (CCDs); the object field reconstruction then involves the diffraction calculation of the optical wave passing through the optical system. We propose two methods to reconstruct the object field. The first is that, when the object is imaged in an image space in which we reconstruct the image of the object field, the object field can be expressed according to the object-image relationship. The second is that, when the object field reaching the CCD is imaged in an object space in which we reconstruct the object field, the optical system is described by introducing matrix optics. The reconstruction formulae, which easily use the classic diffraction integral, are derived. Finally, experimental verifications are also accomplished.
Benincasa, Anne B.; Clements, Logan W.; Herrell, S. Duke; Galloway, Robert L.
2008-01-01
A notable complication of applying current image-guided surgery techniques of soft tissue to kidney resections (nephrectomies) is the limited field of view of the intraoperative kidney surface. This limited view constrains the ability to obtain a sufficiently geometrically descriptive surface for accurate surface-based registrations. The authors examined the effects of the limited view by using two orientations of a kidney phantom to model typical laparoscopic and open partial nephrectomy views. Point-based registrations, using either rigidly attached markers or anatomical landmarks as fiducials, served as initial alignments for surface-based registrations. Laser range scanner (LRS) obtained surfaces were registered to the phantom’s image surface using a rigid iterative closest point algorithm. Subsets of each orientation’s LRS surface were used in a robustness test to determine which parts of the surface yield the most accurate registrations. Results suggest that obtaining accurate registrations is a function of the percentage of the total surface and of geometric surface properties, such as curvature. Approximately 28% of the total surface is required regardless of the location of that surface subset. However, that percentage decreases when the surface subset contains information from opposite ends of the surface and/or unique anatomical features, such as the renal artery and vein. PMID:18841875
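The point-based registration that seeds the surface alignment has a closed-form solution for paired fiducials, the SVD (Kabsch) method, which can be sketched as follows. The fiducial coordinates below are synthetic; the iterative-closest-point step of the paper repeats this solve with re-estimated correspondences.

```python
import numpy as np

def rigid_register(src, dst):
    """Best-fit rotation R and translation t such that dst ~= src @ R.T + t,
    for paired 3-D points, via SVD of the cross-covariance matrix (Kabsch)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)          # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T)) # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cd - R @ cs
    return R, t

# Synthetic fiducials moved by a known rotation about z and a translation.
rng = np.random.default_rng(3)
pts = rng.standard_normal((30, 3))
angle = 0.4
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0, 0.0, 1.0]])
moved = pts @ R_true.T + np.array([1.0, -2.0, 0.5])
R_est, t_est = rigid_register(pts, moved)
```

With noise-free correspondences the recovered transform is exact; in practice fiducial localization error makes this initial alignment approximate, which is why the surface-based refinement described above matters.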
Mihailescu, Mona; Scarlat, Mihaela; Gheorghiu, Alexandru; Costescu, Julia; Kusko, Mihai; Paun, Irina Alexandra; Scarlat, Eugen
2011-07-01
This paper presents our method, which simultaneously combines automatic imaging, identification, and counting with the acquisition of morphological information for at least 1000 blood cells from several three-dimensional images of the same sample. We started with seeking parameters to differentiate between red blood cells that are similar but different with respect to their development stage, i.e., mature or immature. We highlight that these cells have different diffractive patterns with complementary central intensity distribution in a given plane along the propagation axis. We use the Fresnel approximation to simulate propagation through cells modeled as spheroid-shaped phase objects and to find the cell property that has the dominant influence on this behavior. Starting with images obtained in the reconstruction step of the digital holographic microscopy technique, we developed a code for automated simultaneous individual cell image separation, identification, and counting, even when the cells are partially overlapped on a slide, and accurate measuring of their morphological features. To find the centroids of each cell, we propose a method based on analytical functions applied at threshold intervals. Our procedure separates the mature from the immature red blood cells and from the white blood cells through a decision based on gradient and radius values.
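The centroid step can be sketched as an intensity-weighted mean over pixels above a threshold. The synthetic Gaussian "cell" below is illustrative; the paper's method additionally separates partially overlapping cells before applying analytic functions over threshold intervals.

```python
import numpy as np

def centroid_above_threshold(img, thresh):
    """Intensity-weighted centroid (row, col) of the pixels above `thresh`."""
    w = np.where(img > thresh, img, 0.0)   # keep only above-threshold intensity
    total = w.sum()
    rows, cols = np.indices(img.shape)
    return (rows * w).sum() / total, (cols * w).sum() / total

# A synthetic Gaussian blob standing in for one cell, centred at (20, 30).
r, c = np.indices((64, 64))
cell = np.exp(-((r - 20)**2 + (c - 30)**2) / 50.0)
cy, cx = centroid_above_threshold(cell, 0.1)
```

Repeating this per threshold interval, as the paper proposes, makes the centroid estimate robust to cells whose support overlaps at low intensities but separates at higher ones.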
Jeong, Youngmo; Kim, Jonghyun; Yeom, Jiwoon; Lee, Chang-Kun; Lee, Byoungho
2015-12-10
In this paper, we develop a real-time depth-controllable integral imaging system. With a high-frame-rate camera and a focus-controllable lens, light fields from various depth ranges can be captured. According to the image plane of the light field camera, the objects in virtual and real space are recorded simultaneously. The captured light field information is converted to the elemental image in real time without pseudoscopic problems. In addition, we derive characteristics and limitations of the light field camera as a 3D broadcasting capture device with precise geometric optics. With further analysis, the implemented system provides more accurate light fields than existing devices, without depth distortion. We adopt an f-number matching method at the capture and display stages to record a more exact light field and to solve depth distortion, respectively. The algorithm allows the users to adjust the pixel mapping structure of the reconstructed 3D image in real time. The proposed method presents the possibility of a handheld real-time 3D broadcasting system in a cheaper and more applicable way as compared to previous methods.
Color image super-resolution reconstruction based on POCS with edge preserving
Wang, Rui; Liang, Ying; Liang, Yu
2015-10-01
A color image super-resolution (SR) reconstruction based on an improved projection onto convex sets (POCS) in YCbCr space is proposed. Compared with other methods, the POCS method is more intuitive and generally simple to implement. However, the conventional POCS algorithm is strict about the accuracy of motion estimation and is not conducive to the restoration of image edges and details. To address these two problems, we first improve the LoG operator to detect edges along the directions 0°, ±45°, ±90° and ±135° in order to inhibit edge degradation. Then, using the edge information, we propose a self-adaptive edge-directed interpolation to construct a reference image, and a modified adaptive-direction PSF to reduce edge oscillation when revising the reference. Second, instead of block matching, the Speeded Up Robust Features (SURF) matching algorithm, which can accurately extract feature points invariant to affine transforms, rotation, scale and illumination changes, is utilized to improve the robustness and real-time performance of motion estimation. The performance of the proposed approach has been tested on several images, and the obtained results demonstrate that it is competitive with, or better than, the traditional POCS in quality and efficiency.
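The POCS idea, alternately projecting an estimate onto convex constraint sets until they are all satisfied, can be sketched for a toy super-resolution setting. The 2× block-average observation model and the [0, 1] amplitude bound below are illustrative constraint sets, not the paper's PSF-based consistency sets.

```python
import numpy as np

def downsample2(x):
    """2x block-average: the assumed low-resolution observation model."""
    return 0.25 * (x[0::2, 0::2] + x[1::2, 0::2] + x[0::2, 1::2] + x[1::2, 1::2])

def pocs_sr(y, iters=20):
    """Alternate projections onto two convex sets:
    {x : downsample2(x) = y} and {x : 0 <= x <= 1}."""
    x = np.kron(y, np.ones((2, 2)))              # initial pixel-replicated estimate
    for _ in range(iters):
        resid = y - downsample2(x)               # data-consistency projection:
        x = x + np.kron(resid, np.ones((2, 2)))  # spread residual over each 2x2 block
        x = np.clip(x, 0.0, 1.0)                 # amplitude-constraint projection
    return x

y = np.array([[0.2, 0.8], [0.5, 1.0]])           # synthetic low-resolution frame
x = pocs_sr(y)
```

Adding the residual uniformly over each block is the exact orthogonal projection onto the block-average consistency set; the paper's contribution is replacing naive constraint sets like these with edge-directed references and motion estimated by SURF matching.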
The Performance Evaluation of Multi-Image 3d Reconstruction Software with Different Sensors
Mousavi, V.; Khosravi, M.; Ahmadi, M.; Noori, N.; Naveh, A. Hosseini; Varshosaz, M.
2015-12-01
Today, multi-image 3D reconstruction is an active research field, and generating a three-dimensional model of an object is one of the most discussed issues in photogrammetry and computer vision; it can be accomplished using range-based or image-based methods. The very accurate and dense point clouds generated by range-based methods such as structured light systems and laser scanners have established them as reliable tools in industry. Image-based 3D digitization methodologies offer the option of reconstructing an object from a set of unordered images that depict it from different viewpoints. As their hardware requirements are narrowed down to a digital camera and a computer system, they constitute an attractive 3D digitization approach; consequently, although range-based methods are generally very accurate, image-based methods are low-cost and can be easily used by non-professional users. One of the factors affecting the accuracy of the obtained model in image-based methods is the software and algorithm used to generate the three-dimensional model. These algorithms are provided in the form of commercial software, open source tools and web-based services. Another important factor in the accuracy of the obtained model is the type of sensor used. Due to the availability of mobile sensors to the public, the popularity of professional sensors and the advent of stereo sensors, a comparison of these three sensor types plays an effective role in evaluating and finding the optimized method to generate three-dimensional models. Much research has been carried out to identify a suitable software and algorithm to achieve an accurate and complete model; however, little attention has been paid to the type of sensor used and its effect on the quality of the final model. The purpose of this paper is the consideration and introduction of an appropriate combination of a sensor and software to provide a complete model with the highest accuracy. To do this, different software, used in previous studies, were compared and
On multigrid methods for image reconstruction from projections
Henson, V.E.; Robinson, B.T. [Naval Postgraduate School, Monterey, CA (United States); Limber, M. [Simon Fraser Univ., Burnaby, British Columbia (Canada)
1994-12-31
The sampled Radon transform of a 2D function can be represented as a continuous linear map R : L{sup