WorldWideScience

Sample records for source reconstruction problem

  1. Point source reconstruction principle of linear inverse problems

    International Nuclear Information System (INIS)

    Terazono, Yasushi; Matani, Ayumu; Fujimaki, Norio; Murata, Tsutomu

    2010-01-01

    Exact point source reconstruction for underdetermined linear inverse problems with a block-wise structure was studied. In a block-wise problem, the elements of a source vector are partitioned into blocks; accordingly, the leadfield matrix, which represents the forward observation process, is also partitioned into blocks. A point source is a source having only one nonzero block. An example of such a problem is current distribution estimation in electroencephalography and magnetoencephalography, where a source vector represents a vector field and a point source represents a single current dipole. In this study, the block-wise norm, a block-wise extension of the l_p-norm, was defined as the family of cost functions of the inverse method. The main result is that a set of three conditions was found to be necessary and sufficient for block-wise norm minimization to ensure exact point source reconstruction for any leadfield matrix that admits such reconstruction. The block-wise norm that satisfies the conditions is the sum of the costs of all the observations of source blocks, or in other words, the block-wisely extended leadfield-weighted l_1-norm. Additional results are that minimization of such a norm always provides block-wisely sparse solutions and that its solutions form cones in source space.
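
    One way to write the leadfield-weighted, block-wise l_1 cost described above is sketched below; the notation (L_b for the b-th leadfield block, s_b for the b-th source block, y for the observation vector) is assumed here and is not taken from the paper:

```latex
\[
  \min_{s}\ \sum_{b=1}^{B} \big\| L_b\, s_b \big\|_{2}
  \qquad \text{subject to} \qquad \sum_{b=1}^{B} L_b\, s_b = y ,
\]
```

    Each summand is the norm of the observation generated by one source block, which matches the description "the sum of the costs of all the observations of source blocks" and makes the block-wise sparsity of the minimizers plausible.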

  2. Stable source reconstruction from a finite number of measurements in the multi-frequency inverse source problem

    DEFF Research Database (Denmark)

    Karamehmedovic, Mirza; Kirkeby, Adrian; Knudsen, Kim

    2018-01-01

    ... setting: From measurements made at a finite set of frequencies we uniquely determine and reconstruct sources in a subspace spanned by finitely many Fourier-Bessel functions. Further, we obtain a constructive criterion for identifying a minimal set of measurement frequencies sufficient for reconstruction ..., and under an additional, mild assumption, the reconstruction method is shown to be stable. Our analysis is based on a singular value decomposition of the source-to-measurement forward operators and the distribution of positive zeros of the Bessel functions of the first kind. The reconstruction method ...
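
    For orientation, the multi-frequency inverse source problem referred to above is commonly posed for the Helmholtz equation; the following sketch of that standard setting is an assumption here, not taken from the truncated abstract:

```latex
\[
  \Delta u_k + k^2 u_k = f \quad \text{in } \mathbb{R}^2 , \qquad k \in \{ k_1, \dots, k_N \},
\]
```

    with the source f compactly supported and each field u_k measured on a curve surrounding the support; expanding f in finitely many Fourier-Bessel functions then reduces the reconstruction to a finite-dimensional problem.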

  3. Source Signals Separation and Reconstruction Following Principal Component Analysis

    Directory of Open Access Journals (Sweden)

    WANG Cheng

    2014-02-01

    Full Text Available For the problem of separating and reconstructing source signals from observed signals, the physical meaning of the blind source separation model and of independent component analysis is not very clear, and the solution is not unique. To address these disadvantages, a new linear instantaneous mixing model and a novel method for separating and reconstructing source signals from observed signals based on principal component analysis (PCA) are put forward. The assumption of the new model is that the source signals are statistically uncorrelated rather than independent, which differs from the traditional blind source separation model. Using the concepts of the linear separation matrix and the uncorrelatedness of the source signals, a one-to-one relationship between the linear instantaneous mixing matrix of the new model and the linear compounding matrix of PCA, and a one-to-one relationship between the uncorrelated source signals and the principal components, are demonstrated. Based on this theoretical link, the source signal separation and reconstruction problem is then turned into a PCA of the observed signals. The theoretical derivation and numerical simulation results show that, in spite of Gaussian measurement noise, both the waveform and the amplitude information of the uncorrelated source signals can be separated and reconstructed by PCA when the linear mixing matrix is column-orthogonal and normalized; only the waveform information can be separated and reconstructed when the linear mixing matrix is column-orthogonal but not normalized; and the source signals cannot be separated and reconstructed by PCA when the mixing matrix is not column-orthogonal or the mixing is not linear.
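
    A minimal numerical sketch of this idea under the model's assumptions (uncorrelated sources, column-orthogonal and normalized mixing, Gaussian noise); the signals and names below are illustrative and are not the authors' code:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 2000)
# Two (approximately) uncorrelated sources: a sine wave and a square wave.
s = np.vstack([np.sin(2 * np.pi * 5 * t),
               np.sign(np.sin(2 * np.pi * 11 * t))])

# Column-orthogonal, normalized mixing matrix (the model assumption).
A, _ = np.linalg.qr(rng.standard_normal((2, 2)))
x = A @ s + 0.01 * rng.standard_normal(s.shape)   # observations with Gaussian noise

# PCA of the observations: eigen-decomposition of the sample covariance.
x_c = x - x.mean(axis=1, keepdims=True)
cov = x_c @ x_c.T / x_c.shape[1]
_, vecs = np.linalg.eigh(cov)
s_hat = vecs.T @ x_c   # principal components recover the sources up to sign and order
```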

  4. Reconstructing source-sink dynamics in a population with a pelagic dispersal phase.

    Directory of Open Access Journals (Sweden)

    Kun Chen

    Full Text Available For many organisms, the reconstruction of source-sink dynamics is hampered by limited knowledge of the spatial assemblage of either the source or sink components or lack of information on the strength of the linkage for any source-sink pair. In the case of marine species with a pelagic dispersal phase, these problems may be mitigated through the use of particle drift simulations based on an ocean circulation model. However, when simulated particle trajectories do not intersect sampling sites, the corroboration of model drift simulations with field data is hampered. Here, we apply a new statistical approach for reconstructing source-sink dynamics that overcomes the aforementioned problems. Our research is motivated by the need for understanding observed changes in jellyfish distributions in the eastern Bering Sea since 1990. By contrasting the source-sink dynamics reconstructed with data from the pre-1990 period with that from the post-1990 period, it appears that changes in jellyfish distribution resulted from the combined effects of higher jellyfish productivity and longer dispersal of jellyfish resulting from a shift in the ocean circulation starting in 1991. A sensitivity analysis suggests that the source-sink reconstruction is robust to typical systematic and random errors in the ocean circulation model driving the particle drift simulations. The jellyfish analysis illustrates that new insights can be gained by studying structural changes in source-sink dynamics. The proposed approach is applicable for the spatial source-sink reconstruction of other species and even abiotic processes, such as sediment transport.

  5. Discussion of Source Reconstruction Models Using 3D MCG Data

    Science.gov (United States)

    Melis, Massimo De; Uchikawa, Yoshinori

    In this study we performed source reconstruction of magnetocardiographic signals generated by human heart activity to localize the site of origin of the heart activation. The localizations were performed in a four-compartment model of the human volume conductor. The analyses were conducted on normal subjects and on a subject affected by the Wolff-Parkinson-White syndrome. Different models of the source activation were used to evaluate whether a general model of the current source can be applied in the study of the cardiac inverse problem. The data analyses were repeated using normal and vector-component data of the MCG. The results show that a distributed source model provides the best accuracy in performing the source reconstructions, and that 3D MCG data allow smaller differences between the different source models to be found.

  6. A two-way regularization method for MEG source reconstruction

    KAUST Repository

    Tian, Tian Siva; Huang, Jianhua Z.; Shen, Haipeng; Li, Zhimin

    2012-01-01

    The MEG inverse problem refers to the reconstruction of the neural activity of the brain from magnetoencephalography (MEG) measurements. We propose a two-way regularization (TWR) method to solve the MEG inverse problem under the assumptions that only a small number of locations in space are responsible for the measured signals (focality), and each source time course is smooth in time (smoothness). The focality and smoothness of the reconstructed signals are ensured respectively by imposing a sparsity-inducing penalty and a roughness penalty in the data fitting criterion. A two-stage algorithm is developed for fast computation, where a raw estimate of the source time course is obtained in the first stage and then refined in the second stage by the two-way regularization. The proposed method is shown to be effective on both synthetic and real-world examples. © Institute of Mathematical Statistics, 2012.
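
    A minimal sketch of the kind of two-way criterion described above, with assumed notation (Y the sensor-by-time data, L the leadfield, S the source-by-time matrix, D a temporal differencing operator); the paper's exact penalties may differ:

```latex
\[
  \hat{S} \;=\; \arg\min_{S}\;
  \big\| Y - L S \big\|_F^{2}
  \;+\; \lambda_1 \sum_{i} \big\| S_{i\cdot} \big\|_{2}
  \;+\; \lambda_2 \big\| S D^{\top} \big\|_F^{2} ,
\]
```

    where the group-l_2 term over rows induces spatial focality (few active locations) and the quadratic difference term penalizes temporal roughness of each source time course.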

  7. A two-way regularization method for MEG source reconstruction

    KAUST Repository

    Tian, Tian Siva

    2012-09-01

    The MEG inverse problem refers to the reconstruction of the neural activity of the brain from magnetoencephalography (MEG) measurements. We propose a two-way regularization (TWR) method to solve the MEG inverse problem under the assumptions that only a small number of locations in space are responsible for the measured signals (focality), and each source time course is smooth in time (smoothness). The focality and smoothness of the reconstructed signals are ensured respectively by imposing a sparsity-inducing penalty and a roughness penalty in the data fitting criterion. A two-stage algorithm is developed for fast computation, where a raw estimate of the source time course is obtained in the first stage and then refined in the second stage by the two-way regularization. The proposed method is shown to be effective on both synthetic and real-world examples. © Institute of Mathematical Statistics, 2012.

  8. Dual-Source Swept-Source Optical Coherence Tomography Reconstructed on Integrated Spectrum

    Directory of Open Access Journals (Sweden)

    Shoude Chang

    2012-01-01

    Full Text Available Dual-source swept-source optical coherence tomography (DS-SSOCT) has two individual sources with different central wavelengths, linewidths, and bandwidths. Because of the differences between the two sources, the tomograms reconstructed individually from each source have different aspect ratios, which makes comparison and integration difficult. We report a method to merge two sets of DS-SSOCT raw data in a common spectrum, on which both data sets have the same spectral density and the correct separation. The reconstructed tomographic image can seamlessly integrate the two bands of OCT data. The final image has higher axial resolution and richer spectroscopic information than either of the individually reconstructed tomographic images.

  9. Evaluation of the influence of uncertain forward models on the EEG source reconstruction problem

    DEFF Research Database (Denmark)

    Stahlhut, Carsten; Mørup, Morten; Winther, Ole

    2009-01-01

    ... in the different areas of the brain when noise is present. Results: Due to the mismatch between the true and experimental forward models, the reconstruction of the sources is determined by the angles between the i-th forward field associated with the true source and the j-th forward field in the experimental forward ... representation of the signal. Conclusions: This analysis demonstrated that caution is needed when evaluating the source estimates in different brain regions. Moreover, we demonstrated the importance of reliable forward models, which may be used as a motivation for including the forward model uncertainty ...

  10. An inverse source problem of the Poisson equation with Cauchy data

    Directory of Open Access Journals (Sweden)

    Ji-Chuan Liu

    2017-05-01

    Full Text Available In this article, we study an inverse source problem for the Poisson equation with Cauchy data. We want to find iterative algorithms to detect the hidden source within a body from measurements on the boundary. Our goal is to reconstruct the location, the size and the shape of the hidden source. This problem is ill-posed, so regularization techniques must be employed to obtain a regularized solution. Numerical examples show that our proposed algorithms are valid and effective.
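
    One common way to pose such an inverse source problem with Cauchy data, written here with assumed notation rather than the paper's own, is:

```latex
\[
  -\Delta u = f\,\chi_{D} \quad \text{in } \Omega , \qquad
  u = g \ \text{ and } \ \partial_{\nu} u = h \quad \text{on } \partial\Omega ,
\]
```

    where the pair (g, h) is the measured Cauchy data on the boundary and the goal is to recover the location, size and shape of the support D of the hidden source.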

  11. On a full Bayesian inference for force reconstruction problems

    Science.gov (United States)

    Aucejo, M.; De Smet, O.

    2018-05-01

    In a previous paper, the authors introduced a flexible methodology for reconstructing mechanical sources in the frequency domain from prior local information on both their nature and location over a linear and time-invariant structure. The proposed approach was derived from Bayesian statistics, because of its ability to mathematically account for the experimenter's prior knowledge. However, since only the Maximum a Posteriori estimate was computed, the posterior uncertainty about the regularized solution, given the measured vibration field, the mechanical model and the regularization parameter, was not assessed. To answer this legitimate question, this paper fully exploits the Bayesian framework to provide, from a Markov Chain Monte Carlo algorithm, credible intervals and other statistical measures (mean, median, mode) for all the parameters of the force reconstruction problem.
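
    In schematic form, and with notation assumed here rather than taken from the paper, the full Bayesian treatment samples a joint posterior over the force vector and the remaining parameters instead of only maximizing it:

```latex
\[
  p(F, \theta \mid X) \;\propto\; p(X \mid F, \theta)\; p(F \mid \theta)\; p(\theta) ,
\]
```

    where X is the measured vibration field, F the force vector and θ the regularization (hyper)parameters; the MCMC samples drawn from this posterior yield the credible intervals and the mean, median and mode mentioned above.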

  12. Two-way regularization for MEG source reconstruction via multilevel coordinate descent

    KAUST Repository

    Siva Tian, Tian

    2013-12-01

    Magnetoencephalography (MEG) source reconstruction refers to the inverse problem of recovering the neural activity from the MEG time course measurements. A spatiotemporal two-way regularization (TWR) method was recently proposed by Tian et al. to solve this inverse problem and was shown to outperform several one-way regularization methods and spatiotemporal methods. This TWR method is a two-stage procedure that first obtains a raw estimate of the source signals and then refines the raw estimate to ensure spatial focality and temporal smoothness using spatiotemporal regularized matrix decomposition. Although proven to be effective, the performance of two-stage TWR depends on the quality of the raw estimate. In this paper we directly solve the MEG source reconstruction problem using a multivariate penalized regression where the number of variables is much larger than the number of cases. A special feature of this regression is that the regression coefficient matrix has a spatiotemporal two-way structure that naturally invites a two-way penalty. Making use of this structure, we develop a computationally efficient multilevel coordinate descent algorithm to implement the method. This new one-stage TWR method has shown its superiority to the two-stage TWR method in three simulation studies with different levels of complexity and a real-world MEG data analysis. © 2013 Wiley Periodicals, Inc., A Wiley Company.

  13. An analytical statistical approach to the 3D reconstruction problem

    Energy Technology Data Exchange (ETDEWEB)

    Cierniak, Robert [Czestochowa Univ. of Technology (Poland). Inst. of Computer Engineering

    2011-07-01

    The approach presented here is concerned with the reconstruction problem for 3D spiral X-ray tomography. The reconstruction problem is formulated taking into consideration the statistical properties of the signals obtained in X-ray CT. Additionally, the image processing performed in our approach is embedded in an analytical methodology. This conception significantly improves the quality of the images obtained after reconstruction and decreases the complexity of the reconstruction problem in comparison with other approaches. Computer simulations proved that the reconstruction algorithm schematically described here outperforms conventional analytical methods in terms of obtained image quality. (orig.)

  14. Simultaneous reconstruction of material and transient source parameters using the invariant imbedding method

    International Nuclear Information System (INIS)

    Corones, J.; Sun, Z.

    1993-01-01

    This paper extends the time domain wave splitting and invariant imbedding method to an inhomogeneous wave equation with a source term: u_{xx} - u_{tt} + A(x)u_{x} = 2D(x)i'(t). The direct scattering and inverse source problems of this equation are studied. Operators J^{±} that map the source function into the scattered waves at the edges of the slab are defined. A system of coupled nonlinear integro-differential equations for these scattering operator kernels is obtained. The direct scattering problem is to obtain the scattering operator kernels J^{±} and R^{+} when the parameters A and D are given. The inverse problem is to simultaneously reconstruct A(x) and D(x) from the scattering operator kernels R^{+}(0,t), 0 ≤ t ≤ 2, and J^{-}(0,t), 0 ≤ t ≤ 1. Both numerical inversion algorithms and the small-time approximate reconstruction method are presented. A Green's function technique is used to derive Green's operator kernel equations for the calculation of the internal field. It provides an alternative, effective and fast way to compute the scattering kernels J^{±}. For constant A and D the Green's operator kernels and source scattering kernels are expressed in closed form. Several numerical examples are given.

  15. Reconstruction of multiple line source attenuation maps

    International Nuclear Information System (INIS)

    Celler, A.; Sitek, A.; Harrop, R.

    1996-01-01

    A simple configuration of a transmission source for single photon emission computed tomography (SPECT) was proposed, which utilizes a series of collimated line sources parallel to the axis of rotation of the camera. The detector is equipped with a standard parallel-hole collimator. We have demonstrated that this type of source configuration can be used to generate sufficient data for the reconstruction of the attenuation map when using 8-10 line sources spaced by 3.5-4.5 cm for a 30 x 40 cm detector at a 65 cm distance from the sources. Transmission data for a nonuniform thorax phantom were simulated, then binned and reconstructed using filtered backprojection (FBP) and iterative methods. The optimum maps are obtained with data binned into 2-3 bins and FBP reconstruction. The activity in the source was investigated for uniform and exponential activity distributions, as well as the effect of gaps and overlaps of the neighboring fan beams. A prototype of the line source has been built and the experimental verification of the technique has started.

  16. Strategy for fitting source strength and reconstruction procedure in radioactive particle tracking

    International Nuclear Information System (INIS)

    Mosorov, Volodymyr

    2015-01-01

    The Radioactive Particle Tracking (RPT) technique is widely applied to study the dynamic properties of flows inside a reactor. Usually, a single radioactive particle that is neutrally buoyant with respect to the phase is used as a tracer. The particle moves inside a 3D volume of interest, and its positions are determined by an array of scintillation detectors, which count the incoming photons. The particle position coordinates are calculated by a reconstruction procedure that solves a minimization problem between the measured counts and calibration data. Although previous studies have described the influence of specific factors on the RPT resolution and sensitivity, the question of how to choose an appropriate source strength and reconstruction procedure for a given RPT setup remains an unsolved problem. This work describes and applies an original strategy for fitting both the source strength and the sampling time interval to a specified RPT setup so as to guarantee a required measurement accuracy. Additionally, the measurement accuracy of an RPT setup can be significantly increased by changing the reconstruction procedure. The results of the simulations, based on the Monte Carlo approach, demonstrate that the proposed strategy allows for the successful implementation of the As Low As Reasonably Achievable (ALARA) principle when designing the RPT setup. The limitations and drawbacks of the proposed procedure are also presented. - Highlights: • We develop an original strategy for fitting the source strength and the measurement time interval in the radioactive particle tracking (RPT) technique. • The proposed strategy allows the ALARA (As Low As Reasonably Achievable) principle to be implemented successfully when designing an RPT setup. • The measurement accuracy of an RPT setup can be significantly increased by improving the reconstruction procedure. • The algorithm can be applied to monitor the motion of a radioactive tracer in a reactor.

  17. Direct reconstruction of the source intensity distribution of a clinical linear accelerator using a maximum likelihood expectation maximization algorithm.

    Science.gov (United States)

    Papaconstadopoulos, P; Levesque, I R; Maglieri, R; Seuntjens, J

    2016-02-07

    Direct determination of the source intensity distribution of clinical linear accelerators is still a challenging problem for small-field beam modeling. Current techniques most often involve special equipment and are difficult to implement in the clinic. In this work we present a maximum-likelihood expectation-maximization (MLEM) approach to the source reconstruction problem utilizing small fields and a simple experimental set-up. The MLEM algorithm iteratively ray-traces photons from the source plane to the exit plane and extracts corrections based on photon fluence profile measurements. The photon fluence profiles were determined from dose profile film measurements in air, using a high-density thin foil as build-up material and an appropriate point spread function (PSF). The effect of other beam parameters and scatter sources was minimized by using the smallest field size ([Formula: see text] cm²). The source occlusion effect was reproduced by estimating the position of the collimating jaws during this process. The method was first benchmarked against simulations for a range of typical accelerator source sizes. The sources were reconstructed with an accuracy better than 0.12 mm in the full width at half maximum (FWHM) relative to the respective electron sources incident on the target. The estimated jaw positions agreed within 0.2 mm with the expected values. The reconstruction technique was also tested against measurements on a Varian Novalis Tx linear accelerator and compared to a previously commissioned Monte Carlo model. The reconstructed FWHM of the source agreed within 0.03 mm and 0.11 mm with the commissioned electron source in the crossplane and inplane orientations, respectively. The impact of the jaw positioning, experimental and PSF uncertainties on the reconstructed source distribution was evaluated, with the former presenting the dominant effect.
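
    As a rough illustration of the iterative update behind an MLEM reconstruction of this kind, here is a minimal sketch for a generic linear model; the system matrix A stands in for the ray-tracing from the source plane to the measurement plane, and the names and discretization are assumptions rather than the authors' implementation:

```python
import numpy as np

def mlem(A, m, n_iter=200, eps=1e-12):
    """Generic MLEM iteration for a linear model m ~ A @ s with nonnegative s.

    A : (n_meas, n_src) system matrix (assumed ray-trace weights from the
        source plane to the measurement plane).
    m : measured fluence profile samples.
    """
    s = np.ones(A.shape[1])           # flat initial source estimate
    sens = A.T @ np.ones(A.shape[0])  # sensitivity image (column sums of A)
    for _ in range(n_iter):
        proj = A @ s                                  # forward projection
        ratio = m / np.maximum(proj, eps)             # measured / estimated fluence
        s *= (A.T @ ratio) / np.maximum(sens, eps)    # multiplicative MLEM update
    return s
```

    The multiplicative update keeps the estimate nonnegative and converges toward the source distribution whose forward projection best matches the measured fluence in the Poisson-likelihood sense.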

  18. Algorithmic procedures for Bayesian MEG/EEG source reconstruction in SPM☆

    Science.gov (United States)

    López, J.D.; Litvak, V.; Espinosa, J.J.; Friston, K.; Barnes, G.R.

    2014-01-01

    The MEG/EEG inverse problem is ill-posed, giving different source reconstructions depending on the initial assumption sets. Parametric Empirical Bayes allows one to implement most popular MEG/EEG inversion schemes (Minimum Norm, LORETA, etc.) within the same generic Bayesian framework. It also provides a cost-function in terms of the variational Free energy—an approximation to the marginal likelihood or evidence of the solution. In this manuscript, we revisit the algorithm for MEG/EEG source reconstruction with a view to providing a didactic and practical guide. The aim is to promote and help standardise the development and consolidation of other schemes within the same framework. We describe the implementation in the Statistical Parametric Mapping (SPM) software package, carefully explaining each of its stages with the help of a simple simulated data example. We focus on the Multiple Sparse Priors (MSP) model, which we compare with the well-known Minimum Norm and LORETA models, using the negative variational Free energy for model comparison. The manuscript is accompanied by Matlab scripts to allow the reader to test and explore the underlying algorithm. PMID:24041874

  19. Honey bee-inspired algorithms for SNP haplotype reconstruction problem

    Science.gov (United States)

    PourkamaliAnaraki, Maryam; Sadeghi, Mehdi

    2016-03-01

    Reconstructing haplotypes from SNP fragments is an important problem in computational biology. There has been a lot of interest in this field because haplotypes have been shown to contain promising data for disease association research. It has been proved that haplotype reconstruction in the Minimum Error Correction model is an NP-hard problem. Therefore, several methods such as clustering techniques, evolutionary algorithms, neural networks and swarm intelligence approaches have been proposed in order to solve this problem in reasonable time. In this paper, we focus on various evolutionary clustering techniques and try to find an efficient technique for solving the haplotype reconstruction problem. Our experiments suggest that clustering methods relying on the behaviour of honey bee colonies in nature, specifically the bees algorithm and the artificial bee colony method, are expected to result in more efficient solutions. An application program implementing the methods is available at the following link: http://www.bioinf.cs.ipm.ir/software/haprs/

  20. Acoustical source reconstruction from non-synchronous sequential measurements by Fast Iterative Shrinkage Thresholding Algorithm

    Science.gov (United States)

    Yu, Liang; Antoni, Jerome; Leclere, Quentin; Jiang, Weikang

    2017-11-01

    Acoustical source reconstruction is a typical inverse problem, whose minimum frequency of reconstruction hinges on the size of the array and whose maximum frequency depends on the spacing between the microphones. To enlarge the frequency range of reconstruction and reduce the cost of the acquisition system, Cyclic Projection (CP), a method of sequential measurements without a reference, was recently investigated (JSV, 2016, 372:31-49). In this paper, the Propagation-based Fast Iterative Shrinkage Thresholding Algorithm (Propagation-FISTA) is introduced, which improves CP in two aspects: (1) the number of acoustic sources is no longer needed, the only assumption made being that of a “weakly sparse” eigenvalue spectrum; (2) the construction of the spatial basis is much easier and adapts to practical acoustical measurement scenarios thanks to the introduction of a propagation-based spatial basis. The proposed Propagation-FISTA is first investigated with different simulations and experimental setups and is then illustrated on an industrial case.
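
    For orientation, a minimal generic FISTA for an l_1-penalized least-squares problem is sketched below; the propagation-based spatial basis and the specific operators of Propagation-FISTA are not reproduced, and all names here are assumptions:

```python
import numpy as np

def fista_l1(A, b, lam, n_iter=300):
    """Generic FISTA for min_x 0.5*||A x - b||_2^2 + lam*||x||_1 (complex-valued allowed)."""
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the smooth gradient
    x = np.zeros(A.shape[1], dtype=complex)
    y, t = x.copy(), 1.0

    def soft(v, thr):
        # Complex soft-thresholding: shrink the magnitude, keep the phase.
        return np.exp(1j * np.angle(v)) * np.maximum(np.abs(v) - thr, 0.0)

    for _ in range(n_iter):
        grad = A.conj().T @ (A @ y - b)        # gradient of the data-fit term
        x_new = soft(y - grad / L, lam / L)    # proximal (shrinkage-thresholding) step
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t ** 2)) / 2.0
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)   # Nesterov momentum step
        x, t = x_new, t_new
    return x
```

    The shrinkage step is what enforces the sparsity assumption, while the momentum step gives the accelerated convergence rate that distinguishes FISTA from plain iterative shrinkage thresholding.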

  1. Numerical reconstruction of tsunami source using combined seismic, satellite and DART data

    Science.gov (United States)

    Krivorotko, Olga; Kabanikhin, Sergey; Marinin, Igor

    2014-05-01

    function, the adjoint problem is solved. The conservative finite-difference schemes for solving the direct and adjoint problems in the approximation of shallow water are constructed. Results of numerical experiments of the tsunami source reconstruction are presented and discussed. We show that using a combination of three different types of data allows one to increase the stability and efficiency of tsunami source reconstruction. Non-profit organization WAPMERR (World Agency of Planetary Monitoring and Earthquake Risk Reduction) in collaboration with Informap software development department developed the Integrated Tsunami Research and Information System (ITRIS) to simulate tsunami waves and earthquakes, river course changes, coastal zone floods, and risk estimates for coastal constructions at wave run-ups and earthquakes. The special scientific plug-in components are embedded in a specially developed GIS-type graphic shell for easy data retrieval, visualization and processing. This work was supported by the Russian Foundation for Basic Research (project No. 12-01-00773 'Theory and Numerical Methods for Solving Combined Inverse Problems of Mathematical Physics') and interdisciplinary project of SB RAS 14 'Inverse Problems and Applications: Theory, Algorithms, Software'.

  2. Algorithmic procedures for Bayesian MEG/EEG source reconstruction in SPM.

    Science.gov (United States)

    López, J D; Litvak, V; Espinosa, J J; Friston, K; Barnes, G R

    2014-01-01

    The MEG/EEG inverse problem is ill-posed, giving different source reconstructions depending on the initial assumption sets. Parametric Empirical Bayes allows one to implement most popular MEG/EEG inversion schemes (Minimum Norm, LORETA, etc.) within the same generic Bayesian framework. It also provides a cost-function in terms of the variational Free energy-an approximation to the marginal likelihood or evidence of the solution. In this manuscript, we revisit the algorithm for MEG/EEG source reconstruction with a view to providing a didactic and practical guide. The aim is to promote and help standardise the development and consolidation of other schemes within the same framework. We describe the implementation in the Statistical Parametric Mapping (SPM) software package, carefully explaining each of its stages with the help of a simple simulated data example. We focus on the Multiple Sparse Priors (MSP) model, which we compare with the well-known Minimum Norm and LORETA models, using the negative variational Free energy for model comparison. The manuscript is accompanied by Matlab scripts to allow the reader to test and explore the underlying algorithm. © 2013. Published by Elsevier Inc. All rights reserved.

  3. Iterative Reconstruction Methods for Hybrid Inverse Problems in Impedance Tomography

    DEFF Research Database (Denmark)

    Hoffmann, Kristoffer; Knudsen, Kim

    2014-01-01

    For a general formulation of hybrid inverse problems in impedance tomography, the Picard and Newton iterative schemes are adapted and four iterative reconstruction algorithms are developed. The general problem formulation includes several existing hybrid imaging modalities such as current density impedance imaging, magnetic resonance electrical impedance tomography, and ultrasound-modulated electrical impedance tomography, and the unified approach to the reconstruction problem encompasses several algorithms suggested in the literature. The four proposed algorithms are implemented numerically in two ...

  4. Ensemble-based data assimilation and optimal sensor placement for scalar source reconstruction

    Science.gov (United States)

    Mons, Vincent; Wang, Qi; Zaki, Tamer

    2017-11-01

    Reconstructing the characteristics of a scalar source from limited remote measurements in a turbulent flow is a problem of great interest for environmental monitoring, and it is challenging for several reasons. Firstly, the numerical estimation of scalar dispersion in a turbulent flow requires significant computational resources. Secondly, in practice, only a limited number of observations are available, which generally makes the corresponding inverse problem ill-posed. Ensemble-based variational data assimilation techniques are adopted to solve the problem of scalar source localization in a turbulent channel flow at Reτ = 180. This approach combines the components of variational data assimilation and ensemble Kalman filtering, and inherits the robustness of the former and the ease of implementation of the latter. An ensemble-based methodology for optimal sensor placement is also proposed in order to improve the conditioning of the inverse problem, which enhances the performance of the data assimilation scheme. This work has been partially funded by the Office of Naval Research (Grant N00014-16-1-2542) and by the National Science Foundation (Grant 1461870).

  5. Time-dependent problems in quantum-mechanical state reconstruction

    International Nuclear Information System (INIS)

    Leonhardt, U.; Bardroff, P. J.

    1997-01-01

    We study the state reconstruction of wave packets that travel in time-dependent potentials. We solve the problem for explicitly time-dependent harmonic oscillators and sketch a general adaptive technique for finding the wave function that matches an observed evolution. (authors)

  6. Pathgroups, a dynamic data structure for genome reconstruction problems.

    Science.gov (United States)

    Zheng, Chunfang

    2010-07-01

    Ancestral gene order reconstruction problems, including the median problem, quartet construction, small phylogeny, guided genome halving and genome aliquoting, are NP-hard. Available heuristics dedicated to each of these problems are computationally costly even for small instances. We present a data structure enabling rapid heuristic solution of all these ancestral genome reconstruction problems. A generic greedy algorithm with look-ahead, based on an automatically generated priority system, suffices for all the problems using this data structure. The efficiency of the algorithm is due to fast updating of the structure during run time and to the simplicity of the priority scheme. We illustrate with the first rapid algorithm for quartet construction and apply this to a set of yeast genomes to corroborate a recent gene sequence-based phylogeny. Availability: http://albuquerque.bioinformatics.uottawa.ca/pathgroup/Quartet.html. Contact: chunfang313@gmail.com. Supplementary data are available at Bioinformatics online.

  7. The cophylogeny reconstruction problem is NP-complete.

    Science.gov (United States)

    Ovadia, Y; Fielder, D; Conow, C; Libeskind-Hadas, R

    2011-01-01

    The cophylogeny reconstruction problem is that of finding minimum-cost explanations of differences between historical associations. The problem arises in parasitology, molecular systematics, and biogeography. Existing software tools for this problem either have worst-case exponential time or use heuristics that do not guarantee optimal solutions. To date, no polynomial-time optimal algorithms have been found for this problem. In this article, we prove that the problem is NP-complete, suggesting that future research on algorithms for this problem should seek better polynomial-time approximation algorithms and heuristics rather than optimal solutions.

  8. Source localization in electromyography using the inverse potential problem

    Science.gov (United States)

    van den Doel, Kees; Ascher, Uri M.; Pai, Dinesh K.

    2011-02-01

    We describe an efficient method for reconstructing the activity in human muscles from an array of voltage sensors on the skin surface. MRI is used to obtain morphometric data which are segmented into muscle tissue, fat, bone and skin, from which a finite element model for volume conduction is constructed. The inverse problem of finding the current sources in the muscles is solved using a careful regularization technique which adds a priori information, yielding physically reasonable solutions from among those that satisfy the basic potential problem. Several regularization functionals are considered and numerical experiments on a 2D test model are performed to determine which performs best. The resulting scheme leads to numerical difficulties when applied to large-scale 3D problems. We clarify the nature of these difficulties and provide a method to overcome them, which is shown to perform well in the large-scale problem setting.

  9. Source localization in electromyography using the inverse potential problem

    International Nuclear Information System (INIS)

    Van den Doel, Kees; Ascher, Uri M; Pai, Dinesh K

    2011-01-01

    We describe an efficient method for reconstructing the activity in human muscles from an array of voltage sensors on the skin surface. MRI is used to obtain morphometric data which are segmented into muscle tissue, fat, bone and skin, from which a finite element model for volume conduction is constructed. The inverse problem of finding the current sources in the muscles is solved using a careful regularization technique which adds a priori information, yielding physically reasonable solutions from among those that satisfy the basic potential problem. Several regularization functionals are considered and numerical experiments on a 2D test model are performed to determine which performs best. The resulting scheme leads to numerical difficulties when applied to large-scale 3D problems. We clarify the nature of these difficulties and provide a method to overcome them, which is shown to perform well in the large-scale problem setting

  10. Reconstructing the Hopfield network as an inverse Ising problem

    International Nuclear Information System (INIS)

    Huang Haiping

    2010-01-01

    We test four fast mean-field-type algorithms on Hopfield networks as an inverse Ising problem. The equilibrium behavior of the Hopfield networks is simulated through Glauber dynamics. In the low-temperature regime, the simulated annealing technique is adopted. Although the performance of these network reconstruction algorithms on simulated networks of spiking neurons has been studied extensively in recent years, a corresponding analysis for Hopfield networks has been lacking so far. For the Hopfield network we find that, in the retrieval phase favored when the network retrieves one of the stored patterns, all the reconstruction algorithms fail to extract the interactions within the desired accuracy, and the same failure occurs in the spin-glass phase where spurious minima show up, whereas in the paramagnetic phase, albeit unfavored during the retrieval dynamics, the algorithms work well to reconstruct the network itself. This implies that, as an inverse problem, the paramagnetic phase is conversely useful for reconstructing the network, while the retrieval phase loses all the information about the interactions in the network except in the case where only one pattern is stored. The performance of the algorithms is studied with respect to the system size, memory load, and temperature; sample-to-sample fluctuations are also considered.
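
    As one concrete example of a fast mean-field-type estimator for this kind of inverse Ising setting, here is a naive mean-field (nMF) sketch; whether nMF is among the four algorithms tested in the paper is not stated in the abstract, so it is used here only as an illustrative assumption:

```python
import numpy as np

def nmf_couplings(samples):
    """Naive mean-field inverse Ising reconstruction from spin samples.

    samples : (n_samples, n_spins) array of +/-1 spin configurations drawn
              from equilibrium Glauber dynamics.
    Returns the estimated coupling matrix J and the magnetizations m.
    """
    m = samples.mean(axis=0)              # magnetizations <s_i>
    C = np.cov(samples, rowvar=False)     # connected correlation matrix
    J = -np.linalg.inv(C)                 # nMF estimate: J_ij = -(C^{-1})_ij for i != j
    np.fill_diagonal(J, 0.0)              # drop the (undefined) self-couplings
    return J, m
```

    The quality of this simple inversion degrades exactly in the regimes the abstract highlights: in the retrieval and spin-glass phases the sampled correlations no longer reflect the couplings faithfully, whereas in the paramagnetic phase the estimate is reliable.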

  11. Polyquant CT: direct electron and mass density reconstruction from a single polyenergetic source

    Science.gov (United States)

    Mason, Jonathan H.; Perelli, Alessandro; Nailon, William H.; Davies, Mike E.

    2017-11-01

    Quantifying material mass and electron density from computed tomography (CT) reconstructions can be highly valuable in certain medical practices, such as radiation therapy planning. However, uniquely parameterising the x-ray attenuation in terms of mass or electron density is an ill-posed problem when a single polyenergetic source is used with a spectrally indiscriminate detector. Existing approaches to single-source polyenergetic modelling often impose consistency with a physical model, such as water-bone or photoelectric-Compton decompositions, which either require detailed prior segmentation or restrictive energy dependencies, and may require further calibration to the quantity of interest. In this work, we introduce a data-centric approach that fits the attenuation with piecewise-linear functions directly to mass or electron density, and present a segmentation-free statistical reconstruction algorithm for exploiting it, with the same order of complexity as other iterative methods. We show how this allows higher accuracy in attenuation modelling, demonstrate its superior quantitative imaging with numerical chest and metal implant data, and validate it with real cone-beam CT measurements.

  12. Mandible reconstruction: History, state of the art and persistent problems.

    Science.gov (United States)

    Ferreira, José J; Zagalo, Carlos M; Oliveira, Marta L; Correia, André M; Reis, Ana R

    2015-06-01

    Mandibular reconstruction has been experiencing a remarkable evolution. Several different approaches are used to reconstruct this bone, which has a fundamental role in the recovery of oral functions. This review aims to highlight the persistent problems associated with the approaches identified, whether bone grafts or prosthetic devices are used. A brief summary of the historical evolution of the surgical procedures is presented, as well as an insight into possible future pathways. A literature review was conducted from September to December 2012 using the PubMed database. The keyword used was "mandible reconstruction." Articles published in the last three years were included, as well as relevant references from those articles; "historical articles" were also consulted. This research resulted in a monograph that this article aims to summarize. Titanium plates, bone grafts, pediculate flaps, free osteomyocutaneous flaps, rapid prototyping, and tissue engineering strategies are some of the identified possibilities. The classical approaches present considerable associated morbidity and donor-site-related problems. Research that results in the development of new prosthetic devices is needed. A new prosthetic approach could minimize the identified problems and offer patients more predictable, affordable, and comfortable solutions. This review, while affirming the evolution and the good results achieved with the current approaches, emphasizes the negative aspects that still subsist. Thus, it shows that mandible reconstruction is not a closed issue. On the contrary, it remains a research field where new findings could have a direct positive impact on patients' quality of life. The identification of the persistent problems reveals the characteristics to be considered in a new prosthetic device. This could overcome the current difficulties and result in more comfortable solutions. Medical teams have the responsibility to keep patients informed about the predictable ...

  13. Women and post-conflict reconstruction: Issues and sources

    OpenAIRE

    Sørensen, Birgitte

    1998-01-01

    Women and Post-Conflict Reconstruction: Issues and Sources is a review of literature dealing with political, economic and social reconstruction from a gender perspective. One of its objectives is to go beyond conventional images of women as victims of war, and to document the many different ways in which women make a contribution to the rebuilding of countries emerging from armed conflicts. Special attention is given to women's priority concerns, to their resources and capacities, and to stru...

  14. Scaled nonuniform Fourier transform for image reconstruction in swept source optical coherence tomography

    Science.gov (United States)

    Mezgebo, Biniyam; Nagib, Karim; Fernando, Namal; Kordi, Behzad; Sherif, Sherif

    2018-02-01

    Swept-source optical coherence tomography (SS-OCT) is an important imaging modality for both medical and industrial diagnostic applications. A cross-sectional SS-OCT image is obtained by applying an inverse discrete Fourier transform (DFT) to axial interferograms measured in the frequency domain (k-space). This inverse DFT is typically implemented as a fast Fourier transform (FFT), which requires the data samples to be equidistant in k-space. As the frequency of light produced by a typical wavelength-swept laser is nonlinear in time, the recorded interferogram samples will not be uniformly spaced in k-space. Many image reconstruction methods have been proposed to overcome this problem. Most such methods rely on oversampling the measured interferogram and then use either hardware, e.g., a Mach-Zehnder interferometer as a frequency clock module, or software, e.g., interpolation in k-space, to obtain equally spaced samples that are suitable for the FFT. To overcome the problem of nonuniform sampling in k-space without any need for interferogram oversampling, an earlier method demonstrated the use of the nonuniform discrete Fourier transform (NDFT) for image reconstruction in SS-OCT. In this paper, we present a more accurate method for SS-OCT image reconstruction from nonuniform samples in k-space using a scaled nonuniform Fourier transform. The result is demonstrated using SS-OCT images of axolotl salamander eggs.
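
    A minimal sketch of the brute-force NDFT evaluation underlying such a reconstruction is shown below; the particular scaling of the depth axis used in the paper's scaled transform is not reproduced, and all names are assumptions:

```python
import numpy as np

def ndft_ascan(interferogram, k, z):
    """Direct (brute-force) nonuniform DFT of one axial interferogram.

    interferogram : spectral samples measured at nonuniform wavenumbers k (rad/m).
    z             : depths (m) at which the A-scan is evaluated.
    Returns the complex A-scan; take abs() for the magnitude image line.
    """
    # Each depth sample is a sum over the spectral samples with a nonuniform kernel,
    # so no interpolation onto a uniform k-grid is needed.
    kernel = np.exp(1j * np.outer(z, k))   # shape (n_depths, n_k)
    return kernel @ interferogram
```

    The direct evaluation costs O(N^2) per A-scan, which is the price paid for avoiding resampling; the scaled transform in the paper addresses how the depth axis is chosen for this evaluation.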

  15. Gadgetron: An Open Source Framework for Medical Image Reconstruction

    DEFF Research Database (Denmark)

    Hansen, Michael Schacht; Sørensen, Thomas Sangild

    2013-01-01

    This work presents a new open source framework for medical image reconstruction called the “Gadgetron.” The framework implements a flexible system for creating streaming data processing pipelines where data pass through a series of modules or “Gadgets” from raw data to reconstructed images ... with a set of dedicated toolboxes in shared libraries for medical image reconstruction. This includes generic toolboxes for data-parallel (e.g., GPU-based) execution of compute-intensive components. The basic framework architecture is independent of the medical imaging modality, but this article focuses on its ...

  16. Architectural and town-planning reconstruction problems of the city of Voronezh

    Science.gov (United States)

    Mikhaylova, Tatyana; Parshin, Dmitriy; Shoshinov, Vitaly; Trebukhin, Anatoliy

    2018-03-01

    An analysis of the state of the historically developed urban district of the city of Voronezh is presented. Ways of solving the identified architectural and town-planning problems of reconstructing the historically developed buildings are proposed. The concept of the reconstruction of a territory with historical buildings along Vaytsekhovsky Street is presented.

  17. Radiation source reconstruction with known geometry and materials using the adjoint

    International Nuclear Information System (INIS)

    Hykes, Joshua M.; Azmy, Yousry Y.

    2011-01-01

    We present a method to estimate an unknown isotropic source distribution, in space and energy, using detector measurements when the geometry and material composition are known. The estimated source distribution minimizes the difference between the measured and computed responses of detectors located at a selected number of points within the domain. In typical methods, a forward flux calculation is performed for each source guess in an iterative process. In contrast, we use the adjoint flux to compute the responses. Potential applications of the proposed method include determining the distribution of radio-contaminants following a nuclear event, monitoring the flow of radioactive fluids in pipes to determine hold-up locations, and retroactive reconstruction of radiation fields using workers' detectors' readings. After presenting the method, we describe a numerical test problem to demonstrate the preliminary viability of the method. As expected, using the adjoint flux reduces the number of transport solves to be proportional to the number of detector measurements, in contrast to methods using the forward flux that require a typically larger number proportional to the number of spatial mesh cells. (author)
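
    The computational saving described above rests on the standard adjoint (duality) identity, written here with assumed notation:

```latex
\[
  R_d \;=\; \langle \sigma_d,\ \phi \rangle \;=\; \langle q,\ \phi_d^{\dagger} \rangle ,
\]
```

    where φ is the forward flux produced by the unknown source q, φ_d^† is the adjoint flux computed with the d-th detector response function σ_d as its source, and R_d is that detector's reading; one adjoint solve per detector therefore expresses every reading as a linear functional of q, so the number of transport solves scales with the number of detectors rather than with the number of spatial mesh cells.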

  18. Skull Defects in Finite Element Head Models for Source Reconstruction from Magnetoencephalography Signals

    Science.gov (United States)

    Lau, Stephan; Güllmar, Daniel; Flemming, Lars; Grayden, David B.; Cook, Mark J.; Wolters, Carsten H.; Haueisen, Jens

    2016-01-01

    Magnetoencephalography (MEG) signals are influenced by skull defects. However, there is a lack of evidence of this influence during source reconstruction. Our objectives are to characterize errors in source reconstruction from MEG signals due to ignoring skull defects and to assess the ability of an exact finite element head model to eliminate such errors. A detailed finite element model of the head of a rabbit used in a physical experiment was constructed from magnetic resonance and co-registered computer tomography imaging that differentiated nine tissue types. Sources of the MEG measurements above intact skull and above skull defects respectively were reconstructed using a finite element model with the intact skull and one incorporating the skull defects. The forward simulation of the MEG signals reproduced the experimentally observed characteristic magnitude and topography changes due to skull defects. Sources reconstructed from measured MEG signals above intact skull matched the known physical locations and orientations. Ignoring skull defects in the head model during reconstruction displaced sources under a skull defect away from that defect. Sources next to a defect were reoriented. When skull defects, with their physical conductivity, were incorporated in the head model, the location and orientation errors were mostly eliminated. The conductivity of the skull defect material non-uniformly modulated the influence on MEG signals. We propose concrete guidelines for taking into account conducting skull defects during MEG coil placement and modeling. Exact finite element head models can improve localization of brain function, specifically after surgery. PMID:27092044

  19. A high-throughput system for high-quality tomographic reconstruction of large datasets at Diamond Light Source.

    Science.gov (United States)

    Atwood, Robert C; Bodey, Andrew J; Price, Stephen W T; Basham, Mark; Drakopoulos, Michael

    2015-06-13

    Tomographic datasets collected at synchrotrons are becoming very large and complex, and, therefore, need to be managed efficiently. Raw images may have high pixel counts, and each pixel can be multidimensional and associated with additional data such as those derived from spectroscopy. In time-resolved studies, hundreds of tomographic datasets can be collected in sequence, yielding terabytes of data. Users of tomographic beamlines are drawn from various scientific disciplines, and many are keen to use tomographic reconstruction software that does not require a deep understanding of reconstruction principles. We have developed Savu, a reconstruction pipeline that enables users to rapidly reconstruct data to consistently create high-quality results. Savu is designed to work in an 'orthogonal' fashion, meaning that data can be converted between projection and sinogram space throughout the processing workflow as required. The Savu pipeline is modular and allows processing strategies to be optimized for users' purposes. In addition to the reconstruction algorithms themselves, it can include modules for identification of experimental problems, artefact correction, general image processing and data quality assessment. Savu is open source, open licensed and 'facility-independent': it can run on standard cluster infrastructure at any institution.

  20. Reconstruction of sound source signal by analytical passive TR in the environment with airflow

    Science.gov (United States)

    Wei, Long; Li, Min; Yang, Debin; Niu, Feng; Zeng, Wu

    2017-03-01

    In the acoustic design of air vehicles, the time-domain signals of noise sources on the surface of air vehicles can serve as data support to reveal the noise source generation mechanism, analyze acoustic fatigue, and take measures for noise insulation and reduction. To rapidly reconstruct the time-domain sound source signals in an environment with flow, a method combining the analytical passive time reversal mirror (AP-TR) with a shear flow correction is proposed. In this method, the negative influence of flow on sound wave propagation is suppressed by the shear flow correction, yielding corrected acoustic propagation time delays and paths. These corrected time delays and paths, together with the microphone array signals, are then submitted to the AP-TR, which reconstructs more accurate sound source signals in the environment with airflow. As an analytical method, AP-TR offers a supplementary way, alternative to numerical TR, to reconstruct the sound source signal in 3D space in an environment with airflow. Experiments on the reconstruction of the sound source signals of a pair of loudspeakers are conducted in an anechoic wind tunnel with subsonic airflow to validate the effectiveness and advantages of the proposed method. Moreover, a comparison between AP-TR and time-domain beamforming in reconstructing the sound source signal, both in theory and in experiment, is also discussed.

  1. An evolutionary algorithm for tomographic reconstructions in limited data sets problems

    International Nuclear Information System (INIS)

    Turcanu, Catrinel; Craciunescu, Teddy

    2000-01-01

    The paper proposes a new method for tomographic reconstructions. Unlike nuclear medicine applications, in physical science problems we are often confronted with limited data sets: constraints on the number of projections or limited-angle views. The problem of image reconstruction from projections may be considered as a problem of finding an image (solution) having projections that match the experimental ones. In our approach, we choose a statistical correlation coefficient to evaluate the fitness of any potential solution. The optimization process is carried out by an evolutionary algorithm. Our algorithm has some problem-oriented characteristics. One of them is that a chromosome, representing a potential solution, is not linear but coded as a matrix of pixels corresponding to a two-dimensional image. This kind of internal representation reflects the structure of the original problem: slight differences between two points in the original problem space give rise to similarly slight differences once they are coded. Another particular feature is a newly built crossover operator, the grid-based crossover, suitable for high-dimensional two-dimensional chromosomes. Except for the population size and the dimension of the cutting grid for the grid-based crossover, all the other parameters of the algorithm are independent of the geometry of the tomographic reconstruction. The performance of the method is evaluated in comparison with a traditional tomographic method, based on the maximization of the entropy of the image, that has proved to work well with limited data sets. The test phantom is typical of an application with limited data sets: the determination of neutron energy spectra with time resolution in the case of short-pulsed neutron emission. Both the qualitative judgement and the quantitative one, based on some figures of merit, indicate that the proposed method ensures an improved reconstruction of shapes, sizes and resolution in the image, even in the presence of noise.

  2. Bayesian model selection of template forward models for EEG source reconstruction.

    Science.gov (United States)

    Strobbe, Gregor; van Mierlo, Pieter; De Vos, Maarten; Mijović, Bogdan; Hallez, Hans; Van Huffel, Sabine; López, José David; Vandenberghe, Stefaan

    2014-06-01

    Several EEG source reconstruction techniques have been proposed to identify the generating neuronal sources of electrical activity measured on the scalp. The solution of these techniques depends directly on the accuracy of the forward model that is inverted. Recently, a parametric empirical Bayesian (PEB) framework for distributed source reconstruction in EEG/MEG was introduced and implemented in the Statistical Parametric Mapping (SPM) software. The framework allows us to compare different forward modeling approaches, using real data, instead of using more traditional simulated data from an assumed true forward model. In the absence of a subject specific MR image, a 3-layered boundary element method (BEM) template head model is currently used including a scalp, skull and brain compartment. In this study, we introduced volumetric template head models based on the finite difference method (FDM). We constructed a FDM head model equivalent to the BEM model and an extended FDM model including CSF. These models were compared within the context of three different types of source priors related to the type of inversion used in the PEB framework: independent and identically distributed (IID) sources, equivalent to classical minimum norm approaches, coherence (COH) priors similar to methods such as LORETA, and multiple sparse priors (MSP). The resulting models were compared based on ERP data of 20 subjects using Bayesian model selection for group studies. The reconstructed activity was also compared with the findings of previous studies using functional magnetic resonance imaging. We found very strong evidence in favor of the extended FDM head model with CSF and assuming MSP. These results suggest that the use of realistic volumetric forward models can improve PEB EEG source reconstruction. Copyright © 2014 Elsevier Inc. All rights reserved.

  3. Improved iterative image reconstruction algorithm for the exterior problem of computed tomography

    International Nuclear Information System (INIS)

    Guo, Yumeng; Zeng, Li

    2017-01-01

    In industrial applications that are limited by the angle of a fan-beam and the length of a detector, the exterior problem of computed tomography (CT) uses only the projection data that correspond to the external annulus of the objects to reconstruct an image. Because the reconstructions are not affected by the projection data that correspond to the interior of the objects, the exterior problem is widely applied to detect cracks in the outer wall of large-sized objects, such as in-service pipelines. However, image reconstruction in the exterior problem is still a challenging problem due to truncated projection data and beam-hardening, both of which can lead to distortions and artifacts. Thus, developing an effective algorithm and adopting a scanning trajectory suited for the exterior problem may be valuable. In this study, an improved iterative algorithm that combines total variation minimization (TVM) with a region scalable fitting (RSF) model was developed for a unilateral off-centered scanning trajectory and can be utilized to inspect large-sized objects for defects. Experiments involving simulated phantoms and real projection data were conducted to validate the practicality of our algorithm. Furthermore, comparative experiments show that our algorithm outperforms others in suppressing the artifacts caused by truncated projection data and beam-hardening.
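
    One plausible way to write the combined objective behind such a TVM-plus-RSF iteration is sketched below; the weights, constraint handling and notation are assumptions and are not taken from the paper:

```latex
\[
  \min_{x \ge 0}\ \tfrac{1}{2}\, \big\| A x - b \big\|_2^{2}
  \;+\; \alpha\, \mathrm{TV}(x)
  \;+\; \beta\, E_{\mathrm{RSF}}(x) ,
\]
```

    where A is the truncated exterior-problem projection operator, b the measured data, TV(x) the total variation of the image, and E_RSF(x) the region scalable fitting energy that guides the recovered edges; the two penalty terms are what counteract the distortions caused by the truncated projections and beam-hardening.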

  4. Improved iterative image reconstruction algorithm for the exterior problem of computed tomography

    Energy Technology Data Exchange (ETDEWEB)

    Guo, Yumeng [Chongqing University, College of Mathematics and Statistics, Chongqing 401331 (China); Chongqing University, ICT Research Center, Key Laboratory of Optoelectronic Technology and System of the Education Ministry of China, Chongqing 400044 (China); Zeng, Li, E-mail: drlizeng@cqu.edu.cn [Chongqing University, College of Mathematics and Statistics, Chongqing 401331 (China); Chongqing University, ICT Research Center, Key Laboratory of Optoelectronic Technology and System of the Education Ministry of China, Chongqing 400044 (China)

    2017-01-11

    In industrial applications that are limited by the angle of a fan-beam and the length of a detector, the exterior problem of computed tomography (CT) uses only the projection data that correspond to the external annulus of the objects to reconstruct an image. Because the reconstructions are not affected by the projection data that correspond to the interior of the objects, the exterior problem is widely applied to detect cracks in the outer wall of large-sized objects, such as in-service pipelines. However, image reconstruction in the exterior problem is still a challenging problem due to truncated projection data and beam-hardening, both of which can lead to distortions and artifacts. Thus, developing an effective algorithm and adopting a scanning trajectory suited for the exterior problem may be valuable. In this study, an improved iterative algorithm that combines total variation minimization (TVM) with a region scalable fitting (RSF) model was developed for a unilateral off-centered scanning trajectory and can be utilized to inspect large-sized objects for defects. Experiments involving simulated phantoms and real projection data were conducted to validate the practicality of our algorithm. Furthermore, comparative experiments show that our algorithm outperforms others in suppressing the artifacts caused by truncated projection data and beam-hardening.

  5. Uncertainty principles for inverse source problems for electromagnetic and elastic waves

    Science.gov (United States)

    Griesmaier, Roland; Sylvester, John

    2018-06-01

    In isotropic homogeneous media, far fields of time-harmonic electromagnetic waves radiated by compactly supported volume currents, and elastic waves radiated by compactly supported body force densities can be modelled in very similar fashions. Both are projected restricted Fourier transforms of vector-valued source terms. In this work we generalize two types of uncertainty principles recently developed for far fields of scalar-valued time-harmonic waves in Griesmaier and Sylvester (2017 SIAM J. Appl. Math. 77 154–80) to this vector-valued setting. These uncertainty principles yield stability criteria and algorithms for splitting far fields radiated by collections of well-separated sources into the far fields radiated by individual source components, and for the restoration of missing data segments. We discuss proper regularization strategies for these inverse problems, provide stability estimates based on the new uncertainty principles, and comment on reconstruction schemes. A numerical example illustrates our theoretical findings.
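
    As a schematic illustration of the "projected restricted Fourier transform" structure mentioned above (constants and normalizations omitted; the notation here is an assumption, not the paper's), the electromagnetic far field radiated by a compactly supported volume current J can be written as

        F J(\hat{x}) = (I - \hat{x}\hat{x}^{\top}) \int_{B_R} e^{-\mathrm{i}\,k\,\hat{x}\cdot y}\, J(y)\,\mathrm{d}y, \qquad \hat{x} \in S^2,

    so only the components of the restricted Fourier transform orthogonal to the observation direction are visible; the elastic case involves analogous longitudinal and transverse projections with two different wavenumbers.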

  6. Reconstruction formula for a 3-d phaseless inverse scattering problem for the Schrodinger equation

    OpenAIRE

    Klibanov, Michael V.; Romanov, Vladimir G.

    2014-01-01

    The inverse scattering problem of the reconstruction of an unknown potential with compact support in the 3-d Schrödinger equation is considered. Only the modulus of the scattering complex-valued wave field is known, whereas the phase is unknown. It is shown that the unknown potential can be reconstructed via the inverse Radon transform. Therefore, a long-standing problem posed in 1977 by K. Chadan and P.C. Sabatier in their book "Inverse Problems in Quantum Scattering Theory" is solved.

  7. Hierarchical Bayesian Model for Simultaneous EEG Source and Forward Model Reconstruction (SOFOMORE)

    DEFF Research Database (Denmark)

    Stahlhut, Carsten; Mørup, Morten; Winther, Ole

    2009-01-01

    In this paper we propose an approach to handle forward model uncertainty for EEG source reconstruction. A stochastic forward model is motivated by the many uncertain contributions that form the forward propagation model including the tissue conductivity distribution, the cortical surface, and ele...

  8. Reconstruction of source location in a network of gravitational wave interferometric detectors

    International Nuclear Information System (INIS)

    Cavalier, Fabien; Barsuglia, Matteo; Bizouard, Marie-Anne; Brisson, Violette; Clapson, Andre-Claude; Davier, Michel; Hello, Patrice; Kreckelbergh, Stephane; Leroy, Nicolas; Varvella, Monica

    2006-01-01

    This paper deals with the reconstruction of the direction of a gravitational wave source using the detection made by a network of interferometric detectors, mainly the LIGO and Virgo detectors. We suppose that an event has been seen in coincidence using a filter applied on the three detector data streams. Using the arrival time (and its associated error) of the gravitational signal in each detector, the direction of the source in the sky is computed using a χ² minimization technique. For reasonably large signals (SNR > 4.5 in all detectors), the mean angular error between the real location and the reconstructed one is about 1 deg. We also investigate the effect of the network geometry assuming the same angular response for all interferometric detectors. It appears that the reconstruction quality is not uniform over the sky and is degraded when the source approaches the plane defined by the three detectors. Adding at least one other detector to the LIGO-Virgo network reduces the blind regions and, in the case of 6 detectors, a precision of less than 1 deg. on the source direction can be reached for 99% of the sky.
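
    A schematic chi-square triangulation over the sky direction from arrival times can be sketched as follows; the plane-wave timing model, the variable names and the multi-start optimisation are generic assumptions for illustration, not the authors' pipeline.

        import numpy as np
        from scipy.optimize import minimize

        C = 299792458.0  # speed of light, m/s

        def chi2(params, det_pos, t_obs, sigma):
            theta, phi, t0 = params
            n = np.array([np.sin(theta) * np.cos(phi),
                          np.sin(theta) * np.sin(phi),
                          np.cos(theta)])          # unit vector towards the source
            t_pred = t0 - det_pos @ n / C          # plane-wave arrival times
            return np.sum(((t_obs - t_pred) / sigma) ** 2)

        def locate(det_pos, t_obs, sigma):
            best = None
            # several starting directions to avoid local minima
            for th in np.linspace(0.1, np.pi - 0.1, 8):
                for ph in np.linspace(0.0, 2.0 * np.pi, 8, endpoint=False):
                    res = minimize(chi2, [th, ph, t_obs.mean()],
                                   args=(det_pos, t_obs, sigma),
                                   method="Nelder-Mead")
                    if best is None or res.fun < best.fun:
                        best = res
            return best.x  # (theta, phi, t0)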

  9. Performance bounds for sparse signal reconstruction with multiple side information [arXiv

    DEFF Research Database (Denmark)

    Luong, Huynh Van; Seiler, Jurgen; Kaup, Andre

    2016-01-01

    In the context of compressive sensing (CS), this paper considers the problem of reconstructing sparse signals with the aid of other given correlated sources as multiple side information (SI). To address this problem, we propose a reconstruction algorithm with multiple SI (RAMSI) that solves...

  10. Source Plane Reconstruction of the Bright Lensed Galaxy RCSGA 032727-132609

    Science.gov (United States)

    Sharon, Keren; Gladders, Michael D.; Rigby, Jane R.; Wuyts, Eva; Koester, Benjamin P.; Bayliss, Matthew B.; Barrientos, L. Felipe

    2011-01-01

    We present new HST/WFC3 imaging data of RCS2 032727-132609, a bright lensed galaxy at z=1.7 that is magnified and stretched by the lensing cluster RCS2 032727-132623. Using this new high-resolution imaging, we modify our previous lens model (which was based on ground-based data) to fully understand the lensing geometry, and use it to reconstruct the lensed galaxy in the source plane. This giant arc represents a unique opportunity to peer into 100-pc scale structures in a high redshift galaxy. This new source reconstruction will be crucial for a future analysis of the spatially-resolved rest-UV and rest-optical spectra of the brightest parts of the arc.

  11. Sparse deconvolution for the large-scale ill-posed inverse problem of impact force reconstruction

    Science.gov (United States)

    Qiao, Baijie; Zhang, Xingwu; Gao, Jiawei; Liu, Ruonan; Chen, Xuefeng

    2017-01-01

    Most previous regularization methods for solving the inverse problem of force reconstruction minimize the l2-norm of the desired force. However, these traditional regularization methods, such as Tikhonov regularization and truncated singular value decomposition, commonly fail to solve the large-scale ill-posed inverse problem at moderate computational cost. In this paper, taking into account the sparse characteristic of impact force, the idea of sparse deconvolution is first introduced to the field of impact force reconstruction and a general sparse deconvolution model of impact force is constructed. Second, a novel impact force reconstruction method based on the primal-dual interior point method (PDIPM) is proposed to solve such a large-scale sparse deconvolution model, where minimizing the l2-norm is replaced by minimizing the l1-norm. Meanwhile, the preconditioned conjugate gradient algorithm is used to compute the search direction of PDIPM with high computational efficiency. Finally, two experiments, including small-scale or medium-scale single impact force reconstruction and relatively large-scale consecutive impact force reconstruction, are conducted on a composite wind turbine blade and a shell structure to illustrate the advantage of PDIPM. Compared with Tikhonov regularization, PDIPM is more efficient, accurate and robust in both single impact force reconstruction and consecutive impact force reconstruction.
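
    The underlying l1-regularised deconvolution model can be illustrated with a much simpler solver than PDIPM; the sketch below uses ISTA (a proximal-gradient stand-in, not the paper's method), with H, y and lam as assumed names for the transfer matrix, the measured response and the regularisation weight.

        import numpy as np

        def ista_deconvolve(H, y, lam, n_iter=500):
            """Solve min_f 0.5*||H f - y||^2 + lam*||f||_1 by iterative
            soft-thresholding (ISTA)."""
            L = np.linalg.norm(H, 2) ** 2       # Lipschitz constant of the gradient
            f = np.zeros(H.shape[1])
            for _ in range(n_iter):
                g = H.T @ (H @ f - y)           # gradient of the data-fit term
                f = f - g / L
                f = np.sign(f) * np.maximum(np.abs(f) - lam / L, 0.0)  # soft threshold
            return f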

  12. Convex optimization problem prototyping for image reconstruction in computed tomography with the Chambolle–Pock algorithm

    DEFF Research Database (Denmark)

    Sidky, Emil Y.; Jørgensen, Jakob Heide; Pan, Xiaochuan

    2012-01-01

    The primal–dual optimization algorithm developed in Chambolle and Pock (CP) (2011 J. Math. Imag. Vis. 40 1–26) is applied to various convex optimization problems of interest in computed tomography (CT) image reconstruction. This algorithm allows for rapid prototyping of optimization problems for the purpose of designing iterative image reconstruction algorithms for CT. The primal–dual algorithm is briefly summarized in this paper, and its potential for prototyping is demonstrated by explicitly deriving CP algorithm instances for many optimization problems relevant to CT. An example application...
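
    As an illustration of the CP iteration pattern, the sketch below instantiates it for a deliberately simple special case, non-negativity-constrained least squares; K and b are assumed names for the system matrix and the sinogram, and the prox steps would change for other CT-relevant objectives.

        import numpy as np

        def chambolle_pock_ls(K, b, n_iter=200):
            """CP iteration for min_x 0.5*||K x - b||^2 subject to x >= 0,
            with step sizes chosen so that sigma * tau * ||K||^2 < 1."""
            L = np.linalg.norm(K, 2)
            sigma = tau = 0.99 / L
            x = np.zeros(K.shape[1])
            x_bar = x.copy()
            y = np.zeros(K.shape[0])
            for _ in range(n_iter):
                y = (y + sigma * (K @ x_bar - b)) / (1.0 + sigma)  # prox of F*
                x_new = np.clip(x - tau * (K.T @ y), 0.0, None)    # prox of G
                x_bar = 2.0 * x_new - x                            # extrapolation, theta = 1
                x = x_new
            return x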

  13. Reconstruction of Sound Source Pressures in an Enclosure Using the Phased Beam Tracing Method

    DEFF Research Database (Denmark)

    Jeong, Cheol-Ho; Ih, Jeong-Guon

    2009-01-01

    First, surfaces of an extended source are divided into reasonably small segments. From each source segment, one beam is projected into the field and all emitted beams are traced. Radiated beams from the source reach array sensors after traveling various paths including the wall reflections. Collecting all the pressure histories at the field points, source-observer relations can be constructed in matrix-vector form for each frequency. By multiplying the measured field data with the pseudo-inverse of the calculated transfer function, one obtains the distribution of source pressure. An omni-directional sphere and a cubic source in a rectangular enclosure were taken as examples in the simulation tests. A reconstruction error was investigated by Monte Carlo simulation in terms of field point locations. When the source information was reconstructed by the present method, it was shown that the sound power...

  14. A Solution to Hammer's X-ray Reconstruction Problem

    DEFF Research Database (Denmark)

    Gardner, Richard J.; Kiderlen, Markus

    2007-01-01

    We propose algorithms for reconstructing a planar convex body K from possibly noisy measurements of either its parallel X-rays taken in a fixed finite set of directions or its point X-rays taken at a fixed finite set of points, in known situations that guarantee a unique solution when the data is … to K in the Hausdorff metric as k tends to infinity. This solves, for the first time in the strongest sense, Hammer's X-ray problem published in 1963.

  15. Heat source reconstruction from noisy temperature fields using an optimised derivative Gaussian filter

    Science.gov (United States)

    Delpueyo, D.; Balandraud, X.; Grédiac, M.

    2013-09-01

    The aim of this paper is to present a post-processing technique based on a derivative Gaussian filter to reconstruct heat source fields from temperature fields measured by infrared thermography. Heat sources can be deduced from temperature variations thanks to the heat diffusion equation. Filtering and differentiating are key issues which are closely related here because the temperature fields which are processed are unavoidably noisy. We focus here only on the diffusion term because it is the most difficult term to estimate in the procedure, the reason being that it involves spatial second derivatives (a Laplacian for isotropic materials). This quantity can be reasonably estimated using a convolution of the temperature variation fields with second derivatives of a Gaussian function. The study is first based on synthetic temperature variation fields corrupted by added noise. The filter is optimised in order to reconstruct the heat source fields as well as possible. The influence of both the dimension and the level of a localised heat source is discussed. Obtained results are also compared with another type of processing based on an averaging filter. The second part of this study presents an application to experimental temperature fields measured with an infrared camera on a thin aluminium alloy plate. Heat sources are generated with an electric heating patch glued on the specimen surface. Heat source fields reconstructed from measured temperature fields are compared with the imposed heat sources. The obtained results illustrate the relevance of the derivative Gaussian filter to reliably extract heat sources from noisy temperature fields for the experimental thermomechanics of materials.
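
    In practice the convolution with second derivatives of a Gaussian can be carried out with standard tools; the sketch below (function and variable names are assumptions) estimates the Laplacian term of the heat diffusion equation from a noisy 2D temperature field.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def laplacian_from_noisy_field(theta, sigma_px, pixel_size):
            """Estimate the Laplacian of a noisy temperature field by convolving
            with second derivatives of a Gaussian of width sigma_px (in pixels);
            the result is rescaled to physical units by the pixel size."""
            d2x = gaussian_filter(theta, sigma=sigma_px, order=(2, 0))
            d2y = gaussian_filter(theta, sigma=sigma_px, order=(0, 2))
            return (d2x + d2y) / pixel_size ** 2

    The heat source field then follows from the heat diffusion equation by combining this diffusion term with the temporal derivative of the temperature.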

  16. EEG/MEG Source Reconstruction with Spatial-Temporal Two-Way Regularized Regression

    KAUST Repository

    Tian, Tian Siva; Huang, Jianhua Z.; Shen, Haipeng; Li, Zhimin

    2013-01-01

    In this work, we propose a spatial-temporal two-way regularized regression method for reconstructing neural source signals from EEG/MEG time course measurements. The proposed method estimates the dipole locations and amplitudes simultaneously

  17. Atmospheric inverse modeling via sparse reconstruction

    Science.gov (United States)

    Hase, Nils; Miller, Scot M.; Maaß, Peter; Notholt, Justus; Palm, Mathias; Warneke, Thorsten

    2017-10-01

    Many applications in atmospheric science involve ill-posed inverse problems. A crucial component of many inverse problems is the proper formulation of a priori knowledge about the unknown parameters. In most cases, this knowledge is expressed as a Gaussian prior. This formulation often performs well at capturing smoothed, large-scale processes but is often ill equipped to capture localized structures like large point sources or localized hot spots. Over the last decade, scientists from a diverse array of applied mathematics and engineering fields have developed sparse reconstruction techniques to identify localized structures. In this study, we present a new regularization approach for ill-posed inverse problems in atmospheric science. It is based on Tikhonov regularization with sparsity constraint and allows bounds on the parameters. We enforce sparsity using a dictionary representation system. We analyze its performance in an atmospheric inverse modeling scenario by estimating anthropogenic US methane (CH4) emissions from simulated atmospheric measurements. Different measures indicate that our sparse reconstruction approach is better able to capture large point sources or localized hot spots than other methods commonly used in atmospheric inversions. It captures the overall signal equally well but adds details on the grid scale. This feature can be of value for any inverse problem with point or spatially discrete sources. We show an example for source estimation of synthetic methane emissions from the Barnett shale formation.
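
    A schematic form of such a sparsity-regularised inversion (the notation is assumed here for illustration and does not reproduce the paper's exact formulation) is

        \hat{x} = \arg\min_{x} \tfrac{1}{2}\,\| y - H D x \|_2^2 + \lambda \| x \|_1 \quad \text{subject to} \quad \ell \le D x \le u,

    where y are the atmospheric observations, H the transport operator, D the dictionary in which the emission field s = D x is represented sparsely, and the bounds encode physical constraints on the emissions.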

  18. EXTENDED OPERATION PROBLEMS AND RECONSTRUCTION OF LARGE-PANEL FIVE-STOREY BUILDINGS OF THE 1950s-1960s

    Directory of Open Access Journals (Sweden)

    BOLSHAKOV V. I.,

    2016-01-01

    Raising of the problem. In many regions, housing more than half a century old is still in use. According to research by the analytical center of the Association of Ukrainian Cities, the country today has 25.5 thousand houses of the first mass-construction series (large-panel, block and brick buildings) with a total area of 72 million m², most of which require reconstruction and modernization. In general, most of the housing stock of Ukraine is in poor technical condition because of insufficient funding, and the tendency towards premature aging of the housing stock persists. One of the major problems of the modern construction industry is extending the service life of this housing, in particular the buildings of the mass-construction era of the 1950s-1960s, known as "Khrushchevki". According to the State Statistics Service of Ukraine, the deterioration of residential buildings in Ukraine amounts to 47.2%, which calls for immediate action. At first view, the most acceptable approach appears to be the reconstruction of the "Khrushchevki". However, reconstruction is a complex problem for the construction industry: the economic component, the social factor and the views of the residents of these buildings must all be reconciled to produce a technologically and economically viable result. Analysis of publications. The problem of "Khrushchevki" reconstruction is the subject of ongoing research by leading builders of Ukraine. Researchers have addressed both the technological problems [1-3] and the economic components [4-6], which together give an idea of the scale of work required to overcome the impending crisis. The purpose of the article. To define the main problems of operating the large-panel five-storey buildings of the 1950s-1960s and their residents' views on existing inconveniences, as well as the associated economic, technological and legal problems in implementing building reconstruction. Conclusions

  19. The Reconstruction Toolkit (RTK), an open-source cone-beam CT reconstruction toolkit based on the Insight Toolkit (ITK)

    International Nuclear Information System (INIS)

    Rit, S; Vila Oliva, M; Sarrut, D; Brousmiche, S; Labarbe, R; Sharp, G C

    2014-01-01

    We propose the Reconstruction Toolkit (RTK, http://www.openrtk.org), an open-source toolkit for fast cone-beam CT reconstruction, based on the Insight Toolkit (ITK) and using GPU code extracted from Plastimatch. RTK is developed by an open consortium (see affiliations) under the non-contaminating Apache 2.0 license. The quality of the platform is checked daily with regression tests in partnership with Kitware, the company supporting ITK. Several features are already available: Elekta, Varian and IBA inputs, multi-threaded Feldkamp-Davis-Kress reconstruction on CPU and GPU, Parker short scan weighting, multi-threaded CPU and GPU forward projectors, etc. Each feature is either accessible through command line tools or C++ classes that can be included in independent software. A MIDAS community has been opened to share CatPhan datasets of several vendors (Elekta, Varian and IBA). RTK will be used in the upcoming cone-beam CT scanner developed by IBA for proton therapy rooms. Many features are under development: new input format support, iterative reconstruction, hybrid Monte Carlo / deterministic CBCT simulation, etc. RTK has been built to freely share tomographic reconstruction developments between researchers and is open for new contributions.

  20. Simultaneous EEG Source and Forward Model Reconstruction (SOFOMORE) using a Hierarchical Bayesian Approach

    DEFF Research Database (Denmark)

    Stahlhut, Carsten; Mørup, Morten; Winther, Ole

    2011-01-01

    We present an approach to handle forward model uncertainty for EEG source reconstruction. A stochastic forward model representation is motivated by the many random contributions to the path from sources to measurements including the tissue conductivity distribution, the geometry of the cortical s...

  1. EEG source reconstruction reveals frontal-parietal dynamics of spatial conflict processing

    NARCIS (Netherlands)

    Cohen, M.X.; Ridderinkhof, K.R.

    2013-01-01

    Cognitive control requires the suppression of distracting information in order to focus on task-relevant information. We applied EEG source reconstruction via time-frequency linear constrained minimum variance beamforming to help elucidate the neural mechanisms involved in spatial conflict

  2. Numerical method in reproducing kernel space for an inverse source problem for the fractional diffusion equation

    International Nuclear Information System (INIS)

    Wang, Wenyan; Han, Bo; Yamamoto, Masahiro

    2013-01-01

    We propose a new numerical method in a reproducing kernel Hilbert space to solve an inverse source problem for a two-dimensional fractional diffusion equation, where we are required to determine an x-dependent function in a source term from data at the final time. The exact solution is represented in the form of a series and the approximate solution is obtained by truncating the series. Furthermore, a technique is proposed to improve some of the existing methods. We prove that the numerical method is convergent under an a priori assumption on the regularity of solutions. The method is simple to implement. Our numerical results show that our method is effective and that it is robust against noise in L2-space in reconstructing a source function. (paper)

  3. Time-stretch microscopy based on time-wavelength sequence reconstruction from wideband incoherent source

    International Nuclear Information System (INIS)

    Zhang, Chi; Xu, Yiqing; Wei, Xiaoming; Tsia, Kevin K.; Wong, Kenneth K. Y.

    2014-01-01

    Time-stretch microscopy has emerged as an ultrafast optical imaging concept offering an unprecedented combination of imaging speed and sensitivity. However, a dedicated wideband and coherent optical pulse source with high shot-to-shot stability has been required for time-wavelength mapping—the enabling process for ultrahigh-speed wavelength-encoded image retrieval. From the practical point of view, exploring methods to relax the stringent requirements (e.g., temporal stability and coherence) on the source of time-stretch microscopy is thus of great value. In this paper, we demonstrate time-stretch microscopy by reconstructing the time-wavelength mapping sequence from a wideband incoherent source. Utilizing the time-lens focusing mechanism mediated by a narrow-band pulse source, this approach allows generation of a wideband incoherent source, with the spectral efficiency enhanced by a factor of 18. As a proof-of-principle demonstration, time-stretch imaging with a MHz scan rate and diffraction-limited resolution is achieved based on the wideband incoherent source. We note that the concept of time-wavelength sequence reconstruction from a wideband incoherent source can also be generalized to any high-speed optical real-time measurement where wavelength acts as the information carrier.

  4. Network reconstruction via graph blending

    Science.gov (United States)

    Estrada, Rolando

    2016-05-01

    Graphs estimated from empirical data are often noisy and incomplete due to the difficulty of faithfully observing all the components (nodes and edges) of the true graph. This problem is particularly acute for large networks where the number of components may far exceed available surveillance capabilities. Errors in the observed graph can render subsequent analyses invalid, so it is vital to develop robust methods that can minimize these observational errors. Errors in the observed graph may include missing and spurious components, as well as fused (multiple nodes are merged into one) and split (a single node is misinterpreted as many) nodes. Traditional graph reconstruction methods are only able to identify missing or spurious components (primarily edges, and to a lesser degree nodes), so we developed a novel graph blending framework that allows us to cast the full estimation problem as a simple edge addition/deletion problem. Armed with this framework, we systematically investigate the viability of various topological graph features, such as the degree distribution or the clustering coefficients, and existing graph reconstruction methods for tackling the full estimation problem. Our experimental results suggest that incorporating any topological feature as a source of information actually hinders reconstruction accuracy. We provide a theoretical analysis of this phenomenon and suggest several avenues for improving this estimation problem.

  5. Three-dimensional reconstruction of neutron, gamma-ray, and x-ray sources using spherical harmonic decomposition

    Science.gov (United States)

    Volegov, P. L.; Danly, C. R.; Fittinghoff, D.; Geppert-Kleinrath, V.; Grim, G.; Merrill, F. E.; Wilde, C. H.

    2017-11-01

    Neutron, gamma-ray, and x-ray imaging are important diagnostic tools at the National Ignition Facility (NIF) for measuring the two-dimensional (2D) size and shape of the neutron-producing region, for probing the remaining ablator, and for measuring the extent of the DT plasmas during the stagnation phase of Inertial Confinement Fusion implosions. Due to the difficulty and expense of building these imagers, at most only a few two-dimensional projection images will be available to reconstruct the three-dimensional (3D) sources. In this paper, we present a technique that has been developed for the 3D reconstruction of neutron, gamma-ray, and x-ray sources from a minimal number of 2D projections using spherical harmonics decomposition. We present the detailed algorithms used for this characterization and the results of reconstructed sources from experimental neutron and x-ray data collected at OMEGA and NIF.
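
    Schematically (with assumed notation), the emission distribution is expanded in spherical harmonics,

        \rho(r, \theta, \varphi) = \sum_{l=0}^{L} \sum_{m=-l}^{l} f_{lm}(r)\, Y_{lm}(\theta, \varphi),

    and the radial coefficient profiles f_{lm} are adjusted so that the forward-projected model matches the few available 2D images; truncating at a low maximum degree L keeps the fit well determined with only two or three views.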

  6. Versatility of the CFR algorithm for limited angle reconstruction

    International Nuclear Information System (INIS)

    Fujieda, I.; Heiskanen, K.; Perez-Mendez, V.

    1990-01-01

    The constrained Fourier reconstruction (CFR) algorithm and the iterative reconstruction-reprojection (IRR) algorithm are evaluated based on their accuracy for three types of limited angle reconstruction problems. The CFR algorithm performs better for problems such as X-ray CT imaging of a nuclear reactor core with one large data gap due to structural blocking of the source and detector pair. For gated heart imaging by X-ray CT, and for radioisotope distribution imaging by PET or SPECT using a polygonal array of gamma cameras with insensitive gaps between camera boundaries, the IRR algorithm has a slight advantage over the CFR algorithm, but the difference is not significant.

  7. On a problem of reconstruction of a discontinuous function by its Radon transform

    Energy Technology Data Exchange (ETDEWEB)

    Derevtsov, Evgeny Yu.; Maltseva, Svetlana V.; Svetov, Ivan E. [Sobolev Institute of Mathematics of SB RAS, 630090, Novosibirsk (Russian Federation); Novosibirsk State University, 630090, Novosibirsk (Russian Federation); Sultanov, Murat A. [H. A. Yassawe International Kazakh-Turkish University, 161200, Turkestan (Kazakhstan)

    2016-08-10

    A problem of reconstruction of a discontinuous function from its Radon transform is considered. One of the approaches to the numerical solution of the problem consists of the following sequential steps: visualization of the set of breaking points; identification of this set; determination of the jump values; elimination of the discontinuities. We consider three of the listed problems, all except the determination of jump values. The problems are investigated by mathematical modeling using numerical experiments. The simulation results are satisfactory and give hope for the further development of the approach.

  8. Atmospheric dispersion and inverse modelling for the reconstruction of accidental sources of pollutants

    International Nuclear Information System (INIS)

    Winiarek, Victor

    2014-01-01

    Uncontrolled releases of pollutant in the atmosphere may be the consequence of various situations: accidents, for instance leaks or explosions in an industrial plant, or terrorist attacks such as biological bombs, especially in urban areas. In the event of such situations, authorities' objectives are various: predict the contaminated zones to apply first countermeasures such as evacuation of concerned population; determine the source location; assess the long-term polluted areas, for instance by deposition of persistent pollutants in the soil. To achieve these objectives, numerical models can be used to model the atmospheric dispersion of pollutants. We will first present the different processes that govern the transport of pollutants in the atmosphere, then the different numerical models that are commonly used in this context. The choice between these models mainly depends of the scale and the details one seeks to take into account. We will then present several inverse modeling methods to estimate the emission as well as statistical methods to estimate prior errors, to which the inversion is very sensitive. Several case studies are presented, using synthetic data as well as real data such as the estimation of source terms from the Fukushima accident in March 2011. From our results, we estimate the Cesium-137 emission to be between 12 and 19 PBq with a standard deviation between 15 and 65% and the Iodine-131 emission to be between 190 and 380 PBq with a standard deviation between 5 and 10%. Concerning the localization of an unknown source of pollutant, two strategies can be considered. On one hand parametric methods use a limited number of parameters to characterize the source term to be reconstructed. To do so, strong assumptions are made on the nature of the source. The inverse problem is hence to estimate these parameters. On the other hand nonparametric methods attempt to reconstruct a full emission field. Several parametric and nonparametric methods are

  9. Image reconstruction from multiple fan-beam projections

    International Nuclear Information System (INIS)

    Jelinek, J.; Overton, T.R.

    1984-01-01

    Special-purpose third-generation fan-beam CT systems can be greatly simplified by limiting the number of detectors, but this requires a different mode of data collection to provide a set of projections appropriate to the required spatial resolution in the reconstructed image. Repeated rotation of the source-detector fan, combined with a shift of the detector array and perhaps an offset of the source with respect to the fan's axis after each 360° rotation (cycle), provides a fairly general pattern of projection space filling. The authors investigated the problem of optimal data-collection geometry for a multiple-rotation fan-beam scanner and of the corresponding reconstruction algorithm.

  10. Automatic selection of optimal systolic and diastolic reconstruction windows for dual-source CT coronary angiography

    International Nuclear Information System (INIS)

    Seifarth, H.; Puesken, M.; Wienbeck, S.; Maintz, D.; Heindel, W.; Juergens, K.U.; Fischbach, R.

    2009-01-01

    The aim of this study was to assess the performance of a motion-map algorithm that automatically determines optimal reconstruction windows for dual-source coronary CT angiography. In datasets from 50 consecutive patients, optimal systolic and diastolic reconstruction windows were determined using the motion-map algorithm. For manual determination of the optimal reconstruction window, datasets were reconstructed in 5% steps throughout the RR interval. Motion artifacts were rated for each major coronary vessel using a five-point scale. Mean motion scores using the motion-map algorithm were 2.4 ± 0.8 for systolic reconstructions and 1.9 ± 0.8 for diastolic reconstructions. Using the manual approach, overall motion scores were significantly better (1.9 ± 0.5 and 1.7 ± 0.6, p 90% of cases using either approach. Using the automated approach, there was a negative correlation between heart rate and motion scores for systolic reconstructions (ρ = -0.26, p 80 bpm (systolic reconstruction). (orig.)

  11. Iterative Reconstruction Methods for Inverse Problems in Tomography with Hybrid Data

    DEFF Research Database (Denmark)

    Sherina, Ekaterina

    The goal of these modalities is to quantify physical parameters of materials or tissues inside an object from given interior data, which is measured everywhere inside the object. The advantage of these modalities is that large variations in physical parameters can be resolved and therefore, they have … data is precisely the reason why reconstructions with a high contrast and a high resolution can be expected. The main contributions of this thesis consist in formulating the underlying mathematical problems with interior data as nonlinear operator equations, theoretically analysing them within … iteration and the Levenberg-Marquardt method are employed for solving the problems. The first problem considered in this thesis is a problem of conductivity estimation from interior measurements of the power density, known as Acousto-Electrical Tomography. A special case of limited angle tomography...

  12. The use of hamstring tendon graft for the anterior cruciate ligament reconstruction (benefits, problems and their solutions)

    Directory of Open Access Journals (Sweden)

    V. V. Slastinin

    2017-01-01

    The search for the optimal graft for anterior cruciate ligament reconstruction is ongoing. Donor site morbidity remains one of the major problems when using autografts. The article provides an overview of the advantages and disadvantages of using hamstring tendon autografts for anterior cruciate ligament reconstruction, and the ways of solving the problems associated with using such types of grafts.

  13. Accurate Reconstruction of the Roman Circus in Milan by Georeferencing Heterogeneous Data Sources with GIS

    Directory of Open Access Journals (Sweden)

    Gabriele Guidi

    2017-09-01

    This paper presents the methodological approach and the actual workflow for creating the 3D digital reconstruction in time of the ancient Roman Circus of Milan, which is presently covered completely by the urban fabric of the modern city. The diachronic reconstruction is based on a proper mix of quantitative data originated by current 3D surveys and historical sources, such as ancient maps, drawings, archaeological reports, restriction decrees, and old photographs. When possible, such heterogeneous sources have been georeferenced and stored in a GIS system. In this way the sources have been analyzed in depth, allowing the deduction of geometrical information not explicitly revealed by the material available. A reliable reconstruction of the area in different historical periods has therefore been hypothesized. This research has been carried out in the framework of the project Cultural Heritage Through Time—CHT2, funded by the Joint Programming Initiative on Cultural Heritage (JPI-CH), supported by the Italian Ministry for Cultural Heritage (MiBACT), the Italian Ministry for University and Research (MIUR), and the European Commission.

  14. Inverse source problems for eddy current equations

    International Nuclear Information System (INIS)

    Rodríguez, Ana Alonso; Valli, Alberto; Camaño, Jessika

    2012-01-01

    We study the inverse source problem for the eddy current approximation of Maxwell equations. As for the full system of Maxwell equations, we show that a volume current source cannot be uniquely identified by knowledge of the tangential components of the electromagnetic fields on the boundary, and we characterize the space of non-radiating sources. On the other hand, we prove that the inverse source problem has a unique solution if the source is supported on the boundary of a subdomain or if it is the sum of a finite number of dipoles. We address the applicability of this result for the localization of brain activity from electroencephalography and magnetoencephalography measurements. (paper)

  15. EEG/MEG Source Reconstruction with Spatial-Temporal Two-Way Regularized Regression

    KAUST Repository

    Tian, Tian Siva

    2013-07-11

    In this work, we propose a spatial-temporal two-way regularized regression method for reconstructing neural source signals from EEG/MEG time course measurements. The proposed method estimates the dipole locations and amplitudes simultaneously through minimizing a single penalized least squares criterion. The novelty of our methodology is the simultaneous consideration of three desirable properties of the reconstructed source signals, that is, spatial focality, spatial smoothness, and temporal smoothness. The desirable properties are achieved by using three separate penalty functions in the penalized regression framework. Specifically, we impose a roughness penalty in the temporal domain for temporal smoothness, and a sparsity-inducing penalty and a graph Laplacian penalty in the spatial domain for spatial focality and smoothness. We develop a computational efficient multilevel block coordinate descent algorithm to implement the method. Using a simulation study with several settings of different spatial complexity and two real MEG examples, we show that the proposed method outperforms existing methods that use only a subset of the three penalty functions. © 2013 Springer Science+Business Media New York.
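
    One schematic way to write a criterion with these three ingredients (the exact penalty forms used in the paper may differ) is

        \hat{S} = \arg\min_{S} \| Y - L S \|_F^2 + \lambda_1 \sum_{i} \| s_{i\cdot} \|_2 + \lambda_2\, \mathrm{tr}\!\left( S^{\top} \Lambda S \right) + \lambda_3 \| S D^{\top} \|_F^2,

    where Y is the sensor time course, L the lead field, s_{i.} the time course of the i-th dipole (the group-sparsity term induces spatial focality), \Lambda a spatial graph Laplacian (spatial smoothness), and D a temporal difference operator (temporal smoothness).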

  16. Noise-tolerance analysis for detection and reconstruction of absorbing inhomogeneities with diffuse optical tomography using single- and phase-correlated dual-source schemes

    International Nuclear Information System (INIS)

    Kanmani, B; Vasu, R M

    2007-01-01

    An iterative reconstruction procedure is used to invert intensity data from both single- and phase-correlated dual-source illuminations for absorption inhomogeneities. The Jacobian for the dual source is constructed by an algebraic addition of the Jacobians estimated for the two sources separately. By numerical simulations, it is shown that the dual-source scheme performs better than the single-source system with regard to (i) noise tolerance in the data and (ii) the ability to reconstruct smaller and lower-contrast objects. The quality of reconstructions from single-source data, as indicated by the mean-square error at convergence, is markedly poorer than that of their dual-source counterpart when the noise in the data is in excess of 2%. With fixed contrast and decreasing inhomogeneity diameter, our simulations showed that, for diameters below 7 mm, the dual-source scheme has a higher percentage contrast recovery than the single-source scheme. Similarly, the dual-source scheme reconstructs to a higher percentage contrast recovery from a lower-contrast inhomogeneity, in comparison to the single-source scheme.

  17. Reconstruction of far-field tsunami amplitude distributions from earthquake sources

    Science.gov (United States)

    Geist, Eric L.; Parsons, Thomas E.

    2016-01-01

    The probability distribution of far-field tsunami amplitudes is explained in relation to the distribution of seismic moment at subduction zones. Tsunami amplitude distributions at tide gauge stations follow a similar functional form, well described by a tapered Pareto distribution that is parameterized by a power-law exponent and a corner amplitude. Distribution parameters are first established for eight tide gauge stations in the Pacific, using maximum likelihood estimation. A procedure is then developed to reconstruct the tsunami amplitude distribution that consists of four steps: (1) define the distribution of seismic moment at subduction zones; (2) establish a source-station scaling relation from regression analysis; (3) transform the seismic moment distribution to a tsunami amplitude distribution for each subduction zone; and (4) mix the transformed distribution for all subduction zones to an aggregate tsunami amplitude distribution specific to the tide gauge station. The tsunami amplitude distribution is adequately reconstructed for four tide gauge stations using globally constant seismic moment distribution parameters established in previous studies. In comparisons to empirical tsunami amplitude distributions from maximum likelihood estimation, the reconstructed distributions consistently exhibit higher corner amplitude values, implying that in most cases, the empirical catalogs are too short to include the largest amplitudes. Because the reconstructed distribution is based on a catalog of earthquakes that is much larger than the tsunami catalog, it is less susceptible to the effects of record-breaking events and more indicative of the actual distribution of tsunami amplitudes.
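
    For reference, a common parameterization of the tapered Pareto survival function used for such amplitude and seismic-moment distributions (stated here as background, not quoted from the paper) is

        P(A > a) = \left( \frac{a_t}{a} \right)^{\beta} \exp\!\left( \frac{a_t - a}{a_c} \right), \qquad a \ge a_t,

    where \beta is the power-law exponent, a_c the corner amplitude beyond which the exponential taper dominates, and a_t the observational threshold.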

  18. Reconstruction of Chernobyl source parameters using gamma dose rate measurements in town Pripjat

    Directory of Open Access Journals (Sweden)

    M. M. Talerko

    2010-06-01

    Using mathematical modeling of atmospheric transport, the dispersion of the accidental release from the Chernobyl NPP towards the town of Pripjat during the period from 26 to 29 April 1986 has been calculated. Gamma dose rate measurements made at 31 points in the town were used. Based on the solution of the atmospheric transport inverse problem, the Chernobyl source parameters, including release intensity and effective source height, have been reconstructed. The contribution of the main dose-forming radionuclides to the exposure dose during the first 40 hours after the accident (the period of population residence in the town before the evacuation) has been estimated. According to the calculations, the 131I deposition density averaged over the town territory was about 5.2 × 10⁴ kBq/m² (on 29.04.86). The minimum and maximum 131I deposition values were 2.8 × 10⁴ kBq/m² (western part of the town, 4.5 km from the unit) and 1.2 × 10⁵ kBq/m² (north-eastern part of the town, 2 km from the unit), respectively. At the moment of the evacuation on April 27, deposition values were about 90 percent of these values.

  19. 40 CFR Table 1 to Subpart Oooo of... - Emission Limits for New or Reconstructed and Existing Affected Sources in the Printing, Coating...

    Science.gov (United States)

    2010-07-01

    ... Reconstructed and Existing Affected Sources in the Printing, Coating and Dyeing of Fabrics and Other Textiles... SOURCE CATEGORIES National Emission Standards for Hazardous Air Pollutants: Printing, Coating, and Dyeing...—Emission Limits for New or Reconstructed and Existing Affected Sources in the Printing, Coating and Dyeing...

  20. Dynamic MRI reconstruction as a moment problem. Pt. 1

    International Nuclear Information System (INIS)

    Zwaan, M.

    1989-03-01

    This paper deals with some mathematical aspects of magnetic resonance imaging (MRI) of the beating heart. Some of the basic theory behind magnetic resonance is given. Of special interest is the mathematical theory concerning MRI, and the ideas and problems are formulated in mathematical terms. If one uses MRI to measure and display a so-called 'dynamic' organ, like the beating heart, the situation is more complex than in the case of a static organ. A strategy is described for how a cross section of a beating human heart is measured in practice and how the measurements are arranged before an image can be made. This technique is called retrospective synchronization. If the beating heart is measured and displayed with the help of this method, artefacts often deteriorate the image quality. Some of these artefacts have a physical cause, while others are caused by the reconstruction algorithm. Perhaps mathematical techniques may be used to improve the algorithms which are currently used in practice. The aim of this paper is not to solve problems, but to give an adequate mathematical formulation of the inversion problem concerning retrospective synchronization. (author). 3 refs.; 4 figs.

  1. Iterative image reconstruction for positron emission tomography based on a detector response function estimated from point source measurements

    International Nuclear Information System (INIS)

    Tohme, Michel S; Qi Jinyi

    2009-01-01

    The accuracy of the system model in an iterative reconstruction algorithm greatly affects the quality of reconstructed positron emission tomography (PET) images. For efficient computation in reconstruction, the system model in PET can be factored into a product of a geometric projection matrix and sinogram blurring matrix, where the former is often computed based on analytical calculation, and the latter is estimated using Monte Carlo simulations. Direct measurement of a sinogram blurring matrix is difficult in practice because of the requirement of a collimated source. In this work, we propose a method to estimate the 2D blurring kernels from uncollimated point source measurements. Since the resulting sinogram blurring matrix stems from actual measurements, it can take into account the physical effects in the photon detection process that are difficult or impossible to model in a Monte Carlo (MC) simulation, and hence provide a more accurate system model. Another advantage of the proposed method over MC simulation is that it can easily be applied to data that have undergone a transformation to reduce the data size (e.g., Fourier rebinning). Point source measurements were acquired with high count statistics in a relatively fine grid inside the microPET II scanner using a high-precision 2D motion stage. A monotonically convergent iterative algorithm has been derived to estimate the detector blurring matrix from the point source measurements. The algorithm takes advantage of the rotational symmetry of the PET scanner and explicitly models the detector block structure. The resulting sinogram blurring matrix is incorporated into a maximum a posteriori (MAP) image reconstruction algorithm. The proposed method has been validated using a 3 x 3 line phantom, an ultra-micro resolution phantom and a 22 Na point source superimposed on a warm background. The results of the proposed method show improvements in both resolution and contrast ratio when compared with the MAP

  2. MEG/EEG source reconstruction, statistical evaluation, and visualization with NUTMEG.

    Science.gov (United States)

    Dalal, Sarang S; Zumer, Johanna M; Guggisberg, Adrian G; Trumpis, Michael; Wong, Daniel D E; Sekihara, Kensuke; Nagarajan, Srikantan S

    2011-01-01

    NUTMEG is a source analysis toolbox geared towards cognitive neuroscience researchers using MEG and EEG, including intracranial recordings. Evoked and unaveraged data can be imported to the toolbox for source analysis in either the time or time-frequency domains. NUTMEG offers several variants of adaptive beamformers, probabilistic reconstruction algorithms, as well as minimum-norm techniques to generate functional maps of spatiotemporal neural source activity. Lead fields can be calculated from single and overlapping sphere head models or imported from other software. Group averages and statistics can be calculated as well. In addition to data analysis tools, NUTMEG provides a unique and intuitive graphical interface for visualization of results. Source analyses can be superimposed onto a structural MRI or headshape to provide a convenient visual correspondence to anatomy. These results can also be navigated interactively, with the spatial maps and source time series or spectrogram linked accordingly. Animations can be generated to view the evolution of neural activity over time. NUTMEG can also display brain renderings and perform spatial normalization of functional maps using SPM's engine. As a MATLAB package, the end user may easily link with other toolboxes or add customized functions.

  3. EEG source reconstruction reveals frontal-parietal dynamics of spatial conflict processing

    OpenAIRE

    Cohen, M.X.; Ridderinkhof, K.R.

    2013-01-01

    Cognitive control requires the suppression of distracting information in order to focus on task-relevant information. We applied EEG source reconstruction via time-frequency linear constrained minimum variance beamforming to help elucidate the neural mechanisms involved in spatial conflict processing. Human subjects performed a Simon task, in which conflict was induced by incongruence between spatial location and response hand. We found an early (~200 ms post-stimulus) conflict modulation in ...

  4. Some statistical problems inherent in radioactive-source detection

    International Nuclear Information System (INIS)

    Barnett, C.S.

    1978-01-01

    Some of the statistical questions associated with problems of detecting random-point-process signals embedded in random-point-process noise are examined. An example of such a problem is that of searching for a lost radioactive source with a moving detection system. The emphasis is on theoretical questions, but some experimental and Monte Carlo results are used to test the theoretical results. Several idealized binary decision problems are treated by starting with simple, specific situations and progressing toward more general problems. This sequence of decision problems culminates in the minimum-cost-expectation rule for deciding between two Poisson processes with arbitrary intensity functions. As an example, this rule is then specialized to the detector-passing-a-point-source decision problem. Finally, Monte Carlo techniques are used to develop and test one estimation procedure: the maximum-likelihood estimation of a parameter in the intensity function of a Poisson process. For the Monte Carlo test this estimation procedure is specialized to the detector-passing-a-point-source case. Introductory material from probability theory is included so as to make the report accessible to those not especially conversant with probabilistic concepts and methods. 16 figures

  5. A variational study on BRDF reconstruction in a structured light scanner

    DEFF Research Database (Denmark)

    Nielsen, Jannik Boll; Stets, Jonathan Dyssel; Lyngby, Rasmus Ahrenkiel

    2017-01-01

    Time-efficient acquisition of reflectance behavior together with surface geometry is a challenging problem. In this study, we investigate the impact of system parameter uncertainties when incorporating a data-driven BRDF reconstruction approach into the standard pipeline of a structured light setup. Results show that while uncertainties in vertex positions and normals have a high impact on the quality of reconstructed BRDFs, object geometry and light source properties have very little influence on the reconstructed BRDFs. With this analysis, practitioners now have insight in the tolerances required for accurate BRDF acquisition to work.

  6. A compressed sensing based reconstruction algorithm for synchrotron source propagation-based X-ray phase contrast computed tomography

    Energy Technology Data Exchange (ETDEWEB)

    Melli, Seyed Ali, E-mail: sem649@mail.usask.ca [Department of Electrical and Computer Engineering, University of Saskatchewan, Saskatoon, SK (Canada); Wahid, Khan A. [Department of Electrical and Computer Engineering, University of Saskatchewan, Saskatoon, SK (Canada); Babyn, Paul [Department of Medical Imaging, University of Saskatchewan, Saskatoon, SK (Canada); Montgomery, James [College of Medicine, University of Saskatchewan, Saskatoon, SK (Canada); Snead, Elisabeth [Western College of Veterinary Medicine, University of Saskatchewan, Saskatoon, SK (Canada); El-Gayed, Ali [College of Medicine, University of Saskatchewan, Saskatoon, SK (Canada); Pettitt, Murray; Wolkowski, Bailey [College of Agriculture and Bioresources, University of Saskatchewan, Saskatoon, SK (Canada); Wesolowski, Michal [Department of Medical Imaging, University of Saskatchewan, Saskatoon, SK (Canada)

    2016-01-11

    Synchrotron source propagation-based X-ray phase contrast computed tomography is increasingly used in pre-clinical imaging. However, it typically requires a large number of projections, and subsequently a large radiation dose, to produce high quality images. To improve the applicability of this imaging technique, reconstruction algorithms that can reduce the radiation dose and acquisition time without degrading image quality are needed. The proposed research focused on using a novel combination of Douglas–Rachford splitting and randomized Kaczmarz algorithms to solve large-scale total variation based optimization in a compressed sensing framework to reconstruct 2D images from a reduced number of projections. Visual assessment and quantitative performance evaluations of a synthetic abdomen phantom and real reconstructed image of an ex-vivo slice of canine prostate tissue demonstrate that the proposed algorithm is competitive in reconstruction process compared with other well-known algorithms. An additional potential benefit of reducing the number of projections would be reduction of time for motion artifact to occur if the sample moves during image acquisition. Use of this reconstruction algorithm to reduce the required number of projections in synchrotron source propagation-based X-ray phase contrast computed tomography is an effective form of dose reduction that may pave the way for imaging of in-vivo samples.
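
    The randomized Kaczmarz component can be sketched in a few lines; this is a generic stand-in under assumed notation (A the projection matrix, b the measured projections), and it omits the Douglas–Rachford splitting and the total-variation prior that the paper couples it with.

        import numpy as np

        def randomized_kaczmarz(A, b, n_iter=10000, seed=0):
            """Project the iterate onto one randomly chosen row's hyperplane per
            step, rows being sampled with probability proportional to their
            squared norm."""
            rng = np.random.default_rng(seed)
            row_norms = (A ** 2).sum(axis=1)
            probs = row_norms / row_norms.sum()
            x = np.zeros(A.shape[1])
            for _ in range(n_iter):
                i = rng.choice(A.shape[0], p=probs)
                x += (b[i] - A[i] @ x) / row_norms[i] * A[i]
            return x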

  7. Monte Carlo source convergence and the Whitesides problem

    International Nuclear Information System (INIS)

    Blomquist, R. N.

    2000-01-01

    The issue of fission source convergence in Monte Carlo eigenvalue calculations is of interest because of the potential consequences of erroneous criticality safety calculations. In this work, the authors compare two different techniques to improve the source convergence behavior of standard Monte Carlo calculations applied to challenging source convergence problems. The first method, super-history powering, attempts to avoid discarding important fission sites between generations by delaying stochastic sampling of the fission site bank until after several generations of multiplication. The second method, stratified sampling of the fission site bank, explicitly keeps the important sites even if conventional sampling would have eliminated them. The test problems are variants of Whitesides' Criticality of the World problem in which the fission site phase space was intentionally undersampled in order to induce marginally intolerable variability in local fission site populations. Three variants of the problem were studied, each with a different degree of coupling between fissionable pieces. Both the superhistory powering method and the stratified sampling method were shown to improve convergence behavior, although stratified sampling is more robust for the extreme case of no coupling. Neither algorithm completely eliminates the loss of the most important fissionable piece, and if coupling is absent, the lost piece cannot be recovered unless its sites from earlier generations have been retained. Finally, criteria for measuring source convergence reliability are proposed and applied to the test problems

  8. Technical Note: FreeCT_ICD: An Open Source Implementation of a Model-Based Iterative Reconstruction Method using Coordinate Descent Optimization for CT Imaging Investigations.

    Science.gov (United States)

    Hoffman, John M; Noo, Frédéric; Young, Stefano; Hsieh, Scott S; McNitt-Gray, Michael

    2018-06-01

    To facilitate investigations into the impacts of acquisition and reconstruction parameters on quantitative imaging, radiomics and CAD using CT imaging, we previously released an open source implementation of a conventional weighted filtered backprojection reconstruction called FreeCT_wFBP. Our purpose was to extend that work by providing an open-source implementation of a model-based iterative reconstruction method using coordinate descent optimization, called FreeCT_ICD. Model-based iterative reconstruction offers the potential for substantial radiation dose reduction, but can impose substantial computational processing and storage requirements. FreeCT_ICD is an open source implementation of a model-based iterative reconstruction method that provides a reasonable tradeoff between these requirements. This was accomplished by adapting a previously proposed method that allows the system matrix to be stored with a reasonable memory requirement. The method amounts to describing the attenuation coefficient using rotating slices that follow the helical geometry. In the initially-proposed version, the rotating slices are themselves described using blobs. We have replaced this description by a unique model that relies on tri-linear interpolation together with the principles of Joseph's method. This model offers an improvement in memory requirement while still allowing highly accurate reconstruction for conventional CT geometries. The system matrix is stored column-wise and combined with an iterative coordinate descent (ICD) optimization. The result is FreeCT_ICD, which is a reconstruction program developed on the Linux platform using C++ libraries and the open source GNU GPL v2.0 license. The software is capable of reconstructing raw projection data of helical CT scans. In this work, the software has been described and evaluated by reconstructing datasets exported from a clinical scanner which consisted of an ACR accreditation phantom dataset and a clinical pediatric
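
    A toy illustration of the coordinate-descent optimization pattern that FreeCT_ICD builds on is given below; this is a generic least-squares sketch under assumed names (A, b), not code from the package, and it omits the rotating-slice system-matrix storage and any regularization term.

        import numpy as np

        def icd_least_squares(A, b, n_sweeps=20):
            """Iterative coordinate descent for min_x 0.5*||A x - b||^2, updating
            one coefficient at a time against a running residual."""
            m, n = A.shape
            x = np.zeros(n)
            r = b - A @ x                       # running residual
            col_norms = (A ** 2).sum(axis=0)
            for _ in range(n_sweeps):
                for j in range(n):
                    if col_norms[j] == 0.0:
                        continue
                    delta = (A[:, j] @ r) / col_norms[j]
                    x[j] += delta
                    r -= delta * A[:, j]
            return x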

  9. Reconstruction and modernization of Novi Han radioactive waste repository

    International Nuclear Information System (INIS)

    Kolev, I.; Dralchev, D.; Spasov, P.; Jordanov, M.

    2000-01-01

    This report presents briefly the most important issues of the study performed by EQE - Bulgaria. The objectives of the study are the development of conceptual solutions for construction of the following facilities in the Novi Han radioactive waste repository: an operational storage facility for unconditioned high-level spent sources; new temporary buildings over the existing radioactive waste storage facilities; a rain-water draining system, etc. The study also includes the engineering solutions for conservation of the existing facilities, currently filled with high-level spent sources. A 'Program for reconstruction and modernization' has been created, including the analysis of some regulatory aspects concerning this program's implementation. In conclusion, the engineering problems of the Novi Han repository are clear and appropriate solutions are available. They can be implemented in both cases of 'small' or 'large' reconstruction. In either case, the reconstruction project should start with the construction of a new site infrastructure. Reconstruction and modernization of the Novi Han radioactive waste repository is the only way to improve the management and safety of radioactive waste from medicine, industry and scientific research in Bulgaria.

  10. Evaluation of spatial dependence of point spread function-based PET reconstruction using a traceable point-like 22Na source

    Directory of Open Access Journals (Sweden)

    Taisuke Murata

    2016-10-01

    Full Text Available. Background: The point spread function (PSF) of positron emission tomography (PET) depends on the position across the field of view (FOV). Reconstruction based on PSF improves spatial resolution and quantitative accuracy. The present study aimed to quantify the effects of PSF correction as a function of the position of a traceable point-like 22Na source over the FOV on two PET scanners with a different detector design. Methods: We used Discovery 600 and Discovery 710 (GE Healthcare) PET scanners and traceable point-like 22Na sources (<1 MBq) with a spherical absorber design that assures uniform angular distribution of the emitted annihilation photons. The source was moved in three directions at intervals of 1 cm from the center towards the peripheral FOV using a three-dimensional (3D) positioning robot, and data were acquired over a period of 2 min per point. The PET data were reconstructed by filtered back projection (FBP), ordered subset expectation maximization (OSEM), OSEM + PSF, and OSEM + PSF + time-of-flight (TOF). Full width at half maximum (FWHM) was determined according to the NEMA method, and total counts in regions of interest (ROI) for each reconstruction were quantified. Results: The radial FWHM of FBP and OSEM increased towards the peripheral FOV, whereas PSF-based reconstruction recovered the FWHM at all points in the FOV of both scanners. The radial FWHM for PSF was 30–50% lower than that of OSEM at the center of the FOV. The accuracy of PSF correction was independent of detector design. Quantitative values were stable across the FOV in all reconstruction methods. The effect of TOF on spatial resolution and quantitation accuracy was less noticeable. Conclusions: The traceable 22Na point-like source allowed the evaluation of spatial resolution and quantitative accuracy across the FOV using different reconstruction methods and scanners. PSF-based reconstruction reduces dependence of the spatial resolution on the

  11. Dual-source CT coronary imaging in heart transplant recipients: image quality and optimal reconstruction interval

    International Nuclear Information System (INIS)

    Bastarrika, Gorka; Arraiza, Maria; Pueyo, Jesus C.; Cecco, Carlo N. de; Ubilla, Matias; Mastrobuoni, Stefano; Rabago, Gregorio

    2008-01-01

    The image quality and optimal reconstruction interval for coronary arteries in heart transplant recipients undergoing non-invasive dual-source computed tomography (DSCT) coronary angiography were evaluated. Twenty consecutive heart transplant recipients who underwent DSCT coronary angiography were included (19 male, one female; mean age 63.1±10.7 years). Data sets were reconstructed in 5% steps from 30% to 80% of the R-R interval. Two blinded independent observers assessed the image quality of each coronary segment using a five-point scale (from 0 = not evaluable to 4 = excellent quality). A total of 289 coronary segments in 20 heart transplant recipients were evaluated. Mean heart rate during the scan was 89.1±10.4 bpm. At the best reconstruction interval, diagnostic image quality (score ≥2) was obtained in 93.4% of the coronary segments (270/289) with a mean image quality score of 3.04±0.63. Systolic reconstruction intervals provided better image quality scores than diastolic reconstruction intervals (overall mean quality scores obtained with the systolic and diastolic reconstructions were 3.03±1.06 and 2.73±1.11, respectively; P<0.001). Different systolic reconstruction intervals (35%, 40%, 45% of the R-R interval) did not yield significant differences in image quality scores for the coronary segments (P=0.74). Reconstructions obtained at the systolic phase of the cardiac cycle provided coronary angiograms of excellent diagnostic image quality in heart transplant recipients undergoing DSCT coronary angiography. (orig.)

  12. Solution to the inversely stated transient source-receptor problem

    International Nuclear Information System (INIS)

    Sajo, E.; Sheff, J.R.

    1995-01-01

    Transient source-receptor problems are traditionally handled via the Boltzmann equation or through one of its variants. In the atmospheric transport of pollutants, meteorological uncertainties in the planetary boundary layer render only a few approximations to the Boltzmann equation useful. Often, due to the high number of unknowns, the atmospheric source-receptor problem is ill-posed. Moreover, models to estimate downwind concentration invariably assume that the source term is known. In this paper, an inverse methodology is developed, based on downwind measurements of concentration and of meteorological parameters, to estimate the source term.
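    A minimal sketch of such an inversion, assuming a linear source-receptor relationship (for example, coupling coefficients from a plume model driven by the measured meteorology), could look like the following; the formulation and names are illustrative, not the paper's specific methodology.

```python
import numpy as np
from scipy.optimize import nnls

def estimate_source_term(coupling, measured):
    """Estimate nonnegative source strengths q from receptor concentrations y,
    given coupling[i, j] = modelled concentration at receptor i per unit emission
    from candidate source j.  Nonnegative least squares keeps emissions physical."""
    C = np.asarray(coupling, dtype=float)
    y = np.asarray(measured, dtype=float)
    q, residual_norm = nnls(C, y)
    return q, residual_norm
```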

  13. ITEM-QM solutions for EM problems in image reconstruction exemplary for the Compton Camera

    CERN Document Server

    Pauli, Josef; Anton, G

    2002-01-01

    Imaginary time expectation maximization (ITEM), a new algorithm for expectation maximization problems based on quantum mechanical energy minimization via imaginary (Euclidean) time evolution, is presented. Both the algorithm and the implementation (http://www.johannes-pauli.de/item/index.html) are published under the terms of the GNU General Public License (http://www.gnu.org/copyleft/gpl.html). Due to its generality, ITEM is applicable to various image reconstruction problems such as CT, PET, SPECT, NMR, Compton Camera and tomosynthesis, as well as to any other energy minimization problem. The choice of the optimal ITEM Hamiltonian is discussed and numerical results are presented for the Compton Camera.

  14. Jane: a new tool for the cophylogeny reconstruction problem.

    Science.gov (United States)

    Conow, Chris; Fielder, Daniel; Ovadia, Yaniv; Libeskind-Hadas, Ran

    2010-02-03

    This paper describes the theory and implementation of a new software tool, called Jane, for the study of historical associations. This problem arises in parasitology (associations of hosts and parasites), molecular systematics (associations of organisms and genes), and biogeography (associations of regions and organisms). The underlying problem is that of reconciling pairs of trees subject to biologically plausible events and costs associated with these events. Existing software tools for this problem have strengths and limitations, and the new Jane tool described here provides functionality that complements existing tools. The Jane software tool uses a polynomial time dynamic programming algorithm in conjunction with a genetic algorithm to find very good, and often optimal, solutions even for relatively large pairs of trees. The tool allows the user to provide rich timing information on both the host and parasite trees. In addition the user can limit host switch distance and specify multiple host switch costs by specifying regions in the host tree and costs for host switches between pairs of regions. Jane also provides a graphical user interface that allows the user to interactively experiment with modifications to the solutions found by the program. Jane is shown to be a useful tool for cophylogenetic reconstruction. Its functionality complements existing tools and it is therefore likely to be of use to researchers in the areas of parasitology, molecular systematics, and biogeography.

  15. Analytical reconstruction schemes for coarse-mesh spectral nodal solution of slab-geometry SN transport problems

    International Nuclear Information System (INIS)

    Barros, R. C.; Filho, H. A.; Platt, G. M.; Oliveira, F. B. S.; Militao, D. S.

    2009-01-01

    Coarse-mesh numerical methods are very efficient in the sense that they generate accurate results in short computational time, as the number of floating point operations generally decreases as a result of the reduced number of mesh points. On the other hand, they generate numerical solutions that do not give detailed information on the problem solution profile, as the grid points can be located considerably far from each other. In this paper we describe two analytical reconstruction schemes for the coarse-mesh solution generated by the spectral nodal method for the neutral particle discrete ordinates (SN) transport model in slab geometry. The first scheme we describe is based on the analytical reconstruction of the coarse-mesh solution within each discretization cell of the spatial grid set up on the slab. The second scheme is based on the angular reconstruction of the discrete ordinates solution between two contiguous ordinates of the angular quadrature set used in the SN model. Numerical results are given to illustrate the accuracy of the two reconstruction schemes described in this paper. (authors)

  16. Stable methods for ill-posed problems and application to reconstruction of atmospheric temperature profile

    International Nuclear Information System (INIS)

    Son, H.H.; Luong, P.T.; Loan, N.T.

    1990-04-01

    The problems of remote sensing (passive or active) are investigated on the basis of the main principle, which consists in interpreting radiometric electromagnetic measurements in a spectral interval where the radiation is sensitive to the physical property of the medium of interest. Such problems, including analysis of the composition and structure of the atmosphere using records of scattered radiation, cloud identification, investigation of the thermodynamic state and composition of the system, and reconstruction of the atmospheric temperature profile by processing infrared radiation emitted by the Earth-atmosphere system, belong to the class of inverse problems of mathematical physics, which are often ill-posed. In this paper a new class of regularized solutions corresponding to the generally formulated RATP (reconstruction of the atmospheric temperature profile) problem is considered. (author). 14 refs, 3 figs, 3 tabs
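    As an illustration of the kind of stabilized inversion such problems require, the sketch below applies Tikhonov regularization to a generic discretized linear model K x = y; this is a standard regularization recipe, not the specific class of regularized solutions proposed in the report.

```python
import numpy as np

def tikhonov_solve(K, y, alpha):
    """Stable least-squares solution of the ill-posed linear problem K x = y.
    The parameter alpha > 0 trades data fit against solution size/smoothness and
    is typically chosen from the noise level (e.g., by the discrepancy principle)."""
    n = K.shape[1]
    return np.linalg.solve(K.T @ K + alpha * np.eye(n), K.T @ y)
```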

  17. Exact iterative reconstruction for the interior problem

    International Nuclear Information System (INIS)

    Zeng, Gengsheng L; Gullberg, Grant T

    2009-01-01

    There is a trend in single photon emission computed tomography (SPECT) toward small, dedicated imaging systems. For example, many companies are developing small dedicated cardiac SPECT systems with different designs. These dedicated systems have a smaller field of view (FOV) than a full-size clinical system. Thus data truncation has become the norm rather than the exception in these systems. Therefore, it is important to develop region of interest (ROI) reconstruction algorithms using truncated data. This paper is a stepping stone in that direction. It shows that the common generic iterative image reconstruction algorithms are able to exactly reconstruct the ROI under the conditions that the convex ROI is fully sampled and the image value in a sub-region within the ROI is known. If the ROI includes a sub-region that is outside the patient body, then these conditions can be easily satisfied.

  18. The completeness condition and source orbits for exact image reconstruction in 3D cone-beam CT

    International Nuclear Information System (INIS)

    Mao Xiping; Kang Kejun

    1997-01-01

    The completeness condition for exact image reconstruction in 3D cone-beam CT is carefully analyzed in theory, followed by a discussion of several source orbits that fulfill the completeness condition.

  19. Comparison of 3D reconstruction of mandible for pre-operative planning using commercial and open-source software

    Science.gov (United States)

    Abdullah, Johari Yap; Omar, Marzuki; Pritam, Helmi Mohd Hadi; Husein, Adam; Rajion, Zainul Ahmad

    2016-12-01

    3D printing of the mandible is important for pre-operative planning, diagnostic purposes, as well as for education and training. Currently, the processing of CT data is routinely performed with commercial software, which increases the cost of operation and patient management for a small clinical setting. Usage of open-source software as an alternative to commercial software for 3D reconstruction of the mandible from CT data is scarce. The aim of this study is to compare two methods of 3D reconstruction of the mandible using the commercial Materialise Mimics software and the open-source Medical Imaging Interaction Toolkit (MITK) software. Head CT images with a slice thickness of 1 mm and a matrix of 512x512 pixels each were retrieved from the server located at the Radiology Department of Hospital Universiti Sains Malaysia. The CT data were analysed and the 3D models of the mandible were reconstructed using both the commercial Materialise Mimics and the open-source MITK software. Both virtual 3D models were saved in STL format and exported to 3matic and MeshLab software for morphometric and image analyses. Both models were compared using the Wilcoxon Signed Rank Test and the Hausdorff Distance. No significant differences were obtained between the 3D models of the mandible produced using the Mimics and MITK software. The 3D model of the mandible produced using the open-source MITK software is comparable to that produced using the commercial Mimics software. Therefore, open-source software could be used in a clinical setting for pre-operative planning to minimise the operational cost.
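    For reference, the symmetric Hausdorff distance used to compare the two surface models can be sketched as follows for vertex sets sampled from the exported STL meshes; the published comparison used MeshLab/3matic, so this stand-alone version is only illustrative.

```python
import numpy as np
from scipy.spatial import cKDTree

def symmetric_hausdorff(points_a, points_b):
    """Symmetric Hausdorff distance between two (N x 3) vertex arrays."""
    tree_a, tree_b = cKDTree(points_a), cKDTree(points_b)
    d_ab = tree_b.query(points_a)[0].max()   # farthest A-vertex from surface B
    d_ba = tree_a.query(points_b)[0].max()   # farthest B-vertex from surface A
    return max(d_ab, d_ba)
```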

  20. Smartphones Get Emotional: Mind Reading Images and Reconstructing the Neural Sources

    DEFF Research Database (Denmark)

    Petersen, Michael Kai; Stahlhut, Carsten; Stopczynski, Arkadiusz

    2011-01-01

    Combining a 14 channel neuroheadset with a smartphone to capture and process brain imaging data, we demonstrate the ability to distinguish among emotional responses reflected in different scalp potentials when viewing pleasant and unpleasant pictures compared to neutral content. Clustering independent components across subjects we are able to remove artifacts and identify common sources of synchronous brain activity, consistent with earlier findings based on conventional EEG equipment. Applying a Bayesian approach to reconstruct the neural sources not only facilitates differentiation of emotional responses but may also provide an intuitive interface for interacting with a 3D rendered model of brain activity. Integrating a wireless EEG set with a smartphone thus offers completely new opportunities for modeling the mental state of users as well as providing a basis for novel bio-feedback applications.

  1. Jane: a new tool for the cophylogeny reconstruction problem

    Directory of Open Access Journals (Sweden)

    Ovadia Yaniv

    2010-02-01

    Full Text Available. Background: This paper describes the theory and implementation of a new software tool, called Jane, for the study of historical associations. This problem arises in parasitology (associations of hosts and parasites), molecular systematics (associations of organisms and genes), and biogeography (associations of regions and organisms). The underlying problem is that of reconciling pairs of trees subject to biologically plausible events and costs associated with these events. Existing software tools for this problem have strengths and limitations, and the new Jane tool described here provides functionality that complements existing tools. Results: The Jane software tool uses a polynomial time dynamic programming algorithm in conjunction with a genetic algorithm to find very good, and often optimal, solutions even for relatively large pairs of trees. The tool allows the user to provide rich timing information on both the host and parasite trees. In addition the user can limit host switch distance and specify multiple host switch costs by specifying regions in the host tree and costs for host switches between pairs of regions. Jane also provides a graphical user interface that allows the user to interactively experiment with modifications to the solutions found by the program. Conclusions: Jane is shown to be a useful tool for cophylogenetic reconstruction. Its functionality complements existing tools and it is therefore likely to be of use to researchers in the areas of parasitology, molecular systematics, and biogeography.

  2. On Inverse Coefficient Heat-Conduction Problems on Reconstruction of Nonlinear Components of the Thermal-Conductivity Tensor of Anisotropic Bodies

    Science.gov (United States)

    Formalev, V. F.; Kolesnik, S. A.

    2017-11-01

    The authors are the first to present a closed procedure for the numerical solution of inverse coefficient problems of heat conduction in anisotropic materials used as heat-shielding ones in rocket and space equipment. The reconstructed components of the thermal-conductivity tensor depend on temperature (are nonlinear). The procedure includes the formation of experimental data, the implicit gradient-descent method, the economical absolutely stable method of numerical solution of parabolic problems containing mixed derivatives, the parametric identification, construction, and numerical solution of the problem for elements of sensitivity matrices, the development of a quadratic residual functional and regularizing functionals, and also the development of algorithms and software systems. The implicit gradient-descent method permits expanding the quadratic functional in a Taylor series with retention of the linear terms for the increments of the sought functions. This substantially improves the accuracy and stability of solution of the inverse problems. Software systems are developed both with and without account taken of the errors in the experimental data. On the basis of a priori assumptions about the qualitative behavior of the functional dependences of the components of the thermal-conductivity tensor on temperature, regularizing functionals are constructed by means of which one can reconstruct the components of the thermal-conductivity tensor with an error no higher than the error of the experimental data. Results of the numerical solution of the inverse coefficient problems on reconstruction of nonlinear components of the thermal-conductivity tensor have been obtained and are discussed.

  3. Simulating variable source problems via post processing of individual particle tallies

    International Nuclear Information System (INIS)

    Bleuel, D.L.; Donahue, R.J.; Ludewigt, B.A.; Vujic, J.

    2000-01-01

    Monte Carlo is an extremely powerful method of simulating complex, three-dimensional environments without excessive problem simplification. However, it is often time-consuming to simulate models in which the source can be highly varied. Similarly difficult are optimization studies involving sources in which many input parameters are variable, such as particle energy, angle, and spatial distribution. Such studies are often approached using brute-force methods or intelligent guesswork. One field in which these problems are often encountered is accelerator-driven Boron Neutron Capture Therapy (BNCT) for the treatment of cancers. Solving the reverse problem of determining the best neutron source for optimal BNCT treatment can be accomplished by separating the time-consuming particle-tracking process of a full Monte Carlo simulation from the calculation of the source weighting factors, which is typically performed at the beginning of a Monte Carlo simulation. By post-processing these weighting factors on a recorded file of individual particle tally information, the effect of changing source variables can be realized in a matter of seconds, instead of requiring hours or days for additional complete simulations. By intelligent source biasing, any number of different source distributions can be calculated quickly from a single Monte Carlo simulation. The source description can be treated as variable and the effect of changing multiple interdependent source variables on the problem's solution can be determined. Though the focus of this study is on BNCT applications, this procedure may be applicable to any problem that involves a variable source.
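    The post-processing idea can be sketched as follows: each recorded particle carries its tally contribution together with the source variables it was born with, and a new source distribution is evaluated by re-weighting those contributions. The function and argument names are illustrative and do not correspond to any particular Monte Carlo code.

```python
import numpy as np

def reweighted_tally(tally_values, source_samples, sampling_pdf, new_source_pdf):
    """Re-evaluate a tally for a new source distribution from a single run.
    tally_values[i] is the recorded contribution of history i, and
    source_samples[i] holds the source variables (energy, angle, position)
    actually sampled for that history."""
    w = np.array([new_source_pdf(s) / sampling_pdf(s) for s in source_samples])
    return np.mean(w * tally_values)     # importance-reweighted tally estimate
```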

  4. Source reconstruction using phase space beam summation technique

    International Nuclear Information System (INIS)

    Graubart, Gideon.

    1990-10-01

    In this work, the phase-space beam summation technique (PSBS) is applied to back-propagation and inverse source problems. The PSBS expresses the field as a superposition of shifted and tilted beams. This phase space spectrum of beams is matched to the source distribution via an amplitude function which expresses the local spectrum of the source function in terms of a local Fourier transform. In this work, the emphasis is on the phase space processing of the data, on the information content of this data and on the back-propagation scheme. More work is still required to combine this back-propagation approach into a full, multi-experiment inverse scattering scheme. It is shown that the phase space distribution of the data, computed via the local spectrum transform, is localized along lines that define the local arrival direction of the wave data. We explore how the choice of the beam width affects the compactification of this distribution, and derive criteria for choosing a window that optimizes this distribution. It should be emphasized that a compact distribution implies fewer beams in the back-propagation scheme and therefore higher numerical efficiency and better physical insight. Furthermore it is shown how the local information property of the phase space representation can be used to improve the performance of this simple back-propagation problem, in particular with regard to axial resolution; the distance to the source can be determined by back-propagating only the large angle phase space beams that focus on the source. The information concerning the transverse distribution of the source, on the other hand, is contained in the axial phase space region and can therefore be determined by the corresponding back-propagating beams. Because of the global nature of the plane wave propagators, the conventional plane wave back-propagation scheme does not have the same 'focusing' property, and therefore suffers from a lack of information localization and axial resolution. The

  5. Developing milk industry estimates for dose reconstruction projects

    International Nuclear Information System (INIS)

    Beck, D.M.; Darwin, R.F.

    1991-01-01

    One of the most important contributors to radiation doses from Hanford during the 1944-1947 period was radioactive iodine. Consumption of milk from cows that ate vegetation contaminated with iodine was likely the dominant pathway of human exposure. To estimate the doses people could have received from this pathway, it is necessary to reconstruct the amount of milk consumed by people living near Hanford, the source of the milk, and the type of feed that the milk cows ate. This task is challenging because the dairy industry has undergone radical changes since the end of World War II, and records that document the impact of these changes on the study area are scarce. Similar problems are faced by researchers on most dose reconstruction efforts. The purpose of this work is to document and evaluate the methods used on the Hanford Environmental Dose Reconstruction (HEDR) Project to reconstruct the milk industry and to present preliminary results.

  6. Image Reconstruction. Chapter 13

    Energy Technology Data Exchange (ETDEWEB)

    Nuyts, J. [Department of Nuclear Medicine and Medical Imaging Research Center, Katholieke Universiteit Leuven, Leuven (Belgium); Matej, S. [Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, PA (United States)

    2014-12-15

    This chapter discusses how 2‑D or 3‑D images of tracer distribution can be reconstructed from a series of so-called projection images acquired with a gamma camera or a positron emission tomography (PET) system [13.1]. This is often called an ‘inverse problem’. The reconstruction is the inverse of the acquisition. The reconstruction is called an inverse problem because making software to compute the true tracer distribution from the acquired data turns out to be more difficult than the ‘forward’ direction, i.e. making software to simulate the acquisition. There are basically two approaches to image reconstruction: analytical reconstruction and iterative reconstruction. The analytical approach is based on mathematical inversion, yielding efficient, non-iterative reconstruction algorithms. In the iterative approach, the reconstruction problem is reduced to computing a finite number of image values from a finite number of measurements. That simplification enables the use of iterative inversion instead of direct mathematical inversion. Iterative inversion tends to require more computer power, but it can cope with more complex (and hopefully more accurate) models of the acquisition process.
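    A minimal sketch of the iterative approach, using the classical maximum-likelihood expectation maximization (MLEM) update with a precomputed system matrix, is shown below; real emission tomography reconstructions add attenuation, scatter and randoms modeling as well as subsets, all of which are omitted here.

```python
import numpy as np

def mlem(A, y, n_iters=50):
    """MLEM reconstruction: A is the (n_bins x n_voxels) system matrix and
    y the measured projection counts.  Returns the estimated image."""
    x = np.ones(A.shape[1])                      # start from a uniform image
    sensitivity = A.sum(axis=0) + 1e-12          # backprojection of ones
    for _ in range(n_iters):
        forward = A @ x + 1e-12                  # forward projection of current image
        x *= (A.T @ (y / forward)) / sensitivity # multiplicative EM update
    return x
```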

  7. Direct and inverse problems of infrared tomography

    DEFF Research Database (Denmark)

    Sizikov, Valery S.; Evseev, Vadim; Fateev, Alexander

    2016-01-01

    The problems of infrared tomography, direct (the modeling of measured functions) and inverse (the reconstruction of gaseous medium parameters), are considered with a laboratory burner flame as an example of an application. Two measurement modes are used: active (ON), with an external IR source...

  8. BoneSource hydroxyapatite cement: a novel biomaterial for craniofacial skeletal tissue engineering and reconstruction.

    Science.gov (United States)

    Friedman, C D; Costantino, P D; Takagi, S; Chow, L C

    1998-01-01

    BoneSource hydroxyapatite cement is a new self-setting calcium phosphate cement biomaterial. Its unique and innovative physical chemistry, coupled with enhanced biocompatibility, makes it useful for craniofacial skeletal reconstruction. The general properties and clinical use guidelines are reviewed. The biomaterial and its surgical applications offer insight into improved outcomes and potential new uses for hydroxyapatite cement systems.

  9. Full field image reconstruction is suitable for high-pitch dual-source computed tomography.

    Science.gov (United States)

    Mahnken, Andreas H; Allmendinger, Thomas; Sedlmair, Martin; Tamm, Miriam; Reinartz, Sebastian D; Flohr, Thomas

    2012-11-01

    The field of view (FOV) in high-pitch dual-source computed tomography (DSCT) is limited by the size of the second detector. The goal of this study was to develop and evaluate a full FOV image reconstruction technique for high-pitch DSCT. For reconstruction beyond the FOV of the second detector, raw data of the second system were extended to the full dimensions of the first system, using the partly existing data of the first system in combination with a very smooth transition weight function. During the weighted filtered backprojection, the data of the second system were applied with an additional weighting factor. This method was tested for different pitch values from 1.5 to 3.5 on a simulated phantom and on 25 high-pitch DSCT data sets acquired at pitch values of 1.6, 2.0, 2.5, 2.8, and 3.0. Images were reconstructed with FOV sizes of 260 × 260 and 500 × 500 mm. Image quality was assessed by 2 radiologists using a 5-point Likert scale and analyzed with repeated-measures analysis of variance. In phantom and patient data, full FOV image quality depended on pitch. Where complete projection data from both tube-detector systems were available, image quality was unaffected by pitch changes. Full FOV image quality was not compromised at a pitch value of 1.6 and remained fully diagnostic up to a pitch of 2.0. At higher pitch values, there was an increasing difference in image quality between limited and full FOV images (P = 0.0097). With this new image reconstruction technique, full FOV image reconstruction can be used up to a pitch of 2.0.

  10. UV Reconstruction Algorithm And Diurnal Cycle Variability

    Science.gov (United States)

    Curylo, Aleksander; Litynska, Zenobia; Krzyscin, Janusz; Bogdanska, Barbara

    2009-03-01

    UV reconstruction is a method of estimating surface UV from available actinometric and aerological measurements. UV reconstruction is necessary for the study of long-term UV change. A typical series of UV measurements is not longer than 15 years, which is too short for trend estimation. The essential problem in the reconstruction algorithm is a good parameterization of clouds. In our previous algorithm we used an empirical relation between the Cloud Modification Factor (CMF) in global radiation and the CMF in UV. The CMF is defined as the ratio between measured and modelled irradiances. Clear-sky irradiance was calculated with a solar radiative transfer model. In the proposed algorithm, the time variability of global radiation during the diurnal cycle is used as an additional source of information. To elaborate the improved reconstruction algorithm, relevant data from Legionowo [52.4 N, 21.0 E, 96 m a.s.l.], Poland, were collected with the following instruments: a NILU-UV multi-channel radiometer, a Kipp&Zonen pyranometer, and radiosondes providing profiles of ozone, humidity and temperature. The proposed algorithm has been used for reconstruction of UV at four Polish sites (Mikolajki, Kolobrzeg, Warszawa-Bielany and Zakopane) since the early 1960s. Krzyscin's reconstruction of total ozone has been used in the calculations.
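    The reconstruction principle can be sketched as follows; the empirical mapping from the global-radiation CMF to the UV CMF is site-specific and is therefore passed in as an assumed function rather than reproduced from the paper.

```python
def reconstruct_uv(global_measured, global_clear_sky, uv_clear_sky,
                   cmf_uv_from_cmf_global):
    """Estimate surface UV irradiance from measured global radiation.
    CMF = measured / modelled clear-sky irradiance; an empirical relation maps
    the global-radiation CMF to the UV CMF, which then scales the clear-sky UV."""
    cmf_global = global_measured / global_clear_sky
    cmf_uv = cmf_uv_from_cmf_global(cmf_global)
    return cmf_uv * uv_clear_sky
```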

  11. Vertex reconstruction in CMS

    International Nuclear Information System (INIS)

    Chabanat, E.; D'Hondt, J.; Estre, N.; Fruehwirth, R.; Prokofiev, K.; Speer, T.; Vanlaer, P.; Waltenberger, W.

    2005-01-01

    Due to the high track multiplicity in the final states expected in proton collisions at the LHC experiments, novel vertex reconstruction algorithms are required. The vertex reconstruction problem can be decomposed into a pattern recognition problem ('vertex finding') and an estimation problem ('vertex fitting'). Starting from least-squares methods, robustifications of the classical algorithms are discussed and the statistical properties of the novel methods are shown. A whole set of different approaches for the vertex finding problem is presented and compared in relevant physics channels

  12. Vector tomography for reconstructing electric fields with non-zero divergence in bounded domains

    Science.gov (United States)

    Koulouri, Alexandra; Brookes, Mike; Rimpiläinen, Ville

    2017-01-01

    In vector tomography (VT), the aim is to reconstruct an unknown multi-dimensional vector field using line integral data. In the case of a 2-dimensional VT, two types of line integral data are usually required. These data correspond to integration of the parallel and perpendicular projection of the vector field along the integration lines and are called the longitudinal and transverse measurements, respectively. In most cases, however, the transverse measurements cannot be physically acquired. Therefore, the VT methods are typically used to reconstruct divergence-free (or source-free) velocity and flow fields that can be reconstructed solely from the longitudinal measurements. In this paper, we show how vector fields with non-zero divergence in a bounded domain can also be reconstructed from the longitudinal measurements without the need of explicitly evaluating the transverse measurements. To the best of our knowledge, VT has not previously been used for this purpose. In particular, we study low-frequency, time-harmonic electric fields generated by dipole sources in convex bounded domains which arise, for example, in electroencephalography (EEG) source imaging. We explain in detail the theoretical background, the derivation of the electric field inverse problem and the numerical approximation of the line integrals. We show that fields with non-zero divergence can be reconstructed from the longitudinal measurements with the help of two sparsity constraints that are constructed from the transverse measurements and the vector Laplace operator. As a comparison to EEG source imaging, we note that VT does not require mathematical modeling of the sources. By numerical simulations, we show that the pattern of the electric field can be correctly estimated using VT and the location of the source activity can be determined accurately from the reconstructed magnitudes of the field.

  13. Brachytherapy reconstruction using orthogonal scout views from the CT

    International Nuclear Information System (INIS)

    Perez, J.; Lliso, F.; Carmona, V.; Bea, J.; Tormo, A.; Petschen, I.

    1996-01-01

    Introduction: CT-assisted brachytherapy planning is proving to have great advantages, as external RT planning does. One of the problems we have found in this approach with the conventional gynecological Fletcher applicators is the high amount of artefacts (ovoids with rectal and vesical protections) in the CT slice. We have introduced a reconstruction method based on scout views in order to avoid this problem, allowing us to perform brachytherapy reconstruction completely CT-assisted. We use a virtual simulation chain by General Electric Medical Systems. Method and discussion: Two orthogonal scout views (0° and 90° tube positions) are performed. The reconstruction method takes into account the virtual position of the focus and the fact that there is divergence only in the transverse plane. Algorithms developed for the localisation of sources as well as of reference points (A, B, lymphatic Fletcher trapezoid, pelvic wall, etc.) are presented. This method has the following practical advantages: the porte-cassette is not necessary, the image quality can be improved (this is very helpful in pelvic lateral views, which are critical in conventional radiographs), the total time to get the data is smaller than for conventional radiographs (reduction of patient motion effects), and problems that appear in CT-slice-based reconstruction in the case of strongly curved intrauterine applicators are avoided. Even though the resolution is lower than in conventional radiographs, it is good enough for brachytherapy. Regarding CT planning, this method presents the interesting feature that the co-ordinate system is the same for the reconstruction process as for the CT slice set. As the application can be reconstructed from scout views and the doses can be evaluated on CT slices, it is easier to correlate the dose values obtained for the traditional points with those provided by the CT information.
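    The geometric idea (fan divergence only in the transverse plane) can be sketched as follows for a single point; the sign conventions, the focus-to-isocenter distance and the variable names are assumptions made for the example, not the commercial implementation.

```python
def point_from_scout_views(u_ap, u_lat, z_table, focus_distance, n_iters=20):
    """Recover the transverse coordinates (x, y) of a point from its projected
    positions u_ap and u_lat (measured at the isocenter plane) in the 0-degree
    and 90-degree scout views.  The longitudinal coordinate is read directly
    from the table position because there is no divergence along the couch axis."""
    R = focus_distance
    x, y = u_ap, u_lat                    # initial guess: ignore magnification
    for _ in range(n_iters):
        x = u_ap * (R - y) / R            # demagnify AP view with current y
        y = u_lat * (R - x) / R           # demagnify lateral view with current x
    return x, y, z_table
```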

  14. Vertex Reconstruction in CMS

    CERN Document Server

    Chabanat, E; D'Hondt, J; Vanlaer, P; Prokofiev, K; Speer, T; Frühwirth, R; Waltenberger, W

    2005-01-01

    Because of the high track multiplicity in the final states expected in proton collisions at the LHC experiments, novel vertex reconstruction algorithms are required. The vertex reconstruction problem can be decomposed into a pattern recognition problem ("vertex finding") and an estimation problem ("vertex fitting"). Starting from least-squares methods, ways to render the classical algorithms more robust are discussed and the statistical properties of the novel methods are shown. A whole set of different approaches for the vertex finding problem is presented and compared in relevant physics channels.

  15. Orientation Estimation and Signal Reconstruction of a Directional Sound Source

    DEFF Research Database (Denmark)

    Guarato, Francesco

    Previous works in the literature about one-tone or broadband sound sources mainly deal with algorithms and methods developed in order to localize the source and, occasionally, estimate the source bearing angle (with respect to a global reference frame). The problem setting assumes, in these cases, omnidirectional receivers collecting the acoustic signal from the source: analysis of arrival times in the recordings together with microphone positions and source directivity cues allows information about source position and bearing to be obtained. Moreover, sound sources have been included into sensor systems together... The estimated orientations, one for each call emission, were compared to those calculated through a pre-existing technique based on interpolation of sound-pressure levels at microphone locations. The application of the method to the bat calls could provide knowledge on bat behaviour that may be useful for a bat-inspired sensor...

  16. WASS: An open-source pipeline for 3D stereo reconstruction of ocean waves

    Science.gov (United States)

    Bergamasco, Filippo; Torsello, Andrea; Sclavo, Mauro; Barbariol, Francesco; Benetazzo, Alvise

    2017-10-01

    Stereo 3D reconstruction of ocean waves is gaining more and more popularity in the oceanographic community and industry. Indeed, recent advances in both computer vision algorithms and computer processing power now allow the study of the spatio-temporal wave field with unprecedented accuracy, especially at small scales. Even if simple in theory, many details are difficult for a practitioner to master, so that the implementation of a sea-wave 3D reconstruction pipeline is in general considered a complex task. For instance, camera calibration, reliable stereo feature matching and mean sea-plane estimation are all factors for which a well designed implementation can make the difference in obtaining valuable results. For this reason, we believe that the open availability of a well tested software package that automates the reconstruction process from stereo images to a 3D point cloud would be a valuable addition for future research in this area. We present WASS (http://www.dais.unive.it/wass), an open-source stereo processing pipeline for sea-wave 3D reconstruction. Our tool completely automates all the steps required to estimate dense point clouds from stereo images. Namely, it computes the extrinsic parameters of the stereo rig so that no delicate calibration has to be performed in the field. It implements a fast 3D dense stereo reconstruction procedure based on the consolidated OpenCV library and, lastly, it includes a set of filtering techniques both on the disparity map and the produced point cloud to remove the vast majority of erroneous points that can naturally arise while analyzing the optically complex nature of the water surface. In this paper, we describe the architecture of WASS and the internal algorithms involved. The pipeline workflow is shown step by step and demonstrated on real datasets acquired at sea.
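    The core dense-stereo step that such a pipeline automates can be sketched with the OpenCV primitives mentioned above; this assumes an already rectified image pair and a known disparity-to-depth matrix Q, and it omits the extrinsic auto-calibration, sea-plane estimation and filtering that WASS itself performs.

```python
import cv2
import numpy as np

def dense_point_cloud(left_rect, right_rect, Q):
    """Compute a 3D point cloud from a rectified stereo pair using semi-global
    block matching and reprojection through the 4x4 matrix Q."""
    matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=5)
    disparity = matcher.compute(left_rect, right_rect).astype(np.float32) / 16.0
    points = cv2.reprojectImageTo3D(disparity, Q)   # H x W x 3 metric coordinates
    valid = disparity > 0                           # keep only valid disparities
    return points[valid]
```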

  17. Digital Reconstruction of AN Archaeological Site Based on the Integration of 3d Data and Historical Sources

    Science.gov (United States)

    Guidi, G.; Russo, M.; Angheleddu, D.

    2013-02-01

    The methodology proposed in this paper is based on an integrated approach for creating a 3D digital reconstruction of an archaeological site, using extensively the 3D documentation of the site in its current state, followed by an iterative interaction between archaeologists and digital modelers, leading to a progressive refinement of the reconstructive hypotheses. The starting point of the method is the reality-based model, which, together with ancient drawings and documents, is used for generating the first reconstructive step. Such a rough approximation of a possible architectural structure can be annotated through archaeological considerations that have to be confronted with geometrical constraints, producing a reduction of the reconstructive hypotheses to a limited set, each one to be archaeologically evaluated. This refinement loop on the reconstructive choices is iterated until the result becomes convincing from both points of view, integrating in the best way all the available sources. The proposed method has been verified on the ruins of five temples in the My Son site, a wide archaeological area located in central Vietnam. The integration of 3D surveyed data and historical documentation has made it possible to support a digital reconstruction of no longer existing architectures, developing their three-dimensional digital models step by step, from rough shapes to highly sophisticated virtual prototypes.

  18. Dynamic dual-tracer PET reconstruction.

    Science.gov (United States)

    Gao, Fei; Liu, Huafeng; Jian, Yiqiang; Shi, Pengcheng

    2009-01-01

    Despite its important medical implications, simultaneous dual-tracer positron emission tomography reconstruction remains a challenging problem, primarily because the photon measurements from the two tracers overlap. In this paper, we propose a simultaneous dynamic dual-tracer reconstruction of tissue activity maps based on guidance from tracer kinetics. The dual-tracer reconstruction problem is formulated in a state-space representation, where parallel compartment models serve as the continuous-time system equation describing the tracer kinetic processes of the two tracers, and the imaging data are expressed as discrete sampling of the system states in the measurement equation. The image reconstruction problem has therefore become a state estimation problem in a continuous-discrete hybrid paradigm, and H infinity filtering is adopted as the estimation strategy. As H infinity filtering makes no assumptions on the system and measurement statistics, robust reconstruction results can be obtained for the dual-tracer PET imaging system, where the statistical properties of the measurement data and the system uncertainty are not available a priori, even when there are disturbances in the kinetic parameters. Experimental results on digital phantoms, Monte Carlo simulations and physical phantoms have demonstrated superior performance.

  19. Manhattan-World Urban Reconstruction from Point Clouds

    KAUST Repository

    Li, Minglei; Wonka, Peter; Nan, Liangliang

    2016-01-01

    Manhattan-world urban scenes are common in the real world. We propose a fully automatic approach for reconstructing such scenes from 3D point samples. Our key idea is to represent the geometry of the buildings in the scene using a set of well-aligned boxes. We first extract plane hypotheses from the points, followed by an iterative refinement step. Then, candidate boxes are obtained by partitioning the space of the point cloud into a non-uniform grid. After that, we choose an optimal subset of the candidate boxes to approximate the geometry of the buildings. The contribution of our work is that we transform scene reconstruction into a labeling problem that is solved based on a novel Markov Random Field formulation. Unlike previous methods designed for particular types of input point clouds, our method can obtain faithful reconstructions from a variety of data sources. Experiments demonstrate that our method is superior to state-of-the-art methods. © Springer International Publishing AG 2016.

  20. Manhattan-World Urban Reconstruction from Point Clouds

    KAUST Repository

    Li, Minglei

    2016-09-16

    Manhattan-world urban scenes are common in the real world. We propose a fully automatic approach for reconstructing such scenes from 3D point samples. Our key idea is to represent the geometry of the buildings in the scene using a set of well-aligned boxes. We first extract plane hypotheses from the points, followed by an iterative refinement step. Then, candidate boxes are obtained by partitioning the space of the point cloud into a non-uniform grid. After that, we choose an optimal subset of the candidate boxes to approximate the geometry of the buildings. The contribution of our work is that we transform scene reconstruction into a labeling problem that is solved based on a novel Markov Random Field formulation. Unlike previous methods designed for particular types of input point clouds, our method can obtain faithful reconstructions from a variety of data sources. Experiments demonstrate that our method is superior to state-of-the-art methods. © Springer International Publishing AG 2016.

  1. Inverse source problems in elastodynamics

    Science.gov (United States)

    Bao, Gang; Hu, Guanghui; Kian, Yavar; Yin, Tao

    2018-04-01

    We are concerned with time-dependent inverse source problems in elastodynamics. The source term is supposed to be the product of a spatial function and a temporal function with compact support. We present frequency-domain and time-domain approaches to show uniqueness in determining the spatial function from wave fields on a large sphere over a finite time interval. The stability estimate of the temporal function from the data of one receiver and the uniqueness result using partial boundary data are proved. Our arguments rely heavily on the use of the Fourier transform, which motivates inversion schemes that can be easily implemented. A Landweber iterative algorithm for recovering the spatial function and a non-iterative inversion scheme based on the uniqueness proof for recovering the temporal function are proposed. Numerical examples are demonstrated in both two and three dimensions.
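    For the spatial part, the Landweber iteration mentioned above can be sketched generically as follows for a discretized linear forward operator; the step-size choice and the stopping rule act as the regularization, and this generic version is not the paper's elastodynamic implementation.

```python
import numpy as np

def landweber(A, y, step, n_iters=200):
    """Landweber iteration for the linear problem A x = y.  Convergence requires
    0 < step < 2 / ||A||^2; stopping the iteration early regularizes the solution."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        x = x + step * (A.T @ (y - A @ x))   # gradient step on 0.5 * ||A x - y||^2
    return x
```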

  2. Multi-sheet surface rebinning methods for reconstruction from asymmetrically truncated cone beam projections: I. Approximation and optimality

    International Nuclear Information System (INIS)

    Betcke, Marta M; Lionheart, William R B

    2013-01-01

    The mechanical motion of the gantry in conventional cone beam CT scanners restricts the speed of data acquisition in applications with near real time requirements. A possible resolution of this problem is to replace the moving source detector assembly with static parts that are electronically activated. An example of such a system is the Rapiscan Systems RTT80 real time tomography scanner, with a static ring of sources and axially offset static cylinder of detectors. A consequence of such a design is asymmetrical axial truncation of the cone beam projections resulting, in the sense of integral geometry, in severely incomplete data. In particular we collect data only in a fraction of the Tam–Danielsson window, hence the standard cone beam reconstruction techniques do not apply. In this work we propose a family of multi-sheet surface rebinning methods for reconstruction from such truncated projections. The proposed methods combine analytical and numerical ideas utilizing linearity of the ray transform to reconstruct data on multi-sheet surfaces, from which the volumetric image is obtained through deconvolution. In this first paper in the series, we discuss the rebinning to multi-sheet surfaces. In particular we concentrate on the underlying transforms on multi-sheet surfaces and their approximation with data collected by offset multi-source scanning geometries like the RTT. The optimal multi-sheet surface and the corresponding rebinning function are found as a solution of a variational problem. In the case of the quadratic objective, the variational problem for the optimal rebinning pair can be solved by a globally convergent iteration. Examples of optimal rebinning pairs are computed for different trajectories. We formulate the axial deconvolution problem for the recovery of the volumetric image from the reconstructions on multi-sheet surfaces. Efficient and stable solution of the deconvolution problem is the subject of the second paper in this series (Betcke and

  3. DEEP WIDEBAND SINGLE POINTINGS AND MOSAICS IN RADIO INTERFEROMETRY: HOW ACCURATELY DO WE RECONSTRUCT INTENSITIES AND SPECTRAL INDICES OF FAINT SOURCES?

    Energy Technology Data Exchange (ETDEWEB)

    Rau, U.; Bhatnagar, S.; Owen, F. N., E-mail: rurvashi@nrao.edu [National Radio Astronomy Observatory, Socorro, NM-87801 (United States)

    2016-11-01

    Many deep wideband wide-field radio interferometric surveys are being designed to accurately measure intensities, spectral indices, and polarization properties of faint source populations. In this paper, we compare various wideband imaging methods to evaluate the accuracy to which intensities and spectral indices of sources close to the confusion limit can be reconstructed. We simulated a wideband single-pointing (C-array, L-Band (1–2 GHz)) and 46-pointing mosaic (D-array, C-Band (4–8 GHz)) JVLA observation using a realistic brightness distribution ranging from 1 μJy to 100 mJy and time-, frequency-, polarization-, and direction-dependent instrumental effects. The main results from these comparisons are (a) errors in the reconstructed intensities and spectral indices are larger for weaker sources even in the absence of simulated noise, (b) errors are systematically lower for joint reconstruction methods (such as Multi-Term Multi-Frequency-Synthesis (MT-MFS)) along with A-Projection for accurate primary beam correction, and (c) use of MT-MFS for image reconstruction eliminates Clean-bias (which is present otherwise). Auxiliary tests include solutions for deficiencies of data partitioning methods (e.g., the use of masks to remove clean bias and hybrid methods to remove sidelobes from sources left un-deconvolved), the effect of sources not at pixel centers, and the consequences of various other numerical approximations within software implementations. This paper also demonstrates the level of detail at which such simulations must be done in order to reflect reality, enable one to systematically identify specific reasons for every trend that is observed, and to estimate scientifically defensible imaging performance metrics and the associated computational complexity of the algorithms/analysis procedures.

  4. Two-dimensional semi-analytic nodal method for multigroup pin power reconstruction

    International Nuclear Information System (INIS)

    Seung Gyou, Baek; Han Gyu, Joo; Un Chul, Lee

    2007-01-01

    A pin power reconstruction method applicable to multigroup problems involving square fuel assemblies is presented. The method is based on a two-dimensional semi-analytic nodal solution which consists of eight exponential terms and 13 polynomial terms. The 13 polynomial terms represent the particular solution obtained under the condition of a two-dimensional 13-term source expansion. In order to achieve a better approximation of the source distribution, the least-squares fitting method is employed. The 8 exponential terms represent a part of the analytically obtained homogeneous solution, and the 8 coefficients are determined by imposing constraints on the 4 surface average currents and 4 corner point fluxes. The surface average currents determined from a transverse-integrated nodal solution are used directly, whereas the corner point fluxes are determined during the course of the reconstruction by employing an iterative scheme that realizes the corner point balance condition. An outgoing-current-based corner point flux determination scheme is newly introduced. The accuracy of the proposed method is demonstrated with the L336C5 benchmark problem. (authors)

  5. Constrained Null Space Component Analysis for Semiblind Source Separation Problem.

    Science.gov (United States)

    Hwang, Wen-Liang; Lu, Keng-Shih; Ho, Jinn

    2018-02-01

    The blind source separation (BSS) problem extracts unknown sources from observations of their unknown mixtures. A current trend in BSS is the semiblind approach, which incorporates prior information on the sources or on how the sources are mixed. The constrained independent component analysis (ICA) approach has been studied to impose constraints on the well-known ICA framework. We introduced an alternative approach based on the null space component analysis (NCA) framework and referred to the approach as the c-NCA approach. We also presented the c-NCA algorithm that uses signal-dependent semidefinite operators, which is a bilinear mapping, as signatures for operator design in the c-NCA approach. Theoretically, we showed that the source estimation of the c-NCA algorithm converges, with a convergence rate dependent on the decay of the sequence obtained by applying the estimated operators to the corresponding sources. The c-NCA can be formulated as a deterministic constrained optimization method, and thus it can take advantage of solvers developed in the optimization community for solving the BSS problem. As examples, we demonstrated that electroencephalogram interference rejection problems can be solved by the c-NCA with proximal splitting algorithms, by incorporating a sparsity-enforcing separation model and considering the case when reference signals are available.

  6. Vector tomography for reconstructing electric fields with non-zero divergence in bounded domains

    Energy Technology Data Exchange (ETDEWEB)

    Koulouri, Alexandra, E-mail: koulouri@uni-muenster.de [Institute for Computational and Applied Mathematics, University of Münster, Einsteinstrasse 62, D-48149 Münster (Germany); Department of Electrical and Electronic Engineering, Imperial College London, Exhibition Road, London SW7 2BT (United Kingdom); Brookes, Mike [Department of Electrical and Electronic Engineering, Imperial College London, Exhibition Road, London SW7 2BT (United Kingdom); Rimpiläinen, Ville [Institute for Biomagnetism and Biosignalanalysis, University of Münster, Malmedyweg 15, D-48149 Münster (Germany); Department of Mathematics, University of Auckland, Private bag 92019, Auckland 1142 (New Zealand)

    2017-01-15

    In vector tomography (VT), the aim is to reconstruct an unknown multi-dimensional vector field using line integral data. In the case of a 2-dimensional VT, two types of line integral data are usually required. These data correspond to integration of the parallel and perpendicular projection of the vector field along the integration lines and are called the longitudinal and transverse measurements, respectively. In most cases, however, the transverse measurements cannot be physically acquired. Therefore, the VT methods are typically used to reconstruct divergence-free (or source-free) velocity and flow fields that can be reconstructed solely from the longitudinal measurements. In this paper, we show how vector fields with non-zero divergence in a bounded domain can also be reconstructed from the longitudinal measurements without the need of explicitly evaluating the transverse measurements. To the best of our knowledge, VT has not previously been used for this purpose. In particular, we study low-frequency, time-harmonic electric fields generated by dipole sources in convex bounded domains which arise, for example, in electroencephalography (EEG) source imaging. We explain in detail the theoretical background, the derivation of the electric field inverse problem and the numerical approximation of the line integrals. We show that fields with non-zero divergence can be reconstructed from the longitudinal measurements with the help of two sparsity constraints that are constructed from the transverse measurements and the vector Laplace operator. As a comparison to EEG source imaging, we note that VT does not require mathematical modeling of the sources. By numerical simulations, we show that the pattern of the electric field can be correctly estimated using VT and the location of the source activity can be determined accurately from the reconstructed magnitudes of the field. - Highlights: • Vector tomography is used to reconstruct electric fields generated by dipole

  7. Direct and inverse source problems for a space fractional advection dispersion equation

    KAUST Repository

    Aldoghaither, Abeer

    2016-05-15

    In this paper, direct and inverse problems for a space fractional advection dispersion equation on a finite domain are studied. The inverse problem consists in determining the source term from final observations. We first derive the analytic solution to the direct problem, which we use to prove the uniqueness and the instability of the inverse source problem using final measurements. Finally, we illustrate the results with a numerical example.

  8. Source Location of Noble Gas Plumes

    International Nuclear Information System (INIS)

    Hoffman, I.; Ungar, K.; Bourgouin, P.; Yee, E.; Wotawa, G.

    2015-01-01

    In radionuclide monitoring, one of the most significant challenges from a verification or surveillance perspective is the source location problem. Modern monitoring/surveillance systems employ meteorological source reconstruction — for example, the Fukushima accident, CRL emissions analysis and even radon risk mapping. These studies usually take weeks to months to conduct, involving multidisciplinary teams representing meteorology; dispersion modelling; radionuclide sampling and metrology; and, when relevant, proper representation of source characteristics (e.g., reactor engineering expertise). Several different approaches have been tried in an attempt to determine useful techniques to apply to the source location problem and to develop rigorous methods that combine all potentially relevant observations and models to identify a most probable source location and size with uncertainties. The ultimate goal is to understand the utility and limitations of these techniques so they can transition from R&D to operational tools. (author)

  9. Segmentation-Driven Tomographic Reconstruction

    DEFF Research Database (Denmark)

    Kongskov, Rasmus Dalgas

    The tomographic reconstruction problem is concerned with creating a model of the interior of an object from some measured data, typically projections of the object. After reconstructing an object it is often desired to segment it, either automatically or manually. For computed tomography (CT) ... such that the segmentation subsequently can be carried out by use of a simple segmentation method, for instance just a thresholding method. We tested the advantages of going from a two-stage reconstruction method to a one-stage segmentation-driven reconstruction method for the phase contrast tomography reconstruction ...

  10. Compressed Sensing, Pseudodictionary-Based, Superresolution Reconstruction

    Directory of Open Access Journals (Sweden)

    Chun-mei Li

    2016-01-01

    Full Text Available The spatial resolution of digital images is a critical factor that affects photogrammetric precision. Single-frame superresolution image reconstruction is a typical underdetermined inverse problem. To solve this type of problem, a compressive sensing, pseudodictionary-based superresolution reconstruction method is proposed in this study. The proposed method achieves pseudodictionary learning with an available low-resolution image using the K-SVD algorithm, which exploits the sparse characteristics of the digital image. Then, the sparse representation coefficients of the low-resolution image are obtained by solving an l0-norm minimization problem, and the sparse coefficients and the high-resolution pseudodictionary are used to reconstruct image tiles with high resolution. Finally, single-frame-image superresolution reconstruction is achieved. The proposed method is applied to photogrammetric images, and the experimental results indicate that it effectively increases image resolution and information content and achieves superresolution reconstruction. The reconstructed results are better than those obtained from traditional interpolation methods in terms of visual effects and quantitative indicators.
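
    The sparse-coding-plus-reconstruction step described above can be sketched as follows. This is a generic orthogonal matching pursuit example rather than the paper's K-SVD pipeline, and the dictionary D and patch y below are random placeholders rather than learned quantities.

        import numpy as np
        from sklearn.linear_model import OrthogonalMatchingPursuit

        # Hypothetical pseudodictionary: 64-dimensional patches, 256 atoms
        # (a real system would learn D with K-SVD from low-resolution patches).
        rng = np.random.default_rng(1)
        D = rng.standard_normal((64, 256))
        D /= np.linalg.norm(D, axis=0)             # unit-norm atoms

        y = rng.standard_normal(64)                # one (hypothetical) image patch

        # Sparse coding: represent the patch with a few dictionary atoms.
        omp = OrthogonalMatchingPursuit(n_nonzero_coefs=5, fit_intercept=False)
        omp.fit(D, y)
        alpha = omp.coef_                          # sparse representation coefficients

        # Reconstruction: applying the same coefficients to a high-resolution
        # dictionary (reused here for illustration) yields the super-resolved patch.
        y_rec = D @ alpha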

  11. Radiation protection problems with sealed Pu radiation sources

    International Nuclear Information System (INIS)

    Naumann, M.; Wels, C.

    1982-01-01

    A brief outline of the production methods and most important properties of Pu-238 and Pu-239 is given, followed by an overview of possibilities for utilizing the different types of radiation emitted, a description of problems involved in the safe handling of Pu radiation sources, and an assessment of the design principles for Pu-containing alpha, photon, neutron and energy sources from the radiation protection point of view. (author)

  12. Three-dimensional tomosynthetic image restoration for brachytherapy source localization

    International Nuclear Information System (INIS)

    Persons, Timothy M.

    2001-01-01

    Tomosynthetic image reconstruction allows for the production of a virtually infinite number of slices from a finite number of projection views of a subject. If the reconstructed image volume is viewed in toto, and the three-dimensional (3D) impulse response is accurately known, then it is possible to solve the inverse problem (deconvolution) using canonical image restoration methods (such as Wiener filtering or solution by conjugate gradient least squares iteration) by extension to three dimensions in either the spatial or the frequency domains. This dissertation presents modified direct and iterative restoration methods for solving the inverse tomosynthetic imaging problem in 3D. The significant blur artifact that is common to tomosynthetic reconstructions is deconvolved by solving for the entire 3D image at once. The 3D impulse response is computed analytically using a fiducial reference schema as realized in a robust, self-calibrating solution to generalized tomosynthesis. 3D modulation transfer function analysis is used to characterize the tomosynthetic resolution of the 3D reconstructions. The relevant clinical application of these methods is 3D imaging for brachytherapy source localization. Conventional localization schemes for brachytherapy implants using orthogonal or stereoscopic projection radiographs suffer from scaling distortions and poor visibility of implanted seeds, resulting in compromised source tracking (reported errors: 2-4 mm) and dosimetric inaccuracy. 3D image reconstruction (using a well-chosen projection sampling scheme) and restoration of a prostate brachytherapy phantom is used for testing. The approaches presented in this work localize source centroids with submillimeter error in two Cartesian dimensions and just over one millimeter error in the third
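
    As a hypothetical illustration of the frequency-domain Wiener restoration mentioned above, the sketch below deconvolves a synthetic 3-D volume with a known impulse response; the volume size, the Gaussian blur standing in for the tomosynthetic impulse response, and the noise-to-signal ratio are arbitrary choices for the example.

        import numpy as np

        def wiener_deconvolve_3d(blurred, psf, nsr=1e-2):
            """Frequency-domain Wiener filter: X = conj(H) / (|H|^2 + NSR) * Y."""
            H = np.fft.fftn(np.fft.ifftshift(psf), s=blurred.shape)
            Y = np.fft.fftn(blurred)
            return np.real(np.fft.ifftn(np.conj(H) / (np.abs(H) ** 2 + nsr) * Y))

        # Hypothetical 64^3 volume containing one point "seed", blurred by a
        # Gaussian stand-in for the 3D tomosynthetic impulse response.
        volume = np.zeros((64, 64, 64))
        volume[32, 32, 32] = 1.0
        z, y, x = np.mgrid[-32:32, -32:32, -32:32]
        psf = np.exp(-(x**2 + y**2 + z**2) / (2.0 * 2.0**2))
        psf /= psf.sum()
        blurred = np.real(np.fft.ifftn(np.fft.fftn(volume) *
                                       np.fft.fftn(np.fft.ifftshift(psf))))
        restored = wiener_deconvolve_3d(blurred, psf)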

  13. Point-source reconstruction with a sparse light-sensor array for optical TPC readout

    International Nuclear Information System (INIS)

    Rutter, G; Richards, M; Bennieston, A J; Ramachers, Y A

    2011-01-01

    A reconstruction technique for sparse array optical signal readout is introduced and applied to the generic challenge of large-area readout of a large number of point light sources. This challenge finds a prominent example in future, large volume neutrino detector studies based on liquid argon. It is concluded that the sparse array option may be ruled out for reasons of required number of channels when compared to a benchmark derived from charge readout on wire-planes. Smaller-scale detectors, however, could benefit from this technology.

  14. A three-step reconstruction method for fluorescence molecular tomography based on compressive sensing

    DEFF Research Database (Denmark)

    Zhu, Yansong; Jha, Abhinav K.; Dreyer, Jakob K.

    2017-01-01

    Fluorescence molecular tomography (FMT) is a promising tool for real time in vivo quantification of neurotransmission (NT) as we pursue in our BRAIN initiative effort. However, the acquired image data are noisy and the reconstruction problem is ill-posed. Further, while spatial sparsity of the NT...... matrix coherence. The resultant image data are input to a homotopy-based reconstruction strategy that exploits sparsity via ℓ1 regularization. The reconstructed image is then input to a maximum-likelihood expectation maximization (MLEM) algorithm that retains the sparseness of the input estimate...... and improves upon the quantitation by accurate Poisson noise modeling. The proposed reconstruction method was evaluated in a three-dimensional simulated setup with fluorescent sources in a cuboidal scattering medium with optical properties simulating human brain cortex (reduced scattering coefficient: 9.2 cm-1...
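
    A generic MLEM update of the kind referred to above can be sketched as follows. This shows only the Poisson-likelihood EM step, not the full three-step homotopy-plus-MLEM pipeline of the paper; the system matrix A and the source configuration are hypothetical.

        import numpy as np

        def mlem(A, y, n_iter=50, eps=1e-12):
            """MLEM for Poisson data y ~ Poisson(A x), x >= 0:
            x <- x / (A^T 1) * A^T (y / (A x))."""
            x = np.ones(A.shape[1])
            sens = A.T @ np.ones(A.shape[0])           # sensitivity image A^T 1
            for _ in range(n_iter):
                ratio = y / np.maximum(A @ x, eps)
                x = x / np.maximum(sens, eps) * (A.T @ ratio)
            return x

        # Hypothetical system: 100 measurements of 30 voxels, two active sources.
        rng = np.random.default_rng(3)
        A = rng.random((100, 30))
        x_true = np.zeros(30)
        x_true[[4, 17]] = 50.0
        y = rng.poisson(A @ x_true).astype(float)
        x_hat = mlem(A, y)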

  15. Nature and magnitude of the problem of spent radiation sources

    International Nuclear Information System (INIS)

    1991-09-01

    Various types of sealed radiation sources are widely used in industry, medicine and research. Virtually all countries have some sealed sources. The activity in the sources varies from kilobecquerels in consumer products to hundreds of petabecquerels in facilities for food irradiation. Loss or misuse of sealed sources can give rise to accidents resulting in radiation exposure of workers and members of the general public, and can also give rise to extensive contamination of land, equipment and buildings. In extreme cases the exposure can be lethal. Problems of safety relating to spent radiation sources have been under consideration within the Agency for some years. The first objective of the project has been to prepare a comprehensive report reviewing the nature and background of the problem, also giving an overview of existing practices for the management of spent radiation sources. This report is the fulfilment of this first objective. The safe management of spent radiation sources cannot be studied in isolation from their normal use, so it has been necessary to include some details which are relevant to the use of radiation sources in general, although that area is outside the scope of this report. The report is limited to radiation sources made up of radioactive material. The Agency is implementing a comprehensive action plan for assistance to Member States, especially the developing countries, in all aspects of the safe management of spent radiation sources. The Agency is further seeking to establish regional or global solutions to the problems of long-term storage of spent radiation sources, as well as finding routes for the disposal of sources when it is not feasible to set up safe national solutions. The cost of remedial actions after an accident with radiation sources can be very high indeed: millions of dollars. If the Agency can help to prevent even one such accident, the cost of its whole programme in this field would be more than covered. Refs

  16. Model-Based Least Squares Reconstruction of Coded Source Neutron Radiographs: Integrating the ORNL HFIR CG1D Source Model

    Energy Technology Data Exchange (ETDEWEB)

    Santos-Villalobos, Hector J [ORNL; Gregor, Jens [University of Tennessee, Knoxville (UTK); Bingham, Philip R [ORNL

    2014-01-01

    At present, neutron sources cannot be fabricated small and powerful enough to achieve high-resolution radiography while maintaining adequate flux. One solution is to employ computational imaging techniques such as a Magnified Coded Source Imaging (CSI) system. A coded mask is placed between the neutron source and the object. The system resolution is increased by reducing the size of the mask holes, and the flux is increased by increasing the size of the coded mask and/or the number of holes. One limitation of such a system is that the resolution of current state-of-the-art scintillator-based detectors is limited to approximately 50 μm. To overcome this challenge, the coded mask and the object are magnified by making the distance from the coded mask to the object much smaller than the distance from the object to the detector. In previous work, we showed via synthetic experiments that our least squares method outperforms other methods in image quality and reconstruction precision because of the modeling of the CSI system components. However, the validation experiments were limited to simplistic neutron sources. In this work, we aim to model the flux distribution of a real neutron source and incorporate such a model into our least squares computational system. We provide a full description of the methodology used to characterize the neutron source and validate the method with synthetic experiments.

  17. Radiation problems expected for the German spallation neutron source

    International Nuclear Information System (INIS)

    Goebel, K.

    1981-01-01

    The German project for the construction of a Spallation Neutron Source with high proton beam power (5.5 MW) will have to cope with a number of radiation problems. The present report describes these problems and proposes solutions for keeping exposures for the staff and release of activity and radiation into the environment as low as reasonably achievable. It is shown that the strict requirements of the German radiation protection regulations can be met. The main problem will be the exposure of maintenance personnel to remanent gamma radiation, as is the case at existing proton accelerators. Closed ventilation and cooling systems will reduce the release of (mainly short-lived) activity to acceptable levels. Shielding requirements for different sections are discussed, and it is demonstrated by calculations and extrapolations from experiments that fence-post doses well below 150 mrem/y can be obtained at distances of the order of 100 metres from the principal source points. The radiation protection system proposed for the Spallation Neutron Source is discussed, in particular the needs for monitor systems and a central radiation protection data base and alarm system. (orig.)

  18. Blind source separation problem in GPS time series

    Science.gov (United States)

    Gualandi, A.; Serpelloni, E.; Belardinelli, M. E.

    2016-04-01

    A critical point in the analysis of ground displacement time series, such as those recorded by space geodetic techniques, is the development of data-driven methods that allow the different sources of deformation to be discerned and characterized in the space and time domains. Multivariate statistics includes several approaches that can be considered as part of data-driven methods. A widely used technique is principal component analysis (PCA), which allows us to reduce the dimensionality of the data space while keeping most of the variance of the dataset explained. However, PCA does not perform well in finding the solution to the so-called blind source separation (BSS) problem, i.e., in recovering and separating the original sources that generate the observed data. This is mainly due to the fact that PCA minimizes the misfit calculated using an L2 norm (χ²), looking for a new Euclidean space where the projected data are uncorrelated. Independent component analysis (ICA) is a popular technique adopted to approach the BSS problem. However, the independence condition is not easy to impose, and it is often necessary to introduce some approximations. To work around this problem, we test the use of a modified variational Bayesian ICA (vbICA) method to recover the multiple sources of ground deformation even in the presence of missing data. The vbICA method models the probability density function (pdf) of each source signal using a mix of Gaussian distributions, allowing for more flexibility in the description of the pdf of the sources with respect to standard ICA and giving a more reliable estimate of them. Here we present its application to synthetic global positioning system (GPS) position time series, generated by simulating deformation near an active fault, including inter-seismic, co-seismic, and post-seismic signals, plus seasonal signals and noise, and an additional time-dependent volcanic source. We evaluate the ability of the PCA and ICA decomposition
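
    As an illustration of the PCA-versus-ICA distinction discussed above, the sketch below decomposes synthetic, GPS-like time series with scikit-learn's standard FastICA (not the vbICA method of the paper); the signals, mixing matrix and noise level are invented for the example.

        import numpy as np
        from sklearn.decomposition import PCA, FastICA

        # Hypothetical "GPS-like" signals: an annual seasonal term and a
        # post-seismic decay, mixed into ten station time series with noise.
        t = np.linspace(0.0, 4.0, 800)                                   # time in years
        seasonal = np.sin(2.0 * np.pi * t)
        postseismic = np.where(t > 2.0, 1.0 - np.exp(-(t - 2.0) / 0.3), 0.0)
        S = np.c_[seasonal, postseismic]
        rng = np.random.default_rng(4)
        X = S @ rng.standard_normal((2, 10)) + 0.05 * rng.standard_normal((800, 10))

        pcs = PCA(n_components=2).fit_transform(X)                       # uncorrelated components
        ics = FastICA(n_components=2, random_state=0).fit_transform(X)   # separated sources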

  19. The inverse problems of reconstruction in the X-rays, gamma or positron tomographic imaging systems

    International Nuclear Information System (INIS)

    Grangeat, P.

    1999-01-01

    The imaging revolution brought about by tomographic techniques in the 1970s allows the computation of maps of local values of the attenuation or the emission activity. Reconstruction techniques thus make it possible to pass from integral measurements to the distribution of characteristic information by inversion of the measurement equations. They are a main application of solution techniques for inverse problems. In the first part, the author recalls the physical principles of measurement in X-ray, gamma and positron imaging. He then presents the various problems with their associated inversion techniques. The third part is devoted to the fields of application and to examples, and the last part concludes with an outlook. (A.L.B.)

  20. Overview of image reconstruction

    International Nuclear Information System (INIS)

    Marr, R.B.

    1980-04-01

    Image reconstruction (or computerized tomography, etc.) is any process whereby a function, f, on R^n is estimated from empirical data pertaining to its integrals, ∫f(x) dx, for some collection of hyperplanes of dimension k < n. The paper begins with background information on how image reconstruction problems have arisen in practice, and describes some of the application areas of past or current interest; these include radioastronomy, optics, radiology and nuclear medicine, electron microscopy, acoustical imaging, geophysical tomography, nondestructive testing, and NMR zeugmatography. Then the various reconstruction algorithms are discussed in five classes: summation, or simple back-projection; convolution, or filtered back-projection; Fourier and other functional transforms; orthogonal function series expansion; and iterative methods. Certain more technical mathematical aspects of image reconstruction are considered from the standpoint of uniqueness, consistency, and stability of solution. The paper concludes by presenting certain open problems. 73 references
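
    A minimal sketch of the first two algorithm classes listed above (simple back-projection and filtered back-projection), using scikit-image's radon/iradon functions on a standard phantom; the downscaling factor and the number of projection angles are arbitrary choices for the example.

        import numpy as np
        from skimage.data import shepp_logan_phantom
        from skimage.transform import radon, iradon, rescale

        image = rescale(shepp_logan_phantom(), 0.25)               # small test object f
        theta = np.linspace(0.0, 180.0, 90, endpoint=False)        # projection angles
        sinogram = radon(image, theta=theta)                       # line integrals of f

        fbp = iradon(sinogram, theta=theta, filter_name="ramp")    # convolution / filtered back-projection
        sbp = iradon(sinogram, theta=theta, filter_name=None)      # summation / simple back-projection

    The unfiltered reconstruction comes out strongly blurred, which is why the convolution (filtered) variant is normally preferred over plain summation.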

  1. Iterative methods for tomography problems: implementation to a cross-well tomography problem

    Science.gov (United States)

    Karadeniz, M. F.; Weber, G. W.

    2018-01-01

    The velocity distribution between two boreholes is reconstructed by cross-well tomography, which is commonly used in geology. In this paper, the iterative methods Kaczmarz's algorithm, the algebraic reconstruction technique (ART) and the simultaneous iterative reconstruction technique (SIRT) are applied to a specific cross-well tomography problem. The convergence of these methods to the solution and their CPU times for the cross-well tomography problem are compared. Furthermore, the three methods are compared on this problem for different tolerance values.
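
    A minimal sketch of the Kaczmarz/ART row-action iteration named above, not the paper's specific cross-well setup; the system matrix, the synthetic slowness model and the relaxation parameter are hypothetical.

        import numpy as np

        def kaczmarz(A, b, n_sweeps=50, relax=1.0):
            """Kaczmarz / ART: cycle through the rows, projecting the iterate onto
            each hyperplane a_i . x = b_i with relaxation parameter `relax`."""
            x = np.zeros(A.shape[1])
            row_norms = np.einsum("ij,ij->i", A, A)
            for _ in range(n_sweeps):
                for i in range(A.shape[0]):
                    if row_norms[i] > 0.0:
                        x += relax * (b[i] - A[i] @ x) / row_norms[i] * A[i]
            return x

        # Hypothetical travel-time system: 200 rays through 100 slowness cells.
        rng = np.random.default_rng(5)
        A = rng.random((200, 100))
        x_true = rng.random(100)
        b = A @ x_true
        print(np.linalg.norm(kaczmarz(A, b) - x_true))

    SIRT differs in that each sweep uses all rows simultaneously before updating the iterate, which converges more smoothly at the cost of more iterations.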

  2. A coarse-mesh diffusion synthetic acceleration of the scattering source iteration scheme for one-speed slab-geometry discrete ordinates problems

    International Nuclear Information System (INIS)

    Santos, Frederico P.; Alves Filho, Hermes; Barros, Ricardo C.; Xavier, Vinicius S.

    2011-01-01

    The scattering source iterative (SI) scheme is traditionally applied to converge fine-mesh numerical solutions of fixed-source discrete ordinates (S_N) neutron transport problems. The SI scheme is very simple to implement from a computational viewpoint. However, the SI scheme may show a very slow convergence rate, mainly for diffusive media (low absorption) that are several mean free paths in extent. In this work we describe an acceleration technique based on an improved initial guess for the scattering source distribution within the slab. In other words, we use as the initial guess for the fine-mesh scattering source the coarse-mesh solution of the neutron diffusion equation with special boundary conditions that account for the classical S_N prescribed boundary conditions, including vacuum boundary conditions. Therefore, we first implement a spectral nodal method that generates a coarse-mesh diffusion solution that is completely free from spatial truncation errors; then we reconstruct this coarse-mesh solution within each spatial cell of the discretization grid to yield the initial guess for the fine-mesh scattering source in the first S_N transport sweep (μ_m > 0 and μ_m < 0, m = 1, ..., N) across the spatial grid. We consider a number of numerical experiments to illustrate the efficiency of the offered diffusion synthetic acceleration (DSA) technique. (author)

  3. A tensor-based dictionary learning approach to tomographic image reconstruction

    DEFF Research Database (Denmark)

    Soltani, Sara; Kilmer, Misha E.; Hansen, Per Christian

    2016-01-01

    We consider tomographic reconstruction using priors in the form of a dictionary learned from training images. The reconstruction has two stages: first we construct a tensor dictionary prior from our training data, and then we pose the reconstruction problem in terms of recovering the expansion coefficients in that dictionary. Our approach differs from past approaches in that (a) we use a third-order tensor representation for our images and (b) we recast the reconstruction problem using the tensor formulation. The dictionary learning problem is presented as a non-negative tensor factorization problem with sparsity constraints. The reconstruction problem is formulated in a convex optimization framework by looking for a solution with a sparse representation in the tensor dictionary. Numerical results show that our tensor formulation leads to very sparse representations of both the training images ...

  4. Noniterative MAP reconstruction using sparse matrix representations.

    Science.gov (United States)

    Cao, Guangzhi; Bouman, Charles A; Webb, Kevin J

    2009-09-01

    We present a method for noniterative maximum a posteriori (MAP) tomographic reconstruction which is based on the use of sparse matrix representations. Our approach is to precompute and store the inverse matrix required for MAP reconstruction. This approach has generally not been used in the past because the inverse matrix is typically large and fully populated (i.e., not sparse). In order to overcome this problem, we introduce two new ideas. The first idea is a novel theory for the lossy source coding of matrix transformations which we refer to as matrix source coding. This theory is based on a distortion metric that reflects the distortions produced in the final matrix-vector product, rather than the distortions in the coded matrix itself. The resulting algorithms are shown to require orthonormal transformations of both the measurement data and the matrix rows and columns before quantization and coding. The second idea is a method for efficiently storing and computing the required orthonormal transformations, which we call a sparse-matrix transform (SMT). The SMT is a generalization of the classical FFT in that it uses butterflies to compute an orthonormal transform; but unlike an FFT, the SMT uses the butterflies in an irregular pattern, and is numerically designed to best approximate the desired transforms. We demonstrate the potential of the noniterative MAP reconstruction with examples from optical tomography. The method requires offline computation to encode the inverse transform. However, once these offline computations are completed, the noniterative MAP algorithm is shown to reduce both storage and computation by well over two orders of magnitude, as compared to linear iterative reconstruction methods.
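
    The core precompute-and-apply idea can be sketched as follows: a plain dense-matrix illustration with a simple Gaussian prior. The paper's matrix source coding and sparse-matrix transform, which are what make the stored operator compact, are not shown, and all sizes and the noise variance below are hypothetical.

        import numpy as np

        rng = np.random.default_rng(6)
        m, n = 400, 200
        A = rng.standard_normal((m, n))           # forward (system) matrix
        sigma2 = 0.01                             # assumed noise variance
        R = np.eye(n)                             # simple Gaussian prior precision

        # Offline: precompute the dense MAP reconstruction matrix
        #   H = (A^T A + sigma^2 R)^(-1) A^T, so that x_map = H y.
        H = np.linalg.solve(A.T @ A + sigma2 * R, A.T)

        # Online: each new measurement needs only one matrix-vector product.
        x_true = rng.standard_normal(n)
        y = A @ x_true + np.sqrt(sigma2) * rng.standard_normal(m)
        x_map = H @ y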

  5. Shredded banknotes reconstruction using AKAZE points.

    Science.gov (United States)

    Nabiyev, Vasif V; Yılmaz, Seçkin; Günay, Asuman; Muzaffer, Gül; Ulutaş, Güzin

    2017-09-01

    Shredded banknote reconstruction is a recent topic and can be viewed as solving large-scale jigsaw puzzles. Problems such as the reconstruction of fragmented documents, photographs and historical artefacts are closely related to this topic. The high computational complexity of these problems increases the need for the development of new methods. Reconstruction of shredded banknotes consists of three main stages: (1) matching fragments with a reference banknote; (2) aligning the fragments by rotating them by certain angles; (3) assembling the fragments. The existing methods can be successfully applied to synthetic banknote fragments created in a computer environment, but when the real banknote reconstruction problem is considered, different sub-problems arise that make the existing methods inadequate. In this study, a keypoint-based method, named AKAZE, was used to make the matching process effective. This is the first study that uses the AKAZE method in the reconstruction of shredded banknotes. A new method for fragment alignment has also been proposed. In this method, the convex hulls that contain all correctly matched AKAZE keypoints are found on the reference banknote and on the fragments. The orientations of the fragments are estimated accurately by comparing these convex polygons. Also, a new criterion was developed to quantify the success rates of reconstructed banknotes. In addition, two different data sets, including real and synthetic banknote fragments from different countries, were created to test the success of the proposed method. Copyright © 2017 Elsevier B.V. All rights reserved.
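
    The AKAZE matching stage can be sketched with OpenCV as below; the image file names are placeholders, and the alignment and assembly stages described above are not shown.

        import cv2

        # Placeholder file names for the reference banknote and a scanned fragment.
        reference = cv2.imread("reference_banknote.png", cv2.IMREAD_GRAYSCALE)
        fragment = cv2.imread("fragment.png", cv2.IMREAD_GRAYSCALE)

        akaze = cv2.AKAZE_create()
        kp_ref, des_ref = akaze.detectAndCompute(reference, None)
        kp_frag, des_frag = akaze.detectAndCompute(fragment, None)

        # AKAZE descriptors are binary, so Hamming distance is appropriate;
        # Lowe's ratio test keeps only distinctive correspondences.
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
        matches = matcher.knnMatch(des_frag, des_ref, k=2)
        good = [m for m, n in matches if m.distance < 0.75 * n.distance]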

  6. Bayesian Multi-Energy Computed Tomography reconstruction approaches based on decomposition models

    International Nuclear Information System (INIS)

    Cai, Caifang

    2013-01-01

    Multi-Energy Computed Tomography (MECT) makes it possible to obtain multiple fractions of basis materials without segmentation. In medical applications, one is the soft-tissue-equivalent water fraction and the other is the hard-matter-equivalent bone fraction. Practical MECT measurements are usually obtained with polychromatic X-ray beams. Existing reconstruction approaches based on linear forward models that do not account for the beam polychromaticity fail to estimate the correct decomposition fractions and result in Beam-Hardening Artifacts (BHA). The existing BHA correction approaches either need to refer to calibration measurements or suffer from the noise amplification caused by the negative-log pre-processing and the water and bone separation problem. To overcome these problems, statistical DECT reconstruction approaches based on non-linear forward models that account for the beam polychromaticity show great potential for giving accurate fraction images. This work proposes a full-spectral Bayesian reconstruction approach which allows the reconstruction of high-quality fraction images from ordinary polychromatic measurements. This approach is based on a Gaussian noise model with unknown variance assigned directly to the projections without taking the negative log. Within the Bayesian inference framework, the decomposition fractions and the observation variance are estimated by using the joint Maximum A Posteriori (MAP) estimation method. Subject to an adaptive prior model assigned to the variance, the joint estimation problem is then simplified into a single estimation problem. This transforms the joint MAP estimation problem into a minimization problem with a non-quadratic cost function. To solve it, the use of a monotone Conjugate Gradient (CG) algorithm with suboptimal descent steps is proposed. The performance of the proposed approach is analyzed with both simulated and experimental data. The results show that the proposed Bayesian approach is robust to noise and materials. It is also

  7. Coordinate reconstruction using box reconstruction and projection of X-ray photo

    International Nuclear Information System (INIS)

    Achmad Suntoro

    2011-01-01

    Mathematical formulas have been derived for a reconstruction process that determines the coordinates of any point relative to a preset coordinate system. The reconstruction process uses a reconstruction box for which the length of each edge is known, whose top-bottom and left-right faces each carry a cross marker, and whose top and right faces serve as projection planes for an X-ray source in a perspective projection system. Using the data from the two X-ray projection images, the coordinates of any point inside the reconstruction box, as long as its projection is recorded in both photos, can be determined relative to the midpoint of the reconstruction box taken as the origin of the coordinate system. (author)

  8. Inverse random source scattering for the Helmholtz equation in inhomogeneous media

    Science.gov (United States)

    Li, Ming; Chen, Chuchu; Li, Peijun

    2018-01-01

    This paper is concerned with an inverse random source scattering problem in an inhomogeneous background medium. The wave propagation is modeled by the stochastic Helmholtz equation with the source driven by additive white noise. The goal is to reconstruct the statistical properties of the random source such as the mean and variance from the boundary measurement of the radiated random wave field at multiple frequencies. Both the direct and inverse problems are considered. We show that the direct problem has a unique mild solution by a constructive proof. For the inverse problem, we derive Fredholm integral equations, which connect the boundary measurement of the radiated wave field with the unknown source function. A regularized block Kaczmarz method is developed to solve the ill-posed integral equations. Numerical experiments are included to demonstrate the effectiveness of the proposed method.

  9. Mastectomy Skin Necrosis After Breast Reconstruction: A Comparative Analysis Between Autologous Reconstruction and Implant-Based Reconstruction.

    Science.gov (United States)

    Sue, Gloria R; Lee, Gordon K

    2018-05-01

    Mastectomy skin necrosis is a significant problem after breast reconstruction. We sought to perform a comparative analysis on this complication between patients undergoing autologous breast reconstruction and patients undergoing 2-stage expander implant breast reconstruction. A retrospective review was performed on consecutive patients undergoing autologous breast reconstruction or 2-stage expander implant breast reconstruction by the senior author from 2006 through 2015. Patient demographic factors including age, body mass index, history of diabetes, history of smoking, and history of radiation to the breast were collected. Our primary outcome measure was mastectomy skin necrosis. Fisher exact test was used for statistical analysis between the 2 patient cohorts. The treatment patterns of mastectomy skin necrosis were then analyzed. We identified 204 patients who underwent autologous breast reconstruction and 293 patients who underwent 2-stage expander implant breast reconstruction. Patients undergoing autologous breast reconstruction were older, heavier, more likely to have diabetes, and more likely to have had prior radiation to the breast compared with patients undergoing implant-based reconstruction. The incidence of mastectomy skin necrosis was 30.4% of patients in the autologous group compared with only 10.6% of patients in the tissue expander group (P care in the autologous group, only 3.2% were treated with local wound care in the tissue expander group (P skin necrosis is significantly more likely to occur after autologous breast reconstruction compared with 2-stage expander implant-based breast reconstruction. Patients with autologous reconstructions are more readily treated with local wound care compared with patients with tissue expanders, who tended to require operative treatment of this complication. Patients considering breast reconstruction should be counseled appropriately regarding the differences in incidence and management of mastectomy skin

  10. Use of the 3D surgical modelling technique with open-source software for mandibular fibula free flap reconstruction and its surgical guides.

    Science.gov (United States)

    Ganry, L; Hersant, B; Quilichini, J; Leyder, P; Meningaud, J P

    2017-06-01

    Tridimensional (3D) surgical modelling is a necessary step to create 3D-printed surgical tools, and expensive professional software is generally needed. Open-source software is functional, reliable, regularly updated, may be downloaded for free and can be used to produce 3D models. Few surgical teams have used free solutions for mastering 3D surgical modelling for reconstructive surgery with osseous free flaps. We described an open-source software 3D surgical modelling protocol to perform a fast and nearly free mandibular reconstruction with a microvascular fibula free flap and its surgical guides, with no need for engineering support. Four specialised open-source software packages were used in succession to perform our 3D modelling: OsiriX®, Meshlab®, Netfabb® and Blender®. Digital Imaging and Communications in Medicine (DICOM) data on the patient's skull and fibula, obtained with a computerised tomography (CT) scan, were needed. The 3D models of the reconstructed mandible and its surgical guides were created. This new strategy may improve surgical management in Oral and Craniomaxillofacial surgery. Further clinical studies are needed to demonstrate the feasibility, reproducibility, transfer of know-how and benefits of this technique. Copyright © 2017 Elsevier Masson SAS. All rights reserved.

  11. Hybrid light transport model based bioluminescence tomography reconstruction for early gastric cancer detection

    Science.gov (United States)

    Chen, Xueli; Liang, Jimin; Hu, Hao; Qu, Xiaochao; Yang, Defu; Chen, Duofang; Zhu, Shouping; Tian, Jie

    2012-03-01

    Gastric cancer is the second leading cause of cancer-related death in the world, and it remains difficult to cure because it is usually at a late stage once it is found. Early gastric cancer detection therefore becomes an effective approach to decrease gastric cancer mortality. Bioluminescence tomography (BLT) has been applied to detect early liver cancer and prostate cancer metastasis. However, gastric cancer commonly originates from the gastric mucosa and grows outwards, so the bioluminescent light passes through a non-scattering region formed by the gastric pouch as it propagates through tissue. Thus, current BLT reconstruction algorithms based on approximations of the radiative transfer equation are not optimal for handling this problem. To address this gastric-cancer-specific problem, this paper presents a novel reconstruction algorithm that uses a hybrid light transport model to describe bioluminescent light propagation in tissues. Radiosity theory is integrated with the diffusion equation to form the hybrid light transport model, which is used to describe light propagation in the non-scattering region. After finite element discretization, the hybrid light transport model is converted into a minimization problem that incorporates an l1-norm-based regularization term to reveal the sparsity of the bioluminescent source distribution. The performance of the reconstruction algorithm is first demonstrated with a digital-mouse-based simulation, with a reconstruction error of less than 1 mm. An in situ gastric-cancer-bearing nude mouse experiment is then conducted. The preliminary result demonstrates the ability of the novel BLT reconstruction algorithm for early gastric cancer detection.

  12. Neural network algorithm for image reconstruction using the grid friendly projections

    International Nuclear Information System (INIS)

    Cierniak, R.

    2011-01-01

    Full text: This paper describes the development of an original approach to the reconstruction problem using a recurrent neural network. In particular, the 'grid-friendly' angles of the performed projections are selected according to the discrete Radon transform (DRT) concept to decrease the number of projections required. The methodology of our approach is consistent with analytical reconstruction algorithms. In our approach, the reconstruction problem is reformulated as an optimization problem, which is solved using a method based on the maximum likelihood methodology. The reconstruction algorithm proposed in this work is subsequently adapted to the more practical case of discrete fan-beam projections. Computer simulation results show that the neural network reconstruction algorithm designed in this way improves the obtained results and outperforms conventional methods in terms of reconstructed image quality. (author)

  13. A fast sparse reconstruction algorithm for electrical tomography

    International Nuclear Information System (INIS)

    Zhao, Jia; Xu, Yanbin; Tan, Chao; Dong, Feng

    2014-01-01

    Electrical tomography (ET) has been widely investigated due to its advantages of being non-radiative, low-cost and high-speed. However, image reconstruction in ET is a nonlinear and ill-posed inverse problem, and the imaging results are easily affected by measurement noise. A sparse reconstruction algorithm based on L1 regularization is robust to noise and consequently provides high-quality reconstructed images. In this paper, the sparse reconstruction by separable approximation algorithm (SpaRSA) is extended to solve the ET inverse problem. The algorithm is competitive with the fastest state-of-the-art algorithms in solving the standard L2−L1 problem. However, it is computationally expensive when the dimension of the matrix is large. To further improve the calculation speed of solving inverse problems, a projection method based on the Krylov subspace is employed and combined with the SpaRSA algorithm. The proposed algorithm is tested with image reconstruction of electrical resistance tomography (ERT). Both simulation and experimental results demonstrate that the proposed method can reduce the computational time and improve the noise robustness of the image reconstruction. (paper)

  14. Great Problems of Mathematics: A Course Based on Original Sources.

    Science.gov (United States)

    Laubenbacher, Reinhard C.; Pengelley, David J.

    1992-01-01

    Describes the history of five selected problems from mathematics that are included in an undergraduate honors course designed to utilize original sources for demonstrating the evolution of ideas developed in solving these problems: area and the definite integral, the beginnings of set theory, solutions of algebraic equations, Fermat's last…

  15. SU-D-210-03: Limited-View Multi-Source Quantitative Photoacoustic Tomography

    Energy Technology Data Exchange (ETDEWEB)

    Feng, J; Gao, H [Shanghai Jiao Tong University, Shanghai, Shanghai (China)

    2015-06-15

    Purpose: This work investigates a novel limited-view multi-source acquisition scheme for the direct and simultaneous reconstruction of optical coefficients in quantitative photoacoustic tomography (QPAT), with potentially improved signal-to-noise ratio and reduced data acquisition time. Methods: Conventional QPAT is often considered in two steps: first, the initial acoustic pressure is reconstructed from the full-view ultrasonic data after each optical illumination, and then the optical coefficients (e.g., absorption and scattering coefficients) are quantitatively reconstructed from the initial acoustic pressure using a multi-source or multi-wavelength scheme. Based on the novel limited-view multi-source scheme proposed here, we have to consider the direct reconstruction of optical coefficients from the ultrasonic data, since the initial acoustic pressure can no longer be reconstructed as an intermediate variable owing to the incomplete acoustic data of the proposed limited-view scheme. In this work, based on a coupled photoacoustic forward model combining the diffusion approximation and the wave equation, we develop a limited-memory quasi-Newton method (LBFGS) for image reconstruction that utilizes the adjoint forward problem for fast computation of gradients. Furthermore, tensor framelet sparsity is utilized to improve the image reconstruction, which is solved by the Alternating Direction Method of Multipliers (ADMM). Results: The simulation was performed on a modified Shepp-Logan phantom to validate the feasibility of the proposed limited-view scheme and its corresponding image reconstruction algorithms. Conclusion: A limited-view multi-source QPAT scheme is proposed, i.e., partial-view acoustic data acquisition accompanying each optical illumination, followed by simultaneous rotation of both the optical sources and the ultrasonic detectors for the next optical illumination. Moreover, LBFGS and ADMM algorithms are developed for the direct reconstruction of optical coefficients from the
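
    As a loose illustration of the quasi-Newton step mentioned above, the sketch below solves a regularized linear least-squares reconstruction with SciPy's L-BFGS-B solver; the forward operator, data, Tikhonov penalty (standing in for the framelet-sparsity term) and non-negativity bounds are all assumptions for the example, not the paper's coupled photoacoustic model.

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(7)
        A = rng.standard_normal((300, 120))        # hypothetical linearized forward operator
        x_true = np.zeros(120)
        x_true[10:20] = 1.0
        y = A @ x_true + 0.01 * rng.standard_normal(300)
        lam = 1e-2                                 # simple quadratic penalty weight

        def cost_and_grad(x):
            r = A @ x - y
            return 0.5 * r @ r + 0.5 * lam * x @ x, A.T @ r + lam * x

        res = minimize(cost_and_grad, np.zeros(120), jac=True, method="L-BFGS-B",
                       bounds=[(0.0, None)] * 120)  # optical coefficients are non-negative
        x_hat = res.x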

  16. An inverse problem for a one-dimensional time-fractional diffusion problem

    KAUST Repository

    Jin, Bangti

    2012-06-26

    We study an inverse problem of recovering a spatially varying potential term in a one-dimensional time-fractional diffusion equation from the flux measurements taken at a single fixed time corresponding to a given set of input sources. The unique identifiability of the potential is shown for two cases, i.e. the flux at one end and the net flux, provided that the set of input sources forms a complete basis in L 2(0, 1). An algorithm of the quasi-Newton type is proposed for the efficient and accurate reconstruction of the coefficient from finite data, and the injectivity of the Jacobian is discussed. Numerical results for both exact and noisy data are presented. © 2012 IOP Publishing Ltd.

  17. The present problems of hygienic supervising of workplaces with ionizing radiation sources

    International Nuclear Information System (INIS)

    Husar, J.

    1995-01-01

    This paper deals with the problems of hygienic supervision of workplaces with ionizing radiation sources in Bratislava. Most problems arise from the current economic transformation of state corporations. The abolition of previous state corporations and the creation of new organizations means a new field of activities for them. It often happens that previously used ionizing radiation sources, X-ray tubes or radioactive sources are no longer in use and it is necessary to decommission the corresponding workplaces. A big state corporation with a series of subsidiaries throughout Slovakia was divided into many new, smaller joint-stock corporations. One subsidiary possessed a workplace with an X-ray tube and a sealed radioactive source of medium activity. During a routine hygienic inspection it was found that the original establishment had been abolished, all personnel dismissed, and another organization was going to move into the premises. The new organization's personnel did not know that the previous workplace was one with radiation sources. The situation was complicated by the fact that the new management had no connection to the previous personnel and did not have sufficient information about the abolished establishment. The problem of supervising workplaces with ionizing radiation sources is described. (J.K.)

  18. The present problems of hygienic supervising of workplaces with ionizing radiation sources

    Energy Technology Data Exchange (ETDEWEB)

    Husar, J [State Health Institute of Slovak Republic, Bratislava (Czech Republic)

    1996-12-31

    This paper deals with the problems of hygienic supervision of workplaces with ionizing radiation sources in Bratislava. Most problems arise from the current economic transformation of state corporations. The abolition of previous state corporations and the creation of new organizations means a new field of activities for them. It often happens that previously used ionizing radiation sources, X-ray tubes or radioactive sources are no longer in use and it is necessary to decommission the corresponding workplaces. A big state corporation with a series of subsidiaries throughout Slovakia was divided into many new, smaller joint-stock corporations. One subsidiary possessed a workplace with an X-ray tube and a sealed radioactive source of medium activity. During a routine hygienic inspection it was found that the original establishment had been abolished, all personnel dismissed, and another organization was going to move into the premises. The new organization's personnel did not know that the previous workplace was one with radiation sources. The situation was complicated by the fact that the new management had no connection to the previous personnel and did not have sufficient information about the abolished establishment. The problem of supervising workplaces with ionizing radiation sources is described. (J.K.).

  19. Self-expressive Dictionary Learning for Dynamic 3D Reconstruction.

    Science.gov (United States)

    Zheng, Enliang; Ji, Dinghuang; Dunn, Enrique; Frahm, Jan-Michael

    2017-08-22

    We target the problem of sparse 3D reconstruction of dynamic objects observed by multiple unsynchronized video cameras with unknown temporal overlap. To this end, we develop a framework to recover the unknown structure without sequencing information across video sequences. Our proposed compressed sensing framework poses the estimation of 3D structure as a dictionary learning problem, where the dictionary is defined as an aggregation of the temporally varying 3D structures. Given the smooth motion of dynamic objects, we observe that any element in the dictionary can be well approximated by a sparse linear combination of other elements in the same dictionary (i.e., self-expression). Our formulation optimizes a biconvex cost function that leverages a compressed sensing formulation and enforces both structural dependency coherence across video streams and motion smoothness across estimates from common video sources. We further analyze the reconstructability of our approach under different capture scenarios and compare and relate it to existing methods. Experimental results on large amounts of synthetic data as well as real imagery demonstrate the effectiveness of our approach.

  20. AIR Tools - A MATLAB package of algebraic iterative reconstruction methods

    DEFF Research Database (Denmark)

    Hansen, Per Christian; Saxild-Hansen, Maria

    2012-01-01

    We present a MATLAB package with implementations of several algebraic iterative reconstruction methods for discretizations of inverse problems. These so-called row action methods rely on semi-convergence for achieving the necessary regularization of the problem. Two classes of methods are implemented: Algebraic Reconstruction Techniques (ART) and Simultaneous Iterative Reconstruction Techniques (SIRT). In addition we provide a few simplified test problems from medical and seismic tomography. For each iterative method, a number of strategies are available for choosing the relaxation parameter ...

  1. Bioelectromagnetic forward problem: isolated source approach revis(it)ed.

    Science.gov (United States)

    Stenroos, M; Sarvas, J

    2012-06-07

    Electro- and magnetoencephalography (EEG and MEG) are non-invasive modalities for studying the electrical activity of the brain by measuring voltages on the scalp and magnetic fields outside the head. In the forward problem of EEG and MEG, the relationship between the neural sources and resulting signals is characterized using electromagnetic field theory. This forward problem is commonly solved with the boundary-element method (BEM). The EEG forward problem is numerically challenging due to the low relative conductivity of the skull. In this work, we revise the isolated source approach (ISA) that enables the accurate, computationally efficient BEM solution of this problem. The ISA is formulated for generic basis and weight functions that enable the use of Galerkin weighting. The implementation of the ISA-formulated linear Galerkin BEM (LGISA) is first verified in spherical geometry. Then, the LGISA is compared with conventional Galerkin and symmetric BEM approaches in a realistic 3-shell EEG/MEG model. The results show that the LGISA is a state-of-the-art method for EEG/MEG forward modeling: the ISA formulation increases the accuracy and decreases the computational load. Contrary to some earlier studies, the results show that the ISA increases the accuracy also in the computation of magnetic fields.

  2. Recovering the source and initial value simultaneously in a parabolic equation

    International Nuclear Information System (INIS)

    Zheng, Guang-Hui; Wei, Ting

    2014-01-01

    In this paper, we consider an inverse problem to simultaneously reconstruct the source term and initial data associated with a parabolic equation based on the additional temperature data at a terminal time t = T and the temperature data on an accessible part of a boundary. The conditional stability and uniqueness of the inverse problem are established. We apply a variational regularization method to recover the source and initial value. The existence, uniqueness and stability of the minimizer of the corresponding variational problem are obtained. Taking the minimizer as a regularized solution for the inverse problem, under an a priori and an a posteriori parameter choice rule, the convergence rates of the regularized solution under a source condition are also given. Furthermore, the source condition is characterized by an optimal control approach. Finally, we use a conjugate gradient method and a stopping criterion given by Morozov's discrepancy principle to solve the variational problem. Numerical experiments are provided to demonstrate the feasibility of the method. (papers)

  3. Permutationally invariant state reconstruction

    DEFF Research Database (Denmark)

    Moroder, Tobias; Hyllus, Philipp; Tóth, Géza

    2012-01-01

    Feasible tomography schemes for large particle numbers must possess, besides an appropriate data acquisition protocol, an efficient way to reconstruct the density operator from the observed finite data set. Since state reconstruction typically requires the solution of a nonlinear large-scale optimization problem, this is a major challenge in the design of scalable tomography schemes. Here we present an efficient state reconstruction scheme for permutationally invariant quantum state tomography. It works for all common state-of-the-art reconstruction principles, including, in particular, maximum ... optimization, which has clear advantages regarding speed, control and accuracy in comparison to commonly employed numerical routines. First prototype implementations easily allow reconstruction of a state of 20 qubits in a few minutes on a standard computer.

  4. Sources and methods to reconstruct past masting patterns in European oak species.

    Science.gov (United States)

    Szabó, Péter

    2012-01-01

    The irregular occurrence of good seed years in forest trees is known in many parts of the world. Mast year frequency in the past few decades can be examined through field observational studies; however, masting patterns in the more distant past are equally important in gaining a better understanding of long-term forest ecology. Past masting patterns can be studied through the examination of historical written sources. These pose considerable challenges, because the data in them were usually not recorded with the aim of providing information about masting. Several studies have examined masting in the deeper past; however, their authors hardly ever considered the methodological implications of using and combining various source types. This paper provides a critical overview of the types of archival written sources that are available for the reconstruction of past masting patterns for European oak species and proposes a method to unify and evaluate different types of data. The available sources cover approximately eight centuries and fall into two basic categories: direct observations of the amount of acorns, and references to sums of money received in exchange for access to acorns. Because archival sources are highly heterogeneous in origin and quality, the optimal solution for creating databases of past masting data is a three-point scale: zero mast, moderate mast, good mast. When larger amounts of data are available in a unified three-point-scale database, they can be used to test hypotheses about past masting frequencies, the driving forces of masting or regional masting patterns.

  5. Photoacoustic image reconstruction via deep learning

    Science.gov (United States)

    Antholzer, Stephan; Haltmeier, Markus; Nuster, Robert; Schwab, Johannes

    2018-02-01

    Applying standard algorithms to sparse data problems in photoacoustic tomography (PAT) yields low-quality images containing severe under-sampling artifacts. To some extent, these artifacts can be reduced by iterative image reconstruction algorithms which allow to include prior knowledge such as smoothness, total variation (TV) or sparsity constraints. These algorithms tend to be time consuming as the forward and adjoint problems have to be solved repeatedly. Further, iterative algorithms have additional drawbacks. For example, the reconstruction quality strongly depends on a-priori model assumptions about the objects to be recovered, which are often not strictly satisfied in practical applications. To overcome these issues, in this paper, we develop direct and efficient reconstruction algorithms based on deep learning. As opposed to iterative algorithms, we apply a convolutional neural network, whose parameters are trained before the reconstruction process based on a set of training data. For actual image reconstruction, a single evaluation of the trained network yields the desired result. Our presented numerical results (using two different network architectures) demonstrate that the proposed deep learning approach reconstructs images with a quality comparable to state of the art iterative reconstruction methods.

  6. SESAME: a software tool for the numerical dosimetric reconstruction of radiological accidents involving external sources and its application to the accident in Chile in December 2005.

    Science.gov (United States)

    Huet, C; Lemosquet, A; Clairand, I; Rioual, J B; Franck, D; de Carlan, L; Aubineau-Lanièce, I; Bottollier-Depois, J F

    2009-01-01

    Estimating the dose distribution in a victim's body is a relevant indicator in assessing biological damage from exposure in the event of a radiological accident caused by an external source. This dose distribution can be assessed by physical dosimetric reconstruction methods. Physical dosimetric reconstruction can be achieved using experimental or numerical techniques. This article presents the laboratory-developed SESAME (Simulation of External Source Accident with MEdical images) tool, which is specific to the dosimetric reconstruction of radiological accidents through numerical simulations that combine voxel geometry with the MCNP(X) Monte Carlo radiation-matter interaction computer code. The experimental validation of the tool using a photon field and its application to a radiological accident in Chile in December 2005 are also described.

  7. Three-dimension reconstruction based on spatial light modulator

    International Nuclear Information System (INIS)

    Deng Xuejiao; Zhang Nanyang; Zeng Yanan; Yin Shiliang; Wang Weiyu

    2011-01-01

    Three-dimensional reconstruction, an important research direction in computer graphics, is widely used in related fields such as industrial design and manufacturing, construction, aerospace and biology. With such technology we can obtain a three-dimensional digital point cloud from two-dimensional images and then simulate the three-dimensional structure of the physical object for further study. At present, three-dimensional digital point cloud data are mainly obtained with adaptive optics systems based on a Shack-Hartmann sensor and with phase-shifting digital holography. For surface fitting, many methods are also available, such as the iterated discrete Fourier transform, convolution and image interpolation, and linear phase retrieval. The main problems encountered in three-dimensional reconstruction are the extraction of feature points and the curve-fitting arithmetic. To solve these problems we can, first of all, calculate the surface normal vector of each pixel in the light source coordinate system; these vectors are then converted to image coordinates through a coordinate transformation, which yields the expected 3D point cloud. Secondly, after de-noising and repair, feature points can be selected and fitted to obtain the fitting function of the surface topography by means of Zernike polynomials, so as to reconstruct the object's three-dimensional topography. In this paper, a new kind of three-dimensional reconstruction algorithm is proposed, with the assistance of which the topography can be estimated from its grayscale at different sample points. Moreover, simulation and experimental results prove that the new algorithm has a strong fitting capability, especially for large-scale objects.

  8. Three-dimension reconstruction based on spatial light modulator

    Science.gov (United States)

    Deng, Xuejiao; Zhang, Nanyang; Zeng, Yanan; Yin, Shiliang; Wang, Weiyu

    2011-02-01

    Three-dimensional reconstruction, an important research direction in computer graphics, is widely used in related fields such as industrial design and manufacturing, construction, aerospace and biology. With such technology we can obtain a three-dimensional digital point cloud from two-dimensional images and then simulate the three-dimensional structure of the physical object for further study. At present, three-dimensional digital point cloud data are mainly obtained with adaptive optics systems based on a Shack-Hartmann sensor and with phase-shifting digital holography. For surface fitting, many methods are also available, such as the iterated discrete Fourier transform, convolution and image interpolation, and linear phase retrieval. The main problems encountered in three-dimensional reconstruction are the extraction of feature points and the curve-fitting arithmetic. To solve these problems we can, first of all, calculate the surface normal vector of each pixel in the light source coordinate system; these vectors are then converted to image coordinates through a coordinate transformation, which yields the expected 3D point cloud. Secondly, after de-noising and repair, feature points can be selected and fitted to obtain the fitting function of the surface topography by means of Zernike polynomials, so as to reconstruct the object's three-dimensional topography. In this paper, a new kind of three-dimensional reconstruction algorithm is proposed, with the assistance of which the topography can be estimated from its grayscale at different sample points. Moreover, simulation and experimental results prove that the new algorithm has a strong fitting capability, especially for large-scale objects.

  9. An Adaptive B-Spline Method for Low-order Image Reconstruction Problems - Final Report - 09/24/1997 - 09/24/2000

    Energy Technology Data Exchange (ETDEWEB)

    Li, Xin; Miller, Eric L.; Rappaport, Carey; Silevich, Michael

    2000-04-11

    A common problem in signal processing is to estimate the structure of an object from noisy measurements linearly related to the desired image. These problems are broadly known as inverse problems. A key feature which complicates the solution to such problems is their ill-posedness. That is, small perturbations in the data arising e.g. from noise can and do lead to severe, non-physical artifacts in the recovered image. The process of stabilizing these problems is known as regularization, of which Tikhonov regularization is one of the most common. While this approach leads to a simple linear least squares problem to solve for generating the reconstruction, it has the unfortunate side effect of producing smooth images, thereby obscuring important features such as edges. Therefore, over the past decade there has been much work in the development of edge-preserving regularizers. This technique leads to image estimates in which the important features are retained, but computationally they require the solution of a nonlinear least squares problem, a daunting task in many practical multi-dimensional applications. In this thesis we explore low-order models for reducing the complexity of the reconstruction process. Specifically, B-Splines are used to approximate the object. If a 'proper' collection of B-Splines is chosen, such that the object can be efficiently represented using a few basis functions, the dimensionality of the underlying problem will be significantly decreased. Consequently, an optimum distribution of splines needs to be determined. Here, an adaptive refining and pruning algorithm is developed to solve the problem. The refining part is based on curvature information, in which the intuition is that a relatively dense set of fine scale basis elements should cluster near regions of high curvature while a sparse collection of basis vectors is required to adequately represent the object over spatially smooth areas. The pruning part is a greedy
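
    A minimal sketch of the low-order idea, assuming a 1D toy profile and SciPy's BSpline class: the object is represented by a dozen cubic B-spline coefficients instead of hundreds of samples, and the coefficients are obtained by linear least squares. The adaptive refining and pruning of the report is not reproduced, and the knot placement below is simply uniform.

      import numpy as np
      from scipy.interpolate import BSpline

      # Toy 1D "object" sampled on a fine grid: an edge plus a smooth oscillation
      x = np.linspace(0.0, 1.0, 200)
      f = np.where(x < 0.4, 0.2, 1.0) + 0.1 * np.sin(6.0 * np.pi * x)
      y = f + 0.02 * np.random.default_rng(0).standard_normal(f.size)

      k = 3                                   # cubic splines
      n_basis = 12                            # low-order model: 12 unknowns instead of 200
      interior = np.linspace(0.0, 1.0, n_basis - k + 1)
      t = np.concatenate(([0.0] * k, interior, [1.0] * k))     # clamped knot vector

      # Design matrix: column j is the j-th B-spline basis function evaluated on the grid
      B = np.column_stack([BSpline(t, np.eye(n_basis)[j], k)(x) for j in range(n_basis)])

      coef, *_ = np.linalg.lstsq(B, y, rcond=None)             # linear least-squares fit
      print("rms misfit:", np.sqrt(np.mean((B @ coef - y) ** 2)))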

  10. Separation of radiated sound field components from waves scattered by a source under non-anechoic conditions

    DEFF Research Database (Denmark)

    Fernandez Grande, Efren; Jacobsen, Finn

    2010-01-01

    to the source. Thus the radiated free-field component is estimated simultaneously with solving the inverse problem of reconstructing the sound field near the source. The method is particularly suited to cases in which the overall contribution of reflected sound in the measurement plane is significant....

  11. Intrinsic functional brain mapping in reconstructed 4D magnetic susceptibility (χ) data space.

    Science.gov (United States)

    Chen, Zikuan; Calhoun, Vince

    2015-02-15

    By solving an inverse problem of T2*-weighted magnetic resonance imaging for a dynamic fMRI study, we reconstruct a 4D magnetic susceptibility source (χ) data space for intrinsic functional mapping. A 4D phase dataset is calculated from a 4D complex fMRI dataset. The background field and phase wrapping effect are removed by a Laplacian technique. A 3D χ source map is reconstructed from a 3D phase image by a computed inverse MRI (CIMRI) scheme. A 4D χ data space is reconstructed by repeating the 3D χ source reconstruction for each time point. A functional map is calculated by a temporal correlation between voxel signals in the 4D χ space and the timecourse of the task paradigm. With a finger-tapping experiment, we obtain two 3D functional mappings in the 4D magnitude data space and in the reconstructed 4D χ data space. We find that the χ-based functional mapping reveals co-occurrence of bidirectional responses in a 3D activation map that is different from the conventional magnitude-based mapping. The χ-based functional mapping can also be achieved by a 3D deconvolution of a phase activation map. Based on a subject experimental comparison, we show that the 4D χ tomography method could produce a similar χ activation map as obtained by the 3D deconvolution method. By removing the dipole effect and other fMRI technological contaminations, 4D χ tomography provides a 4D χ data space that allows a more direct and truthful functional mapping of brain activity. Published by Elsevier B.V.
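
    The final mapping step described above, a temporal correlation between each voxel's time course and the task paradigm, can be sketched as follows. The 4D array and the block paradigm are synthetic placeholders; the χ reconstruction itself (the CIMRI dipole inversion) is not shown, and a real study would use proper statistics rather than a fixed threshold.

      import numpy as np

      rng = np.random.default_rng(1)
      chi = rng.standard_normal((8, 8, 4, 120))        # stand-in 4D (x, y, z, t) data space

      # Assumed block-design paradigm: 20 volumes off, 20 volumes on, repeated three times
      paradigm = np.tile(np.concatenate([np.zeros(20), np.ones(20)]), 3)

      # Voxel-wise Pearson correlation with the paradigm time course
      xc = chi - chi.mean(axis=-1, keepdims=True)
      pc = paradigm - paradigm.mean()
      r = (xc @ pc) / (np.linalg.norm(xc, axis=-1) * np.linalg.norm(pc) + 1e-12)

      activation = np.abs(r) > 0.3                     # crude threshold, illustration only
      print("voxels above threshold:", int(activation.sum()))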

  12. An Approximate Cone Beam Reconstruction Algorithm for Gantry-Tilted CT Using Tangential Filtering

    Directory of Open Access Journals (Sweden)

    Ming Yan

    2006-01-01

    Full Text Available The FDK algorithm is a well-known 3D (three-dimensional) approximate algorithm for CT (computed tomography) image reconstruction and is also known to suffer from considerable artifacts when the scanning cone angle is large. Recently, it has been improved by performing the ramp filtering along the tangential direction of the X-ray source helix for dealing with the large cone angle problem. In this paper, we present an FDK-type approximate reconstruction algorithm for gantry-tilted CT imaging. The proposed method improves the image reconstruction by filtering the projection data along a proper direction which is determined by the CT parameters and the gantry tilt angle. As a result, the proposed algorithm for gantry-tilted CT reconstruction can provide more scanning flexibility in clinical CT scanning and is computationally efficient. The performance of the proposed algorithm is evaluated with the Turbell clock phantom and a thorax phantom and compared with the FDK algorithm and a popular 2D (two-dimensional) approximate algorithm. The results show that the proposed algorithm can achieve better image quality for gantry-tilted CT image reconstruction.
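
    The filtering step referred to above is, at its core, a 1D ramp filter applied to the projection data along a chosen direction. The frequency-domain sketch below shows only that generic operation on placeholder data; the geometry-dependent choice of filtering direction, the cone-beam weighting and the backprojection of the actual algorithm are all omitted.

      import numpy as np

      def ramp_filter_rows(projections):
          # Apply the 1D ramp (|frequency|) filter to each row of a projection array via the FFT.
          n = projections.shape[-1]
          ramp = np.abs(np.fft.fftfreq(n))
          return np.real(np.fft.ifft(np.fft.fft(projections, axis=-1) * ramp, axis=-1))

      sinogram = np.random.default_rng(0).random((180, 256))   # hypothetical projection data
      filtered = ramp_filter_rows(sinogram)
      print(filtered.shape)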

  13. Reconstruction of extended sources for the Helmholtz equation

    KAUST Repository

    Kress, Rainer

    2013-02-26

    The basis of most imaging methods is to detect hidden obstacles or inclusions within a body when one can only make measurements on an exterior surface. Our underlying model is that of inverse acoustic scattering based on the Helmholtz equation. Our inclusions are interior forces with compact support and our data consist of a single measurement of near-field Cauchy data on the external boundary. We propose an algorithm that under certain assumptions allows for the determination of the support set of these forces by solving a simpler 'equivalent point source' problem, and which uses a Newton scheme to improve the corresponding initial approximation. © 2013 IOP Publishing Ltd.

  14. Joint-2D-SL0 Algorithm for Joint Sparse Matrix Reconstruction

    Directory of Open Access Journals (Sweden)

    Dong Zhang

    2017-01-01

    Full Text Available Sparse matrix reconstruction has wide applications such as DOA estimation and STAP. However, its performance is usually restricted by the grid mismatch problem. In this paper, we revise the sparse matrix reconstruction model and propose a joint sparse matrix reconstruction model based on a first-order Taylor expansion, which can overcome the grid mismatch problem. Then, we put forward the Joint-2D-SL0 algorithm which can solve the joint sparse matrix reconstruction problem efficiently. Compared with the Kronecker compressive sensing method, our proposed method has a higher computational efficiency and acceptable reconstruction accuracy. Finally, simulation results validate the superiority of the proposed method.
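
    For orientation, the sketch below implements the standard vector SL0 iteration that Joint-2D-SL0 builds on, not the joint 2D variant itself: a gradient step on a smoothed-l0 surrogate followed by projection back onto the constraint set, with a geometrically decreasing smoothing parameter. The problem sizes and parameters are illustrative only.

      import numpy as np

      rng = np.random.default_rng(0)
      m, n, k = 40, 100, 5
      A = rng.standard_normal((m, n)) / np.sqrt(m)
      x_true = np.zeros(n)
      x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
      y = A @ x_true

      A_pinv = np.linalg.pinv(A)
      x = A_pinv @ y                          # minimum-l2-norm starting point
      sigma = 2.0 * np.max(np.abs(x))
      mu, L = 2.0, 10                         # step size and inner iterations (typical choices)

      while sigma > 1e-4:
          for _ in range(L):
              x = x - mu * x * np.exp(-x**2 / (2.0 * sigma**2))   # smoothed-l0 gradient step
              x = x - A_pinv @ (A @ x - y)                        # project back onto {x : A x = y}
          sigma *= 0.7

      est_support = set(np.argsort(np.abs(x))[-k:])
      print("support recovered:", est_support == set(np.flatnonzero(x_true)))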

  15. General problems associated with the control and safe use of radiation sources (199)

    International Nuclear Information System (INIS)

    Ahmed, J.U.

    1993-01-01

    There are problems at various levels in ensuring safety in the use of radiation sources. A relatively new problem that warrants international action is the smuggling of radioactive material across international borders. An international convention on the control and safe use of radiation sources is essential to provide a universally harmonized mechanism for ensuring safety

  16. Fan-beam filtered-backprojection reconstruction without backprojection weight

    International Nuclear Information System (INIS)

    Dennerlein, Frank; Noo, Frederic; Hornegger, Joachim; Lauritsch, Guenter

    2007-01-01

    In this paper, we address the problem of two-dimensional image reconstruction from fan-beam data acquired along a full 2π scan. Conventional approaches that follow the filtered-backprojection (FBP) structure require a weighted backprojection with the weight depending on the point to be reconstructed and also on the source position; this weight appears only in the case of divergent beam geometries. Compared to reconstruction from parallel-beam data, the backprojection weight implies an increase in computational effort and is also thought to have some negative impacts on noise properties of the reconstructed images. We demonstrate here that direct FBP reconstruction from full-scan fan-beam data is possible with no backprojection weight. Using computer-simulated, realistic fan-beam data, we compared our novel FBP formula with no backprojection weight to the use of an FBP formula based on equal weighting of all data. Comparisons in terms of signal-to-noise ratio, spatial resolution and computational efficiency are presented. These studies show that the formula we suggest yields images with a reduced noise level, at almost identical spatial resolution. This effect increases quickly with the distance from the center of the field of view, from 0% at the center to 20% less noise at 20 cm, and to 40% less noise at 25 cm. Furthermore, the suggested method is computationally less demanding and reduces computation time with a gain that was found to vary between 12% and 43% on the computers used for evaluation

  17. Solving ill-posed control problems by stabilized finite element methods: an alternative to Tikhonov regularization

    Science.gov (United States)

    Burman, Erik; Hansbo, Peter; Larson, Mats G.

    2018-03-01

    Tikhonov regularization is one of the most commonly used methods for the regularization of ill-posed problems. In the setting of finite element solutions of elliptic partial differential control problems, Tikhonov regularization amounts to adding suitably weighted least squares terms of the control variable, or derivatives thereof, to the Lagrangian determining the optimality system. In this note we show that the stabilization methods for discretely ill-posed problems developed in the setting of convection-dominated convection-diffusion problems, can be highly suitable for stabilizing optimal control problems, and that Tikhonov regularization will lead to less accurate discrete solutions. We consider some inverse problems for Poisson’s equation as an illustration and derive new error estimates both for the reconstruction of the solution from the measured data and reconstruction of the source term from the measured data. These estimates include both the effect of the discretization error and error in the measurements.
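
    For reference, the Tikhonov regularization discussed here amounts to augmenting the least-squares system with a weighted penalty on the unknown (or a derivative of it). A small sketch on a synthetic ill-conditioned smoothing problem follows; the operator, noise level and regularization weights are arbitrary choices, and the stabilized finite element alternative proposed in the paper is not shown.

      import numpy as np

      def tikhonov_solve(A, b, alpha, L=None):
          # Solve min_x ||A x - b||^2 + alpha^2 ||L x||^2 via an augmented least-squares system.
          n = A.shape[1]
          L = np.eye(n) if L is None else L
          A_aug = np.vstack([A, alpha * L])
          b_aug = np.concatenate([b, np.zeros(L.shape[0])])
          x, *_ = np.linalg.lstsq(A_aug, b_aug, rcond=None)
          return x

      rng = np.random.default_rng(0)
      n = 80
      A = np.array([[np.exp(-0.5 * ((i - j) / 3.0) ** 2) for j in range(n)] for i in range(n)])
      x_true = np.sin(np.linspace(0.0, 3.0 * np.pi, n))
      b = A @ x_true + 1e-3 * rng.standard_normal(n)

      for alpha in (0.0, 1e-3, 1e-1):
          err = np.linalg.norm(tikhonov_solve(A, b, alpha) - x_true) / np.linalg.norm(x_true)
          print(f"alpha={alpha:g}  relative error={err:.3f}")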

  18. On the comparison of the Spherical Wave Expansion-to-Plane Wave Expansion and the Sources Reconstruction Method for Antenna Diagnostics

    DEFF Research Database (Denmark)

    Alvarez, Yuri; Cappellin, Cecilia; Las-Heras, Fernando

    2008-01-01

    A comparison between two recently developed methods for antenna diagnostics is presented. On one hand, the Spherical Wave Expansion-to-Plane Wave Expansion (SWE-PWE), based on the relationship between spherical and planar wave modes. On the other hand, the Sources Reconstruction Method (SRM), based...

  19. AIR Tools - A MATLAB Package of Algebraic Iterative Reconstruction Techniques

    DEFF Research Database (Denmark)

    Hansen, Per Christian; Saxild-Hansen, Maria

    This collection of MATLAB software contains implementations of several Algebraic Iterative Reconstruction methods for discretizations of inverse problems. These so-called row action methods rely on semi-convergence for achieving the necessary regularization of the problem. Two classes of methods...... are implemented: Algebraic Reconstruction Techniques (ART) and Simultaneous Iterative Reconstruction Techniques (SIRT). In addition we provide a few simplified test problems from medical and seismic tomography. For each iterative method, a number of strategies are available for choosing the relaxation parameter...
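
    The row-action idea behind the ART class can be sketched in a few lines: each equation of the system is visited in turn and the current iterate is projected onto its hyperplane (a Kaczmarz sweep). This is a generic illustration on a tiny dense system, not the AIR Tools implementation, which additionally provides SIRT methods, relaxation-parameter strategies, stopping rules and tomographic test problems.

      import numpy as np

      def kaczmarz(A, b, n_sweeps=100, relax=1.0):
          # Basic ART: sweep over the rows, projecting the iterate onto each row's hyperplane.
          m, n = A.shape
          x = np.zeros(n)
          row_norm2 = np.sum(A * A, axis=1)
          for _ in range(n_sweeps):
              for i in range(m):
                  if row_norm2[i] > 0.0:
                      x += relax * (b[i] - A[i] @ x) / row_norm2[i] * A[i]
          return x

      # Tiny consistent system standing in for a discretized tomography problem
      A = np.array([[1.0, 2.0, 0.0], [0.0, 1.0, 1.0], [1.0, 0.0, 3.0]])
      x_true = np.array([1.0, -1.0, 2.0])
      print(kaczmarz(A, A @ x_true, n_sweeps=200))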

  20. Review on solving the forward problem in EEG source analysis

    Directory of Open Access Journals (Sweden)

    Vergult Anneleen

    2007-11-01

    Full Text Available Abstract Background The aim of electroencephalogram (EEG) source localization is to find the brain areas responsible for EEG waves of interest. It consists of solving forward and inverse problems. The forward problem is solved by starting from a given electrical source and calculating the potentials at the electrodes. These evaluations are necessary to solve the inverse problem which is defined as finding brain sources which are responsible for the measured potentials at the EEG electrodes. Methods While other reviews give an extensive summary of both the forward and inverse problems, this review article focuses on different aspects of solving the forward problem and it is intended for newcomers in this research field. Results It starts with focusing on the generators of the EEG: the post-synaptic potentials in the apical dendrites of pyramidal neurons. These cells generate an extracellular current which can be modeled by Poisson's differential equation, and Neumann and Dirichlet boundary conditions. The compartments in which these currents flow can be anisotropic (e.g. skull and white matter). In a three-shell spherical head model an analytical expression exists to solve the forward problem. During the last two decades researchers have tried to solve Poisson's equation in a realistically shaped head model obtained from 3D medical images, which requires numerical methods. The following methods are compared with each other: the boundary element method (BEM), the finite element method (FEM) and the finite difference method (FDM). In the last two methods anisotropic conducting compartments can conveniently be introduced. Then the focus will be set on the use of reciprocity in EEG source localization. It is introduced to speed up the forward calculations which are here performed for each electrode position rather than for each dipole position. Solving Poisson's equation utilizing FEM and FDM corresponds to solving a large sparse linear system. Iterative
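
    As a toy illustration of the finite difference method mentioned in the review, a 2D Poisson problem with a current dipole, modelled as a neighbouring source/sink pair, can be assembled and solved as a sparse linear system. A grounded (Dirichlet) boundary and a homogeneous isotropic conductivity are assumed purely for brevity; a realistic head model needs no-flux boundary conditions, tissue-dependent and possibly anisotropic conductivities, and three dimensions.

      import numpy as np
      from scipy.sparse import lil_matrix
      from scipy.sparse.linalg import spsolve

      N, h, sigma = 41, 1.0, 1.0                  # grid size, spacing, conductivity
      A = lil_matrix((N * N, N * N))
      b = np.zeros(N * N)
      idx = lambda i, j: i * N + j

      for i in range(N):
          for j in range(N):
              k = idx(i, j)
              if i in (0, N - 1) or j in (0, N - 1):
                  A[k, k] = 1.0                   # grounded boundary (for brevity only)
              else:
                  A[k, k] = 4.0 * sigma / h**2    # 5-point stencil for -sigma * laplacian(phi) = s
                  for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                      A[k, idx(i + di, j + dj)] = -sigma / h**2

      b[idx(20, 19)] = +1.0                       # current dipole: source and sink at
      b[idx(20, 21)] = -1.0                       # two neighbouring interior nodes

      phi = spsolve(A.tocsr(), b).reshape(N, N)
      print("potential range:", phi.min(), phi.max())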

  1. Haplotyping Problem, A Clustering Approach

    International Nuclear Information System (INIS)

    Eslahchi, Changiz; Sadeghi, Mehdi; Pezeshk, Hamid; Kargar, Mehdi; Poormohammadi, Hadi

    2007-01-01

    Construction of two haplotypes from a set of Single Nucleotide Polymorphism (SNP) fragments is called the haplotype reconstruction problem. One of the most popular computational models for this problem is Minimum Error Correction (MEC). Since MEC is an NP-hard problem, here we propose a novel heuristic algorithm based on clustering analysis in data mining for the haplotype reconstruction problem. Based on the Hamming distance and similarity between two fragments, our iterative algorithm produces two clusters of fragments; then, in each iteration, the algorithm assigns a fragment to one of the clusters. Our results suggest that the algorithm has a lower reconstruction error rate in comparison with other algorithms
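
    A minimal sketch of the clustering idea, assuming fragments coded as 0/1 with -1 marking uncovered sites: fragments are alternately assigned to the nearer of two candidate haplotypes using a gap-aware Hamming distance, and each haplotype is rebuilt by a per-site majority vote over its cluster. The toy fragment matrix is invented, and the published algorithm's iteration scheme differs in its details.

      import numpy as np

      def hamming(frag, hap):
          # Count mismatches only at sites the fragment actually covers (-1 marks a gap).
          mask = frag >= 0
          return int(np.sum(frag[mask] != hap[mask]))

      def reconstruct_haplotypes(fragments, n_iter=20, seed=0):
          rng = np.random.default_rng(seed)
          n_sites = fragments.shape[1]
          haps = rng.integers(0, 2, size=(2, n_sites))      # random initial haplotype pair
          for _ in range(n_iter):
              labels = np.array([np.argmin([hamming(f, haps[0]), hamming(f, haps[1])])
                                 for f in fragments])        # assign fragments to clusters
              for c in (0, 1):                               # rebuild haplotypes by majority vote
                  members = fragments[labels == c]
                  for s in range(n_sites):
                      obs = members[:, s][members[:, s] >= 0]
                      if obs.size:
                          haps[c, s] = int(round(obs.mean()))
          return haps

      # Toy SNP fragment matrix: rows are fragments, -1 means the site is not covered
      frags = np.array([[0, 1, 1, -1], [0, 1, -1, 0], [1, 0, 0, 1], [-1, 0, 0, 1]])
      print(reconstruct_haplotypes(frags))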

  2. A neural network image reconstruction technique for electrical impedance tomography

    International Nuclear Information System (INIS)

    Adler, A.; Guardo, R.

    1994-01-01

    Reconstruction of Images in Electrical Impedance Tomography requires the solution of a nonlinear inverse problem on noisy data. This problem is typically ill-conditioned and requires either simplifying assumptions or regularization based on a priori knowledge. This paper presents a reconstruction algorithm using neural network techniques which calculates a linear approximation of the inverse problem directly from finite element simulations of the forward problem. This inverse is adapted to the geometry of the medium and the signal-to-noise ratio (SNR) used during network training. Results show good conductivity reconstruction where measurement SNR is similar to the training conditions. The advantages of this method are its conceptual simplicity and ease of implementation, and the ability to control the compromise between the noise performance and resolution of the image reconstruction
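
    The gist of the approach, learning a linear reconstruction operator directly from simulated forward data, can be sketched as a least-squares regression, i.e. a single linear layer without nonlinearity. The forward matrix below is a random stand-in rather than a finite element model of a real electrode geometry, and the sizes and noise level are arbitrary.

      import numpy as np

      rng = np.random.default_rng(0)
      n_meas, n_pix, n_train = 100, 64, 5000          # stand-in problem sizes
      F = rng.standard_normal((n_meas, n_pix))        # placeholder for a linearized forward model

      # Training set: random conductivity perturbations and simulated noisy measurements
      X = rng.standard_normal((n_train, n_pix))
      Y = X @ F.T + 0.05 * rng.standard_normal((n_train, n_meas))

      # "Network" trained by least squares: reconstruction matrix W with x_hat = y @ W
      W, *_ = np.linalg.lstsq(Y, X, rcond=None)

      x_test = rng.standard_normal(n_pix)
      y_test = F @ x_test + 0.05 * rng.standard_normal(n_meas)
      x_hat = y_test @ W
      print("correlation with truth:", np.corrcoef(x_hat, x_test)[0, 1])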

  3. A New Method for Coronal Magnetic Field Reconstruction

    Science.gov (United States)

    Yi, Sibaek; Choe, Gwang-Son; Cho, Kyung-Suk; Kim, Kap-Sung

    2017-08-01

    A precise method of coronal magnetic field reconstruction (extrapolation) is an indispensable tool for understanding various solar activities. A variety of reconstruction codes have been developed so far and are available to researchers nowadays, but each of them has its own shortcomings. In this paper, a new efficient method for coronal magnetic field reconstruction is presented. The method imposes only the normal components of magnetic field and current density at the bottom boundary to avoid the overspecification of the reconstruction problem, and employs vector potentials to guarantee the divergence-freeness. In our method, the normal component of current density is imposed, not by adjusting the tangential components of A, but by adjusting its normal component. This allows us to avoid a possible numerical instability that occasionally arises in codes using A. In real reconstruction problems, the information for the lateral and top boundaries is absent. The arbitrariness of the boundary conditions imposed there as well as various preprocessing brings about the diversity of resulting solutions. We impose the source surface condition at the top boundary to accommodate flux imbalance, which always shows up in magnetograms. To enhance the convergence rate, we equip our code with a gradient-method type accelerator. Our code is tested on two analytical force-free solutions. When the solution is given only at the bottom boundary, our result surpasses competitors in most figures of merit devised by Schrijver et al. (2006). We have also applied our code to a real active region NOAA 11974, in which two M-class flares and a halo CME took place. The EUV observation shows a sudden appearance of an erupting loop before the first flare. Our numerical solutions show that two entwining flux tubes exist before the flare and their entanglement is released after the CME, with one of them opened up. We suggest that the erupting loop is created by magnetic reconnection between

  4. Three-dimension reconstruction based on spatial light modulator

    Energy Technology Data Exchange (ETDEWEB)

    Deng Xuejiao; Zhang Nanyang; Zeng Yanan; Yin Shiliang; Wang Weiyu, E-mail: daisydelring@yahoo.com.cn [Huazhong University of Science and Technology (China)

    2011-02-01

    Three-dimensional reconstruction, an important research direction in computer graphics, is widely used in related fields such as industrial design and manufacturing, construction, aerospace and biology. With this technology a three-dimensional digital point cloud can be obtained from a two-dimensional image, and the three-dimensional structure of the physical object can then be simulated for further study. At present, the acquisition of three-dimensional digital point cloud data is mainly based on adaptive optics systems with a Shack-Hartmann sensor and on phase-shifting digital holography. For surface fitting there are also many available methods, such as the iterated discrete Fourier transform, convolution and image interpolation, and linear phase retrieval. The main problems encountered in three-dimensional reconstruction are the extraction of feature points and the curve-fitting arithmetic. To solve these problems, we first calculate the surface normal vector of each pixel in the light source coordinate system; these vectors are then converted to image coordinates through a coordinate transformation, which yields the expected 3D point cloud. Secondly, after de-noising and repair, feature points are selected and fitted by means of Zernike polynomials to obtain the fitting function of the surface topography, so as to reconstruct the three-dimensional topography of the measured object. In this paper a new kind of three-dimensional reconstruction algorithm is proposed, with the assistance of which the topography can be estimated from its grayscale at different sample points. Moreover, simulations and experimental results prove that the new algorithm has a strong fitting capability, especially for large-scale objects.

  5. Crowd Sourcing for Challenging Technical Problems and Business Model

    Science.gov (United States)

    Davis, Jeffrey R.; Richard, Elizabeth

    2011-01-01

    Crowd sourcing may be defined as the act of outsourcing tasks that are traditionally performed by an employee or contractor to an undefined, generally large group of people or community (a crowd) in the form of an open call. The open call may be issued by an organization wishing to find a solution to a particular problem or complete a task, or by an open innovation service provider on behalf of that organization. In 2008, the Space Life Sciences Directorate (SLSD), with the support of Wyle Integrated Science and Engineering, established and implemented pilot projects in open innovation (crowd sourcing) to determine if these new internet-based platforms could indeed find solutions to difficult technical challenges. These unsolved technical problems were converted to problem statements, also called "Challenges" or "Technical Needs" by the various open innovation service providers, and were then posted externally to seek solutions. In addition, an open call was issued internally to NASA employees Agency-wide (10 Field Centers and NASA HQ) using an open innovation service provider crowd sourcing platform to post NASA challenges from each Center for the others to propose solutions. From 2008 to 2010, the SLSD issued 34 challenges, 14 externally and 20 internally. The 14 external problems or challenges were posted through three different vendors: InnoCentive, Yet2.com and TopCoder. The 20 internal challenges were conducted using the InnoCentive crowd sourcing platform designed for internal use by an organization. This platform was customized for NASA use and promoted as NASA@Work. The results were significant. Of the seven InnoCentive external challenges, two full and five partial awards were made in complex technical areas such as predicting solar flares and long-duration food packaging. Similarly, the TopCoder challenge yielded an optimization algorithm for designing a lunar medical kit. The Yet2.com challenges yielded many new industry and academic contacts in bone

  6. Adaptive multiresolution method for MAP reconstruction in electron tomography

    Energy Technology Data Exchange (ETDEWEB)

    Acar, Erman, E-mail: erman.acar@tut.fi [Department of Signal Processing, Tampere University of Technology, P.O. Box 553, FI-33101 Tampere (Finland); BioMediTech, Tampere University of Technology, Biokatu 10, 33520 Tampere (Finland); Peltonen, Sari; Ruotsalainen, Ulla [Department of Signal Processing, Tampere University of Technology, P.O. Box 553, FI-33101 Tampere (Finland); BioMediTech, Tampere University of Technology, Biokatu 10, 33520 Tampere (Finland)

    2016-11-15

    3D image reconstruction with electron tomography poses problems due to the severely limited range of projection angles and low signal to noise ratio of the acquired projection images. The maximum a posteriori (MAP) reconstruction methods have been successful in compensating for the missing information and suppressing noise with their intrinsic regularization techniques. There are two major problems in MAP reconstruction methods: (1) selection of the regularization parameter that controls the balance between the data fidelity and the prior information, and (2) long computation time. One aim of this study is to provide an adaptive solution to the regularization parameter selection problem without having additional knowledge about the imaging environment and the sample. The other aim is to realize the reconstruction using sequences of resolution levels to shorten the computation time. The reconstructions were analyzed in terms of accuracy and computational efficiency using a simulated biological phantom and publicly available experimental datasets of electron tomography. The numerical and visual evaluations of the experiments show that the adaptive multiresolution method can provide more accurate results than the weighted back projection (WBP), simultaneous iterative reconstruction technique (SIRT), and sequential MAP expectation maximization (sMAPEM) method. The method is superior to sMAPEM also in terms of computation time and usability since it can reconstruct 3D images significantly faster without requiring any parameter to be set by the user. - Highlights: • An adaptive multiresolution reconstruction method is introduced for electron tomography. • The method provides more accurate results than the conventional reconstruction methods. • The missing wedge and noise problems can be compensated by the method efficiently.

  7. Algorithms for biomagnetic source imaging with prior anatomical and physiological information

    Energy Technology Data Exchange (ETDEWEB)

    Hughett, Paul William [Univ. of California, Berkeley, CA (United States). Dept. of Electrical Engineering and Computer Sciences

    1995-12-01

    This dissertation derives a new method for estimating current source amplitudes in the brain and heart from external magnetic field measurements and prior knowledge about the probable source positions and amplitudes. The minimum mean square error estimator for the linear inverse problem with statistical prior information was derived and is called the optimal constrained linear inverse method (OCLIM). OCLIM includes as special cases the Shim-Cho weighted pseudoinverse and Wiener estimators but allows more general priors and thus reduces the reconstruction error. Efficient algorithms were developed to compute the OCLIM estimate for instantaneous or time series data. The method was tested in a simulated neuromagnetic imaging problem with five simultaneously active sources on a grid of 387 possible source locations; all five sources were resolved, even though the true sources were not exactly at the modeled source positions and the true source statistics differed from the assumed statistics.
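
    A linear minimum-mean-square-error estimator of the kind described, i.e. a Wiener-type estimate with a zero-mean Gaussian prior on the source amplitudes, can be written in essentially one line. The lead-field matrix, prior covariance and noise covariance below are synthetic placeholders; OCLIM's specific anatomical and physiological priors and its time-series algorithms are not reproduced.

      import numpy as np

      def mmse_estimate(L, b, C_x, C_n):
          # MMSE (Wiener) estimate of x from b = L x + n, with prior cov C_x and noise cov C_n.
          G = C_x @ L.T @ np.linalg.inv(L @ C_x @ L.T + C_n)
          return G @ b

      rng = np.random.default_rng(0)
      n_sens, n_src = 64, 387
      L = rng.standard_normal((n_sens, n_src))          # stand-in lead-field matrix
      C_x = np.diag(rng.uniform(0.1, 1.0, n_src))       # prior: expected source power per location
      C_n = 0.01 * np.eye(n_sens)

      x_true = np.sqrt(np.diag(C_x)) * rng.standard_normal(n_src)
      b = L @ x_true + 0.1 * rng.standard_normal(n_sens)
      x_hat = mmse_estimate(L, b, C_x, C_n)
      print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))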

  8. On rational approximation methods for inverse source problems

    KAUST Repository

    Rundell, William

    2011-02-01

    The basis of most imaging methods is to detect hidden obstacles or inclusions within a body when one can only make measurements on an exterior surface. Such is the ubiquity of these problems, the underlying model can lead to a partial differential equation of any of the major types, but here we focus on the case of steady-state electrostatic or thermal imaging and consider boundary value problems for Laplace's equation. Our inclusions are interior forces with compact support and our data consists of a single measurement of (say) voltage/current or temperature/heat flux on the external boundary. We propose an algorithm that under certain assumptions allows for the determination of the support set of these forces by solving a simpler "equivalent point source" problem, and which uses a Newton scheme to improve the corresponding initial approximation. © 2011 American Institute of Mathematical Sciences.

  9. On rational approximation methods for inverse source problems

    KAUST Repository

    Rundell, William; Hanke, Martin

    2011-01-01

    The basis of most imaging methods is to detect hidden obstacles or inclusions within a body when one can only make measurements on an exterior surface. Such is the ubiquity of these problems, the underlying model can lead to a partial differential equation of any of the major types, but here we focus on the case of steady-state electrostatic or thermal imaging and consider boundary value problems for Laplace's equation. Our inclusions are interior forces with compact support and our data consists of a single measurement of (say) voltage/current or temperature/heat flux on the external boundary. We propose an algorithm that under certain assumptions allows for the determination of the support set of these forces by solving a simpler "equivalent point source" problem, and which uses a Newton scheme to improve the corresponding initial approximation. © 2011 American Institute of Mathematical Sciences.

  10. Direct and inverse source problems for a space fractional advection dispersion equation

    KAUST Repository

    Aldoghaither, Abeer; Laleg-Kirati, Taous-Meriem; Liu, Da Yan

    2016-01-01

    In this paper, direct and inverse problems for a space fractional advection dispersion equation on a finite domain are studied. The inverse problem consists in determining the source term from final observations. We first derive the analytic

  11. An artificial neural network approach to reconstruct the source term of a nuclear accident

    International Nuclear Information System (INIS)

    Giles, J.; Palma, C. R.; Weller, P.

    1997-01-01

    This work makes use of one of the main features of artificial neural networks, which is their ability to 'learn' from sets of known input and output data. Indeed, a trained artificial neural network can be used to make predictions on the input data when the output is known, and this feedback process enables one to reconstruct the source term from field observations. With this aim, an artificial neural network has been trained, using the projections of a segmented plume atmospheric dispersion model at fixed points, simulating a set of gamma detectors located outside the perimeter of a nuclear facility. The resulting set of artificial neural networks was used to determine the release fraction and rate for each of the noble gases, iodines and particulate fission products that could originate from a nuclear accident. Model projections were made using a large data set consisting of effective release height, release fraction of noble gases, iodines and particulate fission products, atmospheric stability, wind speed and wind direction. The model computed nuclide-specific gamma dose rates. The locations of the detectors were chosen taking into account both building shine and wake effects, and varied in distance between 800 and 1200 m from the reactor. The inputs to the artificial neural networks consisted of the measurements from the detector array, atmospheric stability, wind speed and wind direction; the outputs comprised a set of release fractions and heights. Once trained, the artificial neural networks were used to reconstruct the source term from the detector responses for data sets not used in training. The preliminary results are encouraging and show that the noble gases and particulate fission product release fractions are well determined

  12. The impact of heart rate on image quality and reconstruction timing of dual-source CT coronary angiography

    International Nuclear Information System (INIS)

    Wang Yining; Jin Zhengyu; Kong Lingyan; Zhang Zhuhua; Song Lan; Mu Wenbin; Wang Yun; Zhao Wenmin; Zhang Shuyang; Lin Songbai

    2008-01-01

    Objective: To evaluate the impact of the patient's heart rate (HR) on coronary CT angiography (CTA) image quality (IQ) and reconstruction timing in dual-source CT (DSCT). Methods: Ninety-five patients with suspicion of coronary artery disease were examined with a DSCT scanner (Somatom Definition, Siemens) using 32 x 0.6 mm collimation. All patients were divided into three groups according to the heart rate (HR): group 1, HR ≤ 70 beats per minute (bpm), n=26; group 2, HR >70 bpm to ≤90 bpm, n=37; group 3, HR > 90 bpm, n=32. No beta-blockers were taken before the CT scan. 50-60 ml of nonionic contrast agent were injected at a rate of 5 ml/s. Images were reconstructed from 10% to 100% of the R-R interval using single-segment reconstruction. Two readers independently assessed the IQ of all coronary segments using a 3-point scale from excellent (1) to non-assessable (3) and the relationship between IQ and HR. Results: Overall mean IQ score was 1.31 ± 0.55 for all patients, with 1.08 ± 0.27 for group 1, 1.32 ± 0.58 for group 2 and 1.47 ± 0.61 for group 3. The IQ was better in the LAD than the RCA and LCX (P<0.01). Only 1.4% (19/1386) of coronary artery segments were considered non-assessable due to motion artifacts. Optimal image quality of all coronary segments in 74 patients (77.9%) could be achieved with one reconstruction data set. The best IQ was predominantly in diastole (88.5%) in group 1, while the best IQ was in systole (84.4%) in group 3. Conclusions: DSCT can achieve optimal IQ over a wide range of HR using single-segment reconstruction. With increasing HR, the timing of data reconstruction for the best IQ shifts from mid-diastole to systole. (authors)

  13. Reconstruction and visualization of nanoparticle composites by transmission electron tomography

    Energy Technology Data Exchange (ETDEWEB)

    Wang, X.Y. [National Institute for Nanotechnology, 11421 Saskatchewan Drive, Edmonton, Canada T6H 2M9 (Canada); Department of Physics, University of Alberta, Edmonton, Canada T6G 2G7 (Canada); Lockwood, R. [National Institute for Nanotechnology, 11421 Saskatchewan Drive, Edmonton, Canada T6H 2M9 (Canada); Malac, M., E-mail: marek.malac@nrc-cnrc.gc.ca [National Institute for Nanotechnology, 11421 Saskatchewan Drive, Edmonton, Canada T6H 2M9 (Canada); Department of Physics, University of Alberta, Edmonton, Canada T6G 2G7 (Canada); Furukawa, H. [SYSTEM IN FRONTIER INC., 2-8-3, Shinsuzuharu bldg. 4F, Akebono-cho, Tachikawa-shi, Tokyo 190-0012 (Japan); Li, P.; Meldrum, A. [National Institute for Nanotechnology, 11421 Saskatchewan Drive, Edmonton, Canada T6H 2M9 (Canada)

    2012-02-15

    This paper examines the limits of transmission electron tomography reconstruction methods for a nanocomposite object composed of many closely packed nanoparticles. Two commonly used reconstruction methods in TEM tomography were examined and compared, and the sources of various artefacts were explored. Common visualization methods were investigated, and the resulting 'interpretation artefacts' (i.e., deviations from 'actual' particle sizes and shapes arising from the visualization) were determined. Setting a known or estimated nanoparticle volume fraction as a criterion for thresholding does not in fact give a good visualization. Unexpected effects associated with common built-in image filtering methods were also found. Ultimately, this work set out to establish the common problems and pitfalls associated with electron beam tomographic reconstruction and visualization of samples consisting of closely spaced nanoparticles. - Highlights: • Electron tomography limits were explored by both experiment and simulation. • Reliable quantitative volumetry using electron tomography is not presently feasible. • Volume rendering appears to be a better choice for visualization of composite samples.

  14. Progress toward the development and testing of source reconstruction methods for NIF neutron imaging.

    Science.gov (United States)

    Loomis, E N; Grim, G P; Wilde, C; Wilson, D C; Morgan, G; Wilke, M; Tregillis, I; Merrill, F; Clark, D; Finch, J; Fittinghoff, D; Bower, D

    2010-10-01

    Development of analysis techniques for neutron imaging at the National Ignition Facility is an important and difficult task for the detailed understanding of high-neutron yield inertial confinement fusion implosions. Once developed, these methods must provide accurate images of the hot and cold fuels so that information about the implosion, such as symmetry and areal density, can be extracted. One method under development involves the numerical inversion of the pinhole image using knowledge of neutron transport through the pinhole aperture from Monte Carlo simulations. In this article we present results of source reconstructions based on simulated images that test the method's effectiveness with regard to pinhole misalignment.

  15. The Iterative Reweighted Mixed-Norm Estimate for Spatio-Temporal MEG/EEG Source Reconstruction.

    Science.gov (United States)

    Strohmeier, Daniel; Bekhti, Yousra; Haueisen, Jens; Gramfort, Alexandre

    2016-10-01

    Source imaging based on magnetoencephalography (MEG) and electroencephalography (EEG) allows for the non-invasive analysis of brain activity with high temporal and good spatial resolution. As the bioelectromagnetic inverse problem is ill-posed, constraints are required. For the analysis of evoked brain activity, spatial sparsity of the neuronal activation is a common assumption. It is often taken into account using convex constraints based on the l1-norm. The resulting source estimates are however biased in amplitude and often suboptimal in terms of source selection due to high correlations in the forward model. In this work, we demonstrate that an inverse solver based on a block-separable penalty with a Frobenius norm per block and an l0.5-quasinorm over blocks addresses both of these issues. For solving the resulting non-convex optimization problem, we propose the iterative reweighted Mixed Norm Estimate (irMxNE), an optimization scheme based on iterative reweighted convex surrogate optimization problems, which are solved efficiently using a block coordinate descent scheme and an active set strategy. We compare the proposed sparse imaging method to the dSPM and the RAP-MUSIC approach based on two MEG data sets. We provide empirical evidence based on simulations and analysis of MEG data that the proposed method improves on the standard Mixed Norm Estimate (MxNE) in terms of amplitude bias, support recovery, and stability.

  16. Low dose reconstruction algorithm for differential phase contrast imaging.

    Science.gov (United States)

    Wang, Zhentian; Huang, Zhifeng; Zhang, Li; Chen, Zhiqiang; Kang, Kejun; Yin, Hongxia; Wang, Zhenchang; Stampanoni, Marco

    2011-01-01

    Differential phase contrast imaging computed tomography (DPCI-CT) is a novel x-ray inspection method to reconstruct the distribution of refraction index rather than the attenuation coefficient in weakly absorbing samples. In this paper, we propose an iterative reconstruction algorithm for DPCI-CT which benefits from the new compressed sensing theory. We first realize a differential algebraic reconstruction technique (DART) by discretizing the projection process of the differential phase contrast imaging into a linear partial derivative matrix. In this way the compressed sensing reconstruction problem of DPCI reconstruction can be transformed to a resolved problem in the transmission imaging CT. Our algorithm has the potential to reconstruct the refraction index distribution of the sample from highly undersampled projection data. Thus it can significantly reduce the dose and inspection time. The proposed algorithm has been validated by numerical simulations and actual experiments.

  17. Generalized Fourier slice theorem for cone-beam image reconstruction.

    Science.gov (United States)

    Zhao, Shuang-Ren; Jiang, Dazong; Yang, Kevin; Yang, Kang

    2015-01-01

    The cone-beam reconstruction theory has been proposed by Kirillov in 1961, Tuy in 1983, Feldkamp in 1984, Smith in 1985, and Pierre Grangeat in 1990. The Fourier slice theorem was proposed by Bracewell in 1956, and it leads to the Fourier image reconstruction method for parallel-beam geometry. The Fourier slice theorem was extended to fan-beam geometry by Zhao in 1993 and 1995. By combining the above-mentioned cone-beam image reconstruction theory and the Fourier slice theory for fan-beam geometry, the Fourier slice theorem in cone-beam geometry was proposed by Zhao in 1995 in a short conference publication. This article offers the details of the derivation and implementation of this Fourier slice theorem for cone-beam geometry. In particular, the problem of reconstruction from the Fourier domain has been overcome, namely that the value at the origin of Fourier space is of the indeterminate form 0/0. This 0/0 type of limit is properly handled. As examples, the implementation results for the single-circle and two-perpendicular-circle source orbits are shown. In the cone-beam reconstruction, if an interpolation process is used, the number of calculations for the generalized Fourier slice theorem algorithm is O(N^4), which is close to the filtered back-projection method, where N is the one-dimensional image size. However, the interpolation process can be avoided, in which case the number of calculations is O(N^5).
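
    The parallel-beam Fourier slice theorem that this line of work generalizes is easy to check numerically: the 1D FFT of a projection equals the corresponding central line of the object's 2D FFT. The snippet below verifies the zero-angle case on a toy phantom; the fan-beam and cone-beam extensions of the article are not attempted here.

      import numpy as np

      img = np.zeros((64, 64))
      img[20:44, 28:36] = 1.0                       # simple block phantom

      proj = img.sum(axis=0)                        # parallel projection along the vertical axis
      slice_1d = np.fft.fft(proj)                   # 1D FFT of the projection
      slice_2d = np.fft.fft2(img)[0, :]             # central horizontal line of the 2D spectrum

      print("max |difference|:", np.max(np.abs(slice_1d - slice_2d)))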

  18. Public opinion confronted by the safety problems associated with different energy source

    Energy Technology Data Exchange (ETDEWEB)

    Otway, H J; Thomas, K

    1978-09-01

    Model study of public opinion 'for' and 'against' the various energy sources - oil, coal, solar and nuclear power. Attitudes are examined from four aspects: psychology - economic advantages, sociopolitical problems, environmental problems and safety. The investigation focuses on nuclear energy. (13 refs.) (In French)

  19. A sparsity-based iterative algorithm for reconstruction of micro-CT images from highly undersampled projection datasets obtained with a synchrotron X-ray source

    Science.gov (United States)

    Melli, S. Ali; Wahid, Khan A.; Babyn, Paul; Cooper, David M. L.; Gopi, Varun P.

    2016-12-01

    Synchrotron X-ray Micro Computed Tomography (Micro-CT) is an imaging technique which is increasingly used for non-invasive in vivo preclinical imaging. However, it often requires a large number of projections from many different angles to reconstruct high-quality images leading to significantly high radiation doses and long scan times. To utilize this imaging technique further for in vivo imaging, we need to design reconstruction algorithms that reduce the radiation dose and scan time without reduction of reconstructed image quality. This research is focused on using a combination of gradient-based Douglas-Rachford splitting and discrete wavelet packet shrinkage image denoising methods to design an algorithm for reconstruction of large-scale reduced-view synchrotron Micro-CT images with acceptable quality metrics. These quality metrics are computed by comparing the reconstructed images with a high-dose reference image reconstructed from 1800 equally spaced projections spanning 180°. Visual and quantitative-based performance assessment of a synthetic head phantom and a femoral cortical bone sample imaged in the biomedical imaging and therapy bending magnet beamline at the Canadian Light Source demonstrates that the proposed algorithm is superior to the existing reconstruction algorithms. Using the proposed reconstruction algorithm to reduce the number of projections in synchrotron Micro-CT is an effective way to reduce the overall radiation dose and scan time which improves in vivo imaging protocols.

  20. On an inverse source problem for enhanced oil recovery by wave motion maximization in reservoirs

    KAUST Repository

    Karve, Pranav M.; Kucukcoban, Sezgin; Kallivokas, Loukas F.

    2014-01-01

    to increase the mobility of otherwise entrapped oil. The goal is to arrive at the spatial and temporal description of surface sources that are capable of maximizing mobility in the target reservoir. The focusing problem is posed as an inverse source problem

  1. Greedy algorithms for diffuse optical tomography reconstruction

    Science.gov (United States)

    Dileep, B. P. V.; Das, Tapan; Dutta, Pranab K.

    2018-03-01

    Diffuse optical tomography (DOT) is a noninvasive imaging modality that reconstructs the optical parameters of a highly scattering medium. However, the inverse problem of DOT is ill-posed and highly nonlinear due to the zig-zag propagation of photons that diffuse through the cross section of tissue. The conventional DOT imaging methods iteratively compute the solution of a forward diffusion equation solver, which makes the problem computationally expensive. Also, these methods fail when the geometry is complex. Recently, the theory of compressive sensing (CS) has received considerable attention because of its efficient use in biomedical imaging applications. The objective of this paper is to solve a given DOT inverse problem by using the compressive sensing framework, and various greedy algorithms such as orthogonal matching pursuit (OMP), compressive sampling matching pursuit (CoSaMP), stagewise orthogonal matching pursuit (StOMP), regularized orthogonal matching pursuit (ROMP) and simultaneous orthogonal matching pursuit (S-OMP) have been studied to reconstruct the change in the absorption parameter, i.e., Δα, from the boundary data. Also, the greedy algorithms have been validated experimentally on a paraffin wax rectangular phantom through a well designed experimental set up. We have also studied the conventional DOT methods, such as the least squares method and truncated singular value decomposition (TSVD), for comparison. One of the main features of this work is the use of a small number of source-detector pairs, which can facilitate the use of DOT in routine screening applications. Performance metrics such as mean square error (MSE), normalized mean square error (NMSE), structural similarity index (SSIM), and peak signal to noise ratio (PSNR) have been used to evaluate the performance of the algorithms mentioned in this paper. Extensive simulation results confirm that CS based DOT reconstruction outperforms the conventional DOT imaging methods in terms of
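
    Of the greedy algorithms listed, OMP is the simplest to sketch: at each step the column most correlated with the residual joins the support, and the coefficients on the support are refit by least squares. The sensing matrix and sparse vector below are synthetic; in the DOT setting the matrix would come from a linearized forward model and the unknown would be the absorption change Δα.

      import numpy as np

      def omp(A, y, k):
          # Orthogonal matching pursuit: greedily select k columns of A, refitting by least squares.
          residual, support = y.copy(), []
          x = np.zeros(A.shape[1])
          for _ in range(k):
              support.append(int(np.argmax(np.abs(A.T @ residual))))
              coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
              residual = y - A[:, support] @ coef
          x[support] = coef
          return x

      rng = np.random.default_rng(0)
      m, n, k = 30, 100, 4
      A = rng.standard_normal((m, n))
      A /= np.linalg.norm(A, axis=0)                # unit-norm columns
      x_true = np.zeros(n)
      x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)

      x_hat = omp(A, A @ x_true, k)
      print("support match:", set(np.flatnonzero(x_hat)) == set(np.flatnonzero(x_true)))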

  2. Point sources and multipoles in inverse scattering theory

    CERN Document Server

    Potthast, Roland

    2001-01-01

    Over the last twenty years, the growing availability of computing power has had an enormous impact on the classical fields of direct and inverse scattering. The study of inverse scattering, in particular, has developed rapidly with the ability to perform computational simulations of scattering processes and led to remarkable advances in a range of applications, from medical imaging and radar to remote sensing and seismic exploration. Point Sources and Multipoles in Inverse Scattering Theory provides a survey of recent developments in inverse acoustic and electromagnetic scattering theory. Focusing on methods developed over the last six years by Colton, Kirsch, and the author, this treatment uses point sources combined with several far-reaching techniques to obtain qualitative reconstruction methods. The author addresses questions of uniqueness, stability, and reconstructions for both two-and three-dimensional problems.With interest in extracting information about an object through scattered waves at an all-ti...

  3. Reconstruction of driving forces through recurrence plots

    International Nuclear Information System (INIS)

    Tanio, Masaaki; Hirata, Yoshito; Suzuki, Hideyuki

    2009-01-01

    We consider the problem of reconstructing one-dimensional driving forces only from the observations of driven systems. We extend the approach presented in a seminal paper [M.C. Casdagli, Physica D 108 (1997) 12] and propose a method that is robust and has wider applicability. By reinterpreting the work of Thiel et al. [M. Thiel, M.C. Romano, J. Kurths, Phys. Lett. A 330 (2004) 343], we formulate the reconstruction problem as a combinatorial optimization problem and relax conditions by assuming that a driving force is continuous. The method is demonstrated by using a tent map driven by an external force.

  4. 3-D Reconstruction From Satellite Images

    DEFF Research Database (Denmark)

    Denver, Troelz

    1999-01-01

    of planetary surfaces, but other purposes is considered as well. The system performance is measured with respect to the precision and the time consumption.The reconstruction process is divided into four major areas: Acquisition, calibration, matching/reconstruction and presentation. Each of these areas...... are treated individually. A detailed treatment of various lens distortions is required, in order to correct for these problems. This subject is included in the acquisition part. In the calibration part, the perspective distortion is removed from the images. Most attention has been paid to the matching problem...

  5. INLINING 3D RECONSTRUCTION, MULTI-SOURCE TEXTURE MAPPING AND SEMANTIC ANALYSIS USING OBLIQUE AERIAL IMAGERY

    Directory of Open Access Journals (Sweden)

    D. Frommholz

    2016-06-01

    Full Text Available This paper proposes an in-line method for the simplified reconstruction of city buildings from nadir and oblique aerial images that at the same time are being used for multi-source texture mapping with minimal resampling. Further, the resulting unrectified texture atlases are analyzed for façade elements like windows to be reintegrated into the original 3D models. Tests on real-world data of Heligoland/Germany comprising more than 800 buildings exposed a median positional deviation of 0.31 m at the façades compared to the cadastral map, a correctness of 67% for the detected windows and good visual quality when being rendered with GPU-based perspective correction. As part of the process building reconstruction takes the oriented input images and transforms them into dense point clouds by semi-global matching (SGM). The point sets undergo local RANSAC-based regression and topology analysis to detect adjacent planar surfaces and determine their semantics. Based on this information the roof, wall and ground surfaces found get intersected and limited in their extension to form a closed 3D building hull. For texture mapping the hull polygons are projected into each possible input bitmap to find suitable color sources regarding the coverage and resolution. Occlusions are detected by ray-casting a full-scale digital surface model (DSM) of the scene and stored in pixel-precise visibility maps. These maps are used to derive overlap statistics and radiometric adjustment coefficients to be applied when the visible image parts for each building polygon are being copied into a compact texture atlas without resampling whenever possible. The atlas bitmap is passed to a commercial object-based image analysis (OBIA) tool running a custom rule set to identify windows on the contained façade patches. Following multi-resolution segmentation and classification based on brightness and contrast differences potential window objects are evaluated against geometric

  6. Environmental dose reconstruction: Approaches to an inexact science

    International Nuclear Information System (INIS)

    Hoffman, F.O.

    1991-01-01

    The endpoints of environmental dose reconstruction are quantitative, yet the science is inexact. Four problems related to this issue are described. These problems are: (1) Defining the scope of the assessment and setting logical priorities for detailed investigations, (2) Recognizing the influence of investigator judgment on the results, (3) Selecting an endpoint other than dose for the assessment of multiple contaminants, and (4) Resolving the conflict between credibility and expertise in selecting individuals responsible for dose reconstruction. Approaches are recommended for dealing with each of these problems

  7. WASS: an open-source stereo processing pipeline for sea waves 3D reconstruction

    Science.gov (United States)

    Bergamasco, Filippo; Benetazzo, Alvise; Torsello, Andrea; Barbariol, Francesco; Carniel, Sandro; Sclavo, Mauro

    2017-04-01

    Stereo 3D reconstruction of ocean waves is gaining more and more popularity in the oceanographic community. In fact, recent advances of both computer vision algorithms and CPU processing power can now allow the study of the spatio-temporal wave fields with unprecedented accuracy, especially at small scales. Although simple in theory, many details are difficult for a practitioner to master, so that the implementation of a 3D reconstruction pipeline is in general considered a complex task. For instance, camera calibration, reliable stereo feature matching and mean sea-plane estimation are all factors for which a well designed implementation can make the difference in obtaining valuable results. For this reason, we believe that the open availability of a well-tested software package that automates the steps from stereo images to a 3D point cloud would be a valuable addition for future research in this area. We present WASS, a completely Open-Source stereo processing pipeline for sea waves 3D reconstruction, available at http://www.dais.unive.it/wass/. Our tool completely automates the recovery of dense point clouds from stereo images by providing three main functionalities. First, WASS can automatically recover the extrinsic parameters of the stereo rig (up to scale) so that no delicate calibration has to be performed on the field. Second, WASS implements a fast 3D dense stereo reconstruction procedure so that an accurate 3D point cloud can be computed from each stereo pair. We rely on the well-consolidated OpenCV library both for the image stereo rectification and disparity map recovery. Lastly, a set of 2D and 3D filtering techniques both on the disparity map and the produced point cloud are implemented to remove the vast majority of erroneous points that can naturally arise while analyzing the optically complex nature of the water surface (examples are sun-glares, large white-capped areas, fog and water aerosol, etc.). Developed to be as fast as possible, WASS

  8. Source-plane reconstruction of the giant gravitational arc in A2667: A candidate Wolf-Rayet galaxy at z ∼ 1

    International Nuclear Information System (INIS)

    Cao, Shuo; Zhu, Zong-Hong; Covone, Giovanni; Jullo, Eric; Richard, Johan; Izzo, Luca

    2015-01-01

    We present a new analysis of Hubble Space Telescope, Spitzer Space Telescope, and Very Large Telescope imaging and spectroscopic data of a bright lensed galaxy at z = 1.0334 in the lensing cluster A2667. Using this high-resolution imaging, we present an updated lens model that allows us to fully understand the lensing geometry and reconstruct the lensed galaxy in the source plane. This giant arc gives a unique opportunity to view the structure of a high-redshift disk galaxy. We find that the lensed galaxy of A2667 is a typical spiral galaxy with a morphology similar to the structure of its counterparts at higher redshift, z ∼ 2. The surface brightness of the reconstructed source galaxy in the z850 band reveals the central surface brightness I(0) = 20.28 ± 0.22 mag arcsec^-2 and a characteristic radius r_s = 2.01 ± 0.16 kpc at redshift z ∼ 1. The morphological reconstruction in different bands shows obvious negative radial color gradients for this galaxy. Moreover, the redder central bulge tends to contain a metal-rich stellar population, rather than being heavily reddened by dust due to high and patchy obscuration. We analyze the VIMOS/integral field unit spectroscopic data and find that, in the given wavelength range (∼1800-3200 Å), the combined arc spectrum of the source galaxy is characterized by a strong continuum emission with strong UV absorption lines (Fe II and Mg II) and shows the features of a typical starburst Wolf-Rayet galaxy, NGC 5253. More specifically, we have measured the equivalent widths of Fe II and Mg II lines in the A2667 spectrum, and obtained similar values for the same wavelength interval of the NGC 5253 spectrum. Marginal evidence for [C III] 1909 emission at the edge of the grism range further confirms our expectation.

  9. Group-decoupled multi-group pin power reconstruction utilizing nodal solution 1D flux profiles

    International Nuclear Information System (INIS)

    Yu, Lulin; Lu, Dong; Zhang, Shaohong; Wang, Dezhong

    2014-01-01

    Highlights: • A direct fitting multi-group pin power reconstruction method is developed. • The 1D nodal solution flux profiles are used as the condition. • The least square fit problem is analytically solved. • A slowing down source improvement method is applied. • The method shows good accuracy for even challenging problems. - Abstract: A group-decoupled direct fitting method is developed for multi-group pin power reconstruction, which avoids both the complication of obtaining 2D analytic multi-group flux solution and any group-coupled iteration. A unique feature of the method is that in addition to nodal volume and surface average fluxes and corner fluxes, transversely-integrated 1D nodal solution flux profiles are also used as the condition to determine the 2D intra-nodal flux distribution. For each energy group, a two-dimensional expansion with a nine-term polynomial and eight hyperbolic functions is used to perform a constrained least square fit to the 1D intra-nodal flux solution profiles. The constraints are on the conservation of nodal volume and surface average fluxes and corner fluxes. Instead of solving the constrained least square fit problem numerically, we solve it analytically by fully utilizing the symmetry property of the expansion functions. Each of the 17 unknown expansion coefficients is expressed in terms of nodal volume and surface average fluxes, corner fluxes and transversely-integrated flux values. To determine the unknown corner fluxes, a set of linear algebraic equations involving corner fluxes is established via using the current conservation condition on all corners. Moreover, an optional slowing down source improvement method is also developed to further enhance the accuracy of the reconstructed flux distribution if needed. Two test examples are shown with very good results. One is a four-group BWR mini-core problem with all control blades inserted and the other is the seven-group OECD NEA MOX benchmark, C5G7

  10. High-Resolution Source Parameter and Site Characteristics Using Near-Field Recordings - Decoding the Trade-off Problems Between Site and Source

    Science.gov (United States)

    Chen, X.; Abercrombie, R. E.; Pennington, C.

    2017-12-01

    Recorded seismic waveforms include contributions from earthquake source properties and propagation effects, leading to long-standing trade-off problems between site/path effects and source effects. With near-field recordings, the path effect is relatively small, so the trade-off problem can be simplified to one between source and site effects (the latter commonly referred to as the "kappa value"). This problem is especially significant for small earthquakes, where the corner frequencies are within similar ranges of kappa values, so direct spectrum fitting often leads to systematic biases due to corner frequency and magnitude. In response to the significantly increased seismicity rate in Oklahoma, several local networks have been deployed following major earthquakes: the Prague, Pawnee and Fairview earthquakes. Each network provides dense observations within 20 km surrounding the fault zone, recording tens of thousands of aftershocks between M1 and M3. Using near-field recordings in the Prague area, we apply a stacking approach to separate path/site and source effects. The resulting source parameters are consistent with parameters derived from ground motion and spectral ratio methods from other studies; they exhibit spatial coherence within the fault zone for different fault patches. We apply these source parameter constraints in an analysis of kappa values for stations within 20 km of the fault zone. The resulting kappa values show significantly reduced variability compared to those from direct spectral fitting without constraints on the source spectrum; they are not biased by earthquake magnitudes. With these improvements, we plan to apply the stacking analysis to other local arrays to analyze source properties and site characteristics. For selected individual earthquakes, we will also use individual-pair empirical Green's function (EGF) analysis to validate the source parameter estimations.
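
    The trade-off between source and site terms is easiest to see in the spectral model being fit. The Python sketch below fits a standard Brune-type source spectrum multiplied by a kappa attenuation term, first with all parameters free (the biased direct fit) and then with the corner frequency fixed, as a constraint from an independent source analysis would provide. The model form is standard, but the data, parameter values and function names are assumptions for illustration, not the authors' stacking procedure.

```python
import numpy as np
from scipy.optimize import curve_fit

def brune_kappa_model(f, omega0, fc, kappa):
    """Displacement amplitude spectrum: Brune source term times site attenuation.

    omega0 : low-frequency plateau, fc : source corner frequency (Hz),
    kappa  : near-site attenuation parameter (s).
    """
    return omega0 / (1.0 + (f / fc) ** 2) * np.exp(-np.pi * kappa * f)

# Synthetic small-event spectrum in the trade-off regime (fc and kappa act
# over similar frequency ranges).
f = np.linspace(0.5, 40.0, 200)
rng = np.random.default_rng(1)
observed = brune_kappa_model(f, 1.0, 8.0, 0.04) * (1.0 + 0.05 * rng.normal(size=f.size))

# Unconstrained direct fit: fc and kappa trade off against each other.
popt_free, _ = curve_fit(brune_kappa_model, f, observed, p0=[1.0, 5.0, 0.03])

# Fit with the source term fixed (e.g. taken from a stacking analysis), so
# only omega0 and kappa remain free.
fc_fixed = 8.0
popt_site, _ = curve_fit(lambda f, omega0, kappa:
                         brune_kappa_model(f, omega0, fc_fixed, kappa),
                         f, observed, p0=[1.0, 0.03])
print(popt_free, popt_site)
```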

  11. Clinical applications of iterative reconstruction

    International Nuclear Information System (INIS)

    Eberl, S.

    1998-01-01

    Expectation maximisation (EM) reconstruction largely eliminates the hot and cold streaking artifacts characteristic of filtered back projection (FBP) reconstruction around localised hot areas, such as the bladder. It also substantially reduces the problem of decreased inferior wall counts in MIBI myocardial perfusion studies due to "streaking" from high liver uptake. Non-uniform attenuation and scatter correction, resolution recovery, anatomical information (e.g. from MRI or CT), and tracer kinetic modelling can all be built into the EM reconstruction imaging model. The properties of ordered subset EM (OSEM) have also been used to correct for known patient motion as part of the reconstruction process. These uses of EM are elaborated more fully in some of the other abstracts of this meeting. Currently we use OSEM routinely for: (i) studies where streaking is a problem, including all MIBI myocardial perfusion studies, to avoid hot liver inferior wall artifact; (ii) all whole body FDG PET, all lung V/Q SPECT (which have a short acquisition time) and all gated 201Tl myocardial perfusion studies, due to improved noise characteristics of OSEM in these studies; (iii) studies with measured, non-uniform attenuation correction. With the accelerated OSEM algorithm, iterative reconstruction is practical for routine clinical applications and we have found OSEM to provide clearly superior reconstructions for the areas listed above and are investigating its application to other studies. In clinical use, we have not found OSEM to introduce artifacts which would not also occur with FBP, e.g. uncorrected patient motion will cause artifacts with both OSEM and FBP.
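
    For readers unfamiliar with the algorithm, the sketch below shows the core multiplicative ML-EM update on which OSEM is built. It is a textbook illustration in Python with hypothetical array names, not the clinical implementation discussed in the abstract.

```python
import numpy as np

def mlem(A, y, n_iter=200):
    """Basic ML-EM reconstruction for emission tomography.

    A : (n_bins, n_voxels) system matrix, y : (n_bins,) measured counts.
    OSEM applies the same multiplicative update over ordered subsets of the
    rows of A (and of y), cycling through the subsets within each iteration.
    """
    x = np.ones(A.shape[1])                     # flat initial image
    sens = A.sum(axis=0)                        # sensitivity (backprojection of ones)
    for _ in range(n_iter):
        proj = A @ x                            # forward projection
        x *= (A.T @ (y / np.maximum(proj, 1e-12))) / np.maximum(sens, 1e-12)
    return x

# Toy usage: 3-voxel phantom, 4 detector bins, noise-free data.
A = np.array([[0.5, 0.3, 0.0],
              [0.2, 0.4, 0.1],
              [0.0, 0.3, 0.5],
              [0.3, 0.0, 0.4]])
x_true = np.array([2.0, 0.5, 1.0])
print(mlem(A, A @ x_true))                      # approaches x_true
```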

  12. The use of anatomical information for molecular image reconstruction algorithms: Attenuation/Scatter correction, motion compensation, and noise reduction

    Energy Technology Data Exchange (ETDEWEB)

    Chun, Se Young [School of Electrical and Computer Engineering, Ulsan National Institute of Science and Technology (UNIST), Ulsan (Korea, Republic of)

    2016-03-15

    PET and SPECT are important tools for providing valuable molecular information about patients to clinicians. Advances in nuclear medicine hardware technologies and statistical image reconstruction algorithms enabled significantly improved image quality. Sequentially or simultaneously acquired anatomical images such as CT and MRI from hybrid scanners are also important ingredients for improving the image quality of PET or SPECT further. High-quality anatomical information has been used and investigated for attenuation and scatter corrections, motion compensation, and noise reduction via post-reconstruction filtering and regularization in inverse problems. In this article, we will review works using anatomical information for molecular image reconstruction algorithms for better image quality by describing mathematical models, discussing sources of anatomical information for different cases, and showing some examples.

  13. Early anterior cruciate ligament reconstruction can save meniscus without any complications

    Directory of Open Access Journals (Sweden)

    Chang-Ik Hur

    2017-01-01

    Conclusions: Early ACL reconstruction had excellent clinical results and stability as good as delayed reconstruction, without problems with knee motion, muscle power, or postural control. Moreover, early reconstruction offered a higher likelihood of meniscal repair. Therefore, early ACL reconstruction should be recommended.

  14. Direct cone beam SPECT reconstruction with camera tilt

    International Nuclear Information System (INIS)

    Jianying Li; Jaszczak, R.J.; Greer, K.L.; Coleman, R.E.; Zongjian Cao; Tsui, B.M.W.

    1993-01-01

    A filtered backprojection (FBP) algorithm is derived to perform cone beam (CB) single-photon emission computed tomography (SPECT) reconstruction with camera tilt using circular orbits. This algorithm reconstructs the tilted angle CB projection data directly by incorporating the tilt angle into it. When the tilt angle becomes zero, this algorithm reduces to that of Feldkamp. Experimentally acquired phantom studies using both a two-point source and the three-dimensional Hoffman brain phantom have been performed. The transaxial tilted cone beam brain images and profiles obtained using the new algorithm are compared with those without camera tilt. For those slices which have approximately the same distance from the detector in both tilt and non-tilt set-ups, the two transaxial reconstructions have similar profiles. The two-point source images reconstructed from this new algorithm and the tilted cone beam brain images are also compared with those reconstructed from the existing tilted cone beam algorithm. (author)

  15. EEGNET: An Open Source Tool for Analyzing and Visualizing M/EEG Connectome.

    Science.gov (United States)

    Hassan, Mahmoud; Shamas, Mohamad; Khalil, Mohamad; El Falou, Wassim; Wendling, Fabrice

    2015-01-01

    The brain is a large-scale complex network often referred to as the "connectome". Exploring the dynamic behavior of the connectome is a challenging issue as both excellent time and space resolution are required. In this context, Magneto/Electroencephalography (M/EEG) are effective neuroimaging techniques allowing for analysis of the dynamics of functional brain networks at the scalp level and/or at reconstructed sources. However, a tool that can cover all the processing steps of identifying brain networks from M/EEG data is still missing. In this paper, we report a novel software package, called EEGNET, running under MATLAB (MathWorks, Inc.), allowing for analysis and visualization of functional brain networks from M/EEG recordings. EEGNET is developed to analyze networks either at the level of scalp electrodes or at the level of reconstructed cortical sources. It includes i) basic steps in preprocessing M/EEG signals, ii) the solution of the inverse problem to localize/reconstruct the cortical sources, iii) the computation of functional connectivity among signals collected at surface electrodes and/or time courses of reconstructed sources, and iv) the computation of network measures based on graph theory analysis. EEGNET is unique in combining M/EEG functional connectivity analysis with the computation of graph-theoretical network measures. The first version of EEGNET is easy to use, flexible and user friendly. EEGNET is an open source tool and can be freely downloaded from this webpage: https://sites.google.com/site/eegnetworks/.
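
    As a rough illustration of steps iii) and iv), the Python sketch below computes a correlation-based connectivity matrix from source (or electrode) time courses and derives a few elementary graph measures. The function name, the threshold, and the choice of Pearson correlation are assumptions for illustration; EEGNET itself offers several connectivity estimators and a much richer set of graph metrics.

```python
import numpy as np

def connectivity_and_graph_measures(source_ts, threshold=0.3):
    """Correlation-based connectivity plus a few elementary graph measures.

    source_ts : (n_sources, n_samples) reconstructed source (or electrode)
                time courses for one condition/frequency band.
    """
    # Step iii (simplified): functional connectivity as absolute Pearson correlation.
    conn = np.abs(np.corrcoef(source_ts))
    np.fill_diagonal(conn, 0.0)

    # Step iv (simplified): graph measures on the thresholded network.
    adjacency = (conn >= threshold).astype(float)
    degree = adjacency.sum(axis=1)                    # node degree
    strength = conn.sum(axis=1)                       # weighted node strength
    density = adjacency.sum() / (adjacency.size - adjacency.shape[0])
    return conn, degree, strength, density

rng = np.random.default_rng(2)
conn, degree, strength, density = connectivity_and_graph_measures(rng.normal(size=(10, 500)))
print(round(density, 3))
```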

  16. Single Phase Current-Source Active Rectifier for Traction: Control System Design and Practical Problems

    Directory of Open Access Journals (Sweden)

    Jan Michalik

    2006-01-01

    Full Text Available This research has been motivated by industrial demand for a single-phase current-source active rectifier dedicated to the reconstruction of older types of DC machine locomotives. This paper presents the converter's control structure design and simulations. The proposed converter control is based on the mathematical model and, owing to possible interaction with railway signaling and the required low switching frequency, employs synchronous PWM. The simulation results are verified by experimental tests performed on a laboratory prototype rated at 7 kVA.

  17. Apparatus and method for reconstructing data

    International Nuclear Information System (INIS)

    Pavkovich, J.M.

    1977-01-01

    The apparatus and method for reconstructing data are described. A fan beam of radiation is passed through an object, the beam lying in the same quasi-plane as the object slice to be examined. Radiation not absorbed in the object slice is recorded on oppositely situated detectors aligned with the source of radiation. Relative rotation is provided between the source-detector configuration and the object. Reconstruction means are coupled to the detector means, and may comprise a general purpose computer, a special purpose computer, and control logic for interfacing between said computers and controlling the respective functioning thereof for performing a convolution and back projection based upon non-absorbed radiation detected by said detector means, whereby the reconstruction means converts values of the non-absorbed radiation into values of absorbed radiation at each of an arbitrarily large number of points selected within the object slice. Display means are coupled to the reconstruction means for providing a visual or other display or representation of the quantities of radiation absorbed at the points considered in the object. (Auth.)
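
    The convolution-plus-backprojection idea at the heart of the described apparatus can be illustrated, for the simpler parallel-beam geometry rather than the fan-beam geometry of the record, by the short Python sketch below. The ramp filter, grid size and variable names are illustrative assumptions.

```python
import numpy as np

def fbp_parallel(sinogram, angles_deg):
    """Minimal parallel-beam filtered backprojection: ramp-filter ("convolve")
    each projection, then back project it across the image grid.

    sinogram   : (n_angles, n_detectors) measured line integrals
    angles_deg : projection angles in degrees
    """
    n_ang, n_det = sinogram.shape
    ramp = np.abs(np.fft.fftfreq(n_det))                       # ramp filter
    filtered = np.real(np.fft.ifft(np.fft.fft(sinogram, axis=1) * ramp, axis=1))

    image = np.zeros((n_det, n_det))
    centre = (n_det - 1) / 2.0
    xs = np.arange(n_det) - centre
    X, Y = np.meshgrid(xs, xs, indexing="xy")
    for proj, theta in zip(filtered, np.deg2rad(angles_deg)):
        t = X * np.cos(theta) + Y * np.sin(theta) + centre     # detector coordinate
        image += np.interp(t, np.arange(n_det), proj, left=0.0, right=0.0)
    return image * np.pi / n_ang
```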

  18. Environmental problems due to mining in Jharia Coalfield

    Energy Technology Data Exchange (ETDEWEB)

    Banerjee, S.P.

    1985-06-01

    The Jharia coalfield to the NW of Calcutta, India is the most important source of coking coal in the country. Coal mining started in 1890; in 1971-3 the coking coal mines were nationalised, the remainder being operated by the Bharat Coking Coal Ltd., TISCO and IISCO. Intensive mining has resulted in major environmental problems - land damage, water pollution, air pollution and overpopulation - which a major reconstruction programme started in 1976 hopes to solve. 2 references.

  19. Review on solving the inverse problem in EEG source analysis

    Directory of Open Access Journals (Sweden)

    Fabri Simon G

    2008-11-01

    Full Text Available Abstract In this primer, we give a review of the inverse problem for EEG source localization. It is intended to give researchers new to the field insight into the state-of-the-art techniques used to find approximate solutions of the brain sources giving rise to a scalp potential recording. Furthermore, a review of the performance results of the different techniques is provided to compare these different inverse solutions. The authors also include the results of a Monte-Carlo analysis which they performed to compare four non parametric algorithms and hence contribute to what is presently recorded in the literature. An extensive list of references to the work of other researchers is also provided. This paper starts off with a mathematical description of the inverse problem and proceeds to discuss the two main categories of methods which were developed to solve the EEG inverse problem, mainly the non parametric and parametric methods. The main difference between the two is whether a fixed number of dipoles is assumed a priori or not. Various techniques falling within these categories are described including minimum norm estimates and their generalizations, LORETA, sLORETA, VARETA, S-MAP, ST-MAP, Backus-Gilbert, LAURA, Shrinking LORETA FOCUSS (SLF), SSLOFO and ALF for non parametric methods, and beamforming techniques, BESA, subspace techniques such as MUSIC and methods derived from it, FINES, simulated annealing and computational intelligence algorithms for parametric methods. From a review of the performance of these techniques as documented in the literature, one could conclude that in most cases the LORETA solution gives satisfactory results. In situations involving clusters of dipoles, higher resolution algorithms such as MUSIC or FINES are however preferred. Imposing reliable biophysical and psychological constraints, as done by LAURA, has given superior results. The Monte-Carlo analysis performed, comparing WMN, LORETA, sLORETA and SLF
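
    As a concrete example of the simplest non parametric approach mentioned above, the following Python sketch computes a regularized minimum norm estimate (MNE). The matrix names and the regularization value are illustrative assumptions; the weighted variants (WMN and LORETA-family methods) modify this formula with a source-space weighting matrix.

```python
import numpy as np

def minimum_norm_estimate(L, y, lam=1e-2):
    """Regularized minimum norm estimate (MNE) for the EEG inverse problem.

    L   : (n_electrodes, n_sources) lead field matrix
    y   : (n_electrodes,) scalp potentials at one time instant
    lam : Tikhonov regularization parameter
    """
    n_e = L.shape[0]
    gram = L @ L.T + lam * np.eye(n_e)         # regularized electrode-space Gram matrix
    return L.T @ np.linalg.solve(gram, y)      # s = L^T (L L^T + lam I)^{-1} y

# Weighted variants (WMN and LORETA-type methods) introduce a source weighting
# or covariance matrix R, giving s = R L^T (L R L^T + lam I)^{-1} y.
```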

  20. Bayesian image reconstruction in SPECT using higher order mechanical models as priors

    International Nuclear Information System (INIS)

    Lee, S.J.; Gindi, G.; Rangarajan, A.

    1995-01-01

    While the ML-EM (maximum-likelihood-expectation maximization) algorithm for reconstruction for emission tomography is unstable due to the ill-posed nature of the problem, Bayesian reconstruction methods overcome this instability by introducing prior information, often in the form of a spatial smoothness regularizer. More elaborate forms of smoothness constraints may be used to extend the role of the prior beyond that of a stabilizer in order to capture actual spatial information about the object. Previously proposed forms of such prior distributions were based on the assumption of a piecewise constant source distribution. Here, the authors propose an extension to a piecewise linear model--the weak plate--which is more expressive than the piecewise constant model. The weak plate prior not only preserves edges but also allows for piecewise ramplike regions in the reconstruction. Indeed, for the application in SPECT, such ramplike regions are observed in ground-truth source distributions in the form of primate autoradiographs of rCBF radionuclides. To incorporate the weak plate prior in a MAP approach, the authors model the prior as a Gibbs distribution and use a GEM formulation for the optimization. They compare quantitative performance of the ML-EM algorithm, a GEM algorithm with a prior favoring piecewise constant regions, and a GEM algorithm with the weak plate prior. Pointwise and regional bias and variance of ensemble image reconstructions are used as indications of image quality. The results show that the weak plate and membrane priors exhibit improved bias and variance relative to ML-EM techniques

  1. The problem of the architectural heritage reconstruction

    Directory of Open Access Journals (Sweden)

    Alfazhr M.A.

    2017-02-01

    Full Text Available The subject of this research is modern technology for the restoration of architectural monuments, which makes it possible to improve the design, performance, and durability of historical objects. Choosing the most efficient, cost-effective and durable technologies for the recovery and extension of architectural monuments is a priority for historical cities. Adopting faster and sound restoration technologies is necessary because many historical Russian cities are in need of repair and reconstruction. It is therefore essential to develop new methods and technologies for improving renovation works on the basis of western construction experience.

  2. Simulation study of two-energy X-ray fluorescence holograms reconstruction algorithm to remove twin images

    International Nuclear Information System (INIS)

    Xie Honglan; Hu Wen; Luo Hongxin; Deng Biao; Du Guohao; Xue Yanling; Chen Rongchang; Shi Shaomeng; Xiao Tiqiao

    2008-01-01

    Unlike traditional outside-source holography, X-ray fluorescence holography is carried out with fluorescent atoms in a sample acting as the light source for holographic imaging. With this method, the three-dimensional arrangement of atoms in crystals can be observed directly. However, just like traditional outside-source holography, X-ray fluorescence holography suffers from the inherent twin-image problem. Using a cubic lattice of 27 Fe atoms as a model, we discuss in this paper the influence of the incident photon energy on the removal of twin images from reconstructed atomic images, by numerical simulation and reconstruction with two-energy X-ray fluorescence holography. The results indicate that incident X-rays of closer energies are more effective at removing twin images. For detector-based X-ray holography, the minimum difference between the two incident energies depends on the energy resolution of the monochromator and detector; for inside-source X-ray holography, it depends on the difference between two neighboring fluorescence energies emitted by the element and on the energy resolution of the detector. The spatial resolution of the atomic images increases with the incident energies. This is important for X-ray fluorescence holography experiments being developed at the Shanghai Synchrotron Radiation Facility. (authors)

  3. Compressing Sensing Based Source Localization for Controlled Acoustic Signals Using Distributed Microphone Arrays

    Directory of Open Access Journals (Sweden)

    Wei Ke

    2017-01-01

    Full Text Available In order to enhance the accuracy of sound source localization in noisy and reverberant environments, this paper proposes an adaptive sound source localization method based on distributed microphone arrays. Since sound sources lie at a few points in the discrete spatial domain, our method can exploit this inherent sparsity to convert the localization problem into a sparse recovery problem based on the compressive sensing (CS) theory. In this method, a two-step discrete cosine transform (DCT) based feature extraction approach is utilized to cover both short-time and long-time properties of acoustic signals and reduce the dimensions of the sparse model. In addition, an online dictionary learning (DL) method is used to adjust the dictionary for matching the changes of audio signals, so that the sparse solution can better represent the location estimates. Moreover, we propose an improved block-sparse reconstruction algorithm using approximate l0 norm minimization to enhance reconstruction performance for sparse signals in low signal-to-noise ratio (SNR) conditions. The effectiveness of the proposed scheme is demonstrated by simulation results and experimental results where substantial improvement in localization performance can be obtained in noisy and reverberant conditions.
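
    To make the sparse-recovery step concrete, the sketch below implements plain orthogonal matching pursuit (OMP), a standard greedy solver for such problems. It stands in for, and is not the same as, the improved approximate l0-norm block-sparse algorithm proposed in the paper; the dictionary, measurement vector and sparsity level are hypothetical.

```python
import numpy as np

def orthogonal_matching_pursuit(D, y, sparsity):
    """Greedy sparse recovery: repeatedly pick the dictionary atom most
    correlated with the residual and re-fit the selected atoms by least squares.

    D        : (n_measurements, n_atoms) dictionary (e.g. learned online)
    y        : (n_measurements,) observed feature vector
    sparsity : number of active atoms (candidate source locations) to keep
    """
    residual, support = y.copy(), []
    x = np.zeros(D.shape[1])
    for _ in range(sparsity):
        idx = int(np.argmax(np.abs(D.T @ residual)))           # best-matching atom
        if idx not in support:
            support.append(idx)
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x[support] = coef
    return x          # nonzero entries mark the estimated source locations
```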

  4. BigSUR: large-scale structured urban reconstruction

    KAUST Repository

    Kelly, Tom

    2017-11-22

    The creation of high-quality semantically parsed 3D models for dense metropolitan areas is a fundamental urban modeling problem. Although recent advances in acquisition techniques and processing algorithms have resulted in large-scale imagery or 3D polygonal reconstructions, such data-sources are typically noisy, and incomplete, with no semantic structure. In this paper, we present an automatic data fusion technique that produces high-quality structured models of city blocks. From coarse polygonal meshes, street-level imagery, and GIS footprints, we formulate a binary integer program that globally balances sources of error to produce semantically parsed mass models with associated facade elements. We demonstrate our system on four city regions of varying complexity; our examples typically contain densely built urban blocks spanning hundreds of buildings. In our largest example, we produce a structured model of 37 city blocks spanning a total of 1,011 buildings at a scale and quality previously impossible to achieve automatically.

  5. BigSUR: large-scale structured urban reconstruction

    KAUST Repository

    Kelly, Tom; Femiani, John; Wonka, Peter; Mitra, Niloy J.

    2017-01-01

    The creation of high-quality semantically parsed 3D models for dense metropolitan areas is a fundamental urban modeling problem. Although recent advances in acquisition techniques and processing algorithms have resulted in large-scale imagery or 3D polygonal reconstructions, such data-sources are typically noisy, and incomplete, with no semantic structure. In this paper, we present an automatic data fusion technique that produces high-quality structured models of city blocks. From coarse polygonal meshes, street-level imagery, and GIS footprints, we formulate a binary integer program that globally balances sources of error to produce semantically parsed mass models with associated facade elements. We demonstrate our system on four city regions of varying complexity; our examples typically contain densely built urban blocks spanning hundreds of buildings. In our largest example, we produce a structured model of 37 city blocks spanning a total of 1,011 buildings at a scale and quality previously impossible to achieve automatically.

  6. Graph-cut based discrete-valued image reconstruction.

    Science.gov (United States)

    Tuysuzoglu, Ahmet; Karl, W Clem; Stojanovic, Ivana; Castañón, David; Ünlü, M Selim

    2015-05-01

    Efficient graph-cut methods have been used with great success for labeling and denoising problems occurring in computer vision. Unfortunately, the presence of linear image mappings has prevented the use of these techniques in most discrete-amplitude image reconstruction problems. In this paper, we develop a graph-cut based framework for the direct solution of discrete amplitude linear image reconstruction problems cast as regularized energy function minimizations. We first analyze the structure of discrete linear inverse problem cost functions to show that the obstacle to the application of graph-cut methods to their solution is the variable mixing caused by the presence of the linear sensing operator. We then propose to use a surrogate energy functional that overcomes the challenges imposed by the sensing operator yet can be utilized efficiently in existing graph-cut frameworks. We use this surrogate energy functional to devise a monotonic iterative algorithm for the solution of discrete valued inverse problems. We first provide experiments using local convolutional operators and show the robustness of the proposed technique to noise and stability to changes in regularization parameter. Then we focus on nonlocal, tomographic examples where we consider limited-angle data problems. We compare our technique with state-of-the-art discrete and continuous image reconstruction techniques. Experiments show that the proposed method outperforms state-of-the-art techniques in challenging scenarios involving discrete valued unknowns.
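
    The classical labeling use of graph cuts that the paper builds on can be sketched as an s-t minimum cut; the toy binary denoising below uses networkx for the max-flow/min-cut computation. The weights, the 4-neighbour structure and the function name are illustrative assumptions, and the paper's actual contribution (a surrogate energy that handles the linear sensing operator) is not reproduced here.

```python
import numpy as np
import networkx as nx

def binary_denoise_graph_cut(noisy, unary_weight=1.0, pairwise_weight=0.6):
    """Binary image denoising posed as an s-t minimum cut.

    noisy : 2D array of 0/1 observations. Disagreeing with an observation costs
    unary_weight; each label change between 4-neighbours costs pairwise_weight.
    """
    h, w = noisy.shape
    G, s, t = nx.DiGraph(), "s", "t"
    for i in range(h):
        for j in range(w):
            p = (i, j)
            G.add_edge(s, p, capacity=unary_weight * float(noisy[i, j]))      # cost of label 0
            G.add_edge(p, t, capacity=unary_weight * float(1 - noisy[i, j]))  # cost of label 1
            for q in ((i + 1, j), (i, j + 1)):                                # smoothness terms
                if q[0] < h and q[1] < w:
                    G.add_edge(p, q, capacity=pairwise_weight)
                    G.add_edge(q, p, capacity=pairwise_weight)
    _, (source_side, _) = nx.minimum_cut(G, s, t)
    labels = np.zeros((h, w), dtype=int)
    for node in source_side:
        if node != s:
            labels[node] = 1
    return labels
```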

  7. Comparison of algebraic and analytical approaches to the formulation of the statistical model-based reconstruction problem for X-ray computed tomography.

    Science.gov (United States)

    Cierniak, Robert; Lorent, Anna

    2016-09-01

    The main aim of this paper is to investigate properties of our originally formulated statistical model-based iterative approach applied to the image reconstruction from projections problem which are related to its conditioning, and, in this manner, to prove a superiority of this approach over ones recently used by other authors. The reconstruction algorithm based on this conception uses a maximum likelihood estimation with an objective adjusted to the probability distribution of measured signals obtained from an X-ray computed tomography system with parallel beam geometry. The analysis and experimental results presented here show that our analytical approach outperforms the referential algebraic methodology which is explored widely in the literature and exploited in various commercial implementations. Copyright © 2016 Elsevier Ltd. All rights reserved.

  8. PolyFit: Polygonal Surface Reconstruction from Point Clouds

    KAUST Repository

    Nan, Liangliang; Wonka, Peter

    2017-01-01

    We propose a novel framework for reconstructing lightweight polygonal surfaces from point clouds. Unlike traditional methods that focus on either extracting good geometric primitives or obtaining proper arrangements of primitives, the emphasis of this work lies in intersecting the primitives (planes only) and seeking an appropriate combination of them to obtain a manifold polygonal surface model without boundary. We show that reconstruction from point clouds can be cast as a binary labeling problem. Our method is based on a hypothesizing and selection strategy. We first generate a reasonably large set of face candidates by intersecting the extracted planar primitives. Then an optimal subset of the candidate faces is selected through optimization. Our optimization is based on a binary linear programming formulation under hard constraints that enforce the final polygonal surface model to be manifold and watertight. Experiments on point clouds from various sources demonstrate that our method can generate lightweight polygonal surface models of arbitrary piecewise planar objects. Besides, our method is capable of recovering sharp features and is robust to noise, outliers, and missing data.

  9. PolyFit: Polygonal Surface Reconstruction from Point Clouds

    KAUST Repository

    Nan, Liangliang

    2017-12-25

    We propose a novel framework for reconstructing lightweight polygonal surfaces from point clouds. Unlike traditional methods that focus on either extracting good geometric primitives or obtaining proper arrangements of primitives, the emphasis of this work lies in intersecting the primitives (planes only) and seeking an appropriate combination of them to obtain a manifold polygonal surface model without boundary. We show that reconstruction from point clouds can be cast as a binary labeling problem. Our method is based on a hypothesizing and selection strategy. We first generate a reasonably large set of face candidates by intersecting the extracted planar primitives. Then an optimal subset of the candidate faces is selected through optimization. Our optimization is based on a binary linear programming formulation under hard constraints that enforce the final polygonal surface model to be manifold and watertight. Experiments on point clouds from various sources demonstrate that our method can generate lightweight polygonal surface models of arbitrary piecewise planar objects. Besides, our method is capable of recovering sharp features and is robust to noise, outliers, and missing data.

  10. Phase retrieval problems in X-ray physics. From modeling to efficient algorithms

    International Nuclear Information System (INIS)

    Homann, Carolin

    2015-01-01

    In phase retrieval problems that occur in imaging by coherent X-ray diffraction, one tries to reconstruct information about a sample of interest from possibly noisy intensity measurements of the wave field traversing the sample. The mathematical formulation of these problems is based on certain assumptions. Usually one of them is that the X-ray wave field is generated by a point source. In order to address this very idealized assumption, it is common to perform a data preprocessing step, the so-called empty beam correction. Within this work, we study the validity of this approach by presenting a quantitative error estimate. Moreover, in order to solve these phase retrieval problems, we want to incorporate a priori knowledge about the structure of the noise and the solution into the reconstruction process. For this reason, the application of a problem-adapted iteratively regularized Newton-type method becomes particularly attractive. This method includes the solution of a convex minimization problem in each iteration step. We present a method for solving general optimization problems of this form. Our method is a generalization of a commonly used algorithm which makes it efficiently applicable to a wide class of problems. We also prove convergence results and show the performance of our method by numerical examples.

  11. Temporal resolution and motion artifacts in single-source and dual-source cardiac CT.

    Science.gov (United States)

    Schöndube, Harald; Allmendinger, Thomas; Stierstorfer, Karl; Bruder, Herbert; Flohr, Thomas

    2013-03-01

    The temporal resolution of a given image in cardiac computed tomography (CT) has so far mostly been determined from the amount of CT data employed for the reconstruction of that image. The purpose of this paper is to examine the applicability of such measures to the newly introduced modality of dual-source CT as well as to methods aiming to provide improved temporal resolution by means of an advanced image reconstruction algorithm. To provide a solid base for the examinations described in this paper, an extensive review of temporal resolution in conventional single-source CT is given first. Two different measures for assessing temporal resolution with respect to the amount of data involved are introduced, namely, either taking the full width at half maximum of the respective data weighting function (FWHM-TR) or the total width of the weighting function (total TR) as a base of the assessment. Image reconstruction using both a direct fan-beam filtered backprojection with Parker weighting as well as using a parallel-beam rebinning step are considered. The theory of assessing temporal resolution by means of the data involved is then extended to dual-source CT. Finally, three different advanced iterative reconstruction methods that all use the same input data are compared with respect to the resulting motion artifact level. For brevity and simplicity, the examinations are limited to two-dimensional data acquisition and reconstruction. However, all results and conclusions presented in this paper are also directly applicable to both circular and helical cone-beam CT. While the concept of total TR can directly be applied to dual-source CT, the definition of the FWHM of a weighting function needs to be slightly extended to be applicable to this modality. The three different advanced iterative reconstruction methods examined in this paper result in significantly different images with respect to their motion artifact level, despite exactly the same amount of data being used

  12. Temporal resolution and motion artifacts in single-source and dual-source cardiac CT

    International Nuclear Information System (INIS)

    Schöndube, Harald; Allmendinger, Thomas; Stierstorfer, Karl; Bruder, Herbert; Flohr, Thomas

    2013-01-01

    Purpose: The temporal resolution of a given image in cardiac computed tomography (CT) has so far mostly been determined from the amount of CT data employed for the reconstruction of that image. The purpose of this paper is to examine the applicability of such measures to the newly introduced modality of dual-source CT as well as to methods aiming to provide improved temporal resolution by means of an advanced image reconstruction algorithm. Methods: To provide a solid base for the examinations described in this paper, an extensive review of temporal resolution in conventional single-source CT is given first. Two different measures for assessing temporal resolution with respect to the amount of data involved are introduced, namely, either taking the full width at half maximum of the respective data weighting function (FWHM-TR) or the total width of the weighting function (total TR) as a base of the assessment. Image reconstruction using both a direct fan-beam filtered backprojection with Parker weighting as well as using a parallel-beam rebinning step are considered. The theory of assessing temporal resolution by means of the data involved is then extended to dual-source CT. Finally, three different advanced iterative reconstruction methods that all use the same input data are compared with respect to the resulting motion artifact level. For brevity and simplicity, the examinations are limited to two-dimensional data acquisition and reconstruction. However, all results and conclusions presented in this paper are also directly applicable to both circular and helical cone-beam CT. Results: While the concept of total TR can directly be applied to dual-source CT, the definition of the FWHM of a weighting function needs to be slightly extended to be applicable to this modality. The three different advanced iterative reconstruction methods examined in this paper result in significantly different images with respect to their motion artifact level, despite exactly the same

  13. Problems of accuracy and sources of error in trace analysis of elements

    International Nuclear Information System (INIS)

    Porat, Ze'ev.

    1995-07-01

    The technological developments in the field of analytical chemistry in recent years facilitate trace analysis of materials at sub-ppb levels. This provides important information regarding the presence of various trace elements in the human body, in drinking water and in the environment. However, it also exposes the measurements to more severe problems of contamination and inaccuracy due to the high sensitivity of the analytical methods. The sources of error are numerous and fall into three main groups: (a) impurities from various sources; (b) loss of material during sample processing; (c) problems of calibration and interference. These difficulties are discussed here in detail, together with some practical solutions and examples. (authors) 8 figs., 2 tabs., 18 refs.

  14. Problems of accuracy and sources of error in trace analysis of elements

    Energy Technology Data Exchange (ETDEWEB)

    Porat, Ze'ev

    1995-07-01

    The technological developments in the field of analytical chemistry in recent years facilitate trace analysis of materials at sub-ppb levels. This provides important information regarding the presence of various trace elements in the human body, in drinking water and in the environment. However, it also exposes the measurements to more severe problems of contamination and inaccuracy due to the high sensitivity of the analytical methods. The sources of error are numerous and fall into three main groups: (a) impurities from various sources; (b) loss of material during sample processing; (c) problems of calibration and interference. These difficulties are discussed here in detail, together with some practical solutions and examples. (authors) 8 figs., 2 tabs., 18 refs.

  15. Three-dimensional surgical modelling with an open-source software protocol: study of precision and reproducibility in mandibular reconstruction with the fibula free flap.

    Science.gov (United States)

    Ganry, L; Quilichini, J; Bandini, C M; Leyder, P; Hersant, B; Meningaud, J P

    2017-08-01

    Very few surgical teams currently use totally independent and free solutions to perform three-dimensional (3D) surgical modelling for osseous free flaps in reconstructive surgery. This study assessed the precision and technical reproducibility of a 3D surgical modelling protocol using free open-source software in mandibular reconstruction with fibula free flaps and surgical guides. Precision was assessed through comparisons of the 3D surgical guide to the sterilized 3D-printed guide, determining accuracy to the millimetre level. Reproducibility was assessed in three surgical cases by volumetric comparison to the millimetre level. For the 3D surgical modelling, a difference of less than 0.1mm was observed, with almost no deformations. The average precision of the free flap modelling was between 0.1mm and 0.4mm, and the average precision of the complete reconstructed mandible was less than 1mm. The open-source software protocol demonstrated high accuracy without complications. However, the precision of the surgical case depends on the surgeon's 3D surgical modelling. Therefore, surgeons need training on the use of this protocol before applying it to surgical cases; this constitutes a limitation. Further studies should address the transfer of expertise. Copyright © 2017 International Association of Oral and Maxillofacial Surgeons. Published by Elsevier Ltd. All rights reserved.

  16. Maxillary reconstruction

    Directory of Open Access Journals (Sweden)

    Brown James

    2007-12-01

    Full Text Available This article aims to discuss the various defects that occur with maxillectomy, with a full review of the literature and discussion of the advantages and disadvantages of the various techniques described. Reconstruction of the maxilla can be relatively simple for the standard low maxillectomy that does not involve the orbital floor (Class 2). In this situation the structure of the face is less damaged and there are multiple reconstructive options for the restoration of the maxilla and dental alveolus. If the maxillectomy includes the orbit (Class 4), then problems involving the eye (enophthalmos, orbital dystopia, ectropion and diplopia) are avoided, which simplifies the reconstruction. Most controversy is associated with the maxillectomy that involves the orbital floor and dental alveolus (Class 3). A case is made for the use of the iliac crest with internal oblique as an ideal option, but there are other methods which may provide a similar result. A multidisciplinary approach to these patients is emphasised, which should include a prosthodontist with special expertise in these defects.

  17. Reconstruction of reflectance data using an interpolation technique.

    Science.gov (United States)

    Abed, Farhad Moghareh; Amirshahi, Seyed Hossein; Abed, Mohammad Reza Moghareh

    2009-03-01

    A linear interpolation method is applied for reconstruction of reflectance spectra of Munsell as well as ColorChecker SG color chips from the corresponding colorimetric values under a given set of viewing conditions. Hence, different types of lookup tables (LUTs) have been created to connect the colorimetric and spectrophotometric data as the source and destination spaces in this approach. To optimize the algorithm, different color spaces and light sources have been used to build different types of LUTs. The effects of applied color datasets as well as employed color spaces are investigated. Results of recovery are evaluated by the mean and the maximum color difference values under other sets of standard light sources. The mean and the maximum values of root mean square (RMS) error between the reconstructed and the actual spectra are also calculated. Since the speed of reflectance reconstruction is a key point in the LUT algorithm, the processing time spent for interpolation of spectral data has also been measured for each model. Finally, the performance of the suggested interpolation technique is compared with that of the common principal component analysis method. According to the results, using the CIEXYZ tristimulus values as a source space shows priority over the CIELAB color space. Besides, the colorimetric position of a desired sample is a key point that indicates the success of the approach. In fact, because of the nature of the interpolation technique, the colorimetric position of the desired samples should be located inside the color gamut of available samples in the dataset. The resultant spectra that have been reconstructed by this technique show considerable improvement in terms of RMS error between the actual and the reconstructed reflectance spectra as well as CIELAB color differences under the other light source in comparison with those obtained from the standard PCA technique.
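
    A minimal Python sketch of the LUT idea, using SciPy's LinearNDInterpolator to map tristimulus values to reflectance spectra, is given below. The training arrays are hypothetical placeholders; the paper's own LUTs, color spaces and illuminants are not reproduced, and, as noted above, the interpolation is only defined for queries inside the gamut (convex hull) of the training samples.

```python
import numpy as np
from scipy.interpolate import LinearNDInterpolator

def build_reflectance_lut(xyz_train, reflectance_train):
    """Colorimetric-to-spectral lookup table by linear interpolation.

    xyz_train         : (n_samples, 3) CIEXYZ values of the training chips
    reflectance_train : (n_samples, n_wavelengths) measured reflectance spectra
    Queries outside the convex hull (gamut) of the training colors return NaN.
    """
    return LinearNDInterpolator(xyz_train, reflectance_train)

# Hypothetical usage with training arrays xyz_munsell / refl_munsell:
# lut = build_reflectance_lut(xyz_munsell, refl_munsell)
# estimated_spectrum = lut(np.array([[41.2, 35.6, 12.3]]))[0]
```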

  18. Typology of historical sources and the reconstruction of long-term historical changes of riverine fish: a case study of the Austrian Danube and northern Russian rivers

    Science.gov (United States)

    Haidvogl, Gertrud; Lajus, Dmitry; Pont, Didier; Schmid, Martin; Jungwirth, Mathias; Lajus, Julia

    2014-01-01

    Historical data are widely used in river ecology to define reference conditions or to investigate the evolution of aquatic systems. Most studies rely on printed documents from the 19th century, thus missing pre-industrial states and human impacts. This article discusses historical sources that can be used to reconstruct the development of riverine fish communities from the Late Middle Ages until the mid-20th century. Based on the studies of the Austrian Danube and northern Russian rivers, we propose a classification scheme of printed and archival sources and describe their fish ecological contents. Five types of sources were identified using the origin of sources as the first criterion: (i) early scientific surveys, (ii) fishery sources, (iii) fish trading sources, (iv) fish consumption sources and (v) cultural representations of fish. Except for early scientific surveys, all these sources were produced within economic and administrative contexts. They did not aim to report about historical fish communities, but do contain information about commercial fish and their exploitation. All historical data need further analysis for a fish ecological interpretation. Three case studies from the investigated Austrian and Russian rivers demonstrate the use of different source types and underline the necessity for a combination of different sources and a methodology combining different disciplinary approaches. Using a large variety of historical sources to reconstruct the development of past fish ecological conditions can support future river management by going beyond the usual approach of static historical reference conditions. PMID:25284959

  19. Challenges in the reconstruction of bilateral maxillectomy defects.

    Science.gov (United States)

    Joseph, Shawn T; Thankappan, Krishnakumar; Buggaveeti, Rahul; Sharma, Mohit; Mathew, Jimmy; Iyer, Subramania

    2015-02-01

    Bilateral maxillectomy defects, if not adequately reconstructed, can result in grave esthetic and functional problems. The purpose of this study was to investigate the outcome of reconstruction of such defects. This is a retrospective case series. The defects were analyzed for their components and the flaps used for reconstruction. Outcomes for flap loss and functional indices, including oral diet, speech, and dental rehabilitation, also were evaluated. Ten consecutive patients who underwent bilateral maxillectomy reconstruction received 14 flaps. Six patients had malignancies of the maxilla, and 4 patients had nonmalignant indications. Ten bony free flaps were used. Four soft tissue flaps were used. The fibula free flap was the most common flap used. Three patients had total flap loss. Seven patients were alive and available for functional evaluation. Of these, 4 were taking an oral diet with altered consistency and 2 were on a regular diet. Speech was intelligible in all patients. Only 2 patients opted for dental rehabilitation with removable dentures. Reconstruction after bilateral maxillectomy is essential to prevent esthetic and functional problems. Bony reconstruction is ideal. The fibula bone free flap is commonly used. The complexity of the defect makes reconstruction difficult and the initial success rate of free flaps is low. Secondary reconstructions after the initial flap failures were successful. A satisfactory functional outcome can be achieved. Copyright © 2015 American Association of Oral and Maxillofacial Surgeons. Published by Elsevier Inc. All rights reserved.

  20. Photoacoustic image reconstruction: a quantitative analysis

    Science.gov (United States)

    Sperl, Jonathan I.; Zell, Karin; Menzenbach, Peter; Haisch, Christoph; Ketzer, Stephan; Marquart, Markus; Koenig, Hartmut; Vogel, Mika W.

    2007-07-01

    Photoacoustic imaging is a promising new way to generate unprecedented contrast in ultrasound diagnostic imaging. It differs from other medical imaging approaches, in that it provides spatially resolved information about optical absorption of targeted tissue structures. Because the data acquisition process deviates from standard clinical ultrasound, choice of the proper image reconstruction method is crucial for successful application of the technique. In the literature, multiple approaches have been advocated, and the purpose of this paper is to compare four reconstruction techniques. Thereby, we focused on resolution limits, stability, reconstruction speed, and SNR. We generated experimental and simulated data and reconstructed images of the pressure distribution using four different methods: delay-and-sum (DnS), circular backprojection (CBP), generalized 2D Hough transform (HTA), and Fourier transform (FTA). All methods were able to depict the point sources properly. DnS and CBP produce blurred images containing typical superposition artifacts. The HTA provides excellent SNR and allows a good point source separation. The FTA is the fastest and shows the best FWHM. In our study, we found the FTA to show the best overall performance. It allows a very fast and theoretically exact reconstruction. Only a hardware-implemented DnS might be faster and enable real-time imaging. A commercial system may also perform several methods to fully utilize the new contrast mechanism and guarantee optimal resolution and fidelity.
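
    Of the four methods compared, delay-and-sum is the simplest to write down; the Python sketch below is a generic 2D implementation with hypothetical array names and an assumed speed of sound, not the authors' implementation.

```python
import numpy as np

def delay_and_sum(signals, sensor_xy, grid_xy, fs, c=1500.0):
    """Delay-and-sum (DnS) photoacoustic reconstruction in 2D.

    signals   : (n_sensors, n_samples) recorded pressure time series
    sensor_xy : (n_sensors, 2) sensor positions in metres
    grid_xy   : (n_pixels, 2) image pixel positions in metres
    fs        : sampling rate (Hz); c : assumed speed of sound (m/s)
    """
    n_sensors, n_samples = signals.shape
    image = np.zeros(len(grid_xy))
    for k, sensor in enumerate(sensor_xy):
        dist = np.linalg.norm(grid_xy - sensor, axis=1)     # pixel-to-sensor distances
        idx = np.round(dist / c * fs).astype(int)           # time-of-flight sample index
        valid = idx < n_samples
        image[valid] += signals[k, idx[valid]]              # sum the delayed samples
    return image / n_sensors
```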

  1. Accelerated gradient methods for total-variation-based CT image reconstruction

    Energy Technology Data Exchange (ETDEWEB)

    Joergensen, Jakob H.; Hansen, Per Christian [Technical Univ. of Denmark, Lyngby (Denmark). Dept. of Informatics and Mathematical Modeling; Jensen, Tobias L.; Jensen, Soeren H. [Aalborg Univ. (Denmark). Dept. of Electronic Systems; Sidky, Emil Y.; Pan, Xiaochuan [Chicago Univ., Chicago, IL (United States). Dept. of Radiology

    2011-07-01

    Total-variation (TV)-based CT image reconstruction has been shown experimentally to be capable of producing accurate reconstructions from sparse-view data. In particular TV-based reconstruction is well suited for images with piecewise nearly constant regions. Computationally, however, TV-based reconstruction is demanding, especially for 3D imaging, and the reconstruction from clinical data sets is far from real-time. This is undesirable from a clinical perspective, and thus there is an incentive to accelerate the solution of the underlying optimization problem. The TV reconstruction can in principle be found by any optimization method, but in practice the large scale of the systems arising in CT image reconstruction preclude the use of memory-intensive methods such as Newton's method. The simple gradient method has much lower memory requirements, but exhibits prohibitively slow convergence. In the present work we address the question of how to reduce the number of gradient method iterations needed to achieve a high-accuracy TV reconstruction. We consider the use of two accelerated gradient-based methods, GPBB and UPN, to solve the 3D-TV minimization problem in CT image reconstruction. The former incorporates several heuristics from the optimization literature such as Barzilai-Borwein (BB) step size selection and nonmonotone line search. The latter uses a cleverly chosen sequence of auxiliary points to achieve a better convergence rate. The methods are memory efficient and equipped with a stopping criterion to ensure that the TV reconstruction has indeed been found. An implementation of the methods (in C with interface to Matlab) is available for download from http://www2.imm.dtu.dk/~pch/TVReg/. We compare the proposed methods with the standard gradient method, applied to a 3D test problem with synthetic few-view data. We find experimentally that for realistic parameters the proposed methods significantly outperform the standard gradient method. (orig.)
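
    The Barzilai-Borwein (BB) step-size heuristic mentioned above is easy to illustrate. The Python sketch below applies BB-scaled gradient descent to a least-squares problem with a smoothed 1D total-variation penalty; it omits the nonmonotone line search and convergence checks of GPBB and the auxiliary-point scheme of UPN, and all names and parameter values are assumptions.

```python
import numpy as np

def grad_objective(x, A, b, lam, eps=1e-4):
    """Gradient of 0.5*||Ax - b||^2 + lam * sum_i sqrt((x_{i+1}-x_i)^2 + eps),
    i.e. a least-squares data term plus a smoothed 1D total-variation penalty."""
    d = np.diff(x)
    w = d / np.sqrt(d * d + eps)
    tv_grad = np.zeros_like(x)
    tv_grad[:-1] -= w
    tv_grad[1:] += w
    return A.T @ (A @ x - b) + lam * tv_grad

def bb_gradient_descent(A, b, lam=0.1, n_iter=200):
    """Gradient descent with Barzilai-Borwein (BB) step sizes."""
    x = np.zeros(A.shape[1])
    g = grad_objective(x, A, b, lam)
    step = 1.0 / np.linalg.norm(A, 2) ** 2            # conservative first step
    for _ in range(n_iter):
        x_new = x - step * g
        g_new = grad_objective(x_new, A, b, lam)
        s, yv = x_new - x, g_new - g
        step = (s @ s) / max(s @ yv, 1e-12)           # BB1 step size
        x, g = x_new, g_new
    return x
```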

  2. On an inverse source problem for enhanced oil recovery by wave motion maximization in reservoirs

    KAUST Repository

    Karve, Pranav M.

    2014-12-28

    © 2014, Springer International Publishing Switzerland. We discuss an optimization methodology for focusing wave energy to subterranean formations using strong motion actuators placed on the ground surface. The motivation stems from the desire to increase the mobility of otherwise entrapped oil. The goal is to arrive at the spatial and temporal description of surface sources that are capable of maximizing mobility in the target reservoir. The focusing problem is posed as an inverse source problem. The underlying wave propagation problems are abstracted in two spatial dimensions, and the semi-infinite extent of the physical domain is negotiated by a buffer of perfectly-matched-layers (PMLs) placed at the domain’s truncation boundary. We discuss two possible numerical implementations: Their utility for deciding the tempo-spatial characteristics of optimal wave sources is shown via numerical experiments. Overall, the simulations demonstrate the inverse source method’s ability to simultaneously optimize load locations and time signals leading to the maximization of energy delivery to a target formation.

  3. l1- and l2-Norm Joint Regularization Based Sparse Signal Reconstruction Scheme

    Directory of Open Access Journals (Sweden)

    Chanzi Liu

    2016-01-01

    Full Text Available Many problems in signal processing and statistical inference involve finding a sparse solution to an underdetermined linear system of equations. This is also the setting of compressive sensing (CS), which can recover the sparse solution from far fewer measurements than the original signal length. In this paper, we propose an l1- and l2-norm joint regularization based reconstruction framework to approach the original l0-norm based sparseness-inducing constrained sparse signal reconstruction problem. Firstly, it is shown that, by employing the simple conjugate gradient algorithm, the new formulation provides an effective framework to obtain a solution to the original sparse signal reconstruction problem with an l0-norm regularization term. Secondly, an upper bound on the reconstruction error is presented for the proposed sparse signal reconstruction framework, and it is shown that a smaller reconstruction error than that of l1-norm relaxation approaches can be realized by using the proposed scheme in most cases. Finally, simulation results are presented to validate the proposed sparse signal reconstruction approach.
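
    As a rough illustration of joint l1/l2 regularization, the sketch below solves the penalized problem with a plain proximal-gradient (soft-thresholding) iteration on a small synthetic compressive-sensing example. This is not the paper's conjugate-gradient-based framework, and the penalty weights, problem sizes and names are assumptions.

```python
import numpy as np

def l1_l2_reconstruct(Phi, y, lam1=0.05, lam2=0.01, n_iter=500):
    """Proximal-gradient solver for
        min_x 0.5*||Phi x - y||^2 + lam1*||x||_1 + 0.5*lam2*||x||_2^2
    """
    x = np.zeros(Phi.shape[1])
    step = 1.0 / (np.linalg.norm(Phi, 2) ** 2 + lam2)          # Lipschitz-based step
    for _ in range(n_iter):
        grad = Phi.T @ (Phi @ x - y) + lam2 * x                # smooth part
        z = x - step * grad
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam1, 0.0)   # l1 prox (soft threshold)
    return x

# Tiny compressive-sensing example: recover a 5-sparse signal from 40 measurements.
rng = np.random.default_rng(3)
n, m = 100, 40
x_true = np.zeros(n)
x_true[rng.choice(n, 5, replace=False)] = rng.normal(size=5)
Phi = rng.normal(size=(m, n)) / np.sqrt(m)
x_hat = l1_l2_reconstruct(Phi, Phi @ x_true)
print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))   # relative error
```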

  4. Computational modeling for the angular reconstruction of monoenergetic neutron flux in non-multiplying slabs using synthetic diffusion approximation

    International Nuclear Information System (INIS)

    Mansur, Ralph S.; Barros, Ricardo C.

    2011-01-01

    We describe a method to determine the neutron scalar flux in a slab using a monoenergetic diffusion model. To achieve this goal we used three ingredients in the computational code that we developed on the Scilab platform: a spectral nodal method that generates a numerical solution of the one-speed slab-geometry fixed-source diffusion problem with no spatial truncation errors; a spatial reconstruction scheme that yields a detailed profile of the coarse-mesh solution; and an angular reconstruction scheme that yields an approximate neutron angular flux profile at a given location in the slab for a given direction of migration. Numerical results are given to illustrate the efficiency of the code. (author)
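
    For orientation, the underlying fixed-source diffusion problem can be illustrated with the small finite-difference solver below. It is only a plain discretization with an assumed boundary treatment and hypothetical parameter names; the spectral nodal method described in the paper, by contrast, is free of spatial truncation error.

```python
import numpy as np

def slab_diffusion_flux(width, n_cells, D, sigma_a, source):
    """Finite-difference solution of the one-speed slab-geometry fixed-source
    diffusion equation  -D phi'' + Sigma_a phi = S, with the flux forced to
    zero just outside the slab (a crude vacuum-like boundary treatment).
    """
    h = width / n_cells
    main = np.full(n_cells, 2.0 * D / h**2 + sigma_a)      # diagonal of the system
    off = np.full(n_cells - 1, -D / h**2)                  # coupling to neighbours
    A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    return np.linalg.solve(A, np.full(n_cells, source))    # cell-centred scalar flux

phi = slab_diffusion_flux(width=10.0, n_cells=50, D=1.0, sigma_a=0.1, source=1.0)
print(phi.max())
```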

  5. Image reconstruction for a Positron Emission Tomograph optimized for breast cancer imaging

    International Nuclear Information System (INIS)

    Virador, Patrick R.G.

    2000-01-01

    The author performs image reconstruction for a novel Positron Emission Tomography camera that is optimized for breast cancer imaging. This work addresses for the first time, the problem of fully-3D, tomographic reconstruction using a septa-less, stationary, (i.e. no rotation or linear motion), and rectangular camera whose Field of View (FOV) encompasses the entire volume enclosed by detector modules capable of measuring Depth of Interaction (DOI) information. The camera is rectangular in shape in order to accommodate breasts of varying sizes while allowing for soft compression of the breast during the scan. This non-standard geometry of the camera exacerbates two problems: (a) radial elongation due to crystal penetration and (b) reconstructing images from irregularly sampled data. Packing considerations also give rise to regions in projection space that are not sampled which lead to missing information. The author presents new Fourier Methods based image reconstruction algorithms that incorporate DOI information and accommodate the irregular sampling of the camera in a consistent manner by defining lines of responses (LORs) between the measured interaction points instead of rebinning the events into predefined crystal face LORs which is the only other method to handle DOI information proposed thus far. The new procedures maximize the use of the increased sampling provided by the DOI while minimizing interpolation in the data. The new algorithms use fixed-width evenly spaced radial bins in order to take advantage of the speed of the Fast Fourier Transform (FFT), which necessitates the use of irregular angular sampling in order to minimize the number of unnormalizable Zero-Efficiency Bins (ZEBs). In order to address the persisting ZEBs and the issue of missing information originating from packing considerations, the algorithms (a) perform nearest neighbor smoothing in 2D in the radial bins (b) employ a semi-iterative procedure in order to estimate the unsampled data

  6. Image reconstruction for a Positron Emission Tomograph optimized for breast cancer imaging

    Energy Technology Data Exchange (ETDEWEB)

    Virador, Patrick R.G. [Univ. of California, Berkeley, CA (United States)

    2000-04-01

    The author performs image reconstruction for a novel Positron Emission Tomography camera that is optimized for breast cancer imaging. This work addresses for the first time, the problem of fully-3D, tomographic reconstruction using a septa-less, stationary, (i.e. no rotation or linear motion), and rectangular camera whose Field of View (FOV) encompasses the entire volume enclosed by detector modules capable of measuring Depth of Interaction (DOI) information. The camera is rectangular in shape in order to accommodate breasts of varying sizes while allowing for soft compression of the breast during the scan. This non-standard geometry of the camera exacerbates two problems: (a) radial elongation due to crystal penetration and (b) reconstructing images from irregularly sampled data. Packing considerations also give rise to regions in projection space that are not sampled which lead to missing information. The author presents new Fourier Methods based image reconstruction algorithms that incorporate DOI information and accommodate the irregular sampling of the camera in a consistent manner by defining lines of responses (LORs) between the measured interaction points instead of rebinning the events into predefined crystal face LORs which is the only other method to handle DOI information proposed thus far. The new procedures maximize the use of the increased sampling provided by the DOI while minimizing interpolation in the data. The new algorithms use fixed-width evenly spaced radial bins in order to take advantage of the speed of the Fast Fourier Transform (FFT), which necessitates the use of irregular angular sampling in order to minimize the number of unnormalizable Zero-Efficiency Bins (ZEBs). In order to address the persisting ZEBs and the issue of missing information originating from packing considerations, the algorithms (a) perform nearest neighbor smoothing in 2D in the radial bins (b) employ a semi-iterative procedure in order to estimate the unsampled data

  7. EEG source reconstruction reveals frontal-parietal dynamics of spatial conflict processing.

    Science.gov (United States)

    Cohen, Michael X; Ridderinkhof, K Richard

    2013-01-01

    Cognitive control requires the suppression of distracting information in order to focus on task-relevant information. We applied EEG source reconstruction via time-frequency linear constrained minimum variance beamforming to help elucidate the neural mechanisms involved in spatial conflict processing. Human subjects performed a Simon task, in which conflict was induced by incongruence between spatial location and response hand. We found an early (∼200 ms post-stimulus) conflict modulation in stimulus-contralateral parietal gamma (30-50 Hz), followed by a later alpha-band (8-12 Hz) conflict modulation, suggesting an early detection of spatial conflict and inhibition of spatial location processing. Inter-regional connectivity analyses assessed via cross-frequency coupling of theta (4-8 Hz), alpha, and gamma power revealed conflict-induced shifts in cortical network interactions: Congruent trials (relative to incongruent trials) had stronger coupling between frontal theta and stimulus-contrahemifield parietal alpha/gamma power, whereas incongruent trials had increased theta coupling between medial frontal and lateral frontal regions. These findings shed new light into the large-scale network dynamics of spatial conflict processing, and how those networks are shaped by oscillatory interactions.
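
    For readers unfamiliar with the method, the sketch below shows the core of an LCMV beamformer scan: per-source weights built from the regularized sensor covariance, yielding source power estimates. It assumes fixed source orientations and hypothetical array names, and omits the time-frequency decomposition and condition contrasts used in the study.

```python
import numpy as np

def lcmv_source_power(data, leadfields, reg=1e-6):
    """LCMV beamformer scan: estimated power at each candidate source.

    data       : (n_channels, n_samples) band-limited EEG segment
    leadfields : (n_sources, n_channels) forward field of each candidate source
                 (fixed source orientation assumed for brevity)
    """
    C = np.cov(data)                                        # sensor covariance
    C_reg = C + reg * np.trace(C) / C.shape[0] * np.eye(C.shape[0])
    C_inv = np.linalg.inv(C_reg)
    power = np.empty(len(leadfields))
    for i, l in enumerate(leadfields):
        # Weights w = C^-1 l / (l^T C^-1 l); output power w^T C w = 1 / (l^T C^-1 l).
        power[i] = 1.0 / (l @ C_inv @ l)
    return power
```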

  8. EEG source reconstruction reveals frontal-parietal dynamics of spatial conflict processing.

    Directory of Open Access Journals (Sweden)

    Michael X Cohen

    Full Text Available Cognitive control requires the suppression of distracting information in order to focus on task-relevant information. We applied EEG source reconstruction via time-frequency linear constrained minimum variance beamforming to help elucidate the neural mechanisms involved in spatial conflict processing. Human subjects performed a Simon task, in which conflict was induced by incongruence between spatial location and response hand. We found an early (∼200 ms post-stimulus) conflict modulation in stimulus-contralateral parietal gamma (30-50 Hz), followed by a later alpha-band (8-12 Hz) conflict modulation, suggesting an early detection of spatial conflict and inhibition of spatial location processing. Inter-regional connectivity analyses assessed via cross-frequency coupling of theta (4-8 Hz), alpha, and gamma power revealed conflict-induced shifts in cortical network interactions: Congruent trials (relative to incongruent trials) had stronger coupling between frontal theta and stimulus-contrahemifield parietal alpha/gamma power, whereas incongruent trials had increased theta coupling between medial frontal and lateral frontal regions. These findings shed new light into the large-scale network dynamics of spatial conflict processing, and how those networks are shaped by oscillatory interactions.

  9. EEG Source Reconstruction Reveals Frontal-Parietal Dynamics of Spatial Conflict Processing

    Science.gov (United States)

    Cohen, Michael X; Ridderinkhof, K. Richard

    2013-01-01

    Cognitive control requires the suppression of distracting information in order to focus on task-relevant information. We applied EEG source reconstruction via time-frequency linear constrained minimum variance beamforming to help elucidate the neural mechanisms involved in spatial conflict processing. Human subjects performed a Simon task, in which conflict was induced by incongruence between spatial location and response hand. We found an early (∼200 ms post-stimulus) conflict modulation in stimulus-contralateral parietal gamma (30–50 Hz), followed by a later alpha-band (8–12 Hz) conflict modulation, suggesting an early detection of spatial conflict and inhibition of spatial location processing. Inter-regional connectivity analyses assessed via cross-frequency coupling of theta (4–8 Hz), alpha, and gamma power revealed conflict-induced shifts in cortical network interactions: Congruent trials (relative to incongruent trials) had stronger coupling between frontal theta and stimulus-contrahemifield parietal alpha/gamma power, whereas incongruent trials had increased theta coupling between medial frontal and lateral frontal regions. These findings shed new light into the large-scale network dynamics of spatial conflict processing, and how those networks are shaped by oscillatory interactions. PMID:23451201

  10. Mathematical Problems in Synthetic Aperture Radar

    Science.gov (United States)

    Klein, Jens

    2010-10-01

    This thesis is concerned with problems related to Synthetic Aperture Radar (SAR). The thesis is structured as follows: The first chapter explains what SAR is, and the physical and mathematical background is illuminated. The following chapter points out a problem with a divergent integral in a common approach and proposes an improvement. Numerical comparisons are shown that indicate that the improvements allow for a superior image quality. Thereafter the problem of limited data is analyzed. In a realistic SAR-measurement the data gathered from the electromagnetic waves reflected from the surface can only be collected from a limited area. However the reconstruction formula requires data from an infinite distance. The chapter gives an analysis of the artifacts which can obscure the reconstructed images due to this problem. Additionally, some numerical examples are shown that point to the severity of the problem. In chapter 4 the fact that data is available only from a limited area is used to propose a new inversion formula. This inversion formula has the potential to make it easier to suppress artifacts due to limited data and, depending on the application, can be refined to a fast reconstruction formula. In the penultimate chapter a solution to the problem of left-right ambiguity is presented. This problem exists since the invention of SAR and is caused by the geometry of the measurements. This leads to the fact that only symmetric images can be obtained. With the solution from this chapter it is possible to reconstruct not only the even part of the reflectivity function, but also the odd part, thus making it possible to reconstruct asymmetric images. Numerical simulations are shown to demonstrate that this solution is not affected by stability problems as other approaches have been. The final chapter develops some continuative ideas that could be pursued in the future.

  11. The transverse musculocutaneous gracilis flap for breast reconstruction: guidelines for flap and patient selection.

    Science.gov (United States)

    Schoeller, Thomas; Huemer, Georg M; Wechselberger, Gottfried

    2008-07-01

    The transverse musculocutaneous gracilis (TMG) flap has received little attention in the literature as a valuable alternative source of donor tissue in the setting of breast reconstruction. The authors give an in-depth review of their experience with breast reconstruction using the TMG flap. A retrospective review of 111 patients treated with a TMG flap for breast reconstruction in an immediate or a delayed setting between August of 2002 and July of 2007 was undertaken. Of these, 26 patients underwent bilateral reconstruction and 68 underwent unilateral reconstruction, and 17 patients underwent reconstruction unilaterally with a double TMG flap. Patient age ranged between 24 and 65 years (mean, 37 years). Twelve patients had to be taken back to the operating room because of flap-related problems and nine patients underwent successful revision microsurgically, resulting in three complete flap losses in a series of 111 patients with 154 transplanted TMG flaps. Partial flap loss was encountered in two patients, whereas fat tissue necrosis was managed conservatively in six patients. Low donor-site morbidity was an advantage of this flap, with a concealed scar and minimal contour irregularities of the thigh, even in unilateral harvest. Complications included delayed wound healing (n = 10), hematoma (n = 5), and transient sensory deficit over the posterior thigh (n = 49). The TMG flap is more than an alternative to the deep inferior epigastric perforator (DIEP) flap in microsurgical breast reconstruction in selected patients. In certain indications, such as bilateral reconstructions, it possibly surpasses the DIEP flap because of a better concealed donor scar and easier harvest.

  12. A clinical perspective of accelerated statistical reconstruction

    International Nuclear Information System (INIS)

    Hutton, B.F.; Hudson, H.M.; Beekman, F.J.

    1997-01-01

    Although the potential benefits of maximum likelihood reconstruction have been recognised for many years, the technique has only recently found widespread popularity in clinical practice. Factors which have contributed to the wider acceptance include improved models for the emission process, better understanding of the properties of the algorithm and, not least, the practicality of application with the development of acceleration schemes and the improved speed of computers. The objective in this article is to present a framework for applying maximum likelihood reconstruction for a wide range of clinically based problems. The article draws particularly on the experience of the three authors in applying an acceleration scheme involving use of ordered subsets to a range of applications. The potential advantages of statistical reconstruction techniques include: (a) the ability to better model the emission and detection process, in order to make the reconstruction converge to a quantitative image, (b) the inclusion of a statistical noise model which results in better noise characteristics, and (c) the possibility to incorporate prior knowledge about the distribution being imaged. The great flexibility in adapting the reconstruction for a specific model results in these techniques having wide applicability to problems in clinical nuclear medicine. (orig.). With 8 figs., 1 tab
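
    As an illustration of the ordered-subsets acceleration discussed above, the following sketch implements a basic OSEM loop for a Poisson emission model with a dense system matrix. It is a generic formulation, not the authors' clinical implementation; the subset ordering, iteration counts and stabilizing epsilon are illustrative choices.

    ```python
    import numpy as np

    def osem(A, y, n_iters=4, n_subsets=8, x0=None, eps=1e-12):
        """Ordered-subsets EM for emission tomography: y ~ Poisson(A @ x).

        A : (n_bins, n_voxels) system matrix, y : (n_bins,) measured counts.
        """
        n_bins, n_vox = A.shape
        x = np.ones(n_vox) if x0 is None else x0.copy()
        subsets = [np.arange(s, n_bins, n_subsets) for s in range(n_subsets)]
        for _ in range(n_iters):
            for idx in subsets:
                As = A[idx]                        # projection rows of this subset
                sens = As.sum(axis=0) + eps        # subset sensitivity image
                ratio = y[idx] / (As @ x + eps)    # measured / expected counts
                x *= (As.T @ ratio) / sens         # multiplicative EM update
        return x
    ```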

  13. Investigating Efficiency of Time Domain Curve fitters Versus Filtering for Rectification of Displacement Histories Reconstructed from Acceleration Measurements

    DEFF Research Database (Denmark)

    Sichani, Mahdi Teimouri; Brincker, Rune

    2008-01-01

    Computing displacements of a structure from its measured accelerations has been a major concern in some fields of engineering, such as earthquake engineering. In vibration engineering, too, displacements are occasionally preferred to acceleration histories, i.e. in the determination of forces applied...... on a structure. In brief, the major problem that accompanies reconstruction of the true displacement from an acceleration record is the unreal drift observed in the doubly integrated acceleration. The purpose of the present work is to address the source of the problem, introduce its treatments, show how they work, and compare...
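
    A common treatment of the drift mentioned above is to interleave high-pass filtering with the two integrations. The sketch below shows that filtering-based rectification (one of the alternatives the study compares against time-domain curve fitters); the filter order and cut-off frequency are illustrative assumptions.

    ```python
    import numpy as np
    from scipy.signal import butter, filtfilt

    def displacement_from_acceleration(acc, fs, fc=0.1):
        """Double-integrate an acceleration record and suppress the spurious drift
        with a zero-phase high-pass filter (cut-off fc in Hz, sampling rate fs)."""
        b, a = butter(4, fc, btype="highpass", fs=fs)
        dt = 1.0 / fs
        vel = np.cumsum(filtfilt(b, a, acc)) * dt     # acceleration -> velocity
        disp = np.cumsum(filtfilt(b, a, vel)) * dt    # velocity -> displacement
        return filtfilt(b, a, disp)                   # remove residual drift

    # toy check: a 2 Hz sinusoidal motion sampled at 200 Hz
    fs, f = 200.0, 2.0
    t = np.arange(0, 10, 1 / fs)
    acc = -(2 * np.pi * f) ** 2 * np.sin(2 * np.pi * f * t)  # second derivative of sin
    disp = displacement_from_acceleration(acc, fs)
    ```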

  14. Photon migration in non-scattering tissue and the effects on image reconstruction

    Science.gov (United States)

    Dehghani, H.; Delpy, D. T.; Arridge, S. R.

    1999-12-01

    Photon propagation in tissue can be calculated using the relationship described by the transport equation. For scattering tissue this relationship is often simplified and expressed in terms of the diffusion approximation. This approximation, however, is not valid for non-scattering regions, for example cerebrospinal fluid (CSF) below the skull. This study looks at the effects of a thin clear layer in a simple model representing the head and examines its effect on image reconstruction. Specifically, boundary photon intensities (total number of photons exiting at a point on the boundary due to a source input at another point on the boundary) are calculated using the transport equation and compared with data calculated using the diffusion approximation for both non-scattering and scattering regions. The effect of non-scattering regions on the calculated boundary photon intensities is presented together with the advantages and restrictions of the transport code used. Reconstructed images are then presented where the forward problem is solved using the transport equation for a simple two-dimensional system containing a non-scattering ring and the inverse problem is solved using the diffusion approximation to the transport equation.

  15. Photon migration in non-scattering tissue and the effects on image reconstruction

    International Nuclear Information System (INIS)

    Dehghani, H.; Delpy, D.T.; Arridge, S.R.

    1999-01-01

    Photon propagation in tissue can be calculated using the relationship described by the transport equation. For scattering tissue this relationship is often simplified and expressed in terms of the diffusion approximation. This approximation, however, is not valid for non-scattering regions, for example cerebrospinal fluid (CSF) below the skull. This study looks at the effects of a thin clear layer in a simple model representing the head and examines its effect on image reconstruction. Specifically, boundary photon intensities (total number of photons exiting at a point on the boundary due to a source input at another point on the boundary) are calculated using the transport equation and compared with data calculated using the diffusion approximation for both non-scattering and scattering regions. The effect of non-scattering regions on the calculated boundary photon intensities is presented together with the advantages and restrictions of the transport code used. Reconstructed images are then presented where the forward problem is solved using the transport equation for a simple two-dimensional system containing a non-scattering ring and the inverse problem is solved using the diffusion approximation to the transport equation. (author)

  16. Probabilistic methods applied to electric source problems in nuclear safety

    International Nuclear Information System (INIS)

    Carnino, A.; Llory, M.

    1979-01-01

    Nuclear Safety has frequently been asked to quantify safety margins and evaluate the hazard. In order to do so, probabilistic methods have proved to be the most promising. Without completely replacing deterministic safety, they are now commonly used at the reliability or availability stages of systems as well as for determining the likely accidental sequences. In this paper an application linked to the problem of electric sources is described, whilst at the same time indicating the methods used. This is the calculation of the probable loss of all the electric sources of a pressurized water nuclear power station, the evaluation of the reliability of diesels by event trees of failures and the determination of accidental sequences which could be brought about by the 'total electric source loss' initiator and affect the installation or the environment [fr

  17. A full-spectral Bayesian reconstruction approach based on the material decomposition model applied in dual-energy computed tomography

    International Nuclear Information System (INIS)

    Cai, C.; Rodet, T.; Mohammad-Djafari, A.; Legoupil, S.

    2013-01-01

    Purpose: Dual-energy computed tomography (DECT) makes it possible to get two fractions of basis materials without segmentation. One is the soft-tissue equivalent water fraction and the other is the hard-matter equivalent bone fraction. Practical DECT measurements are usually obtained with polychromatic x-ray beams. Existing reconstruction approaches based on linear forward models without counting the beam polychromaticity fail to estimate the correct decomposition fractions and result in beam-hardening artifacts (BHA). The existing BHA correction approaches either need to refer to calibration measurements or suffer from the noise amplification caused by the negative-log preprocessing and the ill-conditioned water and bone separation problem. To overcome these problems, statistical DECT reconstruction approaches based on nonlinear forward models counting the beam polychromaticity show great potential for giving accurate fraction images.Methods: This work proposes a full-spectral Bayesian reconstruction approach which allows the reconstruction of high quality fraction images from ordinary polychromatic measurements. This approach is based on a Gaussian noise model with unknown variance assigned directly to the projections without taking negative-log. Referring to Bayesian inferences, the decomposition fractions and observation variance are estimated by using the joint maximum a posteriori (MAP) estimation method. Subject to an adaptive prior model assigned to the variance, the joint estimation problem is then simplified into a single estimation problem. It transforms the joint MAP estimation problem into a minimization problem with a nonquadratic cost function. To solve it, the use of a monotone conjugate gradient algorithm with suboptimal descent steps is proposed.Results: The performance of the proposed approach is analyzed with both simulated and experimental data. The results show that the proposed Bayesian approach is robust to noise and materials. It is also
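
    For context, the nonlinear, polychromatic forward model that such statistical DECT approaches build on can be sketched as follows: each ray's expected signal is a spectrum-weighted sum of Beer-Lambert attenuations of the two basis materials. The variable names and the dense path-length matrix are illustrative assumptions; the paper's Bayesian MAP estimation itself is not reproduced here.

    ```python
    import numpy as np

    def polychromatic_projection(fw, fb, P, spectrum, mu_water, mu_bone):
        """Expected signal y_i = sum_k s_k * exp(-(mu_w(E_k) Lw_i + mu_b(E_k) Lb_i)).

        fw, fb   : flattened water and bone fraction images, shape (n_voxels,)
        P        : (n_rays, n_voxels) ray path-length matrix
        spectrum : (n_energies,) normalized effective source-detector spectrum s_k
        mu_water, mu_bone : (n_energies,) basis-material attenuation coefficients
        """
        Lw, Lb = P @ fw, P @ fb                                  # material line integrals per ray
        att = np.outer(Lw, mu_water) + np.outer(Lb, mu_bone)    # (n_rays, n_energies)
        return (spectrum * np.exp(-att)).sum(axis=1)            # polychromatic Beer-Lambert sum
    ```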

  18. Reconstruction of phylogenetic trees of prokaryotes using maximal common intervals.

    Science.gov (United States)

    Heydari, Mahdi; Marashi, Sayed-Amir; Tusserkani, Ruzbeh; Sadeghi, Mehdi

    2014-10-01

    One of the fundamental problems in bioinformatics is phylogenetic tree reconstruction, which can be used for classifying living organisms into different taxonomic clades. The classical approach to this problem is based on a marker such as 16S ribosomal RNA. Since evolutionary events like genomic rearrangements are not included in reconstructions of phylogenetic trees based on single genes, much effort has been made to find other characteristics for phylogenetic reconstruction in recent years. With the increasing availability of completely sequenced genomes, gene order can be considered as a new solution for this problem. In the present work, we applied maximal common intervals (MCIs) in two or more genomes to infer their distance and to reconstruct their evolutionary relationship. Additionally, measures based on uncommon segments (UCS's), i.e., those genomic segments which are not detected as part of any of the MCIs, are also used for phylogenetic tree reconstruction. We applied these two types of measures for reconstructing the phylogenetic tree of 63 prokaryotes with known COG (clusters of orthologous groups) families. Similarity between the MCI-based (resp. UCS-based) reconstructed phylogenetic trees and the phylogenetic tree obtained from NCBI taxonomy browser is as high as 93.1% (resp. 94.9%). We show that in the case of this diverse dataset of prokaryotes, tree reconstruction based on MCI and UCS outperforms most of the currently available methods based on gene orders, including breakpoint distance and DCJ. We additionally tested our new measures on a dataset of 13 closely-related bacteria from the genus Prochlorococcus. In this case, distances like rearrangement distance, breakpoint distance and DCJ proved to be useful, while our new measures are still appropriate for phylogenetic reconstruction. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
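
    One of the classical gene-order measures the study compares against, the breakpoint distance, can be sketched in a few lines: it counts adjacencies of one genome that are absent from the other (gene orientation and circularity are ignored in this toy). This is not the paper's MCI/UCS measure; the identifiers below are made up.

    ```python
    def breakpoint_distance(order_a, order_b):
        """Number of gene adjacencies in genome A that are not adjacencies in genome B."""
        def adjacencies(order):
            # unordered pairs of neighbouring genes
            return {frozenset(pair) for pair in zip(order, order[1:])}
        return len(adjacencies(order_a) - adjacencies(order_b))

    # toy gene (COG-family) orders sharing some local structure
    a = ["cog1", "cog2", "cog3", "cog4", "cog5"]
    b = ["cog3", "cog2", "cog1", "cog5", "cog4"]
    print(breakpoint_distance(a, b))   # -> 1
    ```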

  19. A new open-source pin power reconstruction capability in DRAGON5 and DONJON5 neutronic codes

    Energy Technology Data Exchange (ETDEWEB)

    Chambon, R., E-mail: richard-pierre.chambon@polymtl.ca; Hébert, A., E-mail: alain.hebert@polymtl.ca

    2015-08-15

    In order to better optimize the fuel energy efficiency in PWRs, the burnup distribution has to be known as accurately as possible, ideally in each pin. However, this level of detail is lost when core calculations are performed with homogenized cross-sections. The pin power reconstruction (PPR) method can be used to get back those levels of details as accurately as possible in a small additional computing time frame compared to classical core calculations. Such a de-homogenization technique for core calculations using arbitrarily homogenized fuel assembly geometries was presented originally by Fliscounakis et al. In our work, the same methodology was implemented in the open-source neutronic codes DRAGON5 and DONJON5. The new type of Selengut homogenization, called macro-calculation water gap, also proposed by Fliscounakis et al. was implemented. Some important details on the methodology were emphasized in order to get precise results. Validation tests were performed on 12 configurations of 3×3 clusters where simulations in transport theory and in diffusion theory followed by pin-power reconstruction were compared. The results show that the pin power reconstruction and the Selengut macro-calculation water gap methods were correctly implemented. The accuracy of the simulations depends on the SPH method and on the homogenization geometry choices. Results show that the heterogeneous homogenization is highly recommended. SPH techniques were investigated with flux-volume and Selengut normalization, but the former leads to inaccurate results. Even though the new Selengut macro-calculation water gap method gives promising results regarding flux continuity at assembly interfaces, the classical Selengut approach is more reliable in terms of maximum and average errors in the whole range of configurations.

  20. Reconstructing the nature of the first cosmic sources from the anisotropic 21-cm signal.

    Science.gov (United States)

    Fialkov, Anastasia; Barkana, Rennan; Cohen, Aviad

    2015-03-13

    The redshifted 21-cm background is expected to be a powerful probe of the early Universe, carrying both cosmological and astrophysical information from a wide range of redshifts. In particular, the power spectrum of fluctuations in the 21-cm brightness temperature is anisotropic due to the line-of-sight velocity gradient, which in principle allows for a simple extraction of this information in the limit of linear fluctuations. However, recent numerical studies suggest that the 21-cm signal is actually rather complex, and its analysis likely depends on detailed model fitting. We present the first realistic simulation of the anisotropic 21-cm power spectrum over a wide period of early cosmic history. We show that on observable scales, the anisotropy is large and thus measurable at most redshifts, and its form tracks the evolution of 21-cm fluctuations as they are produced early on by Lyman-α radiation from stars, then switch to x-ray radiation from early heating sources, and finally to ionizing radiation from stars. In particular, we predict a redshift window during cosmic heating (at z∼15), when the anisotropy is small, during which the shape of the 21-cm power spectrum on large scales is determined directly by the average radial distribution of the flux from x-ray sources. This makes possible a model-independent reconstruction of the x-ray spectrum of the earliest sources of cosmic heating.

  1. Sparse BLIP: BLind Iterative Parallel imaging reconstruction using compressed sensing.

    Science.gov (United States)

    She, Huajun; Chen, Rong-Rong; Liang, Dong; DiBella, Edward V R; Ying, Leslie

    2014-02-01

    To develop a sensitivity-based parallel imaging reconstruction method to reconstruct iteratively both the coil sensitivities and MR image simultaneously based on their prior information. Parallel magnetic resonance imaging reconstruction problem can be formulated as a multichannel sampling problem where solutions are sought analytically. However, the channel functions given by the coil sensitivities in parallel imaging are not known exactly and the estimation error usually leads to artifacts. In this study, we propose a new reconstruction algorithm, termed Sparse BLind Iterative Parallel, for blind iterative parallel imaging reconstruction using compressed sensing. The proposed algorithm reconstructs both the sensitivity functions and the image simultaneously from undersampled data. It enforces the sparseness constraint in the image as done in compressed sensing, but is different from compressed sensing in that the sensing matrix is unknown and additional constraint is enforced on the sensitivities as well. Both phantom and in vivo imaging experiments were carried out with retrospective undersampling to evaluate the performance of the proposed method. Experiments show improvement in Sparse BLind Iterative Parallel reconstruction when compared with Sparse SENSE, JSENSE, IRGN-TV, and L1-SPIRiT reconstructions with the same number of measurements. The proposed Sparse BLind Iterative Parallel algorithm reduces the reconstruction errors when compared to the state-of-the-art parallel imaging methods. Copyright © 2013 Wiley Periodicals, Inc.
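
    For orientation, the sensitivity-encoded forward model that such reconstructions invert can be sketched as an operator pair: multiply the image by each coil sensitivity, Fourier transform, and keep only the sampled k-space locations. In Sparse BLIP the sensitivities are additionally treated as unknowns and estimated jointly; the sketch below assumes they are given, and the function names are illustrative.

    ```python
    import numpy as np

    def sense_forward(image, sensitivities, mask):
        """Undersampled multi-coil k-space data for a given image.

        image         : (ny, nx) complex image
        sensitivities : (n_coils, ny, nx) complex coil sensitivity maps
        mask          : (ny, nx) binary k-space sampling mask
        """
        coil_images = sensitivities * image[None, ...]          # coil-weighted images
        kspace = np.fft.fft2(coil_images, axes=(-2, -1))        # per-coil k-space
        return mask[None, ...] * kspace                         # keep sampled locations only

    def sense_adjoint(kspace, sensitivities, mask):
        """Adjoint of the forward operator (used inside iterative reconstructions)."""
        coil_images = np.fft.ifft2(mask[None, ...] * kspace, axes=(-2, -1))
        return np.sum(np.conj(sensitivities) * coil_images, axis=0)
    ```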

  2. Class of reconstructed discontinuous Galerkin methods in computational fluid dynamics

    International Nuclear Information System (INIS)

    Luo, Hong; Xia, Yidong; Nourgaliev, Robert

    2011-01-01

    A class of reconstructed discontinuous Galerkin (DG) methods is presented to solve compressible flow problems on arbitrary grids. The idea is to combine the efficiency of the reconstruction methods in finite volume methods and the accuracy of the DG methods to obtain a better numerical algorithm in computational fluid dynamics. The beauty of the resulting reconstructed discontinuous Galerkin (RDG) methods is that they provide a unified formulation for both finite volume and DG methods, and contain both classical finite volume and standard DG methods as two special cases of the RDG methods, and thus allow for a direct efficiency comparison. Both Green-Gauss and least-squares reconstruction methods and a least-squares recovery method are presented to obtain a quadratic polynomial representation of the underlying linear discontinuous Galerkin solution on each cell via a so-called in-cell reconstruction process. The devised in-cell reconstruction is aimed at augmenting the accuracy of the discontinuous Galerkin method by increasing the order of the underlying polynomial solution. These three reconstructed discontinuous Galerkin methods are used to compute a variety of compressible flow problems on arbitrary meshes to assess their accuracy. The numerical experiments demonstrate that all three reconstructed discontinuous Galerkin methods can significantly improve the accuracy of the underlying second-order DG method, although the least-squares reconstructed DG method provides the best performance in terms of accuracy, efficiency, and robustness. (author)
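
    As a concrete example of one ingredient named above, the sketch below evaluates a cell-wise Green-Gauss gradient on an unstructured mesh from face-averaged values. The paper's in-cell reconstruction uses such information to raise a linear DG solution to quadratic order, which this toy does not do; the data layout (owner/neighbour face pairs) is an assumption, and boundary faces are omitted.

    ```python
    import numpy as np

    def green_gauss_gradient(cell_values, face_pairs, face_normals, face_areas, volumes):
        """Cell-wise Green-Gauss gradient: grad(u)_c ~ (1/V_c) * sum_f u_f n_f A_f.

        cell_values  : (n_cells,) cell-averaged solution
        face_pairs   : (n_faces, 2) owner/neighbour cell indices of interior faces
        face_normals : (n_faces, dim) unit normals pointing from owner to neighbour
        face_areas   : (n_faces,) face areas
        volumes      : (n_cells,) cell volumes
        """
        grad = np.zeros((volumes.size, face_normals.shape[1]))
        for (owner, neigh), n, area in zip(face_pairs, face_normals, face_areas):
            u_face = 0.5 * (cell_values[owner] + cell_values[neigh])  # simple face average
            grad[owner] += u_face * n * area
            grad[neigh] -= u_face * n * area     # the normal flips for the neighbour cell
        return grad / volumes[:, None]
    ```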

  3. Sparse reconstruction for quantitative bioluminescence tomography based on the incomplete variables truncated conjugate gradient method.

    Science.gov (United States)

    He, Xiaowei; Liang, Jimin; Wang, Xiaorui; Yu, Jingjing; Qu, Xiaochao; Wang, Xiaodong; Hou, Yanbin; Chen, Duofang; Liu, Fang; Tian, Jie

    2010-11-22

    In this paper, we present an incomplete variables truncated conjugate gradient (IVTCG) method for bioluminescence tomography (BLT). Considering the sparse characteristic of the light source and insufficient surface measurement in the BLT scenarios, we combine a sparseness-inducing (ℓ1 norm) regularization term with a quadratic error term in the IVTCG-based framework for solving the inverse problem. By limiting the number of variables updated at each iteration and combining a variable splitting strategy to find the search direction more efficiently, the method obtains fast and stable source reconstruction, even without a priori information of the permissible source region and multispectral measurements. Numerical experiments on a mouse atlas validate the effectiveness of the method. In vivo mouse experimental results further indicate its potential for a practical BLT system.
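
    The ℓ1-regularized least-squares problem described above can also be attacked with a plain iterative soft-thresholding (ISTA) loop, sketched below for reference. This is not the paper's IVTCG solver with variable splitting; step size and iteration count are illustrative.

    ```python
    import numpy as np

    def ista(A, b, lam, n_iters=200):
        """Iterative soft-thresholding for min_x 0.5*||A x - b||^2 + lam*||x||_1."""
        L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the gradient
        x = np.zeros(A.shape[1])
        for _ in range(n_iters):
            grad = A.T @ (A @ x - b)               # gradient of the quadratic data term
            z = x - grad / L                       # gradient step
            x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
        return x
    ```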

  4. Possibility of using sources of vacuum ultraviolet irradiation to solve problems of space material science

    Science.gov (United States)

    Verkhoutseva, E. T.; Yaremenko, E. I.

    1974-01-01

    An urgent problem in space materials science is simulating the interaction of vacuum ultraviolet (VUV) of solar emission with solids in space conditions, that is, producing a light source with a distribution that approximates the distribution of solar energy. Information is presented on the distribution of the energy flux of VUV of solar radiation. Requirements that must be satisfied by the VUV source used for space materials science are formulated, and a critical evaluation is given of the possibilities of using existing sources for space materials science. From this evaluation it was established that none of the sources of VUV satisfies the specific requirements imposed on the simulator of solar radiation. A solution to the problem was found in the development of a new type of source based on exciting a supersonic gas jet flowing into vacuum with an electron beam. A description of this gas-jet source, along with its spectral and operation characteristics, is presented.

  5. The status quo, problems and improvements pertaining to radiation source management in China

    International Nuclear Information System (INIS)

    Jin Jiaqi

    1998-01-01

    Early in 1930s, radiation sources were used in medicine in China, and since then their application has been widely extended in a variety of fields. This paper presents a brief outline of the status quo, problems on management for radiation sources, and some relevant improvements as recommended by author are also included in it. (author)

  6. Iterative reconstruction methods for Thermo-acoustic Tomography

    International Nuclear Information System (INIS)

    Marinesque, Sebastien

    2012-01-01

    We define, study and implement various iterative reconstruction methods for Thermo-acoustic Tomography (TAT): the Back and Forth Nudging (BFN), easy to implement and to use, a variational technique (VT) and the Back and Forth SEEK (BF-SEEK), more sophisticated, and a coupling method between Kalman filter (KF) and Time Reversal (TR). A unified formulation is explained for the aforementioned sequential techniques that defines a new class of inverse problem methods: the Back and Forth Filters (BFF). In addition to existence and uniqueness (particularly for backward solutions), we study many frameworks that ensure and characterize the convergence of the algorithms. Thus we give a general theoretical framework for which the BFN is a well-posed problem. Then, in application to TAT, existence and uniqueness of its solutions and geometrical convergence of the algorithm are proved, and an explicit convergence rate and a description of its numerical behaviour are given. Next, theoretical and numerical studies of more general and realistic frameworks are carried out, namely different objects, speeds (with or without trapping), various sensor configurations and samplings, attenuated equations or external sources. Then optimal control and best estimate tools are used to characterize the BFN convergence and converging feedbacks for BFF, under observability assumptions. Finally, we compare the most flexible and efficient current techniques (TR and an iterative variant) with our various BFF and the VT in several experiments. Robust, flexible, and available at different levels of complexity, the methods that we propose are thus very interesting reconstruction techniques, particularly in TAT and when observations are degraded. (author) [fr]

  7. Preconditioned alternating projection algorithms for maximum a posteriori ECT reconstruction

    International Nuclear Information System (INIS)

    Krol, Andrzej; Li, Si; Shen, Lixin; Xu, Yuesheng

    2012-01-01

    We propose a preconditioned alternating projection algorithm (PAPA) for solving the maximum a posteriori (MAP) emission computed tomography (ECT) reconstruction problem. Specifically, we formulate the reconstruction problem as a constrained convex optimization problem with the total variation (TV) regularization. We then characterize the solution of the constrained convex optimization problem and show that it satisfies a system of fixed-point equations defined in terms of two proximity operators raised from the convex functions that define the TV-norm and the constraint involved in the problem. The characterization (of the solution) via the proximity operators that define two projection operators naturally leads to an alternating projection algorithm for finding the solution. For efficient numerical computation, we introduce to the alternating projection algorithm a preconditioning matrix (the EM-preconditioner) for the dense system matrix involved in the optimization problem. We prove theoretically convergence of the PAPA. In numerical experiments, performance of our algorithms, with an appropriately selected preconditioning matrix, is compared with performance of the conventional MAP expectation-maximization (MAP-EM) algorithm with TV regularizer (EM-TV) and that of the recently developed nested EM-TV algorithm for ECT reconstruction. Based on the numerical experiments performed in this work, we observe that the alternating projection algorithm with the EM-preconditioner outperforms significantly the EM-TV in all aspects including the convergence speed, the noise in the reconstructed images and the image quality. It also outperforms the nested EM-TV in the convergence speed while providing comparable image quality. (paper)

  8. An Intelligent Actuator Fault Reconstruction Scheme for Robotic Manipulators.

    Science.gov (United States)

    Xiao, Bing; Yin, Shen

    2018-02-01

    This paper investigates the difficult problem of reconstructing actuator faults for robotic manipulators. An intelligent approach with fast reconstruction property is developed. This is achieved by using an observer technique. This scheme is capable of precisely reconstructing the actual actuator fault. It is shown by Lyapunov stability analysis that the reconstruction error can converge to zero after finite time. A perfect reconstruction performance including precise and fast properties can be provided for the actuator fault. The most important feature of the scheme is that it does not depend on the control law, the dynamic model of the actuator, the faults' type, or their time profile. This superior reconstruction performance and capability of the proposed approach are further validated by simulation and experimental results.

  9. A multi-supplier sourcing problem with a preference ordering of suppliers

    NARCIS (Netherlands)

    Honhon, D.B.L.P.; Gaur, V.; Seshadri, S.

    2012-01-01

    We study a sourcing problem faced by a firm that seeks to procure a product or a component from a pool of alternative suppliers. The firm has a preference ordering of the suppliers based on factors such as their past performance, quality, service, geographical location, and financial strength, which

  10. Pan-sharpening via compressed superresolution reconstruction and multidictionary learning

    Science.gov (United States)

    Shi, Cheng; Liu, Fang; Li, Lingling; Jiao, Licheng; Hao, Hongxia; Shang, Ronghua; Li, Yangyang

    2018-01-01

    In recent compressed sensing (CS)-based pan-sharpening algorithms, pan-sharpening performance is affected by two key problems. One is that there are always errors between the high-resolution panchromatic (HRP) image and the linear weighted high-resolution multispectral (HRM) image, resulting in a loss of spatial and spectral information. The other is that the dictionary construction process depends on training samples that are not ground truth. These problems have limited the application of CS-based pan-sharpening algorithms. To solve these two problems, we propose a pan-sharpening algorithm via compressed superresolution reconstruction and multidictionary learning. Through a two-stage implementation, the compressed superresolution reconstruction model effectively reduces the error between the HRP and the linear weighted HRM images. Meanwhile, the multidictionary with ridgelet and curvelet is learned for both stages in the superresolution reconstruction process. Since ridgelet and curvelet can better capture the structure and directional characteristics, a better reconstruction result can be obtained. Experiments are done on QuickBird and IKONOS satellite images. The results indicate that the proposed algorithm is competitive compared with the recent CS-based pan-sharpening methods and other well-known methods.

  11. Blind compressed sensing image reconstruction based on alternating direction method

    Science.gov (United States)

    Liu, Qinan; Guo, Shuxu

    2018-04-01

    In order to solve the problem of how to reconstruct the original image under the condition of an unknown sparse basis, this paper proposes an image reconstruction method based on a blind compressed sensing model. In this model, the image signal is regarded as the product of a sparse coefficient matrix and a dictionary matrix. Based on the existing blind compressed sensing theory, the optimal solution is obtained by an alternating minimization method. The proposed method solves the problem that the sparse basis in compressed sensing is difficult to represent, which suppresses noise and improves the quality of the reconstructed image. This method ensures that the blind compressed sensing theory has a unique solution and can recover the original image signal from a complex environment with stronger self-adaptability. The experimental results show that the image reconstruction algorithm based on blind compressed sensing proposed in this paper can recover high-quality image signals under under-sampling conditions.

  12. MR image reconstruction via guided filter.

    Science.gov (United States)

    Huang, Heyan; Yang, Hang; Wang, Kang

    2018-04-01

    Magnetic resonance imaging (MRI) reconstruction from the smallest possible set of Fourier samples has been a difficult problem in the medical imaging field. In our paper, we present a new guided-filter-based approach for efficient MRI recovery. The guided filter is an edge-preserving smoothing operator and behaves better near edges than the bilateral filter. Our reconstruction method consists of two steps. First, we propose two cost functions which can be computed efficiently and thus obtain two different images. Second, the guided filter is used with these two obtained images for efficient edge-preserving filtering: one image is used as the guidance image and the other as the input to be filtered in the guided filter. In our reconstruction algorithm, we can obtain more details by introducing the guided filter. We compare our reconstruction algorithm with some competitive MRI reconstruction techniques in terms of PSNR and visual quality. Simulation results are given to show the performance of our new method.
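
    The filtering step described above follows the standard local linear model of the guided filter, which can be sketched as below. In the paper's setting, the two images obtained from the proposed cost functions would play the roles of guidance image and filtering input; the window radius and regularization eps used here are illustrative assumptions.

    ```python
    import numpy as np
    from scipy.ndimage import uniform_filter

    def guided_filter(guide, src, radius=4, eps=1e-3):
        """Edge-preserving guided filter (local linear model q = a*I + b per window).

        guide : guidance image I, src : image to be filtered p; both 2-D float arrays.
        """
        box = lambda x: uniform_filter(x, size=2 * radius + 1)   # local box means
        mean_I, mean_p = box(guide), box(src)
        corr_Ip, corr_II = box(guide * src), box(guide * guide)
        cov_Ip = corr_Ip - mean_I * mean_p
        var_I = corr_II - mean_I ** 2
        a = cov_Ip / (var_I + eps)          # per-window linear coefficient
        b = mean_p - a * mean_I
        return box(a) * guide + box(b)      # output q = mean(a)*I + mean(b)
    ```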

  13. SART-Type Half-Threshold Filtering Approach for CT Reconstruction.

    Science.gov (United States)

    Yu, Hengyong; Wang, Ge

    2014-01-01

    The ℓ1 regularization problem has been widely used to solve the sparsity constrained problems. To enhance the sparsity constraint for better imaging performance, a promising direction is to use the ℓp norm (0 < p < 1) and solve the ℓp minimization problem. Very recently, Xu et al. developed an analytic solution for the ℓ1/2 regularization via an iterative thresholding operation, which is also referred to as half-threshold filtering. In this paper, we design a simultaneous algebraic reconstruction technique (SART)-type half-threshold filtering framework to solve the computed tomography (CT) reconstruction problem. In the medical imaging field, the discrete gradient transform (DGT) is widely used to define the sparsity. However, the DGT is noninvertible and it cannot be applied to half-threshold filtering for CT reconstruction. To demonstrate the utility of the proposed SART-type half-threshold filtering framework, an emphasis of this paper is to construct a pseudoinverse transform for the DGT. The proposed algorithms are evaluated with numerical and physical phantom data sets. Our results show that the SART-type half-threshold filtering algorithms have great potential to improve the reconstructed image quality from few and noisy projections. They are complementary to the counterparts of the state-of-the-art soft-threshold filtering and hard-threshold filtering.
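
    A SART-type iteration interleaved with a sparsity-promoting thresholding step can be sketched as follows. Two hedges apply: soft-thresholding applied directly to the image values is used here only as a simple stand-in for the paper's half-threshold (ℓ1/2) operator acting through the DGT pseudoinverse, and the relaxation and threshold parameters are illustrative.

    ```python
    import numpy as np

    def sart_sparse(A, b, lam=0.05, relax=1.0, n_iters=50):
        """SART updates interleaved with a thresholding step that promotes sparsity.

        A : (n_rays, n_voxels) system matrix, b : (n_rays,) measured projections.
        """
        row_sum = A.sum(axis=1) + 1e-12        # per-ray normalization
        col_sum = A.sum(axis=0) + 1e-12        # per-voxel normalization
        x = np.zeros(A.shape[1])
        for _ in range(n_iters):
            residual = (b - A @ x) / row_sum                  # ray-wise normalized residual
            x = x + relax * (A.T @ residual) / col_sum        # SART update
            x = np.sign(x) * np.maximum(np.abs(x) - lam, 0.0) # thresholding step (stand-in)
            x = np.maximum(x, 0.0)                            # non-negativity of attenuation
        return x
    ```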

  14. 3D reconstruction software comparison for short sequences

    Science.gov (United States)

    Strupczewski, Adam; Czupryński, BłaŻej

    2014-11-01

    Large scale multiview reconstruction is recently a very popular area of research. There are many open source tools that can be downloaded and run on a personal computer. However, there are few, if any, comparisons between all the available software in terms of accuracy on small datasets that a single user can create. The typical datasets for testing of the software are archeological sites or cities, comprising thousands of images. This paper presents a comparison of currently available open source multiview reconstruction software for small datasets. It also compares the open source solutions with a simple structure from motion pipeline developed by the authors from scratch with the use of OpenCV and Eigen libraries.

  15. Increasing efficiency of reconstruction and technological development of coking enterprises

    Energy Technology Data Exchange (ETDEWEB)

    Rozenfel'd, M.S.; Martynenko, V.M.; Tytyuk, Yu.A.; Ivanov, V.V.; Svyatogorov, A.A.; Kolomiets, A.F. (NIISP, Voroshilovgrad (USSR))

    1989-07-01

    Discusses problems associated with reconstruction of coking plants in the USSR. Planning coking plant reconstruction is analyzed. Duration of individual stages of plant reconstruction is considered. A method developed by the Giprokoks research institute for calculating reconstruction time considering duration of individual stages of coke oven battery repair is analyzed: construction of storage facilities, transport of materials and equipment, safety requirements, coke oven cooling, dismantling, construction of coke oven walls, installation of machines and equipment. Advantages of using the methods for analysis of coke oven battery reconstruction and optimization of repair time are discussed.

  16. Nodal collocation approximation for the multidimensional PL equations applied to transport source problems

    Energy Technology Data Exchange (ETDEWEB)

    Verdu, G. [Departamento de Ingenieria Quimica Y Nuclear, Universitat Politecnica de Valencia, Cami de Vera, 14, 46022. Valencia (Spain); Capilla, M.; Talavera, C. F.; Ginestar, D. [Dept. of Nuclear Engineering, Departamento de Matematica Aplicada, Universitat Politecnica de Valencia, Cami de Vera, 14, 46022. Valencia (Spain)

    2012-07-01

    PL equations are classical high order approximations to the transport equations which are based on the expansion of the angular dependence of the angular neutron flux and the nuclear cross sections in terms of spherical harmonics. A nodal collocation method is used to discretize the PL equations associated with a neutron source transport problem. The performance of the method is tested solving two 1D problems with analytical solution for the transport equation and a classical 2D problem. (authors)

  17. Nodal collocation approximation for the multidimensional PL equations applied to transport source problems

    International Nuclear Information System (INIS)

    Verdu, G.; Capilla, M.; Talavera, C. F.; Ginestar, D.

    2012-01-01

    PL equations are classical high order approximations to the transport equations which are based on the expansion of the angular dependence of the angular neutron flux and the nuclear cross sections in terms of spherical harmonics. A nodal collocation method is used to discretize the PL equations associated with a neutron source transport problem. The performance of the method is tested solving two 1D problems with analytical solution for the transport equation and a classical 2D problem. (authors)

  18. Ultrafast and scalable cone-beam CT reconstruction using MapReduce in a cloud computing environment.

    Science.gov (United States)

    Meng, Bowen; Pratx, Guillem; Xing, Lei

    2011-12-01

    Four-dimensional CT (4DCT) and cone beam CT (CBCT) are widely used in radiation therapy for accurate tumor target definition and localization. However, high-resolution and dynamic image reconstruction is computationally demanding because of the large amount of data processed. Efficient use of these imaging techniques in the clinic requires high-performance computing. The purpose of this work is to develop a novel ultrafast, scalable and reliable image reconstruction technique for 4D CBCT/CT using a parallel computing framework called MapReduce. We show the utility of MapReduce for solving large-scale medical physics problems in a cloud computing environment. In this work, we accelerated the Feldkamp-Davis-Kress (FDK) algorithm by porting it to Hadoop, an open-source MapReduce implementation. Gated phases from a 4DCT scan were reconstructed independently. Following the MapReduce formalism, Map functions were used to filter and backproject subsets of projections, and the Reduce function to aggregate those partial backprojections into the whole volume. MapReduce automatically parallelized the reconstruction process on a large cluster of computer nodes. As a validation, reconstruction of a digital phantom and an acquired CatPhan 600 phantom was performed on a commercial cloud computing environment using the proposed 4D CBCT/CT reconstruction algorithm. Speedup of reconstruction time is found to be roughly linear with the number of nodes employed. For instance, greater than 10 times speedup was achieved using 200 nodes for all cases, compared to the same code executed on a single machine. Without modifying the code, faster reconstruction is readily achievable by allocating more nodes in the cloud computing environment. Root mean square error between the images obtained using MapReduce and a single-threaded reference implementation was on the order of 10^(-7). Our study also proved that cloud computing with MapReduce is fault tolerant: the reconstruction completed
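
    The map/reduce decomposition described above can be mimicked on a single machine: map tasks backproject disjoint subsets of projections into partial volumes, and the reduce step sums them. In the toy below a simple unfiltered parallel-beam smear stands in for the cone-beam FDK backprojection, and the angles, subset split and image size are illustrative.

    ```python
    import numpy as np
    from functools import reduce
    from scipy.ndimage import rotate

    def backproject_subset(shape, projections):
        """Map step: backproject one subset of (angle, profile) projections."""
        partial = np.zeros(shape)
        for angle, proj in projections:
            smear = np.tile(proj, (shape[0], 1))                 # smear profile across the image
            partial += rotate(smear, angle, reshape=False, order=1)
        return partial

    def reduce_volumes(v1, v2):
        """Reduce step: aggregate partial backprojections by summation."""
        return v1 + v2

    # split projections into subsets ("map"), then fold the partial volumes ("reduce")
    shape = (64, 64)
    subsets = [[(0.0, np.ones(64))], [(90.0, np.ones(64))]]
    volume = reduce(reduce_volumes, (backproject_subset(shape, s) for s in subsets))
    ```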

  19. Comparative interpretations of renormalization inversion technique for reconstructing unknown emissions from measured atmospheric concentrations

    Science.gov (United States)

    Singh, Sarvesh Kumar; Kumar, Pramod; Rani, Raj; Turbelin, Grégory

    2017-04-01

    The study highlights a theoretical comparison and various interpretations of a recent inversion technique, called renormalization, developed for the reconstruction of unknown tracer emissions from their measured concentrations. The comparative interpretations are presented in relation to the other inversion techniques based on principle of regularization, Bayesian, minimum norm, maximum entropy on mean, and model resolution optimization. It is shown that the renormalization technique can be interpreted in a similar manner to other techniques, with a practical choice of a priori information and error statistics, while eliminating the need of additional constraints. The study shows that the proposed weight matrix and weighted Gram matrix offer a suitable deterministic choice to the background error and measurement covariance matrices, respectively, in the absence of statistical knowledge about background and measurement errors. The technique is advantageous since it (i) utilizes weights representing a priori information apparent to the monitoring network, (ii) avoids dependence on background source estimates, (iii) improves on alternative choices for the error statistics, (iv) overcomes the colocalization problem in a natural manner, and (v) provides an optimally resolved source reconstruction. A comparative illustration of source retrieval is made by using the real measurements from a continuous point release conducted in Fusion Field Trials, Dugway Proving Ground, Utah.

  20. Source convergence problems in the application of burnup credit for WWER-440 fuel

    International Nuclear Information System (INIS)

    Hordosy, Gabor

    2003-01-01

    The problems in Monte Carlo criticality calculations caused by the slow convergence of the fission source are examined using an example. A spent fuel storage cask designed for WWER-440 fuel is used as a sample case. The influence of the main parameters of the calculations is investigated, including the initial fission source. A possible strategy is proposed to overcome the difficulties associated with the slow source convergence. The advantage of the proposed strategy is that it can be implemented using the standard MCNP features. (author)

  1. Failed medial patellofemoral ligament reconstruction: Causes and surgical strategies

    OpenAIRE

    Sanchis-Alfonso, Vicente; Montesinos-Berry, Erik; Ramirez-Fuentes, Cristina; Leal Blanquet, Joan; Gelber, Pablo-Eduardo; Monllau García, Juan Carlos

    2017-01-01

    Patellar instability is a common clinical problem encountered by orthopedic surgeons specializing in the knee. For patients with chronic lateral patellar instability, the standard surgical approach is to stabilize the patella through a medial patellofemoral ligament (MPFL) reconstruction. Foreseeably, an increasing number of revision surgeries of the reconstructed MPFL will be seen in upcoming years. In this paper, the causes of failed MPFL reconstruction are analyzed: (1) incorrect surgical ...

  2. Obtaining sparse distributions in 2D inverse problems

    OpenAIRE

    Reci, A; Sederman, Andrew John; Gladden, Lynn Faith

    2017-01-01

    The mathematics of inverse problems has relevance across numerous estimation problems in science and engineering. L1 regularization has attracted recent attention in reconstructing the system properties in the case of sparse inverse problems; i.e., when the true property sought is not adequately described by a continuous distribution, in particular in Compressed Sensing image reconstruction. In this work, we focus on the application of L1 regularization to a class of inverse problems; relaxat...

  3. ℓ0 Gradient Minimization Based Image Reconstruction for Limited-Angle Computed Tomography.

    Directory of Open Access Journals (Sweden)

    Wei Yu

    Full Text Available In medical and industrial applications of computed tomography (CT) imaging, limited by the scanning environment and the risk of excessive X-ray radiation exposure imposed on the patients, reconstructing high quality CT images from limited projection data has become a hot topic. X-ray imaging in limited scanning angular range is an effective imaging modality to reduce the radiation dose to the patients. As the projection data available in this modality are incomplete, limited-angle CT image reconstruction is actually an ill-posed inverse problem. To solve the problem, images reconstructed by the conventional filtered back projection (FBP) algorithm frequently exhibit conspicuous streak artifacts and gradually changed artifacts nearby edges. Image reconstruction based on total variation minimization (TVM) can significantly reduce streak artifacts in few-view CT, but it suffers from the gradually changed artifacts nearby edges in limited-angle CT. To suppress this kind of artifact, we develop an image reconstruction algorithm based on ℓ0 gradient minimization for limited-angle CT in this paper. The ℓ0-norm of the image gradient is taken as the regularization function in the framework of the developed reconstruction model. We transformed the optimization problem into a few optimization sub-problems and then solved these sub-problems in the manner of alternating iteration. Numerical experiments are performed to validate the efficiency and the feasibility of the developed algorithm. From the statistical analysis results of the performance evaluations peak signal-to-noise ratio (PSNR) and normalized root mean square distance (NRMSD), it shows that there are significant statistical differences between different algorithms from different scanning angular ranges (p<0.0001). From the experimental results, it also indicates that the developed algorithm outperforms classical reconstruction algorithms in suppressing the streak artifacts and the gradually changed

  4. Simulation and track reconstruction for beam telescopes

    CERN Document Server

    Maqbool, Salman

    2017-01-01

    Beam telescopes are used for testing new detectors under development. Sensors are placed and a particle beam is passed through them. To test these novel detectors and determine their properties, the particle tracks need to be reconstructed from the known detectors in the telescope. Based on the reconstructed track, its predicted hits on the Device under Test (DUT) are compared with the actual hits on the DUT. Several methods exist for track reconstruction, but most of them do not account for the effects of multiple scattering. General Broken Lines is one such algorithm which incorporates these effects during reconstruction. The aim of this project was to simulate the beam telescope and extend the track reconstruction framework for the FE-I4 telescope, which takes these effects into account. Section 1 introduces the problem, while section 2 focuses on beam telescopes. This is followed by the Allpix2 simulation framework in Section 3. And finally, Section 4 introduces the Proteus track reconstruction framew...
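
    When multiple scattering is neglected, the telescope track model reduces to two independent straight lines, x(z) and y(z), fitted by least squares and extrapolated to the DUT plane, as sketched below. This is precisely the simplification that General Broken Lines improves upon; the plane positions, hit resolutions and DUT position are made-up values.

    ```python
    import numpy as np

    def fit_track(z_planes, x_hits, y_hits):
        """Least-squares straight-line track fit through telescope-plane hits.

        Returns (slope, intercept) pairs for x(z) and y(z); multiple scattering
        is ignored in this simple model.
        """
        px = np.polyfit(z_planes, x_hits, deg=1)
        py = np.polyfit(z_planes, y_hits, deg=1)
        return px, py

    def predict_on_dut(px, py, z_dut):
        """Extrapolate the fitted track to the DUT plane."""
        return np.polyval(px, z_dut), np.polyval(py, z_dut)

    # toy telescope: six planes along z (mm), hits with small measurement scatter
    z = np.array([0.0, 20.0, 40.0, 120.0, 140.0, 160.0])
    x = 0.001 * z + np.random.normal(0, 0.005, z.size)
    y = -0.0005 * z + np.random.normal(0, 0.005, z.size)
    px, py = fit_track(z, x, y)
    print(predict_on_dut(px, py, z_dut=80.0))
    ```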

  5. Simulation and Track Reconstruction for Beam Telescopes

    CERN Document Server

    Maqbool, Salman

    2017-01-01

    Beam telescopes are an important tool to test new detectors under development in a particle beam. To test these novel detectors and determine their properties, the particle tracks need to be reconstructed from the known detectors in the telescope. Based on the reconstructed track, its predicted position on the Device under Test (DUT) are compared with the actual hits on the DUT. Several methods exist for track reconstruction, but most of them do not account for the effects of multiple scattering. General Broken Lines is one such algorithm which incorporates these effects during reconstruction. The aim of this project was to simulate the beam telescope and extend the track reconstruction framework for the FE-I4 telescope, which takes these effects into account. Section 1 introduces the problem, while section 2 focuses on beam telescopes. This is followed by the Allpix2 simulation framework in Section 3. And finally, Section 4 introduces the Proteus track reconstruction framework along with the General Broken ...

  6. The generalized back projection theorem for cone beam reconstruction

    International Nuclear Information System (INIS)

    Peyrin, F.C.

    1985-01-01

    The use of cone beam scanners raises the problem of three-dimensional reconstruction from divergent projections. After a survey of two-dimensional analytical reconstruction methods, we examine their application to the 3D problem. Finally, it is shown that the back projection theorem can be generalized to cone beam projections. This allows us to state a new inversion formula suitable for both the 4π parallel and divergent geometries. It leads to the generalization of the 'rho-filtered back projection' algorithm which is outlined

  7. A Survey of Urban Reconstruction

    KAUST Repository

    Musialski, P.

    2013-05-10

    This paper provides a comprehensive overview of urban reconstruction. While there exists a considerable body of literature, this topic is still under active research. The work reviewed in this survey stems from the following three research communities: computer graphics, computer vision and photogrammetry and remote sensing. Our goal is to provide a survey that will help researchers to better position their own work in the context of existing solutions, and to help newcomers and practitioners in computer graphics to quickly gain an overview of this vast field. Further, we would like to bring the mentioned research communities to even more interdisciplinary work, since the reconstruction problem itself is by far not solved. © 2013 The Eurographics Association and John Wiley & Sons Ltd.

  8. A Survey of Urban Reconstruction

    KAUST Repository

    Musialski, P.; Wonka, Peter; Aliaga, D. G.; Wimmer, M.; van Gool, L.; Purgathofer, W.

    2013-01-01

    This paper provides a comprehensive overview of urban reconstruction. While there exists a considerable body of literature, this topic is still under active research. The work reviewed in this survey stems from the following three research communities: computer graphics, computer vision and photogrammetry and remote sensing. Our goal is to provide a survey that will help researchers to better position their own work in the context of existing solutions, and to help newcomers and practitioners in computer graphics to quickly gain an overview of this vast field. Further, we would like to bring the mentioned research communities to even more interdisciplinary work, since the reconstruction problem itself is by far not solved. © 2013 The Eurographics Association and John Wiley & Sons Ltd.

  9. Fast Tomographic Reconstruction From Limited Data Using Artificial Neural Networks

    NARCIS (Netherlands)

    D.M. Pelt (Daniël); K.J. Batenburg (Joost)

    2013-01-01

    Image reconstruction from a small number of projections is a challenging problem in tomography. Advanced algorithms that incorporate prior knowledge can sometimes produce accurate reconstructions, but they typically require long computation times. Furthermore, the required prior

  10. A Multi-Data Source and Multi-Sensor Approach for the 3D Reconstruction and Web Visualization of a Complex Archaeological Site: The Case Study of “Tolmo De Minateda”

    Directory of Open Access Journals (Sweden)

    Jose Alberto Torres-Martínez

    2016-06-01

    Full Text Available The complexity of archaeological sites hinders the creation of an integral model using current Geomatic techniques (i.e., aerial and close-range photogrammetry and terrestrial laser scanning) individually. A multi-sensor approach is therefore proposed as the optimal solution to provide a 3D reconstruction and visualization of these complex sites. Sensor registration represents a critical milestone when automation is required and when aerial and terrestrial datasets must be integrated. To this end, several problems must be solved: coordinate system definition, geo-referencing, co-registration of point clouds, geometric and radiometric homogeneity, etc. The proposed multi-data source and multi-sensor approach is applied to the case study of the “Tolmo de Minateda” archaeological site. A total extension of 9 ha is reconstructed, with an adapted level of detail, by an ultralight aerial platform (paratrike), an unmanned aerial vehicle, a terrestrial laser scanner and terrestrial photogrammetry. Finally, a mobile device (e.g., tablet or smartphone) has been used to integrate, optimize and visualize all this information, providing added value to archaeologists and heritage managers who want an efficient tool for their work at the site, and even to non-expert users who just want to know more about the archaeological settlement.

  11. Super resolution reconstruction of infrared images based on classified dictionary learning

    Science.gov (United States)

    Liu, Fei; Han, Pingli; Wang, Yi; Li, Xuan; Bai, Lu; Shao, Xiaopeng

    2018-05-01

    Infrared images always suffer from low-resolution problems resulting from the limitations of imaging devices. An economical approach to combat this problem involves reconstructing high-resolution images by reasonable methods without updating devices. Inspired by compressed sensing theory, this study presents and demonstrates a Classified Dictionary Learning method to reconstruct high-resolution infrared images. It classifies features of the samples into several reasonable clusters and trains a dictionary pair for each cluster. The optimal pair of dictionaries is chosen for each image reconstruction and, therefore, more satisfactory results are achieved without an increase in computational complexity and time cost. Experiments and results demonstrate that it is a viable method for infrared image reconstruction, since it improves image resolution and recovers detailed information of targets.
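
    A rough sketch of the patch-wise idea is given below, assuming cluster centroids and a paired low/high-resolution dictionary per cluster have already been trained; the greedy sparse coder, names and parameters are illustrative assumptions, not the authors' implementation.

```python
# Sketch: classify a low-resolution patch, sparse-code it in that cluster's
# low-res dictionary, then synthesize the high-res patch with the paired
# high-res dictionary. D_low[k] and D_high[k] share the same atom indexing.
import numpy as np

def reconstruct_patch(p_low, centroids, D_low, D_high, n_nonzero=5):
    k = int(np.argmin(np.linalg.norm(centroids - p_low, axis=1)))   # classify
    residual, support = p_low.copy(), []
    for _ in range(n_nonzero):                  # greedy (OMP-style) coding
        support.append(int(np.argmax(np.abs(D_low[k].T @ residual))))
        coef, *_ = np.linalg.lstsq(D_low[k][:, support], p_low, rcond=None)
        residual = p_low - D_low[k][:, support] @ coef
    return D_high[k][:, support] @ coef         # high-resolution synthesis
```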

  12. Maxillofacial osseous reconstruction using the angular branch of the thoracodorsal vessels.

    LENUS (Irish Health Repository)

    Dolderer, Jürgen H

    2010-09-01

    Mandibular and maxillary resections can produce complex three-dimensional defects requiring skeletal, soft tissue, and epithelial reconstruction. The subscapular vascular axis offers a source of skin, bone, and muscle on a single pedicle for microvascular flap transfer. We reviewed four cases where the subscapular vascular pedicle was used as a source of tissue for complex facial reconstructions in maxillofacial defects. Reconstruction of these complex defects was performed with a latissimus dorsi muscle or myocutaneous flap in combination with the lateral border of the scapula, harvested on the angular branch of the thoracodorsal vessels. There were three cases of maxillectomy and one case of partial mandibulectomy for malignant tumors. In each case, the angular branch of the thoracodorsal artery supplied 6 to 8 cm of the lateral border of the scapula and a latissimus dorsi myocutaneous flap was used for soft tissue reconstruction. Follow-up ranged from 9 months to 3 years and in all cases there was successful bony union. Shoulder movement was normal. This series encourages the further use of subscapular axis flaps as flexible sources of combined myocutaneous and osseous flaps on a single vascular pedicle in cases of complex maxillofacial reconstruction.

  13. Three-dimensional image reconstruction. I. Determination of pattern orientation

    International Nuclear Information System (INIS)

    Blankenbecler, Richard

    2004-01-01

    The problem of determining the Euler angles of a randomly oriented three-dimensional (3D) object from its 2D Fraunhofer diffraction patterns is discussed. This problem arises in the reconstruction of a positive semidefinite 3D object using oversampling techniques. In such a problem, the data consist of a measured set of magnitudes from 2D tomographic images of the object at several unknown orientations. After the orientation angles are determined, the object itself can then be reconstructed by a variety of methods using oversampling, the magnitude data from the 2D images, physical constraints on the image, and then iteration to determine the phases

  14. Tensor-based dictionary learning for dynamic tomographic reconstruction

    International Nuclear Information System (INIS)

    Tan, Shengqi; Wu, Zhifang; Zhang, Yanbo; Mou, Xuanqin; Wang, Ge; Cao, Guohua; Yu, Hengyong

    2015-01-01

    In dynamic computed tomography (CT) reconstruction, the data acquisition speed limits the spatio-temporal resolution. Recently, compressed sensing theory has been instrumental in improving CT reconstruction from few-view projections. In this paper, we present an adaptive method to train a tensor-based spatio-temporal dictionary for sparse representation of an image sequence during the reconstruction process. The correlations among atoms and across phases are considered to capture the characteristics of an object. The reconstruction problem is solved by the alternating direction method of multipliers. To recover fine or sharp structures such as edges, the nonlocal total variation is incorporated into the algorithmic framework. Preclinical examples including a sheep lung perfusion study and dynamic mouse cardiac imaging demonstrate that the proposed approach outperforms the vectorized dictionary-based CT reconstruction in the case of few-view reconstruction. (paper)

  15. Tensor-based Dictionary Learning for Dynamic Tomographic Reconstruction

    Science.gov (United States)

    Tan, Shengqi; Zhang, Yanbo; Wang, Ge; Mou, Xuanqin; Cao, Guohua; Wu, Zhifang; Yu, Hengyong

    2015-01-01

    In dynamic computed tomography (CT) reconstruction, the data acquisition speed limits the spatio-temporal resolution. Recently, compressed sensing theory has been instrumental in improving CT reconstruction from few-view projections. In this paper, we present an adaptive method to train a tensor-based spatio-temporal dictionary for sparse representation of an image sequence during the reconstruction process. The correlations among atoms and across phases are considered to capture the characteristics of an object. The reconstruction problem is solved by the alternating direction method of multipliers. To recover fine or sharp structures such as edges, the nonlocal total variation is incorporated into the algorithmic framework. Preclinical examples including a sheep lung perfusion study and dynamic mouse cardiac imaging demonstrate that the proposed approach outperforms the vectorized dictionary-based CT reconstruction in the case of few-view reconstruction. PMID:25779991

  16. Iterative and range test methods for an inverse source problem for acoustic waves

    International Nuclear Information System (INIS)

    Alves, Carlos; Kress, Rainer; Serranho, Pedro

    2009-01-01

    We propose two methods for solving an inverse source problem for time-harmonic acoustic waves. Based on the reciprocity gap principle a nonlinear equation is presented for the locations and intensities of the point sources that can be solved via Newton iterations. To provide an initial guess for this iteration we suggest a range test algorithm for approximating the source locations. We give a mathematical foundation for the range test and exhibit its feasibility in connection with the iteration method by some numerical examples

  17. 3D reconstruction, a new challenge in industrial radiography

    International Nuclear Information System (INIS)

    Lavayssiere, B.; Fleuet, E.; Georgel, B.

    1995-01-01

    In an NDT context, industrial radiography enables the detection of defects through their projection on a film. EDF has studied the benefit that may be brought, in terms of localisation and orientation of the defects, by means of 3D reconstruction using a very limited number of radiographs. The reconstruction issue consists of solving an integral equation of the first kind; in a noisy context, the reconstruction belongs to the so-called ill-posed class of problems. Appropriate solutions may only be found with the help of regularization techniques, by the introduction of a priori knowledge concerning the unknown solution and also by the use of a statistical modelization of the physical process which produces radiographs. Another approach simplifies the problem and reconstructs the skeleton of a defect only. All these methods, coming from the applied mathematical sciences, enable a more precise diagnosis in non-destructive testing of thick inhomogeneous material. (authors). 4 refs., 4 figs

  18. New vertex reconstruction algorithms for CMS

    CERN Document Server

    Frühwirth, R; Prokofiev, Kirill; Speer, T.; Vanlaer, P.; Chabanat, E.; Estre, N.

    2003-01-01

    The reconstruction of interaction vertices can be decomposed into a pattern recognition problem (``vertex finding'') and a statistical problem (``vertex fitting''). We briefly review classical methods. We introduce novel approaches and motivate them in the framework of high-luminosity experiments like at the LHC. We then show comparisons with the classical methods in relevant physics channels

  19. Matrix-based image reconstruction methods for tomography

    International Nuclear Information System (INIS)

    Llacer, J.; Meng, J.D.

    1984-10-01

    Matrix methods of image reconstruction have not been used, in general, because of the large size of practical matrices, ill-conditioning upon inversion and the success of Fourier-based techniques. An exception is the work that has been done at the Lawrence Berkeley Laboratory for imaging with accelerated radioactive ions. An extension of that work into more general imaging problems shows that, with a correct formulation of the problem, positron tomography with ring geometries results in well-behaved matrices which can be used for image reconstruction with no distortion of the point response in the field of view and flexibility in the design of the instrument. Maximum Likelihood Estimator methods of reconstruction, which use system matrices tailored to specific instruments and do not need matrix inversion, are shown to result in good preliminary images. A parallel processing computer structure based on multiple inexpensive microprocessors is proposed as a system to implement the matrix-MLE methods. 14 references, 7 figures
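
    For illustration, the multiplicative Maximum Likelihood (MLEM) update referred to above can be written in a few lines once a system matrix A is available; the sketch below uses a dense numpy matrix and illustrative names, and is not the Lawrence Berkeley Laboratory implementation.

```python
# Sketch of the MLEM update for a matrix-based system model (counts y = A @ x).
import numpy as np

def mlem(A, y, n_iter=50):
    x = np.ones(A.shape[1])          # non-negative initial image
    sens = A.sum(axis=0)             # sensitivity image (column sums)
    sens[sens == 0] = 1.0            # guard against division by zero
    for _ in range(n_iter):
        proj = A @ x                 # forward projection
        proj[proj == 0] = 1e-12
        x *= (A.T @ (y / proj)) / sens   # multiplicative EM update
    return x
```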

  20. New and renewable energy sources and the ecological problem. Developments from the Republic of Argentina

    International Nuclear Information System (INIS)

    Moragues, Jaime A.

    1992-01-01

    This paper focuses on developments in renewable energy sources in Argentina. Each of the sources is described in detail, including environmental aspects. The problems with energy demand, mainly in rural areas, are also presented. 9 figs., 3 tabs

  1. An investigation of the closure problem applied to reactor accident source terms

    International Nuclear Information System (INIS)

    Brearley, I.R.; Nixon, W.; Hayns, M.R.

    1987-01-01

    The closure problem, as considered here, focuses attention on the question of when in current research programmes enough has been learned about the source terms for reactor accident releases. Noting that current research is tending to reduce the estimated magnitude of the aerosol component of atmospheric, accidental releases, several possible criteria for closure are suggested. Moreover, using the reactor accident consequence model CRACUK, the effect of gradually reducing the aerosol release fractions of a pressurized water reactor (PWR2) source term (as defined in the WASH-1400 study) is investigated and the implications of applying the suggested criteria to current source term research discussed. (author)

  2. Titanium template for scaphoid reconstruction.

    Science.gov (United States)

    Haefeli, M; Schaefer, D J; Schumacher, R; Müller-Gerbl, M; Honigmann, P

    2015-06-01

    Reconstruction of a non-united scaphoid with a humpback deformity involves resection of the non-union followed by bone grafting and fixation of the fragments. Intraoperative control of the reconstruction is difficult owing to the complex three-dimensional shape of the scaphoid and the other carpal bones overlying the scaphoid on lateral radiographs. We developed a titanium template that fits exactly to the surfaces of the proximal and distal scaphoid poles to define their position relative to each other after resection of the non-union. The templates were designed on three-dimensional computed tomography reconstructions and manufactured using selective laser melting technology. Ten conserved human wrists were used to simulate the reconstruction. The achieved precision measured as the deviation of the surface of the reconstructed scaphoid from its virtual counterpart was good in five cases (maximal difference 1.5 mm), moderate in one case (maximal difference 3 mm) and inadequate in four cases (difference more than 3 mm). The main problems were attributed to the template design and can be avoided by improved pre-operative planning, as shown in a clinical case. © The Author(s) 2014.

  3. Low rank alternating direction method of multipliers reconstruction for MR fingerprinting.

    Science.gov (United States)

    Assländer, Jakob; Cloos, Martijn A; Knoll, Florian; Sodickson, Daniel K; Hennig, Jürgen; Lattanzi, Riccardo

    2018-01-01

    The proposed reconstruction framework addresses the reconstruction accuracy, noise propagation and computation time for magnetic resonance fingerprinting. Based on a singular value decomposition of the signal evolution, magnetic resonance fingerprinting is formulated as a low rank (LR) inverse problem in which one image is reconstructed for each singular value under consideration. This LR approximation of the signal evolution reduces the computational burden by reducing the number of Fourier transformations. Also, the LR approximation improves the conditioning of the problem, which is further improved by extending the LR inverse problem to an augmented Lagrangian that is solved by the alternating direction method of multipliers. The root mean square error and the noise propagation are analyzed in simulations. For verification, in vivo examples are provided. The proposed LR alternating direction method of multipliers approach shows a reduced root mean square error compared to the original fingerprinting reconstruction, to a LR approximation alone and to an alternating direction method of multipliers approach without a LR approximation. Incorporating sensitivity encoding allows for further artifact reduction. The proposed reconstruction provides robust convergence, reduced computational burden and improved image quality compared to other magnetic resonance fingerprinting reconstruction approaches evaluated in this study. Magn Reson Med 79:83-96, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
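
    The core of the low rank idea can be illustrated independently of the full ADMM reconstruction: compress the dictionary of simulated signal evolutions with an SVD and work with one coefficient image per retained singular vector instead of one image per time point. The sketch below uses illustrative names and is not the reconstruction pipeline of the paper.

```python
# Sketch: SVD compression of a fingerprinting dictionary of signal evolutions.
import numpy as np

def compress_dictionary(D, rank):
    """D: (n_timepoints, n_atoms) matrix of simulated signal evolutions."""
    U, s, Vt = np.linalg.svd(D, full_matrices=False)
    Ur = U[:, :rank]                  # temporal subspace basis
    return Ur, Ur.T @ D               # basis and (rank x n_atoms) dictionary

# A measured voxel time series y is projected as Ur.T @ y, and matching or
# iterative reconstruction then proceeds in the low-dimensional subspace.
```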

  4. Maxillary reconstruction: Current concepts and controversies

    Directory of Open Access Journals (Sweden)

    Subramania Iyer

    2014-01-01

    Full Text Available Maxillary reconstruction is still an evolving art when compared to the reconstruction of the mandible. Defects of the maxilla, apart from affecting the functions of speech, swallowing and mastication, also cause cosmetic disfigurement. Rehabilitation of form and function in patients with maxillary defects is achieved either by using an obturator prosthesis or by surgical reconstruction. Literature is abundant with a variety of reconstructive methods. The classification systems are also varied, with no universal acceptance of any one of them. The oncologic safety of these procedures is still debated, and conclusive evidence in this regard has not emerged yet. Management of the orbit is also not yet addressed properly. Tissue engineering, which has been hyped to be one of the possible solutions for this vexing reconstructive problem, has not come out with reliable and reproducible results so far. This review article discusses the rationale and oncological safety of reconstructing the maxillary defects, critically analyzes the classification systems, offers the different reconstructive methods and touches upon the controversies in this subject. The management of the retained and exenterated orbit associated with maxillectomy is reviewed. The surgical morbidity, complications and the recent advances in this field are also looked into. An algorithm, based on our experience, is presented.

  5. Maxillary reconstruction: Current concepts and controversies

    Science.gov (United States)

    Iyer, Subramania; Thankappan, Krishnakumar

    2014-01-01

    Maxillary reconstruction is still an evolving art when compared to the reconstruction of the mandible. Defects of the maxilla, apart from affecting the functions of speech, swallowing and mastication, also cause cosmetic disfigurement. Rehabilitation of form and function in patients with maxillary defects is achieved either by using an obturator prosthesis or by surgical reconstruction. Literature is abundant with a variety of reconstructive methods. The classification systems are also varied, with no universal acceptance of any one of them. The oncologic safety of these procedures is still debated, and conclusive evidence in this regard has not emerged yet. Management of the orbit is also not yet addressed properly. Tissue engineering, which has been hyped to be one of the possible solutions for this vexing reconstructive problem, has not come out with reliable and reproducible results so far. This review article discusses the rationale and oncological safety of reconstructing the maxillary defects, critically analyzes the classification systems, offers the different reconstructive methods and touches upon the controversies in this subject. The management of the retained and exenterated orbit associated with maxillectomy is reviewed. The surgical morbidity, complications and the recent advances in this field are also looked into. An algorithm, based on our experience, is presented. PMID:24987199

  6. Performance evaluation of the Champagne source reconstruction algorithm on simulated and real M/EEG data.

    Science.gov (United States)

    Owen, Julia P; Wipf, David P; Attias, Hagai T; Sekihara, Kensuke; Nagarajan, Srikantan S

    2012-03-01

    In this paper, we present an extensive performance evaluation of a novel source localization algorithm, Champagne. It is derived in an empirical Bayesian framework that yields sparse solutions to the inverse problem. It is robust to correlated sources and learns the statistics of non-stimulus-evoked activity to suppress the effect of noise and interfering brain activity. We tested Champagne on both simulated and real M/EEG data. The source locations used for the simulated data were chosen to test the performance on challenging source configurations. In simulations, we found that Champagne outperforms the benchmark algorithms in terms of both the accuracy of the source localizations and the correct estimation of source time courses. We also demonstrate that Champagne is more robust to correlated brain activity present in real MEG data and is able to resolve many distinct and functionally relevant brain areas with real MEG and EEG data. Copyright © 2011 Elsevier Inc. All rights reserved.

  7. 3D Tomographic Image Reconstruction using CUDA C

    International Nuclear Information System (INIS)

    Dominguez, J. S.; Assis, J. T.; Oliveira, L. F. de

    2011-01-01

    This paper presents the study and implementation of software for the three-dimensional reconstruction of images obtained with a tomographic system, using the capabilities of Graphics Processing Units (GPUs). Reconstruction by the filtered back-projection method was developed using CUDA C, for maximum utilization of the processing capabilities of GPUs in solving highly parallelizable problems with a large computational cost. The potential of GPUs is discussed and their advantages in solving this kind of problem are shown. The runtime results are compared with non-parallelized implementations and are expected to show a great reduction in processing time. (Author)

  8. Beyond Open Source Software: Solving Common Library Problems Using the Open Source Hardware Arduino Platform

    Directory of Open Access Journals (Sweden)

    Jonathan Younker

    2013-06-01

    Full Text Available Using open source hardware platforms like the Arduino, libraries have the ability to quickly and inexpensively prototype custom hardware solutions to common library problems. The authors present the Arduino environment, what it is, what it does, and how it was used at the James A. Gibson Library at Brock University to create a production portable barcode-scanning utility for in-house use statistics collection as well as a prototype for a service desk statistics tabulation program’s hardware interface.

  9. Characterization of dynamic changes of current source localization based on spatiotemporal fMRI constrained EEG source imaging

    Science.gov (United States)

    Nguyen, Thinh; Potter, Thomas; Grossman, Robert; Zhang, Yingchun

    2018-06-01

    Objective. Neuroimaging has been employed as a promising approach to advance our understanding of brain networks in both basic and clinical neuroscience. Electroencephalography (EEG) and functional magnetic resonance imaging (fMRI) represent two neuroimaging modalities with complementary features; EEG has high temporal resolution and low spatial resolution while fMRI has high spatial resolution and low temporal resolution. Multimodal EEG inverse methods have attempted to capitalize on these properties but have been subjected to localization error. The dynamic brain transition network (DBTN) approach, a spatiotemporal fMRI constrained EEG source imaging method, has recently been developed to address these issues by solving the EEG inverse problem in a Bayesian framework, utilizing fMRI priors in a spatial and temporal variant manner. This paper presents a computer simulation study to provide a detailed characterization of the spatial and temporal accuracy of the DBTN method. Approach. Synthetic EEG data were generated in a series of computer simulations, designed to represent realistic and complex brain activity at superficial and deep sources with highly dynamical activity time-courses. The source reconstruction performance of the DBTN method was tested against the fMRI-constrained minimum norm estimates algorithm (fMRIMNE). The performances of the two inverse methods were evaluated both in terms of spatial and temporal accuracy. Main results. In comparison with the commonly used fMRIMNE method, results showed that the DBTN method produces results with increased spatial and temporal accuracy. The DBTN method also demonstrated the capability to reduce crosstalk in the reconstructed cortical time-course(s) induced by neighboring regions, mitigate depth bias and improve overall localization accuracy. Significance. The improved spatiotemporal accuracy of the reconstruction allows for an improved characterization of complex neural activity. This improvement can be

  10. Do our reconstructions of ENSO have too much low-frequency variability?

    Science.gov (United States)

    Loope, G. R.; Overpeck, J. T.

    2017-12-01

    Reconstructing the spectrum of Pacific SST variability has proven to be difficult both because of complications with proxy systems such as tree rings and the relatively small number of records from the tropical Pacific. We show that the small number of long coral δ18O and Sr/Ca records has caused a bias towards having too much low-frequency variability in PCR, CPS, and RegEM reconstructions of Pacific variability. This occurs because the individual coral records used in the reconstructions have redder spectra than the shared signal (e.g. ENSO). This causes some of the unshared, low-frequency signal from local climate, salinity and possibly coral biology to bleed into the reconstruction. With enough chronologies in a reconstruction, this unshared noise cancels out but the problem is exacerbated in our longest reconstructions where fewer records are available. Coral proxies tend to have more low-frequency variability than SST observations so this problem is smaller but can still be seen in pseudoproxy experiments using observations and reanalysis data. The identification of this low-frequency bias in coral reconstructions helps bring the spectra of ENSO reconstructions back into line with both models and observations. Although our analysis is mostly constrained to the 20th century due to lack of sufficient data, we expect that as more long chronologies are developed, the low-frequency signal in ENSO reconstructions will be greatly reduced.

  11. Is mammary reconstruction with the anatomical Becker expander a simple procedure? Complications and hidden problems leading to secondary surgical procedures: a follow-up study.

    Science.gov (United States)

    Farace, Francesco; Faenza, Mario; Bulla, Antonio; Rubino, Corrado; Campus, Gian Vittorio

    2013-06-01

    Debate over the role of Becker expander implants (BEIs) in breast reconstruction is still ongoing. There are no clear indications for BEI use. The main indications for BEI use are one-stage breast reconstruction procedure and congenital breast deformities correction, due to the postoperative ability to vary BEI volume. Recent studies showed that BEIs were removed 5 years after mammary reconstruction in 68% of operated patients. This entails a further surgical procedure. BEIs should not, therefore, be regarded as one-stage prostheses. We performed a case-series study of breast reconstructions with anatomically shaped Becker-35™ implants, in order to highlight complications and to flag unseen problems, which might entail a second surgical procedure. A total of 229 patients, reconstructed from 2005 to 2010, were enrolled in this study. Data relating to implant type, volume, mean operative time and complications were recorded. All the patients underwent the same surgical procedure. The minimum follow-up period was 18 months. During a 5-year follow-up, 99 patients required secondary surgery to correct their complications or sequelae; 46 of them underwent BEI removal within 2 years of implantation, 56 within 3 years, 65 within 4 years and 74 within 5 years. Our findings show that two different sorts of complications can arise with these devices, leading to premature implant removal, one common to any breast implant and one peculiar to BEIs. The Becker implant is a permanent expander. Surgeons must, therefore, be aware that, once positioned, the Becker expander cannot be adjusted at a later date, as in two-stage expander/prosthesis reconstructions for instance. Surgeons must have a clear understanding of possible BEI complications in order to be able to discuss these with their patients. Therefore, only surgeons experienced in breast reconstruction should use BEIs. Copyright © 2013 British Association of Plastic, Reconstructive and Aesthetic Surgeons. Published by

  12. Hybrid spectral CT reconstruction.

    Directory of Open Access Journals (Sweden)

    Darin P Clark

    Full Text Available Current photon counting x-ray detector (PCD) technology faces limitations associated with spectral fidelity and photon starvation. One strategy for addressing these limitations is to supplement PCD data with high-resolution, low-noise data acquired with an energy-integrating detector (EID). In this work, we propose an iterative, hybrid reconstruction technique which combines the spectral properties of PCD data with the resolution and signal-to-noise characteristics of EID data. Our hybrid reconstruction technique is based on an algebraic model of data fidelity which substitutes the EID data into the data fidelity term associated with the PCD reconstruction, resulting in a joint reconstruction problem. Within the split Bregman framework, these data fidelity constraints are minimized subject to additional constraints on spectral rank and on joint intensity-gradient sparsity measured between the reconstructions of the EID and PCD data. Following a derivation of the proposed technique, we apply it to the reconstruction of a digital phantom which contains realistic concentrations of iodine, barium, and calcium encountered in small-animal micro-CT. The results of this experiment suggest reliable separation and detection of iodine at concentrations ≥ 5 mg/ml and barium at concentrations ≥ 10 mg/ml in 2-mm features for EID and PCD data reconstructed with inherent spatial resolutions of 176 μm and 254 μm, respectively (point spread function, FWHM). Furthermore, hybrid reconstruction is demonstrated to enhance spatial resolution within material decomposition results and to improve low-contrast detectability by as much as 2.6 times relative to reconstruction with PCD data only. The parameters of the simulation experiment are based on an in vivo micro-CT experiment conducted in a mouse model of soft-tissue sarcoma. Material decomposition results produced from this in vivo data demonstrate the feasibility of distinguishing two K-edge contrast agents with

  13. Hybrid spectral CT reconstruction

    Science.gov (United States)

    Clark, Darin P.

    2017-01-01

    Current photon counting x-ray detector (PCD) technology faces limitations associated with spectral fidelity and photon starvation. One strategy for addressing these limitations is to supplement PCD data with high-resolution, low-noise data acquired with an energy-integrating detector (EID). In this work, we propose an iterative, hybrid reconstruction technique which combines the spectral properties of PCD data with the resolution and signal-to-noise characteristics of EID data. Our hybrid reconstruction technique is based on an algebraic model of data fidelity which substitutes the EID data into the data fidelity term associated with the PCD reconstruction, resulting in a joint reconstruction problem. Within the split Bregman framework, these data fidelity constraints are minimized subject to additional constraints on spectral rank and on joint intensity-gradient sparsity measured between the reconstructions of the EID and PCD data. Following a derivation of the proposed technique, we apply it to the reconstruction of a digital phantom which contains realistic concentrations of iodine, barium, and calcium encountered in small-animal micro-CT. The results of this experiment suggest reliable separation and detection of iodine at concentrations ≥ 5 mg/ml and barium at concentrations ≥ 10 mg/ml in 2-mm features for EID and PCD data reconstructed with inherent spatial resolutions of 176 μm and 254 μm, respectively (point spread function, FWHM). Furthermore, hybrid reconstruction is demonstrated to enhance spatial resolution within material decomposition results and to improve low-contrast detectability by as much as 2.6 times relative to reconstruction with PCD data only. The parameters of the simulation experiment are based on an in vivo micro-CT experiment conducted in a mouse model of soft-tissue sarcoma. Material decomposition results produced from this in vivo data demonstrate the feasibility of distinguishing two K-edge contrast agents with a spectral

  14. Tomographic reconstruction of the time-averaged density distribution in two-phase flow

    International Nuclear Information System (INIS)

    Fincke, J.R.

    1982-01-01

    The technique of reconstructive tomography has been applied to the measurement of time-averaged density and density distribution in a two-phase flow field. The technique of reconstructive tomography provides a model-independent method of obtaining flow-field density information. A tomographic densitometer system for the measurement of two-phase flow has two unique problems: a limited number of data values and a correspondingly coarse reconstruction grid. These problems were studied both experimentally, through the use of prototype hardware on a 3-in. pipe, and analytically, through computer generation of simulated data. The prototype data were taken on phantoms constructed entirely of Plexiglas and of Plexiglas laminated with wood and polyurethane foam. Reconstructions obtained from prototype data are compared with reconstructions from the simulated data. Also presented are some representative results in a horizontal air/water flow

  15. Distributed MRI reconstruction using Gadgetron-based cloud computing.

    Science.gov (United States)

    Xue, Hui; Inati, Souheil; Sørensen, Thomas Sangild; Kellman, Peter; Hansen, Michael S

    2015-03-01

    To expand the open source Gadgetron reconstruction framework to support distributed computing and to demonstrate that a multinode version of the Gadgetron can be used to provide nonlinear reconstruction with clinically acceptable latency. The Gadgetron framework was extended with new software components that enable an arbitrary number of Gadgetron instances to collaborate on a reconstruction task. This cloud-enabled version of the Gadgetron was deployed on three different distributed computing platforms ranging from a heterogeneous collection of commodity computers to the commercial Amazon Elastic Compute Cloud. The Gadgetron cloud was used to provide nonlinear, compressed sensing reconstruction on a clinical scanner with low reconstruction latency (e.g., for cardiac and neuroimaging applications). The proposed setup was able to handle acquisition and l1-SPIRiT reconstruction of nine high temporal resolution real-time, cardiac short axis cine acquisitions, covering the ventricles for functional evaluation, in under 1 min. A three-dimensional high-resolution brain acquisition with 1 mm³ isotropic pixel size was acquired and reconstructed with nonlinear reconstruction in less than 5 min. A distributed computing enabled Gadgetron provides a scalable way to improve reconstruction performance using commodity cluster computing. Nonlinear, compressed sensing reconstruction can be deployed clinically with low image reconstruction latency. © 2014 Wiley Periodicals, Inc.

  16. A genetic approach to shape reconstruction in limited data tomography

    International Nuclear Information System (INIS)

    Turcanu, C.; Craciunescu, T.

    2001-01-01

    The paper proposes a new method for shape reconstruction in computerized tomography. Unlike nuclear medicine applications, in physical science problems we are often confronted with limited data sets: constraints on the number of projections or limited view angles. The problem of image reconstruction from projections may be considered as a problem of finding an image (solution) having projections that match the experimental ones. In our approach, we choose a statistical correlation coefficient to evaluate the fitness of any potential solution. The optimization process is carried out by a genetic algorithm. The algorithm has some features common to all genetic algorithms but also some problem-oriented characteristics. One of them is that a chromosome, representing a potential solution, is not linear but coded as a matrix of pixels corresponding to a two-dimensional image. This kind of internal representation reflects the genuine structure of the solution, so that slight differences between two points in the original problem space give rise to similar differences once they are coded. Another particular feature is a newly built crossover operator: the grid-based crossover, suitable for high-dimension two-dimensional chromosomes. Except for the population size and the dimension of the cutting grid for the grid-based crossover, all the other parameters of the algorithm are independent of the geometry of the tomographic reconstruction. The performance of the method is evaluated on a phantom typical of an application with limited data sets: the determination of neutron energy spectra with time resolution in the case of short-pulsed neutron emission. A genetic reconstruction is presented. Both qualitative judgement and quantitative assessment, based on some figures of merit, point out that the proposed method ensures an improved reconstruction of shapes, sizes and resolution in the image, even in the presence of noise. (authors)
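
    The flavour of such a genetic reconstruction loop is sketched below, with image-shaped chromosomes, a correlation-coefficient fitness and a grid-based crossover; the forward projector `project`, the mutation scheme and all parameters are illustrative assumptions rather than the authors' algorithm.

```python
# Sketch of a genetic reconstruction loop with 2D (image) chromosomes.
import numpy as np

def fitness(img, measured, project):
    # Statistical correlation between simulated and measured projections
    return np.corrcoef(project(img).ravel(), measured.ravel())[0, 1]

def grid_crossover(a, b, cell=8):
    # Swap randomly chosen grid cells between two parent images
    child = a.copy()
    for i in range(0, a.shape[0], cell):
        for j in range(0, a.shape[1], cell):
            if np.random.rand() < 0.5:
                child[i:i + cell, j:j + cell] = b[i:i + cell, j:j + cell]
    return child

def genetic_reconstruction(measured, project, shape, pop=40, gens=200):
    population = [np.random.rand(*shape) for _ in range(pop)]
    for _ in range(gens):
        scores = [fitness(p, measured, project) for p in population]
        order = np.argsort(scores)[::-1]
        elite = [population[k] for k in order[:pop // 2]]
        children = []
        while len(children) < pop - len(elite):
            i, j = np.random.choice(len(elite), 2, replace=False)
            child = grid_crossover(elite[i], elite[j])
            # small additive mutation, clipped to keep densities non-negative
            child = np.clip(child + 0.02 * np.random.randn(*shape), 0.0, None)
            children.append(child)
        population = elite + children
    return max(population, key=lambda p: fitness(p, measured, project))
```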

  17. A GREIT-type linear reconstruction algorithm for EIT using eigenimages

    International Nuclear Information System (INIS)

    Antink, Christoph Hoog; Pikkemaat, Robert; Leonhardt, Steffen

    2013-01-01

    Reconstruction in electrical impedance tomography (EIT) is a nonlinear, ill-posed inverse problem. Based on point-shaped training and evaluation data, the 'Graz consensus Reconstruction algorithm for EIT' (GREIT) constitutes a universal, homogeneous method. While this is a very reasonable approach to the general problem, we ask whether an optimized reconstruction method can be found for a specific application of EIT, i.e. thoracic imaging. Instead of point-shaped training data we propose to use spatially extended training data consisting of eigenimages. To evaluate the quality of reconstruction of the proposed approach, figures of merit (FOMs) derived from the ones used in GREIT are developed. For the application of thoracic imaging, lung shapes were segmented from a publicly available CT database (www.dir-lab.com) and used to calculate the novel FOMs. With those, the general feasibility of using eigenimages is demonstrated and compared to the standard approach. In addition, it is shown that by using different sets of training data, the creation of an individually optimized linear method of reconstruction is possible.
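
    The training step behind such a linear method can be summarized as a regularized least-squares fit of a reconstruction matrix to training pairs; in the sketch below the columns of V are boundary-voltage difference data and the columns of X are the corresponding desired images (point targets in standard GREIT, eigenimage-based targets in the approach above). The names and the Tikhonov-style regularization are illustrative.

```python
# Sketch: train a GREIT-style linear reconstruction matrix R from training data.
import numpy as np

def train_linear_reconstructor(V, X, reg=1e-3):
    """Return R minimizing ||R V - X||_F^2 + reg * ||R||_F^2."""
    n_meas = V.shape[0]
    G = V @ V.T + reg * np.eye(n_meas)
    return X @ V.T @ np.linalg.inv(G)

# At run time each frame is reconstructed by a single product: img = R @ v.
```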

  18. Prepectoral Implant-Based Breast Reconstruction

    Directory of Open Access Journals (Sweden)

    Lyndsey Highton, BMBCh, MA, FRCS(Plast

    2017-09-01

    Conclusion: Prepectoral implant placement with ADM cover is emerging as an alternative approach for IBR. This method facilitates breast reconstruction with a good cosmetic outcome for patients who want a quick recovery without potential compromise of pectoral muscle function and associated problems.

  19. Inverse Monte Carlo: a unified reconstruction algorithm for SPECT

    International Nuclear Information System (INIS)

    Floyd, C.E.; Coleman, R.E.; Jaszczak, R.J.

    1985-01-01

    Inverse Monte Carlo (IMOC) is presented as a unified reconstruction algorithm for Emission Computed Tomography (ECT) providing simultaneous compensation for scatter, attenuation, and the variation of collimator resolution with depth. The technique of inverse Monte Carlo is used to find an inverse solution to the photon transport equation (an integral equation for photon flux from a specified source) for a parameterized source and specific boundary conditions. The system of linear equations so formed is solved to yield the source activity distribution for a set of acquired projections. For the studies presented here, the equations are solved using the EM (Maximum Likelihood) algorithm although other solution algorithms, such as Least Squares, could be employed. While the present results specifically consider the reconstruction of camera-based Single Photon Emission Computed Tomographic (SPECT) images, the technique is equally valid for Positron Emission Tomography (PET) if a Monte Carlo model of such a system is used. As a preliminary evaluation, experimentally acquired SPECT phantom studies for imaging Tc-99m (140 keV) are presented which demonstrate the quantitative compensation for scatter and attenuation for a two dimensional (single slice) reconstruction. The algorithm may be expanded in a straightforward manner to full three dimensional reconstruction including compensation for out of plane scatter

  20. Solution to the monoenergetic time-dependent neutron transport equation with a time-varying source

    International Nuclear Information System (INIS)

    Ganapol, B.D.

    1986-01-01

    Even though fundamental time-dependent neutron transport problems have existed since the inception of neutron transport theory, it has only been recently that a reliable numerical solution to one of the basic problems has been obtained. Experience in generating numerical solutions to time-dependent transport equations has indicated that the multiple collision formulation is the most versatile numerical technique for model problems. The formulation coupled with a moment reconstruction of each collided flux component has led to benchmark-quality (four- to five-digit accuracy) numerical evaluation of the neutron flux in plane infinite geometry for any degree of scattering anisotropy and for both pulsed isotropic and beam sources. As will be shown in this presentation, this solution can serve as a Green's function, thus extending the previous results to more complicated source situations. Here we will be concerned with a time-varying source at the center of an infinite medium. If accurate, such solutions have both pedagogical and practical uses as benchmarks against which other more approximate solutions designed for a wider class of problems can be compared
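
    The Green's function use indicated above amounts to a convolution in time: once the response G(t) to a pulsed source is known, the flux for a time-varying source S(t) follows from phi(t) = ∫ G(t - t') S(t') dt'. A discretized sketch with illustrative source and response shapes:

```python
# Sketch: flux for a time-varying source as a convolution with the pulsed-
# source (Green's function) response, evaluated on a uniform time grid.
import numpy as np

def flux_time_varying(G, S, dt):
    return np.convolve(S, G)[: len(G)] * dt     # causal part of the convolution

t = np.linspace(0.0, 10.0, 1001)
dt = t[1] - t[0]
G = np.exp(-t)                      # pulsed-source response (illustrative)
S = 1.0 - np.exp(-2.0 * t)          # time-varying source strength (illustrative)
phi = flux_time_varying(G, S, dt)
```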

  1. Development of Image Reconstruction Algorithms in electrical Capacitance Tomography

    International Nuclear Information System (INIS)

    Fernandez Marron, J. L.; Alberdi Primicia, J.; Barcala Riveira, J. M.

    2007-01-01

    Electrical Capacitance Tomography (ECT) has not yet developed sufficiently to be used at the industrial level. This is due, first, to difficulties in measuring very small capacitances (in the range of femtofarads) and, second, to the problem of on-line image reconstruction. The latter problem is also due to the small number of electrodes (maximum 16), which causes the usual reconstruction algorithms to produce many errors. This work describes a new, purely geometrical method that could be used for this purpose. (Author) 4 refs

  2. Fully 3D GPU PET reconstruction

    Energy Technology Data Exchange (ETDEWEB)

    Herraiz, J.L., E-mail: joaquin@nuclear.fis.ucm.es [Grupo de Fisica Nuclear, Departmento Fisica Atomica, Molecular y Nuclear, Universidad Complutense de Madrid (Spain); Espana, S. [Department of Radiation Oncology, Massachusetts General Hospital and Harvard Medical School, Boston, MA (United States); Cal-Gonzalez, J. [Grupo de Fisica Nuclear, Departmento Fisica Atomica, Molecular y Nuclear, Universidad Complutense de Madrid (Spain); Vaquero, J.J. [Departmento de Bioingenieria e Ingenieria Espacial, Universidad Carlos III, Madrid (Spain); Desco, M. [Departmento de Bioingenieria e Ingenieria Espacial, Universidad Carlos III, Madrid (Spain); Unidad de Medicina y Cirugia Experimental, Hospital General Universitario Gregorio Maranon, Madrid (Spain); Udias, J.M. [Grupo de Fisica Nuclear, Departmento Fisica Atomica, Molecular y Nuclear, Universidad Complutense de Madrid (Spain)

    2011-08-21

    Fully 3D iterative tomographic image reconstruction is computationally very demanding. Graphics Processing Units (GPUs) have been proposed for many years as potential accelerators for complex scientific problems, but it was not until the recent advances in the programmability of GPUs that the best available reconstruction codes started to be implemented to run on GPUs. This work presents a GPU-based fully 3D PET iterative reconstruction software. This new code may reconstruct sinogram data from several commercially available PET scanners. The most important and time-consuming parts of the code, the forward and backward projection operations, are based on an accurate model of the scanner obtained with the Monte Carlo code PeneloPET, and they have been massively parallelized on the GPU. For the PET scanners considered, the GPU-based code is more than 70 times faster than a similar code running on a single core of a fast CPU, obtaining in both cases the same images. The code has been designed to be easily adapted to reconstruct sinograms from any other PET scanner, including scanner prototypes.

  3. Fully 3D GPU PET reconstruction

    International Nuclear Information System (INIS)

    Herraiz, J.L.; Espana, S.; Cal-Gonzalez, J.; Vaquero, J.J.; Desco, M.; Udias, J.M.

    2011-01-01

    Fully 3D iterative tomographic image reconstruction is computationally very demanding. Graphics Processing Units (GPUs) have been proposed for many years as potential accelerators for complex scientific problems, but it was not until the recent advances in the programmability of GPUs that the best available reconstruction codes started to be implemented to run on GPUs. This work presents a GPU-based fully 3D PET iterative reconstruction software. This new code may reconstruct sinogram data from several commercially available PET scanners. The most important and time-consuming parts of the code, the forward and backward projection operations, are based on an accurate model of the scanner obtained with the Monte Carlo code PeneloPET, and they have been massively parallelized on the GPU. For the PET scanners considered, the GPU-based code is more than 70 times faster than a similar code running on a single core of a fast CPU, obtaining in both cases the same images. The code has been designed to be easily adapted to reconstruct sinograms from any other PET scanner, including scanner prototypes.

  4. A hybrid algorithm for stochastic single-source capacitated facility location problem with service level requirements

    Directory of Open Access Journals (Sweden)

    Hosseinali Salemi

    2016-04-01

    Full Text Available Facility location models are found in many diverse areas such as communication networks, transportation, and distribution systems planning. They play a significant role in supply chain and operations management and are one of the main well-known topics on the strategic agenda of contemporary manufacturing and service companies, with long-lasting effects. We define a new approach for solving the stochastic single source capacitated facility location problem (SSSCFLP). Customers with stochastic demand are assigned to a set of capacitated facilities that are selected to serve them. It is demonstrated that the problem can be transformed into the deterministic Single Source Capacitated Facility Location Problem (SSCFLP) for a Poisson demand distribution. A hybrid algorithm which combines a Lagrangian heuristic with an adjusted mixture of ant colony and genetic optimization is proposed to find lower and upper bounds for this problem. Computational results on various instances with distinct properties indicate that the proposed solution approach is efficient.

  5. Reconstruction Methods for Inverse Problems with Partial Data

    DEFF Research Database (Denmark)

    Hoffmann, Kristoffer

    This thesis presents a theoretical and numerical analysis of a general mathematical formulation of hybrid inverse problems in impedance tomography. This includes problems from several existing hybrid imaging modalities such as Current Density Impedance Imaging, Magnetic Resonance Electrical Impedance Tomography, and Ultrasound Modulated Electrical Impedance Tomography. After giving an introduction to hybrid inverse problems in impedance tomography and the mathematical tools that facilitate the related analysis, we explain in detail the stability properties associated with the classification of a linearised hybrid inverse problem. This is done using pseudo-differential calculus and the theory of overdetermined boundary value problems. Using microlocal analysis we then present novel results on the propagation of singularities, which give a precise description of the distinct features of solutions...

  6. Self-prior strategy for organ reconstruction in fluorescence molecular tomography.

    Science.gov (United States)

    Zhou, Yuan; Chen, Maomao; Su, Han; Luo, Jianwen

    2017-10-01

    The purpose of this study is to propose a strategy for organ reconstruction in fluorescence molecular tomography (FMT) without prior information from other imaging modalities, and to overcome the high cost and ionizing radiation caused by the traditional structural prior strategy. The proposed strategy is designed as an iterative architecture to solve the inverse problem of FMT. In each iteration, a short time Fourier transform (STFT) based algorithm is used to extract the self-prior information in the space-frequency energy spectrum, with the assumption that regions with higher fluorescence concentration have larger energy intensity; the cost function of the inverse problem is then modified by the self-prior information, and lastly an iterative Laplacian regularization algorithm is applied to solve the updated inverse problem and obtain the reconstruction results. Simulations and in vivo experiments on liver reconstruction are carried out to test the performance of the self-prior strategy on organ reconstruction. The organ reconstruction results obtained by the proposed self-prior strategy are closer to the ground truth than those obtained by the iterative Tikhonov regularization (ITKR) method (traditional non-prior strategy). Significant improvements are shown in the evaluation indexes of relative locational error (RLE), relative error (RE) and contrast-to-noise ratio (CNR). The self-prior strategy improves the organ reconstruction results compared with the non-prior strategy and also overcomes the shortcomings of the traditional structural prior strategy. Various applications such as metabolic imaging and pharmacokinetic study can be aided by this strategy.

  7. L1/2 regularization based numerical method for effective reconstruction of bioluminescence tomography

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Xueli, E-mail: xlchen@xidian.edu.cn, E-mail: jimleung@mail.xidian.edu.cn; Yang, Defu; Zhang, Qitan; Liang, Jimin, E-mail: xlchen@xidian.edu.cn, E-mail: jimleung@mail.xidian.edu.cn [School of Life Science and Technology, Xidian University, Xi' an 710071 (China); Engineering Research Center of Molecular and Neuro Imaging, Ministry of Education (China)

    2014-05-14

    Even though bioluminescence tomography (BLT) exhibits significant potential and wide applications in macroscopic imaging of small animals in vivo, the inverse reconstruction is still a tough problem that has plagued researchers in the related area. The ill-posedness of the inverse reconstruction arises from insufficient measurements and modeling errors, so that the inverse reconstruction cannot be solved directly. In this study, an l1/2 regularization based numerical method was developed for effective reconstruction of BLT. In the method, the inverse reconstruction of BLT was constrained into an l1/2 regularization problem, and then the weighted interior-point algorithm (WIPA) was applied to solve the problem by transforming it into the solution of a series of l1 regularized subproblems. The feasibility and effectiveness of the proposed method were demonstrated with numerical simulations on a digital mouse. Stability verification experiments further illustrated the robustness of the proposed method for different levels of Gaussian noise.
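
    The reduction of an l1/2 penalty to a sequence of weighted l1 problems can be sketched with a simple reweighting loop; here each weighted l1 subproblem is handled with iterative soft-thresholding rather than the weighted interior-point algorithm of the paper, and A, y and all parameters are illustrative.

```python
# Sketch: l_{1/2} regularization via iteratively reweighted l_1 subproblems.
import numpy as np

def soft_threshold(v, tau):
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def l_half_reconstruction(A, y, lam=0.01, outer=10, inner=200, eps=1e-8):
    x = np.zeros(A.shape[1])
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    for _ in range(outer):
        # Larger entries of x are penalized less in the next l_1 subproblem
        w = 0.5 / np.sqrt(np.abs(x) + eps)
        for _ in range(inner):                # ISTA on the weighted l_1 problem
            grad = A.T @ (A @ x - y)
            x = soft_threshold(x - grad / L, lam * w / L)
    return x
```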

  8. Parallel CT image reconstruction based on GPUs

    International Nuclear Information System (INIS)

    Flores, Liubov A.; Vidal, Vicent; Mayo, Patricia; Rodenas, Francisco; Verdú, Gumersindo

    2014-01-01

    In X-ray computed tomography (CT), iterative methods are more suitable for reconstructing images with high contrast and precision under noisy conditions from a small number of projections. However, in practice, these methods are not widely used due to the high computational cost of their implementation. Nowadays technology provides the possibility to reduce this drawback effectively. The goal of this work is to develop a fast GPU-based algorithm to reconstruct high quality images from undersampled and noisy projection data. - Highlights: • We developed a GPU-based iterative algorithm to reconstruct images. • Iterative algorithms are capable of reconstructing images from an undersampled set of projections. • The computational cost of the implementation of the developed algorithm is low. • The efficiency of the algorithm increases for large scale problems

  9. Labral reconstruction: when to perform and how

    Directory of Open Access Journals (Sweden)

    Brian J White

    2015-07-01

    Full Text Available Over the past decade, the understanding of the anatomy and function of the hip joint has continuously evolved, and surgical treatment options for the hip have significantly progressed. Originally, surgical treatment of the hip primarily involved resection of damaged tissue. Procedures that maintain and preserve proper hip anatomy, such as labral repair and femoroacetabular impingement (FAI correction, have shown superior results, in terms of pain reduction, increased function, and ability to return to activities. Labral reconstruction is a treatment option that uses a graft to reconstruct the native labrum. The technique and outcomes of labral reconstruction have been described relatively recently, and labral reconstruction is a cutting edge procedure that has shown promising early outcomes. The aim of this article is to review the current literature on hip labral reconstruction. We will review the indications for labral reconstruction, surgical technique and graft options, and surgical outcomes that have been described to date. Labral reconstruction provides an alternative treatment option for challenging intra-articular hip problems. Labral reconstruction restores the original anatomy of the hip and has the potential to preserve the longevity of the hip joint. This technique is an important tool in the orthopaedic surgeon’s arsenal for hip joint treatment and preservation.

  10. Neural Network for Sparse Reconstruction

    Directory of Open Access Journals (Sweden)

    Qingfa Li

    2014-01-01

    Full Text Available We construct a neural network based on smoothing approximation techniques and the projected gradient method to solve a class of sparse reconstruction problems. Neural networks can be implemented by circuits and can be seen as an important method for solving optimization problems, especially large scale problems. Smoothing approximation is an efficient technique for solving nonsmooth optimization problems. We combine these two techniques to overcome the difficulties of choosing the step size in discrete algorithms and of choosing the element in the set-valued map of a differential inclusion. In theory, the proposed network can converge to the optimal solution set of the given problem. Furthermore, some numerical experiments show the effectiveness of the proposed network.
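
    A discrete-time sketch of the underlying idea (not the circuit network of the paper) is given below: the non-smooth l1 term is replaced by a Huber-type smoothing and a projected gradient iteration is run on a simple feasible set; the non-negativity constraint and all parameters are illustrative assumptions.

```python
# Sketch: smoothed l_1 sparse reconstruction by projected gradient descent.
import numpy as np

def smoothed_abs_grad(x, mu):
    # Gradient of the Huber-type smoothing of |x| with parameter mu
    return np.where(np.abs(x) <= mu, x / mu, np.sign(x))

def projected_gradient_sparse(A, y, lam=0.05, mu=1e-3, n_iter=2000):
    x = np.zeros(A.shape[1])
    step = 1.0 / (np.linalg.norm(A, 2) ** 2 + lam / mu)   # safe step size
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y) + lam * smoothed_abs_grad(x, mu)
        x = np.maximum(x - step * grad, 0.0)    # projection onto x >= 0
    return x
```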

  11. Social problems as sources of opportunity – antecedents of social entrepreneurship opportunities

    Directory of Open Access Journals (Sweden)

    Agnieszka Żur

    2016-02-01

    Full Text Available Objective: Based on an extensive literature review, this paper aims to establish if, why and how, in given environmental and market contexts, social entrepreneurship (SE) opportunities are discovered and exploited. It positions social problems as sources of entrepreneurial opportunity. The article presents an integrated process-based view of SE opportunity antecedents and concludes with a dynamic model of SE opportunity. Research Design & Methods: To fulfil its goal, the paper establishes opportunity as the unit of research and explores the dynamics of opportunity recognition. To identify the components of SE opportunity through a process-based view, the study follows the steps of the critical literature review method. The literature review is followed by logical reasoning and inference, which results in the formulation of a proposed model of social entrepreneurship opportunity. Findings: The paper presents a holistic perspective on opportunity antecedents in the SE context and introduces social problems, information, social awareness and entrepreneurial mindset as fundamental components of the social entrepreneurship opportunity equation. Implications & Recommendations: Policy makers, investors and partners involved in the social sector should remember that social problems can be a source of entrepreneurial opportunity. Training, assisting and engaging socially aware entrepreneurs is a promising line of development for all communities. Contribution & Value Added: The major contribution of this study lies in extending the existing body of social entrepreneurship research by providing a new perspective, placing the social problem as opportunity at the centre of the discussion.

  12. Profile reconstruction from neutron reflectivity data and a priori knowledge

    International Nuclear Information System (INIS)

    Leeb, H.

    2008-01-01

    The problem of incomplete and noisy information in profile reconstruction from neutron reflectometry data is considered. In particular, methods of Bayesian statistics in combination with modelling or inverse scattering techniques are considered in order to properly include the required a priori knowledge and obtain quantitatively reliable estimates of the reconstructed profiles. Applying Bayes' theorem, the results of different experiments on the same sample can be consistently included in the profile reconstruction.

  13. Image Reconstruction Based on Homotopy Perturbation Inversion Method for Electrical Impedance Tomography

    Directory of Open Access Journals (Sweden)

    Jing Wang

    2013-01-01

    Full Text Available The image reconstruction problem for electrical impedance tomography (EIT) is mathematically a typical nonlinear ill-posed inverse problem. In this paper, a novel iteration regularization scheme based on the homotopy perturbation technique, namely, the homotopy perturbation inversion method, is applied to investigate the EIT image reconstruction problem. To verify the feasibility and effectiveness, simulations of image reconstruction have been performed considering different locations, sizes, and numbers of inclusions, as well as robustness to data noise. Numerical results indicate that this method can overcome the numerical instability and is robust to data noise in EIT image reconstruction. Moreover, compared with the classical Landweber iteration method, our approach improves the convergence rate. The results are promising.

  14. Diffusion archeology for diffusion progression history reconstruction.

    Science.gov (United States)

    Sefer, Emre; Kingsford, Carl

    2016-11-01

    Diffusion through graphs can be used to model many real-world processes, such as the spread of diseases, social network memes, computer viruses, or water contaminants. Often, a real-world diffusion cannot be directly observed while it is occurring - perhaps it is not noticed until some time has passed, continuous monitoring is too costly, or privacy concerns limit data access. This leads to the need to reconstruct how the present state of the diffusion came to be from partial diffusion data. Here, we tackle the problem of reconstructing a diffusion history from one or more snapshots of the diffusion state. This ability can be invaluable to learn when certain computer nodes are infected or which people are the initial disease spreaders to control future diffusions. We formulate this problem over discrete-time SEIRS-type diffusion models in terms of maximum likelihood. We design methods that are based on submodularity and a novel prize-collecting dominating-set vertex cover (PCDSVC) relaxation that can identify likely diffusion steps with some provable performance guarantees. Our methods are the first to be able to reconstruct complete diffusion histories accurately in real and simulated situations. As a special case, they can also identify the initial spreaders better than the existing methods for that problem. Our results for both meme and contaminant diffusion show that the partial diffusion data problem can be overcome with proper modeling and methods, and that hidden temporal characteristics of diffusion can be predicted from limited data.

  15. Weak unique continuation property and a related inverse source problem for time-fractional diffusion-advection equations

    Science.gov (United States)

    Jiang, Daijun; Li, Zhiyuan; Liu, Yikan; Yamamoto, Masahiro

    2017-05-01

    In this paper, we first establish a weak unique continuation property for time-fractional diffusion-advection equations. The proof is mainly based on the Laplace transform and the unique continuation properties for elliptic and parabolic equations. The result is weaker than its parabolic counterpart in the sense that we additionally impose the homogeneous boundary condition. As a direct application, we prove the uniqueness for an inverse problem on determining the spatial component in the source term by interior measurements. Numerically, we reformulate our inverse source problem as an optimization problem, and propose an iterative thresholding algorithm. Finally, several numerical experiments are presented to show the accuracy and efficiency of the algorithm.
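    For readers unfamiliar with the iterative thresholding step mentioned above, the following sketch shows the generic form of such an algorithm once the inverse source problem has been discretized as min 0.5*||Kf - d||^2 + lam*||f||_1 with a linear forward map K. It is a standard soft-thresholding iteration written for illustration, not the authors' implementation for the time-fractional equation; K, d and the parameter values are assumed inputs.

```python
# Generic iterative soft-thresholding sketch for a discretized inverse source problem.
# Illustrative only; not the paper's code for the fractional diffusion-advection model.
import numpy as np

def iterative_thresholding(K, d, lam=0.05, iters=300):
    step = 1.0 / np.linalg.norm(K, 2) ** 2               # 1/L with L = ||K||_2^2
    f = np.zeros(K.shape[1])
    for _ in range(iters):
        g = f - step * K.T @ (K @ f - d)                 # gradient step on the data-fit term
        f = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)   # soft threshold
    return f
```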

  16. Diagnostic accuracy of second-generation dual-source computed tomography coronary angiography with iterative reconstructions: a real-world experience.

    Science.gov (United States)

    Maffei, E; Martini, C; Rossi, A; Mollet, N; Lario, C; Castiglione Morelli, M; Clemente, A; Gentile, G; Arcadi, T; Seitun, S; Catalano, O; Aldrovandi, A; Cademartiri, F

    2012-08-01

    The authors evaluated the diagnostic accuracy of second-generation dual-source (DSCT) computed tomography coronary angiography (CTCA) with iterative reconstructions for detecting obstructive coronary artery disease (CAD). Between June 2010 and February 2011, we enrolled 160 patients (85 men; mean age 61.2±11.6 years) with suspected CAD. All patients underwent CTCA and conventional coronary angiography (CCA). For the CTCA scan (Definition Flash, Siemens), we used prospective tube current modulation and 70-100 ml of iodinated contrast material (Iomeprol 400 mgI/ml, Bracco). Data sets were reconstructed with an iterative reconstruction algorithm (IRIS, Siemens). CTCA and CCA reports were used to evaluate accuracy using thresholds for significant stenosis of ≥50% and ≥70%, respectively. No patient was excluded from the analysis. Heart rate was 64.3±11.9 bpm and radiation dose was 7.2±2.1 mSv. Disease prevalence was 30% (48/160). Sensitivity, specificity and positive and negative predictive values of CTCA in detecting significant stenosis were 90.1%, 93.3%, 53.2% and 99.1% (per segment), 97.5%, 91.2%, 61.4% and 99.6% (per vessel) and 100%, 83%, 71.6% and 100% (per patient), respectively. Positive and negative likelihood ratios at the per-patient level were 5.89 and 0.0, respectively. CTCA with second-generation DSCT in the real clinical world shows a diagnostic performance comparable with previously reported validation studies. The excellent negative predictive value and likelihood ratio make CTCA a first-line noninvasive method for diagnosing obstructive CAD.
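    The per-patient figures reported above follow directly from the 2x2 confusion table; the short routine below reproduces that arithmetic. The counts used in the example are a hypothetical split chosen to be consistent with the reported prevalence and percentages, not the study's raw data.

```python
# Diagnostic-accuracy arithmetic from a 2x2 confusion table (illustrative counts).
def diagnostic_metrics(tp, fp, fn, tn):
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    lr_pos = sens / (1 - spec) if spec < 1 else float("inf")
    lr_neg = (1 - sens) / spec
    return dict(sensitivity=sens, specificity=spec, ppv=ppv, npv=npv,
                positive_LR=lr_pos, negative_LR=lr_neg)

# Example: 48 diseased and 112 disease-free patients (hypothetical split).
print(diagnostic_metrics(tp=48, fp=19, fn=0, tn=93))
```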

  17. Reconstructing Neutrino Mass Spectrum

    OpenAIRE

    Smirnov, A. Yu.

    1999-01-01

    Reconstruction of the neutrino mass spectrum and lepton mixing is one of the fundamental problems of particle physics. In this connection we consider two central topics: (i) the origin of large lepton mixing, (ii) possible existence of new (sterile) neutrino states. We discuss also possible relation between large mixing and existence of sterile neutrinos.

  18. Model-Based Reconstructive Elasticity Imaging Using Ultrasound

    Directory of Open Access Journals (Sweden)

    Salavat R. Aglyamov

    2007-01-01

    Full Text Available Elasticity imaging is a reconstructive imaging technique where tissue motion in response to mechanical excitation is measured using modern imaging systems, and the estimated displacements are then used to reconstruct the spatial distribution of Young's modulus. Here we present an ultrasound elasticity imaging method that utilizes the model-based technique for Young's modulus reconstruction. Based on the geometry of the imaged object, only one axial component of the strain tensor is used. The numerical implementation of the method is highly efficient because the reconstruction is based on an analytic solution of the forward elastic problem. The model-based approach is illustrated using two potential clinical applications: differentiation of liver hemangioma and staging of deep venous thrombosis. Overall, these studies demonstrate that model-based reconstructive elasticity imaging can be used in applications where the geometry of the object and the surrounding tissue is somewhat known and certain assumptions about the pathology can be made.

  19. A phylogenetic Kalman filter for ancestral trait reconstruction using molecular data.

    Science.gov (United States)

    Lartillot, Nicolas

    2014-02-15

    Correlation between life history or ecological traits and genomic features such as nucleotide or amino acid composition can be used for reconstructing the evolutionary history of the traits of interest along phylogenies. Thus far, however, such ancestral reconstructions have been done using simple linear regression approaches that do not account for phylogenetic inertia. These reconstructions could instead be seen as a genuine comparative regression problem, such as formalized by classical generalized least-square comparative methods, in which the trait of interest and the molecular predictor are represented as correlated Brownian characters coevolving along the phylogeny. Here, a Bayesian sampler is introduced, representing an alternative and more efficient algorithmic solution to this comparative regression problem, compared with currently existing generalized least-square approaches. Technically, ancestral trait reconstruction based on a molecular predictor is shown to be formally equivalent to a phylogenetic Kalman filter problem, for which backward and forward recursions are developed and implemented in the context of a Markov chain Monte Carlo sampler. The comparative regression method results in more accurate reconstructions and a more faithful representation of uncertainty, compared with simple linear regression. Application to the reconstruction of the evolution of optimal growth temperature in Archaea, using GC composition in ribosomal RNA stems and amino acid composition of a sample of protein-coding genes, confirms previous findings, in particular, pointing to a hyperthermophilic ancestor for the kingdom. The program is freely available at www.phylobayes.org.

  20. A response matrix method for slab-geometry discrete ordinates adjoint calculations in energy-dependent source-detector problems

    Energy Technology Data Exchange (ETDEWEB)

    Mansur, Ralph S.; Moura, Carlos A., E-mail: ralph@ime.uerj.br, E-mail: demoura@ime.uerj.br [Universidade do Estado do Rio de Janeiro (UERJ), RJ (Brazil). Departamento de Engenharia Mecanica; Barros, Ricardo C., E-mail: rcbarros@pq.cnpq.br [Universidade do Estado do Rio de Janeiro (UERJ), Nova Friburgo, RJ (Brazil). Departamento de Modelagem Computacional

    2017-07-01

    Presented here is an application of the Response Matrix (RM) method for adjoint discrete ordinates (S_N) problems in slab geometry applied to energy-dependent source-detector problems. The adjoint RM method is free from spatial truncation errors, as it generates numerical results for the adjoint angular fluxes in multilayer slabs that agree with the numerical values obtained from the analytical solution of the energy multigroup adjoint S_N equations. Numerical results are given for two typical source-detector problems to illustrate the accuracy and the efficiency of the offered RM computer code. (author)

  1. Resolution effects in reconstructing ancestral genomes.

    Science.gov (United States)

    Zheng, Chunfang; Jeong, Yuji; Turcotte, Madisyn Gabrielle; Sankoff, David

    2018-05-09

    The reconstruction of ancestral genomes must deal with the problem of resolution, necessarily involving a trade-off between trying to identify genomic details and being overwhelmed by noise at higher resolutions. We use the median reconstruction at the synteny block level, of the ancestral genome of the order Gentianales, based on coffee, Rhazya stricta and grape, to exemplify the effects of resolution (granularity) on comparative genomic analyses. We show how decreased resolution blurs the differences between evolving genomes, with respect to rate, mutational process and other characteristics.

  2. Parallel MR image reconstruction using augmented Lagrangian methods.

    Science.gov (United States)

    Ramani, Sathish; Fessler, Jeffrey A

    2011-03-01

    Magnetic resonance image (MRI) reconstruction using SENSitivity Encoding (SENSE) requires regularization to suppress noise and aliasing effects. Edge-preserving and sparsity-based regularization criteria can improve image quality, but they demand computation-intensive nonlinear optimization. In this paper, we present novel methods for regularized MRI reconstruction from undersampled sensitivity encoded data (SENSE-reconstruction) using the augmented Lagrangian (AL) framework for solving large-scale constrained optimization problems. We first formulate regularized SENSE-reconstruction as an unconstrained optimization task and then convert it to a set of (equivalent) constrained problems using variable splitting. We then attack these constrained versions in an AL framework using an alternating minimization method, leading to algorithms that can be implemented easily. The proposed methods are applicable to a general class of regularizers that includes popular edge-preserving (e.g., total-variation) and sparsity-promoting (e.g., l1-norm of wavelet coefficients) criteria and combinations thereof. Numerical experiments with synthetic and in vivo human data illustrate that the proposed AL algorithms converge faster than both general-purpose optimization algorithms such as nonlinear conjugate gradient (NCG) and state-of-the-art MFISTA.
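    To make the variable-splitting idea concrete, here is a compact augmented-Lagrangian (ADMM-style) sketch for a generic l1-regularized least-squares problem standing in for the regularized SENSE formulation. The encoding operator E is treated as a small dense matrix for simplicity, which is an assumption made only for this illustration.

```python
# Variable splitting + augmented Lagrangian (scaled-dual ADMM) for min 0.5||Ex-y||^2 + lam||x||_1.
# A stand-in sketch for the paper's SENSE setting; E, y and parameters are assumed.
import numpy as np

def admm_l1(E, y, lam=0.05, rho=1.0, iters=200):
    n = E.shape[1]
    x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)      # u: scaled dual variable
    Ety = E.T @ y
    L = np.linalg.cholesky(E.T @ E + rho * np.eye(n))      # factor once, reuse every iteration
    for _ in range(iters):
        rhs = Ety + rho * (z - u)
        x = np.linalg.solve(L.T, np.linalg.solve(L, rhs))  # x-update: quadratic subproblem
        z = np.sign(x + u) * np.maximum(np.abs(x + u) - lam / rho, 0.0)  # shrinkage (z-update)
        u = u + x - z                                      # dual ascent
    return z
```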

  3. Geometric reconstruction methods for electron tomography

    Energy Technology Data Exchange (ETDEWEB)

    Alpers, Andreas, E-mail: alpers@ma.tum.de [Zentrum Mathematik, Technische Universität München, D-85747 Garching bei München (Germany); Gardner, Richard J., E-mail: Richard.Gardner@wwu.edu [Department of Mathematics, Western Washington University, Bellingham, WA 98225-9063 (United States); König, Stefan, E-mail: koenig@ma.tum.de [Zentrum Mathematik, Technische Universität München, D-85747 Garching bei München (Germany); Pennington, Robert S., E-mail: robert.pennington@uni-ulm.de [Center for Electron Nanoscopy, Technical University of Denmark, DK-2800 Kongens Lyngby (Denmark); Boothroyd, Chris B., E-mail: ChrisBoothroyd@cantab.net [Ernst Ruska-Centre for Microscopy and Spectroscopy with Electrons and Peter Grünberg Institute, Forschungszentrum Jülich, D-52425 Jülich (Germany); Houben, Lothar, E-mail: l.houben@fz-juelich.de [Ernst Ruska-Centre for Microscopy and Spectroscopy with Electrons and Peter Grünberg Institute, Forschungszentrum Jülich, D-52425 Jülich (Germany); Dunin-Borkowski, Rafal E., E-mail: rdb@fz-juelich.de [Ernst Ruska-Centre for Microscopy and Spectroscopy with Electrons and Peter Grünberg Institute, Forschungszentrum Jülich, D-52425 Jülich (Germany); Joost Batenburg, Kees, E-mail: Joost.Batenburg@cwi.nl [Centrum Wiskunde and Informatica, NL-1098XG, Amsterdam, The Netherlands and Vision Lab, Department of Physics, University of Antwerp, B-2610 Wilrijk (Belgium)

    2013-05-15

    Electron tomography is becoming an increasingly important tool in materials science for studying the three-dimensional morphologies and chemical compositions of nanostructures. The image quality obtained by many current algorithms is seriously affected by the problems of missing wedge artefacts and non-linear projection intensities due to diffraction effects. The former refers to the fact that data cannot be acquired over the full 180° tilt range; the latter implies that for some orientations, crystalline structures can show strong contrast changes. To overcome these problems we introduce and discuss several algorithms from the mathematical fields of geometric and discrete tomography. The algorithms incorporate geometric prior knowledge (mainly convexity and homogeneity), which also in principle considerably reduces the number of tilt angles required. Results are discussed for the reconstruction of an InAs nanowire. - Highlights: ► Four algorithms for electron tomography are introduced that utilize prior knowledge. ► Objects are assumed to be homogeneous; convexity and regularity is also discussed. ► We are able to reconstruct slices of a nanowire from as few as four projections. ► Algorithms should be selected based on the specific reconstruction task at hand.

  4. Geometric reconstruction methods for electron tomography

    International Nuclear Information System (INIS)

    Alpers, Andreas; Gardner, Richard J.; König, Stefan; Pennington, Robert S.; Boothroyd, Chris B.; Houben, Lothar; Dunin-Borkowski, Rafal E.; Joost Batenburg, Kees

    2013-01-01

    Electron tomography is becoming an increasingly important tool in materials science for studying the three-dimensional morphologies and chemical compositions of nanostructures. The image quality obtained by many current algorithms is seriously affected by the problems of missing wedge artefacts and non-linear projection intensities due to diffraction effects. The former refers to the fact that data cannot be acquired over the full 180° tilt range; the latter implies that for some orientations, crystalline structures can show strong contrast changes. To overcome these problems we introduce and discuss several algorithms from the mathematical fields of geometric and discrete tomography. The algorithms incorporate geometric prior knowledge (mainly convexity and homogeneity), which also in principle considerably reduces the number of tilt angles required. Results are discussed for the reconstruction of an InAs nanowire. - Highlights: ► Four algorithms for electron tomography are introduced that utilize prior knowledge. ► Objects are assumed to be homogeneous; convexity and regularity is also discussed. ► We are able to reconstruct slices of a nanowire from as few as four projections. ► Algorithms should be selected based on the specific reconstruction task at hand

  5. [Mandibular reconstruction with fibula free flap. Experience of virtual reconstruction using Osirix®, a free and open source software for medical imagery].

    Science.gov (United States)

    Albert, S; Cristofari, J-P; Cox, A; Bensimon, J-L; Guedon, C; Barry, B

    2011-12-01

    The techniques of free tissue transfer are mainly used for mandibular reconstruction by specialized surgical teams. This type of reconstruction is mostly performed for head and neck cancers affecting the mandibular bone and requiring a wide surgical resection and interruption of the mandible. To decrease the duration of the operation, the surgical procedure generally involves two teams, one devoted to cancer resection and the other to raising the fibular flap and performing the reconstruction. For better preparation of this surgical procedure, we propose the use of medical imaging software enabling three-dimensional mandibular reconstructions from the CT scan done during the initial disease-staging checkup. The software used is Osirix®, developed since 2004 by a team of radiologists from Geneva and UCLA, running on Apple® computers and downloadable free of charge in its basic version. We report our experience with this software in 17 patients, with preoperative three-dimensional modelling of the mandible and of the mandibular segment to be removed. It also forecasts the number of fibula fragments needed and the location of the osteotomies. Copyright © 2009 Elsevier Masson SAS. All rights reserved.

  6. Instrument Variables for Reducing Noise in Parallel MRI Reconstruction

    Directory of Open Access Journals (Sweden)

    Yuchou Chang

    2017-01-01

    Full Text Available Generalized autocalibrating partially parallel acquisition (GRAPPA) has been a widely used parallel MRI technique. However, noise deteriorates the reconstructed image when the reduction factor increases, or even at a low reduction factor for some noisy datasets. Noise, initially generated by the scanner, propagates noise-related errors through the fitting and interpolation procedures of GRAPPA, distorting the quality of the final reconstructed image. The basic idea we propose to improve GRAPPA is to remove noise from a system identification perspective. In this paper, we first analyze the GRAPPA noise problem from a noisy input-output system perspective; then, a new framework based on the errors-in-variables (EIV) model is developed for analyzing the noise generation mechanism in GRAPPA and designing a concrete method, instrument variables (IV) GRAPPA, to remove noise. The proposed EIV framework provides the possibility that noiseless GRAPPA reconstruction could be achieved by existing methods that solve the EIV problem other than the IV method. Experimental results show that the proposed reconstruction algorithm can better remove the noise compared to conventional GRAPPA, as validated with both phantom and in vivo brain data.
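    The instrumental-variables idea invoked above can be illustrated with a small numerical example: when the regressors are themselves noisy (errors-in-variables), ordinary least squares is biased, while instruments that correlate with the true regressors but not with their noise recover the coefficients. The GRAPPA-specific construction of the instruments from k-space data is not reproduced here; the data below are synthetic.

```python
# Errors-in-variables vs. instrumental-variables estimation on synthetic data (illustrative).
import numpy as np

rng = np.random.default_rng(1)
n, true_w = 2000, np.array([1.0, -0.5])
X_clean = rng.standard_normal((n, 2))
y = X_clean @ true_w + 0.05 * rng.standard_normal(n)
X_noisy = X_clean + 0.5 * rng.standard_normal((n, 2))    # regressors corrupted by noise
Z = X_clean + 0.5 * rng.standard_normal((n, 2))          # instruments: share the clean part,
                                                         # independent noise realization
w_ols = np.linalg.lstsq(X_noisy, y, rcond=None)[0]       # biased towards zero
w_iv = np.linalg.solve(Z.T @ X_noisy, Z.T @ y)           # IV estimate (Z'X)^-1 Z'y
print(w_ols, w_iv)
```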

  7. A sparsity-regularized Born iterative method for reconstruction of two-dimensional piecewise continuous inhomogeneous domains

    KAUST Repository

    Sandhu, Ali Imran; Desmal, Abdulla; Bagci, Hakan

    2016-01-01

    A sparsity-regularized Born iterative method (BIM) is proposed for efficiently reconstructing two-dimensional piecewise-continuous inhomogeneous dielectric profiles. Such profiles are typically not spatially sparse, which reduces the efficiency of the sparsity-promoting regularization. To overcome this problem, scattered fields are represented in terms of the spatial derivative of the dielectric profile and reconstruction is carried out over samples of the dielectric profile's derivative. Then, like the conventional BIM, the nonlinear problem is iteratively converted into a sequence of linear problems (in derivative samples) and sparsity constraint is enforced on each linear problem using the thresholded Landweber iterations. Numerical results, which demonstrate the efficiency and accuracy of the proposed method in reconstructing piecewise-continuous dielectric profiles, are presented.
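    The thresholded Landweber step used for each linearized subproblem has a simple generic form, sketched below: a Landweber (gradient) step on the linear system followed by a soft-thresholding step that enforces sparsity of the profile-derivative samples. A and y stand for the linearized scattering operator and the measured field data; both, as well as the parameter values, are assumptions of this illustration.

```python
# Thresholded Landweber iteration for a sparse linear subproblem (illustrative sketch).
import numpy as np

def thresholded_landweber(A, y, lam=1e-2, iters=200):
    tau = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1], dtype=complex)
    for _ in range(iters):
        r = x + tau * A.conj().T @ (y - A @ x)           # Landweber (gradient) step
        mag = np.abs(r)
        # complex-safe soft threshold: shrink magnitudes, keep phases
        x = r * np.maximum(1.0 - tau * lam / np.maximum(mag, 1e-30), 0.0)
    return x
```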

  8. Fast half-sibling population reconstruction: theory and algorithms.

    Science.gov (United States)

    Dexter, Daniel; Brown, Daniel G

    2013-07-12

    Kinship inference is the task of identifying genealogically related individuals. Kinship information is important for determining mating structures, notably in endangered populations. Although many solutions exist for reconstructing full sibling relationships, few exist for half-siblings. We consider the problem of determining whether a proposed half-sibling population reconstruction is valid under Mendelian inheritance assumptions. We show that this problem is NP-complete and provide a 0/1 integer program that identifies the minimum number of individuals that must be removed from a population in order for the reconstruction to become valid. We also present SibJoin, a heuristic-based clustering approach based on Mendelian genetics, which is strikingly fast. The software is available at http://github.com/ddexter/SibJoin.git+. Our SibJoin algorithm is reasonably accurate and thousands of times faster than existing algorithms. The heuristic is used to infer a half-sibling structure for a population which was, until recently, too large to evaluate.

  9. Automatic Texture Optimization for 3D Urban Reconstruction

    Directory of Open Access Journals (Sweden)

    LI Ming

    2017-03-01

    Full Text Available In order to solve the problem of texture optimization in 3D city reconstruction from multi-lens oblique images, this paper presents a method for seamless texture model reconstruction. First, the radiometric information of the images is corrected using camera response functions and the image dark channel. Then, according to the correspondence between the terrain triangular mesh surface model and the images, occlusion detection is implemented by a sparse triangulation method, and the list of textures visible to each triangle is established. Finally, combining the topological relationships of the triangles in the 3D triangular mesh surface model with the means and variances of the images, a graph-cuts-based texture optimization algorithm is constructed under the MRF (Markov random field) framework to solve the discrete label problem of texture selection and clustering, ensuring the consistency of adjacent triangles in texture mapping and achieving seamless texture reconstruction of the city. The experimental results verify the validity and superiority of the proposed method.

  10. Reconstruction of network topology using status-time-series data

    Science.gov (United States)

    Pandey, Pradumn Kumar; Badarla, Venkataramana

    2018-01-01

    Uncovering the heterogeneous connection pattern of a networked system from the available status-time-series (STS) data of a dynamical process on the network is of great interest in network science and is known as a reverse engineering problem. Dynamical processes on a network are affected by the structure of the network. The dependency between the diffusion dynamics and the structure of the network can be utilized to retrieve the connection pattern from the diffusion data. Information about the network structure can help to devise control of dynamics on the network. In this paper, we consider the problem of network reconstruction from the available STS data using matrix analysis. The proposed method of network reconstruction from STS data is tested successfully under susceptible-infected-susceptible (SIS) diffusion dynamics on real-world and computer-generated benchmark networks. The high accuracy and efficiency of the proposed reconstruction procedure define the novelty of the method. Our proposed method outperforms the compressed sensing theory (CST) based method of network reconstruction using STS data. Further, the same procedure of network reconstruction is applied to weighted networks, and the ordering of the edges in the weighted networks is identified with high accuracy.

  11. Source-space ICA for MEG source imaging.

    Science.gov (United States)

    Jonmohamadi, Yaqub; Jones, Richard D

    2016-02-01

    One of the most widely used approaches in electroencephalography/magnetoencephalography (EEG/MEG) source imaging is the application of an inverse technique (such as dipole modelling or sLORETA) on the component extracted by independent component analysis (ICA) (sensor-space ICA + inverse technique). The advantage of this approach over an inverse technique alone is that it can identify and localize multiple concurrent sources. Among inverse techniques, the minimum-variance beamformers offer a high spatial resolution. However, in order to have both the high spatial resolution of the beamformer and the ability to handle multiple concurrent sources, sensor-space ICA + beamformer is not an ideal combination. We propose source-space ICA for MEG as a powerful alternative approach which can provide the high spatial resolution of the beamformer and handle multiple concurrent sources. The concept of source-space ICA for MEG is to apply the beamformer first and then singular value decomposition + ICA. In this paper we have compared source-space ICA with sensor-space ICA in both simulation and real MEG. The simulations included two challenging scenarios of correlated/concurrent cluster sources. Source-space ICA provided superior performance in spatial reconstruction of source maps, even though both techniques performed equally from a temporal perspective. Real MEG from two healthy subjects with visual stimuli was also used to compare the performance of sensor-space ICA and source-space ICA. We have also proposed a new variant of the minimum-variance beamformer called weight-normalized linearly-constrained minimum-variance with orthonormal lead-field. As sensor-space ICA-based source reconstruction is popular in EEG and MEG imaging, and given that source-space ICA has superior spatial performance, it is expected that source-space ICA will supersede its predecessor in many applications.
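    A highly simplified sketch of the beamformer-first, then SVD + ICA pipeline described above is given below, for scalar-orientation lead fields. The weight-normalized, orthonormal-lead-field variant proposed in the paper is only approximated here by unit-norm scaling of standard LCMV weights; the lead fields, data and number of components are assumed given.

```python
# Simplified source-space ICA sketch: LCMV beamformer projection, then SVD + FastICA.
# Assumes scalar-orientation lead fields; not the paper's weight-normalized variant.
import numpy as np
from sklearn.decomposition import FastICA

def source_space_ica(X, leadfields, n_components=5, reg=1e-8):
    """X: sensors x time, leadfields: sensors x source locations."""
    C = np.cov(X) + reg * np.eye(X.shape[0])             # regularized data covariance
    Cinv = np.linalg.inv(C)
    W = []
    for l in leadfields.T:                               # LCMV weights per source location
        w = Cinv @ l / (l @ Cinv @ l)
        W.append(w / np.linalg.norm(w))                  # crude weight normalization
    Y = np.array(W) @ X                                  # source-space time courses
    U, _, _ = np.linalg.svd(Y, full_matrices=False)      # dimensionality reduction
    Yr = U[:, :n_components].T @ Y
    S = FastICA(n_components=n_components, random_state=0).fit_transform(Yr.T)
    return S.T                                           # components x time
```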

  12. Development of a numerical experiment technique to solve inverse gamma-ray transport problems with application to nondestructive assay of nuclear waste barrels

    International Nuclear Information System (INIS)

    Chang, C.J.; Anghaie, S.

    1998-01-01

    A numerical experiment technique is presented to find an optimum solution to an underdetermined inverse gamma-ray transport problem involving the nondestructive assay of the radionuclide inventory in a nuclear waste drum. The method introduced is an optimization scheme based on performing a large number of numerical simulations that account for the counting statistics, the nonuniformity of the source distribution, and the heterogeneous density of the self-absorbing medium inside the waste drum. The simulation model uses forward projection and backward reconstruction algorithms. The forward projection algorithm uses a randomly selected source distribution and a first-flight kernel method to calculate external detector responses. The backward reconstruction algorithm uses the conjugate gradient method with a nonnegativity constraint or the maximum likelihood expectation maximization method to reconstruct the source distribution based on the calculated detector responses. Total source activity is determined by summing the reconstructed activity of each computational grid cell. By conducting 10,000 numerical simulations, the error bound and the associated confidence level for the prediction of total source activity are determined. The accuracy and reliability of the simulation model are verified by performing a series of experiments in a 208-liter waste barrel. Density heterogeneity is simulated by using different materials distributed in 37 egg-crate-type compartments simulating a vertical segment of the barrel. Four orthogonal detector positions are used to measure the emerging radiation field from the distributed source. Results of the performed experiments are in full agreement with the estimated error and the confidence level predicted by the simulation model.
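    The maximum likelihood expectation maximization (MLEM) update mentioned above has a compact multiplicative form, sketched below for a generic system matrix H that maps grid-cell activities to detector counts. This is a textbook MLEM iteration written for illustration, not the validated simulation code of the record; H and the measured counts are assumed inputs.

```python
# Generic MLEM iteration for emission reconstruction (illustrative sketch).
import numpy as np

def mlem(H, counts, iters=100):
    x = np.ones(H.shape[1])                              # strictly positive start
    sens = H.sum(axis=0)                                 # sensitivity of each grid cell
    for _ in range(iters):
        ratio = counts / np.maximum(H @ x, 1e-12)        # measured / estimated projections
        x = x * (H.T @ ratio) / np.maximum(sens, 1e-12)  # multiplicative, nonnegative update
    return x

# Total activity is then the sum over grid cells: activity = mlem(H, counts).sum()
```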

  13. Image reconstruction of fluorescent molecular tomography based on the tree structured Schur complement decomposition

    Directory of Open Access Journals (Sweden)

    Wang Jiajun

    2010-05-01

    Full Text Available Abstract Background The inverse problem of fluorescent molecular tomography (FMT) often involves complex large-scale matrix operations, which may lead to unacceptable computational errors and complexity. In this research, a tree-structured Schur complement decomposition strategy is proposed to accelerate the reconstruction process and reduce the computational complexity. Additionally, an adaptive regularization scheme is developed to address the ill-posedness of the inverse problem. Methods The global system is decomposed level by level with the Schur complement system along two paths in the tree structure. The resultant subsystems are solved in combination with the biconjugate gradient method. The mesh for the inverse problem is generated incorporating the prior information. During the reconstruction, the regularization parameters are adaptive not only to the spatial variations but also to the variations of the objective function to tackle the ill-posed nature of the inverse problem. Results Simulation results demonstrate that the strategy of the tree-structured Schur complement decomposition clearly outperforms the previous methods, such as the conventional Conjugate-Gradient (CG) and the Schur CG methods, in both reconstruction accuracy and speed. As compared with the Tikhonov regularization method, the adaptive regularization scheme can significantly mitigate the ill-posedness of the inverse problem. Conclusions The methods proposed in this paper can significantly improve the reconstructed image quality of FMT and accelerate the reconstruction process.
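    For orientation, the basic (single-level) Schur complement step underlying the tree-structured strategy is sketched below for a symmetric 2x2 block system. The paper's method recurses this decomposition along two paths of a tree and couples it with the biconjugate gradient method; the dense factorizations used here are a simplification for illustration.

```python
# One level of Schur-complement decomposition for [[A, B], [B.T, D]] [x1; x2] = [f1; f2].
import numpy as np

def schur_solve(A, B, D, f1, f2):
    Ainv_B = np.linalg.solve(A, B)
    Ainv_f1 = np.linalg.solve(A, f1)
    S = D - B.T @ Ainv_B                                 # Schur complement of the A block
    x2 = np.linalg.solve(S, f2 - B.T @ Ainv_f1)          # solve the reduced system first
    x1 = Ainv_f1 - Ainv_B @ x2                           # back-substitute for the first block
    return x1, x2
```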

  14. Spiral scan long object reconstruction through PI line reconstruction

    International Nuclear Information System (INIS)

    Tam, K C; Hu, J; Sourbelle, K

    2004-01-01

    The response of a point object in a cone beam (CB) spiral scan is analysed. Based on the result, a reconstruction algorithm for long object imaging in spiral scan cone beam CT is developed. A region-of-interest (ROI) of the long object is scanned with a detector smaller than the ROI, and a portion of it can be reconstructed without contamination from overlaying materials. The top and bottom surfaces of the ROI are defined by two sets of PI lines near the two ends of the spiral path. With this novel definition of the top and bottom ROI surfaces and through the use of projective geometry, it is straightforward to partition the cone beam image into regions corresponding to projections of the ROI, the overlaying objects or both. This also simplifies computation at source positions near the spiral ends, and makes it possible to reduce radiation exposure near the spiral ends substantially through simple hardware collimation. Simulation results to validate the algorithm are presented

  15. Reconstruction of Clear-PEM data with STIR

    CERN Document Server

    Martins, M V; Rodrigues, P; Trindade, A; Oliveira, N; Correia, M; Cordeiro, H; Ferreira, N C; Varela, J; Almeida, P

    2006-01-01

    The Clear-PEM scanner is a device based on planar detectors that is currently under development within the Crystal Clear Collaboration, at CERN. The basis for 3D image reconstruction in Clear-PEM is the software for tomographic image reconstruction (STIR). STIR is an open source object-oriented library that efficiently deals with the 3D positron emission tomography data sets. This library was originally designed for the traditional cylindrical scanners. In order to make its use compatible with planar scanner data, new functionalities were introduced into the library's framework. In this work, Monte Carlo simulations of the Clear-PEM scanner acquisitions were used as input for image reconstruction with the 3D OSEM algorithm available in STIR. The results presented indicate that dual plate PEM data can be accurately reconstructed using the enhanced STIR framework.

  16. Development of an image reconstruction algorithm for a few number of projection data

    International Nuclear Information System (INIS)

    Vieira, Wilson S.; Brandao, Luiz E.; Braz, Delson

    2007-01-01

    An image reconstruction algorithm was developed for specific cases of radiotracer applications in industry (rotating cylindrical mixers), involving a very small number of projection data. The algorithm was planned for imaging radioactive isotope distributions around the center of circular planes. The method consists of adapting the original expectation maximization algorithm (EM) to solve the ill-posed emission tomography inverse problem in order to reconstruct transversal 2D images of an object with only four projections. To achieve this aim, counts of photons emitted by selected radioactive sources in the plane, after they had been simulated using the commercial software MICROSHIELD 5.05, constitute the projections, and a computational code (SPECTEM) was developed to generate activity vectors or images related to those sources. SPECTEM is flexible enough to support simultaneous changes of the detectors' geometry, the medium under investigation and the properties of the gamma radiation. As the code correctly followed the proposed method, good results were obtained, and they encouraged us to continue to the next step of the research: the validation of SPECTEM using experimental data to check its real performance. We expect this code will considerably improve radiotracer methodology, making the diagnosis of failures in industrial processes easier. (author)

  17. Development of an image reconstruction algorithm for a few number of projection data

    Energy Technology Data Exchange (ETDEWEB)

    Vieira, Wilson S.; Brandao, Luiz E. [Instituto de Engenharia Nuclear (IEN-CNEN/RJ), Rio de Janeiro , RJ (Brazil)]. E-mails: wilson@ien.gov.br; brandao@ien.gov.br; Braz, Delson [Universidade Federal do Rio de Janeiro (UFRJ), RJ (Brazil). Coordenacao dos Programa de Pos-graduacao de Engenharia (COPPE). Lab. de Instrumentacao Nuclear]. E-mail: delson@mailhost.lin.ufrj.br

    2007-07-01

    An image reconstruction algorithm was developed for specific cases of radiotracer applications in industry (rotating cylindrical mixers), involving a very small number of projection data. The algorithm was planned for imaging radioactive isotope distributions around the center of circular planes. The method consists of adapting the original expectation maximization algorithm (EM) to solve the ill-posed emission tomography inverse problem in order to reconstruct transversal 2D images of an object with only four projections. To achieve this aim, counts of photons emitted by selected radioactive sources in the plane, after they had been simulated using the commercial software MICROSHIELD 5.05, constitute the projections, and a computational code (SPECTEM) was developed to generate activity vectors or images related to those sources. SPECTEM is flexible enough to support simultaneous changes of the detectors' geometry, the medium under investigation and the properties of the gamma radiation. As the code correctly followed the proposed method, good results were obtained, and they encouraged us to continue to the next step of the research: the validation of SPECTEM using experimental data to check its real performance. We expect this code will considerably improve radiotracer methodology, making the diagnosis of failures in industrial processes easier. (author)

  18. Riemann–Hilbert problem approach for two-dimensional flow inverse scattering

    Energy Technology Data Exchange (ETDEWEB)

    Agaltsov, A. D., E-mail: agalets@gmail.com [Faculty of Computational Mathematics and Cybernetics, Lomonosov Moscow State University, 119991 Moscow (Russian Federation); Novikov, R. G., E-mail: novikov@cmap.polytechnique.fr [CNRS (UMR 7641), Centre de Mathématiques Appliquées, Ecole Polytechnique, 91128 Palaiseau (France); IEPT RAS, 117997 Moscow (Russian Federation); Moscow Institute of Physics and Technology, Dolgoprudny (Russian Federation)

    2014-10-15

    We consider inverse scattering for the time-harmonic wave equation with first-order perturbation in two dimensions. This problem arises in particular in the acoustic tomography of moving fluid. We consider linearized and nonlinearized reconstruction algorithms for this problem of inverse scattering. Our nonlinearized reconstruction algorithm is based on the non-local Riemann–Hilbert problem approach. Comparisons with preceding results are given.

  19. Riemann–Hilbert problem approach for two-dimensional flow inverse scattering

    International Nuclear Information System (INIS)

    Agaltsov, A. D.; Novikov, R. G.

    2014-01-01

    We consider inverse scattering for the time-harmonic wave equation with first-order perturbation in two dimensions. This problem arises in particular in the acoustic tomography of moving fluid. We consider linearized and nonlinearized reconstruction algorithms for this problem of inverse scattering. Our nonlinearized reconstruction algorithm is based on the non-local Riemann–Hilbert problem approach. Comparisons with preceding results are given

  20. Silicon and Germanium (111) Surface Reconstruction

    Science.gov (United States)

    Hao, You Gong

    Silicon (111) surface (7 x 7) reconstruction has been a long-standing puzzle. For the last twenty years, various models were put forward to explain this reconstruction, but so far the problem remains unsolved. Recent ion scattering and channeling (ISC), scanning tunneling microscopy (STM) and transmission electron diffraction (TED) experiments reveal new results about the surface which greatly help investigators to establish better models. This work proposes a silicon (111) surface reconstruction mechanism, the raising and lowering mechanism, which leads to benzene-like ring and flower (raised atom) building units. Based on these building units a (7 x 7) model is proposed, which is capable of explaining the STM and ISC experiments and several others. Furthermore, the building units of the model can be used naturally to account for the germanium (111) surface c(2 x 8) reconstruction and other observed structures including (2 x 2), (5 x 5) and (7 x 7) for germanium, as well as the (√3 x √3)R30° and (√19 x √19)R23.5° impurity-induced structures for silicon, and the higher-temperature disordered (1 x 1) structure for silicon. The model is closely related to the silicon (111) surface (2 x 1) reconstruction pi-bonded chain model, which is the most successful model for that reconstruction. This provides an explanation for the rather low conversion temperature (560 K) of the (2 x 1) to the (7 x 7). The model seems to meet some problems in explaining the TED result, which is explained very well by the dimer, adatom and stacking fault (DAS) model proposed by Takayanagi. In order to explain the TED result, a variation of the atomic scattering factor is proposed. Comparing the benzene-like ring model with the DAS model, the former needs more work to explain the TED result and the latter has to find a way to explain the silicon (111) surface (1 x 1) disorder experiment.

  1. Analyzing octopus movements using three-dimensional reconstruction.

    Science.gov (United States)

    Yekutieli, Yoram; Mitelman, Rea; Hochner, Binyamin; Flash, Tamar

    2007-09-01

    Octopus arms, as well as other muscular hydrostats, are characterized by a very large number of degrees of freedom and a rich motion repertoire. Over the years, several attempts have been made to elucidate the interplay between the biomechanics of these organs and their control systems. Recent developments in electrophysiological recordings from both the arms and brains of behaving octopuses mark significant progress in this direction. The next stage is relating these recordings to the octopus arm movements, which requires an accurate and reliable method of movement description and analysis. Here we describe a semiautomatic computerized system for 3D reconstruction of an octopus arm during motion. It consists of two digital video cameras and a PC running custom-made software. The system overcomes the difficulty of extracting the motion of smooth, nonrigid objects in poor viewing conditions. Part of the difficulty arises from light refraction when recording motion underwater. Here we use both experiments and simulations to analyze the refraction problem and show that accurate reconstruction is possible. We have used this system successfully to reconstruct different types of octopus arm movements, such as reaching and bend initiation movements. Our system is noninvasive and does not require attaching any artificial markers to the octopus arm. It may therefore be of more general use in reconstructing other nonrigid, elongated objects in motion.

  2. Direct maximum parsimony phylogeny reconstruction from genotype data

    OpenAIRE

    Sridhar, Srinath; Lam, Fumei; Blelloch, Guy E; Ravi, R; Schwartz, Russell

    2007-01-01

    Abstract Background Maximum parsimony phylogenetic tree reconstruction from genetic variation data is a fundamental problem in computational genetics with many practical applications in population genetics, whole genome analysis, and the search for genetic predictors of disease. Efficient methods are available for reconstruction of maximum parsimony trees from haplotype data, but such data are difficult to determine directly for autosomal DNA. Data more commonly is available in the form of ge...

  3. Prosthetic breast reconstruction: indications and update

    Science.gov (United States)

    Quinn, Tam T.; Miller, George S.; Rostek, Marie; Cabalag, Miguel S.; Rozen, Warren M.

    2016-01-01

    Background Despite 82% of patients reporting psychosocial improvement following breast reconstruction, only 33% of patients choose to undergo surgery. Implant reconstruction outnumbers autologous reconstruction in many centres. Methods A systematic review of the literature was undertaken. Inclusion required: (I) Meta-analyses or review articles; (II) adult patients aged 18 years or over undergoing alloplastic breast reconstruction; (III) studies including outcome measures; (IV) case series with more than 10 patients; (V) English language; and (VI) publication after 1st January, 2000. Results After full text review, analysis and data extraction was conducted for a total of 63 articles. Definitive reconstruction with an implant can be immediate or delayed. Older patients have similar or even lower complication rates to younger patients. Complications include capsular contracture, hematoma and infection. Obesity, smoking, large breasts, diabetes and higher grade tumors are associated with increased risk of wound problems and reconstructive failure. Silicone implant patients have higher capsular contracture rates but have higher physical and psychosocial function. There were no associations made between silicone implants and cancer or systemic disease. There were no differences in outcomes or complications between round and shaped implants. Textured implants have a lower risk of capsular contracture than smooth implants. Smooth implants are more likely to be displaced as well as having higher rates of infection. Immediate breast reconstruction (IBR) gives the best aesthetic outcome if radiotherapy is not required but has a higher rate of capsular contracture and implant failure. Delayed-immediate reconstruction patients can achieve similar aesthetic results to IBR whilst preserving the breast skin if radiotherapy is required. Delayed breast reconstruction (DBR) patients have fewer complications than IBR patients. Conclusions Implant reconstruction is a safe and popular

  4. A dynamical regularization algorithm for solving inverse source problems of elliptic partial differential equations

    Science.gov (United States)

    Zhang, Ye; Gong, Rongfang; Cheng, Xiaoliang; Gulliksson, Mårten

    2018-06-01

    This study considers the inverse source problem for elliptic partial differential equations with both Dirichlet and Neumann boundary data. The unknown source term is to be determined by additional boundary conditions. Unlike the existing methods found in the literature, which usually employ the first-order in time gradient-like system (such as the steepest descent methods) for numerically solving the regularized optimization problem with a fixed regularization parameter, we propose a novel method with a second-order in time dissipative gradient-like system and a dynamical selected regularization parameter. A damped symplectic scheme is proposed for the numerical solution. Theoretical analysis is given for both the continuous model and the numerical algorithm. Several numerical examples are provided to show the robustness of the proposed algorithm.
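    The contrast with first-order flows can be made concrete with a toy discretization: instead of integrating x' = -grad J(x), one integrates the damped second-order system x'' + eta*x' = -grad J(x) with a simple damped semi-implicit step. The functional below is a Tikhonov-regularized least-squares stand-in for the discretized inverse source problem, and K, d and the parameter values are assumptions of this sketch, not the authors' damped symplectic scheme.

```python
# Toy damped second-order gradient flow for J(x) = 0.5||Kx-d||^2 + 0.5*alpha*||x||^2.
import numpy as np

def damped_second_order_flow(K, d, alpha=1e-3, eta=1.0, iters=2000):
    L = np.linalg.norm(K, 2) ** 2 + alpha                # gradient Lipschitz constant
    dt = 0.9 / np.sqrt(L)                                # stable step for the oscillatory part
    x = np.zeros(K.shape[1]); v = np.zeros(K.shape[1])
    for _ in range(iters):
        grad = K.T @ (K @ x - d) + alpha * x
        v = (v - dt * grad) / (1.0 + dt * eta)           # semi-implicit damping update
        x = x + dt * v
    return x
```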

  5. Inverse scattering problems with multi-frequencies

    International Nuclear Information System (INIS)

    Bao, Gang; Li, Peijun; Lin, Junshan; Triki, Faouzi

    2015-01-01

    This paper is concerned with computational approaches and mathematical analysis for solving inverse scattering problems in the frequency domain. The problems arise in a diverse set of scientific areas with significant industrial, medical, and military applications. In addition to nonlinearity, there are two common difficulties associated with the inverse problems: ill-posedness and limited resolution (diffraction limit). Due to the diffraction limit, for a given frequency, only a low spatial frequency part of the desired parameter can be observed from measurements in the far field. The main idea developed here is that if the reconstruction is restricted to only the observable part, then the inversion will become stable. The challenging task is how to design stable numerical methods for solving these inverse scattering problems inspired by the diffraction limit. Recently, novel recursive linearization based algorithms have been presented in an attempt to answer the above question. These methods require multi-frequency scattering data and proceed via a continuation procedure with respect to the frequency from low to high. The objective of this paper is to give a brief review of these methods, their error estimates, and the related mathematical analysis. More attention is paid to the inverse medium and inverse source problems. Numerical experiments are included to illustrate the effectiveness of these methods. (topical review)

  6. Polynomials, Riemann surfaces, and reconstructing missing-energy events

    CERN Document Server

    Gripaios, Ben; Webber, Bryan

    2011-01-01

    We consider the problem of reconstructing energies, momenta, and masses in collider events with missing energy, along with the complications introduced by combinatorial ambiguities and measurement errors. Typically, one reconstructs more than one value and we show how the wrong values may be correlated with the right ones. The problem has a natural formulation in terms of the theory of Riemann surfaces. We discuss examples including top quark decays in the Standard Model (relevant for top quark mass measurements and tests of spin correlation), cascade decays in models of new physics containing dark matter candidates, decays of third-generation leptoquarks in composite models of electroweak symmetry breaking, and Higgs boson decay into two tau leptons.

  7. A novel algorithm of super-resolution image reconstruction based on multi-class dictionaries for natural scene

    Science.gov (United States)

    Wu, Wei; Zhao, Dewei; Zhang, Huan

    2015-12-01

    Super-resolution image reconstruction is an effective method to improve image quality and has important research significance in the field of image processing. However, the choice of the dictionary directly affects the efficiency of image reconstruction. Sparse representation theory is introduced into the problem of nearest-neighbor selection. Building on the sparse-representation approach to super-resolution image reconstruction, a super-resolution reconstruction algorithm based on multi-class dictionaries is analyzed. This method avoids the redundancy of training only one over-complete dictionary, makes each sub-dictionary more representative, and replaces the traditional Euclidean-distance computation to improve the quality of the whole reconstructed image. In addition, non-local self-similarity regularization is introduced to address the ill-posed nature of the problem. Experimental results show that the algorithm achieves much better results than state-of-the-art algorithms in terms of both PSNR and visual perception.
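    A schematic of the multi-class dictionary idea is sketched below: training patches are clustered, a compact sub-dictionary is kept per cluster, and each input patch is sparsely coded only against the sub-dictionary of its nearest cluster. The clustering, the naive atom selection and the use of plain orthogonal matching pursuit are assumptions made for this illustration, not the algorithm of the record.

```python
# Multi-class dictionary sparse coding (schematic; illustrative choices throughout).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import OrthogonalMatchingPursuit

def build_class_dictionaries(train_patches, n_classes=8, atoms_per_class=32, seed=0):
    km = KMeans(n_clusters=n_classes, random_state=seed, n_init=10).fit(train_patches)
    dicts = []
    for c in range(n_classes):
        members = train_patches[km.labels_ == c][:atoms_per_class]   # naive atom selection
        D = members.T / np.maximum(np.linalg.norm(members.T, axis=0), 1e-12)
        dicts.append(D)                                  # one normalized sub-dictionary per class
    return km, dicts

def code_patch(patch, km, dicts, sparsity=4):
    c = km.predict(patch[None, :])[0]                    # pick the nearest sub-dictionary
    D = dicts[c]
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=sparsity, fit_intercept=False).fit(D, patch)
    return c, omp.coef_, D @ omp.coef_                   # class label, sparse code, approximation
```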

  8. Surface Reconstruction and Image Enhancement via $L^1$-Minimization

    KAUST Repository

    Dobrev, Veselin

    2010-01-01

    A surface reconstruction technique based on minimization of the total variation of the gradient is introduced. Convergence of the method is established, and an interior-point algorithm solving the associated linear programming problem is introduced. The reconstruction algorithm is illustrated on various test cases including natural and urban terrain data, and enhancement of low-resolution or aliased images. Copyright © by SIAM.

  9. Spectrotemporal CT data acquisition and reconstruction at low dose

    International Nuclear Information System (INIS)

    Clark, Darin P.; Badea, Cristian T.; Lee, Chang-Lung; Kirsch, David G.

    2015-01-01

    Purpose: X-ray computed tomography (CT) is widely used, both clinically and preclinically, for fast, high-resolution anatomic imaging; however, compelling opportunities exist to expand its use in functional imaging applications. For instance, spectral information combined with nanoparticle contrast agents enables quantification of tissue perfusion levels, while temporal information details cardiac and respiratory dynamics. The authors propose and demonstrate a projection acquisition and reconstruction strategy for 5D CT (3D + dual energy + time) which recovers spectral and temporal information without substantially increasing radiation dose or sampling time relative to anatomic imaging protocols. Methods: The authors approach the 5D reconstruction problem within the framework of low-rank and sparse matrix decomposition. Unlike previous work on rank-sparsity constrained CT reconstruction, the authors establish an explicit rank-sparse signal model to describe the spectral and temporal dimensions. The spectral dimension is represented as a well-sampled time and energy averaged image plus regularly undersampled principal components describing the spectral contrast. The temporal dimension is represented as the same time and energy averaged reconstruction plus contiguous, spatially sparse, and irregularly sampled temporal contrast images. Using a nonlinear, image-domain filtration approach, which the authors refer to as rank-sparse kernel regression, they transfer image structure from the well-sampled time and energy averaged reconstruction to the spectral and temporal contrast images. This regularization strategy strictly constrains the reconstruction problem while approximately separating the temporal and spectral dimensions. Separability results in a highly compressed representation for the 5D data in which projections are shared between the temporal and spectral reconstruction subproblems, enabling substantial undersampling. The authors solved the 5D reconstruction
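    The low-rank plus sparse signal model invoked above can be illustrated with a basic robust-PCA-style alternation on a generic 2D data matrix, combining singular-value thresholding for the low-rank part with soft thresholding for the sparse part. This is only a schematic of the decomposition idea; the paper's 5D projection-domain formulation and its rank-sparse kernel regression are not reproduced, and the threshold choices are assumptions.

```python
# Basic low-rank + sparse decomposition by alternating thresholding (schematic).
import numpy as np

def low_rank_plus_sparse(M, lam_l=None, lam_s=None, iters=50):
    M = np.asarray(M, dtype=float)
    lam_l = 0.3 * np.linalg.norm(M, 2) if lam_l is None else lam_l
    lam_s = 0.1 * np.abs(M).max() if lam_s is None else lam_s
    L = np.zeros_like(M); S = np.zeros_like(M)
    for _ in range(iters):
        U, sig, Vt = np.linalg.svd(M - S, full_matrices=False)
        L = (U * np.maximum(sig - lam_l, 0.0)) @ Vt      # singular-value thresholding
        S = np.sign(M - L) * np.maximum(np.abs(M - L) - lam_s, 0.0)  # soft threshold
    return L, S
```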

  10. A sparsity-regularized Born iterative method for reconstruction of two-dimensional piecewise continuous inhomogeneous domains

    KAUST Repository

    Sandhu, Ali Imran

    2016-04-10

    A sparsity-regularized Born iterative method (BIM) is proposed for efficiently reconstructing two-dimensional piecewise-continuous inhomogeneous dielectric profiles. Such profiles are typically not spatially sparse, which reduces the efficiency of the sparsity-promoting regularization. To overcome this problem, scattered fields are represented in terms of the spatial derivative of the dielectric profile and reconstruction is carried out over samples of the dielectric profile's derivative. Then, like the conventional BIM, the nonlinear problem is iteratively converted into a sequence of linear problems (in derivative samples) and sparsity constraint is enforced on each linear problem using the thresholded Landweber iterations. Numerical results, which demonstrate the efficiency and accuracy of the proposed method in reconstructing piecewise-continuous dielectric profiles, are presented.

  11. A simple iterative independent component analysis algorithm for vibration source signal identification of complex structures

    Directory of Open Access Journals (Sweden)

    Dong-Sup Lee

    2015-01-01

    Full Text Available Independent Component Analysis (ICA), one of the blind source separation methods, can be applied to extract unknown source signals from received signals only. This is accomplished by finding statistical independence of signal mixtures and has been successfully applied to myriad fields such as medical science, image processing, and numerous others. Nevertheless, there are inherent problems that have been reported when using this technique: instability and invalid ordering of separated signals, particularly when using a conventional ICA technique for vibratory source signal identification of complex structures. In this study, a simple iterative algorithm based on the conventional ICA has been proposed to mitigate these problems. The proposed method to extract more stable source signals with valid ordering includes an iterative reordering process of the extracted mixing matrix to reconstruct the finally converged source signals, referring to the magnitudes of correlation coefficients between the intermediately separated signals and the signals measured on or near the sources. In order to review the problems of the conventional ICA technique and to validate the proposed method, numerical analyses have been carried out for a virtual response model and a 30 m class submarine model. Moreover, in order to investigate the applicability of the proposed method to a real problem of a complex structure, an experiment has been carried out for a scaled submarine mockup. The results show that the proposed method can resolve the inherent problems of the conventional ICA technique.
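    The reordering step described above can be sketched in a few lines: run ICA on the mixtures, then match each extracted component to the reference signal (measured on or near a source) with which it is most strongly correlated, which fixes the arbitrary ordering and sign of ICA. The iterative re-mixing refinement of the paper is not reproduced, and the use of FastICA here is an assumption of this sketch.

```python
# ICA followed by correlation-based reordering against reference signals (sketch).
import numpy as np
from sklearn.decomposition import FastICA

def ica_with_reordering(mixtures, references):
    """mixtures: (n_samples, n_channels); references: (n_samples, n_sources)."""
    n_src = references.shape[1]
    S = FastICA(n_components=n_src, random_state=0).fit_transform(mixtures)
    order, signs = [], []
    for j in range(n_src):
        r = [np.corrcoef(references[:, j], S[:, k])[0, 1] for k in range(n_src)]
        k = int(np.argmax(np.abs(r)))                    # best-matching component
        order.append(k)
        signs.append(np.sign(r[k]))                      # also fix the sign ambiguity
    return S[:, order] * np.array(signs)                 # components arranged in source order
```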

  12. Special Inspector General for Iraq Reconstruction. Quarterly Report to the United States Congress

    National Research Council Canada - National Science Library

    Bowen, Jr, Stuart W

    2007-01-01

    .... relief and reconstruction program in Iraq. Two notable developments frame this Report. First, total relief and reconstruction investment for Iraq from all sources (the United States, Iraq, and other donors) passed...

  13. Evaluation of the spline reconstruction technique for PET

    Energy Technology Data Exchange (ETDEWEB)

    Kastis, George A., E-mail: gkastis@academyofathens.gr; Kyriakopoulou, Dimitra [Research Center of Mathematics, Academy of Athens, Athens 11527 (Greece); Gaitanis, Anastasios [Biomedical Research Foundation of the Academy of Athens (BRFAA), Athens 11527 (Greece); Fernández, Yolanda [Centre d’Imatge Molecular Experimental (CIME), CETIR-ERESA, Barcelona 08950 (Spain); Hutton, Brian F. [Institute of Nuclear Medicine, University College London, London NW1 2BU (United Kingdom); Fokas, Athanasios S. [Department of Applied Mathematics and Theoretical Physics, University of Cambridge, Cambridge CB30WA (United Kingdom)

    2014-04-15

    Purpose: The spline reconstruction technique (SRT), based on the analytic formula for the inverse Radon transform, has been presented earlier in the literature. In this study, the authors present an improved formulation and numerical implementation of this algorithm and evaluate it in comparison to filtered backprojection (FBP). Methods: The SRT is based on the numerical evaluation of the Hilbert transform of the sinogram via an approximation in terms of “custom made” cubic splines. By restricting reconstruction only within object pixels and by utilizing certain mathematical symmetries, the authors achieve a reconstruction time comparable to that of FBP. The authors have implemented SRT in STIR and have evaluated this technique using simulated data from a clinical positron emission tomography (PET) system, as well as real data obtained from clinical and preclinical PET scanners. For the simulation studies, the authors have simulated sinograms of a point source and three digital phantoms. Using these sinograms, the authors have created realizations of Poisson noise at five noise levels. In addition to visual comparisons of the reconstructed images, the authors have determined contrast and bias for different regions of the phantoms as a function of noise level. For the real-data studies, sinograms of an {sup 18}F-FDG injected mouse, a NEMA NU 4-2008 image quality phantom, and a Derenzo phantom have been acquired from a commercial PET system. The authors have determined: (a) coefficients of variation (COV) and contrast from the NEMA phantom, (b) contrast for the various sections of the Derenzo phantom, and (c) line profiles for the Derenzo phantom. Furthermore, the authors have acquired sinograms from a whole-body PET scan of an {sup 18}F-FDG injected cancer patient, using the GE Discovery ST PET/CT system. SRT and FBP reconstructions of the thorax have been visually evaluated. Results: The results indicate an improvement in FWHM and FWTM in both simulated and real

  14. Iterative observer based method for source localization problem for Poisson equation in 3D

    KAUST Repository

    Majeed, Muhammad Usman

    2017-07-10

    A state-observer based method is developed to solve point source localization problem for Poisson equation in a 3D rectangular prism with available boundary data. The technique requires a weighted sum of solutions of multiple boundary data estimation problems for Laplace equation over the 3D domain. The solution of each of these boundary estimation problems involves writing down the mathematical problem in state-space-like representation using one of the space variables as time-like. First, system observability result for 3D boundary estimation problem is recalled in an infinite dimensional setting. Then, based on the observability result, the boundary estimation problem is decomposed into a set of independent 2D sub-problems. These 2D problems are then solved using an iterative observer to obtain the solution. Theoretical results are provided. The method is implemented numerically using finite difference discretization schemes. Numerical illustrations along with simulation results are provided.

  15. On some control problems of dynamic of reactor

    Science.gov (United States)

    Baskakov, A. V.; Volkov, N. P.

    2017-12-01

    The paper analyzes controllability of the transient processes in some problems of nuclear reactor dynamics. In this case, the mathematical model of nuclear reactor dynamics is described by a system of integro-differential equations consisting of the non-stationary anisotropic multi-velocity kinetic equation of neutron transport and the balance equation of delayed neutrons. The paper defines the formulation of the linear problem on control of transient processes in nuclear reactors with application of spatially distributed actions on internal neutron sources, and the formulation of the nonlinear problems on control of transient processes with application of spatially distributed actions on the neutron absorption coefficient and the neutron scattering indicatrix. The required control actions depend on the spatial and velocity coordinates. The theorems on existence and uniqueness of these control actions are proved in the paper. To do this, the control problems mentioned above are reduced to equivalent systems of integral equations. Existence and uniqueness of the solution of this system of integral equations are proved by the method of successive approximations, which makes it possible to construct an iterative scheme for numerical analyses of transient processes in a given nuclear reactor with application of the developed mathematical model. Sufficient conditions for controllability of transient processes are also obtained. In conclusion, a connection is made between the control problems and the observation problems, which, based on the given information, allow us to reconstruct either the function of internal neutron sources, or the neutron absorption coefficient, or the neutron scattering indicatrix.

  16. Maximum Likelihood Reconstruction for Magnetic Resonance Fingerprinting.

    Science.gov (United States)

    Zhao, Bo; Setsompop, Kawin; Ye, Huihui; Cauley, Stephen F; Wald, Lawrence L

    2016-08-01

    This paper introduces a statistical estimation framework for magnetic resonance (MR) fingerprinting, a recently proposed quantitative imaging paradigm. Within this framework, we present a maximum likelihood (ML) formalism to estimate multiple MR tissue parameter maps directly from highly undersampled, noisy k-space data. A novel algorithm, based on variable splitting, the alternating direction method of multipliers, and the variable projection method, is developed to solve the resulting optimization problem. Representative results from both simulations and in vivo experiments demonstrate that the proposed approach yields significantly improved accuracy in parameter estimation, compared to the conventional MR fingerprinting reconstruction. Moreover, the proposed framework provides new theoretical insights into the conventional approach. We show analytically that the conventional approach is an approximation to the ML reconstruction; more precisely, it is exactly equivalent to the first iteration of the proposed algorithm for the ML reconstruction, provided that a gridding reconstruction is used as an initialization.

  17. Image reconstruction under non-Gaussian noise

    DEFF Research Database (Denmark)

    Sciacchitano, Federica

    During acquisition and transmission, images are often blurred and corrupted by noise. One of the fundamental tasks of image processing is to reconstruct the clean image from a degraded version. The process of recovering the original image from the data is an example of inverse problem. Due...... to the ill-posedness of the problem, the simple inversion of the degradation model does not give any good reconstructions. Therefore, to deal with the ill-posedness it is necessary to use some prior information on the solution or the model and the Bayesian approach. Additive Gaussian noise has been......D thesis intends to solve some of the many open questions for image restoration under non-Gaussian noise. The two main kinds of noise studied in this PhD project are the impulse noise and the Cauchy noise. Impulse noise is due to for instance the malfunctioning pixel elements in the camera sensors, errors...

  18. Reconstruction of chaotic signals with applications to chaos-based communications

    CERN Document Server

    Feng, Jiu Chao

    2008-01-01

    This book provides a systematic review of the fundamental theory of signal reconstruction and the practical techniques used in reconstructing chaotic signals. Specific applications of signal reconstruction methods in chaos-based communications are expounded in full detail, along with examples illustrating the various problems associated with such applications.The book serves as an advanced textbook for undergraduate and graduate courses in electronic and information engineering, automatic control, physics and applied mathematics. It is also highly suited for general nonlinear scientists who wi

  19. Inverse Heat Conduction Methods in the CHAR Code for Aerothermal Flight Data Reconstruction

    Science.gov (United States)

    Oliver, A. Brandon; Amar, Adam J.

    2016-01-01

    Reconstruction of flight aerothermal environments often requires the solution of an inverse heat transfer problem, which is an ill-posed problem of determining boundary conditions from discrete measurements in the interior of the domain. This paper will present the algorithms implemented in the CHAR code for use in reconstruction of EFT-1 flight data and future testing activities. Implementation details will be discussed, and alternative hybrid-methods that are permitted by the implementation will be described. Results will be presented for a number of problems.

  20. Electrical Impedance Tomography: 3D Reconstructions using Scattering Transforms

    DEFF Research Database (Denmark)

    Delbary, Fabrice; Hansen, Per Christian; Knudsen, Kim

    2012-01-01

    In three dimensions the Calderon problem was addressed and solved in theory in the 1980s. The main ingredients in the solution of the problem are complex geometrical optics solutions to the conductivity equation and a (non-physical) scattering transform. The resulting reconstruction algorithm...

  1. Electron bunch profile reconstruction in the few fs regime using coherent Smith-Purcell radiation

    International Nuclear Information System (INIS)

    Bartolini, R; Delerue, N; Doucas, G; Reichold, A; Clarke, C

    2012-01-01

    Advanced accelerators for fourth generation light sources based on high brightness linacs or laser-driven wakefield accelerators will operate with intense, highly relativistic electron bunches that are only a few fs long. Diagnostic techniques for the determination of the temporal profile of such bunches are required to be non-invasive, single-shot, economical, and with the required resolution in the fs regime. The use of a radiative process, such as coherent Smith-Purcell radiation (SPR), is particularly promising in this respect. In this technique the beam is made to radiate a small amount of electromagnetic radiation and the temporal profile is reconstructed from the measured spectral distribution of the radiation. We summarise the advantages of SPR and present the design parameters and preliminary results of the experiments at the FACET facility at SLAC. We also discuss a new approach to the problem of the recovery of the 'missing phase', which is essential for the accurate reconstruction of the temporal bunch profile.

  2. Electron Bunch Profile Reconstruction in the Few fs Regime using Coherent Smith-Purcell Radiation

    International Nuclear Information System (INIS)

    Bartolini, R.; Clarke, C.; Delerue, N.; Doucasa, G.; Reicholda, A.

    2012-01-01

    Advanced accelerators for fourth generation light sources based on high brightness linacs or laser-driven wakefield accelerators will operate with intense, highly relativistic electron bunches that are only a few fs long. Diagnostic techniques for the determination of the temporal profile of such bunches are required to be non-invasive, single-shot, economical, and with the required resolution in the fs regime. The use of a radiative process, such as coherent Smith-Purcell radiation (SPR), is particularly promising in this respect. In this technique the beam is made to radiate a small amount of electromagnetic radiation and the temporal profile is reconstructed from the measured spectral distribution of the radiation. We summarise the advantages of SPR and present the design parameters and preliminary results of the experiments at the FACET facility at SLAC. We also discuss a new approach to the problem of the recovery of the 'missing phase', which is essential for the accurate reconstruction of the temporal bunch profile.

  3. Exact reconstruction in 2D dynamic CT: compensation of time-dependent affine deformations

    International Nuclear Information System (INIS)

    Roux, Sebastien; Desbat, Laurent; Koenig, Anne; Grangeat, Pierre

    2004-01-01

    This work is dedicated to the reduction of reconstruction artefacts due to motion occurring during the acquisition of computerized tomographic projections. This problem has to be solved when imaging moving organs such as the lungs or the heart. The proposed method belongs to the class of motion compensation algorithms, where the model of motion is included in the reconstruction formula. We address two fundamental questions. First, what conditions on the deformation are required for the reconstruction of the object from projections acquired sequentially during the deformation; and second, how do we reconstruct the object from those projections. Here we answer these questions in the particular case of 2D general time-dependent affine deformations, assuming the motion parameters are known. We treat the problem of admissibility conditions on the deformation in the parallel-beam and fan-beam cases. Then we propose exact reconstruction methods based on rebinning or sequential FBP formulae for each of these geometries and present reconstructed images obtained with the fan-beam algorithm on simulated data.

  4. A modified sparse reconstruction method for three-dimensional synthetic aperture radar image

    Science.gov (United States)

    Zhang, Ziqiang; Ji, Kefeng; Song, Haibo; Zou, Huanxin

    2018-03-01

    There is an increasing interest in three-dimensional Synthetic Aperture Radar (3-D SAR) imaging from observed sparse scattering data. However, the existing 3-D sparse imaging method requires large computing times and storage capacity. In this paper, we propose a modified method for sparse 3-D SAR imaging. The method processes the collection of noisy SAR measurements, usually collected over nonlinear flight paths, and outputs 3-D SAR imagery. Firstly, the 3-D sparse reconstruction problem is transformed into a series of 2-D slice reconstruction problems by range compression. Then the slices are reconstructed by the modified SL0 (smoothed l0 norm) reconstruction algorithm. The improved algorithm uses a hyperbolic tangent function instead of the Gaussian function to approximate the l0 norm and the Newton direction instead of the steepest descent direction, which can speed up the convergence rate of the SL0 algorithm. Finally, numerical simulation results are given to demonstrate the effectiveness of the proposed algorithm. It is shown that our method, compared with the existing 3-D sparse imaging method, performs better in both reconstruction quality and reconstruction time.
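
    To make the smoothed-l0 idea concrete, the following minimal sketch runs a standard SL0-style loop in which the l0 norm is approximated by a sum of hyperbolic tangent terms. The Newton-direction acceleration described in the abstract is omitted (plain gradient steps plus projection onto the data constraint are used instead), and all parameter values are illustrative assumptions rather than the paper's settings.

```python
import numpy as np

def sl0_tanh(A, y, sigma_min=1e-3, sigma_decrease=0.6, mu=2.0, inner_iters=5):
    """Sketch of a smoothed-l0 (SL0) recovery loop using sum_i tanh(x_i^2 / (2 sigma^2))
    as a smooth surrogate of ||x||_0. Gradient steps on the surrogate are followed by
    projection onto {x : A x = y}; A is assumed to have full row rank."""
    A_pinv = A.T @ np.linalg.inv(A @ A.T)        # helper for projection onto A x = y
    x = A_pinv @ y                               # minimum-l2-norm starting point
    sigma = 2.0 * np.max(np.abs(x))
    while sigma > sigma_min:
        for _ in range(inner_iters):
            # gradient of the tanh surrogate of the l0 norm
            grad = (x / sigma**2) * (1.0 - np.tanh(x**2 / (2.0 * sigma**2))**2)
            x = x - mu * sigma**2 * grad         # descent step on the surrogate
            x = x - A_pinv @ (A @ x - y)         # project back onto the data constraint
        sigma *= sigma_decrease                  # anneal the smoothing parameter
    return x
```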

  5. SU-F-I-49: Vendor-Independent, Model-Based Iterative Reconstruction On a Rotating Grid with Coordinate-Descent Optimization for CT Imaging Investigations

    International Nuclear Information System (INIS)

    Young, S; Hoffman, J; McNitt-Gray, M; Noo, F

    2016-01-01

    Purpose: Iterative reconstruction methods show promise for improving image quality and lowering the dose in helical CT. We aim to develop a novel model-based reconstruction method that offers potential for dose reduction with reasonable computation speed and storage requirements for vendor-independent reconstruction from clinical data on a normal desktop computer. Methods: In 2012, Xu proposed reconstructing on rotating slices to exploit helical symmetry and reduce the storage requirements for the CT system matrix. Inspired by this concept, we have developed a novel reconstruction method incorporating the stored-system-matrix approach together with iterative coordinate-descent (ICD) optimization. A penalized-least-squares objective function with a quadratic penalty term is solved analytically voxel-by-voxel, sequentially iterating along the axial direction first, followed by the transaxial direction. 8 in-plane (transaxial) neighbors are used for the ICD algorithm. The forward problem is modeled via a unique approach that combines the principle of Joseph’s method with trilinear B-spline interpolation to enable accurate reconstruction with low storage requirements. Iterations are accelerated with multi-CPU OpenMP libraries. For preliminary evaluations, we reconstructed (1) a simulated 3D ellipse phantom and (2) an ACR accreditation phantom dataset exported from a clinical scanner (Definition AS, Siemens Healthcare). Image quality was evaluated in the resolution module. Results: Image quality was excellent for the ellipse phantom. For the ACR phantom, image quality was comparable to clinical reconstructions and reconstructions using open-source FreeCT-wFBP software. Also, we did not observe any deleterious impact associated with the utilization of rotating slices. The system matrix storage requirement was only 4.5GB, and reconstruction time was 50 seconds per iteration. Conclusion: Our reconstruction method shows potential for furthering research in low

  6. SU-F-I-49: Vendor-Independent, Model-Based Iterative Reconstruction On a Rotating Grid with Coordinate-Descent Optimization for CT Imaging Investigations

    Energy Technology Data Exchange (ETDEWEB)

    Young, S; Hoffman, J; McNitt-Gray, M [UCLA School of Medicine, Los Angeles, CA (United States); Noo, F [University of Utah, Salt Lake City, UT (United States)

    2016-06-15

    Purpose: Iterative reconstruction methods show promise for improving image quality and lowering the dose in helical CT. We aim to develop a novel model-based reconstruction method that offers potential for dose reduction with reasonable computation speed and storage requirements for vendor-independent reconstruction from clinical data on a normal desktop computer. Methods: In 2012, Xu proposed reconstructing on rotating slices to exploit helical symmetry and reduce the storage requirements for the CT system matrix. Inspired by this concept, we have developed a novel reconstruction method incorporating the stored-system-matrix approach together with iterative coordinate-descent (ICD) optimization. A penalized-least-squares objective function with a quadratic penalty term is solved analytically voxel-by-voxel, sequentially iterating along the axial direction first, followed by the transaxial direction. 8 in-plane (transaxial) neighbors are used for the ICD algorithm. The forward problem is modeled via a unique approach that combines the principle of Joseph’s method with trilinear B-spline interpolation to enable accurate reconstruction with low storage requirements. Iterations are accelerated with multi-CPU OpenMP libraries. For preliminary evaluations, we reconstructed (1) a simulated 3D ellipse phantom and (2) an ACR accreditation phantom dataset exported from a clinical scanner (Definition AS, Siemens Healthcare). Image quality was evaluated in the resolution module. Results: Image quality was excellent for the ellipse phantom. For the ACR phantom, image quality was comparable to clinical reconstructions and reconstructions using open-source FreeCT-wFBP software. Also, we did not observe any deleterious impact associated with the utilization of rotating slices. The system matrix storage requirement was only 4.5GB, and reconstruction time was 50 seconds per iteration. Conclusion: Our reconstruction method shows potential for furthering research in low
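
    As a simplified illustration of the analytic voxel-by-voxel update used in iterative coordinate descent, the sketch below minimizes a penalized-least-squares objective with a quadratic smoothness penalty over 1D neighbours. The stored-system-matrix, rotating-grid geometry, and B-spline forward model of the abstract are not reproduced; A is a generic dense matrix and the penalty structure is an assumption made purely for illustration.

```python
import numpy as np

def icd_penalized_ls(A, y, beta=0.1, n_sweeps=20):
    """Sketch of iterative coordinate descent (ICD) for
    ||A x - y||^2 + beta * sum_j (x_j - x_{j+1})^2.
    Each coordinate update is the exact 1D minimiser of the quadratic objective."""
    n = A.shape[1]
    x = np.zeros(n)
    e = A @ x - y                                   # running residual
    col_sq = np.sum(A * A, axis=0)                  # ||a_j||^2 for every column
    for _ in range(n_sweeps):
        for j in range(n):
            nbrs = [k for k in (j - 1, j + 1) if 0 <= k < n]
            num = A[:, j] @ e + beta * sum(x[j] - x[k] for k in nbrs)
            den = col_sq[j] + beta * len(nbrs)
            d = -num / den                          # analytic coordinate step
            x[j] += d
            e += d * A[:, j]                        # keep the residual consistent
    return x
```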

  7. Detector independent cellular automaton algorithm for track reconstruction

    Energy Technology Data Exchange (ETDEWEB)

    Kisel, Ivan; Kulakov, Igor; Zyzak, Maksym [Goethe Univ. Frankfurt am Main (Germany); Frankfurt Institute for Advanced Studies, Frankfurt am Main (Germany); GSI Helmholtzzentrum fuer Schwerionenforschung GmbH (Germany); Collaboration: CBM-Collaboration

    2013-07-01

    Track reconstruction is one of the most challenging problems of data analysis in modern high energy physics (HEP) experiments, which have to process on the order of 10{sup 7} events per second, with high track multiplicity and density, registered by detectors of different types and, in many cases, located in a non-homogeneous magnetic field. Creation of a reconstruction package common to all experiments is considered important in order to consolidate efforts. The cellular automaton (CA) track reconstruction approach has been used successfully in many HEP experiments. It is very simple, efficient, local and parallel. Meanwhile, it is intrinsically independent of the detector geometry and a good candidate for common track reconstruction. The CA implementation for the CBM experiment has been generalized and applied to the ALICE ITS and STAR HFT detectors. Tests with simulated collisions have been performed. The track reconstruction efficiencies are at the level of 95% for the majority of the signal tracks for all detectors.

  8. Optimization of reconstruction algorithms using Monte Carlo simulation

    International Nuclear Information System (INIS)

    Hanson, K.M.

    1989-01-01

    A method for optimizing reconstruction algorithms is presented that is based on how well a specified task can be performed using the reconstructed images. Task performance is numerically assessed by a Monte Carlo simulation of the complete imaging process including the generation of scenes appropriate to the desired application, subsequent data taking, reconstruction, and performance of the stated task based on the final image. The use of this method is demonstrated through the optimization of the Algebraic Reconstruction Technique (ART), which reconstructs images from their projections by an iterative procedure. The optimization is accomplished by varying the relaxation factor employed in the updating procedure. In some of the imaging situations studied, it is found that the optimization of constrained ART, in which a nonnegativity constraint is invoked, can vastly increase the detectability of objects. There is little improvement attained for unconstrained ART. The general method presented may be applied to the problem of designing neutron-diffraction spectrometers. 11 refs., 6 figs., 2 tabs

  9. Optimization of reconstruction algorithms using Monte Carlo simulation

    International Nuclear Information System (INIS)

    Hanson, K.M.

    1989-01-01

    A method for optimizing reconstruction algorithms is presented that is based on how well a specified task can be performed using the reconstructed images. Task performance is numerically assessed by a Monte Carlo simulation of the complete imaging process including the generation of scenes appropriate to the desired application, subsequent data taking, reconstruction, and performance of the stated task based on the final image. The use of this method is demonstrated through the optimization of the Algebraic Reconstruction Technique (ART), which reconstructs images from their projections by an iterative procedure. The optimization is accomplished by varying the relaxation factor employed in the updating procedure. In some of the imaging situations studied, it is found that the optimization of constrained ART, in which a non-negativity constraint is invoked, can vastly increase the detectability of objects. There is little improvement attained for unconstrained ART. The general method presented may be applied to the problem of designing neutron-diffraction spectrometers. (author)
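
    For reference, the sketch below shows the Kaczmarz-style ART update with the relaxation factor that the study optimizes, together with an optional non-negativity constraint applied after each sweep. The relaxation value and iteration count are placeholders, not the optimized values reported in the work.

```python
import numpy as np

def art(A, y, relaxation=0.5, n_iter=20, nonneg=True):
    """Sketch of the Algebraic Reconstruction Technique: cycle through the projection
    equations, apply relaxed Kaczmarz updates, and (optionally) enforce the
    non-negativity constraint of 'constrained ART' after every sweep."""
    x = np.zeros(A.shape[1])
    row_sq = np.sum(A * A, axis=1)                  # ||a_i||^2 for every ray
    for _ in range(n_iter):
        for i in range(A.shape[0]):
            if row_sq[i] == 0.0:
                continue
            x += relaxation * (y[i] - A[i] @ x) / row_sq[i] * A[i]
        if nonneg:
            np.maximum(x, 0.0, out=x)               # constrained ART
    return x
```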

  10. [Reconstructive surgery of cranio-orbital injuries].

    Science.gov (United States)

    Eolchiian, S A; Potapov, A A; Serova, N K; Kataev, M G; Sergeeva, L A; Zakharova, N E; Van Damm, P

    2011-01-01

    The aim of the study was to optimize evaluation and surgery of cranioorbital injuries in different periods after trauma. Material and methods. We analyzed 374 patients with cranioorbital injuries treated in Burdenko Neurosurgery Institute in different periods after trauma from January 1998 till April 2010. 288 (77%) underwent skull and facial skeleton reconstructive surgery within 24 hours - 7 years after trauma. Clinical and CT examination data were used for preoperative planning and assessment of surgery results. Stereolithographic models (STLM) were applied for preoperative planning in 89 cases. The follow-up period ranged from 4 months up to 10 years. Results. In 254 (88%) of 288 patients reconstruction of anterior skull base, upper and/or midface with restoration of different parts of orbit was performed. Anterior skull base CSF leaks repair, calvarial vault reconstruction, maxillary and mandibular osteosynthesis were done in 34 (12%) cases. 242 (84%) of 288 patients underwent one reconstructive operation, while 46 (16%)--two and more (totally 105 operations). The patients with extended frontoorbital and midface fractures commonly needed more than one operation--in 27 (62.8%) cases. Different plastic materials were used for reconstruction in 233 (80.9%) patients, of those in 147 (51%) cases split calvarial bone grafts were preferred. Good functional and cosmetic results were achieved in 261 (90.6%) of 288 patients while acceptable results were observed in 27 (9.4%). Conclusion. Active single-stage surgical management for repair of combined cranioorbital injury in the acute period with primary reconstruction optimizes functional and cosmetic outcomes and prevents the problems of delayed or secondary reconstruction. Severe extended anterior skull base, upper and midface injuries when intracranial surgery is needed produced the most challenging difficulties for adequate reconstruction. A randomized trial is required to define the extent and optimal timing of reconstructive surgery.

  11. Sources of spurious force oscillations from an immersed boundary method for moving-body problems

    Science.gov (United States)

    Lee, Jongho; Kim, Jungwoo; Choi, Haecheon; Yang, Kyung-Soo

    2011-04-01

    When a discrete-forcing immersed boundary method is applied to moving-body problems, it produces spurious force oscillations on a solid body. In the present study, we identify two sources of these force oscillations. One source is from the spatial discontinuity in the pressure across the immersed boundary when a grid point located inside a solid body becomes that of fluid with a body motion. The addition of mass source/sink together with momentum forcing proposed by Kim et al. [J. Kim, D. Kim, H. Choi, An immersed-boundary finite volume method for simulations of flow in complex geometries, Journal of Computational Physics 171 (2001) 132-150] reduces the spurious force oscillations by alleviating this pressure discontinuity. The other source is from the temporal discontinuity in the velocity at the grid points where fluid becomes solid with a body motion. The magnitude of velocity discontinuity decreases with decreasing the grid spacing near the immersed boundary. Four moving-body problems are simulated by varying the grid spacing at a fixed computational time step and at a constant CFL number, respectively. It is found that the spurious force oscillations decrease with decreasing the grid spacing and increasing the computational time step size, but they depend more on the grid spacing than on the computational time step size.

  12. Inspection of freeform surfaces considering uncertainties in measurement, localization and surface reconstruction

    International Nuclear Information System (INIS)

    Mehrad, Vahid; Xue, Deyi; Gu, Peihua

    2013-01-01

    Inspection of a manufactured freeform surface can be conducted by building its surface model and comparing this manufactured surface model with the ideal design surface model and its tolerance requirement. The manufactured freeform surface model is usually achieved by obtaining measurement points on the manufactured surface, transforming these measurement points from the measurement coordinate system to the design coordinate system through localization, and reconstructing the surface model using the localized measurement points. In this research, a method was developed to estimate the locations and their variances of any selected points on the reconstructed freeform surface considering different sources of uncertainties in measurement, localization and surface reconstruction processes. In this method, first locations and variances of the localized measurement points are calculated considering uncertainties of the measurement points and uncertainties introduced in the localization processes. Then locations and variances of points on the reconstructed freeform surface are obtained considering uncertainties of the localized measurement points and uncertainties introduced in the freeform surface reconstruction process. Two case studies were developed to demonstrate how these three different uncertainty sources influence the quality of the reconstructed freeform curve and freeform surface in inspection. (paper)

  13. The solar neutrino problem after the GALLEX artificial neutrino source experiment

    International Nuclear Information System (INIS)

    Vignaud, D.

    1995-01-01

    Using an intense 51 Cr artificial neutrino source (more than 60 PBq), the GALLEX solar neutrino collaboration has recently checked that its radiochemical detector was fully efficient for the detection of solar neutrinos. After this crucial result, the status of the solar neutrino problem is reviewed, with emphasis on how neutrino oscillations may explain (through the MSW effect) the different deficits observed in the four existing experiments. (author). 25 refs., 5 figs., 1 tab

  14. Ray tracing reconstruction investigation for C-arm tomosynthesis

    Science.gov (United States)

    Malalla, Nuhad A. Y.; Chen, Ying

    2016-04-01

    C-arm tomosynthesis is a three-dimensional imaging technique. Both the x-ray source and the detector are mounted on a C-arm wheeled structure to provide a wide variety of movement around the object. In this paper, C-arm tomosynthesis was introduced to provide three-dimensional information over a limited view angle (less than 180°) to reduce radiation exposure and examination time. Reconstruction algorithms based on the ray tracing method, such as ray tracing back projection (BP), simultaneous algebraic reconstruction technique (SART) and maximum likelihood expectation maximization (MLEM), were developed for C-arm tomosynthesis. C-arm tomosynthesis projection images of a simulated spherical object were generated with a virtual geometric configuration with a total view angle of 40 degrees. This study demonstrated the sharpness of the in-plane reconstructed structure and the effectiveness of removing out-of-plane blur for each reconstruction algorithm. Results showed the ability of ray tracing based reconstruction algorithms to provide three-dimensional information with limited-angle C-arm tomosynthesis.
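
    Of the three algorithms named above, MLEM has the most compact closed-form update, sketched below for a generic system matrix. The C-arm geometry and the ray tracer that populate that matrix are not modelled here; the array shapes and parameter values are assumptions made only to illustrate the multiplicative update.

```python
import numpy as np

def mlem(A, y, n_iter=50, eps=1e-12):
    """Sketch of maximum likelihood expectation maximization (MLEM) reconstruction.
    A: generic system matrix (rows = detector measurements, columns = voxels);
    y: measured projection data. The C-arm ray-tracing geometry is not modelled."""
    x = np.ones(A.shape[1])                         # strictly positive start
    sens = A.sum(axis=0)                            # sensitivity image A^T 1
    for _ in range(n_iter):
        proj = A @ x                                # forward projection
        ratio = y / np.maximum(proj, eps)           # measured / estimated counts
        x *= (A.T @ ratio) / np.maximum(sens, eps)  # multiplicative MLEM update
    return x
```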

  15. Magnetic flux reconstruction methods for shaped tokamaks

    International Nuclear Information System (INIS)

    Tsui, Chi-Wa.

    1993-12-01

    The use of a variational method permits the Grad-Shafranov (GS) equation to be solved by reducing the problem of solving the 2D non-linear partial differential equation to the problem of minimizing a function of several variables. This high-speed algorithm approximately solves the GS equation given a parameterization of the plasma boundary and the current profile (p' and FF' functions). The author treats the current profile parameters as unknowns. The goal is to reconstruct the internal magnetic flux surfaces of a tokamak plasma and the toroidal current density profile from the external magnetic measurements. This is a classic problem of inverse equilibrium determination. The current profile parameters can be evaluated by several different matching procedures. Matching of magnetic flux and field at the probe locations using the Biot-Savart law and magnetic Green's function provides a robust method of magnetic reconstruction. The matching of poloidal magnetic field on the plasma surface provides a unique method of identifying the plasma current profile. However, the power of this method is greatly compromised by the experimental errors of the magnetic signals. The Casing Principle provides a very fast way to evaluate the plasma contribution to the magnetic signals. It has the potential of being a fast matching method. The performance of this method is hindered by the accuracy of the poloidal magnetic field computed from the equilibrium solver. A flux reconstruction package has been implemented which integrates a vacuum field solver using a filament model for the plasma, a multi-layer perceptron neural network as an interface, and the volume integration of plasma current density using Green's functions as a matching method for the current profile parameters. The flux reconstruction package is applied to compare with the ASEQ and EFIT data. The results are promising.

  16. An inverse-source problem for maximization of pore-fluid oscillation within poroelastic formations

    KAUST Repository

    Jeong, C.; Kallivokas, L. F.

    2016-01-01

    This paper discusses a mathematical and numerical modeling approach for identification of an unknown optimal loading time signal of a wave source, atop the ground surface, that can maximize the relative wave motion of a single-phase pore fluid within fluid-saturated porous permeable (poroelastic) rock formations, surrounded by non-permeable semi-infinite elastic solid rock formations, in a one-dimensional setting. The motivation stems from a set of field observations, following seismic events and vibrational tests, suggesting that shaking an oil reservoir is likely to improve oil production rates. This maximization problem is cast into an inverse-source problem, seeking an optimal loading signal that minimizes an objective functional – the reciprocal of kinetic energy in terms of relative pore-fluid wave motion within target poroelastic layers. We use the finite element method to obtain the solution of the governing wave physics of a multi-layered system, where the wave equations for the target poroelastic layers and the elastic wave equation for the surrounding non-permeable layers are coupled with each other. We use a partial-differential-equation-constrained-optimization framework (a state-adjoint-control problem approach) to tackle the minimization problem. The numerical results show that the numerical optimizer recovers optimal loading signals, whose dominant frequencies correspond to amplification frequencies, which can also be obtained by a frequency sweep, leading to larger amplitudes of relative pore-fluid wave motion within the target hydrocarbon formation than other signals.

  17. An inverse-source problem for maximization of pore-fluid oscillation within poroelastic formations

    KAUST Repository

    Jeong, C.

    2016-07-04

    This paper discusses a mathematical and numerical modeling approach for identification of an unknown optimal loading time signal of a wave source, atop the ground surface, that can maximize the relative wave motion of a single-phase pore fluid within fluid-saturated porous permeable (poroelastic) rock formations, surrounded by non-permeable semi-infinite elastic solid rock formations, in a one-dimensional setting. The motivation stems from a set of field observations, following seismic events and vibrational tests, suggesting that shaking an oil reservoir is likely to improve oil production rates. This maximization problem is cast into an inverse-source problem, seeking an optimal loading signal that minimizes an objective functional – the reciprocal of kinetic energy in terms of relative pore-fluid wave motion within target poroelastic layers. We use the finite element method to obtain the solution of the governing wave physics of a multi-layered system, where the wave equations for the target poroelastic layers and the elastic wave equation for the surrounding non-permeable layers are coupled with each other. We use a partial-differential-equation-constrained-optimization framework (a state-adjoint-control problem approach) to tackle the minimization problem. The numerical results show that the numerical optimizer recovers optimal loading signals, whose dominant frequencies correspond to amplification frequencies, which can also be obtained by a frequency sweep, leading to larger amplitudes of relative pore-fluid wave motion within the target hydrocarbon formation than other signals.

  18. Comparison between correlated sampling and the perturbation technique of MCNP5 for fixed-source problems

    International Nuclear Information System (INIS)

    He Tao; Su Bingjing

    2011-01-01

    Highlights: → The performance of the MCNP differential operator perturbation technique is compared with that of the MCNP correlated sampling method for three types of fixed-source problems. → In terms of precision, the MCNP perturbation technique outperforms correlated sampling for one type of problem but performs comparably with or even under-performs correlated sampling for the other two types of problems. → In terms of accuracy, the MCNP perturbation calculations may predict inaccurate results for some of the test problems. However, the accuracy can be improved if the midpoint correction technique is used. - Abstract: Correlated sampling and the differential operator perturbation technique are two methods that enable MCNP (Monte Carlo N-Particle) to simulate small response change between an original system and a perturbed system. In this work the performance of the MCNP differential operator perturbation technique is compared with that of the MCNP correlated sampling method for three types of fixed-source problems. In terms of precision of predicted response changes, the MCNP perturbation technique outperforms correlated sampling for the problem involving variation of nuclide concentrations in the same direction but performs comparably with or even underperforms correlated sampling for the other two types of problems that involve void or variation of nuclide concentrations in opposite directions. In terms of accuracy, the MCNP differential operator perturbation calculations may predict inaccurate results that deviate from the benchmarks well beyond their uncertainty ranges for some of the test problems. However, the accuracy of the MCNP differential operator perturbation can be improved if the midpoint correction technique is used.

  19. Tomographic image reconstruction using training images

    DEFF Research Database (Denmark)

    Soltani, Sara; Andersen, Martin Skovgaard; Hansen, Per Christian

    2017-01-01

    We describe and examine an algorithm for tomographic image reconstruction where prior knowledge about the solution is available in the form of training images. We first construct a non-negative dictionary based on prototype elements from the training images; this problem is formulated within...

  20. Use of an object model in three dimensional image reconstruction. Application in medical imaging

    International Nuclear Information System (INIS)

    Delageniere-Guillot, S.

    1993-02-01

    Three-dimensional image reconstruction from projections corresponds to a set of techniques which give information on the inner structure of the studied object. These techniques are mainly used in medical imaging or in non-destructive evaluation. Image reconstruction is an ill-posed problem, so the inversion has to be regularized. This thesis deals with the introduction of a priori information within the reconstruction algorithm. The knowledge is introduced through an object model. The proposed scheme is applied to the medical domain for cone beam geometry. We address two specific problems. First, we study the reconstruction of high-contrast objects. This can be applied to bony morphology (bone/soft tissue) or to angiography (vascular structures opacified by injection of contrast agent). With noisy projections, the filtering steps of standard methods tend to smooth the natural transitions of the investigated object. In order to regularize the reconstruction but to keep contrast, we introduce a model of classes which involves Markov random field theory. We develop a reconstruction scheme: analytic reconstruction-reprojection. Then, we address the case of an object changing during the acquisition. This can be applied to angiography when the contrast agent is moving through the vascular tree. The problem is then stated as a dynamic reconstruction. We define an evolution AR model and we use an algebraic reconstruction method. We represent the object at a particular moment as an intermediary state between the state of the object at the beginning and at the end of the acquisition. We test both methods on simulated and real data, and we show how the use of an a priori model can improve the results. (author)

  1. Reframing a Problem: Identifying the Sources of Conflict in a Teacher Education Course

    Science.gov (United States)

    Quebec Fuentes, Sarah; Bloom, Mark

    2017-01-01

    This article exemplifies the critical initial phase of action research, problem identification, in the context of a teacher education course. After frustration arose between preservice elementary teachers (PSTs) and their instructor over classwork quality, the instructor employed reflective journaling and discussions to examine the source of the…

  2. On artefact-free reconstruction of low-energy (30–250 eV) electron holograms

    Energy Technology Data Exchange (ETDEWEB)

    Latychevskaia, Tatiana, E-mail: tatiana@physik.uzh.ch; Longchamp, Jean-Nicolas; Escher, Conrad; Fink, Hans-Werner

    2014-10-15

    Low-energy electrons (30–250 eV) have been successfully employed for imaging individual biomolecules. The simplest and most elegant design of a low-energy electron microscope for imaging biomolecules is a lensless setup that operates in the holographic mode. In this work we address the problems associated with reconstruction from the recorded holograms. We discuss the twin image problem intrinsic to inline holography and the problem of the so-called biprism-like effect specific to low-energy electrons. We demonstrate how the presence of the biprism-like effect can be efficiently identified and circumvented. The presented sideband filtering reconstruction method eliminates the twin image and allows for reconstruction despite the biprism-like effect, which we demonstrate on both simulated and experimental examples. - Highlights: • Radiation damage-free imaging of individual biomolecules. • Elimination of the twin image in inline holograms. • Circumventing biprism-like effect in low-energy electron holograms. • Artefact-free reconstructions of low-energy electron holograms.

  3. On artefact-free reconstruction of low-energy (30–250 eV) electron holograms

    International Nuclear Information System (INIS)

    Latychevskaia, Tatiana; Longchamp, Jean-Nicolas; Escher, Conrad; Fink, Hans-Werner

    2014-01-01

    Low-energy electrons (30–250 eV) have been successfully employed for imaging individual biomolecules. The simplest and most elegant design of a low-energy electron microscope for imaging biomolecules is a lensless setup that operates in the holographic mode. In this work we address the problems associated with reconstruction from the recorded holograms. We discuss the twin image problem intrinsic to inline holography and the problem of the so-called biprism-like effect specific to low-energy electrons. We demonstrate how the presence of the biprism-like effect can be efficiently identified and circumvented. The presented sideband filtering reconstruction method eliminates the twin image and allows for reconstruction despite the biprism-like effect, which we demonstrate on both simulated and experimental examples. - Highlights: • Radiation damage-free imaging of individual biomolecules. • Elimination of the twin image in inline holograms. • Circumventing biprism-like effect in low-energy electron holograms. • Artefact-free reconstructions of low-energy electron holograms

  4. Deep Learning Techniques for Top-Quark Reconstruction

    CERN Document Server

    Naderi, Kiarash

    2017-01-01

    Top quarks are unique probes of the standard model (SM) predictions and have the potential to be a window onto physics beyond the SM (BSM). Top quarks decay to a $Wb$ pair, and the $W$ can decay into leptons or jets. In a top pair event, assigning jets to their correct source is a challenge. In this study, I investigated different methods for improving top reconstruction. The main motivation was to use deep learning techniques to enhance the precision of top reconstruction.

  5. [Development and current situation of reconstruction methods following total sacrectomy].

    Science.gov (United States)

    Huang, Siyi; Ji, Tao; Guo, Wei

    2018-05-01

    To review the development of the reconstruction methods following total sacrectomy, and to provide a reference for finding a better reconstruction method following total sacrectomy. Case reports and biomechanical and finite element studies of reconstruction following total sacrectomy, both domestic and international, were searched, and the development and current situation were summarized. Over nearly 30 years of development, great progress has been made in reconstruction concepts and fixation techniques. The fixation methods can be summarized as the following three strategies: spinopelvic fixation (SPF), posterior pelvic ring fixation (PPRF), and anterior spinal column fixation (ASCF). SPF has undergone technical progress from intrapelvic rod and hook constructs to pedicle and iliac screw-rod systems. PPRF and ASCF could improve the stability of the reconstruction system. Reconstruction following total sacrectomy remains a challenge. Reconstruction combining SPF, PPRF, and ASCF is the developmental direction to achieve mechanical stability. How to gain biological fixation to improve the long-term stability is an urgent problem to be solved.

  6. Strategies for source space limitation in tomographic inverse procedures

    International Nuclear Information System (INIS)

    George, J.S.; Lewis, P.S.; Schlitt, H.A.; Kaplan, L.; Gorodnitsky, I.; Wood, C.C.

    1994-01-01

    The use of magnetic recordings for localization of neural activity requires the solution of an ill-posed inverse problem: i.e. the determination of the spatial configuration, orientation, and timecourse of the currents that give rise to a particular observed field distribution. In its general form, this inverse problem has no unique solution; due to superposition and the existence of silent source configurations, a particular magnetic field distribution at the head surface could be produced by any number of possible source configurations. However, by making assumptions concerning the number and properties of neural sources, it is possible to use numerical minimization techniques to determine the source model parameters that best account for the experimental observations while satisfying numerical or physical criteria. In this paper the authors describe progress on the development and validation of inverse procedures that produce distributed estimates of neuronal currents. The goal is to produce a temporal sequence of 3-D tomographic reconstructions of the spatial patterns of neural activation. Such approaches have a number of advantages, in principle. Because they do not require estimates of model order and parameter values (beyond specification of the source space), they minimize the influence of investigator decisions and are suitable for automated analyses. These techniques also allow localization of sources that are not point-like; experimental studies of cognitive processes and of spontaneous brain activity are likely to require distributed source models
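
    As one classical example of a distributed inverse estimate over a fixed source space (not the specific procedures developed by the authors), the sketch below computes a Tikhonov-regularized minimum-norm solution: every candidate source location receives a current estimate, rather than fitting a small number of point-like dipoles. The function name, regularization scaling, and array shapes are illustrative assumptions.

```python
import numpy as np

def minimum_norm_estimate(leadfield, data, lam=1e-2):
    """Illustrative sketch of a Tikhonov-regularized minimum-norm distributed inverse:
    leadfield is (n_sensors, n_sources), data is (n_sensors,) or (n_sensors, n_times).
    Returns an estimated current amplitude for every source-space location."""
    L = leadfield
    # regularize the sensor-space Gram matrix; lam scales with its average eigenvalue
    gram = L @ L.T + lam * np.trace(L @ L.T) / L.shape[0] * np.eye(L.shape[0])
    return L.T @ np.linalg.solve(gram, data)        # distributed current estimate
```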

  7. Weight Multispectral Reconstruction Strategy for Enhanced Reconstruction Accuracy and Stability With Cerenkov Luminescence Tomography.

    Science.gov (United States)

    Hongbo Guo; Xiaowei He; Muhan Liu; Zeyu Zhang; Zhenhua Hu; Jie Tian

    2017-06-01

    Cerenkov luminescence tomography (CLT) provides a novel technique for 3-D noninvasive detection of radiopharmaceuticals in living subjects. However, because of the severe scattering of Cerenkov light, the reconstruction accuracy and stability of CLT are still unsatisfactory. In this paper, a modified weight multispectral CLT (wmCLT) reconstruction strategy was developed which split the Cerenkov radiation spectrum into several sub-spectral bands and weighted the sub-spectral results to obtain the final result. To better evaluate the properties of the wmCLT reconstruction strategy in terms of accuracy, stability and practicability, several numerical simulation experiments and in vivo experiments were conducted and the results obtained were compared with the traditional multispectral CLT (mCLT) and hybrid-spectral CLT (hCLT) reconstruction strategies. The numerical simulation results indicated that the wmCLT strategy significantly improved the accuracy of Cerenkov source localization and intensity quantitation and exhibited good stability in suppressing noise. The comparison of the results achieved from different in vivo experiments further indicated significant improvement of the wmCLT strategy in terms of the shape recovery of the bladder and the spatial resolution of imaging xenograft tumors. Overall, the strategy reported here will facilitate the development of nuclear and optical molecular tomography in theoretical study.

  8. A biomechanical modeling-guided simultaneous motion estimation and image reconstruction technique (SMEIR-Bio) for 4D-CBCT reconstruction

    Science.gov (United States)

    Huang, Xiaokun; Zhang, You; Wang, Jing

    2018-02-01

    Reconstructing four-dimensional cone-beam computed tomography (4D-CBCT) images directly from respiratory phase-sorted traditional 3D-CBCT projections can capture target motion trajectory, reduce motion artifacts, and reduce imaging dose and time. However, the limited number of projections in each phase after phase-sorting decreases CBCT image quality under traditional reconstruction techniques. To address this problem, we developed a simultaneous motion estimation and image reconstruction (SMEIR) algorithm, an iterative method that can reconstruct higher quality 4D-CBCT images from limited projections using an inter-phase intensity-driven motion model. However, the accuracy of the intensity-driven motion model is limited in regions with fine details whose quality is degraded due to the insufficient projection number, which consequently degrades the reconstructed image quality in corresponding regions. In this study, we developed a new 4D-CBCT reconstruction algorithm by introducing biomechanical modeling into SMEIR (SMEIR-Bio) to boost the accuracy of the motion model in regions with small fine structures. The biomechanical modeling uses tetrahedral meshes to model organs of interest and solves internal organ motion using tissue elasticity parameters and mesh boundary conditions. This physics-driven approach enhances the accuracy of solved motion in the organ’s fine-structure regions. This study used 11 lung patient cases to evaluate the performance of SMEIR-Bio, making both qualitative and quantitative comparisons between SMEIR-Bio, SMEIR, and the algebraic reconstruction technique with total variation regularization (ART-TV). The reconstruction results suggest that SMEIR-Bio improves the motion model’s accuracy in regions containing small fine details, which consequently enhances the accuracy and quality of the reconstructed 4D-CBCT images.

  9. Why do commercial CT scanners still employ traditional, filtered back-projection for image reconstruction?

    International Nuclear Information System (INIS)

    Pan, Xiaochuan; Sidky, Emil Y; Vannier, Michael

    2009-01-01

    Despite major advances in x-ray sources, detector arrays, gantry mechanical design and especially computer performance, one component of computed tomography (CT) scanners has remained virtually constant for the past 25 years—the reconstruction algorithm. Fundamental advances have been made in the solution of inverse problems, especially tomographic reconstruction, but these works have not been translated into clinical and related practice. The reasons are not obvious and are seldom discussed. This review seeks to examine the reasons for this discrepancy and provides recommendations on how it can be resolved. We take the example of the field of compressive sensing (CS), summarizing this new area of research from the perspective of practical medical physicists and explaining the disconnection between theoretical and application-oriented research. Using a few issues specific to CT, which engineers have addressed in very specific ways, we try to distill the mathematical problem underlying each of these issues with the hope of demonstrating that there are interesting mathematical problems of general importance that can result from in-depth analysis of specific issues. We then sketch some unconventional CT-imaging designs that have the potential to impact CT applications, if the link between applied mathematicians and engineers/physicists were stronger. Finally, we close with some observations on how the link could be strengthened. There is, we believe, an important opportunity to rapidly improve the performance of CT and related tomographic imaging techniques by addressing these issues. (topical review)

  10. An interior-point method for total variation regularized positron emission tomography image reconstruction

    Science.gov (United States)

    Bai, Bing

    2012-03-01

    There has been a lot of work on total variation (TV) regularized tomographic image reconstruction recently. Many of these use gradient-based optimization algorithms with a differentiable approximation of the TV functional. In this paper we apply TV regularization in Positron Emission Tomography (PET) image reconstruction. We reconstruct the PET image in a Bayesian framework, using a Poisson noise model and a TV prior functional. The original optimization problem is transformed into an equivalent problem with inequality constraints by adding auxiliary variables. Then we use an interior point method with logarithmic barrier functions to solve the constrained optimization problem. In this method, a series of points approaching the solution from inside the feasible region are found by solving a sequence of subproblems characterized by an increasing positive parameter. We use the preconditioned conjugate gradient (PCG) algorithm to solve the subproblems directly. The nonnegativity constraint is enforced by a bend line search. The exact expression of the TV functional is used in our calculations. Simulation results show that the algorithm converges fast and the convergence is insensitive to the values of the regularization and reconstruction parameters.
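
    To make the objective concrete, the sketch below evaluates the barrier-augmented MAP cost implied by the abstract: the negative Poisson log-likelihood plus an exact (non-smoothed) TV term, with a logarithmic barrier enforcing positivity. The variable splitting, PCG inner solver, and bend line search of the paper are not reproduced, and the parameter values and background term are illustrative assumptions.

```python
import numpy as np

def tv_exact(img):
    """Exact (isotropic, non-smoothed) total variation of a 2D image."""
    dx = np.diff(img, axis=0, append=img[-1:, :])
    dy = np.diff(img, axis=1, append=img[:, -1:])
    return np.sum(np.sqrt(dx**2 + dy**2))

def barrier_objective(x, A, y, shape, beta=0.05, r=0.0, mu=1e-3):
    """Sketch of the barrier-augmented MAP objective for Poisson data with a TV prior:
    sum(A x + r - y * log(A x + r)) + beta * TV(x) - mu * sum(log x).
    x must be strictly positive; A is a generic (n_measurements, n_pixels) system
    matrix and beta, r, mu are placeholder values."""
    proj = A @ x + r                               # expected counts
    nll = np.sum(proj - y * np.log(proj))          # Poisson negative log-likelihood
    return nll + beta * tv_exact(x.reshape(shape)) - mu * np.sum(np.log(x))
```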

  11. Accelerated Compressed Sensing Based CT Image Reconstruction.

    Science.gov (United States)

    Hashemi, SayedMasoud; Beheshti, Soosan; Gill, Patrick R; Paul, Narinder S; Cobbold, Richard S C

    2015-01-01

    In X-ray computed tomography (CT), an important objective is to reduce the radiation dose without significantly degrading the image quality. Compressed sensing (CS) enables the radiation dose to be reduced by producing diagnostic images from a limited number of projections. However, conventional CS-based algorithms are computationally intensive and time-consuming. We propose a new algorithm that accelerates the CS-based reconstruction by using a fast pseudopolar Fourier based Radon transform and rebinning the diverging fan beams to parallel beams. The reconstruction process is analyzed using a maximum a posteriori approach, which is transformed into a weighted CS problem. The weights involved in the proposed model are calculated based on the statistical characteristics of the reconstruction process, which is formulated in terms of the measurement noise and rebinning interpolation error. Therefore, the proposed method not only accelerates the reconstruction, but also removes the rebinning and interpolation errors. Simulation results are shown for phantoms and a patient. For example, a 512 × 512 Shepp-Logan phantom when reconstructed from 128 rebinned projections using a conventional CS method had 10% error, whereas with the proposed method the reconstruction error was less than 1%. Moreover, computation times of less than 30 sec were obtained using a standard desktop computer without numerical optimization.

  12. Accelerated Compressed Sensing Based CT Image Reconstruction

    Directory of Open Access Journals (Sweden)

    SayedMasoud Hashemi

    2015-01-01

    Full Text Available In X-ray computed tomography (CT) an important objective is to reduce the radiation dose without significantly degrading the image quality. Compressed sensing (CS) enables the radiation dose to be reduced by producing diagnostic images from a limited number of projections. However, conventional CS-based algorithms are computationally intensive and time-consuming. We propose a new algorithm that accelerates the CS-based reconstruction by using a fast pseudopolar Fourier-based Radon transform and rebinning the diverging fan beams to parallel beams. The reconstruction process is analyzed using a maximum a posteriori approach, which is transformed into a weighted CS problem. The weights involved in the proposed model are calculated based on the statistical characteristics of the reconstruction process, which is formulated in terms of the measurement noise and the rebinning interpolation error. Therefore, the proposed method not only accelerates the reconstruction, but also removes the rebinning and interpolation errors. Simulation results are shown for phantoms and a patient. For example, a 512 × 512 Shepp-Logan phantom when reconstructed from 128 rebinned projections using a conventional CS method had 10% error, whereas with the proposed method the reconstruction error was less than 1%. Moreover, computation times of less than 30 s were obtained using a standard desktop computer without numerical optimization.

  13. Elastic cartilage reconstruction by transplantation of cultured hyaline cartilage-derived chondrocytes.

    Science.gov (United States)

    Mizuno, M; Takebe, T; Kobayashi, S; Kimura, S; Masutani, M; Lee, S; Jo, Y H; Lee, J I; Taniguchi, H

    2014-05-01

    Current surgical intervention for craniofacial defects caused by injuries or abnormalities uses reconstructive materials, such as autologous cartilage grafts. Transplantation of autologous tissues, however, imposes significant invasiveness on patients, and many efforts have been made toward establishing an alternative graft. Recently, we and others have shown the potential use of reconstructed elastic cartilage from ear-derived chondrocytes or progenitors, with their unique elastic properties. Here, we examined the differentiation potential of canine joint cartilage-derived chondrocytes into elastic cartilage, with the aim of expanding the range of possible cell sources to include hyaline cartilage. Articular chondrocytes were isolated from canine joints, cultivated, and compared with auricular chondrocytes with respect to proliferation rates, gene expression, extracellular matrix production, and cartilage reconstruction capability after transplantation. Canine articular chondrocytes proliferated less robustly than auricular chondrocytes, but there was no significant difference in the amount of sulfated glycosaminoglycan produced by redifferentiated chondrocytes. Furthermore, in vitro expanded and redifferentiated articular chondrocytes were shown to reconstruct elastic cartilage on transplantation that has histologic characteristics distinct from hyaline cartilage. Taken together, cultured hyaline cartilage-derived chondrocytes are a possible cell source for elastic cartilage reconstruction. Crown Copyright © 2014. Published by Elsevier Inc. All rights reserved.

  14. A constrained reconstruction technique of hyperelasticity parameters for breast cancer assessment

    Science.gov (United States)

    Mehrabian, Hatef; Campbell, Gordon; Samani, Abbas

    2010-12-01

    In breast elastography, breast tissue usually undergoes large compression resulting in significant geometric and structural changes. This implies that breast elastography is associated with tissue nonlinear behavior. In this study, an elastography technique is presented and an inverse problem formulation is proposed to reconstruct parameters characterizing tissue hyperelasticity. Such parameters can potentially be used for tumor classification. This technique can also have other important clinical applications such as measuring normal tissue hyperelastic parameters in vivo. Such parameters are essential in planning and conducting computer-aided interventional procedures. The proposed parameter reconstruction technique uses a constrained iterative inversion; it can be viewed as an inverse problem. To solve this problem, we used a nonlinear finite element model corresponding to its forward problem. In this research, we applied Veronda-Westmann, Yeoh and polynomial models to model tissue hyperelasticity. To validate the proposed technique, we conducted studies involving numerical and tissue-mimicking phantoms. The numerical phantom consisted of a hemisphere connected to a cylinder, while we constructed the tissue-mimicking phantom from polyvinyl alcohol with freeze-thaw cycles that exhibits nonlinear mechanical behavior. Both phantoms consisted of three types of soft tissues which mimic adipose, fibroglandular tissue and a tumor. The results of the simulations and experiments show feasibility of accurate reconstruction of tumor tissue hyperelastic parameters using the proposed method. In the numerical phantom, all hyperelastic parameters corresponding to the three models were reconstructed with less than 2% error. With the tissue-mimicking phantom, we were able to reconstruct the ratio of the hyperelastic parameters reasonably accurately. Compared to the uniaxial test results, the average error of the ratios of the parameters reconstructed for inclusion to the middle
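
    The constrained iterative inversion loop described above (repeatedly run the forward model, compare its output with the measured response, and update the hyperelastic parameters within physical bounds) can be sketched as a generic bound-constrained nonlinear least-squares fit. In the sketch below the nonlinear finite-element forward problem of the paper is replaced by a hypothetical analytic surrogate and the parameter names are made up; it only illustrates the structure of the inversion, not the authors' implementation.

```python
# Schematic constrained iterative inversion: a (hypothetical) forward model maps
# hyperelastic-type parameters to an observable response, and bound-constrained
# nonlinear least squares repeatedly calls it to update the parameters. In the
# paper the forward model is a nonlinear finite-element solver; here it is a toy
# analytic surrogate so the sketch stays self-contained.
import numpy as np
from scipy.optimize import least_squares

stretches = np.linspace(1.0, 1.3, 15)              # applied compression/stretch levels

def forward_model(params, lam):
    c10, c20 = params                               # toy stand-ins for hyperelastic parameters
    return c10 * (lam**2 - 1.0 / lam) + c20 * (lam**2 - 1.0 / lam) ** 2

true_params = np.array([2.0, 0.5])
rng = np.random.default_rng(12)
measured = forward_model(true_params, stretches) + 0.02 * rng.standard_normal(stretches.size)

def residual(params):                               # misfit between model and "measurement"
    return forward_model(params, stretches) - measured

fit = least_squares(residual, x0=[1.0, 1.0], bounds=([0.0, 0.0], [10.0, 10.0]))
print("recovered parameters:", np.round(fit.x, 3))
```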

  15. A constrained reconstruction technique of hyperelasticity parameters for breast cancer assessment

    Energy Technology Data Exchange (ETDEWEB)

    Mehrabian, Hatef; Samani, Abbas [Department of Electrical and Computer Engineering, University of Western Ontario, London, ON (Canada); Campbell, Gordon, E-mail: asamani@uwo.c [Department of Medical Biophysics, University of Western Ontario, London, ON (Canada)

    2010-12-21

    In breast elastography, breast tissue usually undergoes large compression resulting in significant geometric and structural changes. This implies that breast elastography is associated with tissue nonlinear behavior. In this study, an elastography technique is presented and an inverse problem formulation is proposed to reconstruct parameters characterizing tissue hyperelasticity. Such parameters can potentially be used for tumor classification. This technique can also have other important clinical applications such as measuring normal tissue hyperelastic parameters in vivo. Such parameters are essential in planning and conducting computer-aided interventional procedures. The proposed parameter reconstruction technique uses a constrained iterative inversion; it can be viewed as an inverse problem. To solve this problem, we used a nonlinear finite element model corresponding to its forward problem. In this research, we applied Veronda-Westmann, Yeoh and polynomial models to model tissue hyperelasticity. To validate the proposed technique, we conducted studies involving numerical and tissue-mimicking phantoms. The numerical phantom consisted of a hemisphere connected to a cylinder, while we constructed the tissue-mimicking phantom from polyvinyl alcohol with freeze-thaw cycles that exhibits nonlinear mechanical behavior. Both phantoms consisted of three types of soft tissues which mimic adipose, fibroglandular tissue and a tumor. The results of the simulations and experiments show feasibility of accurate reconstruction of tumor tissue hyperelastic parameters using the proposed method. In the numerical phantom, all hyperelastic parameters corresponding to the three models were reconstructed with less than 2% error. With the tissue-mimicking phantom, we were able to reconstruct the ratio of the hyperelastic parameters reasonably accurately. Compared to the uniaxial test results, the average error of the ratios of the parameters reconstructed for inclusion to the middle

  16. Computed tomography imaging with the Adaptive Statistical Iterative Reconstruction (ASIR) algorithm: dependence of image quality on the blending level of reconstruction.

    Science.gov (United States)

    Barca, Patrizio; Giannelli, Marco; Fantacci, Maria Evelina; Caramella, Davide

    2018-06-01

    Computed tomography (CT) is a useful and widely employed imaging technique, which represents the largest source of population exposure to ionizing radiation in industrialized countries. Adaptive Statistical Iterative Reconstruction (ASIR) is an iterative reconstruction algorithm with the potential to allow reduction of radiation exposure while preserving diagnostic information. The aim of this phantom study was to assess the performance of ASIR, in terms of a number of image quality indices, when different reconstruction blending levels are employed. CT images of the Catphan-504 phantom were reconstructed using conventional filtered back-projection (FBP) and ASIR with reconstruction blending levels of 20, 40, 60, 80, and 100%. Noise, noise power spectrum (NPS), contrast-to-noise ratio (CNR) and modulation transfer function (MTF) were estimated for different scanning parameters and contrast objects. Noise decreased and CNR increased non-linearly up to 50 and 100%, respectively, with increasing blending level of reconstruction. ASIR was also shown to modify the shape of the NPS curve. The MTF of ASIR-reconstructed images depended on tube load/contrast and decreased with increasing blending level of reconstruction. In particular, for low radiation exposure and low contrast acquisitions, ASIR showed lower performance than FBP in terms of spatial resolution for all blending levels of reconstruction. CT image quality varies substantially with the blending level of reconstruction. ASIR has the potential to reduce noise whilst maintaining diagnostic information in low radiation exposure CT imaging. Given the opposite variation of CNR and spatial resolution with the blending level of reconstruction, it is recommended to use an optimal value of this parameter for each specific clinical application.
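
    ASIR itself is proprietary, but the blending level is commonly described as a linear mix between an FBP-like and a fully iterative reconstruction, and noise and CNR are measured in regions of interest of a phantom image. The sketch below illustrates only those two ideas on synthetic images; the image generation and region choices are made up and are not the study's measurement protocol.

```python
# Illustration of reconstruction blending and of simple noise / CNR measurements.
# "fbp_img" and "ir_img" are synthetic stand-ins for an FBP and a fully iterative
# reconstruction; the blended image is a linear mix controlled by the blending level.
import numpy as np

rng = np.random.default_rng(2)
shape = (128, 128)
truth = np.zeros(shape); truth[48:80, 48:80] = 10.0       # low-contrast insert on zero background
fbp_img = truth + rng.normal(0.0, 8.0, shape)             # noisier, FBP-like image
ir_img = truth + rng.normal(0.0, 3.0, shape)              # smoother, IR-like image

def blend(level):                                          # level in [0, 1], e.g. 0.4 for "40%"
    return (1.0 - level) * fbp_img + level * ir_img

roi = (slice(56, 72), slice(56, 72))                       # inside the insert
bkg = (slice(8, 40), slice(8, 40))                         # background region

for level in (0.0, 0.2, 0.4, 0.6, 0.8, 1.0):
    img = blend(level)
    noise = img[bkg].std()
    cnr = abs(img[roi].mean() - img[bkg].mean()) / img[bkg].std()
    print(f"blend {level:.0%}: noise={noise:.2f}  CNR={cnr:.2f}")
```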

  17. Reconstruction and modernization of Novi Han radioactive waste repository

    International Nuclear Information System (INIS)

    Kolev, I.

    2001-01-01

    The paper is an overview of the Feasibility Study for Reconstruction and Modernisation of Novi Han Radioactive Waste Repository (NHRWR), performed by EQE Bulgaria AD in 2000 for the Institute for Nuclear Research and Nuclear Energy of the Bulgarian Academy of Sciences. The Study develops a concept for overall reconstruction and modernisation of NHRWR, including rehabilitation and partial reconstruction of existing facilities and planning of new facilities for conditioning and storage of various radioactive wastes and spent sealed sources. The paper presents the general modernisation concept and outlines the proposed principal technical solutions for the existing and new facilities. (author)

  18. The coordination of problem solving strategies: when low competence sources exert more influence on task processing than high competence sources.

    Science.gov (United States)

    Quiamzade, Alain; Mugny, Gabriel; Darnon, Céline

    2009-03-01

    Previous research has shown that low-competence sources, compared to highly competent sources, can exert influence in aptitude tasks inasmuch as they induce people to focus on the task and to solve it more deeply. Two experiments aimed at testing the coordination between one's own and the source's problem-solving strategies as a main explanation of such a difference in influence. The influence of a low- versus high-competence source was examined in an anagram task that allows for distinguishing between three response strategies, including one that corresponds to the coordination between the source's strategy and participants' own strategy. In Study 1 the strategy suggested by the source was either relevant and useful or irrelevant and useless for solving the task. Results indicated that participants used the coordination strategy to a greater extent when they had been confronted with a low-competence rather than a highly competent source, but only when the source displayed a strategy that was useful for solving the task. In Study 2 the source's strategy was always relevant and useful, but a decentring procedure was introduced for half of the participants. This procedure induced participants to consider points of view other than their own. Results replicated the difference observed in Study 1 when no decentring was introduced. The difference, however, disappeared when decentring was induced, owing to an increase in the high-competence source's influence. These results highlight coordination of strategies as one mechanism underlying influence from low-competence sources.

  19. Iterative observer based method for source localization problem for Poisson equation in 3D

    KAUST Repository

    Majeed, Muhammad Usman; Laleg-Kirati, Taous-Meriem

    2017-01-01

    A state-observer based method is developed to solve point source localization problem for Poisson equation in a 3D rectangular prism with available boundary data. The technique requires a weighted sum of solutions of multiple boundary data

  20. Direct maximum parsimony phylogeny reconstruction from genotype data.

    Science.gov (United States)

    Sridhar, Srinath; Lam, Fumei; Blelloch, Guy E; Ravi, R; Schwartz, Russell

    2007-12-05

    Maximum parsimony phylogenetic tree reconstruction from genetic variation data is a fundamental problem in computational genetics with many practical applications in population genetics, whole genome analysis, and the search for genetic predictors of disease. Efficient methods are available for reconstruction of maximum parsimony trees from haplotype data, but such data are difficult to determine directly for autosomal DNA. Data are more commonly available in the form of genotypes, which consist of conflated combinations of pairs of haplotypes from homologous chromosomes. Currently, there are no general algorithms for the direct reconstruction of maximum parsimony phylogenies from genotype data. Phylogenetic applications for autosomal data must therefore rely on other methods to first computationally infer haplotypes from genotypes. In this work, we develop the first practical method for computing maximum parsimony phylogenies directly from genotype data. We show that the standard practice of first inferring haplotypes from genotypes and then reconstructing a phylogeny on the haplotypes often substantially overestimates phylogeny size. As an immediate application, our method can be used to determine the minimum number of mutations required to explain a given set of observed genotypes. Phylogeny reconstruction directly from unphased data is computationally feasible for moderate-sized problem instances and can lead to substantially more accurate tree size inferences than the standard practice of treating phasing and phylogeny construction as two separate analysis stages. The difference between the approaches is particularly important for downstream applications that require a lower bound on the number of mutations that the genetic region has undergone.

  1. Direct maximum parsimony phylogeny reconstruction from genotype data

    Directory of Open Access Journals (Sweden)

    Ravi R

    2007-12-01

    Full Text Available Abstract Background: Maximum parsimony phylogenetic tree reconstruction from genetic variation data is a fundamental problem in computational genetics with many practical applications in population genetics, whole genome analysis, and the search for genetic predictors of disease. Efficient methods are available for reconstruction of maximum parsimony trees from haplotype data, but such data are difficult to determine directly for autosomal DNA. Data are more commonly available in the form of genotypes, which consist of conflated combinations of pairs of haplotypes from homologous chromosomes. Currently, there are no general algorithms for the direct reconstruction of maximum parsimony phylogenies from genotype data. Phylogenetic applications for autosomal data must therefore rely on other methods to first computationally infer haplotypes from genotypes. Results: In this work, we develop the first practical method for computing maximum parsimony phylogenies directly from genotype data. We show that the standard practice of first inferring haplotypes from genotypes and then reconstructing a phylogeny on the haplotypes often substantially overestimates phylogeny size. As an immediate application, our method can be used to determine the minimum number of mutations required to explain a given set of observed genotypes. Conclusion: Phylogeny reconstruction directly from unphased data is computationally feasible for moderate-sized problem instances and can lead to substantially more accurate tree size inferences than the standard practice of treating phasing and phylogeny construction as two separate analysis stages. The difference between the approaches is particularly important for downstream applications that require a lower bound on the number of mutations that the genetic region has undergone.

  2. Simultaneous reconstruction of shape and generalized impedance functions in electrostatic imaging

    International Nuclear Information System (INIS)

    Cakoni, Fioralba; Hu, Yuqing; Kress, Rainer

    2014-01-01

    Determining the geometry and the physical nature of an inclusion within a conducting medium from voltage and current measurements on the accessible boundary of the medium can be modeled as an inverse boundary value problem for the Laplace equation subject to appropriate boundary conditions on the inclusion. We continue the investigations on the particular inverse problem with a generalized impedance condition started in Cakoni and Kress (2013 Inverse Problems 29 015005) by presenting an inverse algorithm for the simultaneous reconstruction of both the shape of the inclusion and the two impedance functions via a boundary integral equation approach. In addition to describing the reconstruction algorithm and illustrating its feasibility by numerical examples we also provide some extensions to the uniqueness results in Cakoni and Kress (2013 Inverse Problems 29 015005). (paper)

  3. Recent advances in iterative reconstruction for clinical SPECT/PET and CT.

    Science.gov (United States)

    Hutton, Brian F

    2011-08-01

    Statistical iterative reconstruction is now widely used in clinical practice and has contributed to significant improvement in image quality in recent years. Although primarily used for reconstruction in emission tomography (both single photon emission computed tomography (SPECT) and positron emission tomography (PET)) there is increasing interest in also applying similar algorithms to x-ray computed tomography (CT). There is increasing complexity in the factors that are included in the reconstruction, a demonstration of the versatility of the approach. Research continues with exploration of methods for further improving reconstruction quality with effective correction for various sources of artefact.

  4. An Inverse Source Problem for a One-dimensional Wave Equation: An Observer-Based Approach

    KAUST Repository

    Asiri, Sharefa M.

    2013-05-25

    Observers are well known in the theory of dynamical systems. They are used to estimate the states of a system from some measurements. However, recently observers have also been developed to estimate unknowns of systems governed by partial differential equations. Our aim is to design an observer to solve the inverse source problem for a one-dimensional wave equation. First, the problem is discretized in both space and time, and then an adaptive observer based on partial field measurements (i.e., measurements taken from the solution of the wave equation) is applied to estimate both the states and the source. We assess the effectiveness of this observer in both noise-free and noisy cases. In each case, numerical simulations are provided to illustrate the effectiveness of this approach. Finally, we compare the performance of the observer approach with the Tikhonov regularization approach.
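
    One simple observer-style way to estimate both the states and an unknown (time-invariant) source from partial measurements of a discretized wave equation is to append the source to the state vector and run a standard Kalman filter; the sketch below does exactly that. It is a stand-in for, not a reproduction of, the adaptive observer described above: the grid size, sensor locations, source profile and noise levels are all made up, and the quality of the estimate depends strongly on those choices.

```python
# Sketch of joint state-and-source estimation for a space-discretized 1D wave
# equation. The unknown (constant-in-time) source profile f(x) is appended to the
# state z = [u, v, f] and a standard Kalman filter is run on partial, noisy
# measurements of the displacement field u.
import numpy as np
from scipy.linalg import expm

N, c, dt, nsteps = 30, 1.0, 2e-3, 600
dx = 1.0 / (N + 1)
L = (np.diag(-2.0 * np.ones(N)) + np.diag(np.ones(N - 1), 1)
     + np.diag(np.ones(N - 1), -1)) / dx**2            # Dirichlet Laplacian

# Continuous-time dynamics:  u' = v,  v' = c^2 L u + f,  f' = 0
Ac = np.zeros((3 * N, 3 * N))
Ac[:N, N:2 * N] = np.eye(N)
Ac[N:2 * N, :N] = c**2 * L
Ac[N:2 * N, 2 * N:] = np.eye(N)
F = expm(Ac * dt)                                      # exact discrete-time propagator

sensors = [4, 9, 14, 19, 24]                           # grid indices where u is measured
H = np.zeros((len(sensors), 3 * N)); H[np.arange(len(sensors)), sensors] = 1.0

x = np.linspace(dx, 1 - dx, N)
f_true = np.exp(-200 * (x - 0.6) ** 2)                 # unknown source profile
rng = np.random.default_rng(3)

# Simulate the "true" system and noisy partial measurements
z = np.concatenate([np.zeros(N), np.zeros(N), f_true])
ys = []
for _ in range(nsteps):
    z = F @ z
    ys.append(H @ z + 1e-3 * rng.standard_normal(len(sensors)))

# Kalman filter with the source included in the estimated state
zh, P = np.zeros(3 * N), np.eye(3 * N)
Q, R = 1e-10 * np.eye(3 * N), 1e-6 * np.eye(len(sensors))
for y in ys:
    zh, P = F @ zh, F @ P @ F.T + Q                    # predict
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    zh = zh + K @ (y - H @ zh)                         # update
    P = (np.eye(3 * N) - K @ H) @ P

print("max source estimation error:", np.abs(zh[2 * N:] - f_true).max())
```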

  5. Solving seismological problems using SGRAPH program: I-source parameters and hypocentral location

    International Nuclear Information System (INIS)

    Abdelwahed, Mohamed F.

    2012-01-01

    SGRAPH is one of the seismological programs used to maintain and analyze seismic data. It is considered unique in being able to read a wide range of data formats and in combining complementary tools for different seismological subjects in a stand-alone Windows-based application. SGRAPH efficiently performs basic waveform analysis and solves advanced seismological problems. The graphical user interface (GUI) utilities and the Windows facilities, such as dialog boxes, menus, and toolbars, simplify user interaction with the data. SGRAPH supports common data formats such as SAC, SEED, GSE, ASCII, and Nanometrics Y-format, among others. It provides the facilities to solve many seismological problems with its built-in inversion and modeling tools. In this paper, I discuss some of the inversion tools built into SGRAPH related to source parameters and hypocentral location estimation. First, a description of the SGRAPH program is given, discussing some of its features. Second, the inversion tools are applied to selected events of the Dahshour earthquakes as an example of estimating the spectral and source parameters of local earthquakes. In addition, the hypocentral locations of these events are estimated using the Hypoinverse 2000 program operated by SGRAPH.
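
    Spectral source-parameter estimation of the kind referred to above typically fits an omega-square (Brune-type) model to the displacement amplitude spectrum, yielding the low-frequency plateau Omega0 and the corner frequency fc. The sketch below is a generic illustration of that fit on synthetic data, not SGRAPH's implementation; the numerical values are made up.

```python
# Generic omega-square (Brune-type) spectral fit for source-parameter estimation.
# Omega0 (low-frequency plateau) and fc (corner frequency) are estimated from a
# synthetic displacement amplitude spectrum; this is an illustration only.
import numpy as np
from scipy.optimize import curve_fit

def brune(f, omega0, fc):
    return omega0 / (1.0 + (f / fc) ** 2)

rng = np.random.default_rng(4)
f = np.logspace(-1, 1.5, 120)                      # 0.1 Hz to about 31.6 Hz
true_omega0, true_fc = 3.0e-6, 4.0                 # illustrative values
spec = brune(f, true_omega0, true_fc) * np.exp(0.1 * rng.standard_normal(f.size))

# Fit in log-amplitude so all decades of frequency contribute comparably
popt, _ = curve_fit(lambda ff, o0, fc: np.log(brune(ff, o0, fc)),
                    f, np.log(spec), p0=(1e-6, 1.0),
                    bounds=([1e-9, 0.1], [1e-3, 100.0]))
omega0_est, fc_est = popt
print(f"Omega0 = {omega0_est:.2e}, fc = {fc_est:.2f} Hz")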
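```

    From Omega0 and fc, quantities such as the seismic moment and source radius are then obtained with standard relations (not shown here).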

  6. Reconstruction of a uniform star object from interior x-ray data: uniqueness, stability and algorithm

    International Nuclear Information System (INIS)

    Van Gompel, Gert; Batenburg, K Joost; Defrise, Michel

    2009-01-01

    In this paper we consider the problem of reconstructing a two-dimensional star-shaped object of uniform density from truncated projections of the object. In particular, we prove that such an object is uniquely determined by its parallel projections sampled over a full π angular range with a detector that only covers an interior field-of-view, even if the density of the object is not known a priori. We analyze the stability of this reconstruction problem and propose a reconstruction algorithm. Simulation experiments demonstrate that the algorithm is capable of reconstructing a star-shaped object from interior data, even if the interior region is much smaller than the size of the object. In addition, we present results for a heuristic reconstruction algorithm called DART, which was recently proposed. The heuristic method is shown to yield accurate reconstructions if the density is known in advance, and to have very good stability in the presence of noisy projection data. Finally, the performance of the DBP and DART algorithms is illustrated for the reconstruction of real micro-CT data of a diamond.

  7. Light field reconstruction robust to signal dependent noise

    Science.gov (United States)

    Ren, Kun; Bian, Liheng; Suo, Jinli; Dai, Qionghai

    2014-11-01

    Capturing four-dimensional light field data sequentially using a coded aperture camera is an effective approach but suffers from a low signal-to-noise ratio. Although multiplexing can help raise the acquisition quality, noise remains a significant issue, especially for fast acquisition. To address this problem, this paper proposes a noise-robust light field reconstruction method. First, a scene-dependent noise model is studied and incorporated into the light field reconstruction framework. Then, we derive an optimization algorithm for the final reconstruction. We built a prototype by modifying an off-the-shelf camera for data capture and used it to prove the concept. The effectiveness of this method is validated with experiments on real captured data.

  8. On simulated annealing phase transitions in phylogeny reconstruction.

    Science.gov (United States)

    Strobl, Maximilian A R; Barker, Daniel

    2016-08-01

    Phylogeny reconstruction with global criteria is NP-complete or NP-hard, hence in general requires a heuristic search. We investigate the powerful, physically inspired, general-purpose heuristic simulated annealing, applied to phylogeny reconstruction. Simulated annealing mimics the physical process of annealing, where a liquid is gently cooled to form a crystal. During the search, periods of elevated specific heat occur, analogous to physical phase transitions. These simulated annealing phase transitions play a crucial role in the outcome of the search. Nevertheless, they have received comparably little attention, for phylogeny or other optimisation problems. We analyse simulated annealing phase transitions during searches for the optimal phylogenetic tree for 34 real-world multiple alignments. In the same way in which melting temperatures differ between materials, we observe distinct specific heat profiles for each input file. We propose that this reflects differences in the search landscape and can serve as a measure of problem difficulty and of the suitability of the algorithm's parameters. We discuss application in algorithmic optimisation and as a diagnostic to assess parameterisation before computationally costly, large phylogeny reconstructions are launched. Whilst the focus here lies on phylogeny reconstruction under maximum parsimony, it is plausible that our results are more widely applicable to optimisation procedures in science and industry. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
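
    The specific-heat profile mentioned above can be estimated for any simulated annealing run by recording the variance of the sampled energies at each temperature, C(T) = Var(E)/T^2. The sketch below does this for a toy combinatorial objective (a random coupling matrix over spin-like variables), not for phylogenetic trees; peaks in the printed profile play the role of the phase transitions discussed in the abstract.

```python
# Generic simulated annealing with specific-heat tracking (toy objective, not the
# phylogeny search from the paper). At each temperature the variance of the
# sampled energies gives a specific-heat estimate C(T) = Var(E) / T^2.
import numpy as np

rng = np.random.default_rng(5)
n = 30
J = rng.standard_normal((n, n)); J = (J + J.T) / 2        # random symmetric couplings

def energy(s):                                            # toy spin-glass-like energy
    return -s @ J @ s

s = rng.choice([-1.0, 1.0], size=n)
E, T = energy(s), 10.0
profile = []
while T > 0.05:
    energies = []
    for _ in range(1500):                                 # sweep at fixed temperature
        i = rng.integers(n)
        s[i] *= -1                                        # propose a single flip
        E_new = energy(s)
        if E_new <= E or rng.random() < np.exp(-(E_new - E) / T):
            E = E_new                                     # accept
        else:
            s[i] *= -1                                    # reject: undo the flip
        energies.append(E)
    e = np.array(energies)
    profile.append((T, e.var() / T**2))                   # specific-heat estimate
    T *= 0.85                                             # geometric cooling schedule

for temp, C in profile[::5]:
    print(f"T = {temp:6.2f}   C = {C:10.2f}")
```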

  9. 3-D time-domain induced polarization tomography: a new approach based on a source current density formulation

    Science.gov (United States)

    Soueid Ahmed, A.; Revil, A.

    2018-04-01

    Induced polarization (IP) of porous rocks can be associated with a secondary source current density, which is proportional to both the intrinsic chargeability and the primary (applied) current density. This gives the possibility of reformulating the time domain induced polarization (TDIP) problem as a time-dependent self-potential-type problem. This new approach implies a change of strategy regarding data acquisition and inversion, allowing major time savings for both. For inverting TDIP data, we first retrieve the electrical resistivity distribution. Then, we use this electrical resistivity distribution to reconstruct the primary current density during the injection/retrieval of the (primary) current between the current electrodes A and B. The time-lapse secondary source current density distribution is determined given the primary source current density and a distribution of chargeability (forward modelling step). The inverse problem is linear between the secondary voltages (measured at all the electrodes) and the computed secondary source current density. A kernel matrix relating the secondary observed voltage data to the source current density model is computed once (using the electrical conductivity distribution), and then used throughout the inversion process. This recovered source current density model is in turn used to estimate the time-dependent chargeability (normalized voltages) in each cell of the domain of interest. Assuming a Cole-Cole model for simplicity, we can reconstruct the 3-D distributions of the relaxation time τ and the Cole-Cole exponent c by fitting the intrinsic chargeability decay curve to a Cole-Cole relaxation model for each cell. Two simple cases are studied in detail to explain this new approach. In the first case, we estimate the Cole-Cole parameters as well as the source current density field from a synthetic TDIP data set. Our approach is successfully able to reveal the presence of the anomaly and to invert its Cole-Cole parameters.
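
    The per-cell step at the end of the abstract amounts to fitting a relaxation model to each cell's chargeability decay curve. The exact time-domain Cole-Cole response has no elementary closed form, so the sketch below uses a stretched-exponential decay m(t) = m0 exp(-(t/tau)^c) purely as an illustrative surrogate for recovering a relaxation time tau and an exponent c; the time gates, noise level and parameter values are all made up.

```python
# Per-cell decay-curve fitting with a stretched-exponential SURROGATE (not the
# true Cole-Cole time-domain kernel): illustrates how a relaxation time tau and
# an exponent c could be recovered from a chargeability decay curve.
import numpy as np
from scipy.optimize import curve_fit

def decay(t, m0, tau, c):
    return m0 * np.exp(-(t / tau) ** c)

rng = np.random.default_rng(6)
t = np.linspace(0.01, 2.0, 40)                      # time gates after current shut-off (s)
true = dict(m0=0.12, tau=0.35, c=0.6)
data = decay(t, **true) + 0.002 * rng.standard_normal(t.size)

popt, pcov = curve_fit(decay, t, data, p0=(0.1, 0.5, 0.5),
                       bounds=([0.0, 1e-3, 0.1], [1.0, 10.0, 1.0]))
m0, tau, c = popt
print(f"m0 = {m0:.3f}, tau = {tau:.3f} s, c = {c:.2f}")
```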

  10. Improved magnetic resonance fingerprinting reconstruction with low-rank and subspace modeling.

    Science.gov (United States)

    Zhao, Bo; Setsompop, Kawin; Adalsteinsson, Elfar; Gagoski, Borjan; Ye, Huihui; Ma, Dan; Jiang, Yun; Ellen Grant, P; Griswold, Mark A; Wald, Lawrence L

    2018-02-01

    This article introduces a constrained imaging method based on low-rank and subspace modeling to improve the accuracy and speed of MR fingerprinting (MRF). A new model-based imaging method is developed for MRF to reconstruct high-quality time-series images and accurate tissue parameter maps (e.g., T1, T2, and spin density maps). Specifically, the proposed method exploits low-rank approximations of MRF time-series images, and further enforces temporal subspace constraints to capture magnetization dynamics. This allows the time-series image reconstruction problem to be formulated as a simple linear least-squares problem, which enables efficient computation. After image reconstruction, tissue parameter maps are estimated via dictionary-based pattern matching, as in the conventional approach. The effectiveness of the proposed method was evaluated with in vivo experiments. Compared with the conventional MRF reconstruction, the proposed method reconstructs time-series images with significantly reduced aliasing artifacts and noise contamination. Although the conventional approach exhibits some robustness to these corruptions, the improved time-series image reconstruction in turn provides more accurate tissue parameter maps. The improvement is pronounced especially when the acquisition time becomes short. The proposed method significantly improves the accuracy of MRF, and also reduces data acquisition time. Magn Reson Med 79:933-942, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
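
    The core idea, constraining each voxel's time series to a low-dimensional temporal subspace so that reconstruction reduces to a linear least-squares fit for the subspace coefficients, can be shown in a toy setting. In the sketch below the subspace is taken from an SVD of a made-up signal dictionary and the encoding operator is a random matrix; none of the dimensions or signal models correspond to the authors' implementation.

```python
# Toy illustration of subspace-constrained reconstruction: every voxel's time
# series is modelled as Phi @ alpha, where Phi spans a K-dimensional temporal
# subspace estimated from a signal dictionary by SVD, so recovering alpha from
# undersampled linear measurements is a linear least-squares problem.
import numpy as np

rng = np.random.default_rng(7)
T, K, n_atoms = 200, 5, 300

# Stand-in "dictionary" of smooth temporal signatures (not Bloch-simulated MRF atoms)
tt = np.linspace(0, 1, T)[:, None]
dictionary = (np.exp(-tt / rng.uniform(0.05, 1.0, n_atoms))
              * np.cos(2 * np.pi * rng.uniform(0, 3, n_atoms) * tt))
U, s, _ = np.linalg.svd(dictionary, full_matrices=False)
Phi = U[:, :K]                                    # temporal subspace basis (T x K)

x_true = dictionary[:, 17]                        # one voxel's "true" time series

# Undersampled, noisy linear measurements y = A x + noise (A is a random stand-in
# for the Fourier/coil encoding operator)
m = 60
A = rng.standard_normal((m, T)) / np.sqrt(m)
y = A @ x_true + 0.01 * rng.standard_normal(m)

# Subspace-constrained least squares: solve min_alpha ||A Phi alpha - y||^2
alpha = np.linalg.lstsq(A @ Phi, y, rcond=None)[0]
x_rec = Phi @ alpha
print("relative reconstruction error:",
      np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true))
```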

  11. Few-view image reconstruction with dual dictionaries

    International Nuclear Information System (INIS)

    Lu Yang; Zhao Jun; Wang Ge

    2012-01-01

    In this paper, we formulate the problem of computed tomography (CT) under sparsity and few-view constraints, and propose a novel algorithm for image reconstruction from few-view data utilizing the simultaneous algebraic reconstruction technique (SART) coupled with dictionary learning, sparse representation and total variation (TV) minimization on two interconnected levels. The main feature of our algorithm is the use of two dictionaries: a transitional dictionary for atom matching and a global dictionary for image updating. The atoms in the global and transitional dictionaries represent the image patches from high-quality and low-quality CT images, respectively. Experiments with simulated and real projections were performed to evaluate and validate the proposed algorithm. The results reconstructed using the proposed approach are significantly better than those using either SART or SART–TV. (paper)
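
    The SART component referred to above has a simple closed-form update: the current image is corrected by a row-normalized residual that is back-projected and scaled column-wise. The sketch below runs that plain update on a tiny random system; the dictionary-learning and TV steps of the paper are omitted, and the random matrix is only a stand-in for a real projector.

```python
# Plain SART update on a tiny synthetic system (only the SART component of the
# algorithm described above). Update:
#   x <- x + lam * (1/colsum) * A^T [ (b - A x) / rowsum ]
import numpy as np

rng = np.random.default_rng(8)
m, n = 80, 64                          # e.g. 64 unknown pixels, 80 ray sums
A = rng.random((m, n)) * (rng.random((m, n)) < 0.2)   # sparse nonnegative "system matrix"
x_true = rng.random(n)
b = A @ x_true

row_sum = A.sum(axis=1); row_sum[row_sum == 0] = 1.0
col_sum = A.sum(axis=0); col_sum[col_sum == 0] = 1.0

x, lam = np.zeros(n), 0.9
for _ in range(200):
    residual = (b - A @ x) / row_sum
    x = x + lam * (A.T @ residual) / col_sum
    x = np.maximum(x, 0.0)             # nonnegativity, as commonly enforced in CT

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```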

  12. Reconstruction of the central carbon metabolism of Aspergillus niger

    DEFF Research Database (Denmark)

    David, Helga; Åkesson, Mats Fredrik; Nielsen, Jens

    2003-01-01

    The topology of central carbon metabolism of Aspergillus niger was identified and the metabolic network reconstructed, by integrating genomic, biochemical and physiological information available for this microorganism and other related fungi. The reconstructed network may serve as a valuable...... of metabolic fluxes using metabolite balancing. This framework was employed to perform an in silico characterisation of the phenotypic behaviour of A. niger grown on different carbon sources. The effects on growth of single reaction deletions were assessed and essential biochemical reactions were identified...... for different carbon sources. Furthermore, application of the stoichiometric model for assessing the metabolic capabilities of A. niger to produce metabolites was evaluated by using succinate production as a case study....

  13. Blockwise conjugate gradient methods for image reconstruction in volumetric CT.

    Science.gov (United States)

    Qiu, W; Titley-Peloquin, D; Soleimani, M

    2012-11-01

    Cone beam computed tomography (CBCT) enables volumetric image reconstruction from 2D projection data and plays an important role in image guided radiation therapy (IGRT). Filtered back projection is still the most frequently used algorithm in applications. The algorithm discretizes the scanning process (forward projection) into a system of linear equations, which must then be solved to recover images from measured projection data. The conjugate gradients (CG) algorithm and its variants can be used to solve (possibly regularized) linear systems of equations Ax = b and linear least-squares problems min_x ||b - Ax||_2, especially when the matrix A is very large and sparse. Their applications can be found in a general CT context, but in tomography problems (e.g. CBCT reconstruction) they have not been widely used. Hence, CBCT reconstruction using the CG-type algorithm LSQR was implemented and studied in this paper. In CBCT reconstruction, the main computational challenge is that the matrix A usually is very large, and storing it in full requires an amount of memory well beyond the reach of commodity computers. Because of these memory capacity constraints, only a small fraction of the weighting matrix A is typically used, leading to a poor reconstruction. In this paper, to overcome this difficulty, the matrix A is partitioned and stored blockwise, and blockwise matrix-vector multiplications are implemented within LSQR. This implementation allows us to use the full weighting matrix A for CBCT reconstruction without further enhancing computer standards. Tikhonov regularization can also be implemented in this fashion, and can produce significant improvement in the reconstructed images. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
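
    The blockwise idea is that LSQR only ever needs the products Ax and A^T y, so the matrix can be kept as a list of row blocks and never assembled in one piece. The sketch below wraps such block products in a SciPy LinearOperator and hands them to lsqr; the block sizes are made up, and in practice the blocks would be generated on the fly or streamed from disk rather than held as small sparse matrices.

```python
# Blockwise matrix-vector products for LSQR: the system matrix is kept as a list
# of row blocks and wrapped in a LinearOperator, so the full matrix is never
# assembled in memory at once.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import LinearOperator, lsqr

rng = np.random.default_rng(9)
n, block_rows, n_blocks = 500, 200, 6
blocks = [sp.random(block_rows, n, density=0.02, random_state=i, format="csr")
          for i in range(n_blocks)]
m = block_rows * n_blocks

def matvec(x):                                   # A @ x, computed block by block
    return np.concatenate([B @ x for B in blocks])

def rmatvec(y):                                  # A.T @ y, accumulated block by block
    out = np.zeros(n)
    for i, B in enumerate(blocks):
        out += B.T @ y[i * block_rows:(i + 1) * block_rows]
    return out

A = LinearOperator((m, n), matvec=matvec, rmatvec=rmatvec)
x_true = rng.standard_normal(n)
b = A.matvec(x_true)

x_est = lsqr(A, b, damp=1e-3, iter_lim=200)[0]   # damp adds a Tikhonov-type penalty
print("relative error:", np.linalg.norm(x_est - x_true) / np.linalg.norm(x_true))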
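```

    The damp argument of lsqr corresponds to solving min ||Ax - b||^2 + damp^2 ||x||^2, which is the Tikhonov-regularized variant mentioned in the abstract.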

  14. Modeling of Pixelated Detector in SPECT Pinhole Reconstruction.

    Science.gov (United States)

    Feng, Bing; Zeng, Gengsheng L

    2014-04-10

    A challenge for the pixelated detector is that the detector response of a gamma-ray photon varies with the incident angle and the incident location within a crystal. The normalization map obtained by measuring the flood of a point-source at a large distance can lead to artifacts in reconstructed images. In this work, we investigated a method of generating normalization maps by ray-tracing through the pixelated detector based on the imaging geometry and the photo-peak energy for the specific isotope. The normalization is defined for each pinhole as the normalized detector response for a point-source placed at the focal point of the pinhole. Ray-tracing is used to generate the ideal flood image for a point-source. Each crystal pitch area on the back of the detector is divided into 60 × 60 sub-pixels. Lines are obtained by connecting a point-source and the centers of sub-pixels inside each crystal pitch area. For each line, ray-tracing starts from the entrance point at the detector face and ends at the center of a sub-pixel on the back of the detector. Only the attenuation by NaI(Tl) crystals along each ray is assumed to contribute directly to the flood image. The attenuation by the silica (SiO2) reflector is also included in the ray-tracing. To calculate the normalization for a pinhole, we need to calculate the ideal flood for a point-source at 360 mm distance (where the point-source was placed for the regular flood measurement) and the ideal flood image for the point-source at the pinhole focal point, together with the flood measurement at 360 mm distance. The normalizations are incorporated in the iterative OSEM reconstruction as a component of the projection matrix. Applications to single-pinhole and multi-pinhole imaging showed that this method greatly reduced the reconstruction artifacts.

  15. Quantum interferences reconstruction with low homodyne detection efficiency

    Energy Technology Data Exchange (ETDEWEB)

    Esposito, Martina; Randi, Francesco [Universita degli studi di Trieste, Dipartimento di Fisica, Trieste (Italy); Titimbo, Kelvin; Zimmermann, Klaus; Benatti, Fabio [Universita degli studi di Trieste, Dipartimento di Fisica, Trieste (Italy); Istituto Nazionale di Fisica Nucleare, Sezione di Trieste, Trieste (Italy); Kourousias, Georgios; Curri, Alessio [Sincrotrone Trieste S.C.p.A., Trieste (Italy); Floreanini, Roberto [Istituto Nazionale di Fisica Nucleare, Sezione di Trieste, Trieste (Italy); Parmigiani, Fulvio [Universita degli studi di Trieste, Dipartimento di Fisica, Trieste (Italy); Sincrotrone Trieste S.C.p.A., Trieste (Italy); University of Cologne, Institute of Physics II, Cologne (Germany); Fausti, Daniele [Universita degli studi di Trieste, Dipartimento di Fisica, Trieste (Italy); Sincrotrone Trieste S.C.p.A., Trieste (Italy)

    2016-12-15

    Optical homodyne tomography consists in reconstructing the quantum state of an optical field from repeated measurements of its amplitude at different field phases (homodyne data). The experimental noise, which unavoidably affects the homodyne data, leads to a detection efficiency η<1. The problem of reconstructing quantum states from noisy homodyne data sets prompted an intense scientific debate about the presence or absence of a lower homodyne efficiency bound (η>0.5) below which quantum features, like quantum interferences, cannot be retrieved. Here, by numerical experiments, we demonstrate that quantum interferences can be effectively reconstructed also for low homodyne detection efficiency. In particular, we address the challenging case of a Schroedinger cat state and test the minimax and adaptive Wigner function reconstruction technique by processing homodyne data distributed according to the chosen state but with an efficiency η>0.5. By numerically reproducing the Schroedinger's cat interference pattern, we give evidence that quantum state reconstruction is actually possible in these conditions, and provide a guideline for handling optical tomography based on homodyne data collected by low efficiency detectors. (orig.)

  16. Research on reconstruction of steel tube section from few projections

    International Nuclear Information System (INIS)

    Peng Shuaijun; Wu Haifeng; Wang Kai

    2007-01-01

    Most parameters of a steel tube can be acquired from a CT image of its section in order to evaluate its quality. However, large numbers of projections are needed to reconstruct the section image, so collecting and processing the projections is time-consuming. To address this problem, algorithms for reconstructing the steel tube section from few projections are investigated and the results are validated with simulation data in this paper. Three iterative algorithms, ART, MAP and OSEM, are applied to reconstruct the steel tube section using the simulation model. By taking the prior information on the steel tube distribution into account, we improve the algorithms and obtain better reconstructed images. The results of the simulation experiments indicate that ART, MAP and OSEM can reconstruct accurate section images of the steel tube from fewer than 20 projections and approximate images from 10 projections. (authors)

  17. Sparsity reconstruction in electrical impedance tomography: An experimental evaluation

    KAUST Repository

    Gehre, Matthias

    2012-02-01

    We investigate the potential of sparsity constraints in the electrical impedance tomography (EIT) inverse problem of inferring the distributed conductivity based on boundary potential measurements. In sparsity reconstruction, inhomogeneities of the conductivity are a priori assumed to be sparse with respect to a certain basis. This prior information is incorporated into a Tikhonov-type functional by including a sparsity-promoting ℓ1-penalty term. The functional is minimized with an iterative soft shrinkage-type algorithm. In this paper, the feasibility of the sparsity reconstruction approach is evaluated by experimental data from water tank measurements. The reconstructions are computed both with sparsity constraints and with a more conventional smoothness regularization approach. The results verify that the adoption of ℓ1-type constraints can enhance the quality of EIT reconstructions: in most of the test cases the reconstructions with sparsity constraints are both qualitatively and quantitatively more feasible than that with the smoothness constraint. © 2011 Elsevier B.V. All rights reserved.
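
    The comparison made above, a sparsity-promoting l1 penalty handled by soft shrinkage versus a conventional smoothness-type l2 penalty, can be reproduced on a toy linear problem with a sparse unknown. The sketch below is not the EIT code (the real problem is nonlinear and involves a finite-element forward model); the matrix, sparsity pattern and regularization weights are made up.

```python
# Toy comparison of l1 (iterative soft shrinkage) versus l2 (Tikhonov) reconstruction
# on a linear problem with a sparse unknown, mirroring the comparison in the abstract.
import numpy as np

rng = np.random.default_rng(10)
m, n = 50, 120
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n); x_true[[12, 47, 90]] = [1.0, -0.8, 0.6]   # sparse inhomogeneities
b = A @ x_true + 0.01 * rng.standard_normal(m)

# l2 / Tikhonov: closed-form solution of min ||Ax - b||^2 + beta ||x||^2
beta = 0.1
x_l2 = np.linalg.solve(A.T @ A + beta * np.eye(n), A.T @ b)

# l1: iterative soft shrinkage applied to min ||Ax - b||^2 + lam ||x||_1
lam = 0.02
L = np.linalg.norm(A, 2) ** 2
x_l1 = np.zeros(n)
for _ in range(1000):
    z = x_l1 - (A.T @ (A @ x_l1 - b)) / L
    x_l1 = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)

for name, x in (("l2", x_l2), ("l1", x_l1)):
    err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
    print(f"{name}: relative error {err:.3f}, nonzeros {np.sum(np.abs(x) > 1e-2)}")
```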

  18. Nonimaging aspects of follow-up in breast cancer reconstruction.

    Science.gov (United States)

    Wood, W C

    1991-09-01

    Follow-up of patients with breast cancer is directed to the early detection of recurrent or metastatic disease and the detection of new primary breast cancer. The survival benefit of early detection is limited to some patients with local failure or new primary tumors. The fact that imaging is not used in the follow-up of patients who have had breast cancer reconstruction is related to the possibility that the reconstructive procedure interferes with this putative benefit. Such follow-up is accomplished by the patient's own surveillance, clinical examination, and laboratory testing supplemented by imaging studies. Clinical follow-up trials of women who have undergone breast reconstructive surgery show no evidence that locally recurrent breast carcinoma is masked when compared with follow-up of women who did not undergo reconstructive procedures. Reshaping of the contralateral breast to match the reconstructed breast introduces the possibility of interference with palpation as well as mammographic distortion in some women. This is an uncommon practical problem except when complicated by fat necrosis.

  19. A combined reconstruction-classification method for diffuse optical tomography

    Energy Technology Data Exchange (ETDEWEB)

    Hiltunen, P [Department of Biomedical Engineering and Computational Science, Helsinki University of Technology, PO Box 3310, FI-02015 TKK (Finland); Prince, S J D; Arridge, S [Department of Computer Science, University College London, Gower Street London, WC1E 6B (United Kingdom)], E-mail: petri.hiltunen@tkk.fi, E-mail: s.prince@cs.ucl.ac.uk, E-mail: s.arridge@cs.ucl.ac.uk

    2009-11-07

    We present a combined classification and reconstruction algorithm for diffuse optical tomography (DOT). DOT is a nonlinear ill-posed inverse problem. Therefore, some regularization is needed. We present a mixture of Gaussians prior, which regularizes the DOT reconstruction step. During each iteration, the parameters of a mixture model are estimated. These associate each reconstructed pixel with one of several classes based on the current estimate of the optical parameters. This classification is exploited to form a new prior distribution to regularize the reconstruction step and update the optical parameters. The algorithm can be described as an iteration between an optimization scheme with zeroth-order variable mean and variance Tikhonov regularization and an expectation-maximization scheme for estimation of the model parameters. We describe the algorithm in a general Bayesian framework. Results from simulated test cases and phantom measurements show that the algorithm enhances the contrast of the reconstructed images with good spatial accuracy. The probabilistic classifications of each image contain only a few misclassified pixels.
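
    The alternation described above, between a regularized reconstruction step and a mixture-model classification of the reconstructed pixels, can be mimicked on a linear toy problem. The sketch below is not the DOT algorithm (the real forward problem is nonlinear and the paper's regularization is more elaborate); it only alternates a Tikhonov-type update pulled towards a class-wise prior mean with a two-component Gaussian mixture fit from scikit-learn, and all sizes and weights are made up.

```python
# Toy reconstruction-classification iteration on a linear problem.
# Step 1: Tikhonov-type update pulled towards a prior mean image built from the
#         current class labels.
# Step 2: re-fit a two-class Gaussian mixture to the reconstructed pixel values
#         and refresh the labels / prior mean.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(11)
n, m = 100, 60
x_true = np.full(n, 0.2); x_true[40:60] = 1.0             # background + inclusion
A = rng.standard_normal((m, n)) / np.sqrt(m)
b = A @ x_true + 0.01 * rng.standard_normal(m)

beta = 0.5
x = np.linalg.solve(A.T @ A + beta * np.eye(n), A.T @ b)   # initial smooth estimate
for _ in range(10):
    gmm = GaussianMixture(n_components=2, random_state=0).fit(x.reshape(-1, 1))
    labels = gmm.predict(x.reshape(-1, 1))
    prior_mean = gmm.means_[labels, 0]                     # class mean for each pixel
    # Regularized update: min ||Ax - b||^2 + beta ||x - prior_mean||^2
    x = np.linalg.solve(A.T @ A + beta * np.eye(n),
                        A.T @ b + beta * prior_mean)

err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
print("relative error after classification-regularized updates:", round(err, 3))
```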

  20. [Application of Fourier transform profilometry in 3D-surface reconstruction].

    Science.gov (United States)

    Shi, Bi'er; Lu, Kuan; Wang, Yingting; Li, Zhen'an; Bai, Jing

    2011-08-01

    With the improvement of system frameworks and reconstruction methods in fluorescence molecular tomography (FMT), FMT has been widely used as an important experimental tool in biomedical research. It is necessary to obtain the 3D surface profile of the experimental object as a boundary constraint for FMT reconstruction algorithms. We propose a new 3D surface reconstruction method based on the Fourier transform profilometry (FTP) method under blue-purple light illumination. The slice images were reconstructed using appropriate image processing methods, frequency spectrum analysis and filtering. The experimental results showed that the method properly reconstructs the 3D surface of objects with millimetre-level accuracy. Compared to other methods, this one is simple and fast. Besides producing good reconstructions, the proposed method can help monitor the behavior of the object during the experiment to ensure the consistency of the imaging process. Furthermore, the method uses blue-purple light as its light source to avoid interference with fluorescence imaging.
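
    The core FTP pipeline is: Fourier-transform the fringe image along the fringe direction, keep only the lobe around the fundamental carrier frequency, inverse-transform, and take the unwrapped phase difference between the deformed and reference patterns; the surface height is proportional to that phase difference. The 1-D sketch below shows this on synthetic fringes; the carrier frequency, modulation factor and window width are made up, and the geometric proportionality constant relating phase to height is omitted.

```python
# Fourier-transform profilometry (FTP) phase extraction on a synthetic 1-D fringe
# signal: isolate the fundamental spectral lobe, inverse-transform, and take the
# unwrapped phase difference between the deformed and reference patterns.
import numpy as np

N, f0 = 1024, 32                                   # samples and carrier fringe frequency
x = np.arange(N) / N
height = 0.8 * np.exp(-((x - 0.5) / 0.1) ** 2)     # synthetic surface profile
phase_mod = 2.0 * height                           # assumed phase modulation from the surface

ref = 1 + 0.5 * np.cos(2 * np.pi * f0 * x)                 # reference fringes
obj = 1 + 0.5 * np.cos(2 * np.pi * f0 * x + phase_mod)     # fringes deformed by the surface

def fundamental_phase(signal):
    spec = np.fft.fft(signal)
    keep = np.zeros(N, dtype=bool)
    keep[f0 - 10:f0 + 11] = True                   # band-pass window around +f0 only
    analytic = np.fft.ifft(np.where(keep, spec, 0.0))
    return np.unwrap(np.angle(analytic))

dphi = fundamental_phase(obj) - fundamental_phase(ref)
recovered = dphi / 2.0                             # invert the assumed modulation factor
print("max height error:", np.abs(recovered - height).max())
```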

  1. Total variation regularization in measurement and image space for PET reconstruction

    KAUST Repository

    Burger, M

    2014-09-18

    © 2014 IOP Publishing Ltd. The aim of this paper is to test and analyse a novel technique for image reconstruction in positron emission tomography, which is based on (total variation) regularization on both the image space and the projection space. We formulate our variational problem considering both total variation penalty terms on the image and on an idealized sinogram to be reconstructed from a given Poisson distributed noisy sinogram. We prove existence, uniqueness and stability results for the proposed model and provide some analytical insight into the structures favoured by joint regularization. For the numerical solution of the corresponding discretized problem we employ the split Bregman algorithm and extensively test the approach in comparison to standard total variation regularization on the image. The numerical results show that an additional penalty on the sinogram performs better on reconstructing images with thin structures.

  2. Patient-adapted reconstruction and acquisition dynamic imaging method (PARADIGM) for MRI

    International Nuclear Information System (INIS)

    Aggarwal, Nitin; Bresler, Yoram

    2008-01-01

    Dynamic magnetic resonance imaging (MRI) is a challenging problem because the MR data acquisition is often not fast enough to meet the combined spatial and temporal Nyquist sampling rate requirements. Current approaches to this problem include hardware-based acceleration of the acquisition, and model-based image reconstruction techniques. In this paper we propose an alternative approach, called PARADIGM, which adapts both the acquisition and reconstruction to the spatio-temporal characteristics of the imaged object. The approach is based on time-sequential sampling theory, addressing the problem of acquiring a spatio-temporal signal under the constraint that only a limited amount of data can be acquired at a time instant. PARADIGM identifies a model class for the particular imaged object using a scout MR scan or auxiliary data. This object-adapted model is then used to optimize MR data acquisition, such that the imaging constraints are met, acquisition speed requirements are minimized, essentially perfect reconstruction of any object in the model class is guaranteed, and the inverse problem of reconstructing the dynamic object has a condition number of one. We describe spatio-temporal object models for various dynamic imaging applications including cardiac imaging. We present the theory underlying PARADIGM and analyze its performance theoretically and numerically. We also propose a practical MR imaging scheme for 2D dynamic cardiac imaging based on the theory. For this application, PARADIGM is predicted to provide a 10–25× acceleration compared to the optimal non-adaptive scheme. Finally we present generalized optimality criteria and extend the scheme to dynamic imaging with three spatial dimensions.

  3. A Novel Nipple Reconstruction Technique for Maintaining Nipple Projection: The Boomerang Flap

    Directory of Open Access Journals (Sweden)

    Young-Eun Kim

    2016-09-01

    Full Text Available Nipple-areolar complex (NAC) reconstruction is the final step in the long journey of breast reconstruction for mastectomy patients. Successful NAC reconstruction depends on the use of appropriate surgical techniques that are simple and reliable. To date, numerous techniques have been used for nipple reconstruction, including contralateral nipple sharing and various local flaps. Recently, it has been common to utilize local flaps. However, the most common nipple reconstruction problem encountered with local flaps is the loss of nipple projection; there can be approximately 50% projection loss in reconstructed nipples over long-term follow-up. Several factors might contribute to nipple projection loss, and we tried to overcome these factors by performing nipple reconstructions using a boomerang flap technique, which is a modified C–V flap that utilizes the previous mastectomy scar to maintain long-term nipple projection.

  4. A Novel Nipple Reconstruction Technique for Maintaining Nipple Projection: The Boomerang Flap.

    Science.gov (United States)

    Kim, Young-Eun; Hong, Ki Yong; Minn, Kyung Won; Jin, Ung Sik

    2016-09-01

    Nipple-areolar complex (NAC) reconstruction is the final step in the long journey of breast reconstruction for mastectomy patients. Successful NAC reconstruction depends on the use of appropriate surgical techniques that are simple and reliable. To date, numerous techniques have been used for nipple reconstruction, including contralateral nipple sharing and various local flaps. Recently, it has been common to utilize local flaps. However, the most common nipple reconstruction problem encountered with local flaps is the loss of nipple projection; there can be approximately 50% projection loss in reconstructed nipples over long-term follow-up. Several factors might contribute to nipple projection loss, and we tried to overcome these factors by performing nipple reconstructions using a boomerang flap technique, which is a modified C-V flap that utilizes the previous mastectomy scar to maintain long-term nipple projection.

  5. Spectral Green’s function nodal method for multigroup SN problems with anisotropic scattering in slab-geometry non-multiplying media

    International Nuclear Information System (INIS)

    Menezes, Welton A.; Filho, Hermes Alves; Barros, Ricardo C.

    2014-01-01

    Highlights: • Fixed-source SN transport problems. • Energy multigroup model. • Anisotropic scattering. • Slab-geometry spectral nodal method. - Abstract: A generalization of the spectral Green’s function (SGF) method is developed for multigroup, fixed-source, slab-geometry discrete ordinates (SN) problems with anisotropic scattering. The proposed SGF method with the one-node block inversion (NBI) iterative scheme generates numerical solutions that are completely free from spatial truncation errors for multigroup, slab-geometry SN problems with scattering anisotropy of order L, provided L < N. As a coarse-mesh numerical method, the SGF method generates numerical solutions that generally do not give detailed information on the problem solution profile, as the grid points can be located considerably far apart. Therefore, we describe in this paper a technique for the spatial reconstruction of the coarse-mesh solution generated by the multigroup SGF method. Numerical results are given to illustrate the method’s accuracy.

  6. Onlay Rib Bone Graft in Elevation of Reconstructed Auricle: 17 Years of Experience

    Directory of Open Access Journals (Sweden)

    Taehoon Kim

    2013-05-01

    Full Text Available Background: A cartilage wedge block and covering flap are standard procedures for firm elevation of the ear in microtia correction. However, using costal cartilage for elevation of the reconstructed auricle can be insufficient, and the fixed cartilage wedge block may be absorbed or may slip out. Furthermore, elevating covering flaps is time-consuming and uses up fascia, a potential source of reconstruction material. Therefore, we propose an innovative method using autologous onlay rib bone graft for auricular elevation of microtia. Methods: From February 1995 to August 2012, 77 patients received a first stage operation with a rib cartilage framework graft. In the second stage operation, a small full thickness of rib bone was harvested through the previous donor scar. The bihalved rib bone was inserted into the subperiosteal pocket beneath the cartilage framework. Results: The follow-up time ranged from 1 month to 17 years, with a mean of 3 years. All of the patients sustained the elevation of their ears very well during the follow-up period. Donor site problems, except for hypertrophic scars, were not observed. Surgery-related complications, specifically skin necrosis, infection, or hematoma, occurred in 4 cases. Conclusions: Onlay rib bone graft used to elevate the reconstructed auricle is a more anatomically appropriate material than cartilage, due to the bone-to-bone contact between the bone graft and the temporal bone. Postoperative minor correction of the elevation degree is straightforward and the skin graft survives better. Therefore, reconstructed auricle elevation using onlay rib bone graft is a useful and valuable method.

  7. Analysis of PWR control rod ejection accident with the coupled code system SKETCH-INS/TRACE by incorporating pin power reconstruction model

    International Nuclear Information System (INIS)

    Nakajima, T.; Sakai, T.

    2010-01-01

    The pin power reconstruction model was incorporated in the 3-D nodal kinetics code SKETCH-INS in order to produce accurate calculations of three-dimensional pin power distributions throughout the reactor core. In order to verify the employed pin power reconstruction model, the PWR MOX/UO2 core transient benchmark problem was analyzed with the coupled code system SKETCH-INS/TRACE by incorporating the model, and the influence of the pin power reconstruction model was studied. SKETCH-INS pin power distributions for 3 benchmark problems were compared with the PARCS solutions, which were provided by the host organisation of the benchmark. SKETCH-INS results were in good agreement with the PARCS results. The capability of the employed pin power reconstruction model was confirmed through the analysis of the benchmark problems. A PWR control rod ejection benchmark problem was analyzed with the coupled code system SKETCH-INS/TRACE by incorporating the pin power reconstruction model. The influence of the pin power reconstruction model was studied by comparing with the result of the conventional node-averaged flux model. The results indicate that the pin power reconstruction model has a significant effect on the pin powers during the transient and hence on the fuel enthalpy.

  8. Reconstructible phylogenetic networks: do not distinguish the indistinguishable.

    Science.gov (United States)

    Pardi, Fabio; Scornavacca, Celine

    2015-04-01

    Phylogenetic networks represent the evolution of organisms that have undergone reticulate events, such as recombination, hybrid speciation or lateral gene transfer. An important way to interpret a phylogenetic network is in terms of the trees it displays, which represent all the possible histories of the characters carried by the organisms in the network. Interestingly, however, different networks may display exactly the same set of trees, an observation that poses a problem for network reconstruction: from the perspective of many inference methods such networks are "indistinguishable". This is true for all methods that evaluate a phylogenetic network solely on the basis of how well the displayed trees fit the available data, including all methods based on input data consisting of clades, triples, quartets, or trees with any number of taxa, and also sequence-based approaches such as popular formalisations of maximum parsimony and maximum likelihood for networks. This identifiability problem is partially solved by accounting for branch lengths, although this merely reduces the frequency of the problem. Here we propose that network inference methods should only attempt to reconstruct what they can uniquely identify. To this end, we introduce a novel definition of what constitutes a uniquely reconstructible network. For any given set of indistinguishable networks, we define a canonical network that, under mild assumptions, is unique and thus representative of the entire set. Given data that underwent reticulate evolution, only the canonical form of the underlying phylogenetic network can be uniquely reconstructed. While on the methodological side this will imply a drastic reduction of the solution space in network inference, for the study of reticulate evolution this is a fundamental limitation that will require an important change of perspective when interpreting phylogenetic networks.

  9. Reconstructible phylogenetic networks: do not distinguish the indistinguishable.

    Directory of Open Access Journals (Sweden)

    Fabio Pardi

    2015-04-01

    Full Text Available Phylogenetic networks represent the evolution of organisms that have undergone reticulate events, such as recombination, hybrid speciation or lateral gene transfer. An important way to interpret a phylogenetic network is in terms of the trees it displays, which represent all the possible histories of the characters carried by the organisms in the network. Interestingly, however, different networks may display exactly the same set of trees, an observation that poses a problem for network reconstruction: from the perspective of many inference methods such networks are "indistinguishable". This is true for all methods that evaluate a phylogenetic network solely on the basis of how well the displayed trees fit the available data, including all methods based on input data consisting of clades, triples, quartets, or trees with any number of taxa, and also sequence-based approaches such as popular formalisations of maximum parsimony and maximum likelihood for networks. This identifiability problem is partially solved by accounting for branch lengths, although this merely reduces the frequency of the problem. Here we propose that network inference methods should only attempt to reconstruct what they can uniquely identify. To this end, we introduce a novel definition of what constitutes a uniquely reconstructible network. For any given set of indistinguishable networks, we define a canonical network that, under mild assumptions, is unique and thus representative of the entire set. Given data that underwent reticulate evolution, only the canonical form of the underlying phylogenetic network can be uniquely reconstructed. While on the methodological side this will imply a drastic reduction of the solution space in network inference, for the study of reticulate evolution this is a fundamental limitation that will require an important change of perspective when interpreting phylogenetic networks.

  10. Multi-level damage identification with response reconstruction

    Science.gov (United States)

    Zhang, Chao-Dong; Xu, You-Lin

    2017-10-01

    Damage identification through finite element (FE) model updating usually forms an inverse problem. Solving the inverse identification problem for complex civil structures is very challenging since the dimension of potential damage parameters in a complex civil structure is often very large. Aside from the enormous computational effort needed in iterative updating, the ill-conditioning and non-global identifiability of the inverse problem may hinder the realization of model updating based damage identification for large civil structures. Following a divide-and-conquer strategy, a multi-level damage identification method is proposed in this paper. The entire structure is decomposed into several manageable substructures and each substructure is further condensed as a macro element using the component mode synthesis (CMS) technique. The damage identification is performed at two levels: the first is at the macro element level to locate the potentially damaged region, and the second is over the suspicious substructures to further locate as well as quantify the damage severity. In each level's identification, the damage searching space over which model updating is performed is notably narrowed down, not only reducing the computational effort but also increasing the damage identifiability. Besides, Kalman filter-based response reconstruction is performed at the second level to reconstruct the response of the suspicious substructure for exact damage quantification. Numerical studies and laboratory tests are both conducted on a simply supported overhanging steel beam for conceptual verification. The results demonstrate that the proposed multi-level damage identification via response reconstruction does improve the identification accuracy of damage localization and quantification considerably.
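
    The abstract does not give the filter equations; as a minimal sketch, the discrete linear Kalman filter cycle that this kind of response reconstruction builds on can be written as below. The state-space matrices A, H, Q, R and the mapping from the filtered state to the unmeasured responses are problem-specific assumptions here, not taken from the paper.

        import numpy as np

        def kalman_filter_step(x, P, z, A, H, Q, R):
            """One predict/update cycle of a discrete linear Kalman filter.
            x, P : previous state estimate and covariance
            z    : new measurement vector from the instrumented locations
            A, H : state-transition and observation matrices (model dependent)
            Q, R : process and measurement noise covariances
            """
            x_pred = A @ x                           # predict
            P_pred = A @ P @ A.T + Q
            S = H @ P_pred @ H.T + R                 # innovation covariance
            K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
            x_new = x_pred + K @ (z - H @ x_pred)    # measurement update
            P_new = (np.eye(len(x)) - K @ H) @ P_pred
            return x_new, P_new

    Responses at uninstrumented locations of the substructure would then be read off from the filtered state through the substructure's own output matrix.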

  11. Blahut-Arimoto algorithm and code design for action-dependent source coding problems

    DEFF Research Database (Denmark)

    Trillingsgaard, Kasper Fløe; Simeone, Osvaldo; Popovski, Petar

    2013-01-01

    The source coding problem with action-dependent side information at the decoder has recently been introduced to model data acquisition in resource-constrained systems. In this paper, an efficient Blahut-Arimoto-type algorithm for the numerical computation of the rate-distortion-cost function...... for this problem is proposed. Moreover, a simplified two-stage code structure based on multiplexing is put forth, whereby the first stage encodes the actions and the second stage is composed of an array of classical Wyner-Ziv codes, one for each action. Leveraging this structure, specific coding/decoding...... strategies are designed based on LDGM codes and message passing. Through numerical examples, the proposed code design is shown to achieve performance close to the rate-distortion-cost function....
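
    For context, the classical Blahut-Arimoto iteration that the paper's algorithm extends can be sketched as follows. This is the plain rate-distortion version, not the action-dependent rate-distortion-cost variant developed in the paper; the distortion matrix and the slope parameter beta below are illustrative choices.

        import numpy as np

        def blahut_arimoto_rd(p_x, dist, beta, n_iter=200):
            """Classical Blahut-Arimoto iteration for one point of the
            rate-distortion curve, parameterised by the slope beta > 0.
            p_x  : source distribution, shape (nx,)
            dist : distortion matrix d(x, xhat), shape (nx, nxhat)
            """
            nxhat = dist.shape[1]
            q = np.full(nxhat, 1.0 / nxhat)            # output marginal q(xhat)
            for _ in range(n_iter):
                Q = q[None, :] * np.exp(-beta * dist)  # Q(xhat | x), unnormalised
                Q /= Q.sum(axis=1, keepdims=True)
                q = p_x @ Q                            # re-estimate the output marginal
            D = np.sum(p_x[:, None] * Q * dist)        # expected distortion
            R = np.sum(p_x[:, None] * Q * np.log(Q / q[None, :]))
            return R / np.log(2), D                    # rate in bits, distortion

        # usage: binary uniform source with Hamming distortion
        p = np.array([0.5, 0.5])
        d = 1.0 - np.eye(2)
        print(blahut_arimoto_rd(p, d, beta=3.0))

    Sweeping beta traces out the rate-distortion curve; the action-dependent formulation additionally optimises over the action cost, which this sketch does not include.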

  12. Explicit formulation of a nodal transport method for discrete ordinates calculations in two-dimensional fixed-source problems

    Energy Technology Data Exchange (ETDEWEB)

    Tres, Anderson [Universidade Federal do Rio Grande do Sul, Porto Alegre, RS (Brazil). Programa de Pos-Graduacao em Matematica Aplicada; Becker Picoloto, Camila [Universidade Federal do Rio Grande do Sul, Porto Alegre, RS (Brazil). Programa de Pos-Graduacao em Engenharia Mecanica; Prolo Filho, Joao Francisco [Universidade Federal do Rio Grande do Sul, Porto Alegre, RS (Brazil). Inst de Matematica, Estatistica e Fisica; Dias da Cunha, Rudnei; Basso Barichello, Liliane [Universidade Federal do Rio Grande do Sul, Porto Alegre, RS (Brazil). Inst de Matematica

    2014-04-15

    In this work a study of two-dimensional fixed-source neutron transport problems, in Cartesian geometry, is reported. The approach reduces the complexity of the multidimensional problem using a combination of nodal schemes and the Analytical Discrete Ordinates Method (ADO). The unknown leakage terms on the boundaries that appear from the derivation of the nodal scheme are incorporated into the problem source term, so as to couple the one-dimensional integrated solutions, made explicit in terms of the x and y spatial variables. The formulation leads to a considerable reduction of the order of the associated eigenvalue problems when combined with the usual symmetric quadratures, thereby providing solutions that have a higher degree of computational efficiency. Reflective-type boundary conditions are introduced to represent the domain in a simpler form than that previously considered in connection with the ADO method. Numerical results obtained with the technique are provided and compared to those present in the literature. (orig.)

  13. THE ALL-SOURCE GREEN’S FUNCTION AND ITS APPLICATIONS TO TSUNAMI PROBLEMS

    Directory of Open Access Journals (Sweden)

    ZHIGANG XU

    2007-01-01

    Full Text Available The classical Green’s function provides the global linear response to impulse forcing at a particular source location. It is a type of one-source-all-receiver Green’s function. This paper presents a new type of Green’s function, referred to as the all-source-one-receiver, or for short the all-source Green’s function (ASGF, in which the solution at a point of interest (POI can be written in terms of global forcing without requiring the solution at other locations. The ASGF is particularly applicable to tsunami problems. The response to forcing anywhere in the global ocean can be determined within a few seconds on an ordinary personal computer or on a web server. The ASGF also brings in two new types of tsunami charts, one for the arrival time and the second for the gain, without assuming the location of the epicenter or reversibility of the tsunami travel path. Thus it provides a useful tool for tsunami hazard preparedness and to rapidly calculate the real-time responses at selected POIs for a tsunami generated anywhere in the world’s oceans.

  14. The Use of Original Sources and Its Potential Relation to the Recruitment Problem

    Science.gov (United States)

    Jankvist, Uffe Thomas

    2014-01-01

    Based on a study about using original sources with Danish upper secondary students, the paper addresses the potential outcome of such an approach in regard to the so-called recruitment problem to the mathematical sciences. 24 students were exposed to questionnaire questions and 16 of these to follow-up interviews, which form the basis for both a…

  15. The free vascularized flap and the flap plate options: comparative results of reconstruction of lateral mandibular defects

    NARCIS (Netherlands)

    Shpitzer, T.; Gullane, P. J.; Neligan, P. C.; Irish, J. C.; Freeman, J. E.; van den Brekel, M.; Gur, E.

    2000-01-01

    OBJECTIVES/HYPOTHESIS: Reconstruction of the mandible and oral cavity after segmental resection is a challenging surgical problem. Although osteocutaneous free flaps are generally accepted to be optimal for reconstruction of anterior defects, the need for bony reconstruction for a pure lateral

  16. Post-Earthquake Reconstruction — in Context of Housing

    Science.gov (United States)

    Sarkar, Raju

    Comprehensive rescue and relief operations are always launched without loss of time, with active participation of the Army, governmental agencies, donor agencies, NGOs, and other voluntary organizations, after each natural disaster. Several natural disasters occur throughout the world all year round, and one of them is the earthquake. More than any other natural catastrophe, an earthquake represents the undoing of our most basic preconception of the earth as a source of stability, and the first distressing consequence of an earthquake is the collapse of our dwelling units. Earthquakes have affected buildings since people began constructing them. So after each earthquake a housing reconstruction program is essential, since housing is referred to as shelter satisfying one of the so-called basic needs, next to food and clothing. It is a well-known fact that resettlement (after an earthquake) is often accompanied by the creation of ghettos and ensuing problems in the provision of infrastructure and employment. In fact a housing project after the Bhuj earthquake in Gujarat, India, illustrates all the negative aspects of resettlement in the context of reconstruction. The main theme of this paper is to consider a few issues associated with post-earthquake reconstruction in the context of housing, all of which are significant to communities that have had to rebuild after catastrophe or that will face such a need in the future. A few of them are as follows: (1) Why are rebuilding opportunities time consuming? (2) What are the causes of failure in post-earthquake resettlement? (3) How can holistic planning after an earthquake be carried out? (4) What criteria should be checked for sustainable building materials? (5) What are the criteria for success in post-earthquake resettlement? (6) How can mitigation in post-earthquake housing be achieved using appropriate repair, restoration, and strengthening concepts?

  17. Reconstruction of Undersampled Atomic Force Microscopy Images

    DEFF Research Database (Denmark)

    Jensen, Tobias Lindstrøm; Arildsen, Thomas; Østergaard, Jan

    2013-01-01

    Atomic force microscopy (AFM) is one of the most advanced tools for high-resolution imaging and manipulation of nanoscale matter. Unfortunately, standard AFM imaging requires a timescale on the order of seconds to minutes to acquire an image which makes it complicated to observe dynamic processes....... Moreover, it is often required to take several images before a relevant observation region is identified. In this paper we show how to significantly reduce the image acquisition time by undersampling. The reconstruction of an undersampled AFM image can be viewed as an inpainting, interpolating problem...... should be reconstructed using interpolation....

  18. Amplitude-based data selection for optimal retrospective reconstruction in micro-SPECT

    Science.gov (United States)

    Breuilly, M.; Malandain, G.; Guglielmi, J.; Marsault, R.; Pourcher, T.; Franken, P. R.; Darcourt, J.

    2013-04-01

    Respiratory motion can blur the tomographic reconstruction of positron emission tomography or single-photon emission computed tomography (SPECT) images, which subsequently impair quantitative measurements, e.g. in the upper abdomen area. Respiratory signal phase-based gated reconstruction addresses this problem, but deteriorates the signal-to-noise ratio (SNR) and other intensity-based quality measures. This paper proposes a 3D reconstruction method dedicated to micro-SPECT imaging of mice. From a 4D acquisition, the phase images exhibiting motion are identified and the associated list-mode data are discarded, which enables the reconstruction of a 3D image without respiratory artefacts. The proposed method allows a motion-free reconstruction exhibiting both satisfactory count statistics and accuracy of measures. With respect to standard 3D reconstruction (non-gated 3D reconstruction) without breathing motion correction, an increase of 14.6% of the mean standardized uptake value has been observed, while, with respect to a gated 4D reconstruction, up to 60% less noise and an increase of up to 124% of the SNR have been demonstrated.

  19. On distributed wavefront reconstruction for large-scale adaptive optics systems.

    Science.gov (United States)

    de Visser, Cornelis C; Brunner, Elisabeth; Verhaegen, Michel

    2016-05-01

    The distributed-spline-based aberration reconstruction (D-SABRE) method is proposed for distributed wavefront reconstruction with applications to large-scale adaptive optics systems. D-SABRE decomposes the wavefront sensor domain into any number of partitions and solves a local wavefront reconstruction problem on each partition using multivariate splines. D-SABRE accuracy is within 1% of a global approach with a speedup that scales quadratically with the number of partitions. The D-SABRE is compared to the distributed cumulative reconstruction (CuRe-D) method in open-loop and closed-loop simulations using the YAO adaptive optics simulation tool. D-SABRE accuracy exceeds CuRe-D for low levels of decomposition, and D-SABRE proved to be more robust to variations in the loop gain.

  20. Pathway reconstruction of airway remodeling in chronic lung diseases: a systems biology approach.

    Directory of Open Access Journals (Sweden)

    Ali Najafi

    Full Text Available Airway remodeling is a pathophysiologic process at the clinical, cellular, and molecular level relating to chronic obstructive airway diseases such as chronic obstructive pulmonary disease (COPD), asthma and mustard lung. These diseases are associated with the dysregulation of multiple molecular pathways in the airway cells. Little progress has so far been made in discovering the molecular causes of complex disease in a holistic systems manner. Therefore, pathway and network reconstruction is an essential part of a systems biology approach to solve this challenging problem. In this paper, multiple data sources were used to construct the molecular process of airway remodeling pathway in mustard lung as a model of airway disease. We first compiled a master list of genes that change with airway remodeling in the mustard lung disease and then reconstructed the pathway by generating and merging the protein-protein interaction and the gene regulatory networks. Experimental observations and literature mining were used to identify and validate the master list. The outcome of this paper can provide valuable information about closely related chronic obstructive airway diseases which are of great importance for biologists and their future research. Reconstructing the airway remodeling interactome provides a starting point and reference for the future experimental study of mustard lung, and further analysis and development of these maps will be critical to understanding airway diseases in patients.

  1. CONNJUR Workflow Builder: a software integration environment for spectral reconstruction.

    Science.gov (United States)

    Fenwick, Matthew; Weatherby, Gerard; Vyas, Jay; Sesanker, Colbert; Martyn, Timothy O; Ellis, Heidi J C; Gryk, Michael R

    2015-07-01

    CONNJUR Workflow Builder (WB) is an open-source software integration environment that leverages existing spectral reconstruction tools to create a synergistic, coherent platform for converting biomolecular NMR data from the time domain to the frequency domain. WB provides data integration of primary data and metadata using a relational database, and includes a library of pre-built workflows for processing time domain data. WB simplifies maximum entropy reconstruction, facilitating the processing of non-uniformly sampled time domain data. As will be shown in the paper, the unique features of WB provide it with novel abilities to enhance the quality, accuracy, and fidelity of the spectral reconstruction process. WB also provides features which promote collaboration, education, parameterization, and non-uniform data sets along with processing integrated with the Rowland NMR Toolkit (RNMRTK) and NMRPipe software packages. WB is available free of charge in perpetuity, dual-licensed under the MIT and GPL open source licenses.

  2. CONNJUR Workflow Builder: a software integration environment for spectral reconstruction

    Energy Technology Data Exchange (ETDEWEB)

    Fenwick, Matthew; Weatherby, Gerard; Vyas, Jay; Sesanker, Colbert [UConn Health, Department of Molecular Biology and Biophysics (United States); Martyn, Timothy O. [Rensselaer at Hartford, Department of Engineering and Science (United States); Ellis, Heidi J. C. [Western New England College, Department of Computer Science and Information Technology (United States); Gryk, Michael R., E-mail: gryk@uchc.edu [UConn Health, Department of Molecular Biology and Biophysics (United States)

    2015-07-15

    CONNJUR Workflow Builder (WB) is an open-source software integration environment that leverages existing spectral reconstruction tools to create a synergistic, coherent platform for converting biomolecular NMR data from the time domain to the frequency domain. WB provides data integration of primary data and metadata using a relational database, and includes a library of pre-built workflows for processing time domain data. WB simplifies maximum entropy reconstruction, facilitating the processing of non-uniformly sampled time domain data. As will be shown in the paper, the unique features of WB provide it with novel abilities to enhance the quality, accuracy, and fidelity of the spectral reconstruction process. WB also provides features which promote collaboration, education, parameterization, and non-uniform data sets along with processing integrated with the Rowland NMR Toolkit (RNMRTK) and NMRPipe software packages. WB is available free of charge in perpetuity, dual-licensed under the MIT and GPL open source licenses.

  3. CONNJUR Workflow Builder: a software integration environment for spectral reconstruction

    International Nuclear Information System (INIS)

    Fenwick, Matthew; Weatherby, Gerard; Vyas, Jay; Sesanker, Colbert; Martyn, Timothy O.; Ellis, Heidi J. C.; Gryk, Michael R.

    2015-01-01

    CONNJUR Workflow Builder (WB) is an open-source software integration environment that leverages existing spectral reconstruction tools to create a synergistic, coherent platform for converting biomolecular NMR data from the time domain to the frequency domain. WB provides data integration of primary data and metadata using a relational database, and includes a library of pre-built workflows for processing time domain data. WB simplifies maximum entropy reconstruction, facilitating the processing of non-uniformly sampled time domain data. As will be shown in the paper, the unique features of WB provide it with novel abilities to enhance the quality, accuracy, and fidelity of the spectral reconstruction process. WB also provides features which promote collaboration, education, parameterization, and non-uniform data sets along with processing integrated with the Rowland NMR Toolkit (RNMRTK) and NMRPipe software packages. WB is available free of charge in perpetuity, dual-licensed under the MIT and GPL open source licenses

  4. Kuwaiti reconstruction project unprecedented in size, complexity

    Energy Technology Data Exchange (ETDEWEB)

    Tippee, B.

    1993-03-15

    There had been no challenge like it: a desert emirate ablaze; its main city sacked; the economically crucial oil industry devastated; countryside shrouded in smoke from oil well fires and littered with unexploded ordnance, disabled military equipment, and unignited crude oil. Like the well-documented effort that brought 749 burning wells under control in less than 7 months, Kuwaiti reconstruction had no precedent. Unlike the firefight, reconstruction is nowhere near complete. It nevertheless has placed two of three refineries back on stream, restored oil production to preinvasion levels, and repaired or rebuilt 17 of 26 oil field gathering stations. Most of the progress has come since the last well fire went out on Nov. 6, 1991. Expatriates in Kuwait since the days of Al-Awda ('the return,' in Arabic) attribute much of the rapid progress under Al-Tameer ('the reconstruction') to decisions and preparations made while the well fires still raged. The article describes the planning for Al-Awda, reentering the country, drilling plans, facilities reconstruction, and special problems.

  5. Ensemble Kalman filter for the reconstruction of the Earth's mantle circulation

    Science.gov (United States)

    Bocher, Marie; Fournier, Alexandre; Coltice, Nicolas

    2018-02-01

    Recent advances in mantle convection modeling led to the release of a new generation of convection codes, able to self-consistently generate plate-like tectonics at their surface. Those models physically link mantle dynamics to surface tectonics. Combined with plate tectonic reconstructions, they have the potential to produce a new generation of mantle circulation models that use data assimilation methods and where uncertainties in plate tectonic reconstructions are taken into account. We provided a proof of this concept by applying a suboptimal Kalman filter to the reconstruction of mantle circulation (Bocher et al., 2016). Here, we propose to go one step further and apply the ensemble Kalman filter (EnKF) to this problem. The EnKF is a sequential Monte Carlo method particularly adapted to solve high-dimensional data assimilation problems with nonlinear dynamics. We tested the EnKF using synthetic observations consisting of surface velocity and heat flow measurements on a 2-D-spherical annulus model and compared it with the method developed previously. The EnKF performs on average better and is more stable than the former method. Less than 300 ensemble members are sufficient to reconstruct an evolution. We use covariance adaptive inflation and localization to correct for sampling errors. We show that the EnKF results are robust over a wide range of covariance localization parameters. The reconstruction is associated with an estimation of the error, and provides valuable information on where the reconstruction is to be trusted or not.
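
    The abstract describes the stochastic (perturbed-observation) EnKF analysis step only in words; a minimal matrix-form sketch is given below. The operator H, the covariance R and the ensemble layout are generic assumptions, and the covariance inflation and localization used in the paper are omitted.

        import numpy as np

        def enkf_analysis(X, y, H, R, rng=None):
            """Stochastic EnKF analysis step with perturbed observations.
            X : forecast ensemble, shape (n_state, n_members)
            y : observation vector, shape (n_obs,)
            H : linear observation operator, shape (n_obs, n_state)
            R : observation error covariance, shape (n_obs, n_obs)
            """
            if rng is None:
                rng = np.random.default_rng(0)
            n_state, N = X.shape
            A = X - X.mean(axis=1, keepdims=True)       # state anomalies
            HA = H @ A                                  # observation-space anomalies
            Pyy = HA @ HA.T / (N - 1) + R               # innovation covariance
            Pxy = A @ HA.T / (N - 1)                    # state-observation covariance
            K = Pxy @ np.linalg.inv(Pyy)                # ensemble Kalman gain
            Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, size=N).T
            return X + K @ (Y - H @ X)                  # analysis ensemble

    Each member is updated with its own perturbed copy of the observations, so the analysis ensemble keeps a spread consistent with the observation error.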

  6. Heuristic optimization in penumbral image for high resolution reconstructed image

    International Nuclear Information System (INIS)

    Azuma, R.; Nozaki, S.; Fujioka, S.; Chen, Y. W.; Namihira, Y.

    2010-01-01

    Penumbral imaging is a technique which uses the fact that spatial information can be recovered from the shadow, or penumbra, that an unknown source casts through a simple large circular aperture. The size of the penumbral image on the detector can be determined mathematically from the aperture size, the object size, and the magnification. Conventional reconstruction methods are very sensitive to noise. On the other hand, the heuristic reconstruction method is very tolerant of noise. However, the aperture size influences the accuracy and resolution of the reconstructed image. In this article, we propose the optimization of the aperture size for neutron penumbral imaging.
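
    The abstract does not quote the geometric relation; a commonly used expression, stated here as an assumption rather than as the paper's formula, links the detector-plane penumbral image diameter to the aperture diameter d_a, the source (object) diameter d_s, the source-to-aperture distance L_1 and the aperture-to-detector distance L_2:

        D_{\mathrm{det}} \approx (M + 1)\, d_a + M\, d_s , \qquad M = L_2 / L_1 .

    In this picture the aperture size enters both the recorded penumbra and the achievable resolution, which is consistent with the abstract's statement that the aperture size influences the accuracy and resolution of the reconstructed image.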

  7. Development of modified voxel phantoms for the numerical dosimetric reconstruction of radiological accidents involving external sources: implementation in SESAME tool.

    Science.gov (United States)

    Courageot, Estelle; Sayah, Rima; Huet, Christelle

    2010-05-07

    Estimating the dose distribution in a victim's body is a relevant indicator in assessing biological damage from exposure in the event of a radiological accident caused by an external source. When the dose distribution is evaluated with a numerical anthropomorphic model, the posture and morphology of the victim have to be reproduced as realistically as possible. Several years ago, IRSN developed a specific software application, called the simulation of external source accident with medical images (SESAME), for the dosimetric reconstruction of radiological accidents by numerical simulation. This tool combines voxel geometry and the MCNP(X) Monte Carlo computer code for radiation-material interaction. This note presents a new functionality in this software that enables the modelling of a victim's posture and morphology based on non-uniform rational B-spline (NURBS) surfaces. The procedure for constructing the modified voxel phantoms is described, along with a numerical validation of this new functionality using a voxel phantom of the RANDO tissue-equivalent physical model.

  8. Cardiac magnetic source imaging based on current multipole model

    International Nuclear Information System (INIS)

    Tang Fa-Kuan; Wang Qian; Hua Ning; Lu Hong; Tang Xue-Zheng; Ma Ping

    2011-01-01

    It is widely accepted that the heart current source can be reduced to a current multipole. By adopting three linear inverse methods, cardiac magnetic imaging is achieved in this article based on the current multipole model expanded to the first-order terms. This magnetic imaging is realized in a reconstruction plane in the centre of the human heart, where a current dipole array is employed to represent the realistic cardiac current distribution. The current multipole, as the testing source, generates magnetic fields in the measuring plane, which serve as inputs to the cardiac magnetic inverse problem. In the heart-torso model constructed by the boundary element method, the current multipole magnetic field distribution is compared with that in homogeneous infinite space, and also with the single current dipole magnetic field distribution. Then the minimum-norm least-squares (MNLS) method, the optimal weighted pseudoinverse method (OWPIM), and the optimal constrained linear inverse method (OCLIM) are innovatively selected as the algorithms for inverse computation based on the current multipole model, and the imaging effects of these three inverse methods are compared. Besides, two reconstruction parameters, the residual and the mean residual, are also discussed, and their trends under MNLS, OWPIM and OCLIM, each as a function of SNR, are obtained and compared. (general)
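
    Of the three inverse methods named in the abstract, the minimum-norm least-squares step has a simple closed form; the sketch below shows it for a generic leadfield matrix, with a small Tikhonov term added as an assumption to stabilise the ill-conditioned Gram matrix. Building the leadfield from the current multipole model, and the OWPIM and OCLIM variants, are not shown.

        import numpy as np

        def mnls_inverse(L, b, reg=1e-6):
            """Minimum-norm least-squares (MNLS) source estimate.
            L   : leadfield matrix mapping source parameters to sensors, shape (m, n), m < n
            b   : measured magnetic field values, shape (m,)
            reg : small regularisation term for the ill-conditioned inversion
            """
            G = L @ L.T + reg * np.eye(L.shape[0])   # regularised Gram matrix
            return L.T @ np.linalg.solve(G, b)       # L^T (L L^T)^-1 b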

  9. Sparsity-promoting orthogonal dictionary updating for image reconstruction from highly undersampled magnetic resonance data

    International Nuclear Information System (INIS)

    Huang, Jinhong; Guo, Li; Feng, Qianjin; Chen, Wufan; Feng, Yanqiu

    2015-01-01

    Image reconstruction from undersampled k-space data accelerates magnetic resonance imaging (MRI) by exploiting image sparseness in certain transform domains. Employing image patch representation over a learned dictionary has the advantage of being adaptive to local image structures and thus can better sparsify images than using fixed transforms (e.g. wavelets and total variations). Dictionary learning methods have recently been introduced to MRI reconstruction, and these methods demonstrate significantly reduced reconstruction errors compared to sparse MRI reconstruction using fixed transforms. However, the synthesis sparse coding problem in dictionary learning is NP-hard and computationally expensive. In this paper, we present a novel sparsity-promoting orthogonal dictionary updating method for efficient image reconstruction from highly undersampled MRI data. The orthogonality imposed on the learned dictionary enables the minimization problem in the reconstruction to be solved by an efficient optimization algorithm which alternately updates representation coefficients, orthogonal dictionary, and missing k-space data. Moreover, both sparsity level and sparse representation contribution using updated dictionaries gradually increase during iterations to recover more details, assuming the progressively improved quality of the dictionary. Simulation and real data experimental results both demonstrate that the proposed method is approximately 10 to 100 times faster than the K-SVD-based dictionary learning MRI method and simultaneously improves reconstruction accuracy. (paper)
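
    As a minimal sketch of the alternation that orthogonal-dictionary methods of this kind typically use, sparse coding reduces to a transform-and-threshold step and the dictionary update becomes an orthogonal Procrustes problem solved by one SVD. The threshold schedule, the k-space data-consistency update and the names used below are assumptions, not the paper's exact algorithm.

        import numpy as np

        def orthogonal_dictionary_step(X, D, thresh):
            """One alternation of sparse coding / orthogonal dictionary update.
            X      : matrix of vectorised image patches, one patch per column
            D      : current orthogonal dictionary, D.T @ D = I
            thresh : hard-threshold level controlling sparsity of the codes
            """
            A = D.T @ X                  # sparse coding is cheap for an orthogonal dictionary
            A[np.abs(A) < thresh] = 0.0
            # dictionary update: argmin over orthogonal D of ||X - D A||_F, solved by an SVD
            U, _, Vt = np.linalg.svd(X @ A.T, full_matrices=False)
            return U @ Vt, A

    Alternating this step with a k-space consistency update, while gradually raising the allowed sparsity level, follows the overall strategy the abstract describes.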

  10. Sparsity-promoting orthogonal dictionary updating for image reconstruction from highly undersampled magnetic resonance data.

    Science.gov (United States)

    Huang, Jinhong; Guo, Li; Feng, Qianjin; Chen, Wufan; Feng, Yanqiu

    2015-07-21

    Image reconstruction from undersampled k-space data accelerates magnetic resonance imaging (MRI) by exploiting image sparseness in certain transform domains. Employing image patch representation over a learned dictionary has the advantage of being adaptive to local image structures and thus can better sparsify images than using fixed transforms (e.g. wavelets and total variations). Dictionary learning methods have recently been introduced to MRI reconstruction, and these methods demonstrate significantly reduced reconstruction errors compared to sparse MRI reconstruction using fixed transforms. However, the synthesis sparse coding problem in dictionary learning is NP-hard and computationally expensive. In this paper, we present a novel sparsity-promoting orthogonal dictionary updating method for efficient image reconstruction from highly undersampled MRI data. The orthogonality imposed on the learned dictionary enables the minimization problem in the reconstruction to be solved by an efficient optimization algorithm which alternately updates representation coefficients, orthogonal dictionary, and missing k-space data. Moreover, both sparsity level and sparse representation contribution using updated dictionaries gradually increase during iterations to recover more details, assuming the progressively improved quality of the dictionary. Simulation and real data experimental results both demonstrate that the proposed method is approximately 10 to 100 times faster than the K-SVD-based dictionary learning MRI method and simultaneously improves reconstruction accuracy.

  11. Robust framework for PET image reconstruction incorporating system and measurement uncertainties.

    Directory of Open Access Journals (Sweden)

    Huafeng Liu

    Full Text Available In Positron Emission Tomography (PET), an optimal estimate of the radioactivity concentration is obtained from the measured emission data under certain criteria. So far, all the well-known statistical reconstruction algorithms require an exactly known system probability matrix a priori, and the quality of such a system model largely determines the quality of the reconstructed images. In this paper, we propose an algorithm for PET image reconstruction for the real-world case where the PET system model is subject to uncertainties. The method casts PET reconstruction as a regularization problem, and the image estimation is achieved by means of an uncertainty-weighted least squares framework. The performance of our work is evaluated with the Shepp-Logan simulated and real phantom data, which demonstrates significant improvements in image quality over the least squares reconstruction efforts.

  12. Reconstructed coronal views of CT and isotopic images of the pancreas

    International Nuclear Information System (INIS)

    Kasuga, Toshio; Kobayashi, Toshio; Nakanishi, Fumiko

    1980-01-01

    To compare functional images of the pancreas by scintigraphy with morphological views of the pancreas by CT, CT coronal views of the pancreas were reconstructed. As CT coronal views were reconstructed from the routine scanning, there was a problem in longitudinal spatial resolution. However, almost satisfactory total images of the pancreas were obtained by improving images adequately. In 27 patients whose diseases had been confirmed, it was easy to compare pancreatic scintigrams with pancreatic CT images by using reconstructed CT coronal views, and information which had not been obtained by original CT images could be obtained by using reconstructed CT coronal views. Especially, defects on pancreatic images and the shape of pancreas which had not been visualized clearly by scintigraphy alone could be visualized by using reconstructed CT coronal views of the pancreas. (Tsunoda, M.)

  13. Anterior cruciate ligament reconstruction with a novel porcine xenograft: the initial Italian experience

    Science.gov (United States)

    ZAFFAGNINI, STEFANO; GRASSI, ALBERTO; MUCCIOLI, GIULIO MARIA MARCHEGGIANI; DI SARSINA, TOMMASO ROBERTI; RAGGI, FEDERICO; BENZI, ANDREA; MARCACCI, MAURILIO

    2015-01-01

    At the current state of the art in anterior cruciate ligament (ACL) reconstruction, multiple techniques have been presented but none has given clearly defined and improved results. One of the main issues concerns the choice of graft. The concept of using xenograft tissue, defined as a graft tissue from one species and destined for implantation in an unlike species, was introduced in order to try to overcome the mechanical and biological concerns associated with synthetic materials and the safety and quality concerns and availability problems of allograft tissue. Xenograft tissue carries the risk of producing an immunological reaction. In order to try to overcome or attenuate the immune response against porcine xenograft tissue, the Z-Process® (Aperion Biologics Inc, San Antonio, Texas, USA) has been developed and used to produce the Z-Lig® family of devices for ACL reconstruction procedures. Z-Lig® is a tendon graft with or without bone blocks, sourced from animal tissue in a manner consistent with what has normally been sourced from human tissue, and processed to overcome anti-Gal-mediated rejection and to attenuate other immunological recognition in humans. All this while ensuring sterility, viral inactivation and preservation of mechanical proprieties appropriate for an ACL reconstruction device. The Z-Lig® device has been tested in skeletally mature monkeys and given interesting and promising results from the preclinical performance and safety profile point of view. On this basis, it was possible to proceed with the first clinical trial involving humans, which gave similar encouraging results. The Z-Lig® device has also been implanted in Italy at the Rizzoli Orthopaedic Institute in Bologna, as a part of international multicenter prospective randomized blinded controlled study aimed at comparing xenograft with allograft tissue. PMID:26605257

  14. Reconstruction of signals with unknown spectra in information field theory with parameter uncertainty

    International Nuclear Information System (INIS)

    Ensslin, Torsten A.; Frommert, Mona

    2011-01-01

    The optimal reconstruction of cosmic metric perturbations and other signals requires knowledge of their power spectra and other parameters. If these are not known a priori, they have to be measured simultaneously from the same data used for the signal reconstruction. We formulate the general problem of signal inference in the presence of unknown parameters within the framework of information field theory. To solve this, we develop a generic parameter-uncertainty renormalized estimation (PURE) technique. As a concrete application, we address the problem of reconstructing Gaussian signals with an unknown power spectrum using five different approaches: (i) separate maximum-a-posteriori power-spectrum measurement and subsequent reconstruction, (ii) maximum-a-posteriori reconstruction with marginalized power-spectrum, (iii) maximizing the joint posterior of signal and spectrum, (iv) guessing the spectrum from the variance in the Wiener-filter map, and (v) renormalization flow analysis of the field-theoretical problem providing the PURE filter. In all cases, the reconstruction can be described or approximated as Wiener-filter operations with assumed signal spectra derived from the data according to the same recipe, but with differing coefficients. All of these filters, except the renormalized one, exhibit a perception threshold in case of a Jeffreys prior for the unknown spectrum. Data modes with variance below this threshold do not affect the signal reconstruction at all. Filter (iv) seems to be similar to the so-called Karhunen-Loève and Feldman-Kaiser-Peacock estimators for galaxy power spectra used in cosmology, which therefore should also exhibit a marginal perception threshold if correctly implemented. We present statistical performance tests and show that the PURE filter is superior to the others, especially if the post-Wiener-filter corrections are included or in case an additional scale-independent spectral smoothness prior can be adopted.
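
    All five approaches in the abstract reduce to Wiener-filter operations with differently estimated spectra; the classical Wiener filter itself, for the linear data model d = R s + n with known signal and noise covariances, is sketched below. The matrix names follow common information-field-theory notation and are assumptions here.

        import numpy as np

        def wiener_filter(d, R, S, N):
            """Wiener-filter reconstruction for the data model d = R s + n.
            d : data vector                     R : linear response (measurement) matrix
            S : prior signal covariance         N : noise covariance
            Returns the posterior mean m = (S^-1 + R^T N^-1 R)^-1 R^T N^-1 d.
            """
            Ninv = np.linalg.inv(N)
            D = np.linalg.inv(np.linalg.inv(S) + R.T @ Ninv @ R)   # posterior covariance
            return D @ (R.T @ Ninv @ d)

    The approaches compared in the paper differ in how S (equivalently, the power spectrum) is estimated from the same data before this filter is applied.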

  15. [Current problems in the reconstructive surgery of the locomotor apparatus in children].

    Science.gov (United States)

    Kupatadze, D D; Nabokov, V V; Malikov, S A; Polozov, R N; Kanina, L Ia; Veselov, A G

    1997-01-01

    The authors analyze results of treatment of 778 children with malignant and benign tumors of the bones, pseudoarthroses, amputations of lower extremities and fingers, injuries of the tendons, vessels and contused-lacerated wounds of distal phalanges of fingers. The possibility to use a precision technique for the reconstructive operations of the vessels in children is shown.

  16. Analysis of reproducibility of the single photon tomography reconstruction by the method of singular value decomposition

    International Nuclear Information System (INIS)

    Devaux, J.Y.; Mazelier, L.; Lefkopoulos, D.

    1997-01-01

    We have earlier shown that the method of singular value decomposition (SVD) allows image reconstruction in single-photon tomography with higher precision than the classical filtered back-projection method. Indeed, establishing an elementary response matrix which incorporates the photon attenuation, the scattering, the translation non-invariance and the detector response allows all the physical parameters of the acquisition to be taken into account. By an optimized, non-consecutive truncation of the singular values we have obtained a significant improvement in the regularization of the poor conditioning of this problem. The present study aims at verifying the stability of this truncation under modifications of the acquisition conditions. Two series of parameters were tested: first, those modifying the geometry of acquisition, namely the influence of the rotation centre, the asymmetric positioning of the elementary-volume sources with respect to the detector, and the precision of the rotation angle; and secondly, those affecting the correspondence between the matrix and the space to be reconstructed, namely the partial volume effect and noise propagation in the experimental model. For the parameters which introduce a spatial distortion, the alteration of the reconstruction was, as expected, comparable to that observed with the classical reconstruction and proportional to the amplitude of the shift from the nominal configuration. In contrast, for the partial volume effect and for noise, the study of the truncation signature revealed a variation in the optimal choice of the retained singular values, but with no effect on the global precision of the reconstruction.
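
    A minimal sketch of reconstruction by truncated SVD with a non-consecutive selection of singular values, as described in the abstract, is given below; how the retained indices are chosen, which is the optimised part of the method, is assumed given.

        import numpy as np

        def truncated_svd_reconstruction(H, p, keep):
            """Reconstruct an activity image from projections p with a truncated SVD.
            H    : elementary response matrix, shape (n_measurements, n_voxels)
            p    : measured projections, shape (n_measurements,)
            keep : indices of the retained singular values (need not be consecutive)
            """
            U, s, Vt = np.linalg.svd(H, full_matrices=False)
            keep = np.asarray(keep)
            # pseudo-inverse built only from the selected singular triplets
            return Vt[keep].T @ ((U[:, keep].T @ p) / s[keep])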

  17. ACTS: from ATLAS software towards a common track reconstruction software

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00349786; The ATLAS collaboration; Salzburger, Andreas; Kiehn, Moritz; Hrdinka, Julia; Calace, Noemi

    2017-01-01

    Reconstruction of charged particles' trajectories is a crucial task for most particle physics experiments. The high instantaneous luminosity achieved at the LHC leads to a high number of proton-proton collisions per bunch crossing, which has put the track reconstruction software of the LHC experiments through a thorough test. Preserving track reconstruction performance under increasingly difficult experimental conditions, while keeping the usage of computational resources at a reasonable level, is an inherent problem for many HEP experiments. Exploiting concurrent algorithms and using multivariate techniques for track identification are the primary strategies to achieve that goal. Starting from current ATLAS software, the ACTS project aims to encapsulate track reconstruction software into a generic, framework- and experiment-independent software package. It provides a set of high-level algorithms and data structures for performing track reconstruction tasks as well as fast track simulation. The software is de...

  18. Haplotype reconstruction error as a classical misclassification problem: introducing sensitivity and specificity as error measures.

    Directory of Open Access Journals (Sweden)

    Claudia Lamina

    Full Text Available BACKGROUND: Statistically reconstructing haplotypes from single nucleotide polymorphism (SNP) genotypes can lead to falsely classified haplotypes. This can be an issue when interpreting haplotype association results or when selecting subjects with certain haplotypes for subsequent functional studies. It was our aim to quantify the haplotype reconstruction error and to provide tools for it. METHODS AND RESULTS: Through numerous simulation scenarios, we systematically investigated several error measures, including discrepancy, error rate, and R^2, and introduced sensitivity and specificity to this context. We exemplified several measures in the KORA study, a large population-based study from Southern Germany. We find that the specificity is slightly reduced only for common haplotypes, while the sensitivity was decreased for some, but not all, rare haplotypes. The overall error rate generally increased with an increasing number of loci, increasing minor allele frequency of SNPs, decreasing correlation between the alleles, and increasing ambiguity. CONCLUSIONS: We conclude that, with the analytical approach presented here, haplotype-specific error measures can be computed to gain insight into the haplotype uncertainty. This method provides the information on whether a specific risk haplotype can be expected to be reconstructed with essentially no misclassification or with high misclassification, and thus on the magnitude of the expected bias in association estimates. We also illustrate that sensitivity and specificity separate two dimensions of the haplotype reconstruction error, which completely describe the misclassification matrix and thus provide the prerequisite for methods accounting for misclassification.
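
    In the simplest per-haplotype form, the two error dimensions are the usual classification measures computed from a confusion matrix; the boolean-call sketch below is illustrative, whereas the paper works with reconstruction probabilities from statistical phasing rather than hard calls.

        def haplotype_sensitivity_specificity(true_has_h, recon_has_h):
            """Sensitivity and specificity of reconstructing one specific haplotype h.
            true_has_h, recon_has_h : booleans per chromosome/subject indicating whether
            h is truly carried and whether it was reconstructed as carried.
            """
            pairs = list(zip(true_has_h, recon_has_h))
            tp = sum(t and r for t, r in pairs)        # correctly reconstructed carriers
            fn = sum(t and not r for t, r in pairs)    # missed carriers
            fp = sum(r and not t for t, r in pairs)    # falsely assigned carriers
            tn = sum(not t and not r for t, r in pairs)
            sensitivity = tp / (tp + fn) if (tp + fn) else float("nan")
            specificity = tn / (tn + fp) if (tn + fp) else float("nan")
            return sensitivity, specificity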

  19. The Neuroelectromagnetic Inverse Problem and the Zero Dipole Localization Error

    Directory of Open Access Journals (Sweden)

    Rolando Grave de Peralta

    2009-01-01

    Full Text Available A tomography of neural sources could be constructed from EEG/MEG recordings once the neuroelectromagnetic inverse problem (NIP is solved. Unfortunately the NIP lacks a unique solution and therefore additional constraints are needed to achieve uniqueness. Researchers are then confronted with the dilemma of choosing one solution on the basis of the advantages publicized by their authors. This study aims to help researchers to better guide their choices by clarifying what is hidden behind inverse solutions oversold by their apparently optimal properties to localize single sources. Here, we introduce an inverse solution (ANA attaining perfect localization of single sources to illustrate how spurious sources emerge and destroy the reconstruction of simultaneously active sources. Although ANA is probably the simplest and robust alternative for data generated by a single dominant source plus noise, the main contribution of this manuscript is to show that zero localization error of single sources is a trivial and largely uninformative property unable to predict the performance of an inverse solution in presence of simultaneously active sources. We recommend as the most logical strategy for solving the NIP the incorporation of sound additional a priori information about neural generators that supplements the information contained in the data.

  20. The Mathematical Basis of the Inverse Scattering Problem for Cracks from Near-Field Data

    Directory of Open Access Journals (Sweden)

    Yao Mao

    2015-01-01

    Full Text Available We consider the acoustic scattering problem from a crack which has Dirichlet boundary condition on one side and impedance boundary condition on the other side. The inverse scattering problem in this paper tries to determine the shape of the crack and the surface impedance coefficient from the near-field measurements of the scattered waves, while the source point is placed on a closed curve. We firstly establish a near-field operator and focus on the operator’s mathematical analysis. Secondly, we obtain a uniqueness theorem for the shape and surface impedance. Finally, by using the operator’s properties and modified linear sampling method, we reconstruct the shape and surface impedance.

  1. Real-Space x-ray tomographic reconstruction of randomly oriented objects with sparse data frames.

    Science.gov (United States)

    Ayyer, Kartik; Philipp, Hugh T; Tate, Mark W; Elser, Veit; Gruner, Sol M

    2014-02-10

    Schemes for X-ray imaging single protein molecules using new x-ray sources, like x-ray free electron lasers (XFELs), require processing many frames of data that are obtained by taking temporally short snapshots of identical molecules, each with a random and unknown orientation. Due to the small size of the molecules and short exposure times, average signal levels of much less than 1 photon/pixel/frame are expected, much too low to be processed using standard methods. One approach to process the data is to use statistical methods developed in the EMC algorithm (Loh & Elser, Phys. Rev. E, 2009) which processes the data set as a whole. In this paper we apply this method to a real-space tomographic reconstruction using sparse frames of data (below 10^-2 photons/pixel/frame) obtained by performing x-ray transmission measurements of a low-contrast, randomly-oriented object. This extends the work by Philipp et al. (Optics Express, 2012) to three dimensions and is one step closer to the single molecule reconstruction problem.

  2. On the Complexity of Reconstructing Chemical Reaction Networks

    DEFF Research Database (Denmark)

    Fagerberg, Rolf; Flamm, Christoph; Merkle, Daniel

    2013-01-01

    The analysis of the structure of chemical reaction networks is crucial for a better understanding of chemical processes. Such networks are well described as hypergraphs. However, due to the available methods, analyses regarding network properties are typically made on standard graphs derived from...... the full hypergraph description, e.g. on the so-called species and reaction graphs. However, a reconstruction of the underlying hypergraph from these graphs is not necessarily unique. In this paper, we address the problem of reconstructing a hypergraph from its species and reaction graph and show NP...

  3. Solution of the multigroup neutron diffusion Eigenvalue problem in slab geometry by modified power method

    Energy Technology Data Exchange (ETDEWEB)

    Zanette, Rodrigo [Universidade Federal do Rio Grande do Sul (UFRGS), Porto Alegre, RS (Brazil). Programa de Pós-Graduação em Matemática Aplicada; Petersen, Claudio Z.; Tavares, Matheus G., E-mail: rodrigozanette@hotmail.com, E-mail: claudiopetersen@yahoo.com.br, E-mail: matheus.gulartetavares@gmail.com [Universidade Federal de Pelotas (UFPEL), RS (Brazil). Programa de Pós-Graduação em Modelagem Matemática

    2017-07-01

    We describe in this work the application of the modified power method to solve the multigroup neutron diffusion eigenvalue problem in slab geometry, considering two dimensions, for nuclear reactor global calculations. It is well known that criticality calculations can often be best approached by solving eigenvalue problems. Criticality plays a relevant role in nuclear reactor physics since it establishes the ratio between the numbers of neutrons generated in successive fission reactions. In order to solve the eigenvalue problem, a modified power method is used to obtain the dominant eigenvalue (the effective multiplication factor, K_eff) and its corresponding eigenfunction (the scalar neutron flux), which is non-negative over the whole domain, that is, physically relevant. The innovation of this work is solving the neutron diffusion equation in analytical form for each new iteration of the power method. To solve this problem we propose to apply the finite Fourier sine transform to one of the spatial variables, obtaining a transformed problem which is solved by well-established methods for ordinary differential equations. The inverse Fourier transform is used to reconstruct the solution of the original problem. The power method is an iterative source method in which the fission source is updated with the neutron flux expression from the previous iteration. Thus, for each new iteration, the neutron flux expression becomes larger and more complex due to the analytical solution, which motivates reconstructing it through polynomial interpolation. The methodology is implemented to solve a homogeneous problem and the results are compared with works presented in the literature. (author)
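
    A minimal sketch of the classical power (source) iteration in generic matrix form is shown below; the paper's contribution, solving the within-iteration diffusion problem analytically with the finite Fourier sine transform and rebuilding the flux by polynomial interpolation, is replaced here by a plain linear solve, and the operator names are assumptions.

        import numpy as np

        def power_method_keff(F, M, tol=1e-8, max_iter=500):
            """Power iteration for the dominant eigenvalue k_eff of the discretised
            problem M phi = (1/k) F phi.
            F : fission production operator (matrix form)
            M : migration/loss operator (matrix form), assumed invertible
            """
            phi = np.ones(F.shape[0])
            k = 1.0
            for _ in range(max_iter):
                src = F @ phi / k                      # fission source from previous iterate
                phi_new = np.linalg.solve(M, src)      # 'diffusion solve' for the new flux
                k_new = k * (F @ phi_new).sum() / (F @ phi).sum()
                phi_new /= np.linalg.norm(phi_new)
                if abs(k_new - k) < tol:
                    return k_new, phi_new
                k, phi = k_new, phi_new
            return k, phi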

  4. Hopfield neural network in HEP track reconstruction

    International Nuclear Information System (INIS)

    Muresan, R.; Pentia, M.

    1997-01-01

    In experimental particle physics, pattern recognition problems, specifically for neural network methods, occur frequently in track finding or feature extraction. Track finding is a combinatorial optimization problem. Given a set of points in Euclidean space, one attempts to reconstruct particle trajectories, subject to smoothness constraints. The basic ingredients in a neural network are the N binary neurons and the synaptic strengths connecting them. In our case the neurons are the segments connecting all possible point pairs. The dynamics of the neural network is given by a local updating rule which evaluates for each neuron the sign of the 'upstream activity'. An updating rule in the form of a sigmoid function is given. The synaptic strengths are defined in terms of the angle between the segments and the lengths of the segments involved in the track reconstruction. An algorithm based on a Hopfield neural network has been developed and tested on track coordinates measured by the silicon microstrip tracking system.
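
    A generic mean-field sketch of the sigmoid updating rule described in the abstract is given below; the construction of the synaptic matrix W from segment angles and lengths, which carries the track-finding physics, is assumed given, and the temperature parameter is illustrative.

        import numpy as np

        def hopfield_relax(s, W, T=1.0, n_sweeps=100):
            """Sequential mean-field relaxation of a Hopfield network with a sigmoid rule.
            s : activations in [0, 1], one per candidate track segment (numpy array)
            W : synaptic strength matrix built from segment angles and lengths
            T : 'temperature' controlling the steepness of the sigmoid
            """
            for _ in range(n_sweeps):
                for i in range(len(s)):
                    activity = W[i] @ s                      # 'upstream activity' of neuron i
                    s[i] = 0.5 * (1.0 + np.tanh(activity / T))
            return s

    Segments whose activation relaxes towards 1 are kept as track pieces; those relaxing towards 0 are discarded.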

  5. Reconstruction of Atmospheric Tracer Releases with Optimal Resolution Features: Concentration Data Assimilation

    Science.gov (United States)

    Singh, Sarvesh Kumar; Turbelin, Gregory; Issartel, Jean-Pierre; Kumar, Pramod; Feiz, Amir Ali

    2015-04-01

    The fast-growing urbanization, industrialization and military developments increase the risk to the human environment and ecology. This has been realized in several past incidents with large loss of life, for instance, the Chernobyl nuclear explosion (Ukraine), the Bhopal gas leak (India), and the Fukushima-Daiichi radionuclide release (Japan). To reduce the threat and exposure to hazardous contaminants, a fast, preliminary identification of unknown releases is required by the responsible authorities for emergency preparedness and air quality analysis. Often, an early detection of such contaminants is pursued with a distributed sensor network. However, identifying the origin and strength of unknown releases from the sensor-reported concentrations is a challenging task. This requires an optimal strategy to integrate the measured concentrations with the predictions given by atmospheric dispersion models. This is an inverse problem. The measured concentrations are insufficient, and atmospheric dispersion models suffer from inaccuracy due to the lack of process understanding, turbulence uncertainties, etc. These lead to a loss of information in the reconstruction process and thus affect the resolution, stability and uniqueness of the retrieved source. An additional well-known issue is the numerical artifact arising at the measurement locations due to the strong concentration gradient and the dissipative nature of the concentration field. Thus, assimilation techniques are desired which can lead to an optimal retrieval of the unknown releases. In general, this is facilitated within the Bayesian inference and optimization framework with a suitable choice of a priori information, regularization constraints, and measurement and background error statistics. An inversion technique is introduced here for an optimal reconstruction of unknown releases using limited concentration measurements. This is based on the adjoint representation of the source-receptor relationship and utilization of a weight

  6. Image Reconstruction of Metal Pipe in Electrical Resistance Tomography

    Directory of Open Access Journals (Sweden)

    Suzanna RIDZUAN AW

    2017-02-01

    Full Text Available This paper demonstrates a Linear Back Projection (LBP) algorithm based on the reconstruction of conductivity distributions to identify different sizes and locations of bubble phantoms in a metal pipe. Both the forward and inverse problems are discussed. Reconstructed images of the phantoms under test conditions are presented. From the results, it was shown that the sensitivity maps of the conducting boundary strategy can be applied successfully to identify the location of the phantom of interest using the LBP algorithm. Additionally, the number and spatial distribution of the bubble phantoms can be clearly distinguished at any location in the pipeline. It was also shown that the reconstructed images agree well with the bubble phantoms.
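
    LBP is a single-step, non-iterative reconstruction; a minimal sketch is given below. The normalisation of the boundary voltage changes and the sign convention are assumptions here, since conventions differ between conducting- and insulating-boundary ERT setups.

        import numpy as np

        def linear_back_projection(S, v_meas, v_ref):
            """Linear Back Projection (LBP) reconstruction for ERT.
            S      : sensitivity map matrix, shape (n_measurements, n_pixels)
            v_meas : boundary voltages with the bubble phantom present
            v_ref  : reference voltages for the homogeneous (phantom-free) pipe
            """
            lam = (v_ref - v_meas) / v_ref           # normalised relative voltage change
            img = S.T @ lam                          # back-project onto the pixel grid
            norm = S.T @ np.ones_like(lam)           # per-pixel normalisation
            return img / np.where(norm == 0.0, 1.0, norm)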

  7. A Total Variation-Based Reconstruction Method for Dynamic MRI

    Directory of Open Access Journals (Sweden)

    Germana Landi

    2008-01-01

    Full Text Available In recent years, total variation (TV) regularization has become a popular and powerful tool for image restoration and enhancement. In this work, we apply TV minimization to improve the quality of dynamic magnetic resonance images. Dynamic magnetic resonance imaging is an increasingly popular clinical technique used to monitor spatio-temporal changes in tissue structure. Fast data acquisition is necessary in order to capture the dynamic process. Most commonly, the requirement of high temporal resolution is fulfilled by sacrificing spatial resolution. Therefore, the numerical methods have to address the issue of image reconstruction from limited Fourier data. One of the most successful techniques for dynamic imaging applications is the reduced-encoding imaging by generalized-series reconstruction method of Liang and Lauterbur. However, even if this method utilizes a priori data for optimal image reconstruction, the produced dynamic images are degraded by truncation artifacts, most notably Gibbs ringing, due to the low spatial resolution of the data. We use a TV regularization strategy in order to reduce these truncation artifacts in the dynamic images. The resulting TV minimization problem is solved by the fixed point iteration method of Vogel and Oman. The results of test problems with simulated and real data are presented to illustrate the effectiveness of the proposed approach in reducing the truncation artifacts of the reconstructed images.

  8. Homotopy Based Reconstruction from Acoustic Images

    DEFF Research Database (Denmark)

    Sharma, Ojaswa

    of the inherent arrangement. The problem of reconstruction from arbitrary cross sections is a generic problem and is also shown to be solved here using the mathematical tool of continuous deformations. As part of a complete processing, segmentation using level set methods is explored for acoustic images and fast...... GPU (Graphics Processing Unit) based methods are suggested for a streaming computation on large volumes of data. Validation of results for acoustic images is not straightforward due to unavailability of ground truth. Accuracy figures for the suggested methods are provided using phantom object...

  9. An Lq–Lp optimization framework for image reconstruction of electrical resistance tomography

    International Nuclear Information System (INIS)

    Zhao, Jia; Xu, Yanbin; Dong, Feng

    2014-01-01

    Image reconstruction in electrical resistance tomography (ERT) is an ill-posed and nonlinear problem, which is easily affected by measurement noise. The regularization method with an L2 constraint term or an L1 constraint term is often used to solve the inverse problem of ERT. It is shown that the reconstruction method with L2 regularization imposes smoothness to obtain stability in the image reconstruction process, producing images that are blurry at the interfaces between different conductivities. The regularization method with the L1 norm is effective at dealing with these over-smoothing effects, which is beneficial for obtaining a sharp transition in the conductivity distribution. To find the reason for these effects, an Lq–Lp optimization framework (1 ≤ q ≤ 2, 1 ≤ p ≤ 2) for the image reconstruction of ERT is presented in this paper. The Lq–Lp optimization framework is solved based on an approximate handling with a Gauss–Newton iteration algorithm. The optimization framework is tested for image reconstruction of ERT with different models, and the effects of the Lp regularization term on the quality of the reconstructed images are discussed with both simulation and experiment. By comparing the reconstructed results with different p in the regularization term, it is found that a larger penalty is imposed on small entries of the solution when p is small and a smaller penalty when p is larger; larger p also makes the reconstructed images smoother and more easily affected by noise. (paper)
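
    One common way to handle such Lq–Lp objectives is iteratively reweighted least squares (IRLS), sketched below for q = 2 and p = 1. It approximates the non-quadratic terms with weighted L2 problems and is meant only to illustrate the effect of the Lp penalty, not to reproduce the Gauss–Newton scheme of the paper; the problem sizes and parameters are illustrative.

```python
import numpy as np

# IRLS sketch for min ||A x - b||_q^q + lam ||x||_p^p with smoothed weights.
# With q = 2 the residual weights reduce to 1 (ordinary data term).

rng = np.random.default_rng(3)
m, n = 60, 100
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[[5, 40, 77]] = [1.0, -2.0, 1.5]
b = A @ x_true + 0.01 * rng.standard_normal(m)

q, p, lam, eps = 2.0, 1.0, 1e-2, 1e-6
x = np.zeros(n)
for _ in range(30):
    r = A @ x - b
    wr = (r**2 + eps) ** (q / 2 - 1)           # residual weights  ~ |r|^(q-2)
    wx = (x**2 + eps) ** (p / 2 - 1)           # solution weights  ~ |x|^(p-2)
    H = A.T @ (wr[:, None] * A) + lam * np.diag(wx)
    x = np.linalg.solve(H, A.T @ (wr * b))
print("largest-magnitude entries at:", np.sort(np.argsort(np.abs(x))[-3:]))
```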

  10. Realtime Reconstruction of an Animating Human Body from a Single Depth Camera.

    Science.gov (United States)

    Chen, Yin; Cheng, Zhi-Quan; Lai, Chao; Martin, Ralph R; Dang, Gang

    2016-08-01

    We present a method for realtime reconstruction of an animating human body, which produces a sequence of deforming meshes representing a given performance captured by a single commodity depth camera. We achieve realtime single-view mesh completion by enhancing the parameterized SCAPE model. Our method, which we call Realtime SCAPE, performs full-body reconstruction without the use of markers. In Realtime SCAPE, estimations of body shape parameters and pose parameters, needed for reconstruction, are decoupled. Intrinsic body shape is first precomputed for a given subject, by determining shape parameters with the aid of a body shape database. Subsequently, per-frame pose parameter estimation is performed by means of linear blending skinning (LBS); the problem is decomposed into separately finding skinning weights and transformations. The skinning weights are also determined offline from the body shape database, reducing online reconstruction to simply finding the transformations in LBS. Doing so is formulated as a linear variational problem; carefully designed constraints are used to impose temporal coherence and alleviate artifacts. Experiments demonstrate that our method can produce full-body mesh sequences with high fidelity.
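
    The linear blend skinning step can be summarized in a few lines: each vertex is moved by a weighted combination of per-bone rigid transforms. The toy vertices, weights and transforms below are illustrative; in the paper the weights come from the body-shape database and the per-frame transforms are the unknowns being solved for.

```python
import numpy as np

# Minimal sketch of linear blend skinning (LBS).

def lbs(vertices, weights, transforms):
    """vertices: (V,3), weights: (V,B), transforms: (B,4,4) -> deformed (V,3)."""
    v_h = np.hstack([vertices, np.ones((len(vertices), 1))])   # homogeneous coordinates
    per_bone = np.einsum('bij,vj->vbi', transforms, v_h)       # each bone moves each vertex
    blended = np.einsum('vb,vbi->vi', weights, per_bone)       # blend by skinning weights
    return blended[:, :3]

verts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
w = np.array([[1.0, 0.0], [0.5, 0.5]])                         # weights per vertex sum to 1
T0 = np.eye(4)
T1 = np.eye(4)
T1[:3, 3] = [0.0, 1.0, 0.0]                                    # second bone translates up
print(lbs(verts, w, np.stack([T0, T1])))                       # second vertex moves halfway up
```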

  11. Ultrasound guided electrical impedance tomography for 2D free-interface reconstruction

    Science.gov (United States)

    Liang, Guanghui; Ren, Shangjie; Dong, Feng

    2017-07-01

    The free-interface detection problem is commonly encountered in industrial and biological processes. Electrical impedance tomography (EIT) is a non-invasive technique with the advantages of high speed and low cost, and is a promising solution for free-interface detection problems. However, due to its ill-posed and nonlinear characteristics, the spatial resolution of EIT is low. To deal with this issue, an ultrasound guided EIT is proposed to directly reconstruct the geometric configuration of the target free-interface. In the method, the position of the central point of the target interface is measured by a pair of ultrasound transducers mounted at opposite sides of the objective domain, and then the position measurement is used as prior information for guiding the EIT-based free-interface reconstruction. During the process, a constrained least squares framework is used to fuse the information from the different measurement modalities, and the Lagrange multiplier-based Levenberg-Marquardt method is adopted to provide the iterative solution of the constrained optimization problem. The numerical results show that the proposed ultrasound guided EIT method for free-interface reconstruction is more accurate than the single modality method, especially when the number of valid electrodes is limited.
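
    The fusion of the two modalities can be pictured as an equality-constrained least-squares problem, sketched below with a direct KKT (Lagrange multiplier) solve rather than the iterative Levenberg-Marquardt scheme of the paper. `J`, `v`, `C` and `d` are random stand-ins for the EIT sensitivity matrix, the EIT measurements and the ultrasound-derived interface constraint.

```python
import numpy as np

# Equality-constrained least squares: minimize ||J x - v||^2 subject to C x = d,
# solved via the KKT (Lagrange multiplier) system.  All quantities are toys.

rng = np.random.default_rng(4)
m, n = 16, 8
J = rng.standard_normal((m, n))
x_true = rng.standard_normal(n)
v = J @ x_true + 0.05 * rng.standard_normal(m)

C = np.zeros((1, n))
C[0, 3] = 1.0                          # "ultrasound" pins one interface parameter
d = np.array([x_true[3]])

# KKT system: [2 J^T J, C^T; C, 0] [x; mu] = [2 J^T v; d]
K = np.block([[2 * J.T @ J, C.T], [C, np.zeros((1, 1))]])
rhs = np.concatenate([2 * J.T @ v, d])
x_hat = np.linalg.solve(K, rhs)[:n]
print("constrained estimate error:", round(float(np.linalg.norm(x_hat - x_true)), 4))
```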

  12. Ultrasound guided electrical impedance tomography for 2D free-interface reconstruction

    International Nuclear Information System (INIS)

    Liang, Guanghui; Ren, Shangjie; Dong, Feng

    2017-01-01

    The free-interface detection problem is commonly encountered in industrial and biological processes. Electrical impedance tomography (EIT) is a non-invasive technique with the advantages of high speed and low cost, and is a promising solution for free-interface detection problems. However, due to its ill-posed and nonlinear characteristics, the spatial resolution of EIT is low. To deal with this issue, an ultrasound guided EIT is proposed to directly reconstruct the geometric configuration of the target free-interface. In the method, the position of the central point of the target interface is measured by a pair of ultrasound transducers mounted at opposite sides of the objective domain, and then the position measurement is used as prior information for guiding the EIT-based free-interface reconstruction. During the process, a constrained least squares framework is used to fuse the information from the different measurement modalities, and the Lagrange multiplier-based Levenberg–Marquardt method is adopted to provide the iterative solution of the constrained optimization problem. The numerical results show that the proposed ultrasound guided EIT method for free-interface reconstruction is more accurate than the single modality method, especially when the number of valid electrodes is limited. (paper)

  13. Continuous analog of multiplicative algebraic reconstruction technique for computed tomography

    Science.gov (United States)

    Tateishi, Kiyoko; Yamaguchi, Yusaku; Abou Al-Ola, Omar M.; Kojima, Takeshi; Yoshinaga, Tetsuya

    2016-03-01

    We propose a hybrid dynamical system as a continuous analog of the block-iterative multiplicative algebraic reconstruction technique (BI-MART), which is a well-known iterative image reconstruction algorithm for computed tomography. The hybrid system is described by a switched nonlinear system with a piecewise smooth vector field or differential equation and, for consistent inverse problems, the convergence of non-negatively constrained solutions to a globally stable equilibrium is guaranteed by the Lyapunov theorem. Namely, we can prove theoretically that a weighted Kullback-Leibler divergence measure can serve as a common Lyapunov function for the switched system. We show that discretizing the differential equation using the first-order approximation (Euler's method) based on the geometric multiplicative calculus leads to the same iterative formula as the BI-MART, with the scaling parameter acting as the time step of the numerical discretization. The present paper is the first to reveal that an iterative image reconstruction algorithm of this kind can be constructed by discretizing a continuous-time dynamical system for solving tomographic inverse problems. Iterative algorithms based not only on the Euler method but also on lower-order Runge-Kutta methods applied to discretize the continuous-time system can be used for image reconstruction. A numerical example showing the characteristics of the discretized iterative methods is presented.
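
    A minimal multiplicative-update sketch is given below; it uses a simultaneous (SMART-style) variant on a consistent non-negative system, with the relaxation parameter `step` playing the role of the time step of the discretized continuous system. It is illustrative only and does not reproduce the block-iterative scheme or the Runge-Kutta discretizations discussed in the paper.

```python
import numpy as np

# Simultaneous multiplicative update for a consistent non-negative system
# A x = b: each pixel is scaled by a weighted geometric mean of measured /
# reprojected ratios.  All sizes and the relaxation parameter are illustrative.

rng = np.random.default_rng(5)
m, n = 30, 20
A = rng.random((m, n))
x_true = rng.random(n) + 0.5
b = A @ x_true                                   # consistent, noise-free projections

x = np.ones(n)                                   # positive initial image
step = 0.5                                       # relaxation / "time step"
for _ in range(200):
    ratio = b / (A @ x)                          # measured / reprojected
    x *= np.exp(step * (A.T @ np.log(ratio)) / A.sum(axis=0))
print("relative residual:", round(float(np.linalg.norm(A @ x - b) / np.linalg.norm(b)), 6))
```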

  14. Three-dimensional imagery by encoding sources of X rays

    International Nuclear Information System (INIS)

    Magnin, Isabelle

    1987-01-01

    This research thesis addresses the theoretical and practical study of coded X-ray sources, and notably explores whether a standard digital radiography apparatus (such as those operated in hospital radiology departments) could be transformed into a low-cost three-dimensional imaging system. The author first recalls the principle of conventional tomography and attempts at its improvement, and describes imaging techniques based on coded apertures and source coding. She reports the modelling of an imaging system based on coded X-ray sources, and addresses the original notion of a three-dimensional response for such a system. The author then addresses the reconstruction method by considering the reconstruction of a planar object, of a multi-plane object, and of a real three-dimensional object. The frequency properties and tomographic capabilities of various types of source codes are analysed. She describes a prototype tomography apparatus, and presents and discusses actual three-dimensional phantom reconstructions. She finally introduces a new principle of dynamic three-dimensional radiography, which implements an acquisition technique based on a 'gating code'. This acquisition principle should allow the reconstruction of volumes animated by periodic deformations, such as the heart. [fr

  15. Efficient L1 regularization-based reconstruction for fluorescent molecular tomography using restarted nonlinear conjugate gradient.

    Science.gov (United States)

    Shi, Junwei; Zhang, Bin; Liu, Fei; Luo, Jianwen; Bai, Jing

    2013-09-15

    For the ill-posed fluorescent molecular tomography (FMT) inverse problem, L1 regularization can preserve high-frequency information such as edges while effectively reducing image noise. However, the state-of-the-art L1 regularization-based algorithms for FMT reconstruction are memory-intensive, especially for large-scale problems. An efficient L1 regularization-based reconstruction algorithm based on nonlinear conjugate gradient with a restart strategy is proposed to increase computational speed with low memory consumption. The reconstruction results from phantom experiments demonstrate that the proposed algorithm can obtain high spatial resolution and high signal-to-noise ratio, as well as high localization accuracy for fluorescence targets.
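
    A sketch of the general approach, restarted nonlinear conjugate gradient on a smoothed L1-regularized least-squares objective, is given below. The smoothing, Polak-Ribiere update, Armijo line search and restart interval are generic choices made for illustration and are not claimed to match the algorithm of the paper.

```python
import numpy as np

# Restarted nonlinear CG for f(x) = 0.5 ||A x - b||^2 + lam * sum sqrt(x^2 + eps).
# All sizes and parameters are illustrative.

rng = np.random.default_rng(6)
m, n = 50, 120
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[[10, 60, 100]] = [2.0, -1.5, 1.0]
b = A @ x_true

lam, eps = 0.05, 1e-8
f = lambda z: 0.5 * np.sum((A @ z - b) ** 2) + lam * np.sum(np.sqrt(z ** 2 + eps))
grad = lambda z: A.T @ (A @ z - b) + lam * z / np.sqrt(z ** 2 + eps)

x = np.zeros(n)
g = grad(x)
d = -g
for k in range(300):
    if k % 20 == 0 or g @ d >= 0:          # periodic restart / non-descent safeguard
        d = -g
    t, f0 = 1.0, f(x)
    while t > 1e-12 and f(x + t * d) > f0 + 1e-4 * t * (g @ d):   # Armijo backtracking
        t *= 0.5
    x = x + t * d
    g_new = grad(x)
    beta = max(0.0, g_new @ (g_new - g) / (g @ g))                # Polak-Ribiere+
    d = -g_new + beta * d
    g = g_new
print("largest-magnitude entries at:", np.sort(np.argsort(np.abs(x))[-3:]))
```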

  16. Total variation superiorized conjugate gradient method for image reconstruction

    Science.gov (United States)

    Zibetti, Marcelo V. W.; Lin, Chuan; Herman, Gabor T.

    2018-03-01

    The conjugate gradient (CG) method is commonly used for the relatively rapid solution of least squares problems. In image reconstruction, the problem can be ill-posed and also contaminated by noise; due to this, approaches such as regularization should be utilized. Total variation (TV) is a useful regularization penalty, frequently utilized in image reconstruction for generating images with sharp edges. When a non-quadratic norm is selected for regularization, as is the case for TV, then it is no longer possible to use CG. Non-linear CG is an alternative, but it does not share the efficiency that CG shows with least squares, and methods such as fast iterative shrinkage-thresholding algorithms (FISTA) are preferred for problems with the TV norm. A different approach to including prior information is superiorization. In this paper it is shown that the conjugate gradient method can be superiorized. Five different CG variants are proposed, including preconditioned CG. The CG methods superiorized by the total variation norm are presented and their performance in image reconstruction is demonstrated. It is illustrated that some of the proposed variants of the superiorized CG method can produce reconstructions of superior quality to those produced by FISTA and in less computational time, due to the speed of the original CG for least squares problems. In the Appendix we examine the behavior of one of the superiorized CG methods (we call it S-CG); one of its input parameters is a positive number ɛ. It is proved that, for any given ɛ that is greater than the half-squared-residual for the least squares solution, S-CG terminates in a finite number of steps with an output for which the half-squared-residual is less than or equal to ɛ. Importantly, it is also the case that the output will have a lower value of TV than what would be provided by unsuperiorized CG for the same value ɛ of the half-squared residual.
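
    The superiorization idea, interleaving the iterations of a basic algorithm with small, shrinking perturbations that reduce the TV penalty, can be sketched as follows on a 1-D toy problem. This is not the S-CG algorithm of the paper; the perturbation schedule, the Fletcher-Reeves update and all problem sizes are illustrative.

```python
import numpy as np

# Schematic superiorization: CG steps on 0.5||A x - b||^2, each preceded by a
# small TV-reducing perturbation whose size shrinks geometrically.

rng = np.random.default_rng(7)
n = 100
x_true = np.concatenate([np.zeros(50), np.ones(50)])      # piecewise-constant phantom
A = rng.standard_normal((80, n))
b = A @ x_true + 0.05 * rng.standard_normal(80)

def tv_grad(x, eps=1e-6):
    d = np.diff(x)
    w = d / np.sqrt(d ** 2 + eps)
    return np.concatenate([[-w[0]], w[:-1] - w[1:], [w[-1]]])

x = np.zeros(n)
g = A.T @ (A @ x - b)
p = -g
gamma = 1.0                                               # perturbation size
for k in range(100):
    v = -tv_grad(x)                                       # TV-reducing direction
    nv = np.linalg.norm(v)
    if nv > 0:
        x = x + gamma * v / nv
    gamma *= 0.9
    g = A.T @ (A @ x - b)                                 # gradient after perturbation
    Ap = A @ p
    alpha = -(g @ p) / (Ap @ Ap)                          # exact line search along p
    x = x + alpha * p
    g_new = A.T @ (A @ x - b)
    beta = (g_new @ g_new) / (g @ g)                      # Fletcher-Reeves update
    p = -g_new + beta * p
print("data residual:", round(float(np.linalg.norm(A @ x - b)), 3),
      " TV:", round(float(np.sum(np.abs(np.diff(x)))), 3))
```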

  17. Gamma regularization based reconstruction for low dose CT

    International Nuclear Information System (INIS)

    Zhang, Junfeng; Chen, Yang; Hu, Yining; Luo, Limin; Shu, Huazhong; Li, Bicao; Liu, Jin; Coatrieux, Jean-Louis

    2015-01-01

    Reducing the radiation in computerized tomography is today a major concern in radiology. Low dose computerized tomography (LDCT) offers a sound way to deal with this problem. However, more severe noise in the reconstructed CT images is observed under low dose scan protocols (e.g. lowered tube current or voltage values). In this paper we propose a Gamma regularization based algorithm for LDCT image reconstruction. This solution is flexible and provides a good balance between the regularizations based on the l0-norm and the l1-norm. We evaluate the proposed approach using the projection data from simulated phantoms and scanned Catphan phantoms. Qualitative and quantitative results show that the Gamma regularization based reconstruction performs better in both edge preservation and noise suppression when compared with other norms. (paper)

  18. A Total Variation Regularization Based Super-Resolution Reconstruction Algorithm for Digital Video

    Directory of Open Access Journals (Sweden)

    Zhang Liangpei

    2007-01-01

    Full Text Available Super-resolution (SR) reconstruction is capable of producing a high-resolution image from a sequence of low-resolution images. In this paper, we study an efficient SR algorithm for digital video. To effectively deal with the intractable problems in SR video reconstruction, such as inevitable motion estimation errors, noise, blurring, missing regions, and compression artifacts, total variation (TV) regularization is employed in the reconstruction model. We use the fixed-point iteration method and preconditioning techniques to efficiently solve the associated nonlinear Euler-Lagrange equations of the corresponding variational problem in SR. The proposed algorithm has been tested in several cases of motion and degradation. It is also compared with the Laplacian regularization-based SR algorithm and other TV-based SR algorithms. Experimental results are presented to illustrate the effectiveness of the proposed algorithm.

  19. Wavelet-sparsity based regularization over time in the inverse problem of electrocardiography.

    Science.gov (United States)

    Cluitmans, Matthijs J M; Karel, Joël M H; Bonizzi, Pietro; Volders, Paul G A; Westra, Ronald L; Peeters, Ralf L M

    2013-01-01

    Noninvasive, detailed assessment of electrical cardiac activity at the level of the heart surface has the potential to revolutionize diagnostics and therapy of cardiac pathologies. Due to the requirement of noninvasiveness, body-surface potentials are measured and have to be projected back to the heart surface, yielding an ill-posed inverse problem. Ill-posedness implies that solutions to this problem are non-unique, resulting in a problem of choice. In the current paper, it is proposed to restrict this choice by requiring that the time series of reconstructed heart-surface potentials be sparse in the wavelet domain. A local search technique is introduced that pursues a sparse solution, using an orthogonal wavelet transform. Epicardial potentials reconstructed with this method are compared to those from existing methods, and validated with actual intracardiac recordings. The new technique improves the reconstructions in terms of smoothness and recovers physiologically meaningful details. Additionally, reconstruction of activation timing appears to be improved when pursuing sparsity of the reconstructed signals in the wavelet domain.
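
    The core operation, promoting sparsity of the reconstructed time series in an orthogonal wavelet domain, can be sketched with soft thresholding of wavelet coefficients using the PyWavelets package. The wavelet, decomposition level and threshold below are illustrative, and the noisy signal merely stands in for an unregularized reconstruction; the paper's local search over epicardial potentials is not reproduced here.

```python
import numpy as np
import pywt  # PyWavelets

# Soft-threshold the detail coefficients of an orthogonal wavelet transform,
# then invert, so the retained signal is sparse in the wavelet domain.

rng = np.random.default_rng(8)
t = np.linspace(0, 1, 256)
signal = np.sin(2 * np.pi * 3 * t) + (t > 0.5)         # smooth part plus a step
noisy = signal + 0.2 * rng.standard_normal(t.size)     # stands in for an unstable reconstruction

coeffs = pywt.wavedec(noisy, 'db4', level=4)           # orthogonal wavelet decomposition
coeffs = [coeffs[0]] + [pywt.threshold(c, value=0.3, mode='soft') for c in coeffs[1:]]
sparse_recon = pywt.waverec(coeffs, 'db4')
print("error before:", round(float(np.linalg.norm(noisy - signal)), 3),
      " after:", round(float(np.linalg.norm(sparse_recon - signal)), 3))
```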

  20. Non-stationary reconstruction for dynamic fluorescence molecular tomography with extended kalman filter.

    Science.gov (United States)

    Liu, Xin; Wang, Hongkai; Yan, Zhuangzhi

    2016-11-01

    Dynamic fluorescence molecular tomography (FMT) plays an important role in drug delivery research. However, the majority of current reconstruction methods focus on solving stationary FMT problems. If stationary reconstruction methods are applied to time-varying fluorescence measurements, the reconstructed results may suffer from a high level of artifacts. In addition, with stationary methods only one tomographic image can be obtained after scanning one full circle of projection data. As a result, the movement of the fluorophore in the imaged object may not be detected, owing to the relatively long data acquisition time (typically >1 min). In this paper, we apply the extended Kalman filter (EKF) technique to solve the non-stationary fluorescence tomography problem. In particular, to improve the EKF reconstruction performance, the generalized inverse of the Kalman gain is calculated by a second-order iterative method. Numerical simulation, phantom, and in vivo experiments are performed to evaluate the performance of the method. The experimental results indicate that by using the proposed EKF-based second-order iterative (EKF-SOI) method, we can not only clearly resolve the time-varying distributions of the fluorophore within the imaged object, but also greatly improve the reconstruction time resolution (~2.5 sec/frame), which makes it possible to detect the movement of the fluorophore during the imaging process.
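
    A minimal sketch of the filtering idea is given below: a random-walk state model with per-frame linear measurements, updated by the standard Kalman gain. Because the toy measurement model is linear, the extended Kalman filter reduces to an ordinary Kalman filter here, and the second-order iterative approximation of the gain used in the paper is not reproduced; all dimensions and noise levels are illustrative.

```python
import numpy as np

# Kalman-filter tracking of a time-varying distribution x_k from streaming
# projections y_k = A_k x_k + noise, with random-walk dynamics.

rng = np.random.default_rng(9)
n, m, n_frames = 20, 8, 40
x = np.zeros(n)                                   # state estimate
P = np.eye(n)                                     # state covariance
Q, R = 1e-3 * np.eye(n), 1e-2 * np.eye(m)         # process / measurement noise

x_state = np.zeros(n)
x_state[5] = 1.0                                  # "true" fluorophore location
for k in range(n_frames):
    if k == 20:
        x_state = np.roll(x_state, 3)             # fluorophore moves mid-sequence
    A = rng.random((m, n))                        # per-frame measurement operator (toy)
    y = A @ x_state + 0.05 * rng.standard_normal(m)

    # predict (random-walk dynamics), then update with the Kalman gain
    P = P + Q
    K = P @ A.T @ np.linalg.inv(A @ P @ A.T + R)
    x = x + K @ (y - A @ x)
    P = (np.eye(n) - K @ A) @ P
print("estimated peak voxel after the last frame:", int(np.argmax(x)))
```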