DEFF Research Database (Denmark)
Senturk, O.S.; Hava, A.M.
2011-01-01
This paper proposes the waveform reconstruction method (WRM), which is utilized in the single-phase series active filter's (SAF's) control algorithm, in order to extract the load harmonic voltage component of voltage harmonic type single-phase diode rectifier loads. Employing WRM and the line...... current sampling delay reduction method, a single-phase SAF compensated system provides higher harmonic isolation performance and higher stability margins compared to the system using conventional synchronous-reference-frame-based methods. The analytical, simulation, and experimental studies of a 2.5 k...
International Nuclear Information System (INIS)
Hu, Xiao-feng; Chen, Xiang; Wei, Ming
2013-01-01
Shielding effectiveness (SE) of materials of current testing standards is often carried out by using continuous-wave measurement and amplitude-frequency characteristics curve is used to characterize the results. However, with in-depth study of high-power electromagnetic pulse (EMP) interference, it was discovered that only by frequency-domain SE of materials cannot be completely characterized by shielding performance of time-domain pulsed-field. And there is no uniform testing methods and standards of SE of materials against EMP. In this paper, the method of minimum phase transfer function is used to reconstruct shielded time-domain waveform based on the analysis of the waveform reconstruction method. Pulse of plane waves through an infinite planar material is simulated by using CST simulation software. The reconstructed waveform and simulation waveform is compared. The results show that the waveform reconstruction method based on the minimum phase can be well estimated EMP waveform through the infinite planar materials.
DEFF Research Database (Denmark)
Senturk, Osman Selcuk; Hava, Ahmet M.
2009-01-01
current sampling delay reduction method (SDRM), a single-phase SAF compensated system provides higher harmonic isolation performance and higher stability margins compared to the system using conventional synchronous reference frame based methods. The analytical, simulation, and experimental studies of a 2...
Density reconstruction in multiparameter elastic full-waveform inversion
Sun, Min'ao; Yang, Jizhong; Dong, Liangguo; Liu, Yuzhu; Huang, Chao
2017-12-01
Elastic full-waveform inversion (EFWI) is a quantitative data fitting procedure that recovers multiple subsurface parameters from multicomponent seismic data. As density is involved in addition to P- and S-wave velocities, the multiparameter EFWI suffers from more serious tradeoffs. In addition, compared with P- and S-wave velocities, the misfit function is less sensitive to density perturbation. Thus, a robust density reconstruction remains a difficult problem in multiparameter EFWI. In this paper, we develop an improved scattering-integral-based truncated Gauss-Newton method to simultaneously recover P- and S-wave velocities and density in EFWI. In this method, the inverse Gauss-Newton Hessian has been estimated by iteratively solving the Gauss-Newton equation with a matrix-free conjugate gradient algorithm. Therefore, it is able to properly handle the parameter tradeoffs. To give a detailed illustration of the tradeoffs between P- and S-wave velocities and density in EFWI, wavefield-separated sensitivity kernels and the Gauss-Newton Hessian are numerically computed, and their distribution characteristics are analyzed. Numerical experiments on a canonical inclusion model and a modified SEG/EAGE Overthrust model have demonstrated that the proposed method can effectively mitigate the tradeoff effects, and improve multiparameter gradients. Thus, a high convergence rate and an accurate density reconstruction can be achieved.
Viscoelastic property identification from waveform reconstruction
Leymarie, N.; Aristégui, C.; Audoin, B.; Baste, S.
2002-05-01
An inverse method is proposed for the determination of the viscoelastic properties of material plates from the plane-wave transmitted acoustic field. Innovations lie in a two-step inversion scheme based on the well-known maximum-likelihood principle with an analytic signal formulation. In addition, establishing the analytical formulations of the plate transmission coefficient we implement an efficient and slightly noise-sensitive process suited to both very thin plates and strongly dispersive media.
Waveform relaxation methods for implicit differential equations
P.J. van der Houwen; W.A. van der Veen
1996-01-01
textabstractWe apply a Runge-Kutta-based waveform relaxation method to initial-value problems for implicit differential equations. In the implementation of such methods, a sequence of nonlinear systems has to be solved iteratively in each step of the integration process. The size of these systems
Photonic arbitrary waveform generator based on Taylor synthesis method
DEFF Research Database (Denmark)
Liao, Shasha; Ding, Yunhong; Dong, Jianji
2016-01-01
Arbitrary waveform generation has been widely used in optical communication, radar system and many other applications. We propose and experimentally demonstrate a silicon-on-insulator (SOI) on chip optical arbitrary waveform generator, which is based on Taylor synthesis method. In our scheme......, a Gaussian pulse is launched to some cascaded microrings to obtain first-, second- and third-order differentiations. By controlling amplitude and phase of the initial pulse and successive differentiations, we can realize an arbitrary waveform generator according to Taylor expansion. We obtain several typical...... waveforms such as square waveform, triangular waveform, flat-top waveform, sawtooth waveform, Gaussian waveform and so on. Unlike other schemes based on Fourier synthesis or frequency-to-time mapping, our scheme is based on Taylor synthesis method. Our scheme does not require any spectral disperser or large...
Identification of complex stiffness tensor from waveform reconstruction
Leymarie, N.; Aristégui, C.; Audoin, B.; Baste, S.
2002-03-01
An inverse method is proposed in order to determine the viscoelastic properties of composite-material plates from the plane-wave transmitted acoustic field. Analytical formulations of both the plate transmission coefficient and its first and second derivatives are established, and included in a two-step inversion scheme. Two objective functions to be minimized are then designed by considering the well-known maximum-likelihood principle and by using an analytic signal formulation. Through these innovative objective functions, the robustness of the inversion process against high level of noise in waveforms is improved and the method can be applied to a very thin specimen. The suitability of the inversion process for viscoelastic property identification is demonstrated using simulated data for composite materials with different anisotropy and damping degrees. A study of the effect of the rheologic model choice on the elastic property identification emphasizes the relevance of using a phenomenological description considering viscosity. Experimental characterizations show then the good reliability of the proposed approach. Difficulties arise experimentally for particular anisotropic media.
Efficient data retrieval method for similar plasma waveforms in EAST
Energy Technology Data Exchange (ETDEWEB)
Liu, Ying, E-mail: liuying-ipp@szu.edu.cn [SZU-CASIPP Joint Laboratory for Applied Plasma, Shenzhen University, Shenzhen 518060 (China); Huang, Jianjun; Zhou, Huasheng; Wang, Fan [SZU-CASIPP Joint Laboratory for Applied Plasma, Shenzhen University, Shenzhen 518060 (China); Wang, Feng [Institute of Plasma Physics Chinese Academy of Sciences, Hefei 230031 (China)
2016-11-15
Highlights: • The proposed method is carried out by means of bounding envelope and angle distance. • It allows retrieving for whole similar waveforms of any time length. • In addition, the proposed method is also possible to retrieve subsequences. - Abstract: Fusion research relies highly on data analysis due to its massive-sized database. In the present work, we propose an efficient method for searching and retrieving similar plasma waveforms in Experimental Advanced Superconducting Tokamak (EAST). Based on Piecewise Linear Aggregate Approximation (PLAA) for extracting feature values, the searching process is accomplished in two steps. The first one is coarse searching to narrow down the search space, which is carried out by means of bounding envelope. The second step is fine searching to retrieval similar waveforms, which is implemented by the angle distance. The proposed method is tested in EAST databases and turns out to have good performance in retrieving similar waveforms.
Designing waveforms for temporal encoding using a frequency sampling method
DEFF Research Database (Denmark)
Gran, Fredrik; Jensen, Jørgen Arendt
2007-01-01
was compared to a linear frequency modulated signal with amplitude tapering, previously used in clinical studies for synthetic transmit aperture imaging. The latter had a relatively flat spectrum which implied that the waveform tried to excite all frequencies including ones with low amplification. The proposed......In this paper a method for designing waveforms for temporal encoding in medical ultrasound imaging is described. The method is based on least squares optimization and is used to design nonlinear frequency modulated signals for synthetic transmit aperture imaging. By using the proposed design method...... waveform, on the other hand, was designed so that only frequencies where the transducer had a large amplification were excited. Hereby, unnecessary heating of the transducer could be avoided and the signal-tonoise ratio could be increased. The experimental ultrasound scanner RASMUS was used to evaluate...
Optimal overlapping of waveform relaxation method for linear differential equations
International Nuclear Information System (INIS)
Yamada, Susumu; Ozawa, Kazufumi
2000-01-01
Waveform relaxation (WR) method is extremely suitable for solving large systems of ordinary differential equations (ODEs) on parallel computers, but the convergence of the method is generally slow. In order to accelerate the convergence, the methods which decouple the system into many subsystems with overlaps some of the components between the adjacent subsystems have been proposed. The methods, in general, converge much faster than the ones without overlapping, but the computational cost per iteration becomes larger due to the increase of the dimension of each subsystem. In this research, the convergence of the WR method for solving constant coefficients linear ODEs is investigated and the strategy to determine the number of overlapped components which minimizes the cost of the parallel computations is proposed. Numerical experiments on an SR2201 parallel computer show that the estimated number of the overlapped components by the proposed strategy is reasonable. (author)
[Reconstructive methods after Fournier gangrene].
Wallner, C; Behr, B; Ring, A; Mikhail, B D; Lehnhardt, M; Daigeler, A
2016-04-01
Fournier's gangrene is a variant of the necrotizing fasciitis restricted to the perineal and genital region. It presents as an acute life-threatening disease and demands rapid surgical debridement, resulting in large soft tissue defects. Various reconstructive methods have to be applied to reconstitute functionality and aesthetics. The objective of this work is to identify different reconstructive methods in the literature and compare them to our current concepts for reconstructing defects caused by Fournier gangrene. Analysis of the current literature and our reconstructive methods on Fournier gangrene. The Fournier gangrene is an emergency requiring rapid, calculated antibiotic treatment and radical surgical debridement. After the acute phase of the disease, appropriate reconstructive methods are indicated. The planning of the reconstruction of the defect depends on many factors, especially functional and aesthetic demands. Scrotal reconstruction requires a higher aesthetic and functional reconstructive degree than perineal cutaneous wounds. In general, thorough wound hygiene, proper pre-operative planning, and careful consideration of the patient's demands are essential for successful reconstruction. In the literature, various methods for reconstruction after Fournier gangrene are described. Reconstruction with a flap is required for a good functional result in complex regions as the scrotum and penis, while cutaneous wounds can be managed through skin grafting. Patient compliance and tissue demand are crucial factors in the decision-making process.
Method and apparatus for resonant frequency waveform modulation
Taubman, Matthew S [Richland, WA
2011-06-07
A resonant modulator device and process are described that provide enhanced resonant frequency waveforms to electrical devices including, e.g., laser devices. Faster, larger, and more complex modulation waveforms are obtained than can be obtained by use of conventional current controllers alone.
Full Waveform Inversion Using Oriented Time Migration Method
Zhang, Zhendong
2016-04-12
Full waveform inversion (FWI) for reflection events is limited by its linearized update requirements given by a process equivalent to migration. Unless the background velocity model is reasonably accurate the resulting gradient can have an inaccurate update direction leading the inversion to converge into what we refer to as local minima of the objective function. In this thesis, I first look into the subject of full model wavenumber to analysis the root of local minima and suggest the possible ways to avoid this problem. And then I analysis the possibility of recovering the corresponding wavenumber components through the existing inversion and migration algorithms. Migration can be taken as a generalized inversion method which mainly retrieves the high wavenumber part of the model. Conventional impedance inversion method gives a mapping relationship between the migration image (high wavenumber) and model parameters (full wavenumber) and thus provides a possible cascade inversion strategy to retrieve the full wavenumber components from seismic data. In the proposed approach, consider a mild lateral variation in the model, I find an analytical Frechet derivation corresponding to the new objective function. In the proposed approach, the gradient is given by the oriented time-domain imaging method. This is independent of the background velocity. Specifically, I apply the oriented time-domain imaging (which depends on the reflection slope instead of a background velocity) on the data residual to obtain the geometrical features of the velocity perturbation. Assuming that density is constant, the conventional 1D impedance inversion method is also applicable for 2D or 3D velocity inversion within the process of FWI. This method is not only capable of inverting for velocity, but it is also capable of retrieving anisotropic parameters relying on linearized representations of the reflection response. To eliminate the cross-talk artifacts between different parameters, I
Full Waveform Inversion Using Oriented Time Migration Method
Zhang, Zhendong
2016-01-01
Full waveform inversion (FWI) for reflection events is limited by its linearized update requirements given by a process equivalent to migration. Unless the background velocity model is reasonably accurate the resulting gradient can have
A Denoising Method for LiDAR Full-Waveform Data
Directory of Open Access Journals (Sweden)
Xudong Lai
2015-01-01
Full Text Available Decomposition of LiDAR full-waveform data can not only enhance the density and positioning accuracy of a point cloud, but also provide other useful parameters, such as pulse width, peak amplitude, and peak position which are important information for subsequent processing. Full-waveform data usually contain some random noises. Traditional filtering algorithms always cause distortion in the waveform. λ/μ filtering algorithm is based on Mean Shift method. It can smooth the signal iteratively and will not cause any distortion in the waveform. In this paper, an improved λ/μ filtering algorithm is proposed, and several experiments on both simulated waveform data and real waveform data are implemented to prove the effectiveness of the proposed algorithm.
ASME method for particle reconstruction
International Nuclear Information System (INIS)
Ierusalimov, A.P.
2009-01-01
The method of approximate solution of motion equation (ASME) was used to reconstruct the parameters for charged particles. It provides a good precision for momentum, angular and space parameters of particles in coordinate detectors. The application of the method for CBM, HADES and MPD/NICA setups is discussed
Micro-seismic imaging using a source function independent full waveform inversion method
Wang, Hanchen; Alkhalifah, Tariq Ali
2018-01-01
hand, the conventional micro-seismic source locating methods require, in many cases manual picking of traveltime arrivals, which do not only lead to manual effort and human interaction, but also prone to errors. Using full waveform inversion (FWI
Yong, Peng; Liao, Wenyuan; Huang, Jianping; Li, Zhenchuan
2018-04-01
Full waveform inversion is an effective tool for recovering the properties of the Earth from seismograms. However, it suffers from local minima caused mainly by the limited accuracy of the starting model and the lack of a low-frequency component in the seismic data. Because of the high velocity contrast between salt and sediment, the relation between the waveform and velocity perturbation is strongly nonlinear. Therefore, salt inversion can easily get trapped in the local minima. Since the velocity of salt is nearly constant, we can make the most of this characteristic with total variation regularization to mitigate the local minima. In this paper, we develop an adaptive primal dual hybrid gradient method to implement total variation regularization by projecting the solution onto a total variation norm constrained convex set, through which the total variation norm constraint is satisfied at every model iteration. The smooth background velocities are first inverted and the perturbations are gradually obtained by successively relaxing the total variation norm constraints. Numerical experiment of the projection of the BP model onto the intersection of the total variation norm and box constraints has demonstrated the accuracy and efficiency of our adaptive primal dual hybrid gradient method. A workflow is designed to recover complex salt structures in the BP 2004 model and the 2D SEG/EAGE salt model, starting from a linear gradient model without using low-frequency data below 3 Hz. The salt inversion processes demonstrate that wavefield reconstruction inversion with a total variation norm and box constraints is able to overcome local minima and inverts the complex salt velocity layer by layer.
A semi-automatic method for peak and valley detection in free-breathing respiratory waveforms
International Nuclear Information System (INIS)
Lu Wei; Nystrom, Michelle M.; Parikh, Parag J.; Fooshee, David R.; Hubenschmidt, James P.; Bradley, Jeffrey D.; Low, Daniel A.
2006-01-01
The existing commercial software often inadequately determines respiratory peaks for patients in respiration correlated computed tomography. A semi-automatic method was developed for peak and valley detection in free-breathing respiratory waveforms. First the waveform is separated into breath cycles by identifying intercepts of a moving average curve with the inspiration and expiration branches of the waveform. Peaks and valleys were then defined, respectively, as the maximum and minimum between pairs of alternating inspiration and expiration intercepts. Finally, automatic corrections and manual user interventions were employed. On average for each of the 20 patients, 99% of 307 peaks and valleys were automatically detected in 2.8 s. This method was robust for bellows waveforms with large variations
Phylogenetic reconstruction methods: an overview.
De Bruyn, Alexandre; Martin, Darren P; Lefeuvre, Pierre
2014-01-01
Initially designed to infer evolutionary relationships based on morphological and physiological characters, phylogenetic reconstruction methods have greatly benefited from recent developments in molecular biology and sequencing technologies with a number of powerful methods having been developed specifically to infer phylogenies from macromolecular data. This chapter, while presenting an overview of basic concepts and methods used in phylogenetic reconstruction, is primarily intended as a simplified step-by-step guide to the construction of phylogenetic trees from nucleotide sequences using fairly up-to-date maximum likelihood methods implemented in freely available computer programs. While the analysis of chloroplast sequences from various Vanilla species is used as an illustrative example, the techniques covered here are relevant to the comparative analysis of homologous sequences datasets sampled from any group of organisms.
A Joint Method of Envelope Inversion Combined with Hybrid-domain Full Waveform Inversion
CUI, C.; Hou, W.
2017-12-01
Full waveform inversion (FWI) aims to construct high-precision subsurface models by fully using the information in seismic records, including amplitude, travel time, phase and so on. However, high non-linearity and the absence of low frequency information in seismic data lead to the well-known cycle skipping problem and make inversion easily fall into local minima. In addition, those 3D inversion methods that are based on acoustic approximation ignore the elastic effects in real seismic field, and make inversion harder. As a result, the accuracy of final inversion results highly relies on the quality of initial model. In order to improve stability and quality of inversion results, multi-scale inversion that reconstructs subsurface model from low to high frequency are applied. But, the absence of very low frequencies (time domain and inversion in the frequency domain. To accelerate the inversion, we adopt CPU/GPU heterogeneous computing techniques. There were two levels of parallelism. In the first level, the inversion tasks are decomposed and assigned to each computation node by shot number. In the second level, GPU multithreaded programming is used for the computation tasks in each node, including forward modeling, envelope extraction, DFT (discrete Fourier transform) calculation and gradients calculation. Numerical tests demonstrated that the combined envelope inversion + hybrid-domain FWI could obtain much faithful and accurate result than conventional hybrid-domain FWI. The CPU/GPU heterogeneous parallel computation could improve the performance speed.
A new method for measuring the wall charge waveforms of AC PDP
International Nuclear Information System (INIS)
Liang Zhihu; Liu Zujun; Liu Chunliang
2004-01-01
A new method is developed to measure the wall charge waveforms in coplanar alternating current plasma display panel (AC PDP). In the method, two groups of display electrodes are selected from a coplanar AC PDP and two capacitors are respectively connected with these two groups of display electrodes in series, and a measuring circuit and a reference circuit are thus constructed. With the help of special processing, discharge takes place in the cells included in the measuring circuit under a normal drive voltage but no discharge takes place in the cells included in the reference circuit under a normal drive voltage. The wall charge waveforms are obtained from the voltage difference between the two capacitors. Using the method, the wall charge waveforms are measured during resetting period, addressing period and sustaining period for the 304.8 mm (12-inch) test PDP panel. The result shows that the wall voltage is about 96 V during the sustaining period. (authors)
Waveform control for magnetic testers using a quasi-Newton method
International Nuclear Information System (INIS)
Yamamoto, Ken-ichi; Hanba, Shigeru
2008-01-01
A nonlinear iterative learning algorithm is proposed to make a voltage waveform in the secondary coil sinusoidal in this paper. The algorithm employs a globally convergent Jacobian-free quasi-Newton type solver that has a BFGS-like structure. This method functions well, and it is demonstrated using typical soft magnetic materials
Micro-seismic Imaging Using a Source Independent Waveform Inversion Method
Wang, Hanchen
2016-01-01
waveform inversion (FWI) is widely used. The FWI method updates the velocity model by minimizing the misfit between the observed data and the predicted data. Using FWI to locate and image microseismic events allows for an automatic process (free of picking
Remote Blood Pressure Waveform Sensing Method and Apparatus
National Research Council Canada - National Science Library
Antonelli, Lynn T
2008-01-01
The invention as disclosed is a non-contact method and apparatus for continuously monitoring a physiological event in a human or animal, such as blood pressure, which involves utilizing a laser-based...
Alfieri, Luisa
2015-12-01
Power quality (PQ) disturbances are becoming an important issue in smart grids (SGs) due to the significant economic consequences that they can generate on sensible loads. However, SGs include several distributed energy resources (DERs) that can be interconnected to the grid with static converters, which lead to a reduction of the PQ levels. Among DERs, wind turbines and photovoltaic systems are expected to be used extensively due to the forecasted reduction in investment costs and other economic incentives. These systems can introduce significant time-varying voltage and current waveform distortions that require advanced spectral analysis methods to be used. This paper provides an application of advanced parametric methods for assessing waveform distortions in SGs with dispersed generation. In particular, the Standard International Electrotechnical Committee (IEC) method, some parametric methods (such as Prony and Estimation of Signal Parameters by Rotational Invariance Technique (ESPRIT)), and some hybrid methods are critically compared on the basis of their accuracy and the computational effort required.
Lin, Hongxiang; Azuma, Takashi; Qu, Xiaolei; Takagi, Shu
2017-03-01
In this work, we construct a multi-frequency accelerating strategy for the contrast source inversion (CSI) method using pulse data in the time domain. CSI is a frequency-domain inversion method for ultrasound waveform tomography that does not require the forward solver through the process of reconstruction. Several prior researches show that the CSI method has a good performance of convergence and accuracy in the low-center-frequency situation. In contrast, utilizing the high-center-frequency data leads to a high-resolution reconstruction but slow convergence on large numbers of grid. Our objective is to take full advantage of all low frequency components from pulse data with the high-center-frequency data measured by the diagnostic device. First we process the raw data in the frequency domain. Then multi-frequency accelerating strategy helps restart CSI in the current frequency using the last iteration result obtained from the lower frequency component. The merit of multi- frequency accelerating strategy is that computational burden decreases at the first few iterations. Because the low frequency component of dataset computes on the coarse grid with assuming a fixed number of points per wavelength. In the numerical test, the pulse data were generated by the K-wave simulator and have been processed to meet the computation of the CSI method. We investigate the performance of the multi-frequency and single-frequency reconstructions and conclude that the multi-frequency accelerating strategy significantly enhances the quality of the reconstructed image and simultaneously reduces the average computational time for any iteration step.
Zhang, Zhendong
2017-07-11
Full waveform inversion for reection events is limited by its linearized update re-quirements given by a process equivalent to migration. Unless the background velocity model is reasonably accurate, the resulting gradient can have an inaccurate update direction leading the inversion to converge what we refer to as local minima of the objective function. In our approach, we consider mild lateral variation in the model, and thus, use a gradient given by the oriented time-domain imaging method. Specifically, we apply the oriented time-domain imaging on the data residual to obtain the geometrical features of the velocity perturbation. After updating the model in the time domain, we convert the perturbation from the time domain to depth using the average velocity. Considering density is constant, we can expand the conventional 1D impedance inversion method to 2D or 3D velocity inversion within the process of full waveform inversion. This method is not only capable of inverting for velocity, but it is also capable of retrieving anisotropic parameters relying on linearized representations of the reection response. To eliminate the cross-talk artifacts between different parameters, we utilize what we consider being an optimal parametrization for this step. To do so, we extend the prestack time-domain migration image in incident angle dimension to incorporate angular dependence needed by the multiparameter inversion. For simple models, this approach provides an efficient and stable way to do full waveform inversion or modified seismic inversion and makes the anisotropic inversion more practicable. The proposed method still needs kinematically accurate initial models since it only recovers the high-wavenumber part as conventional full waveform inversion method does. Results on synthetic data of isotropic and anisotropic cases illustrate the benefits and limitations of this method.
Wang, Lu; Xu, Lisheng; Feng, Shuting; Meng, Max Q-H; Wang, Kuanquan
2013-11-01
Analysis of pulse waveform is a low cost, non-invasive method for obtaining vital information related to the conditions of the cardiovascular system. In recent years, different Pulse Decomposition Analysis (PDA) methods have been applied to disclose the pathological mechanisms of the pulse waveform. All these methods decompose single-period pulse waveform into a constant number (such as 3, 4 or 5) of individual waves. Furthermore, those methods do not pay much attention to the estimation error of the key points in the pulse waveform. The estimation of human vascular conditions depends on the key points' positions of pulse wave. In this paper, we propose a Multi-Gaussian (MG) model to fit real pulse waveforms using an adaptive number (4 or 5 in our study) of Gaussian waves. The unknown parameters in the MG model are estimated by the Weighted Least Squares (WLS) method and the optimized weight values corresponding to different sampling points are selected by using the Multi-Criteria Decision Making (MCDM) method. Performance of the MG model and the WLS method has been evaluated by fitting 150 real pulse waveforms of five different types. The resulting Normalized Root Mean Square Error (NRMSE) was less than 2.0% and the estimation accuracy for the key points was satisfactory, demonstrating that our proposed method is effective in compressing, synthesizing and analyzing pulse waveforms. Copyright © 2013 Elsevier Ltd. All rights reserved.
System and Method for Generating a Frequency Modulated Linear Laser Waveform
Pierrottet, Diego F. (Inventor); Petway, Larry B. (Inventor); Amzajerdian, Farzin (Inventor); Barnes, Bruce W. (Inventor); Lockard, George E. (Inventor); Hines, Glenn D. (Inventor)
2017-01-01
A system for generating a frequency modulated linear laser waveform includes a single frequency laser generator to produce a laser output signal. An electro-optical modulator modulates the frequency of the laser output signal to define a linear triangular waveform. An optical circulator passes the linear triangular waveform to a band-pass optical filter to filter out harmonic frequencies created in the waveform during modulation of the laser output signal, to define a pure filtered modulated waveform having a very narrow bandwidth. The optical circulator receives the pure filtered modulated laser waveform and transmits the modulated laser waveform to a target.
Method for position emission mammography image reconstruction
Smith, Mark Frederick
2004-10-12
An image reconstruction method comprising accepting coincidence datat from either a data file or in real time from a pair of detector heads, culling event data that is outside a desired energy range, optionally saving the desired data for each detector position or for each pair of detector pixels on the two detector heads, and then reconstructing the image either by backprojection image reconstruction or by iterative image reconstruction. In the backprojection image reconstruction mode, rays are traced between centers of lines of response (LOR's), counts are then either allocated by nearest pixel interpolation or allocated by an overlap method and then corrected for geometric effects and attenuation and the data file updated. If the iterative image reconstruction option is selected, one implementation is to compute a grid Siddon retracing, and to perform maximum likelihood expectation maiximization (MLEM) computed by either: a) tracing parallel rays between subpixels on opposite detector heads; or b) tracing rays between randomized endpoint locations on opposite detector heads.
Energy Technology Data Exchange (ETDEWEB)
Soleimani, Effat; Mokhtari-Dizaji, Manijhe [Dept. of Medical Physics, Tarbiat Modares University, Tehran (Iran, Islamic Republic of); Fatouraee, Nasser [Dept. of Medical Engineering, Amirkabir University of Technology, Tehran (Iran, Islamic Republic of); Saben, Hazhir [Dept. Radiology, Imaging Center of Imam Khomaini Hospital, Tehran Medical Sciences University, Tehran (Iran, Islamic Republic of)
2017-04-15
The aim of this study was to introduce and implement a noninvasive method to derive the carotid artery pressure waveform directly by processing diagnostic sonograms of the carotid artery. Ultrasound image sequences of 20 healthy male subjects (age, 36±9 years) were recorded during three cardiac cycles. The internal diameter and blood velocity waveforms were extracted from consecutive sonograms over the cardiac cycles by using custom analysis programs written in MATLAB. Finally, the application of a mathematical equation resulted in time changes of the arterial pressure. The resulting pressures were calibrated using the mean and the diastolic pressure of the radial artery. A good correlation was found between the mean carotid blood pressure obtained from the ultrasound image processing and the mean radial blood pressure obtained using a standard digital sphygmomanometer (R=0.91). The mean absolute difference between the carotid calibrated pulse pressures and those measured clinically was -1.333±6.548 mm Hg. The results of this study suggest that consecutive sonograms of the carotid artery can be used for estimating a blood pressure waveform. We believe that our results promote a noninvasive technique for clinical applications that overcomes the reproducibility problems of common carotid artery tonometry with technical and anatomical causes.
International Nuclear Information System (INIS)
Soleimani, Effat; Mokhtari-Dizaji, Manijhe; Fatouraee, Nasser; Saben, Hazhir
2017-01-01
The aim of this study was to introduce and implement a noninvasive method to derive the carotid artery pressure waveform directly by processing diagnostic sonograms of the carotid artery. Ultrasound image sequences of 20 healthy male subjects (age, 36±9 years) were recorded during three cardiac cycles. The internal diameter and blood velocity waveforms were extracted from consecutive sonograms over the cardiac cycles by using custom analysis programs written in MATLAB. Finally, the application of a mathematical equation resulted in time changes of the arterial pressure. The resulting pressures were calibrated using the mean and the diastolic pressure of the radial artery. A good correlation was found between the mean carotid blood pressure obtained from the ultrasound image processing and the mean radial blood pressure obtained using a standard digital sphygmomanometer (R=0.91). The mean absolute difference between the carotid calibrated pulse pressures and those measured clinically was -1.333±6.548 mm Hg. The results of this study suggest that consecutive sonograms of the carotid artery can be used for estimating a blood pressure waveform. We believe that our results promote a noninvasive technique for clinical applications that overcomes the reproducibility problems of common carotid artery tonometry with technical and anatomical causes
Directory of Open Access Journals (Sweden)
Tan Zhou
2017-12-01
Full Text Available A plethora of information contained in full-waveform (FW Light Detection and Ranging (LiDAR data offers prospects for characterizing vegetation structures. This study aims to investigate the capacity of FW LiDAR data alone for tree species identification through the integration of waveform metrics with machine learning methods and Bayesian inference. Specifically, we first conducted automatic tree segmentation based on the waveform-based canopy height model (CHM using three approaches including TreeVaW, watershed algorithms and the combination of TreeVaW and watershed (TW algorithms. Subsequently, the Random forests (RF and Conditional inference forests (CF models were employed to identify important tree-level waveform metrics derived from three distinct sources, such as raw waveforms, composite waveforms, the waveform-based point cloud and the combined variables from these three sources. Further, we discriminated tree (gray pine, blue oak, interior live oak and shrub species through the RF, CF and Bayesian multinomial logistic regression (BMLR using important waveform metrics identified in this study. Results of the tree segmentation demonstrated that the TW algorithms outperformed other algorithms for delineating individual tree crowns. The CF model overcomes waveform metrics selection bias caused by the RF model which favors correlated metrics and enhances the accuracy of subsequent classification. We also found that composite waveforms are more informative than raw waveforms and waveform-based point cloud for characterizing tree species in our study area. Both classical machine learning methods (the RF and CF and the BMLR generated satisfactory average overall accuracy (74% for the RF, 77% for the CF and 81% for the BMLR and the BMLR slightly outperformed the other two methods. However, these three methods suffered from low individual classification accuracy for the blue oak which is prone to being misclassified as the interior live oak due
Micro-seismic imaging using a source function independent full waveform inversion method
Wang, Hanchen; Alkhalifah, Tariq
2018-03-01
At the heart of micro-seismic event measurements is the task to estimate the location of the source micro-seismic events, as well as their ignition times. The accuracy of locating the sources is highly dependent on the velocity model. On the other hand, the conventional micro-seismic source locating methods require, in many cases manual picking of traveltime arrivals, which do not only lead to manual effort and human interaction, but also prone to errors. Using full waveform inversion (FWI) to locate and image micro-seismic events allows for an automatic process (free of picking) that utilizes the full wavefield. However, full waveform inversion of micro-seismic events faces incredible nonlinearity due to the unknown source locations (space) and functions (time). We developed a source function independent full waveform inversion of micro-seismic events to invert for the source image, source function and the velocity model. It is based on convolving reference traces with these observed and modeled to mitigate the effect of an unknown source ignition time. The adjoint-state method is used to derive the gradient for the source image, source function and velocity updates. The extended image for the source wavelet in Z axis is extracted to check the accuracy of the inverted source image and velocity model. Also, angle gathers is calculated to assess the quality of the long wavelength component of the velocity model. By inverting for the source image, source wavelet and the velocity model simultaneously, the proposed method produces good estimates of the source location, ignition time and the background velocity for synthetic examples used here, like those corresponding to the Marmousi model and the SEG/EAGE overthrust model.
Micro-seismic imaging using a source function independent full waveform inversion method
Wang, Hanchen
2018-03-26
At the heart of micro-seismic event measurements is the task to estimate the location of the source micro-seismic events, as well as their ignition times. The accuracy of locating the sources is highly dependent on the velocity model. On the other hand, the conventional micro-seismic source locating methods require, in many cases manual picking of traveltime arrivals, which do not only lead to manual effort and human interaction, but also prone to errors. Using full waveform inversion (FWI) to locate and image micro-seismic events allows for an automatic process (free of picking) that utilizes the full wavefield. However, full waveform inversion of micro-seismic events faces incredible nonlinearity due to the unknown source locations (space) and functions (time). We developed a source function independent full waveform inversion of micro-seismic events to invert for the source image, source function and the velocity model. It is based on convolving reference traces with these observed and modeled to mitigate the effect of an unknown source ignition time. The adjoint-state method is used to derive the gradient for the source image, source function and velocity updates. The extended image for the source wavelet in Z axis is extracted to check the accuracy of the inverted source image and velocity model. Also, angle gathers is calculated to assess the quality of the long wavelength component of the velocity model. By inverting for the source image, source wavelet and the velocity model simultaneously, the proposed method produces good estimates of the source location, ignition time and the background velocity for synthetic examples used here, like those corresponding to the Marmousi model and the SEG/EAGE overthrust model.
Waveform control method for mitigating harmonics of inverter systems with nonlinear load
DEFF Research Database (Denmark)
Wang, Haoran; Zhu, Guorong; Fu, Xiaobin
2015-01-01
instability in the DC power system, lower its efficiency, and shorten the lifetime of the DC source. This paper presents a general waveform control method that can mitigate the injection of the low-frequency ripple current by the single-phase DC/AC inverter into the DC source. It also discusses the inhibiting......DC power systems connecting to single-phase DC/AC inverters with nonlinear loads will have their DC sources being injected with AC ripple currents containing a low-frequency component at twice the output voltage frequency of the inverter and also other current harmonics. Such a current may create...
A New Time Calibration Method for Switched-capacitor-array-based Waveform Samplers.
Kim, H; Chen, C-T; Eclov, N; Ronzhin, A; Murat, P; Ramberg, E; Los, S; Moses, W; Choong, W-S; Kao, C-M
2014-12-11
We have developed a new time calibration method for the DRS4 waveform sampler that enables us to precisely measure the non-uniform sampling interval inherent in the switched-capacitor cells of the DRS4. The method uses the proportionality between the differential amplitude and sampling interval of adjacent switched-capacitor cells responding to a sawtooth-shape pulse. In the experiment, a sawtooth-shape pulse with a 40 ns period generated by a Tektronix AWG7102 is fed to a DRS4 evaluation board for calibrating the sampling intervals of all 1024 cells individually. The electronic time resolution of the DRS4 evaluation board with the new time calibration is measured to be ~2.4 ps RMS by using two simultaneous Gaussian pulses with 2.35 ns full-width at half-maximum and applying a Gaussian fit. The time resolution dependencies on the time difference with the new time calibration are measured and compared to results obtained by another method. The new method could be applicable for other switched-capacitor-array technology-based waveform samplers for precise time calibration.
Directory of Open Access Journals (Sweden)
Andre Lamert
2018-03-01
Full Text Available We present and compare two flexible and effective methodologies to predict disturbance zones ahead of underground tunnels by using elastic full-waveform inversion. One methodology uses a linearized, iterative approach based on misfit gradients computed with the adjoint method while the other uses iterative, gradient-free unscented Kalman filtering in conjunction with a level-set representation. Whereas the former does not involve a priori assumptions on the distribution of elastic properties ahead of the tunnel, the latter introduces a massive reduction in the number of explicit model parameters to be inverted for by focusing on the geometric form of potential disturbances and their average elastic properties. Both imaging methodologies are validated through successful reconstructions of simple disturbances. As an application, we consider an elastic multiple disturbance scenario. By using identical synthetic time-domain seismograms as test data, we obtain satisfactory, albeit different, reconstruction results from the two inversion methodologies. The computational costs of both approaches are of the same order of magnitude, with the gradient-based approach showing a slight advantage. The model parameter space reduction approach compensates for this by additionally providing a posteriori estimates of model parameter uncertainty. Keywords: Tunnel seismics, Full waveform inversion, Seismic waves, Level-set method, Adjoint method, Kalman filter
Full-waveform detection of non-impulsive seismic events based on time-reversal methods
Solano, Ericka Alinne; Hjörleifsdóttir, Vala; Liu, Qinya
2017-12-01
We present a full-waveform detection method for non-impulsive seismic events, based on time-reversal principles. We use the strain Green's tensor as a matched filter, correlating it with continuous observed seismograms, to detect non-impulsive seismic events. We show that this is mathematically equivalent to an adjoint method for detecting earthquakes. We define the detection function, a scalar valued function, which depends on the stacked correlations for a group of stations. Event detections are given by the times at which the amplitude of the detection function exceeds a given value relative to the noise level. The method can make use of the whole seismic waveform or any combination of time-windows with different filters. It is expected to have an advantage compared to traditional detection methods for events that do not produce energetic and impulsive P waves, for example glacial events, landslides, volcanic events and transform-fault earthquakes for events which velocity structure along the path is relatively well known. Furthermore, the method has advantages over empirical Greens functions template matching methods, as it does not depend on records from previously detected events, and therefore is not limited to events occurring in similar regions and with similar focal mechanisms as these events. The method is not specific to any particular way of calculating the synthetic seismograms, and therefore complicated structural models can be used. This is particularly beneficial for intermediate size events that are registered on regional networks, for which the effect of lateral structure on the waveforms can be significant. To demonstrate the feasibility of the method, we apply it to two different areas located along the mid-oceanic ridge system west of Mexico where non-impulsive events have been reported. The first study area is between Clipperton and Siqueiros transform faults (9°N), during the time of two earthquake swarms, occurring in March 2012 and May
Energy Technology Data Exchange (ETDEWEB)
Watanabe, T. [Nagoya Univ., Nagoya (Japan). Research Center for Seismology and Volcanology; Shimizu, S. [Japan National Oil Co., Chiba (Japan); Asakawa, E. [JGI Inc., Tokyo (Japan); Kamei, R.; Matsuoka, T. [Kyoto Univ., Kyoto (Japan). Dept. of Civil and Earth Resources Engineering
2005-07-01
The JAPEX/JNOC/GSC et al. Mallik 3L-38 and 4L-38 test wells were subjected to repeated cross-well seismic surveys before and during the gas production test from the gas-hydrate-bearing layer at a depth of about 905 to 920 m. High-resolution velocity images in the gas-hydrate-bearing layer were obtained using the frequency-domain nonlinear waveform inversion method. An acoustic waveform inversion method was used along with the frequency-domain approach in order to detect the change in acoustic properties of the gas-hydrate-bearing layer resulting from gas production. The layered structures with small lateral heterogeneities were reconstructed by tomography analysis of preprocessed waveform data using a smaller number of source-receiver pairs. Preliminary results reveal high velocity and very high attenuation in the gas-hydrate-bearing layers. No obvious velocity decreases were noted. Information for further detailed processing was also gathered.
Microseismic imaging using a source-independent full-waveform inversion method
Wang, Hanchen
2016-09-06
Using full waveform inversion (FWI) to locate microseismic and image microseismic events allows for an automatic process (free of picking) that utilizes the full wavefield. However, waveform inversion of microseismic events faces incredible nonlinearity due to the unknown source location (space) and function (time). We develop a source independent FWI of microseismic events to invert for the source image, source function and the velocity model. It is based on convolving reference traces with the observed and modeled data to mitigate the effect of an unknown source ignition time. The adjoint-state method is used to derive the gradient for the source image, source function and velocity updates. The extended image for source wavelet in z axis is extracted to check the accuracy of the inverted source image and velocity model. Also the angle gather is calculated to see if the velocity model is correct. By inverting for all the source image, source wavelet and the velocity model, the proposed method produces good estimates of the source location, ignition time and the background velocity for part of the SEG overthrust model.
Microseismic imaging using a source-independent full-waveform inversion method
Wang, Hanchen
2016-01-01
Using full waveform inversion (FWI) to locate microseismic and image microseismic events allows for an automatic process (free of picking) that utilizes the full wavefield. However, waveform inversion of microseismic events faces incredible nonlinearity due to the unknown source location (space) and function (time). We develop a source independent FWI of microseismic events to invert for the source image, source function and the velocity model. It is based on convolving reference traces with the observed and modeled data to mitigate the effect of an unknown source ignition time. The adjoint-state method is used to derive the gradient for the source image, source function and velocity updates. The extended image for source wavelet in z axis is extracted to check the accuracy of the inverted source image and velocity model. Also the angle gather is calculated to see if the velocity model is correct. By inverting for all the source image, source wavelet and the velocity model, the proposed method produces good estimates of the source location, ignition time and the background velocity for part of the SEG overthrust model.
Karaoǧlu, Haydar; Romanowicz, Barbara
2018-06-01
We present a global upper-mantle shear wave attenuation model that is built through a hybrid full-waveform inversion algorithm applied to long-period waveforms, using the spectral element method for wavefield computations. Our inversion strategy is based on an iterative approach that involves the inversion for successive updates in the attenuation parameter (δ Q^{-1}_μ) and elastic parameters (isotropic velocity VS, and radial anisotropy parameter ξ) through a Gauss-Newton-type optimization scheme that employs envelope- and waveform-type misfit functionals for the two steps, respectively. We also include source and receiver terms in the inversion steps for attenuation structure. We conducted a total of eight iterations (six for attenuation and two for elastic structure), and one inversion for updates to source parameters. The starting model included the elastic part of the relatively high-resolution 3-D whole mantle seismic velocity model, SEMUCB-WM1, which served to account for elastic focusing effects. The data set is a subset of the three-component surface waveform data set, filtered between 400 and 60 s, that contributed to the construction of the whole-mantle tomographic model SEMUCB-WM1. We applied strict selection criteria to this data set for the attenuation iteration steps, and investigated the effect of attenuation crustal structure on the retrieved mantle attenuation structure. While a constant 1-D Qμ model with a constant value of 165 throughout the upper mantle was used as starting model for attenuation inversion, we were able to recover, in depth extent and strength, the high-attenuation zone present in the depth range 80-200 km. The final 3-D model, SEMUCB-UMQ, shows strong correlation with tectonic features down to 200-250 km depth, with low attenuation beneath the cratons, stable parts of continents and regions of old oceanic crust, and high attenuation along mid-ocean ridges and backarcs. Below 250 km, we observe strong attenuation in the
Full Seismic Waveform Tomography of the Japan region using Adjoint Methods
Steptoe, Hamish; Fichtner, Andreas; Rickers, Florian; Trampert, Jeannot
2013-04-01
We present a full-waveform tomographic model of the Japan region based on spectral-element wave propagation, adjoint techniques and seismic data from dense station networks. This model is intended to further our understanding of both the complex regional tectonics and the finite rupture processes of large earthquakes. The shallow Earth structure of the Japan region has been the subject of considerable tomographic investigation. The islands of Japan exist in an area of significant plate complexity: subduction related to the Pacific and Philippine Sea plates is responsible for the majority of seismicity and volcanism of Japan, whilst smaller micro-plates in the region, including the Okhotsk, and Okinawa and Amur, part of the larger North America and Eurasia plates respectively, contribute significant local intricacy. In response to the need to monitor and understand the motion of these plates and their associated faults, numerous seismograph networks have been established, including the 768 station high-sensitivity Hi-net network, 84 station broadband F-net and the strong-motion seismograph networks K-net and KiK-net in Japan. We also include the 55 station BATS network of Taiwan. We use this exceptional coverage to construct a high-resolution model of the Japan region from the full-waveform inversion of over 15,000 individual component seismograms from 53 events that occurred between 1997 and 2012. We model these data using spectral-element simulations of seismic wave propagation at a regional scale over an area from 120°-150°E and 20°-50°N to a depth of around 500 km. We quantify differences between observed and synthetic waveforms using time-frequency misfits allowing us to separate both phase and amplitude measurements whilst exploiting the complete waveform at periods of 15-60 seconds. Fréchet kernels for these misfits are calculated via the adjoint method and subsequently used in an iterative non-linear conjugate-gradient optimization. Finally, we employ
Apparatus and method for reconstructing data
International Nuclear Information System (INIS)
1981-01-01
A method and apparatus is described for constructing a two-dimensional picture of an object slice from linear projections of radiation not absorbed or scattered by the object, using convolution methods of data reconstruction, useful in the fields of medical radiology, microscopy, and non-destructive testing. (U.K.)
An intelligent detection method for high-field asymmetric waveform ion mobility spectrometry.
Li, Yue; Yu, Jianwen; Ruan, Zhiming; Chen, Chilai; Chen, Ran; Wang, Han; Liu, Youjiang; Wang, Xiaozhi; Li, Shan
2018-04-01
In conventional high-field asymmetric waveform ion mobility spectrometry signal acquisition, multi-cycle detection is time consuming and limits somewhat the technique's scope for rapid field detection. In this study, a novel intelligent detection approach has been developed in which a threshold was set on the relative error of α parameters, which can eliminate unnecessary time spent on detection. In this method, two full-spectrum scans were made in advance to obtain the estimated compensation voltage at different dispersion voltages, resulting in a narrowing down of the whole scan area to just the peak area(s) of interest. This intelligent detection method can reduce the detection time to 5-10% of that of the original full-spectrum scan in a single cycle.
Accurate Methods for Signal Processing of Distorted Waveforms in Power Systems
Directory of Open Access Journals (Sweden)
Langella R
2007-01-01
Full Text Available A primary problem in waveform distortion assessment in power systems is to examine ways to reduce the effects of spectral leakage. In the framework of DFT approaches, line frequency synchronization techniques or algorithms to compensate for desynchronization are necessary; alternative approaches such as those based on the Prony and ESPRIT methods are not sensitive to desynchronization, but they often require significant computational burden. In this paper, the signal processing aspects of the problem are considered; different proposals by the same authors regarding DFT-, Prony-, and ESPRIT-based advanced methods are reviewed and compared in terms of their accuracy and computational efforts. The results of several numerical experiments are reported and analysed; some of them are in accordance with IEC Standards, while others use more open scenarios.
A New Waveform Signal Processing Method Based on Adaptive Clustering-Genetic Algorithms
International Nuclear Information System (INIS)
Noha Shaaban; Fukuzo Masuda; Hidetsugu Morota
2006-01-01
We present a fast digital signal processing method for numerical analysis of individual pulses from CdZnTe compound semiconductor detectors. Using Maxi-Mini Distance Algorithm and Genetic Algorithms based discrimination technique. A parametric approach has been used for classifying the discriminated waveforms into a set of clusters each has a similar signal shape with a corresponding pulse height spectrum. A corrected total pulse height spectrum was obtained by applying a normalization factor for the full energy peak for each cluster with a highly improvements in the energy spectrum characteristics. This method applied successfully for both simulated and real measured data, it can be applied to any detector suffers from signal shape variation. (authors)
Liu, Xiwu; Guo, Zhiqi; Han, Xu
2018-06-01
A set of parallel vertical fractures embedded in a vertically transverse isotropy (VTI) background leads to orthorhombic anisotropy and corresponding azimuthal seismic responses. We conducted seismic modeling of full waveform amplitude variations versus azimuth (AVAZ) responses of anisotropic shale by integrating a rock physics model and a reflectivity method. The results indicate that the azimuthal variation of P-wave velocity tends to be more complicated for orthorhombic medium compared to the horizontally transverse isotropy (HTI) case, especially at high polar angles. Correspondingly, for the HTI layer in the theoretical model, the short axis of the azimuthal PP amplitudes at the top interface is parallel to the fracture strike, while the long axis at the bottom reflection directs the fracture strike. In contrast, the orthorhombic layer in the theoretical model shows distinct AVAZ responses in terms of PP reflections. Nevertheless, the azimuthal signatures of the R- and T-components of the mode-converted PS reflections show similar AVAZ features for the HTI and orthorhombic layers, which may imply that the PS responses are dominated by fractures. For the application to real data, a seismic-well tie based on upscaled data and a reflectivity method illustrate good agreement between the reference layers and the corresponding reflected events. Finally, the full waveform seismic AVAZ responses of the Longmaxi shale formation are computed for the cases of HTI and orthorhombic anisotropy for comparison. For the two cases, the azimuthal features represent differences mainly in amplitudes, while slightly in the phases of the reflected waveforms. Azimuth variations in the PP reflections from the reference layers show distinct behaviors for the HTI and orthorhombic cases, while the mode-converted PS reflections in terms of the R- and T-components show little differences in azimuthal features. It may suggest that the behaviors of the PS waves are dominated by vertically
Chai, Xintao; Tang, Genyang; Peng, Ronghua; Liu, Shaoyong
2018-03-01
Full-waveform inversion (FWI) reconstructs the subsurface properties from acquired seismic data via minimization of the misfit between observed and simulated data. However, FWI suffers from considerable computational costs resulting from the numerical solution of the wave equation for each source at each iteration. To reduce the computational burden, constructing supershots by combining several sources (aka source encoding) allows mitigation of the number of simulations at each iteration, but it gives rise to crosstalk artifacts because of interference between the individual sources of the supershot. A modified Gauss-Newton FWI (MGNFWI) approach showed that as long as the difference between the initial and true models permits a sparse representation, the ℓ _1-norm constrained model updates suppress subsampling-related artifacts. However, the spectral-projected gradient ℓ _1 (SPGℓ _1) algorithm employed by MGNFWI is rather complicated that makes its implementation difficult. To facilitate realistic applications, we adapt a linearized Bregman (LB) method to sparsity-promoting FWI (SPFWI) because of the efficiency and simplicity of LB in the framework of ℓ _1-norm constrained optimization problem and compressive sensing. Numerical experiments performed with the BP Salt model, the Marmousi model and the BG Compass model verify the following points. The FWI result with LB solving ℓ _1-norm sparsity-promoting problem for the model update outperforms that generated by solving ℓ _2-norm problem in terms of crosstalk elimination and high-fidelity results. The simpler LB method performs comparably and even superiorly to the complicated SPGℓ _1 method in terms of computational efficiency and model quality, making the LB method a viable alternative for realistic implementations of SPFWI.
Maximum entropy method in momentum density reconstruction
International Nuclear Information System (INIS)
Dobrzynski, L.; Holas, A.
1997-01-01
The Maximum Entropy Method (MEM) is applied to the reconstruction of the 3-dimensional electron momentum density distributions observed through the set of Compton profiles measured along various crystallographic directions. It is shown that the reconstruction of electron momentum density may be reliably carried out with the aid of simple iterative algorithm suggested originally by Collins. A number of distributions has been simulated in order to check the performance of MEM. It is shown that MEM can be recommended as a model-free approach. (author). 13 refs, 1 fig
Image reconstruction methods in positron tomography
International Nuclear Information System (INIS)
Townsend, D.W.; Defrise, M.
1993-01-01
In the two decades since the introduction of the X-ray scanner into radiology, medical imaging techniques have become widely established as essential tools in the diagnosis of disease. As a consequence of recent technological and mathematical advances, the non-invasive, three-dimensional imaging of internal organs such as the brain and the heart is now possible, not only for anatomical investigations using X-ray but also for studies which explore the functional status of the body using positron-emitting radioisotopes. This report reviews the historical and physical basis of medical imaging techniques using positron-emitting radioisotopes. Mathematical methods which enable three-dimensional distributions of radioisotopes to be reconstructed from projection data (sinograms) acquired by detectors suitably positioned around the patient are discussed. The extension of conventional two-dimensional tomographic reconstruction algorithms to fully three-dimensional reconstruction is described in detail. (orig.)
New method for initial density reconstruction
Shi, Yanlong; Cautun, Marius; Li, Baojiu
2018-01-01
A theoretically interesting and practically important question in cosmology is the reconstruction of the initial density distribution provided a late-time density field. This is a long-standing question with a revived interest recently, especially in the context of optimally extracting the baryonic acoustic oscillation (BAO) signals from observed galaxy distributions. We present a new efficient method to carry out this reconstruction, which is based on numerical solutions to the nonlinear partial differential equation that governs the mapping between the initial Lagrangian and final Eulerian coordinates of particles in evolved density fields. This is motivated by numerical simulations of the quartic Galileon gravity model, which has similar equations that can be solved effectively by multigrid Gauss-Seidel relaxation. The method is based on mass conservation, and does not assume any specific cosmological model. Our test shows that it has a performance comparable to that of state-of-the-art algorithms that were very recently put forward in the literature, with the reconstructed density field over ˜80 % (50%) correlated with the initial condition at k ≲0.6 h /Mpc (1.0 h /Mpc ). With an example, we demonstrate that this method can significantly improve the accuracy of BAO reconstruction.
On an image reconstruction method for ECT
Sasamoto, Akira; Suzuki, Takayuki; Nishimura, Yoshihiro
2007-04-01
An image by Eddy Current Testing(ECT) is a blurred image to original flaw shape. In order to reconstruct fine flaw image, a new image reconstruction method has been proposed. This method is based on an assumption that a very simple relationship between measured data and source were described by a convolution of response function and flaw shape. This assumption leads to a simple inverse analysis method with deconvolution.In this method, Point Spread Function (PSF) and Line Spread Function(LSF) play a key role in deconvolution processing. This study proposes a simple data processing to determine PSF and LSF from ECT data of machined hole and line flaw. In order to verify its validity, ECT data for SUS316 plate(200x200x10mm) with artificial machined hole and notch flaw had been acquired by differential coil type sensors(produced by ZETEC Inc). Those data were analyzed by the proposed method. The proposed method restored sharp discrete multiple hole image from interfered data by multiple holes. Also the estimated width of line flaw has been much improved compared with original experimental data. Although proposed inverse analysis strategy is simple and easy to implement, its validity to holes and line flaw have been shown by many results that much finer image than original image have been reconstructed.
Magnetic flux reconstruction methods for shaped tokamaks
International Nuclear Information System (INIS)
Tsui, Chi-Wa.
1993-12-01
The use of a variational method permits the Grad-Shafranov (GS) equation to be solved by reducing the problem of solving the 2D non-linear partial differential equation to the problem of minimizing a function of several variables. This high speed algorithm approximately solves the GS equation given a parameterization of the plasma boundary and the current profile (p' and FF' functions). The author treats the current profile parameters as unknowns. The goal is to reconstruct the internal magnetic flux surfaces of a tokamak plasma and the toroidal current density profile from the external magnetic measurements. This is a classic problem of inverse equilibrium determination. The current profile parameters can be evaluated by several different matching procedures. Matching of magnetic flux and field at the probe locations using the Biot-Savart law and magnetic Green's function provides a robust method of magnetic reconstruction. The matching of poloidal magnetic field on the plasma surface provides a unique method of identifying the plasma current profile. However, the power of this method is greatly compromised by the experimental errors of the magnetic signals. The Casing Principle provides a very fast way to evaluate the plasma contribution to the magnetic signals. It has the potential of being a fast matching method. The performance of this method is hindered by the accuracy of the poloidal magnetic field computed from the equilibrium solver. A flux reconstruction package has been implemented which integrates a vacuum field solver using a filament model for the plasma, a multi-layer perception neural network as an interface, and the volume integration of plasma current density using Green's functions as a matching method for the current profile parameters. The flux reconstruction package is applied to compare with the ASEQ and EFIT data. The results are promising
Micro-seismic Imaging Using a Source Independent Waveform Inversion Method
Wang, Hanchen
2016-04-18
Micro-seismology is attracting more and more attention in the exploration seismology community. The main goal in micro-seismic imaging is to find the source location and the ignition time in order to track the fracture expansion, which will help engineers monitor the reservoirs. Conventional imaging methods work fine in this field but there are many limitations such as manual picking, incorrect migration velocity and low signal to noise ratio (S/N). In traditional surface survey imaging, full waveform inversion (FWI) is widely used. The FWI method updates the velocity model by minimizing the misfit between the observed data and the predicted data. Using FWI to locate and image microseismic events allows for an automatic process (free of picking) that utilizes the full wavefield. Use the FWI technique, and overcomes the difficulties of manual pickings and incorrect velocity model for migration. However, the technique of waveform inversion of micro-seismic events faces its own problems. There is significant nonlinearity due to the unknown source location (space) and function (time). We have developed a source independent FWI of micro-seismic events to simultaneously invert for the source image, source function and velocity model. It is based on convolving reference traces with the observed and modeled data to mitigate the effect of an unknown source ignition time. The adjoint-state method is used to derive the gradient for the source image, source function and velocity updates. To examine the accuracy of the inverted source image and velocity model the extended image for source wavelet in z-axis is extracted. Also the angle gather is calculated to check the applicability of the migration velocity. By inverting for the source image, source wavelet and the velocity model simultaneously, the proposed method produces good estimates of the source location, ignition time and the background velocity in the synthetic experiments with both parts of the Marmousi and the SEG
Apparatus and method for reconstructing data
International Nuclear Information System (INIS)
Pavkovich, J.M.
1977-01-01
The apparatus and method for reconstructing data are described. A fan beam of radiation is passed through an object, the beam lying in the same quasi-plane as the object slice to be examined. Radiation not absorbed in the object slice is recorded on oppositely situated detectors aligned with the source of radiation. Relative rotation is provided between the source-detector configuration and the object. Reconstruction means are coupled to the detector means, and may comprise a general purpose computer, a special purpose computer, and control logic for interfacing between said computers and controlling the respective functioning thereof for performing a convolution and back projection based upon non-absorbed radiation detected by said detector means, whereby the reconstruction means converts values of the non-absorbed radiation into values of absorbed radiation at each of an arbitrarily large number of points selected within the object slice. Display means are coupled to the reconstruction means for providing a visual or other display or representation of the quantities of radiation absorbed at the points considered in the object. (Auth.)
Zhang, Zhendong; Alkhalifah, Tariq Ali
2017-01-01
Full waveform inversion for reection events is limited by its linearized update re-quirements given by a process equivalent to migration. Unless the background velocity model is reasonably accurate, the resulting gradient can have an inaccurate
Perturbation methods for power and reactivity reconstruction
International Nuclear Information System (INIS)
Palmiotti, G.; Salvatores, M.; Estiot, J.C.; Broccoli, U.; Bruna, G.; Gomit, J.M.
1987-01-01
This paper deals with recent developments and applications in perturbation methods. Two types of methods are used. The first one is an explicit method, which allows the explicit reconstruction of a perturbed flux using a linear combination of a library of functions. In our application, these functions are the harmonics (i.e. the high order eigenfunctions of the system). The second type is based on the Generalized Perturbation Theory GPT and needs the calculation of an importance function for each integral parameter of interest. Recent developments of a particularly useful high order formulation allows to obtain satisfactory results also for very large perturbations
International Nuclear Information System (INIS)
Berthe, P.M.
2013-01-01
In the context of nuclear waste repositories, we consider the numerical discretization of the non stationary convection diffusion equation. Discontinuous physical parameters and heterogeneous space and time scales lead us to use different space and time discretizations in different parts of the domain. In this work, we choose the discrete duality finite volume (DDFV) scheme and the discontinuous Galerkin scheme in time, coupled by an optimized Schwarz waveform relaxation (OSWR) domain decomposition method, because this allows the use of non-conforming space-time meshes. The main difficulty lies in finding an upwind discretization of the convective flux which remains local to a sub-domain and such that the multi domain scheme is equivalent to the mono domain one. These difficulties are first dealt with in the one-dimensional context, where different discretizations are studied. The chosen scheme introduces a hybrid unknown on the cell interfaces. The idea of up winding with respect to this hybrid unknown is extended to the DDFV scheme in the two-dimensional setting. The well-posedness of the scheme and of an equivalent multi domain scheme is shown. The latter is solved by an OSWR algorithm, the convergence of which is proved. The optimized parameters in the Robin transmission conditions are obtained by studying the continuous or discrete convergence rates. Several test-cases, one of which inspired by nuclear waste repositories, illustrate these results. (author) [fr
Revised method for forest canopy height estimation from Geoscience Laser Altimeter System waveforms.
Michael A. Lefskya; Michael Keller; Yong Panga; Plinio B. de Camargod; Maria O. Hunter
2007-01-01
The vertical extent of waveforms collected by the Geoscience Laser Altimeter System (onboard ICESat - the Ice, Cloud, and land Elevation Satellite) increases as a function of terrain slope and footprint size (the area on the ground that is illuminated by the laser). Over sloped terrain, returns from both canopy and ground surfaces can occur at the same elevation. As a...
A New Wave Equation Based Source Location Method with Full-waveform Inversion
Wu, Zedong; Alkhalifah, Tariq Ali
2017-01-01
with illumination artifacts. We develop a waveform inversion approach with an additional penalty term in the objective function to reward the focusing of the source image. This penalty term is relaxed early to allow for data fitting, and avoid cycle skipping, using
Research of ART method in CT image reconstruction
International Nuclear Information System (INIS)
Li Zhipeng; Cong Peng; Wu Haifeng
2005-01-01
This paper studied Algebraic Reconstruction Technique (ART) in CT image reconstruction. Discussed the ray number influence on image quality. And the adopting of smooth method got high quality CT image. (authors)
Method of reconstructing a moving pulse
Energy Technology Data Exchange (ETDEWEB)
Howard, S J; Horton, R D; Hwang, D Q; Evans, R W; Brockington, S J; Johnson, J [UC Davis Department of Applied Science, Livermore, CA, 94551 (United States)
2007-11-15
We present a method of analyzing a set of N time signals f{sub i}(t) that consist of local measurements of the same physical observable taken at N sequential locations Z{sub i} along the length of an experimental device. The result is an algorithm for reconstructing an approximation F(z,t) of the field f(z,t) in the inaccessible regions between the points of measurement. We also explore the conditions needed for this approximation to hold, and test the algorithm under a variety of conditions. We apply this method to analyze the magnetic field measurements taken on the Compact Toroid Injection eXperiment (CTIX) plasma accelerator; providing a direct means of visualizing experimental data, quantifying global properties, and benchmarking simulation.
Class of reconstructed discontinuous Galerkin methods in computational fluid dynamics
International Nuclear Information System (INIS)
Luo, Hong; Xia, Yidong; Nourgaliev, Robert
2011-01-01
A class of reconstructed discontinuous Galerkin (DG) methods is presented to solve compressible flow problems on arbitrary grids. The idea is to combine the efficiency of the reconstruction methods in finite volume methods and the accuracy of the DG methods to obtain a better numerical algorithm in computational fluid dynamics. The beauty of the resulting reconstructed discontinuous Galerkin (RDG) methods is that they provide a unified formulation for both finite volume and DG methods, and contain both classical finite volume and standard DG methods as two special cases of the RDG methods, and thus allow for a direct efficiency comparison. Both Green-Gauss and least-squares reconstruction methods and a least-squares recovery method are presented to obtain a quadratic polynomial representation of the underlying linear discontinuous Galerkin solution on each cell via a so-called in-cell reconstruction process. The devised in-cell reconstruction is aimed to augment the accuracy of the discontinuous Galerkin method by increasing the order of the underlying polynomial solution. These three reconstructed discontinuous Galerkin methods are used to compute a variety of compressible flow problems on arbitrary meshes to assess their accuracy. The numerical experiments demonstrate that all three reconstructed discontinuous Galerkin methods can significantly improve the accuracy of the underlying second-order DG method, although the least-squares reconstructed DG method provides the best performance in terms of both accuracy, efficiency, and robustness. (author)
Image-reconstruction methods in positron tomography
Townsend, David W; CERN. Geneva
1993-01-01
Physics and mathematics for medical imaging In the two decades since the introduction of the X-ray scanner into radiology, medical imaging techniques have become widely established as essential tools in the diagnosis of disease. As a consequence of recent technological and mathematical advances, the non-invasive, three-dimensional imaging of internal organs such as the brain and the heart is now possible, not only for anatomical investigations using X-rays but also for studies which explore the functional status of the body using positron-emitting radioisotopes and nuclear magnetic resonance. Mathematical methods which enable three-dimentional distributions to be reconstructed from projection data acquired by radiation detectors suitably positioned around the patient will be described in detail. The lectures will trace the development of medical imaging from simpleradiographs to the present-day non-invasive measurement of in vivo boichemistry. Powerful techniques to correlate anatomy and function that are cur...
A Concealed Car Extraction Method Based on Full-Waveform LiDAR Data
Directory of Open Access Journals (Sweden)
Chuanrong Li
2016-01-01
Full Text Available Concealed cars extraction from point clouds data acquired by airborne laser scanning has gained its popularity in recent years. However, due to the occlusion effect, the number of laser points for concealed cars under trees is not enough. Thus, the concealed cars extraction is difficult and unreliable. In this paper, 3D point cloud segmentation and classification approach based on full-waveform LiDAR was presented. This approach first employed the autocorrelation G coefficient and the echo ratio to determine concealed cars areas. Then the points in the concealed cars areas were segmented with regard to elevation distribution of concealed cars. Based on the previous steps, a strategy integrating backscattered waveform features and the view histogram descriptor was developed to train sample data of concealed cars and generate the feature pattern. Finally concealed cars were classified by pattern matching. The approach was validated by full-waveform LiDAR data and experimental results demonstrated that the presented approach can extract concealed cars with accuracy more than 78.6% in the experiment areas.
Capdeville, Yann; Métivier, Ludovic
2018-05-01
Seismic imaging is an efficient tool to investigate the Earth interior. Many of the different imaging techniques currently used, including the so-called full waveform inversion (FWI), are based on limited frequency band data. Such data are not sensitive to the true earth model, but to a smooth version of it. This smooth version can be related to the true model by the homogenization technique. Homogenization for wave propagation in deterministic media with no scale separation, such as geological media, has been recently developed. With such an asymptotic theory, it is possible to compute an effective medium valid for a given frequency band such that effective waveforms and true waveforms are the same up to a controlled error. In this work we make the link between limited frequency band inversion, mainly FWI, and homogenization. We establish the relation between a true model and an FWI result model. This relation is important for a proper interpretation of FWI images. We numerically illustrate, in the 2-D case, that an FWI result is at best the homogenized version of the true model. Moreover, it appears that the homogenized FWI model is quite independent of the FWI parametrization, as long as it has enough degrees of freedom. In particular, inverting for the full elastic tensor is, in each of our tests, always a good choice. We show how the homogenization can help to understand FWI behaviour and help to improve its robustness and convergence by efficiently constraining the solution space of the inverse problem.
Overview of intelligent data retrieval methods for waveforms and images in massive fusion databases
Energy Technology Data Exchange (ETDEWEB)
Vega, J. [JET-EFDA, Culham Science Center, OX14 3DB Abingdon (United Kingdom); Asociacion EURATOM/CIEMAT para Fusion, Avda. Complutense 22, 28040 Madrid (Spain)], E-mail: jesus.vega@ciemat.es; Murari, A. [JET-EFDA, Culham Science Center, OX14 3DB Abingdon (United Kingdom); Consorzio RFX-Associazione EURATOM ENEA per la Fusione, I-35127 Padua (Italy); Pereira, A.; Portas, A.; Ratta, G.A.; Castro, R. [JET-EFDA, Culham Science Center, OX14 3DB Abingdon (United Kingdom); Asociacion EURATOM/CIEMAT para Fusion, Avda. Complutense 22, 28040 Madrid (Spain)
2009-06-15
JET database contains more than 42 Tbytes of data (waveforms and images) and it doubles its size about every 2 years. ITER database is expected to be orders of magnitude above this quantity. Therefore, data access in such huge databases can no longer be efficiently based on shot number or temporal interval. Taking into account that diagnostics generate reproducible signal patterns (structural shapes) for similar physical behaviour, high level data access systems can be developed. In these systems, the input parameter is a pattern and the outputs are the shot numbers and the temporal locations where similar patterns appear inside the database. These pattern oriented techniques can be used for first data screening of any type of morphological aspect of waveforms and images. The article shows a new technique to look for similar images in huge databases in a fast an efficient way. Also, previous techniques to search for similar waveforms and to retrieve time-series data or images containing any kind of patterns are reviewed.
International Nuclear Information System (INIS)
Bernuzzi, Sebastiano; Nagar, Alessandro; Zenginoglu, Anil
2011-01-01
We compute and analyze the gravitational waveform emitted to future null infinity by a system of two black holes in the large-mass-ratio limit. We consider the transition from the quasiadiabatic inspiral to plunge, merger, and ringdown. The relative dynamics is driven by a leading order in the mass ratio, 5PN-resummed, effective-one-body (EOB), analytic-radiation reaction. To compute the waveforms, we solve the Regge-Wheeler-Zerilli equations in the time-domain on a spacelike foliation, which coincides with the standard Schwarzschild foliation in the region including the motion of the small black hole, and is globally hyperboloidal, allowing us to include future null infinity in the computational domain by compactification. This method is called the hyperboloidal layer method, and is discussed here for the first time in a study of the gravitational radiation emitted by black hole binaries. We consider binaries characterized by five mass ratios, ν=10 -2,-3,-4,-5,-6 , that are primary targets of space-based or third-generation gravitational wave detectors. We show significative phase differences between finite-radius and null-infinity waveforms. We test, in our context, the reliability of the extrapolation procedure routinely applied to numerical relativity waveforms. We present an updated calculation of the final and maximum gravitational recoil imparted to the merger remnant by the gravitational wave emission, v kick end /(cν 2 )=0.04474±0.00007 and v kick max /(cν 2 )=0.05248±0.00008. As a self-consistency test of the method, we show an excellent fractional agreement (even during the plunge) between the 5PN EOB-resummed mechanical angular momentum loss and the gravitational wave angular momentum flux computed at null infinity. New results concerning the radiation emitted from unstable circular orbits are also presented. The high accuracy waveforms computed here could be considered for the construction of template banks or for calibrating analytic models such
New weighting methods for phylogenetic tree reconstruction using multiple loci.
Misawa, Kazuharu; Tajima, Fumio
2012-08-01
Efficient determination of evolutionary distances is important for the correct reconstruction of phylogenetic trees. The performance of the pooled distance required for reconstructing a phylogenetic tree can be improved by applying large weights to appropriate distances for reconstructing phylogenetic trees and small weights to inappropriate distances. We developed two weighting methods, the modified Tajima-Takezaki method and the modified least-squares method, for reconstructing phylogenetic trees from multiple loci. By computer simulations, we found that both of the new methods were more efficient in reconstructing correct topologies than the no-weight method. Hence, we reconstructed hominoid phylogenetic trees from mitochondrial DNA using our new methods, and found that the levels of bootstrap support were significantly increased by the modified Tajima-Takezaki and by the modified least-squares method.
[Development and current situation of reconstruction methods following total sacrectomy].
Huang, Siyi; Ji, Tao; Guo, Wei
2018-05-01
To review the development of the reconstruction methods following total sacrectomy, and to provide reference for finding a better reconstruction method following total sacrectomy. The case reports and biomechanical and finite element studies of reconstruction following total sacrectomy at home and abroad were searched. Development and current situation were summarized. After developing for nearly 30 years, great progress has been made in the reconstruction concept and fixation techniques. The fixation methods can be summarized as the following three strategies: spinopelvic fixation (SPF), posterior pelvic ring fixation (PPRF), and anterior spinal column fixation (ASCF). SPF has undergone technical progress from intrapelvic rod and hook constructs to pedicle and iliac screw-rod systems. PPRF and ASCF could improve the stability of the reconstruction system. Reconstruction following total sacrectomy remains a challenge. Reconstruction combining SPF, PPRF, and ASCF is the developmental direction to achieve mechanical stability. How to gain biological fixation to improve the long-term stability is an urgent problem to be solved.
DEFF Research Database (Denmark)
Hellebust, Taran Paulsen; Tanderup, Kari; Bergstrand, Eva Stabell
2007-01-01
in multiplanar reconstructed images (MPR) and (3) library plans, using pre-defined applicator geometry (LIB). The doses to the lead pellets were calculated. The relative standard deviation (SD) for all reconstruction methods was less than 3.7% in the dose points. The relative SD for the LIB method...
Multicore Performance of Block Algebraic Iterative Reconstruction Methods
DEFF Research Database (Denmark)
Sørensen, Hans Henrik B.; Hansen, Per Christian
2014-01-01
Algebraic iterative methods are routinely used for solving the ill-posed sparse linear systems arising in tomographic image reconstruction. Here we consider the algebraic reconstruction technique (ART) and the simultaneous iterative reconstruction techniques (SIRT), both of which rely on semiconv......Algebraic iterative methods are routinely used for solving the ill-posed sparse linear systems arising in tomographic image reconstruction. Here we consider the algebraic reconstruction technique (ART) and the simultaneous iterative reconstruction techniques (SIRT), both of which rely...... on semiconvergence. Block versions of these methods, based on a partitioning of the linear system, are able to combine the fast semiconvergence of ART with the better multicore properties of SIRT. These block methods separate into two classes: those that, in each iteration, access the blocks in a sequential manner...... a fixed relaxation parameter in each method, namely, the one that leads to the fastest semiconvergence. Computational results show that for multicore computers, the sequential approach is preferable....
Evaluation of proxy-based millennial reconstruction methods
Energy Technology Data Exchange (ETDEWEB)
Lee, Terry C.K.; Tsao, Min [University of Victoria, Department of Mathematics and Statistics, Victoria, BC (Canada); Zwiers, Francis W. [Environment Canada, Climate Research Division, Toronto, ON (Canada)
2008-08-15
A range of existing statistical approaches for reconstructing historical temperature variations from proxy data are compared using both climate model data and real-world paleoclimate proxy data. We also propose a new method for reconstruction that is based on a state-space time series model and Kalman filter algorithm. The state-space modelling approach and the recently developed RegEM method generally perform better than their competitors when reconstructing interannual variations in Northern Hemispheric mean surface air temperature. On the other hand, a variety of methods are seen to perform well when reconstructing surface air temperature variability on decadal time scales. An advantage of the new method is that it can incorporate additional, non-temperature, information into the reconstruction, such as the estimated response to external forcing, thereby permitting a simultaneous reconstruction and detection analysis as well as future projection. An application of these extensions is also demonstrated in the paper. (orig.)
International Nuclear Information System (INIS)
Hellebust, Taran Paulsen; Tanderup, Kari; Bergstrand, Eva Stabell; Knutsen, Bjoern Helge; Roeislien, Jo; Olsen, Dag Rune
2007-01-01
The purpose of this study is to investigate whether the method of applicator reconstruction and/or the applicator orientation influence the dose calculation to points around the applicator for brachytherapy of cervical cancer with CT-based treatment planning. A phantom, containing a fixed ring applicator set and six lead pellets representing dose points, was used. The phantom was CT scanned with the ring applicator at four different angles related to the image plane. In each scan the applicator was reconstructed by three methods: (1) direct reconstruction in each image (DR) (2) reconstruction in multiplanar reconstructed images (MPR) and (3) library plans, using pre-defined applicator geometry (LIB). The doses to the lead pellets were calculated. The relative standard deviation (SD) for all reconstruction methods was less than 3.7% in the dose points. The relative SD for the LIB method was significantly lower (p < 0.05) than for the DR and MPR methods for all but two points. All applicator orientations had similar dose calculation reproducibility. Using library plans for applicator reconstruction gives the most reproducible dose calculation. However, with restrictive guidelines for applicator reconstruction the uncertainties for all methods are low compared to other factors influencing the accuracy of brachytherapy
DEFF Research Database (Denmark)
Troelsen, Jens; Meincke, Peter; Breinbjerg, Olav
2000-01-01
into account. To the knowledge of the authors the AWE technique has not previously been applied to a MoM solution based on this kind of integral equation. It is the purpose of this paper to investigate the use of the AWE technique as a tool to obtain a fast frequency sweep of the field scattered......In many radar applications it is necessary to determine the scattering from an object over a wide frequency band. The asymptotic waveform evaluation (AWE), which is a moment matching (MM) technique, constitutes a method to this end. In general, MM techniques provide a reduced-order model...
Software Architecture Reconstruction Method, a Survey
Zainab Nayyar; Nazish Rafique
2014-01-01
Architecture reconstruction belongs to a reverse engineering process, in which we move from code to architecture level for reconstructing architecture. Software architectures are the blue prints of projects which depict the external overview of the software system. Mostly maintenance and testing cause the software to deviate from its original architecture, because sometimes for enhancing the functionality of a system the software deviates from its documented specifications, some new modules a...
A New Wave Equation Based Source Location Method with Full-waveform Inversion
Wu, Zedong
2017-05-26
Locating the source of a passively recorded seismic event is still a challenging problem, especially when the velocity is unknown. Many imaging approaches to focus the image do not address the velocity issue and result in images plagued with illumination artifacts. We develop a waveform inversion approach with an additional penalty term in the objective function to reward the focusing of the source image. This penalty term is relaxed early to allow for data fitting, and avoid cycle skipping, using an extended source. At the later stages the focusing of the image dominates the inversion allowing for high resolution source and velocity inversion. We also compute the source location explicitly and numerical tests show that we obtain good estimates of the source locations with this approach.
Blind compressed sensing image reconstruction based on alternating direction method
Liu, Qinan; Guo, Shuxu
2018-04-01
In order to solve the problem of how to reconstruct the original image under the condition of unknown sparse basis, this paper proposes an image reconstruction method based on blind compressed sensing model. In this model, the image signal is regarded as the product of a sparse coefficient matrix and a dictionary matrix. Based on the existing blind compressed sensing theory, the optimal solution is solved by the alternative minimization method. The proposed method solves the problem that the sparse basis in compressed sensing is difficult to represent, which restrains the noise and improves the quality of reconstructed image. This method ensures that the blind compressed sensing theory has a unique solution and can recover the reconstructed original image signal from a complex environment with a stronger self-adaptability. The experimental results show that the image reconstruction algorithm based on blind compressed sensing proposed in this paper can recover high quality image signals under the condition of under-sampling.
High resolution x-ray CMT: Reconstruction methods
Energy Technology Data Exchange (ETDEWEB)
Brown, J.K.
1997-02-01
This paper qualitatively discusses the primary characteristics of methods for reconstructing tomographic images from a set of projections. These reconstruction methods can be categorized as either {open_quotes}analytic{close_quotes} or {open_quotes}iterative{close_quotes} techniques. Analytic algorithms are derived from the formal inversion of equations describing the imaging process, while iterative algorithms incorporate a model of the imaging process and provide a mechanism to iteratively improve image estimates. Analytic reconstruction algorithms are typically computationally more efficient than iterative methods; however, analytic algorithms are available for a relatively limited set of imaging geometries and situations. Thus, the framework of iterative reconstruction methods is better suited for high accuracy, tomographic reconstruction codes.
Limb reconstruction with the Ilizarov method
Oostenbroek, H.J.
2014-01-01
In chapter 1, the background and origins of this study are explained. The aims of the study are defined. In chapter 2, an analysis of the complications rate of limb reconstruction in a cohort of 37 consecutive growing children was done. Several patient and deformity factors were investigated by
AIR Tools - A MATLAB package of algebraic iterative reconstruction methods
DEFF Research Database (Denmark)
Hansen, Per Christian; Saxild-Hansen, Maria
2012-01-01
We present a MATLAB package with implementations of several algebraic iterative reconstruction methods for discretizations of inverse problems. These so-called row action methods rely on semi-convergence for achieving the necessary regularization of the problem. Two classes of methods are impleme......We present a MATLAB package with implementations of several algebraic iterative reconstruction methods for discretizations of inverse problems. These so-called row action methods rely on semi-convergence for achieving the necessary regularization of the problem. Two classes of methods...... are implemented: Algebraic Reconstruction Techniques (ART) and Simultaneous Iterative Reconstruction Techniques (SIRT). In addition we provide a few simplified test problems from medical and seismic tomography. For each iterative method, a number of strategies are available for choosing the relaxation parameter...
The optimized gradient method for full waveform inversion and its spectral implementation
Wu, Zedong; Alkhalifah, Tariq Ali
2016-01-01
At the heart of the full waveform inversion (FWI) implementation is wavefield extrapolation, and specifically its accuracy and cost. To obtain accurate, dispersion free wavefields, the extrapolation for modelling is often expensive. Combining an efficient extrapolation with a novel gradient preconditioning can render an FWI implementation that efficiently converges to an accurate model. We, specifically, recast the extrapolation part of the inversion in terms of its spectral components for both data and gradient calculation. This admits dispersion free wavefields even at large extrapolation time steps, which improves the efficiency of the inversion. An alternative spectral representation of the depth axis in terms of sine functions allows us to impose a free surface boundary condition, which reflects our medium boundaries more accurately. Using a newly derived perfectly matched layer formulation for this spectral implementation, we can define a finite model with absorbing boundaries. In order to reduce the nonlinearity in FWI, we propose a multiscale conditioning of the objective function through combining the different directional components of the gradient to optimally update the velocity. Through solving a simple optimization problem, it specifically admits the smoothest approximate update while guaranteeing its ascending direction. An application to the Marmousi model demonstrates the capability of the proposed approach and justifies our assertions with respect to cost and convergence.
The optimized gradient method for full waveform inversion and its spectral implementation
Wu, Zedong
2016-03-28
At the heart of the full waveform inversion (FWI) implementation is wavefield extrapolation, and specifically its accuracy and cost. To obtain accurate, dispersion free wavefields, the extrapolation for modelling is often expensive. Combining an efficient extrapolation with a novel gradient preconditioning can render an FWI implementation that efficiently converges to an accurate model. We, specifically, recast the extrapolation part of the inversion in terms of its spectral components for both data and gradient calculation. This admits dispersion free wavefields even at large extrapolation time steps, which improves the efficiency of the inversion. An alternative spectral representation of the depth axis in terms of sine functions allows us to impose a free surface boundary condition, which reflects our medium boundaries more accurately. Using a newly derived perfectly matched layer formulation for this spectral implementation, we can define a finite model with absorbing boundaries. In order to reduce the nonlinearity in FWI, we propose a multiscale conditioning of the objective function through combining the different directional components of the gradient to optimally update the velocity. Through solving a simple optimization problem, it specifically admits the smoothest approximate update while guaranteeing its ascending direction. An application to the Marmousi model demonstrates the capability of the proposed approach and justifies our assertions with respect to cost and convergence.
Adaptive multiresolution method for MAP reconstruction in electron tomography
Energy Technology Data Exchange (ETDEWEB)
Acar, Erman, E-mail: erman.acar@tut.fi [Department of Signal Processing, Tampere University of Technology, P.O. Box 553, FI-33101 Tampere (Finland); BioMediTech, Tampere University of Technology, Biokatu 10, 33520 Tampere (Finland); Peltonen, Sari; Ruotsalainen, Ulla [Department of Signal Processing, Tampere University of Technology, P.O. Box 553, FI-33101 Tampere (Finland); BioMediTech, Tampere University of Technology, Biokatu 10, 33520 Tampere (Finland)
2016-11-15
3D image reconstruction with electron tomography holds problems due to the severely limited range of projection angles and low signal to noise ratio of the acquired projection images. The maximum a posteriori (MAP) reconstruction methods have been successful in compensating for the missing information and suppressing noise with their intrinsic regularization techniques. There are two major problems in MAP reconstruction methods: (1) selection of the regularization parameter that controls the balance between the data fidelity and the prior information, and (2) long computation time. One aim of this study is to provide an adaptive solution to the regularization parameter selection problem without having additional knowledge about the imaging environment and the sample. The other aim is to realize the reconstruction using sequences of resolution levels to shorten the computation time. The reconstructions were analyzed in terms of accuracy and computational efficiency using a simulated biological phantom and publically available experimental datasets of electron tomography. The numerical and visual evaluations of the experiments show that the adaptive multiresolution method can provide more accurate results than the weighted back projection (WBP), simultaneous iterative reconstruction technique (SIRT), and sequential MAP expectation maximization (sMAPEM) method. The method is superior to sMAPEM also in terms of computation time and usability since it can reconstruct 3D images significantly faster without requiring any parameter to be set by the user. - Highlights: • An adaptive multiresolution reconstruction method is introduced for electron tomography. • The method provides more accurate results than the conventional reconstruction methods. • The missing wedge and noise problems can be compensated by the method efficiently.
Geometric reconstruction methods for electron tomography
Energy Technology Data Exchange (ETDEWEB)
Alpers, Andreas, E-mail: alpers@ma.tum.de [Zentrum Mathematik, Technische Universität München, D-85747 Garching bei München (Germany); Gardner, Richard J., E-mail: Richard.Gardner@wwu.edu [Department of Mathematics, Western Washington University, Bellingham, WA 98225-9063 (United States); König, Stefan, E-mail: koenig@ma.tum.de [Zentrum Mathematik, Technische Universität München, D-85747 Garching bei München (Germany); Pennington, Robert S., E-mail: robert.pennington@uni-ulm.de [Center for Electron Nanoscopy, Technical University of Denmark, DK-2800 Kongens Lyngby (Denmark); Boothroyd, Chris B., E-mail: ChrisBoothroyd@cantab.net [Ernst Ruska-Centre for Microscopy and Spectroscopy with Electrons and Peter Grünberg Institute, Forschungszentrum Jülich, D-52425 Jülich (Germany); Houben, Lothar, E-mail: l.houben@fz-juelich.de [Ernst Ruska-Centre for Microscopy and Spectroscopy with Electrons and Peter Grünberg Institute, Forschungszentrum Jülich, D-52425 Jülich (Germany); Dunin-Borkowski, Rafal E., E-mail: rdb@fz-juelich.de [Ernst Ruska-Centre for Microscopy and Spectroscopy with Electrons and Peter Grünberg Institute, Forschungszentrum Jülich, D-52425 Jülich (Germany); Joost Batenburg, Kees, E-mail: Joost.Batenburg@cwi.nl [Centrum Wiskunde and Informatica, NL-1098XG, Amsterdam, The Netherlands and Vision Lab, Department of Physics, University of Antwerp, B-2610 Wilrijk (Belgium)
2013-05-15
Electron tomography is becoming an increasingly important tool in materials science for studying the three-dimensional morphologies and chemical compositions of nanostructures. The image quality obtained by many current algorithms is seriously affected by the problems of missing wedge artefacts and non-linear projection intensities due to diffraction effects. The former refers to the fact that data cannot be acquired over the full 180° tilt range; the latter implies that for some orientations, crystalline structures can show strong contrast changes. To overcome these problems we introduce and discuss several algorithms from the mathematical fields of geometric and discrete tomography. The algorithms incorporate geometric prior knowledge (mainly convexity and homogeneity), which also in principle considerably reduces the number of tilt angles required. Results are discussed for the reconstruction of an InAs nanowire. - Highlights: ► Four algorithms for electron tomography are introduced that utilize prior knowledge. ► Objects are assumed to be homogeneous; convexity and regularity is also discussed. ► We are able to reconstruct slices of a nanowire from as few as four projections. ► Algorithms should be selected based on the specific reconstruction task at hand.
Geometric reconstruction methods for electron tomography
International Nuclear Information System (INIS)
Alpers, Andreas; Gardner, Richard J.; König, Stefan; Pennington, Robert S.; Boothroyd, Chris B.; Houben, Lothar; Dunin-Borkowski, Rafal E.; Joost Batenburg, Kees
2013-01-01
Electron tomography is becoming an increasingly important tool in materials science for studying the three-dimensional morphologies and chemical compositions of nanostructures. The image quality obtained by many current algorithms is seriously affected by the problems of missing wedge artefacts and non-linear projection intensities due to diffraction effects. The former refers to the fact that data cannot be acquired over the full 180° tilt range; the latter implies that for some orientations, crystalline structures can show strong contrast changes. To overcome these problems we introduce and discuss several algorithms from the mathematical fields of geometric and discrete tomography. The algorithms incorporate geometric prior knowledge (mainly convexity and homogeneity), which also in principle considerably reduces the number of tilt angles required. Results are discussed for the reconstruction of an InAs nanowire. - Highlights: ► Four algorithms for electron tomography are introduced that utilize prior knowledge. ► Objects are assumed to be homogeneous; convexity and regularity is also discussed. ► We are able to reconstruct slices of a nanowire from as few as four projections. ► Algorithms should be selected based on the specific reconstruction task at hand
Assessing the Accuracy of Ancestral Protein Reconstruction Methods
Williams, Paul D; Pollock, David D; Blackburne, Benjamin P; Goldstein, Richard A
2006-01-01
The phylogenetic inference of ancestral protein sequences is a powerful technique for the study of molecular evolution, but any conclusions drawn from such studies are only as good as the accuracy of the reconstruction method. Every inference method leads to errors in the ancestral protein sequence, resulting in potentially misleading estimates of the ancestral protein's properties. To assess the accuracy of ancestral protein reconstruction methods, we performed computational population evolu...
Anatomically-aided PET reconstruction using the kernel method.
Hutchcroft, Will; Wang, Guobao; Chen, Kevin T; Catana, Ciprian; Qi, Jinyi
2016-09-21
This paper extends the kernel method that was proposed previously for dynamic PET reconstruction, to incorporate anatomical side information into the PET reconstruction model. In contrast to existing methods that incorporate anatomical information using a penalized likelihood framework, the proposed method incorporates this information in the simpler maximum likelihood (ML) formulation and is amenable to ordered subsets. The new method also does not require any segmentation of the anatomical image to obtain edge information. We compare the kernel method with the Bowsher method for anatomically-aided PET image reconstruction through a simulated data set. Computer simulations demonstrate that the kernel method offers advantages over the Bowsher method in region of interest quantification. Additionally the kernel method is applied to a 3D patient data set. The kernel method results in reduced noise at a matched contrast level compared with the conventional ML expectation maximization algorithm.
A comparison of ancestral state reconstruction methods for quantitative characters.
Royer-Carenzi, Manuela; Didier, Gilles
2016-09-07
Choosing an ancestral state reconstruction method among the alternatives available for quantitative characters may be puzzling. We present here a comparison of seven of them, namely the maximum likelihood, restricted maximum likelihood, generalized least squares under Brownian, Brownian-with-trend and Ornstein-Uhlenbeck models, phylogenetic independent contrasts and squared parsimony methods. A review of the relations between these methods shows that the maximum likelihood, the restricted maximum likelihood and the generalized least squares under Brownian model infer the same ancestral states and can only be distinguished by the distributions accounting for the reconstruction uncertainty which they provide. The respective accuracy of the methods is assessed over character evolution simulated under a Brownian motion with (and without) directional or stabilizing selection. We give the general form of ancestral state distributions conditioned on leaf states under the simulation models. Ancestral distributions are used first, to give a theoretical lower bound of the expected reconstruction error, and second, to develop an original evaluation scheme which is more efficient than comparing the reconstructed and the simulated states. Our simulations show that: (i) the distributions of the reconstruction uncertainty provided by the methods generally make sense (some more than others); (ii) it is essential to detect the presence of an evolutionary trend and to choose a reconstruction method accordingly; (iii) all the methods show good performances on characters under stabilizing selection; (iv) without trend or stabilizing selection, the maximum likelihood method is generally the most accurate. Copyright © 2016 Elsevier Ltd. All rights reserved.
Energy Technology Data Exchange (ETDEWEB)
Pazos, Enrique [Department of Physics and Astronomy, 202 Nicholson Hall, Louisiana State University, Baton Rouge, LA 70803 (United States); Dorband, Ernst Nils [Department of Physics and Astronomy, 202 Nicholson Hall, Louisiana State University, Baton Rouge, LA 70803 (United States); Nagar, Alessandro [Dipartimento di Fisica, Politecnico di Torino, Corso Duca Degli Abruzzi 24, 10129 Torino (Italy); Palenzuela, Carlos [Department of Physics and Astronomy, 202 Nicholson Hall, Louisiana State University, Baton Rouge, LA 70803 (United States); Schnetter, Erik [Center for Computation and Technology, 216 Johnston Hall, Louisiana State University, Baton Rouge, LA 70803 (United States); Tiglio, Manuel [Department of Physics and Astronomy, 202 Nicholson Hall, Louisiana State University, Baton Rouge, LA 70803 (United States)
2007-06-21
We present a method for extracting gravitational waves from numerical spacetimes which generalizes and refines one of the standard methods based on the Regge-Wheeler-Zerilli perturbation formalism. At the analytical level, this generalization allows a much more general class of slicing conditions for the background geometry, and is thus not restricted to Schwarzschild-like coordinates. At the numerical level, our approach uses high-order multi-block methods, which improve both the accuracy of our simulations and of our extraction procedure. In particular, the latter is simplified since there is no need for interpolation, and we can afford to extract accurate waves at large radii with only little additional computational effort. We then present fully nonlinear three-dimensional numerical evolutions of a distorted Schwarzschild black hole in Kerr-Schild coordinates with an odd parity perturbation and analyse the improvement that we gain from our generalized wave extraction, comparing our new method to the standard one. In particular, we analyse in detail the quasinormal frequencies of the extracted waves, using both methods. We do so by comparing the extracted waves with one-dimensional high resolution solutions of the corresponding generalized Regge-Wheeler equation. We explicitly see that the errors in the waveforms extracted with the standard method at fixed, finite extraction radii do not converge to zero with increasing resolution. We find that even with observers as far out as R = 80M-which is larger than what is commonly used in state-of-the-art simulations-the assumption in the standard method that the background is close to having Schwarzschild-like coordinates increases the error in the extracted waves considerably. Furthermore, those errors are dominated by the extraction method itself and not by the accuracy of our simulations. For extraction radii between 20M and 80M and for the resolutions that we use in this paper, our new method decreases the errors
International Nuclear Information System (INIS)
Pazos, Enrique; Dorband, Ernst Nils; Nagar, Alessandro; Palenzuela, Carlos; Schnetter, Erik; Tiglio, Manuel
2007-01-01
We present a method for extracting gravitational waves from numerical spacetimes which generalizes and refines one of the standard methods based on the Regge-Wheeler-Zerilli perturbation formalism. At the analytical level, this generalization allows a much more general class of slicing conditions for the background geometry, and is thus not restricted to Schwarzschild-like coordinates. At the numerical level, our approach uses high-order multi-block methods, which improve both the accuracy of our simulations and of our extraction procedure. In particular, the latter is simplified since there is no need for interpolation, and we can afford to extract accurate waves at large radii with only little additional computational effort. We then present fully nonlinear three-dimensional numerical evolutions of a distorted Schwarzschild black hole in Kerr-Schild coordinates with an odd parity perturbation and analyse the improvement that we gain from our generalized wave extraction, comparing our new method to the standard one. In particular, we analyse in detail the quasinormal frequencies of the extracted waves, using both methods. We do so by comparing the extracted waves with one-dimensional high resolution solutions of the corresponding generalized Regge-Wheeler equation. We explicitly see that the errors in the waveforms extracted with the standard method at fixed, finite extraction radii do not converge to zero with increasing resolution. We find that even with observers as far out as R = 80M-which is larger than what is commonly used in state-of-the-art simulations-the assumption in the standard method that the background is close to having Schwarzschild-like coordinates increases the error in the extracted waves considerably. Furthermore, those errors are dominated by the extraction method itself and not by the accuracy of our simulations. For extraction radii between 20M and 80M and for the resolutions that we use in this paper, our new method decreases the errors
Alternative method for reconstruction of antihydrogen annihilation vertices
Amole, C; Andresen , G B; Baquero-Ruiz, M; Bertsche, W; Bowe, P D; Butler, E; Cesar, C L; Chapman, S; Charlton, M; Deller, A; Eriksson, S; Fajans, J; Friesen, T; Fujiwara, M C; Gill, D R; Gutierrez, A; Hangst, J S; Hardy, W N; Hayano, R S; Hayden, M E; Humphries, A J; Hydomako, R; Jonsell, S; Kurchaninov, L; Madsen, N; Menary, S; Nolan, P; Olchanski, K; Olin, A; Povilus, A; Pusa, P; Robicheaux, F; Sarid, E; Silveira, D M; So, C; Storey, J W; Thompson, R I; van der Werf, D P; Wurtele, J S; Yamazaki,Y
2012-01-01
The ALPHA experiment, located at CERN, aims to compare the properties of antihydrogen atoms with those of hydrogen atoms. The neutral antihydrogen atoms are trapped using an octupole magnetic trap. The trap region is surrounded by a three layered silicon detector used to reconstruct the antiproton annihilation vertices. This paper describes a method we have devised that can be used for reconstructing annihilation vertices with a good resolution and is more efficient than the standard method currently used for the same purpose.
Alternative method for reconstruction of antihydrogen annihilation vertices
Energy Technology Data Exchange (ETDEWEB)
Amole, C., E-mail: chanpreet.amole@cern.ch [York University, Department of Physics and Astronomy (Canada); Ashkezari, M. D. [Simon Fraser University, Department of Physics (Canada); Andresen, G. B. [Aarhus University, Department of Physics and Astronomy (Denmark); Baquero-Ruiz, M. [University of California, Department of Physics (United States); Bertsche, W. [Swansea University, Department of Physics (United Kingdom); Bowe, P. D. [Aarhus University, Department of Physics and Astronomy (Denmark); Butler, E. [CERN, Physics Department (Switzerland); Cesar, C. L. [Universidade Federal do Rio de Janeiro, Instituto de Fisica (Brazil); Chapman, S. [University of California, Department of Physics (United States); Charlton, M.; Deller, A.; Eriksson, S. [Swansea University, Department of Physics (United Kingdom); Fajans, J. [University of California, Department of Physics (United States); Friesen, T.; Fujiwara, M. C. [University of Calgary, Department of Physics and Astronomy (Canada); Gill, D. R. [TRIUMF (Canada); Gutierrez, A. [University of British Columbia, Department of Physics and Astronomy (Canada); Hangst, J. S. [Aarhus University, Department of Physics and Astronomy (Denmark); Hardy, W. N. [University of British Columbia, Department of Physics and Astronomy (Canada); Hayano, R. S. [University of Tokyo, Department of Physics (Japan); Collaboration: ALPHA Collaboration; and others
2012-12-15
The ALPHA experiment, located at CERN, aims to compare the properties of antihydrogen atoms with those of hydrogen atoms. The neutral antihydrogen atoms are trapped using an octupole magnetic trap. The trap region is surrounded by a three layered silicon detector used to reconstruct the antiproton annihilation vertices. This paper describes a method we have devised that can be used for reconstructing annihilation vertices with a good resolution and is more efficient than the standard method currently used for the same purpose.
Reconstruction methods for phase-contrast tomography
Energy Technology Data Exchange (ETDEWEB)
Raven, C.
1997-02-01
Phase contrast imaging with coherent x-rays can be distinguished in outline imaging and holography, depending on the wavelength {lambda}, the object size d and the object-to-detector distance r. When r << d{sup 2}{lambda}, phase contrast occurs only in regions where the refractive index fastly changes, i.e. at interfaces and edges in the sample. With increasing object-to-detector distance we come in the area of holographic imaging. The image contrast outside the shadow region of the object is due to interference of the direct, undiffracted beam and a beam diffracted by the object, or, in terms of holography, the interference of a reference wave with the object wave. Both, outline imaging and holography, offer the possibility to obtain three dimensional information of the sample in conjunction with a tomographic technique. But the data treatment and the kind of information one can obtain from the reconstruction is different.
Filter-based reconstruction methods for tomography
Pelt, D.M.
2016-01-01
In X-ray tomography, a three-dimensional image of the interior of an object is computed from multiple X-ray images, acquired over a range of angles. Two types of methods are commonly used to compute such an image: analytical methods and iterative methods. Analytical methods are computationally
Comparison of Force Reconstruction Methods for a Lumped Mass Beam
Directory of Open Access Journals (Sweden)
Vesta I. Bateman
1997-01-01
Full Text Available Two extensions of the force reconstruction method, the sum of weighted accelerations technique (SWAT, are presented in this article. SWAT requires the use of the structure’s elastic mode shapes for reconstruction of the applied force. Although based on the same theory, the two new techniques do not rely on mode shapes to reconstruct the applied force and may be applied to structures whose mode shapes are not available. One technique uses the measured force and acceleration responses with the rigid body mode shapes to calculate the scalar weighting vector, so the technique is called SWAT-CAL (SWAT using a calibrated force input. The second technique uses the free-decay time response of the structure with the rigid body mode shapes to calculate the scalar weighting vector and is called SWAT-TEEM (SWAT using time eliminated elastic modes. All three methods are used to reconstruct forces for a simple structure.
Choosing the best ancestral character state reconstruction method.
Royer-Carenzi, Manuela; Pontarotti, Pierre; Didier, Gilles
2013-03-01
Despite its intrinsic difficulty, ancestral character state reconstruction is an essential tool for testing evolutionary hypothesis. Two major classes of approaches to this question can be distinguished: parsimony- or likelihood-based approaches. We focus here on the second class of methods, more specifically on approaches based on continuous-time Markov modeling of character evolution. Among them, we consider the most-likely-ancestor reconstruction, the posterior-probability reconstruction, the likelihood-ratio method, and the Bayesian approach. We discuss and compare the above-mentioned methods over several phylogenetic trees, adding the maximum-parsimony method performance in the comparison. Under the assumption that the character evolves according a continuous-time Markov process, we compute and compare the expectations of success of each method for a broad range of model parameter values. Moreover, we show how the knowledge of the evolution model parameters allows to compute upper bounds of reconstruction performances, which are provided as references. The results of all these reconstruction methods are quite close one to another, and the expectations of success are not so far from their theoretical upper bounds. But the performance ranking heavily depends on the topology of the studied tree, on the ancestral node that is to be inferred and on the parameter values. Consequently, we propose a protocol providing for each parameter value the best method in terms of expectation of success, with regard to the phylogenetic tree and the ancestral node to infer. Copyright © 2012 Elsevier Inc. All rights reserved.
Assessing the accuracy of ancestral protein reconstruction methods.
Directory of Open Access Journals (Sweden)
Paul D Williams
2006-06-01
Full Text Available The phylogenetic inference of ancestral protein sequences is a powerful technique for the study of molecular evolution, but any conclusions drawn from such studies are only as good as the accuracy of the reconstruction method. Every inference method leads to errors in the ancestral protein sequence, resulting in potentially misleading estimates of the ancestral protein's properties. To assess the accuracy of ancestral protein reconstruction methods, we performed computational population evolution simulations featuring near-neutral evolution under purifying selection, speciation, and divergence using an off-lattice protein model where fitness depends on the ability to be stable in a specified target structure. We were thus able to compare the thermodynamic properties of the true ancestral sequences with the properties of "ancestral sequences" inferred by maximum parsimony, maximum likelihood, and Bayesian methods. Surprisingly, we found that methods such as maximum parsimony and maximum likelihood that reconstruct a "best guess" amino acid at each position overestimate thermostability, while a Bayesian method that sometimes chooses less-probable residues from the posterior probability distribution does not. Maximum likelihood and maximum parsimony apparently tend to eliminate variants at a position that are slightly detrimental to structural stability simply because such detrimental variants are less frequent. Other properties of ancestral proteins might be similarly overestimated. This suggests that ancestral reconstruction studies require greater care to come to credible conclusions regarding functional evolution. Inferred functional patterns that mimic reconstruction bias should be reevaluated.
Assessing the accuracy of ancestral protein reconstruction methods.
Williams, Paul D; Pollock, David D; Blackburne, Benjamin P; Goldstein, Richard A
2006-06-23
The phylogenetic inference of ancestral protein sequences is a powerful technique for the study of molecular evolution, but any conclusions drawn from such studies are only as good as the accuracy of the reconstruction method. Every inference method leads to errors in the ancestral protein sequence, resulting in potentially misleading estimates of the ancestral protein's properties. To assess the accuracy of ancestral protein reconstruction methods, we performed computational population evolution simulations featuring near-neutral evolution under purifying selection, speciation, and divergence using an off-lattice protein model where fitness depends on the ability to be stable in a specified target structure. We were thus able to compare the thermodynamic properties of the true ancestral sequences with the properties of "ancestral sequences" inferred by maximum parsimony, maximum likelihood, and Bayesian methods. Surprisingly, we found that methods such as maximum parsimony and maximum likelihood that reconstruct a "best guess" amino acid at each position overestimate thermostability, while a Bayesian method that sometimes chooses less-probable residues from the posterior probability distribution does not. Maximum likelihood and maximum parsimony apparently tend to eliminate variants at a position that are slightly detrimental to structural stability simply because such detrimental variants are less frequent. Other properties of ancestral proteins might be similarly overestimated. This suggests that ancestral reconstruction studies require greater care to come to credible conclusions regarding functional evolution. Inferred functional patterns that mimic reconstruction bias should be reevaluated.
Image reconstruction in computerized tomography using the convolution method
International Nuclear Information System (INIS)
Oliveira Rebelo, A.M. de.
1984-03-01
In the present work an algoritin was derived, using the analytical convolution method (filtered back-projection) for two-dimensional or three-dimensional image reconstruction in computerized tomography applied to non-destructive testing and to the medical use. This mathematical model is based on the analytical Fourier transform method for image reconstruction. This model consists of a discontinuous system formed by an NxN array of cells (pixels). The attenuation in the object under study of a colimated gamma ray beam has been determined for various positions and incidence angles (projections) in terms of the interaction of the beam with the intercepted pixels. The contribution of each pixel to beam attenuation was determined using the weight function W ij which was used for simulated tests. Simulated tests using standard objects with attenuation coefficients in the range of 0,2 to 0,7 cm -1 were carried out using cell arrays of up to 25x25. One application was carried out in the medical area simulating image reconstruction of an arm phantom with attenuation coefficients in the range of 0,2 to 0,5 cm -1 using cell arrays of 41x41. The simulated results show that, in objects with a great number of interfaces and great variations of attenuation coefficients at these interfaces, a good reconstruction is obtained with the number of projections equal to the reconstruction matrix dimension. A good reconstruction is otherwise obtained with fewer projections. (author) [pt
A Multifactorial Analysis of Reconstruction Methods Applied After Total Gastrectomy
Directory of Open Access Journals (Sweden)
Oktay Büyükaşık
2010-12-01
Full Text Available Aim: The aim of this study was to evaluate the reconstruction methods applied after total gastrectomy in terms of postoperative symptomology and nutrition. Methods: This retrospective study was conducted on 31 patients who underwent total gastrectomy due to gastric cancer in 2. Clinic of General Surgery, SSK Ankara Training Hospital. 6 different reconstruction methods were used and analyzed in terms of age, sex and postoperative complications. One from esophagus and two biopsy specimens from jejunum were taken through upper gastrointestinal endoscopy from all cases, and late period morphological and microbiological changes were examined. Postoperative weight change, dumping symptoms, reflux esophagitis, solid/liquid dysphagia, early satiety, postprandial pain, diarrhea and anorexia were assessed. Results: Of 31 patients,18 were males and 13 females; the youngest one was 33 years old, while the oldest- 69 years old. It was found that reconstruction without pouch was performed in 22 cases and with pouch in 9 cases. Early satiety, postprandial pain, dumping symptoms, diarrhea and anemia were found most commonly in cases with reconstruction without pouch. The rate of bacterial colonization of the jejunal mucosa was identical in both groups. Reflux esophagitis was most commonly seen in omega esophagojejunostomy (EJ, while the least-in Roux-en-Y, Tooley and Tanner 19 EJ. Conclusion: Reconstruction with pouch performed after total gastrectomy is still a preferable method. (The Medical Bulletin of Haseki 2010; 48:126-31
Discontinuous Galerkin method for computing gravitational waveforms from extreme mass ratio binaries
International Nuclear Information System (INIS)
Field, Scott E; Hesthaven, Jan S; Lau, Stephen R
2009-01-01
Gravitational wave emission from extreme mass ratio binaries (EMRBs) should be detectable by the joint NASA-ESA LISA project, spurring interest in analytical and numerical methods for investigating EMRBs. We describe a discontinuous Galerkin (dG) method for solving the distributionally forced 1+1 wave equations which arise when modeling EMRBs via the perturbation theory of Schwarzschild black holes. Despite the presence of jump discontinuities in the relevant polar and axial gravitational 'master functions', our dG method achieves global spectral accuracy, provided that we know the instantaneous position, velocity and acceleration of the small particle. Here these variables are known, since we assume that the particle follows a timelike geodesic of the Schwarzschild geometry. We document the results of several numerical experiments testing our method, and in our concluding section discuss the possible inclusion of gravitational self-force effects.
Fingerprint image reconstruction for swipe sensor using Predictive Overlap Method
Directory of Open Access Journals (Sweden)
Mardiansyah Ahmad Zafrullah
2018-01-01
Full Text Available Swipe sensor is one of many biometric authentication sensor types that widely applied to embedded devices. The sensor produces an overlap on every pixel block of the image, so the picture requires a reconstruction process before heading to the feature extraction process. Conventional reconstruction methods require extensive computation, causing difficult to apply to embedded devices that have limited computing process. In this paper, image reconstruction is proposed using predictive overlap method, which determines the image block shift from the previous set of change data. The experiments were performed using 36 images generated by a swipe sensor with 128 x 8 pixels size of the area, where each image has an overlap in each block. The results reveal computation can increase up to 86.44% compared with conventional methods, with accuracy decreasing to 0.008% in average.
Matrix-based image reconstruction methods for tomography
International Nuclear Information System (INIS)
Llacer, J.; Meng, J.D.
1984-10-01
Matrix methods of image reconstruction have not been used, in general, because of the large size of practical matrices, ill condition upon inversion and the success of Fourier-based techniques. An exception is the work that has been done at the Lawrence Berkeley Laboratory for imaging with accelerated radioactive ions. An extension of that work into more general imaging problems shows that, with a correct formulation of the problem, positron tomography with ring geometries results in well behaved matrices which can be used for image reconstruction with no distortion of the point response in the field of view and flexibility in the design of the instrument. Maximum Likelihood Estimator methods of reconstruction, which use the system matrices tailored to specific instruments and do not need matrix inversion, are shown to result in good preliminary images. A parallel processing computer structure based on multiple inexpensive microprocessors is proposed as a system to implement the matrix-MLE methods. 14 references, 7 figures
COMPARISON OF HOLOGRAPHIC AND ITERATIVE METHODS FOR AMPLITUDE OBJECT RECONSTRUCTION
Directory of Open Access Journals (Sweden)
I. A. Shevkunov
2015-01-01
Full Text Available Experimental comparison of four methods for the wavefront reconstruction is presented. We considered two iterative and two holographic methods with different mathematical models and algorithms for recovery. The first two of these methods do not use a reference wave recording scheme that reduces requirements for stability of the installation. A major role in phase information reconstruction by such methods is played by a set of spatial intensity distributions, which are recorded as the recording matrix is being moved along the optical axis. The obtained data are used consistently for wavefront reconstruction using an iterative procedure. In the course of this procedure numerical distribution of the wavefront between the planes is performed. Thus, phase information of the wavefront is stored in every plane and calculated amplitude distributions are replaced for the measured ones in these planes. In the first of the compared methods, a two-dimensional Fresnel transform and iterative calculation in the object plane are used as a mathematical model. In the second approach, an angular spectrum method is used for numerical wavefront propagation, and the iterative calculation is carried out only between closely located planes of data registration. Two digital holography methods, based on the usage of the reference wave in the recording scheme and differing from each other by numerical reconstruction algorithm of digital holograms, are compared with the first two methods. The comparison proved that the iterative method based on 2D Fresnel transform gives results comparable with the result of common holographic method with the Fourier-filtering. It is shown that holographic method for reconstructing of the object complex amplitude in the process of the object amplitude reduction is the best among considered ones.
Harmonic arbitrary waveform generator
Roberts, Brock Franklin
2017-11-28
High frequency arbitrary waveforms have applications in radar, communications, medical imaging, therapy, electronic warfare, and charged particle acceleration and control. State of the art arbitrary waveform generators are limited in the frequency they can operate by the speed of the Digital to Analog converters that directly create their arbitrary waveforms. The architecture of the Harmonic Arbitrary Waveform Generator allows the phase and amplitude of the high frequency content of waveforms to be controlled without taxing the Digital to Analog converters that control them. The Harmonic Arbitrary Waveform Generator converts a high frequency input, into a precision, adjustable, high frequency arbitrary waveform.
Probability Density Function Method for Observing Reconstructed Attractor Structure
Institute of Scientific and Technical Information of China (English)
陆宏伟; 陈亚珠; 卫青
2004-01-01
Probability density function (PDF) method is proposed for analysing the structure of the reconstructed attractor in computing the correlation dimensions of RR intervals of ten normal old men. PDF contains important information about the spatial distribution of the phase points in the reconstructed attractor. To the best of our knowledge, it is the first time that the PDF method is put forward for the analysis of the reconstructed attractor structure. Numerical simulations demonstrate that the cardiac systems of healthy old men are about 6 - 6.5 dimensional complex dynamical systems. It is found that PDF is not symmetrically distributed when time delay is small, while PDF satisfies Gaussian distribution when time delay is big enough. A cluster effect mechanism is presented to explain this phenomenon. By studying the shape of PDFs, that the roles played by time delay are more important than embedding dimension in the reconstruction is clearly indicated. Results have demonstrated that the PDF method represents a promising numerical approach for the observation of the reconstructed attractor structure and may provide more information and new diagnostic potential of the analyzed cardiac system.
Least Squares Methods for Equidistant Tree Reconstruction
Fahey, Conor; Hosten, Serkan; Krieger, Nathan; Timpe, Leslie
2008-01-01
UPGMA is a heuristic method identifying the least squares equidistant phylogenetic tree given empirical distance data among $n$ taxa. We study this classic algorithm using the geometry of the space of all equidistant trees with $n$ leaves, also known as the Bergman complex of the graphical matroid for the complete graph $K_n$. We show that UPGMA performs an orthogonal projection of the data onto a maximal cell of the Bergman complex. We also show that the equidistant tree with the least (Eucl...
Improving waveform inversion using modified interferometric imaging condition
Guo, Xuebao; Liu, Hong; Shi, Ying; Wang, Weihong; Zhang, Zhen
2018-02-01
Similar to the reverse-time migration, full waveform inversion in the time domain is a memory-intensive processing method. The computational storage size for waveform inversion mainly depends on the model size and time recording length. In general, 3D and 4D data volumes need to be saved for 2D and 3D waveform inversion gradient calculations, respectively. Even the boundary region wavefield-saving strategy creates a huge storage demand. Using the last two slices of the wavefield to reconstruct wavefields at other moments through the random boundary, avoids the need to store a large number of wavefields; however, traditional random boundary method is less effective at low frequencies. In this study, we follow a new random boundary designed to regenerate random velocity anomalies in the boundary region for each shot of each iteration. The results obtained using the random boundary condition in less illuminated areas are more seriously affected by random scattering than other areas due to the lack of coverage. In this paper, we have replaced direct correlation for computing the waveform inversion gradient by modified interferometric imaging, which enhances the continuity of the imaging path and reduces noise interference. The new imaging condition is a weighted average of extended imaging gathers can be directly used in the gradient computation. In this process, we have not changed the objective function, and the role of the imaging condition is similar to regularization. The window size for the modified interferometric imaging condition-based waveform inversion plays an important role in this process. The numerical examples show that the proposed method significantly enhances waveform inversion performance.
Energy Technology Data Exchange (ETDEWEB)
Hirayama, A; Eguchi, T [Mitsui Engineering and Shipbuilding Co. Ltd., Tokyo (Japan)
1996-04-10
Among the numerical calculation methods for steady-state wave-making problems, the panel shift Rankine source (PSRS) method has the advantages of rather precise determination of wave pattern of practical ship types, and short calculation period. The wave pattern around the hull was calculated by means of the PSRS method. The waveform analysis was carried out for the wave, to obtain an amplitude function of the original ship type. Based on the amplitude function, a ship type improvement method aiming at the optimization of ship type was provided using a conditional calculus of variation. A Series 60 (Cb=0.6) ship type was selected for the ship type improvement, to apply this technique. It was suggested that optimum design can be made for reducing the wave making resistance by means of this method. For the improvement of Series 60 ship type using this method, a great degree of reduction in the wave making resistance was recognized from the results of numerical waveform analysis. It was suggested that the ship type improvement aiming at the reduction of wave-making resistance can be made in shorter period and by smaller labor compared with the method using a waveform analysis of cistern tests. 5 refs., 9 figs.
Comparison of four surgical methods for eyebrow reconstruction
Directory of Open Access Journals (Sweden)
Omranifard Mahmood
2007-01-01
Full Text Available Background: The eyebrow plays an important role in facial harmony and eye protection. Eyebrows can be injured by burn, trauma, tumour, tattooing and alopecia. Eyebrow reconstructions have been done via several techniques. Here, our experience with a fairly new method for eyebrow reconstruction is presented. Materials and Methods: This is a descriptive-analytical study which was done on 76 patients at the Al-Zahra and Imam Mousa Kazem hospitals at Isfahan University of Medical University, Isfahan, Iran, from 1994 to 2004. Totally 86 eyebrows were reconstructed. All patients were examined before and after the operation. Methods which are commonly applied in eyebrow reconstruction are as follows: 1. Superficial Temporal Artery Flap (Island, 2. Interpolitation Scalp Flap, 3. Graft. Our method which is named Forehead Facial Island Flap with inferior pedicle provides an easier approach for the surgeon and more ideal hair growth direction for the patient. Results: Significantly lower rates of complication along with greater patient satisfaction were obtained with Forehead Facial Island Flap. Conclusions: According to the acquired results, this method seems to be more technically practical and aesthetically favourable when compared to others.
Reconstruction of CT images by the Bayes- back projection method
Haruyama, M; Takase, M; Tobita, H
2002-01-01
In the course of research on quantitative assay of non-destructive measurement of radioactive waste, the have developed a unique program based on the Bayesian theory for reconstruction of transmission computed tomography (TCT) image. The reconstruction of cross-section images in the CT technology usually employs the Filtered Back Projection method. The new imaging reconstruction program reported here is based on the Bayesian Back Projection method, and it has a function of iterative improvement images by every step of measurement. Namely, this method has the capability of prompt display of a cross-section image corresponding to each angled projection data from every measurement. Hence, it is possible to observe an improved cross-section view by reflecting each projection data in almost real time. From the basic theory of Baysian Back Projection method, it can be not only applied to CT types of 1st, 2nd, and 3rd generation. This reported deals with a reconstruction program of cross-section images in the CT of ...
A Total Variation-Based Reconstruction Method for Dynamic MRI
Directory of Open Access Journals (Sweden)
Germana Landi
2008-01-01
Full Text Available In recent years, total variation (TV regularization has become a popular and powerful tool for image restoration and enhancement. In this work, we apply TV minimization to improve the quality of dynamic magnetic resonance images. Dynamic magnetic resonance imaging is an increasingly popular clinical technique used to monitor spatio-temporal changes in tissue structure. Fast data acquisition is necessary in order to capture the dynamic process. Most commonly, the requirement of high temporal resolution is fulfilled by sacrificing spatial resolution. Therefore, the numerical methods have to address the issue of images reconstruction from limited Fourier data. One of the most successful techniques for dynamic imaging applications is the reduced-encoded imaging by generalized-series reconstruction method of Liang and Lauterbur. However, even if this method utilizes a priori data for optimal image reconstruction, the produced dynamic images are degraded by truncation artifacts, most notably Gibbs ringing, due to the spatial low resolution of the data. We use a TV regularization strategy in order to reduce these truncation artifacts in the dynamic images. The resulting TV minimization problem is solved by the fixed point iteration method of Vogel and Oman. The results of test problems with simulated and real data are presented to illustrate the effectiveness of the proposed approach in reducing the truncation artifacts of the reconstructed images.
Comparing 3-dimensional virtual methods for reconstruction in craniomaxillofacial surgery.
Benazzi, Stefano; Senck, Sascha
2011-04-01
In the present project, the virtual reconstruction of digital osteomized zygomatic bones was simulated using different methods. A total of 15 skulls were scanned using computed tomography, and a virtual osteotomy of the left zygomatic bone was performed. Next, virtual reconstructions of the missing part using mirror imaging (with and without best fit registration) and thin plate spline interpolation functions were compared with the original left zygomatic bone. In general, reconstructions using thin plate spline warping showed better results than the mirroring approaches. Nevertheless, when dealing with skulls characterized by a low degree of asymmetry, mirror imaging and subsequent registration can be considered a valid and easy solution for zygomatic bone reconstruction. The mirroring tool is one of the possible alternatives in reconstruction, but it might not always be the optimal solution (ie, when the hemifaces are asymmetrical). In the present pilot study, we have verified that best fit registration of the mirrored unaffected hemiface and thin plate spline warping achieved better results in terms of fitting accuracy, overcoming the evident limits of the mirroring approach. Copyright © 2011 American Association of Oral and Maxillofacial Surgeons. Published by Elsevier Inc. All rights reserved.
DG TOMO: A new method for tomographic reconstruction
International Nuclear Information System (INIS)
Freitas, D. de; Feschet, F.; Cachin, F.; Geissler, B.; Bapt, A.; Karidioula, I.; Martin, C.; Kelly, A.; Mestas, D.; Gerard, Y.; Reveilles, J.P.; Maublant, J.
2006-01-01
Aim: FBP and OSEM are the most popular tomographic reconstruction methods in scintigraphy. FBP is a simple method but artifacts of reconstruction are generated which corrections induce degradation of the spatial resolution. OSEM takes account of statistical fluctuations but noise strongly increases after a certain number of iterations. We compare a new method of tomographic reconstruction based on discrete geometry (DG TOMO) to FBP and OSEM. Materials and methods: Acquisitions were performed on a three-head gamma-camera (Philips) with a NEMA Phantom containing six spheres of sizes from 10 to 37 mm inner diameter, filled with around 325 MBq/l of technetium-99 m. The spheres were positioned in water containing 3 MBq/l of technetium-99 m. Acquisitions were realized during a 180 o -rotation around the phantom by 25-s steps. DG TOMO has been developed in our laboratory in order to minimize the number of projections at acquisition. Two tomographic reconstructions utilizing 32 and 16 projections with FBP, OSEM and DG TOMO were performed and transverse slices were compared. Results: FBP with 32 projections detects only the activity in the three largest spheres (diameter ≥22 mm). With 16 projections, the star effect is predominant and the contrast of the third sphere is very low. OSEM with 32 projections provides a better image but the three smallest spheres (diameter ≤17 mm) are difficult to distinguish. With 16 projections, the three smaller spheres are not detectable. The results of DG TOMO are similar to OSEM. Conclusion: Since the parameters of DG TOMO can be further optimized, this method appears as a promising alternative for tomoscintigraphy reconstruction
Parallel MR image reconstruction using augmented Lagrangian methods.
Ramani, Sathish; Fessler, Jeffrey A
2011-03-01
Magnetic resonance image (MRI) reconstruction using SENSitivity Encoding (SENSE) requires regularization to suppress noise and aliasing effects. Edge-preserving and sparsity-based regularization criteria can improve image quality, but they demand computation-intensive nonlinear optimization. In this paper, we present novel methods for regularized MRI reconstruction from undersampled sensitivity encoded data--SENSE-reconstruction--using the augmented Lagrangian (AL) framework for solving large-scale constrained optimization problems. We first formulate regularized SENSE-reconstruction as an unconstrained optimization task and then convert it to a set of (equivalent) constrained problems using variable splitting. We then attack these constrained versions in an AL framework using an alternating minimization method, leading to algorithms that can be implemented easily. The proposed methods are applicable to a general class of regularizers that includes popular edge-preserving (e.g., total-variation) and sparsity-promoting (e.g., l(1)-norm of wavelet coefficients) criteria and combinations thereof. Numerical experiments with synthetic and in vivo human data illustrate that the proposed AL algorithms converge faster than both general-purpose optimization algorithms such as nonlinear conjugate gradient (NCG) and state-of-the-art MFISTA.
Chan, Harley; Gilbert, Ralph W.; Pagedar, Nitin A.; Daly, Michael J.; Irish, Jonathan C.; Siewerdsen, Jeffrey H.
2010-02-01
esthetic appearance is one of the most important factors for reconstructive surgery. The current practice of maxillary reconstruction chooses radial forearm, fibula or iliac rest osteocutaneous to recreate three-dimensional complex structures of the palate and maxilla. However, these bone flaps lack shape similarity to the palate and result in a less satisfactory esthetic. Considering similarity factors and vasculature advantages, reconstructive surgeons recently explored the use of scapular tip myo-osseous free flaps to restore the excised site. We have developed a new method that quantitatively evaluates the morphological similarity of the scapula tip bone and palate based on a diagnostic volumetric computed tomography (CT) image. This quantitative result was further interpreted as a color map that rendered on the surface of a three-dimensional computer model. For surgical planning, this color interpretation could potentially assist the surgeon to maximize the orientation of the bone flaps for best fit of the reconstruction site. With approval from the Research Ethics Board (REB) of the University Health Network, we conducted a retrospective analysis with CT image obtained from 10 patients. Each patient had a CT scans including the maxilla and chest on the same day. Based on this image set, we simulated total, subtotal and hemi palate reconstruction. The procedure of simulation included volume segmentation, conversing the segmented volume to a stereo lithography (STL) model, manual registration, computation of minimum geometric distances and curvature between STL model. Across the 10 patients data, we found the overall root-mean-square (RMS) conformance was 3.71+/- 0.16 mm
Methods for reconstruction of the density distribution of nuclear power
International Nuclear Information System (INIS)
Pessoa, Paulo O.; Silva, Fernando C.; Martinez, Aquilino S.
2015-01-01
Highlights: • Two methods for reconstruction of the pin power distribution are presented. • The ARM method uses analytical solution of the 2D diffusion equation. • The PRM method uses polynomial solution without boundary conditions. • The maximum errors in pin power reconstruction occur in the peripheral water region. • The errors are significantly less in the inner area of the core. - Abstract: In analytical reconstruction method (ARM), the two-dimensional (2D) neutron diffusion equation is analytically solved for two energy groups (2G) and homogeneous nodes with dimensions of a fuel assembly (FA). The solution employs a 2D fourth-order expansion for the axial leakage term. The Nodal Expansion Method (NEM) provides the solution average values as the four average partial currents on the surfaces of the node, the average flux in the node and the multiplying factor of the problem. The expansion coefficients for the axial leakage are determined directly from NEM method or can be determined in the reconstruction method. A new polynomial reconstruction method (PRM) is implemented based on the 2D expansion for the axial leakage term. The ARM method use the four average currents on the surfaces of the node and four average fluxes in corners of the node as boundary conditions and the average flux in the node as a consistency condition. To determine the average fluxes in corners of the node an analytical solution is employed. This analytical solution uses the average fluxes on the surfaces of the node as boundary conditions and discontinuities in corners are incorporated. The polynomial and analytical solutions to the PRM and ARM methods, respectively, represent the homogeneous flux distributions. The detailed distributions inside a FA are estimated by product of homogeneous distribution by local heterogeneous form function. Moreover, the form functions of power are used. The results show that the methods have good accuracy when compared with reference values and
Quartet-based methods to reconstruct phylogenetic networks.
Yang, Jialiang; Grünewald, Stefan; Xu, Yifei; Wan, Xiu-Feng
2014-02-20
Phylogenetic networks are employed to visualize evolutionary relationships among a group of nucleotide sequences, genes or species when reticulate events like hybridization, recombination, reassortant and horizontal gene transfer are believed to be involved. In comparison to traditional distance-based methods, quartet-based methods consider more information in the reconstruction process and thus have the potential to be more accurate. We introduce QuartetSuite, which includes a set of new quartet-based methods, namely QuartetS, QuartetA, and QuartetM, to reconstruct phylogenetic networks from nucleotide sequences. We tested their performances and compared them with other popular methods on two simulated nucleotide sequence data sets: one generated from a tree topology and the other from a complicated evolutionary history containing three reticulate events. We further validated these methods to two real data sets: a bacterial data set consisting of seven concatenated genes of 36 bacterial species and an influenza data set related to recently emerging H7N9 low pathogenic avian influenza viruses in China. QuartetS, QuartetA, and QuartetM have the potential to accurately reconstruct evolutionary scenarios from simple branching trees to complicated networks containing many reticulate events. These methods could provide insights into the understanding of complicated biological evolutionary processes such as bacterial taxonomy and reassortant of influenza viruses.
Two-Dimensional Impact Reconstruction Method for Rail Defect Inspection
Directory of Open Access Journals (Sweden)
Jie Zhao
2014-01-01
Full Text Available The safety of train operating is seriously menaced by the rail defects, so it is of great significance to inspect rail defects dynamically while the train is operating. This paper presents a two-dimensional impact reconstruction method to realize the on-line inspection of rail defects. The proposed method utilizes preprocessing technology to convert time domain vertical vibration signals acquired by wireless sensor network to space signals. The modern time-frequency analysis method is improved to reconstruct the obtained multisensor information. Then, the image fusion processing technology based on spectrum threshold processing and node color labeling is proposed to reduce the noise, and blank the periodic impact signal caused by rail joints and locomotive running gear. This method can convert the aperiodic impact signals caused by rail defects to partial periodic impact signals, and locate the rail defects. An application indicates that the two-dimensional impact reconstruction method could display the impact caused by rail defects obviously, and is an effective on-line rail defects inspection method.
Raczyński, L.; Moskal, P.; Kowalski, P.; Wiślicki, W.; Bednarski, T.; Białas, P.; Czerwiński, E.; Kapłon, Ł.; Kochanowski, A.; Korcyl, G.; Kowal, J.; Kozik, T.; Krzemień, W.; Kubicz, E.; Molenda, M.; Moskal, I.; Niedźwiecki, Sz.; Pałka, M.; Pawlik-Niedźwiecka, M.; Rudy, Z.; Salabura, P.; Sharma, N. G.; Silarski, M.; Słomski, A.; Smyrski, J.; Strzelecki, A.; Wieczorek, A.; Zieliński, M.; Zoń, N.
2014-11-01
Currently inorganic scintillator detectors are used in all commercial Time of Flight Positron Emission Tomograph (TOF-PET) devices. The J-PET collaboration investigates a possibility of construction of a PET scanner from plastic scintillators which would allow for single bed imaging of the whole human body. This paper describes a novel method of hit-position reconstruction based on sampled signals and an example of an application of the method for a single module with a 30 cm long plastic strip, read out on both ends by Hamamatsu R4998 photomultipliers. The sampling scheme to generate a vector with samples of a PET event waveform with respect to four user-defined amplitudes is introduced. The experimental setup provides irradiation of a chosen position in the plastic scintillator strip with an annihilation gamma quanta of energy 511 keV. The statistical test for a multivariate normal (MVN) distribution of measured vectors at a given position is developed, and it is shown that signals sampled at four thresholds in a voltage domain are approximately normally distributed variables. With the presented method of a vector analysis made out of waveform samples acquired with four thresholds, we obtain a spatial resolution of about 1 cm and a timing resolution of about 80 ps (σ).
International Nuclear Information System (INIS)
Raczyński, L.; Moskal, P.; Kowalski, P.; Wiślicki, W.; Bednarski, T.; Białas, P.; Czerwiński, E.; Kapłon, Ł.; Kochanowski, A.; Korcyl, G.; Kowal, J.; Kozik, T.; Krzemień, W.; Kubicz, E.; Molenda, M.; Moskal, I.; Niedźwiecki, Sz.; Pałka, M.; Pawlik-Niedźwiecka, M.; Rudy, Z.
2014-01-01
Currently inorganic scintillator detectors are used in all commercial Time of Flight Positron Emission Tomograph (TOF-PET) devices. The J-PET collaboration investigates a possibility of construction of a PET scanner from plastic scintillators which would allow for single bed imaging of the whole human body. This paper describes a novel method of hit-position reconstruction based on sampled signals and an example of an application of the method for a single module with a 30 cm long plastic strip, read out on both ends by Hamamatsu R4998 photomultipliers. The sampling scheme to generate a vector with samples of a PET event waveform with respect to four user-defined amplitudes is introduced. The experimental setup provides irradiation of a chosen position in the plastic scintillator strip with an annihilation gamma quanta of energy 511 keV. The statistical test for a multivariate normal (MVN) distribution of measured vectors at a given position is developed, and it is shown that signals sampled at four thresholds in a voltage domain are approximately normally distributed variables. With the presented method of a vector analysis made out of waveform samples acquired with four thresholds, we obtain a spatial resolution of about 1 cm and a timing resolution of about 80 ps (σ)
Energy Technology Data Exchange (ETDEWEB)
Raczyński, L., E-mail: lech.raczynski@ncbj.gov.pl [Świerk Computing Centre, National Centre for Nuclear Research, 05-400 Otwock-Świerk (Poland); Moskal, P. [Faculty of Physics, Astronomy and Applied Computer Science, Jagiellonian University, 30-059 Cracow (Poland); Kowalski, P.; Wiślicki, W. [Świerk Computing Centre, National Centre for Nuclear Research, 05-400 Otwock-Świerk (Poland); Bednarski, T.; Białas, P.; Czerwiński, E. [Faculty of Physics, Astronomy and Applied Computer Science, Jagiellonian University, 30-059 Cracow (Poland); Kapłon, Ł. [Faculty of Physics, Astronomy and Applied Computer Science, Jagiellonian University, 30-059 Cracow (Poland); Institute of Metallurgy and Materials Science of Polish Academy of Sciences, Cracow (Poland); Kochanowski, A. [Faculty of Chemistry, Jagiellonian University, 30-060 Cracow (Poland); Korcyl, G.; Kowal, J.; Kozik, T.; Krzemień, W.; Kubicz, E. [Faculty of Physics, Astronomy and Applied Computer Science, Jagiellonian University, 30-059 Cracow (Poland); Molenda, M. [Faculty of Chemistry, Jagiellonian University, 30-060 Cracow (Poland); Moskal, I.; Niedźwiecki, Sz.; Pałka, M.; Pawlik-Niedźwiecka, M.; Rudy, Z. [Faculty of Physics, Astronomy and Applied Computer Science, Jagiellonian University, 30-059 Cracow (Poland); and others
2014-11-11
Currently inorganic scintillator detectors are used in all commercial Time of Flight Positron Emission Tomograph (TOF-PET) devices. The J-PET collaboration investigates a possibility of construction of a PET scanner from plastic scintillators which would allow for single bed imaging of the whole human body. This paper describes a novel method of hit-position reconstruction based on sampled signals and an example of an application of the method for a single module with a 30 cm long plastic strip, read out on both ends by Hamamatsu R4998 photomultipliers. The sampling scheme to generate a vector with samples of a PET event waveform with respect to four user-defined amplitudes is introduced. The experimental setup provides irradiation of a chosen position in the plastic scintillator strip with an annihilation gamma quanta of energy 511 keV. The statistical test for a multivariate normal (MVN) distribution of measured vectors at a given position is developed, and it is shown that signals sampled at four thresholds in a voltage domain are approximately normally distributed variables. With the presented method of a vector analysis made out of waveform samples acquired with four thresholds, we obtain a spatial resolution of about 1 cm and a timing resolution of about 80 ps (σ)
Centered Differential Waveform Inversion with Minimum Support Regularization
Kazei, Vladimir
2017-05-26
Time-lapse full-waveform inversion has two major challenges. The first one is the reconstruction of a reference model (baseline model for most of approaches). The second is inversion for the time-lapse changes in the parameters. Common model approach is utilizing the information contained in all available data sets to build a better reference model for time lapse inversion. Differential (Double-difference) waveform inversion allows to reduce the artifacts introduced into estimates of time-lapse parameter changes by imperfect inversion for the baseline-reference model. We propose centered differential waveform inversion (CDWI) which combines these two approaches in order to benefit from both of their features. We apply minimum support regularization commonly used with electromagnetic methods of geophysical exploration. We test the CDWI method on synthetic dataset with random noise and show that, with Minimum support regularization, it provides better resolution of velocity changes than with total variation and Tikhonov regularizations in time-lapse full-waveform inversion.
The gridding method for image reconstruction by Fourier transformation
International Nuclear Information System (INIS)
Schomberg, H.; Timmer, J.
1995-01-01
This paper explores a computational method for reconstructing an n-dimensional signal f from a sampled version of its Fourier transform f. The method involves a window function w and proceeds in three steps. First, the convolution g = w * f is computed numerically on a Cartesian grid, using the available samples of f. Then, g = wf is computed via the inverse discrete Fourier transform, and finally f is obtained as g/w. Due to the smoothing effect of the convolution, evaluating w * f is much less error prone than merely interpolating f. The method was originally devised for image reconstruction in radio astronomy, but is actually applicable to a broad range of reconstructive imaging methods, including magnetic resonance imaging and computed tomography. In particular, it provides a fast and accurate alternative to the filtered backprojection. The basic method has several variants with other applications, such as the equidistant resampling of arbitrarily sampled signals or the fast computation of the Radon (Hough) transform
Total variation superiorized conjugate gradient method for image reconstruction
Zibetti, Marcelo V. W.; Lin, Chuan; Herman, Gabor T.
2018-03-01
The conjugate gradient (CG) method is commonly used for the relatively-rapid solution of least squares problems. In image reconstruction, the problem can be ill-posed and also contaminated by noise; due to this, approaches such as regularization should be utilized. Total variation (TV) is a useful regularization penalty, frequently utilized in image reconstruction for generating images with sharp edges. When a non-quadratic norm is selected for regularization, as is the case for TV, then it is no longer possible to use CG. Non-linear CG is an alternative, but it does not share the efficiency that CG shows with least squares and methods such as fast iterative shrinkage-thresholding algorithms (FISTA) are preferred for problems with TV norm. A different approach to including prior information is superiorization. In this paper it is shown that the conjugate gradient method can be superiorized. Five different CG variants are proposed, including preconditioned CG. The CG methods superiorized by the total variation norm are presented and their performance in image reconstruction is demonstrated. It is illustrated that some of the proposed variants of the superiorized CG method can produce reconstructions of superior quality to those produced by FISTA and in less computational time, due to the speed of the original CG for least squares problems. In the Appendix we examine the behavior of one of the superiorized CG methods (we call it S-CG); one of its input parameters is a positive number ɛ. It is proved that, for any given ɛ that is greater than the half-squared-residual for the least squares solution, S-CG terminates in a finite number of steps with an output for which the half-squared-residual is less than or equal to ɛ. Importantly, it is also the case that the output will have a lower value of TV than what would be provided by unsuperiorized CG for the same value ɛ of the half-squared residual.
Programmable waveform controller
International Nuclear Information System (INIS)
Yeh, H.T.
1979-01-01
A programmable waveform controller (PWC) was developed for voltage waveform generation in the laboratory. It is based on the Intel 8080 family of chips. The hardware uses the modular board approach, sharing a common 44-pin bus. The software contains two separate programs: the first generates a single connected linear ramp waveform and is capable of bipolar operation, linear interpolation between input data points, extended time range, and cycling; the second generates four independent square waveforms with variable duration and amplitude
Filtering of SPECT reconstructions made using Bellini's attenuation correction method
International Nuclear Information System (INIS)
Glick, S.J.; Penney, B.C.; King, M.A.
1991-01-01
This paper evaluates a three-dimensional (3D) Wiener filter which is used to restore SPECT reconstructions which were made using Bellini's method of attenuation correction. Its performance is compared to that of several pre-reconstruction filers: the one-dimensional (1D) Butterworth, the two-dimensional (2D) Butterworth, and a 2D Wiener filer. A simulation study is used to compare the four filtering methods. An approximation to a clinical liver spleen study was used as the source distribution and algorithm which accounts for the depth and distance dependent blurring in SPECT was used to compute noise free projections. To study the effect of filtering method on tumor detection accuracy, a 2 cm diameter, cool spherical tumor (40% contrast) was placed at a known, but random, location with the liver. Projection sets for ten tumor locations were computed and five noise realizations of each set were obtained by introducing Poisson noise. The simulated projections were either: filtered with the 1D or 2D Butterworth or the 2D Wiener and then reconstructed using Bellini's intrinsic attenuation correction, or reconstructed first, then filtered with the 3D Wiener. The criteria used for comparison were: normalized mean square error (NMSE), cold spot contrast, and accuracy of tumor detection with an automated numerical method. Results indicate that restorations obtained with 3D Wiener filtering yielded significantly higher lesion contrast and lower NMSE values compared to the other methods of processing. The Wiener restoration filters and the 2D Butterworth all provided similar measures of detectability, which were noticeably higher than that obtained with 1D Butterworth smoothing
Improving automated 3D reconstruction methods via vision metrology
Toschi, Isabella; Nocerino, Erica; Hess, Mona; Menna, Fabio; Sargeant, Ben; MacDonald, Lindsay; Remondino, Fabio; Robson, Stuart
2015-05-01
This paper aims to provide a procedure for improving automated 3D reconstruction methods via vision metrology. The 3D reconstruction problem is generally addressed using two different approaches. On the one hand, vision metrology (VM) systems try to accurately derive 3D coordinates of few sparse object points for industrial measurement and inspection applications; on the other, recent dense image matching (DIM) algorithms are designed to produce dense point clouds for surface representations and analyses. This paper strives to demonstrate a step towards narrowing the gap between traditional VM and DIM approaches. Efforts are therefore intended to (i) test the metric performance of the automated photogrammetric 3D reconstruction procedure, (ii) enhance the accuracy of the final results and (iii) obtain statistical indicators of the quality achieved in the orientation step. VM tools are exploited to integrate their main functionalities (centroid measurement, photogrammetric network adjustment, precision assessment, etc.) into the pipeline of 3D dense reconstruction. Finally, geometric analyses and accuracy evaluations are performed on the raw output of the matching (i.e. the point clouds) by adopting a metrological approach. The latter is based on the use of known geometric shapes and quality parameters derived from VDI/VDE guidelines. Tests are carried out by imaging the calibrated Portable Metric Test Object, designed and built at University College London (UCL), UK. It allows assessment of the performance of the image orientation and matching procedures within a typical industrial scenario, characterised by poor texture and known 3D/2D shapes.
Image reconstruction methods for the PBX-M pinhole camera
International Nuclear Information System (INIS)
Holland, A.; Powell, E.T.; Fonck, R.J.
1990-03-01
This paper describes two methods which have been used to reconstruct the soft x-ray emission profile of the PBX-M tokamak from the projected images recorded by the PBX-M pinhole camera. Both methods must accurately represent the shape of the reconstructed profile while also providing a degree of immunity to noise in the data. The first method is a simple least squares fit to the data. This has the advantage of being fast and small, and thus easily implemented on the PDP-11 computer used to control the video digitizer for the pinhole camera. The second method involves the application of a maximum entropy algorithm to an overdetermined system. This has the advantage of allowing the use of a default profile. This profile contains additional knowledge about the plasma shape which can be obtained from equilibrium fits to the external magnetic measurements. Additionally the reconstruction is guaranteed positive, and the fit to the data can be relaxed by specifying both the amount and distribution of noise in the image. The algorithm described has the advantage of being considerably faster, for an overdetermined system, than the usual Lagrange multiplier approach to finding the maximum entropy solution. 13 refs., 24 figs
A two-way regularization method for MEG source reconstruction
Tian, Tian Siva; Huang, Jianhua Z.; Shen, Haipeng; Li, Zhimin
2012-01-01
The MEG inverse problem refers to the reconstruction of the neural activity of the brain from magnetoencephalography (MEG) measurements. We propose a two-way regularization (TWR) method to solve the MEG inverse problem under the assumptions that only a small number of locations in space are responsible for the measured signals (focality), and each source time course is smooth in time (smoothness). The focality and smoothness of the reconstructed signals are ensured respectively by imposing a sparsity-inducing penalty and a roughness penalty in the data fitting criterion. A two-stage algorithm is developed for fast computation, where a raw estimate of the source time course is obtained in the first stage and then refined in the second stage by the two-way regularization. The proposed method is shown to be effective on both synthetic and real-world examples. © Institute of Mathematical Statistics, 2012.
A two-way regularization method for MEG source reconstruction
Tian, Tian Siva
2012-09-01
The MEG inverse problem refers to the reconstruction of the neural activity of the brain from magnetoencephalography (MEG) measurements. We propose a two-way regularization (TWR) method to solve the MEG inverse problem under the assumptions that only a small number of locations in space are responsible for the measured signals (focality), and each source time course is smooth in time (smoothness). The focality and smoothness of the reconstructed signals are ensured respectively by imposing a sparsity-inducing penalty and a roughness penalty in the data fitting criterion. A two-stage algorithm is developed for fast computation, where a raw estimate of the source time course is obtained in the first stage and then refined in the second stage by the two-way regularization. The proposed method is shown to be effective on both synthetic and real-world examples. © Institute of Mathematical Statistics, 2012.
Czech Academy of Sciences Publication Activity Database
Hrubcová, Pavla; Vavryčuk, Václav; Boušková, Alena; Bohnhoff, M.
2016-01-01
Roč. 121, č. 2 (2016), s. 881-902 ISSN 2169-9313 R&D Projects: GA ČR GA13-08971S; GA MŠk LM2010008; GA ČR(CZ) GAP210/12/1491; GA ČR(CZ) GC16-19751J Institutional support: RVO:67985530 Keywords : crustal structure * waveform stacking * microearthquakes Subject RIV: DC - Siesmology, Volcanology, Earth Structure Impact factor: 3.350, year: 2016
Sediment core and glacial environment reconstruction - a method review
Bakke, Jostein; Paasche, Øyvind
2010-05-01
Alpine glaciers are often located in remote and high-altitude regions of the world, areas that only rarely are covered by instrumental records. Reconstructions of glaciers has therefore proven useful for understanding past climate dynamics on both shorter and longer time-scales. One major drawback with glacier reconstructions based solely on moraine chronologies - by far the most common -, is that due to selective preservation of moraine ridges such records do not exclude the possibility of multiple Holocene glacier advances. This problem is true regardless whether cosmogenic isotopes or lichenometry have been used to date the moraines, or also radiocarbon dating of mega-fossils buried in till or underneath the moraines themselves. To overcome this problem Karlén (1976) initially suggested that glacial erosion and the associated production of rock-flour deposited in downstream lakes could provide a continuous record of glacial fluctuations, hence overcoming the problem of incomplete reconstructions. We want to discuss the methods used to reconstruct past glacier activity based on sediments deposited in distal glacier-fed lakes. By quantifying physical properties of glacial and extra-glacial sediments deposited in catchments, and in downstream lakes and fjords, it is possible to isolate and identify past glacier activity - size and production rate - that subsequently can be used to reconstruct changing environmental shifts and trends. Changes in average sediment evacuation from alpine glaciers are mainly governed by glacier size and the mass turnover gradient, determining the deformation rate at any given time. The amount of solid precipitation (mainly winter accumulation) versus loss due to melting during the ablation-season (mainly summer temperature) determines the mass turnover gradient in either positive or negative direction. A prevailing positive net balance will lead to higher sedimentation rates and vice versa, which in turn can be recorded in downstream
Skin sparing mastectomy: Technique and suggested methods of reconstruction
International Nuclear Information System (INIS)
Farahat, A.M.; Hashim, T.; Soliman, H.O.; Manie, T.M.; Soliman, O.M.
2014-01-01
To demonstrate the feasibility and accessibility of performing adequate mastectomy to extirpate the breast tissue, along with en-block formal axillary dissection performed from within the same incision. We also compared different methods of immediate breast reconstruction used to fill the skin envelope to achieve the best aesthetic results. Methods: 38 patients with breast cancer underwent skin-sparing mastectomy with formal axillary clearance, through a circum-areolar incision. Immediate breast reconstruction was performed using different techniques to fill in the skin envelope. Two reconstruction groups were assigned; group 1: Autologus tissue transfer only (n= 24), and group 2: implant augmentation (n= 14). Autologus tissue transfer: The techniques used included filling in the skin envelope using Extended Latissimus Dorsi flap (18 patients) and Pedicled TRAM flap (6 patients). Augmentation with implants: Subpectoral implants(4 patients), a rounded implant placed under the pectoralis major muscle to augment an LD reconstructed breast. LD pocket (10 patients), an anatomical implant placed over the pectoralis major muscle within a pocket created by the LD flap. No contra-lateral procedure was performed in any of the cases to achieve symmetry. Results: All cases underwent adequate excision of the breast tissue along with en-block complete axillary clearance (when indicated), without the need for an additional axillary incision. Eighteen patients underwent reconstruction using extended LD flaps only, six had TRAM flaps, four had augmentation using implants placed below the pectoralis muscle along with LD flaps, and ten had implants placed within the LD pocket. Breast shape, volume and contour were successfully restored in all patients. Adequate degree of ptosis was achieved, to ensure maximal symmetry. Conclusions: Skin Sparing mastectomy through a circum-areolar incision has proven to be a safe and feasible option for the management of breast cancer in Egyptian
Use of regularized algebraic methods in tomographic reconstruction
International Nuclear Information System (INIS)
Koulibaly, P.M.; Darcourt, J.; Blanc-Ferraud, L.; Migneco, O.; Barlaud, M.
1997-01-01
The algebraic methods are used in emission tomography to facilitate the compensation of attenuation and of Compton scattering. We have tested on a phantom the use of a regularization (a priori introduction of information), as well as the taking into account of spatial resolution variation with the depth (SRVD). Hence, we have compared the performances of the two methods by back-projection filtering (BPF) and of the two algebraic methods (AM) in terms of FWHM (by means of a point source), of the reduction of background noise (σ/m) on the homogeneous part of Jaszczak's phantom and of reconstruction speed (time unit = BPF). The BPF methods make use of a grade filter (maximal resolution, no noise treatment), single or associated with a Hann's low-pass (f c = 0.4), as well as of an attenuation correction. The AM which embody attenuation and scattering corrections are, on one side, the OS EM (Ordered Subsets, partitioning and rearranging of the projection matrix; Expectation Maximization) without regularization or SRVD correction, and, on the other side, the OS MAP EM (Maximum a posteriori), regularized and embodying the SRVD correction. A table is given containing for each used method (grade, Hann, OS EM and OS MAP EM) the values of FWHM, σ/m and time, respectively. One can observe that the OS MAP EM algebraic method allows ameliorating both the resolution, by taking into account the SRVD in the reconstruction process and noise treatment by regularization. In addition, due to the OS technique the reconstruction times are acceptable
A New Method for Coronal Magnetic Field Reconstruction
Yi, Sibaek; Choe, Gwang-Son; Cho, Kyung-Suk; Kim, Kap-Sung
2017-08-01
A precise way of coronal magnetic field reconstruction (extrapolation) is an indispensable tool for understanding of various solar activities. A variety of reconstruction codes have been developed so far and are available to researchers nowadays, but they more or less bear this and that shortcoming. In this paper, a new efficient method for coronal magnetic field reconstruction is presented. The method imposes only the normal components of magnetic field and current density at the bottom boundary to avoid the overspecification of the reconstruction problem, and employs vector potentials to guarantee the divergence-freeness. In our method, the normal component of current density is imposed, not by adjusting the tangential components of A, but by adjusting its normal component. This allows us to avoid a possible numerical instability that on and off arises in codes using A. In real reconstruction problems, the information for the lateral and top boundaries is absent. The arbitrariness of the boundary conditions imposed there as well as various preprocessing brings about the diversity of resulting solutions. We impose the source surface condition at the top boundary to accommodate flux imbalance, which always shows up in magnetograms. To enhance the convergence rate, we equip our code with a gradient-method type accelerator. Our code is tested on two analytical force-free solutions. When the solution is given only at the bottom boundary, our result surpasses competitors in most figures of merits devised by Schrijver et al. (2006). We have also applied our code to a real active region NOAA 11974, in which two M-class flares and a halo CME took place. The EUV observation shows a sudden appearance of an erupting loop before the first flare. Our numerical solutions show that two entwining flux tubes exist before the flare and their shackling is released after the CME with one of them opened up. We suggest that the erupting loop is created by magnetic reconnection between
Reconstruction of Banknote Fragments Based on Keypoint Matching Method.
Gwo, Chih-Ying; Wei, Chia-Hung; Li, Yue; Chiu, Nan-Hsing
2015-07-01
Banknotes may be shredded by a scrap machine, ripped up by hand, or damaged in accidents. This study proposes an image registration method for reconstruction of multiple sheets of banknotes. The proposed method first constructs different scale spaces to identify keypoints in the underlying banknote fragments. Next, the features of those keypoints are extracted to represent their local patterns around keypoints. Then, similarity is computed to find the keypoint pairs between the fragment and the reference banknote. The banknote fragments can determine the coordinate and amend the orientation. Finally, an assembly strategy is proposed to piece multiple sheets of banknote fragments together. Experimental results show that the proposed method causes, on average, a deviation of 0.12457 ± 0.12810° for each fragment while the SIFT method deviates 1.16893 ± 2.35254° on average. The proposed method not only reconstructs the banknotes but also decreases the computing cost. Furthermore, the proposed method can estimate relatively precisely the orientation of the banknote fragments to assemble. © 2015 American Academy of Forensic Sciences.
Track and vertex reconstruction: From classical to adaptive methods
International Nuclear Information System (INIS)
Strandlie, Are; Fruehwirth, Rudolf
2010-01-01
This paper reviews classical and adaptive methods of track and vertex reconstruction in particle physics experiments. Adaptive methods have been developed to meet the experimental challenges at high-energy colliders, in particular, the CERN Large Hadron Collider. They can be characterized by the obliteration of the traditional boundaries between pattern recognition and statistical estimation, by the competition between different hypotheses about what constitutes a track or a vertex, and by a high level of flexibility and robustness achieved with a minimum of assumptions about the data. The theoretical background of some of the adaptive methods is described, and it is shown that there is a close connection between the two main branches of adaptive methods: neural networks and deformable templates, on the one hand, and robust stochastic filters with annealing, on the other hand. As both classical and adaptive methods of track and vertex reconstruction presuppose precise knowledge of the positions of the sensitive detector elements, the paper includes an overview of detector alignment methods and a survey of the alignment strategies employed by past and current experiments.
A Robust Shape Reconstruction Method for Facial Feature Point Detection
Directory of Open Access Journals (Sweden)
Shuqiu Tan
2017-01-01
Full Text Available Facial feature point detection has been receiving great research advances in recent years. Numerous methods have been developed and applied in practical face analysis systems. However, it is still a quite challenging task because of the large variability in expression and gestures and the existence of occlusions in real-world photo shoot. In this paper, we present a robust sparse reconstruction method for the face alignment problems. Instead of a direct regression between the feature space and the shape space, the concept of shape increment reconstruction is introduced. Moreover, a set of coupled overcomplete dictionaries termed the shape increment dictionary and the local appearance dictionary are learned in a regressive manner to select robust features and fit shape increments. Additionally, to make the learned model more generalized, we select the best matched parameter set through extensive validation tests. Experimental results on three public datasets demonstrate that the proposed method achieves a better robustness over the state-of-the-art methods.
Waveform inversion for acoustic VTI media in frequency domain
Wu, Zedong; Alkhalifah, Tariq Ali
2016-01-01
Reflected waveform inversion (RWI) provides a method to reduce the nonlinearity of the standard full waveform inversion (FWI) by inverting for the background model using a single scattered wavefield from an inverted perturbation. However, current
Fast nonconvex nonsmooth minimization methods for image restoration and reconstruction.
Nikolova, Mila; Ng, Michael K; Tam, Chi-Pan
2010-12-01
Nonconvex nonsmooth regularization has advantages over convex regularization for restoring images with neat edges. However, its practical interest used to be limited by the difficulty of the computational stage which requires a nonconvex nonsmooth minimization. In this paper, we deal with nonconvex nonsmooth minimization methods for image restoration and reconstruction. Our theoretical results show that the solution of the nonconvex nonsmooth minimization problem is composed of constant regions surrounded by closed contours and neat edges. The main goal of this paper is to develop fast minimization algorithms to solve the nonconvex nonsmooth minimization problem. Our experimental results show that the effectiveness and efficiency of the proposed algorithms.
Optical wedge method for spatial reconstruction of particle trajectories
International Nuclear Information System (INIS)
Asatiani, T.L.; Alchudzhyan, S.V.; Gazaryan, K.A.; Zograbyan, D.Sh.; Kozliner, L.I.; Krishchyan, V.M.; Martirosyan, G.S.; Ter-Antonyan, S.V.
1978-01-01
A technique of optical wedges allowing the full reconstruction of pictures of events in space is considered. The technique is used for the detection of particle tracks in optical wide-gap spark chambers by photographing in one projection. The optical wedges are refracting right-angle plastic prisms positioned between the camera and the spark chamber so that through them both ends of the track are photographed. A method for calibrating measurements is given, and an estimate made of the accuracy of the determination of the second projection with the help of the optical wedges
An Optimized Method for Terrain Reconstruction Based on Descent Images
Directory of Open Access Journals (Sweden)
Xu Xinchao
2016-02-01
Full Text Available An optimization method is proposed to perform high-accuracy terrain reconstruction of the landing area of Chang’e III. First, feature matching is conducted using geometric model constraints. Then, the initial terrain is obtained and the initial normal vector of each point is solved on the basis of the initial terrain. By changing the vector around the initial normal vector in small steps a set of new vectors is obtained. By combining these vectors with the direction of light and camera, the functions are set up on the basis of a surface reflection model. Then, a series of gray values is derived by solving the equations. The new optimized vector is recorded when the obtained gray value is closest to the corresponding pixel. Finally, the optimized terrain is obtained after iteration of the vector field. Experiments were conducted using the laboratory images and descent images of Chang’e III. The results showed that the performance of the proposed method was better than that of the classical feature matching method. It can provide a reference for terrain reconstruction of the landing area in subsequent moon exploration missions.
Extension of local front reconstruction method with controlled coalescence model
Rajkotwala, A. H.; Mirsandi, H.; Peters, E. A. J. F.; Baltussen, M. W.; van der Geld, C. W. M.; Kuerten, J. G. M.; Kuipers, J. A. M.
2018-02-01
The physics of droplet collisions involves a wide range of length scales. This poses a challenge to accurately simulate such flows with standard fixed grid methods due to their inability to resolve all relevant scales with an affordable number of computational grid cells. A solution is to couple a fixed grid method with subgrid models that account for microscale effects. In this paper, we improved and extended the Local Front Reconstruction Method (LFRM) with a film drainage model of Zang and Law [Phys. Fluids 23, 042102 (2011)]. The new framework is first validated by (near) head-on collision of two equal tetradecane droplets using experimental film drainage times. When the experimental film drainage times are used, the LFRM method is better in predicting the droplet collisions, especially at high velocity in comparison with other fixed grid methods (i.e., the front tracking method and the coupled level set and volume of fluid method). When the film drainage model is invoked, the method shows a good qualitative match with experiments, but a quantitative correspondence of the predicted film drainage time with the experimental drainage time is not obtained indicating that further development of film drainage model is required. However, it can be safely concluded that the LFRM coupled with film drainage models is much better in predicting the collision dynamics than the traditional methods.
Computational methods for three-dimensional microscopy reconstruction
Frank, Joachim
2014-01-01
Approaches to the recovery of three-dimensional information on a biological object, which are often formulated or implemented initially in an intuitive way, are concisely described here based on physical models of the object and the image-formation process. Both three-dimensional electron microscopy and X-ray tomography can be captured in the same mathematical framework, leading to closely-related computational approaches, but the methodologies differ in detail and hence pose different challenges. The editors of this volume, Gabor T. Herman and Joachim Frank, are experts in the respective methodologies and present research at the forefront of biological imaging and structural biology. Computational Methods for Three-Dimensional Microscopy Reconstruction will serve as a useful resource for scholars interested in the development of computational methods for structural biology and cell biology, particularly in the area of 3D imaging and modeling.
Tohyama, Mikio
2015-01-01
What is this sound? What does that sound indicate? These are two questions frequently heard in daily conversation. Sound results from the vibrations of elastic media and in daily life provides informative signals of events happening in the surrounding environment. In interpreting auditory sensations, the human ear seems particularly good at extracting the signal signatures from sound waves. Although exploring auditory processing schemes may be beyond our capabilities, source signature analysis is a very attractive area in which signal-processing schemes can be developed using mathematical expressions. This book is inspired by such processing schemes and is oriented to signature analysis of waveforms. Most of the examples in the book are taken from data of sound and vibrations; however, the methods and theories are mostly formulated using mathematical expressions rather than by acoustical interpretation. This book might therefore be attractive and informative for scientists, engineers, researchers, and graduat...
Li, W.; Cui, Q.; Gao, Y.; Wei, R.; Zhou, Y.; Yu, J.
2017-12-01
The 410 km discontinuity is the upper boundary of the mantle transition zone. Seismic detections on the structure and morphology of the 410 km discontinuity are helpful to understand the compositions of the Earth's interior and the relevant geodynamics. In this paper, we select the broadband P waveforms of an intermediate earthquake that occurred in the Ryukyu subduction zone and retrieved from the China Digital Seismograph Network, and study the fine velocity structure around the 410 km discontinuity by matching the observed triplicated waveforms with the theoretical ones. Our results reveal that (1) the 410 km discontinuity beneath the East China Sea is mostly a sharp boundary with a small-scale uplift of 8-15 km and a gradient boundary up to 20 km in the most southern part, and (2) there exist a low velocity layer atop the 410 km discontinuity with the thickness of 50-62 km and P-wave velocity decrease of 0.5%-1.5%, and (3) a high velocity anomaly with P-wave decrease of 1.0%-3.0% below 440 km. Combining with the previous topographic results in this area, we speculate that the high velocity anomaly is relevant to the stagnancy of the western Pacific slab in the mantle transition zone, the decomposition of phase E in the slab results in the increase of water content, which would cause the uplift of the 410 km discontinuity, and the low velocity layer atop the discontinuity should be related to the partial melting of the mantle peridotite induced by the dehydration of the hydrous minerals.
Iterative reconstruction methods for Thermo-acoustic Tomography
International Nuclear Information System (INIS)
Marinesque, Sebastien
2012-01-01
We define, study and implement various iterative reconstruction methods for Thermo-acoustic Tomography (TAT): the Back and Forth Nudging (BFN), easy to implement and to use, a variational technique (VT) and the Back and Forth SEEK (BF-SEEK), more sophisticated, and a coupling method between Kalman filter (KF) and Time Reversal (TR). A unified formulation is explained for the sequential techniques aforementioned that defines a new class of inverse problem methods: the Back and Forth Filters (BFF). In addition to existence and uniqueness (particularly for backward solutions), we study many frameworks that ensure and characterize the convergence of the algorithms. Thus we give a general theoretical framework for which the BFN is a well-posed problem. Then, in application to TAT, existence and uniqueness of its solutions and geometrical convergence of the algorithm are proved, and an explicit convergence rate and a description of its numerical behaviour are given. Next, theoretical and numerical studies of more general and realistic framework are led, namely different objects, speeds (with or without trapping), various sensor configurations and samplings, attenuated equations or external sources. Then optimal control and best estimate tools are used to characterize the BFN convergence and converging feedbacks for BFF, under observability assumptions. Finally, we compare the most flexible and efficient current techniques (TR and an iterative variant) with our various BFF and the VT in several experiments. Thus, robust, with different possible complexities and flexible, the methods that we propose are very interesting reconstruction techniques, particularly in TAT and when observations are degraded. (author) [fr
A new method for depth profiling reconstruction in confocal microscopy
Esposito, Rosario; Scherillo, Giuseppe; Mensitieri, Giuseppe
2018-05-01
Confocal microscopy is commonly used to reconstruct depth profiles of chemical species in multicomponent systems and to image nuclear and cellular details in human tissues via image intensity measurements of optical sections. However, the performance of this technique is reduced by inherent effects related to wave diffraction phenomena, refractive index mismatch and finite beam spot size. All these effects distort the optical wave and cause an image to be captured of a small volume around the desired illuminated focal point within the specimen rather than an image of the focal point itself. The size of this small volume increases with depth, thus causing a further loss of resolution and distortion of the profile. Recently, we proposed a theoretical model that accounts for the above wave distortion and allows for a correct reconstruction of the depth profiles for homogeneous samples. In this paper, this theoretical approach has been adapted for describing the profiles measured from non-homogeneous distributions of emitters inside the investigated samples. The intensity image is built by summing the intensities collected from each of the emitters planes belonging to the illuminated volume, weighed by the emitters concentration. The true distribution of the emitters concentration is recovered by a new approach that implements this theoretical model in a numerical algorithm based on the Maximum Entropy Method. Comparisons with experimental data and numerical simulations show that this new approach is able to recover the real unknown concentration distribution from experimental profiles with an accuracy better than 3%.
Retrieving rupture history using waveform inversions in time sequence
Yi, L.; Xu, C.; Zhang, X.
2017-12-01
The rupture history of large earthquakes is generally regenerated using the waveform inversion through utilizing seismological waveform records. In the waveform inversion, based on the superposition principle, the rupture process is linearly parameterized. After discretizing the fault plane into sub-faults, the local source time function of each sub-fault is usually parameterized using the multi-time window method, e.g., mutual overlapped triangular functions. Then the forward waveform of each sub-fault is synthesized through convoluting the source time function with its Green function. According to the superposition principle, these forward waveforms generated from the fault plane are summarized in the recorded waveforms after aligning the arrival times. Then the slip history is retrieved using the waveform inversion method after the superposing of all forward waveforms for each correspond seismological waveform records. Apart from the isolation of these forward waveforms generated from each sub-fault, we also realize that these waveforms are gradually and sequentially superimposed in the recorded waveforms. Thus we proposed a idea that the rupture model is possibly detachable in sequent rupture times. According to the constrained waveform length method emphasized in our previous work, the length of inverted waveforms used in the waveform inversion is objectively constrained by the rupture velocity and rise time. And one essential prior condition is the predetermined fault plane that limits the duration of rupture time, which means the waveform inversion is restricted in a pre-set rupture duration time. Therefore, we proposed a strategy to inverse the rupture process sequentially using the progressively shift rupture times as the rupture front expanding in the fault plane. And we have designed a simulation inversion to test the feasibility of the method. Our test result shows the prospect of this idea that requiring furthermore investigation.
Features of the method of large-scale paleolandscape reconstructions
Nizovtsev, Vyacheslav; Erman, Natalia; Graves, Irina
2017-04-01
The method of paleolandscape reconstructions was tested in the key area of the basin of the Central Dubna, located at the junction of the Taldom and Sergiev Posad districts of the Moscow region. A series of maps was created which shows paleoreconstructions of the original (indigenous) living environment of initial settlers during main time periods of the Holocene age and features of human interaction with landscapes at the early stages of economic development of the territory (in the early and middle Holocene). The sequence of these works is as follows. 1. Comprehensive analysis of topographic maps of different scales and aerial and satellite images, stock materials of geological and hydrological surveys and prospecting of peat deposits, archaeological evidence on ancient settlements, palynological and osteological analysis, analysis of complex landscape and archaeological studies. 2. Mapping of factual material and analyzing of the spatial distribution of archaeological sites were performed. 3. Running of a large-scale field landscape mapping (sample areas) and compiling of maps of the modern landscape structure. On this basis, edaphic properties of the main types of natural boundaries were analyzed and their resource base was determined. 4. Reconstruction of lake-river system during the main periods of the Holocene. The boundaries of restored paleolakes were determined based on power and territorial confinement of decay ooze. 5. On the basis of landscape and edaphic method the actual paleolandscape reconstructions for the main periods of the Holocene were performed. During the reconstructions of the original, indigenous flora we relied on data of palynological studies conducted on the studied area or in similar landscape conditions. 6. The result was a retrospective analysis and periodization of the settlement process, economic development and the formation of the first anthropogenically transformed landscape complexes. The reconstruction of the dynamics of the
MO-DE-209-02: Tomosynthesis Reconstruction Methods
International Nuclear Information System (INIS)
Mainprize, J.
2016-01-01
Digital Breast Tomosynthesis (DBT) is rapidly replacing mammography as the standard of care in breast cancer screening and diagnosis. DBT is a form of computed tomography, in which a limited set of projection images are acquired over a small angular range and reconstructed into tomographic data. The angular range varies from 15° to 50° and the number of projections varies between 9 and 25 projections, as determined by the equipment manufacturer. It is equally valid to treat DBT as the digital analog of classical tomography – that is, linear tomography. In fact, the name “tomosynthesis” stands for “synthetic tomography.” DBT shares many common features with classical tomography, including the radiographic appearance, dose, and image quality considerations. As such, both the science and practical physics of DBT systems is a hybrid between computed tomography and classical tomographic methods. In this lecture, we will explore the continuum from radiography to computed tomography to illustrate the characteristics of DBT. This lecture will consist of four presentations that will provide a complete overview of DBT, including a review of the fundamentals of DBT acquisition, a discussion of DBT reconstruction methods, an overview of dosimetry for DBT systems, and summary of the underlying image theory of DBT thereby relating image quality and dose. Learning Objectives: To understand the fundamental principles behind tomosynthesis image acquisition. To understand the fundamentals of tomosynthesis image reconstruction. To learn the determinants of image quality and dose in DBT, including measurement techniques. To learn the image theory underlying tomosynthesis, and the relationship between dose and image quality. ADM is a consultant to, and holds stock in, Real Time Tomography, LLC. ADM receives research support from Hologic Inc., Analogic Inc., and Barco NV.; ADM is a member of the Scientific Advisory Board for Gamma Medica Inc.; A. Maidment, Research Support
MO-DE-209-02: Tomosynthesis Reconstruction Methods
Energy Technology Data Exchange (ETDEWEB)
Mainprize, J. [Sunnybrook Health Sciences Centre, Toronto, ON (Canada)
2016-06-15
Digital Breast Tomosynthesis (DBT) is rapidly replacing mammography as the standard of care in breast cancer screening and diagnosis. DBT is a form of computed tomography, in which a limited set of projection images are acquired over a small angular range and reconstructed into tomographic data. The angular range varies from 15° to 50° and the number of projections varies between 9 and 25 projections, as determined by the equipment manufacturer. It is equally valid to treat DBT as the digital analog of classical tomography – that is, linear tomography. In fact, the name “tomosynthesis” stands for “synthetic tomography.” DBT shares many common features with classical tomography, including the radiographic appearance, dose, and image quality considerations. As such, both the science and practical physics of DBT systems is a hybrid between computed tomography and classical tomographic methods. In this lecture, we will explore the continuum from radiography to computed tomography to illustrate the characteristics of DBT. This lecture will consist of four presentations that will provide a complete overview of DBT, including a review of the fundamentals of DBT acquisition, a discussion of DBT reconstruction methods, an overview of dosimetry for DBT systems, and summary of the underlying image theory of DBT thereby relating image quality and dose. Learning Objectives: To understand the fundamental principles behind tomosynthesis image acquisition. To understand the fundamentals of tomosynthesis image reconstruction. To learn the determinants of image quality and dose in DBT, including measurement techniques. To learn the image theory underlying tomosynthesis, and the relationship between dose and image quality. ADM is a consultant to, and holds stock in, Real Time Tomography, LLC. ADM receives research support from Hologic Inc., Analogic Inc., and Barco NV.; ADM is a member of the Scientific Advisory Board for Gamma Medica Inc.; A. Maidment, Research Support
A novel mechanochemical method for reconstructing the moisture-degraded HKUST-1.
Sun, Xuejiao; Li, Hao; Li, Yujie; Xu, Feng; Xiao, Jing; Xia, Qibin; Li, Yingwei; Li, Zhong
2015-07-11
A novel mechanochemical method was proposed to reconstruct quickly moisture-degraded HKUST-1. The degraded HKUST-1 can be restored within minutes. The reconstructed samples were characterized, and confirmed to have 95% surface area and 92% benzene capacity of the fresh HKUST-1. It is a simple and effective strategy for degraded MOF reconstruction.
Saito, Shigeyoshi; Tanaka, Keiko; Hashido, Takashi
2016-02-01
The purpose of this study was to compare the mean hepatic stiffness values obtained by the application of two different direct inverse problem reconstruction methods to magnetic resonance elastography (MRE). Thirteen healthy men (23.2±2.1 years) and 16 patients with liver diseases (78.9±4.3 years; 12 men and 4 women) were examined for this study using a 3.0 T-MRI. The healthy volunteers underwent three consecutive scans, two 70-Hz waveform and a 50-Hz waveform scans. On the other hand, the patients with liver disease underwent scanning using the 70-Hz waveform only. The MRE data for each subject was processed twice for calculation of the mean hepatic stiffness (Pa), once using the multiscale direct inversion (MSDI) and once using the multimodel direct inversion (MMDI). There were no significant differences in the mean stiffness values among the scans obtained with two 70-Hz and different waveforms. However, the mean stiffness values obtained with the MSDI technique (with mask: 2895.3±255.8 Pa, without mask: 2940.6±265.4 Pa) were larger than those obtained with the MMDI technique (with mask: 2614.0±242.1 Pa, without mask: 2699.2±273.5 Pa). The reproducibility of measurements obtained using the two techniques was high for both the healthy volunteers [intraclass correlation coefficients (ICCs): 0.840-0.953] and the patients (ICC: 0.830-0.995). These results suggest that knowledge of the characteristics of different direct inversion algorithms is important for longitudinal liver stiffness assessments such as the comparison of different scanners and evaluation of the response to fibrosis therapy.
Blackman, Jonathan; Field, Scott; Galley, Chad; Scheel, Mark; Szilagyi, Bela; Tiglio, Manuel
2015-04-01
With the advanced detector era just around the corner, there is a strong need for fast and accurate models of gravitational waveforms from compact binary coalescence. Fast surrogate models can be built out of an accurate but slow waveform model with minimal to no loss in accuracy, but may require a large number of evaluations of the underlying model. This may be prohibitively expensive if the underlying is extremely slow, for example if we wish to build a surrogate for numerical relativity. We examine alternate choices to building surrogate models which allow for a more sparse set of input waveforms. Research supported in part by NSERC.
Efficient parsimony-based methods for phylogenetic network reconstruction.
Jin, Guohua; Nakhleh, Luay; Snir, Sagi; Tuller, Tamir
2007-01-15
Phylogenies--the evolutionary histories of groups of organisms-play a major role in representing relationships among biological entities. Although many biological processes can be effectively modeled as tree-like relationships, others, such as hybrid speciation and horizontal gene transfer (HGT), result in networks, rather than trees, of relationships. Hybrid speciation is a significant evolutionary mechanism in plants, fish and other groups of species. HGT plays a major role in bacterial genome diversification and is a significant mechanism by which bacteria develop resistance to antibiotics. Maximum parsimony is one of the most commonly used criteria for phylogenetic tree inference. Roughly speaking, inference based on this criterion seeks the tree that minimizes the amount of evolution. In 1990, Jotun Hein proposed using this criterion for inferring the evolution of sequences subject to recombination. Preliminary results on small synthetic datasets. Nakhleh et al. (2005) demonstrated the criterion's application to phylogenetic network reconstruction in general and HGT detection in particular. However, the naive algorithms used by the authors are inapplicable to large datasets due to their demanding computational requirements. Further, no rigorous theoretical analysis of computing the criterion was given, nor was it tested on biological data. In the present work we prove that the problem of scoring the parsimony of a phylogenetic network is NP-hard and provide an improved fixed parameter tractable algorithm for it. Further, we devise efficient heuristics for parsimony-based reconstruction of phylogenetic networks. We test our methods on both synthetic and biological data (rbcL gene in bacteria) and obtain very promising results.
Revisiting a model-independent dark energy reconstruction method
Energy Technology Data Exchange (ETDEWEB)
Lazkoz, Ruth; Salzano, Vincenzo; Sendra, Irene [Euskal Herriko Unibertsitatea, Fisika Teorikoaren eta Zientziaren Historia Saila, Zientzia eta Teknologia Fakultatea, Bilbao (Spain)
2012-09-15
In this work we offer new insights into the model-independent dark energy reconstruction method developed by Daly and Djorgovski (Astrophys. J. 597:9, 2003; Astrophys. J. 612:652, 2004; Astrophys. J. 677:1, 2008). Our results, using updated SNeIa and GRBs, allow to highlight some of the intrinsic weaknesses of the method. Conclusions on the main dark energy features as drawn from this method are intimately related to the features of the samples themselves, particularly for GRBs, which are poor performers in this context and cannot be used for cosmological purposes, that is, the state of the art does not allow to regard them on the same quality basis as SNeIa. We find there is a considerable sensitivity to some parameters (window width, overlap, selection criteria) affecting the results. Then, we try to establish what the current redshift range is for which one can make solid predictions on dark energy evolution. Finally, we strengthen the former view that this model is modest in the sense it provides only a picture of the global trend and has to be managed very carefully. But, on the other hand, we believe it offers an interesting complement to other approaches, given that it works on minimal assumptions. (orig.)
Multifunction waveform generator for EM receiver testing
Chen, Kai; Jin, Sheng; Deng, Ming
2018-01-01
In many electromagnetic (EM) methods - such as magnetotelluric, spectral-induced polarization (SIP), time-domain-induced polarization (TDIP), and controlled-source audio magnetotelluric (CSAMT) methods - it is important to evaluate and test the EM receivers during their development stage. To assess the performance of the developed EM receivers, controlled synthetic data that simulate the observed signals in different modes are required. In CSAMT and SIP mode testing, the waveform generator should use the GPS time as the reference for repeating schedule. Based on our testing, the frequency range, frequency precision, and time synchronization of the currently available function waveform generators on the market are deficient. This paper presents a multifunction waveform generator with three waveforms: (1) a wideband, low-noise electromagnetic field signal to be used for magnetotelluric, audio-magnetotelluric, and long-period magnetotelluric studies; (2) a repeating frequency sweep square waveform for CSAMT and SIP studies; and (3) a positive-zero-negative-zero signal that contains primary and secondary fields for TDIP studies. In this paper, we provide the principles of the above three waveforms along with a hardware design for the generator. Furthermore, testing of the EM receiver was conducted with the waveform generator, and the results of the experiment were compared with those calculated from the simulation and theory in the frequency band of interest.
Sparse Frequency Waveform Design for Radar-Embedded Communication
Directory of Open Access Journals (Sweden)
Chaoyun Mai
2016-01-01
Full Text Available According to the Tag application with function of covert communication, a method for sparse frequency waveform design based on radar-embedded communication is proposed. Firstly, sparse frequency waveforms are designed based on power spectral density fitting and quasi-Newton method. Secondly, the eigenvalue decomposition of the sparse frequency waveform sequence is used to get the dominant space. Finally the communication waveforms are designed through the projection of orthogonal pseudorandom vectors in the vertical subspace. Compared with the linear frequency modulation waveform, the sparse frequency waveform can further improve the bandwidth occupation of communication signals, thus achieving higher communication rate. A certain correlation exists between the reciprocally orthogonal communication signals samples and the sparse frequency waveform, which guarantees the low SER (signal error rate and LPI (low probability of intercept. The simulation results verify the effectiveness of this method.
Wavelet-Based Signal Processing of Electromagnetic Pulse Generated Waveforms
National Research Council Canada - National Science Library
Ardolino, Richard S
2007-01-01
This thesis investigated and compared alternative signal processing techniques that used wavelet-based methods instead of traditional frequency domain methods for processing measured electromagnetic pulse (EMP) waveforms...
Benchmarking burnup reconstruction methods for dynamically operated research reactors
Energy Technology Data Exchange (ETDEWEB)
Sternat, Matthew R. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Charlton, William S. [Univ. of Nebraska, Lincoln, NE (United States). National Strategic Research Institute; Nichols, Theodore F. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)
2016-03-01
The burnup of an HEU fueled dynamically operated research reactor, the Oak Ridge Research Reactor, was experimentally reconstructed using two different analytic methodologies and a suite of signature isotopes to evaluate techniques for estimating burnup for research reactor fuel. The methods studied include using individual signature isotopes and the complete mass spectrometry spectrum to recover the sample’s burnup. The individual, or sets of, isotopes include ^{148}Nd, ^{137}Cs+^{137}Ba, ^{139}La, and ^{145}Nd+^{146}Nd. The storage documentation from the analyzed fuel material provided two different measures of burnup: burnup percentage and the total power generated from the assembly in MWd. When normalized to conventional units, these two references differed by 7.8% (395.42GWd/MTHM and 426.27GWd/MTHM) in the resulting burnup for the spent fuel element used in the benchmark. Among all methods being evaluated, the results were within 11.3% of either reference burnup. The results were mixed in closeness to both reference burnups; however, consistent results were achieved from all three experimental samples.
International Nuclear Information System (INIS)
Choi, Joonsung; Kim, Dongchan; Oh, Changhyun; Han, Yeji; Park, HyunWook
2013-01-01
In MRI (magnetic resonance imaging), signal sampling along a radial k-space trajectory is preferred in certain applications due to its distinct advantages such as robustness to motion, and the radial sampling can be beneficial for reconstruction algorithms such as parallel MRI (pMRI) due to the incoherency. For radial MRI, the image is usually reconstructed from projection data using analytic methods such as filtered back-projection or Fourier reconstruction after gridding. However, the quality of the reconstructed image from these analytic methods can be degraded when the number of acquired projection views is insufficient. In this paper, we propose a novel reconstruction method based on the expectation maximization (EM) method, where the EM algorithm is remodeled for MRI so that complex images can be reconstructed. Then, to optimize the proposed method for radial pMRI, a reconstruction method that uses coil sensitivity information of multichannel RF coils is formulated. Experiment results from synthetic and in vivo data show that the proposed method introduces better reconstructed images than the analytic methods, even from highly subsampled data, and provides monotonic convergence properties compared to the conjugate gradient based reconstruction method. (paper)
Iterative Reconstruction Methods for Hybrid Inverse Problems in Impedance Tomography
DEFF Research Database (Denmark)
Hoffmann, Kristoffer; Knudsen, Kim
2014-01-01
For a general formulation of hybrid inverse problems in impedance tomography the Picard and Newton iterative schemes are adapted and four iterative reconstruction algorithms are developed. The general problem formulation includes several existing hybrid imaging modalities such as current density...... impedance imaging, magnetic resonance electrical impedance tomography, and ultrasound modulated electrical impedance tomography, and the unified approach to the reconstruction problem encompasses several algorithms suggested in the literature. The four proposed algorithms are implemented numerically in two...
Evaluation of a 3D point cloud tetrahedral tomographic reconstruction method
Energy Technology Data Exchange (ETDEWEB)
Pereira, N F; Sitek, A, E-mail: nfp4@bwh.harvard.ed, E-mail: asitek@bwh.harvard.ed [Department of Radiology, Brigham and Women' s Hospital-Harvard Medical School Boston, MA (United States)
2010-09-21
Tomographic reconstruction on an irregular grid may be superior to reconstruction on a regular grid. This is achieved through an appropriate choice of the image space model, the selection of an optimal set of points and the use of any available prior information during the reconstruction process. Accordingly, a number of reconstruction-related parameters must be optimized for best performance. In this work, a 3D point cloud tetrahedral mesh reconstruction method is evaluated for quantitative tasks. A linear image model is employed to obtain the reconstruction system matrix and five point generation strategies are studied. The evaluation is performed using the recovery coefficient, as well as voxel- and template-based estimates of bias and variance measures, computed over specific regions in the reconstructed image. A similar analysis is performed for regular grid reconstructions that use voxel basis functions. The maximum likelihood expectation maximization reconstruction algorithm is used. For the tetrahedral reconstructions, of the five point generation methods that are evaluated, three use image priors. For evaluation purposes, an object consisting of overlapping spheres with varying activity is simulated. The exact parallel projection data of this object are obtained analytically using a parallel projector, and multiple Poisson noise realizations of these exact data are generated and reconstructed using the different point generation strategies. The unconstrained nature of point placement in some of the irregular mesh-based reconstruction strategies has superior activity recovery for small, low-contrast image regions. The results show that, with an appropriately generated set of mesh points, the irregular grid reconstruction methods can out-perform reconstructions on a regular grid for mathematical phantoms, in terms of the performance measures evaluated.
Evaluation of a 3D point cloud tetrahedral tomographic reconstruction method
Pereira, N. F.; Sitek, A.
2010-09-01
Tomographic reconstruction on an irregular grid may be superior to reconstruction on a regular grid. This is achieved through an appropriate choice of the image space model, the selection of an optimal set of points and the use of any available prior information during the reconstruction process. Accordingly, a number of reconstruction-related parameters must be optimized for best performance. In this work, a 3D point cloud tetrahedral mesh reconstruction method is evaluated for quantitative tasks. A linear image model is employed to obtain the reconstruction system matrix and five point generation strategies are studied. The evaluation is performed using the recovery coefficient, as well as voxel- and template-based estimates of bias and variance measures, computed over specific regions in the reconstructed image. A similar analysis is performed for regular grid reconstructions that use voxel basis functions. The maximum likelihood expectation maximization reconstruction algorithm is used. For the tetrahedral reconstructions, of the five point generation methods that are evaluated, three use image priors. For evaluation purposes, an object consisting of overlapping spheres with varying activity is simulated. The exact parallel projection data of this object are obtained analytically using a parallel projector, and multiple Poisson noise realizations of these exact data are generated and reconstructed using the different point generation strategies. The unconstrained nature of point placement in some of the irregular mesh-based reconstruction strategies has superior activity recovery for small, low-contrast image regions. The results show that, with an appropriately generated set of mesh points, the irregular grid reconstruction methods can out-perform reconstructions on a regular grid for mathematical phantoms, in terms of the performance measures evaluated.
Evaluation of a 3D point cloud tetrahedral tomographic reconstruction method
International Nuclear Information System (INIS)
Pereira, N F; Sitek, A
2010-01-01
Tomographic reconstruction on an irregular grid may be superior to reconstruction on a regular grid. This is achieved through an appropriate choice of the image space model, the selection of an optimal set of points and the use of any available prior information during the reconstruction process. Accordingly, a number of reconstruction-related parameters must be optimized for best performance. In this work, a 3D point cloud tetrahedral mesh reconstruction method is evaluated for quantitative tasks. A linear image model is employed to obtain the reconstruction system matrix and five point generation strategies are studied. The evaluation is performed using the recovery coefficient, as well as voxel- and template-based estimates of bias and variance measures, computed over specific regions in the reconstructed image. A similar analysis is performed for regular grid reconstructions that use voxel basis functions. The maximum likelihood expectation maximization reconstruction algorithm is used. For the tetrahedral reconstructions, of the five point generation methods that are evaluated, three use image priors. For evaluation purposes, an object consisting of overlapping spheres with varying activity is simulated. The exact parallel projection data of this object are obtained analytically using a parallel projector, and multiple Poisson noise realizations of these exact data are generated and reconstructed using the different point generation strategies. The unconstrained nature of point placement in some of the irregular mesh-based reconstruction strategies has superior activity recovery for small, low-contrast image regions. The results show that, with an appropriately generated set of mesh points, the irregular grid reconstruction methods can out-perform reconstructions on a regular grid for mathematical phantoms, in terms of the performance measures evaluated.
Dynamic Error Analysis Method for Vibration Shape Reconstruction of Smart FBG Plate Structure
Directory of Open Access Journals (Sweden)
Hesheng Zhang
2016-01-01
Full Text Available Shape reconstruction of aerospace plate structure is an important issue for safe operation of aerospace vehicles. One way to achieve such reconstruction is by constructing smart fiber Bragg grating (FBG plate structure with discrete distributed FBG sensor arrays using reconstruction algorithms in which error analysis of reconstruction algorithm is a key link. Considering that traditional error analysis methods can only deal with static data, a new dynamic data error analysis method are proposed based on LMS algorithm for shape reconstruction of smart FBG plate structure. Firstly, smart FBG structure and orthogonal curved network based reconstruction method is introduced. Then, a dynamic error analysis model is proposed for dynamic reconstruction error analysis. Thirdly, the parameter identification is done for the proposed dynamic error analysis model based on least mean square (LMS algorithm. Finally, an experimental verification platform is constructed and experimental dynamic reconstruction analysis is done. Experimental results show that the dynamic characteristics of the reconstruction performance for plate structure can be obtained accurately based on the proposed dynamic error analysis method. The proposed method can also be used for other data acquisition systems and data processing systems as a general error analysis method.
Zhang, Dongliang
2013-01-01
To increase the illumination of the subsurface and to eliminate the dependency of FWI on the source wavelet, we propose multiples waveform inversion (MWI) that transforms each hydrophone into a virtual point source with a time history equal to that of the recorded data. These virtual sources are used to numerically generate downgoing wavefields that are correlated with the backprojected surface-related multiples to give the migration image. Since the recorded data are treated as the virtual sources, knowledge of the source wavelet is not required, and the subsurface illumination is greatly enhanced because the entire free surface acts as an extended source compared to the radiation pattern of a traditional point source. Numerical tests on the Marmousi2 model show that the convergence rate and the spatial resolution of MWI is, respectively, faster and more accurate then FWI. The potential pitfall with this method is that the multiples undergo more than one roundtrip to the surface, which increases attenuation and reduces spatial resolution. This can lead to less resolved tomograms compared to conventional FWI. The possible solution is to combine both FWI and MWI in inverting for the subsurface velocity distribution.
Linogram and other direct Fourier methods for tomographic reconstruction
International Nuclear Information System (INIS)
Magnusson, M.
1993-01-01
Computed tomography (CT) is an outstanding break-through in technology as well as in medical diagnostics. The aim in CT is to produce an image with good image quality as fast as possible. The two most well-known methods for CT-reconstruction are the Direct Fourier Method (DFM) and the Filtered Backprojection Method (FBM). This thesis is divided in four parts. In part 1 we give an introduction to the principles of CT as well as a basic treatise of the DFM and the FBM. We also present a short CT history as well as brief descriptions of techniques related to X-ray CT such as SPECT, PET and MRI. Part 2 is devoted to the Linogram Method (LM). The method is presented both intuitively and rigorously and a complete algorithm is given for the discrete case. The implementation has been done using the SNARK subroutine package with various parameters and phantom images. For comparison, the FBM has been applied to the same input projection data. The experiments show that the LM gives almost the same image quality, pixel for pixel, as the FBM. In part 3 we show that the LM is a close relative to the common DFM. We give a new extended explanation of artifacts in DFMs. The source of the problem is twofold: interpolation errors and circular convolution. By identifying the second effect as distinct from the first one, we are able to suggest and verify remedies for the DFM which brings the image quality on par with FBM. One of these remedies is the LM. A slight difficulty with both LM and ordinary DFM techniques is that they require a special projection geometry, whereas most commercial CT-scanners provides fan beam projection data. However, the wanted linogram projection data can be interpolated from fan beam projection data. In part 4, we show that it is possible to obtain good image quality with both LM and DFM techniques using fan beam projection indata. The thesis concludes that the computation cost can be essentially decreased by using LM or other DFMs instead of FBM
A simulation of portable PET with a new geometric image reconstruction method
Energy Technology Data Exchange (ETDEWEB)
Kawatsu, Shoji [Department of Radiology, Kyoritu General Hospital, 4-33 Go-bancho, Atsuta-ku, Nagoya-shi, Aichi 456 8611 (Japan): Department of Brain Science and Molecular Imaging, National Institute for Longevity Sciences, National Center for Geriatrics and Gerontology, 36-3, Gengo Moriaka-cho, Obu-shi, Aichi 474 8522 (Japan)]. E-mail: b6rgw@fantasy.plala.or.jp; Ushiroya, Noboru [Department of General Education, Wakayama National College of Technology, 77 Noshima, Nada-cho, Gobo-shi, Wakayama 644 0023 (Japan)
2006-12-20
A new method is proposed for three-dimensional positron emission tomography image reconstruction. The method uses the elementary geometric property of line of response whereby two lines of response, which originate from radioactive isotopes in the same position, lie within a few millimeters distance of each other. The method differs from the filtered back projection method and the iterative reconstruction method. The method is applied to a simulation of portable positron emission tomography.
Tanioka, Yuichiro
2017-04-01
After tsunami disaster due to the 2011 Tohoku-oki great earthquake, improvement of the tsunami forecast has been an urgent issue in Japan. National Institute of Disaster Prevention is installing a cable network system of earthquake and tsunami observation (S-NET) at the ocean bottom along the Japan and Kurile trench. This cable system includes 125 pressure sensors (tsunami meters) which are separated by 30 km. Along the Nankai trough, JAMSTEC already installed and operated the cable network system of seismometers and pressure sensors (DONET and DONET2). Those systems are the most dense observation network systems on top of source areas of great underthrust earthquakes in the world. Real-time tsunami forecast has depended on estimation of earthquake parameters, such as epicenter, depth, and magnitude of earthquakes. Recently, tsunami forecast method has been developed using the estimation of tsunami source from tsunami waveforms observed at the ocean bottom pressure sensors. However, when we have many pressure sensors separated by 30km on top of the source area, we do not need to estimate the tsunami source or earthquake source to compute tsunami. Instead, we can initiate a tsunami simulation from those dense tsunami observed data. Observed tsunami height differences with a time interval at the ocean bottom pressure sensors separated by 30 km were used to estimate tsunami height distribution at a particular time. In our new method, tsunami numerical simulation was initiated from those estimated tsunami height distribution. In this paper, the above method is improved and applied for the tsunami generated by the 2011 Tohoku-oki great earthquake. Tsunami source model of the 2011 Tohoku-oki great earthquake estimated using observed tsunami waveforms, coseimic deformation observed by GPS and ocean bottom sensors by Gusman et al. (2012) is used in this study. The ocean surface deformation is computed from the source model and used as an initial condition of tsunami
Blockwise conjugate gradient methods for image reconstruction in volumetric CT.
Qiu, W; Titley-Peloquin, D; Soleimani, M
2012-11-01
Cone beam computed tomography (CBCT) enables volumetric image reconstruction from 2D projection data and plays an important role in image guided radiation therapy (IGRT). Filtered back projection is still the most frequently used algorithm in applications. The algorithm discretizes the scanning process (forward projection) into a system of linear equations, which must then be solved to recover images from measured projection data. The conjugate gradients (CG) algorithm and its variants can be used to solve (possibly regularized) linear systems of equations Ax=b and linear least squares problems minx∥b-Ax∥2, especially when the matrix A is very large and sparse. Their applications can be found in a general CT context, but in tomography problems (e.g. CBCT reconstruction) they have not widely been used. Hence, CBCT reconstruction using the CG-type algorithm LSQR was implemented and studied in this paper. In CBCT reconstruction, the main computational challenge is that the matrix A usually is very large, and storing it in full requires an amount of memory well beyond the reach of commodity computers. Because of these memory capacity constraints, only a small fraction of the weighting matrix A is typically used, leading to a poor reconstruction. In this paper, to overcome this difficulty, the matrix A is partitioned and stored blockwise, and blockwise matrix-vector multiplications are implemented within LSQR. This implementation allows us to use the full weighting matrix A for CBCT reconstruction without further enhancing computer standards. Tikhonov regularization can also be implemented in this fashion, and can produce significant improvement in the reconstructed images. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
Energy Technology Data Exchange (ETDEWEB)
Diogenes, Alysson N.; Santos, Luis O.E. dos; Fernandes, Celso P. [Universidade Federal de Santa Catarina (UFSC), Florianopolis, SC (Brazil); Appoloni, Carlos R. [Universidade Estadual de Londrina (UEL), PR (Brazil)
2008-07-01
The reservoir rocks physical properties are usually obtained in laboratory, through standard experiments. These experiments are often very expensive and time-consuming. Hence, the digital image analysis techniques are a very fast and low cost methodology for physical properties prediction, knowing only geometrical parameters measured from the rock microstructure thin sections. This research analyzes two methods for porous media reconstruction using the relaxation method simulated annealing. Using geometrical parameters measured from rock thin sections, it is possible to construct a three-dimensional (3D) model of the microstructure. We assume statistical homogeneity and isotropy and the 3D model maintains porosity spatial correlation, chord size distribution and d 3-4 distance transform distribution for a pixel-based reconstruction and spatial correlation for an object-based reconstruction. The 2D and 3D preliminary results are compared with microstructures reconstructed by truncated Gaussian methods. As this research is in its beginning, only the 2D results will be presented. (author)
Exploring Normalization and Network Reconstruction Methods using In Silico and In Vivo Models
Abstract: Lessons learned from the recent DREAM competitions include: The search for the best network reconstruction method continues, and we need more complete datasets with ground truth from more complex organisms. It has become obvious that the network reconstruction methods t...
Accelerated gradient methods for total-variation-based CT image reconstruction
Energy Technology Data Exchange (ETDEWEB)
Joergensen, Jakob H.; Hansen, Per Christian [Technical Univ. of Denmark, Lyngby (Denmark). Dept. of Informatics and Mathematical Modeling; Jensen, Tobias L.; Jensen, Soeren H. [Aalborg Univ. (Denmark). Dept. of Electronic Systems; Sidky, Emil Y.; Pan, Xiaochuan [Chicago Univ., Chicago, IL (United States). Dept. of Radiology
2011-07-01
Total-variation (TV)-based CT image reconstruction has shown experimentally to be capable of producing accurate reconstructions from sparse-view data. In particular TV-based reconstruction is well suited for images with piecewise nearly constant regions. Computationally, however, TV-based reconstruction is demanding, especially for 3D imaging, and the reconstruction from clinical data sets is far from being close to real-time. This is undesirable from a clinical perspective, and thus there is an incentive to accelerate the solution of the underlying optimization problem. The TV reconstruction can in principle be found by any optimization method, but in practice the large scale of the systems arising in CT image reconstruction preclude the use of memory-intensive methods such as Newton's method. The simple gradient method has much lower memory requirements, but exhibits prohibitively slow convergence. In the present work we address the question of how to reduce the number of gradient method iterations needed to achieve a high-accuracy TV reconstruction. We consider the use of two accelerated gradient-based methods, GPBB and UPN, to solve the 3D-TV minimization problem in CT image reconstruction. The former incorporates several heuristics from the optimization literature such as Barzilai-Borwein (BB) step size selection and nonmonotone line search. The latter uses a cleverly chosen sequence of auxiliary points to achieve a better convergence rate. The methods are memory efficient and equipped with a stopping criterion to ensure that the TV reconstruction has indeed been found. An implementation of the methods (in C with interface to Matlab) is available for download from http://www2.imm.dtu.dk/~pch/TVReg/. We compare the proposed methods with the standard gradient method, applied to a 3D test problem with synthetic few-view data. We find experimentally that for realistic parameters the proposed methods significantly outperform the standard gradient method. (orig.)
A combined reconstruction-classification method for diffuse optical tomography
Energy Technology Data Exchange (ETDEWEB)
Hiltunen, P [Department of Biomedical Engineering and Computational Science, Helsinki University of Technology, PO Box 3310, FI-02015 TKK (Finland); Prince, S J D; Arridge, S [Department of Computer Science, University College London, Gower Street London, WC1E 6B (United Kingdom)], E-mail: petri.hiltunen@tkk.fi, E-mail: s.prince@cs.ucl.ac.uk, E-mail: s.arridge@cs.ucl.ac.uk
2009-11-07
We present a combined classification and reconstruction algorithm for diffuse optical tomography (DOT). DOT is a nonlinear ill-posed inverse problem. Therefore, some regularization is needed. We present a mixture of Gaussians prior, which regularizes the DOT reconstruction step. During each iteration, the parameters of a mixture model are estimated. These associate each reconstructed pixel with one of several classes based on the current estimate of the optical parameters. This classification is exploited to form a new prior distribution to regularize the reconstruction step and update the optical parameters. The algorithm can be described as an iteration between an optimization scheme with zeroth-order variable mean and variance Tikhonov regularization and an expectation-maximization scheme for estimation of the model parameters. We describe the algorithm in a general Bayesian framework. Results from simulated test cases and phantom measurements show that the algorithm enhances the contrast of the reconstructed images with good spatial accuracy. The probabilistic classifications of each image contain only a few misclassified pixels.
New method to analyze internal disruptions with tomographic reconstructions
Energy Technology Data Exchange (ETDEWEB)
Tanzi, C.P. [EURATOM-FOM Association, FOM-Instituut voor Plasmafysica Rijnhuizen, P.O. BOX 1207, 3430 BE Nieuwegein (The Netherlands); de Blank, H.J. [Max-Planck-Institut fuer Plasmaphysik, EURATOM-IPP Association, 85740 Garching (Germany)
1997-03-01
Sawtooth crashes have been investigated on the Rijnhuizen Tokamak Project (RTP) [N. J. Lopes Cardozo {ital et al.}, {ital Proceedings of the 14th International Conference on Plasma Physics and Controlled Nuclear Fusion Research}, W{umlt u}rzburg, 1992 (International Atomic Energy Agency, Vienna, 1993), Vol. 1, p. 271]. Internal disruptions in tokamak plasmas often exhibit an m=1 poloidal mode structure prior to the collapse which can be clearly identified by means of multicamera soft x-ray diagnostics. In this paper tomographic reconstructions of such m=1 modes are analyzed with a new method, based on magnetohydrodynamic (MHD) invariants computed from the two-dimensional emissivity profiles, which quantifies the amount of profile flattening not only after the crash but also during the precursor oscillations. The results are interpreted by comparing them with two models which simulate the measurements of the m=1 redistribution of soft x-ray emissivity prior to the sawtooth crash. One model is based on the magnetic reconnection model of Kadomtsev. The other involves ideal MHD motion only. In cases where differences in magnetic topology between the two models cannot be seen in the tomograms, the analysis of profile flattening has an advantage. The analysis shows that in RTP the clearly observed m=1 displacement of some sawteeth requires the presence of convective ideal MHD motion, whereas other precursors are consistent with magnetic reconnection of up to 75{percent} of the magnetic flux within the q=1 surface. The possibility of ideal interchange combined with enhanced cross-field transport is not excluded. {copyright} {ital 1997 American Institute of Physics.}
New method to analyze internal disruptions with tomographic reconstructions
International Nuclear Information System (INIS)
Tanzi, C.P.; de Blank, H.J.
1997-01-01
Sawtooth crashes have been investigated on the Rijnhuizen Tokamak Project (RTP) [N. J. Lopes Cardozo et al., Proceedings of the 14th International Conference on Plasma Physics and Controlled Nuclear Fusion Research, Wuerzburg, 1992 (International Atomic Energy Agency, Vienna, 1993), Vol. 1, p. 271]. Internal disruptions in tokamak plasmas often exhibit an m=1 poloidal mode structure prior to the collapse which can be clearly identified by means of multicamera soft x-ray diagnostics. In this paper tomographic reconstructions of such m=1 modes are analyzed with a new method, based on magnetohydrodynamic (MHD) invariants computed from the two-dimensional emissivity profiles, which quantifies the amount of profile flattening not only after the crash but also during the precursor oscillations. The results are interpreted by comparing them with two models which simulate the measurements of the m=1 redistribution of soft x-ray emissivity prior to the sawtooth crash. One model is based on the magnetic reconnection model of Kadomtsev. The other involves ideal MHD motion only. In cases where differences in magnetic topology between the two models cannot be seen in the tomograms, the analysis of profile flattening has an advantage. The analysis shows that in RTP the clearly observed m=1 displacement of some sawteeth requires the presence of convective ideal MHD motion, whereas other precursors are consistent with magnetic reconnection of up to 75% of the magnetic flux within the q=1 surface. The possibility of ideal interchange combined with enhanced cross-field transport is not excluded. copyright 1997 American Institute of Physics
Workflows for Full Waveform Inversions
Boehm, Christian; Krischer, Lion; Afanasiev, Michael; van Driel, Martin; May, Dave A.; Rietmann, Max; Fichtner, Andreas
2017-04-01
Despite many theoretical advances and the increasing availability of high-performance computing clusters, full seismic waveform inversions still face considerable challenges regarding data and workflow management. While the community has access to solvers which can harness modern heterogeneous computing architectures, the computational bottleneck has fallen to these often manpower-bounded issues that need to be overcome to facilitate further progress. Modern inversions involve huge amounts of data and require a tight integration between numerical PDE solvers, data acquisition and processing systems, nonlinear optimization libraries, and job orchestration frameworks. To this end we created a set of libraries and applications revolving around Salvus (http://salvus.io), a novel software package designed to solve large-scale full waveform inverse problems. This presentation focuses on solving passive source seismic full waveform inversions from local to global scales with Salvus. We discuss (i) design choices for the aforementioned components required for full waveform modeling and inversion, (ii) their implementation in the Salvus framework, and (iii) how it is all tied together by a usable workflow system. We combine state-of-the-art algorithms ranging from high-order finite-element solutions of the wave equation to quasi-Newton optimization algorithms using trust-region methods that can handle inexact derivatives. All is steered by an automated interactive graph-based workflow framework capable of orchestrating all necessary pieces. This naturally facilitates the creation of new Earth models and hopefully sparks new scientific insights. Additionally, and even more importantly, it enhances reproducibility and reliability of the final results.
Energy Technology Data Exchange (ETDEWEB)
Van Eyndhoven, G., E-mail: geert.vaneyndhoven@uantwerpen.be [iMinds-Vision Lab, University of Antwerp, Universiteitsplein 1, B-2610 Wilrijk (Belgium); Kurttepeli, M. [EMAT, University of Antwerp, Groenenborgerlaan 171, B-2020 Antwerp (Belgium); Van Oers, C.J.; Cool, P. [Laboratory of Adsorption and Catalysis, University of Antwerp, Universiteitsplein 1, B-2610 Wilrijk (Belgium); Bals, S. [EMAT, University of Antwerp, Groenenborgerlaan 171, B-2020 Antwerp (Belgium); Batenburg, K.J. [iMinds-Vision Lab, University of Antwerp, Universiteitsplein 1, B-2610 Wilrijk (Belgium); Centrum Wiskunde and Informatica, Science Park 123, NL-1090 GB Amsterdam (Netherlands); Mathematical Institute, Universiteit Leiden, Niels Bohrweg 1, NL-2333 CA Leiden (Netherlands); Sijbers, J. [iMinds-Vision Lab, University of Antwerp, Universiteitsplein 1, B-2610 Wilrijk (Belgium)
2015-01-15
Electron tomography is currently a versatile tool to investigate the connection between the structure and properties of nanomaterials. However, a quantitative interpretation of electron tomography results is still far from straightforward. Especially accurate quantification of pore-space is hampered by artifacts introduced in all steps of the processing chain, i.e., acquisition, reconstruction, segmentation and quantification. Furthermore, most common approaches require subjective manual user input. In this paper, the PORES algorithm “POre REconstruction and Segmentation” is introduced; it is a tailor-made, integral approach, for the reconstruction, segmentation, and quantification of porous nanomaterials. The PORES processing chain starts by calculating a reconstruction with a nanoporous-specific reconstruction algorithm: the Simultaneous Update of Pore Pixels by iterative REconstruction and Simple Segmentation algorithm (SUPPRESS). It classifies the interior region to the pores during reconstruction, while reconstructing the remaining region by reducing the error with respect to the acquired electron microscopy data. The SUPPRESS reconstruction can be directly plugged into the remaining processing chain of the PORES algorithm, resulting in accurate individual pore quantification and full sample pore statistics. The proposed approach was extensively validated on both simulated and experimental data, indicating its ability to generate accurate statistics of nanoporous materials. - Highlights: • An electron tomography reconstruction/segmentation method for nanoporous materials. • The method exploits the porous nature of the scanned material. • Validated extensively on both simulation and real data experiments. • Results in increased image resolution and improved porosity quantification.
Jiang, Hongzhen; Liu, Xu; Liu, Yong; Li, Dong; Chen, Zhu; Zheng, Fanglan; Yu, Deqiang
2017-10-01
An effective approach for reconstructing on-axis lensless Fourier Transform digital hologram by using the screen division method is proposed. Firstly, the on-axis Fourier Transform digital hologram is divided into sub-holograms. Then the reconstruction result of every sub-hologram is obtained according to the position of corresponding sub-hologram in the hologram reconstruction plane with Fourier transform operation. Finally, the reconstruction image of on-axis Fourier Transform digital hologram can be acquired by the superposition of the reconstruction result of sub-holograms. Compared with the traditional reconstruction method with the phase shifting technology, in which multiple digital holograms are required to record for obtaining the reconstruction image, this method can obtain the reconstruction image with only one digital hologram and therefore greatly simplify the recording and reconstruction process of on-axis lensless Fourier Transform digital holography. The effectiveness of the proposed method is well proved with the experimental results and it will have potential application foreground in the holographic measurement and display field.
Reconstruction method for data protection in telemedicine systems
Buldakova, T. I.; Suyatinov, S. I.
2015-03-01
In the report the approach to protection of transmitted data by creation of pair symmetric keys for the sensor and the receiver is offered. Since biosignals are unique for each person, their corresponding processing allows to receive necessary information for creation of cryptographic keys. Processing is based on reconstruction of the mathematical model generating time series that are diagnostically equivalent to initial biosignals. Information about the model is transmitted to the receiver, where the restoration of physiological time series is performed using the reconstructed model. Thus, information about structure and parameters of biosystem model received in the reconstruction process can be used not only for its diagnostics, but also for protection of transmitted data in telemedicine complexes.
Compressive full waveform lidar
Yang, Weiyi; Ke, Jun
2017-05-01
To avoid high bandwidth detector, fast speed A/D converter, and large size memory disk, a compressive full waveform LIDAR system, which uses a temporally modulated laser instead of a pulsed laser, is studied in this paper. Full waveform data from NEON (National Ecological Observatory Network) are used. Random binary patterns are used to modulate the source. To achieve 0.15 m ranging resolution, a 100 MSPS A/D converter is assumed to make measurements. SPIRAL algorithm with canonical basis is employed when Poisson noise is considered in the low illuminated condition.
Seismic waveform classification using deep learning
Kong, Q.; Allen, R. M.
2017-12-01
MyShake is a global smartphone seismic network that harnesses the power of crowdsourcing. It has an Artificial Neural Network (ANN) algorithm running on the phone to distinguish earthquake motion from human activities recorded by the accelerometer on board. Once the ANN detects earthquake-like motion, it sends a 5-min chunk of acceleration data back to the server for further analysis. The time-series data collected contains both earthquake data and human activity data that the ANN confused. In this presentation, we will show the Convolutional Neural Network (CNN) we built under the umbrella of supervised learning to find out the earthquake waveform. The waveforms of the recorded motion could treat easily as images, and by taking the advantage of the power of CNN processing the images, we achieved very high successful rate to select the earthquake waveforms out. Since there are many non-earthquake waveforms than the earthquake waveforms, we also built an anomaly detection algorithm using the CNN. Both these two methods can be easily extended to other waveform classification problems.
Directory of Open Access Journals (Sweden)
Wouter D Weeda
Full Text Available The amplitude and latency of single-trial EEG/MEG signals may provide valuable information concerning human brain functioning. In this article we propose a new method to reliably estimate single-trial amplitude and latency of EEG/MEG signals. The advantages of the method are fourfold. First, no a-priori specified template function is required. Second, the method allows for multiple signals that may vary independently in amplitude and/or latency. Third, the method is less sensitive to noise as it models data with a parsimonious set of basis functions. Finally, the method is very fast since it is based on an iterative linear least squares algorithm. A simulation study shows that the method yields reliable estimates under different levels of latency variation and signal-to-noise ratioÕs. Furthermore, it shows that the existence of multiple signals can be correctly determined. An application to empirical data from a choice reaction time study indicates that the method describes these data accurately.
Extension of frequency-based dissimilarity for retrieving similar plasma waveforms
International Nuclear Information System (INIS)
Hochin, Teruhisa; Koyama, Katsumasa; Nakanishi, Hideya; Kojima, Mamoru
2008-01-01
Some computer-aided assistance in finding the waveforms similar to a waveform has become indispensable for accelerating data analysis in the plasma experiments. For the slowly-varying waveforms and those having time-sectional oscillation patterns, the methods using the Fourier series coefficients of waveforms in calculating the dissimilarity have successfully improved the performance in retrieving similar waveforms. This paper treats severely-varying waveforms, and proposes two extensions to the dissimilarity of waveforms. The first extension is to capture the difference of the importance of the Fourier series coefficients of waveforms against frequency. The second extension is to consider the outlines of waveforms. The correctness of the extended dissimilarity is experimentally evaluated by using the metrics used in evaluating that of the information retrieval, i.e. precision and recall. The experimental results show that the extended dissimilarity could improve the correctness of the similarity retrieval of plasma waveforms
Simple method of modelling of digital holograms registering and their optical reconstruction
International Nuclear Information System (INIS)
Evtikhiev, N N; Cheremkhin, P A; Krasnov, V V; Kurbatova, E A; Molodtsov, D Yu; Porshneva, L A; Rodin, V G
2016-01-01
The technique of modeling of digital hologram recording and image optical reconstruction from these holograms is described. The method takes into account characteristics of the object, digital camera's photosensor and spatial light modulator used for digital holograms displaying. Using the technique, equipment can be chosen for experiments for obtaining good reconstruction quality and/or holograms diffraction efficiency. Numerical experiments were conducted. (paper)
A Method for Interactive 3D Reconstruction of Piecewise Planar Objects from Single Images
Sturm , Peter; Maybank , Steve
1999-01-01
International audience; We present an approach for 3D reconstruction of objects from a single image. Obviously, constraints on the 3D structure are needed to perform this task. Our approach is based on user-provided coplanarity, perpendicularity and parallelism constraints. These are used to calibrate the image and perform 3D reconstruction. The method is described in detail and results are provided.
Skin sparing mastectomy: Technique and suggested methods of reconstruction
Directory of Open Access Journals (Sweden)
Ahmed M. Farahat
2014-09-01
Conclusions: Skin Sparing mastectomy through a circum-areolar incision has proven to be a safe and feasible option for the management of breast cancer in Egyptian women, offering them adequate oncologic control and optimum cosmetic outcome through preservation of the skin envelope of the breast when ever indicated. Our patients can benefit from safe surgery and have good cosmetic outcomeby applying different reconstructive techniques.
Deep Learning Methods for Particle Reconstruction in the HGCal
Arzi, Ofir
2017-01-01
The High Granularity end-cap Calorimeter is part of the phase-2 CMS upgrade (see Figure \\ref{fig:cms})\\cite{Contardo:2020886}. It's goal it to provide measurements of high resolution in time, space and energy. Given such measurements, the purpose of this work is to discuss the use of Deep Neural Networks for the task of particle and trajectory reconstruction, identification and energy estimation, during my participation in the CERN Summer Students Program.
Deep Learning Methods for Particle Reconstruction in the HGCal
Arzi, Ofir
2017-01-01
The High Granularity end-cap Calorimeter is part of the phase-2 CMS upgrade (see Figure 1)[1]. It’s goal it to provide measurements of high resolution in time, space and energy. Given such measurements, the purpose of this work is to discuss the use of Deep Neural Networks for the task of particle and trajectory reconstruction, identiﬁcation and energy estimation, during my participation in the CERN Summer Students Program.
An automated 3D reconstruction method of UAV images
Liu, Jun; Wang, He; Liu, Xiaoyang; Li, Feng; Sun, Guangtong; Song, Ping
2015-10-01
In this paper a novel fully automated 3D reconstruction approach based on low-altitude unmanned aerial vehicle system (UAVs) images will be presented, which does not require previous camera calibration or any other external prior knowledge. Dense 3D point clouds are generated by integrating orderly feature extraction, image matching, structure from motion (SfM) and multi-view stereo (MVS) algorithms, overcoming many of the cost, time limitations of rigorous photogrammetry techniques. An image topology analysis strategy is introduced to speed up large scene reconstruction by taking advantage of the flight-control data acquired by UAV. Image topology map can significantly reduce the running time of feature matching by limiting the combination of images. A high-resolution digital surface model of the study area is produced base on UAV point clouds by constructing the triangular irregular network. Experimental results show that the proposed approach is robust and feasible for automatic 3D reconstruction of low-altitude UAV images, and has great potential for the acquisition of spatial information at large scales mapping, especially suitable for rapid response and precise modelling in disaster emergency.
A fast 4D cone beam CT reconstruction method based on the OSC-TV algorithm.
Mascolo-Fortin, Julia; Matenine, Dmitri; Archambault, Louis; Després, Philippe
2018-01-01
Four-dimensional cone beam computed tomography allows for temporally resolved imaging with useful applications in radiotherapy, but raises particular challenges in terms of image quality and computation time. The purpose of this work is to develop a fast and accurate 4D algorithm by adapting a GPU-accelerated ordered subsets convex algorithm (OSC), combined with the total variation minimization regularization technique (TV). Different initialization schemes were studied to adapt the OSC-TV algorithm to 4D reconstruction: each respiratory phase was initialized either with a 3D reconstruction or a blank image. Reconstruction algorithms were tested on a dynamic numerical phantom and on a clinical dataset. 4D iterations were implemented for a cluster of 8 GPUs. All developed methods allowed for an adequate visualization of the respiratory movement and compared favorably to the McKinnon-Bates and adaptive steepest descent projection onto convex sets algorithms, while the 4D reconstructions initialized from a prior 3D reconstruction led to better overall image quality. The most suitable adaptation of OSC-TV to 4D CBCT was found to be a combination of a prior FDK reconstruction and a 4D OSC-TV reconstruction with a reconstruction time of 4.5 minutes. This relatively short reconstruction time could facilitate a clinical use.
A shape-based quality evaluation and reconstruction method for electrical impedance tomography.
Antink, Christoph Hoog; Pikkemaat, Robert; Malmivuo, Jaakko; Leonhardt, Steffen
2015-06-01
Linear methods of reconstruction play an important role in medical electrical impedance tomography (EIT) and there is a wide variety of algorithms based on several assumptions. With the Graz consensus reconstruction algorithm for EIT (GREIT), a novel linear reconstruction algorithm as well as a standardized framework for evaluating and comparing methods of reconstruction were introduced that found widespread acceptance in the community. In this paper, we propose a two-sided extension of this concept by first introducing a novel method of evaluation. Instead of being based on point-shaped resistivity distributions, we use 2759 pairs of real lung shapes for evaluation that were automatically segmented from human CT data. Necessarily, the figures of merit defined in GREIT were adjusted. Second, a linear method of reconstruction that uses orthonormal eigenimages as training data and a tunable desired point spread function are proposed. Using our novel method of evaluation, this approach is compared to the classical point-shaped approach. Results show that most figures of merit improve with the use of eigenimages as training data. Moreover, the possibility of tuning the reconstruction by modifying the desired point spread function is shown. Finally, the reconstruction of real EIT data shows that higher contrasts and fewer artifacts can be achieved in ventilation- and perfusion-related images.
A shape-based quality evaluation and reconstruction method for electrical impedance tomography
International Nuclear Information System (INIS)
Antink, Christoph Hoog; Pikkemaat, Robert; Leonhardt, Steffen; Malmivuo, Jaakko
2015-01-01
Linear methods of reconstruction play an important role in medical electrical impedance tomography (EIT) and there is a wide variety of algorithms based on several assumptions. With the Graz consensus reconstruction algorithm for EIT (GREIT), a novel linear reconstruction algorithm as well as a standardized framework for evaluating and comparing methods of reconstruction were introduced that found widespread acceptance in the community.In this paper, we propose a two-sided extension of this concept by first introducing a novel method of evaluation. Instead of being based on point-shaped resistivity distributions, we use 2759 pairs of real lung shapes for evaluation that were automatically segmented from human CT data. Necessarily, the figures of merit defined in GREIT were adjusted. Second, a linear method of reconstruction that uses orthonormal eigenimages as training data and a tunable desired point spread function are proposed.Using our novel method of evaluation, this approach is compared to the classical point-shaped approach. Results show that most figures of merit improve with the use of eigenimages as training data. Moreover, the possibility of tuning the reconstruction by modifying the desired point spread function is shown. Finally, the reconstruction of real EIT data shows that higher contrasts and fewer artifacts can be achieved in ventilation- and perfusion-related images. (paper)
Full waveform inversion based on scattering angle enrichment with application to real dataset
Wu, Zedong; Alkhalifah, Tariq Ali
2015-01-01
Reflected waveform inversion (RWI) provides a method to reduce the nonlinearity of the standard full waveform inversion (FWI). However, the drawback of the existing RWI methods is inability to utilize diving waves and the extra sensitivity
Least Square NUFFT Methods Applied to 2D and 3D Radially Encoded MR Image Reconstruction
Song, Jiayu; Liu, Qing H.; Gewalt, Sally L.; Cofer, Gary; Johnson, G. Allan
2009-01-01
Radially encoded MR imaging (MRI) has gained increasing attention in applications such as hyperpolarized gas imaging, contrast-enhanced MR angiography, and dynamic imaging, due to its motion insensitivity and improved artifact properties. However, since the technique collects k-space samples nonuniformly, multidimensional (especially 3D) radially sampled MRI image reconstruction is challenging. The balance between reconstruction accuracy and speed becomes critical when a large data set is processed. Kaiser-Bessel gridding reconstruction has been widely used for non-Cartesian reconstruction. The objective of this work is to provide an alternative reconstruction option in high dimensions with on-the-fly kernels calculation. The work develops general multi-dimensional least square nonuniform fast Fourier transform (LS-NUFFT) algorithms and incorporates them into a k-space simulation and image reconstruction framework. The method is then applied to reconstruct the radially encoded k-space, although the method addresses general nonuniformity and is applicable to any non-Cartesian patterns. Performance assessments are made by comparing the LS-NUFFT based method with the conventional Kaiser-Bessel gridding method for 2D and 3D radially encoded computer simulated phantoms and physically scanned phantoms. The results show that the LS-NUFFT reconstruction method has better accuracy-speed efficiency than the Kaiser-Bessel gridding method when the kernel weights are calculated on the fly. The accuracy of the LS-NUFFT method depends on the choice of scaling factor, and it is found that for a particular conventional kernel function, using its corresponding deapodization function as scaling factor and utilizing it into the LS-NUFFT framework has the potential to improve accuracy. When a cosine scaling factor is used, in particular, the LS-NUFFT method is faster than Kaiser-Bessel gridding method because of a quasi closed-form solution. The method is successfully applied to 2D and
Directory of Open Access Journals (Sweden)
Songjun Zeng
2010-01-01
Full Text Available A method for three-dimensional (3D reconstruction of macromolecule assembles, that is, octahedral symmetrical adapted functions (OSAFs method, was introduced in this paper and a series of formulations for reconstruction by OSAF method were derived. To verify the feasibility and advantages of the method, two octahedral symmetrical macromolecules, that is, heat shock protein Degp24 and the Red-cell L Ferritin, were utilized as examples to implement reconstruction by the OSAF method. The schedule for simulation was designed as follows: 2000 random orientated projections of single particles with predefined Euler angles and centers of origins were generated, then different levels of noises that is signal-to-noise ratio (S/N =0.1,0.5, and 0.8 were added. The structures reconstructed by the OSAF method were in good agreement with the standard models and the relative errors of the structures reconstructed by the OSAF method to standard structures were very little even for high level noise. The facts mentioned above account for that the OSAF method is feasible and efficient approach to reconstruct structures of macromolecules and have ability to suppress the influence of noise.
The feasibility of images reconstructed with the method of sieves
International Nuclear Information System (INIS)
Veklerov, E.; Llacer, J.
1990-01-01
The concept of sieves has been applied with the maximum likelihood estimator (MLE) to image reconstruction. While it makes it possible to recover smooth images consistent with the data, the degree of smoothness provided by it is arbitrary. It is shown that the concept of feasibility is able to resolve this arbitrariness. By varying the values of parameters determining the degree of smoothness, one can generate images on both sides of the feasibility region, as well as within the region. Feasible images recovered by using different sieve parameters are compared with feasible results of other procedures. One- and two-dimensional examples using both simulated and real data sets are considered
An Overview of Radar Waveform Optimization for Target Detection
Directory of Open Access Journals (Sweden)
Wang Lulu
2016-10-01
Full Text Available An optimal waveform design method that fully employs the knowledge of the target and the environment can further improve target detection performance, thus is of vital importance to research. In this paper, methods of radar waveform optimization for target detection are reviewed and summarized and provide the basis for the research.
International Nuclear Information System (INIS)
Song, Xizi; Xu, Yanbin; Dong, Feng
2017-01-01
Electrical resistance tomography (ERT) is a promising measurement technique with important industrial and clinical applications. However, with limited effective measurements, it suffers from poor spatial resolution due to the ill-posedness of the inverse problem. Recently, there has been an increasing research interest in hybrid imaging techniques, utilizing couplings of physical modalities, because these techniques obtain much more effective measurement information and promise high resolution. Ultrasound modulated electrical impedance tomography (UMEIT) is one of the newly developed hybrid imaging techniques, which combines electric and acoustic modalities. A linearized image reconstruction method based on power density is proposed for UMEIT. The interior data, power density distribution, is adopted to reconstruct the conductivity distribution with the proposed image reconstruction method. At the same time, relating the power density change to the change in conductivity, the Jacobian matrix is employed to make the nonlinear problem into a linear one. The analytic formulation of this Jacobian matrix is derived and its effectiveness is also verified. In addition, different excitation patterns are tested and analyzed, and opposite excitation provides the best performance with the proposed method. Also, multiple power density distributions are combined to implement image reconstruction. Finally, image reconstruction is implemented with the linear back-projection (LBP) algorithm. Compared with ERT, with the proposed image reconstruction method, UMEIT can produce reconstructed images with higher quality and better quantitative evaluation results. (paper)
Dynamic PET Image reconstruction for parametric imaging using the HYPR kernel method
Spencer, Benjamin; Qi, Jinyi; Badawi, Ramsey D.; Wang, Guobao
2017-03-01
Dynamic PET image reconstruction is a challenging problem because of the ill-conditioned nature of PET and the lowcounting statistics resulted from short time-frames in dynamic imaging. The kernel method for image reconstruction has been developed to improve image reconstruction of low-count PET data by incorporating prior information derived from high-count composite data. In contrast to most of the existing regularization-based methods, the kernel method embeds image prior information in the forward projection model and does not require an explicit regularization term in the reconstruction formula. Inspired by the existing highly constrained back-projection (HYPR) algorithm for dynamic PET image denoising, we propose in this work a new type of kernel that is simpler to implement and further improves the kernel-based dynamic PET image reconstruction. Our evaluation study using a physical phantom scan with synthetic FDG tracer kinetics has demonstrated that the new HYPR kernel-based reconstruction can achieve a better region-of-interest (ROI) bias versus standard deviation trade-off for dynamic PET parametric imaging than the post-reconstruction HYPR denoising method and the previously used nonlocal-means kernel.
Multichannel waveform display system
International Nuclear Information System (INIS)
Kolvankar, V.G.
1989-01-01
For any multichannel data acquisition system, a multichannel paper chart recorder undoubtedly forms an essential part of the system. When deployed on-line, it instantaneously provides, for visual inspection, hard copies of the signal waveforms on common time base at any desired sensitivity and time resolution. Within the country, only a small range of these strip chart recorder s is available, and under stringent specifications imported recorders are often procured. The cost of such recorders may range from 1 to 5 lakhs of rupees in foreign exchange. A system to provide on the oscilloscope a steady display of multichannel waveforms, refreshed from the digital data stored in the memory is developed. The merits and demerits of the display system are compared with that built around a conventional paper chart recorder. Various illustrations of multichannel seismic event data acquired at Gauribidanur seismic array station are also presented. (author). 2 figs
Wavelet analysis of the impedance cardiogram waveforms
Podtaev, S.; Stepanov, R.; Dumler, A.; Chugainov, S.; Tziberkin, K.
2012-12-01
Impedance cardiography has been used for diagnosing atrial and ventricular dysfunctions, valve disorders, aortic stenosis, and vascular diseases. Almost all the applications of impedance cardiography require determination of some of the characteristic points of the ICG waveform. The ICG waveform has a set of characteristic points known as A, B, E ((dZ/dt)max) X, Y, O and Z. These points are related to distinct physiological events in the cardiac cycle. Objective of this work is an approbation of a new method of processing and interpretation of the impedance cardiogram waveforms using wavelet analysis. A method of computer thoracic tetrapolar polyrheocardiography is used for hemodynamic registrations. Use of original wavelet differentiation algorithm allows combining filtration and calculation of the derivatives of rheocardiogram. The proposed approach can be used in clinical practice for early diagnostics of cardiovascular system remodelling in the course of different pathologies.
Wavelet analysis of the impedance cardiogram waveforms
International Nuclear Information System (INIS)
Podtaev, S; Stepanov, R; Dumler, A; Chugainov, S; Tziberkin, K
2012-01-01
Impedance cardiography has been used for diagnosing atrial and ventricular dysfunctions, valve disorders, aortic stenosis, and vascular diseases. Almost all the applications of impedance cardiography require determination of some of the characteristic points of the ICG waveform. The ICG waveform has a set of characteristic points known as A, B, E ((dZ/dt) max ) X, Y, O and Z. These points are related to distinct physiological events in the cardiac cycle. Objective of this work is an approbation of a new method of processing and interpretation of the impedance cardiogram waveforms using wavelet analysis. A method of computer thoracic tetrapolar polyrheocardiography is used for hemodynamic registrations. Use of original wavelet differentiation algorithm allows combining filtration and calculation of the derivatives of rheocardiogram. The proposed approach can be used in clinical practice for early diagnostics of cardiovascular system remodelling in the course of different pathologies.
Krylov subspace acceleration of waveform relaxation
Energy Technology Data Exchange (ETDEWEB)
Lumsdaine, A.; Wu, Deyun [Univ. of Notre Dame, IN (United States)
1996-12-31
Standard solution methods for numerically solving time-dependent problems typically begin by discretizing the problem on a uniform time grid and then sequentially solving for successive time points. The initial time discretization imposes a serialization to the solution process and limits parallel speedup to the speedup available from parallelizing the problem at any given time point. This bottleneck can be circumvented by the use of waveform methods in which multiple time-points of the different components of the solution are computed independently. With the waveform approach, a problem is first spatially decomposed and distributed among the processors of a parallel machine. Each processor then solves its own time-dependent subsystem over the entire interval of interest using previous iterates from other processors as inputs. Synchronization and communication between processors take place infrequently, and communication consists of large packets of information - discretized functions of time (i.e., waveforms).
Zhu, Dianwen; Zhang, Wei; Zhao, Yue; Li, Changqing
2016-03-01
Dynamic fluorescence molecular tomography (FMT) has the potential to quantify physiological or biochemical information, known as pharmacokinetic parameters, which are important for cancer detection, drug development and delivery etc. To image those parameters, there are indirect methods, which are easier to implement but tend to provide images with low signal-to-noise ratio, and direct methods, which model all the measurement noises together and are statistically more efficient. The direct reconstruction methods in dynamic FMT have attracted a lot of attention recently. However, the coupling of tomographic image reconstruction and nonlinearity of kinetic parameter estimation due to the compartment modeling has imposed a huge computational burden to the direct reconstruction of the kinetic parameters. In this paper, we propose to take advantage of both the direct and indirect reconstruction ideas through a variable splitting strategy under the augmented Lagrangian framework. Each iteration of the direct reconstruction is split into two steps: the dynamic FMT image reconstruction and the node-wise nonlinear least squares fitting of the pharmacokinetic parameter images. Through numerical simulation studies, we have found that the proposed algorithm can achieve good reconstruction results within a small amount of time. This will be the first step for a combined dynamic PET and FMT imaging in the future.
MR-guided dynamic PET reconstruction with the kernel method and spectral temporal basis functions
Novosad, Philip; Reader, Andrew J.
2016-06-01
Recent advances in dynamic positron emission tomography (PET) reconstruction have demonstrated that it is possible to achieve markedly improved end-point kinetic parameter maps by incorporating a temporal model of the radiotracer directly into the reconstruction algorithm. In this work we have developed a highly constrained, fully dynamic PET reconstruction algorithm incorporating both spectral analysis temporal basis functions and spatial basis functions derived from the kernel method applied to a co-registered T1-weighted magnetic resonance (MR) image. The dynamic PET image is modelled as a linear combination of spatial and temporal basis functions, and a maximum likelihood estimate for the coefficients can be found using the expectation-maximization (EM) algorithm. Following reconstruction, kinetic fitting using any temporal model of interest can be applied. Based on a BrainWeb T1-weighted MR phantom, we performed a realistic dynamic [18F]FDG simulation study with two noise levels, and investigated the quantitative performance of the proposed reconstruction algorithm, comparing it with reconstructions incorporating either spectral analysis temporal basis functions alone or kernel spatial basis functions alone, as well as with conventional frame-independent reconstruction. Compared to the other reconstruction algorithms, the proposed algorithm achieved superior performance, offering a decrease in spatially averaged pixel-level root-mean-square-error on post-reconstruction kinetic parametric maps in the grey/white matter, as well as in the tumours when they were present on the co-registered MR image. When the tumours were not visible in the MR image, reconstruction with the proposed algorithm performed similarly to reconstruction with spectral temporal basis functions and was superior to both conventional frame-independent reconstruction and frame-independent reconstruction with kernel spatial basis functions. Furthermore, we demonstrate that a joint spectral
Resolution analysis in full waveform inversion
Fichtner, A.; Trampert, J.
2011-01-01
We propose a new method for the quantitative resolution analysis in full seismic waveform inversion that overcomes the limitations of classical synthetic inversions while being computationally more efficient and applicable to any misfit measure. The method rests on (1) the local quadratic
Liu, Youshan; Teng, Jiwen; Xu, Tao; Badal, José; Liu, Qinya; Zhou, Bing
2017-05-01
We carry out full waveform inversion (FWI) in time domain based on an alternative frequency-band selection strategy that allows us to implement the method with success. This strategy aims at decomposing the seismic data within partially overlapped frequency intervals by carrying out a concatenated treatment of the wavelet to largely avoid redundant frequency information to adapt to wavelength or wavenumber coverage. A pertinent numerical test proves the effectiveness of this strategy. Based on this strategy, we comparatively analyze the effects of update parameters for the nonlinear conjugate gradient (CG) method and step-length formulas on the multiscale FWI through several numerical tests. The investigations of up to eight versions of the nonlinear CG method with and without Gaussian white noise make clear that the HS (Hestenes and Stiefel in J Res Natl Bur Stand Sect 5:409-436, 1952), CD (Fletcher in Practical methods of optimization vol. 1: unconstrained optimization, Wiley, New York, 1987), and PRP (Polak and Ribière in Revue Francaise Informat Recherche Opertionelle, 3e Année 16:35-43, 1969; Polyak in USSR Comput Math Math Phys 9:94-112, 1969) versions are more efficient among the eight versions, while the DY (Dai and Yuan in SIAM J Optim 10:177-182, 1999) version always yields inaccurate result, because it overestimates the deeper parts of the model. The application of FWI algorithms using distinct step-length formulas, such as the direct method ( Direct), the parabolic search method ( Search), and the two-point quadratic interpolation method ( Interp), proves that the Interp is more efficient for noise-free data, while the Direct is more efficient for Gaussian white noise data. In contrast, the Search is less efficient because of its slow convergence. In general, the three step-length formulas are robust or partly insensitive to Gaussian white noise and the complexity of the model. When the initial velocity model deviates far from the real model or the
Funamizu, Hideki; Onodera, Yusei; Aizu, Yoshihisa
2018-05-01
In this study, we report color quality improvement of reconstructed images in color digital holography using the speckle method and the spectral estimation. In this technique, an object is illuminated by a speckle field and then an object wave is produced, while a plane wave is used as a reference wave. For three wavelengths, the interference patterns of two coherent waves are recorded as digital holograms on an image sensor. Speckle fields are changed by moving a ground glass plate in an in-plane direction, and a number of holograms are acquired to average the reconstructed images. After the averaging process of images reconstructed from multiple holograms, we use the Wiener estimation method for obtaining spectral transmittance curves in reconstructed images. The color reproducibility in this method is demonstrated and evaluated using a Macbeth color chart film and staining cells of onion.
Directory of Open Access Journals (Sweden)
Jing Wang
2013-01-01
Full Text Available The image reconstruction for electrical impedance tomography (EIT mathematically is a typed nonlinear ill-posed inverse problem. In this paper, a novel iteration regularization scheme based on the homotopy perturbation technique, namely, homotopy perturbation inversion method, is applied to investigate the EIT image reconstruction problem. To verify the feasibility and effectiveness, simulations of image reconstruction have been performed in terms of considering different locations, sizes, and numbers of the inclusions, as well as robustness to data noise. Numerical results indicate that this method can overcome the numerical instability and is robust to data noise in the EIT image reconstruction. Moreover, compared with the classical Landweber iteration method, our approach improves the convergence rate. The results are promising.
An External Wire Frame Fixation Method of Skin Grafting for Burn Reconstruction.
Yoshino, Yukiko; Ueda, Hyakuzoh; Ono, Simpei; Ogawa, Rei
2017-06-28
The skin graft is a prevalent reconstructive method for burn injuries. We have been applying external wire frame fixation methods in combination with skin grafts since 1986 and have experienced better outcomes in percentage of successful graft take. The overall purpose of this method was to further secure skin graft adherence to wound beds in hard to stabilize areas. There are also location-specific benefits to this technique such as eliminating the need of tarsorrhaphy in periorbital area, allowing immediate food intake after surgery in perioral area, and performing less invasive fixing methods in digits, and so on. The purpose of this study was to clarify its benefits and applicable locations. We reviewed 22 postburn patients with skin graft reconstructions using the external wire frame method at our institution from December 2012 through September 2016. Details of the surgical technique and individual reports are also discussed. Of the 22 cases, 15 (68%) were split-thickness skin grafts and 7 (32%) were full-thickness skin grafts. Five cases (23%) involved periorbital reconstruction, 5 (23%) involved perioral reconstruction, 2 (9%) involved lower limb reconstruction, and 10 (45%) involved digital reconstruction. Complete (100%) survival of the skin graft was attained in all cases. No signs of complication were observed. With 30 years of experiences all combined, we have summarized fail-proof recommendations to a successful graft survival with an emphasis on the locations of its application.
A modified sparse reconstruction method for three-dimensional synthetic aperture radar image
Zhang, Ziqiang; Ji, Kefeng; Song, Haibo; Zou, Huanxin
2018-03-01
There is an increasing interest in three-dimensional Synthetic Aperture Radar (3-D SAR) imaging from observed sparse scattering data. However, the existing 3-D sparse imaging method requires large computing times and storage capacity. In this paper, we propose a modified method for the sparse 3-D SAR imaging. The method processes the collection of noisy SAR measurements, usually collected over nonlinear flight paths, and outputs 3-D SAR imagery. Firstly, the 3-D sparse reconstruction problem is transformed into a series of 2-D slices reconstruction problem by range compression. Then the slices are reconstructed by the modified SL0 (smoothed l0 norm) reconstruction algorithm. The improved algorithm uses hyperbolic tangent function instead of the Gaussian function to approximate the l0 norm and uses the Newton direction instead of the steepest descent direction, which can speed up the convergence rate of the SL0 algorithm. Finally, numerical simulation results are given to demonstrate the effectiveness of the proposed algorithm. It is shown that our method, compared with existing 3-D sparse imaging method, performs better in reconstruction quality and the reconstruction time.
WFCatalog: A catalogue for seismological waveform data
Trani, Luca; Koymans, Mathijs; Atkinson, Malcolm; Sleeman, Reinoud; Filgueira, Rosa
2017-09-01
This paper reports advances in seismic waveform description and discovery leading to a new seismological service and presents the key steps in its design, implementation and adoption. This service, named WFCatalog, which stands for waveform catalogue, accommodates features of seismological waveform data. Therefore, it meets the need for seismologists to be able to select waveform data based on seismic waveform features as well as sensor geolocations and temporal specifications. We describe the collaborative design methods and the technical solution showing the central role of seismic feature catalogues in framing the technical and operational delivery of the new service. Also, we provide an overview of the complex environment wherein this endeavour is scoped and the related challenges discussed. As multi-disciplinary, multi-organisational and global collaboration is necessary to address today's challenges, canonical representations can provide a focus for collaboration and conceptual tools for agreeing directions. Such collaborations can be fostered and formalised by rallying intellectual effort into the design of novel scientific catalogues and the services that support them. This work offers an example of the benefits generated by involving cross-disciplinary skills (e.g. data and domain expertise) from the early stages of design, and by sustaining the engagement with the target community throughout the delivery and deployment process.
Variability in CT lung-nodule volumetry: Effects of dose reduction and reconstruction methods.
Young, Stefano; Kim, Hyun J Grace; Ko, Moe Moe; Ko, War War; Flores, Carlos; McNitt-Gray, Michael F
2015-05-01
Measuring the size of nodules on chest CT is important for lung cancer staging and measuring therapy response. 3D volumetry has been proposed as a more robust alternative to 1D and 2D sizing methods. There have also been substantial advances in methods to reduce radiation dose in CT. The purpose of this work was to investigate the effect of dose reduction and reconstruction methods on variability in 3D lung-nodule volumetry. Reduced-dose CT scans were simulated by applying a noise-addition tool to the raw (sinogram) data from clinically indicated patient scans acquired on a multidetector-row CT scanner (Definition Flash, Siemens Healthcare). Scans were simulated at 25%, 10%, and 3% of the dose of their clinical protocol (CTDIvol of 20.9 mGy), corresponding to CTDIvol values of 5.2, 2.1, and 0.6 mGy. Simulated reduced-dose data were reconstructed with both conventional filtered backprojection (B45 kernel) and iterative reconstruction methods (SAFIRE: I44 strength 3 and I50 strength 3). Three lab technologist readers contoured "measurable" nodules in 33 patients under each of the different acquisition/reconstruction conditions in a blinded study design. Of the 33 measurable nodules, 17 were used to estimate repeatability with their clinical reference protocol, as well as interdose and inter-reconstruction-method reproducibilities. The authors compared the resulting distributions of proportional differences across dose and reconstruction methods by analyzing their means, standard deviations (SDs), and t-test and F-test results. The clinical-dose repeatability experiment yielded a mean proportional difference of 1.1% and SD of 5.5%. The interdose reproducibility experiments gave mean differences ranging from -5.6% to -1.7% and SDs ranging from 6.3% to 9.9%. The inter-reconstruction-method reproducibility experiments gave mean differences of 2.0% (I44 strength 3) and -0.3% (I50 strength 3), and SDs were identical at 7.3%. For the subset of repeatability cases, inter-reconstruction-method
On the kinematic reconstruction of deep inelastic scattering at HERA: the Σmethod
International Nuclear Information System (INIS)
Bassler, U.; Bernardi, G.
1994-12-01
We review and compare the reconstruction methods of the inclusive deep inelastic scattering variables used at HERA. We introduce a new prescription, the Sigma (Σ) method, which allows to measure the structure function of the proton F 2 (x, Q 2 ) in a large kinematic domain, and in particular in the low x-low Q 2 region, with small systematic errors and small radiative corrections. A detailed comparison between the Σ method and the other methods is shown. Extensions of the Σ method are presented. The effect of QED radiation on the kinematic reconstruction and on the structure function measurement is discussed. (orig.)
A volume of fluid method based on multidimensional advection and spline interface reconstruction
International Nuclear Information System (INIS)
Lopez, J.; Hernandez, J.; Gomez, P.; Faura, F.
2004-01-01
A new volume of fluid method for tracking two-dimensional interfaces is presented. The method involves a multidimensional advection algorithm based on the use of edge-matched flux polygons to integrate the volume fraction evolution equation, and a spline-based reconstruction algorithm. The accuracy and efficiency of the proposed method are analyzed using different tests, and the results are compared with those obtained recently by other authors. Despite its simplicity, the proposed method represents a significant improvement, and compares favorably with other volume of fluid methods as regards the accuracy and efficiency of both the advection and reconstruction steps
Accelerated gradient methods for total-variation-based CT image reconstruction
DEFF Research Database (Denmark)
Jørgensen, Jakob Heide; Jensen, Tobias Lindstrøm; Hansen, Per Christian
2011-01-01
incorporates several heuristics from the optimization literature such as Barzilai-Borwein (BB) step size selection and nonmonotone line search. The latter uses a cleverly chosen sequence of auxiliary points to achieve a better convergence rate. The methods are memory efficient and equipped with a stopping...... reconstruction can in principle be found by any optimization method, but in practice the large scale of the systems arising in CT image reconstruction preclude the use of memory-demanding methods such as Newton’s method. The simple gradient method has much lower memory requirements, but exhibits slow convergence...
A Survey on Methods for Reconstructing Surfaces from Unorganized Point Sets
Directory of Open Access Journals (Sweden)
Vilius Matiukas
2011-08-01
Full Text Available This paper addresses the issue of reconstructing and visualizing surfaces from unorganized point sets. These can be acquired using different techniques, such as 3D-laser scanning, computerized tomography, magnetic resonance imaging and multi-camera imaging. The problem of reconstructing surfaces from their unorganized point sets is common for many diverse areas, including computer graphics, computer vision, computational geometry or reverse engineering. The paper presents three alternative methods that all use variations in complementary cones to triangulate and reconstruct the tested 3D surfaces. The article evaluates and contrasts three alternatives.Article in English
Shang, Shang; Bai, Jing; Song, Xiaolei; Wang, Hongkai; Lau, Jaclyn
2007-01-01
Conjugate gradient method is verified to be efficient for nonlinear optimization problems of large-dimension data. In this paper, a penalized linear and nonlinear combined conjugate gradient method for the reconstruction of fluorescence molecular tomography (FMT) is presented. The algorithm combines the linear conjugate gradient method and the nonlinear conjugate gradient method together based on a restart strategy, in order to take advantage of the two kinds of conjugate gradient methods and compensate for the disadvantages. A quadratic penalty method is adopted to gain a nonnegative constraint and reduce the illposedness of the problem. Simulation studies show that the presented algorithm is accurate, stable, and fast. It has a better performance than the conventional conjugate gradient-based reconstruction algorithms. It offers an effective approach to reconstruct fluorochrome information for FMT.
Analysis on the reconstruction accuracy of the Fitch method for inferring ancestral states
Directory of Open Access Journals (Sweden)
Grünewald Stefan
2011-01-01
Full Text Available Abstract Background As one of the most widely used parsimony methods for ancestral reconstruction, the Fitch method minimizes the total number of hypothetical substitutions along all branches of a tree to explain the evolution of a character. Due to the extensive usage of this method, it has become a scientific endeavor in recent years to study the reconstruction accuracies of the Fitch method. However, most studies are restricted to 2-state evolutionary models and a study for higher-state models is needed since DNA sequences take the format of 4-state series and protein sequences even have 20 states. Results In this paper, the ambiguous and unambiguous reconstruction accuracy of the Fitch method are studied for N-state evolutionary models. Given an arbitrary phylogenetic tree, a recurrence system is first presented to calculate iteratively the two accuracies. As complete binary tree and comb-shaped tree are the two extremal evolutionary tree topologies according to balance, we focus on the reconstruction accuracies on these two topologies and analyze their asymptotic properties. Then, 1000 Yule trees with 1024 leaves are generated and analyzed to simulate real evolutionary scenarios. It is known that more taxa not necessarily increase the reconstruction accuracies under 2-state models. The result under N-state models is also tested. Conclusions In a large tree with many leaves, the reconstruction accuracies of using all taxa are sometimes less than those of using a leaf subset under N-state models. For complete binary trees, there always exists an equilibrium interval [a, b] of conservation probability, in which the limiting ambiguous reconstruction accuracy equals to the probability of randomly picking a state. The value b decreases with the increase of the number of states, and it seems to converge. When the conservation probability is greater than b, the reconstruction accuracies of the Fitch method increase rapidly. The reconstruction
A two-step Hilbert transform method for 2D image reconstruction
International Nuclear Information System (INIS)
Noo, Frederic; Clackdoyle, Rolf; Pack, Jed D
2004-01-01
The paper describes a new accurate two-dimensional (2D) image reconstruction method consisting of two steps. In the first step, the backprojected image is formed after taking the derivative of the parallel projection data. In the second step, a Hilbert filtering is applied along certain lines in the differentiated backprojection (DBP) image. Formulae for performing the DBP step in fan-beam geometry are also presented. The advantage of this two-step Hilbert transform approach is that in certain situations, regions of interest (ROIs) can be reconstructed from truncated projection data. Simulation results are presented that illustrate very similar reconstructed image quality using the new method compared to standard filtered backprojection, and that show the capability to correctly handle truncated projections. In particular, a simulation is presented of a wide patient whose projections are truncated laterally yet for which highly accurate ROI reconstruction is obtained
Evaluation of image reconstruction methods for 123I-MIBG-SPECT. A rank-order study
International Nuclear Information System (INIS)
Soederberg, Marcus; Mattsson, Soeren; Oddstig, Jenny; Uusijaervi-Lizana, Helena; Leide-Svegborn, Sigrid; Valind, Sven; Thorsson, Ola; Garpered, Sabine; Prautzsch, Tilmann; Tischenko, Oleg
2012-01-01
Background: There is an opportunity to improve the image quality and lesion detectability in single photon emission computed tomography (SPECT) by choosing an appropriate reconstruction method and optimal parameters for the reconstruction. Purpose: To optimize the use of the Flash 3D reconstruction algorithm in terms of equivalent iteration (EI) number (number of subsets times the number of iterations) and to compare with two recently developed reconstruction algorithms ReSPECT and orthogonal polynomial expansion on disc (OPED) for application on 123 I-metaiodobenzylguanidine (MIBG)-SPECT. Material and Methods: Eleven adult patients underwent SPECT 4 h and 14 patients 24 h after injection of approximately 200 MBq 123 I-MIBG using a Siemens Symbia T6 SPECT/CT. Images were reconstructed from raw data using the Flash 3D algorithm at eight different EI numbers. The images were ranked by three experienced nuclear medicine physicians according to their overall impression of the image quality. The obtained optimal images were then compared in one further visual comparison with images reconstructed using the ReSPECT and OPED algorithms. Results: The optimal EI number for Flash 3D was determined to be 32 for acquisition 4 h and 24 h after injection. The average rank order (best first) for the different reconstructions for acquisition after 4 h was: Flash 3D 32 > ReSPECT > Flash 3D 64 > OPED, and after 24 h: Flash 3D 16 > ReSPECT > Flash 3D 32 > OPED. A fair level of inter-observer agreement concerning optimal EI number and reconstruction algorithm was obtained, which may be explained by the different individual preferences of what is appropriate image quality. Conclusion: Using Siemens Symbia T6 SPECT/CT and specified acquisition parameters, Flash 3D 32 (4 h) and Flash 3D 16 (24 h), followed by ReSPECT, were assessed to be the preferable reconstruction algorithms in visual assessment of 123 I-MIBG images
A simple method to take urethral sutures for neobladder reconstruction and radical prostatectomy
Directory of Open Access Journals (Sweden)
B Satheesan
2007-01-01
Full Text Available For the reconstruction of urethra-vesical anastamosis after radical prostatectomy and for neobladder reconstruction, taking adequate sutures to include the urethral mucosa is vital. Due to the retraction of the urethra and unfriendly pelvis, the process of taking satisfactory urethral sutures may be laborious. Here, we describe a simple method by which we could overcome similar technical problems during surgery using Foley catheter as the guide for the suture.
Directory of Open Access Journals (Sweden)
S Singh
2008-11-01
Full Text Available We describe herein a modified technique for reconstruction of chronic rupture of the quadriceps tendon in a patient with bilateral total knee replacement and distal realignment of the patella. The surgery involved the application of a Dacron graft and the ‘double eights’ technique. The patient achieved satisfactory results after surgery and we believe that this technique of reconstruction offers advantages over other methods.
Comparing and improving reconstruction methods for proxies based on compositional data
Nolan, C.; Tipton, J.; Booth, R.; Jackson, S. T.; Hooten, M.
2017-12-01
Many types of studies in paleoclimatology and paleoecology involve compositional data. Often, these studies aim to use compositional data to reconstruct an environmental variable of interest; the reconstruction is usually done via the development of a transfer function. Transfer functions have been developed using many different methods. Existing methods tend to relate the compositional data and the reconstruction target in very simple ways. Additionally, the results from different methods are rarely compared. Here we seek to address these two issues. First, we introduce a new hierarchical Bayesian multivariate gaussian process model; this model allows for the relationship between each species in the compositional dataset and the environmental variable to be modeled in a way that captures the underlying complexities. Then, we compare this new method to machine learning techniques and commonly used existing methods. The comparisons are based on reconstructing the water table depth history of Caribou Bog (an ombrotrophic Sphagnum peat bog in Old Town, Maine, USA) from a new 7500 year long record of testate amoebae assemblages. The resulting reconstructions from different methods diverge in both their resulting means and uncertainties. In particular, uncertainty tends to be drastically underestimated by some common methods. These results will help to improve inference of water table depth from testate amoebae. Furthermore, this approach can be applied to test and improve inferences of past environmental conditions from a broad array of paleo-proxies based on compositional data
Centered Differential Waveform Inversion with Minimum Support Regularization
Kazei, Vladimir; Alkhalifah, Tariq Ali
2017-01-01
Time-lapse full-waveform inversion has two major challenges. The first one is the reconstruction of a reference model (baseline model for most of approaches). The second is inversion for the time-lapse changes in the parameters. Common model
Low rank alternating direction method of multipliers reconstruction for MR fingerprinting.
Assländer, Jakob; Cloos, Martijn A; Knoll, Florian; Sodickson, Daniel K; Hennig, Jürgen; Lattanzi, Riccardo
2018-01-01
The proposed reconstruction framework addresses the reconstruction accuracy, noise propagation and computation time for magnetic resonance fingerprinting. Based on a singular value decomposition of the signal evolution, magnetic resonance fingerprinting is formulated as a low rank (LR) inverse problem in which one image is reconstructed for each singular value under consideration. This LR approximation of the signal evolution reduces the computational burden by reducing the number of Fourier transformations. Also, the LR approximation improves the conditioning of the problem, which is further improved by extending the LR inverse problem to an augmented Lagrangian that is solved by the alternating direction method of multipliers. The root mean square error and the noise propagation are analyzed in simulations. For verification, in vivo examples are provided. The proposed LR alternating direction method of multipliers approach shows a reduced root mean square error compared to the original fingerprinting reconstruction, to a LR approximation alone and to an alternating direction method of multipliers approach without a LR approximation. Incorporating sensitivity encoding allows for further artifact reduction. The proposed reconstruction provides robust convergence, reduced computational burden and improved image quality compared to other magnetic resonance fingerprinting reconstruction approaches evaluated in this study. Magn Reson Med 79:83-96, 2018. © 2017 International Society for Magnetic Resonance in Medicine. © 2017 International Society for Magnetic Resonance in Medicine.
Analysis of fracture surface of CFRP material by three-dimensional reconstruction methods
International Nuclear Information System (INIS)
Lobo, Raquel M.; Andrade, Arnaldo H.P.
2009-01-01
Fracture surfaces of CFRP (carbon Fiber Reinforced Polymer) materials, used in the nuclear fuel cycle, presents an elevated roughness, mainly due to the fracture mode known as pulling out, that displays pieces of carbon fibers after debonding between fiber and matrix. The fractographic analysis, by bi-dimensional images is deficient for not considering the so important vertical resolution as much as the horizontal resolution. In this case, the knowledge of this heights distribution that occurs during the breaking, can lead to the calculation of the involved energies in the process that would allows a better agreement on the fracture mechanisms of the composite material. An important solution for the material characterization, whose surface presents a high roughness due to the variation in height, is to reconstruct three-dimensionally these fracture surfaces. In this work, the 3D reconstruction was done by two different methods: the variable focus reconstruction, through a stack of images obtained by optical microscopy (OM) and the parallax reconstruction, carried through with images acquired by scanning electron microscopy (SEM). The results of both methods present an elevation map of the reconstructed image that determine the height of the surface pixel by pixel,. The results obtained by the methods of reconstruction for the CFRP surfaces, have been compared with others materials such as aluminum and copper that present a ductile type fracture surface, with lower roughness. (author)
Efficient 3D Volume Reconstruction from a Point Cloud Using a Phase-Field Method
Directory of Open Access Journals (Sweden)
Darae Jeong
2018-01-01
Full Text Available We propose an explicit hybrid numerical method for the efficient 3D volume reconstruction from unorganized point clouds using a phase-field method. The proposed three-dimensional volume reconstruction algorithm is based on the 3D binary image segmentation method. First, we define a narrow band domain embedding the unorganized point cloud and an edge indicating function. Second, we define a good initial phase-field function which speeds up the computation significantly. Third, we use a recently developed explicit hybrid numerical method for solving the three-dimensional image segmentation model to obtain efficient volume reconstruction from point cloud data. In order to demonstrate the practical applicability of the proposed method, we perform various numerical experiments.
AIR Tools II: algebraic iterative reconstruction methods, improved implementation
DEFF Research Database (Denmark)
Hansen, Per Christian; Jørgensen, Jakob Sauer
2017-01-01
with algebraic iterative methods and their convergence properties. The present software is a much expanded and improved version of the package AIR Tools from 2012, based on a new modular design. In addition to improved performance and memory use, we provide more flexible iterative methods, a column-action method...
International Nuclear Information System (INIS)
Wang, Jinguo; Zhao, Zhiqin; Song, Jian; Chen, Guoping; Nie, Zaiping; Liu, Qing-Huo
2015-01-01
Purpose: An iterative reconstruction method has been previously reported by the authors of this paper. However, the iterative reconstruction method was demonstrated by solely using the numerical simulations. It is essential to apply the iterative reconstruction method to practice conditions. The objective of this work is to validate the capability of the iterative reconstruction method for reducing the effects of acoustic heterogeneity with the experimental data in microwave induced thermoacoustic tomography. Methods: Most existing reconstruction methods need to combine the ultrasonic measurement technology to quantitatively measure the velocity distribution of heterogeneity, which increases the system complexity. Different to existing reconstruction methods, the iterative reconstruction method combines time reversal mirror technique, fast marching method, and simultaneous algebraic reconstruction technique to iteratively estimate the velocity distribution of heterogeneous tissue by solely using the measured data. Then, the estimated velocity distribution is used subsequently to reconstruct the highly accurate image of microwave absorption distribution. Experiments that a target placed in an acoustic heterogeneous environment are performed to validate the iterative reconstruction method. Results: By using the estimated velocity distribution, the target in an acoustic heterogeneous environment can be reconstructed with better shape and higher image contrast than targets that are reconstructed with a homogeneous velocity distribution. Conclusions: The distortions caused by the acoustic heterogeneity can be efficiently corrected by utilizing the velocity distribution estimated by the iterative reconstruction method. The advantage of the iterative reconstruction method over the existing correction methods is that it is successful in improving the quality of the image of microwave absorption distribution without increasing the system complexity
On-line reconstruction of in-core power distribution by harmonics expansion method
International Nuclear Information System (INIS)
Wang Changhui; Wu Hongchun; Cao Liangzhi; Yang Ping
2011-01-01
Highlights: → A harmonics expansion method for the on-line in-core power reconstruction is proposed. → A harmonics data library is pre-generated off-line and a code named COMS is developed. → Numerical results show that the maximum relative error of the reconstruction is less than 5.5%. → This method has a high computational speed compared to traditional methods. - Abstract: Fixed in-core detectors are most suitable in real-time response to in-core power distributions in pressurized water reactors (PWRs). In this paper, a harmonics expansion method is used to reconstruct the in-core power distribution of a PWR on-line. In this method, the in-core power distribution is expanded by the harmonics of one reference case. The expansion coefficients are calculated using signals provided by fixed in-core detectors. To conserve computing time and improve reconstruction precision, a harmonics data library containing the harmonics of different reference cases is constructed. Upon reconstruction of the in-core power distribution on-line, the two closest reference cases are searched from the harmonics data library to produce expanded harmonics by interpolation. The Unit 1 reactor of DayaBay Nuclear Power Plant (DayaBay NPP) in China is considered for verification. The maximum relative error between the measurement and reconstruction results is less than 5.5%, and the computing time is about 0.53 s for a single reconstruction, indicating that this method is suitable for the on-line monitoring of PWRs.
International Nuclear Information System (INIS)
Niki, Noboru; Mizutani, Toshio; Takahashi, Yoshizo; Inouye, Tamon.
1983-01-01
The nescessity for developing real-time computerized tomography (CT) aiming at the dynamic observation of organs such as hearts has lately been advocated. It is necessary for its realization to reconstruct the images which are markedly faster than present CTs. Although various reconstructing methods have been proposed so far, the method practically employed at present is the filtered backprojection (FBP) method only, which can give high quality image reconstruction, but takes much computing time. In the past, the two-dimensional Fourier transform (TFT) method was regarded as unsuitable to practical use because the quality of images obtained was not good, in spite of the promising method for high speed reconstruction because of its less computing time. However, since it was revealed that the image quality by TFT method depended greatly on interpolation accuracy in two-dimensional Fourier space, the authors have developed a high-speed calculation algorithm that can obtain high quality images by pursuing the relationship between the image quality and the interpolation method. In this case, radial data sampling points in Fourier space are increased to β-th power of 2 times, and the linear or spline interpolation is used. Comparison of this method with the present FBP method resulted in the conclusion that the image quality is almost the same in practical image matrix, the computational time by TFT method becomes about 1/10 of FBP method, and the memory capacity also reduces by about 20 %. (Wakatsuki, Y.)
Methods and Simulations of Muon Tomography and Reconstruction
Schreiner, Henry Fredrick, III
This dissertation investigates imaging with cosmic ray muons using scintillator-based portable particle detectors, and covers a variety of the elements required for the detectors to operate and take data, from the detector internal communications and software algorithms to a measurement to allow accurate predictions of the attenuation of physical targets. A discussion of the tracking process for the three layer helical design developed at UT Austin is presented, with details of the data acquisition system, and the highly efficient data format. Upgrades to this system provide a stable system for taking images in harsh or inaccessible environments, such as in a remote jungle in Belize. A Geant4 Monte Carlo simulation was used to develop our understanding of the efficiency of the system, as well as to make predictions for a variety of different targets. The projection process is discussed, with a high-speed algorithm for sweeping a plane through data in near real time, to be used in applications requiring a search through space for target discovery. Several other projections and a foundation of high fidelity 3D reconstructions are covered. A variable binning scheme for rapidly varying statistics over portions of an image plane is also presented and used. A discrepancy in our predictions and the observed attenuation through smaller targets is shown, and it is resolved with a new measurement of low energy spectrum, using a specially designed enclosure to make a series of measurements underwater. This provides a better basis for understanding the images of small amounts of materials, such as for thin cover materials.
Electronics via waveform analysis
Craig, Edwin C
1993-01-01
The author believes that a good basic understanding of electronics can be achieved by detailed visual analyses of the actual voltage waveforms present in selected circuits. The voltage waveforms included in this text were photographed using a 35-rrun camera in an attempt to make the book more attractive. This book is intended for the use of students with a variety of backgrounds. For this reason considerable material has been placed in the Appendix for those students who find it useful. The Appendix includes many basic electricity and electronic concepts as well as mathematical derivations that are not vital to the understanding of the circuit being discussed in the text at that time. Also some derivations might be so long that, if included in the text, it could affect the concentration of the student on the circuit being studied. The author has tried to make the book comprehensive enough so that a student could use it as a self-study course, providing one has access to adequate laboratory equipment.
Guan, Huifeng; Anastasio, Mark A.
2017-03-01
It is well-known that properly designed image reconstruction methods can facilitate reductions in imaging doses and data-acquisition times in tomographic imaging. The ability to do so is particularly important for emerging modalities such as differential X-ray phase-contrast tomography (D-XPCT), which are currently limited by these factors. An important application of D-XPCT is high-resolution imaging of biomedical samples. However, reconstructing high-resolution images from few-view tomographic measurements remains a challenging task. In this work, a two-step sub-space reconstruction strategy is proposed and investigated for use in few-view D-XPCT image reconstruction. It is demonstrated that the resulting iterative algorithm can mitigate the high-frequency information loss caused by data incompleteness and produce images that have better preserved high spatial frequency content than those produced by use of a conventional penalized least squares (PLS) estimator.
DEFF Research Database (Denmark)
Zhu, Yansong; Jha, Abhinav K.; Dreyer, Jakob K.
2017-01-01
Fluorescence molecular tomography (FMT) is a promising tool for real time in vivo quantification of neurotransmission (NT) as we pursue in our BRAIN initiative effort. However, the acquired image data are noisy and the reconstruction problem is ill-posed. Further, while spatial sparsity of the NT...... matrix coherence. The resultant image data are input to a homotopy-based reconstruction strategy that exploits sparsity via ℓ1 regularization. The reconstructed image is then input to a maximum-likelihood expectation maximization (MLEM) algorithm that retains the sparseness of the input estimate...... and improves upon the quantitation by accurate Poisson noise modeling. The proposed reconstruction method was evaluated in a three-dimensional simulated setup with fluorescent sources in a cuboidal scattering medium with optical properties simulating human brain cortex (reduced scattering coefficient: 9.2 cm-1...
Fast multiview three-dimensional reconstruction method using cost volume filtering
Lee, Seung Joo; Park, Min Ki; Jang, In Yeop; Lee, Kwan H.
2014-03-01
As the number of customers who want to record three-dimensional (3-D) information using a mobile electronic device increases, it becomes more and more important to develop a method which quickly reconstructs a 3-D model from multiview images. A fast multiview-based 3-D reconstruction method is presented, which is suitable for the mobile environment by constructing a cost volume of the 3-D height field. This method consists of two steps: the construction of a reliable base surface and the recovery of shape details. In each step, the cost volume is constructed using photoconsistency and then it is filtered according to the multiscale. The multiscale-based cost volume filtering allows the 3-D reconstruction to maintain the overall shape and to preserve the shape details. We demonstrate the strength of the proposed method in terms of computation time, accuracy, and unconstrained acquisition environment.
Integration and interpolation of sampled waveforms
International Nuclear Information System (INIS)
Stearns, S.D.
1978-01-01
Methods for integrating, interpolating, and improving the signal-to-noise ratio of digitized waveforms are discussed with regard to seismic data from underground tests. The frequency-domain integration method and the digital interpolation method of Schafer and Rabiner are described and demonstrated using test data. The use of bandpass filtering for noise reduction is also demonstrated. With these methods, a backlog of seismic test data has been successfully processed
A low error reconstruction method for confocal holography to determine 3-dimensional properties
Energy Technology Data Exchange (ETDEWEB)
Jacquemin, P.B., E-mail: pbjacque@nps.edu [Mechanical Engineering, University of Victoria, EOW 548,800 Finnerty Road, Victoria, BC (Canada); Herring, R.A. [Mechanical Engineering, University of Victoria, EOW 548,800 Finnerty Road, Victoria, BC (Canada)
2012-06-15
A confocal holography microscope developed at the University of Victoria uniquely combines holography with a scanning confocal microscope to non-intrusively measure fluid temperatures in three-dimensions (Herring, 1997), (Abe and Iwasaki, 1999), (Jacquemin et al., 2005). The Confocal Scanning Laser Holography (CSLH) microscope was built and tested to verify the concept of 3D temperature reconstruction from scanned holograms. The CSLH microscope used a focused laser to non-intrusively probe a heated fluid specimen. The focused beam probed the specimen instead of a collimated beam in order to obtain different phase-shift data for each scan position. A collimated beam produced the same information for scanning along the optical propagation z-axis. No rotational scanning mechanisms were used in the CSLH microscope which restricted the scan angle to the cone angle of the probe beam. Limited viewing angle scanning from a single view point window produced a challenge for tomographic 3D reconstruction. The reconstruction matrices were either singular or ill-conditioned making reconstruction with significant error or impossible. Establishing boundary conditions with a particular scanning geometry resulted in a method of reconstruction with low error referred to as 'wily'. The wily reconstruction method can be applied to microscopy situations requiring 3D imaging where there is a single viewpoint window, a probe beam with high numerical aperture, and specified boundary conditions for the specimen. The issues and progress of the wily algorithm for the CSLH microscope are reported herein. -- Highlights: Black-Right-Pointing-Pointer Evaluation of an optical confocal holography device to measure 3D temperature of a heated fluid. Black-Right-Pointing-Pointer Processing of multiple holograms containing the cumulative refractive index through the fluid. Black-Right-Pointing-Pointer Reconstruction issues due to restricting angular scanning to the numerical aperture of the
A low error reconstruction method for confocal holography to determine 3-dimensional properties
International Nuclear Information System (INIS)
Jacquemin, P.B.; Herring, R.A.
2012-01-01
A confocal holography microscope developed at the University of Victoria uniquely combines holography with a scanning confocal microscope to non-intrusively measure fluid temperatures in three-dimensions (Herring, 1997), (Abe and Iwasaki, 1999), (Jacquemin et al., 2005). The Confocal Scanning Laser Holography (CSLH) microscope was built and tested to verify the concept of 3D temperature reconstruction from scanned holograms. The CSLH microscope used a focused laser to non-intrusively probe a heated fluid specimen. The focused beam probed the specimen instead of a collimated beam in order to obtain different phase-shift data for each scan position. A collimated beam produced the same information for scanning along the optical propagation z-axis. No rotational scanning mechanisms were used in the CSLH microscope which restricted the scan angle to the cone angle of the probe beam. Limited viewing angle scanning from a single view point window produced a challenge for tomographic 3D reconstruction. The reconstruction matrices were either singular or ill-conditioned making reconstruction with significant error or impossible. Establishing boundary conditions with a particular scanning geometry resulted in a method of reconstruction with low error referred to as “wily”. The wily reconstruction method can be applied to microscopy situations requiring 3D imaging where there is a single viewpoint window, a probe beam with high numerical aperture, and specified boundary conditions for the specimen. The issues and progress of the wily algorithm for the CSLH microscope are reported herein. -- Highlights: ► Evaluation of an optical confocal holography device to measure 3D temperature of a heated fluid. ► Processing of multiple holograms containing the cumulative refractive index through the fluid. ► Reconstruction issues due to restricting angular scanning to the numerical aperture of the beam. ► Minimizing tomographic reconstruction error by defining boundary
Koyama, Yasuaki; Wada, Takafumi; Lohman, Brandon D; Takamatsu, Yuka; Matsumoto, Junichi; Fujitani, Shigeki; Taira, Yasuhiko
2013-10-01
The objective of the study is to demonstrate the utility of near-infrared spectroscopy (NIRS) in evaluating chest compression (CC) quality in cardiac arrest (CA) patients as well as determine its prognosis predictive value. We present a nonconsecutive case series of adult patients with CA whose cardiopulmonary resuscitation (CPR) was monitored with NIRS and collected the total hemoglobin concentration change (ΔcHb), the tissue oxygen index (TOI), and the ΔTOI to assess CC quality in a noninvasive fashion. During CPR, ΔcHb displayed waveforms monitor, which we regarded as a surrogate for CC quality. Total hemoglobin concentration change waveforms responded accurately to variations or cessations of CCs. In addition, a TOI greater than 40% measured upon admission appears to be significant in predicting patient's outcome. Of 15 patients, 6 had a TOI greater than 40% measured upon admission, and 67% of the latter were in return of spontaneous circulation after CPR and were found to be significantly different between return of spontaneous circulation and death (P = .047; P < .05). Near-infrared spectroscopy reliably assesses the quality of CCs in patients with CA demonstrated by synchronous waveforms during CPR and possible prognostic predictive value, although further investigation is warranted. © 2013 Elsevier Inc. All rights reserved.
Lower Lip Reconstruction after Tumor Resection; a Single Author's Experience with Various Methods
International Nuclear Information System (INIS)
Rifaat, M.A.
2006-01-01
Background: Squamous cell carcinoma is the most frequently seen malignant tumor of the lower lip The more tissue is lost from the lip after tumor resection, the more challenging is the reconstruction. Many methods have been described, but each has its own advantages and its disadvantages. The author presents through his own clinical experience with lower lip reconstruction at tbe NCI, an evaluation of the commonly practiced techniques. Patients and Methods: Over a 3 year period from May 2002 till May 2005, 17 cases presented at the National Cancer Institute, Cairo University, with lower lip squamous cell carcinoma. The lesions involved various regions of the lower lip excluding the commissures. Following resection, the resulting defects ranged from 1/3 of lip to total lip loss. The age of the patients ranged from 28 to 67 years and they were 13 males and 4 females With regards to the reconstructive procedures used, Karapandzic technique (orbicularis oris myocutaneous flaps) was used in 7 patients, 3 of whom underwent secondary lower lip augmentation with upper lip switch flaps Primary Abbe (Lip switch) nap reconstruction was used in two patients, while 2 other patients were reconstructed with bilateral fan flaps with vermilion reconstruction by mucosal advancement in one case and tongue flap in the other The radial forearm free nap was used only in 2 cases, and direct wound closure was achieved in three cases. All patients were evaluated for early postoperative results emphasizing on flap viability and wound problems and for late results emphasizing on oral continence, microstomia, and aesthetic outcome, in addition to the usual oncological follow-up. Results: All flaps used in this study survived completely including the 2 free flaps. In the early postoperative period, minor wound breakdown occurred in all three cases reconstructed by utilizing adjacent cheek skin flaps, but all wounds healed spontaneously. The latter three cases Involved defects greater than 2
Energy Technology Data Exchange (ETDEWEB)
Chen, Xueli, E-mail: xlchen@xidian.edu.cn, E-mail: jimleung@mail.xidian.edu.cn; Yang, Defu; Zhang, Qitan; Liang, Jimin, E-mail: xlchen@xidian.edu.cn, E-mail: jimleung@mail.xidian.edu.cn [School of Life Science and Technology, Xidian University, Xi' an 710071 (China); Engineering Research Center of Molecular and Neuro Imaging, Ministry of Education (China)
2014-05-14
Even though bioluminescence tomography (BLT) exhibits significant potential and wide applications in macroscopic imaging of small animals in vivo, the inverse reconstruction is still a tough problem that has plagued researchers in a related area. The ill-posedness of inverse reconstruction arises from insufficient measurements and modeling errors, so that the inverse reconstruction cannot be solved directly. In this study, an l{sub 1/2} regularization based numerical method was developed for effective reconstruction of BLT. In the method, the inverse reconstruction of BLT was constrained into an l{sub 1/2} regularization problem, and then the weighted interior-point algorithm (WIPA) was applied to solve the problem through transforming it into obtaining the solution of a series of l{sub 1} regularizers. The feasibility and effectiveness of the proposed method were demonstrated with numerical simulations on a digital mouse. Stability verification experiments further illustrated the robustness of the proposed method for different levels of Gaussian noise.
Finite difference applied to the reconstruction method of the nuclear power density distribution
International Nuclear Information System (INIS)
Pessoa, Paulo O.; Silva, Fernando C.; Martinez, Aquilino S.
2016-01-01
Highlights: • A method for reconstruction of the power density distribution is presented. • The method uses discretization by finite differences of 2D neutrons diffusion equation. • The discretization is performed homogeneous meshes with dimensions of a fuel cell. • The discretization is combined with flux distributions on the four node surfaces. • The maximum errors in reconstruction occur in the peripheral water region. - Abstract: In this reconstruction method the two-dimensional (2D) neutron diffusion equation is discretized by finite differences, employed to two energy groups (2G) and meshes with fuel-pin cell dimensions. The Nodal Expansion Method (NEM) makes use of surface discontinuity factors of the node and provides for reconstruction method the effective multiplication factor of the problem and the four surface average fluxes in homogeneous nodes with size of a fuel assembly (FA). The reconstruction process combines the discretized 2D diffusion equation by finite differences with fluxes distribution on four surfaces of the nodes. These distributions are obtained for each surfaces from a fourth order one-dimensional (1D) polynomial expansion with five coefficients to be determined. The conditions necessary for coefficients determination are three average fluxes on consecutive surfaces of the three nodes and two fluxes in corners between these three surface fluxes. Corner fluxes of the node are determined using a third order 1D polynomial expansion with four coefficients. This reconstruction method uses heterogeneous nuclear parameters directly providing the heterogeneous neutron flux distribution and the detailed nuclear power density distribution within the FAs. The results obtained with this method has good accuracy and efficiency when compared with reference values.
Direct fourier method reconstruction based on unequally spaced fast fourier transform
International Nuclear Information System (INIS)
Wu Xiaofeng; Zhao Ming; Liu Li
2003-01-01
First, We give an Unequally Spaced Fast Fourier Transform (USFFT) method, which is more exact and theoretically more comprehensible than its former counterpart. Then, with an interesting interpolation scheme, we discusse how to apply USFFT to Direct Fourier Method (DFM) reconstruction of parallel projection data. At last, an emulation experiment result is given. (authors)
Reconstruction of prehistoric plant production and cooking practices by a new isotopic method
Energy Technology Data Exchange (ETDEWEB)
Hastorf, C A [California Univ., Los Angeles (USA). Dept. of Anthropology; DeNiro, M J [California Univ., Los Angeles (USA). Dept. of Earth and Space Sciences
1985-06-06
A new method is presented based on isotopic analysis of burnt organic matter, allowing the characterization of previously unidentifiable plant remains extracted from archaeological contexts. The method is used to reconstruct prehistoric production, preparation and consumption of plant foods, as well as the use of ceramic vessels, in the Upper Mantaro Valley region of the central Peruvian Andes.
Dockray, Samantha; Grant, Nina; Stone, Arthur A.; Kahneman, Daniel; Wardle, Jane; Steptoe, Andrew
2010-01-01
Measurement of affective states in everyday life is of fundamental importance in many types of quality of life, health, and psychological research. Ecological momentary assessment (EMA) is the recognized method of choice, but the respondent burden can be high. The day reconstruction method (DRM) was developed by Kahneman and colleagues ("Science,"…
System and method for image reconstruction, analysis, and/or de-noising
Laleg-Kirati, Taous-Meriem; Kaisserli, Zineb
2015-01-01
A method and system can analyze, reconstruct, and/or denoise an image. The method and system can include interpreting a signal as a potential of a Schrödinger operator, decomposing the signal into squared eigenfunctions, reducing a design parameter
Robust method for stator current reconstruction from DC link in a ...
African Journals Online (AJOL)
Using the switching signals and dc link current, this paper presents a new algorithm for the reconstruction of stator currents of an inverter-fed, three-phase induction motor drive. Unlike the classical and improved methods available in literature, the proposed method is neither based on pulse width modulation pattern ...
An assessment of particle filtering methods and nudging for climate state reconstructions
S. Dubinkina (Svetlana); H. Goosse
2013-01-01
htmlabstractUsing the climate model of intermediate complexity LOVECLIM in an idealized framework, we assess three data-assimilation methods for reconstructing the climate state. The methods are a nudging, a particle filter with sequential importance resampling, and a nudging proposal particle
Waveform digitizing at 500 MHz
International Nuclear Information System (INIS)
Atiya, M.; Ito, M.; Haggerty, J.; Ng, C.; Sippach, F.W.
1988-01-01
Experiment E787 at Brookhaven National Laboratory is designed to study the decay K + → π + ν/bar /nu// to a sensitivity of 2 /times/ 10 -10 . To achieve acceptable muon rejection it is necessary to couple traditional methods (range/energy/momentum correlation) with observation of the (π + → μ + ν, μ + → e + ν/bar /nu//) decay sequence in scintillator. We report on the design and construction of 200 channels of relatively low cost solid state waveform digitizers. The distinguishing features are: 8 bits dynamic range, 500 MHz sampling, zero suppression on the fly, deep memory (up to .5 msec), and fast readout time (100 μsec for the entire system). We report on data obtained during the February-May 1988 run showing performance of the system for the observation of the above decay. 8 figs
Waveform digitizing at 500 MHz
International Nuclear Information System (INIS)
Atiya, M.; Ito, M.; Haggerty, J.; Ng, C.; Sippach, F.W.
1988-01-01
Experiment E787 at Brookhaven National Laboratory is designed to study the decay K + → π + ν/bar /nu// to a sensitivity of 2 /times/ 10/sup /minus/10/. To achieve acceptable muon rejection it is necessary to couple traditional methods (range/energy/momentum correlation) with observation of the π + → μ + → e + ν/bar /nu// decay sequence in scintillator. We report on the design and construction of over 200 channels of relatively low cost solid state waveform digitizers. The distinguishing features are: 8 bits dynamic range, 500 MHz sampling, zero suppression on the fly, deep memory (up to .5 msec), and fast readout time (100 μsec for the entire system). We report on data obtained during the February--May 1988 run showing performance of the system for the observation of the above decay. 9 figs
Phase microscopy using light-field reconstruction method for cell observation.
Xiu, Peng; Zhou, Xin; Kuang, Cuifang; Xu, Yingke; Liu, Xu
2015-08-01
The refractive index (RI) distribution can serve as a natural label for undyed cell imaging. However, the majority of images obtained through quantitative phase microscopy is integrated along the illumination angle and cannot reflect additional information about the refractive map on a certain plane. Herein, a light-field reconstruction method to image the RI map within a depth of 0.2 μm is proposed. It records quantitative phase-delay images using a four-step phase shifting method in different directions and then reconstructs a similar scattered light field for the refractive sample on the focus plane. It can image the RI of samples, transparent cell samples in particular, in a manner similar to the observation of scattering characteristics. The light-field reconstruction method is therefore a powerful tool for use in cytobiology studies. Copyright © 2015 Elsevier Ltd. All rights reserved.
Method and apparatus for reconstructing in-cylinder pressure and correcting for signal decay
Huang, Jian
2013-03-12
A method comprises steps for reconstructing in-cylinder pressure data from a vibration signal collected from a vibration sensor mounted on an engine component where it can generate a signal with a high signal-to-noise ratio, and correcting the vibration signal for errors introduced by vibration signal charge decay and sensor sensitivity. The correction factors are determined as a function of estimated motoring pressure and the measured vibration signal itself with each of these being associated with the same engine cycle. Accordingly, the method corrects for charge decay and changes in sensor sensitivity responsive to different engine conditions to allow greater accuracy in the reconstructed in-cylinder pressure data. An apparatus is also disclosed for practicing the disclosed method, comprising a vibration sensor, a data acquisition unit for receiving the vibration signal, a computer processing unit for processing the acquired signal and a controller for controlling the engine operation based on the reconstructed in-cylinder pressure.
The e/h method of energy reconstruction for combined calorimeter
International Nuclear Information System (INIS)
Kul'chitskij, Yu.A.; Kuz'min, M.V.; Vinogradov, V.B.
1999-01-01
The new simple method of the energy reconstruction for a combined calorimeter, which we called the e/h method, is suggested. It uses only the known e/h ratios and the electron calibration constants and does not require the determination of any parameters by a minimization technique. The method has been tested on the basis of the 1996 test beam data of the ATLAS barrel combined calorimeter and demonstrated the correctness of the reconstruction of the mean values of energies. The obtained fractional energy resolution is [(58 ± 3)%/√E + (2.5 ± 0.3)%] O+ (1.7 ± 0.2) GeV/E. This algorithm can be used for the fast energy reconstruction in the first level trigger
Convergence analysis for column-action methods in image reconstruction
DEFF Research Database (Denmark)
Elfving, Tommy; Hansen, Per Christian; Nikazad, Touraj
2016-01-01
Column-oriented versions of algebraic iterative methods are interesting alternatives to their row-version counterparts: they converge to a least squares solution, and they provide a basis for saving computational work by skipping small updates. In this paper we consider the case of noise-free data....... We present a convergence analysis of the column algorithms, we discuss two techniques (loping and flagging) for reducing the work, and we establish some convergence results for methods that utilize these techniques. The performance of the algorithms is illustrated with numerical examples from...
Miéville, Frédéric A.; Ayestaran, Paul; Argaud, Christophe; Rizzo, Elena; Ou, Phalla; Brunelle, Francis; Gudinchet, François; Bochud, François; Verdun, Francis R.
2010-04-01
Adaptive Statistical Iterative Reconstruction (ASIR) is a new imaging reconstruction technique recently introduced by General Electric (GE). This technique, when combined with a conventional filtered back-projection (FBP) approach, is able to improve the image noise reduction. To quantify the benefits provided on the image quality and the dose reduction by the ASIR method with respect to the pure FBP one, the standard deviation (SD), the modulation transfer function (MTF), the noise power spectrum (NPS), the image uniformity and the noise homogeneity were examined. Measurements were performed on a control quality phantom when varying the CT dose index (CTDIvol) and the reconstruction kernels. A 64-MDCT was employed and raw data were reconstructed with different percentages of ASIR on a CT console dedicated for ASIR reconstruction. Three radiologists also assessed a cardiac pediatric exam reconstructed with different ASIR percentages using the visual grading analysis (VGA) method. For the standard, soft and bone reconstruction kernels, the SD is reduced when the ASIR percentage increases up to 100% with a higher benefit for low CTDIvol. MTF medium frequencies were slightly enhanced and modifications of the NPS shape curve were observed. However for the pediatric cardiac CT exam, VGA scores indicate an upper limit of the ASIR benefit. 40% of ASIR was observed as the best trade-off between noise reduction and clinical realism of organ images. Using phantom results, 40% of ASIR corresponded to an estimated dose reduction of 30% under pediatric cardiac protocol conditions. In spite of this discrepancy between phantom and clinical results, the ASIR method is as an important option when considering the reduction of radiation dose, especially for pediatric patients.
METHOD OF DETERMINING ECONOMICAL EFFICIENCY OF HOUSING STOCK RECONSTRUCTION IN A CITY
Directory of Open Access Journals (Sweden)
Petreneva Ol’ga Vladimirovna
2016-03-01
Full Text Available RECONSTRUCTION IN A CITY The demand in comfortable housing has always been very high. The building density is not the same in different regions and sometimes there is no land for new housing construction, especially in the central regions of cities. Moreover, in many cities cultural and historical centers remain, which create the historical appearance of the city, that’s why new construction is impossible in these regions. Though taking into account the depreciation and obsolescence, the operation life of many buildings come to an end, they fall into disrepair. In these cases there arises a question on the reconstruction of the existing residential, public and industrial buildings. The aim of the reconstruction is bringing the existing worn-out building stock into correspondence with technical, social and sanitary requirements and living standards and conditions. The authors consider the currency and reasons for reconstruction of residential buildings. They attempt to answer the question, what is more economical efficient: new construction or reconstruction of residential buildings. The article offers a method to calculate the efficiency of residential buildings reconstruction.
Gaining insight into food webs reconstructed by the inverse method
Kones, J.; Soetaert, K.E.R.; Van Oevelen, D.; Owino, J.; Mavuti, K.
2006-01-01
The use of the inverse method to analyze flow patterns of organic components in ecological systems has had wide application in ecological modeling. Through this approach, an infinite number of food web flows describing the food web and satisfying biological constraints are generated, from which one
Anatomical image-guided fluorescence molecular tomography reconstruction using kernel method
Baikejiang, Reheman; Zhao, Yue; Fite, Brett Z.; Ferrara, Katherine W.; Li, Changqing
2017-01-01
Abstract. Fluorescence molecular tomography (FMT) is an important in vivo imaging modality to visualize physiological and pathological processes in small animals. However, FMT reconstruction is ill-posed and ill-conditioned due to strong optical scattering in deep tissues, which results in poor spatial resolution. It is well known that FMT image quality can be improved substantially by applying the structural guidance in the FMT reconstruction. An approach to introducing anatomical information into the FMT reconstruction is presented using the kernel method. In contrast to conventional methods that incorporate anatomical information with a Laplacian-type regularization matrix, the proposed method introduces the anatomical guidance into the projection model of FMT. The primary advantage of the proposed method is that it does not require segmentation of targets in the anatomical images. Numerical simulations and phantom experiments have been performed to demonstrate the proposed approach’s feasibility. Numerical simulation results indicate that the proposed kernel method can separate two FMT targets with an edge-to-edge distance of 1 mm and is robust to false-positive guidance and inhomogeneity in the anatomical image. For the phantom experiments with two FMT targets, the kernel method has reconstructed both targets successfully, which further validates the proposed kernel method. PMID:28464120
International Nuclear Information System (INIS)
Zeile, Christian; Maione, Ivan A.
2015-01-01
Highlights: • An in operation force measurement system for the ITER EU HCPB TBM has been developed. • The force reconstruction methods are based on strain measurements on the attachment system. • An experimental setup and a corresponding mock-up have been built. • A set of test cases representing ITER relevant excitations has been used for validation. • The influence of modeling errors on the force reconstruction has been investigated. - Abstract: In order to reconstruct forces on the test blanket modules in ITER, two force reconstruction methods, the augmented Kalman filter and a model predictive controller, have been selected and developed to estimate the forces based on strain measurements on the attachment system. A dedicated experimental setup with a corresponding mock-up has been designed and built to validate these methods. A set of test cases has been defined to represent possible excitation of the system. It has been shown that the errors in the estimated forces mainly depend on the accuracy of the identified model used by the algorithms. Furthermore, it has been found that a minimum of 10 strain gauges is necessary to allow for a low error in the reconstructed forces.
Environment-based pin-power reconstruction method for homogeneous core calculations
International Nuclear Information System (INIS)
Leroyer, H.; Brosselard, C.; Girardi, E.
2012-01-01
Core calculation schemes are usually based on a classical two-step approach associated with assembly and core calculations. During the first step, infinite lattice assemblies calculations relying on a fundamental mode approach are used to generate cross-sections libraries for PWRs core calculations. This fundamental mode hypothesis may be questioned when dealing with loading patterns involving several types of assemblies (UOX, MOX), burnable poisons, control rods and burn-up gradients. This paper proposes a calculation method able to take into account the heterogeneous environment of the assemblies when using homogeneous core calculations and an appropriate pin-power reconstruction. This methodology is applied to MOX assemblies, computed within an environment of UOX assemblies. The new environment-based pin-power reconstruction is then used on various clusters of 3x3 assemblies showing burn-up gradients and UOX/MOX interfaces, and compared to reference calculations performed with APOLLO-2. The results show that UOX/MOX interfaces are much better calculated with the environment-based calculation scheme when compared to the usual pin-power reconstruction method. The power peak is always better located and calculated with the environment-based pin-power reconstruction method on every cluster configuration studied. This study shows that taking into account the environment in transport calculations can significantly improve the pin-power reconstruction so far as it is consistent with the core loading pattern. (authors)
International Nuclear Information System (INIS)
Freytag, D.R.; Haller, G.M.; Kang, H.; Wang, J.
1985-09-01
A Waveform Sampler Module (WSM) for the measurement of signal shapes coming from the multi-hit drift chambers of the SLAC SLC detector is described. The module uses a high speed, high resolution analog storage device (AMU) developed in collaboration between SLAC and Stanford University. The AMU devices together with high speed TTL clocking circuitry are packaged in a hybrid which is also suitable for mounting on the detector. The module is in CAMAC format and provides eight signal channels, each recording signal amplitude versus time in 512 cells at a sampling rate of up to 360 MHz. Data are digitized by a 12-bit ADC with a 1 μs conversion time and stored in an on-board memory accessible through CAMAC
A new optimization approach for source-encoding full-waveform inversion
Moghaddam, P.P.; Keers, H.; Herrmann, F.J.; Mulder, W.A.
2013-01-01
Waveform inversion is the method of choice for determining a highly heterogeneous subsurface structure. However, conventional waveform inversion requires that the wavefield for each source is computed separately. This makes it very expensive for realistic 3D seismic surveys. Source-encoding waveform
Statistically Consistent k-mer Methods for Phylogenetic Tree Reconstruction.
Allman, Elizabeth S; Rhodes, John A; Sullivant, Seth
2017-02-01
Frequencies of k-mers in sequences are sometimes used as a basis for inferring phylogenetic trees without first obtaining a multiple sequence alignment. We show that a standard approach of using the squared Euclidean distance between k-mer vectors to approximate a tree metric can be statistically inconsistent. To remedy this, we derive model-based distance corrections for orthologous sequences without gaps, which lead to consistent tree inference. The identifiability of model parameters from k-mer frequencies is also studied. Finally, we report simulations showing that the corrected distance outperforms many other k-mer methods, even when sequences are generated with an insertion and deletion process. These results have implications for multiple sequence alignment as well since k-mer methods are usually the first step in constructing a guide tree for such algorithms.
Reconstruction of the limit cycles by the delays method
International Nuclear Information System (INIS)
Castillo D, R.; Ortiz V, J.; Calleros M, G.
2003-01-01
The boiling water reactors (BWRs) are designed for usually to operate in a stable-lineal regime. In a limit cycle the behavior of the one system is no lineal-stable. In a BWR, instabilities of nuclear- thermohydraulics nature can take the reactor to a limit cycle. The limit cycles should to be avoided since the oscillations of power can cause thermal fatigue to the fuel and/or shroud. In this work the employment of the delays method is analyzed for its application in the detection of limit cycles in a nuclear power plant. The foundations of the method and it application to power signals to different operation conditions are presented. The analyzed signals are: to steady state, nuclear-thermohydraulic instability, a non linear transitory and, finally, failure of a controller plant . Among the main results it was found that the delays method can be applied to detect limit cycles in the power monitors of the BWR reactors. It was also found that the first zero of the autocorrelation function is an appropriate approach to select the delay in the detection of limit cycles, for the analyzed cases. (Author)
One step linear reconstruction method for continuous wave diffuse optical tomography
Ukhrowiyah, N.; Yasin, M.
2017-09-01
The method one step linear reconstruction method for continuous wave diffuse optical tomography is proposed and demonstrated for polyvinyl chloride based material and breast phantom. Approximation which used in this method is selecting regulation coefficient and evaluating the difference between two states that corresponding to the data acquired without and with a change in optical properties. This method is used to recovery of optical parameters from measured boundary data of light propagation in the object. The research is demonstrated by simulation and experimental data. Numerical object is used to produce simulation data. Chloride based material and breast phantom sample is used to produce experimental data. Comparisons of results between experiment and simulation data are conducted to validate the proposed method. The results of the reconstruction image which is produced by the one step linear reconstruction method show that the image reconstruction almost same as the original object. This approach provides a means of imaging that is sensitive to changes in optical properties, which may be particularly useful for functional imaging used continuous wave diffuse optical tomography of early diagnosis of breast cancer.
Hadron Energy Reconstruction for ATLAS Barrel Combined Calorimeter Using Non-Parametrical Method
Kulchitskii, Yu A
2000-01-01
Hadron energy reconstruction for the ATLAS barrel prototype combined calorimeter in the framework of the non-parametrical method is discussed. The non-parametrical method utilizes only the known e/h ratios and the electron calibration constants and does not require the determination of any parameters by a minimization technique. Thus, this technique lends itself to fast energy reconstruction in a first level trigger. The reconstructed mean values of the hadron energies are within \\pm1% of the true values and the fractional energy resolution is [(58\\pm 3)%{\\sqrt{GeV}}/\\sqrt{E}+(2.5\\pm0.3)%]\\bigoplus(1.7\\pm0.2) GeV/E. The value of the e/h ratio obtained for the electromagnetic compartment of the combined calorimeter is 1.74\\pm0.04. Results of a study of the longitudinal hadronic shower development are also presented.
International Nuclear Information System (INIS)
Devaux, J.Y.; Mazelier, L.; Lefkopoulos, D.
1997-01-01
We have earlier shown that the method of singular value decomposition (SVD) allows the image reconstruction in single-photon-tomography with precision higher than the classical method of filtered back-projections. Actually, the establishing of an elementary response matrix which incorporates both the photon attenuation phenomenon, the scattering, the translation non-invariance principle and the detector response, allows to take into account the totality of physical parameters of acquisition. By an non-consecutive optimized truncation of the singular values we have obtained a significant improvement in the efficiency of the regularization of bad conditioning of this problem. The present study aims at verifying the stability of this truncation under modifications of acquisition conditions. Two series of parameters were tested, first, those modifying the geometry of acquisition: the influence of rotation center, the asymmetric disposition of the elementary-volume sources against the detector and the precision of rotation angle, and secondly, those affecting the correspondence between the matrix and the space to be reconstructed: the effect of partial volume and a noise propagation in the experimental model. For the parameters which introduce a spatial distortion, the alteration of reconstruction has been, as expected, comparable to that observed with the classical reconstruction and proportional with the amplitude of shift from the normal one. In exchange, for the effect of partial volume and of noise, the study of truncation signature revealed a variation in the optimal choice of the conserved singular values but with no effect on the global precision of reconstruction
Analysis of dental root apical morphology: a new method for dietary reconstructions in primates.
Hamon, NoÉmie; Emonet, Edouard-Georges; Chaimanee, Yaowalak; Guy, Franck; Tafforeau, Paul; Jaeger, Jean-Jacques
2012-06-01
The reconstruction of paleo-diets is an important task in the study of fossil primates. Previously, paleo-diet reconstructions were performed using different methods based on extant primate models. In particular, dental microwear or isotopic analyses provided accurate reconstructions for some fossil primates. However, there is sometimes difficult or impossible to apply these methods to fossil material. Therefore, the development of new, independent methods of diet reconstructions is crucial to improve our knowledge of primates paleobiology and paleoecology. This study aims to investigate the correlation between tooth root apical morphology and diet in primates, and its potential for paleo-diet reconstructions. Dental roots are composed of two portions: the eruptive portion with a smooth and regular surface, and the apical penetrative portion which displays an irregular and corrugated surface. Here, the angle formed by these two portions (aPE), and the ratio of penetrative portion over total root length (PPI), are calculated for each mandibular tooth root. A strong correlation between these two variables and the proportion of some food types (fruits, leaves, seeds, animal matter, and vertebrates) in diet is found, allowing the use of tooth root apical morphology as a tool for dietary reconstructions in primates. The method was then applied to the fossil hominoid Khoratpithecus piriyai, from the Late Miocene of Thailand. The paleo-diet deduced from aPE and PPI is dominated by fruits (>50%), associated with animal matter (1-25%). Leaves, vertebrates and most probably seeds were excluded from the diet of Khoratpithecus, which is consistent with previous studies. Copyright © 2012 Wiley Periodicals, Inc.
Energy Technology Data Exchange (ETDEWEB)
Liu, Wenyang [Department of Bioengineering, University of California, Los Angeles, California 90095 (United States); Cheung, Yam; Sabouri, Pouya; Arai, Tatsuya J.; Sawant, Amit [Department of Radiation Oncology, University of Texas Southwestern, Dallas, Texas 75390 (United States); Ruan, Dan, E-mail: druan@mednet.ucla.edu [Department of Bioengineering, University of California, Los Angeles, California 90095 and Department of Radiation Oncology, University of California, Los Angeles, California 90095 (United States)
2015-11-15
Purpose: To accurately and efficiently reconstruct a continuous surface from noisy point clouds captured by a surface photogrammetry system (VisionRT). Methods: The authors have developed a level-set based surface reconstruction method on point clouds captured by a surface photogrammetry system (VisionRT). The proposed method reconstructs an implicit and continuous representation of the underlying patient surface by optimizing a regularized fitting energy, offering extra robustness to noise and missing measurements. By contrast to explicit/discrete meshing-type schemes, their continuous representation is particularly advantageous for subsequent surface registration and motion tracking by eliminating the need for maintaining explicit point correspondences as in discrete models. The authors solve the proposed method with an efficient narrowband evolving scheme. The authors evaluated the proposed method on both phantom and human subject data with two sets of complementary experiments. In the first set of experiment, the authors generated a series of surfaces each with different black patches placed on one chest phantom. The resulting VisionRT measurements from the patched area had different degree of noise and missing levels, since VisionRT has difficulties in detecting dark surfaces. The authors applied the proposed method to point clouds acquired under these different configurations, and quantitatively evaluated reconstructed surfaces by comparing against a high-quality reference surface with respect to root mean squared error (RMSE). In the second set of experiment, the authors applied their method to 100 clinical point clouds acquired from one human subject. In the absence of ground-truth, the authors qualitatively validated reconstructed surfaces by comparing the local geometry, specifically mean curvature distributions, against that of the surface extracted from a high-quality CT obtained from the same patient. Results: On phantom point clouds, their method
Directory of Open Access Journals (Sweden)
Feng Zhao
2014-10-01
Full Text Available A method for canopy Fluorescence Spectrum Reconstruction (FSR is proposed in this study, which can be used to retrieve the solar-induced canopy fluorescence spectrum over the whole chlorophyll fluorescence emission region from 640–850 nm. Firstly, the radiance of the solar-induced chlorophyll fluorescence (Fs at five absorption lines of the solar spectrum was retrieved by a Spectral Fitting Method (SFM. The Singular Vector Decomposition (SVD technique was then used to extract three basis spectra from a training dataset simulated by the model SCOPE (Soil Canopy Observation, Photochemistry and Energy fluxes. Finally, these basis spectra were linearly combined to reconstruct the Fs spectrum, and the coefficients of them were determined by Weighted Linear Least Squares (WLLS fitting with the five retrieved Fs values. Results for simulated datasets indicate that the FSR method could accurately reconstruct the Fs spectra from hyperspectral measurements acquired by instruments of high Spectral Resolution (SR and Signal to Noise Ratio (SNR. The FSR method was also applied to an experimental dataset acquired in a diurnal experiment. The diurnal change of the reconstructed Fs spectra shows that the Fs radiance around noon was higher than that in the morning and afternoon, which is consistent with former studies. Finally, the potential and limitations of this method are discussed.
Methods of reconstruction of multi-particle events in the new coordinate-tracking setup
Vorobyev, V. S.; Shutenko, V. V.; Zadeba, E. A.
2018-01-01
At the Unique Scientific Facility NEVOD (MEPhI), a large coordinate-tracking detector based on drift chambers for investigations of muon bundles generated by ultrahigh energy primary cosmic rays is being developed. One of the main characteristics of the bundle is muon multiplicity. Three methods of reconstruction of multiple events were investigated: the sequential search method, method of finding the straight line and method of histograms. The last method determines the number of tracks with the same zenith angle in the event. It is most suitable for the determination of muon multiplicity: because of a large distance to the point of generation of muons, their trajectories are quasiparallel. The paper presents results of application of three reconstruction methods to data from the experiment, and also first results of the detector operation.
Phase derivative method for reconstruction of slightly off-axis digital holograms.
Guo, Cheng-Shan; Wang, Ben-Yi; Sha, Bei; Lu, Yu-Jie; Xu, Ming-Yuan
2014-12-15
A phase derivative (PD) method is proposed for reconstruction of off-axis holograms. In this method, a phase distribution of the tested object wave constrained within 0 to pi radian is firstly worked out by a simple analytical formula; then it is corrected to its right range from -pi to pi according to the sign characteristics of its first-order derivative. A theoretical analysis indicates that this PD method is particularly suitable for reconstruction of slightly off-axis holograms because it only requires the spatial frequency of the reference beam larger than spatial frequency of the tested object wave in principle. In addition, because the PD method belongs to a pure local method with no need of any integral operation or phase shifting algorithm in process of the phase retrieval, it could have some advantages in reducing computer load and memory requirements to the image processing system. Some experimental results are given to demonstrate the feasibility of the method.
Liu, Wenyang; Cheung, Yam; Sabouri, Pouya; Arai, Tatsuya J; Sawant, Amit; Ruan, Dan
2015-11-01
To accurately and efficiently reconstruct a continuous surface from noisy point clouds captured by a surface photogrammetry system (VisionRT). The authors have developed a level-set based surface reconstruction method on point clouds captured by a surface photogrammetry system (VisionRT). The proposed method reconstructs an implicit and continuous representation of the underlying patient surface by optimizing a regularized fitting energy, offering extra robustness to noise and missing measurements. By contrast to explicit/discrete meshing-type schemes, their continuous representation is particularly advantageous for subsequent surface registration and motion tracking by eliminating the need for maintaining explicit point correspondences as in discrete models. The authors solve the proposed method with an efficient narrowband evolving scheme. The authors evaluated the proposed method on both phantom and human subject data with two sets of complementary experiments. In the first set of experiment, the authors generated a series of surfaces each with different black patches placed on one chest phantom. The resulting VisionRT measurements from the patched area had different degree of noise and missing levels, since VisionRT has difficulties in detecting dark surfaces. The authors applied the proposed method to point clouds acquired under these different configurations, and quantitatively evaluated reconstructed surfaces by comparing against a high-quality reference surface with respect to root mean squared error (RMSE). In the second set of experiment, the authors applied their method to 100 clinical point clouds acquired from one human subject. In the absence of ground-truth, the authors qualitatively validated reconstructed surfaces by comparing the local geometry, specifically mean curvature distributions, against that of the surface extracted from a high-quality CT obtained from the same patient. On phantom point clouds, their method achieved submillimeter
Waveform Catalog, Extreme Mass Ratio Binary (Capture)
National Aeronautics and Space Administration — Numerically-generated gravitational waveforms for circular inspiral into Kerr black holes. These waveforms were developed using Scott Hughes' black hole perturbation...
Source-independent elastic waveform inversion using a logarithmic wavefield
Choi, Yun Seok
2012-01-01
The logarithmic waveform inversion has been widely developed and applied to some synthetic and real data. In most logarithmic waveform inversion algorithms, the subsurface velocities are updated along with the source estimation. To avoid estimating the source wavelet in the logarithmic waveform inversion, we developed a source-independent logarithmic waveform inversion algorithm. In this inversion algorithm, we first normalize the wavefields with the reference wavefield to remove the source wavelet, and then take the logarithm of the normalized wavefields. Based on the properties of the logarithm, we define three types of misfit functions using the following methods: combination of amplitude and phase, amplitude-only, and phase-only. In the inversion, the gradient is computed using the back-propagation formula without directly calculating the Jacobian matrix. We apply our algorithm to noise-free and noise-added synthetic data generated for the modified version of elastic Marmousi2 model, and compare the results with those of the source-estimation logarithmic waveform inversion. For the noise-free data, the source-independent algorithms yield velocity models close to true velocity models. For random-noise data, the source-estimation logarithmic waveform inversion yields better results than the source-independent method, whereas for coherent-noise data, the results are reversed. Numerical results show that the source-independent and source-estimation logarithmic waveform inversion methods have their own merits for random- and coherent-noise data. © 2011.
SURFACE FITTING FILTERING OF LIDAR POINT CLOUD WITH WAVEFORM INFORMATION
Directory of Open Access Journals (Sweden)
S. Xing
2017-09-01
Full Text Available Full-waveform LiDAR is an active technology of photogrammetry and remote sensing. It provides more detailed information about objects along the path of a laser pulse than discrete-return topographic LiDAR. The point cloud and waveform information with high quality can be obtained by waveform decomposition, which could make contributions to accurate filtering. The surface fitting filtering method with waveform information is proposed to present such advantage. Firstly, discrete point cloud and waveform parameters are resolved by global convergent Levenberg Marquardt decomposition. Secondly, the ground seed points are selected, of which the abnormal ones are detected by waveform parameters and robust estimation. Thirdly, the terrain surface is fitted and the height difference threshold is determined in consideration of window size and mean square error. Finally, the points are classified gradually with the rising of window size. The filtering process is finished until window size is larger than threshold. The waveform data in urban, farmland and mountain areas from “WATER (Watershed Allied Telemetry Experimental Research” are selected for experiments. Results prove that compared with traditional method, the accuracy of point cloud filtering is further improved and the proposed method has highly practical value.
Correlation based method for comparing and reconstructing quasi-identical two-dimensional structures
International Nuclear Information System (INIS)
Mejia-Barbosa, Y.
2000-03-01
We show a method for comparing and reconstructing two similar amplitude-only structures, which are composed by the same number of identical apertures. The structures are two-dimensional and differ only in the location of one of the apertures. The method is based on a subtraction algorithm, which involves the auto-correlations and cross-correlation functions of the compared structures. Experimental results illustrate the feasibility of the method. (author)
McCloskey, Rosemary M.; Liang, Richard H.; Harrigan, P. Richard; Brumme, Zabrina L.
2014-01-01
ABSTRACT A population of human immunodeficiency virus (HIV) within a host often descends from a single transmitted/founder virus. The high mutation rate of HIV, coupled with long delays between infection and diagnosis, make isolating and characterizing this strain a challenge. In theory, ancestral reconstruction could be used to recover this strain from sequences sampled in chronic infection; however, the accuracy of phylogenetic techniques in this context is unknown. To evaluate the accuracy of these methods, we applied ancestral reconstruction to a large panel of published longitudinal clonal and/or single-genome-amplification HIV sequence data sets with at least one intrapatient sequence set sampled within 6 months of infection or seroconversion (n = 19,486 sequences, median [interquartile range] = 49 [20 to 86] sequences/set). The consensus of the earliest sequences was used as the best possible estimate of the transmitted/founder. These sequences were compared to ancestral reconstructions from sequences sampled at later time points using both phylogenetic and phylogeny-naive methods. Overall, phylogenetic methods conferred a 16% improvement in reproducing the consensus of early sequences, compared to phylogeny-naive methods. This relative advantage increased with intrapatient sequence diversity (P reconstructing ancestral indel variation, especially within indel-rich regions of the HIV genome. Although further improvements are needed, our results indicate that phylogenetic methods for ancestral reconstruction significantly outperform phylogeny-naive alternatives, and we identify experimental conditions and study designs that can enhance accuracy of transmitted/founder virus reconstruction. IMPORTANCE When HIV is transmitted into a new host, most of the viruses fail to infect host cells. Consequently, an HIV infection tends to be descended from a single “founder” virus. A priority target for the vaccine research, these transmitted/founder viruses are
Anatomic and histological characteristics of vagina reconstructed by McIndoe method
Directory of Open Access Journals (Sweden)
Kozarski Jefta
2009-01-01
Full Text Available Background/Aim. Congenital absence of vagina is known from ancient times of Greek. According to the literature data, incidence is 1/4 000 to 1/20 000. Treatment of this anomaly includes non-operative and operative procedures. McIndoe procedure uses split skin graft by Thiersch. The aim of this study was to establish anatomic and histological characteristics of vagina reconstructed by McIndoe method in Mayer Küster-Rockitansky Hauser (MKRH syndrome and compare them with normal vagina. Methods. The study included 21 patients of 18 and more years with congenital anomaly known as aplasio vaginae within the Mayer Küster-Rockitansky Hauser syndrome. The patients were operated on by the plastic surgeon using the McIndoe method. The study was a retrospective review of the data from the history of the disease, objective and gynecological examination and cytological analysis of native preparations of vaginal stain (Papanicolau. Comparatively, 21 females of 18 and more years with normal vaginas were also studied. All the subjects were divided into the groups R (reconstructed and C (control and the subgroups according to age up to 30 years (1 R, 1C, from 30 to 50 (2R, 2C, and over 50 (3R, 3C. Statistical data processing was performed by using the Student's t-test and Mann-Writney U-test. A value of p < 0.05 was considered statistically significant. Results. The results show that there are differences in the depth and the wideness of reconstructed vagina, but the obtained values are still in the range of normal ones. Cytological differences between a reconstructed and the normal vagina were found. Conclusion. A reconstructed vagina is smaller than the normal one regarding depth and width, but within the range of normal values. A split skin graft used in the reconstruction, keeps its own cytological, i.e. histological and, so, biological characteristics.
Energy Technology Data Exchange (ETDEWEB)
Garreta, Vincent; Guiot, Joel; Hely, Christelle [CEREGE, UMR 6635, CNRS, Universite Aix-Marseille, Europole de l' Arbois, Aix-en-Provence (France); Miller, Paul A.; Sykes, Martin T. [Lund University, Department of Physical Geography and Ecosystems Analysis, Geobiosphere Science Centre, Lund (Sweden); Brewer, Simon [Universite de Liege, Institut d' Astrophysique et de Geophysique, Liege (Belgium); Litt, Thomas [University of Bonn, Paleontological Institute, Bonn (Germany)
2010-08-15
Climate reconstructions from data sensitive to past climates provide estimates of what these climates were like. Comparing these reconstructions with simulations from climate models allows to validate the models used for future climate prediction. It has been shown that for fossil pollen data, gaining estimates by inverting a vegetation model allows inclusion of past changes in carbon dioxide values. As a new generation of dynamic vegetation model is available we have developed an inversion method for one model, LPJ-GUESS. When this novel method is used with high-resolution sediment it allows us to bypass the classic assumptions of (1) climate and pollen independence between samples and (2) equilibrium between the vegetation, represented as pollen, and climate. Our dynamic inversion method is based on a statistical model to describe the links among climate, simulated vegetation and pollen samples. The inversion is realised thanks to a particle filter algorithm. We perform a validation on 30 modern European sites and then apply the method to the sediment core of Meerfelder Maar (Germany), which covers the Holocene at a temporal resolution of approximately one sample per 30 years. We demonstrate that reconstructed temperatures are constrained. The reconstructed precipitation is less well constrained, due to the dimension considered (one precipitation by season), and the low sensitivity of LPJ-GUESS to precipitation changes. (orig.)
International Nuclear Information System (INIS)
Milechina, L.; Cederwall, B.
2003-01-01
Gamma-ray tracking, a new detection technique for nuclear spectroscopy, requires efficient algorithms for reconstructing the interaction paths of multiple γ rays in a detector volume. In the present work, we discuss the effect of the atomic electron momentum distribution in Ge as well as employment of different types of figure-of-merit within the context of the so called backtracking method
Three-dimensional Reconstruction Method Study Based on Interferometric Circular SAR
Directory of Open Access Journals (Sweden)
Hou Liying
2016-10-01
Full Text Available Circular Synthetic Aperture Radar (CSAR can acquire targets’ scattering information in all directions by a 360° observation, but a single-track CSAR cannot efficiently obtain height scattering information for a strong directive scatter. In this study, we examine the typical target of the three-dimensional circular SAR interferometry theoryand validate the theory in a darkroom experiment. We present a 3D reconstruction of the actual tank metal model of interferometric CSAR for the first time, verify the validity of the method, and demonstrate the important potential applications of combining 3D reconstruction with omnidirectional observation.
Inverse Heat Conduction Methods in the CHAR Code for Aerothermal Flight Data Reconstruction
Oliver, A. Brandon; Amar, Adam J.
2016-01-01
Reconstruction of flight aerothermal environments often requires the solution of an inverse heat transfer problem, which is an ill-posed problem of determining boundary conditions from discrete measurements in the interior of the domain. This paper will present the algorithms implemented in the CHAR code for use in reconstruction of EFT-1 flight data and future testing activities. Implementation details will be discussed, and alternative hybrid-methods that are permitted by the implementation will be described. Results will be presented for a number of problems.
Baxes, Gregory A. (Inventor); Linger, Timothy C. (Inventor)
2011-01-01
Systems and methods are provided for progressive mesh storage and reconstruction using wavelet-encoded height fields. A method for progressive mesh storage includes reading raster height field data, and processing the raster height field data with a discrete wavelet transform to generate wavelet-encoded height fields. In another embodiment, a method for progressive mesh storage includes reading texture map data, and processing the texture map data with a discrete wavelet transform to generate wavelet-encoded texture map fields. A method for reconstructing a progressive mesh from wavelet-encoded height field data includes determining terrain blocks, and a level of detail required for each terrain block, based upon a viewpoint. Triangle strip constructs are generated from vertices of the terrain blocks, and an image is rendered utilizing the triangle strip constructs. Software products that implement these methods are provided.
Tomography reconstruction methods for damage diagnosis of wood structure in construction field
Qiu, Qiwen; Lau, Denvid
2018-03-01
The structural integrity of wood building element plays a critical role in the public safety, which requires effective methods for diagnosis of internal damage inside the wood body. Conventionally, the non-destructive testing (NDT) methods such as X-ray computed tomography, thermography, radar imaging reconstruction method, ultrasonic tomography, nuclear magnetic imaging techniques, and sonic tomography have been used to obtain the information about the internal structure of wood. In this paper, the applications, advantages and disadvantages of these traditional tomography methods are reviewed. Additionally, the present article gives an overview of recently developed tomography approach that relies on the use of mechanical and electromagnetic waves for assessing the structural integrity of wood buildings. This developed tomography reconstruction method is believed to provide a more accurate, reliable, and comprehensive assessment of wood structural integrity
Full Waveform Inversion for Reservoir Characterization - A Synthetic Study
Zabihi Naeini, E.; Kamath, N.; Tsvankin, I.; Alkhalifah, Tariq Ali
2017-01-01
Most current reservoir-characterization workflows are based on classic amplitude-variation-with-offset (AVO) inversion techniques. Although these methods have generally served us well over the years, here we examine full-waveform inversion (FWI
Solanki, K.; Hauksson, E.; Kanamori, H.; Wu, Y.; Heaton, T.; Boese, M.
2007-12-01
We have implemented an on-site early warning algorithm using the infrastructure of the Caltech/USGS Southern California Seismic Network (SCSN). We are evaluating the real-time performance of the software system and the algorithm for rapid assessment of earthquakes. In addition, we are interested in understanding what parts of the SCSN need to be improved to make early warning practical. Our EEW processing system is composed of many independent programs that process waveforms in real-time. The codes were generated by using a software framework. The Pd (maximum displacement amplitude of P wave during the first 3sec) and Tau-c (a period parameter during the first 3 sec) values determined during the EEW processing are being forwarded to the California Integrated Seismic Network (CISN) web page for independent evaluation of the results. The on-site algorithm measures the amplitude of the P-wave (Pd) and the frequency content of the P-wave during the first three seconds (Tau-c). The Pd and the Tau-c values make it possible to discriminate between a variety of events such as large distant events, nearby small events, and potentially damaging nearby events. The Pd can be used to infer the expected maximum ground shaking. The method relies on data from a single station although it will become more reliable if readings from several stations are associated. To eliminate false triggers from stations with high background noise level, we have created per station Pd threshold configuration for the Pd/Tau-c algorithm. To determine appropriate values for the Pd threshold we calculate Pd thresholds for stations based on the information from the EEW logs. We have operated our EEW test system for about a year and recorded numerous earthquakes in the magnitude range from M3 to M5. Two recent examples are a M4.5 earthquake near Chatsworth and a M4.7 earthquake near Elsinore. In both cases, the Pd and Tau-c parameters were determined successfully within 10 to 20 sec of the arrival of the
Optical properties reconstruction using the adjoint method based on the radiative transfer equation
Addoum, Ahmad; Farges, Olivier; Asllanaj, Fatmir
2018-01-01
An efficient algorithm is proposed to reconstruct the spatial distribution of optical properties in heterogeneous media like biological tissues. The light transport through such media is accurately described by the radiative transfer equation in the frequency-domain. The adjoint method is used to efficiently compute the objective function gradient with respect to optical parameters. Numerical tests show that the algorithm is accurate and robust to retrieve simultaneously the absorption μa and scattering μs coefficients for lowly and highly absorbing medium. Moreover, the simultaneous reconstruction of μs and the anisotropy factor g of the Henyey-Greenstein phase function is achieved with a reasonable accuracy. The main novelty in this work is the reconstruction of g which might open the possibility to image this parameter in tissues as an additional contrast agent in optical tomography.
Rehanging Reynolds at the British Institution: Methods for Reconstructing Ephemeral Displays
Directory of Open Access Journals (Sweden)
Catherine Roach
2016-11-01
Full Text Available Reconstructions of historic exhibitions made with current technologies can present beguiling illusions, but they also put us in danger of recreating the past in our own image. This article and the accompanying reconstruction explore methods for representing lost displays, with an emphasis on visualizing uncertainty, illuminating process, and understanding the mediated nature of period images. These issues are highlighted in a partial recreation of a loan show held at the British Institution, London, in 1823, which featured the works of Sir Joshua Reynolds alongside continental old masters. This recreation demonstrates how speculative reconstructions can nonetheless shed light on ephemeral displays, revealing powerful visual and conceptual dialogues that took place on the crowded walls of nineteenth-century exhibitions.
Fast data reconstructed method of Fourier transform imaging spectrometer based on multi-core CPU
Yu, Chunchao; Du, Debiao; Xia, Zongze; Song, Li; Zheng, Weijian; Yan, Min; Lei, Zhenggang
2017-10-01
Imaging spectrometer can gain two-dimensional space image and one-dimensional spectrum at the same time, which shows high utility in color and spectral measurements, the true color image synthesis, military reconnaissance and so on. In order to realize the fast reconstructed processing of the Fourier transform imaging spectrometer data, the paper designed the optimization reconstructed algorithm with OpenMP parallel calculating technology, which was further used for the optimization process for the HyperSpectral Imager of `HJ-1' Chinese satellite. The results show that the method based on multi-core parallel computing technology can control the multi-core CPU hardware resources competently and significantly enhance the calculation of the spectrum reconstruction processing efficiency. If the technology is applied to more cores workstation in parallel computing, it will be possible to complete Fourier transform imaging spectrometer real-time data processing with a single computer.
Fast Prediction and Evaluation of Gravitational Waveforms Using Surrogate Models
Field, Scott E.; Galley, Chad R.; Hesthaven, Jan S.; Kaye, Jason; Tiglio, Manuel
2014-07-01
We propose a solution to the problem of quickly and accurately predicting gravitational waveforms within any given physical model. The method is relevant for both real-time applications and more traditional scenarios where the generation of waveforms using standard methods can be prohibitively expensive. Our approach is based on three offline steps resulting in an accurate reduced order model in both parameter and physical dimensions that can be used as a surrogate for the true or fiducial waveform family. First, a set of m parameter values is determined using a greedy algorithm from which a reduced basis representation is constructed. Second, these m parameters induce the selection of m time values for interpolating a waveform time series using an empirical interpolant that is built for the fiducial waveform family. Third, a fit in the parameter dimension is performed for the waveform's value at each of these m times. The cost of predicting L waveform time samples for a generic parameter choice is of order O(mL+mcfit) online operations, where cfit denotes the fitting function operation count and, typically, m ≪L. The result is a compact, computationally efficient, and accurate surrogate model that retains the original physics of the fiducial waveform family while also being fast to evaluate. We generate accurate surrogate models for effective-one-body waveforms of nonspinning binary black hole coalescences with durations as long as 105M, mass ratios from 1 to 10, and for multiple spherical harmonic modes. We find that these surrogates are more than 3 orders of magnitude faster to evaluate as compared to the cost of generating effective-one-body waveforms in standard ways. Surrogate model building for other waveform families and models follows the same steps and has the same low computational online scaling cost. For expensive numerical simulations of binary black hole coalescences, we thus anticipate extremely large speedups in generating new waveforms with a
Han, Miaomiao; Guo, Zhirong; Liu, Haifeng; Li, Qinghua
2018-05-01
Tomographic Gamma Scanning (TGS) is a method used for the nondestructive assay of radioactive wastes. In TGS, the actual irregular edge voxels are regarded as regular cubic voxels in the traditional treatment method. In this study, in order to improve the performance of TGS, a novel edge treatment method is proposed that considers the actual shapes of these voxels. The two different edge voxel treatment methods were compared by computing the pixel-level relative errors and normalized mean square errors (NMSEs) between the reconstructed transmission images and the ideal images. Both methods were coupled with two different interative algorithms comprising Algebraic Reconstruction Technique (ART) with a non-negativity constraint and Maximum Likelihood Expectation Maximization (MLEM). The results demonstrated that the traditional method for edge voxel treatment can introduce significant error and that the real irregular edge voxel treatment method can improve the performance of TGS by obtaining better transmission reconstruction images. With the real irregular edge voxel treatment method, MLEM algorithm and ART algorithm can be comparable when assaying homogenous matrices, but MLEM algorithm is superior to ART algorithm when assaying heterogeneous matrices. Copyright © 2018 Elsevier Ltd. All rights reserved.
Generation of correlated finite alphabet waveforms using gaussian random variables
Jardak, Seifallah
2014-09-01
Correlated waveforms have a number of applications in different fields, such as radar and communication. It is very easy to generate correlated waveforms using infinite alphabets, but for some of the applications, it is very challenging to use them in practice. Moreover, to generate infinite alphabet constant envelope correlated waveforms, the available research uses iterative algorithms, which are computationally very expensive. In this work, we propose simple novel methods to generate correlated waveforms using finite alphabet constant and non-constant-envelope symbols. To generate finite alphabet waveforms, the proposed method map the Gaussian random variables onto the phase-shift-keying, pulse-amplitude, and quadrature-amplitude modulation schemes. For such mapping, the probability-density-function of Gaussian random variables is divided into M regions, where M is the number of alphabets in the corresponding modulation scheme. By exploiting the mapping function, the relationship between the cross-correlation of Gaussian and finite alphabet symbols is derived. To generate equiprobable symbols, the area of each region is kept same. If the requirement is to have each symbol with its own unique probability, the proposed scheme allows us that as well. Although, the proposed scheme is general, the main focus of this paper is to generate finite alphabet waveforms for multiple-input multiple-output radar, where correlated waveforms are used to achieve desired beampatterns. © 2014 IEEE.
Direct fourier methods in 3D-reconstruction from cone-beam data
International Nuclear Information System (INIS)
Axelsson, C.
1994-01-01
The problem of 3D-reconstruction is encountered in both medical and industrial applications of X-ray tomography. A method able to utilize a complete set of projections complying with Tuys condition was proposed by Grangeat. His method is mathematically exact and consists of two distinct phases. In phase 1 cone-beam projection data are used to produce the derivative of the radon transform. In phase 2, after interpolation, the radon transform data are used to reconstruct the three-dimensional object function. To a large extent our method is an extension of the Grangeat method. Our aim is to reduce the computational complexity, i.e. to produce a faster method. The most taxing procedure during phase 1 is computation of line-integrals in the detector plane. By applying the direct Fourier method in reverse for this computation, we reduce the complexity of phase 1 from O(N 4 ) to O(N 3 logN). Phase 2 can be performed either as a straight 3D-reconstruction or as a sequence of two 2D-reconstructions in vertical and horizontal planes, respectively. Direct Fourier methods can be applied for the 2D- and for the 3D-reconstruction, which reduces the complexity of phase 2 from O(N 4 ) to O(N 3 logN) as well. In both cases, linogram techniques are applied. For 3D-reconstruction the inversion formula contains the second derivative filter instead of the well-known ramp-filter employed in the 2D-case. The derivative filter is more well-behaved than the 2D ramp-filter. This implies that less zeropadding is necessary which brings about a further reduction of the computational efforts. The method has been verified by experiments on simulated data. The image quality is satisfactory and independent of cone-beam angles. For a 512 3 volume we estimate that our method is ten times faster than Grangeats method
Directory of Open Access Journals (Sweden)
ROXANA VĂIDEAN
2015-10-01
Full Text Available Debris Flow Activity Reconstruction Using Dendrogeomorphological Methods. Study Case (Piule Iorgovanu Mountains. Debris flows are one of the most destructive mass-movements that manifest in the mountainous regions around the world. As they usually occur on the steep slopes of the mountain streams where human settlements are scarce, they are hardly monitored. But when they do interact with builtup areas or transportation corridors they cause enormous damages and even casualties. The rise of human pressure in the hazardous regions has led to an increase in the severity of the negative consequences related to debris flows. Consequently, a complete database for hazard assessment of the areas which show evidence of debris flow activity is needed. Because of the lack of archival records knowledge about their frequency remains poor. One of the most precise methods used in the reconstruction of past debris flow activity are dendrogeomorphological methods. Using growth anomalies of the affected trees, a valuable event chronology can be obtained. Therefore, it is the purpose of this study to reconstruct debris flow activity on a small catchment located on the northern slope of Piule Iorgovanu Mountains. The trees growing near the channel of transport and on the debris fan, exhibit different types of disturbances. A number of 98 increment cores, 19 cross-sections and 1 semi-transversal cross-section was used. Based on the growth anomalies identified in the samples there were reconstructed a number of 19 events spanning a period of almost a century.
Influence of image reconstruction methods on statistical parametric mapping of brain PET images
International Nuclear Information System (INIS)
Yin Dayi; Chen Yingmao; Yao Shulin; Shao Mingzhe; Yin Ling; Tian Jiahe; Cui Hongyan
2007-01-01
Objective: Statistic parametric mapping (SPM) was widely recognized as an useful tool in brain function study. The aim of this study was to investigate if imaging reconstruction algorithm of PET images could influence SPM of brain. Methods: PET imaging of whole brain was performed in six normal volunteers. Each volunteer had two scans with true and false acupuncturing. The PET scans were reconstructed using ordered subsets expectation maximization (OSEM) and filtered back projection (FBP) with 3 varied parameters respectively. The images were realigned, normalized and smoothed using SPM program. The difference between true and false acupuncture scans was tested using a matched pair t test at every voxel. Results: (1) SPM corrected multiple comparison (P corrected uncorrected <0.001): SPM derived from the images with different reconstruction method were different. The largest difference, in number and position of the activated voxels, was noticed between FBP and OSEM re- construction algorithm. Conclusions: The method of PET image reconstruction could influence the results of SPM uncorrected multiple comparison. Attention should be paid when the conclusion was drawn using SPM uncorrected multiple comparison. (authors)
A Stochastic Geometry Method for Pylon Reconstruction from Airborne LiDAR Data
Directory of Open Access Journals (Sweden)
Bo Guo
2016-03-01
Full Text Available Object detection and reconstruction from remotely sensed data are active research topic in photogrammetric and remote sensing communities. Power engineering device monitoring by detecting key objects is important for power safety. In this paper, we introduce a novel method for the reconstruction of self-supporting pylons widely used in high voltage power-line systems from airborne LiDAR data. Our work constructs pylons from a library of 3D parametric models, which are represented using polyhedrons based on stochastic geometry. Firstly, laser points of pylons are extracted from the dataset using an automatic classification method. An energy function made up of two terms is then defined: the first term measures the adequacy of the objects with respect to the data, and the second term has the ability to favor or penalize certain configurations based on prior knowledge. Finally, estimation is undertaken by minimizing the energy using simulated annealing. We use a Markov Chain Monte Carlo sampler, leading to an optimal configuration of objects. Two main contributions of this paper are: (1 building a framework for automatic pylon reconstruction; and (2 efficient global optimization. The pylons can be precisely reconstructed through energy optimization. Experiments producing convincing results validated the proposed method using a dataset of complex structure.
Anisotropic wave-equation traveltime and waveform inversion
Feng, Shihang
2016-09-06
The wave-equation traveltime and waveform inversion (WTW) methodology is developed to invert for anisotropic parameters in a vertical transverse isotropic (VTI) meidum. The simultaneous inversion of anisotropic parameters v0, ε and δ is initially performed using the wave-equation traveltime inversion (WT) method. The WT tomograms are then used as starting background models for VTI full waveform inversion. Preliminary numerical tests on synthetic data demonstrate the feasibility of this method for multi-parameter inversion.
High-speed fan-beam reconstruction using direct two-dimensional Fourier transform method
International Nuclear Information System (INIS)
Niki, Noboru; Mizutani, Toshio; Takahashi, Yoshizo; Inouye, Tamon.
1984-01-01
Since the first development of X-ray computer tomography (CT), various efforts have been made to obtain high quality of high-speed image. However, the development of high resolution CT and the ultra-high speed CT to be applied to hearts is still desired. The X-ray beam scanning method was already changed from the parallel beam system to the fan-beam system in order to greatly shorten the scanning time. Also, the filtered back projection (DFBP) method has been employed to directly processing fan-beam projection data as reconstruction method. Although the two-dimensional Fourier transform (TFT) method significantly faster than FBP method was proposed, it has not been sufficiently examined for fan-beam projection data. Thus, the ITFT method was investigated, which first executes rebinning algorithm to convert the fan-beam projection data to the parallel beam projection data, thereafter, uses two-dimensional Fourier transform. By this method, although high speed is expected, the reconstructed images might be degraded due to the adoption of rebinning algorithm. Therefore, the effect of the interpolation error of rebinning algorithm on the reconstructed images has been analyzed theoretically, and finally, the result of the employment of spline interpolation which allows the acquisition of high quality images with less errors has been shown by the numerical and visual evaluation based on simulation and actual data. Computation time was reduced to 1/15 for the image matrix of 512 and to 1/30 for doubled matrix. (Wakatsuki, Y.)
International Nuclear Information System (INIS)
Guo, Siyang; Lin, Jiarui; Yang, Linghui; Ren, Yongjie; Guo, Yin
2017-01-01
The workshop Measurement Position System (wMPS) is a distributed measurement system which is suitable for the large-scale metrology. However, there are some inevitable measurement problems in the shipbuilding industry, such as the restriction by obstacles and limited measurement range. To deal with these factors, this paper presents a method of reconstructing the spatial measurement network by mobile transmitter. A high-precision coordinate control network with more than six target points is established. The mobile measuring transmitter can be added into the measurement network using this coordinate control network with the spatial resection method. This method reconstructs the measurement network and broadens the measurement scope efficiently. To verify this method, two comparison experiments are designed with the laser tracker as the reference. The results demonstrate that the accuracy of point-to-point length is better than 0.4mm and the accuracy of coordinate measurement is better than 0.6mm. (paper)
Propagation compensation by waveform predistortion
Halpin, Thomas F.; Urkowitz, Harry; Maron, David E.
Certain modifications of the Cobra Dane radar are considered, particularly modernization of the waveform generator. For wideband waveforms, the dispersive effects of the ionosphere become increasingly significant. The technique of predistorting the transmitted waveform so that a linear chirp is received after two-way passage is one way to overcome that dispersion. This approach is maintained for the modified system, but with a specific predistortion waveform well suited to the modification. The appropriate form of predistortion was derived in an implicit form of time as a function of frequency. The exact form was approximated by Taylor series and pseudo-Chebyshev approximation. The latter proved better, as demonstrated by the resulting smaller loss in detection sensitivity, less coarsening of range resolution, and a lower peak sidelobe. The effects of error in determining the plasma delay constant were determined and are given in graphical form. A suggestion for in-place determination of the plasma delay constant is given.
System and method for image reconstruction, analysis, and/or de-noising
Laleg-Kirati, Taous-Meriem
2015-11-12
A method and system can analyze, reconstruct, and/or denoise an image. The method and system can include interpreting a signal as a potential of a Schrödinger operator, decomposing the signal into squared eigenfunctions, reducing a design parameter of the Schrödinger operator, analyzing discrete spectra of the Schrödinger operator and combining the analysis of the discrete spectra to construct the image.
International Nuclear Information System (INIS)
Tuna, U.; Johansson, J.; Ruotsalainen, U.
2014-01-01
The aim of the study was (1) to evaluate the reconstruction strategies with dynamic [ 11 C]-raclopride human positron emission tomography (PET) studies acquired from ECAT high-resolution research tomograph (HRRT) scanner and (2) to justify for the selected gap-filling method for analytical reconstruction with simulated phantom data. A new transradial bicubic interpolation method has been implemented to enable faster analytical 3D-reprojection (3DRP) reconstructions for the ECAT HRRT PET scanner data. The transradial bicubic interpolation method was compared to the other gap-filling methods visually and quantitatively using the numerical Shepp-Logan phantom. The performance of the analytical 3DRP reconstruction method with this new gap-filling method was evaluated in comparison with the iterative statistical methods: ordinary Poisson ordered subsets expectation maximization (OPOSEM) and resolution modeled OPOSEM methods. The image reconstruction strategies were evaluated using human data at different count statistics and consequently at different noise levels. In the assessments, 14 [ 11 C]-raclopride dynamic PET studies (test-retest studies of 7 healthy subjects) acquired from the HRRT PET scanner were used. Besides the visual comparisons of the methods, we performed regional quantitative evaluations over the cerebellum, caudate and putamen structures. We compared the regional time-activity curves (TACs), areas under the TACs and binding potential (BP ND ) values. The results showed that the new gap-filling method preserves the linearity of the 3DRP method. Results with the 3DRP after gap-filling method exhibited hardly any dependency on the count statistics (noise levels) in the sinograms while we observed changes in the quantitative results with the EM-based methods for different noise contamination in the data. With this study, we showed that 3DRP with transradial bicubic gap-filling method is feasible for the reconstruction of high-resolution PET data with
Restoration of the analytically reconstructed OpenPET images by the method of convex projections
Energy Technology Data Exchange (ETDEWEB)
Tashima, Hideaki; Murayama, Hideo; Yamaya, Taiga [National Institute of Radiological Sciences, Chiba (Japan); Katsunuma, Takayuki; Suga, Mikio [Chiba Univ. (Japan). Graduate School of Engineering; Kinouchi, Shoko [National Institute of Radiological Sciences, Chiba (Japan); Chiba Univ. (Japan). Graduate School of Engineering; Obi, Takashi [Tokyo Institute of Technology (Japan). Interdisciplinary Graduate School of Science and Engineering; Kudo, Hiroyuki [Tsukuba Univ. (Japan). Graduate School of Systems and Information Engineering
2011-07-01
We have proposed the OpenPET geometry which has gaps between detector rings and physically opened field-of-view. The image reconstruction of the OpenPET is classified into an incomplete problem because it does not satisfy the Orlov's condition. Even so, the simulation and experimental studies have shown that applying iterative methods such as the maximum likelihood expectation maximization (ML-EM) algorithm successfully reconstruct images in the gap area. However, the imaging process of the iterative methods in the OpenPET imaging is not clear. Therefore, the aim of this study is to analytically analyze the OpenPET imaging and estimate implicit constraints involved in the iterative methods. To apply explicit constraints in the OpenPET imaging, we used the method of convex projections for restoration of the images reconstructed by the analytical way in which low-frequency components are lost. Numerical simulations showed that the similar restoration effects are involved both in the ML-EM and the method of convex projections. Therefore, the iterative methods have advantageous effect of restoring lost frequency components of the OpenPET imaging. (orig.)
Quartet-net: a quartet-based method to reconstruct phylogenetic networks.
Yang, Jialiang; Grünewald, Stefan; Wan, Xiu-Feng
2013-05-01
Phylogenetic networks can model reticulate evolutionary events such as hybridization, recombination, and horizontal gene transfer. However, reconstructing such networks is not trivial. Popular character-based methods are computationally inefficient, whereas distance-based methods cannot guarantee reconstruction accuracy because pairwise genetic distances only reflect partial information about a reticulate phylogeny. To balance accuracy and computational efficiency, here we introduce a quartet-based method to construct a phylogenetic network from a multiple sequence alignment. Unlike distances that only reflect the relationship between a pair of taxa, quartets contain information on the relationships among four taxa; these quartets provide adequate capacity to infer a more accurate phylogenetic network. In applications to simulated and biological data sets, we demonstrate that this novel method is robust and effective in reconstructing reticulate evolutionary events and it has the potential to infer more accurate phylogenetic distances than other conventional phylogenetic network construction methods such as Neighbor-Joining, Neighbor-Net, and Split Decomposition. This method can be used in constructing phylogenetic networks from simple evolutionary events involving a few reticulate events to complex evolutionary histories involving a large number of reticulate events. A software called "Quartet-Net" is implemented and available at http://sysbio.cvm.msstate.edu/QuartetNet/.
A hybrid 3D SEM reconstruction method optimized for complex geologic material surfaces.
Yan, Shang; Adegbule, Aderonke; Kibbey, Tohren C G
2017-08-01
Reconstruction methods are widely used to extract three-dimensional information from scanning electron microscope (SEM) images. This paper presents a new hybrid reconstruction method that combines stereoscopic reconstruction with shape-from-shading calculations to generate highly-detailed elevation maps from SEM image pairs. The method makes use of an imaged glass sphere to determine the quantitative relationship between observed intensity and angles between the beam and surface normal, and the detector and surface normal. Two specific equations are derived to make use of image intensity information in creating the final elevation map. The equations are used together, one making use of intensities in the two images, the other making use of intensities within a single image. The method is specifically designed for SEM images captured with a single secondary electron detector, and is optimized to capture maximum detail from complex natural surfaces. The method is illustrated with a complex structured abrasive material, and a rough natural sand grain. Results show that the method is capable of capturing details such as angular surface features, varying surface roughness, and surface striations. Copyright © 2017 Elsevier Ltd. All rights reserved.
Wang, Qi; Wang, Huaxiang; Cui, Ziqiang; Yang, Chengyi
2012-11-01
Electrical impedance tomography (EIT) calculates the internal conductivity distribution within a body using electrical contact measurements. The image reconstruction for EIT is an inverse problem, which is both non-linear and ill-posed. The traditional regularization method cannot avoid introducing negative values in the solution. The negativity of the solution produces artifacts in reconstructed images in presence of noise. A statistical method, namely, the expectation maximization (EM) method, is used to solve the inverse problem for EIT in this paper. The mathematical model of EIT is transformed to the non-negatively constrained likelihood minimization problem. The solution is obtained by the gradient projection-reduced Newton (GPRN) iteration method. This paper also discusses the strategies of choosing parameters. Simulation and experimental results indicate that the reconstructed images with higher quality can be obtained by the EM method, compared with the traditional Tikhonov and conjugate gradient (CG) methods, even with non-negative processing. Copyright © 2012 ISA. Published by Elsevier Ltd. All rights reserved.
Directory of Open Access Journals (Sweden)
Tae Joon Choi
2016-01-01
Full Text Available Titanium micro-mesh implants are widely used in orbital wall reconstructions because they have several advantageous characteristics. However, the rough and irregular marginal spurs of the cut edges of the titanium mesh sheet impede the efficacious and minimally traumatic insertion of the implant, because these spurs may catch or hook the orbital soft tissue, skin, or conjunctiva during the insertion procedure. In order to prevent this problem, we developed an easy method of inserting a titanium micro-mesh, in which it is wrapped with the aseptic transparent plastic film that is used to pack surgical instruments or is attached to one side of the inner suture package. Fifty-four patients underwent orbital wall reconstruction using a transconjunctival or transcutaneous approach. The wrapped implant was easily inserted without catching or injuring the orbital soft tissue, skin, or conjunctiva. In most cases, the implant was inserted in one attempt. Postoperative computed tomographic scans showed excellent placement of the titanium micro-mesh and adequate anatomic reconstruction of the orbital walls. This wrapping insertion method may be useful for making the insertion of titanium micro-mesh implants in the reconstruction of orbital wall fractures easier and less traumatic.
Bai, Bing
2012-03-01
There has been a lot of work on total variation (TV) regularized tomographic image reconstruction recently. Many of them use gradient-based optimization algorithms with a differentiable approximation of the TV functional. In this paper we apply TV regularization in Positron Emission Tomography (PET) image reconstruction. We reconstruct the PET image in a Bayesian framework, using Poisson noise model and TV prior functional. The original optimization problem is transformed to an equivalent problem with inequality constraints by adding auxiliary variables. Then we use an interior point method with logarithmic barrier functions to solve the constrained optimization problem. In this method, a series of points approaching the solution from inside the feasible region are found by solving a sequence of subproblems characterized by an increasing positive parameter. We use preconditioned conjugate gradient (PCG) algorithm to solve the subproblems directly. The nonnegativity constraint is enforced by bend line search. The exact expression of the TV functional is used in our calculations. Simulation results show that the algorithm converges fast and the convergence is insensitive to the values of the regularization and reconstruction parameters.
Bailey, Geoffrey N; Reynolds, Sally C; King, Geoffrey C P
2011-03-01
This paper examines the relationship between complex and tectonically active landscapes and patterns of human evolution. We show how active tectonics can produce dynamic landscapes with geomorphological and topographic features that may be critical to long-term patterns of hominin land use, but which are not typically addressed in landscape reconstructions based on existing geological and paleoenvironmental principles. We describe methods of representing topography at a range of scales using measures of roughness based on digital elevation data, and combine the resulting maps with satellite imagery and ground observations to reconstruct features of the wider landscape as they existed at the time of hominin occupation and activity. We apply these methods to sites in South Africa, where relatively stable topography facilitates reconstruction. We demonstrate the presence of previously unrecognized tectonic effects and their implications for the interpretation of hominin habitats and land use. In parts of the East African Rift, reconstruction is more difficult because of dramatic changes since the time of hominin occupation, while fossils are often found in places where activity has now almost ceased. However, we show that original, dynamic landscape features can be assessed by analogy with parts of the Rift that are currently active and indicate how this approach can complement other sources of information to add new insights and pose new questions for future investigation of hominin land use and habitats. Copyright © 2010 Elsevier Ltd. All rights reserved.
International Nuclear Information System (INIS)
Zbijewski, Wojciech; Beekman, Freek J
2006-01-01
X-ray CT images obtained with iterative reconstruction (IR) can be hampered by the so-called edge and aliasing artefacts, which appear as interference patterns and severe overshoots in the areas of sharp intensity transitions. Previously, we have demonstrated that these artefacts are caused by discretization errors during the projection simulation step in IR. Although these errors are inherent to IR, they can be adequately suppressed by reconstruction on an image grid that is finer than that typically used for analytical methods such as filtered back-projection. Two other methods that may prevent edge artefacts are: (i) smoothing the projections prior to reconstruction or (ii) using an image representation different from voxels; spherically symmetric Kaiser-Bessel functions are a frequently employed example of such a representation. In this paper, we compare reconstruction on a fine grid with the two above-mentioned alternative strategies for edge artefact reduction. We show that the use of a fine grid results in a more adequate suppression of artefacts than the smoothing of projections or using the Kaiser-Bessel image representation
Clinical correlative evaluation of an iterative method for reconstruction of brain SPECT images
International Nuclear Information System (INIS)
Nobili, Flavio; Vitali, Paolo; Calvini, Piero; Bollati, Francesca; Girtler, Nicola; Delmonte, Marta; Mariani, Giuliano; Rodriguez, Guido
2001-01-01
Background: Brain SPECT and PET investigations have showed discrepancies in Alzheimer's disease (AD) when considering data deriving from deeply located structures, such as the mesial temporal lobe. These discrepancies could be due to a variety of factors, including substantial differences in gamma-cameras and underlying technology. Mesial temporal structures are deeply located within the brain and the commonly used Filtered Back-Projection (FBP) technique does not fully take into account either the physical parameters of gamma-cameras or geometry of collimators. In order to overcome these limitations, alternative reconstruction methods have been proposed, such as the iterative method of the Conjugate Gradients with modified matrix (CG). However, the clinical applications of these methods have so far been only anecdotal. The present study was planned to compare perfusional SPECT data as derived from the conventional FBP method and from the iterative CG method, which takes into account the geometrical and physical characteristics of the gamma-camera, by a correlative approach with neuropsychology. Methods: Correlations were compared between perfusion of the hippocampal region, as achieved by both the FBP and the CG reconstruction methods, and a short-memory test (Selective Reminding Test, SRT), specifically addressing one of its function. A brain-dedicated camera (CERASPECT) was used for SPECT studies with 99m Tc-hexamethylpropylene-amine-oxime in 23 consecutive patients (mean age: 74.2±6.5) with mild (Mini-Mental Status Examination score ≥15, mean 20.3±3), probable AD. Counts from a hippocampal region in each hemisphere were referred to the average thalamic counts. Results: Hippocampal perfusion significantly correlated with the MMSE score with similar statistical significance (p<0.01) between the two reconstruction methods. Correlation between hippocampal perfusion and the SRT score was better with the CG method (r=0.50 for both hemispheres, p<0.01) than with
Clinical correlative evaluation of an iterative method for reconstruction of brain SPECT images
Energy Technology Data Exchange (ETDEWEB)
Nobili, Flavio E-mail: fnobili@smartino.ge.it; Vitali, Paolo; Calvini, Piero; Bollati, Francesca; Girtler, Nicola; Delmonte, Marta; Mariani, Giuliano; Rodriguez, Guido
2001-08-01
Background: Brain SPECT and PET investigations have showed discrepancies in Alzheimer's disease (AD) when considering data deriving from deeply located structures, such as the mesial temporal lobe. These discrepancies could be due to a variety of factors, including substantial differences in gamma-cameras and underlying technology. Mesial temporal structures are deeply located within the brain and the commonly used Filtered Back-Projection (FBP) technique does not fully take into account either the physical parameters of gamma-cameras or geometry of collimators. In order to overcome these limitations, alternative reconstruction methods have been proposed, such as the iterative method of the Conjugate Gradients with modified matrix (CG). However, the clinical applications of these methods have so far been only anecdotal. The present study was planned to compare perfusional SPECT data as derived from the conventional FBP method and from the iterative CG method, which takes into account the geometrical and physical characteristics of the gamma-camera, by a correlative approach with neuropsychology. Methods: Correlations were compared between perfusion of the hippocampal region, as achieved by both the FBP and the CG reconstruction methods, and a short-memory test (Selective Reminding Test, SRT), specifically addressing one of its function. A brain-dedicated camera (CERASPECT) was used for SPECT studies with {sup 99m}Tc-hexamethylpropylene-amine-oxime in 23 consecutive patients (mean age: 74.2{+-}6.5) with mild (Mini-Mental Status Examination score {>=}15, mean 20.3{+-}3), probable AD. Counts from a hippocampal region in each hemisphere were referred to the average thalamic counts. Results: Hippocampal perfusion significantly correlated with the MMSE score with similar statistical significance (p<0.01) between the two reconstruction methods. Correlation between hippocampal perfusion and the SRT score was better with the CG method (r=0.50 for both hemispheres, p<0
Directory of Open Access Journals (Sweden)
Sarah D. Lichenstein
2016-09-01
Full Text Available Purpose: Diffusion MRI provides a non-invasive way of estimating structural connectivity in the brain. Many studies have used diffusion phantoms as benchmarks to assess the performance of different tractography reconstruction algorithms and assumed that the results can be applied to in vivo studies. Here we examined whether quality metrics derived from a common, publically available, diffusion phantom can reliably predict tractography performance in human white matter tissue. Material and Methods: We compared estimates of fiber length and fiber crossing among a simple tensor model (diffusion tensor imaging, a more complicated model (ball-and-sticks and model-free (diffusion spectrum imaging, generalized q-sampling imaging reconstruction methods using a capillary phantom and in vivo human data (N=14. Results: Our analysis showed that evaluation outcomes differ depending on whether they were obtained from phantom or human data. Specifically, the diffusion phantom favored a more complicated model over a simple tensor model or model-free methods for resolving crossing fibers. On the other hand, the human studies showed the opposite pattern of results, with the model-free methods being more advantageous than model-based methods or simple tensor models. This performance difference was consistent across several metrics, including estimating fiber length and resolving fiber crossings in established white matter pathways. Conclusions: These findings indicate that the construction of current capillary diffusion phantoms tends to favor complicated reconstruction models over a simple tensor model or model-free methods, whereas the in vivo data tends to produce opposite results. This brings into question the previous phantom-based evaluation approaches and suggests that a more realistic phantom or simulation is necessary to accurately predict the relative performance of different tractography reconstruction methods. Acronyms: BSM: ball-and-sticks model; d
Advanced Waveform Simulation for Seismic Monitoring
2008-09-01
velocity model. The method separates the main arrivals of the regional waveform into 5 windows: Pnl (vertical and radial components), Rayleigh (vertical and...ranges out to 10°, including extensive observations of crustal thinning and thickening and various Pnl complexities. Broadband modeling in 1D, 2D...existing models perform in predicting the various regional phases, Rayleigh waves, Love waves, and Pnl waves. Previous events from this Basin-and-Range
Accident or homicide--virtual crime scene reconstruction using 3D methods.
Buck, Ursula; Naether, Silvio; Räss, Beat; Jackowski, Christian; Thali, Michael J
2013-02-10
The analysis and reconstruction of forensically relevant events, such as traffic accidents, criminal assaults and homicides are based on external and internal morphological findings of the injured or deceased person. For this approach high-tech methods are gaining increasing importance in forensic investigations. The non-contact optical 3D digitising system GOM ATOS is applied as a suitable tool for whole body surface and wound documentation and analysis in order to identify injury-causing instruments and to reconstruct the course of event. In addition to the surface documentation, cross-sectional imaging methods deliver medical internal findings of the body. These 3D data are fused into a whole body model of the deceased. Additional to the findings of the bodies, the injury inflicting instruments and incident scene is documented in 3D. The 3D data of the incident scene, generated by 3D laser scanning and photogrammetry, is also included into the reconstruction. Two cases illustrate the methods. In the fist case a man was shot in his bedroom and the main question was, if the offender shot the man intentionally or accidentally, as he declared. In the second case a woman was hit by a car, driving backwards into a garage. It was unclear if the driver drove backwards once or twice, which would indicate that he willingly injured and killed the woman. With this work, we demonstrate how 3D documentation, data merging and animation enable to answer reconstructive questions regarding the dynamic development of patterned injuries, and how this leads to a real data based reconstruction of the course of event. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
Neural network CT image reconstruction method for small amount of projection data
Ma, X F; Takeda, T
2000-01-01
This paper presents a new method for two-dimensional image reconstruction by using a multi-layer neural network. Though a conventionally used object function of such a neural network is composed of a sum of squared errors of the output data, we define an object function composed of a sum of squared residuals of an integral equation. By employing an appropriate numerical line integral for this integral equation, we can construct a neural network which can be used for CT image reconstruction for cases with small amount of projection data. We applied this method to some model problems and obtained satisfactory results. This method is especially useful for analyses of laboratory experiments or field observations where only a small amount of projection data is available in comparison with the well-developed medical applications.
Neural network CT image reconstruction method for small amount of projection data
International Nuclear Information System (INIS)
Ma, X.F.; Fukuhara, M.; Takeda, T.
2000-01-01
This paper presents a new method for two-dimensional image reconstruction by using a multi-layer neural network. Though a conventionally used object function of such a neural network is composed of a sum of squared errors of the output data, we define an object function composed of a sum of squared residuals of an integral equation. By employing an appropriate numerical line integral for this integral equation, we can construct a neural network which can be used for CT image reconstruction for cases with small amount of projection data. We applied this method to some model problems and obtained satisfactory results. This method is especially useful for analyses of laboratory experiments or field observations where only a small amount of projection data is available in comparison with the well-developed medical applications
A semi-automatic method for positioning a femoral bone reconstruction for strict view generation.
Milano, Federico; Ritacco, Lucas; Gomez, Adrian; Gonzalez Bernaldo de Quiros, Fernan; Risk, Marcelo
2010-01-01
In this paper we present a semi-automatic method for femoral bone positioning after 3D image reconstruction from Computed Tomography images. This serves as grounding for the definition of strict axial, longitudinal and anterior-posterior views, overcoming the problem of patient positioning biases in 2D femoral bone measuring methods. After the bone reconstruction is aligned to a standard reference frame, new tomographic slices can be generated, on which unbiased measures may be taken. This could allow not only accurate inter-patient comparisons but also intra-patient comparisons, i.e., comparisons of images of the same patient taken at different times. This method could enable medical doctors to diagnose and follow up several bone deformities more easily.
International Nuclear Information System (INIS)
Laraufie, Romain; Deck, Sébastien
2013-01-01
Highlights: • Present various Reynolds stresses reconstruction methods from a RANS-SA flow field. • Quantify the accuracy of the reconstruction methods for a wide range of Reynolds. • Evaluate the capabilities of the overall process (Reconstruction + SEM). • Provide practical guidelines to realize a streamwise RANS/LES (or WMLES) transition. -- Abstract: Hybrid or zonal RANS/LES approaches are recognized as the most promising way to accurately simulate complex unsteady flows under current computational limitations. One still open issue concerns the transition from a RANS to a LES or WMLES resolution in the stream-wise direction, when near wall turbulence is involved. Turbulence content has then to be prescribed at the transition to prevent from turbulence decay leading to possible flow relaminarization. The present paper aims to propose an efficient way to generate this switch, within the flow, based on a synthetic turbulence inflow condition, named Synthetic Eddy Method (SEM). As the knowledge of the whole Reynolds stresses is often missing, the scope of this paper is focused on generating the quantities required at the SEM inlet from a RANS calculation, namely the first and second order statistics of the aerodynamic field. Three different methods based on two different approaches are presented and their capability to accurately generate the needed aerodynamic values is investigated. Then, the ability of the combination SEM + Reconstruction method to manufacture well-behaved turbulence is demonstrated through spatially developing flat plate turbulent boundary layers. In the mean time, important intrinsic features of the Synthetic Eddy method are pointed out. The necessity of introducing, within the SEM, accurate data, with regards to the outer part of the boundary layer, is illustrated. Finally, user’s guidelines are given depending on the Reynolds number based on the momentum thickness, since one method is suitable for low Reynolds number while the
Methods for the reconstruction of large scale anisotropies of the cosmic ray flux
Energy Technology Data Exchange (ETDEWEB)
Over, Sven
2010-01-15
In cosmic ray experiments the arrival directions, among other properties, of cosmic ray particles from detected air shower events are reconstructed. The question of uniformity in the distribution of arrival directions is of large importance for models that try to explain cosmic radiation. In this thesis, methods for the reconstruction of parameters of a dipole-like flux distribution of cosmic rays from a set of recorded air shower events are studied. Different methods are presented and examined by means of detailed Monte Carlo simulations. Particular focus is put on the implications of spurious experimental effects. Modifications of existing methods and new methods are proposed. The main goal of this thesis is the development of the horizontal Rayleigh analysis method. Unlike other methods, this method is based on the analysis of local viewing directions instead of global sidereal directions. As a result, the symmetries of the experimental setup can be better utilised. The calculation of the sky coverage (exposure function) is not necessary in this analysis. The performance of the method is tested by means of further Monte Carlo simulations. The new method performs similarly good or only marginally worse than established methods in case of ideal measurement conditions. However, the simulation of certain experimental effects can cause substantial misestimations of the dipole parameters by the established methods, whereas the new method produces no systematic deviations. The invulnerability to certain effects offers additional advantages, as certain data selection cuts become dispensable. (orig.)
A singular-value method for reconstruction of nonradial and lossy objects.
Jiang, Wei; Astheimer, Jeffrey; Waag, Robert
2012-03-01
Efficient inverse scattering algorithms for nonradial lossy objects are presented using singular-value decomposition to form reduced-rank representations of the scattering operator. These algorithms extend eigenfunction methods that are not applicable to nonradial lossy scattering objects because the scattering operators for these objects do not have orthonormal eigenfunction decompositions. A method of local reconstruction by segregation of scattering contributions from different local regions is also presented. Scattering from each region is isolated by forming a reduced-rank representation of the scattering operator that has domain and range spaces comprised of far-field patterns with retransmitted fields that focus on the local region. Methods for the estimation of the boundary, average sound speed, and average attenuation slope of the scattering object are also given. These methods yielded approximations of scattering objects that were sufficiently accurate to allow residual variations to be reconstructed in a single iteration. Calculated scattering from a lossy elliptical object with a random background, internal features, and white noise is used to evaluate the proposed methods. Local reconstruction yielded images with spatial resolution that is finer than a half wavelength of the center frequency and reproduces sound speed and attenuation slope with relative root-mean-square errors of 1.09% and 11.45%, respectively.
Zhu, Ming; Liu, Tingting; Zhang, Xiangqun; Li, Caiyun
2018-01-01
Recently, a decomposition method of acoustic relaxation absorption spectra was used to capture the entire molecular multimode relaxation process of gas. In this method, the acoustic attenuation and phase velocity were measured jointly based on the relaxation absorption spectra. However, fast and accurate measurements of the acoustic attenuation remain challenging. In this paper, we present a method of capturing the molecular relaxation process by only measuring acoustic velocity, without the necessity of obtaining acoustic absorption. The method is based on the fact that the frequency-dependent velocity dispersion of a multi-relaxation process in a gas is the serial connection of the dispersions of interior single-relaxation processes. Thus, one can capture the relaxation times and relaxation strengths of N decomposed single-relaxation dispersions to reconstruct the entire multi-relaxation dispersion using the measurements of acoustic velocity at 2N + 1 frequencies. The reconstructed dispersion spectra are in good agreement with experimental data for various gases and mixtures. The simulations also demonstrate the robustness of our reconstructive method.
Evaluation of two methods for using MR information in PET reconstruction
International Nuclear Information System (INIS)
Caldeira, L.; Scheins, J.; Almeida, P.; Herzog, H.
2013-01-01
Using magnetic resonance (MR) information in maximum a posteriori (MAP) algorithms for positron emission tomography (PET) image reconstruction has been investigated in the last years. Recently, three methods to introduce this information have been evaluated and the Bowsher prior was considered the best. Its main advantage is that it does not require image segmentation. Another method that has been widely used for incorporating MR information is using boundaries obtained by segmentation. This method has also shown improvements in image quality. In this paper, two methods for incorporating MR information in PET reconstruction are compared. After a Bayes parameter optimization, the reconstructed images were compared using the mean squared error (MSE) and the coefficient of variation (CV). MSE values are 3% lower in Bowsher than using boundaries. CV values are 10% lower in Bowsher than using boundaries. Both methods performed better than using no prior, that is, maximum likelihood expectation maximization (MLEM) or MAP without anatomic information in terms of MSE and CV. Concluding, incorporating MR information using the Bowsher prior gives better results in terms of MSE and CV than boundaries. MAP algorithms showed again to be effective in noise reduction and convergence, specially when MR information is incorporated. The robustness of the priors in respect to noise and inhomogeneities in the MR image has however still to be performed
Generation of correlated finite alphabet waveforms using gaussian random variables
Ahmed, Sajid
2016-01-13
Various examples of methods and systems are provided for generation of correlated finite alphabet waveforms using Gaussian random variables in, e.g., radar and communication applications. In one example, a method includes mapping an input signal comprising Gaussian random variables (RVs) onto finite-alphabet non-constant-envelope (FANCE) symbols using a predetermined mapping function, and transmitting FANCE waveforms through a uniform linear array of antenna elements to obtain a corresponding beampattern. The FANCE waveforms can be based upon the mapping of the Gaussian RVs onto the FANCE symbols. In another example, a system includes a memory unit that can store a plurality of digital bit streams corresponding to FANCE symbols and a front end unit that can transmit FANCE waveforms through a uniform linear array of antenna elements to obtain a corresponding beampattern. The system can include a processing unit that can encode the input signal and/or determine the mapping function.
Generation of correlated finite alphabet waveforms using gaussian random variables
Ahmed, Sajid; Alouini, Mohamed-Slim; Jardak, Seifallah
2016-01-01
Various examples of methods and systems are provided for generation of correlated finite alphabet waveforms using Gaussian random variables in, e.g., radar and communication applications. In one example, a method includes mapping an input signal comprising Gaussian random variables (RVs) onto finite-alphabet non-constant-envelope (FANCE) symbols using a predetermined mapping function, and transmitting FANCE waveforms through a uniform linear array of antenna elements to obtain a corresponding beampattern. The FANCE waveforms can be based upon the mapping of the Gaussian RVs onto the FANCE symbols. In another example, a system includes a memory unit that can store a plurality of digital bit streams corresponding to FANCE symbols and a front end unit that can transmit FANCE waveforms through a uniform linear array of antenna elements to obtain a corresponding beampattern. The system can include a processing unit that can encode the input signal and/or determine the mapping function.
Fast Prediction and Evaluation of Gravitational Waveforms Using Surrogate Models
Directory of Open Access Journals (Sweden)
Scott E. Field
2014-07-01
Full Text Available We propose a solution to the problem of quickly and accurately predicting gravitational waveforms within any given physical model. The method is relevant for both real-time applications and more traditional scenarios where the generation of waveforms using standard methods can be prohibitively expensive. Our approach is based on three offline steps resulting in an accurate reduced order model in both parameter and physical dimensions that can be used as a surrogate for the true or fiducial waveform family. First, a set of m parameter values is determined using a greedy algorithm from which a reduced basis representation is constructed. Second, these m parameters induce the selection of m time values for interpolating a waveform time series using an empirical interpolant that is built for the fiducial waveform family. Third, a fit in the parameter dimension is performed for the waveform’s value at each of these m times. The cost of predicting L waveform time samples for a generic parameter choice is of order O(mL+mc_{fit} online operations, where c_{fit} denotes the fitting function operation count and, typically, m≪L. The result is a compact, computationally efficient, and accurate surrogate model that retains the original physics of the fiducial waveform family while also being fast to evaluate. We generate accurate surrogate models for effective-one-body waveforms of nonspinning binary black hole coalescences with durations as long as 10^{5}M, mass ratios from 1 to 10, and for multiple spherical harmonic modes. We find that these surrogates are more than 3 orders of magnitude faster to evaluate as compared to the cost of generating effective-one-body waveforms in standard ways. Surrogate model building for other waveform families and models follows the same steps and has the same low computational online scaling cost. For expensive numerical simulations of binary black hole coalescences, we thus anticipate extremely large speedups in
Statistical gravitational waveform models: What to simulate next?
Doctor, Zoheyr; Farr, Ben; Holz, Daniel E.; Pürrer, Michael
2017-12-01
Models of gravitational waveforms play a critical role in detecting and characterizing the gravitational waves (GWs) from compact binary coalescences. Waveforms from numerical relativity (NR), while highly accurate, are too computationally expensive to produce to be directly used with Bayesian parameter estimation tools like Markov-chain-Monte-Carlo and nested sampling. We propose a Gaussian process regression (GPR) method to generate reduced-order-model waveforms based only on existing accurate (e.g. NR) simulations. Using a training set of simulated waveforms, our GPR approach produces interpolated waveforms along with uncertainties across the parameter space. As a proof of concept, we use a training set of IMRPhenomD waveforms to build a GPR model in the 2-d parameter space of mass ratio q and equal-and-aligned spin χ1=χ2. Using a regular, equally-spaced grid of 120 IMRPhenomD training waveforms in q ∈[1 ,3 ] and χ1∈[-0.5 ,0.5 ], the GPR mean approximates IMRPhenomD in this space to mismatches below 4.3 ×10-5. Our approach could in principle use training waveforms directly from numerical relativity. Beyond interpolation of waveforms, we also present a greedy algorithm that utilizes the errors provided by our GPR model to optimize the placement of future simulations. In a fiducial test case we find that using the greedy algorithm to iteratively add simulations achieves GPR errors that are ˜1 order of magnitude lower than the errors from using Latin-hypercube or square training grids.
Adaptive phase k-means algorithm for waveform classification
Song, Chengyun; Liu, Zhining; Wang, Yaojun; Xu, Feng; Li, Xingming; Hu, Guangmin
2018-01-01
Waveform classification is a powerful technique for seismic facies analysis that describes the heterogeneity and compartments within a reservoir. Horizon interpretation is a critical step in waveform classification. However, the horizon often produces inconsistent waveform phase, and thus results in an unsatisfied classification. To alleviate this problem, an adaptive phase waveform classification method called the adaptive phase k-means is introduced in this paper. Our method improves the traditional k-means algorithm using an adaptive phase distance for waveform similarity measure. The proposed distance is a measure with variable phases as it moves from sample to sample along the traces. Model traces are also updated with the best phase interference in the iterative process. Therefore, our method is robust to phase variations caused by the interpretation horizon. We tested the effectiveness of our algorithm by applying it to synthetic and real data. The satisfactory results reveal that the proposed method tolerates certain waveform phase variation and is a good tool for seismic facies analysis.
Hirahara, Noriyuki; Monma, Hiroyuki; Shimojo, Yoshihide; Matsubara, Takeshi; Hyakudomi, Ryoji; Yano, Seiji; Tanaka, Tsuneo
2011-01-01
Abstract Here we report the method of anastomosis based on double stapling technique (hereinafter, DST) using a trans-oral anvil delivery system (EEATM OrVilTM) for reconstructing the esophagus and lifted jejunum following laparoscopic total gastrectomy or proximal gastric resection. As a basic technique, laparoscopic total gastrectomy employed Roux-en-Y reconstruction, laparoscopic proximal gastrectomy employed double tract reconstruction, and end-to-side anastomosis was used for the cut-off...
Does thorax EIT image analysis depend on the image reconstruction method?
Zhao, Zhanqi; Frerichs, Inéz; Pulletz, Sven; Müller-Lisse, Ullrich; Möller, Knut
2013-04-01
Different methods were proposed to analyze the resulting images of electrical impedance tomography (EIT) measurements during ventilation. The aim of our study was to examine if the analysis methods based on back-projection deliver the same results when applied on images based on other reconstruction algorithms. Seven mechanically ventilated patients with ARDS were examined by EIT. The thorax contours were determined from the routine CT images. EIT raw data was reconstructed offline with (1) filtered back-projection with circular forward model (BPC); (2) GREIT reconstruction method with circular forward model (GREITC) and (3) GREIT with individual thorax geometry (GREITT). Three parameters were calculated on the resulting images: linearity, global ventilation distribution and regional ventilation distribution. The results of linearity test are 5.03±2.45, 4.66±2.25 and 5.32±2.30 for BPC, GREITC and GREITT, respectively (median ±interquartile range). The differences among the three methods are not significant (p = 0.93, Kruskal-Wallis test). The proportions of ventilation in the right lung are 0.58±0.17, 0.59±0.20 and 0.59±0.25 for BPC, GREITC and GREITT, respectively (p = 0.98). The differences of the GI index based on different reconstruction methods (0.53±0.16, 0.51±0.25 and 0.54±0.16 for BPC, GREITC and GREITT, respectively) are also not significant (p = 0.93). We conclude that the parameters developed for images generated with GREITT are comparable with filtered back-projection and GREITC.
A comparison of reconstruction methods for undersampled atomic force microscopy images
International Nuclear Information System (INIS)
Luo, Yufan; Andersson, Sean B
2015-01-01
Non-raster scanning and undersampling of atomic force microscopy (AFM) images is a technique for improving imaging rate and reducing the amount of tip–sample interaction needed to produce an image. Generation of the final image can be done using a variety of image processing techniques based on interpolation or optimization. The choice of reconstruction method has a large impact on the quality of the recovered image and the proper choice depends on the sample under study. In this work we compare interpolation through the use of inpainting algorithms with reconstruction based on optimization through the use of the basis pursuit algorithm commonly used for signal recovery in compressive sensing. Using four different sampling patterns found in non-raster AFM, namely row subsampling, spiral scanning, Lissajous scanning, and random scanning, we subsample data from existing images and compare reconstruction performance against the original image. The results illustrate that inpainting generally produces superior results when the image contains primarily low frequency content while basis pursuit is better when the images have mixed, but sparse, frequency content. Using support vector machines, we then classify images based on their frequency content and sparsity and, from this classification, develop a fast decision strategy to select a reconstruction algorithm to be used on subsampled data. The performance of the classification and decision test are demonstrated on test AFM images. (paper)
A feasible method for clinical delivery verification and dose reconstruction in tomotherapy
International Nuclear Information System (INIS)
Kapatoes, J.M.; Olivera, G.H.; Ruchala, K.J.; Smilowitz, J.B.; Reckwerdt, P.J.; Mackie, T.R.
2001-01-01
Delivery verification is the process in which the energy fluence delivered during a treatment is verified. This verified energy fluence can be used in conjunction with an image in the treatment position to reconstruct the full three-dimensional dose deposited. A method for delivery verification that utilizes a measured database of detector signal is described in this work. This database is a function of two parameters, radiological path-length and detector-to-phantom distance, both of which are computed from a CT image taken at the time of delivery. Such a database was generated and used to perform delivery verification and dose reconstruction. Two experiments were conducted: a simulated prostate delivery on an inhomogeneous abdominal phantom, and a nasopharyngeal delivery on a dog cadaver. For both cases, it was found that the verified fluence and dose results using the database approach agreed very well with those using previously developed and proven techniques. Delivery verification with a measured database and CT image at the time of treatment is an accurate procedure for tomotherapy. The database eliminates the need for any patient-specific, pre- or post-treatment measurements. Moreover, such an approach creates an opportunity for accurate, real-time delivery verification and dose reconstruction given fast image reconstruction and dose computation tools
An eigenfunction method for reconstruction of large-scale and high-contrast objects.
Waag, Robert C; Lin, Feng; Varslot, Trond K; Astheimer, Jeffrey P
2007-07-01
A multiple-frequency inverse scattering method that uses eigenfunctions of a scattering operator is extended to image large-scale and high-contrast objects. The extension uses an estimate of the scattering object to form the difference between the scattering by the object and the scattering by the estimate of the object. The scattering potential defined by this difference is expanded in a basis of products of acoustic fields. These fields are defined by eigenfunctions of the scattering operator associated with the estimate. In the case of scattering objects for which the estimate is radial, symmetries in the expressions used to reconstruct the scattering potential greatly reduce the amount of computation. The range of parameters over which the reconstruction method works well is illustrated using calculated scattering by different objects. The method is applied to experimental data from a 48-mm diameter scattering object with tissue-like properties. The image reconstructed from measurements has, relative to a conventional B-scan formed using a low f-number at the same center frequency, significantly higher resolution and less speckle, implying that small, high-contrast structures can be demonstrated clearly using the extended method.
RECONSTRUCTING THE INITIAL DENSITY FIELD OF THE LOCAL UNIVERSE: METHODS AND TESTS WITH MOCK CATALOGS
International Nuclear Information System (INIS)
Wang Huiyuan; Mo, H. J.; Yang Xiaohu; Van den Bosch, Frank C.
2013-01-01
Our research objective in this paper is to reconstruct an initial linear density field, which follows the multivariate Gaussian distribution with variances given by the linear power spectrum of the current cold dark matter model and evolves through gravitational instabilities to the present-day density field in the local universe. For this purpose, we develop a Hamiltonian Markov Chain Monte Carlo method to obtain the linear density field from a posterior probability function that consists of two components: a prior of a Gaussian density field with a given linear spectrum and a likelihood term that is given by the current density field. The present-day density field can be reconstructed from galaxy groups using the method developed in Wang et al. Using a realistic mock Sloan Digital Sky Survey DR7, obtained by populating dark matter halos in the Millennium simulation (MS) with galaxies, we show that our method can effectively and accurately recover both the amplitudes and phases of the initial, linear density field. To examine the accuracy of our method, we use N-body simulations to evolve these reconstructed initial conditions to the present day. The resimulated density field thus obtained accurately matches the original density field of the MS in the density range 0.3∼ –1 , much smaller than the translinear scale, which corresponds to a wavenumber of ∼0.15 h Mpc –1
SU-D-206-03: Segmentation Assisted Fast Iterative Reconstruction Method for Cone-Beam CT
International Nuclear Information System (INIS)
Wu, P; Mao, T; Gong, S; Wang, J; Niu, T; Sheng, K; Xie, Y
2016-01-01
Purpose: Total Variation (TV) based iterative reconstruction (IR) methods enable accurate CT image reconstruction from low-dose measurements with sparse projection acquisition, due to the sparsifiable feature of most CT images using gradient operator. However, conventional solutions require large amount of iterations to generate a decent reconstructed image. One major reason is that the expected piecewise constant property is not taken into consideration at the optimization starting point. In this work, we propose an iterative reconstruction method for cone-beam CT (CBCT) using image segmentation to guide the optimization path more efficiently on the regularization term at the beginning of the optimization trajectory. Methods: Our method applies general knowledge that one tissue component in the CT image contains relatively uniform distribution of CT number. This general knowledge is incorporated into the proposed reconstruction using image segmentation technique to generate the piecewise constant template on the first-pass low-quality CT image reconstructed using analytical algorithm. The template image is applied as an initial value into the optimization process. Results: The proposed method is evaluated on the Shepp-Logan phantom of low and high noise levels, and a head patient. The number of iterations is reduced by overall 40%. Moreover, our proposed method tends to generate a smoother reconstructed image with the same TV value. Conclusion: We propose a computationally efficient iterative reconstruction method for CBCT imaging. Our method achieves a better optimization trajectory and a faster convergence behavior. It does not rely on prior information and can be readily incorporated into existing iterative reconstruction framework. Our method is thus practical and attractive as a general solution to CBCT iterative reconstruction. This work is supported by the Zhejiang Provincial Natural Science Foundation of China (Grant No. LR16F010001), National High-tech R
SU-D-206-03: Segmentation Assisted Fast Iterative Reconstruction Method for Cone-Beam CT
Energy Technology Data Exchange (ETDEWEB)
Wu, P; Mao, T; Gong, S; Wang, J; Niu, T [Sir Run Run Shaw Hospital, Zhejiang University School of Medicine, Institute of Translational Medicine, Zhejiang University, Hangzhou, Zhejiang (China); Sheng, K [Department of Radiation Oncology, University of California, Los Angeles, Los Angeles, CA (United States); Xie, Y [Institute of Biomedical and Health Engineering, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, Guangdong (China)
2016-06-15
Purpose: Total Variation (TV) based iterative reconstruction (IR) methods enable accurate CT image reconstruction from low-dose measurements with sparse projection acquisition, due to the sparsifiable feature of most CT images using gradient operator. However, conventional solutions require large amount of iterations to generate a decent reconstructed image. One major reason is that the expected piecewise constant property is not taken into consideration at the optimization starting point. In this work, we propose an iterative reconstruction method for cone-beam CT (CBCT) using image segmentation to guide the optimization path more efficiently on the regularization term at the beginning of the optimization trajectory. Methods: Our method applies general knowledge that one tissue component in the CT image contains relatively uniform distribution of CT number. This general knowledge is incorporated into the proposed reconstruction using image segmentation technique to generate the piecewise constant template on the first-pass low-quality CT image reconstructed using analytical algorithm. The template image is applied as an initial value into the optimization process. Results: The proposed method is evaluated on the Shepp-Logan phantom of low and high noise levels, and a head patient. The number of iterations is reduced by overall 40%. Moreover, our proposed method tends to generate a smoother reconstructed image with the same TV value. Conclusion: We propose a computationally efficient iterative reconstruction method for CBCT imaging. Our method achieves a better optimization trajectory and a faster convergence behavior. It does not rely on prior information and can be readily incorporated into existing iterative reconstruction framework. Our method is thus practical and attractive as a general solution to CBCT iterative reconstruction. This work is supported by the Zhejiang Provincial Natural Science Foundation of China (Grant No. LR16F010001), National High-tech R
Bouma, Henri; van der Mark, Wannes; Eendebak, Pieter T.; Landsmeer, Sander H.; van Eekeren, Adam W. M.; ter Haar, Frank B.; Wieringa, F. Pieter; van Basten, Jean-Paul
2012-06-01
Compared to open surgery, minimal invasive surgery offers reduced trauma and faster recovery. However, lack of direct view limits space perception. Stereo-endoscopy improves depth perception, but is still restricted to the direct endoscopic field-of-view. We describe a novel technology that reconstructs 3D-panoramas from endoscopic video streams providing a much wider cumulative overview. The method is compatible with any endoscope. We demonstrate that it is possible to generate photorealistic 3D-environments from mono- and stereoscopic endoscopy. The resulting 3D-reconstructions can be directly applied in simulators and e-learning. Extended to real-time processing, the method looks promising for telesurgery or other remote vision-guided tasks.
Ogawa, Takahiro; Haseyama, Miki
2013-03-01
A missing texture reconstruction method based on an error reduction (ER) algorithm, including a novel estimation scheme of Fourier transform magnitudes is presented in this brief. In our method, Fourier transform magnitude is estimated for a target patch including missing areas, and the missing intensities are estimated by retrieving its phase based on the ER algorithm. Specifically, by monitoring errors converged in the ER algorithm, known patches whose Fourier transform magnitudes are similar to that of the target patch are selected from the target image. In the second approach, the Fourier transform magnitude of the target patch is estimated from those of the selected known patches and their corresponding errors. Consequently, by using the ER algorithm, we can estimate both the Fourier transform magnitudes and phases to reconstruct the missing areas.
International Nuclear Information System (INIS)
Soussaline, F.; LeCoq, C.; Raynaud, C.; Kellershohn, C.
1982-09-01
The aim of this study is to evaluate the potential of the RIM technique when used in brain studies. The analytical Regulatorizing Iterative Method (RIM) is designed to provide fast and accurate reconstruction of tomographic images when non-uniform attenuation is to be accounted for. As indicated by phantom studies, this method improves the contrast and the signal-to-noise ratio as compared to those obtained with FBP (Filtered Back Projection) technique. Preliminary results obtained in brain studies using AMPI-123 (isopropil-amphetamine I-123) are very encouraging in terms of quantitative regional cellular activity. However, the clinical usefulness of this mathematically accurate reconstruction procedure is going to be demonstrated in our Institution, in comparing quantitative data in heart or liver studies where control values can be obtained
International Nuclear Information System (INIS)
Soussaline, F.; LeCoq, C.; Raynaud, C.; Kellershohn
1982-01-01
The potential of the Regularizing Iterative Method (RIM), when used in brain studies, is evaluated. RIM is designed to provide fast and accurate reconstruction of tomographic images when non-uniform attenuation is to be accounted for. As indicated by phantom studies, this method improves the contrast and the signal-to-noise ratio as compared to those obtained with Filtered Back Projection (FBP) technique. Preliminary results obtained in brain studies using isopropil-amphetamine I-123 (AMPI-123) are very encouraging in terms of quantitative regional cellular activity. However, the clinical usefulness of this mathematically accurate reconstruction procedure is going to be demonstrated, in comparing quantitative data in heart or liver studies where control values can be obtained
International Nuclear Information System (INIS)
Gao, H
2016-01-01
Purpose: This work is to develop a general framework, namely filtered iterative reconstruction (FIR) method, to incorporate analytical reconstruction (AR) method into iterative reconstruction (IR) method, for enhanced CT image quality. Methods: FIR is formulated as a combination of filtered data fidelity and sparsity regularization, and then solved by proximal forward-backward splitting (PFBS) algorithm. As a result, the image reconstruction decouples data fidelity and image regularization with a two-step iterative scheme, during which an AR-projection step updates the filtered data fidelity term, while a denoising solver updates the sparsity regularization term. During the AR-projection step, the image is projected to the data domain to form the data residual, and then reconstructed by certain AR to a residual image which is in turn weighted together with previous image iterate to form next image iterate. Since the eigenvalues of AR-projection operator are close to the unity, PFBS based FIR has a fast convergence. Results: The proposed FIR method is validated in the setting of circular cone-beam CT with AR being FDK and total-variation sparsity regularization, and has improved image quality from both AR and IR. For example, AIR has improved visual assessment and quantitative measurement in terms of both contrast and resolution, and reduced axial and half-fan artifacts. Conclusion: FIR is proposed to incorporate AR into IR, with an efficient image reconstruction algorithm based on PFBS. The CBCT results suggest that FIR synergizes AR and IR with improved image quality and reduced axial and half-fan artifacts. The authors was partially supported by the NSFC (#11405105), the 973 Program (#2015CB856000), and the Shanghai Pujiang Talent Program (#14PJ1404500).
International Nuclear Information System (INIS)
Mieville, Frederic A.; Gudinchet, Francois; Rizzo, Elena; Ou, Phalla; Brunelle, Francis; Bochud, Francois O.; Verdun, Francis R.
2011-01-01
Radiation dose exposure is of particular concern in children due to the possible harmful effects of ionizing radiation. The adaptive statistical iterative reconstruction (ASIR) method is a promising new technique that reduces image noise and produces better overall image quality compared with routine-dose contrast-enhanced methods. To assess the benefits of ASIR on the diagnostic image quality in paediatric cardiac CT examinations. Four paediatric radiologists based at two major hospitals evaluated ten low-dose paediatric cardiac examinations (80 kVp, CTDI vol 4.8-7.9 mGy, DLP 37.1-178.9 mGy.cm). The average age of the cohort studied was 2.6 years (range 1 day to 7 years). Acquisitions were performed on a 64-MDCT scanner. All images were reconstructed at various ASIR percentages (0-100%). For each examination, radiologists scored 19 anatomical structures using the relative visual grading analysis method. To estimate the potential for dose reduction, acquisitions were also performed on a Catphan phantom and a paediatric phantom. The best image quality for all clinical images was obtained with 20% and 40% ASIR (p < 0.001) whereas with ASIR above 50%, image quality significantly decreased (p < 0.001). With 100% ASIR, a strong noise-free appearance of the structures reduced image conspicuity. A potential for dose reduction of about 36% is predicted for a 2- to 3-year-old child when using 40% ASIR rather than the standard filtered back-projection method. Reconstruction including 20% to 40% ASIR slightly improved the conspicuity of various paediatric cardiac structures in newborns and children with respect to conventional reconstruction (filtered back-projection) alone. (orig.)
Grant, K.; Rohling, E. J.; Amies, J.
2017-12-01
Sea-level (SL) reconstructions over glacial-interglacial timeframes are critical for understanding the equilibrium response of ice sheets to sustained warming. In particular, continuous and high-resolution SL records are essential for accurately quantifying `natural' rates of SL rise. Global SL changes are well-constrained since the last glacial maximum ( 20,000 years ago, ky) by radiometrically-dated corals and paleoshoreline data, and fairly well-constrained over the last glacial cycle ( 150 ky). Prior to that, however, studies of ice-volume:SL relationships tend to rely on benthic δ18O, as geomorphological evidence is far more sparse and less reliably dated. An alternative SL reconstruction method (the `marginal basin' approach) was developed for the Red Sea over 500 ky, and recently attempted for the Mediterranean over 5 My (Rohling et al., 2014, Nature). This method exploits the strong sensitivity of seawater δ18O in these basins to SL changes in the relatively narrow and shallow straits which connect the basins with the open ocean. However, the initial Mediterranean SL method did not resolve sea-level highstands during Northern Hemisphere insolation maxima, when African monsoon run-off - strongly depleted in δ18O - reached the Mediterranean. Here, we present improvements to the `marginal basin' sea-level reconstruction method. These include a new `Med-Red SL stack', which combines new probabilistic Mediterranean and Red Sea sea-level stacks spanning the last 500 ky. We also show how a box model-data comparison of water-column δ18O changes over a monsoon interval allows us to quantify the monsoon versus SL δ18O imprint on Mediterranean foraminiferal carbonate δ18O records. This paves the way for a more accurate and fully continuous SL reconstruction extending back through the Pliocene.
Energy Technology Data Exchange (ETDEWEB)
Mieville, Frederic A. [University Hospital Center and University of Lausanne, Institute of Radiation Physics, Lausanne (Switzerland); University Hospital Center and University of Lausanne, Institute of Radiation Physics - Medical Radiology, Lausanne (Switzerland); Gudinchet, Francois; Rizzo, Elena [University Hospital Center and University of Lausanne, Department of Radiology, Lausanne (Switzerland); Ou, Phalla; Brunelle, Francis [Necker Children' s Hospital, Department of Radiology, Paris (France); Bochud, Francois O.; Verdun, Francis R. [University Hospital Center and University of Lausanne, Institute of Radiation Physics, Lausanne (Switzerland)
2011-09-15
Radiation dose exposure is of particular concern in children due to the possible harmful effects of ionizing radiation. The adaptive statistical iterative reconstruction (ASIR) method is a promising new technique that reduces image noise and produces better overall image quality compared with routine-dose contrast-enhanced methods. To assess the benefits of ASIR on the diagnostic image quality in paediatric cardiac CT examinations. Four paediatric radiologists based at two major hospitals evaluated ten low-dose paediatric cardiac examinations (80 kVp, CTDI{sub vol} 4.8-7.9 mGy, DLP 37.1-178.9 mGy.cm). The average age of the cohort studied was 2.6 years (range 1 day to 7 years). Acquisitions were performed on a 64-MDCT scanner. All images were reconstructed at various ASIR percentages (0-100%). For each examination, radiologists scored 19 anatomical structures using the relative visual grading analysis method. To estimate the potential for dose reduction, acquisitions were also performed on a Catphan phantom and a paediatric phantom. The best image quality for all clinical images was obtained with 20% and 40% ASIR (p < 0.001) whereas with ASIR above 50%, image quality significantly decreased (p < 0.001). With 100% ASIR, a strong noise-free appearance of the structures reduced image conspicuity. A potential for dose reduction of about 36% is predicted for a 2- to 3-year-old child when using 40% ASIR rather than the standard filtered back-projection method. Reconstruction including 20% to 40% ASIR slightly improved the conspicuity of various paediatric cardiac structures in newborns and children with respect to conventional reconstruction (filtered back-projection) alone. (orig.)
Reconstruction of action potential of repolarization in patients with congenital long-QT syndrome
International Nuclear Information System (INIS)
Kandori, Akihiko; Shimizu, Wataru; Yokokawa, Miki; Kamakura, Shiro; Miyatake, Kunio; Murakami, Masahiro; Miyashita, Tsuyoshi; Ogata, Kuniomi; Tsukada, Keiji
2004-01-01
A method for reconstructing an action potential during the repolarization period was developed. This method uses a current distribution-plotted as a current-arrow map (CAM)-calculated using magnetocardiogram (MCG) signals. The current arrows are summarized during the QRS complex period and subtracted during the ST-T wave period in order to reconstruct the action-potential waveform. To ensure the similarity between a real action potential and the reconstructed action potential using CAM, a monophasic action potential (MAP) and an MCG of the same patient with type-I long-QT syndrome were measured. Although the MAP had one notch that was associated with early afterdepolarization (EAD), the reconstructed action potential had two large and small notches. The small notch timing agreed with the occurrence of the EAD in the MAP. On the other hand, the initiation time of an abnormal current distribution coincides with the appearance timing of the first large notch, and its end time coincides with that of the second small notch. These results suggest that a simple reconstruction method using a CAM based on MCG data can provide a similar action-potential waveform to a MAP waveform without having to introduce a catheter
Four-dimensional cone beam CT reconstruction and enhancement using a temporal nonlocal means method
Energy Technology Data Exchange (ETDEWEB)
Jia Xun; Tian Zhen; Lou Yifei; Sonke, Jan-Jakob; Jiang, Steve B. [Center for Advanced Radiotherapy Technologies and Department of Radiation Medicine and Applied Sciences, University of California San Diego, La Jolla, California 92037 (United States); School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, Georgia 30318 (United States); Department of Radiation Oncology, Netherlands Cancer Institute-Antoni van Leeuwenhoek Hospital, Plesmanlaan 121, 1066 CX Amsterdam (Netherlands); Center for Advanced Radiotherapy Technologies and Department of Radiation Medicine and Applied Sciences, University of California San Diego, La Jolla, California 92037 (United States)
2012-09-15
Purpose: Four-dimensional cone beam computed tomography (4D-CBCT) has been developed to provide respiratory phase-resolved volumetric imaging in image guided radiation therapy. Conventionally, it is reconstructed by first sorting the x-ray projections into multiple respiratory phase bins according to a breathing signal extracted either from the projection images or some external surrogates, and then reconstructing a 3D CBCT image in each phase bin independently using FDK algorithm. This method requires adequate number of projections for each phase, which can be achieved using a low gantry rotation or multiple gantry rotations. Inadequate number of projections in each phase bin results in low quality 4D-CBCT images with obvious streaking artifacts. 4D-CBCT images at different breathing phases share a lot of redundant information, because they represent the same anatomy captured at slightly different temporal points. Taking this redundancy along the temporal dimension into account can in principle facilitate the reconstruction in the situation of inadequate number of projection images. In this work, the authors propose two novel 4D-CBCT algorithms: an iterative reconstruction algorithm and an enhancement algorithm, utilizing a temporal nonlocal means (TNLM) method. Methods: The authors define a TNLM energy term for a given set of 4D-CBCT images. Minimization of this term favors those 4D-CBCT images such that any anatomical features at one spatial point at one phase can be found in a nearby spatial point at neighboring phases. 4D-CBCT reconstruction is achieved by minimizing a total energy containing a data fidelity term and the TNLM energy term. As for the image enhancement, 4D-CBCT images generated by the FDK algorithm are enhanced by minimizing the TNLM function while keeping the enhanced images close to the FDK results. A forward-backward splitting algorithm and a Gauss-Jacobi iteration method are employed to solve the problems. The algorithms implementation on
Directory of Open Access Journals (Sweden)
Kravtsenyuk Olga V
2007-01-01
Full Text Available The possibility of improving the spatial resolution of diffuse optical tomograms reconstructed by the photon average trajectories (PAT method is substantiated. The PAT method recently presented by us is based on a concept of an average statistical trajectory for transfer of light energy, the photon average trajectory (PAT. The inverse problem of diffuse optical tomography is reduced to a solution of an integral equation with integration along a conditional PAT. As a result, the conventional algorithms of projection computed tomography can be used for fast reconstruction of diffuse optical images. The shortcoming of the PAT method is that it reconstructs the images blurred due to averaging over spatial distributions of photons which form the signal measured by the receiver. To improve the resolution, we apply a spatially variant blur model based on an interpolation of the spatially invariant point spread functions simulated for the different small subregions of the image domain. Two iterative algorithms for solving a system of linear algebraic equations, the conjugate gradient algorithm for least squares problem and the modified residual norm steepest descent algorithm, are used for deblurring. It is shown that a gain in spatial resolution can be obtained.
Directory of Open Access Journals (Sweden)
Vladimir V. Lyubimov
2007-01-01
Full Text Available The possibility of improving the spatial resolution of diffuse optical tomograms reconstructed by the photon average trajectories (PAT method is substantiated. The PAT method recently presented by us is based on a concept of an average statistical trajectory for transfer of light energy, the photon average trajectory (PAT. The inverse problem of diffuse optical tomography is reduced to a solution of an integral equation with integration along a conditional PAT. As a result, the conventional algorithms of projection computed tomography can be used for fast reconstruction of diffuse optical images. The shortcoming of the PAT method is that it reconstructs the images blurred due to averaging over spatial distributions of photons which form the signal measured by the receiver. To improve the resolution, we apply a spatially variant blur model based on an interpolation of the spatially invariant point spread functions simulated for the different small subregions of the image domain. Two iterative algorithms for solving a system of linear algebraic equations, the conjugate gradient algorithm for least squares problem and the modified residual norm steepest descent algorithm, are used for deblurring. It is shown that a 27% gain in spatial resolution can be obtained.
Loomis, E N; Grim, G P; Wilde, C; Wilson, D C; Morgan, G; Wilke, M; Tregillis, I; Merrill, F; Clark, D; Finch, J; Fittinghoff, D; Bower, D
2010-10-01
Development of analysis techniques for neutron imaging at the National Ignition Facility is an important and difficult task for the detailed understanding of high-neutron yield inertial confinement fusion implosions. Once developed, these methods must provide accurate images of the hot and cold fuels so that information about the implosion, such as symmetry and areal density, can be extracted. One method under development involves the numerical inversion of the pinhole image using knowledge of neutron transport through the pinhole aperture from Monte Carlo simulations. In this article we present results of source reconstructions based on simulated images that test the methods effectiveness with regard to pinhole misalignment.
A new method for three-dimensional laparoscopic ultrasound model reconstruction
DEFF Research Database (Denmark)
Fristrup, C W; Pless, T; Durup, J
2004-01-01
BACKGROUND: Laparoscopic ultrasound is an important modality in the staging of gastrointestinal tumors. Correct staging depends on good spatial understanding of the regional tumor infiltration. Three-dimensional (3D) models may facilitate the evaluation of tumor infiltration. The aim of the study...... accuracy of the new method was tested ex vivo, and the clinical feasibility was tested on a small series of patients. RESULTS: Both electromagnetic tracked reconstructions and the new 3D method gave good volumetric information with no significant difference. Clinical use of the new 3D method showed...
Liu, Lu
2017-08-17
This paper presents a workflow for near-surface velocity automatic estimation using the early arrivals of seismic data. This workflow comprises two methods, source-domain full traveltime inversion (FTI) and early-arrival waveform inversion. Source-domain FTI is capable of automatically generating a background velocity that can kinematically match the reconstructed plane-wave sources of early arrivals with true plane-wave sources. This method does not require picking first arrivals for inversion, which is one of the most challenging aspects of ray-based first-arrival tomographic inversion. Moreover, compared with conventional Born-based methods, source-domain FTI can distinguish between slower or faster initial model errors via providing the correct sign of the model gradient. In addition, this method does not need estimation of the source wavelet, which is a requirement for receiver-domain wave-equation velocity inversion. The model derived from source-domain FTI is then used as input to early-arrival waveform inversion to obtain the short-wavelength velocity components. We have tested the workflow on synthetic and field seismic data sets. The results show source-domain FTI can generate reasonable background velocities for early-arrival waveform inversion even when subsurface velocity reversals are present and the workflow can produce a high-resolution near-surface velocity model.
Stephen, Joanna M; Kittl, Christoph; Williams, Andy; Zaffagnini, Stefano; Marcheggiani Muccioli, Giulio Maria; Fink, Christian; Amis, Andrew A
2016-05-01
There remains a lack of evidence regarding the optimal method when reconstructing the medial patellofemoral ligament (MPFL) and whether some graft constructs can be more forgiving to surgical errors, such as overtensioning or tunnel malpositioning, than others. The null hypothesis was that there would not be a significant difference between reconstruction methods (eg, graft type and fixation) in the adverse biomechanical effects (eg, patellar maltracking or elevated articular contact pressure) resulting from surgical errors such as tunnel malpositioning or graft overtensioning. Controlled laboratory study. Nine fresh-frozen cadaveric knees were placed on a customized testing rig, where the femur was fixed but the tibia could be moved freely from 0° to 90° of flexion. Individual quadriceps heads and the iliotibial tract were separated and loaded to 205 N of tension using a weighted pulley system. Patellofemoral contact pressures and patellar tracking were measured at 0°, 10°, 20°, 30°, 60°, and 90° of flexion using pressure-sensitive film inserted between the patella and trochlea, in conjunction with an optical tracking system. The MPFL was transected and then reconstructed in a randomized order using a (1) double-strand gracilis tendon, (2) quadriceps tendon, and (3) tensor fasciae latae allograft. Pressure maps and tracking measurements were recorded for each reconstruction method in 2 N and 10 N of tension and with the graft positioned in the anatomic, proximal, and distal femoral tunnel positions. Statistical analysis was undertaken using repeated-measures analyses of variance, Bonferroni post hoc analyses, and paired t tests. Anatomically placed grafts during MPFL reconstruction tensioned to 2 N resulted in the restoration of intact medial joint contact pressures and patellar tracking for all 3 graft types investigated (P > .050). However, femoral tunnels positioned proximal or distal to the anatomic origin resulted in significant increases in the mean
Tomiwa, K. G.
2017-09-01
The search for new physics in the H → γγ+met relies on how well the missing transverse energy is reconstructed. The Met algorithm used by the ATLAS experiment in turns uses input variables like photon and jets which depend on the reconstruction of the primary vertex. This document presents the performance of di-photon vertex reconstruction algorithms (hardest vertex method and Neural Network method). Comparing the performance of these algorithms for the nominal Standard Model sample and the Beyond Standard Model sample, we see the overall performance of the Neural Network method of primary vertex selection performed better than the Hardest vertex method.
Critical node treatment in the analytic function expansion method for Pin Power Reconstruction
International Nuclear Information System (INIS)
Gao, Z.; Xu, Y.; Downar, T.
2013-01-01
Pin Power Reconstruction (PPR) was implemented in PARCS using the eight term analytic function expansion method (AFEN). This method has been demonstrated to be both accurate and efficient. However, similar to all the methods involving analytic functions, such as the analytic node method (ANM) and AFEN for nodal solution, the use of AFEN for PPR also has potential numerical issue with critical nodes. The conventional analytic functions are trigonometric or hyperbolic sine or cosine functions with an angular frequency proportional to buckling. For a critic al node the buckling is zero and the sine functions becomes zero, and the cosine function become unity. In this case, the eight terms of the analytic functions are no longer distinguishable from ea ch other which makes their corresponding coefficients can no longer be determined uniquely. The mode flux distribution of critical node can be linear while the conventional analytic functions can only express a uniform distribution. If there is critical or near critical node in a plane, the reconstructed pin power distribution is often be shown negative or very large values using the conventional method. In this paper, we propose a new method to avoid the numerical problem wit h critical nodes which uses modified trigonometric or hyperbolic sine functions which are the ratio of trigonometric or hyperbolic sine and its angular frequency. If there are no critical or near critical nodes present, the new pin power reconstruction method with modified analytic functions are equivalent to the conventional analytic functions. The new method is demonstrated using the L336C5 benchmark problem. (authors)
Critical node treatment in the analytic function expansion method for Pin Power Reconstruction
Energy Technology Data Exchange (ETDEWEB)
Gao, Z. [Rice University, MS 318, 6100 Main Street, Houston, TX 77005 (United States); Xu, Y. [Argonne National Laboratory, 9700 South Case Ave., Argonne, IL 60439 (United States); Downar, T. [Department of Nuclear Engineering, University of Michigan, 2355 Bonisteel blvd., Ann Arbor, MI 48109 (United States)
2013-07-01
Pin Power Reconstruction (PPR) was implemented in PARCS using the eight term analytic function expansion method (AFEN). This method has been demonstrated to be both accurate and efficient. However, similar to all the methods involving analytic functions, such as the analytic node method (ANM) and AFEN for nodal solution, the use of AFEN for PPR also has potential numerical issue with critical nodes. The conventional analytic functions are trigonometric or hyperbolic sine or cosine functions with an angular frequency proportional to buckling. For a critic al node the buckling is zero and the sine functions becomes zero, and the cosine function become unity. In this case, the eight terms of the analytic functions are no longer distinguishable from ea ch other which makes their corresponding coefficients can no longer be determined uniquely. The mode flux distribution of critical node can be linear while the conventional analytic functions can only express a uniform distribution. If there is critical or near critical node in a plane, the reconstructed pin power distribution is often be shown negative or very large values using the conventional method. In this paper, we propose a new method to avoid the numerical problem wit h critical nodes which uses modified trigonometric or hyperbolic sine functions which are the ratio of trigonometric or hyperbolic sine and its angular frequency. If there are no critical or near critical nodes present, the new pin power reconstruction method with modified analytic functions are equivalent to the conventional analytic functions. The new method is demonstrated using the L336C5 benchmark problem. (authors)
Lane marking detection based on waveform analysis and CNN
Ye, Yang Yang; Chen, Hou Jin; Hao, Xiao Li
2017-06-01
Lane markings detection is a very important part of the ADAS to avoid traffic accidents. In order to obtain accurate lane markings, in this work, a novel and efficient algorithm is proposed, which analyses the waveform generated from the road image after inverse perspective mapping (IPM). The algorithm includes two main stages: the first stage uses an image preprocessing including a CNN to reduce the background and enhance the lane markings. The second stage obtains the waveform of the road image and analyzes the waveform to get lanes. The contribution of this work is that we introduce local and global features of the waveform to detect the lane markings. The results indicate the proposed method is robust in detecting and fitting the lane markings.
Faithful effective-one-body waveforms of small-mass-ratio coalescing black hole binaries
International Nuclear Information System (INIS)
Damour, Thibault; Nagar, Alessandro
2007-01-01
We address the problem of constructing high-accuracy, faithful analytic waveforms describing the gravitational wave signal emitted by inspiralling and coalescing binary black holes. We work within the effective-one-body (EOB) framework and propose a methodology for improving the current (waveform) implementations of this framework based on understanding, element by element, the physics behind each feature of the waveform and on systematically comparing various EOB-based waveforms with exact waveforms obtained by numerical relativity approaches. The present paper focuses on small-mass-ratio nonspinning binary systems, which can be conveniently studied by Regge-Wheeler-Zerilli-type methods. Our results include (i) a resummed, 3 PN-accurate description of the inspiral waveform, (ii) a better description of radiation reaction during the plunge, (iii) a refined analytic expression for the plunge waveform, (iv) an improved treatment of the matching between the plunge and ring-down waveforms. This improved implementation of the EOB approach allows us to construct complete analytic waveforms which exhibit a remarkable agreement with the exact ones in modulus, frequency, and phase. In particular, the analytic and numerical waveforms stay in phase, during the whole process, within ±1.1% of a cycle. We expect that the extension of our methodology to the comparable-mass case will be able to generate comparably accurate analytic waveforms of direct use for the ground-based network of interferometric detectors of gravitational waves
Benazzi, S; Stansfield, E; Milani, C; Gruppioni, G
2009-07-01
The process of forensic identification of missing individuals is frequently reliant on the superimposition of cranial remains onto an individual's picture and/or facial reconstruction. In the latter, the integrity of the skull or a cranium is an important factor in successful identification. Here, we recommend the usage of computerized virtual reconstruction and geometric morphometrics for the purposes of individual reconstruction and identification in forensics. We apply these methods to reconstruct a complete cranium from facial remains that allegedly belong to the famous Italian humanist of the fifteenth century, Angelo Poliziano (1454-1494). Raw data was obtained by computed tomography scans of the Poliziano face and a complete reference skull of a 37-year-old Italian male. Given that the amount of distortion of the facial remains is unknown, two reconstructions are proposed: The first calculates the average shape between the original and its reflection, and the second discards the less preserved left side of the cranium under the assumption that there is no deformation on the right. Both reconstructions perform well in the superimposition with the original preserved facial surface in a virtual environment. The reconstruction by means of averaging between the original and reflection yielded better results during the superimposition with portraits of Poliziano. We argue that the combination of computerized virtual reconstruction and geometric morphometric methods offers a number of advantages over traditional plastic reconstruction, among which are speed, reproducibility, easiness of manipulation when superimposing with pictures in virtual environment, and assumptions control.
Developing a framework for evaluating tallgrass prairie reconstruction methods and management
Larson, Diane L.; Ahlering, Marissa; Drobney, Pauline; Esser, Rebecca; Larson, Jennifer L.; Viste-Sparkman, Karen
2018-01-01
The thousands of hectares of prairie reconstructed each year in the tallgrass prairie biome can provide a valuable resource for evaluation of seed mixes, planting methods, and post-planting management if methods used and resulting characteristics of the prairies are recorded and compiled in a publicly accessible database. The objective of this study was to evaluate the use of such data to understand the outcomes of reconstructions over a 10-year period at two U.S. Fish and Wildlife Service refuges. Variables included number of species planted, seed source (combine-harvest or combine-harvest plus hand-collected), fire history, and planting method and season. In 2015 we surveyed vegetation on 81 reconstructions and calculated proportion of planted species observed; introduced species richness; native species richness, evenness and diversity; and mean coefficient of conservatism. We conducted exploratory analyses to learn how implied communities based on seed mix compared with observed vegetation; which seeding or management variables were influential in the outcome of the reconstructions; and consistency of responses between the two refuges. Insights from this analysis include: 1) proportion of planted species observed in 2015 declined as planted richness increased, but lack of data on seeding rate per species limited conclusions about value of added species; 2) differing responses to seeding and management between the two refuges suggest the importance of geographic variability that could be addressed using a public database; and 3) variables such as fire history are difficult to quantify consistently and should be carefully evaluated in the context of a public data repository.
Jiang, Y.; Xing, H. L.
2016-12-01
Micro-seismic events induced by water injection, mining activity or oil/gas extraction are quite informative, the interpretation of which can be applied for the reconstruction of underground stress and monitoring of hydraulic fracturing progress in oil/gas reservoirs. The source characterises and locations are crucial parameters that required for these purposes, which can be obtained through the waveform matching inversion (WMI) method. Therefore it is imperative to develop a WMI algorithm with high accuracy and convergence speed. Heuristic algorithm, as a category of nonlinear method, possesses a very high convergence speed and good capacity to overcome local minimal values, and has been well applied for many areas (e.g. image processing, artificial intelligence). However, its effectiveness for micro-seismic WMI is still poorly investigated; very few literatures exits that addressing this subject. In this research an advanced heuristic algorithm, gravitational search algorithm (GSA) , is proposed to estimate the focal mechanism (angle of strike, dip and rake) and source locations in three dimension. Unlike traditional inversion methods, the heuristic algorithm inversion does not require the approximation of green function. The method directly interacts with a CPU parallelized finite difference forward modelling engine, and updating the model parameters under GSA criterions. The effectiveness of this method is tested with synthetic data form a multi-layered elastic model; the results indicate GSA can be well applied on WMI and has its unique advantages. Keywords: Micro-seismicity, Waveform matching inversion, gravitational search algorithm, parallel computation
System Characterizations and Optimized Reconstruction Methods for Novel X-ray Imaging Modalities
Guan, Huifeng
In the past decade there have been many new emerging X-ray based imaging technologies developed for different diagnostic purposes or imaging tasks. However, there exist one or more specific problems that prevent them from being effectively or efficiently employed. In this dissertation, four different novel X-ray based imaging technologies are discussed, including propagation-based phase-contrast (PB-XPC) tomosynthesis, differential X-ray phase-contrast tomography (D-XPCT), projection-based dual-energy computed radiography (DECR), and tetrahedron beam computed tomography (TBCT). System characteristics are analyzed or optimized reconstruction methods are proposed for these imaging modalities. In the first part, we investigated the unique properties of propagation-based phase-contrast imaging technique when combined with the X-ray tomosynthesis. Fourier slice theorem implies that the high frequency components collected in the tomosynthesis data can be more reliably reconstructed. It is observed that the fringes or boundary enhancement introduced by the phase-contrast effects can serve as an accurate indicator of the true depth position in the tomosynthesis in-plane image. In the second part, we derived a sub-space framework to reconstruct images from few-view D-XPCT data set. By introducing a proper mask, the high frequency contents of the image can be theoretically preserved in a certain region of interest. A two-step reconstruction strategy is developed to mitigate the risk of subtle structures being oversmoothed when the commonly used total-variation regularization is employed in the conventional iterative framework. In the thirt part, we proposed a practical method to improve the quantitative accuracy of the projection-based dual-energy material decomposition. It is demonstrated that applying a total-projection-length constraint along with the dual-energy measurements can achieve a stabilized numerical solution of the decomposition problem, thus overcoming the
International Nuclear Information System (INIS)
Pill-Hoon Choung
1999-01-01
Although there are various applications of allogenic bone grafts, a new technique of prevascularized lyophilized allogenic bone grafting for maxillo-mandibular reconstruction will be presented. Allogenic bone has been made by author's protocol for jaw defects as a powder, chip or block bone type. The author used lyophilized allogenic bone grafts for discontinuity defects as a block bone. In those cases, neovascularization and resorption of the allogenic bone were important factors for success of grafting. To overcome the problems, the author designed the technique of prefabricated vascularization of allogenic bone, which was lyophilized cranium, with an application of bovine BMP or not. Lyophilized cranial bone was designed for the defects and was put into the scalp. After confirming a hot spot via scintigram several months later, vascularized allogenic bone was harvested pedicled on the parietotemporal fascia based on the superficial temporal artery and vein. Vascularized allogenic cranial bone was rotated into the defect and fixed rigidly. Postoperatively, there was no severe resorption and functional disturbance of the mandible. In this technique, BMP seems to be an important role to help osteogenesis and neovascularization. Eight patients underwent prefabricated vascularization of allogenic bone grafts. Among them, four cases of reconstruction in mandibular discontinuity defects and one case of reconstruction in maxillectomy defect underwent this method, which will be presented with good results. This method may be an alternative technique of microvascular free bone graft
International Nuclear Information System (INIS)
Soussaline, F.; Bidaut, L.; Raynaud, C.; Le Coq, G.
1983-06-01
An analytical solution to the SPECT reconstruction problem, where the actual attenuation effect can be included, was developped using a regularizing iterative method (RIM). The potential of this approach in quantitative brain studies when using a tracer for cerebrovascular disorders is now under evaluation. Mathematical simulations for a distributed activity in the brain surrounded by the skull and physical phantom studies were performed, using a rotating camera based SPECT system, allowing the calibration of the system and the evaluation of the adapted method to be used. On the simulation studies, the contrast obtained along a profile, was less than 5%, the standard deviation 8% and the quantitative accuracy 13%, for a uniform emission distribution of mean = 100 per pixel and a double attenuation coefficient of μ = 0.115 cm -1 and 0.5 cm -1 . Clinical data obtained after injection of 123 I (AMPI) were reconstructed using the RIM without and with cerebrovascular diseases or lesion defects. Contour finding techniques were used for the delineation of the brain and the skull, and measured attenuation coefficients were assumed within these two regions. Using volumes of interest, selected on homogeneous regions on an hemisphere and reported symetrically, the statistical uncertainty for 300 K events in the tomogram was found to be 12%, the index of symetry was of 4% for normal distribution. These results suggest that quantitative SPECT reconstruction for brain distribution is feasible, and that combined with an adapted tracer and an adequate model physiopathological parameters could be extracted
International Nuclear Information System (INIS)
Manrique, John Peter O.; Costa, Alessandro M.
2016-01-01
The spectral distribution of megavoltage X-rays used in radiotherapy departments is a fundamental quantity from which, in principle, all relevant information required for radiotherapy treatments can be determined. To calculate the dose delivered to the patient who make radiation therapy, are used treatment planning systems (TPS), which make use of convolution and superposition algorithms and which requires prior knowledge of the photon fluence spectrum to perform the calculation of three-dimensional doses and thus ensure better accuracy in the tumor control probabilities preserving the normal tissue complication probabilities low. In this work we have obtained the photon fluence spectrum of X-ray of the SIEMENS ONCOR linear accelerator of 6 MV, using an character-inverse method to the reconstruction of the spectra of photons from transmission curves measured for different thicknesses of aluminum; the method used for reconstruction of the spectra is a stochastic technique known as generalized simulated annealing (GSA), based on the work of quasi-equilibrium statistic of Tsallis. For the validation of the reconstructed spectra we calculated the curve of percentage depth dose (PDD) for energy of 6 MV, using Monte Carlo simulation with Penelope code, and from the PDD then calculate the beam quality index TPR_2_0_/_1_0. (author)
DEFF Research Database (Denmark)
Senturk, Osman Selcuk; Hava, Ahmet M.
2009-01-01
. The SAF-compensated system utilizing WRM provides highperformance load harmonic voltage isolation and load voltage regulation at steady-state and during transients compared to the system utilizing the synchronous reference-frame-based signal decomposition. In addition, reducing the line current sampling...
Reconstruction of Sound Source Pressures in an Enclosure Using the Phased Beam Tracing Method
DEFF Research Database (Denmark)
Jeong, Cheol-Ho; Ih, Jeong-Guon
2009-01-01
. First, surfaces of an extended source are divided into reasonably small segments. From each source segment, one beam is projected into the field and all emitted beams are traced. Radiated beams from the source reach array sensors after traveling various paths including the wall reflections. Collecting...... all the pressure histories at the field points, source-observer relations can be constructed in a matrix-vector form for each frequency. By multiplying the measured field data with the pseudo-inverse of the calculated transfer function, one obtains the distribution of source pressure. An omni......-directional sphere and a cubic source in a rectangular enclosure were taken as examples in the simulation tests. A reconstruction error was investigated by Monte Carlo simulation in terms of field point locations. When the source information was reconstructed by the present method, it was shown that the sound power...
Brief review of image reconstruction methods for imaging in nuclear medicine
International Nuclear Information System (INIS)
Murayama, Hideo
1999-01-01
Emission computed tomography (ECT) has as its major emphasis the quantitative determination of the moment to moment changes in the chemistry and flow physiology of injected or inhaled compounds labeled with radioactive atoms in a human body. The major difference lies in the fact that ECT seeks to describe the location and intensity of sources of emitted photons in an attenuating medium whereas transmission X-ray computed tomography (TCT) seeks to determine the distribution of the attenuating medium. A second important difference between ECT and TCT is that of available statistics. ECT statistics are low because each photon without control in emitting direction must be detected and analyzed, not as in TCT. The following sections review the historical development of image reconstruction methods for imaging in nuclear medicine, relevant intrinsic concepts for image reconstruction on ECT, and current status of volume imaging as well as a unique approach on iterative techniques for ECT. (author). 130 refs
High-order noise analysis for low dose iterative image reconstruction methods: ASIR, IRIS, and MBAI
Do, Synho; Singh, Sarabjeet; Kalra, Mannudeep K.; Karl, W. Clem; Brady, Thomas J.; Pien, Homer
2011-03-01
Iterative reconstruction techniques (IRTs) has been shown to suppress noise significantly in low dose CT imaging. However, medical doctors hesitate to accept this new technology because visual impression of IRT images are different from full-dose filtered back-projection (FBP) images. Most common noise measurements such as the mean and standard deviation of homogeneous region in the image that do not provide sufficient characterization of noise statistics when probability density function becomes non-Gaussian. In this study, we measure L-moments of intensity values of images acquired at 10% of normal dose and reconstructed by IRT methods of two state-of-art clinical scanners (i.e., GE HDCT and Siemens DSCT flash) by keeping dosage level identical to each other. The high- and low-dose scans (i.e., 10% of high dose) were acquired from each scanner and L-moments of noise patches were calculated for the comparison.
Directory of Open Access Journals (Sweden)
P.L. Israelevich
Full Text Available In this study we test a stream function method suggested by Israelevich and Ershkovich for instantaneous reconstruction of global, high-latitude ionospheric convection patterns from a limited set of experimental observations, namely, from the electric field or ion drift velocity vector measurements taken along two polar satellite orbits only. These two satellite passes subdivide the polar cap into several adjacent areas. Measured electric fields or ion drifts can be considered as boundary conditions (together with the zero electric potential condition at the low-latitude boundary for those areas, and the entire ionospheric convection pattern can be reconstructed as a solution of the boundary value problem for the stream function without any preliminary information on ionospheric conductivities. In order to validate the stream function method, we utilized the IZMIRAN electrodynamic model (IZMEM recently calibrated by the DMSP ionospheric electrostatic potential observations. For the sake of simplicity, we took the modeled electric fields along the noon-midnight and dawn-dusk meridians as the boundary conditions. Then, the solution(s of the boundary value problem (i.e., a reconstructed potential distribution over the entire polar region is compared with the original IZMEM/DMSP electric potential distribution(s, as well as with the various cross cuts of the polar cap. It is found that reconstructed convection patterns are in good agreement with the original modelled patterns in both the northern and southern polar caps. The analysis is carried out for the winter and summer conditions, as well as for a number of configurations of the interplanetary magnetic field.
Key words: Ionosphere (electric fields and currents; plasma convection; modelling and forecasting
Directory of Open Access Journals (Sweden)
Jacob J Setterbo
Full Text Available Racetrack surface is a risk factor for racehorse injuries and fatalities. Current research indicates that race surface mechanical properties may be influenced by material composition, moisture content, temperature, and maintenance. Race surface mechanical testing in a controlled laboratory setting would allow for objective evaluation of dynamic properties of surface and factors that affect surface behavior.To develop a method for reconstruction of race surfaces in the laboratory and validate the method by comparison with racetrack measurements of dynamic surface properties.Track-testing device (TTD impact tests were conducted to simulate equine hoof impact on dirt and synthetic race surfaces; tests were performed both in situ (racetrack and using laboratory reconstructions of harvested surface materials. Clegg Hammer in situ measurements were used to guide surface reconstruction in the laboratory. Dynamic surface properties were compared between in situ and laboratory settings. Relationships between racetrack TTD and Clegg Hammer measurements were analyzed using stepwise multiple linear regression.Most dynamic surface property setting differences (racetrack-laboratory were small relative to surface material type differences (dirt-synthetic. Clegg Hammer measurements were more strongly correlated with TTD measurements on the synthetic surface than the dirt surface. On the dirt surface, Clegg Hammer decelerations were negatively correlated with TTD forces.Laboratory reconstruction of racetrack surfaces guided by Clegg Hammer measurements yielded TTD impact measurements similar to in situ values. The negative correlation between TTD and Clegg Hammer measurements confirms the importance of instrument mass when drawing conclusions from testing results. Lighter impact devices may be less appropriate for assessing dynamic surface properties compared to testing equipment designed to simulate hoof impact (TTD.Dynamic impact properties of race surfaces
Impact of reconstruction methods and pathological factors on survival after pancreaticoduodenectomy
Directory of Open Access Journals (Sweden)
Salah Binziad
2013-01-01
Full Text Available Background: Surgery remains the mainstay of therapy for pancreatic head (PH and periampullary carcinoma (PC and provides the only chance of cure. Improvements of surgical technique, increased surgical experience and advances in anesthesia, intensive care and parenteral nutrition have substantially decreased surgical complications and increased survival. We evaluate the effects of reconstruction type, complications and pathological factors on survival and quality of life. Materials and Methods: This is a prospective study to evaluate the impact of various reconstruction methods of the pancreatic remnant after pancreaticoduodenectomy and the pathological characteristics of PC patients over 3.5 years. Patient characteristics and descriptive analysis in the three variable methods either with or without stent were compared with Chi-square test. Multivariate analysis was performed with the logistic regression analysis test and multinomial logistic regression analysis test. Survival rate was analyzed by use Kaplan-Meier test. Results: Forty-one consecutive patients with PC were enrolled. There were 23 men (56.1% and 18 women (43.9%, with a median age of 56 years (16 to 70 years. There were 24 cases of PH cancer, eight cases of PC, four cases of distal CBD cancer and five cases of duodenal carcinoma. Nine patients underwent duct-to-mucosa pancreatico jejunostomy (PJ, 17 patients underwent telescoping pancreatico jejunostomy (PJ and 15 patients pancreaticogastrostomy (PG. The pancreatic duct was stented in 30 patients while in 11 patients, the duct was not stented. The PJ duct-to-mucosa caused significantly less leakage, but longer operative and reconstructive times. Telescoping PJ was associated with the shortest hospital stay. There were 5 postoperative mortalities, while postoperative morbidities included pancreatic fistula-6 patients, delayed gastric emptying in-11, GI fistula-3, wound infection-12, burst abdomen-6 and pulmonary infection-2. Factors
International Nuclear Information System (INIS)
Kosarev, E.L.
1980-01-01
A new method to reconstruct spatial star distribution in globular clusters is presented. The method gives both the estimation of unknown spatial distribution and the probable reconstruction error. This error has statistical origin and depends only on the number of stars in a cluster. The method is applied to reconstruct the spatial density of 441 flare stars in Pleiades. The spatial density has a maximum in the centre of the cluster of about 1.6-2.5 pc -3 and with increasing distance from the center smoothly falls down to zero approximately with the Gaussian law with a scale parameter of 3.5 pc
FBP and BPF reconstruction methods for circular X-ray tomography with off-center detector
International Nuclear Information System (INIS)
Schaefer, Dirk; Grass, Michael; Haar, Peter van de
2011-01-01
Purpose: Circular scanning with an off-center planar detector is an acquisition scheme that allows to save detector area while keeping a large field of view (FOV). Several filtered back-projection (FBP) algorithms have been proposed earlier. The purpose of this work is to present two newly developed back-projection filtration (BPF) variants and evaluate the image quality of these methods compared to the existing state-of-the-art FBP methods. Methods: The first new BPF algorithm applies redundancy weighting of overlapping opposite projections before differentiation in a single projection. The second one uses the Katsevich-type differentiation involving two neighboring projections followed by redundancy weighting and back-projection. An averaging scheme is presented to mitigate streak artifacts inherent to circular BPF algorithms along the Hilbert filter lines in the off-center transaxial slices of the reconstructions. The image quality is assessed visually on reconstructed slices of simulated and clinical data. Quantitative evaluation studies are performed with the Forbild head phantom by calculating root-mean-squared-deviations (RMSDs) to the voxelized phantom for different detector overlap settings and by investigating the noise resolution trade-off with a wire phantom in the full detector and off-center scenario. Results: The noise-resolution behavior of all off-center reconstruction methods corresponds to their full detector performance with the best resolution for the FDK based methods with the given imaging geometry. With respect to RMSD and visual inspection, the proposed BPF with Katsevich-type differentiation outperforms all other methods for the smallest chosen detector overlap of about 15 mm. The best FBP method is the algorithm that is also based on the Katsevich-type differentiation and subsequent redundancy weighting. For wider overlap of about 40-50 mm, these two algorithms produce similar results outperforming the other three methods. The clinical
Directory of Open Access Journals (Sweden)
Buyun Sheng
2018-01-01
Full Text Available The existing surface reconstruction algorithms currently reconstruct large amounts of mesh data. Consequently, many of these algorithms cannot meet the efficiency requirements of real-time data transmission in a web environment. This paper proposes a lightweight surface reconstruction method for online 3D scanned point cloud data oriented toward 3D printing. The proposed online lightweight surface reconstruction algorithm is composed of a point cloud update algorithm (PCU, a rapid iterative closest point algorithm (RICP, and an improved Poisson surface reconstruction algorithm (IPSR. The generated lightweight point cloud data are pretreated using an updating and rapid registration method. The Poisson surface reconstruction is also accomplished by a pretreatment to recompute the point cloud normal vectors; this approach is based on a least squares method, and the postprocessing of the PDE patch generation was based on biharmonic-like fourth-order PDEs, which effectively reduces the amount of reconstructed mesh data and improves the efficiency of the algorithm. This method was verified using an online personalized customization system that was developed with WebGL and oriented toward 3D printing. The experimental results indicate that this method can generate a lightweight 3D scanning mesh rapidly and efficiently in a web environment.
Vennin, Samuel; Mayer, Alexia; Li, Ye; Fok, Henry; Clapp, Brian; Alastruey, Jordi; Chowienczyk, Phil
2015-09-01
Estimation of aortic and left ventricular (LV) pressure usually requires measurements that are difficult to acquire during the imaging required to obtain concurrent LV dimensions essential for determination of LV mechanical properties. We describe a novel method for deriving aortic pressure from the aortic flow velocity. The target pressure waveform is divided into an early systolic upstroke, determined by the water hammer equation, and a diastolic decay equal to that in the peripheral arterial tree, interposed by a late systolic portion described by a second-order polynomial constrained by conditions of continuity and conservation of mean arterial pressure. Pulse wave velocity (PWV, which can be obtained through imaging), mean arterial pressure, diastolic pressure, and diastolic decay are required inputs for the algorithm. The algorithm was tested using 1) pressure data derived theoretically from prespecified flow waveforms and properties of the arterial tree using a single-tube 1-D model of the arterial tree, and 2) experimental data acquired from a pressure/Doppler flow velocity transducer placed in the ascending aorta in 18 patients (mean ± SD: age 63 ± 11 yr, aortic BP 136 ± 23/73 ± 13 mmHg) at the time of cardiac catheterization. For experimental data, PWV was calculated from measured pressures/flows, and mean and diastolic pressures and diastolic decay were taken from measured pressure (i.e., were assumed to be known). Pressure reconstructed from measured flow agreed well with theoretical pressure: mean ± SD root mean square (RMS) error 0.7 ± 0.1 mmHg. Similarly, for experimental data, pressure reconstructed from measured flow agreed well with measured pressure (mean RMS error 2.4 ± 1.0 mmHg). First systolic shoulder and systolic peak pressures were also accurately rendered (mean ± SD difference 1.4 ± 2.0 mmHg for peak systolic pressure). This is the first noninvasive derivation of aortic pressure based on fluid dynamics (flow and wave speed) in the
Phase-space topography characterization of nonlinear ultrasound waveforms.
Dehghan-Niri, Ehsan; Al-Beer, Helem
2018-03-01
Fundamental understanding of ultrasound interaction with material discontinuities having closed interfaces has many engineering applications such as nondestructive evaluation of defects like kissing bonds and cracks in critical structural and mechanical components. In this paper, to analyze the acoustic field nonlinearities due to defects with closed interfaces, the use of a common technique in nonlinear physics, based on a phase-space topography construction of ultrasound waveform, is proposed. The central idea is to complement the "time" and "frequency" domain analyses with the "phase-space" domain analysis of nonlinear ultrasound waveforms. A nonlinear time series method known as pseudo phase-space topography construction is used to construct equivalent phase-space portrait of measured ultrasound waveforms. Several nonlinear models are considered to numerically simulate nonlinear ultrasound waveforms. The phase-space response of the simulated waveforms is shown to provide different topographic information, while the frequency domain shows similar spectral behavior. Thus, model classification can be substantially enhanced in the phase-space domain. Experimental results on high strength aluminum samples show that the phase-space transformation provides a unique detection and classification capabilities. The Poincaré map of the phase-space domain is also used to better understand the nonlinear behavior of ultrasound waveforms. It is shown that the analysis of ultrasound nonlinearities is more convenient and informative in the phase-space domain than in the frequency domain. Copyright © 2017 Elsevier B.V. All rights reserved.
Adaptive Waveform Design for Cognitive Radar in Multiple Targets Situations
Directory of Open Access Journals (Sweden)
Xiaowen Zhang
2018-02-01
Full Text Available In this paper, the problem of cognitive radar (CR waveform optimization design for target detection and estimation in multiple extended targets situations is investigated. This problem is analyzed in signal-dependent interference, as well as additive channel noise for extended targets with unknown target impulse response (TIR. To address this problem, an improved algorithm is employed for target detection by maximizing the detection probability of the received echo on the promise of ensuring the TIR estimation precision. In this algorithm, an additional weight vector is introduced to achieve a trade-off among different targets. Both the estimate of TIR and transmit waveform can be updated at each step based on the previous step. Under the same constraint on waveform energy and bandwidth, the information theoretical approach is also considered. In addition, the relationship between the waveforms that are designed based on the two criteria is discussed. Unlike most existing works that only consider single target with temporally correlated characteristics, waveform design for multiple extended targets is considered in this method. Simulation results demonstrate that compared with linear frequency modulated (LFM signal, waveforms designed based on maximum detection probability and maximum mutual information (MI criteria can make radar echoes contain more multiple-target information and improve radar performance as a result.
Optimal current waveforms for brushless permanent magnet motors
Moehle, Nicholas; Boyd, Stephen
2015-07-01
In this paper, we give energy-optimal current waveforms for a permanent magnet synchronous motor that result in a desired average torque. Our formulation generalises previous work by including a general back-electromotive force (EMF) wave shape, voltage and current limits, an arbitrary phase winding connection, a simple eddy current loss model, and a trade-off between power loss and torque ripple. Determining the optimal current waveforms requires solving a small convex optimisation problem. We show how to use the alternating direction method of multipliers to find the optimal current in milliseconds or hundreds of microseconds, depending on the processor used, which allows the possibility of generating optimal waveforms in real time. This allows us to adapt in real time to changes in the operating requirements or in the model, such as a change in resistance with winding temperature, or even gross changes like the failure of one winding. Suboptimal waveforms are available in tens or hundreds of microseconds, allowing for quick response after abrupt changes in the desired torque. We demonstrate our approach on a simple numerical example, in which we give the optimal waveforms for a motor with a sinusoidal back-EMF, and for a motor with a more complicated, nonsinusoidal waveform, in both the constant-torque region and constant-power region.
3D Electric Waveforms of Solar Wind Turbulence
Kellogg, P. J.; Goetz, K.; Monson, S. J.
2018-01-01
Electric fields provide the major coupling between the turbulence of the solar wind and particles. A large part of the turbulent spectrum of fluctuations in the solar wind is thought to be kinetic Alfvén waves; however, whistlers have recently been found to be important. In this article, we attempt to determine the mode identification of individual waveforms using the three-dimensional antenna system of the SWaves experiments on the STEREO spacecraft. Samples are chosen using waveforms with an apparent periodic structure, selected visually. The short antennas of STEREO respond to density fluctuations and to electric fields. Measurement of four quantities using only three antennas presents a problem. Methods to overcome or to ignore this difficulty are presented. We attempt to decide whether the waveforms correspond to the whistler mode or the Alfvén mode by using the direction of rotation of the signal. Most of the waveforms are so oblique—nearly linearly polarized—that the direction cannot be determined. However, about one third of the waveforms can be identified, and whistlers and Alfvén waves are present in roughly equal numbers. The selected waveforms are very intense but intermittent and are orders of magnitude stronger than the average, yet their accumulated signal accounts for a large fraction of the average. The average, however, is supposed to be the result of a turbulent mixture of many waves, not short coherent events. This presents a puzzle for future work.
A Weighted Two-Level Bregman Method with Dictionary Updating for Nonconvex MR Image Reconstruction
Directory of Open Access Journals (Sweden)
Qiegen Liu
2014-01-01
Full Text Available Nonconvex optimization has shown that it needs substantially fewer measurements than l1 minimization for exact recovery under fixed transform/overcomplete dictionary. In this work, two efficient numerical algorithms which are unified by the method named weighted two-level Bregman method with dictionary updating (WTBMDU are proposed for solving lp optimization under the dictionary learning model and subjecting the fidelity to the partial measurements. By incorporating the iteratively reweighted norm into the two-level Bregman iteration method with dictionary updating scheme (TBMDU, the modified alternating direction method (ADM solves the model of pursuing the approximated lp-norm penalty efficiently. Specifically, the algorithms converge after a relatively small number of iterations, under the formulation of iteratively reweighted l1 and l2 minimization. Experimental results on MR image simulations and real MR data, under a variety of sampling trajectories and acceleration factors, consistently demonstrate that the proposed method can efficiently reconstruct MR images from highly undersampled k-space data and presents advantages over the current state-of-the-art reconstruction approaches, in terms of higher PSNR and lower HFEN values.
A new method of three-dimensional computer assisted reconstruction of the developing biliary tract.
Prudhomme, M; Gaubert-Cristol, R; Jaeger, M; De Reffye, P; Godlewski, G
1999-01-01
A three-dimensional (3-D) computer assisted reconstruction of the biliary tract was performed in human and rat embryos at Carnegie stage 23 to describe and compare the biliary structures and to point out the anatomic relations between the structures of the hepatic pedicle. Light micrograph images from consecutive serial sagittal sections (diameter 7 mm) of one human and 16 rat embryos were directly digitalized with a CCD camera. The serial views were aligned automatically by software. The data were analysed following segmentation and thresholding, allowing automatic reconstruction. The main bile ducts ascended in the mesoderm of the hepatoduodenal ligament. The extrahepatic bile ducts: common bile duct (CD), cystic duct and gallbladder in the human, formed a compound system which could not be shown so clearly in histologic sections. The hepato-pancreatic ampulla was studied as visualised through the duodenum. The course of the CD was like a chicane. The gallbladder diameter and length were similar to those of the CD. Computer-assisted reconstruction permitted easy acquisition of the data by direct examination of the sections through the microscope. This method showed the relationships between the different structures of the hepatic pedicle and allowed estimation of the volume of the bile duct. These findings were not obvious in two-dimensional (2-D) views from histologic sections. Each embryonic stage could be rebuilt in 3-D, which could introduce the time as a fourth dimension, fundamental for the study of organogenesis.
Directory of Open Access Journals (Sweden)
Bakhtiari Jalal
2012-12-01
Full Text Available Abstract Background Laparoscopic gastrectomy is a new and technically challenging surgical procedure with potential benefit. The objective of this study was to investigate clinical and para-clinical consequences following Roux-en-Y and Jejunal Loop interposition reconstructive techniques for subtotal gastrectomy using laparoscopic assisted surgery. Results Following resection of the stomach attachments through a laparoscopic approach, stomach was removed and reconstruction was performed with either standard Roux-en-Y (n = 5 or Jejunal Loop interposition (n = 5 methods. Weight changes were monitored on a daily basis and blood samples were collected on Days 0, 7 and 21 post surgery. A fecal sample was collected on Day 28 after surgery to evaluate fat content. One month post surgery, positive contrast radiography was conducted at 5, 10, 20, 40, 60 and 90 minutes after oral administration of barium sulfate, to evaluate the postoperative complications. There was a gradual decline in body weight in both experimental groups after surgery (P 0.05. Fecal fat content increased in the Roux-en-Y compared to the Jejunal loop interposition technique (P 0.05. Conclusion Roux-en-Y and Jejunal loop interposition techniques might be considered as suitable approaches for reconstructing gastro-intestinal tract following gastrectomy in dogs. The results of this study warrant further investigation with a larger number of animals.
Fast gradient-based methods for Bayesian reconstruction of transmission and emission PET images
International Nuclear Information System (INIS)
Mumcuglu, E.U.; Leahy, R.; Zhou, Z.; Cherry, S.R.
1994-01-01
The authors describe conjugate gradient algorithms for reconstruction of transmission and emission PET images. The reconstructions are based on a Bayesian formulation, where the data are modeled as a collection of independent Poisson random variables and the image is modeled using a Markov random field. A conjugate gradient algorithm is used to compute a maximum a posteriori (MAP) estimate of the image by maximizing over the posterior density. To ensure nonnegativity of the solution, a penalty function is used to convert the problem to one of unconstrained optimization. Preconditioners are used to enhance convergence rates. These methods generally achieve effective convergence in 15--25 iterations. Reconstructions are presented of an 18 FDG whole body scan from data collected using a Siemens/CTI ECAT931 whole body system. These results indicate significant improvements in emission image quality using the Bayesian approach, in comparison to filtered backprojection, particularly when reprojections of the MAP transmission image are used in place of the standard attenuation correction factors
Statistical image reconstruction methods for simultaneous emission/transmission PET scans
International Nuclear Information System (INIS)
Erdogan, H.; Fessler, J.A.
1996-01-01
Transmission scans are necessary for estimating the attenuation correction factors (ACFs) to yield quantitatively accurate PET emission images. To reduce the total scan time, post-injection transmission scans have been proposed in which one can simultaneously acquire emission and transmission data using rod sources and sinogram windowing. However, since the post-injection transmission scans are corrupted by emission coincidences, accurate correction for attenuation becomes more challenging. Conventional methods (emission subtraction) for ACF computation from post-injection scans are suboptimal and require relatively long scan times. We introduce statistical methods based on penalized-likelihood objectives to compute ACFs and then use them to reconstruct lower noise PET emission images from simultaneous transmission/emission scans. Simulations show the efficacy of the proposed methods. These methods improve image quality and SNR of the estimates as compared to conventional methods
Using the SAND-II and MLM methods to reconstruct fast neutron spectra
International Nuclear Information System (INIS)
Bondars, Kh.Ya.; Kamnev, V.A.; Lapenas, A.A.; Troshin, V.S.
1981-01-01
The reconstruction of fast neutron spectra from measured reaction rates may be reduced to the solution of Fredholm's integral equation of the first kind. This problem falls in the category of incorrectly formulated problems, and so additional information is required concerning the unknown function i.e. concerning the differential energy dependence of the neutron, flux density sup(phi)(E). There are various methods for seeking a solution to the problem as formulated above. One of the best-known methods used in the USSR is the maximum likelihood method (MLM) (or directional difference method (DDM)), whereas SAND-II is commonly used abroad. The purpose of this paper is to compare the MLM and SAND-II methods, taking as an example the processing of measurement data which were obtained in the B-2 beam line at the BR-10 reactor in order to determine the composition of shielding for a fast reactor
International Nuclear Information System (INIS)
Kollár, László E; Lucas, Gary P; Zhang, Zhichao
2014-01-01
An analytical method is developed for the reconstruction of velocity profiles using measured potential distributions obtained around the boundary of a multi-electrode electromagnetic flow meter (EMFM). The method is based on the discrete Fourier transform (DFT), and is implemented in Matlab. The method assumes the velocity profile in a section of a pipe as a superposition of polynomials up to sixth order. Each polynomial component is defined along a specific direction in the plane of the pipe section. For a potential distribution obtained in a uniform magnetic field, this direction is not unique for quadratic and higher-order components; thus, multiple possible solutions exist for the reconstructed velocity profile. A procedure for choosing the optimum velocity profile is proposed. It is applicable for single-phase or two-phase flows, and requires measurement of the potential distribution in a non-uniform magnetic field. The potential distribution in this non-uniform magnetic field is also calculated for the possible solutions using weight values. Then, the velocity profile with the calculated potential distribution which is closest to the measured one provides the optimum solution. The reliability of the method is first demonstrated by reconstructing an artificial velocity profile defined by polynomial functions. Next, velocity profiles in different two-phase flows, based on results from the literature, are used to define the input velocity fields. In all cases, COMSOL Multiphysics is used to model the physical specifications of the EMFM and to simulate the measurements; thus, COMSOL simulations produce the potential distributions on the internal circumference of the flow pipe. These potential distributions serve as inputs for the analytical method. The reconstructed velocity profiles show satisfactory agreement with the input velocity profiles. The method described in this paper is most suitable for stratified flows and is not applicable to axisymmetric flows in
Reconstructing the Initial Density Field of the Local Universe: Methods and Tests with Mock Catalogs
Wang, Huiyuan; Mo, H. J.; Yang, Xiaohu; van den Bosch, Frank C.
2013-07-01
Our research objective in this paper is to reconstruct an initial linear density field, which follows the multivariate Gaussian distribution with variances given by the linear power spectrum of the current cold dark matter model and evolves through gravitational instabilities to the present-day density field in the local universe. For this purpose, we develop a Hamiltonian Markov Chain Monte Carlo method to obtain the linear density field from a posterior probability function that consists of two components: a prior of a Gaussian density field with a given linear spectrum and a likelihood term that is given by the current density field. The present-day density field can be reconstructed from galaxy groups using the method developed in Wang et al. Using a realistic mock Sloan Digital Sky Survey DR7, obtained by populating dark matter halos in the Millennium simulation (MS) with galaxies, we show that our method can effectively and accurately recover both the amplitudes and phases of the initial, linear density field. To examine the accuracy of our method, we use N-body simulations to evolve these reconstructed initial conditions to the present day. The resimulated density field thus obtained accurately matches the original density field of the MS in the density range 0.3 \\lesssim \\rho /\\bar{\\rho } \\lesssim 20 without any significant bias. In particular, the Fourier phases of the resimulated density fields are tightly correlated with those of the original simulation down to a scale corresponding to a wavenumber of ~1 h Mpc-1, much smaller than the translinear scale, which corresponds to a wavenumber of ~0.15 h Mpc-1.
MO-DE-207A-12: Toward Patient-Specific 4DCT Reconstruction Using Adaptive Velocity Binning
International Nuclear Information System (INIS)
Morris, E.D.; Glide-Hurst, C.; Klahr, P.
2016-01-01
Purpose: While 4DCT provides organ/tumor motion information, it often samples data over 10–20 breathing cycles. For patients presenting with compromised pulmonary function, breathing patterns can change over the acquisition time, potentially leading to tumor delineation discrepancies. This work introduces a novel adaptive velocity-modulated binning (AVB) 4DCT algorithm that modulates the reconstruction based on the respiratory waveform, yielding a patient-specific 4DCT solution. Methods: AVB was implemented in a research reconstruction configuration. After filtering the respiratory waveform, the algorithm examines neighboring data to a phase reconstruction point and the temporal gate is widened until the difference between the reconstruction point and waveform exceeds a threshold value—defined as percent difference between maximum/minimum waveform amplitude. The algorithm only impacts reconstruction if the gate width exceeds a set minimum temporal width required for accurate reconstruction. A sensitivity experiment of threshold values (0.5, 1, 5, 10, and 12%) was conducted to examine the interplay between threshold, signal to noise ratio (SNR), and image sharpness for phantom and several patient 4DCT cases using ten-phase reconstructions. Individual phase reconstructions were examined. Subtraction images and regions of interest were compared to quantify changes in SNR. Results: AVB increased signal in reconstructed 4DCT slices for respiratory waveforms that met the prescribed criteria. For the end-exhale phases, where the respiratory velocity is low, patient data revealed a threshold of 0.5% demonstrated increased SNR in the AVB reconstructions. For intermediate breathing phases, threshold values were required to be >10% to notice appreciable changes in CT intensity with AVB. AVB reconstructions exhibited appreciably higher SNR and reduced noise in regions of interest that were photon deprived such as the liver. Conclusion: We demonstrated that patient
MO-DE-207A-12: Toward Patient-Specific 4DCT Reconstruction Using Adaptive Velocity Binning
Energy Technology Data Exchange (ETDEWEB)
Morris, E.D.; Glide-Hurst, C. [Henry Ford Health System, Detroit, MI (United States); Wayne State University, Detroit, MI (United States); Klahr, P. [Philips Healthcare, Cleveland, Ohio (United States)
2016-06-15
Purpose: While 4DCT provides organ/tumor motion information, it often samples data over 10–20 breathing cycles. For patients presenting with compromised pulmonary function, breathing patterns can change over the acquisition time, potentially leading to tumor delineation discrepancies. This work introduces a novel adaptive velocity-modulated binning (AVB) 4DCT algorithm that modulates the reconstruction based on the respiratory waveform, yielding a patient-specific 4DCT solution. Methods: AVB was implemented in a research reconstruction configuration. After filtering the respiratory waveform, the algorithm examines neighboring data to a phase reconstruction point and the temporal gate is widened until the difference between the reconstruction point and waveform exceeds a threshold value—defined as percent difference between maximum/minimum waveform amplitude. The algorithm only impacts reconstruction if the gate width exceeds a set minimum temporal width required for accurate reconstruction. A sensitivity experiment of threshold values (0.5, 1, 5, 10, and 12%) was conducted to examine the interplay between threshold, signal to noise ratio (SNR), and image sharpness for phantom and several patient 4DCT cases using ten-phase reconstructions. Individual phase reconstructions were examined. Subtraction images and regions of interest were compared to quantify changes in SNR. Results: AVB increased signal in reconstructed 4DCT slices for respiratory waveforms that met the prescribed criteria. For the end-exhale phases, where the respiratory velocity is low, patient data revealed a threshold of 0.5% demonstrated increased SNR in the AVB reconstructions. For intermediate breathing phases, threshold values were required to be >10% to notice appreciable changes in CT intensity with AVB. AVB reconstructions exhibited appreciably higher SNR and reduced noise in regions of interest that were photon deprived such as the liver. Conclusion: We demonstrated that patient
Energy Technology Data Exchange (ETDEWEB)
Mory, Cyril, E-mail: cyril.mory@philips.com [Université de Lyon, CREATIS, CNRS UMR5220, Inserm U1044, INSA-Lyon, Université Lyon 1, F-69621 Villeurbanne Cedex (France); Philips Research Medisys, 33 rue de Verdun, 92156 Suresnes (France); Auvray, Vincent; Zhang, Bo [Philips Research Medisys, 33 rue de Verdun, 92156 Suresnes (France); Grass, Michael; Schäfer, Dirk [Philips Research, Röntgenstrasse 24–26, D-22335 Hamburg (Germany); Chen, S. James; Carroll, John D. [Department of Medicine, Division of Cardiology, University of Colorado Denver, 12605 East 16th Avenue, Aurora, Colorado 80045 (United States); Rit, Simon [Université de Lyon, CREATIS, CNRS UMR5220, Inserm U1044, INSA-Lyon, Université Lyon 1 (France); Centre Léon Bérard, 28 rue Laënnec, F-69373 Lyon (France); Peyrin, Françoise [Université de Lyon, CREATIS, CNRS UMR5220, Inserm U1044, INSA-Lyon, Université Lyon 1, F-69621 Villeurbanne Cedex (France); X-ray Imaging Group, European Synchrotron, Radiation Facility, BP 220, F-38043 Grenoble Cedex (France); Douek, Philippe; Boussel, Loïc [Université de Lyon, CREATIS, CNRS UMR5220, Inserm U1044, INSA-Lyon, Université Lyon 1 (France); Hospices Civils de Lyon, 28 Avenue du Doyen Jean Lépine, 69500 Bron (France)
2014-02-15
Purpose: Reconstruction of the beating heart in 3D + time in the catheter laboratory using only the available C-arm system would improve diagnosis, guidance, device sizing, and outcome control for intracardiac interventions, e.g., electrophysiology, valvular disease treatment, structural or congenital heart disease. To obtain such a reconstruction, the patient's electrocardiogram (ECG) must be recorded during the acquisition and used in the reconstruction. In this paper, the authors present a 4D reconstruction method aiming to reconstruct the heart from a single sweep 10 s acquisition. Methods: The authors introduce the 4D RecOnstructiOn using Spatial and TEmporal Regularization (short 4D ROOSTER) method, which reconstructs all cardiac phases at once, as a 3D + time volume. The algorithm alternates between a reconstruction step based on conjugate gradient and four regularization steps: enforcing positivity, averaging along time outside a motion mask that contains the heart and vessels, 3D spatial total variation minimization, and 1D temporal total variation minimization. Results: 4D ROOSTER recovers the different temporal representations of a moving Shepp and Logan phantom, and outperforms both ECG-gated simultaneous algebraic reconstruction technique and prior image constrained compressed sensing on a clinical case. It generates 3D + time reconstructions with sharp edges which can be used, for example, to estimate the patient's left ventricular ejection fraction. Conclusions: 4D ROOSTER can be applied for human cardiac C-arm CT, and potentially in other dynamic tomography areas. It can easily be adapted to other problems as regularization is decoupled from projection and back projection.
International Nuclear Information System (INIS)
Mory, Cyril; Auvray, Vincent; Zhang, Bo; Grass, Michael; Schäfer, Dirk; Chen, S. James; Carroll, John D.; Rit, Simon; Peyrin, Françoise; Douek, Philippe; Boussel, Loïc
2014-01-01
Purpose: Reconstruction of the beating heart in 3D + time in the catheter laboratory using only the available C-arm system would improve diagnosis, guidance, device sizing, and outcome control for intracardiac interventions, e.g., electrophysiology, valvular disease treatment, structural or congenital heart disease. To obtain such a reconstruction, the patient's electrocardiogram (ECG) must be recorded during the acquisition and used in the reconstruction. In this paper, the authors present a 4D reconstruction method aiming to reconstruct the heart from a single sweep 10 s acquisition. Methods: The authors introduce the 4D RecOnstructiOn using Spatial and TEmporal Regularization (short 4D ROOSTER) method, which reconstructs all cardiac phases at once, as a 3D + time volume. The algorithm alternates between a reconstruction step based on conjugate gradient and four regularization steps: enforcing positivity, averaging along time outside a motion mask that contains the heart and vessels, 3D spatial total variation minimization, and 1D temporal total variation minimization. Results: 4D ROOSTER recovers the different temporal representations of a moving Shepp and Logan phantom, and outperforms both ECG-gated simultaneous algebraic reconstruction technique and prior image constrained compressed sensing on a clinical case. It generates 3D + time reconstructions with sharp edges which can be used, for example, to estimate the patient's left ventricular ejection fraction. Conclusions: 4D ROOSTER can be applied for human cardiac C-arm CT, and potentially in other dynamic tomography areas. It can easily be adapted to other problems as regularization is decoupled from projection and back projection
International Nuclear Information System (INIS)
Han, Zhaolong; Li, Jiasong; Singh, Manmohan; Wu, Chen; Liu, Chih-hao; Wang, Shang; Idugboe, Rita; Raghunathan, Raksha; Sudheendran, Narendran; Larin, Kirill V; Aglyamov, Salavat R; Twa, Michael D
2015-01-01
We present a systematic analysis of the accuracy of five different methods for extracting the biomechanical properties of soft samples using optical coherence elastography (OCE). OCE is an emerging noninvasive technique, which allows assessment of biomechanical properties of tissues with micrometer spatial resolution. However, in order to accurately extract biomechanical properties from OCE measurements, application of a proper mechanical model is required. In this study, we utilize tissue-mimicking phantoms with controlled elastic properties and investigate the feasibilities of four available methods for reconstructing elasticity (Young’s modulus) based on OCE measurements of an air-pulse induced elastic wave. The approaches are based on the shear wave equation (SWE), the surface wave equation (SuWE), Rayleigh-Lamb frequency equation (RLFE), and finite element method (FEM), Elasticity values were compared with uniaxial mechanical testing. The results show that the RLFE and the FEM are more robust in quantitatively assessing elasticity than the other simplified models. This study provides a foundation and reference for reconstructing the biomechanical properties of tissues from OCE data, which is important for the further development of noninvasive elastography methods. (paper)
a Method of 3d Measurement and Reconstruction for Cultural Relics in Museums
Zheng, S.; Zhou, Y.; Huang, R.; Zhou, L.; Xu, X.; Wang, C.
2012-07-01
Three-dimensional measurement and reconstruction during conservation and restoration of cultural relics have become an essential part of a modem museum regular work. Although many kinds of methods including laser scanning, computer vision and close-range photogrammetry have been put forward, but problems still exist, such as contradiction between cost and good result, time and fine effect. Aimed at these problems, this paper proposed a structure-light based method for 3D measurement and reconstruction of cultural relics in museums. Firstly, based on structure-light principle, digitalization hardware has been built and with its help, dense point cloud of cultural relics' surface can be easily acquired. To produce accurate 3D geometry model from point cloud data, multi processing algorithms have been developed and corresponding software has been implemented whose functions include blunder detection and removal, point cloud alignment and merge, 3D mesh construction and simplification. Finally, high-resolution images are captured and the alignment of these images and 3D geometry model is conducted and realistic, accurate 3D model is constructed. Based on such method, a complete system including hardware and software are built. Multi-kinds of cultural relics have been used to test this method and results prove its own feature such as high efficiency, high accuracy, easy operation and so on.
International Nuclear Information System (INIS)
Zhao, Weizhao; Ginsberg, M.; Young, T.Y.
1993-01-01
Quantitative autoradiography is a powerful radio-isotopic-imaging method for neuroscientists to study local cerebral blood flow and glucose-metabolic rate at rest, in response to physiologic activation of the visual, auditory, somatosensory, and motor systems, and in pathologic conditions. Most autoradiographic studies analyze glucose utilization and blood flow in two-dimensional (2-D) coronal sections. With modern digital computer and image-processing techniques, a large number of closely spaced coronal sections can be stacked appropriately to form a three-dimensional (3-d) image. 3-D autoradiography allows investigators to observe cerebral sections and surfaces from any viewing angle. A fundamental problem in 3-D reconstruction is the alignment (registration) of the coronal sections. A new alignment method based on disparity analysis is presented which can overcome many of the difficulties encountered by previous methods. The disparity analysis method can deal with asymmetric, damaged, or tilted coronal sections under the same general framework, and it can be used to match coronal sections of different sizes and shapes. Experimental results on alignment and 3-D reconstruction are presented
Jha, Abhinav K; Song, Na; Caffo, Brian; Frey, Eric C
2015-04-13
Quantitative single-photon emission computed tomography (SPECT) imaging is emerging as an important tool in clinical studies and biomedical research. There is thus a need for optimization and evaluation of systems and algorithms that are being developed for quantitative SPECT imaging. An appropriate objective method to evaluate these systems is by comparing their performance in the end task that is required in quantitative SPECT imaging, such as estimating the mean activity concentration in a volume of interest (VOI) in a patient image. This objective evaluation can be performed if the true value of the estimated parameter is known, i.e. we have a gold standard. However, very rarely is this gold standard known in human studies. Thus, no-gold-standard techniques to optimize and evaluate systems and algorithms in the absence of gold standard are required. In this work, we developed a no-gold-standard technique to objectively evaluate reconstruction methods used in quantitative SPECT when the parameter to be estimated is the mean activity concentration in a VOI. We studied the performance of the technique with realistic simulated image data generated from an object database consisting of five phantom anatomies with all possible combinations of five sets of organ uptakes, where each anatomy consisted of eight different organ VOIs. Results indicate that the method provided accurate ranking of the reconstruction methods. We also demonstrated the application of consistency checks to test the no-gold-standard output.
Implementation of a fast running full core pin power reconstruction method in DYN3D
International Nuclear Information System (INIS)
Gomez-Torres, Armando Miguel; Sanchez-Espinoza, Victor Hugo; Kliem, Sören; Gommlich, Andre
2014-01-01
Highlights: • New pin power reconstruction (PPR) method for the nodal diffusion code DYN3D. • Flexible PPR method applicable to a single, a group or to all fuel assemblies (square, hex). • Combination of nodal with pin-wise solutions (non-conform geometry). • PPR capabilities shown for REA of a Minicore (REA) PWR whole core. - Abstract: This paper presents a substantial extension of the pin power reconstruction (PPR) method used in the reactor dynamics code DYN3D with the aim to better describe the heterogeneity within the fuel assembly during reactor simulations. The flexibility of the new implemented PPR permits the local spatial refinement of one fuel assembly, of a cluster of fuel assemblies, of a quarter or eight of a core or even of a whole core. The application of PPR in core regions of interest will pave the way for the coupling with sub-channel codes enabling the prediction of local safety parameters. One of the main advantages of considering regions and not only a hot fuel assembly (FA) is the fact that the cross flow within this region can be taken into account by the subchannel code. The implementation of the new PPR method has been tested analysing a rod ejection accident (REA) in a PWR minicore consisting of 3 × 3 FA. Finally, the new capabilities of DNY3D are demonstrated by the analysing a boron dilution transient in a PWR MOX core and the pin power of a VVER-1000 reactor at stationary conditions
Implementation of a fast running full core pin power reconstruction method in DYN3D
Energy Technology Data Exchange (ETDEWEB)
Gomez-Torres, Armando Miguel [Instituto Nacional de Investigaciones Nucleares, Department of Nuclear Systems, Carretera Mexico – Toluca s/n, La Marquesa, 52750 Ocoyoacac (Mexico); Sanchez-Espinoza, Victor Hugo, E-mail: victor.sanchez@kit.edu [Karlsruhe Institute of Technology, Institute for Neutron Physics and Reactor Technology, Hermann-vom-Helmhotz-Platz 1, D-76344 Eggenstein-Leopoldshafen (Germany); Kliem, Sören; Gommlich, Andre [Helmholtz-Zentrum Dresden-Rossendorf, Bautzner Landstraße 400, 01328 Dresden (Germany)
2014-07-01
Highlights: • New pin power reconstruction (PPR) method for the nodal diffusion code DYN3D. • Flexible PPR method applicable to a single, a group or to all fuel assemblies (square, hex). • Combination of nodal with pin-wise solutions (non-conform geometry). • PPR capabilities shown for REA of a Minicore (REA) PWR whole core. - Abstract: This paper presents a substantial extension of the pin power reconstruction (PPR) method used in the reactor dynamics code DYN3D with the aim to better describe the heterogeneity within the fuel assembly during reactor simulations. The flexibility of the new implemented PPR permits the local spatial refinement of one fuel assembly, of a cluster of fuel assemblies, of a quarter or eight of a core or even of a whole core. The application of PPR in core regions of interest will pave the way for the coupling with sub-channel codes enabling the prediction of local safety parameters. One of the main advantages of considering regions and not only a hot fuel assembly (FA) is the fact that the cross flow within this region can be taken into account by the subchannel code. The implementation of the new PPR method has been tested analysing a rod ejection accident (REA) in a PWR minicore consisting of 3 × 3 FA. Finally, the new capabilities of DNY3D are demonstrated by the analysing a boron dilution transient in a PWR MOX core and the pin power of a VVER-1000 reactor at stationary conditions.
Two-dimensional semi-analytic nodal method for multigroup pin power reconstruction
International Nuclear Information System (INIS)
Seung Gyou, Baek; Han Gyu, Joo; Un Chul, Lee
2007-01-01
A pin power reconstruction method applicable to multigroup problems involving square fuel assemblies is presented. The method is based on a two-dimensional semi-analytic nodal solution which consists of eight exponential terms and 13 polynomial terms. The 13 polynomial terms represent the particular solution obtained under the condition of a 2-dimensional 13 term source expansion. In order to achieve better approximation of the source distribution, the least square fitting method is employed. The 8 exponential terms represent a part of the analytically obtained homogeneous solution and the 8 coefficients are determined by imposing constraints on the 4 surface average currents and 4 corner point fluxes. The surface average currents determined from a transverse-integrated nodal solution are used directly whereas the corner point fluxes are determined during the course of the reconstruction by employing an iterative scheme that would realize the corner point balance condition. The outgoing current based corner point flux determination scheme is newly introduced. The accuracy of the proposed method is demonstrated with the L336C5 benchmark problem. (authors)
International Nuclear Information System (INIS)
1980-01-01
An apparatus is described which can be used in computerized tomographic systems for constructing a representation of an object and which uses a fan-shaped beam source, detectors and a convolution method of data reconstruction. (U.K.)
Elastic reflection waveform inversion with variable density
Li, Yuanyuan; Li, Zhenchun; Alkhalifah, Tariq Ali; Guo, Qiang
2017-01-01
Elastic full waveform inversion (FWI) provides a better description of the subsurface than those given by the acoustic assumption. However it suffers from a more serious cycle skipping problem compared with the latter. Reflection waveform inversion
Energy Technology Data Exchange (ETDEWEB)
Murphy, Martin J; Todor, Dorin A [Department of Radiation Oncology, Virginia Commonwealth University, Richmond VA 23298 (United States)
2005-06-07
By monitoring brachytherapy seed placement and determining the actual configuration of the seeds in vivo, one can optimize the treatment plan during the process of implantation. Two or more radiographic images from different viewpoints can in principle allow one to reconstruct the configuration of implanted seeds uniquely. However, the reconstruction problem is complicated by several factors: (1) the seeds can overlap and cluster in the images; (2) the images can have distortion that varies with viewpoint when a C-arm fluoroscope is used; (3) there can be uncertainty in the imaging viewpoints; (4) the angular separation of the imaging viewpoints can be small owing to physical space constraints; (5) there can be inconsistency in the number of seeds detected in the images; and (6) the patient can move while being imaged. We propose and conceptually demonstrate a novel reconstruction method that handles all of these complications and uncertainties in a unified process. The method represents the three-dimensional seed and camera configurations as parametrized models that are adjusted iteratively to conform to the observed radiographic images. The morphed model seed configuration that best reproduces the appearance of the seeds in the radiographs is the best estimate of the actual seed configuration. All of the information needed to establish both the seed configuration and the camera model is derived from the seed images without resort to external calibration fixtures. Furthermore, by comparing overall image content rather than individual seed coordinates, the process avoids the need to establish correspondence between seed identities in the several images. The method has been shown to work robustly in simulation tests that simultaneously allow for unknown individual seed positions, uncertainties in the imaging viewpoints and variable image distortion.
Analytical method for reconstruction pin to pin of the nuclear power density distribution
Energy Technology Data Exchange (ETDEWEB)
Pessoa, Paulo O.; Silva, Fernando C.; Martinez, Aquilino S., E-mail: ppessoa@con.ufrj.br, E-mail: fernando@con.ufrj.br, E-mail: aquilino@imp.ufrj.br [Coordenacao dos Programas de Pos-Graduacao em Engenharia (COPPE/UFRJ), Rio de Janeiro, RJ (Brazil)
2013-07-01
An accurate and efficient method for reconstructing pin to pin of the nuclear power density distribution, involving the analytical solution of the diffusion equation for two-dimensional neutron energy groups in homogeneous nodes, is presented. The boundary conditions used for analytic as solution are the four currents or fluxes on the surface of the node, which are obtained by Nodal Expansion Method (known as NEM) and four fluxes at the vertices of a node calculated using the finite difference method. The analytical solution found is the homogeneous distribution of neutron flux. Detailed distributions pin to pin inside a fuel assembly are estimated by the product of homogeneous flux distribution by local heterogeneous form function. Furthermore, the form functions of flux and power are used. The results obtained with this method have a good accuracy when compared with reference values. (author)
Analytical method for reconstruction pin to pin of the nuclear power density distribution
International Nuclear Information System (INIS)
Pessoa, Paulo O.; Silva, Fernando C.; Martinez, Aquilino S.
2013-01-01
An accurate and efficient method for reconstructing pin to pin of the nuclear power density distribution, involving the analytical solution of the diffusion equation for two-dimensional neutron energy groups in homogeneous nodes, is presented. The boundary conditions used for analytic as solution are the four currents or fluxes on the surface of the node, which are obtained by Nodal Expansion Method (known as NEM) and four fluxes at the vertices of a node calculated using the finite difference method. The analytical solution found is the homogeneous distribution of neutron flux. Detailed distributions pin to pin inside a fuel assembly are estimated by the product of homogeneous flux distribution by local heterogeneous form function. Furthermore, the form functions of flux and power are used. The results obtained with this method have a good accuracy when compared with reference values. (author)
A Dictionary Learning Method with Total Generalized Variation for MRI Reconstruction.
Lu, Hongyang; Wei, Jingbo; Liu, Qiegen; Wang, Yuhao; Deng, Xiaohua
2016-01-01
Reconstructing images from their noisy and incomplete measurements is always a challenge especially for medical MR image with important details and features. This work proposes a novel dictionary learning model that integrates two sparse regularization methods: the total generalized variation (TGV) approach and adaptive dictionary learning (DL). In the proposed method, the TGV selectively regularizes different image regions at different levels to avoid oil painting artifacts largely. At the same time, the dictionary learning adaptively represents the image features sparsely and effectively recovers details of images. The proposed model is solved by variable splitting technique and the alternating direction method of multiplier. Extensive simulation experimental results demonstrate that the proposed method consistently recovers MR images efficiently and outperforms the current state-of-the-art approaches in terms of higher PSNR and lower HFEN values.
A Dictionary Learning Method with Total Generalized Variation for MRI Reconstruction
Directory of Open Access Journals (Sweden)
Hongyang Lu
2016-01-01
Full Text Available Reconstructing images from their noisy and incomplete measurements is always a challenge especially for medical MR image with important details and features. This work proposes a novel dictionary learning model that integrates two sparse regularization methods: the total generalized variation (TGV approach and adaptive dictionary learning (DL. In the proposed method, the TGV selectively regularizes different image regions at different levels to avoid oil painting artifacts largely. At the same time, the dictionary learning adaptively represents the image features sparsely and effectively recovers details of images. The proposed model is solved by variable splitting technique and the alternating direction method of multiplier. Extensive simulation experimental results demonstrate that the proposed method consistently recovers MR images efficiently and outperforms the current state-of-the-art approaches in terms of higher PSNR and lower HFEN values.
Novel Direction Of Arrival Estimation Method Based on Coherent Accumulation Matrix Reconstruction
Directory of Open Access Journals (Sweden)
Li Lei
2015-04-01
Full Text Available Based on coherent accumulation matrix reconstruction, a novel Direction Of Arrival (DOA estimation decorrelation method of coherent signals is proposed using a small sample. First, the Signal to Noise Ratio (SNR is improved by performing coherent accumulation operation on an array of observed data. Then, according to the structure characteristics of the accumulated snapshot vector, the equivalent covariance matrix, whose rank is the same as the number of array elements, is constructed. The rank of this matrix is proved to be determined just by the number of incident signals, which realize the decorrelation of coherent signals. Compared with spatial smoothing method, the proposed method performs better by effectively avoiding aperture loss with high-resolution characteristics and low computational complexity. Simulation results demonstrate the efficiency of the proposed method.
Energy Technology Data Exchange (ETDEWEB)
Kuhnert, Georg; Sterzer, Sergej; Kahraman, Deniz; Dietlein, Markus; Drzezga, Alexander; Kobe, Carsten [University Hospital of Cologne, Department of Nuclear Medicine, Cologne (Germany); Boellaard, Ronald [VU University Medical Centre, Department of Radiology and Nuclear Medicine, Amsterdam (Netherlands); Scheffler, Matthias; Wolf, Juergen [University Hospital of Cologne, Lung Cancer Group Cologne, Department I of Internal Medicine, Center for Integrated Oncology Cologne Bonn, Cologne (Germany)
2016-02-15
In oncological imaging using PET/CT, the standardized uptake value has become the most common parameter used to measure tracer accumulation. The aim of this analysis was to evaluate ultra high definition (UHD) and ordered subset expectation maximization (OSEM) PET/CT reconstructions for their potential impact on quantification. We analyzed 40 PET/CT scans of lung cancer patients who had undergone PET/CT. Standardized uptake values corrected for body weight (SUV) and lean body mass (SUL) were determined in the single hottest lesion in the lung and normalized to the liver for UHD and OSEM reconstruction. Quantitative uptake values and their normalized ratios for the two reconstruction settings were compared using the Wilcoxon test. The distribution of quantitative uptake values and their ratios in relation to the reconstruction method used were demonstrated in the form of frequency distribution curves, box-plots and scatter plots. The agreement between OSEM and UHD reconstructions was assessed through Bland-Altman analysis. A significant difference was observed after OSEM and UHD reconstruction for SUV and SUL data tested (p < 0.0005 in all cases). The mean values of the ratios after OSEM and UHD reconstruction showed equally significant differences (p < 0.0005 in all cases). Bland-Altman analysis showed that the SUV and SUL and their normalized values were, on average, up to 60 % higher after UHD reconstruction as compared to OSEM reconstruction. OSEM and HD reconstruction brought a significant difference for SUV and SUL, which remained constantly high after normalization to the liver, indicating that standardization of reconstruction and the use of comparable SUV measurements are crucial when using PET/CT. (orig.)
International Nuclear Information System (INIS)
Rezac, K.; Klir, D.; Kubes, P.; Kravarik, J.
2009-01-01
We present the reconstruction of neutron energy spectra from time-of-flight signals. This technique is useful in experiments with the time of neutron production in the range of about tens or hundreds of nanoseconds. The neutron signals were obtained by a common hard X-ray and neutron fast plastic scintillation detectors. The reconstruction is based on the Monte Carlo method which has been improved by simultaneous usage of neutron detectors placed on two opposite sides from the neutron source. Although the reconstruction from detectors placed on two opposite sides is more difficult and a little bit inaccurate (it followed from several presumptions during the inclusion of both sides of detection), there are some advantages. The most important advantage is smaller influence of scattered neutrons on the reconstruction. Finally, we describe the estimation of the error of this reconstruction.
Methods of reconstruction of perineal wounds after abdominoperineal resection. Literature review
Directory of Open Access Journals (Sweden)
S. S. Gordeev
2017-01-01
Full Text Available The problem of wound closure after abdominoperineal resection to treat oncological diseases remains unsolved. Formation of a primary suture in the perineal wound can lead to multiple postoperative complications: seroma, abscess, wound disruption with subsequent perineal hernia. Chemoradiation therapy as a standard for locally advanced rectal or anal cancer doesn’t improve results of treatment of perineal wounds and increases duration of their healing. Currently, surgeons have several reconstructive and plastic techniques to improve both direct and long-term functional treatment results. In the article, the most common methods of allo- and autotransplantation are considered, benefits and deficiencies of various techniques are evaluated and analyzed.
CUDA based Level Set Method for 3D Reconstruction of Fishes from Large Acoustic Data
DEFF Research Database (Denmark)
Sharma, Ojaswa; Anton, François
2009-01-01
Acoustic images present views of underwater dynamics, even in high depths. With multi-beam echo sounders (SONARs), it is possible to capture series of 2D high resolution acoustic images. 3D reconstruction of the water column and subsequent estimation of fish abundance and fish species identificat...... of suppressing threshold and show its convergence as the evolution proceeds. We also present a GPU based streaming computation of the method using NVIDIA's CUDA framework to handle large volume data-sets. Our implementation is optimised for memory usage to handle large volumes....
A STUDY ON DYNAMIC LOAD HISTORY RECONSTRUCTION USING PSEUDO-INVERSE METHODS
Santos, Ariane Rebelato Silva dos; Marczak, Rogério José
2017-01-01
Considering that the vibratory forces generally cannot be measured directly at the interface of two bodies, an inverse method is studied in the present work to recover the load history in such cases. The proposed technique attempts to reconstruct the dynamic loads history by using a frequency domain analysis and Moore-Penrose pseudo-inverses of the frequency response function (FRF) of the system. The methodology consists in applying discrete dynamic loads on a finite element model in the time...
International Nuclear Information System (INIS)
Dong, Xiangyuan; Guo, Shuqing
2008-01-01
In this paper, a novel image reconstruction method for electrical capacitance tomography (ECT) based on the combined series and parallel model is presented. A regularization technique is used to obtain a stabilized solution of the inverse problem. Also, the adaptive coefficient of the combined model is deduced by numerical optimization. Simulation results indicate that it can produce higher quality images when compared to the algorithm based on the parallel or series models for the cases tested in this paper. It provides a new algorithm for ECT application
Tang, Cuong Q; Humphreys, Aelys M; Fontaneto, Diego; Barraclough, Timothy G; Paradis, Emmanuel
2014-10-01
Coalescent-based species delimitation methods combine population genetic and phylogenetic theory to provide an objective means for delineating evolutionarily significant units of diversity. The generalised mixed Yule coalescent (GMYC) and the Poisson tree process (PTP) are methods that use ultrametric (GMYC or PTP) or non-ultrametric (PTP) gene trees as input, intended for use mostly with single-locus data such as DNA barcodes. Here, we assess how robust the GMYC and PTP are to different phylogenetic reconstruction and branch smoothing methods. We reconstruct over 400 ultrametric trees using up to 30 different combinations of phylogenetic and smoothing methods and perform over 2000 separate species delimitation analyses across 16 empirical data sets. We then assess how variable diversity estimates are, in terms of richness and identity, with respect to species delimitation, phylogenetic and smoothing methods. The PTP method generally generates diversity estimates that are more robust to different phylogenetic methods. The GMYC is more sensitive, but provides consistent estimates for BEAST trees. The lower consistency of GMYC estimates is likely a result of differences among gene trees introduced by the smoothing step. Unresolved nodes (real anomalies or methodological artefacts) affect both GMYC and PTP estimates, but have a greater effect on GMYC estimates. Branch smoothing is a difficult step and perhaps an underappreciated source of bias that may be widespread among studies of diversity and diversification. Nevertheless, careful choice of phylogenetic method does produce equivalent PTP and GMYC diversity estimates. We recommend simultaneous use of the PTP model with any model-based gene tree (e.g. RAxML) and GMYC approaches with BEAST trees for obtaining species hypotheses.
Low dose dynamic CT myocardial perfusion imaging using a statistical iterative reconstruction method
Energy Technology Data Exchange (ETDEWEB)
Tao, Yinghua [Department of Medical Physics, University of Wisconsin-Madison, Madison, Wisconsin 53705 (United States); Chen, Guang-Hong [Department of Medical Physics and Department of Radiology, University of Wisconsin-Madison, Madison, Wisconsin 53705 (United States); Hacker, Timothy A.; Raval, Amish N. [Department of Medicine, University of Wisconsin-Madison, Madison, Wisconsin 53792 (United States); Van Lysel, Michael S.; Speidel, Michael A., E-mail: speidel@wisc.edu [Department of Medical Physics and Department of Medicine, University of Wisconsin-Madison, Madison, Wisconsin 53705 (United States)
2014-07-15
Purpose: Dynamic CT myocardial perfusion imaging has the potential to provide both functional and anatomical information regarding coronary artery stenosis. However, radiation dose can be potentially high due to repeated scanning of the same region. The purpose of this study is to investigate the use of statistical iterative reconstruction to improve parametric maps of myocardial perfusion derived from a low tube current dynamic CT acquisition. Methods: Four pigs underwent high (500 mA) and low (25 mA) dose dynamic CT myocardial perfusion scans with and without coronary occlusion. To delineate the affected myocardial territory, an N-13 ammonia PET perfusion scan was performed for each animal in each occlusion state. Filtered backprojection (FBP) reconstruction was first applied to all CT data sets. Then, a statistical iterative reconstruction (SIR) method was applied to data sets acquired at low dose. Image voxel noise was matched between the low dose SIR and high dose FBP reconstructions. CT perfusion maps were compared among the low dose FBP, low dose SIR and high dose FBP reconstructions. Numerical simulations of a dynamic CT scan at high and low dose (20:1 ratio) were performed to quantitatively evaluate SIR and FBP performance in terms of flow map accuracy, precision, dose efficiency, and spatial resolution. Results: Forin vivo studies, the 500 mA FBP maps gave −88.4%, −96.0%, −76.7%, and −65.8% flow change in the occluded anterior region compared to the open-coronary scans (four animals). The percent changes in the 25 mA SIR maps were in good agreement, measuring −94.7%, −81.6%, −84.0%, and −72.2%. The 25 mA FBP maps gave unreliable flow measurements due to streaks caused by photon starvation (percent changes of +137.4%, +71.0%, −11.8%, and −3.5%). Agreement between 25 mA SIR and 500 mA FBP global flow was −9.7%, 8.8%, −3.1%, and 26.4%. The average variability of flow measurements in a nonoccluded region was 16.3%, 24.1%, and 937
Low dose dynamic CT myocardial perfusion imaging using a statistical iterative reconstruction method
International Nuclear Information System (INIS)
Tao, Yinghua; Chen, Guang-Hong; Hacker, Timothy A.; Raval, Amish N.; Van Lysel, Michael S.; Speidel, Michael A.
2014-01-01
Purpose: Dynamic CT myocardial perfusion imaging has the potential to provide both functional and anatomical information regarding coronary artery stenosis. However, radiation dose can be potentially high due to repeated scanning of the same region. The purpose of this study is to investigate the use of statistical iterative reconstruction to improve parametric maps of myocardial perfusion derived from a low tube current dynamic CT acquisition. Methods: Four pigs underwent high (500 mA) and low (25 mA) dose dynamic CT myocardial perfusion scans with and without coronary occlusion. To delineate the affected myocardial territory, an N-13 ammonia PET perfusion scan was performed for each animal in each occlusion state. Filtered backprojection (FBP) reconstruction was first applied to all CT data sets. Then, a statistical iterative reconstruction (SIR) method was applied to data sets acquired at low dose. Image voxel noise was matched between the low dose SIR and high dose FBP reconstructions. CT perfusion maps were compared among the low dose FBP, low dose SIR and high dose FBP reconstructions. Numerical simulations of a dynamic CT scan at high and low dose (20:1 ratio) were performed to quantitatively evaluate SIR and FBP performance in terms of flow map accuracy, precision, dose efficiency, and spatial resolution. Results: Forin vivo studies, the 500 mA FBP maps gave −88.4%, −96.0%, −76.7%, and −65.8% flow change in the occluded anterior region compared to the open-coronary scans (four animals). The percent changes in the 25 mA SIR maps were in good agreement, measuring −94.7%, −81.6%, −84.0%, and −72.2%. The 25 mA FBP maps gave unreliable flow measurements due to streaks caused by photon starvation (percent changes of +137.4%, +71.0%, −11.8%, and −3.5%). Agreement between 25 mA SIR and 500 mA FBP global flow was −9.7%, 8.8%, −3.1%, and 26.4%. The average variability of flow measurements in a nonoccluded region was 16.3%, 24.1%, and 937
International Nuclear Information System (INIS)
Kobayashi, Fujio; Yamaguchi, Shoichiro
1982-01-01
A method of the reconstruction of computed tomographic images was proposed to reduce the exposure dose to X-ray. The method is the small number of X-ray projection method by accelerative gradient method. The procedures of computation are described. The algorithm of these procedures is simple, the convergence of the computation is fast, and the required memory capacity is small. Numerical simulation was carried out to conform the validity of this method. A sample of simple shape was considered, projection data were given, and the images were reconstructed from 6 views. Good results were obtained, and the method is considered to be useful. (Kato, T.)
Best waveform score for diagnosing keratoconus
Directory of Open Access Journals (Sweden)
Allan Luz
2013-12-01
Full Text Available PURPOSE: To test whether corneal hysteresis (CH and corneal resistance factor (CRF can discriminate between keratoconus and normal eyes and to evaluate whether the averages of two consecutive measurements perform differently from the one with the best waveform score (WS for diagnosing keratoconus. METHODS: ORA measurements for one eye per individual were selected randomly from 53 normal patients and from 27 patients with keratoconus. Two groups were considered the average (CH-Avg, CRF-Avg and best waveform score (CH-WS, CRF-WS groups. The Mann-Whitney U-test was used to evaluate whether the variables had similar distributions in the Normal and Keratoconus groups. Receiver operating characteristics (ROC curves were calculated for each parameter to assess the efficacy for diagnosing keratoconus and the same obtained for each variable were compared pairwise using the Hanley-McNeil test. RESULTS: The CH-Avg, CRF-Avg, CH-WS and CRF-WS differed significantly between the normal and keratoconus groups (p<0.001. The areas under the ROC curve (AUROC for CH-Avg, CRF-Avg, CH-WS, and CRF-WS were 0.824, 0.873, 0.891, and 0.931, respectively. CH-WS and CRF-WS had significantly better AUROCs than CH-Avg and CRF-Avg, respectively (p=0.001 and 0.002. CONCLUSION: The analysis of the biomechanical properties of the cornea through the ORA method has proved to be an important aid in the diagnosis of keratoconus, regardless of the method used. The best waveform score (WS measurements were superior to the average of consecutive ORA measurements for diagnosing keratoconus.
Plasma shape reconstruction of merging spherical tokamak based on modified CCS method
Ushiki, Tomohiko; Inomoto, Michiaki; Itagaki, Masafumi; McNamara, Steven
2017-10-01
The merging start-up method is the one of the CS-free start-up schemes that has the advantage of high plasma temperature and density because it involves reconnection heating and compression processes. In order to achieve optimal merging operations, the initial two STs should have identical plasma currents and shapes, and then move symmetrically toward the center of the device with appropriate velocity. Furthermore, from the viewpoint of the compression effect, controlling the plasma major radius is also important. To realize the active feedback control of the plasma currents, the positions, and the shapes of the two initial STs and to optimize the plasma parameters described above, accurate estimation of the plasma boundary shape is highly important. In the present work, the Modified-CCS method is demonstrated to reconstruct the plasma boundary shapes as well as the eddy current profiles in the UTST (The University of Tokyo) and ST40 device (Tokamak Energy Ltd). The present research results demonstrate the effectiveness of the M-CCS method in the reconstruction analyses of ST merging.
International Nuclear Information System (INIS)
Guedouar, R.; Zarrad, B.
2010-01-01
For algebraic reconstruction techniques both forward and back projection operators are needed. The ability to perform accurate reconstruction relies fundamentally on the forward projection and back projection methods which are usually, the transpose of each other. Even though the mis-matched pairs may introduce additional errors during the iterative process, the usefulness of mis-matched projector/back projector pairs has been proved in image reconstruction. This work investigates the performance of matched and mis-matched reconstruction pairs using popular forward projectors and their transposes when used in reconstruction tasks with additive simultaneous iterative reconstruction techniques (ASIRT) in a parallel beam approach. Simulated noiseless phantoms are used to compare the performance of the investigated pairs in terms of the root mean squared errors (RMSE) which are calculated between reconstructed slices and the reference in different regions. Results show that mis-matched projection/back projection pairs can promise more accuracy of reconstructed images than matched ones. The forward projection operator performance seems independent of the choice of the back projection operator and vice versa.
Energy Technology Data Exchange (ETDEWEB)
Guedouar, R., E-mail: raja_guedouar@yahoo.f [Higher School of Health Sciences and Techniques of Monastir, Av. Avicenne, 5060 Monastir, B.P. 128 (Tunisia); Zarrad, B., E-mail: boubakerzarrad@yahoo.f [Higher School of Health Sciences and Techniques of Monastir, Av. Avicenne, 5060 Monastir, B.P. 128 (Tunisia)
2010-07-21
For algebraic reconstruction techniques both forward and back projection operators are needed. The ability to perform accurate reconstruction relies fundamentally on the forward projection and back projection methods which are usually, the transpose of each other. Even though the mis-matched pairs may introduce additional errors during the iterative process, the usefulness of mis-matched projector/back projector pairs has been proved in image reconstruction. This work investigates the performance of matched and mis-matched reconstruction pairs using popular forward projectors and their transposes when used in reconstruction tasks with additive simultaneous iterative reconstruction techniques (ASIRT) in a parallel beam approach. Simulated noiseless phantoms are used to compare the performance of the investigated pairs in terms of the root mean squared errors (RMSE) which are calculated between reconstructed slices and the reference in different regions. Results show that mis-matched projection/back projection pairs can promise more accuracy of reconstructed images than matched ones. The forward projection operator performance seems independent of the choice of the back projection operator and vice versa.
Evaluation of the reconstruction method and effect of partial volume in brain scintiscanning
International Nuclear Information System (INIS)
Pinheiro, Monica Araujo
2016-01-01
Alzheimer's disease is a neurodegenerative disorder, on which occurs a progressive and irreversible destruction of neurons. According to the World Health Organization (WHO) 35.6 million people are living with dementia, being recommended that governments prioritize early diagnosis techniques. Laboratory and psychological tests for cognitive assessment are conducted and further complemented by neurological imaging from nuclear medicine exams in order to establish an accurate diagnosis. The image quality evaluation and reconstruction process effects are important tools in clinical routine. In the present work, these quality parameters were studied, and the effects of partial volume (PVE) for lesions of different sizes and geometries that are attributed to the limited resolution of the equipment. In dementia diagnosis, this effect can be confused with intake losses due to cerebral cortex atrophy. The evaluation was conducted by two phantoms of different shapes as suggested by (a) American College of Radiology (ACR) and (b) National Electrical Manufacturers Association (NEMA) for Contrast, Contrast-to-Noise Ratio (CNR) and Recovery Coefficient (RC) calculation versus lesions shape and size. Technetium-99m radionuclide was used in a local brain scintigraphy protocol, for proportions lesion to background of 2:1, 4:1, 6:1, 8:1 and 10:1. Fourteen reconstruction methods were used for each concentration applying different filters and algorithms. Before the analysis of all image properties, the conclusion is that the predominant effect is the partial volume, leading to errors of measurement of more than 80%. Furthermore, it was demonstrate that the most effective method of reconstruction is FBP with Metz filter, providing better contrast and contrast to noise ratio results. In addition, this method shows the best Recovery Coefficients correction for each lesion. The ACR phantom showed the best results assigned to a more precise reconstruction of a cylinder, which does not
3D ultrasound computer tomography: Hardware setup, reconstruction methods and first clinical results
Gemmeke, Hartmut; Hopp, Torsten; Zapf, Michael; Kaiser, Clemens; Ruiter, Nicole V.
2017-11-01
A promising candidate for improved imaging of breast cancer is ultrasound computer tomography (USCT). Current experimental USCT systems are still focused in elevation dimension resulting in a large slice thickness, limited depth of field, loss of out-of-plane reflections, and a large number of movement steps to acquire a stack of images. 3D USCT emitting and receiving spherical wave fronts overcomes these limitations. We built an optimized 3D USCT, realizing for the first time the full benefits of a 3D system. The point spread function could be shown to be nearly isotropic in 3D, to have very low spatial variability and fit the predicted values. The contrast of the phantom images is very satisfactory in spite of imaging with a sparse aperture. The resolution and imaged details of the reflectivity reconstruction are comparable to a 3 T MRI volume. Important for the obtained resolution are the simultaneously obtained results of the transmission tomography. The KIT 3D USCT was then tested in a pilot study on ten patients. The primary goals of the pilot study were to test the USCT device, the data acquisition protocols, the image reconstruction methods and the image fusion techniques in a clinical environment. The study was conducted successfully; the data acquisition could be carried out for all patients with an average imaging time of six minutes per breast. The reconstructions provide promising images. Overlaid volumes of the modalities show qualitative and quantitative information at a glance. This paper gives a summary of the involved techniques, methods, and first results.
An innovative method of planning and displaying flap volume in DIEP flap breast reconstructions.
Hummelink, S; Verhulst, Arico C; Maal, Thomas J J; Hoogeveen, Yvonne L; Schultze Kool, Leo J; Ulrich, Dietmar J O
2017-07-01
Determining the ideal volume of the harvested flap to achieve symmetry in deep inferior epigastric artery perforator (DIEP) flap breast reconstructions is complex. With preoperative imaging techniques such as 3D stereophotogrammetry and computed tomography angiography (CTA) available nowadays, we can combine information to preoperatively plan the optimal flap volume to be harvested. In this proof-of-concept, we investigated whether projection of a virtual flap planning onto the patient's abdomen using a projection method could result in harvesting the correct flap volume. In six patients (n = 9 breasts), 3D stereophotogrammetry and CTA data were combined from which a virtual flap planning was created comprising perforator locations, blood vessel trajectory and flap size. All projected perforators were verified with Doppler ultrasound. Intraoperative flap measurements were collected to validate the determined flap delineation volume. The measured breast volume using 3D stereophotogrammetry was 578 ± 127 cc; on CTA images, 527 ± 106 cc flap volumes were planned. The nine harvested flaps weighed 533 ± 109 g resulting in a planned versus harvested flap mean difference of 5 ± 27 g (flap density 1.0 g/ml). In 41 out of 42 projected perforator locations, a Doppler signal was audible. This proof-of-concept shows in small numbers that flap volumes can be included into a virtual DIEP flap planning, and transferring the virtual planning to the patient through a projection method results in harvesting approximately the same volume during surgery. In our opinion, this innovative approach is the first step in consequently achieving symmetric breast volumes in DIEP flap breast reconstructions. Copyright © 2017 British Association of Plastic, Reconstructive and Aesthetic Surgeons. Published by Elsevier Ltd. All rights reserved.
Sadleir, R J; Zhang, S U; Tucker, A S; Oh, Sungho
2008-08-01
Electrical impedance tomography (EIT) is particularly well-suited to applications where its portability, rapid acquisition speed and sensitivity give it a practical advantage over other monitoring or imaging systems. An EIT system's patient interface can potentially be adapted to match the target environment, and thereby increase its utility. It may thus be appropriate to use different electrode positions from those conventionally used in EIT in these cases. One application that may require this is the use of EIT on emergency medicine patients; in particular those who have suffered blunt abdominal trauma. In patients who have suffered major trauma, it is desirable to minimize the risk of spinal cord injury by avoiding lifting them. To adapt EIT to this requirement, we devised and evaluated a new electrode topology (the 'hemiarray') which comprises a set of eight electrodes placed only on the subject's anterior surface. Images were obtained using a two-dimensional sensitivity matrix and weighted singular value decomposition reconstruction. The hemiarray method's ability to quantify bleeding was evaluated by comparing its performance with conventional 2D reconstruction methods using data gathered from a saline phantom. We found that without applying corrections to reconstructed images it was possible to estimate blood volume in a two-dimensional hemiarray case with an uncertainty of around 27 ml. In an approximately 3D hemiarray case, volume prediction was possible with a maximum uncertainty of around 38 ml in the centre of the electrode plane. After application of a QI normalizing filter, average uncertainties in a two-dimensional hemiarray case were reduced to about 15 ml. Uncertainties in the approximate 3D case were reduced to about 30 ml.
Directory of Open Access Journals (Sweden)
Tokuhashi Yasuaki
2010-05-01
Full Text Available Abstract Background In the reconstruction of the affected limb in pediatric malignant bone tumors, since the loss of joint function affects limb-length discrepancy expected in the future, reconstruction methods that not only maximally preserve the joint function but also maintain good limb function are necessary. We analysis limb function of reconstruction methods by tumor location following resection of pediatric malignant bone tumors. Patients and methods We classified the tumors according to their location into 3 types by preoperative MRI, and evaluated reconstruction methods after wide resection, paying attention to whether the joint function could be preserved. The mean age of the patients was 10.6 years, Osteosarcoma was observed in 26 patients, Ewing's sarcoma in 3, and PNET(primitive neuroectodermal tumor and chondrosarcoma (grade 1 in 1 each. Results Type I were those located in the diaphysis, and reconstruction was performed using a vascularized fibular graft(vascularized fibular graft. Type 2 were those located in contact with the epiphyseal line or within 1 cm from this line, and VFG was performed in 1, and distraction osteogenesis in 1. Type III were those extending from the diaphysis to the epiphysis beyond the epiphyseal line, and a Growing Kotz was mainly used in 10 patients. The mean functional assessment score was the highest for Type I (96%: n = 4 according to the type and for VFG (99% according to the reconstruction method. Conclusion The final functional results were the most satisfactory for Types I and II according to tumor location. Biological reconstruction such as VFG and distraction osteogenesis without a prosthesis are so high score in the MSTS rating system. Therefore, considering the function of the affected limb, a limb reconstruction method allowing the maximal preservation of joint function should be selected after careful evaluation of the effects of chemotherapy and the location of the tumor.
Analysis of the methods and reconstruction of iodine doses from the Chernobyl release in Belarus
International Nuclear Information System (INIS)
Lutsko, A.; Krivoruchko, K.; Gribov, A.
1997-01-01
The paper considers the method of reconstructing the iodine-131 fallout based on the systematic exposure measurements. The measurements used were taken with standard DP-5 dosimeters at the monitoring sites of the State Hydrometeorological Service network. These data have been collected since the Chernobyl NPP accident. A short-living exponent has been deduced from the exposure dose dying away. Maps of the iodine-131 release in the period of May 1 - 31, 1996 have been constructed in the attempt to estimate the doses for the initial period of the accident. The paper also dwells on the intricacy of the problem and the refinements to be made for the dose commitments with allowance for a continuing release and meteorological changes that are comparable in time with the half-life events of iodine 131 decay. A comparative analysis has been made of various methods of the dose reconstruction. The results obtained are compared with the maps of the increased incidence of the thyroid gland cancer in adults and children in Belarus. (authors) 14 refs., 2 tabs., 4 figs
Three-dimensional reconstruction volume: a novel method for volume measurement in kidney cancer.
Durso, Timothy A; Carnell, Jonathan; Turk, Thomas T; Gupta, Gopal N
2014-06-01
The role of volumetric estimation is becoming increasingly important in the staging, management, and prognostication of benign and cancerous conditions of the kidney. We evaluated the use of three-dimensional reconstruction volume (3DV) in determining renal parenchymal volumes (RPV) and renal tumor volumes (RTV). We compared 3DV with the currently available methods of volume assessment and determined its interuser reliability. RPV and RTV were assessed in 28 patients who underwent robot-assisted laparoscopic partial nephrectomy for kidney cancer. Patients with a preoperative creatinine level of kidney pre- and postsurgery overestimated 3D reconstruction volumes by 15% to 102% and 12% to 101%, respectively. In addition, volumes obtained from 3DV displayed high interuser reliability regardless of experience. 3DV provides a highly reliable way of assessing kidney volumes. Given that 3DV takes into account visible anatomy, the differences observed using previously published methods can be attributed to the failure of geometry to accurately approximate kidney or tumor shape. 3DV provides a more accurate, reproducible, and clinically useful tool for urologists looking to improve patient care using analysis related to volume.
Study of reconstruction methods for a time projection chamber with GEM gas amplification system
Energy Technology Data Exchange (ETDEWEB)
Diener, R.
2006-12-15
A new e{sup +}e{sup -} linear collider with an energy range up to 1TeV is planned in an international collaboration: the International Linear Collider (ILC). This collider will be able to do precision measurements of the Higgs particle and of physics beyond the Standard Model. In the Large Detector Concept (LDC) - which is one proposal for a detector at the ILC - a Time Projection Chamber (TPC) is foreseen as the main tracking device. To meet the requirements on the resolution and to be able to work in the environment at the ILC, the application of new gas amplification technologies in the TPC is necessary. One option is an amplification system based on Gas Electron Multipliers (GEMs). Due to the - in comparison with older technologies - small spatial width of the signals, this technology poses new requirements on the readout structures and the reconstruction methods. In this work, the performance and the systematics of different reconstruction methods have been studied, based on data measured with a TPC prototype in high magnetic fields of up to 4T and data from a Monte Carlo simulation. The latest results of the achievable point resolution are presented and their limitations have been investigated. (orig.)
Community Phylogenetics: Assessing Tree Reconstruction Methods and the Utility of DNA Barcodes
Boyle, Elizabeth E.; Adamowicz, Sarah J.
2015-01-01
Studies examining phylogenetic community structure have become increasingly prevalent, yet little attention has been given to the influence of the input phylogeny on metrics that describe phylogenetic patterns of co-occurrence. Here, we examine the influence of branch length, tree reconstruction method, and amount of sequence data on measures of phylogenetic community structure, as well as the phylogenetic signal (Pagel’s λ) in morphological traits, using Trichoptera larval communities from Churchill, Manitoba, Canada. We find that model-based tree reconstruction methods and the use of a backbone family-level phylogeny improve estimations of phylogenetic community structure. In addition, trees built using the barcode region of cytochrome c oxidase subunit I (COI) alone accurately predict metrics of phylogenetic community structure obtained from a multi-gene phylogeny. Input tree did not alter overall conclusions drawn for phylogenetic signal, as significant phylogenetic structure was detected in two body size traits across input trees. As the discipline of community phylogenetics continues to expand, it is important to investigate the best approaches to accurately estimate patterns. Our results suggest that emerging large datasets of DNA barcode sequences provide a vast resource for studying the structure of biological communities. PMID:26110886
Study of reconstruction methods for a time projection chamber with GEM gas amplification system
International Nuclear Information System (INIS)
Diener, R.
2006-12-01
A new e + e - linear collider with an energy range up to 1TeV is planned in an international collaboration: the International Linear Collider (ILC). This collider will be able to do precision measurements of the Higgs particle and of physics beyond the Standard Model. In the Large Detector Concept (LDC) - which is one proposal for a detector at the ILC - a Time Projection Chamber (TPC) is foreseen as the main tracking device. To meet the requirements on the resolution and to be able to work in the environment at the ILC, the application of new gas amplification technologies in the TPC is necessary. One option is an amplification system based on Gas Electron Multipliers (GEMs). Due to the - in comparison with older technologies - small spatial width of the signals, this technology poses new requirements on the readout structures and the reconstruction methods. In this work, the performance and the systematics of different reconstruction methods have been studied, based on data measured with a TPC prototype in high magnetic fields of up to 4T and data from a Monte Carlo simulation. The latest results of the achievable point resolution are presented and their limitations have been investigated. (orig.)
International Nuclear Information System (INIS)
Corones, J.; Sun, Z.
1993-01-01
This paper extends the time domain wave splitting and invariant imbedding method to an inhomogeneous wave equation with a source term: u xx -u tt +A(x)u x =2D(x)i'(t). The direct scattering and inverse source problems of this equation are studied. Operators J ± that map the source function into the scattered waves at the edges of the slab are defined. A system of coupled nonlinear integrodifferential equations for these scattering operator kernels is obtained. The direct scattering problem is to obtain the scattering operator kernels J ± and R + when parameters A and D are given. The inverse problem is to simultaneously reconstruct A(x) and D(x) from the scattering operator kernels R + (0,t), 0≤t≤2 and J - (0,t), 0≤t≤1. Both numerical inversion algorithms and the small time approximate reconstruction method are presented. A Green's function technique is used to derive Green's operator kernel equations for the calculation of the internal field. It provides an alternative effective and fast way to compute the scattering kernels J ± . For constant A and D the Green's operator kernels and source scattering kernels are expressed in closed form. Several numerical examples are given
Akhmadaliev, S Z; Ambrosini, G; Amorim, A; Anderson, K; Andrieux, M L; Aubert, Bernard; Augé, E; Badaud, F; Baisin, L; Barreiro, F; Battistoni, G; Bazan, A; Bazizi, K; Belymam, A; Benchekroun, D; Berglund, S R; Berset, J C; Blanchot, G; Bogush, A A; Bohm, C; Boldea, V; Bonivento, W; Bosman, M; Bouhemaid, N; Breton, D; Brette, P; Bromberg, C; Budagov, Yu A; Burdin, S V; Calôba, L P; Camarena, F; Camin, D V; Canton, B; Caprini, M; Carvalho, J; Casado, M P; Castillo, M V; Cavalli, D; Cavalli-Sforza, M; Cavasinni, V; Chadelas, R; Chalifour, M; Chekhtman, A; Chevalley, J L; Chirikov-Zorin, I E; Chlachidze, G; Citterio, M; Cleland, W E; Clément, C; Cobal, M; Cogswell, F; Colas, Jacques; Collot, J; Cologna, S; Constantinescu, S; Costa, G; Costanzo, D; Crouau, M; Daudon, F; David, J; David, M; Davidek, T; Dawson, J; De, K; de La Taille, C; Del Peso, J; Del Prete, T; de Saintignon, P; Di Girolamo, B; Dinkespiler, B; Dita, S; Dodd, J; Dolejsi, J; Dolezal, Z; Downing, R; Dugne, J J; Dzahini, D; Efthymiopoulos, I; Errede, D; Errede, S; Evans, H; Eynard, G; Fassi, F; Fassnacht, P; Ferrari, A; Ferrer, A; Flaminio, Vincenzo; Fournier, D; Fumagalli, G; Gallas, E; Gaspar, M; Giakoumopoulou, V; Gianotti, F; Gildemeister, O; Giokaris, N; Glagolev, V; Glebov, V Yu; Gomes, A; González, V; González de la Hoz, S; Grabskii, V; Graugès-Pous, E; Grenier, P; Hakopian, H H; Haney, M; Hébrard, C; Henriques, A; Hervás, L; Higón, E; Holmgren, Sven Olof; Hostachy, J Y; Hoummada, A; Huston, J; Imbault, D; Ivanyushenkov, Yu M; Jézéquel, S; Johansson, E K; Jon-And, K; Jones, R; Juste, A; Kakurin, S; Karyukhin, A N; Khokhlov, Yu A; Khubua, J I; Klioukhine, V I; Kolachev, G M; Kopikov, S V; Kostrikov, M E; Kozlov, V; Krivkova, P; Kukhtin, V V; Kulagin, M; Kulchitskii, Yu A; Kuzmin, M V; Labarga, L; Laborie, G; Lacour, D; Laforge, B; Lami, S; Lapin, V; Le Dortz, O; Lefebvre, M; Le Flour, T; Leitner, R; Leltchouk, M; Li, J; Liablin, M V; Linossier, O; Lissauer, D; Lobkowicz, F; Lokajícek, M; Lomakin, Yu F; López-Amengual, J M; Lund-Jensen, B; Maio, A; Makowiecki, D S; Malyukov, S N; Mandelli, L; Mansoulié, B; Mapelli, Livio P; Marin, C P; Marrocchesi, P S; Marroquim, F; Martin, P; Maslennikov, A L; Massol, N; Mataix, L; Mazzanti, M; Mazzoni, E; Merritt, F S; Michel, B; Miller, R; Minashvili, I A; Miralles, L; Mnatzakanian, E A; Monnier, E; Montarou, G; Mornacchi, Giuseppe; Moynot, M; Muanza, G S; Nayman, P; Némécek, S; Nessi, Marzio; Nicoleau, S; Niculescu, M; Noppe, J M; Onofre, A; Pallin, D; Pantea, D; Paoletti, R; Park, I C; Parrour, G; Parsons, J; Pereira, A; Perini, L; Perlas, J A; Perrodo, P; Pilcher, J E; Pinhão, J; Plothow-Besch, Hartmute; Poggioli, Luc; Poirot, S; Price, L; Protopopov, Yu; Proudfoot, J; Puzo, P; Radeka, V; Rahm, David Charles; Reinmuth, G; Renzoni, G; Rescia, S; Resconi, S; Richards, R; Richer, J P; Roda, C; Rodier, S; Roldán, J; Romance, J B; Romanov, V; Romero, P; Rossel, F; Rusakovitch, N A; Sala, P; Sanchis, E; Sanders, H; Santoni, C; Santos, J; Sauvage, D; Sauvage, G; Sawyer, L; Says, L P; Schaffer, A C; Schwemling, P; Schwindling, J; Seguin-Moreau, N; Seidl, W; Seixas, J M; Selldén, B; Seman, M; Semenov, A; Serin, L; Shaldaev, E; Shochet, M J; Sidorov, V; Silva, J; Simaitis, V J; Simion, S; Sissakian, A N; Snopkov, R; Söderqvist, J; Solodkov, A A; Soloviev, A; Soloviev, I V; Sonderegger, P; Soustruznik, K; Spanó, F; Spiwoks, R; Stanek, R; Starchenko, E A; Stavina, P; Stephens, R; Suk, M; Surkov, A; Sykora, I; Takai, H; Tang, F; Tardell, S; Tartarelli, F; Tas, P; Teiger, J; Thaler, J; Thion, J; Tikhonov, Yu A; Tisserant, S; Tokar, S; Topilin, N D; Trka, Z; Turcotte, M; Valkár, S; Varanda, M J; Vartapetian, A H; Vazeille, F; Vichou, I; Vinogradov, V; Vorozhtsov, S B; Vuillemin, V; White, A; Wielers, M; Wingerter-Seez, I; Wolters, H; Yamdagni, N; Yosef, C; Zaitsev, A; Zitoun, R; Zolnierowski, Y
2002-01-01
This paper discusses hadron energy reconstruction for the ATLAS barrel prototype combined calorimeter (consisting of a lead-liquid argon electromagnetic part and an iron-scintillator hadronic part) in the framework of the nonparametrical method. The nonparametrical method utilizes only the known e/h ratios and the electron calibration constants and does not require the determination of any parameters by a minimization technique. Thus, this technique lends itself to an easy use in a first level trigger. The reconstructed mean values of the hadron energies are within +or-1% of the true values and the fractional energy resolution is [(58+or-3)%/ square root E+(2.5+or-0.3)%](+)(1.7+or-0.2)/E. The value of the e/h ratio obtained for the electromagnetic compartment of the combined calorimeter is 1.74+or-0.04 and agrees with the prediction that e/h >1.66 for this electromagnetic calorimeter. Results of a study of the longitudinal hadronic shower development are also presented. The data have been taken in the H8 beam...
Least-square NUFFT methods applied to 2-D and 3-D radially encoded MR image reconstruction.
Song, Jiayu; Liu, Yanhui; Gewalt, Sally L; Cofer, Gary; Johnson, G Allan; Liu, Qing Huo
2009-04-01
Radially encoded MRI has gained increasing attention due to its motion insensitivity and reduced artifacts. However, because its samples are collected nonuniformly in the k-space, multidimensional (especially 3-D) radially sampled MRI image reconstruction is challenging. The objective of this paper is to develop a reconstruction technique in high dimensions with on-the-fly kernel calculation. It implements general multidimensional nonuniform fast Fourier transform (NUFFT) algorithms and incorporates them into a k-space image reconstruction framework. The method is then applied to reconstruct from the radially encoded k-space data, although the method is applicable to any non-Cartesian patterns. Performance comparisons are made against the conventional Kaiser-Bessel (KB) gridding method for 2-D and 3-D radially encoded computer-simulated phantoms and physically scanned phantoms. The results show that the NUFFT reconstruction method has better accuracy-efficiency tradeoff than the KB gridding method when the kernel weights are calculated on the fly. It is found that for a particular conventional kernel function, using its corresponding deapodization function as a scaling factor in the NUFFT framework has the potential to improve accuracy. In particular, when a cosine scaling factor is used, the NUFFT method is faster than KB gridding method since a closed-form solution is available and is less computationally expensive than the KB kernel (KB griding requires computation of Bessel functions). The NUFFT method has been successfully applied to 2-D and 3-D in vivo studies on small animals.
Zhang, W W; Wang, H G; Shi, X J; Chen, M Y; Lu, S C
2016-09-01
To discuss the significance of three-dimensional reconstruction as a method of preoperative planning of laparoscopic radiofrequency ablation(LRFA). Thirty-two cases of LRFA admitted from January 2014 to December 2015 in Department of Hepatobiliary Surgery, Chinese People's Liberation Army General Hospital were analyzed(3D-LRFA group). Three-dimensional(3D) reconstruction were taken as a method of preoperative planning in 3D-LRFA group.Other 64 LRFA cases were paired over the same period without three-dimensional reconstruction before the operation (LRFA group). Hepatobiliary system contrast enhanced CT scan of 3D-RFA patients were taken by multi-slice spiral computed tomography(MSCT), and the DICOM data were processed by IQQA(®)-Liver and IQQA(®)-guide to make 3D reconstruction.Using 3D reconstruction model, diameter and scope of tumor were measured, suitable size (length and radiofrequency length) and number of RFA electrode were chosen, scope and effect of radiofrequency were simulated, reasonable needle track(s) was planed, position and angle of laparoscopic ultrasound (LUS) probe was designed and LUS image was simulated.Data of operation and recovery were collected and analyzed. Data between two sets of measurement data were compared with t test or rank sum test, and count data with χ(2) test or Fisher exact probability test.Tumor recurrence rate was analyzed with the Kaplan-Meier survival curve and Log-rank (Mantel-Cox) test. Compared with LRFA group ((216.8±66.2) minutes, (389.1±183.4) s), 3D-LRFA group ((173.3±59.4) minutes, (242.2±90.8) s) has shorter operation time(t=-3.138, P=0.002) and shorter mean puncture time(t=-2.340, P=0.021). There was no significant difference of blood loss(P=0.170), ablation rate (P=0.871) and incidence of complications(P=1.000). Compared with LRFA group ((6.3±3.9)days, (330±102)U/L, (167±64)ng/L), 3D-LRFA group ((4.3±3.1) days, (285±102) U/L, (139±43) ng/L) had shorter post-operative stay(t=-2.527, P=0.016), less
Breast EIT using a new projected image reconstruction method with multi-frequency measurements.
Lee, Eunjung; Ts, Munkh-Erdene; Seo, Jin Keun; Woo, Eung Je
2012-05-01
We propose a new method to produce admittivity images of the breast for the diagnosis of breast cancer using electrical impedance tomography(EIT). Considering the anatomical structure of the breast, we designed an electrode configuration where current-injection and voltage-sensing electrodes are separated in such a way that internal current pathways are approximately along the tangential direction of an array of voltage-sensing electrodes. Unlike conventional EIT imaging methods where the number of injected currents is maximized to increase the total amount of measured data, current is injected only twice between two pairs of current-injection electrodes attached along the circumferential side of the breast. For each current injection, the induced voltages are measured from the front surface of the breast using as many voltage-sensing electrodes as possible. Although this electrode configurational lows us to measure induced voltages only on the front surface of the breast,they are more sensitive to an anomaly inside the breast since such an injected current tends to produce a more uniform internal current density distribution. Furthermore, the sensitivity of a measured boundary voltage between two equipotential lines on the front surface of the breast is improved since those equipotential lines are perpendicular to the primary direction of internal current streamlines. One should note that this novel data collection method is different from those of other frontal plane techniques such as the x-ray projection and T-scan imaging methods because we do not get any data on the plane that is perpendicular to the current flow. To reconstruct admittivity images using two measured voltage data sets, a new projected image reconstruction algorithm is developed. Numerical simulations demonstrate the frequency-difference EIT imaging of the breast. The results show that the new method is promising to accurately detect and localize small anomalies inside the breast.
Breast EIT using a new projected image reconstruction method with multi-frequency measurements
International Nuclear Information System (INIS)
Lee, Eunjung; Ts, Munkh-Erdene; Seo, Jin Keun; Woo, Eung Je
2012-01-01
We propose a new method to produce admittivity images of the breast for the diagnosis of breast cancer using electrical impedance tomography (EIT). Considering the anatomical structure of the breast, we designed an electrode configuration where current-injection and voltage-sensing electrodes are separated in such a way that internal current pathways are approximately along the tangential direction of an array of voltage-sensing electrodes. Unlike conventional EIT imaging methods where the number of injected currents is maximized to increase the total amount of measured data, current is injected only twice between two pairs of current-injection electrodes attached along the circumferential side of the breast. For each current injection, the induced voltages are measured from the front surface of the breast using as many voltage-sensing electrodes as possible. Although this electrode configuration allows us to measure induced voltages only on the front surface of the breast, they are more sensitive to an anomaly inside the breast since such an injected current tends to produce a more uniform internal current density distribution. Furthermore, the sensitivity of a measured boundary voltage between two equipotential lines on the front surface of the breast is improved since those equipotential lines are perpendicular to the primary direction of internal current streamlines. One should note that this novel data collection method is different from those of other frontal plane techniques such as the x-ray projection and T-scan imaging methods because we do not get any data on the plane that is perpendicular to the current flow. To reconstruct admittivity images using two measured voltage data sets, a new projected image reconstruction algorithm is developed. Numerical simulations demonstrate the frequency-difference EIT imaging of the breast. The results show that the new method is promising to accurately detect and localize small anomalies inside the breast. (paper)
Reconstruction of thin electromagnetic inclusions by a level-set method
International Nuclear Information System (INIS)
Park, Won-Kwang; Lesselier, Dominique
2009-01-01
In this contribution, we consider a technique of electromagnetic imaging (at a single, non-zero frequency) which uses the level-set evolution method for reconstructing a thin inclusion (possibly made of disconnected parts) with either dielectric or magnetic contrast with respect to the embedding homogeneous medium. Emphasis is on the proof of the concept, the scattering problem at hand being so far based on a two-dimensional scalar model. To do so, two level-set functions are employed; the first one describes location and shape, and the other one describes connectivity and length. Speeds of evolution of the level-set functions are calculated via the introduction of Fréchet derivatives of a least-square cost functional. Several numerical experiments on noiseless and noisy data as well illustrate how the proposed method behaves
DEFF Research Database (Denmark)
Guarato, Francesco; Hallam, John
2010-01-01
Understanding and modeling bat biosonar behavior should take into account what the bat actually emitted while exploring the surrounding environment. Recording of the bat calls could be performed by means of a telemetry system small enough to sit on the bat head, though filtering due to bat...... directivity affects recordings and not all bat species are able to carry such a device. Instead, remote microphone recordings of the bat calls could be processed by means of a mathematical method that estimates bat head orientation as a first step before calculating the amplitudes of each call for each...... and discussed. A further improvement of the method is necessary as its performance for call reconstruction strongly depends on correct choice of the sample at which the recorded call is thought to start in each microphone data set....
Phase reconstruction by a multilevel iteratively regularized Gauss–Newton method
International Nuclear Information System (INIS)
Langemann, Dirk; Tasche, Manfred
2008-01-01
In this paper we consider the numerical solution of a phase retrieval problem for a compactly supported, linear spline f : R → C with the Fourier transform f-circumflex, where values of |f| and |f-circumflex| at finitely many equispaced nodes are given. The unknown phases of complex spline coefficients fulfil a well-structured system of nonlinear equations. Thus the phase reconstruction leads to a nonlinear inverse problem, which is solved by a multilevel strategy and iterative Tikhonov regularization. The multilevel strategy concentrates the main effort of the solution of the phase retrieval problem in the coarse, less expensive levels and provides convenient initial guesses at the next finer level. On each level, the corresponding nonlinear system is solved by an iteratively regularized Gauss–Newton method. The multilevel strategy is motivated by convergence results of IRGN. This method is applicable to a wide range of examples as shown in several numerical tests for noiseless and noisy data
He, Xiaowei; Liang, Jimin; Wang, Xiaorui; Yu, Jingjing; Qu, Xiaochao; Wang, Xiaodong; Hou, Yanbin; Chen, Duofang; Liu, Fang; Tian, Jie
2010-11-22
In this paper, we present an incomplete variables truncated conjugate gradient (IVTCG) method for bioluminescence tomography (BLT). Considering the sparse characteristic of the light source and insufficient surface measurement in the BLT scenarios, we combine a sparseness-inducing (ℓ1 norm) regularization term with a quadratic error term in the IVTCG-based framework for solving the inverse problem. By limiting the number of variables updated at each iterative and combining a variable splitting strategy to find the search direction more efficiently, it obtains fast and stable source reconstruction, even without a priori information of the permissible source region and multispectral measurements. Numerical experiments on a mouse atlas validate the effectiveness of the method. In vivo mouse experimental results further indicate its potential for a practical BLT system.
A. Goswami; P. L. Olson; L. A. Hinnov; A. Gnanadesikan
2015-01-01
We present a method for reconstructing global ocean bathymetry that uses a plate cooling model for the oceanic lithosphere, the age distribution of the oceanic crust, global oceanic sediment thicknesses, plus shelf-slope-rise structures calibrated at modern active and passive continental margins. Our motivation is to reconstruct realistic ocean bathymetry based on parameterized relationships of present-day variables that can be applied to global oceans in th...
Charge reconstruction in large-area photomultipliers
Grassi, M.; Montuschi, M.; Baldoncini, M.; Mantovani, F.; Ricci, B.; Andronico, G.; Antonelli, V.; Bellato, M.; Bernieri, E.; Brigatti, A.; Brugnera, R.; Budano, A.; Buscemi, M.; Bussino, S.; Caruso, R.; Chiesa, D.; Corti, D.; Dal Corso, F.; Ding, X. F.; Dusini, S.; Fabbri, A.; Fiorentini, G.; Ford, R.; Formozov, A.; Galet, G.; Garfagnini, A.; Giammarchi, M.; Giaz, A.; Insolia, A.; Isocrate, R.; Lippi, I.; Longhitano, F.; Lo Presti, D.; Lombardi, P.; Marini, F.; Mari, S. M.; Martellini, C.; Meroni, E.; Mezzetto, M.; Miramonti, L.; Monforte, S.; Nastasi, M.; Ortica, F.; Paoloni, A.; Parmeggiano, S.; Pedretti, D.; Pelliccia, N.; Pompilio, R.; Previtali, E.; Ranucci, G.; Re, A. C.; Romani, A.; Saggese, P.; Salamanna, G.; Sawy, F. H.; Settanta, G.; Sisti, M.; Sirignano, C.; Spinetti, M.; Stanco, L.; Strati, V.; Verde, G.; Votano, L.
2018-02-01
Large-area PhotoMultiplier Tubes (PMT) allow to efficiently instrument Liquid Scintillator (LS) neutrino detectors, where large target masses are pivotal to compensate for neutrinos' extremely elusive nature. Depending on the detector light yield, several scintillation photons stemming from the same neutrino interaction are likely to hit a single PMT in a few tens/hundreds of nanoseconds, resulting in several photoelectrons (PEs) to pile-up at the PMT anode. In such scenario, the signal generated by each PE is entangled to the others, and an accurate PMT charge reconstruction becomes challenging. This manuscript describes an experimental method able to address the PMT charge reconstruction in the case of large PE pile-up, providing an unbiased charge estimator at the permille level up to 15 detected PEs. The method is based on a signal filtering technique (Wiener filter) which suppresses the noise due to both PMT and readout electronics, and on a Fourier-based deconvolution able to minimize the influence of signal distortions—such as an overshoot. The analysis of simulated PMT waveforms shows that the slope of a linear regression modeling the relation between reconstructed and true charge values improves from 0.769 ± 0.001 (without deconvolution) to 0.989 ± 0.001 (with deconvolution), where unitary slope implies perfect reconstruction. A C++ implementation of the charge reconstruction algorithm is available online at [1].
MO-DE-207A-11: Sparse-View CT Reconstruction Via a Novel Non-Local Means Method
International Nuclear Information System (INIS)
Chen, Z; Qi, H; Wu, S; Xu, Y; Zhou, L
2016-01-01
Purpose: Sparse-view computed tomography (CT) reconstruction is an effective strategy to reduce the radiation dose delivered to patients. Due to its insufficiency of measurements, traditional non-local means (NLM) based reconstruction methods often lead to over-smoothness in image edges. To address this problem, an adaptive NLM reconstruction method based on rotational invariance (RIANLM) is proposed. Methods: The method consists of four steps: 1) Initializing parameters; 2) Algebraic reconstruction technique (ART) reconstruction using raw projection data; 3) Positivity constraint of the image reconstructed by ART; 4) Update reconstructed image by using RIANLM filtering. In RIANLM, a novel similarity metric that is rotational invariance is proposed and used to calculate the distance between two patches. In this way, any patch with similar structure but different orientation to the reference patch would win a relatively large weight to avoid over-smoothed image. Moreover, the parameter h in RIANLM which controls the decay of the weights is adaptive to avoid over-smoothness, while it in NLM is not adaptive during the whole reconstruction process. The proposed method is named as ART-RIANLM and validated on Shepp-Logan phantom and clinical projection data. Results: In our experiments, the searching neighborhood size is set to 15 by 15 and the similarity window is set to 3 by 3. For the simulated case with a resolution of 256 by 256 Shepp-Logan phantom, the ART-RIANLM produces higher SNR (35.38dB<24.00dB) and lower MAE (0.0006<0.0023) reconstructed image than ART-NLM. The visual inspection demonstrated that the proposed method could suppress artifacts or noises more effectively and preserve image edges better. Similar results were found for clinical data case. Conclusion: A novel ART-RIANLM method for sparse-view CT reconstruction is presented with superior image. Compared to the conventional ART-NLM method, the SNR and MAE from ART-RIANLM increases 47% and decreases 74
Closed-loop waveform control of boost inverter
DEFF Research Database (Denmark)
Zhu, Guo Rong; Xiao, Cheng Yuan; Wang, Haoran
2016-01-01
The input current of single-phase inverter typically has an AC ripple component at twice the output frequency, which causes a reduction in both the operating lifetime of its DC source and the efficiency of the system. In this paper, the closed-loop performance of a proposed waveform control method...... to eliminate such a ripple current in boost inverter is investigated. The small-signal stability and the dynamic characteristic of the inverter system for input voltage or wide range load variations under the closed-loop waveform control method are studied. It is validated that with the closedloop waveform...... control, not only was stability achieved, the reference voltage of the boost inverter capacitors can be instantaneously adjusted to match the new load, thereby achieving improved ripple mitigation for a wide load range. Furthermore, with the control and feedback mechanism, there is minimal level of ripple...
Analysis of Gradient Waveform in Magnetic Resonance Imaging
Directory of Open Access Journals (Sweden)
OU-YANG Shan-mei
2017-12-01
Full Text Available The accuracy of gradient pulse waveform affects image quality significantly in magnetic resonance imaging (MRI. Recording and analyzing the waveform of gradient pulse helps to make rapid and accurate diagnosis of spectrometer gradient hardware and/or pulse sequence. Using the virtual instrument software LabVIEW to control the high speed data acquisition card DAQ-2005, a multi-channel acquisition scheme was designed to collect the gradient outputs from a custom-made spectrometer. The collected waveforms were post-processed (i.e., histogram statistical analysis, data filtering and difference calculation to obtain feature points containing time and amplitude information. Experiments were carried out to validate the method, which is an auxiliary test method for the development of spectrometer and pulses sequence.
Jeong, Seung Jun; Hong, Chung Ki
2008-06-01
We present an effective method for the pixel-size-maintained reconstruction of images on arbitrarily tilted planes in digital holography. The method is based on the plane wave expansion of the diffraction wave fields and the three-axis rotation of the wave vectors. The images on the tilted planes are reconstructed without loss of the frequency contents of the hologram and have the same pixel sizes. Our method shows good results in the extreme cases of large tilting angles and in the region closer than the paraxial case. The effectiveness of the method is demonstrated by both simulation and experiment.
International Nuclear Information System (INIS)
Kaneko, Takeshi; Takagi, Masachika; Kato, Ryohei; Anno, Hirofumi; Kobayashi, Masanao; Yoshimi, Satoshi; Sanda, Yoshihiro; Katada, Kazuhiro
2012-01-01
The purpose of this study was to design and construct a phantom for using motion artifact in the electrocardiogram (ECG)-gated reconstruction image. In addition, the temporal resolution under various conditions was estimated. A stepping motor was used to move the phantom over an arc in a reciprocating manner. The program for controlling the stepping motor permitted the stationary period and the heart rate to be adjusted as desired. Images of the phantom were obtained using a 320-row area-detector computed tomography (ADCT) system under various conditions using the ECG-gated reconstruction method. For estimation, the reconstruction phase was continuously changed and the motion artifacts were quantitatively assessed. The temporal resolution was calculated from the number of motion-free images. Changes in the temporal resolution according to heart rate, rotation time, the number of reconstruction segments and acquisition position in z-axis were also investigated. The measured temporal resolution of ECG-gated half reconstruction is 180 ms, which is in good agreement with the nominal temporal resolution of 175 ms. The measured temporal resolution of ECG-gated segmental reconstruction is in good agreement with the nominal temporal resolution in most cases. The estimated temporal resolution improved to approach the nominal temporal resolution as the number of reconstruction segments was increased. Temporal resolution in changing acquisition position is equal. This study shows that we could design a new phantom for estimating temporal resolution. (author)
Experimental validation of waveform relaxation technique for power ...
Indian Academy of Sciences (India)
damping controller drawn our attention to a potential convergence problem which ... method was originally proposed as a method of parallelizing the numerical integration of very. Figure 2 ..... to it the features of an industrial real-time operating system. ..... Odeh F and Ruehli A 1985 Waveform relaxation: Theory and practice.
A method for reconstructing the development of the sapwood area of balsam fir.
Coyea, M R; Margolis, H A; Gagnon, R R
1990-09-01
Leaf area is commonly estimated as a function of sapwood area. However, because sapwood changes to heartwood over time, it has not previously been possible to reconstruct either the sapwood area or the leaf area of older trees into the past. In this study, we report a method for reconstructing the development of the sapwood area of dominant and codominant balsam fir (Abies balsamea (L.) Mill.). The technique is based on establishing a species-specific relationship between the number of annual growth rings in the sapwood area and tree age. Because the number of annual growth rings in the sapwood of balsam fir at a given age was found to be independent of site quality and stand density, the number of rings in sapwood (NRS) can be predicted from the age of a tree thus: NRS = 14.818 (1 - e(-0.031 age)), unweighted R(2) = 0.80, and NRS = 2.490 (1 - e(-0.038 age)), unweighted R(2) = 0.64, for measurements at breast height and at the base of the live crown, respectively. These nonlinear asymptotic regression models based only on age, were not improved by adding other tree variables such as diameter at breast height, diameter at the base of the live crown, total tree height or percent live crown.
International Nuclear Information System (INIS)
Viana, Rodrigo Sartorelo Salemi
2014-01-01
The NSECT (Neutron Stimulated Emission Computed Tomography) figures as a new spectrographic technique able to evaluate in vivo the concentration of elements using the inelastic scattering reaction (n,n'). Since its introduction, several improvements have been proposed with the aim of investigating applications for clinical diagnosis and reduction of absorbed dose associated with CT acquisition. In this context, two new diagnostic applications are presented using spectroscopic and tomographic approaches from NSECT. A new methodology has also been proposed to optimize the sinogram sampling that is directly related to the quality of the reconstruction by the irradiation protocol. The studies were developed based on simulations with MCNP5 code. Diagnosis of Renal Cell Carcinoma (RCC) and the detection of breast microcalcifications were evaluated in studies conducted using a human phantom. The obtained results demonstrate the ability of the NSECT technique to detect changes in the composition of the modeled tissues as a function of the development of evaluated pathologies. The proposed method for optimizing sinograms was able to analytically simulate the composition of the irradiated medium allowing the assessment of quality of reconstruction and effective dose in terms of the sampling rate. However, future research must be conducted to quantify the sensitivity of detection according to the selected elements. (author)
An Efficient Augmented Lagrangian Method for Statistical X-Ray CT Image Reconstruction.
Li, Jiaojiao; Niu, Shanzhou; Huang, Jing; Bian, Zhaoying; Feng, Qianjin; Yu, Gaohang; Liang, Zhengrong; Chen, Wufan; Ma, Jianhua
2015-01-01
Statistical iterative reconstruction (SIR) for X-ray computed tomography (CT) under the penalized weighted least-squares criteria can yield significant gains over conventional analytical reconstruction from the noisy measurement. However, due to the nonlinear expression of the objective function, most exiting algorithms related to the SIR unavoidably suffer from heavy computation load and slow convergence rate, especially when an edge-preserving or sparsity-based penalty or regularization is incorporated. In this work, to address abovementioned issues of the general algorithms related to the SIR, we propose an adaptive nonmonotone alternating direction algorithm in the framework of augmented Lagrangian multiplier method, which is termed as "ALM-ANAD". The algorithm effectively combines an alternating direction technique with an adaptive nonmonotone line search to minimize the augmented Lagrangian function at each iteration. To evaluate the present ALM-ANAD algorithm, both qualitative and quantitative studies were conducted by using digital and physical phantoms. Experimental results show that the present ALM-ANAD algorithm can achieve noticeable gains over the classical nonlinear conjugate gradient algorithm and state-of-the-art split Bregman algorithm in terms of noise reduction, contrast-to-noise ratio, convergence rate, and universal quality index metrics.
International Nuclear Information System (INIS)
Liu, Wenyang; Cheung, Yam; Sawant, Amit; Ruan, Dan
2016-01-01
Purpose: To develop a robust and real-time surface reconstruction method on point clouds captured from a 3D surface photogrammetry system. Methods: The authors have developed a robust and fast surface reconstruction method on point clouds acquired by the photogrammetry system, without explicitly solving the partial differential equation required by a typical variational approach. Taking advantage of the overcomplete nature of the acquired point clouds, their method solves and propagates a sparse linear relationship from the point cloud manifold to the surface manifold, assuming both manifolds share similar local geometry. With relatively consistent point cloud acquisitions, the authors propose a sparse regression (SR) model to directly approximate the target point cloud as a sparse linear combination from the training set, assuming that the point correspondences built by the iterative closest point (ICP) is reasonably accurate and have residual errors following a Gaussian distribution. To accommodate changing noise levels and/or presence of inconsistent occlusions during the acquisition, the authors further propose a modified sparse regression (MSR) model to model the potentially large and sparse error built by ICP with a Laplacian prior. The authors evaluated the proposed method on both clinical point clouds acquired under consistent acquisition conditions and on point clouds with inconsistent occlusions. The authors quantitatively evaluated the reconstruction performance with respect to root-mean-squared-error, by comparing its reconstruction results against that from the variational method. Results: On clinical point clouds, both the SR and MSR models have achieved sub-millimeter reconstruction accuracy and reduced the reconstruction time by two orders of magnitude to a subsecond reconstruction time. On point clouds with inconsistent occlusions, the MSR model has demonstrated its advantage in achieving consistent and robust performance despite the introduced
Energy Technology Data Exchange (ETDEWEB)
Liu, Wenyang [Department of Bioengineering, University of California, Los Angeles, Los Angeles, California 90095 (United States); Cheung, Yam [Department of Radiation Oncology, University of Texas Southwestern, Dallas, Texas 75390 (United States); Sawant, Amit [Department of Radiation Oncology, University of Texas Southwestern, Dallas, Texas, 75390 and Department of Radiation Oncology, University of Maryland, College Park, Maryland 20742 (United States); Ruan, Dan, E-mail: druan@mednet.ucla.edu [Department of Bioengineering, University of California, Los Angeles, Los Angeles, California 90095 and Department of Radiation Oncology, University of California, Los Angeles, Los Angeles, California 90095 (United States)
2016-05-15
Purpose: To develop a robust and real-time surface reconstruction method on point clouds captured from a 3D surface photogrammetry system. Methods: The authors have developed a robust and fast surface reconstruction method on point clouds acquired by the photogrammetry system, without explicitly solving the partial differential equation required by a typical variational approach. Taking advantage of the overcomplete nature of the acquired point clouds, their method solves and propagates a sparse linear relationship from the point cloud manifold to the surface manifold, assuming both manifolds share similar local geometry. With relatively consistent point cloud acquisitions, the authors propose a sparse regression (SR) model to directly approximate the target point cloud as a sparse linear combination from the training set, assuming that the point correspondences built by the iterative closest point (ICP) is reasonably accurate and have residual errors following a Gaussian distribution. To accommodate changing noise levels and/or presence of inconsistent occlusions during the acquisition, the authors further propose a modified sparse regression (MSR) model to model the potentially large and sparse error built by ICP with a Laplacian prior. The authors evaluated the proposed method on both clinical point clouds acquired under consistent acquisition conditions and on point clouds with inconsistent occlusions. The authors quantitatively evaluated the reconstruction performance with respect to root-mean-squared-error, by comparing its reconstruction results against that from the variational method. Results: On clinical point clouds, both the SR and MSR models have achieved sub-millimeter reconstruction accuracy and reduced the reconstruction time by two orders of magnitude to a subsecond reconstruction time. On point clouds with inconsistent occlusions, the MSR model has demonstrated its advantage in achieving consistent and robust performance despite the introduced
Forte, Vincenzo; Ferrero Colomo, Alvaro; CERN. Geneva. ATS Department
2018-01-01
In the framework of the LHC Injectors Upgrade (LIU) project [1], this document summarises the beam-based measurement of the magnetic waveform of the PS injection kicker KFA45 [2], from data collected during several Machine Development (MD) sessions in 2016 and 2017. In the ﬁrst part of the document, the measurement methodology is introduced and the results presented and compared with the speciﬁcation required for a clean transfer of the bunches coming from the PSB after the upgrade. These measurements represent, to date, the only way to reconstruct the magnetic waveform. In the second part, kicker magnetic waveform PSpice®[3] simulations are compared and tuned to the measurements. Finally the simulated (validated through measurements) waveforms are used to estimate the future expected emittance growth for the diﬀerent PS injection schemes, both for (LIU target) LHC and ﬁxed target beams.
3D RECONSTRUCTION FROM MULTI-VIEW MEDICAL X-RAY IMAGES – REVIEW AND EVALUATION OF EXISTING METHODS
Directory of Open Access Journals (Sweden)
S. Hosseinian
2015-12-01
Full Text Available The 3D concept is extremely important in clinical studies of human body. Accurate 3D models of bony structures are currently required in clinical routine for diagnosis, patient follow-up, surgical planning, computer assisted surgery and biomechanical applications. However, 3D conventional medical imaging techniques such as computed tomography (CT scan and magnetic resonance imaging (MRI have serious limitations such as using in non-weight-bearing positions, costs and high radiation dose(for CT. Therefore, 3D reconstruction methods from biplanar X-ray images have been taken into consideration as reliable alternative methods in order to achieve accurate 3D models with low dose radiation in weight-bearing positions. Different methods have been offered for 3D reconstruction from X-ray images using photogrammetry which should be assessed. In this paper, after demonstrating the principles of 3D reconstruction from X-ray images, different existing methods of 3D reconstruction of bony structures from radiographs are classified and evaluated with various metrics and their advantages and disadvantages are mentioned. Finally, a comparison has been done on the presented methods with respect to several metrics such as accuracy, reconstruction time and their applications. With regards to the research, each method has several advantages and disadvantages which should be considered for a specific application.
Elastic reflection waveform inversion with variable density
Li, Yuanyuan
2017-08-17
Elastic full waveform inversion (FWI) provides a better description of the subsurface than those given by the acoustic assumption. However it suffers from a more serious cycle skipping problem compared with the latter. Reflection waveform inversion (RWI) provides a method to build a good background model, which can serve as an initial model for elastic FWI. Therefore, we introduce the concept of RWI for elastic media, and propose elastic RWI with variable density. We apply Born modeling to generate the synthetic reflection data by using optimized perturbations of P- and S-wave velocities and density. The inversion for the perturbations in P- and S-wave velocities and density is similar to elastic least-squares reverse time migration (LSRTM). An incorrect initial model will lead to some misfits at the far offsets of reflections; thus, can be utilized to update the background velocity. We optimize the perturbation and background models in a nested approach. Numerical tests on the Marmousi model demonstrate that our method is able to build reasonably good background models for elastic FWI with absence of low frequencies, and it can deal with the variable density, which is needed in real cases.
Region of interest evaluation of SPECT image reconstruction methods using a realistic brain phantom
International Nuclear Information System (INIS)
Xia, Weishi; Glick, S.J.; Soares, E.J.
1996-01-01
A realistic numerical brain phantom, developed by Zubal et al, was used for a region-of-interest evaluation of the accuracy and noise variance of the following SPECT reconstruction methods: (1) Maximum-Likelihood reconstruction using the Expectation-Maximization (ML-EM) algorithm; (2) an EM algorithm using ordered-subsets (OS-EM); (3) a re-scaled block iterative EM algorithm (RBI-EM); and (4) a filtered backprojection algorithm that uses a combination of the Bellini method for attenuation compensation and an iterative spatial blurring correction method using the frequency-distance principle (FDP). The Zubal phantom was made from segmented MRI slices of the brain, so that neuro-anatomical structures are well defined and indexed. Small regions-of-interest (ROIs) from the white matter, grey matter in the center of the brain and grey matter from the peripheral area of the brain were selected for the evaluation. Photon attenuation and distance-dependent collimator blurring were modeled. Multiple independent noise realizations were generated for two different count levels. The simulation study showed that the ROI bias measured for the EM-based algorithms decreased as the iteration number increased, and that the OS-EM and RBI-EM algorithms (16 and 64 subsets were used) achieved the equivalent accuracy of the ML-EM algorithm at about the same noise variance, with much fewer number of iterations. The Bellini-FDP restoration algorithm converged fast and required less computation per iteration. The ML-EM algorithm had a slightly better ROI bias vs. variance trade-off than the other algorithms
Liu, Wenyang; Cheung, Yam; Sawant, Amit; Ruan, Dan
2016-05-01
To develop a robust and real-time surface reconstruction method on point clouds captured from a 3D surface photogrammetry system. The authors have developed a robust and fast surface reconstruction method on point clouds acquired by the photogrammetry system, without explicitly solving the partial differential equation required by a typical variational approach. Taking advantage of the overcomplete nature of the acquired point clouds, their method solves and propagates a sparse linear relationship from the point cloud manifold to the surface manifold, assuming both manifolds share similar local geometry. With relatively consistent point cloud acquisitions, the authors propose a sparse regression (SR) model to directly approximate the target point cloud as a sparse linear combination from the training set, assuming that the point correspondences built by the iterative closest point (ICP) is reasonably accurate and have residual errors following a Gaussian distribution. To accommodate changing noise levels and/or presence of inconsistent occlusions during the acquisition, the authors further propose a modified sparse regression (MSR) model to model the potentially large and sparse error built by ICP with a Laplacian prior. The authors evaluated the proposed method on both clinical point clouds acquired under consistent acquisition conditio