WorldWideScience

Sample records for waveform reconstruction method

  1. Inversion method for initial tsunami waveform reconstruction

    Directory of Open Access Journals (Sweden)

    V. V. Voronin

    2014-12-01

    This paper deals with the application of the r-solution method to recover the initial tsunami waveform in a tsunami source area from remote water-level measurements. Wave propagation is considered within the scope of linear shallow-water theory. The ill-posed inverse problem is regularized by means of least-squares inversion using a truncated SVD approach. The properties of the obtained solution are determined to a large extent by the properties of the inverse operator, which were investigated numerically. The method presented allows one to control the instability of the numerical solution and to obtain an acceptable result in spite of the ill-posedness of the problem. It is shown that the accuracy of tsunami source reconstruction strongly depends on the signal-to-noise ratio, the azimuthal coverage of recording stations with respect to the source area, and bathymetric features along the wave path. The numerical experiments were carried out with synthetic data and various computational domains, including a real bathymetry. The proposed method allows us to make a preliminary prediction of the efficiency of the inversion with a given set of recording stations and to find the most informative part of the existing observation system. This essential property of the method can prove useful in designing a tsunami monitoring system.
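
The truncated-SVD regularization described in this abstract can be sketched numerically. The forward operator `G`, model, and data below are synthetic placeholders (the paper's operator comes from shallow-water wave propagation); the truncation index `k` plays the stabilizing role the abstract attributes to the r-solution.

```python
import numpy as np

def tsvd_solve(G, d, k):
    """Least-squares solution of G m = d, regularized by truncating the SVD
    of G to its k largest singular values (r-solution style).
    G : (n_obs, n_model) forward operator; d : observed data."""
    U, s, Vt = np.linalg.svd(G, full_matrices=False)
    # Keep only the k best-conditioned directions; the discarded ones are
    # the unstable part of the inverse operator.
    s_inv = np.zeros_like(s)
    s_inv[:k] = 1.0 / s[:k]
    return Vt.T @ (s_inv * (U.T @ d))

# Toy example: an ill-conditioned operator with slightly noisy data.
rng = np.random.default_rng(0)
G = np.vander(np.linspace(0, 1, 20), 8, increasing=True)  # nearly rank-deficient
m_true = rng.normal(size=8)
d = G @ m_true + 1e-6 * rng.normal(size=20)
m_est = tsvd_solve(G, d, k=5)
```

Choosing `k` trades resolution against noise amplification, which mirrors the paper's control of solution instability.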

  2. Performance enhancement of the single-phase series active filter by employing the load voltage waveform reconstruction and line current sampling delay reduction methods

    DEFF Research Database (Denmark)

    Senturk, O.S.; Hava, A.M.

    2011-01-01

    This paper proposes the waveform reconstruction method (WRM), which is utilized in the single-phase series active filter's (SAF's) control algorithm, in order to extract the load harmonic voltage component of voltage-harmonic-type single-phase diode rectifier loads. Employing WRM and the line current sampling delay reduction method, a single-phase SAF compensated system provides higher harmonic isolation performance and higher stability margins compared to the system using conventional synchronous-reference-frame-based methods. The analytical, simulation, and experimental studies of a 2.5 kW single-phase SAF compensated system prove the theory.

  3. High-Performance Harmonic Isolation and Load Voltage Regulation of the Three-Phase Series Active Filter Utilizing the Waveform Reconstruction Method

    DEFF Research Database (Denmark)

    Senturk, Osman Selcuk; Hava, Ahmet M.

    2009-01-01

    This paper develops a waveform reconstruction method (WRM) for high-accuracy and high-bandwidth signal decomposition of voltage-harmonic-type three-phase diode rectifier load voltage into its harmonic and fundamental components, which are utilized in the series active filter (SAF) control algorithms. The SAF-compensated system utilizing WRM provides high-performance load harmonic voltage isolation and load voltage regulation at steady state and during transients compared to the system utilizing synchronous-reference-frame-based signal decomposition. In addition, reducing the line current sampling...

  4. Waveform Inversion with Source Encoding for Breast Sound Speed Reconstruction in Ultrasound Computed Tomography

    CERN Document Server

    Wang, Kun; Anis, Fatima; Li, Cuiping; Duric, Neb; Anastasio, Mark A

    2015-01-01

    Ultrasound computed tomography (USCT) holds great promise for improving the detection and management of breast cancer. Because they are based on the acoustic wave equation, waveform inversion-based reconstruction methods can produce images that possess improved spatial resolution properties over those produced by ray-based methods. However, waveform inversion methods are computationally demanding and have not been applied widely in USCT breast imaging. In this work, source encoding concepts are employed to develop an accelerated USCT reconstruction method that circumvents the large computational burden of conventional waveform inversion methods. This method, referred to as the waveform inversion with source encoding (WISE) method, encodes the measurement data using a random encoding vector and determines an estimate of the sound speed distribution by solving a stochastic optimization problem by use of a stochastic gradient descent algorithm. Both computer-simulation and experimental phantom studies are conducted to demonstrate the use of the WISE method.

  5. High Performance Harmonic Isolation By Means of The Single-phase Series Active Filter Employing The Waveform Reconstruction Method

    DEFF Research Database (Denmark)

    Senturk, Osman Selcuk; Hava, Ahmet M.

    2009-01-01

    Employing the waveform reconstruction method and the line current sampling delay reduction method (SDRM), a single-phase SAF compensated system provides higher harmonic isolation performance and higher stability margins compared to the system using conventional synchronous-reference-frame-based methods. The analytical, simulation, and experimental studies of a 2.5 kW single-phase SAF compensated system prove the theory.

  6. Reconstructing core-collapse supernovae waveforms with advanced era interferometers

    Science.gov (United States)

    McIver, Jessica; LIGO Scientific Collaboration

    2015-04-01

    Among the wide range of potentially interesting astrophysical sources for Advanced LIGO and Advanced Virgo are galactic core-collapse supernovae. Although detectable core-collapse supernovae have a low expected rate (a few per century, or less), these signals would yield a wealth of new physics in the form of many messengers. Of particular interest is the insight into the explosion mechanism driving core-collapse supernovae that can be gleaned from the reconstructed gravitational wave signal. A well-reconstructed waveform will allow us to assess the likelihood of different explosion models, perform model selection, and potentially map unexpected features to new physics. This talk will present a study evaluating the current performance of the reconstruction of core-collapse supernova gravitational wave signals. We used simulated waveforms modeled after different explosion mechanisms, first injected into simulated strain data recolored to the expected Advanced LIGO/Virgo noise curves and then reconstructed using the pipelines Coherent Waveburst 2G and BayesWave. We will discuss the impact of these results on our ability to accurately reconstruct core-collapse supernova signals and, by extension, other potential astrophysical generators of rich, complex waveforms.

  7. Waveform reconstruction for an ultrasonic fiber Bragg grating sensor demodulated by an erbium fiber laser.

    Science.gov (United States)

    Wu, Qi; Okabe, Yoji

    2015-02-01

    Fiber Bragg grating (FBG) demodulated by an erbium fiber laser (EFL) has been used for ultrasonic detection recently. However, due to the inherent relaxation oscillation (RO) of the EFL, the detected ultrasonic signals have large deformations, especially in the low-frequency range. We proposed a novel data processing method to reconstruct an actual ultrasonic waveform. The noise spectrum was smoothed first; the actual ultrasonic spectrum was then obtained by deconvolution in order to mitigate the influence of the RO of the EFL. We proved by experiment that this waveform reconstruction method has high precision, and demonstrated that the FBG sensor demodulated by the EFL will have large practical applications in nondestructive testing.
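
The paper's reconstruction smooths the noise spectrum and then deconvolves the system response. The sketch below shows only the regularized spectral-division step, with an invented decaying-oscillation impulse response standing in for the EFL relaxation-oscillation transfer function; the SNR constant and signal shapes are illustrative assumptions.

```python
import numpy as np

def wiener_deconvolve(measured, impulse_response, snr=1e6):
    """Recover an input waveform from a measurement distorted by a known
    system response, via regularized (Wiener-style) spectral division.
    The 1/snr term keeps the division stable where the response is weak."""
    n = len(measured)
    H = np.fft.rfft(impulse_response, n)
    Y = np.fft.rfft(measured, n)
    X = Y * np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)  # Wiener filter
    return np.fft.irfft(X, n)

# Synthetic check: a tone burst convolved with a decaying oscillation.
t = np.linspace(0, 1, 512, endpoint=False)
x_true = np.sin(2 * np.pi * 40 * t) * np.exp(-((t - 0.3) ** 2) / 0.005)
h = np.exp(-t / 0.05) * np.cos(2 * np.pi * 8 * t)   # stand-in RO-like response
y = np.convolve(x_true, h)[:512]                    # distorted measurement
x_rec = wiener_deconvolve(y, h)
```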

  8. Waveform Inversion with Source Encoding for Breast Sound Speed Reconstruction in Ultrasound Computed Tomography

    Science.gov (United States)

    Wang, Kun; Matthews, Thomas; Anis, Fatima; Li, Cuiping; Duric, Neb; Anastasio, Mark A.

    2016-01-01

    Ultrasound computed tomography (USCT) holds great promise for improving the detection and management of breast cancer. Because they are based on the acoustic wave equation, waveform inversion-based reconstruction methods can produce images that possess improved spatial resolution properties over those produced by ray-based methods. However, waveform inversion methods are computationally demanding and have not been applied widely in USCT breast imaging. In this work, source encoding concepts are employed to develop an accelerated USCT reconstruction method that circumvents the large computational burden of conventional waveform inversion methods. This method, referred to as the waveform inversion with source encoding (WISE) method, encodes the measurement data using a random encoding vector and determines an estimate of the sound speed distribution by solving a stochastic optimization problem by use of a stochastic gradient descent algorithm. Both computer-simulation and experimental phantom studies are conducted to demonstrate the use of the WISE method. The results suggest that the WISE method maintains the high spatial resolution of waveform inversion methods while significantly reducing the computational burden. PMID:25768816

  9. Waveform inversion with source encoding for breast sound speed reconstruction in ultrasound computed tomography.

    Science.gov (United States)

    Wang, Kun; Matthews, Thomas; Anis, Fatima; Li, Cuiping; Duric, Neb; Anastasio, Mark A

    2015-03-01

    Ultrasound computed tomography (USCT) holds great promise for improving the detection and management of breast cancer. Because they are based on the acoustic wave equation, waveform inversion-based reconstruction methods can produce images that possess improved spatial resolution properties over those produced by ray-based methods. However, waveform inversion methods are computationally demanding and have not been applied widely in USCT breast imaging. In this work, source encoding concepts are employed to develop an accelerated USCT reconstruction method that circumvents the large computational burden of conventional waveform inversion methods. This method, referred to as the waveform inversion with source encoding (WISE) method, encodes the measurement data using a random encoding vector and determines an estimate of the sound speed distribution by solving a stochastic optimization problem by use of a stochastic gradient descent algorithm. Both computer simulation and experimental phantom studies are conducted to demonstrate the use of the WISE method. The results suggest that the WISE method maintains the high spatial resolution of waveform inversion methods while significantly reducing the computational burden.
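
The source-encoding idea behind WISE can be sketched with a toy linear forward model. Here each random matrix `A[j]` stands in for the wave-equation simulation of one emitter (a deliberate simplification of the paper's setting); a Rademacher encoding vector collapses all sources into a single "supershot" per stochastic gradient step, so each iteration costs one simulation instead of one per source.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy linear stand-in for the wave-equation forward model.
n_src, n_rec, n_par = 16, 32, 10
A = [rng.normal(size=(n_rec, n_par)) for _ in range(n_src)]
c_true = rng.normal(size=n_par)          # "sound speed" parameters
data = [A_j @ c_true for A_j in A]       # noiseless per-source data

def wise_sgd(steps=600, lr=5e-4):
    """Each step draws a random encoding vector w, forms one encoded
    'supershot', and takes a stochastic gradient step; the expected
    gradient equals the full multi-source gradient."""
    c = np.zeros(n_par)
    for _ in range(steps):
        w = rng.choice([-1.0, 1.0], size=n_src)
        A_enc = sum(w[j] * A[j] for j in range(n_src))
        d_enc = sum(w[j] * data[j] for j in range(n_src))
        c -= lr * (A_enc.T @ (A_enc @ c - d_enc))
    return c

c_est = wise_sgd()
```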

  10. An iterative method for 2D inverse scattering problems by alternating reconstruction of medium properties and wavefields: theory and application to the inversion of elastic waveforms

    Science.gov (United States)

    Rizzuti, G.; Gisolf, A.

    2017-03-01

    We study a reconstruction algorithm for the general inverse scattering problem based on the estimate of not only medium properties, as in more conventional approaches, but also wavefields propagating inside the computational domain. This extended set of unknowns is justified as a way to prevent local minimum stagnation, which is a common issue for standard methods. At each iteration of the algorithm, (i) the model parameters are obtained by solution of a convex problem, formulated from a special bilinear relationship of the data with respect to properties and wavefields (where the wavefield is kept fixed), and (ii) a better estimate of the wavefield is calculated, based on the previously reconstructed properties. The resulting scheme is computationally convenient since step (i) can greatly benefit from parallelization and the wavefield update (ii) requires modeling only in the known background model, which can be sped up considerably by factorization-based direct methods. The inversion method is successfully tested on synthetic elastic datasets.

  11. A New Method of Designing Waveform Codebook

    Institute of Scientific and Technical Information of China (English)

    1998-01-01

    The codebook search accounts for a large share of the computation in a CELP coder. This paper puts forward a new method for redesigning a known waveform codebook and reports the experimental data. It is shown that the new codebook decreases both the computational complexity and the transmission bit rate while maintaining high synthesis speech quality.

  12. Photonic arbitrary waveform generator based on Taylor synthesis method

    DEFF Research Database (Denmark)

    Liao, Shasha; Ding, Yunhong; Dong, Jianji

    2016-01-01

    Arbitrary waveform generation has been widely used in optical communication, radar systems and many other applications. We propose and experimentally demonstrate a silicon-on-insulator (SOI) on-chip optical arbitrary waveform generator based on the Taylor synthesis method. In our scheme, a Gaussian pulse is launched into cascaded microrings to obtain first-, second- and third-order differentiations. By controlling the amplitude and phase of the initial pulse and its successive differentiations, we can realize an arbitrary waveform generator according to the Taylor expansion. We obtain several typical waveforms such as square, triangular, flat-top, sawtooth and Gaussian waveforms. Unlike other schemes based on Fourier synthesis or frequency-to-time mapping, our scheme is based on the Taylor synthesis method and does not require any spectral disperser or large dispersion, which are difficult to fabricate on chip.

  13. Photonic arbitrary waveform generator based on Taylor synthesis method.

    Science.gov (United States)

    Liao, Shasha; Ding, Yunhong; Dong, Jianji; Yan, Siqi; Wang, Xu; Zhang, Xinliang

    2016-10-17

    Arbitrary waveform generation has been widely used in optical communication, radar systems and many other applications. We propose and experimentally demonstrate a silicon-on-insulator (SOI) on-chip optical arbitrary waveform generator based on the Taylor synthesis method. In our scheme, a Gaussian pulse is launched into cascaded microrings to obtain first-, second- and third-order differentiations. By controlling the amplitude and phase of the initial pulse and its successive differentiations, we can realize an arbitrary waveform generator according to the Taylor expansion. We obtain several typical waveforms such as square, triangular, flat-top, sawtooth and Gaussian waveforms. Unlike other schemes based on Fourier synthesis or frequency-to-time mapping, our scheme is based on the Taylor synthesis method. It does not require any spectral disperser or large dispersion, which are difficult to fabricate on chip, and it is compact and capable of integration with electronics.
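
A numerical analogue of the Taylor-synthesis idea: weight a Gaussian pulse and its first few derivatives (the microrings' role) to approximate a target shape. The flat-top weighting `a2 = 1/6` is a worked assumption chosen to cancel the quadratic curvature at the peak, not a value from the paper.

```python
import numpy as np

t = np.linspace(-5, 5, 1001)
dt = t[1] - t[0]
g = np.exp(-t ** 2)        # input Gaussian pulse

# First-, second- and third-order differentiations (the microrings' role).
d1 = np.gradient(g, dt)
d2 = np.gradient(d1, dt)
d3 = np.gradient(d2, dt)

def taylor_shape(a0, a1, a2, a3):
    """Weighted sum of the pulse and its derivatives, i.e. a truncated
    Taylor expansion of a target waveform around the Gaussian."""
    return a0 * g + a1 * d1 + a2 * d2 + a3 * d3

# Flat-top-like pulse: g + g''/6 has zero second derivative at the peak,
# since g''(0) = -2 and g''''(0) = 12 cancel for the coefficient 1/6.
flat_top = taylor_shape(1.0, 0.0, 1.0 / 6.0, 0.0)
```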

  14. Signal Analysis and Waveform Reconstruction of Shock Waves Generated by Underwater Electrical Wire Explosions with Piezoelectric Pressure Probes.

    Science.gov (United States)

    Zhou, Haibin; Zhang, Yongmin; Han, Ruoyu; Jing, Yan; Wu, Jiawei; Liu, Qiaojue; Ding, Weidong; Qiu, Aici

    2016-04-22

    Underwater shock waves (SWs) generated by underwater electrical wire explosions (UEWEs) have been widely studied and applied. Precise measurement of this kind of SW is important, but very difficult to accomplish due to their high peak pressure, steep rising edge and very short pulse width (on the order of tens of μs). This paper aims to analyze the signals obtained by two kinds of commercial piezoelectric pressure probes, and to reconstruct the correct pressure waveform from the distorted one measured by the probes. It is found that both the PCB138 and Müller-plate probes can be used to measure the relative SW pressure value because of their good uniformity and linearity, but neither of them can obtain precise SW waveforms. In order to better approximate the real SW signal, we propose a new multi-exponential pressure waveform model, which accounts for the faster pressure decay at the early stage and the slower pressure decay at later times. Based on this model and the energy conservation law, the pressure waveform obtained by the PCB138 probe has been reconstructed, and the reconstruction accuracy has been verified against the signals obtained by the Müller-plate probe. Reconstruction results show that the measured SW peak pressures are smaller than the real signal. The waveform reconstruction method is both reasonable and reliable.
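
The multi-exponential model the abstract describes has the generic form p(t) = p_peak · Σ aᵢ exp(−t/τᵢ), with a fast term for the early decay and a slow term for later times. The sketch below uses illustrative coefficients, not the paper's fitted values.

```python
import numpy as np

def multi_exp_pressure(t, p_peak, amps, taus):
    """Multi-exponential shock-wave model:
    p(t) = p_peak * sum_i a_i * exp(-t / tau_i),
    combining a fast early decay with a slower late-time decay."""
    t = np.asarray(t)
    return p_peak * sum(a * np.exp(-t / tau) for a, tau in zip(amps, taus))

# Illustrative parameters: 30 MPa peak, 2 us fast and 15 us slow decays.
t = np.linspace(0.0, 50e-6, 500)
p = multi_exp_pressure(t, 30e6, (0.7, 0.3), (2e-6, 15e-6))
```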

  15. Seismic Waveform Inversion Using the Finite-Difference Contrast Source Inversion Method

    OpenAIRE

    Bo Han; Qinglong He; Yong Chen; Yixin Dou

    2014-01-01

    This paper extends the finite-difference contrast source inversion method to reconstruct the mass density for two-dimensional elastic wave inversion in the framework of full-waveform inversion. The contrast source inversion method is a nonlinear iterative method that alternately reconstructs contrast sources and the contrast function. One of the most outstanding advantages of this inversion method is its high computational efficiency, since it does not need to simulate a fu...

  16. A new earthquake location method based on the waveform inversion

    CERN Document Server

    Wu, Hao; Huang, Xueyuan; Yang, Dinghui

    2016-01-01

    In this paper, a new earthquake location method based on waveform inversion is proposed. The waveform misfit function is well known to be very sensitive to the phase shift between the synthetic waveform signal and the real waveform signal; as a result, the convergence domain of conventional waveform-based earthquake location methods is very small. In the present study, by introducing and solving a simple sub-optimization problem, we greatly expand the convergence domain of the waveform-based earthquake location method. According to a large number of numerical experiments, the new method expands the range of convergence by several tens of times. This allows us to locate the earthquake accurately even from relatively poor initial values.
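
The abstract does not specify the sub-optimization problem, so the sketch below uses a common stand-in for the same idea: solve an inner problem for the best time shift (here by cross-correlation) before measuring the residual, so that the misfit varies smoothly with the shift and the convergence basin widens.

```python
import numpy as np

def shift_insensitive_misfit(syn, obs):
    """Illustrative convexified misfit: first solve the inner sub-problem
    of finding the best integer time shift between synthetic and observed
    signals (via cross-correlation), then measure the aligned L2 residual.
    Returns (misfit, estimated lag)."""
    xc = np.correlate(obs, syn, mode="full")
    lag = int(np.argmax(xc)) - (len(syn) - 1)   # obs ~ syn delayed by lag
    aligned = np.roll(syn, lag)
    return 0.5 * np.sum((aligned - obs) ** 2), lag

# Synthetic check: a pulse observed 7 samples late.
t_idx = np.arange(256)
syn = np.exp(-((t_idx - 128) ** 2) / 50.0)
obs = np.roll(syn, 7)
J, lag = shift_insensitive_misfit(syn, obs)
```

A plain L2 misfit between `syn` and `obs` would oscillate as the trial origin time varies; the aligned misfit does not, which is the mechanism the paper exploits to enlarge the convergence domain.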

  17. Tsunami waveform inversion by adjoint methods

    Science.gov (United States)

    Pires, Carlos; Miranda, Pedro M. A.

    2001-09-01

    An adjoint method for tsunami waveform inversion is proposed, as an alternative to the technique based on Green's functions of the linear long wave model. The method has the advantage of being able to use the nonlinear shallow water equations, or other appropriate equation sets, and to optimize an initial state given as a linear or nonlinear function of any set of free parameters. This last facility is used to perform explicit optimization of the focal fault parameters, characterizing the initial sea surface displacement of tsunamigenic earthquakes. The proposed methodology is validated with experiments using synthetic data, showing the possibility of recovering all relevant details of a tsunami source from tide gauge observations, providing that the adjoint method is constrained in an appropriate manner. It is found, as in other methods, that the inversion skill of tsunami sources increases with the azimuthal and temporal coverage of assimilated tide gauge stations; furthermore, it is shown that the eigenvalue analysis of the Hessian matrix of the cost function provides a consistent and useful methodology to choose the subset of independent parameters that can be inverted with a given dataset of observations and to evaluate the error of the inversion process. The method is also applied to real tide gauge series, from the tsunami of the February 28, 1969, Gorringe Bank earthquake, suggesting some reasonable changes to the assumed focal parameters of that event. It is suggested that the method proposed may be able to deal with transient tsunami sources such as those generated by submarine landslides.

  18. Designing waveforms for temporal encoding using a frequency sampling method

    DEFF Research Database (Denmark)

    Gran, Fredrik; Jensen, Jørgen Arendt

    2007-01-01

    ...the amplitude spectrum of the transmitted waveform can be optimized, such that most of the energy is transmitted where the transducer has large amplification. To test the design method, a waveform was designed for a BK8804 linear array transducer. The resulting nonlinear frequency modulated waveform ...(for the linear frequency modulated signal) were tested for both waveforms in simulation with respect to the Doppler frequency shift occurring when probing moving objects. It was concluded that the Doppler effect of moving targets does not significantly degrade the filtered output. Finally, in vivo measurements...

  19. Inferring the physical properties of gravitational wave sources from multi-wavelet waveform reconstructions

    Science.gov (United States)

    Littenberg, Tyson; LIGO Scientific Collaboration

    2016-03-01

    The BayesWave burst detection and characterization algorithm was used during the first Advanced LIGO observing run as a follow-up analysis to candidate transient gravitational wave events. Among the BayesWave data products are robust reconstructed waveforms and probability density functions for metrics such as duration, bandwidth, etc. used to characterize the waveforms. We will demonstrate how the waveform metrics can be used to infer the astrophysical nature of a gravitational wave source, and present the status of BayesWave studies from the first advanced LIGO observing run.

  20. Adaptive Prony method for waveform distortion detection in power systems

    Energy Technology Data Exchange (ETDEWEB)

    Bracale, A.; Carpinelli, G. [Electrical Engineering Department, University of Napoli, Via Claudio 21, 80125 Napoli (Italy); Caramia, P. [Industrial Engineering Department, University of Cassino (Italy)

    2007-06-15

    IEC Standards characterize the waveform distortions in power systems with the amplitudes of harmonic and interharmonic groupings (subgroups and groups) calculated by using the waveform spectral components obtained with a 5 Hz frequency resolution DFT. In some cases the power system waveforms are characterized by means of spectral signal components that the DFT with 5 Hz frequency resolution is unable to capture with sufficient accuracy. In this paper a new Prony method is proposed to calculate the harmonic and interharmonic subgroups. This method is based on an adaptive technique that acts with the aim of minimizing the mean square relative error of signal estimation. (author)
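
The paper's adaptive variant minimizes the mean square relative error of the signal estimate; the sketch below shows only the classic Prony estimator it builds on, which models a sampled signal as a sum of complex exponentials via linear prediction and a root-finding step.

```python
import numpy as np

def prony(x, p):
    """Classic Prony estimation: model x[n] as a sum of p complex
    exponentials. Returns poles z_i and amplitudes h_i with
    x[n] ~ sum_i h_i * z_i**n."""
    N = len(x)
    # 1) Linear prediction: x[n] = a_1 x[n-1] + ... + a_p x[n-p].
    A = np.column_stack([x[p - 1 - k: N - 1 - k] for k in range(p)])
    a, *_ = np.linalg.lstsq(A, x[p:], rcond=None)
    # 2) Roots of the prediction polynomial give the exponentials z_i.
    z = np.roots(np.concatenate(([1.0], -a)))
    # 3) Linear least squares for the complex amplitudes.
    V = np.vander(z, N, increasing=True).T          # V[n, i] = z_i**n
    h, *_ = np.linalg.lstsq(V, x.astype(complex), rcond=None)
    return z, h

# Check on an exactly two-exponential (damped cosine) signal.
n = np.arange(64)
x = 1.5 * 0.95 ** n * np.cos(0.2 * n)
z, h = prony(x, p=2)
```

Because Prony fits damped sinusoids directly, it can resolve interharmonic components that a fixed 5 Hz resolution DFT smears across bins.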

  1. Signal quality quantification and waveform reconstruction of arterial blood pressure recordings.

    Science.gov (United States)

    Fanelli, A; Heldt, T

    2014-01-01

    Arterial blood pressure (ABP) is an important vital sign of the cardiovascular system. As with other physiological signals, its measurement can be corrupted by different sources of noise, interference, and artifact. Here, we present an algorithm for the quantification of signal quality and for the reconstruction of the ABP waveform in noise-corrupted segments of the measurement. The algorithm quantifies the quality of the ABP signal on a beat-by-beat basis by computing the normalized mean of successive differences of the ABP amplitude over each beat. In segments of poor signal quality, the ABP wavelets are then reconstructed on the basis of the expected cycle duration and envelope information derived from neighboring ABP wavelet segments. The algorithm was tested on two datasets of ABP waveform signals containing both invasive radial artery ABP and noninvasive ABP waveforms. Our results show that the approach is efficient in identifying the noisy segments (accuracy, sensitivity and specificity over 95%) and reliable in reconstructing beats that were artificially corrupted.
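
The beat-quality index the abstract describes can be sketched as below; the exact normalization used in the paper may differ, and the synthetic "beats" here are invented for illustration.

```python
import numpy as np

def beat_quality(beat):
    """Per-beat quality index: mean of successive differences of the ABP
    amplitude, normalized by the beat's pulse pressure. Clean beats vary
    smoothly, so the index is small; noisy beats score high."""
    diffs = np.abs(np.diff(beat))
    pulse_pressure = beat.max() - beat.min()
    return diffs.mean() / pulse_pressure

# A smooth synthetic beat vs. the same beat corrupted by noise.
t = np.linspace(0, 1, 100, endpoint=False)
clean = 80 + 40 * np.clip(np.sin(2 * np.pi * t), 0, None)   # mmHg-like
noisy = clean + np.random.default_rng(2).normal(0, 10, size=100)
```

Thresholding such an index flags the corrupted segments whose wavelets are then rebuilt from neighboring beats' cycle duration and envelope.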

  2. A method of waveform design based on mutual information

    Institute of Scientific and Technical Information of China (English)

    Bo JIU; Hongwei LIU; Liya LI; Shunjun WU

    2009-01-01

    A novel method called general water-filling, which is suitable when clutter is not negligible, is proposed to solve the waveform design problem of broadband radar for the recognition of multiple extended targets. The uncertainty of the target's radar signatures is decreased by maximizing the mutual information between a random extended target and the received signal. Then, the general water-filling method is applied to the waveform design problem for multiple extended target identification to increase the separability of multiple targets. Experimental results demonstrate the efficiency of the proposed method. Compared to the chirp signal and the water-filling signal, our method improves the classification rates and even performs better at low signal-to-interference-plus-noise ratio (SINR).
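
For reference, the textbook water-filling allocation the paper generalizes is sketched below: spectral power is poured over frequency bins until a common "water level" is reached, favoring bins with low noise. This is only the classic version, not the paper's clutter-aware general water-filling.

```python
import numpy as np

def waterfill(noise_levels, total_power):
    """Classic water-filling: choose power_i = max(0, mu - noise_i)
    with the water level mu set so the powers sum to total_power."""
    noise = np.sort(np.asarray(noise_levels, dtype=float))
    for k in range(len(noise), 0, -1):
        mu = (total_power + noise[:k].sum()) / k   # level if k bins active
        if mu > noise[k - 1]:
            break                                  # consistent active set
    return np.maximum(0.0, mu - np.asarray(noise_levels, dtype=float))

alloc = waterfill([1.0, 4.0, 2.0], total_power=6.0)
```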

  3. Analysis of speech waveform quantization methods

    Directory of Open Access Journals (Sweden)

    Tadić Predrag R.

    2008-01-01

    Digitalization, consisting of sampling and quantization, is the first step in any digital signal processing algorithm. In most cases, the quantization is uniform. However, given knowledge of certain stochastic attributes of the signal (namely, its probability density function, or pdf), quantization can be made more efficient, in the sense of achieving a greater signal-to-quantization-noise ratio. This means that narrower channel bandwidths are required for transmitting a signal of the same quality. Alternatively, if signal storage is of interest rather than transmission, considerable savings in memory space can be made. This paper presents several available methods for speech signal pdf estimation and quantizer optimization in the sense of minimizing the quantization error power.
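
One standard pdf-optimized quantizer of the kind this abstract alludes to is the Lloyd-Max iteration, sketched here on heavy-tailed (speech-like) samples; the Laplacian stand-in for a speech amplitude pdf is an assumption for illustration.

```python
import numpy as np

def lloyd_max(samples, n_levels, iters=100):
    """Lloyd-Max iteration: scalar quantizer levels minimizing mean squared
    quantization error for the empirical distribution of `samples`
    (the pdf-aware alternative to uniform quantization)."""
    # Initialize levels at evenly spaced quantiles of the data.
    levels = np.quantile(samples, (np.arange(n_levels) + 0.5) / n_levels)
    for _ in range(iters):
        edges = (levels[:-1] + levels[1:]) / 2       # nearest-neighbor cells
        idx = np.searchsorted(edges, samples)
        # Move each level to the centroid of its cell (if non-empty).
        levels = np.array([samples[idx == j].mean() if np.any(idx == j)
                           else levels[j] for j in range(n_levels)])
    return levels

rng = np.random.default_rng(3)
x = rng.laplace(size=20000)          # speech-like heavy-tailed amplitudes
lv = lloyd_max(x, 8)
```

Against a uniform 8-level quantizer spanning the same range, the pdf-optimized levels concentrate where the samples are dense and so achieve lower quantization error power.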

  4. Assessment of waveform control method for mitigation of low-frequency current ripple

    OpenAIRE

    Zhu, GR; Wang, HR; Xiao, CY; Kang, Y.; Tan, SC

    2013-01-01

    The waveform control method can mitigate the low-frequency ripple current drawn from the DC distribution while the DC distribution system delivers AC power to the load through a differential inverter. This paper presents an assessment of the waveform control method and a comparative study of operation with and without it. Experimental results are provided to explain the operation and showcase the performance with and without the waveform control method. Results...

  5. Observation of Reconstructable Radio Waveforms from Solar Flares with the Askaryan Radio Array (ARA)

    Science.gov (United States)

    Clark, Brian; Askaryan Radio Array Collaboration

    2017-01-01

    The Askaryan Radio Array (ARA) is an ultra-high energy (>10^17 eV) neutrino detector in phased construction at the South Pole. The full detector will consist of 37 autonomous stations of antennas which search for the radio pulses produced by neutrino interactions in the Antarctic ice. Three of the proposed detectors have been installed at up to 200 m depth, with an additional two slated for deployment in Austral summer 2017. A prototype of the detector was deployed in January 2011, in time to serendipitously observe the relatively active solar month of February. In this talk, we will present preliminary results from an analysis of radio waveforms associated with an X-class solar flare observed in this prototype station. These are the first reconstructable events of natural origin seen by ARA, and could potentially be a powerful calibration source for the array.

  6. Parallel full-waveform inversion in the frequency domain by the Gauss-Newton method

    Science.gov (United States)

    Zhang, Wensheng; Zhuang, Yuan

    2016-06-01

    In this paper, we investigate full-waveform inversion in the frequency domain. We first test the inversion ability of three numerical optimization methods, i.e., the steepest-descent method, the Newton-CG method and the Gauss-Newton method, for a simple model. The results show that the Gauss-Newton method performs well and efficiently. Then numerical computations for the Marmousi benchmark model by the Gauss-Newton method are implemented. A parallel algorithm based on the message passing interface (MPI) is applied, as the inversion is a typical large-scale computational problem. Numerical computations show that the Gauss-Newton method has good ability to reconstruct the complex model.
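
The Gauss-Newton step the abstract refers to solves the linearized normal equations JᵀJ Δm = −Jᵀr at each iteration. The sketch below applies it to a small generic nonlinear least-squares fit; in frequency-domain FWI, `r` would be the data misfit and `J` the Frechet (sensitivity) matrix from a wave simulation.

```python
import numpy as np

def gauss_newton(residual, jacobian, m0, iters=20):
    """Gauss-Newton iteration for min ||r(m)||^2: repeatedly solve the
    linearized normal equations J^T J dm = -J^T r and update m."""
    m = np.asarray(m0, dtype=float)
    for _ in range(iters):
        r = residual(m)
        J = jacobian(m)
        dm = np.linalg.solve(J.T @ J, -J.T @ r)
        m = m + dm
    return m

# Toy nonlinear problem: fit y = exp(-a t) + b, true (a, b) = (1.3, 0.2).
t = np.linspace(0, 4, 40)
y = np.exp(-1.3 * t) + 0.2
res = lambda m: np.exp(-m[0] * t) + m[1] - y
jac = lambda m: np.column_stack([-t * np.exp(-m[0] * t), np.ones_like(t)])
m_fit = gauss_newton(res, jac, [1.0, 0.0])
```

In large-scale FWI the normal equations are solved iteratively and the per-frequency misfits are distributed across MPI ranks, but the update structure is the same.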

  7. An Identification Method of Magnetizing Inrush Current Phenomena by Voltage Waveform

    Science.gov (United States)

    Naitoh, Tadashi; Takeda, Keiki; Toyama, Atsushi; Maeda, Tatsuhiko

    In this paper, the authors propose a new method for identifying magnetizing inrush current phenomena. In general, the identification is performed using the current waveform. However, saturation of the current transformer can prevent a usable current waveform from being obtained. Therefore, the authors introduce an identification method based on the voltage waveform, for which voltage transformer saturation does not occur. Then, applying Aitken's Δ2-process, it is shown that the new identification method gives the exact saturation on/off time.

  8. Method and apparatus for resonant frequency waveform modulation

    Science.gov (United States)

    Taubman, Matthew S [Richland, WA

    2011-06-07

    A resonant modulator device and process are described that provide enhanced resonant frequency waveforms to electrical devices including, e.g., laser devices. Faster, larger, and more complex modulation waveforms are obtained than can be obtained by use of conventional current controllers alone.

  9. Full Waveform Inversion Using Oriented Time Migration Method

    KAUST Repository

    Zhang, Zhendong

    2016-04-12

    Full waveform inversion (FWI) for reflection events is limited by its linearized update requirements, given by a process equivalent to migration. Unless the background velocity model is reasonably accurate, the resulting gradient can have an inaccurate update direction, leading the inversion to converge into what we refer to as local minima of the objective function. In this thesis, I first look into the subject of full model wavenumber to analyze the root of the local minima problem and suggest possible ways to avoid it. I then analyze the possibility of recovering the corresponding wavenumber components through existing inversion and migration algorithms. Migration can be taken as a generalized inversion method which mainly retrieves the high-wavenumber part of the model. The conventional impedance inversion method gives a mapping relationship between the migration image (high wavenumber) and the model parameters (full wavenumber) and thus provides a possible cascaded inversion strategy to retrieve the full wavenumber components from seismic data. In the proposed approach, considering a mild lateral variation in the model, I find an analytical Frechet derivative corresponding to the new objective function, and the gradient is given by the oriented time-domain imaging method, which is independent of the background velocity. Specifically, I apply oriented time-domain imaging (which depends on the reflection slope instead of a background velocity) to the data residual to obtain the geometrical features of the velocity perturbation. Assuming that density is constant, the conventional 1D impedance inversion method is also applicable for 2D or 3D velocity inversion within the process of FWI. This method is not only capable of inverting for velocity, but also of retrieving anisotropic parameters, relying on linearized representations of the reflection response. To eliminate the cross-talk artifacts between different parameters, I...

  10. A Denoising Method for LiDAR Full-Waveform Data

    Directory of Open Access Journals (Sweden)

    Xudong Lai

    2015-01-01

    Decomposition of LiDAR full-waveform data can not only enhance the density and positioning accuracy of a point cloud, but also provide other useful parameters, such as pulse width, peak amplitude, and peak position, which are important for subsequent processing. Full-waveform data usually contain random noise. Traditional filtering algorithms often cause distortion in the waveform. The λ/μ filtering algorithm is based on the Mean Shift method; it smooths the signal iteratively and does not cause distortion in the waveform. In this paper, an improved λ/μ filtering algorithm is proposed, and several experiments on both simulated and real waveform data are implemented to prove the effectiveness of the proposed algorithm.
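
A λ/μ (Taubin-style) smoother alternates a mean-shift step toward the local average with a compensating step away from it, which damps noise with little waveform shrinkage. The step sizes and neighbor kernel below are illustrative assumptions, not the published algorithm's exact choices.

```python
import numpy as np

def lambda_mu_smooth(x, lam=0.33, mu=-0.34, iters=50):
    """Two-step smoothing in the spirit of a lambda/mu filter: a mean-shift
    step toward the neighbor average (lam > 0) followed by a compensating
    step away from it (mu < 0, |mu| > lam), limiting waveform distortion."""
    y = np.asarray(x, dtype=float).copy()
    for _ in range(iters):
        for f in (lam, mu):
            local_avg = np.convolve(y, [0.5, 0.0, 0.5], mode="same")
            shift = local_avg - y        # mean-shift vector toward neighbors
            shift[0] = shift[-1] = 0.0   # keep endpoints fixed
            y = y + f * shift
    return y

# A LiDAR-like return pulse with additive noise.
t = np.linspace(0, 1, 400)
pulse = np.exp(-((t - 0.5) ** 2) / 0.002)
noisy = pulse + np.random.default_rng(4).normal(0, 0.05, size=400)
smoothed = lambda_mu_smooth(noisy)
```

The negative μ step is what distinguishes this from plain iterated averaging: low-frequency pulse shape is nearly untouched while high-frequency noise is suppressed.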

  11. Modern methods of image reconstruction.

    Science.gov (United States)

    Puetter, R. C.

    The author reviews the image restoration or reconstruction problem in its general setting. He first discusses linear methods for solving the problem of image deconvolution, i.e. the case in which the data are a convolution of a point-spread function and an underlying unblurred image. Next, non-linear methods are introduced in the context of Bayesian estimation, including maximum likelihood and maximum entropy methods. The author then discusses the role of language and information-theory concepts in data compression and in solving the inverse problem. The concept of algorithmic information content (AIC) is introduced and shown to be crucial to achieving optimal data compression and optimized Bayesian priors for image reconstruction. The dependence of the AIC on the selection of language then suggests how efficient coordinate systems for the inverse problem may be selected. The author also introduces pixon-based image restoration and reconstruction methods. The relation between image AIC and the Bayesian incarnation of Occam's Razor is discussed, as well as the relation of multiresolution pixon languages to image fractal dimension. Also discussed is the relation of pixons to the role played by the Heisenberg uncertainty principle in statistical physics, and how pixon-based image reconstruction provides a natural extension to the Akaike information criterion for maximum likelihood. The author presents practical applications of pixon-based Bayesian estimation to the restoration of astronomical images. He discusses the effects of noise, the effects of finite sampling on resolution, and special problems associated with spatially correlated noise introduced by mosaicing. Comparisons to other methods demonstrate the significant improvements afforded by pixon-based methods and illustrate the science that such performance improvements allow.
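
As a concrete instance of the maximum-likelihood estimation the review covers, the sketch below implements Richardson-Lucy deconvolution (the EM algorithm for a Poisson noise model) in 1D. It is a generic textbook illustration, not the pixon-based method itself.

```python
import numpy as np

def richardson_lucy(blurred, psf, n_iter=50):
    """Maximum-likelihood (EM) deconvolution under a Poisson noise model:
    iteratively reweight the estimate by the back-projected data ratio."""
    psf_flip = psf[::-1]                       # correlation kernel
    estimate = np.full_like(blurred, blurred.mean())
    for _ in range(n_iter):
        predicted = np.convolve(estimate, psf, mode="same")
        ratio = blurred / np.maximum(predicted, 1e-12)
        estimate *= np.convolve(ratio, psf_flip, mode="same")
    return estimate
```

The multiplicative update keeps the estimate non-negative and conserves total flux, which is why this iteration is a standard baseline in astronomical image restoration.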

  12. Application of Carbonate Reservoir using waveform inversion and reverse-time migration methods

    Science.gov (United States)

    Kim, W.; Kim, H.; Min, D.; Keehm, Y.

    2011-12-01

    Recent exploration targets of oil and gas resources are deeper and more complicated subsurface structures, and carbonate reservoirs have become one of the most attractive and challenging targets in seismic exploration. To increase the rate of success in oil and gas exploration, detailed subsurface structures must be delineated, which makes the migration method an increasingly important factor in seismic data processing. Seismic migration has a long history, and many migration techniques have been developed. Among them, reverse-time migration is promising, because it can provide reliable images for complicated models even in the case of significant velocity contrasts. The reliability of seismic migration images depends on the subsurface velocity model, which can be extracted in several ways; these days, geophysicists try to obtain velocity models through seismic full waveform inversion. Since Lailly (1983) and Tarantola (1984) proposed that the adjoint state of the wave equation can be used in waveform inversion, the back-propagation techniques used in reverse-time migration have been applied to waveform inversion, which accelerated its development. In this study, we applied acoustic waveform inversion and reverse-time migration methods to carbonate reservoir models with various reservoir thicknesses to examine the feasibility of the methods in delineating such models. We first extracted subsurface material properties from acoustic waveform inversion, and then applied reverse-time migration using the inverted velocities as a background model. The waveform inversion in this study used the back-propagation technique, with a conjugate gradient method for optimization and a frequency-selection strategy. Finally, waveform inversion results showed that carbonate reservoir models are clearly inverted by waveform inversion and migration images based on the

  13. A signal denoising method for full-waveform LiDAR data

    OpenAIRE

    Azadbakht, M.; Fraser, C.S.; Zhang, C.; Leach, J.

    2013-01-01

    The lack of noise reduction methods resistant to waveform distortion can hamper correct and accurate decomposition in the processing of full-waveform LiDAR data. This paper evaluates a time-domain method for smoothing and reducing the noise level in such data. The Savitzky-Golay (S-G) approach approximates and smooths data by taking advantage of fitting a polynomial of degree d, using local least-squares. As a consequence of the integration of this method with the Singular Value Deco...

  14. Waveform control method for mitigating harmonics of inverter systems with nonlinear load

    DEFF Research Database (Denmark)

    Wang, Haoran; Zhu, Guorong; Fu, Xiaobin;

    2015-01-01

    DC power systems connecting to single-phase DC/AC inverters with nonlinear loads will have their DC sources being injected with AC ripple currents containing a low-frequency component at twice the output voltage frequency of the inverter and also other current harmonics. Such a current may create...... instability in the DC power system, lower its efficiency, and shorten the lifetime of the DC source. This paper presents a general waveform control method that can mitigate the injection of the low-frequency ripple current by the single-phase DC/AC inverter into the DC source. It also discusses the inhibiting...... ability of the waveform control method on other coexisting harmonics, while the DC source delivers AC power to a nonlinear load. With the application of the waveform control, the average DC output power is supplied by the DC source, while the other harmonics pulsation power can be confined to the AC side...

  15. A high success rate full-waveform lidar echo decomposition method

    Science.gov (United States)

    Xu, Lijun; Li, Duan; Li, Xiaolu

    2016-01-01

    A full-waveform light detection and ranging (LiDAR) echo decomposition method is proposed in this paper. In this method, peak points are used to detect separated echo components, while inflection points are combined with the corresponding peak points to detect overlapping echo components. The detected echo components are then sorted by energy in descending order and added one by one into the decomposition model. After each addition, the parameters of all echo components already in the decomposition model are iteratively renewed; the amplitudes and full widths at half maximum of the echo components are then compared with pre-set thresholds to identify and remove false echo components. Both simulation and experiment were carried out to evaluate the proposed method. In the simulation, 4000 full-waveform echoes with different numbers and parameters of echo components were generated and decomposed using the proposed method and three other commonly used methods. Results show that the proposed method has the highest success rate, 91.43%. In the experiment, 9549 Geoscience Laser Altimeter System (GLAS) echoes from the Shennongjia forest district in south China were employed as test echoes. The test echoes were decomposed using the four methods, and the decomposition results were compared with those provided by the National Snow and Ice Data Center. The comparison shows that the determination coefficient (R²) of the proposed method has the largest mean, 0.6838, and the smallest standard deviation, 0.3588, and that the distribution of the number of echo components decomposed from the GLAS echoes best matches full-waveform echoes from a forest area, implying that the superposition of the echo components decomposed by the proposed method best approximates the original full-waveform echo.
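
The detection step described above (peaks for separated components, inflection points for overlapping ones) can be sketched as follows. This is a minimal reconstruction of the idea only, not the authors' code; the energy sorting, iterative renewal, and thresholding stages are omitted.

```python
import numpy as np

def detect_components(echo):
    """Candidate echo components: local maxima mark separated echoes,
    while extra inflection-point pairs hint at overlapping ones."""
    d1 = np.gradient(echo)
    d2 = np.gradient(d1)
    # a peak is a sign change of the first derivative from + to -
    peaks = np.where((d1[:-1] > 0) & (d1[1:] <= 0))[0]
    # inflection points are sign changes of the second derivative
    inflections = np.where(np.diff(np.sign(d2)) != 0)[0]
    return peaks, inflections
```

Each clean Gaussian echo contributes one peak and two inflection points, so a surplus of inflection points over 2x the peak count signals overlapping components.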

  16. Averaging methods for extracting representative waveforms from motor unit action potential trains.

    Science.gov (United States)

    Malanda, Armando; Navallas, Javier; Rodriguez-Falces, Javier; Rodriguez-Carreño, Ignacio; Gila, Luis

    2015-08-01

    In the context of quantitative electromyography (EMG), it is of major interest to obtain a waveform that faithfully represents the set of potentials that constitute a motor unit action potential (MUAP) train. From this waveform, various parameters can be determined in order to characterize the MUAP for diagnostic analysis. The aim of this work was to conduct a thorough, in-depth review, evaluation and comparison of state-of-the-art methods for composing waveforms representative of MUAP trains. We evaluated nine averaging methods: Ensemble (EA), Median (MA), Weighted (WA), Five-closest (FCA), MultiMUP (MMA), Split-sweep median (SSMA), Sorted (SA), Trimmed (TA) and Robust (RA), in terms of three general-purpose signal processing figures of merit (SPMF) and seven clinically used MUAP waveform parameters (MWP). The convergence rate of the methods was assessed as the number of potentials per MUAP train (NPM) required to reach a level of performance that was not significantly improved by increasing this number. Test material comprised 78 MUAP trains obtained from the tibialis anterior of seven healthy subjects. Error measurements related to all SPMF and MWP parameters except MUAP amplitude descended asymptotically with increasing NPM for all methods. MUAP amplitude showed a consistent bias (around 4% for EA and SA, and 1-2% for the rest). MA, TA and SSMA had the lowest SPMF and MWP error figures, and therefore most accurately preserve and represent the MUAP physiological information of utility in clinical practice. The other methods, particularly WA, performed noticeably worse. Convergence rate was similar for all methods, with NPM values (averaged over the nine methods) ranging from 10 to 40, depending on the waveform parameter evaluated.
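
Three of the evaluated averaging schemes are simple enough to sketch directly. The implementations below follow the standard definitions of ensemble, median, and trimmed averaging of aligned sweeps; the paper's exact variants and alignment handling may differ.

```python
import numpy as np

def ensemble_average(potentials):
    """EA: pointwise mean over all sweeps; sensitive to artifacts."""
    return np.mean(potentials, axis=0)

def median_average(potentials):
    """MA: pointwise median; robust to a minority of outlier sweeps."""
    return np.median(potentials, axis=0)

def trimmed_average(potentials, trim=0.2):
    """TA: discard the largest and smallest `trim` fraction per sample,
    then average the remainder."""
    sorted_p = np.sort(potentials, axis=0)
    k = int(trim * len(potentials))
    return sorted_p[k:len(potentials) - k].mean(axis=0)
```

The robustness difference is easy to demonstrate: a few artifact-contaminated sweeps bias the ensemble mean, while the median and trimmed averages remain close to the underlying template.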

  17. A New Process Monitoring Method Based on Waveform Signal by Using Recurrence Plot

    OpenAIRE

    Cheng Zhou; Weidong Zhang

    2015-01-01

    Process monitoring is an important research problem in numerous areas. This paper proposes a novel process monitoring scheme by integrating the recurrence plot (RP) method and the control chart technique. Recently, the RP method has emerged as an effective tool to analyze waveform signals. However, unlike the existing RP methods that employ recurrence quantification analysis (RQA) to quantify the recurrence plot by a few summary statistics; we propose new concepts of template recurrence plots ...

  18. A reshaped excitation regenerating and mapping method for waveform correction in Lamb waves dispersion compensation

    Science.gov (United States)

    Luo, Zhi; Zeng, Liang; Lin, Jing; Hua, Jiadong

    2017-02-01

    The dispersion effect of Lamb waves causes wave-packets to spread out in space and time, making received signals hard to interpret. Although the conventional dispersion compensation method can restrain the dispersion effect, waveform deformation still remains in the compensated results. To eliminate the dispersion effect completely, a reshaped excitation dispersion compensation method is proposed in this paper. The method compensates the dispersed signal to the same shape as the original excitation by generating a reshaped excitation and then mapping the received signal from the time domain to the distance domain. Simulations and experiments are conducted to validate the waveform correction achieved by the reshaped excitation dispersion compensation method. Applied in the traditional delay-and-sum algorithm, the new dispersion compensation method effectively enhances the resolution of damage imaging.

  19. Some advanced parametric methods for assessing waveform distortion in a smart grid with renewable generation

    Science.gov (United States)

    Alfieri, Luisa

    2015-12-01

    Power quality (PQ) disturbances are becoming an important issue in smart grids (SGs) due to the significant economic consequences that they can generate on sensitive loads. However, SGs include several distributed energy resources (DERs) that can be interconnected to the grid with static converters, which lowers PQ levels. Among DERs, wind turbines and photovoltaic systems are expected to be used extensively due to the forecasted reduction in investment costs and other economic incentives. These systems can introduce significant time-varying voltage and current waveform distortions that require advanced spectral analysis methods. This paper provides an application of advanced parametric methods for assessing waveform distortions in SGs with dispersed generation. In particular, the standard International Electrotechnical Commission (IEC) method, some parametric methods (such as Prony and Estimation of Signal Parameters via Rotational Invariance Techniques (ESPRIT)), and some hybrid methods are critically compared on the basis of their accuracy and the computational effort required.
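
Of the parametric methods compared, Prony's method is the most compact to illustrate. The sketch below estimates modal frequencies from a sampled waveform via linear prediction and polynomial rooting; it is a textbook formulation, not the paper's implementation, and omits damping/amplitude estimation as well as the ESPRIT and hybrid variants.

```python
import numpy as np

def prony_frequencies(x, p, dt):
    """Classical Prony analysis: model x[n] as a sum of p complex
    exponentials and return the estimated frequencies in Hz."""
    N = len(x)
    # linear-prediction system: x[n] = -sum_k a[k] * x[n-1-k]
    M = np.column_stack([x[p - 1 - k : N - 1 - k] for k in range(p)])
    a, *_ = np.linalg.lstsq(M, -x[p:], rcond=None)
    # modal poles are the roots of the prediction polynomial
    roots = np.roots(np.concatenate(([1.0], a)))
    return np.angle(roots) / (2 * np.pi * dt)
```

A real signal with two tones needs p = 4 (two conjugate pole pairs); the positive-frequency roots then give the tone frequencies directly.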

  20. Multi-frequency accelerating strategy for the contrast source inversion method of ultrasound waveform tomography using pulse data

    Science.gov (United States)

    Lin, Hongxiang; Azuma, Takashi; Qu, Xiaolei; Takagi, Shu

    2017-03-01

    In this work, we construct a multi-frequency accelerating strategy for the contrast source inversion (CSI) method using pulse data in the time domain. CSI is a frequency-domain inversion method for ultrasound waveform tomography that does not require a forward solver during reconstruction. Several prior studies show that the CSI method converges well and is accurate in the low-center-frequency regime; in contrast, utilizing high-center-frequency data leads to a high-resolution reconstruction but slow convergence on large grids. Our objective is to take full advantage of all the low-frequency components of the pulse data together with the high-center-frequency data measured by the diagnostic device. First, we process the raw data in the frequency domain. The multi-frequency accelerating strategy then restarts CSI at the current frequency using the last iteration result obtained from the lower frequency component. The merit of the multi-frequency accelerating strategy is that the computational burden decreases during the first few iterations, because the low-frequency components of the dataset are computed on a coarse grid, assuming a fixed number of points per wavelength. In the numerical test, the pulse data were generated by the k-Wave simulator and processed to meet the requirements of the CSI method. We investigate the performance of the multi-frequency and single-frequency reconstructions and conclude that the multi-frequency accelerating strategy significantly enhances the quality of the reconstructed image while reducing the average computational time per iteration step.

  1. Full waveform inversion using oriented time-domain imaging method for vertical transverse isotropic media

    KAUST Repository

    Zhang, Zhendong

    2017-07-11

    Full waveform inversion for reflection events is limited by its linearized update requirements given by a process equivalent to migration. Unless the background velocity model is reasonably accurate, the resulting gradient can have an inaccurate update direction, leading the inversion to converge to what we refer to as local minima of the objective function. In our approach, we consider mild lateral variation in the model, and thus use a gradient given by the oriented time-domain imaging method. Specifically, we apply oriented time-domain imaging on the data residual to obtain the geometrical features of the velocity perturbation. After updating the model in the time domain, we convert the perturbation from time to depth using the average velocity. Considering density as constant, we can expand the conventional 1D impedance inversion method to 2D or 3D velocity inversion within the process of full waveform inversion. This method is not only capable of inverting for velocity, but it is also capable of retrieving anisotropic parameters relying on linearized representations of the reflection response. To eliminate the cross-talk artifacts between different parameters, we utilize what we consider to be an optimal parametrization for this step. To do so, we extend the prestack time-domain migration image in the incident-angle dimension to incorporate the angular dependence needed by the multiparameter inversion. For simple models, this approach provides an efficient and stable way to do full waveform inversion or modified seismic inversion and makes anisotropic inversion more practicable. The proposed method still needs kinematically accurate initial models, since it only recovers the high-wavenumber part, as the conventional full waveform inversion method does. Results on synthetic data for isotropic and anisotropic cases illustrate the benefits and limitations of this method.

  2. Multi-Gaussian fitting for pulse waveform using Weighted Least Squares and multi-criteria decision making method.

    Science.gov (United States)

    Wang, Lu; Xu, Lisheng; Feng, Shuting; Meng, Max Q-H; Wang, Kuanquan

    2013-11-01

    Analysis of the pulse waveform is a low-cost, non-invasive method for obtaining vital information related to the condition of the cardiovascular system. In recent years, different Pulse Decomposition Analysis (PDA) methods have been applied to disclose the pathological mechanisms of the pulse waveform. All these methods decompose a single-period pulse waveform into a constant number (such as 3, 4 or 5) of individual waves, and they do not pay much attention to the estimation error of the key points in the pulse waveform, even though the estimation of human vascular conditions depends on the positions of these key points. In this paper, we propose a Multi-Gaussian (MG) model to fit real pulse waveforms using an adaptive number (4 or 5 in our study) of Gaussian waves. The unknown parameters in the MG model are estimated by the Weighted Least Squares (WLS) method, and the optimized weight values corresponding to different sampling points are selected using the Multi-Criteria Decision Making (MCDM) method. The performance of the MG model and the WLS method has been evaluated by fitting 150 real pulse waveforms of five different types. The resulting Normalized Root Mean Square Error (NRMSE) was less than 2.0% and the estimation accuracy for the key points was satisfactory, demonstrating that our proposed method is effective in compressing, synthesizing and analyzing pulse waveforms.
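
A reduced version of the fitting step can be sketched as follows: with the Gaussian centers and widths held fixed, the amplitudes are linear in the model and a single weighted least-squares solve suffices. The full method also optimizes centers/widths and selects the weights by MCDM, which is omitted here; all names and values below are illustrative.

```python
import numpy as np

def wls_fit(t, pulse, centers, sigmas, weights):
    """Weighted least-squares amplitudes of a multi-Gaussian model with
    fixed centers and widths (a linear sub-problem of the full MG fit).
    Returns the amplitudes and the fitted waveform."""
    G = np.exp(-0.5 * ((t[:, None] - centers[None, :]) / sigmas[None, :]) ** 2)
    w = np.sqrt(weights)                      # weight the rows of the LS system
    amps, *_ = np.linalg.lstsq(w[:, None] * G, w * pulse, rcond=None)
    return amps, G @ amps

def nrmse(y, y_fit):
    """Normalized root-mean-square error, as used in the evaluation."""
    return np.sqrt(np.mean((y - y_fit) ** 2)) / (y.max() - y.min())
```

Larger weights near diagnostically important key points (e.g. the systolic peak) force the fit to be most accurate exactly where the vascular parameters are read off.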

  3. Distributed Reconstruction via Alternating Direction Method

    Directory of Open Access Journals (Sweden)

    Linyuan Wang

    2013-01-01

    Full Text Available With the development of compressive sensing theory, image reconstruction from few-view projections has received considerable research attention in the field of computed tomography (CT). Total-variation (TV)-based CT image reconstruction has been shown experimentally to produce accurate reconstructions from sparse-view data. In this study, a distributed reconstruction algorithm based on TV minimization has been developed. The algorithm is very simple, as it uses the alternating direction method. The proposed method can accelerate the alternating direction total variation minimization (ADTVM) algorithm without losing accuracy.
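
The abstract does not spell the algorithm out. As a minimal stand-in for TV-regularized reconstruction, here is 1D total-variation denoising by gradient descent on a smoothed TV term; the actual ADTVM work applies the alternating direction method to the full CT projection problem, and the parameter values below are illustrative.

```python
import numpy as np

def tv_denoise(y, lam=0.5, step=0.02, n_iter=2000, eps=1e-3):
    """Minimize 0.5*||x - y||^2 + lam*TV(x) by gradient descent,
    with TV smoothed as sum(sqrt(diff(x)^2 + eps))."""
    x = y.astype(float).copy()
    for _ in range(n_iter):
        dx = np.diff(x)
        g = dx / np.sqrt(dx * dx + eps)   # smoothed sign of the jumps
        tv_grad = np.zeros_like(x)
        tv_grad[:-1] -= g                 # adjoint of the difference operator
        tv_grad[1:] += g
        x -= step * ((x - y) + lam * tv_grad)
    return x
```

The TV penalty favors piecewise-constant solutions, which is why it recovers sharp edges from sparse or noisy data where quadratic smoothing would blur them.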

  4. A signal denoising method for full-waveform LiDAR data

    Science.gov (United States)

    Azadbakht, M.; Fraser, C. S.; Zhang, C.; Leach, J.

    2013-10-01

    The lack of noise reduction methods resistant to waveform distortion can hamper correct and accurate decomposition in the processing of full-waveform LiDAR data. This paper evaluates a time-domain method for smoothing and reducing the noise level in such data. The Savitzky-Golay (S-G) approach approximates and smooths data by taking advantage of fitting a polynomial of degree d, using local least-squares. As a consequence of the integration of this method with the Singular Value Decomposition (SVD) approach, and applying this filter on the singular vectors of the SVD, satisfactory denoising results can be obtained. The results of this SVD-based S-G approach have been evaluated using two different LiDAR datasets and also compared with those of other popular methods in terms of the degree of preservation of the moments of the signal and closeness to the noisy signal. The results indicate that the SVD-based S-G approach has superior performance in denoising full-waveform LiDAR data.
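
A compact version of the combination described, Savitzky-Golay smoothing applied to the leading singular vectors of a waveform matrix, might look like this; it uses SciPy's `savgol_filter`, and the rank and window settings are illustrative rather than the paper's.

```python
import numpy as np
from scipy.signal import savgol_filter

def svd_sg_denoise(waveforms, window=11, polyorder=3, rank=2):
    """Smooth the leading right singular vectors with an S-G filter,
    then rebuild a low-rank denoised estimate of the waveform matrix."""
    U, s, Vt = np.linalg.svd(waveforms, full_matrices=False)
    Vt_smooth = savgol_filter(Vt[:rank], window, polyorder, axis=1)
    return (U[:, :rank] * s[:rank]) @ Vt_smooth
```

Truncating the SVD removes noise spread over the discarded singular directions, while the polynomial fit smooths what remains without distorting pulse shape, the property emphasized in the abstract.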

  5. Method for positron emission mammography image reconstruction

    Science.gov (United States)

    Smith, Mark Frederick

    2004-10-12

    An image reconstruction method comprising accepting coincidence data from either a data file or in real time from a pair of detector heads, culling event data that is outside a desired energy range, optionally saving the desired data for each detector position or for each pair of detector pixels on the two detector heads, and then reconstructing the image either by backprojection image reconstruction or by iterative image reconstruction. In the backprojection image reconstruction mode, rays are traced between centers of lines of response (LORs); counts are allocated either by nearest-pixel interpolation or by an overlap method, then corrected for geometric effects and attenuation, and the data file is updated. If the iterative image reconstruction option is selected, one implementation is to compute a grid by Siddon ray tracing, and to perform maximum likelihood expectation maximization (MLEM) computed by either: a) tracing parallel rays between subpixels on opposite detector heads; or b) tracing rays between randomized endpoint locations on opposite detector heads.
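
For the backprojection mode, the nearest-pixel allocation step reduces, in a simplified parallel-head geometry, to intersecting each line of response with the image plane and incrementing the nearest pixel. The sketch below assumes 1D detector heads and pixel-unit coordinates; it is a toy stand-in for the patented method, with no overlap allocation and no geometric or attenuation correction.

```python
import numpy as np

def backproject(events, n_pix, head_sep, plane_z):
    """Nearest-pixel backprojection for a parallel-head PEM geometry:
    intersect each LOR with the image plane at z = plane_z."""
    img = np.zeros(n_pix)
    for x1, x2 in events:  # detector coordinates on heads at z=0 and z=head_sep
        x = x1 + (x2 - x1) * plane_z / head_sep  # LOR position at the plane
        j = int(round(x))
        if 0 <= j < n_pix:
            img[j] += 1
    return img
```

All LORs through a point source intersect at that source's pixel on the correct image plane, which is what makes even this crude allocation produce a focused image.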

  6. System and Method for Generating a Frequency Modulated Linear Laser Waveform

    Science.gov (United States)

    Pierrottet, Diego F. (Inventor); Petway, Larry B. (Inventor); Amzajerdian, Farzin (Inventor); Barnes, Bruce W. (Inventor); Lockard, George E. (Inventor); Hines, Glenn D. (Inventor)

    2017-01-01

    A system for generating a frequency modulated linear laser waveform includes a single frequency laser generator to produce a laser output signal. An electro-optical modulator modulates the frequency of the laser output signal to define a linear triangular waveform. An optical circulator passes the linear triangular waveform to a band-pass optical filter to filter out harmonic frequencies created in the waveform during modulation of the laser output signal, to define a pure filtered modulated waveform having a very narrow bandwidth. The optical circulator receives the pure filtered modulated laser waveform and transmits the modulated laser waveform to a target.

  7. System and Method for Generating a Frequency Modulated Linear Laser Waveform

    Science.gov (United States)

    Pierrottet, Diego F. (Inventor); Petway, Larry B. (Inventor); Amzajerdian, Farzin (Inventor); Barnes, Bruce W. (Inventor); Lockard, George E. (Inventor); Hines, Glenn D. (Inventor)

    2014-01-01

    A system for generating a frequency modulated linear laser waveform includes a single frequency laser generator to produce a laser output signal. An electro-optical modulator modulates the frequency of the laser output signal to define a linear triangular waveform. An optical circulator passes the linear triangular waveform to a band-pass optical filter to filter out harmonic frequencies created in the waveform during modulation of the laser output signal, to define a pure filtered modulated waveform having a very narrow bandwidth. The optical circulator receives the pure filtered modulated laser waveform and transmits the modulated laser waveform to a target.

  8. Assessing the blood pressure waveform of the carotid artery using an ultrasound image processing method

    Energy Technology Data Exchange (ETDEWEB)

    Soleimani, Effat; Mokhtari-Dizaji, Manijhe [Dept. of Medical Physics, Tarbiat Modares University, Tehran (Iran, Islamic Republic of); Fatouraee, Nasser [Dept. of Medical Engineering, Amirkabir University of Technology, Tehran (Iran, Islamic Republic of); Saben, Hazhir [Dept. Radiology, Imaging Center of Imam Khomaini Hospital, Tehran Medical Sciences University, Tehran (Iran, Islamic Republic of)

    2017-04-15

    The aim of this study was to introduce and implement a noninvasive method to derive the carotid artery pressure waveform directly by processing diagnostic sonograms of the carotid artery. Ultrasound image sequences of 20 healthy male subjects (age, 36±9 years) were recorded during three cardiac cycles. The internal diameter and blood velocity waveforms were extracted from consecutive sonograms over the cardiac cycles by using custom analysis programs written in MATLAB. Finally, the application of a mathematical equation yielded the time course of the arterial pressure. The resulting pressures were calibrated using the mean and diastolic pressures of the radial artery. A good correlation was found between the mean carotid blood pressure obtained from the ultrasound image processing and the mean radial blood pressure obtained using a standard digital sphygmomanometer (R=0.91). The mean difference between the carotid calibrated pulse pressures and those measured clinically was -1.333±6.548 mm Hg. The results of this study suggest that consecutive sonograms of the carotid artery can be used for estimating a blood pressure waveform. We believe that our results promote a noninvasive technique for clinical applications that overcomes the reproducibility problems, both technical and anatomical, of common carotid artery tonometry.

  9. Picking vs Waveform based detection and location methods for induced seismicity monitoring

    Science.gov (United States)

    Grigoli, Francesco; Boese, Maren; Scarabello, Luca; Diehl, Tobias; Weber, Bernd; Wiemer, Stefan; Clinton, John F.

    2017-04-01

    Microseismic monitoring is a common operation in various industrial activities related to geo-resources, such as oil and gas production, mining operations and geothermal energy exploitation. In microseismic monitoring we generally deal with large datasets from dense monitoring networks that require robust automated analysis procedures. The seismic sequences being monitored are often characterized by many events with short inter-event times, which can even produce overlapping seismic signatures. In these situations, traditional approaches that identify seismic events based on detections, phase identification and event association can fail, leading to missed detections and/or reduced location resolution. In recent years, various waveform-based methods for the detection and location of microseismicity have been proposed to improve the quality of automated catalogues. These methods exploit the coherence of the waveforms recorded at different stations and do not require any automated picking procedure. Although this family of methods has been applied to different induced seismicity datasets, an extensive comparison with sophisticated pick-based detection and location methods is still lacking. We aim here to perform a systematic comparison, in terms of performance, between the waveform-based method LOKI and the pick-based detection and location methods SCAUTOLOC and SCANLOC implemented within the SeisComP3 software package. SCANLOC is a new detection and location method specifically designed for seismic monitoring at the local scale; although recent applications have proved promising, an extensive test with induced seismicity datasets has not yet been performed. It is based on a cluster search algorithm that associates detections with one or many potential earthquake sources. SCAUTOLOC, on the other hand, is a more conventional method and is the basic tool for seismic event detection and location in SeisComP3. This approach was specifically designed for
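
As a minimal example of the pick-based side of this comparison, here is the classic STA/LTA energy-ratio trigger that underlies most automated detectors. This is a generic textbook version; the SCAUTOLOC/SCANLOC modules in SeisComP3 are far more elaborate.

```python
import numpy as np

def sta_lta(signal, nsta, nlta):
    """Short-term over long-term average energy ratio; a detector
    declares a trigger where the ratio exceeds a threshold."""
    sq = np.asarray(signal, float) ** 2
    csum = np.concatenate(([0.0], np.cumsum(sq)))
    sta = (csum[nsta:] - csum[:-nsta]) / nsta   # window ending at each sample
    lta = (csum[nlta:] - csum[:-nlta]) / nlta
    k = len(lta)
    return sta[-k:] / np.maximum(lta, 1e-12)    # aligned on window end
```

Ratio index j corresponds to the window ending at sample j + nlta - 1, so the trigger onset can be mapped back to an absolute sample for phase association.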

  10. A simple method for medial canthal reconstruction

    NARCIS (Netherlands)

    Wittkampf, ARM; Mourits, MP

    2001-01-01

    A simple method for medial canthal reconstruction with wiring, with the help of a homolaterally fixed osteosynthesis plate and a metal wire, is presented. This avoids transnasal wiring and gives superior control when correcting the position of the lacerated medial canthus.

  11. The New Method of Tsunami Source Reconstruction With r-Solution Inversion Method

    Science.gov (United States)

    Voronina, T. A.; Romanenko, A. A.

    2016-12-01

    Application of the r-solution method to reconstructing the initial tsunami waveform is discussed. This methodology is based on the inversion of remote measurements of water-level data. Wave propagation is considered within the scope of a linear shallow-water theory. The ill-posed inverse problem in question is regularized by means of a least-squares inversion using the truncated Singular Value Decomposition method; the result of the numerical process is an r-solution. The method proposed allows one to control the instability of a numerical solution and to obtain an acceptable result in spite of the ill-posedness of the problem. Applying this methodology to reconstructing the initial waveform of the 2013 Solomon Islands tsunami validates the theoretical conclusions drawn from synthetic data and a model tsunami source: the inversion result strongly depends on data noisiness and on the azimuthal and temporal coverage of recording stations with respect to the source area. Furthermore, it is possible to make a preliminary selection of the most informative set of the available recording stations used in the inversion process.
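
The r-solution at the core of the method is simply a least-squares solution computed with a truncated SVD; a minimal sketch:

```python
import numpy as np

def r_solution(A, b, r):
    """Least-squares solution of A x = b keeping only the r largest
    singular values; truncation regularizes the ill-posed inversion."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return Vt[:r].T @ ((U[:, :r].T @ b) / s[:r])
```

Choosing r trades resolution of the recovered source against noise amplification by the small singular values, which is exactly the instability control the abstract describes.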

  12. Time-parallel iterative methods for parabolic PDES: Multigrid waveform relaxation and time-parallel multigrid

    Energy Technology Data Exchange (ETDEWEB)

    Vandewalle, S. [Caltech, Pasadena, CA (United States)

    1994-12-31

    Time-stepping methods for parabolic partial differential equations are essentially sequential. This prohibits the use of massively parallel computers unless the problem on each time-level is very large. This observation has led to the development of algorithms that operate on more than one time-level simultaneously; that is to say, on grids extending in space and in time. The so-called parabolic multigrid methods solve the time-dependent parabolic PDE as if it were a stationary PDE discretized on a space-time grid. The author has investigated the use of multigrid waveform relaxation, an algorithm developed by Lubich and Ostermann. The algorithm is based on a multigrid acceleration of waveform relaxation, a highly concurrent technique for solving large systems of ordinary differential equations. Another method of this class is the time-parallel multigrid method, developed by Hackbusch and recently the subject of further study by Horton. It extends the elliptic multigrid idea to the set of equations derived by discretizing a parabolic problem in space and in time.
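
The waveform-relaxation idea, iterating over the whole time window while the coupling terms are frozen at the previous iterate, can be sketched for a linear ODE system as follows. This is a Jacobi-style toy version with a trapezoidal integrator, not the multigrid-accelerated scheme itself.

```python
import numpy as np

def jacobi_waveform_relaxation(A, u0, t, n_sweeps=30):
    """Solve u' = A u by Jacobi waveform relaxation: each sweep
    re-integrates every component over the whole time window, taking
    the coupling (off-diagonal) terms from the previous sweep."""
    A = np.asarray(A, float)
    u0 = np.asarray(u0, float)
    m = len(t)
    dt = t[1] - t[0]
    d = np.diag(A)
    off = A - np.diag(d)
    U = np.tile(u0[:, None], (1, m))   # initial guess: constant in time
    for _ in range(n_sweeps):
        F = off @ U                    # frozen coupling from the last sweep
        V = np.empty_like(U)
        V[:, 0] = u0
        for k in range(m - 1):
            # trapezoidal rule for v' = d*v + f, implicit in the diagonal
            rhs = V[:, k] + 0.5 * dt * (d * V[:, k] + F[:, k] + F[:, k + 1])
            V[:, k + 1] = rhs / (1.0 - 0.5 * dt * d)
        U = V
    return U
```

Because each sweep integrates every component over the entire window independently, the sweeps expose the concurrency across components and time that the time-parallel methods exploit.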

  13. A New Process Monitoring Method Based on Waveform Signal by Using Recurrence Plot

    Directory of Open Access Journals (Sweden)

    Cheng Zhou

    2015-09-01

    Full Text Available Process monitoring is an important research problem in numerous areas. This paper proposes a novel process monitoring scheme that integrates the recurrence plot (RP) method with the control chart technique. Recently, the RP method has emerged as an effective tool for analyzing waveform signals. However, unlike existing RP methods, which employ recurrence quantification analysis (RQA) to summarize the recurrence plot with a few statistics, we propose the new concepts of template recurrence plots and continuous-scale recurrence plots to characterize waveform signals. A new feature extraction method is developed based on the continuous-scale recurrence plot, and a monitoring statistic based on the top-  approach is constructed from it. Finally, a bootstrap control chart is built to detect signal changes based on the constructed monitoring statistics. Comprehensive simulation studies show that the proposed monitoring scheme outperforms other RQA-based control charts. In addition, a real case study of progressive stamping processes is implemented to further evaluate the performance of the proposed scheme for process monitoring.
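
The basic object all of these variants build on is the recurrence plot itself: a thresholded distance matrix of the time-delay-embedded signal. A minimal construction follows; the embedding and threshold parameters are illustrative.

```python
import numpy as np

def recurrence_plot(x, dim=3, tau=1, eps=None):
    """Binary recurrence matrix of a time-delay embedded signal:
    R[i, j] = 1 when embedded states i and j are within eps."""
    n = len(x) - (dim - 1) * tau
    emb = np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])
    D = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
    if eps is None:
        eps = 0.1 * D.max()  # common heuristic threshold
    return (D <= eps).astype(int)
```

For a periodic signal the matrix shows diagonal lines offset by the period, which is the structure that RQA statistics (and the proposed continuous-scale features) quantify.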

  14. Strike-slip faults imaging from galleries with seismic waveform imaging methods

    Science.gov (United States)

    Bretaudeau, F.; Gélis, C.; Leparoux, D.; Cabrera, J.; Côte, P.

    2011-12-01

    Deep argillaceous formations are potential host media for radioactive waste due to their physical properties such as low intrinsic permeability and radionuclide retention (Boisson et al 2001). The experimental station of Tournemire is composed of an old tunnel excavated in 1885 in a 250 m thick Toarcian argillite layer, and of several galleries excavated more recently in directions perpendicular and parallel to the tunnel. This station is operated by the French Institute for Radiological Protection and Nuclear Safety (IRSN) in order to assess possible projects of radioactive waste disposal in geological clay formations. The presence of secondary strike-slip faults in argillaceous formations must be well characterized, since they can change rock properties such as permeability. Faults with small vertical offsets, as observed in the station, cannot be seen from the surface, so we investigate new approaches to image them directly from the underground works. We investigate here the potential of imaging methods that take advantage of the full seismic waveforms in order to optimise imaging performance: Full Waveform Inversion (FWI) and Reverse Time Migration (RTM). We try to assess the capacities and limits of these methods in this specific context, and to determine the optimum acquisition and processing parameters. The subvertical fault in the nearly homogeneous, subhorizontal structure of the clay layer allows us to consider a 2D imaging problem with no anisotropy, where the fault is surrounded by three galleries. The waveform inversion strategy used is based on the frequency-domain formulation proposed by Pratt et al. (1990). Nonlinearity is mitigated by sequentially introducing information from 50 Hz to 1000 Hz and starting from a homogeneous medium as the initial model. Preliminary tests on synthetic data (fig. 1) show the ability of FWI to quantitatively image the fault zone and illustrate the impact of the illumination configuration. RTM succeeds to

  15. Application of asymptotic waveform approximation technique to hybrid FE/BI method for 3D scattering

    Institute of Scientific and Technical Information of China (English)

    PENG Zhen; SHENG XinQing

    2007-01-01

    The asymptotic waveform evaluation (AWE) technique is a rational function approximation method in computational mathematics, which is used in many applications in computational electromagnetics. In this paper, the performance of the AWE technique in conjunction with the hybrid finite element/boundary integral (FE/BI) method is first investigated. The formulation of the AWE applied in the hybrid FE/BI method is given in detail. Characteristic implementation details of applying the AWE to the hybrid FE/BI method are discussed. Numerical results demonstrate that the AWE technique can greatly speed up the hybrid FE/BI method in acquiring the wide-band and wide-angle backscatter radar cross-section (RCS) of complex targets.
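
    The core of AWE is moment matching: Taylor moments of the frequency response are converted into a Pade rational approximant that remains accurate over a wide band. The sketch below illustrates this for a scalar transfer function (the hybrid FE/BI setting matches moments of the full discretized system response instead, which is an assumption of this simplification):

```python
import numpy as np

def pade_from_moments(m, L, M):
    """[L/M] Pade approximant from Taylor moments m[k] (AWE-style matching).

    Returns (a, b): numerator coefficients a[0..L] and denominator
    coefficients b[0..M] with b[0] = 1, ordered by ascending power.
    """
    m = np.asarray(m, float)
    # denominator: solve sum_{j=1..M} b[j] * m[L+k-j] = -m[L+k],  k = 1..M
    T = np.array([[m[L + k - j] for j in range(1, M + 1)] for k in range(1, M + 1)])
    b = np.concatenate(([1.0], np.linalg.solve(T, -m[L + 1:L + M + 1])))
    # numerator by convolving the moments with the denominator
    a = np.array([sum(b[j] * m[k - j] for j in range(min(k, M) + 1))
                  for k in range(L + 1)])
    return a, b

# usage: moments of f(s) = 1/((1+s)(1+2s)) about s = 0; the [2/2] Pade
# approximant reproduces this rational function exactly
m = [-(-1.0) ** k + 2.0 * (-2.0) ** k for k in range(6)]
a, b = pade_from_moments(m, 2, 2)
s = 0.5
approx = np.polyval(a[::-1], s) / np.polyval(b[::-1], s)
```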

  16. Microseismic imaging using a source-independent full-waveform inversion method

    KAUST Repository

    Wang, Hanchen

    2016-09-06

    Using full waveform inversion (FWI) to locate and image microseismic events allows for an automatic process (free of picking) that utilizes the full wavefield. However, waveform inversion of microseismic events faces significant nonlinearity due to the unknown source location (space) and source function (time). We develop a source-independent FWI of microseismic events to invert for the source image, source function and velocity model. It is based on convolving reference traces with the observed and modeled data to mitigate the effect of an unknown source ignition time. The adjoint-state method is used to derive the gradient for the source image, source function and velocity updates. The extended image for the source wavelet along the z axis is extracted to check the accuracy of the inverted source image and velocity model, and angle gathers are calculated to verify the velocity model. By inverting jointly for the source image, source wavelet and velocity model, the proposed method produces good estimates of the source location, ignition time and background velocity for part of the SEG overthrust model.
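
    The convolution trick that removes the unknown source can be verified on synthetic traces. The sketch below is a toy stand-in for the full modeling: 1-D traces are built by convolving Green's functions with wavelets, and the misfit vanishes even when the assumed wavelet is wrong, because convolution commutes:

```python
import numpy as np

def source_free_misfit(d_obs, d_syn, ref_obs, ref_syn):
    """Convolution-based, source-independent misfit (sketch).

    Convolving the observed trace with the synthetic reference trace and
    the synthetic trace with the observed reference puts the product of
    BOTH wavelets on each side, so the unknown source wavelet cancels.
    """
    lhs = np.convolve(d_obs, ref_syn)
    rhs = np.convolve(d_syn, ref_obs)
    return 0.5 * np.sum((lhs - rhs) ** 2)

# usage: same medium (Green's functions), different unknown source wavelets
rng = np.random.default_rng(0)
g_main, g_ref = rng.standard_normal(50), rng.standard_normal(50)  # Green's functions
w_true, w_guess = rng.standard_normal(8), rng.standard_normal(8)  # wavelets
d_obs, ref_obs = np.convolve(g_main, w_true), np.convolve(g_ref, w_true)
d_syn, ref_syn = np.convolve(g_main, w_guess), np.convolve(g_ref, w_guess)
misfit = source_free_misfit(d_obs, d_syn, ref_obs, ref_syn)  # ~0 despite wrong wavelet
```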

  17. Indirect (source-free) integration method. I. Wave-forms from geodesic generic orbits of EMRIs

    Science.gov (United States)

    Ritter, Patxi; Aoudia, Sofiane; Spallicci, Alessandro D. A. M.; Cordier, Stéphane

    2016-12-01

    The Regge-Wheeler-Zerilli (RWZ) wave-equation describes Schwarzschild-Droste black hole perturbations. The source term contains a Dirac distribution and its derivative. We have previously designed a method of integration in the time domain. It consists of a finite difference scheme where analytic expressions, dealing with the wave-function discontinuity through the jump conditions, replace the direct integration of the source and the potential. Herein, we successfully apply the same method to the geodesic generic orbits of EMRI (Extreme Mass Ratio Inspiral) sources, at second order. An EMRI is a Compact Star (CS) captured by a Super-Massive Black Hole (SMBH). These are considered the best probes for testing gravitation in the strong-field regime. The gravitational wave-forms, the radiated energy and angular momentum at infinity are computed and extensively compared with other methods, for different orbits (circular, elliptic, parabolic, including zoom-whirl).

  18. Indirect (source-free) integration method. I. Wave-forms from geodesic generic orbits of EMRIs

    CERN Document Server

    Ritter, P; Spallicci, A; Cordier, S

    2015-01-01

    The Regge-Wheeler-Zerilli (RWZ) wave-equation describes Schwarzschild-Droste black hole perturbations. The source term contains a Dirac distribution and its derivative. We have previously designed a method of integration in the time domain. It consists of a finite difference scheme where analytic expressions, dealing with the wave-function discontinuity through the jump conditions, replace the direct integration of the source and the potential. Herein, we successfully apply the same method to the geodesic generic orbits of EMRI (Extreme Mass Ratio Inspiral) sources, at second order. An EMRI is a Compact Star (CS) captured by a Super Massive Black Hole (SMBH). These are considered the best probes for testing gravitation in the strong-field regime. The gravitational wave-forms, the radiated energy and angular momentum at infinity are computed and extensively compared with other methods, for different orbits (circular, elliptic, parabolic, including zoom-whirl).

  19. A linearly approximated iterative Gaussian decomposition method for waveform LiDAR processing

    Science.gov (United States)

    Mountrakis, Giorgos; Li, Yuguang

    2017-07-01

    Full-waveform LiDAR (FWL) decomposition results often act as the basis for key LiDAR-derived products, for example canopy height, biomass and carbon pool estimation, leaf area index calculation and under-canopy detection. To date, the prevailing method for FWL product creation is Gaussian Decomposition (GD) based on a non-linear Levenberg-Marquardt (LM) optimization for Gaussian node parameter estimation. GD follows a "greedy" approach that may leave weak nodes undetected, merge multiple nodes into one or separate a noisy single node into multiple ones. In this manuscript, we propose an alternative decomposition method called Linearly Approximated Iterative Gaussian Decomposition (LAIGD). The novelty of the LAIGD method is that it follows a multi-step "slow-and-steady" iterative structure, where new Gaussian nodes are quickly discovered and adjusted using a linear fitting technique before they are forwarded for non-linear optimization. Two experiments were conducted, one using real full-waveform data from NASA's Land, Vegetation, and Ice Sensor (LVIS) and another using synthetic data containing different numbers of nodes and degrees of overlap to assess performance at variable signal complexity. LVIS data revealed considerable improvements in RMSE (44.8% lower), RSE (56.3% lower) and rRMSE (74.3% lower) values compared to the benchmark GD method. These results were further confirmed with the synthetic data. Furthermore, the proposed multi-step method cuts execution times in half, an important consideration as there are plans for global coverage with the upcoming Global Ecosystem Dynamics Investigation LiDAR sensor on the International Space Station.
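
    The linear initialization step at the heart of such a scheme can be sketched as follows: near a residual peak, the logarithm of a Gaussian is a parabola, so amplitude, center and width follow from an ordinary least-squares parabola fit. This toy version omits the subsequent non-linear refinement and is not the authors' implementation; the window size is an illustrative choice:

```python
import numpy as np

def iterative_gaussian_decomposition(t, y, n_nodes, half_width=10):
    """Iterative Gaussian decomposition of a waveform (linear-fit sketch).

    Each new node is initialized by a LINEAR fit: around the residual
    peak, log(y) is a parabola in t, so the node parameters follow from
    np.polyfit.  A production method would refine them non-linearly.
    """
    resid = y.astype(float).copy()
    nodes = []
    for _ in range(n_nodes):
        i = int(np.argmax(resid))
        lo, hi = max(i - half_width, 0), min(i + half_width + 1, len(t))
        tt, yy = t[lo:hi], np.maximum(resid[lo:hi], 1e-12)
        c2, c1, c0 = np.polyfit(tt, np.log(yy), 2)   # log-Gaussian parabola
        sigma = np.sqrt(-1.0 / (2.0 * c2))
        mu = c1 * sigma ** 2
        amp = np.exp(c0 + mu ** 2 / (2.0 * sigma ** 2))
        nodes.append((amp, mu, sigma))
        resid = resid - amp * np.exp(-(t - mu) ** 2 / (2.0 * sigma ** 2))
    return nodes

# usage: two overlapping echoes, as in a full-waveform LiDAR return
t = np.linspace(0.0, 100.0, 400)
y = 3.0 * np.exp(-(t - 40.0) ** 2 / (2 * 4.0 ** 2)) \
  + 1.5 * np.exp(-(t - 60.0) ** 2 / (2 * 6.0 ** 2))
nodes = iterative_gaussian_decomposition(t, y, n_nodes=2)
```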

  20. Accurate Methods for Signal Processing of Distorted Waveforms in Power Systems

    Directory of Open Access Journals (Sweden)

    A. Testa

    2007-01-01

    Full Text Available A primary problem in waveform distortion assessment in power systems is to examine ways to reduce the effects of spectral leakage. In the framework of DFT approaches, line frequency synchronization techniques or algorithms to compensate for desynchronization are necessary; alternative approaches such as those based on the Prony and ESPRIT methods are not sensitive to desynchronization, but they often require significant computational burden. In this paper, the signal processing aspects of the problem are considered; different proposals by the same authors regarding DFT-, Prony-, and ESPRIT-based advanced methods are reviewed and compared in terms of their accuracy and computational efforts. The results of several numerical experiments are reported and analysed; some of them are in accordance with IEC Standards, while others use more open scenarios.
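
    The leakage problem that motivates line-frequency synchronization can be reproduced in a few lines; the sampling rate, window length and drift value below are illustrative choices, not taken from the paper:

```python
import numpy as np

def spectrum(x):
    """Single-sided amplitude spectrum of a real record via the DFT."""
    return 2.0 * np.abs(np.fft.rfft(x)) / len(x)

fs = 3200.0                         # sampling rate, Hz
n = 320                             # window of exactly 5 cycles at 50 Hz
t = np.arange(n) / fs
x_sync = np.sin(2 * np.pi * 50.0 * t)     # synchronized: 50 Hz sits on bin 5
x_desync = np.sin(2 * np.pi * 50.5 * t)   # drifted line frequency: leakage

# largest spectral amplitude OUTSIDE the nominal 50 Hz bin (bin 5)
leak_sync = np.delete(spectrum(x_sync), 5).max()      # essentially zero
leak_desync = np.delete(spectrum(x_desync), 5).max()  # several percent
```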

  1. Frequency-Domain Method for Determining HEMP Standard Waveform Parameters%确定高空电磁脉冲标准波形参数的频域方法

    Institute of Scientific and Technical Information of China (English)

    程引会; 马良; 李进玺; 吴伟; 赵墨; 郭景海

    2014-01-01

    The equation defining the HEMP standard waveform was determined by fitting and analyzing different HEMP waveforms obtained from numerical calculation. The frequency spectrum of the standard waveform was obtained by enveloping all reasonable physical waveform spectra. Using the Hilbert transform, the standard time-domain waveform parameters were obtained with a signal reconstruction method based on the amplitude spectrum. Some limited physical data are presented as examples to illustrate the method in detail.
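
    In discrete form, recovering a phase from an amplitude spectrum via the Hilbert transform corresponds to the classical minimum-phase (real-cepstrum) reconstruction. The sketch below makes that assumption explicit and is not the authors' code; the test sequence (1, 0.5) is an illustrative minimum-phase example:

```python
import numpy as np

def minimum_phase_from_amplitude(amp):
    """Reconstruct a time-domain waveform from an amplitude spectrum alone.

    The phase is taken as the Hilbert transform of log|H| (the
    minimum-phase assumption), implemented via the real cepstrum:
    fold the cepstrum onto positive quefrencies, then exponentiate.
    """
    log_amp = np.log(np.maximum(amp, 1e-12))
    cep = np.fft.ifft(log_amp).real            # real cepstrum
    n = len(cep)
    fold = np.zeros(n)
    fold[0] = cep[0]
    fold[1:n // 2] = 2.0 * cep[1:n // 2]       # causal folding
    fold[n // 2] = cep[n // 2]
    return np.fft.ifft(np.exp(np.fft.fft(fold))).real

# usage: recover a known minimum-phase sequence from its amplitude spectrum
h_true = np.zeros(64)
h_true[0], h_true[1] = 1.0, 0.5                # zero at z = -0.5: minimum phase
amp = np.abs(np.fft.fft(h_true))
h_rec = minimum_phase_from_amplitude(amp)
```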

  2. Compressive full-waveform LIDAR with low-cost sensor

    Science.gov (United States)

    Yang, Weiyi; Ke, Jun

    2016-10-01

    Full-waveform LiDAR is a method that digitizes the complete waveform of backscattered pulses to obtain range information of multiple targets. To avoid the expensive sensors of conventional full-waveform LiDAR systems, a new system based on the compressive sensing method is presented in this paper. A non-coherent continuous-wave laser is modulated by an electro-optical modulator with pseudo-random sequences. A low-bandwidth detector and a low-bandwidth analog-to-digital converter are used to acquire the returned signal. The OMP algorithm is employed to reconstruct the high-resolution range information.
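
    A toy version of the reconstruction step: with a pseudo-random measurement matrix standing in for the PN-modulated laser and low-bandwidth receiver, OMP recovers a sparse range profile from few samples. The sizes, seed and amplitudes below are illustrative, not system parameters from the paper:

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal Matching Pursuit: recover a k-sparse x from y = A x.

    Greedily picks the column most correlated with the residual, then
    re-solves a least-squares problem on the selected support.
    """
    support = []
    x = np.zeros(A.shape[1])
    resid = y.astype(float).copy()
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ resid))))
        sol, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        resid = y - A[:, support] @ sol
    x[support] = sol
    return x

# usage: a 2-sparse range profile (two targets) sensed by random projections
rng = np.random.default_rng(1)
A = rng.standard_normal((60, 128)) / np.sqrt(60)   # pseudo-random sensing matrix
x_true = np.zeros(128)
x_true[17], x_true[90] = 1.0, 0.7                  # two target ranges
x_hat = omp(A, A @ x_true, k=2)
```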

  3. Method for detection and reconstruction of gravitational wave transients with networks of advanced detectors

    Science.gov (United States)

    Klimenko, S.; Vedovato, G.; Drago, M.; Salemi, F.; Tiwari, V.; Prodi, G. A.; Lazzaro, C.; Ackley, K.; Tiwari, S.; Da Silva, C. F.; Mitselmakher, G.

    2016-02-01

    We present a method for detection and reconstruction of gravitational wave (GW) transients with networks of advanced detectors. Originally designed to search for transients with the initial GW detectors, it uses significantly improved algorithms, which enhance both the low-latency searches with rapid localization of GW events for the electromagnetic follow-up and the high-confidence detection of a broad range of transient GW sources. In this paper, we present the analytic framework of the method. Following a short description of the core analysis algorithms, we introduce a novel approach to the reconstruction of the GW polarization from a pattern of detector responses to a GW signal. This polarization pattern is a unique signature of an arbitrary GW signal that can be measured independently from the other source parameters. The polarization measurements enable rapid reconstruction of the GW waveforms and sky localization, and help identify the source origin.

  4. Method for detection and reconstruction of gravitational wave transients with networks of advanced detectors

    CERN Document Server

    Klimenko, S; Drago, M; Salemi, F; Tiwari, V; Prodi, G A; Lazzaro, C; Tiwari, S; Da Silva, F; Mitselmakher, G

    2015-01-01

    We present a method for detection and reconstruction of gravitational wave (GW) transients with networks of advanced detectors. Originally designed to search for transients with the initial GW detectors, it uses significantly improved algorithms, which enable both the low-latency searches with rapid localization of GW events for the electromagnetic follow-up and the high-confidence detection of a broad range of transient GW sources. In the paper we present the analytic framework of the method. Following a short description of the core analysis algorithms, we introduce a novel approach to the reconstruction of the GW polarization from a pattern of detector responses to a GW signal. This polarization pattern is a unique signature of an arbitrary GW signal that can be measured independently of the other source parameters. The polarization measurements enable rapid reconstruction of the GW waveforms and sky localization, and help identify the source origin.

  5. Magnetic flux reconstruction methods for shaped tokamaks

    Energy Technology Data Exchange (ETDEWEB)

    Tsui, Chi-Wa

    1993-12-01

    The use of a variational method permits the Grad-Shafranov (GS) equation to be solved by reducing the problem of solving the 2D non-linear partial differential equation to the problem of minimizing a function of several variables. This high-speed algorithm approximately solves the GS equation given a parameterization of the plasma boundary and the current profile (p′ and FF′ functions). The author treats the current profile parameters as unknowns. The goal is to reconstruct the internal magnetic flux surfaces of a tokamak plasma and the toroidal current density profile from the external magnetic measurements. This is a classic problem of inverse equilibrium determination. The current profile parameters can be evaluated by several different matching procedures. Matching of magnetic flux and field at the probe locations using the Biot-Savart law and magnetic Green's function provides a robust method of magnetic reconstruction. The matching of poloidal magnetic field on the plasma surface provides a unique method of identifying the plasma current profile. However, the power of this method is greatly compromised by the experimental errors of the magnetic signals. The Casing Principle provides a very fast way to evaluate the plasma contribution to the magnetic signals. It has the potential of being a fast matching method. The performance of this method is hindered by the accuracy of the poloidal magnetic field computed from the equilibrium solver. A flux reconstruction package has been implemented which integrates a vacuum field solver using a filament model for the plasma, a multi-layer perceptron neural network as an interface, and the volume integration of plasma current density using Green's functions as a matching method for the current profile parameters. The flux reconstruction package is applied to compare with the ASEQ and EFIT data. The results are promising.
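
    The filament-model matching step can be sketched with a deliberately crude 2-D Green's function (infinite straight filaments, B = mu0 I / (2 pi d)); a real tokamak code would use circular-filament Green's functions involving elliptic integrals. All geometry and current values below are invented for illustration:

```python
import numpy as np

def greens_matrix(probes, filaments):
    """Magnetic Green's matrix for infinite straight filaments (2-D sketch).

    Entry (i, j) is the field magnitude at probe i per unit current in
    filament j, B = mu0 * I / (2 pi d).  This is a toy stand-in for the
    circular-filament Green's functions used in flux reconstruction.
    """
    mu0 = 4e-7 * np.pi
    P, F = np.asarray(probes, float), np.asarray(filaments, float)
    d = np.linalg.norm(P[:, None, :] - F[None, :, :], axis=2)
    return mu0 / (2.0 * np.pi * d)

# usage: recover filament currents (a crude current-profile model) from
# synthetic probe signals by least-squares matching
probes = [(2.0, z) for z in np.linspace(-1.0, 1.0, 8)]   # probe positions (m)
filaments = [(0.0, -0.2), (0.0, 0.2)]                    # plasma filament model
G = greens_matrix(probes, filaments)
i_true = np.array([8.0e5, 6.0e5])                        # filament currents (A)
b_meas = G @ i_true                                      # synthetic probe data
i_fit, *_ = np.linalg.lstsq(G, b_meas, rcond=None)       # matching step
```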

  6. A method for improving the computational efficiency of a Laplace-Fourier domain waveform inversion based on depth estimation

    Science.gov (United States)

    Zhang, Dong; Zhang, Xiaolei; Yuan, Jianzheng; Ke, Rui; Yang, Yan; Hu, Ying

    2016-01-01

    The Laplace-Fourier domain full waveform inversion can simultaneously restore both the long- and intermediate-wavelength information of velocity models because of its unique characteristics of complex frequencies. This approach solves the problem of conventional frequency-domain waveform inversion, in which the inversion result is excessively dependent on the initial model due to the lack of low-frequency information in seismic data. Nevertheless, the Laplace-Fourier domain waveform inversion requires substantial computational resources and long computation time because the inversion must be implemented on different combinations of multiple damping constants and multiple frequencies, namely, the complex frequencies, which are much more numerous than the Fourier frequencies. If the entire target model is computed at every complex frequency for the Laplace-Fourier domain inversion (as in conventional frequency-domain inversion), excessively redundant computation will occur. In the Laplace-Fourier domain waveform inversion, the maximum depth penetrated by the seismic wave decreases greatly due to the application of exponential damping to the seismic record, especially with larger damping constants. Thus, the depth of the area effectively inverted at a complex frequency tends to be much less than the model depth. In this paper, we propose a method for quantitative estimation of the effective inversion depth in the Laplace-Fourier domain inversion based on the principle of seismic wave propagation and mathematical analysis. According to the estimated effective inversion depth, we can invert and update only the model area above the effective depth for every complex frequency without loss of accuracy in the final inversion result. Thus, redundant computation is eliminated, and the efficiency of the Laplace-Fourier domain waveform inversion can be improved. The proposed method was tested in numerical experiments.
The experimental results show that

  7. Breast ultrasound computed tomography using waveform inversion with source encoding

    Science.gov (United States)

    Wang, Kun; Matthews, Thomas; Anis, Fatima; Li, Cuiping; Duric, Neb; Anastasio, Mark A.

    2015-03-01

    Ultrasound computed tomography (USCT) holds great promise for improving the detection and management of breast cancer. Because they are based on the acoustic wave equation, waveform inversion-based reconstruction methods can produce images that possess improved spatial resolution properties over those produced by ray-based methods. However, waveform inversion methods are computationally demanding and have not been applied widely in USCT breast imaging. In this work, source encoding concepts are employed to develop an accelerated USCT reconstruction method that circumvents the large computational burden of conventional waveform inversion methods. This method, referred to as the waveform inversion with source encoding (WISE) method, encodes the measurement data using a random encoding vector and determines an estimate of the speed-of-sound distribution by solving a stochastic optimization problem by use of a stochastic gradient descent algorithm. Computer-simulation studies are conducted to demonstrate the use of the WISE method. Using a single graphics processing unit card, each iteration can be completed within 25 seconds for a 128 × 128 mm² reconstruction region. The results suggest that the WISE method maintains the high spatial resolution of waveform inversion methods while significantly reducing the computational burden.
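
    The statistical identity underlying source encoding is easy to check numerically: with random +/-1 encoding weights, the expected encoded misfit equals the sum of the per-source misfits, because the cross terms average to zero. A sketch with illustrative sizes, not the WISE implementation:

```python
import numpy as np

def encoded_residual(residuals, rng):
    """One realization of a source-encoded residual: a random +/-1 weighted
    superposition of the per-source residual traces."""
    w = rng.choice([-1.0, 1.0], size=len(residuals))
    return sum(wi * ri for wi, ri in zip(w, residuals))

# usage: the expected encoded misfit matches the full (all-sources) misfit,
# so a stochastic gradient on encoded data is unbiased
rng = np.random.default_rng(7)
residuals = [rng.standard_normal(100) for _ in range(16)]    # 16 sources
full_misfit = sum(np.sum(r ** 2) for r in residuals)
est = np.mean([np.sum(encoded_residual(residuals, rng) ** 2)
               for _ in range(4000)])                        # Monte Carlo average
```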

  8. Micro-seismic Imaging Using a Source Independent Waveform Inversion Method

    KAUST Repository

    Wang, Hanchen

    2016-04-18

    Micro-seismology is attracting more and more attention in the exploration seismology community. The main goal in micro-seismic imaging is to find the source location and the ignition time in order to track the fracture expansion, which will help engineers monitor the reservoirs. Conventional imaging methods work fine in this field but there are many limitations such as manual picking, incorrect migration velocity and low signal-to-noise ratio (S/N). In traditional surface survey imaging, full waveform inversion (FWI) is widely used. The FWI method updates the velocity model by minimizing the misfit between the observed data and the predicted data. Using FWI to locate and image microseismic events allows for an automatic process (free of picking) that utilizes the full wavefield, and overcomes the difficulties of manual picking and of an incorrect migration velocity model. However, waveform inversion of micro-seismic events faces its own problems: there is significant nonlinearity due to the unknown source location (space) and function (time). We have developed a source-independent FWI of micro-seismic events to simultaneously invert for the source image, source function and velocity model. It is based on convolving reference traces with the observed and modeled data to mitigate the effect of an unknown source ignition time. The adjoint-state method is used to derive the gradient for the source image, source function and velocity updates. To examine the accuracy of the inverted source image and velocity model, the extended image for the source wavelet along the z-axis is extracted, and angle gathers are calculated to check the applicability of the migration velocity. By inverting for the source image, source wavelet and velocity model simultaneously, the proposed method produces good estimates of the source location, ignition time and background velocity in synthetic experiments with both parts of the Marmousi and the SEG

  9. A Design Method of Code Correlation Reference Waveform in GNSS Based on Least-Squares Fitting.

    Science.gov (United States)

    Xu, Chengtao; Liu, Zhe; Tang, Xiaomei; Wang, Feixue

    2016-07-29

    The multipath effect is one of the main error sources in Global Navigation Satellite Systems (GNSSs). The code correlation reference waveform (CCRW) technique is an effective multipath mitigation algorithm for the binary phase shift keying (BPSK) signal. However, it encounters the false lock problem in code tracking when applied to binary offset carrier (BOC) signals. A least-squares approximation method for the CCRW design is proposed, utilizing the truncated singular value decomposition method. The algorithm was performed for the BPSK, BOC(1,1), BOC(2,1), BOC(6,1) and BOC(7,1) signals, and the approximation results of the CCRWs are presented. Furthermore, the performance of the approximation results is analyzed in terms of the multipath error envelope and the tracking jitter. The results show that the proposed method can realize coherent and non-coherent CCRW discriminators without false lock points. Generally, there is some degradation in tracking jitter compared to the CCRW discriminator; however, the performance improvements in the multipath error envelope for the BOC(1,1) and BPSK signals make the discriminator attractive, and it can be applied to high-order BOC signals.
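
    The truncated-SVD least-squares step used to design the reference waveform can be sketched generically; the polynomial basis and the S-curve target below are placeholders for illustration, not the actual CCRW discriminator constraints:

```python
import numpy as np

def tsvd_lstsq(A, b, k):
    """Least-squares fit via truncated singular value decomposition.

    Keeping only the k largest singular values regularizes the design:
    small singular directions, which amplify noise and ill-conditioning,
    are discarded.
    """
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return Vt[:k].T @ ((U[:, :k].T @ b) / s[:k])

# usage: fit an ill-conditioned basis to an idealized discriminator target
t = np.linspace(-1.0, 1.0, 200)
A = np.vander(t, 12)                 # ill-conditioned polynomial basis
target = np.sign(t)                  # idealized S-curve target shape
x = tsvd_lstsq(A, target, k=8)
fit = A @ x
```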

  10. A Design Method of Code Correlation Reference Waveform in GNSS Based on Least-Squares Fitting

    Science.gov (United States)

    Xu, Chengtao; Liu, Zhe; Tang, Xiaomei; Wang, Feixue

    2016-01-01

    The multipath effect is one of the main error sources in Global Navigation Satellite Systems (GNSSs). The code correlation reference waveform (CCRW) technique is an effective multipath mitigation algorithm for the binary phase shift keying (BPSK) signal. However, it encounters the false lock problem in code tracking when applied to binary offset carrier (BOC) signals. A least-squares approximation method for the CCRW design is proposed, utilizing the truncated singular value decomposition method. The algorithm was performed for the BPSK, BOC(1,1), BOC(2,1), BOC(6,1) and BOC(7,1) signals, and the approximation results of the CCRWs are presented. Furthermore, the performance of the approximation results is analyzed in terms of the multipath error envelope and the tracking jitter. The results show that the proposed method can realize coherent and non-coherent CCRW discriminators without false lock points. Generally, there is some degradation in tracking jitter compared to the CCRW discriminator; however, the performance improvements in the multipath error envelope for the BOC(1,1) and BPSK signals make the discriminator attractive, and it can be applied to high-order BOC signals. PMID:27483275

  11. Adaptive multi-step Full Waveform Inversion based on Waveform Mode Decomposition

    Science.gov (United States)

    Hu, Yong; Han, Liguo; Xu, Zhuo; Zhang, Fengjiao; Zeng, Jingwen

    2017-04-01

    Full Waveform Inversion (FWI) can be used to build high-resolution velocity models, but there are still many challenges in processing seismic field data. The most difficult problem is how to recover the long-wavelength components of subsurface velocity models when the seismic data lack low-frequency information and long offsets. To solve this problem, we propose to use the Waveform Mode Decomposition (WMD) method to reconstruct low-frequency information for FWI and obtain a smooth model, so that the initial-model dependence of FWI can be reduced. In this paper, we use the adjoint-state method to calculate the gradient for Waveform Mode Decomposition Full Waveform Inversion (WMDFWI). Through illustrative numerical examples, we show that the low-frequency information reconstructed by the WMD method is reliable. WMDFWI, in combination with the adaptive multi-step inversion strategy, can obtain more faithful and accurate final inversion results. Numerical examples show that even if the initial velocity model is far from the true model and lacks low-frequency information, we can still obtain good inversion results with the WMD method. Anti-noise tests show that the adaptive multi-step inversion strategy for WMDFWI has a strong ability to resist Gaussian noise. The WMD method is promising for land seismic FWI, because it can reconstruct the low-frequency information, lower the dominant frequency in the adjoint source, and has a strong ability to resist noise.

  12. Seafloor classification using echo- waveforms: A method employing hybrid neural network architecture

    Digital Repository Service at National Institute of Oceanography (India)

    Chakraborty, B.; Mahale, V.; DeSouza, C.; Das, P.

    This letter presents seafloor classification study results of a hybrid artificial neural network architecture known as learning vector quantization. Single beam echo-sounding backscatter waveform data from three different seafloors of the western...

  13. Fast full waveform inversion with source encoding and second-order optimization methods

    Science.gov (United States)

    Castellanos, Clara; Métivier, Ludovic; Operto, Stéphane; Brossier, Romain; Virieux, Jean

    2015-02-01

    Full waveform inversion (FWI) of 3-D data sets has recently been possible thanks to the development of high performance computing. However, FWI remains a computationally intensive task when high frequencies are injected in the inversion or more complex wave physics (viscoelastic) is accounted for. The highest computational cost results from the numerical solution of the wave equation for each seismic source. To reduce the computational burden, one well-known technique is to employ a random linear combination of the sources, rather than using each source independently. This technique, known as source encoding, has been shown to successfully reduce the computational cost when applied to real data. Up to now, the inversion is normally carried out using gradient descent algorithms. With the idea of achieving a fast and robust frequency-domain FWI, we assess the performance of the random source encoding method when it is interfaced with second-order optimization methods (quasi-Newton l-BFGS, truncated Newton). Because of the additional seismic modelings required to compute the Newton descent direction, it is not clear beforehand if truncated Newton methods can indeed further reduce the computational cost compared to gradient algorithms. We design precise stopping criteria of iterations to fairly assess the computational cost and the speed-up provided by the source encoding method for each optimization method. We perform experiments on synthetic and real data sets. In both cases, we confirm that combining source encoding with second-order optimization methods reduces the computational cost compared to the case where source encoding is interfaced with gradient descent algorithms. For the synthetic data set, inspired from the geology of Gulf of Mexico, we show that the quasi-Newton l-BFGS algorithm requires the lowest computational cost.
For the real data set application on the Valhall data, we show that the truncated Newton methods provide the most robust direction of descent.

  14. Wave reflection quantification based on pressure waveforms alone--methods, comparison, and clinical covariates.

    Science.gov (United States)

    Hametner, Bernhard; Wassertheurer, Siegfried; Kropf, Johannes; Mayer, Christopher; Holzinger, Andreas; Eber, Bernd; Weber, Thomas

    2013-03-01

    Within the last decade the quantification of pulse wave reflections has mainly focused on measures of central aortic systolic pressure and its augmentation through reflections, based on pulse wave analysis (PWA). A complementary approach is wave separation analysis (WSA), which quantifies the total amount of arterial wave reflection considering both aortic pulse and flow waves. The aim of this work is the introduction and comparison of aortic blood flow models for WSA assessment. To evaluate the performance of the proposed modeling approaches (Windkessel, triangular and averaged flow), comparisons against Doppler measurements are made for 148 patients with preserved ejection fraction. Stepwise regression analyses between WSA and PWA parameters are performed to identify determinants of methodological differences. Against Doppler measurements, the mean difference and standard deviation of the amplitudes of the decomposed forward and backward pressure waves are comparable for the Windkessel and averaged flow models. Stepwise regression analysis shows similar determinants between the Doppler and Windkessel models only. The results indicate that the Windkessel method provides accurate estimates of wave reflection in subjects with preserved ejection fraction. The comparison with waveforms derived from Doppler ultrasound, as well as the recently proposed simple triangular and averaged flow waves, shows that this approach may reduce variability and provide realistic results.
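
    Once a flow wave and a characteristic impedance Zc are available (from measurement or from one of the flow models above), the separation itself is a two-line linear operation: Pf = (P + Zc*Q)/2 and Pb = (P - Zc*Q)/2. A synthetic check where the true split is known by construction (all waveform shapes and the Zc value are illustrative):

```python
import numpy as np

def wave_separation(p, q, zc):
    """Linear wave separation analysis (WSA): split a pressure wave into
    forward and backward components using the flow wave and the
    characteristic impedance zc."""
    pf = (p + zc * q) / 2.0
    pb = (p - zc * q) / 2.0
    return pf, pb

# usage: build a pressure/flow pair from known forward and reflected waves
t = np.linspace(0.0, 1.0, 500)
zc = 0.1                                            # assumed impedance
pf_true = np.exp(-((t - 0.25) / 0.08) ** 2)         # forward (ejection) wave
pb_true = 0.4 * np.exp(-((t - 0.45) / 0.10) ** 2)   # reflected wave
p = pf_true + pb_true                               # "measured" pressure
q = (pf_true - pb_true) / zc                        # corresponding flow
pf, pb = wave_separation(p, q, zc)
reflection_magnitude = pb.max() / pf.max()          # ~0.4 by construction
```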

  15. Inverse polynomial reconstruction method in DCT domain

    Science.gov (United States)

    Dadkhahi, Hamid; Gotchev, Atanas; Egiazarian, Karen

    2012-12-01

    The discrete cosine transform (DCT) offers superior energy compaction properties for a large class of functions and has been employed as a standard tool in many signal and image processing applications. However, it suffers from spurious behavior in the vicinity of edge discontinuities in piecewise smooth signals. To leverage the sparse representation provided by the DCT, in this article, we derive a framework for the inverse polynomial reconstruction in the DCT expansion. It yields the expansion of a piecewise smooth signal in terms of polynomial coefficients, obtained from the DCT representation of the same signal. Taking advantage of this framework, we show that it is feasible to recover piecewise smooth signals from a relatively small number of DCT coefficients with high accuracy. Furthermore, automatic methods based on minimum description length principle and cross-validation are devised to select the polynomial orders, as a requirement of the inverse polynomial reconstruction method in practical applications. The developed framework can considerably enhance the performance of the DCT in sparse representation of piecewise smooth signals. Numerical results show that denoising and image approximation algorithms based on the proposed framework indicate significant improvements over wavelet counterparts for this class of signals.
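
    The linear core of an inverse polynomial reconstruction can be sketched as follows: applying the DCT to a polynomial basis yields a small linear system relating DCT coefficients to polynomial coefficients. This toy version ignores the edge-discontinuity machinery and the order-selection methods of the paper; sizes and the test cubic are illustrative:

```python
import numpy as np

def dct2(x):
    """Orthonormal DCT-II via an explicit matrix (fine for small n)."""
    n = len(x)
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C *= np.sqrt(2.0 / n)
    C[0] /= np.sqrt(2.0)
    return C @ x

def poly_from_dct(coeffs_dct, n, degree, m):
    """Recover polynomial coefficients from the first m DCT coefficients.

    Solves the linear system DCT(V a) = c restricted to m rows, where V
    is the monomial Vandermonde basis: the core linear step of an
    inverse polynomial reconstruction.
    """
    t = np.linspace(-1.0, 1.0, n)
    V = np.vander(t, degree + 1, increasing=True)
    D = np.column_stack([dct2(V[:, j]) for j in range(degree + 1)])
    a, *_ = np.linalg.lstsq(D[:m], coeffs_dct[:m], rcond=None)
    return a

# usage: a cubic is recovered exactly from just 8 DCT coefficients
n = 64
t = np.linspace(-1.0, 1.0, n)
signal = 1.0 - 2.0 * t + 0.5 * t ** 3
a = poly_from_dct(dct2(signal), n, degree=3, m=8)
```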

  16. Frequency sweep of the field scattered by an inhomogeneous structure using method of moments and asymptotic waveform evaluation

    DEFF Research Database (Denmark)

    Troelsen, Jens; Meincke, Peter; Breinbjerg, Olav

    2000-01-01

    In many radar applications it is necessary to determine the scattering from an object over a wide frequency band. The asymptotic waveform evaluation (AWE), which is a moment matching (MM) technique, constitutes a method to this end. In general, MM techniques provide a reduced-order model of a function ... into account. To the knowledge of the authors the AWE technique has not previously been applied to a MoM solution based on this kind of integral equation. It is the purpose of this paper to investigate the use of the AWE technique as a tool to obtain a fast frequency sweep of the field scattered ...

  17. Spatial methods for event reconstruction in CLEAN

    CERN Document Server

    Coakley, Kevin J.; McKinsey, Daniel N.

    2004-01-01

    In CLEAN (Cryogenic Low Energy Astrophysics with Noble gases), a proposed neutrino and dark matter detector, background discrimination is possible if one can determine the location of an ionizing radiation event with high accuracy. We simulate ionizing radiation events that produce multiple scintillation photons within a spherical detection volume filled with liquid neon. We estimate the radial location of a particular ionizing radiation event based on the observed count data corresponding to that event. The count data are collected by detectors mounted at the spherical boundary of the detection volume. We neglect absorption, but account for Rayleigh scattering. To account for wavelength-shifting of the scintillation light, we assume that photons are absorbed and re-emitted at the detectors. Here, we develop spatial Maximum Likelihood methods for event reconstruction, and study their performance in computer simulation experiments. We also study a method based on the centroid of the observed count data. We cal...
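    As a hedged illustration of the centroid idea mentioned at the end of the abstract, the following 2-D toy (straight-line photon propagation only, ignoring the Rayleigh scattering and wavelength shifting that the paper models, with invented geometry) shows that the centroid of the boundary hit positions points toward the event location:

```python
import numpy as np

rng = np.random.default_rng(0)

R = 1.0                       # detector radius (2-D toy version of CLEAN's sphere)
event = np.array([0.4, 0.0])  # true event position
n_photons = 20000

# Emit photons isotropically from the event and find where each ray
# crosses the circular detector boundary (straight-line propagation,
# no scattering or absorption in this toy model).
phi = rng.uniform(0, 2 * np.pi, n_photons)
d = np.stack([np.cos(phi), np.sin(phi)], axis=1)   # unit directions
# Solve |event + t d|^2 = R^2 for the positive root t.
b = np.sum(event * d, axis=1)
t = -b + np.sqrt(b**2 + (R**2 - event @ event))
hits = event + t[:, None] * d                      # hit positions on the boundary

# Centroid estimator: the mean of the hit positions points toward the event.
centroid = hits.mean(axis=0)
print("centroid estimate:", centroid)
```

In this geometry the centroid lies on the line from the detector centre to the event, at roughly half the event's radial offset, so it must be rescaled (or fed to a likelihood fit, as in the paper) to recover the radial position.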

  18. Searching non-impulsive earthquakes using a full-waveform, cross-correlation detection method.

    Science.gov (United States)

    Solano, Ericka Alinne; Hjorleifsdottir, Vala

    2016-04-01

    Some seismic events, which have low P-wave amplitude, pass undetected by regional or global networks. A subset of these events occur due to fast mass movement as in the case of rapid glacial movements (Ekström, et al., 2003; Ekström, et al., 2006) or landslides (Ekstrom and Stark, 2013). Some other events depleted in high frequencies are related to volcanic activity (e.g. Schuler and Ekstrom, 2009) or to non-volcanic tremors (Obara, 2002). Furthermore, non-impulsive earthquakes have been located on oceanic transform faults (OTF) (Abercrombie and Ekstrom, 2001). A suite of methods can be used to detect these non-impulsive events. Correlation, matched filter, or template event methods (e.g. Schaff and Waldhauser 2010; Rubinstein & Beroza 2007) are very efficient for detecting smaller events occurring in a similar place and with the same mechanism as a larger template event. One such method (Ekström, 2006), which is applied at the global scale, routinely detects events with magnitudes around Mw 5 and larger. In this work we want to lower the detection threshold by using shorter-period records registered by regional networks together with a full-waveform detection method based on time reversal schemes (Solano, et al., in prep.). The method uses continuous observed seismograms, together with moment tensor responses calculated for a 3D structure. Looking for events on the East Pacific Rise (EPR) around 9 N in one month of data from the National Seismological broadband Network (Servicio Sismologico Nacional, SSN), we found one new event. However, we also had 435 false detections due to high noise levels at several stations, gaps in the data or detection of teleseismic phases. Manually discarding these events is a time-consuming task that should be automated. We are working on several strategies, including weighting the input traces by their signal to noise ratio, correlation of a template peak associated with the detection function and the coincidence in time of the
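    The template-matching detectors cited above (e.g. Schaff and Waldhauser, 2010) can be sketched as a sliding normalized cross-correlation against a continuous trace. This toy is not the time-reversal method of the abstract, and all amplitudes, onsets and noise levels are invented:

```python
import numpy as np

rng = np.random.default_rng(6)

# Continuous trace: noise plus two copies of a template event.
n = 5000
trace = 0.15 * rng.standard_normal(n)
template = np.hanning(64) * np.sin(np.linspace(0, 12 * np.pi, 64))
for onset in (1200, 3500):
    trace[onset:onset + 64] += 0.6 * template

# Sliding normalized cross-correlation (template-matching detector).
m = len(template)
tn = (template - template.mean()) / template.std()
cc = np.empty(n - m + 1)
for i in range(n - m + 1):
    w = trace[i:i + m]
    cc[i] = np.dot(tn, (w - w.mean()) / (w.std() + 1e-12)) / m

# Declare a detection wherever the correlation coefficient exceeds a threshold.
detections = np.where(cc > 0.5)[0]
print("detection indices:", detections)
```

Such detectors only find events that resemble the template in location and mechanism, which is precisely the limitation that motivates the full-waveform, moment-tensor-based approach of the abstract.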

  19. Reconstruction of Dispersive Lamb Waves in Thin Plates Using a Time Reversal Method

    Energy Technology Data Exchange (ETDEWEB)

    Jeong, Hyun Jo [Wonkwang University, Iksan (Korea, Republic of)

    2008-02-15

    Time reversal (TR) of nondispersive body waves has been used in many applications including ultrasonic NDE. However, the study of the TR method for Lamb waves on thin structures is not well established. In this paper, the full reconstruction of the input signal is investigated for dispersive Lamb waves by introducing a time reversal operator based on the Mindlin plate theory. A broadband and a narrowband input waveform are employed to reconstruct the A{sub 0} mode of Lamb wave propagations. Due to the frequency dependence of the TR process of Lamb waves, different frequency components of the broadband excitation are scaled differently during the time reversal process and the original input signal cannot be fully restored. This is the primary reason for using a narrowband excitation to enhance the flaw detectability.
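    For the nondispersive case mentioned at the start of the abstract, the refocusing property of time reversal reduces to matched filtering with the channel response. A minimal 1-D sketch (hypothetical multipath channel, no dispersion, so the frequency-dependent scaling that spoils the Lamb-wave case does not appear):

```python
import numpy as np

# A short input pulse and a multipath (but nondispersive) channel.
n = 512
pulse = np.zeros(n)
pulse[:16] = np.hanning(16)

h = np.zeros(n)
h[[40, 90, 200]] = [1.0, 0.6, 0.3]          # three propagation paths

received = np.convolve(pulse, h)[:n]

# Time-reversal step: reverse the received trace and send it back
# through the same channel.
refocused = np.convolve(received[::-1], h)

# The refocused field peaks sharply where the paths realign; this is the
# matched-filter view of TR: the output is the time-reversed pulse
# convolved with the channel autocorrelation, whose zero-lag peak dominates.
peak = np.max(np.abs(refocused))
print(f"peak-to-mean ratio: {peak / np.mean(np.abs(refocused)):.1f}")
```

With dispersion, each frequency component of the channel would be rescaled differently, the autocorrelation peak would smear, and the input would not be restored, which is the abstract's motivation for the Mindlin-theory TR operator and narrowband excitation.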

  20. Image-reconstruction methods in positron tomography

    CERN Document Server

    Townsend, David W; CERN. Geneva

    1993-01-01

    Physics and mathematics for medical imaging. In the two decades since the introduction of the X-ray scanner into radiology, medical imaging techniques have become widely established as essential tools in the diagnosis of disease. As a consequence of recent technological and mathematical advances, the non-invasive, three-dimensional imaging of internal organs such as the brain and the heart is now possible, not only for anatomical investigations using X-rays but also for studies which explore the functional status of the body using positron-emitting radioisotopes and nuclear magnetic resonance. Mathematical methods which enable three-dimensional distributions to be reconstructed from projection data acquired by radiation detectors suitably positioned around the patient will be described in detail. The lectures will trace the development of medical imaging from simple radiographs to the present-day non-invasive measurement of in vivo biochemistry. Powerful techniques to correlate anatomy and function that are cur...

  1. A Concealed Car Extraction Method Based on Full-Waveform LiDAR Data

    Directory of Open Access Journals (Sweden)

    Chuanrong Li

    2016-01-01

    Full Text Available Concealed car extraction from point cloud data acquired by airborne laser scanning has gained popularity in recent years. However, due to the occlusion effect, the number of laser points for concealed cars under trees is insufficient, which makes concealed car extraction difficult and unreliable. In this paper, a 3D point cloud segmentation and classification approach based on full-waveform LiDAR is presented. This approach first employs the autocorrelation G coefficient and the echo ratio to determine concealed car areas. Then the points in the concealed car areas are segmented with regard to the elevation distribution of concealed cars. Based on the previous steps, a strategy integrating backscattered waveform features and the view histogram descriptor is developed to train sample data of concealed cars and generate the feature pattern. Finally, concealed cars are classified by pattern matching. The approach was validated with full-waveform LiDAR data, and experimental results demonstrated that it can extract concealed cars with an accuracy of more than 78.6% in the experiment areas.

  2. Implementational Improvements of the early warning method based on P-wave waveform envelope function with an application to Korea

    Science.gov (United States)

    Heo, T.; Kim, J.

    2016-12-01

    Recent earthquakes have shown that production in high-tech industrial plants can be affected significantly even by weak earthquake ground shaking. This kind of risk may be mitigated by building an earthquake early warning system. In order to be effective, the warning should be issued within a few seconds after the occurrence of an earthquake, which is a daunting task. Several warning systems have been developed so far. Among them, a system based on the P-wave waveform envelope function utilizing single-station data appears to be very promising. This method estimates the epicentral distance and magnitude from the initial part of the P-wave waveform using the relationships between waveform envelope parameters and seismic parameters. The system employed by the Japan Meteorological Agency uses relationships obtained from the data of earthquakes with magnitudes larger than 5. In this study, however, we attempted to extend the method to earthquakes as small as magnitude 3 in order to implement it in Korea, a region of moderate seismicity. In total, 1,586 records from earthquakes of magnitude between 3 and 5.2 were analyzed. The epicentral distances of these records are less than 140 km. The reliability of the epicenter prediction is found to depend strongly on the accurate picking of the P-wave arrival time from a record. Compared with the existing method, a significant improvement is achieved in identifying the P-wave arrival time by analyzing the wave in the 2-dimensional horizontal plane instead of in each orthogonal direction, by tracking the waveform whose amplitude exceeds the noise level, and by utilizing the continuity of the waveform. This enabled us to estimate accurately the direction to the epicenter. To estimate the epicentral distance, we used, as a parameter, the slope from the initial point to the maximum of the envelope function instead of the power of the exponential envelope function. Consequently, the location of the epicenter can be predicted very

  3. Hybrid Method for Tokamak MHD Equilibrium Configuration Reconstruction

    Institute of Scientific and Technical Information of China (English)

    HE Hong-Da; DONG Jia-Qi; ZHANG Jin-Hua; JIANG Hai-Bin

    2007-01-01

    A hybrid method for tokamak MHD equilibrium configuration reconstruction is proposed and employed in the modified EFIT code. This method uses the free boundary tokamak equilibrium configuration reconstruction algorithm with one boundary point fixed. The results show that the position of the fixed point has explicit effects on the reconstructed divertor configurations. In particular, the separatrix of the reconstructed divertor configuration precisely passes the required position when the hybrid method is used in the reconstruction. The profiles of plasma parameters such as pressure and safety factor for reconstructed HL-2A tokamak configurations with the hybrid and the free boundary methods are compared. The possibility for applications of the method to swing the separatrix strike point on the divertor target plate is discussed.

  4. Method to synthesize polynomial current waveforms and intensity compensation functions for DFB lasers in digital sweep integration gas analyzers.

    Science.gov (United States)

    Kidd, Gary

    2002-09-01

    With analysis methods using digital sweep integration of the absorbance function, linear current ramps can produce non-linear laser intensity and wavenumber functions from distributed feedback lasers. These non-linear functions produce offset and gain errors in mole fraction estimates on tunable diode laser gas analyzers. A method is described to synthesize polynomial current waveforms and laser intensity compensation functions to give linear wavenumber functions and to minimize the offset error. Quantitative and qualitative results are presented to evaluate reduction in mole fraction errors.
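    A minimal sketch of the synthesis idea, assuming a hypothetical quadratic tuning curve (all coefficients are invented, not measured DFB data): fit the inverse current-versus-wavenumber map with a polynomial and sample it along a linear wavenumber ramp, yielding the nonlinear current waveform that linearises the sweep.

```python
import numpy as np

# Hypothetical DFB tuning curve: wavenumber vs. injection current with a
# mild quadratic nonlinearity (all numbers are illustrative).
current = np.linspace(60.0, 120.0, 50)          # mA
wavenumber = 6046.0 + 0.012 * (current - 60.0) - 2.0e-5 * (current - 60.0) ** 2

# Fit the inverse map current(wavenumber) with a polynomial (centred for
# numerical conditioning), then evaluate it along a *linear* wavenumber ramp.
nu0 = wavenumber[0]
inv_coeffs = np.polyfit(wavenumber - nu0, current, 5)
nu_ramp = np.linspace(wavenumber[0], wavenumber[-1], 500)
current_waveform = np.polyval(inv_coeffs, nu_ramp - nu0)

# Verify: pushing the synthesized current back through the tuning curve
# should give (nearly) equal wavenumber steps, i.e. a linear sweep.
nu_check = 6046.0 + 0.012 * (current_waveform - 60.0) - 2.0e-5 * (current_waveform - 60.0) ** 2
step = np.diff(nu_check)
print(f"wavenumber step spread / mean: {(step.max() - step.min()) / step.mean():.2e}")
```

An intensity compensation function can be synthesized the same way, by fitting measured intensity against the synthesized current waveform and dividing it out of the recorded sweep.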

  5. A new target reconstruction method considering atmospheric refraction

    Science.gov (United States)

    Zuo, Zhengrong; Yu, Lijuan

    2015-12-01

    In this paper, a new target reconstruction method considering atmospheric refraction is presented to improve 3D reconstruction accuracy in long-range surveillance systems. The basic idea of the method is that the atmosphere between the camera and the target is partitioned radially into several thin layers, within each of which the density is regarded as uniform. Then the light propagation path from sensor to target is traced in reverse by applying Snell's law at the interfaces between layers. Finally, the average of the traced target positions from the different cameras is taken as the reconstructed position. Reconstruction experiments were carried out, and the results showed that the new method has much better reconstruction accuracy than the traditional stereoscopic reconstruction method.
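    The layer-by-layer application of Snell's law can be sketched as follows (the refractive indices are illustrative placeholders, not a real atmosphere model, and a real tracer would also accumulate the lateral offset in each layer):

```python
import numpy as np

def trace_through_layers(n_layers, theta0_deg):
    """Bend a ray through stratified layers via Snell's law.

    n_layers   -- refractive index of each layer, camera side first
    theta0_deg -- incidence angle (from the layer normal) in the first layer
    Returns the propagation angle inside each layer, in degrees.
    """
    theta = np.radians(theta0_deg)
    angles = [theta0_deg]
    for n1, n2 in zip(n_layers[:-1], n_layers[1:]):
        # Snell's law at the interface: n1 sin(theta1) = n2 sin(theta2)
        s = n1 * np.sin(theta) / n2
        theta = np.arcsin(np.clip(s, -1.0, 1.0))
        angles.append(np.degrees(theta))
    return angles

# Air density (hence refractive index) decreasing along the path -- a crude
# stand-in for the layered atmosphere between camera and target.
indices = [1.000293, 1.000250, 1.000200, 1.000150]
angles = trace_through_layers(indices, 60.0)
print(angles)
```

Because each successive layer has a lower index, the ray bends slightly away from the normal; accumulating these small deflections over a long path is what produces the apparent-position error the method corrects.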

  6. Reconstruction of a ring applicator using CT imaging: impact of the reconstruction method and applicator orientation

    Energy Technology Data Exchange (ETDEWEB)

    Hellebust, Taran Paulsen [Department of Medical Physics, Rikshospital-Radiumhospital Medical Center, Oslo (Norway); Tanderup, Kari [Department of Oncology, Aarhus University Hospital, Aarhus (Denmark); Bergstrand, Eva Stabell [Department of Medical Physics, Rikshospital-Radiumhospital Medical Center, Oslo (Norway); Knutsen, Bjoern Helge [Department of Medical Physics, Rikshospital-Radiumhospital Medical Center, Oslo (Norway); Roeislien, Jo [Section of Biostatistics, Rikshospital-Radiumhospital Medical Center, Oslo (Norway); Olsen, Dag Rune [Institute for Cancer Research, Rikshospital-Radiumhospital Medical Center, Oslo (Norway)

    2007-08-21

    The purpose of this study is to investigate whether the method of applicator reconstruction and/or the applicator orientation influence the dose calculation to points around the applicator for brachytherapy of cervical cancer with CT-based treatment planning. A phantom, containing a fixed ring applicator set and six lead pellets representing dose points, was used. The phantom was CT scanned with the ring applicator at four different angles related to the image plane. In each scan the applicator was reconstructed by three methods: (1) direct reconstruction in each image (DR) (2) reconstruction in multiplanar reconstructed images (MPR) and (3) library plans, using pre-defined applicator geometry (LIB). The doses to the lead pellets were calculated. The relative standard deviation (SD) for all reconstruction methods was less than 3.7% in the dose points. The relative SD for the LIB method was significantly lower (p < 0.05) than for the DR and MPR methods for all but two points. All applicator orientations had similar dose calculation reproducibility. Using library plans for applicator reconstruction gives the most reproducible dose calculation. However, with restrictive guidelines for applicator reconstruction the uncertainties for all methods are low compared to other factors influencing the accuracy of brachytherapy.

  7. Multicore Performance of Block Algebraic Iterative Reconstruction Methods

    DEFF Research Database (Denmark)

    Sørensen, Hans Henrik B.; Hansen, Per Christian

    2014-01-01

    Algebraic iterative methods are routinely used for solving the ill-posed sparse linear systems arising in tomographic image reconstruction. Here we consider the algebraic reconstruction technique (ART) and the simultaneous iterative reconstruction techniques (SIRT), both of which rely ... a fixed relaxation parameter in each method, namely, the one that leads to the fastest semiconvergence. Computational results show that for multicore computers, the sequential approach is preferable.

  8. NEW VISUAL METHOD FOR FREE-FORM SURFACE RECONSTRUCTION

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    A new method is put forward combining computer vision with computer aided geometric design (CAGD) to resolve the problem of free-form surface reconstruction. The surface is first subdivided into N-sided Gregory patches, and a stereo algorithm is used to reconstruct the boundary curves. Then, the cross-boundary tangent vectors are computed through reflectance analysis. Finally, the whole surface can be reconstructed by joining these patches with G1 continuity (tangent continuity). Examples on synthetic images are given.

  9. A New Wave Equation Based Source Location Method with Full-waveform Inversion

    KAUST Repository

    Wu, Zedong

    2017-05-26

    Locating the source of a passively recorded seismic event is still a challenging problem, especially when the velocity is unknown. Many imaging approaches to focus the image do not address the velocity issue and result in images plagued with illumination artifacts. We develop a waveform inversion approach with an additional penalty term in the objective function to reward the focusing of the source image. This penalty term is relaxed early to allow for data fitting, and avoid cycle skipping, using an extended source. At the later stages the focusing of the image dominates the inversion allowing for high resolution source and velocity inversion. We also compute the source location explicitly and numerical tests show that we obtain good estimates of the source locations with this approach.

  10. High resolution x-ray CMT: Reconstruction methods

    Energy Technology Data Exchange (ETDEWEB)

    Brown, J.K.

    1997-02-01

    This paper qualitatively discusses the primary characteristics of methods for reconstructing tomographic images from a set of projections. These reconstruction methods can be categorized as either "analytic" or "iterative" techniques. Analytic algorithms are derived from the formal inversion of equations describing the imaging process, while iterative algorithms incorporate a model of the imaging process and provide a mechanism to iteratively improve image estimates. Analytic reconstruction algorithms are typically computationally more efficient than iterative methods; however, analytic algorithms are available for a relatively limited set of imaging geometries and situations. Thus, the framework of iterative reconstruction methods is better suited for high accuracy, tomographic reconstruction codes.

  11. Limb reconstruction with the Ilizarov method

    NARCIS (Netherlands)

    Oostenbroek, H.J.

    2014-01-01

    In chapter 1, the background and origins of this study are explained. The aims of the study are defined. In chapter 2, an analysis of the complication rate of limb reconstruction in a cohort of 37 consecutive growing children was performed. Several patient and deformity factors were investigated by logi

  12. A new method to reduce the statistical and systematic uncertainty of chance coincidence backgrounds measured with waveform digitizers

    CERN Document Server

    O'Donnell, J M

    2016-01-01

    A new method for measuring chance-coincidence backgrounds during the collection of coincidence data is presented. The method relies on acquiring data with near-zero dead time, which is now realistic due to the increasing deployment of flash electronic-digitizer (waveform digitizer) techniques. An experiment designed to use this new method is capable of acquiring more coincidence data, with a much reduced statistical fluctuation of the measured background. A statistical analysis is presented and used to derive a figure of merit for the new method. Factors of four improvement over other analyses are realistic. The technique is illustrated with preliminary data taken as part of a program to make new measurements of the prompt fission neutron spectra at the Los Alamos Neutron Science Center. It is expected that these measurements will occur in a regime where the maximum figure of merit will be exploited.
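    The chance-coincidence level that such a dead-time-free measurement must characterize can be checked against the standard accidental-rate estimate 2·tau·r1·r2·T in a small simulation (all rates and window widths are invented; this illustrates only the background, not the paper's figure of merit):

```python
import numpy as np

rng = np.random.default_rng(2)

T = 1000.0            # measurement time (s)
r1, r2 = 50.0, 40.0   # singles rates in the two detectors (Hz)
tau = 1e-4            # coincidence window (s)

# Independent (uncorrelated) Poisson arrival streams -- pure chance
# coincidences, as recorded by a dead-time-free waveform digitizer.
t1 = np.sort(rng.uniform(0, T, rng.poisson(r1 * T)))
t2 = np.sort(rng.uniform(0, T, rng.poisson(r2 * T)))

# Count pairs with |t1 - t2| < tau using a sorted-merge search.
lo = np.searchsorted(t2, t1 - tau)
hi = np.searchsorted(t2, t1 + tau)
n_chance = int(np.sum(hi - lo))

expected = 2 * tau * r1 * r2 * T   # standard accidental-coincidence estimate
print(f"measured {n_chance}, expected {expected:.0f}")
```

Recording every pulse rather than gating in hardware means the accidental population can be measured from the same data stream (e.g. from off-window time differences), which is what reduces the statistical and systematic uncertainty of the subtraction.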

  13. Compressive measurement and feature reconstruction method for autonomous star trackers

    Science.gov (United States)

    Yin, Hang; Yan, Ye; Song, Xin; Yang, Yueneng

    2016-12-01

    Compressive sensing (CS) theory provides a framework for signal reconstruction using a sub-Nyquist sampling rate, enabling the reconstruction of a signal that is sparse or compressible from a small set of measurements. Current CS applications in the optical field mainly focus on reconstructing the original image using optimization algorithms and conduct data processing on the full-dimensional image, which cannot reduce the data processing rate. This study builds on the spatial sparsity of star images and proposes a new compressive measurement and reconstruction method that extracts the star feature from compressive data and directly maps it back to the original image space for attitude determination. A pixel-based folding model that preserves the star feature and enables feature reconstruction is presented to encode the original pixel location into the superposed space. A feature reconstruction method is then proposed to extract the star centroid by compensating distortions and to decode the centroid without reconstructing the whole image, which reduces the sampling rate and the data processing rate at the same time. Statistical results on the proportion of star distortion and false matching verify the correctness of the proposed method. The results also verify the robustness of the proposed method to a great extent and demonstrate that its performance can be improved by sufficient measurement in noisy cases. Moreover, results on real star images ensure correct star centroid estimation for attitude determination and confirm the feasibility of applying the proposed method in a star tracker.
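    A hedged sketch of the folding idea (a heavily simplified version of the paper's pixel-based model, with an idealized Gaussian star on a perfectly dark background): block superposition preserves the star profile because the scene is sparse, and the star's centroid in the folded frame encodes its position modulo the block size.

```python
import numpy as np

def fold(image, block):
    """Superpose non-overlapping blocks of `image` (compressive folding)."""
    h, w = image.shape
    bh, bw = block
    return image.reshape(h // bh, bh, w // bw, bw).sum(axis=(0, 2))

# Dark sky with a single Gaussian star spot (positions are illustrative).
img = np.zeros((128, 128))
yy, xx = np.mgrid[0:128, 0:128]
cy, cx = 77.3, 52.6
img += np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / 2.0)

folded = fold(img, (32, 32))   # 16x fewer samples than the full frame

# The star survives folding; its centroid in the folded frame gives the
# sub-pixel position modulo the block size.
w = folded / folded.sum()
fy = (np.arange(32)[:, None] * w).sum()
fx = (np.arange(32)[None, :] * w).sum()
print(f"folded centroid: ({fy:.2f}, {fx:.2f})  true mod 32: ({cy % 32:.2f}, {cx % 32:.2f})")
```

Resolving which block the star came from (and handling stars that straddle block boundaries) requires the decoding and distortion-compensation machinery the abstract describes; this sketch only shows why the centroid survives the measurement.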

  14. Application of mathematical modelling methods for acoustic images reconstruction

    Science.gov (United States)

    Bolotina, I.; Kazazaeva, A.; Kvasnikov, K.; Kazazaev, A.

    2016-04-01

    The article considers the reconstruction of images by the Synthetic Aperture Focusing Technique (SAFT). The work compares additive and multiplicative methods for processing signals received from the antenna array. We have shown that the multiplicative method gives better resolution. The study includes the estimation of beam trajectories for antenna arrays using analytical and numerical methods. We have shown that the analytical estimation method allows decreasing the image reconstruction time in the case of a linear antenna array implementation.
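    The additive (delay-and-sum) and multiplicative variants of SAFT differ only in how the delayed samples are combined at each image point. A minimal single-pixel sketch with a synthetic point scatterer (all array parameters are invented, and the A-scans are idealized Gaussian echoes):

```python
import numpy as np

c = 1500.0                                # wave speed (m/s), water-like medium
fs = 5e6                                  # sampling rate (Hz)
elements = np.linspace(-0.02, 0.02, 16)   # 16-element linear array (m)
target = (0.0, 0.03)                      # true scatterer position (x, z)

# Synthetic A-scans: a short Gaussian pulse at each element's two-way time.
n = 1024
t = np.arange(n) / fs
scans = np.zeros((len(elements), n))
for i, ex in enumerate(elements):
    dist = np.hypot(ex - target[0], target[1])
    delay = 2 * dist / c
    scans[i] = np.exp(-((t - delay) * 2e6) ** 2)

def saft(x, z, multiplicative=False):
    # Evaluate the focused image at a single (x, z) pixel by sampling each
    # A-scan at that pixel's two-way travel time, then combining.
    vals = []
    for i, ex in enumerate(elements):
        delay = 2 * np.hypot(ex - x, z) / c
        idx = int(round(delay * fs))
        vals.append(scans[i][idx] if idx < n else 0.0)
    vals = np.array(vals)
    return np.prod(vals) if multiplicative else np.sum(vals)

print("additive on target:      ", saft(*target))
print("multiplicative on target:", saft(*target, multiplicative=True))
print("multiplicative 5 mm off: ", saft(0.0, 0.035, multiplicative=True))
```

The multiplicative combination collapses to (near) zero as soon as a single element's delayed sample misses the echo, which is the mechanism behind its sharper resolution; the additive sum degrades much more gracefully off focus.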

  15. Geometric reconstruction methods for electron tomography

    Energy Technology Data Exchange (ETDEWEB)

    Alpers, Andreas, E-mail: alpers@ma.tum.de [Zentrum Mathematik, Technische Universität München, D-85747 Garching bei München (Germany); Gardner, Richard J., E-mail: Richard.Gardner@wwu.edu [Department of Mathematics, Western Washington University, Bellingham, WA 98225-9063 (United States); König, Stefan, E-mail: koenig@ma.tum.de [Zentrum Mathematik, Technische Universität München, D-85747 Garching bei München (Germany); Pennington, Robert S., E-mail: robert.pennington@uni-ulm.de [Center for Electron Nanoscopy, Technical University of Denmark, DK-2800 Kongens Lyngby (Denmark); Boothroyd, Chris B., E-mail: ChrisBoothroyd@cantab.net [Ernst Ruska-Centre for Microscopy and Spectroscopy with Electrons and Peter Grünberg Institute, Forschungszentrum Jülich, D-52425 Jülich (Germany); Houben, Lothar, E-mail: l.houben@fz-juelich.de [Ernst Ruska-Centre for Microscopy and Spectroscopy with Electrons and Peter Grünberg Institute, Forschungszentrum Jülich, D-52425 Jülich (Germany); Dunin-Borkowski, Rafal E., E-mail: rdb@fz-juelich.de [Ernst Ruska-Centre for Microscopy and Spectroscopy with Electrons and Peter Grünberg Institute, Forschungszentrum Jülich, D-52425 Jülich (Germany); Joost Batenburg, Kees, E-mail: Joost.Batenburg@cwi.nl [Centrum Wiskunde and Informatica, NL-1098XG, Amsterdam, The Netherlands and Vision Lab, Department of Physics, University of Antwerp, B-2610 Wilrijk (Belgium)

    2013-05-15

    Electron tomography is becoming an increasingly important tool in materials science for studying the three-dimensional morphologies and chemical compositions of nanostructures. The image quality obtained by many current algorithms is seriously affected by the problems of missing wedge artefacts and non-linear projection intensities due to diffraction effects. The former refers to the fact that data cannot be acquired over the full 180° tilt range; the latter implies that for some orientations, crystalline structures can show strong contrast changes. To overcome these problems we introduce and discuss several algorithms from the mathematical fields of geometric and discrete tomography. The algorithms incorporate geometric prior knowledge (mainly convexity and homogeneity), which also in principle considerably reduces the number of tilt angles required. Results are discussed for the reconstruction of an InAs nanowire. - Highlights: ► Four algorithms for electron tomography are introduced that utilize prior knowledge. ► Objects are assumed to be homogeneous; convexity and regularity is also discussed. ► We are able to reconstruct slices of a nanowire from as few as four projections. ► Algorithms should be selected based on the specific reconstruction task at hand.

  16. The optimized gradient method for full waveform inversion and its spectral implementation

    KAUST Repository

    Wu, Zedong

    2016-03-28

    At the heart of the full waveform inversion (FWI) implementation is wavefield extrapolation, and specifically its accuracy and cost. To obtain accurate, dispersion free wavefields, the extrapolation for modelling is often expensive. Combining an efficient extrapolation with a novel gradient preconditioning can render an FWI implementation that efficiently converges to an accurate model. We, specifically, recast the extrapolation part of the inversion in terms of its spectral components for both data and gradient calculation. This admits dispersion free wavefields even at large extrapolation time steps, which improves the efficiency of the inversion. An alternative spectral representation of the depth axis in terms of sine functions allows us to impose a free surface boundary condition, which reflects our medium boundaries more accurately. Using a newly derived perfectly matched layer formulation for this spectral implementation, we can define a finite model with absorbing boundaries. In order to reduce the nonlinearity in FWI, we propose a multiscale conditioning of the objective function through combining the different directional components of the gradient to optimally update the velocity. Through solving a simple optimization problem, it specifically admits the smoothest approximate update while guaranteeing its ascending direction. An application to the Marmousi model demonstrates the capability of the proposed approach and justifies our assertions with respect to cost and convergence.

  17. Anatomically-aided PET reconstruction using the kernel method

    Science.gov (United States)

    Hutchcroft, Will; Wang, Guobao; Chen, Kevin T.; Catana, Ciprian; Qi, Jinyi

    2016-09-01

    This paper extends the kernel method that was proposed previously for dynamic PET reconstruction, to incorporate anatomical side information into the PET reconstruction model. In contrast to existing methods that incorporate anatomical information using a penalized likelihood framework, the proposed method incorporates this information in the simpler maximum likelihood (ML) formulation and is amenable to ordered subsets. The new method also does not require any segmentation of the anatomical image to obtain edge information. We compare the kernel method with the Bowsher method for anatomically-aided PET image reconstruction through a simulated data set. Computer simulations demonstrate that the kernel method offers advantages over the Bowsher method in region of interest quantification. Additionally the kernel method is applied to a 3D patient data set. The kernel method results in reduced noise at a matched contrast level compared with the conventional ML expectation maximization algorithm.
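    A toy version of the kernelized ML formulation (1-D, with an idealized anatomical prior and an invented system matrix; not the authors' implementation): represent the image as x = K·alpha, where K is built from anatomical features, and run the standard MLEM update on the combined system A·K.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy 1-D "PET" problem: y ~ Poisson(A x), with the image represented
# as x = K alpha, where K comes from an anatomical prior signal.
npix, ndet = 64, 96
x_true = np.where(np.abs(np.arange(npix) - 32) < 10, 4.0, 1.0)

anat = x_true.copy()   # idealized: anatomy shares the lesion boundary
# Gaussian kernel from anatomical feature differences, restricted to a
# small spatial neighbourhood and row-normalized (no segmentation needed).
diff = anat[:, None] - anat[None, :]
pos = np.arange(npix)[:, None] - np.arange(npix)[None, :]
K = np.exp(-diff ** 2 / 2.0) * (np.abs(pos) <= 3)
K /= K.sum(axis=1, keepdims=True)

A = rng.uniform(0, 1, (ndet, npix))
y = rng.poisson(A @ x_true)

# MLEM on the kernelized system M = A K:
#   alpha <- alpha * M^T(y / (M alpha)) / (M^T 1)
M = A @ K
alpha = np.ones(npix)
sens = M.T @ np.ones(ndet)
for _ in range(50):
    alpha *= M.T @ (y / (M @ alpha + 1e-12)) / sens
x_hat = K @ alpha

err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
print(f"relative error: {err:.3f}")
```

The kernel matrix regularizes the estimate by tying together voxels with similar anatomical features, while the update itself remains the plain multiplicative MLEM step, which is why the approach stays compatible with ordered subsets.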

  18. Review of Josephson Waveform Synthesis and Possibility of New Operation Method by Multibit Delta-Sigma Modulation and Thermometer Code for Its Further Advancement

    Science.gov (United States)

    Kaneko, Nobu-hisa; Maruyama, Michitaka; Urano, Chiharu; Kiryu, Shogo

    2012-01-01

    A method of AC waveform synthesis with quantum-mechanical accuracy has been developed on the basis of the Josephson effect in national metrology institutes, not only for its scientific interest but its potential benefit to industries. In this paper, we review the development of Josephson arbitrary waveform synthesizers based on the two types of Josephson junction array and their distinctive driving methods. We also discuss a new operation technique with multibit delta-sigma modulation and a thermometer code, which possibly enables the generation of glitch-free waveforms with high voltage levels. A Josephson junction array for this method has equally weighted branches that are operated by thermometer-coded bias current sources with multibit delta-sigma conversion.

  19. Landweber Iterative Methods for Angle-limited Image Reconstruction

    Institute of Scientific and Technical Information of China (English)

    Gang-rong Qu; Ming Jiang

    2009-01-01

    We introduce a general iterative scheme for angle-limited image reconstruction based on Landweber's method. We derive a representation formula for this scheme and consequently establish its convergence conditions. Our results suggest certain relaxation strategies for an accelerated convergence for angle-limited image reconstruction in the L2-norm, compared with alternative projection methods. The convolution-backprojection algorithm is given for this iterative process.
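    Landweber's method itself is a short iteration. The sketch below applies it to a toy ill-posed deblurring problem (not an angle-limited tomography geometry) to show the update x_{k+1} = x_k + lam·Aᵀ(b − A·x_k) and its convergence condition 0 < lam < 2/||A||²:

```python
import numpy as np

rng = np.random.default_rng(4)

# Ill-posed toy system: a smoothing (blurring) operator A and noisy data b.
n = 100
A = np.exp(-0.1 * (np.arange(n)[:, None] - np.arange(n)[None, :]) ** 2)
x_true = np.sin(np.linspace(0, 3 * np.pi, n))
b = A @ x_true + 0.01 * rng.standard_normal(n)

# Landweber iteration, convergent for 0 < lam < 2 / ||A||_2^2.
lam = 1.9 / np.linalg.norm(A, 2) ** 2
x = np.zeros(n)
errors = []
for k in range(200):
    x = x + lam * A.T @ (b - A @ x)
    errors.append(np.linalg.norm(x - x_true))

print(f"relative error after {len(errors)} iterations: "
      f"{errors[-1] / np.linalg.norm(x_true):.3f}")
```

With noisy data the error typically decreases and then slowly rises again (semiconvergence), so the iteration count and the relaxation parameter together act as the regularization, which is what the relaxation strategies in the abstract tune.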

  20. Alternative method for reconstruction of antihydrogen annihilation vertices

    Science.gov (United States)

    Amole, C.; Ashkezari, M. D.; Andresen, G. B.; Baquero-Ruiz, M.; Bertsche, W.; Bowe, P. D.; Butler, E.; Cesar, C. L.; Chapman, S.; Charlton, M.; Deller, A.; Eriksson, S.; Fajans, J.; Friesen, T.; Fujiwara, M. C.; Gill, D. R.; Gutierrez, A.; Hangst, J. S.; Hardy, W. N.; Hayano, R. S.; Hayden, M. E.; Humphries, A. J.; Hydomako, R.; Jonsell, S.; Kurchaninov, L.; Madsen, N.; Menary, S.; Nolan, P.; Olchanski, K.; Olin, A.; Povilus, A.; Pusa, P.; Robicheaux, F.; Sarid, E.; Silveira, D. M.; So, C.; Storey, J. W.; Thompson, R. I.; van der Werf, D. P.; Wurtele, J. S.; Yamazaki, Y.

    The ALPHA experiment, located at CERN, aims to compare the properties of antihydrogen atoms with those of hydrogen atoms. The neutral antihydrogen atoms are trapped using an octupole magnetic trap. The trap region is surrounded by a three layered silicon detector used to reconstruct the antiproton annihilation vertices. This paper describes a method we have devised that can be used for reconstructing annihilation vertices with a good resolution and is more efficient than the standard method currently used for the same purpose.

  1. Reconstruction-classification method for quantitative photoacoustic tomography

    CERN Document Server

    Malone, Emma; Cox, Ben T; Arridge, Simon R

    2015-01-01

    We propose a combined reconstruction-classification method for simultaneously recovering absorption and scattering in turbid media from images of absorbed optical energy. This method exploits knowledge that optical parameters are determined by a limited number of classes to iteratively improve their estimate. Numerical experiments show that the proposed approach allows for accurate recovery of absorption and scattering in 2 and 3 dimensions, and delivers superior image quality with respect to traditional reconstruction-only approaches.

  2. Alternative method for reconstruction of antihydrogen annihilation vertices

    CERN Document Server

    Amole, C; Andresen, G B; Baquero-Ruiz, M; Bertsche, W; Bowe, P D; Butler, E; Cesar, C L; Chapman, S; Charlton, M; Deller, A; Eriksson, S; Fajans, J; Friesen, T; Fujiwara, M C; Gill, D R; Gutierrez, A; Hangst, J S; Hardy, W N; Hayano, R S; Hayden, M E; Humphries, A J; Hydomako, R; Jonsell, S; Kurchaninov, L; Madsen, N; Menary, S; Nolan, P; Olchanski, K; Olin, A; Povilus, A; Pusa, P; Robicheaux, F; Sarid, E; Silveira, D M; So, C; Storey, J W; Thompson, R I; van der Werf, D P; Wurtele, J S; Yamazaki, Y

    2012-01-01

    The ALPHA experiment, located at CERN, aims to compare the properties of antihydrogen atoms with those of hydrogen atoms. The neutral antihydrogen atoms are trapped using an octupole magnetic trap. The trap region is surrounded by a three-layer silicon detector used to reconstruct the antiproton annihilation vertices. This paper describes a method we have devised that can be used for reconstructing annihilation vertices with good resolution and is more efficient than the standard method currently used for the same purpose.

  3. Fast MR Spectroscopic Imaging Technologies and Data Reconstruction Methods

    Institute of Scientific and Technical Information of China (English)

    HUANG Min; LU Song-tao; LIN Jia-rui; ZHAN Ying-jian

    2004-01-01

    MRSI plays an increasingly important role in clinical applications. In this paper, we compare several fast MRSI technologies and data reconstruction methods. For conventional phase-encoding MRSI, data reconstruction using the FFT is simple, but data acquisition is very time consuming and thus prohibitive in clinical settings. To date, the MRSI technologies based on echo-planar trajectories, spiral trajectories and sensitivity encoding are the fastest in data acquisition, but their data reconstruction is complex. EPSI reconstruction uses the shift of odd and even echoes. Spiral SI uses gridding FFT. SENSE-SI, a new approach to reducing acquisition time, uses the distinct spatial sensitivities of the individual coil elements to recover the missing encoding information. These improvements in data acquisition and image reconstruction give metabolic imaging potential value as a clinical tool.

  4. A flux pumping method applied to the magnetization of YBCO superconducting coils: frequency, amplitude and waveform characteristics

    Science.gov (United States)

    Fu, Lin; Matsuda, Koichi; Lecrevisse, Thibault; Iwasa, Yukikazu; Coombs, Tim

    2016-04-01

    This letter presents a flux pumping method and the results obtained when it was used to magnetize a range of different YBCO coils. The pumping device consists of an iron magnetic circuit with eight copper coils which apply a traveling magnetic field to the superconductor. The copper poles are arranged vertically with an air gap length of 1 mm, and the iron cores are made of laminated electrical steel plates to minimize eddy-current losses. We have used this arrangement to investigate the best possible pumping result as parameters such as frequency, amplitude and waveform are varied. We have successfully pumped current into the superconducting coil up to 90% of I_c and achieved a resultant magnetic field of 1.5 T.

  5. Reconstruction methods for phase-contrast tomography

    Energy Technology Data Exchange (ETDEWEB)

    Raven, C.

    1997-02-01

    Phase contrast imaging with coherent x-rays can be divided into outline imaging and holography, depending on the wavelength λ, the object size d and the object-to-detector distance r. When r << d²/λ, phase contrast occurs only in regions where the refractive index changes rapidly, i.e. at interfaces and edges in the sample. With increasing object-to-detector distance we enter the regime of holographic imaging. The image contrast outside the shadow region of the object is due to interference of the direct, undiffracted beam and a beam diffracted by the object, or, in terms of holography, the interference of a reference wave with the object wave. Both outline imaging and holography offer the possibility to obtain three-dimensional information about the sample in conjunction with a tomographic technique, but the data treatment and the kind of information one can obtain from the reconstruction differ.
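The regime distinction above can be condensed into a single near-field number N = d²/(λr). The helper below, its threshold of 10, and the example geometries are illustrative choices, not values from the record:

```python
def imaging_regime(r, d, wavelength):
    """Classify the phase-contrast regime by the near-field number
    N = d**2 / (wavelength * r): N >> 1 -> edge/outline imaging,
    N of order 1 or less -> holographic regime."""
    N = d**2 / (wavelength * r)
    return "outline" if N > 10 else "holography"

# 0.1 nm x-rays and a 10-micron feature (hypothetical numbers):
regime_near = imaging_regime(r=0.01, d=10e-6, wavelength=1e-10)  # r << d^2/lambda
regime_far = imaging_regime(r=10.0, d=10e-6, wavelength=1e-10)   # large distance
```

For these numbers d²/λ = 1 m, so a 1 cm propagation distance gives edge-enhanced outline images while a 10 m distance is well into the holographic regime.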

  6. Geometric reconstruction methods for electron tomography

    CERN Document Server

    Alpers, Andreas; König, Stefan; Pennington, Robert S; Boothroyd, Chris B; Houben, Lothar; Dunin-Borkowski, Rafal E; Batenburg, Kees Joost

    2012-01-01

    Electron tomography is becoming an increasingly important tool in materials science for studying three-dimensional morphologies and chemical compositions of nanostructures. The image quality obtained by many current algorithms is seriously affected by the problems of missing wedge artefacts and nonlinear projection intensities due to diffraction effects. The former refers to the fact that data cannot be acquired over the full $180^\\circ$ tilt range; the latter implies that for some orientations, crystalline structures can show strong contrast changes. To overcome these problems we introduce four algorithms from the mathematical fields of geometric and discrete tomography. The algorithms incorporate geometric prior knowledge (mainly convexity and homogeneity), which also in principle considerably reduces the number of tilt angles required. Results are discussed for the reconstruction of an InAs nanowire.

  7. Comparison of Force Reconstruction Methods for a Lumped Mass Beam

    Directory of Open Access Journals (Sweden)

    Vesta I. Bateman

    1997-01-01

    Full Text Available Two extensions of the force reconstruction method, the sum of weighted accelerations technique (SWAT, are presented in this article. SWAT requires the use of the structure’s elastic mode shapes for reconstruction of the applied force. Although based on the same theory, the two new techniques do not rely on mode shapes to reconstruct the applied force and may be applied to structures whose mode shapes are not available. One technique uses the measured force and acceleration responses with the rigid body mode shapes to calculate the scalar weighting vector, so the technique is called SWAT-CAL (SWAT using a calibrated force input. The second technique uses the free-decay time response of the structure with the rigid body mode shapes to calculate the scalar weighting vector and is called SWAT-TEEM (SWAT using time eliminated elastic modes. All three methods are used to reconstruct forces for a simple structure.
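A minimal sketch of the SWAT-CAL idea, under the simplifying assumption that a calibrated force record and a few acceleration channels are available; the weights then reduce to a least-squares fit (the actual technique additionally constrains the weights with rigid-body mode shapes). Sensitivities, noise level and signals below are synthetic:

```python
import numpy as np

# SWAT-CAL sketch (assumption): with a known calibration force f_cal(t) and
# accelerations a_i(t) at several points, scalar weights w are chosen by
# least squares so that sum_i w_i * a_i(t) approximates f_cal(t).
rng = np.random.default_rng(1)
n_sensors, n_samples = 4, 500
t = np.linspace(0, 1, n_samples)
f_cal = np.sin(2 * np.pi * 5 * t)                    # calibration force input
mix = rng.standard_normal(n_sensors)                 # hypothetical sensitivities
A = np.outer(mix, f_cal) + 0.01 * rng.standard_normal((n_sensors, n_samples))
w, *_ = np.linalg.lstsq(A.T, f_cal, rcond=None)      # weights from calibration
f_rec = w @ A                                        # reconstructed force
```

Once the weights are known, the same weighted sum can be applied to accelerations measured under an unknown load to reconstruct that force.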

  8. Reconstruction of 3-D digital cores using a hybrid method

    Institute of Scientific and Technical Information of China (English)

    Liu Xuefeng; Sun Jianmeng; Wang Haitao

    2009-01-01

    A 3-D digital core describes the pore space microstructure of rocks. An X-ray micro-CT scan is the most accurate and direct, but costly, method to obtain a 3-D digital core. In this study, we propose a hybrid method which combines sedimentation simulation with the simulated annealing (SA) method to generate 3-D digital cores from 2-D images of rocks. The method starts with the sedimentation simulation to build a 3-D digital core, which serves as the initial configuration for the SA method. We update the initial digital core using the SA method to match the auto-correlation function of the 2-D rock image and eventually build the final 3-D digital core. Compared with the typical SA method, the hybrid method significantly reduces the computation time. Local porosity theory is applied to quantitatively compare the reconstructed 3-D digital cores with the X-ray micro-CT 3-D images. The results indicate that the 3-D digital cores reconstructed by the hybrid method have homogeneity and geometric connectivity similar to those of the X-ray micro-CT image. The formation factors and permeabilities of the reconstructed 3-D digital cores are estimated using the finite element method (FEM) and the lattice Boltzmann method (LBM), respectively. The simulated results are in good agreement with the experimental measurements. Comparison of the simulation results suggests that the digital cores reconstructed by the hybrid method reflect the true transport properties more closely than the typical SA method alone.
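The annealing stage of such a reconstruction can be sketched on a small 2-D example: pixel swaps conserve porosity while Metropolis acceptance drives the two-point correlation toward a reference. The target image, cooling schedule and FFT-based cost below are illustrative stand-ins for the rock statistics used in the paper:

```python
import numpy as np

def autocorr(img):
    """Periodic two-point correlation via FFT."""
    F = np.fft.fft2(img)
    return np.real(np.fft.ifft2(F * np.conj(F))) / img.size

rng = np.random.default_rng(2)
target = (rng.random((16, 16)) < 0.3).astype(float)    # stand-in 2-D reference
S_ref = autocorr(target)

img = rng.permutation(target.ravel()).reshape(16, 16)  # same porosity, shuffled
cost = init_cost = np.sum((autocorr(img) - S_ref) ** 2)
best = cost
T = 1e-3
for _ in range(2000):                                  # pixel-swap annealing
    a = tuple(rng.integers(0, 16, 2))
    b = tuple(rng.integers(0, 16, 2))
    if img[a] == img[b]:
        continue
    img[a], img[b] = img[b], img[a]                    # trial swap (porosity kept)
    new_cost = np.sum((autocorr(img) - S_ref) ** 2)
    if new_cost < cost or rng.random() < np.exp((cost - new_cost) / T):
        cost = new_cost                                 # accept (Metropolis rule)
        best = min(best, cost)
    else:
        img[a], img[b] = img[b], img[a]                 # revert the swap
    T *= 0.999                                          # cooling schedule
```

The hybrid method's point is that starting this loop from a sedimentation-simulated core, rather than a random image, needs far fewer swaps to reach a low cost.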

  9. A robust method for pulse peak determination in a digital volume pulse waveform with a wandering baseline.

    Science.gov (United States)

    Jang, Dae-Geun; Farooq, Umar; Park, Seung-Hun; Hahn, Minsoo

    2014-10-01

    This paper presents a robust method for pulse peak determination in a digital volume pulse (DVP) waveform with a wandering baseline. The proposed method uses a modified morphological filter (MMF) to eliminate the wandering baseline of the DVP signal with minimum distortion, and a slope sum function (SSF) with an adaptive thresholding scheme to detect pulse peaks in the baseline-removed DVP signal. Further, to cope with over-detected and missed pulse peaks, knowledge-based rules are applied as a postprocessor. The algorithm automatically adjusts detection parameters periodically to adapt to varying beat morphologies and fluctuations. Compared with conventional methods (highpass filtering, linear interpolation, cubic spline interpolation, and wavelet adaptive filtering), our method performs better in terms of the signal-to-error ratio, the computational burden (0.125 seconds for one minute of DVP signal analysis with an Intel Core 2 Quad processor @ 2.40 GHz), the true detection rate (97.32% with an acceptance level of 4 ms) and the normalized error rate (0.18%). In addition, the proposed method detects the true positions of pulse peaks more accurately and is therefore useful for pulse transit time (PTT) and pulse rate variability (PRV) analyses.
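A simplified sketch of the pipeline — morphological baseline removal followed by a slope-sum peak picker — on a synthetic DVP signal. The window lengths, the fixed 50%-of-maximum threshold, and the signal model are illustrative assumptions; the paper's thresholding is adaptive and is followed by knowledge-based correction rules:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def erode(x, w):
    pad = np.pad(x, w // 2, mode="edge")
    return sliding_window_view(pad, w).min(axis=1)

def dilate(x, w):
    pad = np.pad(x, w // 2, mode="edge")
    return sliding_window_view(pad, w).max(axis=1)

def baseline_mmf(x, w):
    """Morphological baseline: average of open-close and close-open filters."""
    oc = erode(dilate(dilate(erode(x, w), w), w), w)   # opening then closing
    co = dilate(erode(erode(dilate(x, w), w), w), w)   # closing then opening
    return 0.5 * (oc + co)

def slope_sum(x, w):
    """Slope sum function: windowed sum of positive first differences."""
    dx = np.maximum(np.diff(x, prepend=x[0]), 0.0)
    return sliding_window_view(np.pad(dx, (w - 1, 0)), w).sum(axis=1)

fs = 100                                               # Hz, synthetic rate
t = np.arange(0, 10, 1 / fs)
beats = np.maximum(np.sin(2 * np.pi * 1.2 * t), 0.0) ** 3  # ~72 bpm pulses
drift = 0.8 * np.sin(2 * np.pi * 0.05 * t)             # wandering baseline
dvp = beats + drift
clean = dvp - baseline_mmf(dvp, w=81)                  # window longer than a pulse
ssf = slope_sum(clean, w=12)
th = 0.5 * ssf.max()                                   # fixed stand-in threshold
m = (ssf[1:-1] > ssf[:-2]) & (ssf[1:-1] >= ssf[2:]) & (ssf[1:-1] > th)
peaks = np.flatnonzero(m) + 1
```

With the structuring element wider than a pulse but much shorter than the drift period, the morphological filter tracks the baseline while leaving the beats, and the SSF fires once per upstroke.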

  10. Digital functions and data reconstruction digital-discrete methods

    CERN Document Server

    Chen, Li M

    2012-01-01

    Digital Functions and Data Reconstruction: Digital-Discrete Methods provides a solid foundation to the theory of digital functions and its applications to image data analysis, digital object deformation, and data reconstruction. This new method has a unique feature in that it is mainly built on discrete mathematics with connections to classical methods in mathematics and computer sciences. Digitally continuous functions and gradually varied functions were developed in the late 1980s. A. Rosenfeld (1986) proposed digitally continuous functions for digital image analysis, especially to describe

  11. A Multifactorial Analysis of Reconstruction Methods Applied After Total Gastrectomy

    Directory of Open Access Journals (Sweden)

    Oktay Büyükaşık

    2010-12-01

    Full Text Available Aim: The aim of this study was to evaluate the reconstruction methods applied after total gastrectomy in terms of postoperative symptomology and nutrition. Methods: This retrospective study was conducted on 31 patients who underwent total gastrectomy due to gastric cancer in the 2nd Clinic of General Surgery, SSK Ankara Training Hospital. Six different reconstruction methods were used and analyzed in terms of age, sex and postoperative complications. One biopsy specimen from the esophagus and two from the jejunum were taken through upper gastrointestinal endoscopy in all cases, and late-period morphological and microbiological changes were examined. Postoperative weight change, dumping symptoms, reflux esophagitis, solid/liquid dysphagia, early satiety, postprandial pain, diarrhea and anorexia were assessed. Results: Of the 31 patients, 18 were male and 13 female; the youngest was 33 years old and the oldest 69. Reconstruction without a pouch was performed in 22 cases and with a pouch in 9 cases. Early satiety, postprandial pain, dumping symptoms, diarrhea and anemia were found most commonly in cases with reconstruction without a pouch. The rate of bacterial colonization of the jejunal mucosa was identical in both groups. Reflux esophagitis was seen most commonly after omega esophagojejunostomy (EJ), and least after Roux-en-Y, Tooley and Tanner 19 EJ. Conclusion: Reconstruction with a pouch after total gastrectomy remains a preferable method. (The Medical Bulletin of Haseki 2010; 48:126-31)

  12. Reconstruction of a ring applicator using CT imaging: impact of the reconstruction method and applicator orientation

    DEFF Research Database (Denmark)

    Hellebust, Taran Paulsen; Tanderup, Kari; Bergstrand, Eva Stabell;

    2007-01-01

    The purpose of this study is to investigate whether the method of applicator reconstruction and/or the applicator orientation influence the dose calculation to points around the applicator for brachytherapy of cervical cancer with CT-based treatment planning. A phantom, containing a fixed ring...

  13. Full Waveform Inversion Using Waveform Sensitivity Kernels

    Science.gov (United States)

    Schumacher, Florian; Friederich, Wolfgang

    2013-04-01

    We present a full waveform inversion concept for applications ranging from seismological to engineering contexts, in which the steps of forward simulation, computation of sensitivity kernels, and the actual inversion are kept separate from each other. We derive waveform sensitivity kernels from Born scattering theory, which for unit material perturbations are identical to the Born integrand for the considered path between source and receiver. The evaluation of such a kernel requires the calculation of Green functions and their strains for single forces at the receiver position, as well as displacement fields and strains originating at the seismic source. We compute these quantities in the frequency domain using the 3D spectral element code SPECFEM3D (Tromp, Komatitsch and Liu, 2008) and the 1D semi-analytical code GEMINI (Friederich and Dalkolmo, 1995) in both Cartesian and spherical frameworks. We developed and implemented the modularized software package ASKI (Analysis of Sensitivity and Kernel Inversion) to compute waveform sensitivity kernels from wavefields generated by any of the above methods (support for more methods is planned), and some examples will be shown. As the kernels can be computed independently of any data values, this approach allows a sensitivity and resolution analysis to be carried out first, without inverting any data. In the context of active seismic experiments, this property may be used to investigate optimal acquisition geometry and expected resolution before actually collecting any data, assuming the background model is known sufficiently well. The actual inversion step can then be repeated at relatively low cost with different (sub)sets of data, adding different smoothing conditions. Using the sensitivity kernels, we expect the waveform inversion to have better convergence properties compared with strategies that use gradients of a misfit function. Also the propagation of the forward wavefield and the backward propagation from the receiver

  14. Fast alternating projection methods for constrained tomographic reconstruction.

    Science.gov (United States)

    Liu, Li; Han, Yongxin; Jin, Mingwu

    2017-01-01

    The alternating projection algorithms are easy to implement and effective for large-scale complex optimization problems, such as constrained reconstruction in X-ray computed tomography (CT). A typical method is to use projection onto convex sets (POCS) for data fidelity and nonnegativity constraints, combined with total variation (TV) minimization (so-called TV-POCS), for sparse-view CT reconstruction. However, this type of method relies on empirically selected parameters for satisfactory reconstruction, is generally slow, and lacks convergence analysis. In this work, we use a convex feasibility set approach to address the problems associated with TV-POCS and propose a framework using full sequential alternating projections, or POCS (FS-POCS), to find the solution in the intersection of the convex constraints of bounded TV function, bounded data fidelity error and non-negativity. The rationale behind FS-POCS is that the mathematically optimal solution of the constrained objective function may not be the physically optimal solution. The breakdown of constrained reconstruction into an intersection of several feasible sets can lead to faster convergence and better quantification of reconstruction parameters in a physically meaningful way, rather than empirically by trial and error. In addition, for large-scale optimization problems, first-order methods are usually used. Not only is the condition for convergence of gradient-based methods derived, but a primal-dual hybrid gradient (PDHG) method is also used for fast convergence of the bounded-TV projection. The newly proposed FS-POCS is evaluated and compared with TV-POCS and another convex feasibility projection method (CPTV) using both digital phantom and pseudo-real CT data to show its superior performance in reconstruction speed, image quality and quantification.
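The alternating-projection idea behind POCS can be sketched with two convex sets: the nonnegative orthant and a data-fidelity ball. The toy system matrix, tolerance and relaxed Landweber "projection" below are illustrative; the paper's feasible sets also include a bounded-TV constraint, omitted here:

```python
import numpy as np

def project_data_ball(x, A, b, eps, n_inner=20):
    """Approximate projection onto {x : ||A x - b|| <= eps} by relaxed
    Landweber steps (a common POCS-style surrogate for the exact projection)."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    for _ in range(n_inner):
        r = A @ x - b
        if np.linalg.norm(r) <= eps:
            break                             # already inside the fidelity ball
        x = x - (A.T @ r) / L
    return x

rng = np.random.default_rng(3)
A = rng.standard_normal((40, 20))            # toy system matrix
x_true = np.abs(rng.standard_normal(20))     # nonnegative ground truth
b = A @ x_true
x = np.zeros(20)
for _ in range(100):                         # alternate the two projections
    x = project_data_ball(x, A, b, eps=1e-3)
    x = np.maximum(x, 0.0)                   # projection onto nonnegative orthant
```

Because both sets are convex and intersect, the alternating scheme converges to a point satisfying both constraints, which is the structure FS-POCS exploits.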

  15. Multiscale viscoacoustic waveform inversion with the second generation wavelet transform and adaptive time-space domain finite-difference method

    Science.gov (United States)

    Ren, Zhiming; Liu, Yang; Zhang, Qunshan

    2014-05-01

    Full waveform inversion (FWI) has the potential to provide preferable subsurface model parameters. The main barrier to its application to real seismic data is its heavy computational cost. Numerical modelling methods are involved in both forward modelling and backpropagation of wavefield residuals, which account for most of the computational time in FWI. We develop a time-space domain finite-difference (FD) method and an adaptive variable-length spatial operator scheme for the numerical simulation of the viscoacoustic equation and extend them to viscoacoustic FWI. Compared with conventional FD methods, different operator lengths are adopted for different velocities and quality factors, which reduces the amount of computation without reducing accuracy. Inversion algorithms also play a significant role in FWI. Conventional single-scale methods are likely to converge to local minima, especially when the initial model is far from the real model. To tackle this problem, we introduce the second-generation wavelet transform to implement multiscale FWI. Compared to other multiscale methods, our method has the advantages of ease of implementation and better time-frequency local analysis ability. The L2 norm is widely used in FWI but gives invalid model estimates when the data are contaminated with strong non-uniform noise. We apply the L1-norm and Huber-norm criteria in time-domain FWI to improve its robustness to noise. Our strategies have been successfully applied in synthetic experiments to both onshore and offshore reflection seismic data. The results of the viscoacoustic Marmousi example indicate that our new FWI scheme consumes fewer computational resources. In addition, the viscoacoustic Overthrust example shows better convergence and more reasonable velocity and quality factor structures. All these results demonstrate that our method can improve the inversion accuracy and computational efficiency of FWI.
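The effect of replacing the L2 criterion with the Huber norm can be seen in a few lines: quadratic for small residuals, linear beyond a transition point, so outlier bursts no longer dominate the misfit. The residuals and transition point `delta` below are synthetic illustrations:

```python
import numpy as np

def huber_misfit(residual, delta):
    """Huber norm: quadratic for |r| <= delta, linear for larger residuals."""
    r = np.abs(residual)
    quad = 0.5 * r**2
    lin = delta * (r - 0.5 * delta)
    return np.where(r <= delta, quad, lin).sum()

rng = np.random.default_rng(4)
res = rng.normal(0, 0.1, 1000)          # well-fit residuals
res[::100] += 50.0                      # strong non-uniform noise bursts
l2 = 0.5 * np.sum(res**2)               # L2 misfit, dominated by the bursts
hub = huber_misfit(res, delta=1.0)      # Huber misfit, bursts grow only linearly
```

In a gradient-based FWI update the corresponding change is that the adjoint source saturates at +/- delta for outliers instead of growing with the residual.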

  16. A marked point process for modeling lidar waveforms.

    Science.gov (United States)

    Mallet, Clément; Lafarge, Florent; Roux, Michel; Soergel, Uwe; Bretar, Frédéric; Heipke, Christian

    2010-12-01

    Lidar waveforms are 1-D signals representing a train of echoes caused by reflections at different targets. Modeling these echoes with the appropriate parametric function is useful to retrieve information about the physical characteristics of the targets. This paper presents a new probabilistic model based upon a marked point process which reconstructs the echoes from recorded discrete waveforms as a sequence of parametric curves. Such an approach makes it possible to fit each mode of a waveform with the most suitable function and to deal with both symmetric and asymmetric echoes. The model takes into account a data term, which measures the coherence between the models and the waveforms, and a regularization term, which introduces prior knowledge about the reconstructed signal. The exploration of the associated configuration space is performed by a reversible jump Markov chain Monte Carlo (RJMCMC) sampler coupled with simulated annealing. Experiments with different kinds of lidar signals, especially from urban scenes, show the high potential of the proposed approach. To further demonstrate the advantages of the suggested method, actual laser scans are classified and the results are reported.

  17. Methods of correlating electropenetrography waveform data to Hemipteran probing behavior and pathogen transmission

    Science.gov (United States)

    Hemipteran feeding behavior cannot be visualized within plant tissues by researchers studying probing and/or transmission attributes of some economically important plant pathogens transmitted by these piercing sucking insects. Electropenetrography (EPG) is currently the most precise method for study...

  18. Bubble reconstruction method for wire-mesh sensors measurements

    Science.gov (United States)

    Mukin, Roman V.

    2016-08-01

    A new algorithm is presented for post-processing of void fraction measurements with wire-mesh sensors, in particular for identifying and reconstructing bubble surfaces in a two-phase flow. This method is a combination of the bubble recognition algorithm presented in Prasser (Nuclear Eng Des 237(15):1608, 2007) and the Poisson surface reconstruction algorithm developed in Kazhdan et al. (Poisson surface reconstruction. In: Proceedings of the fourth eurographics symposium on geometry processing 7, 2006). To verify the proposed technique, the reconstructed individual bubble shapes were compared with those obtained numerically in Sato and Ničeno (Int J Numer Methods Fluids 70(4):441, 2012). Using the difference between reconstructed and reference bubble shapes, the accuracy of the proposed algorithm was estimated. In the next step, the algorithm was applied to void fraction measurements performed in Ylönen (High-resolution flow structure measurements in a rod bundle (Diss., Eidgenössische Technische Hochschule ETH Zürich, Nr. 20961, 2013)) by means of wire-mesh sensors in a rod bundle geometry. The reconstructed bubble shape yields the bubble surface area and volume, and hence its Sauter diameter d_{32} as well. The Sauter diameter proves more suitable for bubble size characterization than the volumetric diameter d_{30}, and is capable of capturing the bi-disperse bubble size distribution in the flow. The effect of a spacer grid was studied as well: for the given spacer grid and the considered flow rates, the bubble size frequency distribution peaks at almost the same position for all cases, approximately at d_{32} = 3.5 mm. This finding can be related to the specific geometry of the spacer grid or the air injection device applied in the experiments, or even to more fundamental properties of the bubble breakup and coagulation processes. In addition, an application of the new algorithm for reconstruction of a large air-water interface in a tube bundle is
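Given a reconstructed closed bubble surface, the Sauter diameter follows directly from the enclosed volume V and surface area S as d_{32} = 6V/S. The sphere check below is a sanity test on that formula, not data from the study:

```python
import numpy as np

def sauter_diameter(volume, area):
    """Sauter diameter d32 = 6 V / S of a reconstructed closed surface."""
    return 6.0 * volume / area

# For a sphere the Sauter diameter equals the geometric diameter.
r = 1.75e-3                               # 3.5 mm bubble (illustrative size)
V = 4.0 / 3.0 * np.pi * r**3
S = 4.0 * np.pi * r**2
d32 = sauter_diameter(V, S)               # 6V/S = 2r = 3.5 mm
```

For non-spherical bubbles V and S come from the triangulated Poisson surface, which is why d_{32} characterizes interfacial area better than the volumetric diameter d_{30}.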

  19. Waveform synthesis of surface waves in a laterally heterogeneous earth by the Gaussian beam method

    Science.gov (United States)

    Yomogida, K.; Aki, K.

    1985-01-01

    The present investigation is concerned with an application of the Gaussian beam method to surface waves in the laterally heterogeneous earth. The employed method has been developed for ray tracing and synthesizing seismograms of surface waves in cases involving the laterally heterogeneous earth. The procedure is based on formulations derived by Yomogida (1985). Vertical structure of the wave field is represented by the eigenfunctions of normal mode theory, while lateral variation is expressed by the parabolic equation as in two-dimensional acoustic waves or elastic body waves. It is demonstrated that a large-amplitude change can result from a slight perturbation in the phase velocity model.

  20. Methods and studies of tongue reconstruction

    Institute of Scientific and Technical Information of China (English)

    Fahmi A. Numan; LIAO Gui-qing

    2007-01-01

    Total and even partial glossectomy can be a major event in the life of a patient. Tongue function is so complex that maintaining normal functions such as swallowing and speech, and preserving laryngeal integrity after surgery, is a primary objective of the surgeon. This task is very difficult and the result is not predictable. In recent years, however, there have been interesting developments in microsurgical techniques, and these advances enable oral and maxillofacial surgeons to achieve better results and improve their patients' quality of life. Even with the new technology, the results are still far from perfect. Several factors may cause variation in the results: some relate to the patient, such as general health, and others to the method used and the nature of the defect after removal of the tumor. This article summarizes the various methods and techniques used over the years to restore oral tongue function after such defects.

  1. A novel method to detect accidental oesophageal intubation based on ventilation pressure waveforms

    NARCIS (Netherlands)

    Kalmar, Alain F.; Absalom, Anthony; Monsieurs, Koenraad G.

    2012-01-01

    Background: Emergency endotracheal intubation results in accidental oesophageal intubation in up to 17% of patients. This is frequently undetected thereby adding to the morbidity and mortality. No current method to detect accidental oesophageal intubation in an emergency setting is both highly sensi

  2. Unreported seismic events found far off-shore Mexico using full-waveform, cross-correlation detection method.

    Science.gov (United States)

    Solano, ErickaAlinne; Hjorleifsdottir, Vala; Perez-Campos, Xyoli

    2015-04-01

    A large subset of seismic events does not have impulsive arrivals: low frequency events in volcanoes, earthquakes in the shallow part of the subduction interface and further down-dip from the traditional seismogenic zone, glacial events, volcanic and non-volcanic tremor, and landslides. A suite of methods can be used to detect these non-impulsive events. One of these methods is full-waveform detection based on time-reversal methods (Solano et al., submitted to GJI). The method uses continuous observed seismograms, together with Green's functions and moment tensor responses calculated for an arbitrary 3D structure. It was applied to the 2012 Ometepec-Pinotepa Nacional earthquake sequence in Guerrero, Mexico. During the time span of the study, we encountered three previously unknown events. One of these was an impulsive earthquake in the Ometepec area that has clear arrivals at only three stations and was therefore not located and reported by the SSN. The other two are previously undetected events, strongly depleted in high frequencies, that occurred far outside the search area. A very rough estimate places these two events on the portion of the East Pacific Rise around 9° N. They are detected despite their distance from the search area thanks to favorable move-out across the array of the Mexican National Seismological Service (SSN) network. We are expanding the study area to the EPR and to a longer period of time, with the objective of finding more events in that region. We will present an analysis of the newly detected events, as well as any further findings, at the meeting.
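A minimal single-channel sketch of waveform-correlation detection; the actual method additionally uses 3D Green's functions, moment-tensor responses and stacking over a network. The template waveform, noise level and 0.8 correlation threshold are illustrative assumptions:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def detect(stream, template, threshold=0.8):
    """Sliding normalized cross-correlation; returns one onset sample
    per contiguous run of correlations above the threshold."""
    n = len(template)
    tpl = (template - template.mean()) / template.std()
    wins = sliding_window_view(stream, n)
    mu = wins.mean(axis=1, keepdims=True)
    sd = wins.std(axis=1, keepdims=True) + 1e-12
    cc = ((wins - mu) / sd * tpl).mean(axis=1)       # Pearson r at each lag
    hits = np.flatnonzero(cc > threshold)
    if hits.size == 0:
        return hits
    return hits[np.insert(np.diff(hits) > 1, 0, True)]  # first hit of each run

rng = np.random.default_rng(5)
t = np.linspace(0, 1, 200)
template = np.sin(2 * np.pi * 3 * t) * np.hanning(200)   # synthetic waveform
stream = 0.1 * rng.standard_normal(5000)                 # continuous noise record
stream[1000:1200] += template                            # buried event
stream[3300:3500] += 0.7 * template                      # weaker copy, same shape
onsets = detect(stream, template)
```

Because the correlation is amplitude-normalized, the weaker second event is found just as reliably as the first, which is what makes this family of detectors effective for small, non-impulsive events.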

  3. Reconstruction method for curvilinear structures from two views

    Science.gov (United States)

    Hoffmann, Matthias; Brost, Alexander; Jakob, Carolin; Koch, Martin; Bourier, Felix; Kurzidim, Klaus; Hornegger, Joachim; Strobel, Norbert

    2013-03-01

    Minimally invasive interventions often involve tools of curvilinear shape such as catheters and guide-wires. If the camera parameters of a fluoroscopic system or a stereoscopic endoscope are known, a 3-D reconstruction of corresponding points can be computed by triangulation. Manual identification of point correspondences is time consuming, but there exist methods that automatically select corresponding points along curvilinear structures. The focus here is on the evaluation of a recently published method for catheter reconstruction from two views. A previous evaluation of this method using clinical data yielded promising results; for that evaluation, however, no 3-D ground truth data was available, so the error could only be estimated using the forward-projection of the reconstruction. In this paper, we present a more extensive evaluation of this method based on both clinical and phantom data. For the evaluation using clinical images, 36 data sets and two different catheters were available. The mean error found when reconstructing both catheters was 0.1 mm +/- 0.1 mm. To evaluate the error in 3-D, images of a phantom were acquired from 13 different angulations. For the phantom, a 3-D C-arm CT voxel data set was also available. The reconstruction error was calculated by comparing the triangulated 3-D reconstruction result to the 3-D voxel data set. The evaluation yielded an average error of 1.2 mm +/- 1.2 mm for the circumferential mapping catheter and 1.3 mm +/- 1.0 mm for the ablation catheter.
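Triangulation of a corresponding point pair from two calibrated views can be sketched with the standard linear (DLT) construction. The projection matrices and the 3-D point below are made-up stand-ins for a real C-arm geometry:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point from two views.
    P1, P2: 3x4 projection matrices; x1, x2: image coordinates (u, v)."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],            # u1 * (row 3) - (row 1)
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                            # null vector of A (homogeneous point)
    return X[:3] / X[3]

def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Hypothetical two-view geometry observing a catheter tip at (10, 20, 30) mm:
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])          # reference view
R = np.array([[0.0, 0.0, 1.0], [0.0, 1.0, 0.0], [-1.0, 0.0, 0.0]])
P2 = np.hstack([R, np.array([[5.0], [0.0], [40.0]])])  # rotated second view
X_true = np.array([10.0, 20.0, 30.0])
X_rec = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
```

With noise-free correspondences the DLT solution is exact; the reconstruction errors reported above arise from calibration and correspondence errors along the curvilinear structure.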

  4. A Total Variation-Based Reconstruction Method for Dynamic MRI

    Directory of Open Access Journals (Sweden)

    Germana Landi

    2008-01-01

    Full Text Available In recent years, total variation (TV) regularization has become a popular and powerful tool for image restoration and enhancement. In this work, we apply TV minimization to improve the quality of dynamic magnetic resonance images. Dynamic magnetic resonance imaging is an increasingly popular clinical technique used to monitor spatio-temporal changes in tissue structure. Fast data acquisition is necessary in order to capture the dynamic process. Most commonly, the requirement of high temporal resolution is fulfilled by sacrificing spatial resolution, so the numerical methods have to address the issue of image reconstruction from limited Fourier data. One of the most successful techniques for dynamic imaging applications is the reduced-encoding imaging by generalized-series reconstruction method of Liang and Lauterbur. However, even though this method utilizes a priori data for optimal image reconstruction, the produced dynamic images are degraded by truncation artifacts, most notably Gibbs ringing, due to the low spatial resolution of the data. We use a TV regularization strategy to reduce these truncation artifacts in the dynamic images. The resulting TV minimization problem is solved by the fixed point iteration method of Vogel and Oman. Results of test problems with simulated and real data are presented to illustrate the effectiveness of the proposed approach in reducing the truncation artifacts of the reconstructed images.

  5. The equivalent source method as a sparse signal reconstruction

    DEFF Research Database (Denmark)

    Fernandez Grande, Efren; Xenaki, Angeliki

    2015-01-01

    This study proposes an acoustic holography method for sound field reconstruction based on a point source model, which uses the Compressed Sensing (CS) framework to provide a sparse solution. Sparsity implies that the sound field can be represented by a minimal number of non-zero terms, point...

  6. Robust Methods for Sensing and Reconstructing Sparse Signals

    Science.gov (United States)

    Carrillo, Rafael E.

    2012-01-01

    Compressed sensing (CS) is an emerging signal acquisition framework that goes against the traditional Nyquist sampling paradigm. CS demonstrates that a sparse, or compressible, signal can be acquired using a low rate acquisition process. Since noise is always present in practical data acquisition systems, sensing and reconstruction methods are…

  7. Reconstruction of CT images by the Bayes- back projection method

    CERN Document Server

    Haruyama, M; Takase, M; Tobita, H

    2002-01-01

    In the course of research on quantitative assay of non-destructive measurement of radioactive waste, we have developed a unique program based on Bayesian theory for the reconstruction of transmission computed tomography (TCT) images. The reconstruction of cross-section images in CT technology usually employs the Filtered Back Projection method. The new image reconstruction program reported here is based on the Bayesian Back Projection method, and it iteratively improves the image with every step of measurement. Namely, this method can promptly display a cross-section image corresponding to each angled projection datum as soon as it is measured. Hence, it is possible to observe an improved cross-section view reflecting each projection datum in almost real time. From the basic theory of the Bayesian Back Projection method, it can be applied not only to CT of the 1st, 2nd, and 3rd generations. This report deals with a reconstruction program of cross-section images in the CT of ...

  8. 3D reconstruction methods of coronal structures by radio observations

    Science.gov (United States)

    Aschwanden, Markus J.; Bastian, T. S.; White, Stephen M.

    1992-01-01

    The ability to carry out the three dimensional (3D) reconstruction of structures in the solar corona would represent a major advance in the study of the physical properties in active regions and in flares. Methods which allow a geometric reconstruction of quasistationary coronal structures (for example active region loops) or dynamic structures (for example flaring loops) are described: stereoscopy of multi-day imaging observations by the VLA (Very Large Array); tomography of optically thin emission (in radio or soft x-rays); multifrequency band imaging by the VLA; and tracing of magnetic field lines by propagating electron beams.

  9. AIR Tools - A MATLAB package of algebraic iterative reconstruction methods

    DEFF Research Database (Denmark)

    Hansen, Per Christian; Saxild-Hansen, Maria

    2012-01-01

    are implemented: Algebraic Reconstruction Techniques (ART) and Simultaneous Iterative Reconstruction Techniques (SIRT). In addition we provide a few simplified test problems from medical and seismic tomography. For each iterative method, a number of strategies are available for choosing the relaxation parameter...... and the stopping rule. The relaxation parameter can be fixed, or chosen adaptively in each iteration; in the former case we provide a new ‘‘training’’ algorithm that finds the optimal parameter for a given test problem. The stopping rules provided are the discrepancy principle, the monotone error rule, and the NCP...
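As a sketch of the ART family the package implements, the classical Kaczmarz sweep can be written in a few lines of NumPy; the matrix below is a toy stand-in for a tomography system, not one of the package's test problems, and the fixed relaxation parameter stands in for the package's training/adaptive strategies:

```python
import numpy as np

def art(A, b, sweeps=200, relax=1.0):
    """Kaczmarz-type ART: cyclically project the iterate onto each
    hyperplane a_i . x = b_i, with relaxation parameter 'relax'."""
    x = np.zeros(A.shape[1])
    row_norms = np.einsum('ij,ij->i', A, A)
    for _ in range(sweeps):
        for i in range(A.shape[0]):
            if row_norms[i] > 0.0:
                x += relax * (b[i] - A[i] @ x) / row_norms[i] * A[i]
    return x

# Tiny consistent system standing in for a tomography projection matrix.
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])
x_true = np.array([1.0, 2.0, 3.0])
x = art(A, A @ x_true)
```

For a consistent system, the sweeps converge to the exact solution; the stopping rules mentioned in the record (discrepancy principle, monotone error rule, NCP) decide when to halt in the noisy case.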

  10. Optimization of the ship type using waveform by means of Rankine source method; Rankine source ho ni yoru hakei wo mochiita funagata saitekika ni tsuite

    Energy Technology Data Exchange (ETDEWEB)

    Hirayama, A.; Eguchi, T. [Mitsui Engineering and Shipbuilding Co. Ltd., Tokyo (Japan)

    1996-04-10

    Among the numerical calculation methods for steady-state wave-making problems, the panel shift Rankine source (PSRS) method has the advantages of rather precise determination of the wave patterns of practical ship types and a short calculation time. The wave pattern around the hull was calculated by means of the PSRS method, and a waveform analysis was carried out to obtain the amplitude function of the original ship type. Based on this amplitude function, a ship type improvement method aiming at hull form optimization was derived using a conditional calculus of variations. A Series 60 (Cb=0.6) ship type was selected for the improvement study. It was suggested that an optimum design reducing the wave-making resistance can be obtained by this method: for the Series 60 hull improved with this method, a large reduction in wave-making resistance was confirmed by numerical waveform analysis. It was also suggested that ship type improvement aiming at reduced wave-making resistance can be achieved in a shorter time and with less labor than methods based on waveform analysis of tank tests. 5 refs., 9 figs.

  11. Two-Dimensional Impact Reconstruction Method for Rail Defect Inspection

    Directory of Open Access Journals (Sweden)

    Jie Zhao

    2014-01-01

    Full Text Available The safety of train operation is seriously threatened by rail defects, so it is of great significance to inspect for rail defects dynamically while the train is operating. This paper presents a two-dimensional impact reconstruction method to realize on-line inspection of rail defects. The proposed method uses preprocessing technology to convert time-domain vertical vibration signals, acquired by a wireless sensor network, into space signals. A modern time-frequency analysis method is improved to reconstruct the obtained multisensor information. Then, an image fusion processing technology based on spectrum thresholding and node color labeling is proposed to reduce the noise and blank out the periodic impact signals caused by rail joints and locomotive running gear. This method converts the aperiodic impact signals caused by rail defects into partially periodic impact signals, and locates the rail defects. An application shows that the two-dimensional impact reconstruction method displays the impacts caused by rail defects clearly, and is an effective on-line rail defect inspection method.

  12. Parallel Algorithm in Surface Wave Waveform Inversion

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    In surface wave waveform inversion, we want to reconstruct the 3D shear wave velocity structure, a calculation beyond the capability of today's powerful personal computers or even workstations. We therefore designed a highly parallelized algorithm and carried out the inversion on a parallel computer based on the partitioned waveform inversion (PWI). It partitions the large-scale optimization problem into a number of independent small-scale problems and reduces the computational effort by several orders of magnitude. We adopted surface waveform inversion with an equal-block (2°×2°) discretization.
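The partitioning idea behind PWI can be illustrated with a toy sketch: a problem that decomposes into independent blocks is solved block by block, each piece small enough for a single processor (the block sizes and data here are invented, and the blocks are solved sequentially rather than in parallel):

```python
import numpy as np

def partitioned_solve(blocks, rhs):
    """Solve independent least-squares sub-problems one block at a time;
    in a PWI-style scheme each block could run on a separate processor."""
    return np.concatenate([np.linalg.lstsq(A, b, rcond=None)[0]
                           for A, b in zip(blocks, rhs)])

# Three invented independent sub-problems standing in for the partitions.
rng = np.random.default_rng(3)
blocks = [rng.standard_normal((8, 4)) for _ in range(3)]
xs = [rng.standard_normal(4) for _ in range(3)]
rhs = [A @ x for A, x in zip(blocks, xs)]
x_all = partitioned_solve(blocks, rhs)
```

Because the sub-problems share no unknowns, solving them independently reproduces the solution of the full block-diagonal system.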

  13. Ultrasound Tomography in Circular Measurement Configuration using Nonlinear Reconstruction Method

    Directory of Open Access Journals (Sweden)

    Tran Quang-Huy

    2015-12-01

    Full Text Available Ultrasound tomography offers the potential for detecting very small tumors, smaller than the wavelength of the incident pressure wave, without ionizing radiation. Based on the inverse scattering technique, this imaging modality uses material properties such as sound contrast and attenuation in order to detect small objects. One of the most commonly used methods in ultrasound tomography is the Distorted Born Iterative Method (DBIM). The compressed sensing technique was applied in the DBIM as a promising approach for improving image reconstruction quality. Nevertheless, the random measurement configuration of transducers in this method is very difficult to set up in practice. Therefore, in this paper, we take advantage of the simpler set-up of a sparse uniform measurement configuration of transducers and the high-quality image reconstruction of l1 non-linear regularization in the sparse scattering domain. The simulation results demonstrate the high performance of the proposed approach in terms of a tremendously reduced total runtime and normalized error.

  14. Efficient ghost cell reconstruction for embedded boundary methods

    Science.gov (United States)

    Rapaka, Narsimha; Al-Marouf, Mohamad; Samtaney, Ravi

    2016-11-01

    A non-iterative linear reconstruction procedure for Cartesian grid embedded boundary methods is introduced. The method exploits the inherent geometrical advantage of the Cartesian grid and employs batch sorting of the ghost cells to eliminate the need for an iterative solution procedure. This reduces the computational cost of the reconstruction procedure significantly, especially for large scale problems in a parallel environment that have significant communication overhead, e.g., patch based adaptive mesh refinement (AMR) methods. In this approach, prior computation and storage of the weightage coefficients for the neighbour cells is not required which is particularly attractive for moving boundary problems and memory intensive stationary boundary problems. The method utilizes a compact and unique interpolation stencil but also provides second order spatial accuracy. It provides a single step/direct reconstruction for the ghost cells that enforces the boundary conditions on the embedded boundary. The method is extendable to higher order interpolations as well. Examples that demonstrate the advantages of the present approach are presented. Supported by the KAUST Office of Competitive Research Funds under Award No. URF/1/1394-01.

  15. A new method for analyzing auditory brain-stem response waveforms using a moving-minimum subtraction procedure of digitized analog recordings

    Directory of Open Access Journals (Sweden)

    Johan Källstrand

    2014-06-01

    Full Text Available Johan Källstrand,1 Tommy Lewander,2 Eva Baghdassarian,2,3 Sören Nielzén4 1SensoDetect AB, Lund, 2Department of Neuroscience, Medical Faculty, Uppsala University, 3Department of Psychiatry, Uppsala University Hospital, Uppsala, 4Department of Psychiatry, Medical Faculty, University of Lund, Lund, Sweden Abstract: The auditory brain-stem response (ABR) waveform comprises a set of waves (labeled I–VII) recorded with scalp electrodes over 10 ms after an auditory stimulation with a brief click sound. Quite often the waves are fused (confluent) and the baseline irregular and sloped, making wave latencies and wave amplitudes difficult to establish. In the present paper, we describe a method, labeled moving-minimum subtraction, based on digitization of the analog ABR waveform (154 data points/ms), in order to align the ABR response to a straight baseline, often with clear baseline separation of waves and resolution of fused waves. Application of the new method to groups of patients showed marked differences in ABR waveforms between patients with schizophrenia, patients with adult attention deficit/hyperactivity disorder, and healthy controls. The findings show promise regarding the possibility of identifying ABR markers to be used as biomarkers supporting clinical diagnoses of these and other neuropsychiatric disorders. Keywords: auditory brain-stem response, digitization, moving-minimum subtraction method, baseline alignment, schizophrenia, ADHD
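A moving-minimum baseline subtraction of the kind described can be sketched as follows. The window length and the synthetic "drifting record" are invented for illustration and do not reflect the authors' 154 points/ms implementation:

```python
import numpy as np

def moving_minimum_subtract(signal, window):
    """Estimate the baseline as a sliding-window minimum and subtract it,
    flattening slow drift while leaving peaks standing on zero."""
    n = len(signal)
    half = window // 2
    baseline = np.empty(n)
    for i in range(n):
        lo, hi = max(0, i - half), min(n, i + half + 1)
        baseline[i] = signal[lo:hi].min()
    return signal - baseline

# Two narrow peaks riding on a sloped baseline: a crude stand-in for
# ABR waves on a drifting, sloped record.
t = np.linspace(0.0, 10.0, 1000)
drift = 0.5 * t
peaks = np.exp(-(t - 3.0) ** 2 / 0.01) + np.exp(-(t - 6.0) ** 2 / 0.01)
flattened = moving_minimum_subtract(drift + peaks, window=101)
```

The window must be wider than the peaks but narrower than the drift scale, so the local minimum tracks the baseline rather than the waves themselves.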

  16. A two-way regularization method for MEG source reconstruction

    KAUST Repository

    Tian, Tian Siva

    2012-09-01

    The MEG inverse problem refers to the reconstruction of the neural activity of the brain from magnetoencephalography (MEG) measurements. We propose a two-way regularization (TWR) method to solve the MEG inverse problem under the assumptions that only a small number of locations in space are responsible for the measured signals (focality), and each source time course is smooth in time (smoothness). The focality and smoothness of the reconstructed signals are ensured respectively by imposing a sparsity-inducing penalty and a roughness penalty in the data fitting criterion. A two-stage algorithm is developed for fast computation, where a raw estimate of the source time course is obtained in the first stage and then refined in the second stage by the two-way regularization. The proposed method is shown to be effective on both synthetic and real-world examples. © Institute of Mathematical Statistics, 2012.

  17. Optical Sensors and Methods for Underwater 3D Reconstruction

    Directory of Open Access Journals (Sweden)

    Miquel Massot-Campos

    2015-12-01

    Full Text Available This paper presents a survey on optical sensors and methods for 3D reconstruction in underwater environments. The techniques to obtain range data have been listed and explained, together with the different sensor hardware that makes them possible. The literature has been reviewed, and a classification has been proposed for the existing solutions. New developments, commercial solutions and previous reviews in this topic have also been gathered and considered.

  18. Long-code Signal Waveform Monitoring Method for Navigation Satellites

    Institute of Scientific and Technical Information of China (English)

    刘建成; 王宇; 宫磊; 徐晓燕

    2016-01-01

    Because the signals are weak, waveform monitoring for navigation satellites in orbit is one of the difficulties in satellite navigation signal quality monitoring research, so a signal waveform monitoring method for navigation satellites in orbit is proposed. Based on the Vernier sampling principle, a large-diameter parabolic antenna is used for in-orbit satellite signal collection. After initial phase and residual frequency elimination, accumulation, and combination, a clear chip waveform is obtained. For civilian and long-code signals with the same code rate, the PN code phase bias between them can be determined. By using a large-diameter parabolic antenna for COMPASS satellite tracking, the civilian and long-code chip waveforms of several COMPASS satellites in the B1 band were obtained, along with the PN code phase bias of the satellite signals. The results show that there is little difference between the civilian signal waveform and the long-code signal waveform profiles, but there is a code phase bias between them.

  19. Computational methods estimating uncertainties for profile reconstruction in scatterometry

    Science.gov (United States)

    Gross, H.; Rathsfeld, A.; Scholze, F.; Model, R.; Bär, M.

    2008-04-01

    The solution of the inverse problem in scatterometry, i.e. the determination of periodic surface structures from light diffraction patterns, is incomplete without knowledge of the uncertainties associated with the reconstructed surface parameters. With decreasing feature sizes of lithography masks, increasing demands on metrology techniques arise. Scatterometry as a non-imaging indirect optical method is applied to periodic line-space structures in order to determine geometric parameters like side-wall angles, heights, top and bottom widths and to evaluate the quality of the manufacturing process. The numerical simulation of the diffraction process is based on the finite element solution of the Helmholtz equation. The inverse problem seeks to reconstruct the grating geometry from measured diffraction patterns. Restricting the class of gratings and the set of measurements, this inverse problem can be reformulated as a non-linear operator equation in Euclidean spaces. The operator maps the grating parameters to the efficiencies of diffracted plane wave modes. We employ a Gauss-Newton type iterative method to solve this operator equation and end up minimizing the deviation of the measured efficiency or phase shift values from the simulated ones. The reconstruction properties and the convergence of the algorithm, however, is controlled by the local conditioning of the non-linear mapping and the uncertainties of the measured efficiencies or phase shifts. In particular, the uncertainties of the reconstructed geometric parameters essentially depend on the uncertainties of the input data and can be estimated by various methods. We compare the results obtained from a Monte Carlo procedure to the estimations gained from the approximative covariance matrix of the profile parameters close to the optimal solution and apply them to EUV masks illuminated by plane waves with wavelengths in the range of 13 nm.
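The Gauss-Newton iteration used for the non-linear operator equation can be sketched generically. Here the grating-parameters-to-efficiencies map is replaced by a toy two-parameter exponential model, so only the algorithmic skeleton (linearize, solve, update) reflects the record above:

```python
import numpy as np

def gauss_newton(f, jac, x0, iters=20):
    """Gauss-Newton: linearize the residual at the current iterate and
    solve the resulting linear least-squares problem for the update."""
    x = np.array(x0, dtype=float)
    for _ in range(iters):
        x += np.linalg.lstsq(jac(x), -f(x), rcond=None)[0]
    return x

# Toy forward model standing in for the map from grating parameters to
# diffraction efficiencies: recover (a, b) from samples of a*exp(-b*t).
t = np.linspace(0.0, 1.0, 20)
def f(p):
    return p[0] * np.exp(-p[1] * t) - 2.0 * np.exp(-3.0 * t)
def jac(p):
    e = np.exp(-p[1] * t)
    return np.column_stack([e, -p[0] * t * e])

p = gauss_newton(f, jac, [1.0, 1.0])
```

As the record notes, the conditioning of the Jacobian near the solution governs how input uncertainties propagate into the reconstructed parameters.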

  20. Waveform synthesizer

    Science.gov (United States)

    Franks, Larry A.; Nelson, Melvin A.

    1981-01-01

    A method of producing optical and electrical pulses of desired shape. An optical pulse of arbitrary but defined shape illuminates one end of an array of optical fiber waveguides of differing lengths to time differentiate the input pulse. The optical outputs at the other end of the array are combined to form a synthesized pulse of desired shape.
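Numerically, the fiber array acts as a bank of delays whose weighted outputs are summed at the combiner; a sketch with invented delays, weights, and pulse shape:

```python
import numpy as np

def synthesize(pulse, delays, weights):
    """Sum delayed, weighted copies of one input pulse, mimicking the
    optical combination at the output of fibers of differing lengths."""
    out = np.zeros(len(pulse) + max(delays))
    for d, w in zip(delays, weights):
        out[d:d + len(pulse)] += w * pulse
    return out

# A narrow Gaussian pulse split into three delayed, attenuated copies.
pulse = np.exp(-(np.arange(16) - 8.0) ** 2 / 8.0)
out = synthesize(pulse, delays=[0, 10, 20], weights=[1.0, 0.5, 0.25])
```

Choosing the fiber lengths (delays) and coupling ratios (weights) shapes the synthesized output pulse.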

  1. A New Method for Coronal Magnetic Field Reconstruction

    Science.gov (United States)

    Yi, Sibaek; Choe, Gwang-Son; Cho, Kyung-Suk; Kim, Kap-Sung

    2017-08-01

    A precise method of coronal magnetic field reconstruction (extrapolation) is an indispensable tool for the understanding of various solar activities. A variety of reconstruction codes have been developed so far and are available to researchers nowadays, but each bears its own shortcomings. In this paper, a new efficient method for coronal magnetic field reconstruction is presented. The method imposes only the normal components of the magnetic field and current density at the bottom boundary, to avoid overspecification of the reconstruction problem, and employs vector potentials to guarantee divergence-freeness. In our method, the normal component of current density is imposed not by adjusting the tangential components of A, but by adjusting its normal component. This allows us to avoid a numerical instability that on and off arises in codes using A. In real reconstruction problems, information for the lateral and top boundaries is absent. The arbitrariness of the boundary conditions imposed there, as well as various preprocessing, brings about a diversity of resulting solutions. We impose the source surface condition at the top boundary to accommodate flux imbalance, which always shows up in magnetograms. To enhance the convergence rate, we equip our code with a gradient-method type accelerator. Our code is tested on two analytical force-free solutions. When the solution is given only at the bottom boundary, our result surpasses competitors in most figures of merit devised by Schrijver et al. (2006). We have also applied our code to the real active region NOAA 11974, in which two M-class flares and a halo CME took place. The EUV observations show a sudden appearance of an erupting loop before the first flare. Our numerical solutions show that two entwining flux tubes exist before the flare and that their shackling is released after the CME, with one of them opened up. We suggest that the erupting loop is created by magnetic reconnection between ...

  2. Comparing 3D virtual methods for hemimandibular body reconstruction.

    Science.gov (United States)

    Benazzi, Stefano; Fiorenza, Luca; Kozakowski, Stephanie; Kullmer, Ottmar

    2011-07-01

    Reconstruction of fractured, distorted, or missing parts in human skeleton presents an equal challenge in the fields of paleoanthropology, bioarcheology, forensics, and medicine. This is particularly important within the disciplines such as orthodontics and surgery, when dealing with mandibular defects due to tumors, developmental abnormalities, or trauma. In such cases, proper restorations of both form (for esthetic purposes) and function (restoration of articulation, occlusion, and mastication) are required. Several digital approaches based on three-dimensional (3D) digital modeling, computer-aided design (CAD)/computer-aided manufacturing techniques, and more recently geometric morphometric methods have been used to solve this problem. Nevertheless, comparisons among their outcomes are rarely provided. In this contribution, three methods for hemimandibular body reconstruction have been tested. Two bone defects were virtually simulated in a 3D digital model of a human hemimandible. Accordingly, 3D digital scaffolds were obtained using the mirror copy of the unaffected hemimandible (Method 1), the thin plate spline (TPS) interpolation (Method 2), and the combination between TPS and CAD techniques (Method 3). The mirror copy of the unaffected hemimandible does not provide a suitable solution for bone restoration. The combination between TPS interpolation and CAD techniques (Method 3) produces an almost perfect-fitting 3D digital model that can be used for biocompatible custom-made scaffolds generated by rapid prototyping technologies.

  3. Gene Expression Network Reconstruction by LEP Method Using Microarray Data

    Directory of Open Access Journals (Sweden)

    Na You

    2012-01-01

    Full Text Available Gene expression network reconstruction using microarray data is widely studied aiming to investigate the behavior of a gene cluster simultaneously. Under the Gaussian assumption, the conditional dependence between genes in the network is fully described by the partial correlation coefficient matrix. Due to the high dimensionality and sparsity, we utilize the LEP method to estimate it in this paper. Compared to the existing methods, the LEP reaches the highest PPV with the sensitivity controlled at the satisfactory level. A set of gene expression data from the HapMap project is analyzed for illustration.
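Under the Gaussian assumption stated above, the conditional dependence structure follows directly from the precision (inverse covariance) matrix. The sketch below uses a plain matrix inverse rather than the LEP estimator (which is designed for the high-dimensional, sparse case), on simulated "genes" with an invented chain structure:

```python
import numpy as np

def partial_correlation(X):
    """Partial correlations from the precision matrix P = inv(cov):
    rho_ij = -P_ij / sqrt(P_ii * P_jj)."""
    P = np.linalg.inv(np.cov(X, rowvar=False))
    d = np.sqrt(np.diag(P))
    R = -P / np.outer(d, d)
    np.fill_diagonal(R, 1.0)
    return R

# Simulated chain: gene1 and gene3 are conditionally independent given
# gene2, so their partial correlation should vanish even though their
# marginal correlation is high.
rng = np.random.default_rng(2)
n = 5000
z = rng.standard_normal(n)
g1 = z + 0.3 * rng.standard_normal(n)
g2 = z + 0.3 * rng.standard_normal(n)
g3 = g2 + 0.3 * rng.standard_normal(n)
R = partial_correlation(np.column_stack([g1, g2, g3]))
```

With thousands of genes and few samples the covariance matrix is singular, which is why penalized estimators such as LEP are needed in place of the direct inverse.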

  4. Filtered Iterative Reconstruction (FIR) via Proximal Forward-Backward Splitting: A Synergy of Analytical and Iterative Reconstruction Method for CT

    CERN Document Server

    Gao, Hao

    2015-01-01

    This work develops a general framework, namely the filtered iterative reconstruction (FIR) method, to incorporate the analytical reconstruction (AR) method into the iterative reconstruction (IR) method, for enhanced CT image quality. Specifically, FIR is formulated as a combination of filtered data fidelity and sparsity regularization, and then solved by the proximal forward-backward splitting (PFBS) algorithm. As a result, the image reconstruction decouples data fidelity and image regularization with a two-step iterative scheme, during which an AR-projection step updates the filtered data fidelity term, while a denoising solver updates the sparsity regularization term. During the AR-projection step, the image is projected to the data domain to form the data residual, and then reconstructed by a certain AR to a residual image, which is in turn weighted together with the previous image iterate to form the next image iterate. Since the eigenvalues of the AR-projection operator are close to unity, PFBS-based FIR has a fast convergenc...

  5. Tensor-based dynamic reconstruction method for electrical capacitance tomography

    Science.gov (United States)

    Lei, J.; Mu, H. P.; Liu, Q. B.; Li, Z. H.; Liu, S.; Wang, X. Y.

    2017-03-01

    Electrical capacitance tomography (ECT) is an attractive visualization measurement method, in which the acquisition of high-quality images is beneficial for the understanding of the underlying physical or chemical mechanisms of the dynamic behaviors of the measurement objects. In real-world measurement environments, imaging objects are often in a dynamic process, and the exploitation of the spatial-temporal correlations related to the dynamic nature will contribute to improving the imaging quality. Different from existing imaging methods that are often used in ECT measurements, in this paper a dynamic image sequence is stacked into a third-order tensor that consists of a low rank tensor and a sparse tensor within the framework of the multiple measurement vectors model and the multi-way data analysis method. The low rank tensor models the similar spatial distribution information among frames, which is slowly changing over time, and the sparse tensor captures the perturbations or differences introduced in each frame, which is rapidly changing over time. With the assistance of the Tikhonov regularization theory and the tensor-based multi-way data analysis method, a new cost function, with the considerations of the multi-frames measurement data, the dynamic evolution information of a time-varying imaging object and the characteristics of the low rank tensor and the sparse tensor, is proposed to convert the imaging task in the ECT measurement into a reconstruction problem of a third-order image tensor. An effective algorithm is developed to search for the optimal solution of the proposed cost function, and the images are reconstructed via a batching pattern. The feasibility and effectiveness of the developed reconstruction method are numerically validated.

  6. Reconstruction and analysis of hybrid composite shells using meshless methods

    Science.gov (United States)

    Bernardo, G. M. S.; Loja, M. A. R.

    2017-02-01

    The importance of focusing on the research of viable models to predict the behaviour of structures which may possess in some cases complex geometries is an issue that is growing in different scientific areas, ranging from the civil and mechanical engineering to the architecture or biomedical devices fields. In these cases, the research effort to find an efficient approach to fit laser scanning point clouds, to the desired surface, has been increasing, leading to the possibility of modelling as-built/as-is structures and components' features. However, combining the task of surface reconstruction and the implementation of a structural analysis model is not a trivial task. Although there are works focusing those different phases in separate, there is still an effective need to find approaches able to interconnect them in an efficient way. Therefore, achieving a representative geometric model able to be subsequently submitted to a structural analysis in a similar based platform is a fundamental step to establish an effective expeditious processing workflow. With the present work, one presents an integrated methodology based on the use of meshless approaches, to reconstruct shells described by points' clouds, and to subsequently predict their static behaviour. These methods are highly appropriate on dealing with unstructured points clouds, as they do not need to have any specific spatial or geometric requirement when implemented, depending only on the distance between the points. Details on the formulation, and a set of illustrative examples focusing the reconstruction of cylindrical and double-curvature shells, and its further analysis, are presented.

  8. Asymptotic approximation method of force reconstruction: Proof of concept

    Science.gov (United States)

    Sanchez, J.; Benaroya, H.

    2017-08-01

    An important problem in engineering is the determination of the system input based on the system response. This type of problem is difficult to solve as it is often ill-defined, and produces inaccurate or non-unique results. Current reconstruction techniques typically involve the employment of optimization methods or additional constraints to regularize the problem, but these methods are not without their flaws as they may be sub-optimally applied and produce inadequate results. An alternative approach is developed that draws upon concepts from control systems theory, the equilibrium analysis of linear dynamical systems with time-dependent inputs, and asymptotic approximation analysis. This paper presents the theoretical development of the proposed method. A simple application of the method is presented to demonstrate the procedure. A more complex application to a continuous system is performed to demonstrate the applicability of the method.

  9. An improved image reconstruction method for optical intensity correlation Imaging

    Science.gov (United States)

    Gao, Xin; Feng, Lingjie; Li, Xiyu

    2016-12-01

    The intensity correlation imaging method is a novel kind of interference imaging with favorable prospects in deep-space object recognition. However, restricted by the low detection signal-to-noise ratio (SNR), it is usually very difficult to obtain high-quality images of deep-space objects such as high-Earth-orbit (HEO) satellites with existing phase retrieval methods. In this paper, based on the a priori intensity statistical distribution model of the object and the characteristics of the measurement noise distribution, an improved method of Prior Information Optimization (PIO) is proposed to reduce the number of ambiguous images and accelerate the phase retrieval procedure, thus realizing fine image reconstruction. As the simulations and experiments show, compared to previous methods, our method can acquire higher-resolution images with less error in low-SNR conditions.

  10. Simple Waveforms, Simply Described

    Science.gov (United States)

    Baker, John G.

    2008-01-01

    Since the first Lazarus Project calculations, it has been frequently noted that binary black hole merger waveforms are 'simple.' In this talk we examine some of the simple features of coalescence and merger waveforms from a variety of binary configurations. We suggest an interpretation of the waveforms in terms of an implicit rotating source. This allows a coherent description of both the inspiral waveforms, derivable from post-Newtonian (PN) calculations, and the numerically determined merger-ringdown. We focus particularly on similarities in the features of the various multipolar waveform components generated by various systems. The late-time phase evolution of most of these waveform components is accurately described by a simple analytic fit. We also discuss apparent relationships among phase and amplitude evolution. Taken together with PN information, the features we describe can provide an approximate analytic description of full coalescence waveforms, complementary to other analytic waveform approaches.
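The claim that late-time phase evolution is well captured by a simple analytic fit can be illustrated on a toy damped sinusoid, a crude stand-in for a ringdown mode (not an actual numerical-relativity waveform; frequency and damping time are invented):

```python
import numpy as np

# Toy 'ringdown' h(t) = exp(-t/tau) * exp(i*omega*t): its unwrapped
# phase is linear in time, so a first-order polynomial fit recovers omega.
omega, tau = 3.0, 4.0
t = np.linspace(0.0, 10.0, 500)
h = np.exp(-t / tau) * np.exp(1j * omega * t)
phase = np.unwrap(np.angle(h))
slope, intercept = np.polyfit(t, phase, 1)
```

The amplitude decay drops out of the phase entirely, which is one reason phase and amplitude evolution can be fit separately.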

  11. Interpolation in waveform space: enhancing the accuracy of gravitational waveform families using numerical relativity

    CERN Document Server

    Cannon, Kipp; Hanna, Chad; Keppel, Drew; Pfeiffer, Harald

    2012-01-01

    Matched-filtering for the identification of compact object mergers in gravitational-wave antenna data involves the comparison of the data stream to a bank of template gravitational waveforms. Typically the template bank is constructed from phenomenological waveform models since these can be evaluated for an arbitrary choice of physical parameters. Recently it has been proposed that singular value decomposition (SVD) can be used to reduce the number of templates required for detection. As we show here, another benefit of SVD is its removal of biases from the phenomenological templates along with a corresponding improvement in their ability to represent waveform signals obtained from numerical relativity (NR) simulations. Using these ideas, we present a method that calibrates a reduced SVD basis of phenomenological waveforms against NR waveforms in order to construct a new waveform approximant with improved accuracy and faithfulness compared to the original phenomenological model. The new waveform family is giv...
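The SVD-based bank reduction can be sketched on a toy family of sinusoidal "templates" (standing in for phenomenological waveforms; the bank size, sampling, and energy cutoff are all invented):

```python
import numpy as np

# Toy template bank: sinusoids with slowly varying frequency lie close to
# a low-dimensional subspace, which the SVD exposes.
t = np.linspace(0.0, 1.0, 256)
bank = np.array([np.sin(2 * np.pi * f * t) for f in np.linspace(10, 11, 50)])

U, s, Vt = np.linalg.svd(bank, full_matrices=False)
energy = np.cumsum(s ** 2) / np.sum(s ** 2)
k = int(np.searchsorted(energy, 0.9999)) + 1   # components for 99.99% energy
basis = Vt[:k]                                  # reduced SVD basis

# Any template is well represented by its projection onto the reduced basis.
h = bank[25]
h_rec = basis.T @ (basis @ h)
err = np.linalg.norm(h - h_rec) / np.linalg.norm(h)
```

Filtering against the `k` basis vectors instead of all 50 templates is the computational saving; calibrating the basis against NR waveforms, as the record proposes, then improves its faithfulness.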

  12. Using waveform complexity in the search for transient gravitational wave events

    Science.gov (United States)

    Millhouse, Margaret; Littenberg, Tyson; Cornish, Neil; Kanner, Jonah; LIGO Collaboration

    2016-03-01

    Searches for short, unmodeled gravitational waves using ground based interferometers are impacted by transient noise artifacts, or ``glitches'', which can be difficult to distinguish from gravitational waves of astrophysical origin. The BayesWave algorithm presents a novel method of distinguishing glitches from short duration astrophysical signals by using waveform complexity to rank candidate events. In addition to identifying signals and glitches, BayesWave also provides robust waveform reconstruction with minimal assumptions. I will showcase the algorithm's glitch rejection capabilities, and discuss the performance of BayesWave during Advanced LIGO's first observational run.

  13. An Optimized Method for Terrain Reconstruction Based on Descent Images

    Directory of Open Access Journals (Sweden)

    Xu Xinchao

    2016-02-01

    Full Text Available An optimization method is proposed to perform high-accuracy terrain reconstruction of the landing area of Chang’e III. First, feature matching is conducted using geometric model constraints. Then, the initial terrain is obtained and the initial normal vector of each point is solved on the basis of the initial terrain. By changing the vector around the initial normal vector in small steps, a set of new vectors is obtained. By combining these vectors with the direction of light and camera, functions are set up on the basis of a surface reflection model. Then, a series of gray values is derived by solving the equations. The new optimized vector is recorded when the obtained gray value is closest to the corresponding pixel. Finally, the optimized terrain is obtained after iteration of the vector field. Experiments were conducted using laboratory images and descent images of Chang’e III. The results showed that the performance of the proposed method was better than that of the classical feature matching method. It can provide a reference for terrain reconstruction of the landing area in subsequent moon exploration missions.

  14. Local motion-compensated method for high-quality 3D coronary artery reconstruction.

    Science.gov (United States)

    Liu, Bo; Bai, Xiangzhi; Zhou, Fugen

    2016-12-01

    The 3D reconstruction of the coronary artery from X-ray angiograms rotationally acquired on a C-arm has great clinical value. While cardiac-gated reconstruction has shown promising results, it suffers from the problem of residual motion. This work proposed a new local motion-compensated reconstruction method to handle this issue. An initial image was first reconstructed using a regularized iterative reconstruction method. Then a 3D/2D registration method was proposed to estimate the residual vessel motion. Finally, the residual motion was compensated in the final reconstruction using the extended iterative reconstruction method. Through quantitative evaluation, it was found that a high-quality 3D reconstruction could be obtained, and the result was comparable to that of the state-of-the-art method.

  15. Optical arbitrary waveform characterization using linear spectrograms.

    Science.gov (United States)

    Jiang, Zhi; Leaird, Daniel E; Long, Christopher M; Boppart, Stephen A; Weiner, Andrew M

    2010-08-01

    We demonstrate the first application of linear spectrogram methods based on electro-optic phase modulation to characterize optical arbitrary waveforms generated under spectral line-by-line control. This approach offers both superior sensitivity and self-referencing capability for retrieval of periodic high repetition rate optical arbitrary waveforms.

  16. Comparison of pulse phase and thermographic signal reconstruction processing methods

    Science.gov (United States)

    Oswald-Tranta, Beata; Shepard, Steven M.

    2013-05-01

    Active thermography data for nondestructive testing has traditionally been evaluated by either visual or numerical identification of anomalous surface temperature contrast in the IR image sequence obtained as the target sample cools in response to thermal stimulation. However, in recent years, it has been demonstrated that considerably more information about the subsurface condition of a sample can be obtained by evaluating the time history of each pixel independently. In this paper, we evaluate the capabilities of two such analysis techniques, Pulse Phase Thermography (PPT) and Thermographic Signal Reconstruction (TSR), using induction and optical flash excitation. Data sequences from optical pulse and scanned induction heating are analyzed with both methods. Results are evaluated in terms of signal-to-background ratio for a given subsurface feature. In addition to the experimental data, we present finite element simulation models with varying flaw diameter and depth, and discuss size measurement accuracy and the effect of noise on detection limits and sensitivity for both methods.
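    The published TSR technique fits a low-order polynomial to each pixel's cooling curve in log-log time and then works with derivatives of the fit. The sketch below illustrates the idea on a single synthetic pixel; the t^(-1/2) cooling law and the noise level are assumptions chosen for illustration, not real measurement data:

```python
import numpy as np

# One-pixel TSR-style sketch: fit a low-order polynomial to the cooling
# curve in log-log space, then differentiate the fit. Synthetic data
# follow the ideal semi-infinite-sample law T(t) ~ t^(-1/2) plus noise.
rng = np.random.default_rng(0)
t = np.linspace(0.05, 5.0, 200)                 # time samples, s
T = t ** -0.5 * (1 + 0.01 * rng.standard_normal(t.size))

logt, logT = np.log(t), np.log(T)
coeffs = np.polyfit(logt, logT, deg=4)          # low-order log-log fit
fit = np.poly1d(coeffs)
first_deriv = np.polyder(fit)                   # d(log T)/d(log t)

# For an ideal semi-infinite sample the log-log slope is -0.5;
# subsurface defects appear as departures from this value.
slope_mid = first_deriv(np.log(1.0))
print(f"log-log slope near t = 1 s: {slope_mid:.3f}")
```

In practice the polynomial coefficients (and the first and second derivative images built from them) replace the raw sequence, which suppresses temporal noise while preserving defect signatures.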

  17. Computational methods for three-dimensional microscopy reconstruction

    CERN Document Server

    Frank, Joachim

    2014-01-01

    Approaches to the recovery of three-dimensional information on a biological object, which are often formulated or implemented initially in an intuitive way, are concisely described here based on physical models of the object and the image-formation process. Both three-dimensional electron microscopy and X-ray tomography can be captured in the same mathematical framework, leading to closely-related computational approaches, but the methodologies differ in detail and hence pose different challenges. The editors of this volume, Gabor T. Herman and Joachim Frank, are experts in the respective methodologies and present research at the forefront of biological imaging and structural biology.   Computational Methods for Three-Dimensional Microscopy Reconstruction will serve as a useful resource for scholars interested in the development of computational methods for structural biology and cell biology, particularly in the area of 3D imaging and modeling.

  18. Efficient DPCA SAR imaging with fast iterative spectrum reconstruction method

    Institute of Scientific and Technical Information of China (English)

    FANG Jian; ZENG JinShan; XU ZongBen; ZHAO Yao

    2012-01-01

    The displaced phase center antenna (DPCA) technique is an effective strategy to achieve wide-swath synthetic aperture radar (SAR) imaging with high azimuth resolution. However, traditionally, it requires strict limitation of the pulse repetition frequency (PRF) to avoid non-uniform sampling. Otherwise, any deviation could bring serious ambiguity if the data are directly processed using a matched filter. To break this limitation, a recently proposed spectrum reconstruction method is capable of recovering the true spectrum from the non-uniform samples. However, the performance is sensitive to the selection of the PRF. Sparse regularization based imaging may provide a way to overcome this sensitivity. The existing time-domain method, however, requires a large-scale observation matrix to be built, which brings a high computational cost. In this paper, we propose a frequency-domain method, called the iterative spectrum reconstruction method, through integration of the sparse regularization technique with spectrum analysis of the DPCA signal. By approximately expressing the observation in the frequency domain, which is realized via a series of decoupled linear operations, the method performs SAR imaging that is not directly based on the observation matrix, which reduces the computational cost from O(N²) to O(N log N) (where N is the number of range cells), and is therefore more efficient than the time-domain method. The sparse regularization scheme, realized via a fast thresholding iteration, has been adopted in this method, which brings robustness of the imaging process to the PRF selection. We provide a series of simulations and ground-based experiments to demonstrate the high efficiency and robustness of the method. The simulations show that the new method is almost as fast as the traditional mono-channel algorithm, and works well almost independently of the PRF selection. Consequently, the suggested method can be accepted as a practical and efficient wide-swath SAR imaging technique.
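    The "fast thresholding iteration" mentioned above belongs to the family of iterative soft-thresholding (ISTA) schemes for sparse regularization. The following minimal sketch shows that generic scheme on a hypothetical random sensing matrix; the matrix, signal, and parameter values are illustrative assumptions, not the paper's frequency-domain operators:

```python
import numpy as np

# Minimal ISTA sketch: recover a sparse x0 from y = A @ x0 by iterating
# a gradient step on the least-squares term plus soft-thresholding.
rng = np.random.default_rng(1)
m, n = 80, 200
A = rng.standard_normal((m, n)) / np.sqrt(m)   # hypothetical sensing matrix
idx = rng.choice(n, 5, replace=False)
x0 = np.zeros(n)
x0[idx] = np.array([3.0, -2.5, 4.0, 3.5, -3.0])
y = A @ x0

def soft(u, tau):
    """Soft-thresholding (proximal operator of the l1 norm)."""
    return np.sign(u) * np.maximum(np.abs(u) - tau, 0.0)

L = np.linalg.norm(A, 2) ** 2      # Lipschitz constant of the LS gradient
lam = 0.2                          # sparsity weight (illustrative)
x = np.zeros(n)
for _ in range(1000):
    x = soft(x + A.T @ (y - A @ x) / L, lam / L)

rel_err = np.linalg.norm(x - x0) / np.linalg.norm(x0)
```

Because each iteration needs only products with A and A.T, the same scheme stays fast when those products are replaced by FFT-based frequency-domain operations, which is the source of the O(N log N) cost quoted in the abstract.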

  19. Features of the method of large-scale paleolandscape reconstructions

    Science.gov (United States)

    Nizovtsev, Vyacheslav; Erman, Natalia; Graves, Irina

    2017-04-01

    The method of paleolandscape reconstruction was tested in a key area of the basin of the Central Dubna, located at the junction of the Taldom and Sergiev Posad districts of the Moscow region. A series of maps was created showing paleoreconstructions of the original (indigenous) living environment of the initial settlers during the main periods of the Holocene and features of human interaction with landscapes at the early stages of economic development of the territory (in the early and middle Holocene). The sequence of these works is as follows. 1. Comprehensive analysis of topographic maps of different scales, aerial and satellite images, stock materials of geological and hydrological surveys and prospecting of peat deposits, archaeological evidence on ancient settlements, palynological and osteological analyses, and complex landscape and archaeological studies. 2. Mapping of factual material and analysis of the spatial distribution of archaeological sites. 3. Large-scale field landscape mapping (sample areas) and compilation of maps of the modern landscape structure. On this basis, edaphic properties of the main types of natural boundaries were analyzed and their resource base was determined. 4. Reconstruction of the lake-river system during the main periods of the Holocene. The boundaries of restored paleolakes were determined based on the thickness and spatial extent of decay ooze deposits. 5. On the basis of the landscape-edaphic method, paleolandscape reconstructions for the main periods of the Holocene were performed. In the reconstruction of the original, indigenous flora we relied on data from palynological studies conducted in the studied area or in similar landscape conditions. 6. The result was a retrospective analysis and periodization of the settlement process, economic development and the formation of the first anthropogenically transformed landscape complexes.
The reconstruction of the dynamics of the

  20. Statistics-based reconstruction method with high random-error tolerance for integral imaging.

    Science.gov (United States)

    Zhang, Juan; Zhou, Liqiu; Jiao, Xiaoxue; Zhang, Lei; Song, Lipei; Zhang, Bo; Zheng, Yi; Zhang, Zan; Zhao, Xing

    2015-10-01

    A three-dimensional (3D) digital reconstruction method for integral imaging with high random-error tolerance based on statistics is proposed. By statistically analyzing the points reconstructed by triangulation from all corresponding image points in an elemental image array, 3D reconstruction with high random-error tolerance can be realized. To simulate the impacts of random errors, random offsets with different error levels are added to different numbers of elemental images in simulation and optical experiments. The results of simulation and optical experiments showed that the proposed statistics-based reconstruction method has more stable and better reconstruction accuracy than the conventional reconstruction method. It can be verified that the proposed method can effectively reduce the impacts of random errors on the 3D reconstruction of integral imaging. This method is simple and very helpful to the development of integral imaging technology.
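    The statistical idea can be sketched in a few lines: each elemental image yields one triangulated estimate of a 3D point, and aggregating the estimates with a robust statistic (here the per-axis median, one plausible choice) tolerates large random offsets in a few elemental images. All numbers below are made up for illustration:

```python
import numpy as np

# Toy model: 49 elemental images each give a triangulated estimate of one
# 3D point; a few elemental images carry large random offsets.
rng = np.random.default_rng(3)
true_point = np.array([10.0, -4.0, 55.0])
n_elemental = 49
estimates = true_point + 0.05 * rng.standard_normal((n_elemental, 3))

# Corrupt 8 elemental images with large positive random offsets.
bad = rng.choice(n_elemental, 8, replace=False)
estimates[bad] += rng.uniform(2.0, 5.0, (8, 3))

mean_est = estimates.mean(axis=0)            # naive aggregate
median_est = np.median(estimates, axis=0)    # robust aggregate

err_mean = np.linalg.norm(mean_est - true_point)
err_median = np.linalg.norm(median_est - true_point)
```

The mean is dragged off by the corrupted elemental images while the median stays close to the true point, which is the behavior the abstract's "high random-error tolerance" refers to.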

  1. Reconstruction of nonlinear wave propagation

    Science.gov (United States)

    Fleischer, Jason W; Barsi, Christopher; Wan, Wenjie

    2013-04-23

    Disclosed are systems and methods for characterizing a nonlinear propagation environment by numerically propagating a measured output waveform resulting from a known input waveform. The numerical propagation reconstructs the input waveform, and in the process, the nonlinear environment is characterized. In certain embodiments, knowledge of the characterized nonlinear environment facilitates determination of an unknown input based on a measured output. Similarly, knowledge of the characterized nonlinear environment also facilitates formation of a desired output based on a configurable input. In both situations, the input thus characterized and the output thus obtained include features that would normally be lost in linear propagations. Such features can include evanescent waves and peripheral waves, such that an image thus obtained is an inherently wide-angle, far-field form of microscopy.

  2. Waveform analysis of sound

    CERN Document Server

    Tohyama, Mikio

    2015-01-01

    What is this sound? What does that sound indicate? These are two questions frequently heard in daily conversation. Sound results from the vibrations of elastic media and in daily life provides informative signals of events happening in the surrounding environment. In interpreting auditory sensations, the human ear seems particularly good at extracting the signal signatures from sound waves. Although exploring auditory processing schemes may be beyond our capabilities, source signature analysis is a very attractive area in which signal-processing schemes can be developed using mathematical expressions. This book is inspired by such processing schemes and is oriented to signature analysis of waveforms. Most of the examples in the book are taken from data of sound and vibrations; however, the methods and theories are mostly formulated using mathematical expressions rather than by acoustical interpretation. This book might therefore be attractive and informative for scientists, engineers, researchers, and graduat...

  3. Three-Dimensional Reconstruction from Cone-Beam Projections for Flat and Curved Detectors: Reconstruction Method Development.

    Science.gov (United States)

    Hu, Hui

    This dissertation is principally concerned with improving the performance of a prototype image-intensifier-based cone-beam volume computed tomography system by removing or partially removing two of its restricting factors, namely, the inaccuracy of the current cone-beam reconstruction algorithm and the image distortion associated with the curved detecting surface of the image intensifier. To improve the accuracy of cone-beam reconstruction, first, the currently most accurate and computationally efficient cone-beam reconstruction method, the Feldkamp algorithm, is investigated by studying the relation of an original unknown function with its Feldkamp estimate. From this study, a partial knowledge on the unknown function can be derived in the Fourier domain from its Feldkamp estimate. Then, based on the Gerchberg-Papoulis algorithm, a modified iterative algorithm efficiently incorporating the Fourier knowledge as well as the a priori spatial knowledge on the unknown function is devised and tested to improve the cone-beam reconstruction accuracy by postprocessing the Feldkamp estimate. Two methods are developed to remove the distortion associated with the curved surface of the image intensifier. A calibrating method based on a rubber-sheet remapping is designed and implemented. As an alternative, the curvature can be considered in the reconstruction algorithm. As an initial effort along this direction, a generalized convolution-backprojection reconstruction algorithm for fan-beam and any circular detector arrays is derived and studied.
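    A Gerchberg-Papoulis-style iteration alternates between enforcing known Fourier-domain data and a priori spatial knowledge. The 1-D sketch below shows that generic alternating-constraint structure; the signal size, support region, and known frequency band are assumptions for illustration, not the dissertation's cone-beam setting:

```python
import numpy as np

# Gerchberg-Papoulis-style alternating constraints in 1-D:
# (a) enforce known Fourier data on a band, (b) enforce spatial support.
rng = np.random.default_rng(2)
n = 128
support = np.zeros(n, dtype=bool)
support[40:88] = True                       # a priori spatial support
x_true = np.zeros(n)
x_true[support] = rng.standard_normal(int(support.sum()))

X_true = np.fft.fft(x_true)
band = np.zeros(n, dtype=bool)              # frequencies where data is known
band[:40] = band[-39:] = True               # symmetric low-frequency band

x = np.zeros(n)
errs = []
for _ in range(300):
    X = np.fft.fft(x)
    X[band] = X_true[band]                  # (a) enforce known Fourier data
    x = np.real(np.fft.ifft(X))
    x[~support] = 0.0                       # (b) enforce spatial support
    errs.append(np.linalg.norm(x - x_true))
```

Both constraint sets are affine and contain the true signal, so the error to the true signal is non-increasing as the iteration fills in the unknown part of the spectrum.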

  4. Pseudo waveform inversion

    Energy Technology Data Exchange (ETDEWEB)

    Shin, Chang Soo; Park, Keun Pil [Korea Inst. of Geology Mining and Materials, Taejon (Korea, Republic of); Suh, Jung Hee; Hyun, Byung Koo; Shin, Sung Ryul [Seoul National University, Seoul (Korea, Republic of)

    1995-12-01

    The seismic reflection exploration technique, one of the geophysical methods for oil exploration, became effective for imaging the subsurface structure with the rapid development of computers. However, imaging of the subsurface based on conventional data processing makes it almost impossible to obtain information on physical properties of the subsurface such as velocity and density. Since seismic data are implicitly a function of subsurface velocities, it is necessary to develop an inversion method that can delineate the velocity structure using seismic tomography and waveform inversion. As a tool to perform seismic inversion, a seismic forward modeling program using ray tracing should be developed. In this study, we have developed an algorithm that calculates the travel time through a complex geologic structure using shooting ray tracing, by subdividing the geologic model into blocky structures, each having a constant velocity. With this travel time calculation, the partial derivatives of travel time can be calculated efficiently without difficulty. Since the current ray tracing technique has limitations in calculating travel times for extremely complex geologic models, our aim in the future is to develop a more powerful ray tracer using the finite element technique. After applying the pseudo waveform inversion to seismic data from offshore Korea, we can obtain a subsurface velocity model and use the result to improve the quality of the seismic data processing. If conventional seismic data processing and seismic interpretation are linked with this inversion technique, high-quality seismic data processing can be expected to image the structure of the subsurface. Future research will focus on developing a powerful ray tracer that can calculate travel times for extremely complex geologic models. (author). 39 refs., 32 figs., 2 tabs.
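    For the simplest blocky model, horizontal layers of constant velocity traversed by a vertical ray, both the travel time and its partial derivatives with respect to the layer velocities have closed forms. The sketch below uses made-up layer values and stands in for the full shooting ray tracer described above:

```python
import numpy as np

# Toy travel-time computation for a blocky model reduced to horizontal
# layers of constant velocity (illustrative values, not from the report).
# A vertical ray accumulates t = sum(thickness_i / velocity_i), and the
# partial derivatives dt/dv_i = -thickness_i / velocity_i**2 are exactly
# the sensitivities a travel-time inversion needs.
thickness = np.array([500.0, 800.0, 1200.0])   # layer thicknesses, m
velocity = np.array([1500.0, 2500.0, 3500.0])  # layer velocities, m/s

t_one_way = np.sum(thickness / velocity)       # one-way vertical time, s
t_two_way = 2.0 * t_one_way                    # reflection (two-way) time
dt_dv = -thickness / velocity**2               # sensitivities, s/(m/s)
```

In the general case the ray path bends at block boundaries and must be found by shooting, but the derivative of travel time with respect to a block velocity keeps the same simple form (path length in the block divided by minus the velocity squared), which is why the abstract notes the derivatives come almost for free.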

  5. Reconstruction

    Directory of Open Access Journals (Sweden)

    Stefano Zurrida

    2011-01-01

    Full Text Available Breast cancer is the most common cancer in women. Primary treatment is surgery, with mastectomy as the main treatment for most of the twentieth century. However, over that time, the extent of the procedure varied, and less extensive mastectomies are employed today compared to those used in the past, as excessively mutilating procedures did not improve survival. Today, many women receive breast-conserving surgery, usually with radiotherapy to the residual breast, instead of mastectomy, as it has been shown to be as effective as mastectomy in early disease. The relatively new skin-sparing mastectomy, often with immediate breast reconstruction, improves aesthetic outcomes and is oncologically safe. Nipple-sparing mastectomy is newer and used increasingly, with better acceptance by patients, and again appears to be oncologically safe. Breast reconstruction is an important adjunct to mastectomy, as it has a positive psychological impact on the patient, contributing to improved quality of life.

  6. An MSK Radar Waveform

    Science.gov (United States)

    Quirk, Kevin J.; Srinivasan, Meera

    2012-01-01

    The minimum-shift-keying (MSK) radar waveform is formed by periodically extending a waveform that separately modulates the in-phase and quadrature-phase components of the carrier with offset pulse-shaped pseudo-noise (PN) sequences. To generate this waveform, a pair of periodic PN sequences is each passed through a pulse-shaping filter with a half-sinusoid impulse response. These shaped PN waveforms are then offset by half a chip time and are separately modulated on the in-phase and quadrature-phase components of an RF carrier. This new radar waveform allows an increase in radar resolution without the need for additional spectrum. In addition, it provides self-interference suppression and configurable peak sidelobes. Compared strictly on the basis of the expressions for delay resolution, main-lobe bandwidth, effective Doppler bandwidth, and peak ambiguity sidelobe, it appears that bi-phase coded (BPC) outperforms the new MSK waveform. However, a radar waveform must meet certain constraints imposed by the transmission and reception of the modulation, as well as criteria dictated by the observation. In particular, the phase discontinuity of the BPC waveform presents a significant impediment to the achievement of finer resolutions in radar measurements, a limitation that is overcome by using the continuous-phase MSK waveform. The phase continuity, and the lower fractional out-of-band power of MSK, increases the allowable bandwidth compared with BPC, resulting in a factor of two increase in the range resolution of the radar. The MSK waveform also has been demonstrated to have an ambiguity sidelobe structure very similar to BPC, where the sidelobe levels can be decreased by increasing the length of the m-sequence used in its generation. This ability to set the peak sidelobe level is advantageous as it allows the system to be configured to a variety of targets, including those with a larger dynamic range.
Other conventionally used waveforms that possess an even greater

  7. Yeast ancestral genome reconstructions: the possibilities of computational methods II.

    Science.gov (United States)

    Chauve, Cedric; Gavranovic, Haris; Ouangraoua, Aida; Tannier, Eric

    2010-09-01

    Since the availability of assembled eukaryotic genomes, the first one being a budding yeast, many computational methods for the reconstruction of ancestral karyotypes and gene orders have been developed. The difficulty has always been to assess their reliability, since we often miss a good knowledge of the true ancestral genomes to compare their results to, as well as a good knowledge of the evolutionary mechanisms to test them on realistic simulated data. In this study, we propose some measures of reliability of several kinds of methods, and apply them to infer and analyse the architectures of two ancestral yeast genomes, based on the sequence of seven assembled extant ones. The pre-duplication common ancestor of S. cerevisiae and C. glabrata has been inferred manually by Gordon et al. (Plos Genet. 2009). We show why, in this case, a good convergence of the methods is explained by some properties of the data, and why results are reliable. In another study, Jean et al. (J. Comput Biol. 2009) proposed an ancestral architecture of the last common ancestor of S. kluyveri, K. thermotolerans, K. lactis, A. gossypii, and Z. rouxii inferred by a computational method. In this case, we show that the dataset does not seem to contain enough information to infer a reliable architecture, and we construct a higher resolution dataset which gives a good reliability on a new ancestral configuration.

  8. Revisiting a model-independent dark energy reconstruction method

    Energy Technology Data Exchange (ETDEWEB)

    Lazkoz, Ruth; Salzano, Vincenzo; Sendra, Irene [Euskal Herriko Unibertsitatea, Fisika Teorikoaren eta Zientziaren Historia Saila, Zientzia eta Teknologia Fakultatea, Bilbao (Spain)

    2012-09-15

    In this work we offer new insights into the model-independent dark energy reconstruction method developed by Daly and Djorgovski (Astrophys. J. 597:9, 2003; Astrophys. J. 612:652, 2004; Astrophys. J. 677:1, 2008). Our results, using updated SNeIa and GRBs, allow us to highlight some of the intrinsic weaknesses of the method. Conclusions on the main dark energy features as drawn from this method are intimately related to the features of the samples themselves, particularly for GRBs, which are poor performers in this context and cannot be used for cosmological purposes; that is, the state of the art does not allow one to regard them on the same quality basis as SNeIa. We find there is considerable sensitivity to some parameters (window width, overlap, selection criteria) affecting the results. Then, we try to establish the current redshift range for which one can make solid predictions on dark energy evolution. Finally, we strengthen the former view that this method is modest in the sense that it provides only a picture of the global trend and has to be managed very carefully. On the other hand, we believe it offers an interesting complement to other approaches, given that it works on minimal assumptions. (orig.)

  9. Investigating 3d Reconstruction Methods for Small Artifacts

    Science.gov (United States)

    Evgenikou, V.; Georgopoulos, A.

    2015-02-01

    Small artifacts have always been a real challenge when it comes to 3D modelling. They usually present severe difficulties for their 3D reconstruction. Lately, the demand for the production of 3D models of small artifacts, especially in the cultural heritage domain, has dramatically increased. As in many cases, there are no specifications and standards for this task. This paper investigates the efficiency of several mainly low-cost methods for 3D model production of such small artifacts. Moreover, the material, the color and the surface complexity of these objects are also investigated. Both image-based and laser scanning methods have been considered as alternative data acquisition methods. The evaluation has been confined to the 3D meshes, as texture depends on the imaging properties, which are not investigated in this project. The resulting meshes have been compared to each other for their completeness and accuracy. It is hoped that the outcomes of this investigation will be useful to researchers who are planning to embark on mass production of 3D models of small artifacts.

  10. INVESTIGATING 3D RECONSTRUCTION METHODS FOR SMALL ARTIFACTS

    Directory of Open Access Journals (Sweden)

    V. Evgenikou

    2015-02-01

    Full Text Available Small artifacts have always been a real challenge when it comes to 3D modelling. They usually present severe difficulties for their 3D reconstruction. Lately, the demand for the production of 3D models of small artifacts, especially in the cultural heritage domain, has dramatically increased. As in many cases, there are no specifications and standards for this task. This paper investigates the efficiency of several mainly low-cost methods for 3D model production of such small artifacts. Moreover, the material, the color and the surface complexity of these objects are also investigated. Both image-based and laser scanning methods have been considered as alternative data acquisition methods. The evaluation has been confined to the 3D meshes, as texture depends on the imaging properties, which are not investigated in this project. The resulting meshes have been compared to each other for their completeness and accuracy. It is hoped that the outcomes of this investigation will be useful to researchers who are planning to embark on mass production of 3D models of small artifacts.

  11. SOUND-SPEED AND ATTENUATION IMAGING OF BREAST TISSUE USING WAVEFORM TOMOGRAPHY OF TRANSMISSION ULTRASOUND DATA

    Energy Technology Data Exchange (ETDEWEB)

    HUANG, LIANJIE [Los Alamos National Laboratory; PRATT, R. GERHARD [Los Alamos National Laboratory; DURIC, NEB [Los Alamos National Laboratory; LITTRUP, PETER [Los Alamos National Laboratory

    2007-01-25

    Waveform tomography results are presented from 800 kHz ultrasound transmission scans of a breast phantom, and from an in vivo ultrasound breast scan: significant improvements are demonstrated in resolution over time-of-flight reconstructions. Quantitative reconstructions of both sound-speed and inelastic attenuation are recovered. The data were acquired in the Computed Ultrasound Risk Evaluation (CURE) system, comprising a 20 cm diameter solid-state ultrasound ring array with 256 active, non-beamforming transducers. Waveform tomography is capable of resolving variations in acoustic properties at sub-wavelength scales. This was verified through comparison of the breast phantom reconstructions with x-ray CT results: the final images resolve variations in sound speed with a spatial resolution close to 2 mm. Waveform tomography overcomes the resolution limit of time-of-flight methods caused by finite frequency (diffraction) effects. The method is a combination of time-of-flight tomography, and 2-D acoustic waveform inversion of the transmission arrivals in ultrasonic data. For selected frequency components of the waveforms, a finite-difference simulation of the visco-acoustic wave equation is used to compute synthetic data in the current model, and the data residuals are formed by subtraction. The residuals are used in an iterative, gradient-based scheme to update the sound-speed and attenuation model to produce a reduced misfit to the data. Computational efficiency is achieved through the use of time-reversal of the data residuals to construct the model updates. Lower frequencies are used first, to establish the long wavelength components of the image, and higher frequencies are introduced later to provide increased resolution.

  12. An Improved Total Variation Minimization Method Using Prior Images and Split-Bregman Method in CT Reconstruction

    Science.gov (United States)

    2016-01-01

    Compressive Sensing (CS) theory has great potential for reconstructing Computed Tomography (CT) images from sparse-view projection data, and the Total Variation- (TV-) based CT reconstruction method is very popular. However, it does not directly incorporate prior images into the reconstruction. To improve the quality of reconstructed images, this paper proposed an improved TV minimization method using prior images and the Split-Bregman method in CT reconstruction, which uses prior images to obtain valuable previous information and promote the subsequent imaging process. The images obtained asynchronously were registered via Locally Linear Embedding (LLE). To validate the method, two studies were performed. Numerical simulation using an abdomen phantom demonstrated that the proposed method enables accurate reconstruction of image objects under sparse projection data. A real dataset was used to further validate the method. PMID:27689076
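    The Split-Bregman machinery behind such TV methods is easiest to see in the 1-D denoising case, where every subproblem has a closed form. The sketch below minimizes 0.5·||x − y||² + λ·||Dx||₁ with D a finite-difference operator; the signal, noise level, and parameters λ, μ are illustrative assumptions, not the paper's CT setup:

```python
import numpy as np

# Split-Bregman for 1-D TV denoising: split d = D @ x and alternate
# (1) a linear solve for x, (2) soft-thresholding for d, (3) a Bregman
# update for the auxiliary variable b.
rng = np.random.default_rng(4)
x_true = np.concatenate([np.zeros(40), np.ones(40), 0.3 * np.ones(40)])
y = x_true + 0.1 * rng.standard_normal(x_true.size)

n = y.size
D = np.diff(np.eye(n), axis=0)             # (n-1) x n difference operator
lam, mu = 0.3, 1.0                         # illustrative parameters
soft = lambda u, t: np.sign(u) * np.maximum(np.abs(u) - t, 0.0)

x, d, b = y.copy(), np.zeros(n - 1), np.zeros(n - 1)
A = np.eye(n) + mu * D.T @ D               # constant system matrix
for _ in range(100):
    x = np.linalg.solve(A, y + mu * D.T @ (d - b))   # quadratic subproblem
    d = soft(D @ x + b, lam / mu)                    # shrinkage subproblem
    b += D @ x - d                                    # Bregman update

err_noisy = np.linalg.norm(y - x_true)
err_tv = np.linalg.norm(x - x_true)
```

The same splitting carries over to 2-D CT reconstruction by replacing the identity data term with the projection operator and D with image gradients; the prior-image variant of the paper adds a further penalty against the registered prior.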

  13. Early-photon guided reconstruction method for time-domain fluorescence lifetime tomography

    Institute of Scientific and Technical Information of China (English)

    Lin Zhang; Chuangjian Cai; Yanlu Lv; Jianwen Luo

    2016-01-01

    A reconstruction method guided by early-photon fluorescence yield tomography is proposed for time-domain fluorescence lifetime tomography (FLT) in this study. The method employs the early-arriving photons to reconstruct a fluorescence yield map, which is utilized as a priori information to reconstruct the FLT via all the photons along the temporal point spread functions. Phantom experiments demonstrate that, compared with the method using all the photons for reconstruction of fluorescence yield and lifetime maps, the proposed method can achieve higher spatial resolution and reduced crosstalk between different targets without sacrificing the quantification accuracy of lifetime and contrast between heterogeneous targets.

  14. A reconstruction method of porous media integrating soft data with hard data

    Institute of Scientific and Technical Information of China (English)

    LU DeTang; ZHANG Ting; YANG JiaQing; LI DaoLun; KONG XiangYan

    2009-01-01

    The three-dimensional reconstruction of porous media is of great significance to the research of mechanisms of fluid flow. The real three-dimensional structural data of porous media are helpful to describe the irregular topologic structures in porous media. The reconstruction of porous media will be inaccurate when only hard data or no conditional data are available. Reconstructed results can be more accurate when soft data are used during reconstruction. Integrating soft data with hard data, a method based on multiple-point geostatistics (MPS) is proposed to reconstruct three-dimensional structures of porous media. The variogram curves and permeability, computed by the lattice Boltzmann method (LBM), of the reconstructed images and the target image obtained from real volume data were compared, showing that the structural characteristics of reconstructed porous media using both soft data and hard data as conditional data are most similar to those of real volume data.

  15. A method for the joint inversion of geodetic and seismic waveform data using ABIC: application to the 1997 Manyi, Tibet, earthquake

    Science.gov (United States)

    Funning, Gareth J.; Fukahata, Yukitoshi; Yagi, Yuji; Parsons, Barry

    2014-03-01

    Geodetic imaging data and seismic waveform data have complementary strengths for the modelling of earthquakes. The former, particularly modern space geodetic techniques such as Interferometric Synthetic Aperture Radar (InSAR), permit a high spatial density of observation and thus fine resolution of the spatial pattern of fault slip; the latter provide precise and accurate timing information, and thus the ability to resolve how that fault slip varies over time. In order to harness these complementary strengths, we propose a method through which the two data types can be combined in a joint inverse model for the evolution of slip on a specified fault geometry. We present here a derivation of Akaike's Bayesian Information Criterion (ABIC) for the joint inversion of multiple data sets that explicitly deals with the problem of objectively estimating the relative weighting between data sets, as well as the optimal influence of model smoothness constraints in space and time. We demonstrate our ABIC inversion scheme by inverting InSAR displacements and teleseismic waveform data for the 1997 Manyi, Tibet, earthquake. Using a simplified fault geometry, we test three cases: InSAR data inverted alone, vertical-component teleseismic broad-band waveform data inverted alone, and a joint inversion of both data sets. The InSAR-only model and the seismic-only model differ significantly in the distribution of slip on the fault plane that they predict. The joint-inversion model, however, not only has a distribution of slip and a fit to the InSAR data similar to the InSAR-only model, suggesting that those data provide the stronger control on the pattern of slip, but is also able to fit the seismic data with only minimal degradation of fit compared with the seismic-only model. The rupture history of the preferred, joint-inversion model indicates bilateral rupture for the first 20 s of the earthquake, followed by a further 25 s of westward unilateral rupture, with slip

  16. Sparse Frequency Waveform Design for Radar-Embedded Communication

    Directory of Open Access Journals (Sweden)

    Chaoyun Mai

    2016-01-01

    Full Text Available For tag applications requiring covert communication, a method for sparse frequency waveform design based on radar-embedded communication is proposed. Firstly, sparse frequency waveforms are designed based on power spectral density fitting and a quasi-Newton method. Secondly, the eigenvalue decomposition of the sparse frequency waveform sequence is used to obtain the dominant space. Finally, the communication waveforms are designed by projecting orthogonal pseudorandom vectors into the vertical (orthogonal) subspace. Compared with the linear frequency modulation waveform, the sparse frequency waveform can further improve the bandwidth occupation of communication signals, thus achieving a higher communication rate. A certain correlation exists between the mutually orthogonal communication signal samples and the sparse frequency waveform, which guarantees a low SER (signal error rate) and LPI (low probability of intercept). The simulation results verify the effectiveness of this method.
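
    The projection step can be sketched numerically: compute the dominant subspace of the radar waveform set by eigen-decomposition, then project pseudorandom vectors onto its orthogonal complement. This is a minimal sketch; the waveform set here is a random placeholder (the paper designs it by spectral fitting), and all sizes are assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    N, L, K = 64, 16, 4    # waveform length, radar pulses, comm. waveforms (assumed)

    # Stand-in for the designed sparse frequency waveforms (random placeholders)
    X = rng.standard_normal((N, L))

    # Eigen-decomposition of the sample covariance gives the dominant radar subspace
    R = X @ X.T / L
    eigvals, eigvecs = np.linalg.eigh(R)   # ascending eigenvalues
    U = eigvecs[:, -L:]                    # dominant space spanned by the radar pulses

    # Project orthogonal pseudorandom vectors into the vertical (orthogonal) subspace
    P_perp = np.eye(N) - U @ U.T
    C, _ = np.linalg.qr(P_perp @ rng.standard_normal((N, K)))

    print(np.abs(X.T @ C).max())           # ~0: orthogonal to every radar pulse
    ```

    The resulting columns of `C` are mutually orthonormal and numerically orthogonal to the whole radar waveform set, which is the property the paper exploits for covert embedding.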

  17. Guided Wave Tomography Based on Full-Waveform Inversion.

    Science.gov (United States)

    Rao, Jing; Ratassepp, Madis; Fan, Zheng

    2016-02-29

    In this paper, a guided wave tomography method based on Full Waveform Inversion (FWI) is developed for accurate and high-resolution reconstruction of the remaining wall thickness in isotropic plates. The forward model is computed in the frequency domain by solving a full-wave equation in a two-dimensional acoustic model, accounting for higher-order effects such as diffractions and multiple scattering. Both numerical simulations and experiments were carried out to obtain the signals of a dispersive guided mode propagating through defects. The inversion was based on local optimization of a waveform misfit function between modeled and measured data, and was applied iteratively to discrete frequency components from low to high frequencies. The resulting wave velocity maps were then converted to thickness maps via the dispersion characteristics of the selected guided modes. The results suggest that the FWI method is capable of accurately reconstructing the thickness map of an irregularly shaped defect on a 10 mm thick plate, with a thickness error within 0.5 mm.

  18. Gauss-Newton Method Full Waveform Inversion Based on GPU Acceleration

    Institute of Scientific and Technical Information of China (English)

    邓哲; 黄慧明; 杨艳

    2016-01-01

    The Gauss-Newton method for seismic full waveform inversion is computationally intensive and time-consuming. The CUDA parallel computing platform is applied to accelerate the program on a graphics processing unit (GPU). The time-consuming parts of Gauss-Newton full waveform inversion are waveform forward modeling and matrix multiplication, and both are well suited to parallelization. For the acceleration of wave forward modeling, we study and implement the finite-difference time-domain (FDTD) method on the CUDA platform; for matrix multiplication, the highly optimized CUBLAS library is used directly. Running the code for different model sizes on a personal computer (PC) with a GTX 650 Ti GPU shows that the GPU-based code is 10-30 times faster than the CPU-based code, and the speedup grows as the model size increases. A numerical test on a section of the Overthrust velocity model indicates that computation time is no longer the main obstacle to Gauss-Newton full waveform inversion.
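
    The forward-modeling kernel that dominates the cost is, at its core, a stencil update that maps naturally onto GPU threads. Below is a minimal CPU (NumPy) sketch of a 1D acoustic FDTD update, with grid, velocity, and wavelet parameters all assumed for illustration; the paper's CUDA version parallelizes this same kind of stencil (in 2D) across GPU threads.

    ```python
    import numpy as np

    # grid and time stepping (assumed sizes; CFL number c*dt/dx = 0.4 < 1)
    nx, nt = 200, 500
    dx, dt = 5.0, 0.001            # meters, seconds
    c = np.full(nx, 2000.0)        # homogeneous velocity model, m/s

    p_prev = np.zeros(nx)          # pressure at time step n-1
    p_curr = np.zeros(nx)          # pressure at time step n
    src = nx // 2                  # source position

    for it in range(nt):
        lap = np.zeros(nx)
        lap[1:-1] = p_curr[2:] - 2.0 * p_curr[1:-1] + p_curr[:-2]
        p_next = 2.0 * p_curr - p_prev + (c * dt / dx) ** 2 * lap
        # inject a Ricker wavelet at the source (25 Hz, 0.04 s delay; assumed)
        arg = (np.pi * 25.0 * (it * dt - 0.04)) ** 2
        p_next[src] += (1.0 - 2.0 * arg) * np.exp(-arg)
        p_prev, p_curr = p_curr, p_next
    ```

    On a GPU, each grid point's Laplacian/update would be computed by one thread per cell, which is why FDTD forward modeling parallelizes so well.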

  19. Anisotropic seismic-waveform inversion: Application to a seismic velocity model from Eleven-Mile Canyon in Nevada

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Yu [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Gao, Kai [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Huang, Lianjie [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Sabin, Andrew [Geothermal Program Office, China Lake, CA (United States)

    2016-03-31

    Accurate imaging and characterization of fracture zones is crucial for geothermal energy exploration. Aligned fractures within fracture zones behave as anisotropic media for seismic-wave propagation. The anisotropic properties in fracture zones introduce extra difficulties for seismic imaging and waveform inversion. We have recently developed a new anisotropic elastic-waveform inversion method using a modified total-variation regularization scheme and a wave-energy-based preconditioning technique. Our new inversion method uses the parameterization of elasticity constants to describe anisotropic media, and hence it can properly handle arbitrary anisotropy. We apply our new inversion method to a seismic velocity model along a 2D seismic line acquired at Eleven-Mile Canyon, located in the southern Dixie Valley in Nevada, for geothermal energy exploration. Our inversion results show that anisotropic elastic-waveform inversion has the potential to reconstruct subsurface anisotropic elastic parameters for imaging and characterization of fracture zones.

  20. DSP Based Waveform Generator

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    The DSP Based Waveform Generator is used in the CSR control system to control special objects such as the pulsed power supplies for magnets, the RF system, injection and extraction synchronization, and global CSR synchronization. This intelligent controller, based on a 4800 MIPS DSP and 256M SDRAM technology, will supply the highly stable and highly accurate reference waveforms used by the magnet power supplies. The specifications are as follows:

  1. Bilateral Waveform Similarity Overlap-and-Add Based Packet Loss Concealment for Voice over IP

    Directory of Open Access Journals (Sweden)

    J.F. Yeh

    2013-08-01

    Full Text Available This paper presents a bilateral waveform similarity overlap-and-add algorithm for lost voice packets. Since packet loss causes semantic misunderstanding, it has become one of the most important problems in speech communication. The approach is based on a waveform similarity measure with an overlap-and-add algorithm, and uses bilateral information to enhance the reconstruction of the speech signal. It has previously been shown that the waveform similarity overlap-and-add (WSOLA) technique is an effective algorithm for packet loss concealment (PLC) in real-time communication. The WSOLA algorithm is widely applied to length adaptation and packet loss concealment of speech signals, and time scale modification of audio signals is one of the most essential research topics in data communication, especially in voice over IP (VoIP). Herein, we propose bilateral WSOLA (BWSOLA), which is derived from WSOLA. Instead of exploiting speech data from only one direction, the proposed method reconstructs the lost voice data from both the preceding and the succeeding data. The related algorithms have been developed to achieve an optimal reconstruction estimate. The experimental results show that the quality of the speech signal reconstructed by bilateral WSOLA is much better than that of the standard WSOLA and GWSOLA across different packet loss rates and gap lengths, as measured by PESQ and MOS. The proposed BWSOLA outperforms the traditional approaches, especially for long-duration data loss.
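
    The core WSOLA-style operation can be sketched as follows: find the past segment most similar to the window just before the gap, cross-fade (overlap-add) onto that match, and reuse what followed it as the substitute waveform. This is a one-sided sketch with assumed window/search parameters; BWSOLA would additionally search and blend from the data after the gap.

    ```python
    import numpy as np

    def conceal_gap(signal, gap_start, gap_len, search=400, win=160):
        """One-sided WSOLA-style packet loss concealment (illustrative sketch)."""
        target = signal[gap_start - win:gap_start]       # window just before the gap
        best_pos, best_score = 0, -np.inf
        for pos in range(gap_start - search - win, gap_start - win - gap_len):
            cand = signal[pos:pos + win]
            score = cand @ target / (np.linalg.norm(cand) * np.linalg.norm(target) + 1e-12)
            if score > best_score:                       # normalized cross-correlation
                best_pos, best_score = pos, score
        out = signal.copy()
        # overlap-add: cross-fade the pre-gap window with its best match, then
        # continue with the samples that followed the match to fill the gap
        fade = np.linspace(0.0, 1.0, win)
        out[gap_start - win:gap_start] = (1 - fade) * target + fade * signal[best_pos:best_pos + win]
        out[gap_start:gap_start + gap_len] = signal[best_pos + win:best_pos + win + gap_len]
        return out

    # usage: conceal 80 lost samples of a synthetic periodic "voiced" signal
    t = np.arange(4000)
    sig = np.sin(2 * np.pi * t / 100)
    lost = sig.copy(); lost[2000:2080] = 0.0
    recon = conceal_gap(lost, 2000, 80)
    ```

    For a quasi-periodic signal the best match aligns in phase with the pre-gap window, so its continuation fills the gap almost seamlessly.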

  2. Arbitrary waveform generator

    Science.gov (United States)

    Griffin, Maurice; Sugawara, Glen

    1995-02-01

    A system for storing an arbitrary waveform on a nonvolatile random access memory (NVRAM) device and generating an analog signal using the NVRAM device is described. A central processing unit is used to synthesize an arbitrary waveform, create a digital representation of the waveform, and transfer the digital representation to a microprocessor which, in turn, writes the digital data into an NVRAM device which has been mapped into a portion of the microprocessor address space. The NVRAM device is removed from address space and placed into an independent waveform generation unit. In the waveform generation unit, an address clock provides an address timing signal and a cycle clock provides a transmit signal. Both signals are applied to an address generator. When both signals are present, the address generator generates and transmits to the NVRAM device a new address for each cycle of the address timing signal. In response to each new address generated, the NVRAM device provides a digital output which is applied to a digital to analog converter. The converter produces a continuous analog output which is smoothed by a filter to produce the arbitrary waveform.

  3. Arterial waveform analysis.

    Science.gov (United States)

    Esper, Stephen A; Pinsky, Michael R

    2014-12-01

    The bedside measurement of continuous arterial pressure values from waveform analysis has been routinely available via indwelling arterial catheterization for >50 years. Invasive blood pressure monitoring has been utilized in critically ill patients, in both the operating room and critical care units, to facilitate rapid diagnoses of cardiovascular insufficiency and monitor response to treatments aimed at correcting abnormalities before the consequences of either hypo- or hypertension are seen. Minimally invasive techniques to estimate cardiac output (CO) have gained increased appeal. This has led to increased interest in arterial waveform analysis to provide this important information, as it is measured continuously in many operating rooms and intensive care units. Arterial waveform analysis also allows for the calculation of many so-called derived parameters intrinsically created by this pulse pressure profile. These include estimates of left ventricular stroke volume (SV), CO, vascular resistance, and, during positive-pressure breathing, SV variation and pulse pressure variation. This article focuses on the principles of arterial waveform analysis and their determinants, components of the arterial system, and arterial pulse contour. It will also address the advantage of measuring real-time CO by the arterial waveform and the benefits of measuring SV variation. Arterial waveform analysis has gained a large interest in the overall assessment and management of the critically ill and those at risk of hemodynamic deterioration.
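
    One of the derived parameters mentioned, pulse pressure variation (PPV), is commonly computed as (PPmax - PPmin) / mean(PPmax, PPmin) over a respiratory cycle. The sketch below applies that formula to a synthetic arterial trace; the waveform shape, beat segmentation, and magnitudes are all assumptions for illustration (real monitors detect beats from the waveform itself).

    ```python
    import numpy as np

    # synthetic arterial trace: 8 beats whose pulse pressure is modulated over
    # one respiratory cycle
    fs, beat_len, n_beats = 100, 80, 8
    n = np.arange(n_beats * beat_len)
    resp = 5.0 * np.sin(2 * np.pi * n / n[-1])                  # respiratory modulation
    pulse = np.clip(np.sin(2 * np.pi * n / beat_len), 0, None)  # systolic upstrokes
    art = 80.0 + 20.0 * pulse * (1 + resp / 40.0)               # pressure, mmHg

    # pulse pressure (max - min) per beat, then PPV over the respiratory cycle
    pp = [seg.max() - seg.min() for seg in np.split(art, n_beats)]
    ppv = (max(pp) - min(pp)) / ((max(pp) + min(pp)) / 2) * 100
    print(round(ppv, 1))                                        # percent
    ```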

  4. Improved retracking algorithm for oceanic altimeter waveforms

    Institute of Scientific and Technical Information of China (English)

    Lifeng Bao; Yang Lu; Yong Wang

    2009-01-01

    Over the deep oceans, without land/ice interference, the waveforms created by the return altimeter pulse generally follow the Brown ocean model, and the corresponding range can be properly determined using the result from the onboard tracker. In the case of complex altimeter waveforms corrupted for a variety of reasons, the processor on the satellite cannot properly determine the center of the leading edge, and range observations can be in error. As an efficacious way to improve the precision of altimeter observations with complex waveforms, waveform retracking is required to reprocess the original return pulse. Based on basic altimeter theory and the geometric features of altimeter waveforms, we developed a new altimeter waveform retracker, which is valid for all altimeter waveforms as long as a reasonable return signal exists. The performances of the existing Beta-5 retracker, threshold retracker, improved threshold retracker, and the new retracker are assessed in the experimental regions (the China Seas and adjacent regions), and the improvements in the accuracy of sea surface height are investigated through the difference between retracked altimeter observations and a reference geoid. The comparisons show that the new algorithm gives the best performance in both the open ocean and coastal regions, and that the new retracker performs uniformly over the whole test region. In addition, there is a significant improvement in the short-wavelength precision and the spatial resolution of sea surface height after the retracking process.
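
    Of the compared algorithms, the threshold retracker is the simplest to sketch: estimate the leading-edge position as the first range gate where the waveform crosses a fixed fraction of its peak above the noise floor, with linear interpolation for sub-gate precision. The 50% threshold, the number of noise gates, and the synthetic leading edge below are assumptions.

    ```python
    import numpy as np

    def threshold_retrack(waveform, threshold=0.5, noise_gates=5):
        noise = waveform[:noise_gates].mean()               # thermal noise floor
        level = noise + threshold * (waveform.max() - noise)
        k = np.nonzero(waveform >= level)[0][0]             # first crossing gate
        # linear interpolation between gates k-1 and k for sub-gate precision
        return k - 1 + (level - waveform[k - 1]) / (waveform[k] - waveform[k - 1])

    # synthetic Brown-like waveform with its leading-edge midpoint at gate 30
    gates = np.arange(100, dtype=float)
    wf = 0.01 + 0.5 * (1 + np.tanh((gates - 30.0) / 2.0))
    print(round(threshold_retrack(wf), 2))                  # → 30.0
    ```

    The retracked gate is then converted to a range correction relative to the onboard tracker's nominal tracking gate.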

  5. Compact high order finite volume method on unstructured grids III: Variational reconstruction

    Science.gov (United States)

    Wang, Qian; Ren, Yu-Xin; Pan, Jianhua; Li, Wanai

    2017-05-01

    This paper presents a variational reconstruction for the high order finite volume method in solving the two-dimensional Navier-Stokes equations on arbitrary unstructured grids. In the variational reconstruction, an interfacial jump integration is defined to measure the jumps of the reconstruction polynomial and its spatial derivatives on each cell interface. The system of linear equations to determine the reconstruction polynomials is derived by minimizing the total interfacial jump integration in the computational domain using the variational method. On each control volume, the derived equations are implicit relations between the coefficients of the reconstruction polynomials defined on a compact stencil involving only the current cell and its direct face-neighbors. The reconstruction and time integration coupled iteration method proposed in our previous paper is used to achieve high computational efficiency. A problem-independent shock detector and the WBAP limiter are used to suppress non-physical oscillations in the simulation of flow with discontinuities. The advantages of the finite volume method using the variational reconstruction over the compact least-squares finite volume method proposed in our previous papers are higher accuracy, higher computational efficiency, more flexible boundary treatment and non-singularity of the reconstruction matrix. A number of numerical test cases are solved to verify the accuracy, efficiency and shock-capturing capability of the finite volume method using the variational reconstruction.

  6. Dynamic Error Analysis Method for Vibration Shape Reconstruction of Smart FBG Plate Structure

    Directory of Open Access Journals (Sweden)

    Hesheng Zhang

    2016-01-01

    Full Text Available Shape reconstruction of aerospace plate structures is an important issue for the safe operation of aerospace vehicles. One way to achieve such reconstruction is to construct a smart fiber Bragg grating (FBG) plate structure with discretely distributed FBG sensor arrays and apply reconstruction algorithms, in which error analysis of the reconstruction algorithm is a key link. Considering that traditional error analysis methods can only deal with static data, a new dynamic-data error analysis method is proposed, based on the LMS algorithm, for shape reconstruction of smart FBG plate structures. Firstly, the smart FBG structure and the orthogonal-curve-network-based reconstruction method are introduced. Then, a dynamic error analysis model is proposed for dynamic reconstruction error analysis. Thirdly, the parameters of the proposed dynamic error analysis model are identified using the least mean square (LMS) algorithm. Finally, an experimental verification platform is constructed and an experimental dynamic reconstruction analysis is carried out. Experimental results show that the dynamic characteristics of the reconstruction performance for the plate structure can be obtained accurately with the proposed dynamic error analysis method. The proposed method can also be used as a general error analysis method for other data acquisition and data processing systems.
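
    The LMS identification step can be illustrated generically: estimate the coefficients of an unknown filter from input/output data by stochastic gradient updates. The 3-tap FIR model, step size, and signals below are assumptions standing in for the paper's dynamic error model.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    true_w = np.array([0.6, -0.3, 0.1])        # unknown parameters to identify
    x = rng.standard_normal(5000)              # excitation signal
    d = np.convolve(x, true_w)[:len(x)]        # measured system output

    w = np.zeros(3)
    mu = 0.05                                  # LMS step size (assumed)
    for n in range(3, len(x)):
        u = x[n:n - 3:-1]                      # last three inputs, newest first
        e = d[n] - w @ u                       # a priori output error
        w += mu * e * u                        # LMS update
    print(np.round(w, 2))                      # ≈ [ 0.6 -0.3  0.1]
    ```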

  7. A simulation of portable PET with a new geometric image reconstruction method

    Energy Technology Data Exchange (ETDEWEB)

    Kawatsu, Shoji [Department of Radiology, Kyoritu General Hospital, 4-33 Go-bancho, Atsuta-ku, Nagoya-shi, Aichi 456 8611 (Japan): Department of Brain Science and Molecular Imaging, National Institute for Longevity Sciences, National Center for Geriatrics and Gerontology, 36-3, Gengo Moriaka-cho, Obu-shi, Aichi 474 8522 (Japan)]. E-mail: b6rgw@fantasy.plala.or.jp; Ushiroya, Noboru [Department of General Education, Wakayama National College of Technology, 77 Noshima, Nada-cho, Gobo-shi, Wakayama 644 0023 (Japan)

    2006-12-20

    A new method is proposed for three-dimensional positron emission tomography image reconstruction. The method uses the elementary geometric property of lines of response whereby two lines of response originating from radioactive isotopes at the same position lie within a few millimeters of each other. The method differs from the filtered back projection method and the iterative reconstruction method. It is applied to a simulation of portable positron emission tomography.

  8. Blockwise conjugate gradient methods for image reconstruction in volumetric CT.

    Science.gov (United States)

    Qiu, W; Titley-Peloquin, D; Soleimani, M

    2012-11-01

    Cone beam computed tomography (CBCT) enables volumetric image reconstruction from 2D projection data and plays an important role in image guided radiation therapy (IGRT). Filtered back projection is still the most frequently used algorithm in applications. Algebraic approaches instead discretize the scanning process (forward projection) into a system of linear equations, which must then be solved to recover images from measured projection data. The conjugate gradients (CG) algorithm and its variants can be used to solve (possibly regularized) linear systems of equations Ax = b and linear least squares problems min_x ||b - Ax||_2, especially when the matrix A is very large and sparse. Their applications can be found in a general CT context, but in tomography problems (e.g. CBCT reconstruction) they have not been widely used. Hence, CBCT reconstruction using the CG-type algorithm LSQR was implemented and studied in this paper. In CBCT reconstruction, the main computational challenge is that the matrix A is usually very large, and storing it in full requires an amount of memory well beyond the reach of commodity computers. Because of these memory capacity constraints, only a small fraction of the weighting matrix A is typically used, leading to a poor reconstruction. In this paper, to overcome this difficulty, the matrix A is partitioned and stored blockwise, and blockwise matrix-vector multiplications are implemented within LSQR. This implementation allows us to use the full weighting matrix A for CBCT reconstruction without exceeding the capacity of commodity hardware. Tikhonov regularization can also be implemented in this fashion, and can produce significant improvement in the reconstructed images.
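
    The blockwise matrix-vector idea can be sketched with a small synthetic system. CGLS (conjugate gradients on the normal equations, closely related to LSQR) stands in for LSQR here; the sizes, the random system, and the block count are assumptions, and real CBCT matrices are far larger and sparse.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    m, n, nblocks = 600, 200, 4
    blocks = [rng.standard_normal((m // nblocks, n)) for _ in range(nblocks)]
    x_true = rng.standard_normal(n)
    b = np.concatenate([B @ x_true for B in blocks])

    def matvec(x):                  # A x, one stored block at a time
        return np.concatenate([B @ x for B in blocks])

    def rmatvec(y):                 # A^T y, one stored block at a time
        return sum(B.T @ p for B, p in zip(blocks, np.split(y, nblocks)))

    # CGLS: conjugate gradients applied to the normal equations A^T A x = A^T b
    x = np.zeros(n)
    r = b.copy()                    # residual b - A x (x starts at zero)
    s = rmatvec(r)
    p, gamma = s.copy(), s @ s
    gamma0 = gamma
    for _ in range(300):
        q = matvec(p)
        alpha = gamma / (q @ q)
        x += alpha * p
        r -= alpha * q
        s = rmatvec(r)
        gamma_new = s @ s
        if gamma_new < 1e-20 * gamma0:   # converged
            break
        p = s + (gamma_new / gamma) * p
        gamma = gamma_new

    print(np.linalg.norm(x - x_true))   # near zero: consistent synthetic system
    ```

    Because `matvec` and `rmatvec` touch one block at a time, only one block ever needs to be resident in memory, which is the point of the blockwise storage scheme.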

  9. Exploring Normalization and Network Reconstruction Methods using In Silico and In Vivo Models

    Science.gov (United States)

    Abstract: Lessons learned from the recent DREAM competitions include: The search for the best network reconstruction method continues, and we need more complete datasets with ground truth from more complex organisms. It has become obvious that the network reconstruction methods t...

  10. On the 3D reconstruction of diatom frustules : a novel method, applications, and limitations

    NARCIS (Netherlands)

    Mansilla, Catalina; Novais, Maria Helena; Faber, Enne; Martinez-Martinez, Diego; De Hosson, J. Th.

    2016-01-01

    Because of the importance of diatoms and the lack of information about their third dimension, a new method for the 3D reconstruction is explored, based on digital image correlation of several scanning electron microscope images. The accuracy of the method to reconstruct both centric and pennate (sym

  11. Noise robustness of a combined phase retrieval and reconstruction method for phase-contrast tomography

    DEFF Research Database (Denmark)

    Kongskov, Rasmus Dalgas; Jørgensen, Jakob Sauer; Poulsen, Henning Friis

    2016-01-01

    Classical reconstruction methods for phase-contrast tomography consist of two stages: phase retrieval and tomographic reconstruction. A novel algebraic method combining the two was suggested by Kostenko et al. [Opt. Express 21, 12185 (2013)], and preliminary results demonstrated improved ... is substantially more robust toward noise; our simulations point to a possible reduction in counting times by an order of magnitude.

  12. Porous media microstructure reconstruction using pixel-based and object-based simulated annealing: comparison with other reconstruction methods

    Energy Technology Data Exchange (ETDEWEB)

    Diogenes, Alysson N.; Santos, Luis O.E. dos; Fernandes, Celso P. [Universidade Federal de Santa Catarina (UFSC), Florianopolis, SC (Brazil); Appoloni, Carlos R. [Universidade Estadual de Londrina (UEL), PR (Brazil)

    2008-07-01

    The physical properties of reservoir rocks are usually obtained in the laboratory through standard experiments, which are often very expensive and time-consuming. Hence, digital image analysis techniques constitute a fast, low-cost methodology for predicting physical properties from geometrical parameters measured on thin sections of the rock microstructure. This research analyzes two methods for porous media reconstruction using the relaxation method of simulated annealing. Using geometrical parameters measured from rock thin sections, it is possible to construct a three-dimensional (3D) model of the microstructure. We assume statistical homogeneity and isotropy; the 3D model preserves the porosity spatial correlation, the chord size distribution and the d3-4 distance transform distribution for the pixel-based reconstruction, and the spatial correlation for the object-based reconstruction. The 2D and 3D preliminary results are compared with microstructures reconstructed by truncated Gaussian methods. As this research is at an early stage, only the 2D results are presented. (author)
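
    The pixel-based annealing idea can be illustrated on a toy scale: start from a random image with the target porosity and accept porosity-preserving pixel swaps that reduce the mismatch in a statistical descriptor. Here only a two-point correlation along x is matched, as a stand-in for the paper's fuller set of constraints (chord sizes, distance transforms); the image size, cooling schedule, and temperature are assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # correlated "target" microstructure: smoothed random field, 30% solid fraction
    f = rng.standard_normal((32, 32))
    field = sum(np.roll(np.roll(f, i, 0), j, 1) for i in range(-2, 3) for j in range(-2, 3))
    target = (field < np.quantile(field, 0.3)).astype(float)

    def s2_x(img, max_lag=8):
        """Two-point probability along x: P(both pixels solid at lag r)."""
        return np.array([(img[:, :-r] * img[:, r:]).mean() for r in range(1, max_lag)])

    ref = s2_x(target)
    energy = lambda im: np.sum((s2_x(im) - ref) ** 2)

    img = target.flatten(); rng.shuffle(img); img = img.reshape(32, 32)  # same porosity
    E = E0 = energy(img)
    T = 1e-4
    for _ in range(3000):
        solid, void = np.argwhere(img == 1), np.argwhere(img == 0)
        i = tuple(solid[rng.integers(len(solid))])
        j = tuple(void[rng.integers(len(void))])
        img[i], img[j] = 0.0, 1.0                  # porosity-preserving pixel swap
        E_new = energy(img)
        if E_new < E or rng.random() < np.exp((E - E_new) / T):
            E = E_new                              # accept (downhill, or rarely uphill)
        else:
            img[i], img[j] = 1.0, 0.0              # reject: undo the swap
        T *= 0.999                                 # cooling schedule

    print(E0, "->", E)                             # correlation mismatch decreases
    ```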

  13. Accelerated gradient methods for total-variation-based CT image reconstruction

    Energy Technology Data Exchange (ETDEWEB)

    Joergensen, Jakob H.; Hansen, Per Christian [Technical Univ. of Denmark, Lyngby (Denmark). Dept. of Informatics and Mathematical Modeling; Jensen, Tobias L.; Jensen, Soeren H. [Aalborg Univ. (Denmark). Dept. of Electronic Systems; Sidky, Emil Y.; Pan, Xiaochuan [Chicago Univ., Chicago, IL (United States). Dept. of Radiology

    2011-07-01

    Total-variation (TV)-based CT image reconstruction has been shown experimentally to be capable of producing accurate reconstructions from sparse-view data. In particular, TV-based reconstruction is well suited for images with piecewise nearly constant regions. Computationally, however, TV-based reconstruction is demanding, especially for 3D imaging, and reconstruction from clinical data sets is far from real-time. This is undesirable from a clinical perspective, and thus there is an incentive to accelerate the solution of the underlying optimization problem. The TV reconstruction can in principle be found by any optimization method, but in practice the large scale of the systems arising in CT image reconstruction precludes the use of memory-intensive methods such as Newton's method. The simple gradient method has much lower memory requirements, but exhibits prohibitively slow convergence. In the present work we address the question of how to reduce the number of gradient method iterations needed to achieve a high-accuracy TV reconstruction. We consider the use of two accelerated gradient-based methods, GPBB and UPN, to solve the 3D-TV minimization problem in CT image reconstruction. The former incorporates several heuristics from the optimization literature, such as Barzilai-Borwein (BB) step size selection and nonmonotone line search. The latter uses a cleverly chosen sequence of auxiliary points to achieve a better convergence rate. The methods are memory efficient and equipped with a stopping criterion to ensure that the TV reconstruction has indeed been found. An implementation of the methods (in C with interface to Matlab) is available for download from http://www2.imm.dtu.dk/~pch/TVReg/. We compare the proposed methods with the standard gradient method, applied to a 3D test problem with synthetic few-view data. We find experimentally that for realistic parameters the proposed methods significantly outperform the standard gradient method. (orig.)
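
    The Barzilai-Borwein step-size idea at the heart of GPBB can be sketched on a plain least-squares objective: gradient descent whose step length alpha_k = (s^T s)/(s^T y), with s = x_k - x_{k-1} and y = g_k - g_{k-1}, mimics a quasi-Newton scaling at negligible cost. The TV term and the nonmonotone line search of the paper are omitted here, and the synthetic system is an assumption.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((120, 60))
    b = A @ rng.standard_normal(60)
    grad = lambda x: A.T @ (A @ x - b)   # gradient of f(x) = 0.5*||Ax - b||^2

    x_prev = np.zeros(60)
    g_prev = grad(x_prev)
    x = x_prev - 1e-4 * g_prev           # small fixed first step
    for _ in range(500):
        g = grad(x)
        if np.linalg.norm(g) < 1e-8:     # converged
            break
        s, y = x - x_prev, g - g_prev
        alpha = (s @ s) / (s @ y)        # BB1 step size (no line search)
        x_prev, g_prev = x, g
        x = x - alpha * g

    print(np.linalg.norm(grad(x)))       # near zero at the least-squares solution
    ```

    Convergence is nonmonotone (individual iterates may increase the objective), which is why GPBB pairs BB steps with a nonmonotone line search.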

  14. Electrical impedance tomography method for reconstruction of biological tissues with continuous plane-stratification.

    Science.gov (United States)

    Dolgin, M; Einziger, P D

    2006-01-01

    A novel electrical impedance tomography method is introduced for the reconstruction of layered biological tissues with continuous plane-stratification. The algorithm implements the recently proposed reconstruction scheme for piecewise-constant conductivity profiles, based on an improved Prony method in conjunction with a Legendre polynomial expansion (LPE). It is shown that the proposed algorithm is capable of successfully reconstructing continuous conductivity profiles with moderate (WKB) slope. Features of the presented reconstruction scheme include an inherent linearity, achieved by the linear LPE transform; a locality feature, analytically assigning to each spectral component a local electrical impedance associated with a unique location; and effective performance even in the presence of noisy measurements.
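
    The paper's improved Prony variant is not reproduced here, but the classical Prony idea it builds on is easy to sketch: recover the poles z_i of a signal x[n] = sum_i c_i z_i^n from uniform samples via linear prediction followed by polynomial rooting. The model order and test signal are assumptions.

    ```python
    import numpy as np

    def prony_poles(x, p):
        N = len(x)
        # linear prediction: x[n] = -(a_1 x[n-1] + ... + a_p x[n-p]) for n >= p
        M = np.column_stack([x[p - k - 1:N - k - 1] for k in range(p)])
        a, *_ = np.linalg.lstsq(M, -x[p:], rcond=None)
        # poles are the roots of z^p + a_1 z^(p-1) + ... + a_p
        return np.roots(np.concatenate(([1.0], a)))

    n = np.arange(40)
    x = 2.0 * 0.9 ** n + 1.0 * 0.5 ** n       # two real decaying exponentials
    z = np.sort(prony_poles(x, 2).real)
    print(np.round(z, 3))                     # → [0.5 0.9]
    ```

    With noisy data, the plain least-squares step above degrades quickly, which is the motivation for improved Prony variants such as the one used in the paper.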

  15. On multigrid methods for image reconstruction from projections

    Energy Technology Data Exchange (ETDEWEB)

    Henson, V.E.; Robinson, B.T. [Naval Postgraduate School, Monterey, CA (United States); Limber, M. [Simon Fraser Univ., Burnaby, British Columbia (Canada)

    1994-12-31

    The sampled Radon transform of a 2D function can be represented as a continuous linear map R : L^1 -> R^N. The image reconstruction problem is: given a vector b in R^N, find an image (or density function) u(x, y) such that Ru = b. Since in general there are infinitely many solutions, the authors pick the solution with minimal 2-norm. Numerous proposals have been made regarding how best to discretize this problem. One can, for example, select a set of functions phi_j that span a particular subspace Omega contained in L^1, and model R : Omega -> R^N. The subspace Omega may be chosen as a member of a sequence of subspaces whose limit is dense in L^1. One approach to the choice of Omega gives rise to a natural pixel discretization of the image space. Two possible choices of the set phi_j are the set of characteristic functions of finite-width "strips" representing energy transmission paths and the set of intersections of such strips. The authors have studied the eigenstructure of the matrices B resulting from these choices and the effect of applying a Gauss-Seidel iteration to the problem Bw = b. There exists a near null space into which the error vectors migrate with iteration, after which Gauss-Seidel iteration stalls. The authors attempt to accelerate convergence via a multilevel scheme, based on the principles of McCormick's Multilevel Projection Method (PML). Coarsening is achieved by thickening the rays, which results in a much smaller discretization of an optimal grid and a halving of the number of variables. This approach satisfies all the requirements of the PML scheme. They have observed that a multilevel approach based on this idea accelerates convergence at least to the point where noise in the data dominates.

  16. Pore REconstruction and Segmentation (PORES) method for improved porosity quantification of nanoporous materials

    Energy Technology Data Exchange (ETDEWEB)

    Van Eyndhoven, G., E-mail: geert.vaneyndhoven@uantwerpen.be [iMinds-Vision Lab, University of Antwerp, Universiteitsplein 1, B-2610 Wilrijk (Belgium); Kurttepeli, M. [EMAT, University of Antwerp, Groenenborgerlaan 171, B-2020 Antwerp (Belgium); Van Oers, C.J.; Cool, P. [Laboratory of Adsorption and Catalysis, University of Antwerp, Universiteitsplein 1, B-2610 Wilrijk (Belgium); Bals, S. [EMAT, University of Antwerp, Groenenborgerlaan 171, B-2020 Antwerp (Belgium); Batenburg, K.J. [iMinds-Vision Lab, University of Antwerp, Universiteitsplein 1, B-2610 Wilrijk (Belgium); Centrum Wiskunde and Informatica, Science Park 123, NL-1090 GB Amsterdam (Netherlands); Mathematical Institute, Universiteit Leiden, Niels Bohrweg 1, NL-2333 CA Leiden (Netherlands); Sijbers, J. [iMinds-Vision Lab, University of Antwerp, Universiteitsplein 1, B-2610 Wilrijk (Belgium)

    2015-01-15

    Electron tomography is currently a versatile tool to investigate the connection between the structure and properties of nanomaterials. However, a quantitative interpretation of electron tomography results is still far from straightforward. Especially accurate quantification of pore-space is hampered by artifacts introduced in all steps of the processing chain, i.e., acquisition, reconstruction, segmentation and quantification. Furthermore, most common approaches require subjective manual user input. In this paper, the PORES algorithm “POre REconstruction and Segmentation” is introduced; it is a tailor-made, integral approach, for the reconstruction, segmentation, and quantification of porous nanomaterials. The PORES processing chain starts by calculating a reconstruction with a nanoporous-specific reconstruction algorithm: the Simultaneous Update of Pore Pixels by iterative REconstruction and Simple Segmentation algorithm (SUPPRESS). It classifies the interior region to the pores during reconstruction, while reconstructing the remaining region by reducing the error with respect to the acquired electron microscopy data. The SUPPRESS reconstruction can be directly plugged into the remaining processing chain of the PORES algorithm, resulting in accurate individual pore quantification and full sample pore statistics. The proposed approach was extensively validated on both simulated and experimental data, indicating its ability to generate accurate statistics of nanoporous materials. - Highlights: • An electron tomography reconstruction/segmentation method for nanoporous materials. • The method exploits the porous nature of the scanned material. • Validated extensively on both simulation and real data experiments. • Results in increased image resolution and improved porosity quantification.

  17. Source reconstruction for neutron coded-aperture imaging: A sparse method.

    Science.gov (United States)

    Wang, Dongming; Hu, Huasi; Zhang, Fengna; Jia, Qinggang

    2017-08-01

    Neutron coded-aperture imaging has been developed as an important diagnostic for inertial fusion studies in recent decades. It is used to measure the distribution of neutrons produced in deuterium-tritium plasma. Source reconstruction is an essential part of coded-aperture imaging. In this paper, we applied a sparse reconstruction method to neutron source reconstruction. This method takes advantage of the sparsity of the source image. Monte Carlo neutron transport simulations were performed to obtain the system response. An interpolation method was used when obtaining the spatially variant point spread functions at each point of the source, in order to reduce the number of point spread functions that need to be calculated by the Monte Carlo method. Source reconstructions from simulated images show that the sparse reconstruction method can achieve a higher signal-to-noise ratio and less distortion at relatively high statistical noise levels.
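    The sparsity-exploiting reconstruction described above can be illustrated with a generic L1-regularized inversion sketch. This is not the authors' exact algorithm; the random system matrix, the regularization weight `lam`, and the iteration count are illustrative assumptions standing in for the simulated system response.

    ```python
    import numpy as np

    def soft_threshold(v, t):
        """Proximal operator of the L1 norm (elementwise shrinkage)."""
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def ista(A, y, lam, n_iter=3000):
        """Iterative shrinkage-thresholding for min_x 0.5||Ax - y||^2 + lam||x||_1."""
        L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the data-fit gradient
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            z = x - A.T @ (A @ x - y) / L    # gradient step on the data-fit term
            x = soft_threshold(z, lam / L)   # shrinkage step enforcing sparsity
        return x

    # Toy stand-in for the imaging system: random response matrix, 3-spike source
    rng = np.random.default_rng(0)
    A = rng.normal(size=(80, 200))
    x_true = np.zeros(200)
    x_true[[30, 90, 150]] = [1.0, -0.7, 0.5]
    y = A @ x_true + 0.01 * rng.normal(size=80)
    x_hat = ista(A, y, lam=0.1)
    ```

    Because the source is sparse, far fewer measurements than unknowns suffice, which is the property the abstract's method exploits.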

  18. Tsunami simulation method initiated from waveforms observed by ocean bottom pressure sensors for real-time tsunami forecast; Applied for 2011 Tohoku Tsunami

    Science.gov (United States)

    Tanioka, Yuichiro

    2017-04-01

    After the tsunami disaster caused by the 2011 Tohoku-oki great earthquake, improvement of tsunami forecasting has been an urgent issue in Japan. The National Institute of Disaster Prevention is installing a cable network system for earthquake and tsunami observation (S-NET) at the ocean bottom along the Japan and Kurile trenches. This cable system includes 125 pressure sensors (tsunami meters) separated by 30 km. Along the Nankai trough, JAMSTEC has already installed and operates cable network systems of seismometers and pressure sensors (DONET and DONET2). These are the densest observation networks in the world located on top of the source areas of great underthrust earthquakes. Real-time tsunami forecasting has traditionally depended on the estimation of earthquake parameters, such as the epicenter, depth, and magnitude of the earthquake. Recently, tsunami forecast methods have been developed that estimate the tsunami source from tsunami waveforms observed at ocean-bottom pressure sensors. However, when many pressure sensors separated by 30 km are available on top of the source area, we do not need to estimate the tsunami source or earthquake source to compute the tsunami. Instead, we can initiate a tsunami simulation directly from those dense tsunami observations. Observed tsunami height differences over a time interval at ocean-bottom pressure sensors separated by 30 km were used to estimate the tsunami height distribution at a particular time. In our new method, the tsunami numerical simulation is initiated from this estimated tsunami height distribution. In this paper, the above method is improved and applied to the tsunami generated by the 2011 Tohoku-oki great earthquake. The tsunami source model of the 2011 Tohoku-oki great earthquake estimated using observed tsunami waveforms, coseismic deformation observed by GPS, and ocean bottom sensors by Gusman et al. (2012) is used in this study. The ocean surface deformation is computed from the source model and used as an initial condition of tsunami
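    The key step, initiating a propagation run directly from an estimated sea-surface height distribution rather than from an earthquake source, can be sketched with a minimal linear 1-D shallow-water solver. The grid spacing, depth, and Gaussian initial profile below are illustrative assumptions, not the operational code.

    ```python
    import numpy as np

    def shallow_water_1d(eta0, depth, dx, dt, n_steps, g=9.81):
        """Linear 1-D shallow-water propagation started from a sea-surface profile.
        Staggered grid: surface height eta at cell centers, velocity u at faces."""
        eta = eta0.copy()
        u = np.zeros(eta.size + 1)
        for _ in range(n_steps):
            u[1:-1] -= g * dt / dx * (eta[1:] - eta[:-1])   # momentum equation
            eta -= depth * dt / dx * np.diff(u)             # continuity equation
        return eta

    # Hypothetical initial condition: a 1 m Gaussian hump over a 4000 m deep ocean
    x = np.arange(1000) * 1000.0                  # 1000 km domain, 1 km cells
    eta0 = np.exp(-((x - 500e3) ** 2) / (2 * (10e3) ** 2))
    eta = shallow_water_1d(eta0, depth=4000.0, dx=1000.0, dt=2.0, n_steps=500)
    ```

    Consistent with d'Alembert's solution for the linear wave equation, the hump splits into two half-height waves traveling at sqrt(g*h) ≈ 198 m/s, i.e. roughly 198 km after the simulated 1000 s; the time step satisfies the CFL condition dt < dx/sqrt(g*h).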

  19. Reconstruction method for data protection in telemedicine systems

    Science.gov (United States)

    Buldakova, T. I.; Suyatinov, S. I.

    2015-03-01

    In this report an approach to protecting transmitted data by creating paired symmetric keys for the sensor and the receiver is offered. Since biosignals are unique to each person, appropriate processing of them yields the information necessary to create cryptographic keys. The processing is based on reconstruction of a mathematical model that generates time series diagnostically equivalent to the initial biosignals. Information about the model is transmitted to the receiver, where the physiological time series are restored using the reconstructed model. Thus, the information about the structure and parameters of the biosystem model obtained in the reconstruction process can be used not only for diagnostics, but also for protecting transmitted data in telemedicine complexes.

  20. Novel method for hit-position reconstruction using voltage signals in plastic scintillators and its application to Positron Emission Tomography

    CERN Document Server

    Raczynski, L; Kowalski, P; Wislicki, W; Bednarski, T; Bialas, P; Czerwinski, E; Kaplon, L; Kochanowski, A; Korcyl, G; Kowal, J; Kozik, T; Krzemien, W; Kubicz, E; Molenda, M; Moskal, I; Niedzwiecki, Sz; Palka, M; Pawlik-Niedzwiecka, M; Rudy, Z; Salabura, P; Sharma, N G; Silarski, M; Slomski, A; Smyrski, J; Strzelecki, A; Wieczorek, A; Zielinski, M; Zon, N

    2014-01-01

    Currently, inorganic scintillator detectors are used in all commercial Time-of-Flight Positron Emission Tomography (TOF-PET) devices. The J-PET collaboration investigates the possibility of constructing a PET scanner from plastic scintillators, which would allow single-bed imaging of the whole human body. This paper describes a novel method of hit-position reconstruction based on sampled signals, and an example application of the method to a single module with a 30 cm long plastic strip, read out on both ends by Hamamatsu R4998 photomultipliers. A sampling scheme is introduced that generates a vector of samples of a PET event waveform with respect to four user-defined amplitudes. The experimental setup provides irradiation of a chosen position in the plastic scintillator strip with annihilation gamma quanta of 511 keV energy. A statistical test for a multivariate normal (MVN) distribution of the measured vectors at a given position is developed, and it is shown that signals sampled at four threshold...

  1. Multiples waveform inversion

    KAUST Repository

    Zhang, D. L.

    2013-01-01

    To increase the illumination of the subsurface and to eliminate the dependency of FWI on the source wavelet, we propose multiples waveform inversion (MWI), which transforms each hydrophone into a virtual point source with a time history equal to that of the recorded data. These virtual sources are used to numerically generate downgoing wavefields that are correlated with the backprojected surface-related multiples to give the migration image. Since the recorded data are treated as the virtual sources, knowledge of the source wavelet is not required, and the subsurface illumination is greatly enhanced because the entire free surface acts as an extended source, compared to the radiation pattern of a traditional point source. Numerical tests on the Marmousi2 model show that the convergence rate and the spatial resolution of MWI are, respectively, faster and higher than those of FWI. The potential pitfall of this method is that the multiples undergo more than one roundtrip to the surface, which increases attenuation and reduces spatial resolution. This can lead to less resolved tomograms compared to conventional FWI. A possible solution is to combine FWI and MWI in inverting for the subsurface velocity distribution.

  2. Iterative Reconstruction Methods for Hybrid Inverse Problems in Impedance Tomography

    DEFF Research Database (Denmark)

    Hoffmann, Kristoffer; Knudsen, Kim

    2014-01-01

    For a general formulation of hybrid inverse problems in impedance tomography the Picard and Newton iterative schemes are adapted and four iterative reconstruction algorithms are developed. The general problem formulation includes several existing hybrid imaging modalities such as current density...... impedance imaging, magnetic resonance electrical impedance tomography, and ultrasound modulated electrical impedance tomography, and the unified approach to the reconstruction problem encompasses several algorithms suggested in the literature. The four proposed algorithms are implemented numerically in two...... be based on a theoretical analysis of the underlying inverse problem....

  3. Magnetic Field Gradient Waveform Monitoring for Magnetic Resonance

    Science.gov (United States)

    Han, Hui

    Linear magnetic field gradients have played a central role in Magnetic Resonance Imaging (MRI) since Fourier Transform MRI was proposed three decades ago. Their primary function is to encode spatial information into MR signals. Magnetic field gradients are also used to sensitize the image contrast to coherent and/or incoherent motion, to selectively enhance an MR signal, and to minimize image artifacts. Modern MR imaging techniques increasingly rely on the implementation of complex gradient waveforms for the manipulation of spin dynamics. However, gradient system infidelities caused by eddy currents, gradient amplifier imperfections and group delays, often result in image artifacts and other errors (e.g., phase and intensity errors). This remains a critical problem for a wide range of MRI techniques on modern commercial systems, but is of particular concern for advanced MRI pulse sequences. Measuring the real magnetic field gradients, i.e., characterizing eddy currents, is critical to addressing and remedying this problem. Gradient measurement and eddy current calibration are therefore a general topic of importance to the science of MRI. The Magnetic Field Gradient Monitor (MFGM) idea was proposed and developed specifically to meet these challenges. The MFGM method is the heart of this thesis. MFGM methods permit a variety of magnetic field gradient problems to be investigated and systematically remedied. Eddy current effects associated with MR compatible metallic pressure vessels were analyzed, simulated, measured and corrected. The appropriate correction of eddy currents may enable most MR/MRI applications with metallic pressure vessels. Quantitative imaging (1D/2D) with model pressure vessels was successfully achieved by combining image reconstruction with MFGM determined gradient waveform behaviour. Other categories of MR applications with metallic vessels, including diffusion measurement and spin echo SPI T2 mapping, cannot be realized solely by MFGM guided

  4. L1/2 regularization based numerical method for effective reconstruction of bioluminescence tomography

    Science.gov (United States)

    Chen, Xueli; Yang, Defu; Zhang, Qitan; Liang, Jimin

    2014-05-01

    Even though bioluminescence tomography (BLT) exhibits significant potential and wide applications in macroscopic in vivo imaging of small animals, the inverse reconstruction is still a tough problem that has plagued researchers in the field. The ill-posedness of the inverse reconstruction arises from insufficient measurements and modeling errors, so that it cannot be solved directly. In this study, an l1/2 regularization based numerical method was developed for effective reconstruction of BLT. In the method, the inverse reconstruction of BLT is formulated as an l1/2 regularization problem, and the weighted interior-point algorithm (WIPA) is applied to solve it by transforming it into the solution of a series of l1 regularizers. The feasibility and effectiveness of the proposed method were demonstrated with numerical simulations on a digital mouse. Stability verification experiments further illustrated the robustness of the proposed method under different levels of Gaussian noise.
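    The reduction of an l1/2 penalty to a series of l1 problems can be sketched with iteratively reweighted soft-thresholding: each outer pass solves a weighted l1 problem whose weights approximate the l1/2 penalty's gradient. This is a simplified stand-in for WIPA; the matrix sizes, `lam`, and iteration counts are illustrative assumptions.

    ```python
    import numpy as np

    def weighted_ista(A, y, lam, w, L, x, n_iter=2000):
        """Proximal gradient for min_x 0.5||Ax - y||^2 + lam * sum_i w_i |x_i|."""
        for _ in range(n_iter):
            z = x - A.T @ (A @ x - y) / L
            x = np.sign(z) * np.maximum(np.abs(z) - lam * w / L, 0.0)
        return x

    def l_half_reconstruction(A, y, lam=0.05, n_outer=4, eps=1e-3):
        """Each outer pass is a weighted l1 problem; the weights
        w_i ~ |x_i|^(-1/2) emulate the l1/2 penalty near the current iterate."""
        L = np.linalg.norm(A, 2) ** 2
        x = np.zeros(A.shape[1])
        w = np.ones(A.shape[1])
        for _ in range(n_outer):
            x = weighted_ista(A, y, lam, w, L, x)
            w = 1.0 / (np.sqrt(np.abs(x)) + eps)   # small entries get large weights
        return x

    # Toy sparse-source problem standing in for the BLT forward model
    rng = np.random.default_rng(1)
    A = rng.normal(size=(60, 150))
    x_true = np.zeros(150)
    x_true[[10, 70, 120]] = [1.2, -0.8, 0.6]
    y = A @ x_true + 0.01 * rng.normal(size=60)
    x_hat = l_half_reconstruction(A, y)
    ```

    The reweighting pushes small coefficients harder toward zero than plain l1, which is why l1/2-type penalties tend to give sparser solutions.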

  5. An Iterative Method for Parallel MRI SENSE-based Reconstruction in the Wavelet Domain

    CERN Document Server

    Chaari, Lotfi; Ciuciu, Philippe; Benazza-Benyahia, Amel

    2009-01-01

    To reduce scanning time and/or improve spatial/temporal resolution in some MRI applications, parallel MRI (pMRI) acquisition techniques with multiple coils have emerged since the early 1990s as powerful 3D imaging methods that allow faster acquisition of reduced Field of View (FOV) images. In these techniques, the full-FOV image has to be reconstructed from the acquired undersampled k-space data. To this end, several reconstruction techniques have been proposed, such as the widely used SENSE method. However, the reconstructed image generally presents artifacts when perturbations occur in both the measured data and the estimated coil sensitivity maps. In this paper, we aim at achieving good reconstructed image quality when using a low magnetic field and a high reduction factor. Under these severe experimental conditions, neither the SENSE method nor Tikhonov regularization in the image domain gives convincing results. To this end, we present a novel method for SENSE-based reconstruction whi...

  6. Modified convolution method to reconstruct particle hologram with an elliptical Gaussian beam illumination.

    Science.gov (United States)

    Wu, Xuecheng; Wu, Yingchun; Yang, Jing; Wang, Zhihua; Zhou, Binwu; Gréhan, Gérard; Cen, Kefa

    2013-05-20

    Application of a modified convolution method to reconstruct digital in-line holograms of particles illuminated by an elliptical Gaussian beam is investigated. Based on an analysis of the formation of the particle hologram using the Collins formula, the convolution method is modified to compensate for the astigmatism by adding two scaling factors. Both simulated and experimental holograms of transparent droplets and opaque particles are used to test the algorithm, and the reconstructed images are compared with those obtained using FRFT reconstruction. Results show that the modified convolution method can accurately reconstruct the particle image. The method has the advantage that the reconstructed images at different depth positions have the same size and resolution as the hologram. This work shows that digital in-line holography has great potential for particle diagnostics in curved containers.

  7. Workflows for Full Waveform Inversions

    Science.gov (United States)

    Boehm, Christian; Krischer, Lion; Afanasiev, Michael; van Driel, Martin; May, Dave A.; Rietmann, Max; Fichtner, Andreas

    2017-04-01

    Despite many theoretical advances and the increasing availability of high-performance computing clusters, full seismic waveform inversions still face considerable challenges regarding data and workflow management. While the community has access to solvers which can harness modern heterogeneous computing architectures, the computational bottleneck has fallen to these often manpower-bounded issues that need to be overcome to facilitate further progress. Modern inversions involve huge amounts of data and require a tight integration between numerical PDE solvers, data acquisition and processing systems, nonlinear optimization libraries, and job orchestration frameworks. To this end we created a set of libraries and applications revolving around Salvus (http://salvus.io), a novel software package designed to solve large-scale full waveform inverse problems. This presentation focuses on solving passive source seismic full waveform inversions from local to global scales with Salvus. We discuss (i) design choices for the aforementioned components required for full waveform modeling and inversion, (ii) their implementation in the Salvus framework, and (iii) how it is all tied together by a usable workflow system. We combine state-of-the-art algorithms ranging from high-order finite-element solutions of the wave equation to quasi-Newton optimization algorithms using trust-region methods that can handle inexact derivatives. All is steered by an automated interactive graph-based workflow framework capable of orchestrating all necessary pieces. This naturally facilitates the creation of new Earth models and hopefully sparks new scientific insights. Additionally, and even more importantly, it enhances reproducibility and reliability of the final results.

  8. A Novel Parallel Method for Speckle Masking Reconstruction Using the OpenMP

    Science.gov (United States)

    Li, Xuebao; Zheng, Yanfang

    2016-08-01

    High resolution reconstruction technology, such as speckle masking, is developed to help enhance the spatial resolution of observational images for ground-based solar telescopes. Near real-time reconstruction performance is achieved on a high performance cluster using the Message Passing Interface (MPI). However, much time is spent reconstructing solar subimages in such a speckle reconstruction. We design and implement a novel parallel method for speckle masking reconstruction of solar subimages on a shared memory machine using OpenMP. Real tests are performed to verify the correctness of our codes. We present the details of several parallel reconstruction steps. The parallel implementation across the various modules shows a great speed increase compared to the single-thread serial implementation, and a speedup of about 2.5 is achieved in one subimage reconstruction. The timing result for reconstructing one subimage with 256×256 pixels shows a clear advantage with a greater number of threads. This novel parallel method can be valuable for real-time reconstruction of solar images, especially after porting to a high performance cluster.

  9. Multi-grid finite element method used for enhancing the reconstruction accuracy in Cerenkov luminescence tomography

    Science.gov (United States)

    Guo, Hongbo; He, Xiaowei; Liu, Muhan; Zhang, Zeyu; Hu, Zhenhua; Tian, Jie

    2017-03-01

    Cerenkov luminescence tomography (CLT), as a promising optical molecular imaging modality, can be applied to cancer diagnosis and therapy. Most research on CLT reconstruction is based on the finite element method (FEM) framework. However, the quality of the FEM mesh grid is still a vital factor restricting the accuracy of the CLT reconstruction result. In this paper, we propose a multi-grid finite element method framework that is able to improve the accuracy of reconstruction. Meanwhile, the multilevel scheme adaptive algebraic reconstruction technique (MLS-AART), based on a modified iterative algorithm, is applied to further improve the reconstruction accuracy. In numerical simulation experiments, the feasibility of our proposed method was evaluated. Results show that the multi-grid strategy can obtain the 3D spatial information of a Cerenkov source more accurately compared with the traditional single-grid FEM.

  10. An Improved Method for Power-Line Reconstruction from Point Cloud Data

    Directory of Open Access Journals (Sweden)

    Bo Guo

    2016-01-01

    Full Text Available This paper presents a robust algorithm to reconstruct power-lines using ALS technology. Point cloud data are automatically classified into five target classes before reconstruction. To improve on the shortcomings of traditional methods, which use only the local shape properties of a single power-line span, the distribution properties of the power-line group between two neighboring pylons and contextual information from related pylon objects are used to improve the reconstruction results. First, the distribution properties of power-line sets are detected using a similarity detection method. Based on the probability of neighboring points belonging to the same span, a RANSAC-based algorithm is then introduced to reconstruct power-lines through two important advancements: reliable initial parameter fitting and efficient candidate sample detection. Our experiments indicate that the proposed method is effective for the reconstruction of power-lines from complex scenarios.
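    The RANSAC-style span fitting can be illustrated on a single span: a hanging power-line between two pylons is well approximated by a parabola in the vertical plane, and RANSAC rejects points from other objects. The parabola model, thresholds, and synthetic data below are illustrative assumptions, not the paper's exact procedure.

    ```python
    import numpy as np

    def ransac_parabola(x, z, n_iter=300, tol=0.05, seed=0):
        """Fit z = a*x^2 + b*x + c robustly: sample 3 points, count inliers, refit."""
        rng = np.random.default_rng(seed)
        best_inliers = np.zeros(x.size, dtype=bool)
        for _ in range(n_iter):
            idx = rng.choice(x.size, 3, replace=False)
            coeff = np.polyfit(x[idx], z[idx], 2)        # exact fit through 3 points
            inliers = np.abs(np.polyval(coeff, x) - z) < tol
            if inliers.sum() > best_inliers.sum():
                best_inliers = inliers
        coeff = np.polyfit(x[best_inliers], z[best_inliers], 2)  # consensus refit
        return coeff, best_inliers

    # Synthetic span: 80 sagging line points plus 20 outliers (e.g. vegetation hits)
    rng = np.random.default_rng(42)
    x = rng.uniform(0, 100, 100)
    z = 0.002 * x**2 - 0.2 * x + 30 + 0.01 * rng.normal(size=100)
    z[80:] += rng.uniform(1, 5, 20)                      # contaminate last 20 points
    coeff, inliers = ransac_parabola(x, z)
    ```

    The minimal three-point sample makes each hypothesis cheap, and the final least-squares refit over the consensus set gives the reliable initial parameters the abstract mentions.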

  11. Accelerated gradient methods for total-variation-based CT image reconstruction

    DEFF Research Database (Denmark)

    Jørgensen, Jakob Heide; Jensen, Tobias Lindstrøm; Hansen, Per Christian

    2011-01-01

    Total-variation (TV)-based CT image reconstruction has shown experimentally to be capable of producing accurate reconstructions from sparse-view data. In particular TV-based reconstruction is very well suited for images with piecewise nearly constant regions. Computationally, however, TV-based....... In the present work we address the question of how to reduce the number of gradient method iterations needed to achieve a high-accuracy TV reconstruction. We consider the use of two accelerated gradient-based methods, GPBB and UPN, to solve the 3D-TV minimization problem in CT image reconstruction. The former...... incorporates several heuristics from the optimization literature such as Barzilai-Borwein (BB) step size selection and nonmonotone line search. The latter uses a cleverly chosen sequence of auxiliary points to achieve a better convergence rate. The methods are memory efficient and equipped with a stopping...
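    The Barzilai-Borwein (BB) step-size selection mentioned above can be sketched on a small smooth least-squares problem. The full GPBB method adds a nonmonotone line search and handles the TV objective; both are omitted in this generic sketch, and the test problem is an illustrative assumption.

    ```python
    import numpy as np

    def grad_bb(grad, x0, n_iter=200, alpha0=1e-4):
        """Gradient descent with the BB1 step size
        alpha_k = (s^T s)/(s^T y), where s = x_k - x_{k-1}, y = g_k - g_{k-1}."""
        x = x0.copy()
        g = grad(x)
        alpha = alpha0
        for _ in range(n_iter):
            x_new = x - alpha * g
            g_new = grad(x_new)
            s, yk = x_new - x, g_new - g
            sty = s @ yk
            alpha = (s @ s) / sty if sty > 0 else alpha0  # BB step (guarded)
            x, g = x_new, g_new
        return x

    # Smooth test problem: min_x 0.5*||Ax - b||^2
    rng = np.random.default_rng(3)
    A = rng.normal(size=(50, 20))
    b = rng.normal(size=50)
    x_hat = grad_bb(lambda x: A.T @ (A @ x - b), np.zeros(20))
    ```

    The BB step approximates the inverse curvature along the last step, which is why it typically converges far faster than a fixed-step gradient method at the same per-iteration cost.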

  12. Compressive full waveform lidar

    Science.gov (United States)

    Yang, Weiyi; Ke, Jun

    2017-05-01

    To avoid a high-bandwidth detector, a fast A/D converter, and a large memory disk, a compressive full-waveform LIDAR system, which uses a temporally modulated laser instead of a pulsed laser, is studied in this paper. Full-waveform data from NEON (National Ecological Observatory Network) are used. Random binary patterns are used to modulate the source. To achieve 0.15 m ranging resolution, a 100 MSPS A/D converter is assumed to make the measurements. The SPIRAL algorithm with a canonical basis is employed when Poisson noise is considered under low-illumination conditions.

  13. Reconstruction of floodplain sedimentation rates: a combination of methods to optimize estimates

    NARCIS (Netherlands)

    Hobo, N.; Makaske, B.; Middelkoop, H.; Wallinga, J.

    2010-01-01

    Reconstruction of overbank sedimentation rates over the past decades gives insight into floodplain dynamics, and thereby provides a basis for efficient and sustainable floodplain management. We compared the results of four independent reconstruction methods – optically stimulated luminescence (OS

  14. Simple method of modelling of digital holograms registering and their optical reconstruction

    Science.gov (United States)

    Evtikhiev, N. N.; Cheremkhin, P. A.; Krasnov, V. V.; Kurbatova, E. A.; Molodtsov, D. Yu; Porshneva, L. A.; Rodin, V. G.

    2016-08-01

    A technique for modeling digital hologram recording and optical image reconstruction from these holograms is described. The method takes into account the characteristics of the object, the digital camera's photosensor, and the spatial light modulator used for displaying the digital holograms. Using this technique, equipment can be chosen for experiments to obtain good reconstruction quality and/or hologram diffraction efficiency. Numerical experiments were conducted.

  15. A fast and reliable method for simultaneous waveform, amplitude and latency estimation of single-trial EEG/MEG data.

    Directory of Open Access Journals (Sweden)

    Wouter D Weeda

    Full Text Available The amplitude and latency of single-trial EEG/MEG signals may provide valuable information concerning human brain functioning. In this article we propose a new method to reliably estimate the single-trial amplitude and latency of EEG/MEG signals. The advantages of the method are fourfold. First, no a priori specified template function is required. Second, the method allows for multiple signals that may vary independently in amplitude and/or latency. Third, the method is less sensitive to noise as it models data with a parsimonious set of basis functions. Finally, the method is very fast since it is based on an iterative linear least squares algorithm. A simulation study shows that the method yields reliable estimates under different levels of latency variation and signal-to-noise ratios. Furthermore, it shows that the existence of multiple signals can be correctly determined. An application to empirical data from a choice reaction time study indicates that the method describes these data accurately.
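    The core idea, jointly estimating a per-trial latency shift and a least-squares amplitude, can be sketched for a single signal with a known waveform shape. The actual method also estimates the waveform itself from basis functions; the Gaussian "ERP" template, shift range, and noise level here are illustrative assumptions.

    ```python
    import numpy as np

    def amp_latency(y, template, max_shift):
        """Return (amplitude, latency) maximizing the energy of y explained
        by a shifted, scaled copy of the template (matched-filter search)."""
        best = (0.0, 0, -np.inf)
        for s in range(-max_shift, max_shift + 1):
            f = np.roll(template, s)
            a = (f @ y) / (f @ f)            # least-squares amplitude at shift s
            score = (f @ y) ** 2 / (f @ f)   # energy explained at this shift
            if score > best[2]:
                best = (a, s, score)
        return best[0], best[1]

    # Simulated single trial: scaled, delayed pulse plus noise
    t = np.arange(200)
    template = np.exp(-((t - 100) ** 2) / (2 * 3.0 ** 2))  # narrow Gaussian pulse
    rng = np.random.default_rng(7)
    y = 2.5 * np.roll(template, 7) + 0.01 * rng.normal(size=200)
    amp, lat = amp_latency(y, template, max_shift=20)
    ```

    For each candidate latency the amplitude is a closed-form linear least-squares solution, which is what makes an iterative linear least squares approach fast.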

  16. Deep Learning Methods for Particle Reconstruction in the HGCal

    CERN Document Server

    Arzi, Ofir

    2017-01-01

    The High Granularity end-cap Calorimeter is part of the Phase-2 CMS upgrade (see Figure \ref{fig:cms})\cite{Contardo:2020886}. Its goal is to provide measurements with high resolution in time, space, and energy. Given such measurements, the purpose of this work is to discuss the use of Deep Neural Networks for the tasks of particle and trajectory reconstruction, identification, and energy estimation, carried out during my participation in the CERN Summer Students Program.

  17. Skin sparing mastectomy: Technique and suggested methods of reconstruction

    Directory of Open Access Journals (Sweden)

    Ahmed M. Farahat

    2014-09-01

    Conclusions: Skin-sparing mastectomy through a circum-areolar incision has proven to be a safe and feasible option for the management of breast cancer in Egyptian women, offering them adequate oncologic control and an optimum cosmetic outcome through preservation of the skin envelope of the breast whenever indicated. Our patients can benefit from safe surgery and a good cosmetic outcome by applying different reconstructive techniques.

  18. Study on the Method for Obtaining Acceleration Waveform Records from Velocity Type Seismograms of the Digital Seismograph Network

    Institute of Scientific and Technical Information of China (English)

    Yao Lanyu; Nie Yongan; Zhao Jinghua; Bian Zhenfu

    2004-01-01

    The authors propose a method for obtaining high-quality acceleration seismograms from the velocity-type seismograms of a digital seismographic network, taking as an example the analysis and processing of seismograms of the same earthquake recorded simultaneously by the velocity seismograph CTS1-EDAS24 and the strong-motion seismograph EST-Q4128 installed at Jixian Station, Tianjin. The calculation steps and the processing method are discussed in detail. From the analysis and comparison of the obtained results, it is concluded that the proposed method is simple and effective, and it broadens the application of digital seismographic networks.

  19. Trace level impurity method development with high-field asymmetric waveform ion mobility spectrometry: systematic study of factors affecting the performance.

    Science.gov (United States)

    Champarnaud, Elodie; Laures, Alice M-F; Borman, Phil J; Chatfield, Marion J; Kapron, James T; Harrison, Mark; Wolff, Jean-Claude

    2009-01-01

    For the determination of trace level impurities, analytical chemists are confronted with complex mixtures and difficult separations. New technologies such as high-field asymmetric waveform ion mobility spectrometry (FAIMS) have been developed to make their work easier; however, efficient method development and troubleshooting can be quite challenging if little prior knowledge of the factors or their settings is available. We present the results of an investigation performed in order to obtain a better understanding of the FAIMS technology. The influence of eight factors (polarity of dispersion voltage, outer bias voltage, total gas flow rate, composition of the carrier gas (e.g. %He), outer electrode temperature, ratio between the temperatures of the inner and outer electrodes, flow rate and composition of the make-up mobile phase) was assessed. Five types of responses were monitored: value of the compensation voltage (CV), intensity, width and asymmetry of the compensation voltage peak, and resolution between two peaks. Three types of studies were performed using different test mixtures and various ionisation modes to assess whether the same conclusions could be drawn across these conditions for a number of different types of compounds. To extract the maximum information from as few experiments as possible, a Design of Experiment (DoE) approach was used. The results presented in this work provide detailed information on the factors affecting FAIMS separations and therefore should enable the user to troubleshoot more effectively and to develop efficient methods.

  20. Multi-waveform classification for seismic facies analysis

    Science.gov (United States)

    Song, Chengyun; Liu, Zhining; Wang, Yaojun; Li, Xingming; Hu, Guangmin

    2017-04-01

    Seismic facies analysis provides an effective way to delineate the heterogeneity and compartments within a reservoir. The traditional method uses a single waveform to classify the seismic facies, which does not consider stratigraphic continuity, so the final facies map may be affected by noise. Therefore, by defining the waveforms in a 3D window as a multi-waveform, we developed a new seismic facies analysis algorithm, multi-waveform classification (MWFC), that combines multilinear subspace learning with self-organizing map (SOM) clustering techniques. In addition, we utilize a multi-window dip search algorithm to extract the multi-waveforms, which reduces the uncertainty of the facies maps at the boundaries. Testing the proposed method on synthetic data with different S/N, we confirm that our MWFC approach is more robust to noise than the conventional waveform classification (WFC) method. The application to real seismic data from the F3 block in the Netherlands proves that our approach is an effective tool for seismic facies analysis.
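    The SOM clustering stage can be sketched with a minimal 1-D self-organizing map applied to waveform vectors. This is a generic SOM, not the MWFC implementation; the node count, learning-rate schedule, and the two toy waveform "facies" are assumptions.

    ```python
    import numpy as np

    def train_som(data, n_nodes=4, n_epochs=60, lr0=0.5, sigma0=2.0, seed=0):
        """Minimal 1-D SOM: nodes compete for samples; the winner and its
        map-neighbors are pulled toward each presented waveform."""
        rng = np.random.default_rng(seed)
        w = data[rng.choice(len(data), n_nodes, replace=False)].astype(float)
        for epoch in range(n_epochs):
            lr = lr0 * (1.0 - epoch / n_epochs)
            sigma = max(sigma0 * (1.0 - epoch / n_epochs), 0.5)
            for x in data[rng.permutation(len(data))]:
                bmu = int(np.argmin(np.linalg.norm(w - x, axis=1)))  # best-matching unit
                h = np.exp(-((np.arange(n_nodes) - bmu) ** 2) / (2 * sigma**2))
                w += lr * h[:, None] * (x - w)
        return w

    def facies_labels(data, w):
        return np.array([int(np.argmin(np.linalg.norm(w - x, axis=1))) for x in data])

    # Two toy waveform "facies": opposite-polarity wavelets plus noise
    rng = np.random.default_rng(1)
    t = np.linspace(0, 1, 64)
    a = np.sin(2 * np.pi * 3 * t) + 0.1 * rng.normal(size=(20, 64))
    b = -np.sin(2 * np.pi * 3 * t) + 0.1 * rng.normal(size=(20, 64))
    w = train_som(np.vstack([a, b]))
    labels = facies_labels(np.vstack([a, b]), w)
    ```

    After training, each trace is labeled by its nearest SOM node; the node index plays the role of the facies class in the map.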

  1. A Computer Vision Method for 3D Reconstruction of Curves-Marked Free-Form Surfaces

    Institute of Scientific and Technical Information of China (English)

    Xiong Hanwei; Zhang Xiangwei

    2001-01-01

    Visual methods are now broadly used in reverse engineering for 3D reconstruction. Traditional computer vision methods are feature-based, i.e., they require that the objects reveal features owing to geometry or textures. For textureless free-form surfaces, dense feature points are added artificially. In this paper, a new method is put forward combining computer vision with CAGD. The surface is subdivided into N-side Gregory patches using marked curves, and a stereo algorithm is used to reconstruct the curves. Then, the cross-boundary tangent vector is computed through reflectance analysis. At last, the whole surface can be reconstructed by joining these patches with G1 continuity.

  2. Image reconstruction based on L1 regularization and projection methods for electrical impedance tomography.

    Science.gov (United States)

    Wang, Qi; Wang, Huaxiang; Zhang, Ronghua; Wang, Jinhai; Zheng, Yu; Cui, Ziqiang; Yang, Chengyi

    2012-10-01

    Electrical impedance tomography (EIT) is a technique for reconstructing the conductivity distribution by injecting currents at the boundary of a subject and measuring the resulting changes in voltage. Image reconstruction in EIT is a nonlinear and ill-posed inverse problem. The Tikhonov method with L(2) regularization is commonly used to solve the EIT problem. However, the L(2) method tends to smooth the sharp changes or discontinuous areas of the reconstruction. Image reconstruction using L(1) regularization addresses this difficulty. In this paper, a sum of absolute values is substituted for the sum of squares used in the L(2) regularization to form the L(1) regularization, and the solution is obtained by the barrier method. However, the L(1) method often involves repeatedly solving large-dimensional matrix equations, which is computationally expensive. In this paper, the projection method is combined with the L(1) regularization method to reduce the computational cost; the L(1) problem is mainly solved in the coarse subspace. This paper also discusses strategies for choosing parameters. Both simulation and experimental results of the L(1) regularization method were compared with the L(2) regularization method, indicating that the L(1) regularization method can improve the quality of image reconstruction and tolerate a relatively high level of noise in the measured voltages. Furthermore, the projected L(1) method can also effectively reduce the computational time without affecting the quality of the reconstructed images.

  3. Sparse Reconstruction Techniques in Magnetic Resonance Imaging: Methods, Applications, and Challenges to Clinical Adoption.

    Science.gov (United States)

    Yang, Alice C; Kretzler, Madison; Sudarski, Sonja; Gulani, Vikas; Seiberlich, Nicole

    2016-06-01

    The family of sparse reconstruction techniques, including the recently introduced compressed sensing framework, has been extensively explored to reduce scan times in magnetic resonance imaging (MRI). While there are many different methods that fall under the general umbrella of sparse reconstructions, they all rely on the idea that a priori information about the sparsity of MR images can be used to reconstruct full images from undersampled data. This review describes the basic ideas behind sparse reconstruction techniques, how they could be applied to improve MRI, and the open challenges to their general adoption in a clinical setting. The fundamental principles underlying different classes of sparse reconstruction techniques are examined, and the requirements that each makes on the undersampled data are outlined. Applications that could potentially benefit from the accelerations that sparse reconstructions could provide are described, and clinical studies using sparse reconstructions are reviewed. Lastly, technical and clinical challenges to widespread implementation of sparse reconstruction techniques, including optimization, reconstruction times, artifact appearance, and comparison with current gold standards, are discussed.

  4. Accelerated gradient methods for total-variation-based CT image reconstruction

    CERN Document Server

    Jørgensen, Jakob Heide; Hansen, Per Christian; Jensen, Søren Holdt; Sidky, Emil Y; Pan, Xiaochuan

    2011-01-01

    Total-variation (TV)-based computed tomography (CT) image reconstruction has been shown experimentally to be capable of producing accurate reconstructions from sparse-view data. In particular, TV-based reconstruction is very well suited for images with piecewise nearly constant regions. Computationally, however, TV-based reconstruction is much more demanding, especially for 3D imaging, and reconstruction from clinical data sets is far from real time. This is undesirable from a clinical perspective, and thus there is an incentive to accelerate the solution of the underlying optimization problem. The TV reconstruction can in principle be found by any optimization method, but in practice the large-scale systems arising in CT image reconstruction preclude the use of memory-demanding methods such as Newton's method. The simple gradient method has much lower memory requirements, but exhibits slow convergence. In the present work we consider the use of two accelerated gradient-based methods, GPBB and UP...
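
    GPBB-type methods pair the low memory footprint of the gradient method with Barzilai-Borwein (BB) step lengths to fight its slow convergence. A minimal unconstrained BB sketch for a least-squares data term (toy random system, not a CT projector; the TV term and the projection step of GPBB are omitted):

```python
import numpy as np

def bb_gradient(A, b, n_iter=100):
    """Barzilai-Borwein gradient iteration for min 0.5*||Ax - b||^2.
    Needs only a few vectors of memory, unlike Newton-type methods."""
    x = np.zeros(A.shape[1])
    g = A.T @ (A @ x - b)
    step = 1.0 / np.linalg.norm(A, 2) ** 2     # conservative first step
    for _ in range(n_iter):
        x_new = x - step * g
        g_new = A.T @ (A @ x_new - b)
        if np.linalg.norm(g_new) < 1e-12:      # converged
            return x_new
        s, yv = x_new - x, g_new - g
        step = (s @ s) / (s @ yv)              # BB1 step length
        x, g = x_new, g_new
    return x

rng = np.random.default_rng(1)
A = rng.normal(size=(50, 20))
x_true = rng.normal(size=20)
x_hat = bb_gradient(A, A @ x_true)             # consistent toy system
```

    Each iteration costs two matrix-vector products and a handful of vectors of storage, which is why such methods scale to systems where Newton's method is infeasible.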

  5. [Novel method of noise power spectrum measurement for computed tomography images with adaptive iterative reconstruction method].

    Science.gov (United States)

    Nishimaru, Eiji; Ichikawa, Katsuhiro; Hara, Takanori; Terakawa, Shoichi; Yokomachi, Kazushi; Fujioka, Chikako; Kiguchi, Masao; Ishifuro, Minoru

    2012-01-01

    Adaptive iterative reconstruction techniques (IRs) can decrease image noise in computed tomography (CT) and are expected to contribute to reduction of the radiation dose. To evaluate the performance of IRs, the conventional two-dimensional (2D) noise power spectrum (NPS) is widely used. However, when an IR produces an NPS value drop at all spatial frequencies (which resembles the NPS change caused by a dose increase), the conventional method cannot evaluate the noise property correctly because it does not account for the volumetric nature of CT image data. The purpose of our study was to develop a new method for NPS measurement that can be adapted to IRs. Our method utilizes thick multi-planar reconstruction (MPR) images. The thick images are generally made by averaging CT volume data in the direction perpendicular to an MPR plane (e.g., the z-direction for an axial MPR plane). By using this averaging technique as a cutter for the 3D NPS, we can obtain an adequate 2D-extracted NPS (eNPS) from the 3D NPS. We applied this method to IR images generated with adaptive iterative dose reduction 3D (AIDR-3D, Toshiba) to investigate its validity. A water phantom with a 24 cm diameter was scanned at 120 kV and 200 mAs with a 320-row CT (Acquilion One, Toshiba). From the results of the study, the adequate thickness of MPR images for the eNPS was more than 25.0 mm. Our new NPS measurement method utilizing thick MPR images was accurate and effective for evaluating the noise reduction effects of IRs.
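
    The core computation behind any 2D NPS estimate is an ensemble average of squared Fourier magnitudes of zero-mean noise ROIs. A hedged sketch (uniform pixel spacing is assumed, and detrending is reduced to simple mean subtraction):

```python
import numpy as np

def nps_2d(noise_rois, pixel_mm=1.0):
    """Ensemble-averaged 2D noise power spectrum from square noise-only ROIs."""
    n = noise_rois.shape[-1]
    acc = np.zeros((n, n))
    for roi in noise_rois:
        roi = roi - roi.mean()                       # remove the DC (mean) component
        acc += np.abs(np.fft.fftshift(np.fft.fft2(roi))) ** 2
    return acc / len(noise_rois) * pixel_mm ** 2 / (n * n)

# White noise of variance sigma^2 should give a flat NPS whose mean recovers sigma^2
rng = np.random.default_rng(2)
rois = rng.normal(scale=2.0, size=(50, 64, 64))
nps = nps_2d(rois)
```

    Applying the same estimator to thick-MPR slices, as the abstract proposes, changes only how the input ROIs are produced, not the spectral computation itself.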

  6. A universal support vector machines based method for automatic event location in waveforms and video-movies: applications to massive nuclear fusion databases.

    Science.gov (United States)

    Vega, J; Murari, A; González, S

    2010-02-01

    Big physics experiments can collect terabytes (even petabytes) of data on a continuous or long-pulse basis. The measurement systems that follow the temporal evolution of physical quantities translate their observations into very large time-series data and video-movies. This article describes a universal and automatic technique to recognize and locate, inside waveforms and video-films, both signal segments with data of potential interest for specific investigations and singular events. The method is based on regression estimation of the signals using support vector machines. A small number of the samples appear as outliers in the regression process, and these samples allow the identification of both special signatures and singular points. Results are given for the database of the JET fusion device: location of sawteeth in soft x-ray signals to automate the plasma incremental diffusivity computation, identification of plasma disruptive behaviors with automatic determination of the time instant, and, finally, recognition of potentially interesting plasma events from infrared video-movies.
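
    The idea of flagging regression outliers as candidate events can be sketched without a full SVM implementation; below, a crude moving-average fit stands in for the support vector regression used in the paper, and samples far from the fit (by a robust MAD-based threshold) are reported as event locations. All names and thresholds are illustrative:

```python
import numpy as np

def locate_events(y, win=15, k=4.0):
    """Flag samples that deviate strongly from a local smooth fit
    (the paper uses support vector regression; a moving average stands in here)."""
    smooth = np.convolve(y, np.ones(win) / win, mode="same")  # crude regression estimate
    resid = np.abs(y - smooth)
    thr = k * np.median(resid) / 0.6745                       # robust, MAD-based threshold
    return np.flatnonzero(resid > thr)

# Synthetic "sawtooth crash": a smooth oscillation with one abrupt drop
t = np.linspace(0.0, 1.0, 500)
y = np.sin(2 * np.pi * t)
y[250:] -= 1.5                                                # step-like event at sample 250
events = locate_events(y)
```
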

  7. Application of Symmetry Adapted Function Method for Three-Dimensional Reconstruction of Octahedral Biological Macromolecules

    Directory of Open Access Journals (Sweden)

    Songjun Zeng

    2010-01-01

    Full Text Available A method for three-dimensional (3D) reconstruction of macromolecular assemblies, the octahedral symmetry adapted function (OSAF) method, is introduced in this paper, and a series of formulations for reconstruction by the OSAF method are derived. To verify the feasibility and advantages of the method, two octahedral symmetrical macromolecules, the heat shock protein Degp24 and the Red-cell L Ferritin, were used as examples to implement reconstruction by the OSAF method. The simulation schedule was designed as follows: 2000 randomly oriented projections of single particles with predefined Euler angles and centers of origin were generated, and then different levels of noise (signal-to-noise ratio S/N = 0.1, 0.5, and 0.8) were added. The structures reconstructed by the OSAF method were in good agreement with the standard models, and the relative errors of the reconstructed structures with respect to the standard structures were very small even for the high noise level. These results show that the OSAF method is a feasible and efficient approach to reconstructing the structures of macromolecules and is able to suppress the influence of noise.

  8. A New Feature Points Reconstruction Method in Spacecraft Vision Navigation

    Directory of Open Access Journals (Sweden)

    Bing Hua

    2015-01-01

    Full Text Available The important applications of monocular vision navigation in aerospace are spacecraft ground calibration tests and spacecraft relative navigation. Whether for the attitude calibration of a ground turntable or the relative navigation between two spacecraft, four noncollinear feature points are usually required to achieve attitude estimation. In this paper, a vision navigation system based on the fewest feature points is designed to deal with faulty or unidentifiable feature points. An iterative algorithm based on feature point reconstruction is proposed for the system. Simulation results show that the attitude calculation of the designed vision navigation system converges quickly, which improves the robustness of spacecraft vision navigation.

  9. Linearized image reconstruction method for ultrasound modulated electrical impedance tomography based on power density distribution

    Science.gov (United States)

    Song, Xizi; Xu, Yanbin; Dong, Feng

    2017-04-01

    Electrical resistance tomography (ERT) is a promising measurement technique with important industrial and clinical applications. However, with limited effective measurements, it suffers from poor spatial resolution due to the ill-posedness of the inverse problem. Recently, there has been increasing research interest in hybrid imaging techniques that couple physical modalities, because these techniques obtain much more effective measurement information and promise high resolution. Ultrasound modulated electrical impedance tomography (UMEIT) is one of the newly developed hybrid imaging techniques, combining the electric and acoustic modalities. A linearized image reconstruction method based on power density is proposed for UMEIT. The interior data, the power density distribution, is adopted to reconstruct the conductivity distribution with the proposed image reconstruction method. At the same time, by relating the power density change to the change in conductivity, a Jacobian matrix is employed to turn the nonlinear problem into a linear one. The analytic formulation of this Jacobian matrix is derived and its effectiveness verified. In addition, different excitation patterns are tested and analyzed, and opposite excitation provides the best performance with the proposed method. Multiple power density distributions are also combined to implement image reconstruction, which is finally carried out with the linear back-projection (LBP) algorithm. Compared with ERT, with the proposed image reconstruction method, UMEIT can produce reconstructed images with higher quality and better quantitative evaluation results.
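
    The final step, linear back-projection through the Jacobian, amounts to applying the transpose of the sensitivity matrix and normalizing by each pixel's total sensitivity. A toy 1-D sketch (the Gaussian Jacobian is a hypothetical stand-in for the power-density Jacobian derived in the paper):

```python
import numpy as np

def lbp(J, db):
    """Linear back-projection: smear measurement changes through J^T,
    normalized by each pixel's summed sensitivity."""
    x = J.T @ db
    sens = J.T @ np.ones(J.shape[0])
    return x / np.where(sens > 0, sens, 1.0)

# Toy 1-D model: each measurement sees a Gaussian neighborhood of pixels
n = 20
i, j = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
J = np.exp(-((i - j) ** 2) / 8.0)   # hypothetical Jacobian, not a real UMEIT model
x_true = np.zeros(n)
x_true[12] = 1.0                    # single localized conductivity change
x_lbp = lbp(J, J @ x_true)
```

    LBP is fast but blurry: the peak lands at the true perturbation, while the surrounding values are smeared by the sensitivity footprint.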

  10. Iterative reconstruction methods in atmospheric tomography: FEWHA, Kaczmarz and Gradient-based algorithm

    Science.gov (United States)

    Ramlau, R.; Saxenhuber, D.; Yudytskiy, M.

    2014-07-01

    The problem of atmospheric tomography arises in ground-based telescope imaging with adaptive optics (AO), where one aims to compensate in real time for the rapidly changing optical distortions in the atmosphere. Many of these systems depend on a sufficient reconstruction of the turbulence profiles in order to obtain a good correction. Due to steadily growing telescope sizes, there is a strong increase in the computational load for atmospheric reconstruction with current methods, first and foremost the MVM. In this paper we present and compare three novel iterative reconstruction methods. The first iterative approach is the Finite Element-Wavelet Hybrid Algorithm (FEWHA), which combines wavelet-based techniques and conjugate gradient schemes to efficiently and accurately tackle the problem of atmospheric reconstruction. The method is extremely fast, highly flexible, and yields superior quality. Another novel iterative reconstruction approach is the three-step approach, which decouples the problem into the reconstruction of the incoming wavefronts, the reconstruction of the turbulent layers (atmospheric tomography), and the computation of the best mirror correction (fitting step). For the atmospheric tomography problem within the three-step approach, the Kaczmarz algorithm and the gradient-based method have been developed. We present a detailed comparison of our reconstructors both in terms of quality and speed performance in the context of a Multi-Object Adaptive Optics (MOAO) system for the E-ELT setting on OCTOPUS, the ESO end-to-end simulation tool.
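
    Of the reconstructors named, the Kaczmarz algorithm is the simplest to state: the iterate is projected onto one measurement hyperplane at a time. A minimal sketch on a toy consistent system (random matrix, not an atmospheric tomography operator):

```python
import numpy as np

def kaczmarz(A, b, sweeps=200):
    """Cyclic Kaczmarz: successively project the iterate onto each
    hyperplane a_i . x = b_i defined by one measurement row."""
    x = np.zeros(A.shape[1])
    for _ in range(sweeps):
        for i in range(A.shape[0]):
            a = A[i]
            x = x + (b[i] - a @ x) / (a @ a) * a
    return x

# Consistent toy system: 20 measurements, 10 unknowns
rng = np.random.default_rng(3)
A = rng.normal(size=(20, 10))
x_true = rng.normal(size=10)
x_hat = kaczmarz(A, A @ x_true)
```

    Each update touches a single row of the system, which is what makes row-action methods attractive when the full operator is too large to form or store.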

  11. Dynamic PET Image reconstruction for parametric imaging using the HYPR kernel method

    Science.gov (United States)

    Spencer, Benjamin; Qi, Jinyi; Badawi, Ramsey D.; Wang, Guobao

    2017-03-01

    Dynamic PET image reconstruction is a challenging problem because of the ill-conditioned nature of PET and the low counting statistics resulting from the short time frames in dynamic imaging. The kernel method for image reconstruction has been developed to improve image reconstruction of low-count PET data by incorporating prior information derived from high-count composite data. In contrast to most of the existing regularization-based methods, the kernel method embeds image prior information in the forward projection model and does not require an explicit regularization term in the reconstruction formula. Inspired by the existing highly constrained back-projection (HYPR) algorithm for dynamic PET image denoising, we propose in this work a new type of kernel that is simpler to implement and further improves kernel-based dynamic PET image reconstruction. Our evaluation study, using a physical phantom scan with synthetic FDG tracer kinetics, has demonstrated that the new HYPR kernel-based reconstruction can achieve a better region-of-interest (ROI) bias versus standard deviation trade-off for dynamic PET parametric imaging than the post-reconstruction HYPR denoising method and the previously used nonlocal-means kernel.
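
    The kernel idea, modelling the image as x = K a with K built from a high-count composite image, drops into a standard MLEM loop without an explicit regularizer. A small 1-D sketch with a hypothetical system matrix and a noise-free two-region phantom (the HYPR-style kernel of the paper is replaced here by a plain nearest-neighbor Gaussian kernel):

```python
import numpy as np

def build_kernel(prior, keep=5, sigma=0.5):
    """Row-normalized similarity kernel from a high-count composite (prior) image."""
    d = np.abs(prior[:, None] - prior[None, :])
    K = np.exp(-((d / sigma) ** 2))
    drop = np.argsort(K, axis=1)[:, :-keep]        # zero all but the 'keep' most similar pixels
    np.put_along_axis(K, drop, 0.0, axis=1)
    return K / K.sum(axis=1, keepdims=True)

def kernel_mlem(P, y, K, n_iter=200):
    """MLEM on kernel coefficients a, with the image modelled as x = K a."""
    PK = P @ K
    sens = PK.sum(axis=0)
    a = np.ones(K.shape[1])
    for _ in range(n_iter):
        a *= (PK.T @ (y / (PK @ a))) / sens
    return K @ a

# Toy setup: a two-region prior guides reconstruction of a matching activity image
rng = np.random.default_rng(4)
prior = np.array([0.0] * 5 + [1.0] * 5)
x_true = np.array([1.0] * 5 + [3.0] * 5)
P = rng.uniform(0.1, 1.0, size=(30, 10))           # hypothetical system matrix
y = P @ x_true                                     # noise-free "measured" data
x_hat = kernel_mlem(P, y, build_kernel(prior))
```

    Because the prior enters only through the forward model x = K a, the update retains the familiar multiplicative MLEM form and its nonnegativity.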

  12. Variable density sampling based on physically plausible gradient waveform. Application to 3D MRI angiography

    CERN Document Server

    Chauffert, Nicolas; Boucher, Marianne; Mériaux, Sébastien; CIUCIU, Philippe

    2015-01-01

    Performing k-space variable density sampling is a popular way of reducing scanning time in Magnetic Resonance Imaging (MRI). Unfortunately, given a sampling trajectory, it is not clear how to traverse it using gradient waveforms. In this paper, we show that existing methods [1, 2] can yield large traversal times if the trajectory contains high-curvature areas. Therefore, we consider a new method for gradient waveform design based on the projection of an unrealistic initial trajectory onto the set of hardware constraints. Next, we show on realistic simulations that this algorithm allows implementing, in a reasonable time, variable density trajectories resulting from the piecewise linear solution of the Travelling Salesman Problem. Finally, we demonstrate the application of this approach to 2D MRI reconstruction and 3D angiography in the mouse brain.

  13. Improved Reconstruction Quality of Bioluminescent Images by Combining SP3 Equations and Bregman Iteration Method

    Directory of Open Access Journals (Sweden)

    Qiang Wu

    2013-01-01

    Full Text Available Bioluminescence tomography (BLT) has great potential to provide a powerful tool for tumor detection, monitoring tumor therapy progress, and drug development; developing new reconstruction algorithms will advance the technique to practical applications. In this paper, we propose a BLT reconstruction algorithm that combines SP3 equations and the Bregman iteration method to improve the quality of reconstructed sources. The numerical results for homogeneous and heterogeneous phantoms are very encouraging and show significant improvement over algorithms that do not use the SP3 equations and the Bregman iteration method.

  14. Reconstruction method for samples with refractive index discontinuities in optical diffraction tomography

    Science.gov (United States)

    Ma, Xichao; Xiao, Wen; Pan, Feng

    2017-07-01

    We present a reconstruction method for samples containing localized refractive index (RI) discontinuities in optical diffraction tomography. Abrupt RI changes induce regional phase perturbations and random spikes, which are expanded and strengthened by existing tomographic algorithms, resulting in contaminated reconstructions. This method avoids the disturbance by recognizing and separating the discontinuous regions and recombining the individually reconstructed data. Three-dimensional RI distributions of two fusion-spliced optical fibers with different typical discontinuities are demonstrated, showing distinctly detailed structures of the samples as well as the positions and estimated shapes of the discontinuities.

  15. Reconstruction method for x-ray imaging capsule

    Science.gov (United States)

    Rubin, Daniel; Lifshitz, Ronen; Bar-Ilan, Omer; Weiss, Noam; Shapiro, Yoel; Kimchy, Yoav

    2017-03-01

    A colon imaging capsule has been developed by Check-Cap Ltd (C-Scan® Cap). For the procedure, the patient swallows a small amount of a standard iodinated contrast agent. To create images, three rotating X-ray beams are emitted towards the colon wall. Some of the X-ray photons are backscattered from the contrast medium and the colon. These photons are collected by an omnidirectional array of energy-discriminating photon counting detectors (CdTe/CZT) within the capsule. X-ray fluorescence (XRF) and Compton backscattering photons carry different energies and are counted separately by the detection electronics. The current work examines a new statistical approach for the algorithm that reconstructs the lining of the colon wall from the X-ray detector readings. The algorithm performs numerical optimization to find the solution to the inverse problem applied to a physical forward model reflecting the behavior of the system. The forward model that was employed accounts for the following major factors: the two mechanisms of dependence between the distance to the colon wall and the number of photons, directional scatter distributions, and the relative orientations between beams and detectors. A calibration procedure has been put in place to adjust the coefficients of the forward model for the specific capsule geometry, radiation source characteristics, and detector response. The performance of the algorithm was examined in phantom experiments and demonstrated high correlation between the actual phantom shape and the X-ray image reconstruction. Evaluation is underway to assess the algorithm's performance in a clinical setting.

  16. Data processing and image reconstruction methods for the HEAD PENN-PET scanner

    Energy Technology Data Exchange (ETDEWEB)

    Karp, J.S.; Becher, A.J.; Matej, S. [Univ. of Pennsylvania, Philadelphia, PA (United States). Dept. of Radiology; Kinahan, P.E. [Univ. of Pittsburgh, PA (United States). Dept. of Radiology

    1998-06-01

    Methods of reconstruction and quantitation are developed for a 3D system and are evaluated on the septa-less HEAD PENN-PET scanner, which has a very large axial acceptance angle (θmax = ±28° in the center) and a large axial field-of-view of 256 mm. To overcome the difficulties of data storage and reconstruction time with 3D reconstruction, the authors have reduced the size of the 4-D projection matrix required for 3D-RP reconstruction, and compared the results to the Fourier rebinning (FORE) algorithm. Both approaches achieve a favorable tradeoff in data storage requirements, reconstruction time, and accuracy that is suitable for clinical use. The authors have also studied the application of the FORE algorithm to transmission scans acquired with a singles point source (137Cs) so that data quantitation can be performed.

  17. Methods of determining the optimal project of reconstruction of The Petrovsky Dock in Kronstadt

    Directory of Open Access Journals (Sweden)

    Romanovich Marina

    2016-01-01

    Full Text Available Today in Russia there are many historical monuments that are derelict and no longer operational, which raises the question of their reconstruction in a way that preserves their historical significance while applying current technology and innovation. One of these abandoned objects is the Petrovsky Dock in Kronstadt, and the priority idea for its reconstruction is a museum. Russian and foreign experience in the development of modern maritime museums is analysed. The article considers the existing options for the reconstruction of the Dock. A method for calculating a total optimality coefficient for a reconstruction project is offered. A survey of experts was conducted to show which functions are of most interest for the museum. As a result, we determine the optimal reconstruction project for the Dock with unique functionality.

  18. Research on the configuration design method of heterogeneous constellation reconstruction under the multiple objective and multiple constraint

    Science.gov (United States)

    Zhao, Shuang; Xu, Yanli; Dai, Huayu

    2017-05-01

    Aiming at the problem of configuration design for heterogeneous constellation reconstruction, a design method based on multiple objectives and multiple constraints is proposed. First, the concept of a heterogeneous constellation is defined. Second, heterogeneous constellation reconstruction methods are analyzed, and the two typical existing reconstruction design methods, the phase position uniformity method and the reconstruction configuration design method based on optimization algorithms, are summarized. The advantages and shortcomings of the different reconstruction configuration design methods are compared. Finally, the problems currently facing heterogeneous constellation reconstruction configuration design are analyzed, and ideas are put forward concerning the reconstruction index system for heterogeneous constellations, the selection of optimization variables, and the establishment of constraints in the optimal configuration design.

  19. MR-guided dynamic PET reconstruction with the kernel method and spectral temporal basis functions.

    Science.gov (United States)

    Novosad, Philip; Reader, Andrew J

    2016-06-21

    Recent advances in dynamic positron emission tomography (PET) reconstruction have demonstrated that it is possible to achieve markedly improved end-point kinetic parameter maps by incorporating a temporal model of the radiotracer directly into the reconstruction algorithm. In this work we have developed a highly constrained, fully dynamic PET reconstruction algorithm incorporating both spectral analysis temporal basis functions and spatial basis functions derived from the kernel method applied to a co-registered T1-weighted magnetic resonance (MR) image. The dynamic PET image is modelled as a linear combination of spatial and temporal basis functions, and a maximum likelihood estimate for the coefficients can be found using the expectation-maximization (EM) algorithm. Following reconstruction, kinetic fitting using any temporal model of interest can be applied. Based on a BrainWeb T1-weighted MR phantom, we performed a realistic dynamic [(18)F]FDG simulation study with two noise levels, and investigated the quantitative performance of the proposed reconstruction algorithm, comparing it with reconstructions incorporating either spectral analysis temporal basis functions alone or kernel spatial basis functions alone, as well as with conventional frame-independent reconstruction. Compared to the other reconstruction algorithms, the proposed algorithm achieved superior performance, offering a decrease in spatially averaged pixel-level root-mean-square-error on post-reconstruction kinetic parametric maps in the grey/white matter, as well as in the tumours when they were present on the co-registered MR image. When the tumours were not visible in the MR image, reconstruction with the proposed algorithm performed similarly to reconstruction with spectral temporal basis functions and was superior to both conventional frame-independent reconstruction and frame-independent reconstruction with kernel spatial basis functions. Furthermore, we demonstrate that a joint spectral

  1. Time-frequency manifold sparse reconstruction: A novel method for bearing fault feature extraction

    Science.gov (United States)

    Ding, Xiaoxi; He, Qingbo

    2016-12-01

    In this paper, a novel transient signal reconstruction method, called time-frequency manifold (TFM) sparse reconstruction, is proposed for bearing fault feature extraction. This method introduces image sparse reconstruction into the TFM analysis framework. Owing to the excellent denoising performance of the TFM, a more effective time-frequency (TF) dictionary can be learned from the TFM signature by image sparse decomposition based on orthogonal matching pursuit (OMP). The TF distribution (TFD) of the raw signal in a reconstructed phase space is then re-expressed as the sum of the learned TF atoms multiplied by the corresponding coefficients. Finally, the one-dimensional signal is recovered by the inverse process of TF analysis (TFA), and the amplitude information of the raw signal is well reconstructed. The proposed technique combines the merits of the TFM in denoising and of atomic decomposition in image sparse reconstruction. Moreover, the combination makes it possible to express the nonlinear signal processing results explicitly in theory. The effectiveness of the proposed TFM sparse reconstruction method is verified by experimental analysis for bearing fault feature extraction.
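
    The OMP step at the heart of the sparse decomposition can be sketched in a few lines: greedily pick the atom most correlated with the residual, then refit all selected atoms by least squares. A toy example with a random unit-norm dictionary standing in for the learned TF dictionary:

```python
import numpy as np

def omp(D, y, n_atoms):
    """Orthogonal matching pursuit: greedily pick the atom most correlated with
    the residual, then refit all selected coefficients by least squares."""
    resid, support = y.copy(), []
    for _ in range(n_atoms):
        support.append(int(np.argmax(np.abs(D.T @ resid))))
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        resid = y - D[:, support] @ coef
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x

# Toy dictionary of unit-norm random atoms standing in for a learned TF dictionary
rng = np.random.default_rng(5)
D = rng.normal(size=(64, 128))
D /= np.linalg.norm(D, axis=0)
x_true = np.zeros(128)
x_true[[7, 40, 101]] = [3.0, -2.0, 1.5]   # a sparse 3-atom "signature"
y = D @ x_true
x_hat = omp(D, y, n_atoms=3)
```

    The least-squares refit over the whole support is what distinguishes OMP from plain matching pursuit and keeps the recovered amplitudes accurate.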

  2. Performance Evaluation of Super-Resolution Reconstruction Methods on Real-World Data

    Directory of Open Access Journals (Sweden)

    L. J. van Vliet

    2007-01-01

    Full Text Available The performance of a super-resolution (SR) reconstruction method on real-world data is not easy to measure, especially as a ground truth (GT) is often not available. In this paper, a quantitative performance measure is used, based on triangle orientation discrimination (TOD). The TOD measure, simulating a real-observer task, is capable of determining the performance of a specific SR reconstruction method under varying conditions of the input data. It is shown that the performance of an SR reconstruction method on real-world data can be predicted accurately by measuring its performance on simulated data. This prediction of the performance on real-world data enables the optimization of the complete chain of a vision system, from camera setup and SR reconstruction up to image detection/recognition/identification. Furthermore, different SR reconstruction methods are compared to show that the TOD method is a useful tool to select a specific SR reconstruction method according to the imaging conditions (camera fill-factor, optical point-spread function (PSF), signal-to-noise ratio (SNR)).

  3. Development of a three-dimensional optical correction method for reconstruction of the flow field in a droplet

    Science.gov (United States)

    Ko, Han Seo; Gim, Yeonghyeon; Kang, Seung-Hwan

    2015-11-01

    A three-dimensional optical correction method was developed to reconstruct droplet-based flow fields. For a numerical simulation, synthetic phantoms were reconstructed by a simultaneous multiplicative algebraic reconstruction technique using three projection images positioned at an offset angle of 45°. If the synthetic phantom lies in a conical object whose refractive index differs from that of the atmosphere, the image can be distorted because light is refracted at the surface of the conical object. Thus, the direction of the projection ray was replaced by the refracted ray occurring at the surface of the conical object. To validate the method in the presence of this distortion effect, reconstruction results of the developed method were compared with the original phantom. As a result, the reconstruction result of the developed method showed a smaller error than that obtained without the method. The method was applied to a Taylor cone, formed by a high voltage between a droplet and a substrate, to reconstruct the three-dimensional flow fields for analysis of the characteristics of the droplet. This work was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Korean government (MEST) (No. 2013R1A2A2A01068653).

  4. Platform for Postprocessing Waveform-Based NDE

    Science.gov (United States)

    Roth, Don

    2008-01-01

    Taking advantage of the similarities that exist among all waveform-based non-destructive evaluation (NDE) methods, a common software platform has been developed containing multiple signal- and image-processing techniques for waveforms and images. The NASA NDE Signal and Image Processing software has been developed using the latest versions of LabVIEW and its associated Advanced Signal Processing and Vision Toolkits. The software is usable on a PC with Windows XP and Windows Vista. The software has been designed with a commercial-grade interface in which two main windows, the Waveform Window and the Image Window, are displayed if the user chooses a waveform file to display. Within these two main windows, most actions are chosen through logically conceived run-time menus. The Waveform Window has plots for both the raw time-domain waves and their frequency-domain transformations (fast Fourier transform and power spectral density). The Image Window shows the C-scan image formed from information of the time-domain waveform (such as peak amplitude) or its frequency-domain transformation at each scan location. The user also has the ability to open an image, a series of images, or a simple set of X-Y paired data in text format. Each of the Waveform and Image Windows contains menus from which to perform many user actions. An option exists to use raw waves obtained directly from a scan, or waves after deconvolution if the system wave response is provided. Two types of deconvolution, time-based subtraction or inverse-filter, can be performed to arrive at a deconvolved wave set. Additionally, the menu on the Waveform Window allows preprocessing of waveforms prior to image formation, scaling and display of waveforms, formation of different types of images (including non-standard types such as velocity), gating of portions of waves prior to image formation, and several other miscellaneous and specialized operations. The menu available on the Image Window allows many further image
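
    Of the two deconvolution options mentioned, the inverse-filter variant is the easier to sketch: divide by the system wave response in the frequency domain, with a damping term so that near-zero spectral values do not blow up. A hedged stand-alone sketch (the platform's actual implementation details are not given in this abstract):

```python
import numpy as np

def inverse_filter_deconv(measured, response, eps=1e-3):
    """Frequency-domain deconvolution with a damped (Wiener-like) inverse filter."""
    M = np.fft.fft(measured)
    H = np.fft.fft(response, len(measured))
    W = np.conj(H) / (np.abs(H) ** 2 + eps)   # eps guards against division by ~0
    return np.real(np.fft.ifft(M * W))

# Toy example: a sharp reflector echo blurred by a smooth system response
n = 256
t = np.arange(n)
response = np.exp(-((t - 8.0) ** 2) / 18.0)   # assumed system wave response
x = np.zeros(n)
x[100] = 1.0                                  # ideal reflector at sample 100
measured = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(response)))
recovered = inverse_filter_deconv(measured, response)
```

    The damping term trades a little residual blur for stability, which is the usual compromise when the system response has weak high-frequency content.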

  5. New Method of Reconstruction from Nonparallel Stereo and Application to Surgical Navigator

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    A new method to reconstruct 3D scene points from nonparallel stereo is proposed. From a pair of conjugate images in an arbitrarily configured stereo system that has been calibrated, coordinates of 3D scene points can be computed directly using the method, bypassing the image rectification or iterative solution involved in existing methods. Experimental results from both simulated data and real images validate the method. Practical application to a surgical navigator shows that the method improves the efficiency and accuracy of 3D reconstruction from a nonparallel stereo system in comparison with the conventional method that employs an algorithm for standard parallel-axes stereo geometry.

  6. Adaptive Robust Waveform Selection for Unknown Target Detection in Clutter

    Institute of Scientific and Technical Information of China (English)

    Lu-Lu Wang; Hong-Qiang Wang; Yu-Liang Qin; Yong-Qiang Cheng

    2014-01-01

    A basic assumption of most recently proposed waveform design algorithms is that the target impulse response is a known deterministic function or a stochastic process with a known power spectral density (PSD). However, it is well known that a target impulse response is neither easily nor accurately obtained; besides, it changes sharply with attitude angle. Both of these cases complicate the waveform design process. In this paper, an adaptive robust waveform selection method for unknown target detection in clutter is proposed. The target impulse response is considered to be unknown but belonging to a known uncertainty set. An adaptive waveform library is devised by using a signal-to-clutter-plus-noise ratio (SCNR)-based optimal waveform design method. By applying the minimax robust waveform selection method, the optimal robust waveform is selected to ensure the lowest performance bound of unknown target detection in clutter. Results show that the adaptive waveform library outperforms the predefined linear frequency modulation (LFM) waveform library on the SCNR bound.
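
    The minimax selection step can be illustrated with a toy frequency-domain model. The sketch below is illustrative only and is not the authors' algorithm: the Gaussian waveform library, the three-member target uncertainty set, the clutter spectrum, and the simplified SCNR expression are all invented for the example; only the selection rule (maximize the worst-case SCNR over the uncertainty set) is the minimax idea described above.

```python
import numpy as np

def scnr(waveform_psd, target_psd, clutter_psd, noise_psd):
    # toy frequency-domain SCNR: target energy passed by the waveform
    # divided by the clutter-plus-noise energy
    signal = np.sum(waveform_psd * target_psd)
    clutter_noise = np.sum(waveform_psd * clutter_psd + noise_psd)
    return signal / clutter_noise

def minimax_select(library, target_set, clutter_psd, noise_psd):
    # pick the waveform whose WORST-case SCNR over the uncertainty set is best
    worst = [min(scnr(w, h, clutter_psd, noise_psd) for h in target_set)
             for w in library]
    return int(np.argmax(worst)), max(worst)

f = np.linspace(0.0, 1.0, 64)                 # normalized frequency axis
clutter = 1.0 / (1.0 + 10.0 * f)              # low-frequency clutter spectrum
noise = np.full(64, 0.01)
# uncertainty set: the unknown target response peaks in one of three bands
target_set = [np.exp(-((f - c) ** 2) / 0.005) for c in (0.3, 0.5, 0.7)]
# library: unit-energy waveforms concentrating power at different centers
library = [np.exp(-((f - c) ** 2) / 0.02) for c in np.linspace(0.1, 0.9, 9)]
library = [w / w.sum() for w in library]
best_idx, bound = minimax_select(library, target_set, clutter, noise)
```

    With these invented spectra, the mid-band waveform is selected because it retains a usable SCNR against every member of the uncertainty set, whereas a waveform tuned to one band collapses on the others.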

  7. Reconstruction Method for Optical Tomography Based on the Linearized Bregman Iteration with Sparse Regularization

    Directory of Open Access Journals (Sweden)

    Chengcai Leng

    2015-01-01

    Full Text Available Optical molecular imaging is a promising technique and has been widely used in physiology and pathology at the cellular and molecular levels; its modalities include bioluminescence tomography, fluorescence molecular tomography, and Cerenkov luminescence tomography. The inverse problem is ill-posed for these modalities, which leads to a nonunique solution. In this paper, we propose an effective reconstruction method based on the linearized Bregman iterative algorithm with sparse regularization (LBSR). Considering the sparsity characteristics of the reconstructed sources, the sparsity can be regarded as a kind of a priori information, and sparse regularization is incorporated, which can accurately locate the position of the source. The linearized Bregman iteration method is exploited to minimize the sparse regularization problem so as to achieve fast and accurate reconstruction. Experimental results on a numerical simulation and an in vivo mouse demonstrate the effectiveness and potential of the proposed method.
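
    A minimal sketch of the linearized Bregman iteration for an l1-regularized (sparse) recovery problem is given below. It is a generic compressed-sensing toy, not the LBSR tomography code: the random system matrix, the step size `delta`, the shrinkage weight `mu`, and the iteration count are illustrative assumptions.

```python
import numpy as np

def soft_threshold(v, mu):
    # component-wise shrinkage operator used in Bregman-type methods
    return np.sign(v) * np.maximum(np.abs(v) - mu, 0.0)

def linearized_bregman(A, b, mu, n_iter):
    """Approximately solve min ||x||_1 s.t. Ax = b by linearized Bregman."""
    delta = 1.0 / (1.1 * np.linalg.norm(A, 2) ** 2)   # step below 1/||A||^2
    v = np.zeros(A.shape[1])
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        v = v + A.T @ (b - A @ x)     # accumulate the residual correlations
        x = delta * soft_threshold(v, mu)
    return x

# toy "sparse source" recovery from a few random projections
rng = np.random.default_rng(0)
n, m, k = 64, 32, 3                   # signal length, measurements, sparsity
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = [2.0, -1.5, 1.0]
b = A @ x_true
x_rec = linearized_bregman(A, b, mu=50.0, n_iter=3000)
err = np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true)
```

    A larger `mu` pushes the limit point toward the pure l1 (basis-pursuit) solution at the cost of a longer initial stagnation phase before components activate.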

  8. NOISE AMPLIFICATION ANALYSIS AND COMPARISON OF TWO PERIODIC NONUNIFORM SAMPLING RECONSTRUCTION METHODS USED IN DPCA SAR

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    The General Sampling Expansion Reconstruction Method (GSERM) and the Digital Spectrum Reconstruction Method (DSRM), which prove effective in reconstructing the azimuth signal of a Displaced Phase Center Aperture (DPCA) Synthetic Aperture Radar (SAR) system from its Periodic Non-Uniform Sampling (PNUS) data sequences, amplify the noise and sidelobe clutter simultaneously in the reconstruction. This paper formulates the relation between the system transfer matrices of the above two methods, gives the properties, such as periodicity, symmetry, and time-shift property, of their Noise and Sidelobe Clutter Amplification Factor (NSCAF), and discovers that DSRM is more sensitive than GSERM in a white noise environment. In addition, criteria based on initial sampling point analysis are suggested for robust PRF selection. Computer simulation results support these conclusions.

  9. A new method for choosing parameters in delay reconstruction-based forecast strategies

    CERN Document Server

    Garland, Joshua; Bradley, Elizabeth

    2015-01-01

    Delay-coordinate reconstruction is a proven modeling strategy for building effective forecasts of nonlinear time series. The first step in this process is the estimation of good values for two parameters, the time delay and the reconstruction dimension. Many heuristics and strategies have been proposed in the literature for estimating these values. Few, if any, of these methods were developed with forecasting in mind, however, and their results are not optimal for that purpose. Even so, these heuristics -- intended for other applications -- are routinely used when building delay coordinate reconstruction-based forecast models. In this paper, we propose a general framework for choosing optimal parameter values for forecast methods that are based on delay-coordinate reconstructions. The basic calculation involves maximizing the shared information between each delay vector and the future state of the system. We illustrate the effectiveness of this method on several synthetic and experimental systems, showing tha...
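
    The parameter-selection step can be illustrated with the classical average-mutual-information heuristic (choosing the delay at the first local minimum of I(x_t; x_{t+tau})), which is related to, but not the same as, the shared-information criterion proposed in the paper. The test signal, histogram bin count, and search range below are illustrative assumptions.

```python
import numpy as np

def mutual_information(x, y, bins=16):
    # histogram estimate of I(X;Y) in nats
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

def delay_embed(x, dim, tau):
    # stack delayed copies of x into dim-dimensional delay vectors
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau: i * tau + n] for i in range(dim)])

# choose tau as the first local minimum of I(x_t; x_{t+tau})
t = np.arange(4000)
x = np.sin(0.13 * t)                     # test signal, period ~48 samples
mi = [mutual_information(x[:-tau], x[tau:]) for tau in range(1, 40)]
tau_star = next(k + 1 for k in range(1, len(mi) - 1)
                if mi[k] < mi[k - 1] and mi[k] <= mi[k + 1])
vectors = delay_embed(x, dim=2, tau=tau_star)
```

    For this near-periodic signal the first minimum falls near a quarter period, where successive coordinates carry the least redundant information.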

  10. An Overview of Radar Waveform Optimization for Target Detection

    Directory of Open Access Journals (Sweden)

    Wang Lulu

    2016-10-01

    Full Text Available An optimal waveform design method that fully employs knowledge of the target and the environment can further improve target detection performance, and is thus of vital importance to study. In this paper, methods of radar waveform optimization for target detection are reviewed and summarized, providing a basis for further research.

  11. Quantum optical waveform conversion

    CERN Document Server

    Kielpinski, D; Wiseman, HM

    2010-01-01

    Currently proposed architectures for long-distance quantum communication rely on networks of quantum processors connected by optical communications channels [1,2]. The key resource for such networks is the entanglement of matter-based quantum systems with quantum optical fields for information transmission. The optical interaction bandwidth of these material systems is a tiny fraction of that available for optical communication, and the temporal shape of the quantum optical output pulse is often poorly suited for long-distance transmission. Here we demonstrate that nonlinear mixing of a quantum light pulse with a spectrally tailored classical field can compress the quantum pulse by more than a factor of 100 and flexibly reshape its temporal waveform, while preserving all quantum properties, including entanglement. Waveform conversion can be used with heralded arrays of quantum light emitters to enable quantum communication at the full data rate of optical telecommunications.

  12. Ultrasound tomography imaging with waveform sound speed: parenchymal changes in women undergoing tamoxifen therapy

    Science.gov (United States)

    Sak, Mark; Duric, Neb; Littrup, Peter; Sherman, Mark; Gierach, Gretchen

    2017-03-01

    Ultrasound tomography (UST) is an emerging modality that can offer quantitative measurements of breast density. Recent breakthroughs in UST image reconstruction involve the use of waveform reconstruction as opposed to ray-based reconstruction. The sound speed (SS) images created using waveform reconstruction have much higher image quality. These waveform images offer improved resolution and contrast between regions of dense and fatty tissue. As part of a study designed to assess breast density changes using UST sound speed imaging among women undergoing tamoxifen therapy, UST waveform sound speed images were reconstructed for a subset of participants. These initial results show that changes to the parenchymal tissue can be visualized more clearly in the waveform sound speed images. Additional quantitative testing of the waveform images was also started to test the hypothesis that waveform sound speed images are a more robust measure of breast density than ray-based reconstructions. Further analysis is still needed to better understand how tamoxifen affects breast tissue.

  13. Analysis and implementation of an electric power waveform data compression method with high compression ratio

    Institute of Scientific and Technical Information of China (English)

    党三磊; 肖勇; 杨劲锋; 申妍华

    2013-01-01

    Power quality monitors and waveform recorders are very important equipment for the security and stability analysis of the electric power system. The core technology in these devices is a power system waveform data compression method with high compression ratio. In this paper, commonly used data compression and coding methods are studied first. Taking advantage of characteristics of power system waveform data such as periodicity, boundedness, and redundancy, and selecting run-length coding and EZW coding respectively, compression methods based on the DCT transform and the lifting wavelet transform are implemented on a DSP platform. The implementation, performance, and reconstruction quality of the two compression methods are then comprehensively analyzed. It is found that the compression method based on the lifting wavelet transform with EZW coding can record abrupt data changes and offers an adjustable compression ratio and reconstruction accuracy, making it more suitable for compressing large amounts of power system fault waveform data.
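
    The transform-coding idea behind such compression can be sketched with a plain DCT and hard thresholding of coefficients; this stands in for, but does not reproduce, the paper's DSP implementation with run-length/EZW coding. The test waveform (a 50 Hz fundamental with a small fifth harmonic) and the number of retained coefficients are illustrative assumptions.

```python
import numpy as np

def dct_matrix(n):
    # orthonormal DCT-II basis as an n x n matrix (rows are basis vectors)
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    C[0] /= np.sqrt(2.0)
    return C

def compress(signal, keep):
    # keep only the `keep` largest-magnitude DCT coefficients, then invert
    C = dct_matrix(len(signal))
    coeff = C @ signal
    idx = np.argsort(np.abs(coeff))[::-1][:keep]
    sparse = np.zeros_like(coeff)
    sparse[idx] = coeff[idx]
    return C.T @ sparse               # inverse of an orthonormal transform

# one "recorded" cycle of 256 samples: 50 Hz fundamental + 5th harmonic
t = np.linspace(0.0, 0.02, 256, endpoint=False)
wave = np.sin(2 * np.pi * 50 * t) + 0.2 * np.sin(2 * np.pi * 250 * t)
rec = compress(wave, keep=16)         # a notional 16:1 compression
rel_err = np.linalg.norm(rec - wave) / np.linalg.norm(wave)
```

    Because the waveform's energy concentrates in a few transform coefficients, a 16:1 coefficient reduction loses only a small fraction of the signal energy.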

  14. Effects of Conjugate Gradient Methods and Step-Length Formulas on the Multiscale Full Waveform Inversion in Time Domain: Numerical Experiments

    Science.gov (United States)

    Liu, Youshan; Teng, Jiwen; Xu, Tao; Badal, José; Liu, Qinya; Zhou, Bing

    2017-05-01

    We carry out full waveform inversion (FWI) in time domain based on an alternative frequency-band selection strategy that allows us to implement the method with success. This strategy aims at decomposing the seismic data within partially overlapped frequency intervals by carrying out a concatenated treatment of the wavelet to largely avoid redundant frequency information to adapt to wavelength or wavenumber coverage. A pertinent numerical test proves the effectiveness of this strategy. Based on this strategy, we comparatively analyze the effects of update parameters for the nonlinear conjugate gradient (CG) method and step-length formulas on the multiscale FWI through several numerical tests. The investigations of up to eight versions of the nonlinear CG method with and without Gaussian white noise make clear that the HS (Hestenes and Stiefel in J Res Natl Bur Stand Sect 5:409-436, 1952), CD (Fletcher in Practical methods of optimization vol. 1: unconstrained optimization, Wiley, New York, 1987), and PRP (Polak and Ribière in Revue Francaise Informat Recherche Opertionelle, 3e Année 16:35-43, 1969; Polyak in USSR Comput Math Math Phys 9:94-112, 1969) versions are more efficient among the eight versions, while the DY (Dai and Yuan in SIAM J Optim 10:177-182, 1999) version always yields an inaccurate result, because it overestimates the deeper parts of the model. The application of FWI algorithms using distinct step-length formulas, such as the direct method (Direct), the parabolic search method (Search), and the two-point quadratic interpolation method (Interp), proves that the Interp is more efficient for noise-free data, while the Direct is more efficient for Gaussian white noise data. In contrast, the Search is less efficient because of its slow convergence. In general, the three step-length formulas are robust or partly insensitive to Gaussian white noise and the complexity of the model. When the initial velocity model deviates far from the real model or the
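
    The ingredients compared in the paper, a nonlinear CG update formula and a two-point quadratic (parabolic) step-length estimate, can be sketched on a toy quadratic misfit. The PRP+ update and the interpolation formula below follow the standard textbook forms; the random SPD test problem is an illustrative assumption, not an FWI example.

```python
import numpy as np

def prp_cg(grad, x0, n_iter=200, step0=1.0):
    """Nonlinear CG with the Polak-Ribiere-Polyak (PRP+) update and a
    two-point quadratic interpolation step length."""
    x = x0.copy()
    g = grad(x)
    d = -g
    for _ in range(n_iter):
        # fit the directional derivative at a=0 and a=step0; its zero is
        # the two-point quadratic-interpolation step (exact for quadratics)
        g0 = g @ d
        g1 = grad(x + step0 * d) @ d
        denom = g1 - g0
        alpha = -g0 * step0 / denom if abs(denom) > 1e-15 else step0
        x = x + alpha * d
        g_new = grad(x)
        beta = max(0.0, g_new @ (g_new - g) / (g @ g))   # PRP+ update
        d = -g_new + beta * d
        g = g_new
        if np.linalg.norm(g) < 1e-10:
            break
    return x

# toy quadratic misfit f(x) = 0.5 x^T A x - b^T x; the minimizer solves Ax=b
rng = np.random.default_rng(1)
M = rng.standard_normal((20, 20))
A = M @ M.T + 20 * np.eye(20)         # symmetric positive definite
b = rng.standard_normal(20)
x_star = np.linalg.solve(A, b)
x_hat = prp_cg(lambda x: A @ x - b, np.zeros(20))
err = np.linalg.norm(x_hat - x_star)
```

    On a quadratic the interpolation step is an exact line search, so PRP reduces to linear CG and converges in at most n steps; the different CG variants only behave differently on genuinely nonlinear misfits such as FWI.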

  15. Waveform iterative techniques for device transient simulation on parallel machines

    Energy Technology Data Exchange (ETDEWEB)

    Lumsdaine, A. [Univ. of Notre Dame, IN (United States); Reichelt, M.W. [Massachusetts Institute of Technology, Cambridge, MA (United States)

    1993-12-31

    In this paper we describe our experiences with parallel implementations of several different waveform algorithms for performing transient simulation of semiconductor devices. Because of their inherent computation and communication structure, waveform methods are well suited to MIMD-type parallel machines having a high communication latency - such as a cluster of workstations. Experimental results using pWORDS, a parallel waveform-based device transient simulation program, in conjunction with PVM running on a cluster of eight workstations demonstrate that parallel waveform techniques are an efficient and faster alternative to standard simulation algorithms.

  16. Track reconstruction using the TSF method for the BESⅢ main drift chamber

    Institute of Scientific and Technical Information of China (English)

    LIU Qiu-Guang; FU Cheng-Dong; GAO Yuan-Ning; HE Kang-Lin; HE Miao; HUA Chun-Fei; HUANG Bin; HUANG Xing-Tao; JI Xiao-Bin; LI Fei; LI Hai-Bo; ZANG Shi-Lei; LI Wei-Dong; LIANG Yu-Wie; LIU Chun-Xiu; LIU Huai-Min; LIU Suo; LIU Ying-Jie; MA Qiu-Mei; MA Xiang; MAO Ya-Jun; MO Xiao-Hu; LI Wei-Guo; PAN Ming-Hua; PANG Cai-Ying; PING Rong-Gang; QIN Gang; QIN Ya-Hong; QIU Jin-Fa; SUN Sheng-Sen; SUN Yong-Zhao; WANG Ji-Ke; WANG Liang-Liang; MAO Ze-Pu; WEN Shuo-Pin; WU Ling-Sui; XIE Yu-Guang; XU Min; YAN Liang; YOU Zheng-Yun; YU Guo-Wei; YUAN Chang-Zheng; YUAN Ye; ZHANG Bing-Yun; BIAN Jian-Ming; ZHANG Chang-Chun; ZHANG Jian-Yong; ZHANG Xue-Yao; ZHANG Yao; ZHENG Yang-Heng; ZHU Ke-Jun; ZHU Yong-Sheng; ZHU Zhi-Li; ZOU Jia-Heng; CAO Guo-Fu; CAO Xue-Xiang; CHEN Shen-Jian; DENG Zi-Yan

    2008-01-01

    We describe the algorithm used to reconstruct charged tracks in the BESⅢ main drift chamber at BEPCⅡ, including track finding and fitting. With a new Track Segment Finder (TSF) method, the results of the present study indicate that the algorithm can reconstruct charged tracks over a wide range of momentum with high efficiency, while improving the robustness against background noise in the drift chamber. The overall performance, including spatial resolution, momentum resolution, and secondary vertex reconstruction efficiency, satisfies the requirements of the BESⅢ experiment.

  17. A Survey on Methods for Reconstructing Surfaces from Unorganized Point Sets

    Directory of Open Access Journals (Sweden)

    Vilius Matiukas

    2011-08-01

    Full Text Available This paper addresses the issue of reconstructing and visualizing surfaces from unorganized point sets. These can be acquired using different techniques, such as 3D laser scanning, computerized tomography, magnetic resonance imaging and multi-camera imaging. The problem of reconstructing surfaces from unorganized point sets is common to many diverse areas, including computer graphics, computer vision, computational geometry and reverse engineering. The paper presents three alternative methods that all use variations in complementary cones to triangulate and reconstruct the tested 3D surfaces. The article evaluates and contrasts the three alternatives.

  18. Krylov subspace acceleration of waveform relaxation

    Energy Technology Data Exchange (ETDEWEB)

    Lumsdaine, A.; Wu, Deyun [Univ. of Notre Dame, IN (United States)

    1996-12-31

    Standard solution methods for numerically solving time-dependent problems typically begin by discretizing the problem on a uniform time grid and then sequentially solving for successive time points. The initial time discretization imposes a serialization to the solution process and limits parallel speedup to the speedup available from parallelizing the problem at any given time point. This bottleneck can be circumvented by the use of waveform methods in which multiple time-points of the different components of the solution are computed independently. With the waveform approach, a problem is first spatially decomposed and distributed among the processors of a parallel machine. Each processor then solves its own time-dependent subsystem over the entire interval of interest using previous iterates from other processors as inputs. Synchronization and communication between processors take place infrequently, and communication consists of large packets of information - discretized functions of time (i.e., waveforms).
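
    A minimal Jacobi-style waveform-relaxation sketch on a two-component linear ODE system is shown below: each "processor" integrates its own scalar equation over the whole interval, using the other component's previous waveform iterate as input. The test system, time grid, and sweep count are illustrative assumptions.

```python
import numpy as np

def euler_scalar(a, forcing, y0, dt):
    """Explicit Euler for y' = a*y + forcing(t) on a fixed grid."""
    y = np.empty(len(forcing))
    y[0] = y0
    for k in range(len(forcing) - 1):
        y[k + 1] = y[k] + dt * (a * y[k] + forcing[k])
    return y

# Jacobi waveform relaxation for the coupled pair
#   x' = -2x + y,  y' = x - 2y,  x(0)=1, y(0)=0,  t in [0, 1]
dt, n = 0.001, 1001
x = np.ones(n)           # initial waveform guesses (constant in time)
y = np.zeros(n)
for sweep in range(30):  # each sweep solves both subsystems independently,
    x_new = euler_scalar(-2.0, y, 1.0, dt)  # exchanging whole waveforms
    y_new = euler_scalar(-2.0, x, 0.0, dt)  # only between sweeps
    x, y = x_new, y_new

# reference: the exact solution is x(t) = (e^{-t} + e^{-3t}) / 2
t = np.linspace(0.0, 1.0, n)
x_exact = 0.5 * (np.exp(-t) + np.exp(-3 * t))
max_err = np.max(np.abs(x - x_exact))
```

    The two inner solves in each sweep are independent, which is exactly the property that lets waveform methods distribute subsystems across processors and communicate only whole waveforms between sweeps.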

  19. Wavelet analysis of the impedance cardiogram waveforms

    Science.gov (United States)

    Podtaev, S.; Stepanov, R.; Dumler, A.; Chugainov, S.; Tziberkin, K.

    2012-12-01

    Impedance cardiography has been used for diagnosing atrial and ventricular dysfunctions, valve disorders, aortic stenosis, and vascular diseases. Almost all applications of impedance cardiography require determination of some of the characteristic points of the ICG waveform. The ICG waveform has a set of characteristic points known as A, B, E ((dZ/dt)max), X, Y, O and Z. These points are related to distinct physiological events in the cardiac cycle. The objective of this work is to validate a new method of processing and interpreting impedance cardiogram waveforms using wavelet analysis. A method of computerized thoracic tetrapolar polyrheocardiography is used for hemodynamic registration. The use of an original wavelet differentiation algorithm makes it possible to combine filtering with calculation of the rheocardiogram derivatives. The proposed approach can be used in clinical practice for early diagnosis of cardiovascular system remodelling in the course of different pathologies.

  20. Reconstruction of incomplete satellite SST data sets based on EOF method

    Institute of Scientific and Technical Information of China (English)

    DING Youzhuan; WEI Zhihui; MAO Zhihua; WANG Xiaofei; PAN Delu

    2009-01-01

    For satellite remote sensing data obtained by visible and infrared band inversion, cloud cover over the ocean often results in large-scale missing data in the inversion products, and thin clouds that are difficult to detect can make the inversion products abnormal. Alvera et al. (2005) proposed a method for the reconstruction of missing data based on an Empirical Orthogonal Functions (EOF) decomposition, but that method could not process images with extreme cloud coverage (more than 95%) and required a long time for reconstruction. Besides, abnormal data in the images had a great effect on the reconstruction result. Therefore, this paper tries to improve that result. It reconstructs missing data sets by applying the EOF decomposition method twice. Firstly, the abnormal times are detected by analyzing the temporal modes of the EOF decomposition, and the abnormal data are eliminated. Secondly, the data sets, excluding the abnormal data, are analyzed using EOF decomposition, and the temporal modes then undergo a filtering process so as to enhance the ability to reconstruct, by using EOF, the images that contain little or no data. At last, this method has been applied to a large data set, i.e., 43 Sea Surface Temperature (SST) satellite images of the Changjiang River (Yangtze River) estuary and its adjacent areas, and the total reconstruction root mean square error (RMSE) is 0.82°C. It has thus been proved that this improved EOF reconstruction method is robust for reconstructing missing and unreliable satellite data.
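
    The EOF-based gap filling can be sketched as an iterative truncated-SVD imputation (in the spirit of DINEOF-type methods): observed entries are kept, and missing entries are repeatedly replaced by a low-rank EOF reconstruction. The synthetic rank-2 "SST" field, cloud mask, rank, and iteration count below are illustrative assumptions.

```python
import numpy as np

def eof_fill(data, mask, rank, n_iter):
    """Iteratively fill missing entries (mask == False) with a rank-`rank`
    EOF (truncated SVD) reconstruction."""
    filled = np.where(mask, data, 0.0)      # first guess: zero anomalies
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(filled, full_matrices=False)
        low_rank = (U[:, :rank] * s[:rank]) @ Vt[:rank]
        filled = np.where(mask, data, low_rank)   # keep observed pixels
    return filled

# synthetic "SST" field: two spatial EOFs with smooth temporal amplitudes
rng = np.random.default_rng(2)
t = np.linspace(0, 2 * np.pi, 40)
modes = np.stack([np.sin(np.linspace(0, 3, 100)),
                  np.cos(np.linspace(0, 5, 100))])   # 2 spatial modes
amps = np.stack([np.sin(t), 0.5 * np.cos(2 * t)])    # 2 temporal modes
truth = modes.T @ amps                               # 100 pixels x 40 scenes
mask = rng.random(truth.shape) > 0.3                 # ~30% "cloud" gaps
rec = eof_fill(truth, mask, rank=2, n_iter=200)
gap_rmse = np.sqrt(np.mean((rec[~mask] - truth[~mask]) ** 2))
```

    Because the synthetic field is exactly rank 2 and most pixels are observed, the iteration recovers the cloud-covered entries almost exactly while leaving observed entries untouched.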

  1. Multigrid iterative method with adaptive spatial support for computed tomography reconstruction from few-view data

    Science.gov (United States)

    Lee, Ping-Chang

    2014-03-01

    Computed tomography (CT) plays a key role in the modern medical system, whether for diagnosis or therapy. As an increased risk of cancer development is associated with exposure to radiation, reducing radiation exposure in CT becomes an essential issue. Based on compressive sensing (CS) theory, iterative methods with total variation (TV) minimization have proven to be a powerful framework for few-view tomographic image reconstruction. The multigrid method is an iterative method for solving both linear and nonlinear systems, especially when the system contains a huge number of components. In medical imaging, the image background is often defined by zero intensity, thus yielding the spatial support of the image, which is helpful for iterative reconstruction. In the proposed method, the image support is not considered as a priori knowledge. Rather, it evolves during the reconstruction process. Based on the CS framework, we propose a multigrid method with an adaptive spatial support constraint. Simultaneous algebraic reconstruction (SART) with TV minimization is implemented for comparison purposes. The numerical results show that: (1) the multigrid method performs better when fewer than 60 projection views are used; (2) spatial support greatly improves the CS reconstruction; and (3) when few projection views are measured, our method performs better than the SART+TV method with the spatial support constraint.

  2. Resolution analysis in full waveform inversion

    NARCIS (Netherlands)

    Fichtner, A.; Trampert, J.

    2011-01-01

    We propose a new method for the quantitative resolution analysis in full seismic waveform inversion that overcomes the limitations of classical synthetic inversions while being computationally more efficient and applicable to any misfit measure. The method rests on (1) the local quadratic approximat

  3. Analysis on the reconstruction accuracy of the Fitch method for inferring ancestral states

    Directory of Open Access Journals (Sweden)

    Grünewald Stefan

    2011-01-01

    Full Text Available Abstract Background As one of the most widely used parsimony methods for ancestral reconstruction, the Fitch method minimizes the total number of hypothetical substitutions along all branches of a tree to explain the evolution of a character. Due to the extensive usage of this method, studying its reconstruction accuracy has become a scientific endeavor in recent years. However, most studies are restricted to 2-state evolutionary models, and a study for higher-state models is needed, since DNA sequences are 4-state sequences and protein sequences even have 20 states. Results In this paper, the ambiguous and unambiguous reconstruction accuracies of the Fitch method are studied for N-state evolutionary models. Given an arbitrary phylogenetic tree, a recurrence system is first presented to calculate the two accuracies iteratively. As the complete binary tree and the comb-shaped tree are the two extremal evolutionary tree topologies according to balance, we focus on the reconstruction accuracies on these two topologies and analyze their asymptotic properties. Then, 1000 Yule trees with 1024 leaves are generated and analyzed to simulate real evolutionary scenarios. It is known that more taxa do not necessarily increase the reconstruction accuracies under 2-state models. The result under N-state models is also tested. Conclusions In a large tree with many leaves, the reconstruction accuracies obtained using all taxa are sometimes less than those obtained using a leaf subset under N-state models. For complete binary trees, there always exists an equilibrium interval [a, b] of conservation probability, in which the limiting ambiguous reconstruction accuracy equals the probability of randomly picking a state. The value b decreases as the number of states increases, and it seems to converge. When the conservation probability is greater than b, the reconstruction accuracies of the Fitch method increase rapidly. The reconstruction
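
    The Fitch bottom-up pass itself is short enough to sketch directly: at each internal node, take the intersection of the child state sets if it is non-empty, otherwise take the union and count one substitution. The four-taxon example below is an illustrative assumption.

```python
def fitch(tree, leaf_states):
    """Bottom-up Fitch pass: returns (state set at the node, substitution count).
    `tree` is a nested tuple of leaf names; `leaf_states` maps name -> state."""
    if isinstance(tree, str):                  # leaf: singleton state set
        return {leaf_states[tree]}, 0
    left_set, left_cost = fitch(tree[0], leaf_states)
    right_set, right_cost = fitch(tree[1], leaf_states)
    inter = left_set & right_set
    if inter:                                  # children agree: intersect, no cost
        return inter, left_cost + right_cost
    return left_set | right_set, left_cost + right_cost + 1   # union: +1 substitution

# four taxa with one DNA site each, on the topology ((t1,t2),(t3,t4))
states = {"t1": "A", "t2": "A", "t3": "C", "t4": "G"}
root_set, n_subst = fitch((("t1", "t2"), ("t3", "t4")), states)
```

    Here the (t3,t4) clade forces one substitution and the root forces another, so the minimum is two substitutions with an ambiguous root set {A, C, G}; a top-down pass would then resolve one ancestral state per node.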

  4. A method for phase reconstruction in optical three-dimensional shape measurement

    Institute of Scientific and Technical Information of China (English)

    Qiao Nao-Sheng; He Zhi

    2012-01-01

    In optical three-dimensional shape measurement, a method of improving the measurement precision for phase reconstruction without phase unwrapping is analyzed in detail. Intensities of any five consecutive pixels that lie in the x-axis direction of the phase domain are given. Partial derivatives of the phase function in the x- and y-axis directions are obtained with a phase-shifting mechanism, the origin of which is analysed. Furthermore, to avoid phase unwrapping in the phase reconstruction, we derive the gradient of the phase function and perform a two-dimensional integral along the x- and y-axis directions. The reconstructed phase can be obtained directly by performing numerical integration, and thus it is of great convenience for phase reconstruction. Finally, the results of numerical simulations and practical experiments verify the correctness of the proposed method.
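
    The unwrapping-free idea, recovering the phase by integrating its gradient along the x- and y-axis directions, can be sketched as follows. Here the "measured" partial derivatives are simply forward differences of a synthetic smooth phase, an illustrative assumption under which path integration recovers the phase exactly up to the reference value.

```python
import numpy as np

# smooth synthetic test phase (no 2*pi wrapping after gradient integration)
ny, nx = 64, 64
yy, xx = np.mgrid[0:ny, 0:nx]
phase = 3e-3 * (xx - nx / 2) ** 2 + 5e-2 * yy

# "measured" partial differences along x and y (forward differences)
gx = np.diff(phase, axis=1)        # shape (ny, nx-1)
gy = np.diff(phase, axis=0)        # shape (ny-1, nx)

# reconstruct by 2-D path integration:
# integrate across the first row, then down every column
rec = np.zeros_like(phase)
rec[0, 0] = phase[0, 0]            # known reference point
rec[0, 1:] = rec[0, 0] + np.cumsum(gx[0])
rec[1:, :] = rec[0, :] + np.cumsum(gy, axis=0)
max_err = np.max(np.abs(rec - phase))
```

    With noisy measured gradients, a single integration path would accumulate error; least-squares integration over both directions (as in the paper's two-dimensional integral) averages the paths and is the more robust choice.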

  5. A Fast Edge Preserving Bayesian Reconstruction Method for Parallel Imaging Applications in Cardiac MRI

    Science.gov (United States)

    Singh, Gurmeet; Raj, Ashish; Kressler, Bryan; Nguyen, Thanh D.; Spincemaille, Pascal; Zabih, Ramin; Wang, Yi

    2010-01-01

    Among recent advances in parallel MR imaging reconstruction, a Bayesian method called Edge-preserving Parallel Imaging with GRAph cut Minimization (EPIGRAM) has been demonstrated to significantly improve the signal-to-noise ratio (SNR) compared to the conventional regularized sensitivity encoding (SENSE) method. However, EPIGRAM requires a large number of iterations in proportion to the number of intensity labels in the image, making it computationally expensive for high-dynamic-range images. The objective of this study is to develop a Fast EPIGRAM reconstruction based on the efficient binary jump move algorithm that provides a logarithmic reduction in reconstruction time while maintaining image quality. Preliminary in vivo validation of the proposed algorithm is presented for 2D cardiac cine MR imaging and 3D coronary MR angiography at acceleration factors of 2-4. Fast EPIGRAM was found to provide similar image quality to EPIGRAM and maintain the previously reported SNR improvement over regularized SENSE, while reducing EPIGRAM reconstruction time by 25-50 times. PMID:20939095

  6. A new skin flap method for total auricular reconstruction in microtia patients with a reconstructed ear canal: extended scalp and extended mastoid postauricular skin flaps.

    Science.gov (United States)

    Hwang, Euna; Kim, Young Soo; Chung, Seum

    2014-06-01

    Before visiting a plastic surgeon, some microtia patients may undergo canaloplasty for hearing improvement. In such cases, scarred tissue and the reconstructed external auditory canal in the postauricular area may significantly limit the use of the posterior auricular skin flap for ear reconstruction. In this article, we present a new method for auricular reconstruction in microtia patients with previous canaloplasty. By dividing a postauricular skin flap into an upper scalp extended skin flap and a lower mastoid extended skin flap at the level of the reconstructed external auditory canal, the entire anterior surface of the auricular framework can be covered with the two extended postauricular skin flaps. The reconstructed ear shows good color match and texture, with the entire anterior surface of the reconstructed ear being resurfaced with the skin flaps. Clinical question/level of evidence: therapeutic, level IV.

  7. WFCatalog: A catalogue for seismological waveform data

    Science.gov (United States)

    Trani, Luca; Koymans, Mathijs; Atkinson, Malcolm; Sleeman, Reinoud; Filgueira, Rosa

    2017-09-01

    This paper reports advances in seismic waveform description and discovery leading to a new seismological service and presents the key steps in its design, implementation and adoption. This service, named WFCatalog, which stands for waveform catalogue, accommodates features of seismological waveform data. Therefore, it meets the need for seismologists to be able to select waveform data based on seismic waveform features as well as sensor geolocations and temporal specifications. We describe the collaborative design methods and the technical solution, showing the central role of seismic feature catalogues in framing the technical and operational delivery of the new service. We also provide an overview of the complex environment wherein this endeavour is scoped and discuss the related challenges. As multi-disciplinary, multi-organisational and global collaboration is necessary to address today's challenges, canonical representations can provide a focus for collaboration and conceptual tools for agreeing directions. Such collaborations can be fostered and formalised by rallying intellectual effort into the design of novel scientific catalogues and the services that support them. This work offers an example of the benefits generated by involving cross-disciplinary skills (e.g. data and domain expertise) from the early stages of design, and by sustaining engagement with the target community throughout the delivery and deployment process.

  8. The reconstruction of sound speed in the Marmousi model by the boundary control method

    CERN Document Server

    Ivanov, I B; Semenov, V S

    2016-01-01

    We present the results of numerical testing of the Boundary Control Method for sound speed determination for the acoustic equation on a half-plane. This method for solving multidimensional inverse problems requires no a priori information about the parameters under reconstruction. The application to the realistic Marmousi model demonstrates that the Boundary Control Method is workable in the case of a complicated and irregular field of acoustic rays. By use of the chosen boundary controls, an 'averaged' profile of the sound speed is recovered (the relative error is about 10-15%). Such a profile can be further utilized as a starting approximation for high-resolution iterative reconstruction methods.

  9. Real-Time Compressive Sensing MRI Reconstruction Using GPU Computing and Split Bregman Methods

    Directory of Open Access Journals (Sweden)

    David S. Smith

    2012-01-01

    Full Text Available Compressive sensing (CS has been shown to enable dramatic acceleration of MRI acquisition in some applications. Being an iterative reconstruction technique, CS MRI reconstructions can be more time-consuming than traditional inverse Fourier reconstruction. We have accelerated our CS MRI reconstruction by factors of up to 27 by using a split Bregman solver combined with a graphics processing unit (GPU computing platform. The increases in speed we find are similar to those we measure for matrix multiplication on this platform, suggesting that the split Bregman methods parallelize efficiently. We demonstrate that the combination of the rapid convergence of the split Bregman algorithm and the massively parallel strategy of GPU computing can enable real-time CS reconstruction of even acquisition data matrices of dimension 4096 × 4096 or more, depending on available GPU VRAM. Reconstruction of two-dimensional data matrices of dimension 1024 × 1024 and smaller took ~0.3 s or less, showing that this platform also provides very fast iterative reconstruction for small-to-moderate size images.
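
    The split Bregman iteration can be sketched on a small 1-D total-variation denoising problem, the CPU analogue of one building block of such a reconstruction; no GPU code or MRI sampling model is attempted here. The penalty parameters, iteration count, and piecewise-constant test signal are illustrative assumptions.

```python
import numpy as np

def tv_denoise_split_bregman(f, mu=8.0, lam=2.0, n_iter=100):
    """1-D TV denoising, min_u |Du|_1 + (mu/2)||u - f||^2, by split Bregman.
    D is the forward-difference operator; d is the split variable for Du."""
    n = len(f)
    D = np.diff(np.eye(n), axis=0)          # (n-1) x n difference operator
    A = mu * np.eye(n) + lam * D.T @ D      # fixed linear system each sweep
    d = np.zeros(n - 1)
    b = np.zeros(n - 1)                     # Bregman variable
    u = f.copy()
    for _ in range(n_iter):
        u = np.linalg.solve(A, mu * f + lam * D.T @ (d - b))
        s = D @ u + b
        d = np.sign(s) * np.maximum(np.abs(s) - 1.0 / lam, 0.0)  # shrinkage
        b = s - d                           # Bregman update: b += Du - d
    return u

# noisy piecewise-constant signal: TV denoising should recover the steps
rng = np.random.default_rng(4)
clean = np.concatenate([np.zeros(50), np.ones(50), 0.3 * np.ones(50)])
noisy = clean + 0.1 * rng.standard_normal(150)
den = tv_denoise_split_bregman(noisy)
rmse = np.sqrt(np.mean((den - clean) ** 2))
```

    The per-sweep work splits into a fixed linear solve and an element-wise shrinkage, and it is exactly these dense linear-algebra and element-wise kernels that map well onto a GPU.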

  10. Accelerated gradient methods for total-variation-based CT image reconstruction

    DEFF Research Database (Denmark)

    Jørgensen, Jakob Heide; Jensen, Tobias Lindstrøm; Hansen, Per Christian

    2011-01-01

    -based reconstruction is much more demanding, especially for 3D imaging, and the reconstruction from clinical data sets is far from being close to real-time. This is undesirable from a clinical perspective, and thus there is an incentive to accelerate the solution of the underlying optimization problem. The TV...... criterion to ensure that the TV reconstruction has indeed been found. An implementation of the methods (in C with interface to Matlab) is available for download from http://www2.imm.dtu.dk/pch/TVReg/. We compare the proposed methods with the standard gradient method, applied to a 3D test problem...... with synthetic few-view data. We find experimentally that for realistic parameters the proposed methods significantly outperform the gradient method....

  11. Seismic Waveform Inversion by Stochastic Optimization

    Directory of Open Access Journals (Sweden)

    Tristan van Leeuwen

    2011-01-01

    Full Text Available We explore the use of stochastic optimization methods for seismic waveform inversion. The basic principle of such methods is to randomly draw a batch of realizations of a given misfit function and goes back to the 1950s. The ultimate goal of such an approach is to dramatically reduce the computational cost involved in evaluating the misfit. Following earlier work, we introduce the stochasticity in waveform inversion problem in a rigorous way via a technique called randomized trace estimation. We then review theoretical results that underlie recent developments in the use of stochastic methods for waveform inversion. We present numerical experiments to illustrate the behavior of different types of stochastic optimization methods and investigate the sensitivity to the batch size and the noise level in the data. We find that it is possible to reproduce results that are qualitatively similar to the solution of the full problem with modest batch sizes, even on noisy data. Each iteration of the corresponding stochastic methods requires an order of magnitude fewer PDE solves than a comparable deterministic method applied to the full problem, which may lead to an order of magnitude speedup for waveform inversion in practice.
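
    The randomized trace estimation mentioned above is, in its simplest form, Hutchinson's estimator: probing a matrix with random sign vectors gives an unbiased estimate of its trace from a handful of matrix-vector products, which is the same idea that lets stochastic waveform inversion average the misfit over a small batch of random source combinations instead of all sources. A small sketch (names and sizes are illustrative):

```python
import numpy as np

def hutchinson_trace(matvec, n, n_samples, rng):
    """Estimate tr(A) from matrix-vector products only.

    E[v^T A v] = tr(A) when the entries of v are i.i.d. Rademacher (+/-1),
    so averaging over a modest batch of random probes approximates the
    trace without ever forming A explicitly.
    """
    total = 0.0
    for _ in range(n_samples):
        v = rng.choice([-1.0, 1.0], size=n)
        total += v @ matvec(v)
    return total / n_samples
```

    The batch-size sensitivity studied in the paper corresponds here to `n_samples`: the variance of the estimate falls off as 1/n_samples.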

  12. Fractal Dimension of Voice-Signal Waveforms

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    The fractal dimension is an important parameter that characterizes waveforms. In this paper, we derive a new method to calculate the fractal dimension of digital voice-signal waveforms. We show that fractal dimension is an efficient tool for speaker recognition or speech recognition, as it can be used to identify different speakers or to distinguish speech. We apply our results to Chinese speaker recognition, and numerical experiments show that fractal dimension is an efficient parameter for characterizing individual Chinese speakers. We have developed a semiautomatic voiceprint analysis system based on the theory of this paper and earlier research.
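
    The abstract does not give the paper's own derivation, but a classic way to compute the fractal dimension of a sampled waveform is Higuchi's method, which measures how the curve length scales with the sampling step; the sketch below uses that algorithm as a stand-in:

```python
import numpy as np

def higuchi_fd(x, k_max=8):
    """Higuchi fractal dimension of a 1D signal.

    The normalized curve length L(k) at subsampling step k scales as
    k**(-D); D is the slope of log L(k) against log(1/k). Smooth curves
    give D ~ 1, white noise gives D ~ 2.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    ks = np.arange(1, k_max + 1)
    lengths = []
    for k in ks:
        lk = []
        for m in range(k):
            idx = np.arange(m, n, k)
            if len(idx) < 2:
                continue
            # normalized curve length of the subsampled series
            lm = np.sum(np.abs(np.diff(x[idx]))) * (n - 1) / ((len(idx) - 1) * k * k)
            lk.append(lm)
        lengths.append(np.mean(lk))
    slope, _ = np.polyfit(np.log(1.0 / ks), np.log(lengths), 1)
    return slope
```

    A voiced speech frame typically falls between the two extremes, which is what makes the dimension usable as a per-speaker feature.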

  13. Image reconstruction in EIT with unreliable electrode data using random sample consensus method

    Science.gov (United States)

    Jeon, Min Ho; Khambampati, Anil Kumar; Kim, Bong Seok; In Kang, Suk; Kim, Kyung Youn

    2017-04-01

    In electrical impedance tomography (EIT), it is important to acquire reliable measurement data through the EIT system in order to achieve a good reconstructed image. To obtain reliable data, various methods for checking and optimizing the EIT measurement system have been studied. However, most of these methods involve additional cost for testing, and the measurement setup is usually evaluated before the experiment. It would be useful to have a method that can detect faulty electrode data during the experiment without any additional cost. This paper presents a method based on random sample consensus (RANSAC) to find incorrect data from faulty electrodes in EIT measurements. RANSAC is a robust curve-fitting method that removes outliers from the measurement data. The RANSAC method is applied together with the Gauss-Newton (GN) method for image reconstruction of a human thorax with faulty data. Numerical and phantom experiments are performed, and the reconstruction performance of the proposed RANSAC method with GN is compared with the conventional GN method. The results show that RANSAC with GN achieves better reconstruction performance than the conventional GN method in the presence of faulty electrode data.
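
    RANSAC as used above repeatedly fits a model to a minimal random sample and keeps the fit with the largest consensus set, so gross outliers (here, faulty-electrode measurements) never influence the final estimate. A generic line-fitting sketch of the procedure (the EIT-specific data model is replaced by a straight line purely for illustration):

```python
import numpy as np

def ransac_line(x, y, n_iter=200, thresh=0.2, rng=None):
    """Fit y = a*x + b robustly: draw random 2-point hypotheses, keep the
    one with the most inliers, then refit on the consensus set."""
    rng = rng or np.random.default_rng()
    best = np.zeros(len(x), dtype=bool)
    for _ in range(n_iter):
        i, j = rng.choice(len(x), size=2, replace=False)
        if x[i] == x[j]:
            continue
        a = (y[j] - y[i]) / (x[j] - x[i])
        b = y[i] - a * x[i]
        inliers = np.abs(y - (a * x + b)) < thresh
        if inliers.sum() > best.sum():
            best = inliers
    a, b = np.polyfit(x[best], y[best], 1)   # least squares on inliers only
    return a, b, best
```

    In the paper's setting the consensus set plays the role of the trusted electrode channels that are then passed to the Gauss-Newton reconstruction.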

  14. A constrained variable projection reconstruction method for photoacoustic computed tomography without accurate knowledge of transducer responses

    CERN Document Server

    Sheng, Qiwei; Matthews, Thomas P; Xia, Jun; Zhu, Liren; Wang, Lihong V; Anastasio, Mark A

    2015-01-01

    Photoacoustic computed tomography (PACT) is an emerging computed imaging modality that exploits optical contrast and ultrasonic detection principles to form images of the absorbed optical energy density within tissue. When the imaging system employs conventional piezoelectric ultrasonic transducers, the ideal photoacoustic (PA) signals are degraded by the transducers' acousto-electric impulse responses (EIRs) during the measurement process. If unaccounted for, this can degrade the accuracy of the reconstructed image. In principle, the effect of the EIRs on the measured PA signals can be ameliorated via deconvolution; images can be reconstructed subsequently by application of a reconstruction method that assumes an idealized EIR. Alternatively, the effect of the EIR can be incorporated into an imaging model and implicitly compensated for during reconstruction. In either case, the efficacy of the correction can be limited by errors in the assumed EIRs. In this work, a joint optimization approach to PACT image r...

  15. Application of x-ray direct methods to surface reconstructions: The solution of projected superstructures

    Science.gov (United States)

    Torrelles, X.; Rius, J.; Boscherini, F.; Heun, S.; Mueller, B. H.; Ferrer, S.; Alvarez, J.; Miravitlles, C.

    1998-02-01

    The projections of surface reconstructions are normally solved from the interatomic vectors found in two-dimensional Patterson maps computed with the intensities of the in-plane superstructure reflections. Since for difficult reconstructions this procedure is not trivial, an alternative automated one based on the "direct methods" sum function [Rius, Miravitlles, and Allmann, Acta Crystallogr. A 52, 634 (1996)] is shown. It has been applied successfully to the known c(4×2) reconstruction of Ge(001) and to the so-far unresolved In0.04Ga0.96As(001) p(4×2) surface reconstruction. For this last system we propose a modification of one of the models previously proposed for GaAs(001), whose characteristic feature is the presence of dimers along the fourfold direction.

  16. The approximate inversion as a reconstruction method in X-ray computerized tomography

    CERN Document Server

    Dietz, R L

    1999-01-01

    The mathematical model of the X-ray computerized tomography will be developed in the first chapter, the approximate inversion will be introduced, and the Radon Transform will be used as an example to demonstrate calculation of a reconstruction cone. In the second chapter, a reconstruction method for the parallel geometry is discussed, leading to derivation of the method for a fan-beam geometry. The approximate inversion calculated for the limited-angle case is presented as an example of incomplete data problems. As with complete data problems, numerical examples are given and the method is compared with existing other methods. 3D reconstruction is the topic of the third chapter. Although of no relevance in practice, a parallel geometry will be examined. No problems are encountered in transferring the reconstruction cone to the cone beam geometry, but only for a scanning curve which also is of no relevance in practice. A further reconstruction method is presented for curves fulfilling the so-called Tuy conditi...

  17. Nonlinear PET parametric image reconstruction with MRI information using kernel method

    Science.gov (United States)

    Gong, Kuang; Wang, Guobao; Chen, Kevin T.; Catana, Ciprian; Qi, Jinyi

    2017-03-01

    Positron Emission Tomography (PET) is a functional imaging modality widely used in oncology, cardiology, and neurology. It is highly sensitive, but suffers from relatively poor spatial resolution, as compared with anatomical imaging modalities, such as magnetic resonance imaging (MRI). With the recent development of combined PET/MR systems, we can improve the PET image quality by incorporating MR information. Previously we have used kernel learning to embed MR information in static PET reconstruction and direct Patlak reconstruction. Here we extend this method to direct reconstruction of nonlinear parameters in a compartment model by using the alternating direction of multiplier method (ADMM) algorithm. Simulation studies show that the proposed method can produce superior parametric images compared with existing methods.
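
    In the kernel formulation referenced above (following Wang and Qi's kernel method), the PET image is represented as x = Kα, where the kernel matrix K is built from MR-derived feature vectors and the reconstruction estimates the coefficients α. A sketch of one common way to build such a K (Gaussian weights over k nearest neighbours; the feature choice and normalization here are assumptions, not the paper's exact recipe):

```python
import numpy as np

def knn_gaussian_kernel(features, k=5, sigma=1.0):
    """Build a kernel matrix K from per-voxel MR feature vectors.

    K[i, j] = exp(-||f_i - f_j||^2 / (2 sigma^2)) for the k nearest
    neighbours of voxel i (zero elsewhere), then row-normalized so that
    x = K @ alpha is a neighbourhood-weighted average guided by the MRI.
    """
    f = np.asarray(features, dtype=float)
    n = f.shape[0]
    d2 = np.sum((f[:, None, :] - f[None, :, :]) ** 2, axis=-1)
    K = np.zeros((n, n))
    for i in range(n):
        nn = np.argsort(d2[i])[: k + 1]          # k neighbours plus self
        K[i, nn] = np.exp(-d2[i, nn] / (2.0 * sigma ** 2))
    return K / K.sum(axis=1, keepdims=True)      # row-stochastic
```

    The extension described in the abstract keeps this K fixed and embeds it inside an ADMM loop that updates the nonlinear compartment-model parameters.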

  18. Maximum-entropy weak lens reconstruction improved methods and application to data

    CERN Document Server

    Marshall, P J; Gull, S F; Bridle, S L

    2002-01-01

    We develop the maximum-entropy weak shear mass reconstruction method presented in earlier papers by taking each background galaxy image shape as an independent estimator of the reduced shear field and incorporating an intrinsic smoothness into the reconstruction. The characteristic length scale of this smoothing is determined by Bayesian methods. Within this algorithm the uncertainties due to the intrinsic distribution of galaxy shapes are carried through to the final mass reconstruction, and the mass within arbitrarily shaped apertures can be calculated with corresponding uncertainties. We apply this method to two clusters taken from N-body simulations using mock observations corresponding to Keck LRIS and mosaiced HST WFPC2 fields. We demonstrate that the Bayesian choice of smoothing length is sensible and that masses within apertures (including one on a filamentary structure) are reliable. We apply the method to data taken on the cluster MS1054-03 using the Keck LRIS (Clowe et al. 2000) and HST (Hoekstra e...

  19. Video Frames Reconstruction Based on Time-Frequency Analysis and Hermite Projection Method

    Directory of Open Access Journals (Sweden)

    Krylov Andrey

    2010-01-01

    Full Text Available A method for temporal analysis and reconstruction of video sequences based on the time-frequency analysis and Hermite projection method is proposed. The S-method-based time-frequency distribution is used to characterize stationarity within the sequence. Namely, a sequence of DCT coefficients along the time axes is used to create a frequency-modulated signal. The reconstruction of nonstationary sequences is done using the Hermite expansion coefficients. Here, a small number of Hermite coefficients can be used, which may provide significant savings for some video-based applications. The results are illustrated with video examples.

  20. Novel Sampling and Reconstruction Method for Non-Bandlimited Impulse Signals

    Institute of Scientific and Technical Information of China (English)

    Feng Yang; Jian-Hao Hu; Shao-Qian Li

    2009-01-01

    To sample non-bandlimited impulse signals, an analog-to-digital converter (ADC) with an extremely high sampling rate is required. Such an ADC is very difficult to implement with present semiconductor technology. In this paper, a novel sampling and reconstruction method for impulse signals is proposed. The required sampling rate of the proposed method is close to the signal innovation rate, which is much lower than the Nyquist rate of conventional Shannon sampling theory. Analysis and simulation results show that the proposed method achieves very good reconstruction performance in the presence of noise.

  1. A low error reconstruction method for confocal holography to determine 3-dimensional properties

    Energy Technology Data Exchange (ETDEWEB)

    Jacquemin, P.B., E-mail: pbjacque@nps.edu [Mechanical Engineering, University of Victoria, EOW 548,800 Finnerty Road, Victoria, BC (Canada); Herring, R.A. [Mechanical Engineering, University of Victoria, EOW 548,800 Finnerty Road, Victoria, BC (Canada)

    2012-06-15

    A confocal holography microscope developed at the University of Victoria uniquely combines holography with a scanning confocal microscope to non-intrusively measure fluid temperatures in three dimensions (Herring, 1997), (Abe and Iwasaki, 1999), (Jacquemin et al., 2005). The Confocal Scanning Laser Holography (CSLH) microscope was built and tested to verify the concept of 3D temperature reconstruction from scanned holograms. The CSLH microscope used a focused laser to non-intrusively probe a heated fluid specimen. The focused beam probed the specimen instead of a collimated beam in order to obtain different phase-shift data for each scan position; a collimated beam would produce the same information for scanning along the optical propagation z-axis. No rotational scanning mechanisms were used in the CSLH microscope, which restricted the scan angle to the cone angle of the probe beam. Limited viewing-angle scanning from a single viewpoint window posed a challenge for tomographic 3D reconstruction: the reconstruction matrices were either singular or ill-conditioned, making reconstruction impossible or subject to significant error. Establishing boundary conditions with a particular scanning geometry resulted in a low-error method of reconstruction referred to as 'wily'. The wily reconstruction method can be applied to microscopy situations requiring 3D imaging where there is a single viewpoint window, a probe beam with high numerical aperture, and specified boundary conditions for the specimen. The issues and progress of the wily algorithm for the CSLH microscope are reported herein. -- Highlights: ► Evaluation of an optical confocal holography device to measure 3D temperature of a heated fluid. ► Processing of multiple holograms containing the cumulative refractive index through the fluid. ► Reconstruction issues due to restricting angular scanning to the numerical aperture of the

  2. Detection and identification method for the ECG combination test waveforms of an electrocardiograph verification instrument

    Institute of Scientific and Technical Information of China (English)

    武晓东; 徐森; 何进雪

    2012-01-01

    This paper proposes a method for detecting and identifying the ECG combination test waveforms output by an electrocardiograph verification instrument. The method is based on virtual instrument technology: a data acquisition card captures the ECG combination test waveforms, and LabVIEW programs detect and identify the amplitude and time-interval parameters of each waveform segment.

  3. Noise reduction in computed tomography using a multiplicative continuous-time image reconstruction method

    Science.gov (United States)

    Yamaguchi, Yusaku; Kojima, Takeshi; Yoshinaga, Tetsuya

    2016-03-01

    In clinical X-ray computed tomography (CT), filtered back-projection as a transform method and iterative reconstruction such as the maximum-likelihood expectation-maximization (ML-EM) method are well-known approaches to reconstructing tomographic images. As an alternative reconstruction method, we have presented a continuous-time image reconstruction (CIR) system described by a nonlinear dynamical system, based on the idea of continuous methods for solving tomographic inverse problems. Recently, we have also proposed a multiplicative CIR system described by differential equations based on the minimization of a weighted Kullback-Leibler divergence. We prove theoretically that the divergence measure decreases along the solution to the CIR system for consistent inverse problems. Given the noisy nature of projections in clinical CT, the inverse problem belongs to the category of ill-posed problems. The noise-reduction performance of the previously developed CIR system was investigated by means of numerical experiments using a circular phantom image. Compared to the conventional CIR and ML-EM methods, the proposed CIR method has an advantage on noisy projections with lower signal-to-noise ratios, in terms of the divergence measure on the actual image under the same common measure observed via the projection data. The results lead to the conclusion that the multiplicative CIR method is more effective and robust for noise reduction in CT than the ML-EM and conventional CIR methods.

  4. A new method to reconstruct intra-fractional prostate motion in volumetric modulated arc therapy

    Science.gov (United States)

    Chi, Y.; Rezaeian, N. H.; Shen, C.; Zhou, Y.; Lu, W.; Yang, M.; Hannan, R.; Jia, X.

    2017-07-01

    Intra-fractional motion is a concern during prostate radiation therapy, as it may cause deviations between planned and delivered radiation doses. Because accurate motion information during treatment delivery is critical to address dose deviation, we developed the projection marker matching method (PM3), a novel method for prostate motion reconstruction in volumetric modulated arc therapy. The purpose of this method is to reconstruct the in-treatment prostate motion trajectory using projected positions of implanted fiducial markers measured in kV x-ray projection images acquired during treatment delivery. We formulated this task as a quadratic optimization problem. The objective function penalized the distance from the reconstructed 3D position of each fiducial marker to the corresponding straight line, defined by the x-ray projection of the marker. Rigid translational motion of the prostate and motion smoothness along the temporal dimension were assumed and incorporated into the optimization model. We tested the motion reconstruction method in both simulation and phantom experimental studies. We quantified the accuracy using the 3D normalized root-mean-square (RMS) error, defined as the norm of a vector containing the ratios between the absolute RMS errors and the corresponding motion ranges in three dimensions. In the simulation study with realistic prostate motion trajectories, the 3D normalized RMS error was on average ~0.164 (range: 0.097 to 0.333). In an experimental study, a prostate phantom was driven to move along a realistic prostate motion trajectory; the 3D normalized RMS error was ~0.172. We also examined the impact of the model parameters on reconstruction accuracy, and found that a single set of parameters can be used for all the tested cases to accurately reconstruct the motion trajectories. The motion trajectory derived by PM3 may be incorporated into novel strategies, including 4D dose reconstruction and adaptive treatment replanning to address motion
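
    The accuracy metric above can be stated compactly: the per-axis RMS error divided by that axis's motion range, collected into a vector whose Euclidean norm is the reported figure. A sketch under that reading of the definition (array shapes and names are illustrative):

```python
import numpy as np

def normalized_rms_3d(est, truth):
    """3D normalized RMS error between trajectories of shape (T, 3):
    norm of the vector of per-axis (RMS error / motion range) ratios."""
    est = np.asarray(est, dtype=float)
    truth = np.asarray(truth, dtype=float)
    rms = np.sqrt(np.mean((est - truth) ** 2, axis=0))       # per-axis RMS
    motion_range = truth.max(axis=0) - truth.min(axis=0)     # per-axis range
    return float(np.linalg.norm(rms / motion_range))
```

    Normalizing by the motion range makes the figure comparable across trajectories with very different amplitudes, which is why a single number can summarize accuracy over all tested cases.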

  5. Local and Non-local Regularization Techniques in Emission (PET/SPECT) Tomographic Image Reconstruction Methods.

    Science.gov (United States)

    Ahmad, Munir; Shahzad, Tasawar; Masood, Khalid; Rashid, Khalid; Tanveer, Muhammad; Iqbal, Rabail; Hussain, Nasir; Shahid, Abubakar; Fazal-E-Aleem

    2016-06-01

    Emission tomographic image reconstruction is an ill-posed problem, owing to limited and noisy data and various image-degrading effects affecting the data, and it leads to noisy reconstructions. Explicit regularization, through iterative reconstruction methods, is considered better able to compensate for reconstruction-based noise. Local smoothing and edge-preserving regularization methods can reduce reconstruction-based noise. However, these methods produce overly smoothed images or blocky artefacts in the final image because they can only exploit local image properties. Recently, non-local regularization techniques have been introduced to overcome these problems by incorporating the geometrical global continuity and connectivity present in the objective image. These techniques can overcome the drawbacks of local regularization methods; however, they also have certain limitations, such as the choice of the regularization function, the neighbourhood size, or the calibration of several empirical parameters involved. This work compares different local and non-local regularization techniques used in emission tomographic imaging in general, and in emission computed tomography specifically, for improved quality of the resultant images.

  6. L1/2 regularization based numerical method for effective reconstruction of bioluminescence tomography

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Xueli, E-mail: xlchen@xidian.edu.cn, E-mail: jimleung@mail.xidian.edu.cn; Yang, Defu; Zhang, Qitan; Liang, Jimin, E-mail: xlchen@xidian.edu.cn, E-mail: jimleung@mail.xidian.edu.cn [School of Life Science and Technology, Xidian University, Xi' an 710071 (China); Engineering Research Center of Molecular and Neuro Imaging, Ministry of Education (China)

    2014-05-14

    Even though bioluminescence tomography (BLT) exhibits significant potential and wide applications in macroscopic imaging of small animals in vivo, the inverse reconstruction is still a tough problem that has plagued researchers in this area. The ill-posedness of the inverse reconstruction arises from insufficient measurements and modeling errors, so that the inverse reconstruction cannot be solved directly. In this study, an l1/2 regularization based numerical method was developed for effective reconstruction of BLT. In the method, the inverse reconstruction of BLT was constrained into an l1/2 regularization problem, and then the weighted interior-point algorithm (WIPA) was applied to solve the problem by transforming it into the solution of a series of l1 regularizers. The feasibility and effectiveness of the proposed method were demonstrated with numerical simulations on a digital mouse. Stability verification experiments further illustrated the robustness of the proposed method for different levels of Gaussian noise.

  7. Measuring the misfit between seismograms using an optimal transport distance: application to full waveform inversion

    Science.gov (United States)

    Métivier, L.; Brossier, R.; Mérigot, Q.; Oudet, E.; Virieux, J.

    2016-04-01

    Full waveform inversion using the conventional L2 distance to measure the misfit between seismograms is known to suffer from cycle skipping. An alternative strategy is proposed in this study, based on a measure of the misfit computed with an optimal transport distance. This measure allows one to account for the lateral coherency of events within the seismograms, instead of considering each seismic trace independently, as is generally done in full waveform inversion. The computation of this optimal transport distance relies on a particular mathematical formulation allowing for the non-conservation of the total energy between seismograms. The numerical solution of the optimal transport problem is performed using proximal splitting techniques. Three synthetic case studies are investigated using this strategy: the Marmousi 2 model, the BP 2004 salt model, and the Chevron 2014 benchmark data. The results emphasize interesting properties of the optimal transport distance: the associated misfit function is less prone to cycle skipping. A workflow is designed to reconstruct accurately the salt structures in the BP 2004 model, starting from an initial model containing no information about these structures. A high-resolution P-wave velocity estimation is built from the Chevron 2014 benchmark data, following a frequency continuation strategy. This estimation explains the data accurately. Using the same workflow, full waveform inversion based on the L2 distance converges towards a local minimum. These results yield encouraging perspectives regarding the use of the optimal transport distance for full waveform inversion: the sensitivity to the accuracy of the initial model is reduced, the reconstruction of complex salt structures is made possible, the method is robust to noise, and the interpretation of seismic data dominated by reflections is enhanced.
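
    The cycle-skipping advantage is easy to see in one dimension, where the Wasserstein-1 distance between normalized signals is just the L1 distance between their cumulative distributions: unlike the L2 misfit, it keeps growing with the time shift between two events instead of saturating once they no longer overlap. A toy sketch (normalizing to unit mass here sidesteps the non-conservation issue that the paper treats with a dedicated formulation):

```python
import numpy as np

def w1_1d(p, q, dx=1.0):
    """Wasserstein-1 distance between two nonnegative 1D signals,
    computed as the integral of |CDF_p - CDF_q| after normalization."""
    p = p / p.sum()
    q = q / q.sum()
    return float(np.sum(np.abs(np.cumsum(p) - np.cumsum(q))) * dx)

def l2_misfit(p, q):
    """Conventional least-squares misfit between two traces."""
    return float(np.sum((p - q) ** 2))

# Two unit spikes: an "observed" event and a "modelled" event shifted in time.
n = 100
obs = np.zeros(n); obs[20] = 1.0
near = np.zeros(n); near[25] = 1.0     # shift of 5 samples
far = np.zeros(n); far[60] = 1.0       # shift of 40 samples
```

    Here `l2_misfit(obs, near)` equals `l2_misfit(obs, far)`: once the spikes stop overlapping, L2 is blind to how far apart they are (the cycle-skipping plateau), while `w1_1d` grows linearly with the shift (5 vs. 40), giving the optimizer a useful gradient back towards alignment.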

  8. A sampling method for the reconstruction of a periodic interface in a layered medium

    Science.gov (United States)

    Sun, Guanying; Zhang, Ruming

    2016-07-01

    In this paper, we consider the inverse problem of reconstructing periodic interfaces in a two-layered medium with TM-mode. We propose a sampling-type method to recover the top periodic interface from the near-field data measured on a straight line above the total structure. Finally, numerical experiments are illustrated to show the effectiveness of the method.

  9. A Comparison of Affect Ratings Obtained with Ecological Momentary Assessment and the Day Reconstruction Method

    Science.gov (United States)

    Dockray, Samantha; Grant, Nina; Stone, Arthur A.; Kahneman, Daniel; Wardle, Jane; Steptoe, Andrew

    2010-01-01

    Measurement of affective states in everyday life is of fundamental importance in many types of quality of life, health, and psychological research. Ecological momentary assessment (EMA) is the recognized method of choice, but the respondent burden can be high. The day reconstruction method (DRM) was developed by Kahneman and colleagues ("Science,"…

  10. The study of lossy compressive method with different interpolation for holographic reconstruction in optical scanning holography

    Directory of Open Access Journals (Sweden)

    HU Zhijuan

    2015-08-01

    Full Text Available The lossy hologram compression method with three different interpolations is investigated to compress images holographically recorded with optical scanning holography. Without loss of major reconstruction details, the results show that the lossy compression method is able to achieve a high compression ratio of up to 100.

  11. The regularized blind tip reconstruction algorithm as a scanning probe microscopy tip metrology method

    CERN Document Server

    Jozwiak, G; Masalska, A; Gotszalk, T; Ritz, I; Steigmann, H

    2011-01-01

    The problem of accurate tip radius and shape characterization is very important for the determination of surface mechanical and chemical properties on the basis of scanning probe microscopy measurements. We think that the most favorable methods for this purpose are blind tip reconstruction methods, since they do not need any calibrated characterizers and can be performed on an ordinary SPM setup. As in many other inverse problems, the stability of the solution in the presence of vibrational and electronic noise requires the application of so-called regularization techniques. In this paper a novel regularization technique (Regularized Blind Tip Reconstruction - RBTR) for the blind tip reconstruction algorithm is presented. It improves the quality of the solution in the presence of both isotropic and anisotropic noise. The superiority of our approach is proved on the basis of computer simulations and analysis of images of the Budget Sensors TipCheck calibration standard. In case of characterization ...

  12. Electronics via waveform analysis

    CERN Document Server

    Craig, Edwin C

    1993-01-01

    The author believes that a good basic understanding of electronics can be achieved by detailed visual analyses of the actual voltage waveforms present in selected circuits. The voltage waveforms included in this text were photographed using a 35-mm camera in an attempt to make the book more attractive. This book is intended for the use of students with a variety of backgrounds. For this reason considerable material has been placed in the Appendix for those students who find it useful. The Appendix includes many basic electricity and electronic concepts as well as mathematical derivations that are not vital to the understanding of the circuit being discussed in the text at that time. Also some derivations might be so long that, if included in the text, it could affect the concentration of the student on the circuit being studied. The author has tried to make the book comprehensive enough so that a student could use it as a self-study course, providing one has access to adequate laboratory equipment.

  13. Waveform inversion of volcano-seismic signals for an extended source

    Science.gov (United States)

    Nakano, M.; Kumagai, H.; Chouet, B.; Dawson, P.

    2007-01-01

    We propose a method to investigate the dimensions and oscillation characteristics of the source of volcano-seismic signals based on waveform inversion for an extended source. An extended source is realized by a set of point sources distributed on a grid surrounding the centroid of the source in accordance with the source geometry and orientation. The source-time functions for all point sources are estimated simultaneously by waveform inversion carried out in the frequency domain. We apply a smoothing constraint to suppress short-scale noisy fluctuations of source-time functions between adjacent sources. The strength of the smoothing constraint we select is that which minimizes the Akaike Bayesian Information Criterion (ABIC). We perform a series of numerical tests to investigate the capability of our method to recover the dimensions of the source and reconstruct its oscillation characteristics. First, we use synthesized waveforms radiated by a kinematic source model that mimics the radiation from an oscillating crack. Our results demonstrate almost complete recovery of the input source dimensions and source-time function of each point source, but also point to a weaker resolution of the higher modes of crack oscillation. Second, we use synthetic waveforms generated by the acoustic resonance of a fluid-filled crack, and consider two sets of waveforms dominated by the modes with wavelengths 2L/3 and 2W/3, or L and 2L/5, where W and L are the crack width and length, respectively. Results from these tests indicate that the oscillating signature of the 2L/3 and 2W/3 modes are successfully reconstructed. The oscillating signature of the L mode is also well recovered, in contrast to results obtained for a point source for which the moment tensor description is inadequate. However, the oscillating signature of the 2L/5 mode is poorly recovered owing to weaker resolution of short-scale crack wall motions. 
The triggering excitations of the oscillating cracks are successfully

  14. Cosmic Web Reconstruction through Density Ridges: Method and Algorithm

    CERN Document Server

    Chen, Yen-Chi; Freeman, Peter E; Genovese, Christopher R; Wasserman, Larry

    2015-01-01

    The detection and characterization of filamentary structures in the cosmic web allows cosmologists to constrain parameters that dictate the evolution of the Universe. While many filament estimators have been proposed, they generally lack estimates of uncertainty, reducing their inferential power. In this paper, we demonstrate how one may apply the Subspace Constrained Mean Shift (SCMS) algorithm (Ozertem and Erdogmus (2011); Genovese et al. (2012)) to uncover filamentary structure in galaxy data. The SCMS algorithm is a gradient ascent method that models filaments as density ridges, one-dimensional smooth curves that trace high-density regions within the point cloud. We also demonstrate how augmenting the SCMS algorithm with bootstrap-based methods of uncertainty estimation allows one to place uncertainty bands around putative filaments. We apply the SCMS method to datasets sampled from the P3M N-body simulation, with galaxy number densities consistent with SDSS and WFIRST-AFTA, and to LOWZ and CMASS data fro...

  15. METHOD OF DETERMINING ECONOMICAL EFFICIENCY OF HOUSING STOCK RECONSTRUCTION IN A CITY

    Directory of Open Access Journals (Sweden)

    Petreneva Ol’ga Vladimirovna

    2016-03-01

    Full Text Available The demand for comfortable housing has always been very high. Building density differs from region to region, and sometimes there is no land for new housing construction, especially in central city districts. Moreover, many cities retain cultural and historical centers that define the historical appearance of the city, so new construction is impossible in these areas. At the same time, owing to physical depreciation and obsolescence, the service life of many buildings is coming to an end and they fall into disrepair. In such cases the question arises of reconstructing the existing residential, public and industrial buildings. The aim of reconstruction is to bring the existing worn-out building stock into correspondence with technical, social and sanitary requirements and with modern living standards and conditions. The authors consider the relevance of, and reasons for, the reconstruction of residential buildings, and attempt to answer the question of which is more economically efficient: new construction or reconstruction of residential buildings. The article offers a method to calculate the efficiency of residential building reconstruction.

  16. Comparing five alternative methods of breast reconstruction surgery: a cost-effectiveness analysis.

    Science.gov (United States)

    Grover, Ritwik; Padula, William V; Van Vliet, Michael; Ridgway, Emily B

    2013-11-01

    The purpose of this study was to assess the cost-effectiveness of five standardized procedures for breast reconstruction to delineate the best reconstructive approach in postmastectomy patients in the settings of nonirradiated and irradiated chest walls. A decision tree was used to model five breast reconstruction procedures from the provider perspective to evaluate cost-effectiveness. Procedures included autologous flaps with pedicled tissue, autologous flaps with free tissue, latissimus dorsi flaps with breast implants, expanders with implant exchange, and immediate implant placement. All methods were compared with a "do-nothing" alternative. Data for model parameters were collected through a systematic review, and patient health utilities were calculated from an ad hoc survey of reconstructive surgeons. Results were measured in cost (2011 U.S. dollars) per quality-adjusted life-year. Univariate sensitivity analyses and Bayesian multivariate probabilistic sensitivity analysis were conducted. Pedicled autologous tissue and free autologous tissue reconstruction were cost-effective compared with the do-nothing alternative. Pedicled autologous tissue was the slightly more cost-effective of the two. The other procedures were not found to be cost-effective. The results were robust to a number of sensitivity analyses, although the margin between pedicled and free autologous tissue reconstruction is small and affected by some parameter values. Autologous pedicled tissue was slightly more cost-effective than free tissue reconstruction in irradiated and nonirradiated patients. Implant-based techniques were not cost-effective. This is in agreement with the growing trend at academic institutions to encourage autologous tissue reconstruction because of its natural recreation of the breast contour, suppleness, and resiliency in the setting of irradiated recipient beds.
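The decision-analytic comparison in this record reduces each pairwise comparison to an incremental cost-effectiveness ratio (ICER): extra cost per extra quality-adjusted life-year (QALY). A minimal sketch, with all numbers invented for illustration (they are not the study's values):

```python
# Hypothetical sketch of the cost-effectiveness comparison. The ICER
# is the extra cost per extra QALY of one strategy over another; all
# numbers below are invented for illustration.
def icer(cost_a, qaly_a, cost_b, qaly_b):
    """Incremental cost per QALY of strategy A over strategy B."""
    return (cost_a - cost_b) / (qaly_a - qaly_b)

# e.g. a reconstruction strategy vs. the "do-nothing" alternative
ratio = icer(cost_a=25_000.0, qaly_a=20.5, cost_b=0.0, qaly_b=19.5)
print(ratio)  # 25000.0 dollars per QALY
```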

  17. Convergence analysis for column-action methods in image reconstruction

    DEFF Research Database (Denmark)

    Elfving, Tommy; Hansen, Per Christian; Nikazad, Touraj

    2016-01-01

    . We present a convergence analysis of the column algorithms, we discuss two techniques (loping and flagging) for reducing the work, and we establish some convergence results for methods that utilize these techniques. The performance of the algorithms is illustrated with numerical examples from...

  18. Gaining insight into food webs reconstructed by the inverse method

    NARCIS (Netherlands)

    Kones, J.; Soetaert, K.E.R.; Van Oevelen, D.; Owino, J.; Mavuti, K.

    2006-01-01

    The use of the inverse method to analyze flow patterns of organic components in ecological systems has had wide application in ecological modeling. Through this approach, an infinite number of food web flows describing the food web and satisfying biological constraints are generated, from which one

  19. An extended stochastic reconstruction method for catalyst layers in proton exchange membrane fuel cells

    Science.gov (United States)

    Kang, Jinfen; Moriyama, Koji; Kim, Seung Hyun

    2016-09-01

    This paper presents an extended, stochastic reconstruction method for catalyst layers (CLs) of Proton Exchange Membrane Fuel Cells (PEMFCs). The focus is placed on the reconstruction of customized, low platinum (Pt) loading CLs where the microstructure of CLs can substantially influence the performance. The sphere-based simulated annealing (SSA) method is extended to generate the CL microstructures with specified and controllable structural properties for agglomerates, ionomer, and Pt catalysts. In the present method, the agglomerate structures are controlled by employing a trial two-point correlation function used in the simulated annealing process. An off-set method is proposed to generate more realistic ionomer structures. The variations of ionomer structures at different humidity conditions are considered to mimic the swelling effects. A method to control Pt loading, distribution, and utilization is presented. The extension of the method to consider heterogeneity in structural properties, which can be found in manufactured CL samples, is presented. Various reconstructed CLs are generated to demonstrate the capability of the proposed method. Proton transport properties of the reconstructed CLs are calculated and validated with experimental data.
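The sphere-based simulated annealing driven by a trial two-point correlation function can be illustrated in one dimension. This is a hedged toy, not the paper's 3-D sphere-packing method: a binary array is annealed by pixel swaps (which preserve the volume fraction) until its two-point correlation S2(r) matches a reference.

```python
import numpy as np

# Toy sketch of annealing a microstructure to match a target
# two-point correlation function S2(r), the statistic controlled by
# the trial function in the simulated-annealing process. 1-D binary
# medium for brevity; the real method reconstructs 3-D CLs.
def s2(x, rmax):
    return np.array([np.mean(x * np.roll(x, r)) for r in range(rmax)])

def anneal(x, target, rmax, steps, temp, rng):
    err = np.sum((s2(x, rmax) - target) ** 2)
    for k in range(steps):
        i, j = rng.integers(len(x), size=2)
        if x[i] == x[j]:
            continue
        x[i], x[j] = x[j], x[i]            # swap preserves volume fraction
        new = np.sum((s2(x, rmax) - target) ** 2)
        t = temp * (1 - k / steps)
        if new < err or rng.random() < np.exp(-(new - err) / max(t, 1e-12)):
            err = new
        else:
            x[i], x[j] = x[j], x[i]        # reject: undo the swap
    return x, err

rng = np.random.default_rng(1)
ref = (rng.random(200) < 0.3).astype(float)   # reference microstructure
target = s2(ref, rmax=10)                     # its correlation function
x0 = rng.permutation(ref)                     # same volume fraction, shuffled
start = np.sum((s2(x0, 10) - target) ** 2)
_, final = anneal(x0, target, rmax=10, steps=2000, temp=1e-6, rng=rng)
print(final <= start)                         # error decreases during annealing
```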

  20. A novel building boundary reconstruction method based on lidar data and images

    Science.gov (United States)

    Chen, Yiming; Zhang, Wuming; Zhou, Guoqing; Yan, Guangjian

    2013-09-01

    Building boundaries are important for urban mapping and real estate applications. The reconstruction of building boundaries is also a significant but difficult step in generating city building models. As a light detection and ranging (lidar) system can acquire large, dense point clouds quickly and easily, it has great advantages for building reconstruction. In this paper, we combine lidar data and images to develop a novel building boundary reconstruction method. We use only one scan of lidar data and one image to do the reconstruction. The process consists of a sequence of three steps: project boundary lidar points to the image; extract an accurate boundary from the image; and reconstruct the boundary in the lidar points. We define a relationship between 3D points and pixel coordinates, then extract the boundary in the image and use this relationship to obtain the boundary in the point cloud. The method presented here effectively reduces the difficulty of data acquisition. The theory is not complex, so it has low computational complexity. It can also be applied to data acquired by other 3D scanning devices to improve accuracy. Results of the experiment demonstrate that this method has a clear advantage and high efficiency over others, particularly for data with large point spacing.
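The "project boundary lidar points to image" step relies on a mapping between 3D points and pixel coordinates. A hedged sketch with an idealized pinhole camera model; the intrinsics K, rotation R and translation t below are made-up calibration values, not the paper's:

```python
import numpy as np

# Idealized pinhole projection of 3D points to pixel coordinates:
# X_cam = R X + t, then pixel = (K X_cam) / depth. All calibration
# values are invented for illustration.
def project(points, K, R, t):
    cam = (R @ points.T).T + t          # world -> camera frame
    uv = (K @ cam.T).T                  # apply intrinsics
    return uv[:, :2] / uv[:, 2:3]       # perspective divide

K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)
t = np.array([0.0, 0.0, 0.0])
pts = np.array([[0.0, 0.0, 2.0],        # on the optical axis, 2 m away
                [0.5, 0.0, 2.0]])
uv = project(pts, K, R, t)
print(uv)   # first point -> image centre (320, 240); second -> (520, 240)
```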

  1. Development of a method for reconstruction of crowded NMR spectra from undersampled time-domain data

    Energy Technology Data Exchange (ETDEWEB)

    Ueda, Takumi; Yoshiura, Chie; Matsumoto, Masahiko; Kofuku, Yutaka; Okude, Junya; Kondo, Keita; Shiraishi, Yutaro [The University of Tokyo, Graduate School of Pharmaceutical Sciences (Japan); Takeuchi, Koh [Japan Science and Technology Agency, Precursory Research for Embryonic Science and Technology (Japan); Shimada, Ichio, E-mail: shimada@iw-nmr.f.u-tokyo.ac.jp [The University of Tokyo, Graduate School of Pharmaceutical Sciences (Japan)

    2015-05-15

    NMR is a unique methodology for obtaining information about the conformational dynamics of proteins in heterogeneous biomolecular systems. In various NMR methods, such as transferred cross-saturation, relaxation dispersion, and paramagnetic relaxation enhancement experiments, fast determination of the signal intensity ratios in the NMR spectra with high accuracy is required for analyses of targets with low yields and stabilities. However, conventional methods for the reconstruction of spectra from undersampled time-domain data, such as linear prediction, spectroscopy with integration of frequency and time domain, analysis of Fourier, and compressed sensing, were not effective for the accurate determination of the signal intensity ratios of the crowded two-dimensional spectra of proteins. Here, we developed an NMR spectra reconstruction method, “conservation of experimental data in analysis of Fourier” (Co-ANAFOR), to reconstruct the crowded spectra from the undersampled time-domain data. The number of sampling points required for the transferred cross-saturation experiments between membrane proteins, photosystem I and cytochrome b6f, and their ligand, plastocyanin, with Co-ANAFOR was half of that needed for linear prediction, and the peak height reduction ratios of the spectra reconstructed from truncated time-domain data by Co-ANAFOR were more accurate than those reconstructed from non-uniformly sampled data by compressed sensing.

  2. Environment-based pin-power reconstruction method for homogeneous core calculations

    Energy Technology Data Exchange (ETDEWEB)

    Leroyer, H.; Brosselard, C.; Girardi, E. [EDF R and D/SINETICS, 1 av du General de Gaulle, F92141 Clamart Cedex (France)

    2012-07-01

    Core calculation schemes are usually based on a classical two-step approach associated with assembly and core calculations. During the first step, infinite-lattice assembly calculations relying on a fundamental mode approach are used to generate cross-section libraries for PWR core calculations. This fundamental mode hypothesis may be questioned when dealing with loading patterns involving several types of assemblies (UOX, MOX), burnable poisons, control rods and burn-up gradients. This paper proposes a calculation method able to take into account the heterogeneous environment of the assemblies when using homogeneous core calculations and an appropriate pin-power reconstruction. This methodology is applied to MOX assemblies, computed within an environment of UOX assemblies. The new environment-based pin-power reconstruction is then used on various clusters of 3x3 assemblies showing burn-up gradients and UOX/MOX interfaces, and compared to reference calculations performed with APOLLO-2. The results show that UOX/MOX interfaces are much better calculated with the environment-based calculation scheme when compared to the usual pin-power reconstruction method. The power peak is always better located and calculated with the environment-based pin-power reconstruction method on every cluster configuration studied. This study shows that taking into account the environment in transport calculations can significantly improve the pin-power reconstruction insofar as it is consistent with the core loading pattern. (authors)

  3. Experimental results and validation of a method to reconstruct forces on the ITER test blanket modules

    Energy Technology Data Exchange (ETDEWEB)

    Zeile, Christian, E-mail: christian.zeile@kit.edu; Maione, Ivan A.

    2015-10-15

    Highlights: • An in operation force measurement system for the ITER EU HCPB TBM has been developed. • The force reconstruction methods are based on strain measurements on the attachment system. • An experimental setup and a corresponding mock-up have been built. • A set of test cases representing ITER relevant excitations has been used for validation. • The influence of modeling errors on the force reconstruction has been investigated. - Abstract: In order to reconstruct forces on the test blanket modules in ITER, two force reconstruction methods, the augmented Kalman filter and a model predictive controller, have been selected and developed to estimate the forces based on strain measurements on the attachment system. A dedicated experimental setup with a corresponding mock-up has been designed and built to validate these methods. A set of test cases has been defined to represent possible excitation of the system. It has been shown that the errors in the estimated forces mainly depend on the accuracy of the identified model used by the algorithms. Furthermore, it has been found that a minimum of 10 strain gauges is necessary to allow for a low error in the reconstructed forces.
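The augmented-Kalman-filter idea, adding the unknown force to the state vector and estimating it from strain readings, can be illustrated with a minimal scalar version. This is a hedged sketch under strong simplifications (quasi-static gauge model, random-walk force, invented noise levels); the actual method uses an identified structural model of the attachment system.

```python
import numpy as np

# Minimal sketch of force estimation in the spirit of an augmented
# Kalman filter: the unknown force is part of the state, modelled as
# a random walk, and the measurement is a strain reading proportional
# to the force (gauge factor c). All values are invented.
def estimate_force(strains, c, q=1e-5, r=1e-2):
    f_hat, p = 0.0, 1.0                 # force estimate and its variance
    for z in strains:
        p += q                          # predict (random-walk force model)
        k = p * c / (c * c * p + r)     # Kalman gain
        f_hat += k * (z - c * f_hat)    # update with strain innovation
        p *= (1 - k * c)
    return f_hat

rng = np.random.default_rng(0)
true_force = 2.0
c = 0.5                                 # assumed strain-per-force factor
strains = c * true_force + 0.1 * rng.standard_normal(500)
print(abs(estimate_force(strains, c) - true_force) < 0.1)
```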

  4. Towards a d-bar reconstruction method for three-dimensional EIT

    DEFF Research Database (Denmark)

    Cornean, Horia Decebal; Knudsen, Kim

    Three-dimensional electrical impedance tomography (EIT) is considered. Both uniqueness proofs and theoretical reconstruction algorithms available for this problem rely on the use of exponentially growing solutions to the governing conductivity equation. The study of those solutions is continued...... here. It is shown that exponentially growing solutions exist for low complex frequencies without imposing any regularity assumption on the conductivity. Further, a reconstruction method for conductivities close to a constant is given. In this method the complex frequency is taken to zero instead...

  5. A new 3D reconstruction method of small solar system bodies

    Science.gov (United States)

    Capanna, C.; Jorda, L.; Lamy, P.; Gesquiere, G.

    2011-10-01

    The 3D reconstruction of small solar system bodies constitutes an essential step toward understanding and interpreting their physical and geological properties. We propose a new reconstruction method by photoclinometry based on the minimization of the chi-square difference between observed and synthetic images by deformation of a 3D triangular mesh. This method has been tested on images of the two asteroids (2867) Steins and (21) Lutetia observed during ESA's ROSETTA mission, and it will be applied to elaborate digital terrain models from images of the asteroid (4) Vesta, the target of NASA's DAWN spacecraft.

  6. 变压器三相涌流波形特征分析及判别方法研究%Analysis and Assessment Methods of Waveform Characteristics of Three-Phase Transformer Inrush Current

    Institute of Scientific and Technical Information of China (English)

    卓元志; 李康; 赵斌; 韩斌; 赵雪沉璎

    2012-01-01

    Building on two key waveform characteristics of transformer magnetizing inrush current, this paper proposes a method for distinguishing inrush current from internal fault current. The method exploits the fact that the inrush waveform exhibits a sharp-peaked, concave-arc shape, whereas fault current remains essentially sinusoidal. Within half a cycle, the extreme value and frequency of the sampled data are identified and used to construct a corresponding virtual sine wave; the degree of similarity between the energizing current and this constructed waveform over the half cycle then discriminates inrush current from fault current. Simulation results show that the method can clear internal transformer faults quickly and effectively, and that it is unaffected by aperiodic (DC offset) current components.
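The similarity test at the heart of this scheme can be sketched simply. The waveform shapes and the 0.9/0.5 scores below are illustrative only; the actual criterion uses the sampled extreme value and frequency within the half cycle as described in the abstract.

```python
import numpy as np

# Sketch: build a "virtual" half sine from the sampled extreme value,
# then score how closely the samples follow it. A fault current
# (nearly sinusoidal) scores high; an inrush current (dwelling near
# zero, then sharply peaked) scores low. Thresholds are illustrative.
def sine_similarity(samples):
    t = np.linspace(0, np.pi, len(samples))
    virtual = np.max(np.abs(samples)) * np.sin(t)   # virtual half sine
    return np.corrcoef(samples, virtual)[0, 1]

t = np.linspace(0, np.pi, 100)
fault = 10.0 * np.sin(t)                            # sinusoidal fault current
inrush = np.maximum(10.0 * np.sin(2 * t - np.pi), 0.0)  # flat, then a sharp peak
print(sine_similarity(fault) > 0.9, sine_similarity(inrush) < 0.5)
```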

  7. Optimization and Comparison of Different Digital Mammographic Tomosynthesis Reconstruction Methods

    Science.gov (United States)

    2008-04-01

    isocentric motion in breast tomosynthesis. We have published our results in Medical Physics, the premiere peer-reviewed journal in the field of... Medical Physics; please see Appendix #1 for the reprinted publication. 1.2. Characterize the effect of three acquisition parameters including total... working on a Medical Physics journal manuscript preparation for the GFB algorithm. We have used impulse response and MTF analysis methods to compare BP and

  8. Unhappy triad in limb reconstruction: Management by Ilizarov method

    Science.gov (United States)

    El-Alfy, Barakat Sayed

    2017-01-01

    AIM To evaluate the results of the Ilizarov method in management of cases with bone loss, soft tissue loss and infection. METHODS Twenty-eight patients with severe leg trauma complicated by bone loss, soft tissue loss and infection were managed by distraction osteogenesis in our institution. After radical debridement of all the infected and dead tissues the Ilizarov frame was applied, corticotomy was done and bone transport started. The wounds were left open to drain. Partial limb shortening was done in seven cases to reduce the size of both the skeletal and soft tissue defects. The average follow-up period was 39 mo (range 27-56 mo). RESULTS The infection was eradicated in all cases. All the soft tissue defects healed during bone transport and plastic surgery was only required in 2 cases. Skeletal defects were treated in all cases. All patients required another surgery at the docking site to fashion the soft tissue and to cover the bone ends. The external fixation time ranged from 9 to 17 mo with an average of 13 mo. The complications included pin tract infection in 16 cases, wire breakage in 2 cases, unstable scar in 4 cases and chronic edema in 3 cases. According to the Association for the Study and Application of the Method of Ilizarov (ASAMI) score, the bone results were excellent in 10, good in 16 and fair in 2 cases, while the functional results were excellent in 8, good in 17 and fair in 3 cases. CONCLUSION Distraction osteogenesis is a good method that can treat the three problems of this triad simultaneously. PMID:28144578

  9. Assessing Accuracy of Waveform Models against Numerical Relativity Waveforms

    Science.gov (United States)

    Pürrer, Michael; LVC Collaboration

    2016-03-01

    We compare currently available phenomenological and effective-one-body inspiral-merger-ringdown models for gravitational waves (GW) emitted from coalescing black hole binaries against a set of numerical relativity waveforms from the SXS collaboration. Simplifications are used in the construction of some waveform models, such as restriction to spins aligned with the orbital angular momentum, no inclusion of higher harmonics in the GW radiation, no modeling of eccentricity and the use of effective parameters to describe spin precession. In contrast, NR waveforms provide us with a high fidelity representation of the ``true'' waveform modulo small numerical errors. To focus on systematics we inject NR waveforms into zero noise for early advanced LIGO detector sensitivity at a moderately optimistic signal-to-noise ratio. We discuss where in the parameter space the above modeling assumptions lead to noticeable biases in recovered parameters.

  10. Optimizing defibrillation waveforms for ICDs.

    Science.gov (United States)

    Kroll, Mark W; Swerdlow, Charles D

    2007-04-01

    While no simple electrical descriptor provides a good measure of defibrillation efficacy, the waveform parameters that most directly influence defibrillation are voltage and duration. Voltage is a critical parameter for defibrillation because its spatial derivative defines the electrical field that interacts with the heart. Similarly, waveform duration is a critical parameter because the shock interacts with the heart for the duration of the waveform. Shock energy is the most often cited metric of shock strength and an ICD's capacity to defibrillate, but it is not a direct measure of shock effectiveness. Despite the physiological complexities of defibrillation, a simple approach in which the heart is modeled as passive resistor-capacitor (RC) network has proved useful for predicting efficient defibrillation waveforms. The model makes two assumptions: (1) The goal of both a monophasic shock and the first phase of a biphasic shock is to maximize the voltage change in the membrane at the end of the shock for a given stored energy. (2) The goal of the second phase of a biphasic shock is to discharge the membrane back to the zero potential, removing the charge deposited by the first phase. This model predicts that the optimal waveform rises in an exponential upward curve, but such an ascending waveform is difficult to generate efficiently. ICDs use electronically efficient capacitive-discharge waveforms, which require truncation for effective defibrillation. Even with optimal truncation, capacitive-discharge waveforms require more voltage and energy to achieve the same membrane voltage than do square waves and ascending waveforms. In ICDs, the value of the shock output capacitance is a key intermediary in establishing the relationship between stored energy-the key determinant of ICD size-and waveform voltage as a function of time, the key determinant of defibrillation efficacy. The RC model predicts that, for capacitive-discharge waveforms, stored energy is minimized
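The passive RC membrane model described above can be made concrete: with a capacitive-discharge shock V(t) = V0·exp(−t/τs), the membrane obeys dVm/dt = (V − Vm)/τm, so membrane voltage rises, peaks, then falls, which is why the waveform must be truncated near the peak. The sketch below uses illustrative time constants, not clinical values.

```python
import numpy as np

# Passive RC membrane response to a capacitive-discharge shock
# V(t) = exp(-t/tau_s): integrate dVm/dt = (V - Vm)/tau_m and find
# when the membrane voltage peaks (the natural truncation point).
# Time constants are illustrative values, in milliseconds.
tau_s, tau_m = 6.0, 3.0
dt, T = 0.001, 20.0
t = np.arange(0.0, T, dt)
v_shock = np.exp(-t / tau_s)            # normalized shock voltage
vm = np.zeros_like(t)
for i in range(1, len(t)):              # forward-Euler integration
    vm[i] = vm[i - 1] + dt * (v_shock[i - 1] - vm[i - 1]) / tau_m

t_peak = t[np.argmax(vm)]
print(round(t_peak, 1))  # -> 4.2 ms, matching ln(tau_s/tau_m)/(1/tau_m - 1/tau_s)
```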

  11. Wideband pulse reconstruction from sparse spectral-amplitude data. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Casey, K.F.; Baertlein, B.A.

    1993-01-01

    Methods are investigated for reconstructing a wideband time-domain pulse waveform from a sparse set of samples of its frequency-domain amplitude spectrum. Approaches are outlined which comprise various means of spectrum interpolation followed by phase retrieval. Methods for phase retrieval are reviewed, and it is concluded that useful results can only be obtained by assuming a minimum-phase solution. Two reconstruction algorithms are proposed. The first is based upon the use of Cauchy's technique for estimating the amplitude spectrum in the form of a ratio of polynomials. The second uses B-spline interpolation among the sampled values to reconstruct this spectrum. Reconstruction of the time-domain waveform via inverse Fourier transformation follows, based on the assumption of minimum phase. Representative numerical results are given.
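The minimum-phase assumption makes the reconstruction concrete: given only an amplitude spectrum, the minimum-phase signal can be recovered via the real cepstrum. This is one standard route, sketched here under the assumption that the spectrum has already been interpolated onto a uniform grid (the report's Cauchy and B-spline interpolation steps are omitted).

```python
import numpy as np

# Minimum-phase signal recovery from an amplitude spectrum via the
# real cepstrum: take the log amplitude, fold the cepstrum onto its
# causal part, and exponentiate back to a spectrum with minimum phase.
def minimum_phase_signal(amplitude):
    n = len(amplitude)
    cep = np.fft.ifft(np.log(amplitude)).real   # real cepstrum
    w = np.zeros(n)
    w[0] = 1.0
    w[1:n // 2] = 2.0                           # fold: keep causal part
    if n % 2 == 0:
        w[n // 2] = 1.0
    return np.fft.ifft(np.exp(np.fft.fft(cep * w))).real

# A known minimum-phase pulse (its zero lies inside the unit circle):
x = np.zeros(64)
x[0], x[1] = 1.0, 0.5
rebuilt = minimum_phase_signal(np.abs(np.fft.fft(x)))
print(np.allclose(rebuilt, x, atol=1e-7))       # amplitude alone suffices
```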

  12. A new method for the reconstruction of micro- and nanoscale planar periodic structures.

    Science.gov (United States)

    Hu, Zhenxing; Xie, Huimin; Lu, Jian; Liu, Zhanwei; Wang, Qinghua

    2010-08-01

    In recent years, the micro- and nanoscale structures and materials are observed and characterized under microscopes with large magnification at the cost of small view field. In this paper, a new phase-shifting inverse geometry moiré method for the full-field reconstruction of micro- and nanoscale planar periodic structures is proposed. The random phase shift techniques are realized under the scanning types of microscopes. A simulation test and a practical verification experiment were performed, which demonstrate this method is feasible. As an application, the method was used to reconstruct the structure of a butterfly wing and a holographic grating. The results verify the reconstruction process is convenient. When being compared with the direct measurement method using point-by-point way, the method is very effective with a large view field. This method can be extended to reconstruct other planar periodic microstructures and to locate the defects in material possessing the regular lattice structure. Furthermore, it can be applied to evaluate the quality of micro- and nanoscale planar periodic structures under various high-power scanning microscopes.

  13. Estimation of central aortic pressure waveform features derived from the brachial cuff volume displacement waveform.

    Science.gov (United States)

    Butlin, Mark; Qasem, Ahmad; Avolio, Alberto P

    2012-01-01

    There is increasing interest in non-invasive estimation of central aortic waveform parameters in the clinical setting. However, controversy has arisen around radial-tonometry-based systems due to the requirement of a trained operator or lack of ease of use, especially in the clinical environment. A recently developed device utilizes a novel algorithm for brachial-cuff-based assessment of aortic pressure values and waveform (SphygmoCor XCEL, AtCor Medical). The cuff was inflated to 10 mmHg below an individual's diastolic blood pressure and the brachial volume displacement waveform recorded. The aortic waveform was derived using proprietary digital signal processing and a transfer function applied to the recorded waveform. The aortic waveform was also estimated using a validated technique (radial tonometry based assessment, SphygmoCor, AtCor Medical). Measurements were taken in triplicate with each device in 30 people (17 female) aged 22 to 79 years of age. An average for each device for each individual was calculated, and the results from the two devices were compared using regression and Bland-Altman analysis. A high correlation was found between the devices for measures of aortic systolic (R(2)=0.99) and diastolic (R(2)=0.98) pressure. Augmentation index and subendocardial viability ratio both had a between-device R(2) value of 0.82. The difference between devices for measured aortic systolic pressure was 0.5±1.8 mmHg, and for augmentation index, 1.8±7.0%. The brachial cuff based approach, with an individualized sub-diastolic cuff pressure, provides an operator independent method of assessing not only systolic pressure, but also aortic waveform features, comparable to existing validated tonometric-based methods.
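The Bland-Altman comparison reported above (e.g. 0.5±1.8 mmHg for aortic systolic pressure) computes the bias as the mean of paired differences and the limits of agreement as bias ± 1.96 standard deviations. A sketch with invented paired readings, not the study's data:

```python
import numpy as np

# Bland-Altman agreement analysis: bias is the mean paired difference;
# limits of agreement are bias +/- 1.96 SD. Readings are invented.
def bland_altman(a, b):
    diff = a - b
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

cuff = np.array([118.0, 126.0, 109.0, 133.0, 121.0])  # device A, mmHg
tono = np.array([117.0, 127.0, 108.0, 131.0, 121.0])  # device B, mmHg
bias, (lo, hi) = bland_altman(cuff, tono)
print(round(bias, 2))  # -> 0.6 mmHg mean difference
```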

  14. A new method for three-dimensional laparoscopic ultrasound model reconstruction

    DEFF Research Database (Denmark)

    Fristrup, C W; Pless, T; Durup, J;

    2004-01-01

    was to perform a volumetric test and a clinical feasibility test of a new 3D method using standard laparoscopic ultrasound equipment. METHODS: Three-dimensional models were reconstructed from a series of two-dimensional ultrasound images using either electromagnetic tracking or a new 3D method. The volumetric...... accuracy of the new method was tested ex vivo, and the clinical feasibility was tested on a small series of patients. RESULTS: Both electromagnetic tracked reconstructions and the new 3D method gave good volumetric information with no significant difference. Clinical use of the new 3D method showed...... accurate models comparable to findings at surgery and pathology. CONCLUSIONS: The use of the new 3D method is technically feasible, and its volumetrically, accurate compared to 3D with electromagnetic tracking....

  15. Low Rank Alternating Direction Method of Multipliers Reconstruction for MR Fingerprinting

    CERN Document Server

    Assländer, Jakob; Knoll, Florian; Sodickson, Daniel K; Hennig, Jürgen; Lattanzi, Riccardo

    2016-01-01

    Purpose The proposed reconstruction framework addresses the reconstruction accuracy, noise propagation and computation time for Magnetic Resonance Fingerprinting (MRF). Methods Based on a singular value decomposition (SVD) of the signal evolution, MRF is formulated as a low rank inverse problem in which one image is reconstructed for each singular value under consideration. This low rank approximation of the signal evolution reduces the computational burden by reducing the number of Fourier transformations. Also, the low rank approximation improves the conditioning of the problem, which is further improved by extending the low rank inverse problem to an augmented Lagrangian that is solved by the alternating direction method of multipliers (ADMM). The root mean square error and the noise propagation are analyzed in simulations. For verification, an in vivo example is provided. Results The proposed low rank ADMM approach shows a reduced root mean square error compared to the original fingerprinting reconstructi...
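The low-rank step this abstract describes, compressing the signal evolutions with a truncated SVD so that only a handful of singular components need be reconstructed, can be sketched on a toy dictionary. The exponential-decay atoms below stand in for a real MRF dictionary; the rank and error threshold are illustrative.

```python
import numpy as np

# Truncated-SVD compression of a dictionary of signal evolutions:
# a few singular components capture the dictionary, so matching and
# reconstruction can work in the compressed space. Toy exponential
# decays stand in for MRF signal evolutions.
rng = np.random.default_rng(0)
t = np.linspace(0.01, 3.0, 200)                 # 200 time points
T2 = rng.uniform(0.05, 2.0, 300)                # 300 dictionary atoms
D = np.exp(-t[None, :] / T2[:, None])           # dictionary (300 x 200)

U, s, Vt = np.linalg.svd(D, full_matrices=False)
k = 10
Dk = (U[:, :k] * s[:k]) @ Vt[:k, :]             # rank-k approximation

rel_err = np.linalg.norm(D - Dk) / np.linalg.norm(D)
print(rel_err < 1e-2)                           # a few components suffice
```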

  16. On sparse reconstructions in near-field acoustic holography using the method of superposition

    CERN Document Server

    Abusag, Nadia M

    2016-01-01

    The method of superposition is proposed in combination with a sparse $\\ell_1$ optimisation algorithm with the aim of finding a sparse basis to accurately reconstruct the structural vibrations of a radiating object from a set of acoustic pressure values on a conformal surface in the near-field. The nature of the reconstructions generated by the method differs fundamentally from those generated via standard Tikhonov regularisation in terms of the level of sparsity in the distribution of charge strengths specifying the basis. In many cases, the $\\ell_1$ optimisation leads to a solution basis whose size is only a small fraction of the total number of measured data points. The effects of changing the wavenumber, the internal source surface and the (noisy) acoustic pressure data in general will all be studied with reference to a numerical study on a cuboid of similar dimensions to a typical loudspeaker cabinet. The development of sparse and accurate reconstructions has a number of advantageous consequences includin...

  17. Reconstruction from Uniformly Attenuated SPECT Projection Data Using the DBH Method

    Energy Technology Data Exchange (ETDEWEB)

    Huang, Qiu; You, Jiangsheng; Zeng, Gengsheng L.; Gullberg, Grant T.

    2008-03-20

    An algorithm was developed for the two-dimensional (2D) reconstruction of truncated and non-truncated uniformly attenuated data acquired from single photon emission computed tomography (SPECT). The algorithm is able to reconstruct data from half-scan (180°) and short-scan (180° + fan angle) acquisitions for parallel- and fan-beam geometries, respectively, as well as data from full-scan (360°) acquisitions. The algorithm is a derivative, backprojection, and Hilbert transform (DBH) method, which involves the backprojection of differentiated projection data followed by an inversion of the finite weighted Hilbert transform. The kernel of the inverse weighted Hilbert transform is solved numerically using matrix inversion. Numerical simulations confirm that the DBH method provides accurate reconstructions from half-scan and short-scan data, even when there is truncation. However, as the attenuation increases, finer data sampling is required.

  18. Application of information theory methods to food web reconstruction

    Science.gov (United States)

    Moniz, L.J.; Cooch, E.G.; Ellner, S.P.; Nichols, J.D.; Nichols, J.M.

    2007-01-01

    In this paper we use information theory techniques on time series of abundances to determine the topology of a food web. At the outset, the food web participants (two consumers, two resources) are known; in addition we know that each consumer prefers one of the resources over the other. However, we do not know which consumer prefers which resource, and if this preference is absolute (i.e., whether or not the consumer will consume the non-preferred resource). Although the consumers and resources are identified at the beginning of the experiment, we also provide evidence that the consumers are not resources for each other, and the resources do not consume each other. We do show that there is significant mutual information between resources; the model is seasonally forced and some shared information between resources is expected. Similarly, because the model is seasonally forced, we expect shared information between consumers as they respond to the forcing of the resources. The model that we consider does include noise, and in an effort to demonstrate that these methods may be of some use in other than model data, we show the efficacy of our methods with decreasing time series size; in this particular case we obtain reasonably clear results with a time series length of 400 points. This approaches ecological time series lengths from real systems.
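The mutual-information machinery underlying this analysis can be sketched with a histogram estimator: I(X;Y) = Σ p(x,y) log[p(x,y)/(p(x)p(y))]. The coupled toy series below (with the 400-point length the abstract mentions) stand in for consumer/resource abundance data; bin count and forcing are illustrative.

```python
import numpy as np

# Histogram-based mutual information between two abundance series:
# estimate the joint distribution with a 2-D histogram, then apply
# I(X;Y) = sum p(x,y) * log(p(x,y) / (p(x) p(y))).
def mutual_information(x, y, bins=8):
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(0)
t = np.arange(400)                        # series length comparable to the paper's tests
resource = np.sin(2 * np.pi * t / 50) + 0.1 * rng.standard_normal(400)
consumer = np.sin(2 * np.pi * t / 50 + 0.5) + 0.1 * rng.standard_normal(400)
noise = rng.standard_normal(400)          # an uncoupled series
print(mutual_information(resource, consumer) > mutual_information(resource, noise))
```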

  19. A comparative study of interface reconstruction methods for multi-material ALE simulations

    Energy Technology Data Exchange (ETDEWEB)

    Kucharik, Milan [Los Alamos National Laboratory; Garimalla, Rao [Los Alamos National Laboratory; Schofield, Samuel [Los Alamos National Laboratory; Shashkov, Mikhail [Los Alamos National Laboratory

    2009-01-01

    In this paper we compare the performance of different methods for reconstructing interfaces in multi-material compressible flow simulations. The methods compared are a material-order-dependent Volume-of-Fluid (VOF) method, a material-order-independent VOF method based on power diagram partitioning of cells, and the Moment-of-Fluid (MOF) method. We demonstrate that the MOF method provides the most accurate tracking of interfaces, followed by the VOF method with the right material ordering. The material-order-independent VOF method performs somewhat worse than the above two, while the solutions with VOF using the wrong material order are considerably worse.

  20. Incomplete Phase Space Reconstruction Method Based on Subspace Adaptive Evolution Approximation

    Directory of Open Access Journals (Sweden)

    Tai-fu Li

    2013-01-01

    Full Text Available A chaotic time series can be expanded into multidimensional space by phase space reconstruction, in order to recover the dynamic characteristics of the original system. It is difficult to obtain a complete phase space for a chaotic time series because of the inconsistency of phase space reconstruction. This paper presents an idea of subspace approximation. Chaotic time series prediction based on phase space reconstruction can be considered a subspace approximation problem in different neighborhoods at different times. A common static neural network approximation is suitable for a trained neighborhood, but its generalization performance in other, untrained neighborhoods cannot be ensured. The subspace approximation of a neural network based on the nonlinear extended Kalman filter (EKF) is a dynamic evolution approximation from one neighborhood to another. Therefore, in view of the incomplete phase space resulting from chaotic phase space reconstruction, we put forward a subspace adaptive evolution approximation method based on nonlinear Kalman filtering. The method is verified by multiple sets of wind speed prediction experiments in Wulong city, and the results demonstrate that it possesses higher chaotic prediction accuracy.
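
The phase-space expansion referred to above is typically a Takens-style delay embedding; a minimal sketch, with the embedding dimension and delay as assumptions to be tuned per dataset:

```python
import numpy as np

def delay_embed(x, dim=3, tau=1):
    # Takens-style delay embedding: map a scalar series into vectors
    # [x(t), x(t+tau), ..., x(t+(dim-1)*tau)], one row per time point.
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])
```

Each row is one reconstructed phase-space point; a predictor (here, the EKF-evolved network) then operates on neighborhoods of these points.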

  1. Research on the reconstruction method of porous media using multiple-point geostatistics

    Institute of Scientific and Technical Information of China (English)

    2010-01-01

    Pore structural characteristics are key to studies of the mechanisms of fluid flow in porous media. With the development of experimental technology, modern high-resolution equipment is capable of capturing pore structure images with a resolution of microns. But so far, only 3D volume data of millimeter-scale rock samples can be obtained losslessly. It is therefore necessary to explore ways of virtually reconstructing larger-volume digital samples of porous media that retain the representative structural characteristics of the pore space. This paper proposes a reconstruction method for porous media using the structural characteristics captured by the data templates of multiple-point geostatistics. In this method, the probability of each structural characteristic of a pore space is acquired first, and then these characteristics are reproduced according to the probabilities so that the reconstructed images present the real structural characteristics. Our experimental results show that: (i) the deviation between the LBM-computed permeability of the virtually reconstructed sandstone and that of the original sample is less than 1.2%; (ii) the reconstructed sandstone and the original sample have similar structural characteristics, as demonstrated by their variogram curves.
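
The first stage of the method, estimating the probability of each structural pattern seen through a data template, can be sketched for a 2-D binary image. The 2×2 template and raster scan are illustrative assumptions; a full multiple-point geostatistics implementation adds the pattern-reproduction stage:

```python
import numpy as np
from collections import Counter

def pattern_probabilities(img, n=2):
    # Scan a binary training image with an n-by-n data template and
    # record the empirical probability of each observed pattern.
    counts = Counter()
    for r in range(img.shape[0] - n + 1):
        for c in range(img.shape[1] - n + 1):
            counts[tuple(img[r:r + n, c:c + n].ravel())] += 1
    total = sum(counts.values())
    return {pattern: k / total for pattern, k in counts.items()}
```

The reconstruction stage would then draw patterns according to these probabilities (conditioned on already-placed values) so that the synthetic sample reproduces the training statistics.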

  2. An improved schlieren method for measurement and automatic reconstruction of the far-field focal spot

    Science.gov (United States)

    Wang, Zhengzhou; Hu, Bingliang; Yin, Qinye

    2017-01-01

    The schlieren method of measuring far-field focal spots offers many advantages at the Shenguang III laser facility such as low cost and automatic laser-path collimation. However, current methods of far-field focal spot measurement often suffer from low precision and efficiency when the final focal spot is merged manually, thereby reducing the accuracy of reconstruction. In this paper, we introduce an improved schlieren method to construct the high dynamic-range image of far-field focal spots and improve the reconstruction accuracy and efficiency. First, a detection method based on weak light beam sampling and magnification imaging was designed; images of the main and side lobes of the focused laser irradiance in the far field were obtained using two scientific CCD cameras. Second, using a self-correlation template matching algorithm, a circle the same size as the schlieren ball was dug from the main lobe cutting image and used to change the relative region of the main lobe cutting image within a 100×100 pixel region. The position that had the largest correlation coefficient between the side lobe cutting image and the main lobe cutting image when a circle was dug was identified as the best matching point. Finally, the least squares method was used to fit the center of the side lobe schlieren small ball, and the error was less than 1 pixel. The experimental results show that this method enables the accurate, high-dynamic-range measurement of a far-field focal spot and automatic image reconstruction. Because the best matching point is obtained through image processing rather than traditional reconstruction methods based on manual splicing, this method is less sensitive to the efficiency of focal-spot reconstruction and thus offers better experimental precision. PMID:28207758
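
The template-matching step can be illustrated with an exhaustive normalized cross-correlation search; this is a generic sketch, and the facility's actual algorithm may differ in windowing and search region:

```python
import numpy as np

def best_match(image, template):
    # Exhaustive normalized cross-correlation: return the top-left
    # offset where the template correlates best with the image,
    # together with the correlation coefficient at that offset.
    ih, iw = image.shape
    th, tw = template.shape
    t = template - template.mean()
    tn = np.sqrt((t ** 2).sum())
    best, best_pos = -np.inf, (0, 0)
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            patch = image[r:r + th, c:c + tw]
            p = patch - patch.mean()
            denom = np.sqrt((p ** 2).sum()) * tn
            score = (p * t).sum() / denom if denom > 0 else 0.0
            if score > best:
                best, best_pos = score, (r, c)
    return best_pos, best
```

An exact sub-image yields a coefficient of 1; in the paper the matching position with the largest coefficient anchors the merge of the main- and side-lobe images before sub-pixel center fitting.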

  3. Hepatic CT perfusion measurements: A feasibility study for radiation dose reduction using new image reconstruction method

    Energy Technology Data Exchange (ETDEWEB)

    Negi, Noriyuki, E-mail: noriyuki@med.kobe-u.ac.jp [Division of Radiology, Kobe University Hospital, 7-5-2 Kusunokicho, Chuoku, Kobe 650-0017 (Japan); Yoshikawa, Takeshi, E-mail: yoshikawa0816@aol.com [Department of Radiology, Kobe University Graduate School of Medicine, 7-5-2 Kusunokicho, Chuoku, Kobe 650-0017 (Japan); Ohno, Yoshiharu, E-mail: yosirad@kobe-u.ac.jp [Division of Radiology, Kobe University Hospital, 7-5-2 Kusunokicho, Chuoku, Kobe 650-0017 (Japan); Department of Radiology, Kobe University Graduate School of Medicine, 7-5-2 Kusunokicho, Chuoku, Kobe 650-0017 (Japan); Somiya, Yuichiro, E-mail: somiya13@med.kobe-u.ac.jp [Division of Radiology, Kobe University Hospital, 7-5-2 Kusunokicho, Chuoku, Kobe 650-0017 (Japan); Sekitani, Toshinori, E-mail: atieinks-toshi@nifty.com [Department of Radiology, Kobe University Graduate School of Medicine, 7-5-2 Kusunokicho, Chuoku, Kobe 650-0017 (Japan); Sugihara, Naoki, E-mail: naoki.sugihara@toshiba.co.jp [Toshiba Medical Systems Co., 1385 Shimoishigami, Otawara 324-0036 (Japan); Koyama, Hisanobu, E-mail: hkoyama@med.kobe-u.ac.jp [Department of Radiology, Kobe University Graduate School of Medicine, 7-5-2 Kusunokicho, Chuoku, Kobe 650-0017 (Japan); Kanda, Tomonori, E-mail: k_a@hotmail.co.jp [Department of Radiology, Kobe University Graduate School of Medicine, 7-5-2 Kusunokicho, Chuoku, Kobe 650-0017 (Japan); Kanata, Naoki, E-mail: takikina12345@yahoo.co.jp [Department of Radiology, Kobe University Graduate School of Medicine, 7-5-2 Kusunokicho, Chuoku, Kobe 650-0017 (Japan); Murakami, Tohru, E-mail: mura@med.kobe-u.ac.jp [Division of Radiology, Kobe University Hospital, 7-5-2 Kusunokicho, Chuoku, Kobe 650-0017 (Japan); Kawamitsu, Hideaki, E-mail: kawamitu@med.kobe-u.ac.jp [Division of Radiology, Kobe University Hospital, 7-5-2 Kusunokicho, Chuoku, Kobe 650-0017 (Japan); Sugimura, Kazuro, E-mail: sugimura@med.kobe-u.ac.jp [Department of Radiology, Kobe University Graduate School of Medicine, 7-5-2 Kusunokicho, 
Chuoku, Kobe 650-0017 (Japan)

    2012-11-15

    Objectives: To assess the effects of image reconstruction method on hepatic CT perfusion (CTP) values using two CT protocols with different radiation doses. Materials and methods: Sixty patients underwent hepatic CTP and were randomly divided into two groups. Tube currents of 210 or 250 mA were used for the standard dose group and 120 or 140 mA for the low dose group. The higher currents were selected for large patients. Demographic features of the groups were compared. CT images were reconstructed by using filtered back projection (FBP), image filter (quantum de-noising, QDS), and adaptive iterative dose reduction (AIDR). Hepatic arterial and portal perfusion (HAP and HPP, ml/min/100 ml) and arterial perfusion fraction (APF, %) were calculated using the dual-input maximum slope method. ROIs were placed on each hepatic segment. Perfusion and Hounsfield unit (HU) values, and image noise (standard deviation of HU values, SD) were measured and compared between the groups and among the methods. Results: There were no significant differences in the demographic features of the groups, nor were there any significant differences in mean perfusion and HU values for either the groups or the image reconstruction methods. Mean SDs of each of the image reconstruction methods were significantly lower (p < 0.0001) for the standard dose group than for the low dose group, while mean SDs for AIDR were significantly lower than those for FBP for both groups (p = 0.0006 and 0.013). Radiation dose reductions were approximately 45%. Conclusions: Image reconstruction method did not affect hepatic perfusion values calculated by the dual-input maximum slope method with or without radiation dose reductions. AIDR significantly reduced image noise.
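
The dual-input maximum slope calculation can be sketched as below. The synthetic curve shapes, the use of the splenic peak to split the arterial and portal phases, and the unit conversion are standard assumptions of the method, not values from this study:

```python
import numpy as np

def dual_input_max_slope(t, liver, aorta, portal, spleen):
    # Arterial perfusion (HAP): steepest liver enhancement slope before
    # the splenic peak, normalized by peak aortic enhancement.
    # Portal perfusion (HPP): steepest slope after the splenic peak,
    # normalized by peak portal enhancement.
    dt = t[1] - t[0]
    slope = np.gradient(liver, dt)       # HU per second
    k = int(np.argmax(spleen))           # splenic peak splits the phases
    hap = slope[:k].max() / aorta.max() * 100.0 * 60.0   # ml/min/100 ml
    hpp = slope[k:].max() / portal.max() * 100.0 * 60.0
    apf = 100.0 * hap / (hap + hpp)      # arterial perfusion fraction, %
    return hap, hpp, apf
```

The factor 100 × 60 converts the per-second ratio into ml/min/100 ml, matching the units quoted in the abstract.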

  4. Analysis of dental root apical morphology: a new method for dietary reconstructions in primates.

    Science.gov (United States)

    Hamon, Noémie; Emonet, Edouard-Georges; Chaimanee, Yaowalak; Guy, Franck; Tafforeau, Paul; Jaeger, Jean-Jacques

    2012-06-01

    The reconstruction of paleo-diets is an important task in the study of fossil primates. Previously, paleo-diet reconstructions were performed using different methods based on extant primate models. In particular, dental microwear or isotopic analyses provided accurate reconstructions for some fossil primates. However, it is sometimes difficult or impossible to apply these methods to fossil material. Therefore, the development of new, independent methods of diet reconstruction is crucial to improve our knowledge of primate paleobiology and paleoecology. This study aims to investigate the correlation between tooth root apical morphology and diet in primates, and its potential for paleo-diet reconstructions. Dental roots are composed of two portions: the eruptive portion with a smooth and regular surface, and the apical penetrative portion which displays an irregular and corrugated surface. Here, the angle formed by these two portions (aPE), and the ratio of penetrative portion over total root length (PPI), are calculated for each mandibular tooth root. A strong correlation between these two variables and the proportion of some food types (fruits, leaves, seeds, animal matter, and vertebrates) in diet is found, allowing the use of tooth root apical morphology as a tool for dietary reconstructions in primates. The method was then applied to the fossil hominoid Khoratpithecus piriyai, from the Late Miocene of Thailand. The paleo-diet deduced from aPE and PPI is dominated by fruits (>50%), associated with animal matter (1-25%). Leaves, vertebrates and most probably seeds were excluded from the diet of Khoratpithecus, which is consistent with previous studies.

  5. Optical tomography reconstruction algorithm with the finite element method: An optimal approach with regularization tools

    Energy Technology Data Exchange (ETDEWEB)

    Balima, O., E-mail: ofbalima@gmail.com [Département des Sciences Appliquées, Université du Québec à Chicoutimi, 555 bd de l’Université, Chicoutimi, QC, Canada G7H 2B1 (Canada); Favennec, Y. [LTN UMR CNRS 6607 – Polytech’ Nantes – La Chantrerie, Rue Christian Pauc, BP 50609 44 306 Nantes Cedex 3 (France); Rousse, D. [Chaire de recherche industrielle en technologies de l’énergie et en efficacité énergétique (t3e), École de technologie supérieure, 201 Boul. Mgr, Bourget Lévis, QC, Canada G6V 6Z3 (Canada)

    2013-10-15

    Highlights: •New strategies to improve the accuracy of the reconstruction through mesh and finite element parameterization. •Use of gradient filtering through an alternative inner product within the adjoint method. •An integral form of the cost function is used to make the reconstruction compatible with all finite element formulations, continuous and discontinuous. •Gradient-based algorithm with the adjoint method is used for the reconstruction. -- Abstract: Optical tomography is mathematically treated as a non-linear inverse problem where the optical properties of the probed medium are recovered through the minimization of the errors between the experimental measurements and their predictions with a numerical model at the locations of the detectors. Owing to the ill-posed nature of the inverse problem, regularization must be employed, and Tikhonov penalization is the type most commonly used in optical tomography applications. This paper introduces an optimized approach for optical tomography reconstruction with the finite element method. An integral form of the cost function is used to take into account the surfaces of the detectors and to make the reconstruction compatible with all finite element formulations, continuous and discontinuous. Through a gradient-based algorithm in which the adjoint method is used to compute the gradient of the cost function, an alternative inner product is employed for preconditioning the reconstruction algorithm. Moreover, an appropriate re-parameterization of the optical properties is performed. These regularization strategies are compared with the classical Tikhonov penalization. It is shown that both the re-parameterization and the use of the Sobolev cost function gradient are efficient for solving such an ill-posed inverse problem.

  6. A continuous surface reconstruction method on point cloud captured from a 3D surface photogrammetry system

    Science.gov (United States)

    Liu, Wenyang; Cheung, Yam; Sabouri, Pouya; Arai, Tatsuya J.; Sawant, Amit; Ruan, Dan

    2015-01-01

    Purpose: To accurately and efficiently reconstruct a continuous surface from noisy point clouds captured by a surface photogrammetry system (VisionRT). Methods: The authors have developed a level-set based surface reconstruction method on point clouds captured by a surface photogrammetry system (VisionRT). The proposed method reconstructs an implicit and continuous representation of the underlying patient surface by optimizing a regularized fitting energy, offering extra robustness to noise and missing measurements. In contrast to explicit/discrete meshing-type schemes, their continuous representation is particularly advantageous for subsequent surface registration and motion tracking by eliminating the need for maintaining explicit point correspondences as in discrete models. The authors solve the proposed method with an efficient narrowband evolving scheme. The authors evaluated the proposed method on both phantom and human subject data with two sets of complementary experiments. In the first set of experiments, the authors generated a series of surfaces, each with different black patches placed on one chest phantom. The resulting VisionRT measurements from the patched area had different degrees of noise and missing data, since VisionRT has difficulties in detecting dark surfaces. The authors applied the proposed method to point clouds acquired under these different configurations, and quantitatively evaluated reconstructed surfaces by comparing against a high-quality reference surface with respect to root mean squared error (RMSE). In the second set of experiments, the authors applied their method to 100 clinical point clouds acquired from one human subject. In the absence of ground-truth, the authors qualitatively validated reconstructed surfaces by comparing the local geometry, specifically mean curvature distributions, against that of the surface extracted from a high-quality CT obtained from the same patient. Results: On phantom point clouds, their method

  7. A continuous surface reconstruction method on point cloud captured from a 3D surface photogrammetry system

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Wenyang [Department of Bioengineering, University of California, Los Angeles, California 90095 (United States); Cheung, Yam; Sabouri, Pouya; Arai, Tatsuya J.; Sawant, Amit [Department of Radiation Oncology, University of Texas Southwestern, Dallas, Texas 75390 (United States); Ruan, Dan, E-mail: druan@mednet.ucla.edu [Department of Bioengineering, University of California, Los Angeles, California 90095 and Department of Radiation Oncology, University of California, Los Angeles, California 90095 (United States)

    2015-11-15

    Purpose: To accurately and efficiently reconstruct a continuous surface from noisy point clouds captured by a surface photogrammetry system (VisionRT). Methods: The authors have developed a level-set based surface reconstruction method on point clouds captured by a surface photogrammetry system (VisionRT). The proposed method reconstructs an implicit and continuous representation of the underlying patient surface by optimizing a regularized fitting energy, offering extra robustness to noise and missing measurements. In contrast to explicit/discrete meshing-type schemes, their continuous representation is particularly advantageous for subsequent surface registration and motion tracking by eliminating the need for maintaining explicit point correspondences as in discrete models. The authors solve the proposed method with an efficient narrowband evolving scheme. The authors evaluated the proposed method on both phantom and human subject data with two sets of complementary experiments. In the first set of experiments, the authors generated a series of surfaces, each with different black patches placed on one chest phantom. The resulting VisionRT measurements from the patched area had different degrees of noise and missing data, since VisionRT has difficulties in detecting dark surfaces. The authors applied the proposed method to point clouds acquired under these different configurations, and quantitatively evaluated reconstructed surfaces by comparing against a high-quality reference surface with respect to root mean squared error (RMSE). In the second set of experiments, the authors applied their method to 100 clinical point clouds acquired from one human subject. In the absence of ground-truth, the authors qualitatively validated reconstructed surfaces by comparing the local geometry, specifically mean curvature distributions, against that of the surface extracted from a high-quality CT obtained from the same patient. Results: On phantom point clouds, their method

  8. Spectrum reconstruction method based on the detector response model calibrated by x-ray fluorescence

    Science.gov (United States)

    Li, Ruizhe; Li, Liang; Chen, Zhiqiang

    2017-02-01

    Accurate estimation of distortion-free spectra is important but difficult in various applications, especially for spectral computed tomography. Two key problems must be solved to reconstruct the incident spectrum. One is the acquisition of the detector energy response. It can be calculated by Monte Carlo simulation, which requires detailed modeling of the detector system and a high computational power. It can also be acquired by establishing a parametric response model and be calibrated using monochromatic x-ray sources, such as synchrotron sources or radioactive isotopes. However, these monochromatic sources are difficult to obtain. Inspired by x-ray fluorescence (XRF) spectrum modeling, we propose a feasible method to obtain the detector energy response based on an optimized parametric model for CdZnTe or CdTe detectors. The other key problem is the reconstruction of the incident spectrum with the detector response. Directly obtaining an accurate solution from noisy data is difficult because the reconstruction problem is severely ill-posed. Different from the existing spectrum stripping method, a maximum likelihood-expectation maximization iterative algorithm is developed based on the Poisson noise model of the system. Simulation and experiment results show that our method is effective for spectrum reconstruction and markedly increases the accuracy of XRF spectra compared with the spectrum stripping method. The applicability of the proposed method is discussed, and promising results are presented.
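
The maximum likelihood-expectation maximization iteration for the Poisson model m ~ Poisson(R s) can be sketched as follows; the update rule is the textbook MLEM one, and the response matrix in the test is a toy stand-in for a calibrated CdZnTe/CdTe detector response:

```python
import numpy as np

def mlem(R, m, n_iter=500):
    # MLEM for Poisson data: measurements m ~ Poisson(R @ s).
    # Multiplicative update preserves non-negativity of the spectrum s.
    m = np.asarray(m, float)
    s = np.ones(R.shape[1])
    sens = R.sum(axis=0)                 # column sensitivities
    for _ in range(n_iter):
        proj = R @ s                     # forward projection
        ratio = np.where(proj > 0, m / proj, 0.0)
        s = s * (R.T @ ratio) / sens     # EM update
    return s
```

Unlike spectrum stripping, which back-substitutes channel by channel, this update uses the full response matrix at every step, which is what makes it robust to noise.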

  9. Pantograph: A template-based method for genome-scale metabolic model reconstruction.

    Science.gov (United States)

    Loira, Nicolas; Zhukova, Anna; Sherman, David James

    2015-04-01

    Genome-scale metabolic models are a powerful tool to study the inner workings of biological systems and to guide applications. The advent of cheap sequencing has brought the opportunity to create metabolic maps of biotechnologically interesting organisms. While this drives the development of new methods and automatic tools, network reconstruction remains a time-consuming process where extensive manual curation is required. This curation introduces specific knowledge about the modeled organism, either explicitly in the form of molecular processes, or indirectly in the form of annotations of the model elements. Paradoxically, this knowledge is usually lost when reconstruction of a different organism is started. We introduce the Pantograph method for metabolic model reconstruction. This method combines a template reaction knowledge base, orthology mappings between two organisms, and experimental phenotypic evidence, to build a genome-scale metabolic model for a target organism. Our method infers implicit knowledge from annotations in the template, and rewrites these inferences to include them in the resulting model of the target organism. The generated model is well suited for manual curation. Scripts for evaluating the model with respect to experimental data are automatically generated, to aid curators in iterative improvement. We present an implementation of the Pantograph method, as a toolbox for genome-scale model reconstruction, curation and validation. This open source package can be obtained from: http://pathtastic.gforge.inria.fr.

  10. A Method to Reconstruct the Solar-Induced Canopy Fluorescence Spectrum from Hyperspectral Measurements

    Directory of Open Access Journals (Sweden)

    Feng Zhao

    2014-10-01

    Full Text Available A method for canopy Fluorescence Spectrum Reconstruction (FSR is proposed in this study, which can be used to retrieve the solar-induced canopy fluorescence spectrum over the whole chlorophyll fluorescence emission region from 640–850 nm. Firstly, the radiance of the solar-induced chlorophyll fluorescence (Fs at five absorption lines of the solar spectrum was retrieved by a Spectral Fitting Method (SFM. The Singular Value Decomposition (SVD technique was then used to extract three basis spectra from a training dataset simulated by the model SCOPE (Soil Canopy Observation, Photochemistry and Energy fluxes. Finally, these basis spectra were linearly combined to reconstruct the Fs spectrum, and their coefficients were determined by Weighted Linear Least Squares (WLLS fitting with the five retrieved Fs values. Results for simulated datasets indicate that the FSR method could accurately reconstruct the Fs spectra from hyperspectral measurements acquired by instruments of high Spectral Resolution (SR and Signal to Noise Ratio (SNR. The FSR method was also applied to an experimental dataset acquired in a diurnal experiment. The diurnal change of the reconstructed Fs spectra shows that the Fs radiance around noon was higher than that in the morning and afternoon, which is consistent with former studies. Finally, the potential and limitations of this method are discussed.
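
The reconstruction pipeline (SVD basis extraction from a training set, then a weighted linear least squares fit to the few retrieved Fs values) can be sketched as below; the synthetic training spectra in the test stand in for SCOPE simulations, and the sampling indices are assumptions:

```python
import numpy as np

def build_basis(training, k=3):
    # Basis spectra: the first k left singular vectors of a training
    # matrix whose columns are simulated fluorescence spectra.
    U, _, _ = np.linalg.svd(training, full_matrices=False)
    return U[:, :k]

def reconstruct_spectrum(basis, idx, values, weights=None):
    # Weighted linear least squares: fit the basis coefficients to the
    # few retrieved Fs values, then expand to the full wavelength grid.
    A = basis[idx, :]
    w = np.ones(len(idx)) if weights is None else np.asarray(weights)
    N = A.T @ (w[:, None] * A)
    coef = np.linalg.solve(N, A.T @ (w * values))
    return basis @ coef
```

With three basis spectra and five sampled absorption lines the fit is overdetermined, so measurement noise at individual lines is averaged out rather than propagated directly.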

  11. Reconstruction of the sound field above a reflecting plane using the equivalent source method

    Science.gov (United States)

    Bi, Chuan-Xing; Jing, Wen-Qian; Zhang, Yong-Bin; Lin, Wang-Lin

    2017-01-01

    In practical situations, vibrating objects are usually located above a reflecting plane instead of radiating into a free field. The conventional nearfield acoustic holography (NAH) sometimes fails to identify sound sources under such situations. This paper develops two kinds of equivalent source method (ESM)-based half-space NAH to reconstruct the sound field above a reflecting plane. In the first kind of method, the half-space Green's function is introduced into the ESM-based NAH, and the sound field is reconstructed based on the condition that the surface impedance of the reflecting plane is known a priori. The second kind of method regards the reflections as being radiated by equivalent sources placed under the reflecting plane, and the sound field is reconstructed by matching the pressure on the hologram surface with the equivalent sources distributed within the vibrating object and those substituting for reflections. Thus, this kind of method is independent of the surface impedance of the reflecting plane. Numerical simulations and experiments demonstrate the feasibility of these two kinds of methods for reconstructing the sound field above a reflecting plane.
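
The first kind of method relies on a half-space Green's function; for a plane characterized by a simple reflection coefficient R it can be sketched with an image source (a rigid plane corresponds to R = 1; the general impedance case needs an extra correction term, omitted here):

```python
import numpy as np

def halfspace_green(src, rcv, k, R=1.0):
    # Free-field Green's function plus an image source mirrored in the
    # z = 0 reflecting plane, weighted by the reflection coefficient R.
    src = np.asarray(src, float)
    rcv = np.asarray(rcv, float)
    img = src * np.array([1.0, 1.0, -1.0])   # mirror source below plane
    r1 = np.linalg.norm(rcv - src)
    r2 = np.linalg.norm(rcv - img)
    return (np.exp(1j * k * r1) / (4 * np.pi * r1)
            + R * np.exp(1j * k * r2) / (4 * np.pi * r2))
```

For a rigid plane (R = 1) the field is symmetric about z = 0, which is a quick sanity check on the image construction.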

  12. A new optimization approach for source-encoding full-waveform inversion

    NARCIS (Netherlands)

    Moghaddam, P.P.; Keers, H.; Herrmann, F.J.; Mulder, W.A.

    2013-01-01

    Waveform inversion is the method of choice for determining a highly heterogeneous subsurface structure. However, conventional waveform inversion requires that the wavefield for each source is computed separately. This makes it very expensive for realistic 3D seismic surveys. Source-encoding waveform
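
The source-encoding idea can be sketched as a random-polarity combination of shot records into one super-shot, so that a single wavefield simulation covers all sources; this is a generic illustration of simultaneous-source encoding, not the authors' specific scheme:

```python
import numpy as np

def encode_sources(sources, rng):
    # Random-polarity encoding: weight each shot record by +/-1 and sum
    # into one "super-shot". Cross-talk between shots averages out over
    # repeated random draws of the signs.
    signs = rng.choice([-1.0, 1.0], size=sources.shape[0])
    return signs, (signs[:, None] * sources).sum(axis=0)
```

Because the signs square to one, re-applying the same polarities to the individual records recovers their plain sum, which is the property that keeps the encoded misfit unbiased in expectation.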

  14. Anatomic and histological characteristics of vagina reconstructed by McIndoe method

    Directory of Open Access Journals (Sweden)

    Kozarski Jefta

    2009-01-01

    Full Text Available Background/Aim. Congenital absence of the vagina has been known since ancient Greek times. According to the literature, its incidence is 1/4 000 to 1/20 000. Treatment of this anomaly includes non-operative and operative procedures. The McIndoe procedure uses a split skin graft by Thiersch. The aim of this study was to establish the anatomic and histological characteristics of the vagina reconstructed by the McIndoe method in Mayer-Rokitansky-Küster-Hauser (MRKH) syndrome and to compare them with the normal vagina. Methods. The study included 21 patients aged 18 years or more with the congenital anomaly known as vaginal aplasia within MRKH syndrome. The patients were operated on by a plastic surgeon using the McIndoe method. The study was a retrospective review of data from the disease histories, objective and gynecological examinations, and cytological analysis of native preparations of vaginal smears (Papanicolaou). Comparatively, 21 females aged 18 years or more with normal vaginas were also studied. All the subjects were divided into groups R (reconstructed) and C (control) and into subgroups by age: up to 30 years (1R, 1C), from 30 to 50 (2R, 2C), and over 50 (3R, 3C). Statistical data processing was performed using Student's t-test and the Mann-Whitney U-test. A value of p < 0.05 was considered statistically significant. Results. The results show that there are differences in the depth and width of the reconstructed vagina, but the obtained values are still in the range of normal ones. Cytological differences between a reconstructed and the normal vagina were found. Conclusion. A reconstructed vagina is smaller than the normal one regarding depth and width, but within the range of normal values. A split skin graft used in the reconstruction keeps its own cytological, i.e. histological, and thus biological characteristics.

  15. Algorithms and software for total variation image reconstruction via first-order methods

    DEFF Research Database (Denmark)

    Dahl, Joahim; Hansen, Per Christian; Jensen, Søren Holdt

    2010-01-01

    This paper describes new algorithms and related software for total variation (TV) image reconstruction, more specifically: denoising, inpainting, and deblurring. The algorithms are based on one of Nesterov's first-order methods, tailored to the image processing applications in such a way that...
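
A minimal first-order scheme for the 1-D TV denoising model is sketched below: plain gradient descent on a smoothed TV objective. Nesterov's accelerated method used in the paper converges faster; the smoothing parameter and step size here are assumptions chosen for stability:

```python
import numpy as np

def tv_denoise(y, lam=0.5, n_iter=500, eps=1e-2, step=0.05):
    # Minimize 0.5*||x - y||^2 + lam * sum_i sqrt((x_{i+1}-x_i)^2 + eps)
    # by gradient descent on the smoothed total-variation objective.
    x = y.astype(float).copy()
    for _ in range(n_iter):
        d = np.diff(x)
        g = d / np.sqrt(d * d + eps)     # derivative of the smoothed |.|
        grad_tv = np.concatenate(([-g[0]], g[:-1] - g[1:], [g[-1]]))
        x -= step * ((x - y) + lam * grad_tv)
    return x
```

The step size must stay below 2 divided by the Lipschitz constant of the gradient, which grows as the smoothing eps shrinks; first-order methods with acceleration relax this trade-off considerably.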

  16. 3D Ultrasound Reconstruction of Spinal Images using an Improved Olympic Hole-Filling Method

    NARCIS (Netherlands)

    Dewi, D.E.O.; Wilkinson, M.H.F.; Mengko, T.L.R.; Purnama, I.K.E.; Ooijen, P.M.A. van; Veldhuizen, A.G.; Maurits, N.M.; Verkerke, G.J.

    2009-01-01

    We propose a new Hole-filling algorithm by improving the Olympic operator, and we also apply it to generate the volume in our freehand 3D ultrasound reconstruction of the spine. First, the ultrasound frames and position information are compounded into a 3D volume using the Bin-filling method. Then,
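
The Olympic operator at the heart of the hole-filling step averages a voxel's filled neighbours after discarding the extremes, as in Olympic scoring. A minimal 3-D sketch, with the 3×3×3 neighbourhood and trim count as assumptions (the paper's improved operator adds further refinements):

```python
import numpy as np

def olympic_fill(volume, trim=1):
    # Fill empty voxels (NaN) with the "Olympic" average of their
    # filled 3x3x3 neighbours: drop the `trim` lowest and highest
    # values, then average the remainder.
    out = volume.copy()
    for x, y, z in np.argwhere(np.isnan(volume)):
        nb = volume[max(x - 1, 0):x + 2,
                    max(y - 1, 0):y + 2,
                    max(z - 1, 0):z + 2]
        vals = np.sort(nb[~np.isnan(nb)].ravel())
        if len(vals) > 2 * trim:
            vals = vals[trim:len(vals) - trim]
        if len(vals):
            out[x, y, z] = vals.mean()
    return out
```

Trimming the extremes makes the fill robust to speckle outliers in the compounded ultrasound volume, which a plain neighbourhood mean would smear into the hole.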

  17. Performance Evaluation of Super-Resolution Reconstruction Methods on Real-World Data

    NARCIS (Netherlands)

    Eekeren, A.W.M. van; Schutte, K.; Oudegeest, O.R.; Vliet, L.J. van

    2007-01-01

    The performance of a super-resolution (SR) reconstruction method on real-world data is not easy to measure, especially as a ground-truth (GT) is often not available. In this paper, a quantitative performance measure is used, based on triangle orientation discrimination (TOD). The TOD measure, simula

  18. Reconstruction of nonstationary sound fields based on the time domain plane wave superposition method.

    Science.gov (United States)

    Zhang, Xiao-Zheng; Thomas, Jean-Hugh; Bi, Chuan-Xing; Pascal, Jean-Claude

    2012-10-01

    A time-domain plane wave superposition method is proposed to reconstruct nonstationary sound fields. In this method, the sound field is expressed as a superposition of time convolutions between the estimated time-wavenumber spectrum of the sound pressure on a virtual source plane and the time-domain propagation kernel at each wavenumber. By discretizing the time convolutions directly, the reconstruction can be carried out iteratively in the time domain, thus providing the advantage of continuously reconstructing time-dependent pressure signals. In the reconstruction process, Tikhonov regularization is introduced at each time step to obtain a relevant estimate of the time-wavenumber spectrum on the virtual source plane. Because the double infinite integral of the two-dimensional spatial Fourier transform is discretized directly in the wavenumber domain, the proposed method does not require the two-dimensional spatial fast Fourier transform that is generally used in time domain holography and real-time near-field acoustic holography; it therefore avoids, in theory, some errors associated with that transform and makes it possible to use an irregular microphone array. The feasibility of the proposed method is demonstrated by numerical simulations and an experiment with two speakers.
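
The Tikhonov step applied at each time sample amounts to a regularized linear solve; a minimal sketch, in which the propagation matrix G is a generic stand-in for the discretized time-wavenumber kernel and alpha is the regularization parameter:

```python
import numpy as np

def tikhonov_solve(G, p, alpha):
    # Tikhonov-regularized estimate of the source-plane spectrum s from
    # hologram pressures p = G s + noise, applied at each time step:
    #   s = (G^H G + alpha I)^-1 G^H p
    n = G.shape[1]
    return np.linalg.solve(G.conj().T @ G + alpha * np.eye(n),
                           G.conj().T @ p)
```

For noise-free data a small alpha recovers the spectrum almost exactly; with measurement noise, alpha trades reconstruction bias against noise amplification and is typically re-chosen at each time step.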

  19. A FIB-nanotomography method for accurate 3D reconstruction of open nanoporous structures

    Energy Technology Data Exchange (ETDEWEB)

    Mangipudi, K.R., E-mail: mangipudi@ump.gwdg.de [Institut für Materialphysik, Georg-August-Universität Göttingen, Friedrich-Hund-Platz 1, D-37077 Göttingen (Germany); Radisch, V., E-mail: vradisch@ump.gwdg.de [Institut für Materialphysik, Georg-August-Universität Göttingen, Friedrich-Hund-Platz 1, D-37077 Göttingen (Germany); Holzer, L., E-mail: holz@zhaw.ch [Züricher Hochschule für Angewandte Wissenschaften, Institute of Computational Physics, Wildbachstrasse 21, CH-8400 Winterthur (Switzerland); Volkert, C.A., E-mail: volkert@ump.gwdg.de [Institut für Materialphysik, Georg-August-Universität Göttingen, Friedrich-Hund-Platz 1, D-37077 Göttingen (Germany)

    2016-04-15

    We present an automated focused ion beam nanotomography method for nanoporous microstructures with open porosity, and apply it to reconstruct nanoporous gold (np-Au) structures with ligament sizes on the order of a few tens of nanometers. This method uses serial sectioning of a well-defined wedge-shaped geometry to determine the thickness of individual slices from the changes in the sample width in successive cross-sectional images. The pore space of a selected region of the np-Au is infiltrated with ion-beam-deposited Pt composite before serial sectioning. The cross-sectional images are binarized and stacked according to the individual slice thicknesses, and then processed using standard reconstruction methods. For the image conditions and sample geometry used here, we are able to determine the thickness of individual slices with an uncertainty much smaller than a pixel. The accuracy of the new method, based on actual slice thicknesses, is assessed by comparing it with (i) a reconstruction using the same cross-sectional images but assuming a constant slice thickness, and (ii) a reconstruction using the traditional FIB-tomography method employing a constant slice thickness. The morphology and topology of the structures are characterized using ligament and pore size distributions, interface shape distribution functions, interface normal distributions, and genus. The results suggest that the morphology and topology of the final reconstructions are significantly influenced when a constant slice thickness is assumed. The study reveals grain-to-grain variations in the morphology and topology of np-Au. - Highlights: • FIB nanotomography of nanoporous structures with feature sizes of ∼40 nm or less. • Accurate determination of individual slice thickness with subpixel precision. • The method preserves surface topography. • Quantitative 3D microstructural analysis of materials with open porosity.
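
The wedge-based slice-thickness estimate comes down to simple geometry: for a symmetric wedge of known half-angle, removing a slice of thickness t widens the exposed cross-section by 2·t·tan(half-angle). A hedged sketch, where the wedge geometry, sign convention, and numbers are hypothetical rather than taken from the paper:

```python
import math

def slice_thicknesses(widths_um, half_angle_deg):
    """Per-slice thickness from successive cross-section widths (in um).

    Assumes a symmetric wedge: each slice of thickness t increases the
    exposed width by 2 * t * tan(half_angle). Illustrative only.
    """
    k = 2.0 * math.tan(math.radians(half_angle_deg))
    return [(w1 - w0) / k for w0, w1 in zip(widths_um, widths_um[1:])]

# Widths measured in four successive images of a 30-degree half-angle wedge:
widths = [10.0, 10.3, 10.58, 10.9]
t = slice_thicknesses(widths, 30.0)
```

Because the width change is measured in the images themselves, the thickness estimate inherits sub-pixel precision from the edge localization, which is the point made in the abstract.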

  20. Three-dimensional Reconstruction Method Study Based on Interferometric Circular SAR

    Directory of Open Access Journals (Sweden)

    Hou Liying

    2016-10-01

    Circular Synthetic Aperture Radar (CSAR) can acquire targets' scattering information in all directions through 360° observation, but a single-track CSAR cannot efficiently obtain height scattering information for a strongly directive scatterer. In this study, we examine three-dimensional circular SAR interferometry theory for a typical target and validate the theory in a darkroom experiment. We present a 3D reconstruction of an actual metal tank model from interferometric CSAR for the first time, verify the validity of the method, and demonstrate the important potential of combining 3D reconstruction with omnidirectional observation.

  1. Reconstruction of Sound Source Pressures in an Enclosure Using the Phased Beam Tracing Method

    DEFF Research Database (Denmark)

    Jeong, Cheol-Ho; Ih, Jeong-Guon

    2009-01-01

    ...all the pressure histories at the field points, source-observer relations can be constructed in matrix-vector form for each frequency. By multiplying the measured field data with the pseudo-inverse of the calculated transfer function, one obtains the distribution of source pressure. An omni-directional sphere and a cubic source in a rectangular enclosure were taken as examples in the simulation tests. A reconstruction error was investigated by Monte Carlo simulation in terms of field point locations. When the source information was reconstructed by the present method, it was shown that the sound power...

  2. A Method to Reconstruct Nth-Order Periodically Nonuniform Sampled Signals

    Institute of Scientific and Technical Information of China (English)

    ZHANG Yao

    2004-01-01

    It is well known that nonuniform sampling is often needed in special signal processing. In this paper, a general method to reconstruct Nth-order periodically nonuniformly sampled signals is presented and extended to the digital domain, and the designs of the digital filters and the synthesis system are given. This paper extends the studies of Kohlenberg, whose work concentrates on second-order periodically nonuniform sampling, as well as the studies of A. J. Coulson and J. L. Brown, whose work deals with Nth-order sampling and reconstruction of two-band signals.
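
Why periodically nonuniform sample sets determine a bandlimited signal can be illustrated numerically: interleave two uniform sample trains and solve a least-squares system in a trigonometric basis. This toy is not the paper's filter-bank construction, only a sketch with arbitrary offsets and harmonic count:

```python
import numpy as np

rng = np.random.default_rng(1)

# Bandlimited periodic test signal on [0, 1): K harmonics, 2K+1 unknowns.
K = 4
coef = rng.standard_normal(2 * K + 1)

def design(t):
    """Trigonometric design matrix up to harmonic K."""
    cols = [np.ones_like(t)]
    for k in range(1, K + 1):
        cols.append(np.cos(2 * np.pi * k * t))
        cols.append(np.sin(2 * np.pi * k * t))
    return np.stack(cols, axis=1)

def signal(t):
    return design(t) @ coef

# Second-order periodically nonuniform sampling: two interleaved uniform
# trains with base period T and a fractional offset (values are arbitrary).
T = 1.0 / 6.0
t_samp = np.sort(np.concatenate([np.arange(6) * T,
                                 np.arange(6) * T + 0.29 * T]))

# Reconstruction: recover the harmonic coefficients by least squares,
# then evaluate the signal anywhere on a dense grid.
c_hat, *_ = np.linalg.lstsq(design(t_samp), signal(t_samp), rcond=None)
t_dense = np.linspace(0.0, 1.0, 200, endpoint=False)
recon = design(t_dense) @ c_hat
```

With 12 distinct sample times and 9 unknowns, the system is overdetermined and the trigonometric Vandermonde matrix has full column rank, so the reconstruction is exact.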

  3. Inverse Heat Conduction Methods in the CHAR Code for Aerothermal Flight Data Reconstruction

    Science.gov (United States)

    Oliver, A. Brandon; Amar, Adam J.

    2016-01-01

    Reconstruction of flight aerothermal environments often requires the solution of an inverse heat transfer problem, which is an ill-posed problem of determining boundary conditions from discrete measurements in the interior of the domain. This paper will present the algorithms implemented in the CHAR code for use in reconstruction of EFT-1 flight data and future testing activities. Implementation details will be discussed, and alternative hybrid-methods that are permitted by the implementation will be described. Results will be presented for a number of problems.

  4. Method and system for progressive mesh storage and reconstruction using wavelet-encoded height fields

    Science.gov (United States)

    Baxes, Gregory A. (Inventor); Linger, Timothy C. (Inventor)

    2011-01-01

    Systems and methods are provided for progressive mesh storage and reconstruction using wavelet-encoded height fields. A method for progressive mesh storage includes reading raster height field data, and processing the raster height field data with a discrete wavelet transform to generate wavelet-encoded height fields. In another embodiment, a method for progressive mesh storage includes reading texture map data, and processing the texture map data with a discrete wavelet transform to generate wavelet-encoded texture map fields. A method for reconstructing a progressive mesh from wavelet-encoded height field data includes determining terrain blocks, and a level of detail required for each terrain block, based upon a viewpoint. Triangle strip constructs are generated from vertices of the terrain blocks, and an image is rendered utilizing the triangle strip constructs. Software products that implement these methods are provided.
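
A one-level orthonormal 2D Haar transform, the simplest instance of the discrete wavelet transform mentioned above, already supports progressive reconstruction: keeping only the approximation band gives a coarse height field, while adding the detail bands restores the original exactly. This sketch illustrates the principle only, not the patent's actual encoding:

```python
import numpy as np

def haar2d(a):
    """One level of an orthonormal 2D Haar transform; a has even sides."""
    s = 0.5
    a00, a01 = a[0::2, 0::2], a[0::2, 1::2]
    a10, a11 = a[1::2, 0::2], a[1::2, 1::2]
    ll = s * (a00 + a01 + a10 + a11)  # approximation
    lh = s * (a00 - a01 + a10 - a11)  # horizontal detail
    hl = s * (a00 + a01 - a10 - a11)  # vertical detail
    hh = s * (a00 - a01 - a10 + a11)  # diagonal detail
    return ll, lh, hl, hh

def ihaar2d(ll, lh, hl, hh):
    """Inverse of haar2d."""
    m, n = ll.shape
    a = np.empty((2 * m, 2 * n))
    s = 0.5
    a[0::2, 0::2] = s * (ll + lh + hl + hh)
    a[0::2, 1::2] = s * (ll - lh + hl - hh)
    a[1::2, 0::2] = s * (ll + lh - hl - hh)
    a[1::2, 1::2] = s * (ll - lh - hl + hh)
    return a

rng = np.random.default_rng(2)
height = rng.standard_normal((8, 8))          # toy height field
ll, lh, hl, hh = haar2d(height)
zeros = np.zeros_like(ll)
coarse = ihaar2d(ll, zeros, zeros, zeros)     # low level of detail
exact = ihaar2d(ll, lh, hl, hh)               # full reconstruction
```

Cascading the transform on `ll` yields the multi-level pyramid from which per-block levels of detail can be streamed.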

  5. A marked bounding box method for image data reduction and reconstruction of sole patterns

    Science.gov (United States)

    Wang, Xingyue; Wu, Jianhua; Zhao, Qingmin; Cheng, Jian; Zhu, Yican

    2012-01-01

    A novel and efficient method called the marked bounding box method, based on marching cubes, is presented for point cloud data reduction of sole patterns. The method is characterized in that each bounding box is marked with an index during data reduction, which is later used for data reconstruction. The reconstruction is implemented from the simplified data set using triangular meshes, with the indices used to search for the nearest points in adjacent bounding boxes. Afterwards, the normal vectors are estimated to determine the strength and direction of the reflected surface light. The proposed method is used in a sole pattern classification and query system which uses OpenGL under Visual C++ to render images of sole patterns. Numerical results are given to demonstrate the efficiency and novelty of our method, followed by conclusions and discussion.
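
The box-marking idea can be sketched as follows: points are binned into boxes, each occupied box keeps one representative together with its integer index, and the indices later let a reconstruction step look up candidates in the adjacent boxes. The representative choice (centroid) and all sizes below are illustrative assumptions:

```python
import numpy as np

def reduce_with_marked_boxes(points, box_size):
    """Keep one representative (the centroid) per occupied box.

    Returns a dict: integer box index -> representative point, so that
    neighbors can later be found by probing adjacent indices.
    """
    idx = np.floor(points / box_size).astype(np.int64)
    buckets = {}
    for p, i in zip(points, map(tuple, idx)):
        buckets.setdefault(i, []).append(p)
    return {i: np.mean(ps, axis=0) for i, ps in buckets.items()}

def neighbors(reps, box_index):
    """Representatives from the 3x3x3 block of boxes around box_index."""
    out = []
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            for dk in (-1, 0, 1):
                key = (box_index[0] + di, box_index[1] + dj, box_index[2] + dk)
                if key in reps:
                    out.append(reps[key])
    return out

rng = np.random.default_rng(3)
cloud = rng.uniform(0.0, 1.0, size=(500, 3))   # toy sole-pattern cloud
reps = reduce_with_marked_boxes(cloud, box_size=0.25)
near = neighbors(reps, (0, 0, 0))
```

The dictionary of marked indices is what makes the nearest-point search local: only 27 boxes are probed instead of the whole reduced set.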

  6. Performance study of Lagrangian methods: reconstruction of large scale peculiar velocities and baryonic acoustic oscillations

    Science.gov (United States)

    Keselman, J. A.; Nusser, A.

    2017-01-01

    NoAM, for "No Action Method", is a framework for reconstructing the past orbits of observed tracers of the large-scale mass density field. It seeks exact solutions of the equations of motion (EoM), satisfying initial homogeneity and the final observed particle (tracer) positions. The solutions are found iteratively, reaching a specified tolerance defined as the RMS of the distance between reconstructed and observed positions. Starting from a guess for the initial conditions, NoAM advances particles using standard N-body techniques for solving the EoM. Alternatively, the EoM can be replaced by any approximation, such as Zel'dovich and second-order perturbation theory (2LPT). NoAM is suitable for billions of particles and can easily handle non-regular volumes, redshift space, and other constraints. We implement NoAM to systematically compare Zel'dovich, 2LPT, and N-body dynamics over diverse configurations ranging from an idealized high-resolution periodic simulation box to realistic galaxy mocks. Our findings are: (i) non-linear reconstructions with Zel'dovich, 2LPT, and full dynamics perform better than linear theory only for idealized catalogs in real space; for realistic catalogs, linear theory is the optimal choice for reconstructing velocity fields smoothed on scales ≳ 5 h⁻¹ Mpc; (ii) all non-linear back-in-time reconstructions tested here produce comparable enhancement of the baryonic oscillation signal in the correlation function.
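
NoAM's iterate-until-tolerance loop can be caricatured in one dimension: guess initial positions, advance them with some dynamics, and correct the guess by the mismatch until the RMS distance falls below tolerance. The `advance` model below is a toy stand-in (a Zel'dovich-like displacement by a smooth field), not the paper's N-body integrator:

```python
import numpy as np

def reconstruct_initial_positions(x_obs, advance, n_iter=100, tol=1e-8):
    """Schematic NoAM-style iteration.

    Adjusts guessed initial positions q until advance(q) matches the
    observed positions x_obs, measured by the RMS mismatch. `advance`
    stands in for any dynamics (Zel'dovich, 2LPT, or full N-body).
    """
    q = x_obs.copy()                      # initial guess: no displacement
    for _ in range(n_iter):
        mismatch = advance(q) - x_obs
        rms = np.sqrt(np.mean(mismatch ** 2))
        if rms < tol:
            break
        q = q - mismatch                  # simple fixed-point correction
    return q, rms

# Toy 1D dynamics: displacement by a smooth (hypothetical) field.
advance = lambda q: q + 0.1 * np.sin(q)
x_obs = np.linspace(0.0, 6.0, 50)
q_rec, rms = reconstruct_initial_positions(x_obs, advance)
```

Here the update is a contraction, so a handful of iterations suffice; the real method must cope with orbit crossing and far less forgiving dynamics.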

  7. Deep learning methods to guide CT image reconstruction and reduce metal artifacts

    Science.gov (United States)

    Gjesteby, Lars; Yang, Qingsong; Xi, Yan; Zhou, Ye; Zhang, Junping; Wang, Ge

    2017-03-01

    The rapidly-rising field of machine learning, including deep learning, has inspired applications across many disciplines. In medical imaging, deep learning has been primarily used for image processing and analysis. In this paper, we integrate a convolutional neural network (CNN) into the computed tomography (CT) image reconstruction process. Our first task is to monitor the quality of CT images during iterative reconstruction and decide when to stop the process according to an intelligent numerical observer instead of using a traditional stopping rule, such as a fixed error threshold or a maximum number of iterations. After training on ground truth images, the CNN was successful in guiding an iterative reconstruction process to yield high-quality images. Our second task is to improve a sinogram to correct for artifacts caused by metal objects. A large number of interpolation and normalization-based schemes were introduced for metal artifact reduction (MAR) over the past four decades. The NMAR algorithm is considered a state-of-the-art method, although residual errors often remain in the reconstructed images, especially in cases of multiple metal objects. Here we merge NMAR with deep learning in the projection domain to achieve additional correction in critical image regions. Our results indicate that deep learning can be a viable tool to address CT reconstruction challenges.
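
The interpolation-based MAR baseline that NMAR builds on is easy to sketch: sinogram bins flagged as the metal trace are replaced by linear interpolation along each projection row. NMAR additionally normalizes by a prior sinogram before interpolating, which is omitted here; all data below are synthetic:

```python
import numpy as np

def interpolate_metal_trace(sinogram, metal_mask):
    """Replace metal-corrupted sinogram bins by linear interpolation
    along each detector row (the classic interpolation-based MAR step)."""
    out = sinogram.copy()
    cols = np.arange(sinogram.shape[1])
    for r in range(sinogram.shape[0]):
        bad = metal_mask[r]
        if bad.any() and (~bad).any():
            out[r, bad] = np.interp(cols[bad], cols[~bad], sinogram[r, ~bad])
    return out

# Toy sinogram: smooth (linear) rows, with a "metal" trace blown up to 50.
sino = np.tile(np.linspace(0.0, 1.0, 11), (4, 1))
mask = np.zeros_like(sino, dtype=bool)
mask[:, 4:7] = True
sino_corrupt = sino.copy()
sino_corrupt[mask] = 50.0
fixed = interpolate_metal_trace(sino_corrupt, mask)
```

On these linear rows the interpolation restores the trace exactly; on real data it leaves the residual errors that the paper's deep-learning stage targets.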

  8. Rehanging Reynolds at the British Institution: Methods for Reconstructing Ephemeral Displays

    Directory of Open Access Journals (Sweden)

    Catherine Roach

    2016-11-01

    Reconstructions of historic exhibitions made with current technologies can present beguiling illusions, but they also put us in danger of recreating the past in our own image. This article and the accompanying reconstruction explore methods for representing lost displays, with an emphasis on visualizing uncertainty, illuminating process, and understanding the mediated nature of period images. These issues are highlighted in a partial recreation of a loan show held at the British Institution, London, in 1823, which featured the works of Sir Joshua Reynolds alongside continental old masters. This recreation demonstrates how speculative reconstructions can nonetheless shed light on ephemeral displays, revealing powerful visual and conceptual dialogues that took place on the crowded walls of nineteenth-century exhibitions.

  9. Lipofilling as a method of reconstructive treatment of breast cancer patients (a review of literature)

    Directory of Open Access Journals (Sweden)

    R. V. Lyubota

    2014-01-01

    Lipofilling is one of the most promising directions in reconstructive surgery for patients with breast cancer (BC). Although the first transplantation of autologous adipose tissue was performed in the late nineteenth century, the technique became widely used only with the introduction of liposuction in the 1980s, which greatly simplified the harvesting of autologous adipose tissue for subsequent transplantation. Clinical studies have demonstrated the efficacy and safety of lipofilling for reconstruction in patients with BC. This paper presents current data on the efficacy and safety of lipofilling as a primary or secondary method of reconstructive treatment of patients with BC.

  10. Attenuation Correction in SPECT during Image Reconstruction using an Inverse Monte Carlo Method: A Simulation Study

    Directory of Open Access Journals (Sweden)

    Shahla Ahmadi

    2011-09-01

    Introduction: The main goal of SPECT imaging is to determine the activity distribution inside the organs of the body. However, due to photon attenuation, it is almost impossible to do a truly quantitative study. In this paper, we suggest a mathematical relationship between the activity distribution and its corresponding projections using a transfer matrix. Monte Carlo simulation was used to find a precise transfer matrix including the effects of photon attenuation. Materials and Methods: The list-mode output of the SIMIND Monte Carlo simulator was used to find the relationship between the activity distribution and the pixel values in the projections. The MLEM iterative reconstruction method was then used to reconstruct the activity distribution from the projections. Attenuation-free projections were also simulated, and images reconstructed from these projections were used as reference images. Our suggested attenuation correction method was evaluated using three different phantom configurations: a uniform-activity and uniform-attenuation phantom, a non-uniform-activity and non-uniform-attenuation phantom, and the NCAT torso phantom. Mean pixel values and fits between profiles were used as quantitative parameters. Results: Images free from attenuation-related artifacts were reconstructed by our suggested method. A significant increase in pixel values was found after attenuation correction. Better fits between profiles of the corrected and reference images were also found for all phantom configurations. Discussion and Conclusion: Using a Monte Carlo method, it is possible to find a precise relationship between an activity distribution and its projections. It is therefore possible to create mathematical projections that include the effects of attenuation, which allows a more realistic comparison between mathematical and real projections, a necessary step for image reconstruction using MLEM. This results in images with much better quantitative accuracy at a cost of...
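
The MLEM update used above has a compact closed form, x ← x · Aᵀ(y / Ax) / Aᵀ1, where A is the transfer (system) matrix that may incorporate attenuation. A toy sketch, with a random positive matrix standing in for the Monte Carlo-derived transfer matrix:

```python
import numpy as np

def mlem(A, y, n_iter=2000):
    """Maximum-likelihood EM iterations for emission tomography:
    x <- x * A^T(y / Ax) / A^T 1. A plays the role of the transfer
    matrix (here random; in the paper it encodes attenuation)."""
    x = np.ones(A.shape[1])                     # positive starting image
    sens = A.T @ np.ones(A.shape[0])            # sensitivity image A^T 1
    for _ in range(n_iter):
        proj = A @ x
        x = x * (A.T @ (y / np.maximum(proj, 1e-12))) / np.maximum(sens, 1e-12)
    return x

rng = np.random.default_rng(4)
A = rng.uniform(0.1, 1.0, size=(30, 8))         # hypothetical system matrix
x_true = rng.uniform(0.5, 2.0, size=8)
y = A @ x_true                                  # noise-free projections
x_hat = mlem(A, y)
```

The multiplicative form guarantees non-negativity at every iteration, which is why MLEM is preferred over unconstrained least squares for activity estimation.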

  11. A Stochastic Geometry Method for Pylon Reconstruction from Airborne LiDAR Data

    Directory of Open Access Journals (Sweden)

    Bo Guo

    2016-03-01

    Object detection and reconstruction from remotely sensed data are active research topics in the photogrammetric and remote sensing communities. Monitoring power engineering devices by detecting key objects is important for power safety. In this paper, we introduce a novel method for the reconstruction of self-supporting pylons, widely used in high-voltage power-line systems, from airborne LiDAR data. Our work constructs pylons from a library of 3D parametric models, which are represented as polyhedrons based on stochastic geometry. First, laser points belonging to pylons are extracted from the dataset using an automatic classification method. An energy function comprising two terms is then defined: the first term measures the adequacy of the objects with respect to the data, and the second term favors or penalizes certain configurations based on prior knowledge. Finally, estimation is performed by minimizing the energy using simulated annealing. We use a Markov Chain Monte Carlo sampler, leading to an optimal configuration of objects. The two main contributions of this paper are: (1) a framework for automatic pylon reconstruction; and (2) efficient global optimization. The pylons can be precisely reconstructed through energy optimization. Experiments producing convincing results validated the proposed method on a dataset with complex structures.
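
The energy-minimization step can be sketched with a generic simulated-annealing core: uphill moves are accepted with probability exp(−ΔE/T) while the temperature is slowly lowered. The paper couples this with a Markov Chain Monte Carlo sampler over parametric pylon configurations; the toy below only minimizes a scalar energy with a made-up proposal:

```python
import math
import random

def simulated_annealing(energy, propose, x0, t0=1.0, cooling=0.999,
                        n_steps=20000, seed=5):
    """Generic simulated-annealing minimizer (Metropolis acceptance)."""
    rng = random.Random(seed)
    x, e = x0, energy(x0)
    best_x, best_e = x, e
    t = t0
    for _ in range(n_steps):
        cand = propose(x, rng)
        ec = energy(cand)
        # Always accept downhill; accept uphill with prob exp(-dE/T).
        if ec < e or rng.random() < math.exp(-(ec - e) / max(t, 1e-12)):
            x, e = cand, ec
            if e < best_e:
                best_x, best_e = x, e
        t *= cooling
    return best_x, best_e

# Toy energy with a known minimum at x = 2.
energy = lambda x: (x - 2.0) ** 2
propose = lambda x, rng: x + rng.uniform(-0.5, 0.5)
x_best, e_best = simulated_annealing(energy, propose, x0=-5.0)
```

In the paper the state is a configuration of polyhedral pylon models and the proposals insert, delete, or perturb objects, but the acceptance rule is the same.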

  12. Common Deficiencies in Existing Methods for Reconstructing High-Resolution Temperature Fields During the Last Millennium

    Science.gov (United States)

    Smerdon, J. E.; Kaplan, A.; Zorita, E.; Gonzalez-Rouco, F. J.; Evans, M. N.

    2009-12-01

    Paleoclimatic reconstructions of hemispheric and global surface temperatures during the last millennium vary significantly in their estimates of decadal-to-centennial variability. Although several estimates are based on spatially-resolved climate field reconstruction (CFR) methods, comparisons have been limited to mean Northern Hemisphere temperatures. Spatial skill is explicitly investigated for four CFR methods using pseudoproxy experiments derived from two millennial-length coupled Atmosphere-Ocean General Circulation Model (AOGCM) simulations. The adopted pseudoproxy network approximates the spatial distribution of a widely used multi-proxy network and the CFRs target annual temperature variability on a 5-degree latitude-longitude grid. Results indicate that the spatial skill of presently available large-scale CFRs depends on proxy type and location, target data, and the employed reconstruction methodology, although there are widespread consistencies in the general performance of all four methods. While results are somewhat sensitive to the ability of the AOGCMs to resolve ENSO and its teleconnections, important areas such as the ocean basins and much of the Southern Hemisphere are reconstructed with particularly poor skill in both model experiments. New high-resolution proxies from poorly sampled regions may be one of the best means of improving estimates of large-scale CFRs of the last millennium.

  13. Waveform Catalog, Extreme Mass Ratio Binary (Capture)

    Data.gov (United States)

    National Aeronautics and Space Administration — Numerically-generated gravitational waveforms for circular inspiral into Kerr black holes. These waveforms were developed using Scott Hughes' black hole perturbation...

  14. Numerical reconstruction of unknown Robin inclusions inside a heat conductor by a non-iterative method

    Science.gov (United States)

    Nakamura, Gen; Wang, Haibing

    2017-05-01

    Consider the problem of reconstructing unknown Robin inclusions inside a heat conductor from boundary measurements. This problem arises from active thermography and is formulated as an inverse boundary value problem for the heat equation. In our previous works, we proposed a sampling-type method for reconstructing the boundary of the Robin inclusion and gave its rigorous mathematical justification. This method is non-iterative and based on the characterization of the solution to the so-called Neumann-to-Dirichlet map gap equation. In this paper, we give a further investigation of the reconstruction method from both the theoretical and numerical points of view. First, we clarify the solvability of the Neumann-to-Dirichlet map gap equation and establish a relation of its solution to the Green function associated with an initial-boundary value problem for the heat equation inside the Robin inclusion. This naturally provides a way of computing this Green function from the Neumann-to-Dirichlet map and explains what the input for the linear sampling method is. Assuming that the Neumann-to-Dirichlet map gap equation has a unique solution, we also show the convergence of our method for noisy measurements. Second, we give the numerical implementation of the reconstruction method for two-dimensional spatial domains. The measurements for our inverse problem are simulated by solving the forward problem via the boundary integral equation method. Numerical results are presented to illustrate the efficiency and stability of the proposed method. Finally, using a finite sequence of transient inputs over a time interval, we propose a new sampling method over the time interval that requires only a single measurement and is thus more likely to be practical.

  15. Source-independent elastic waveform inversion using a logarithmic wavefield

    KAUST Repository

    Choi, Yun Seok

    2012-01-01

    Logarithmic waveform inversion has been widely developed and applied to synthetic and real data. In most logarithmic waveform inversion algorithms, the subsurface velocities are updated along with the source estimation. To avoid estimating the source wavelet in the logarithmic waveform inversion, we developed a source-independent logarithmic waveform inversion algorithm. In this inversion algorithm, we first normalize the wavefields with a reference wavefield to remove the source wavelet, and then take the logarithm of the normalized wavefields. Based on the properties of the logarithm, we define three types of misfit functions: combined amplitude and phase, amplitude-only, and phase-only. In the inversion, the gradient is computed using the back-propagation formula without directly calculating the Jacobian matrix. We apply our algorithm to noise-free and noise-added synthetic data generated for a modified version of the elastic Marmousi2 model, and compare the results with those of the source-estimation logarithmic waveform inversion. For the noise-free data, the source-independent algorithms yield velocity models close to the true velocity models. For random-noise data, the source-estimation logarithmic waveform inversion yields better results than the source-independent method, whereas for coherent-noise data the results are reversed. Numerical results show that the source-independent and source-estimation logarithmic waveform inversion methods have their own merits for random- and coherent-noise data. © 2011.
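
The source-independence trick is simple to verify numerically: at a single frequency, each wavefield is the product of the (unknown) source spectrum and a Green's function, so normalizing by a reference trace cancels the source exactly, and the complex logarithm splits the remainder into the amplitude-only and phase-only parts used by the three misfits. A sketch with random synthetic wavefields (all quantities hypothetical):

```python
import numpy as np

rng = np.random.default_rng(6)

# Frequency-domain model at one frequency: data = source * Green's function.
n = 16
source = rng.standard_normal() + 1j * rng.standard_normal()   # unknown wavelet
g = rng.standard_normal(n) + 1j * rng.standard_normal(n)      # Green's functions
g_ref = g[0]                                                  # reference trace

u = source * g            # observed wavefields
u_ref = source * g_ref    # reference wavefield

ratio = u / u_ref         # source term cancels here
log_full = np.log(ratio)              # input to the combined misfit
log_amp = np.log(np.abs(ratio))       # amplitude-only part
phase = np.angle(ratio)               # phase-only part
```

Since log(ratio) = log|ratio| + i·arg(ratio), the three misfit types in the abstract correspond to the full complex log, its real part, and its imaginary part.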

  16. Low-dose CT Image Reconstruction Based on Adaptive Kernel Regression Method and Algebraic Reconstruction Technique (Computer and Modernization)

    Institute of Scientific and Technical Information of China (English)

    钟志威

    2016-01-01

    For CT image reconstruction from sparse-angle projection data, the TV-ART algorithm introduces the gradient-sparsity prior of the image into the algebraic reconstruction technique (ART) and reconstructs piecewise-smooth images well. However, the algorithm produces a staircase effect when boundaries are reconstructed, degrading reconstruction quality. This paper therefore proposes a reconstruction algorithm combining an adaptive kernel regression function with the algebraic reconstruction technique (LAKR-ART), which avoids the staircase effect at boundaries and reconstructs detailed textures better. Finally, simulation experiments on the standard Shepp-Logan CT image and an actual head CT image, compared against ART and TV-ART, show that the proposed algorithm is effective.
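
The ART component referred to above is the classical Kaczmarz sweep: the estimate is projected onto each measurement hyperplane in turn. A minimal sketch without the TV or kernel-regression terms, on a toy consistent system:

```python
import numpy as np

def art(A, b, n_sweeps=500, relax=1.0):
    """Kaczmarz-type ART: cycle through the rows of A and project the
    current estimate onto each measurement hyperplane a_i . x = b_i."""
    x = np.zeros(A.shape[1])
    for _ in range(n_sweeps):
        for i in range(A.shape[0]):
            ai = A[i]
            x = x + relax * (b[i] - ai @ x) / (ai @ ai) * ai
    return x

rng = np.random.default_rng(7)
A = rng.standard_normal((20, 6))    # toy projection matrix
x_true = rng.standard_normal(6)
b = A @ x_true                      # consistent, noise-free data
x_hat = art(A, b)
```

For sparse-angle CT the system is underdetermined and inconsistent, which is exactly where regularizers such as TV or the paper's adaptive kernel regression enter.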

  17. A Method for 3D Histopathology Reconstruction Supporting Mouse Microvasculature Analysis.

    Directory of Open Access Journals (Sweden)

    Yiwen Xu

    Structural abnormalities of the microvasculature can impair perfusion and function. Conventional histology provides good spatial resolution with which to evaluate the microvascular structure but affords no 3-dimensional information; this limitation could lead to misinterpretations of the complex microvessel network in health and disease. The objective of this study was to develop and evaluate an accurate, fully automated 3D histology reconstruction method to visualize the arterioles and venules within the mouse hind-limb. Sections of the tibialis anterior muscle from C57BL/J6 mice (both normal and subjected to femoral artery excision) were reconstructed using pairwise rigid and affine registrations of 5 µm-thick, paraffin-embedded serial sections digitized at 0.25 µm/pixel. Low-resolution intensity-based rigid registration was used to initialize both the nucleus landmark-based registration and the conventional high-resolution intensity-based registration method. The affine nucleus landmark-based registration was developed in this work and was compared to the conventional affine high-resolution intensity-based registration method. Target registration errors were measured between adjacent tissue sections (pairwise error), as well as with respect to a 3D reference reconstruction (accumulated error), to capture propagation of error through the stack of sections. Accumulated error measures were lower (p < 0.01) for the nucleus landmark technique, and superior vasculature continuity was observed. These findings indicate that registration based on automatic extraction and correspondence of small, homologous landmarks may support accurate 3D histology reconstruction. This technique avoids the otherwise problematic "banana-into-cylinder" effect observed using conventional methods that optimize the pairwise alignment of salient structures, forcing them to be section-orthogonal. This approach will provide a valuable tool for high-accuracy 3D histology tissue...

  18. System and method for image reconstruction, analysis, and/or de-noising

    KAUST Repository

    Laleg-Kirati, Taous-Meriem

    2015-11-12

    A method and system can analyze, reconstruct, and/or denoise an image. The method and system can include interpreting a signal as a potential of a Schrödinger operator, decomposing the signal into squared eigenfunctions, reducing a design parameter of the Schrödinger operator, analyzing discrete spectra of the Schrödinger operator and combining the analysis of the discrete spectra to construct the image.
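
The decomposition described in the patent can be sketched for a 1D signal using the semi-classical signal analysis idea: discretize the operator −h²·d²/dx² − y, keep the negative eigenvalues −κₙ², and rebuild y ≈ 4h·Σₙ κₙ·ψₙ². The grid, test pulse, and value of h below are illustrative assumptions, not the patented implementation:

```python
import numpy as np

def scsa_reconstruct(y, dx, h):
    """Rebuild a positive signal y from the squared eigenfunctions of the
    Schrodinger operator -h^2 d2/dx2 - y (semi-classical signal analysis):
        y_h(x) = 4 h * sum_n kappa_n * psi_n(x)^2
    over the negative eigenvalues -kappa_n^2. 3-point Laplacian; sketch only.
    """
    n = len(y)
    lap = (np.diag(np.full(n - 1, 1.0), -1) - 2.0 * np.eye(n)
           + np.diag(np.full(n - 1, 1.0), 1)) / dx ** 2
    H = -h ** 2 * lap - np.diag(y)
    vals, vecs = np.linalg.eigh(H)
    neg = vals < 0
    kappa = np.sqrt(-vals[neg])
    psi = vecs[:, neg] / np.sqrt(dx)      # L2-normalize on the grid
    return 4.0 * h * (psi ** 2) @ kappa

x = np.linspace(-15.0, 15.0, 601)
dx = x[1] - x[0]
y = 1.0 / np.cosh(x) ** 2                 # sech^2 test pulse
y_h = scsa_reconstruct(y, dx, h=0.1)
```

Reducing the design parameter h admits more negative eigenvalues and drives y_h toward y, which is the knob the patent's analysis step turns.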

  19. Multi-group pin power reconstruction method based on colorset form functions

    Institute of Scientific and Technical Information of China (English)

    HUANG Hao

    2009-01-01

    A multi-group pin power reconstruction method that fully exploits nodal information obtained from the global coarse-mesh solution has been developed. It expands the intra-nodal flux distributions into nonseparable semi-analytic basis functions, and a colorset-based form-function generating method is proposed, which can accurately model the spectral interaction occurring at assembly interfaces. To demonstrate its accuracy and applicability to realistic problems, the new method is tested against two benchmark problems, including a mixed-oxide fuel problem. The results show that the new method is comparable in accuracy to fine-mesh methods.

  20. Quantification of wave reflection using peripheral blood pressure waveforms.

    Science.gov (United States)

    Kim, Chang-Sei; Fazeli, Nima; McMurtry, M Sean; Finegan, Barry A; Hahn, Jin-Oh

    2015-01-01

    This paper presents a novel minimally invasive method for quantifying blood pressure (BP) wave reflection in the arterial tree. In this method, two peripheral BP waveforms are analyzed to obtain an estimate of central aortic BP waveform, which is used together with a peripheral BP waveform to compute forward and backward pressure waves. These forward and backward waves are then used to quantify the strength of wave reflection in the arterial tree. Two unique strengths of the proposed method are that 1) it replaces highly invasive central aortic BP and flow waveforms required in many existing methods by less invasive peripheral BP waveforms, and 2) it does not require estimation of characteristic impedance. The feasibility of the proposed method was examined in an experimental swine subject under a wide range of physiologic states and in 13 cardiac surgery patients. In the swine subject, the method was comparable to the reference method based on central aortic BP and flow. In cardiac surgery patients, the method was able to estimate forward and backward pressure waves in the absence of any central aortic waveforms: on the average, the root-mean-squared error between actual versus computed forward and backward pressure waves was less than 5 mmHg, and the error between actual versus computed reflection index was less than 0.03.
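
The textbook wave separation that the method is compared against can be written in two lines: with central pressure P, flow Q, and characteristic impedance Zc, the forward and backward waves are (P ± Zc·Q)/2, and a reflection index follows from their amplitudes. The paper's contribution is precisely to avoid needing Q, Zc, and central waveforms; this sketch shows only the reference decomposition, with made-up waveforms:

```python
import numpy as np

def separate_waves(p, q, zc):
    """Classic linear wave separation into forward/backward pressure."""
    pf = (p + zc * q) / 2.0
    pb = (p - zc * q) / 2.0
    return pf, pb

def reflection_index(pf, pb):
    """Reflection index from peak-to-peak wave amplitudes."""
    a_f = pf.max() - pf.min()
    a_b = pb.max() - pb.min()
    return a_b / (a_f + a_b)

# Hypothetical pressure (mmHg) and flow waveforms over one beat.
t = np.linspace(0.0, 1.0, 100)
p = 100.0 + 20.0 * np.sin(2.0 * np.pi * t)
q = 5.0 + 2.0 * np.sin(2.0 * np.pi * t - 0.3)
pf, pb = separate_waves(p, q, zc=1.5)
ri = reflection_index(pf, pb)
```

By construction pf + pb reproduces the measured pressure, which is the identity the error figures in the abstract are measured against.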

  1. A Reconstructed Discontinuous Galerkin Method for the Compressible Navier-Stokes Equations on Hybrid Grids

    Energy Technology Data Exchange (ETDEWEB)

    Xiaodong Liu; Lijun Xuan; Hong Luo; Yidong Xia

    2001-01-01

    A reconstructed discontinuous Galerkin (rDG(P1P2)) method, originally introduced for the compressible Euler equations, is developed for the solution of the compressible Navier-Stokes equations on 3D hybrid grids. In this method, a piecewise quadratic polynomial solution is obtained from the underlying piecewise linear DG solution using a hierarchical Weighted Essentially Non-Oscillatory (WENO) reconstruction. The reconstructed quadratic polynomial solution is then used for the computation of the inviscid fluxes and the viscous fluxes using the second formulation of Bassi and Rebay (Bassi-Rebay II). The developed rDG(P1P2) method is used to compute a variety of flow problems to assess its accuracy, efficiency, and robustness. The numerical results demonstrate that the rDG(P1P2) method is able to achieve the designed third order of accuracy at a cost only slightly higher than that of its underlying second-order DG method, outperforms the third-order DG method in terms of both computing costs and storage requirements, and obtains reliable and accurate solutions for large eddy simulation (LES) and direct numerical simulation (DNS) of compressible turbulent flows.

  2. Statistical image reconstruction methods in PET with compensation for missing data

    Energy Technology Data Exchange (ETDEWEB)

    Kinahan, P.E. [Univ. of Pittsburgh, PA (United States); Fessler, J.A.; Karp, J.S.

    1996-12-31

    We present the results of combining volume imaging on the PENN-PET scanner with statistical image reconstruction methods such as the penalized weighted least-squares (PWLS) method. The goal of this particular combination is to improve both classification and estimation tasks in PET imaging protocols where image quality is dominated by spatially-variant system responses and/or measurement statistics. The PENN-PET scanner has strongly spatially-varying system behavior due to its volume imaging design and the presence of detector gaps. Statistical methods are easily adapted to this scanner geometry, including the detector gaps, and have also been shown to have improved bias/variance trade-offs compared to the standard filtered-backprojection (FBP) reconstruction method. The PWLS method requires fewer iterations and may be more tolerant of errors in the system model than other statistical methods. We present results demonstrating the improvement in image quality for PWLS image reconstructions of data from the PENN-PET scanner.
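
PWLS has a transparent toy form: minimize the weighted data misfit plus a quadratic roughness penalty, which for small problems can be solved directly from the normal equations. Matrix sizes, weights, and the penalty below are illustrative assumptions, not the PENN-PET system model:

```python
import numpy as np

def pwls(A, y, w, beta, R):
    """Penalized weighted least-squares estimate:
        argmin_x (y - Ax)^T W (y - Ax) + beta * ||R x||^2,
    solved directly via the normal equations (fine for toy sizes)."""
    W = np.diag(w)
    H = A.T @ W @ A + beta * (R.T @ R)
    return np.linalg.solve(H, A.T @ W @ y)

rng = np.random.default_rng(8)
A = rng.standard_normal((25, 10))           # toy system matrix
x_true = rng.standard_normal(10)
w = rng.uniform(0.5, 2.0, size=25)          # weights (e.g. inverse variances)
y = A @ x_true                              # noise-free data
R = np.eye(10) - np.eye(10, k=1)            # first-difference roughness penalty
x_hat = pwls(A, y, w, beta=1e-6, R=R)
```

At realistic image sizes the normal equations are never formed explicitly; iterative solvers (e.g. preconditioned conjugate gradients) minimize the same objective, which is where the "fewer iterations" remark in the abstract applies.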

  3. The Multiple Waveform Persistent Peak (MWaPP) Retracker for SAR waveforms

    DEFF Research Database (Denmark)

    Villadsen, Heidi; Andersen, Ole Baltazar; Stenseng, Lars

    using CryoSat-2 20 Hz SAR data, but due to the similarities between the Sentinel-3 SRAL altimeter and the SIRAL altimeter on-board CryoSat-2, an adaptation of the method will be straightforward. The MWaPP retracker is based on a sub-waveform retracker, but takes the shape of adjacent waveforms into account...... before selecting the sub-waveform belonging to nadir. This is new compared to primary-peak retrackers, and alleviates much of the snagging caused by off-nadir bright targets as well as by topography. The results from the MWaPP retracker show a significant decrease in the standard deviation of the mean...

  4. Reconstructing paleo- and initial landscapes using a multi-method approach in hummocky NE Germany

    Science.gov (United States)

    van der Meij, Marijn; Temme, Arnaud; Sommer, Michael

    2016-04-01

    The unknown state of the landscape at the onset of soil and landscape formation is one of the main sources of uncertainty in landscape evolution modelling. Reconstruction of these initial conditions is not straightforward due to the problems of polygenesis and equifinality: different initial landscapes can change through different sets of processes to an identical end state. Many attempts have been made to reconstruct this initial landscape. These include remote sensing, reverse modelling and the use of soil properties. However, each of these methods is only applicable at a certain spatial scale and comes with its own uncertainties. Here we present a new framework and preliminary results for reconstructing paleo-landscapes in an eroding setting, in which we combine reverse modelling, remote sensing, geochronology, historical data and present-day soil data. With the combination of these different approaches, different spatial scales can be covered and the uncertainty in the reconstructed landscape can be reduced. The study area is located in north-east Germany, where the landscape consists of a collection of small local depressions acting as closed catchments. This postglacial hummocky landscape is suitable for testing our new multi-method approach for several reasons: i) the closed catchments enable a full mass balance of erosion and deposition, owing to the collection of colluvium in these depressions, ii) significant topography changes only started recently, with medieval deforestation and the recent intensification of agriculture, and iii) thanks to extensive previous research a large dataset is readily available.

  5. Landscapes of human evolution: models and methods of tectonic geomorphology and the reconstruction of hominin landscapes.

    Science.gov (United States)

    Bailey, Geoffrey N; Reynolds, Sally C; King, Geoffrey C P

    2011-03-01

    This paper examines the relationship between complex and tectonically active landscapes and patterns of human evolution. We show how active tectonics can produce dynamic landscapes with geomorphological and topographic features that may be critical to long-term patterns of hominin land use, but which are not typically addressed in landscape reconstructions based on existing geological and paleoenvironmental principles. We describe methods of representing topography at a range of scales using measures of roughness based on digital elevation data, and combine the resulting maps with satellite imagery and ground observations to reconstruct features of the wider landscape as they existed at the time of hominin occupation and activity. We apply these methods to sites in South Africa, where relatively stable topography facilitates reconstruction. We demonstrate the presence of previously unrecognized tectonic effects and their implications for the interpretation of hominin habitats and land use. In parts of the East African Rift, reconstruction is more difficult because of dramatic changes since the time of hominin occupation, while fossils are often found in places where activity has now almost ceased. However, we show that original, dynamic landscape features can be assessed by analogy with parts of the Rift that are currently active and indicate how this approach can complement other sources of information to add new insights and pose new questions for future investigation of hominin land use and habitats.

  6. Clinical correlative evaluation of an iterative method for reconstruction of brain SPECT images

    Energy Technology Data Exchange (ETDEWEB)

    Nobili, Flavio E-mail: fnobili@smartino.ge.it; Vitali, Paolo; Calvini, Piero; Bollati, Francesca; Girtler, Nicola; Delmonte, Marta; Mariani, Giuliano; Rodriguez, Guido

    2001-08-01

    Background: Brain SPECT and PET investigations have shown discrepancies in Alzheimer's disease (AD) when considering data deriving from deeply located structures, such as the mesial temporal lobe. These discrepancies could be due to a variety of factors, including substantial differences in gamma-cameras and underlying technology. Mesial temporal structures are deeply located within the brain, and the commonly used Filtered Back-Projection (FBP) technique does not fully take into account either the physical parameters of gamma-cameras or the geometry of collimators. To overcome these limitations, alternative reconstruction methods have been proposed, such as the iterative method of the Conjugate Gradients with modified matrix (CG). However, the clinical applications of these methods have so far been only anecdotal. The present study was planned to compare perfusional SPECT data as derived from the conventional FBP method and from the iterative CG method, which takes into account the geometrical and physical characteristics of the gamma-camera, by a correlative approach with neuropsychology. Methods: Correlations were compared between perfusion of the hippocampal region, as obtained by both the FBP and the CG reconstruction methods, and a short-term memory test (Selective Reminding Test, SRT) specifically addressing one of its functions. A brain-dedicated camera (CERASPECT) was used for SPECT studies with (99m)Tc-hexamethylpropylene-amine-oxime in 23 consecutive patients (mean age: 74.2±6.5) with mild (Mini-Mental Status Examination score ≥15, mean 20.3±3), probable AD. Counts from a hippocampal region in each hemisphere were referred to the average thalamic counts. Results: Hippocampal perfusion significantly correlated with the MMSE score with similar statistical significance (p<0.01) between the two reconstruction methods. Correlation between hippocampal perfusion and the SRT score was better with the CG method (r=0.50 for both hemispheres, p<0

  7. A New Design Method of Low Sidelobe Level LFM Noise Radar Waveform

    Institute of Scientific and Technical Information of China (English)

    李秀友; 董云龙; 张林; 关键

    2016-01-01

    In order to address the high range sidelobe level of LFM noise radar waveforms, a new design method for low sidelobe level LFM noise radar waveforms is presented, combining a low-sidelobe waveform design method with an LFM noise radar waveform design method. First, the objective function of the low-sidelobe optimization problem is established, and the relation between the deterministic quadratic phase factor and the random phase factor is used as a constraint. Then, to solve the constrained optimization problem, the Modified Cycle Algorithm New (MCAN) is proposed, which is solved iteratively, so that the designed constant-modulus LFM noise waveform has both low sidelobes and high Doppler tolerance. Finally, simulation results show that the algorithm effectively suppresses the range-Doppler sidelobes of the waveform ambiguity function, performs well for both stationary and moving targets, and preserves a low probability of intercept.

  8. Generation of correlated finite alphabet waveforms using gaussian random variables

    KAUST Repository

    Jardak, Seifallah

    2014-09-01

    Correlated waveforms have a number of applications in different fields, such as radar and communications. It is easy to generate correlated waveforms using infinite alphabets, but for some applications it is very challenging to use them in practice. Moreover, to generate infinite-alphabet constant-envelope correlated waveforms, the available research uses iterative algorithms, which are computationally very expensive. In this work, we propose simple novel methods to generate correlated waveforms using finite-alphabet constant- and non-constant-envelope symbols. To generate finite-alphabet waveforms, the proposed method maps Gaussian random variables onto the phase-shift-keying, pulse-amplitude, and quadrature-amplitude modulation schemes. For such a mapping, the probability density function of the Gaussian random variables is divided into M regions, where M is the number of alphabet symbols in the corresponding modulation scheme. By exploiting the mapping function, the relationship between the cross-correlation of Gaussian and finite-alphabet symbols is derived. To generate equiprobable symbols, the area of each region is kept the same. If the requirement is for each symbol to have its own unique probability, the proposed scheme allows that as well. Although the proposed scheme is general, the main focus of this paper is to generate finite-alphabet waveforms for multiple-input multiple-output radar, where correlated waveforms are used to achieve desired beampatterns. © 2014 IEEE.
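The PDF-partition mapping described above can be sketched for M-PSK: split the standard-normal density into M equiprobable regions at its quantiles, and assign each region a constant-envelope phase. This is a generic illustration of the idea, not the paper's exact algorithm or its derived cross-correlation relationship:

```python
import numpy as np
from statistics import NormalDist

def gaussian_to_mpsk(g, M=4):
    """Map standard Gaussian samples to M-PSK symbols by splitting the
    Gaussian PDF into M equiprobable regions, one per alphabet symbol."""
    # Region boundaries: standard-normal quantiles at k/M, k = 1..M-1.
    edges = np.array([NormalDist().inv_cdf(k / M) for k in range(1, M)])
    k = np.searchsorted(edges, g)          # region index in 0..M-1
    return np.exp(2j * np.pi * k / M)      # unit-modulus PSK symbol

rng = np.random.default_rng(1)
# Two correlated Gaussian streams via a Cholesky factor.
rho = 0.9
L = np.linalg.cholesky(np.array([[1.0, rho], [rho, 1.0]]))
g = L @ rng.standard_normal((2, 100_000))
s0, s1 = gaussian_to_mpsk(g[0]), gaussian_to_mpsk(g[1])
# Correlated Gaussians yield correlated constant-envelope symbols.
sym_corr = abs(np.vdot(s0, s1)) / len(s0)  # noticeably above the rho=0 level
```

Unequal region areas would give the non-equiprobable variant the abstract mentions.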

  9. An infrared image super-resolution reconstruction method based on compressive sensing

    Science.gov (United States)

    Mao, Yuxing; Wang, Yan; Zhou, Jintao; Jia, Haiwei

    2016-05-01

    Limited by the properties of infrared detectors and camera lenses, infrared images often lack detail and appear visually indistinct. Their spatial resolution needs to be improved to satisfy the requirements of practical applications. Based on compressive sensing (CS) theory, this thesis presents a single-image super-resolution reconstruction (SRR) method. By jointly adopting an image degradation model, a difference-operation-based sparse transformation, and the orthogonal matching pursuit (OMP) algorithm, the image SRR problem is transformed into a sparse-signal reconstruction problem in CS theory. In our work, the sparse transformation matrix is obtained through a difference operation on the image, and the measurement matrix is derived analytically from the imaging principle of the infrared camera. Therefore, the time consumption can be decreased compared with redundant dictionaries obtained by sample training, such as K-SVD. The experimental results show that our method achieves favorable performance and good stability with low algorithmic complexity.
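The OMP recovery step can be sketched generically: greedily select the dictionary atom most correlated with the residual, then re-fit all selected coefficients by least squares. This is textbook OMP with a random Gaussian measurement matrix standing in for the paper's infrared-specific matrices:

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: pick k columns of A that best explain y,
    re-fitting the coefficients on the growing support at every step."""
    residual, support = y.copy(), []
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))   # most correlated atom
        support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((64, 256))
A /= np.linalg.norm(A, axis=0)            # unit-norm columns
x_true = np.zeros(256)
x_true[[10, 70, 200]] = [1.5, -2.0, 1.0]  # 3-sparse signal
x_hat = omp(A, A @ x_true, k=3)           # exact recovery in the noiseless case
```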

  10. [An improvement on the two-dimensional convolution method of image reconstruction and its application to SPECT].

    Science.gov (United States)

    Suzuki, S; Arai, H

    1990-04-01

    In single-photon emission computed tomography (SPECT) and X-ray CT, the one-dimensional (1-D) convolution method is used for image reconstruction from projections. The method applies a 1-D convolution filter to the projection data in the space domain and back-projects the filtered data for reconstruction. Images can also be reconstructed by first forming the 2-D backprojection images from the projections and then convolving them with a 2-D space-domain filter; this is reconstruction by the 2-D convolution method, and its reconstruction process is the reverse of the 1-D convolution method's. Since the 2-D convolution method is inferior to the 1-D convolution method in reconstruction speed, it has seen no practical use. In an actual reconstruction by the 2-D convolution method, the convolution is performed over a finite plane called the convolution window; a window of size N x N requires a 2-D discrete filter of the same size. If good reconstructions can be achieved with small convolution windows, the reconstruction time of the 2-D convolution method can be reduced. For this purpose, 2-D filters of a simple functional form are proposed which give good reconstructions with small convolution windows. Here they are defined on a finite plane, depending on the window size used, although a filter function is usually defined on the infinite plane; they are, however, set so as to better approximate the behavior of a 2-D filter function defined on the infinite plane. Filters of size N x N are thus determined, and their values vary with the window size. The filters are applied to image reconstructions in SPECT. (ABSTRACT TRUNCATED AT 250 WORDS)

  11. Anisotropic wave-equation traveltime and waveform inversion

    KAUST Repository

    Feng, Shihang

    2016-09-06

    The wave-equation traveltime and waveform inversion (WTW) methodology is developed to invert for anisotropic parameters in a vertical transverse isotropic (VTI) medium. The simultaneous inversion of the anisotropic parameters v0, ε and δ is initially performed using the wave-equation traveltime inversion (WT) method. The WT tomograms are then used as starting background models for VTI full waveform inversion. Preliminary numerical tests on synthetic data demonstrate the feasibility of this method for multi-parameter inversion.

  12. Methods for the reconstruction of large scale anisotropies of the cosmic ray flux

    Energy Technology Data Exchange (ETDEWEB)

    Over, Sven

    2010-01-15

    In cosmic ray experiments the arrival directions, among other properties, of cosmic ray particles from detected air shower events are reconstructed. The question of uniformity in the distribution of arrival directions is of large importance for models that try to explain cosmic radiation. In this thesis, methods for the reconstruction of parameters of a dipole-like flux distribution of cosmic rays from a set of recorded air shower events are studied. Different methods are presented and examined by means of detailed Monte Carlo simulations. Particular focus is put on the implications of spurious experimental effects. Modifications of existing methods and new methods are proposed. The main goal of this thesis is the development of the horizontal Rayleigh analysis method. Unlike other methods, this method is based on the analysis of local viewing directions instead of global sidereal directions. As a result, the symmetries of the experimental setup can be better utilised. The calculation of the sky coverage (exposure function) is not necessary in this analysis. The performance of the method is tested by means of further Monte Carlo simulations. The new method performs similarly good or only marginally worse than established methods in case of ideal measurement conditions. However, the simulation of certain experimental effects can cause substantial misestimations of the dipole parameters by the established methods, whereas the new method produces no systematic deviations. The invulnerability to certain effects offers additional advantages, as certain data selection cuts become dispensable. (orig.)
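The classical building block behind dipole searches of this kind is a first-harmonic (Rayleigh) analysis of the arrival directions in one angular coordinate. A minimal sketch under idealized assumptions (uniform exposure, a single angle, no experimental effects — precisely the complications the thesis addresses):

```python
import numpy as np

def first_harmonic(alpha):
    """First-harmonic (Rayleigh) analysis of arrival angles alpha (radians):
    returns the amplitude r and phase phi of the event-rate modulation."""
    a = 2.0 * np.mean(np.cos(alpha))
    b = 2.0 * np.mean(np.sin(alpha))
    return np.hypot(a, b), np.arctan2(b, a)

rng = np.random.default_rng(2)
# Synthetic sky: flux ~ 1 + d*cos(alpha - phi0), sampled by rejection.
d, phi0, n = 0.1, 1.0, 200_000
alpha = rng.uniform(0.0, 2.0 * np.pi, 4 * n)
keep = rng.uniform(0.0, 1.0 + d, alpha.size) < 1.0 + d * np.cos(alpha - phi0)
alpha = alpha[keep][:n]
r, phi = first_harmonic(alpha)   # r estimates d, phi estimates phi0
```

For a flux of the form 1 + d·cos(α − φ0), the expected first-harmonic amplitude is exactly d, which is why spurious modulations of the exposure bias the established methods discussed above.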

  13. Use of experimental design to optimize a triple-potential waveform to develop a method for the determination of streptomycin and dihydrostreptomycin in pharmaceutical veterinary dosage forms by HPLC-PAD.

    Science.gov (United States)

    Martínez-Mejía, Mónica J; Rath, Susanne

    2015-02-01

    An HPLC-PAD method using a gold working electrode and a triple-potential waveform was developed for the simultaneous determination of streptomycin and dihydrostreptomycin in veterinary drugs. Glucose was used as the internal standard, and the triple-potential waveform was optimized using a factorial and a central composite design. The optimum potentials were as follows: amperometric detection, E1=-0.15V; cleaning potential, E2=+0.85V; and reactivation of the electrode surface, E3=-0.65V. For the separation of the aminoglycosides and the internal standard of glucose, a CarboPac™ PA1 anion exchange column was used together with a mobile phase consisting of a 0.070 mol L(-1) sodium hydroxide solution in the isocratic elution mode with a flow rate of 0.8 mL min(-1). The method was validated and applied to the determination of streptomycin and dihydrostreptomycin in veterinary formulations (injection, suspension and ointment) without any previous sample pretreatment, except for the ointments, for which a liquid-liquid extraction was required before HPLC-PAD analysis. The method showed adequate selectivity, with an accuracy of 98-107% and a precision of less than 3.9%.

  14. Quantitative Monitoring for Enhanced Geothermal Systems Using Double-Difference Waveform Inversion with Spatially-Variant Total-Variation Regularization

    Energy Technology Data Exchange (ETDEWEB)

    Lin, Youzuo [Los Alamos National Laboratory; Huang, Lianjie [Los Alamos National Laboratory; Zhang, Zhigang [Los Alamos National Laboratory

    2011-01-01

    Double-difference waveform inversion is a promising tool for quantitative monitoring of enhanced geothermal systems (EGS). The method uses time-lapse seismic data to jointly invert for reservoir changes. Due to the ill-posedness of waveform inversion, it is a great challenge to obtain reservoir changes accurately and efficiently, particularly when using time-lapse seismic reflection data. To improve the reconstruction, we incorporate a spatially-variant total-variation regularization scheme into double-difference waveform inversion to improve the inversion accuracy and robustness. The new scheme employs different regularization parameters in different regions of the model to obtain an optimal regularization in each area. We compare the results obtained using a spatially-variant parameter with those obtained using a constant regularization parameter. With the spatially-variant scheme, the target monitoring regions are well reconstructed and the image noise is significantly reduced outside the monitoring regions. Our numerical examples demonstrate that the spatially-variant total-variation regularization scheme provides the flexibility to regularize local regions based on a priori spatial information without increasing the computational cost or memory requirements.
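The core idea of a per-region regularization weight can be illustrated on a much simpler problem than waveform inversion: denoising with a smoothed total-variation penalty whose weight varies per pixel. Everything below (the toy image, the weight map, the gradient-descent solver) is an illustrative sketch, not the authors' inversion code:

```python
import numpy as np

def sv_tv_denoise(y, lam, n_iter=300, step=0.05, eps=1e-2):
    """Gradient descent on 0.5*||x - y||^2 + sum_ij lam_ij*sqrt(|grad x|_ij^2 + eps),
    a smoothed total-variation penalty whose weight lam varies per pixel."""
    x = y.copy()
    for _ in range(n_iter):
        gx = np.diff(x, axis=0, append=x[-1:])       # forward differences
        gy = np.diff(x, axis=1, append=x[:, -1:])    # (zero at the far edge)
        mag = np.sqrt(gx ** 2 + gy ** 2 + eps)
        px, py = lam * gx / mag, lam * gy / mag
        # Adjoint of the forward-difference operators (negative divergence).
        dtx = np.vstack([-px[:1], px[:-1] - px[1:]])
        dty = np.hstack([-py[:, :1], py[:, :-1] - py[:, 1:]])
        x = x - step * ((x - y) + dtx + dty)
    return x

rng = np.random.default_rng(3)
clean = np.zeros((64, 64)); clean[20:44, 20:44] = 1.0
noisy = clean + 0.3 * rng.standard_normal(clean.shape)
# Weaker regularization inside the "monitoring" region, stronger outside.
lam = np.full(clean.shape, 0.15); lam[16:48, 16:48] = 0.05
x_hat = sv_tv_denoise(noisy, lam)
```

The per-pixel `lam` map plays the role of the spatially-variant regularization parameter: detail is preserved where it is small and noise is suppressed where it is large.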

  15. Retina Lesion and Microaneurysm Segmentation using Morphological Reconstruction Methods with Ground-Truth Data

    Energy Technology Data Exchange (ETDEWEB)

    Karnowski, Thomas Paul [ORNL; Govindaswamy, Priya [Oak Ridge National Laboratory (ORNL); Tobin Jr, Kenneth William [ORNL; Chaum, Edward [University of Tennessee, Knoxville (UTK); Abramoff, M.D. [University of Iowa

    2008-01-01

    In this work we report on a method for lesion segmentation based on the morphological reconstruction methods of Sbeh et al. We adapt the method to include segmentation of dark lesions with a given vasculature segmentation. The segmentation is performed at a variety of scales determined using ground-truth data. Since the method tends to over-segment imagery, ground-truth data were used to create post-processing filters that separate nuisance blobs from true lesions. A sensitivity and specificity of 90% for the classification of blobs into nuisance and actual lesions was achieved on two data sets of 86 and 1296 images.

  16. Retina Lesion and Microaneurysm Segmentation using Morphological Reconstruction Methods with Ground-Truth Data

    Energy Technology Data Exchange (ETDEWEB)

    Karnowski, Thomas Paul [ORNL; Tobin Jr, Kenneth William [ORNL; Chaum, Edward [ORNL; Muthusamy Govindasamy, Vijaya Priya [ORNL

    2009-09-01

    In this work we report on a method for lesion segmentation based on the morphological reconstruction methods of Sbeh et al. We adapt the method to include segmentation of dark lesions with a given vasculature segmentation. The segmentation is performed at a variety of scales determined using ground-truth data. Since the method tends to over-segment imagery, ground-truth data were used to create post-processing filters that separate nuisance blobs from true lesions. A sensitivity and specificity of 90% for the classification of blobs into nuisance and actual lesions was achieved on two data sets of 86 and 1296 images.

  17. Stochastic geometrical model and Monte Carlo optimization methods for building reconstruction from InSAR data

    Science.gov (United States)

    Zhang, Yue; Sun, Xian; Thiele, Antje; Hinz, Stefan

    2015-10-01

    Synthetic aperture radar (SAR) systems, such as TanDEM-X, TerraSAR-X and Cosmo-SkyMed, acquire imagery with high spatial resolution (HR), making it possible to observe objects in urban areas in high detail. In this paper, we propose a new top-down framework for three-dimensional (3D) building reconstruction from HR interferometric SAR (InSAR) data. Unlike most previously proposed methods, we adopt a generative model and formulate the reconstruction as maximum a posteriori (MAP) estimation solved through Monte Carlo methods. This strategy is motivated by the fact that the noisiness of SAR images calls for a thorough prior model to better cope with the inherent amplitude and phase fluctuations. In the reconstruction process, according to the radar configuration and the building geometry, a 3D building hypothesis is mapped to the SAR image plane and decomposed into feature regions such as layover, corner line, and shadow. The statistical properties of the intensity, interferometric phase and coherence of each region are then explored and included as region terms. Roofs are not considered directly, as in most cases they are mixed with walls into the layover area. When estimating the similarity between the building hypothesis and the real data, the prior and the region terms are taken into consideration, together with an edge term related to the contours of the layover and corner line. In the optimization step, special transition kernels are designed in order to achieve convergent reconstruction outputs and avoid local extrema. The proposed framework is evaluated on a TanDEM-X dataset and performs well for building reconstruction.

  18. A Reconstruction Method of Blood Flow Velocity in Left Ventricle Using Color Flow Ultrasound

    Directory of Open Access Journals (Sweden)

    Jaeseong Jang

    2015-01-01

    Full Text Available Vortex flow imaging is a relatively new medical imaging method for the dynamic visualization of intracardiac blood flow, a potentially useful index of cardiac dysfunction. A reconstruction method is proposed here to quantify the distribution of blood flow velocity fields inside the left ventricle from color flow images compiled from ultrasound measurements. In this paper, a 2D incompressible Navier-Stokes equation with a mass source term is proposed to utilize the measurable color flow ultrasound data in a plane along with the moving boundary condition. The proposed model reflects out-of-plane blood flows on the imaging plane through the mass source term. The boundary conditions to solve the system of equations are derived from the dimensions of the ventricle extracted from 2D echocardiography data. The performance of the proposed method is evaluated numerically using synthetic flow data acquired from simulating left ventricle flows. The numerical simulations show the feasibility and potential usefulness of the proposed method of reconstructing the intracardiac flow fields. Of particular note is the finding that the mass source term in the proposed model improves the reconstruction performance.

  19. Principal components analysis of Laplacian waveforms as a generic method for identifying ERP generator patterns: II. Adequacy of low-density estimates.

    Science.gov (United States)

    Kayser, Jürgen; Tenke, Craig E

    2006-02-01

    To evaluate the comparability of high- and low-density surface Laplacian estimates for determining ERP generator patterns of group data derived from a typical ERP sample size and paradigm, high-density ERP data (129 sites) recorded from 17 adults during tonal and phonetic oddball tasks were converted to a 10-20-system EEG montage (31 sites) using spherical spline interpolations. Current source density (CSD) waveforms were computed from the high- and low-density, but otherwise identical, ERPs, and correlated at corresponding locations. CSD data were submitted to separate covariance-based, unrestricted temporal PCAs (Varimax rotation of covariance loadings) to identify and effectively summarize temporally and spatially overlapping CSD components. Solutions were compared by correlating factor loadings and scores, and by plotting ANOVA F statistics derived from corresponding high- and low-resolution factor scores at representative sites. High- and low-density CSD waveforms, PCA solutions, and F statistics were remarkably similar, yielding correlations of .9 91.6%). Low-density surface Laplacian estimates were shown to be accurate approximations of high-density CSDs at these locations, which adequately and quite sufficiently summarized the group data. Moreover, reasonable approximations of many high-density scalp locations were obtained for group data from interpolations of low-density data. If group findings are the primary objective, as is typical in cognitive ERP research, low-resolution CSD topographies may be just as efficient, given the effective spatial smoothing when averaging across subjects and/or conditions. Conservative recommendations restricting surface Laplacians to high-density recordings may not be appropriate for all ERP research applications, and should be re-evaluated with objectives, costs and benefits in mind.
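The covariance-based temporal PCA step referred to above can be sketched with synthetic waveforms; the Varimax rotation is omitted here, so this is the unrotated decomposition only, with random bumps standing in for CSD components:

```python
import numpy as np

def temporal_pca(data, n_comp):
    """Covariance-based temporal PCA: rows are cases (subjects/conditions),
    columns are time points. Returns loadings (time x comp) and factor scores."""
    centered = data - data.mean(axis=0)
    cov = np.cov(centered, rowvar=False)
    evals, evecs = np.linalg.eigh(cov)
    order = np.argsort(evals)[::-1][:n_comp]
    loadings = evecs[:, order] * np.sqrt(evals[order])   # covariance loadings
    scores = centered @ evecs[:, order]
    return loadings, scores

rng = np.random.default_rng(5)
t = np.linspace(0, 1, 120)
# Two temporally overlapping "components" with random per-case amplitudes.
c1 = np.exp(-((t - 0.3) / 0.05) ** 2)
c2 = np.exp(-((t - 0.5) / 0.08) ** 2)
data = (rng.standard_normal((40, 1)) * c1 +
        rng.standard_normal((40, 1)) * c2 +
        0.02 * rng.standard_normal((40, 120)))
loadings, scores = temporal_pca(data, n_comp=2)
```

Two retained factors suffice here because the synthetic data are rank-2 plus noise; real CSD data require the factor-retention and rotation choices discussed in the paper.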

  20. STRS Compliant FPGA Waveform Development

    Science.gov (United States)

    Nappier, Jennifer; Downey, Joseph

    2008-01-01

    The Space Telecommunications Radio System (STRS) Architecture Standard describes a standard for NASA space software defined radios (SDRs). It provides a common framework that can be used to develop and operate a space SDR in a reconfigurable and reprogrammable manner. One goal of the STRS Architecture is to promote waveform reuse among multiple software defined radios. Many space domain waveforms are designed to run in the special signal processing (SSP) hardware. However, the STRS Architecture is currently incomplete in defining a standard for designing waveforms in the SSP hardware. Therefore, the STRS Architecture needs to be extended to encompass waveform development in the SSP hardware. A transmit waveform for space applications was developed to determine ways to extend the STRS Architecture to a field programmable gate array (FPGA). These extensions include a standard hardware abstraction layer for FPGAs and a standard interface between waveform functions running inside a FPGA. Current standards were researched and new standard interfaces were proposed. The implementation of the proposed standard interfaces on a laboratory breadboard SDR will be presented.

  1. Advanced Waveform Simulation for Seismic Monitoring

    Science.gov (United States)

    2008-09-01

    velocity model. The method separates the main arrivals of the regional waveform into 5 windows: Pnl (vertical and radial components), Rayleigh (vertical and...ranges out to 10°, including extensive observations of crustal thinning and thickening and various Pnl complexities. Broadband modeling in 1D, 2D...existing models perform in predicting the various regional phases, Rayleigh waves, Love waves, and Pnl waves. Previous events from this Basin-and-Range

  2. Reconstruction of 3D structure using stochastic methods: morphology and transport properties

    Science.gov (United States)

    Karsanina, Marina; Gerke, Kirill; Čapek, Pavel; Vasilyev, Roman; Korost, Dmitry; Skvortsova, Elena

    2013-04-01

    One of the main factors governing numerous flow phenomena in rocks, soils and other porous media, including fluid and solute movement, is the pore structure, e.g., pore sizes and their connectivity. Numerous numerical methods have been developed to quantify single- and multi-phase flow in such media on the microscale. Among the most popular are: 1) a wide range of finite difference/element/volume solutions of the Navier-Stokes equations and its simplifications; 2) the lattice-Boltzmann method; and 3) pore-network models, among others. Each method has advantages and shortcomings, so different research teams usually utilize more than one, depending on the study case. Recent progress in 3D imaging of internal structure, e.g., X-ray tomography, FIB-SEM and confocal microscopy, has made it possible to obtain digitized input pore parameters for such models; however, a trade-off between resolution and sample size is usually unavoidable. There are situations when only standard two-dimensional information on the porous structure is available, owing to the high cost of tomography or to resolution limitations. However, physical modeling on the microscale requires 3D information. There are three main approaches to reconstructing porous media (using 2D cut(s) or other limited information/properties): 1) statistical methods (correlation functions and simulated annealing, multi-point statistics, entropy methods); 2) sequential methods (sphere or other granular packs); and 3) morphological methods. Stochastic reconstructions using correlation functions possess an important advantage: they provide a statistical description of the structure, which is known to be related to all physical properties. In addition, this method is more flexible for other applications in characterizing porous media. Taking different 3D scans of natural and artificial porous materials (sandstones, soils, shales, ceramics), we choose some 2D cut/s as sources of input correlation functions.
Based on different types of correlation functions
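The simplest of the correlation functions mentioned, the two-point probability S2, can be computed for a binary 2D cut via FFT autocorrelation. A generic sketch (periodic boundaries assumed; this is the input statistic, not the authors' full annealing reconstruction):

```python
import numpy as np

def two_point_correlation(img):
    """Two-point probability S2 of a binary (0/1) image via FFT
    autocorrelation with periodic boundaries: S2(0) equals the phase
    fraction phi, and S2 decays toward phi**2 for uncorrelated lags."""
    f = np.fft.fftn(img)
    return np.fft.ifftn(f * np.conj(f)).real / img.size

rng = np.random.default_rng(4)
img = (rng.random((128, 128)) < 0.3).astype(float)   # ~30% "pore" phase
s2 = two_point_correlation(img)
phi = img.mean()
# s2[0, 0] equals phi exactly; distant lags sit near phi**2 for this
# spatially uncorrelated field.
```

In a stochastic reconstruction, S2 measured on a 2D cut like this serves as the target statistic that the simulated-annealing procedure drives a 3D trial structure to match.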

  3. An Optimized Method for PDEs-Based Geometric Modeling and Reconstruction

    Directory of Open Access Journals (Sweden)

    Chuanjun Wang

    2012-09-01

    Full Text Available This study presents an optimized method for efficient geometric modeling and reconstruction using Partial Differential Equations (PDEs). Based on the correspondence between the analytic solution of the Bloor-Wilson PDE and a Fourier series, we transform the problem of model selection for PDE-based geometric modeling into the problem of selecting significant frequencies from the Fourier series. Using a significance analysis of the Fourier series, a model-selection and iterative surface-fitting algorithm is applied to address the problems of overfitting and underfitting in PDE-based geometric modeling and reconstruction. Simulations are conducted on both a computer-generated geometric surface and laser-scanned 3D face data. Experimental results show the merits of the proposed method.

  4. Use of maximum entropy method with parallel processing machine. [for x-ray object image reconstruction

    Science.gov (United States)

    Yin, Lo I.; Bielefeld, Michael J.

    1987-01-01

    The maximum entropy method (MEM) and balanced correlation method were used to reconstruct the images of low-intensity X-ray objects obtained experimentally by means of a uniformly redundant array coded aperture system. The reconstructed images from MEM are clearly superior. However, the MEM algorithm is computationally more time-consuming because of its iterative nature. On the other hand, both the inherently two-dimensional character of images and the iterative computations of MEM suggest the use of parallel processing machines. Accordingly, computations were carried out on the massively parallel processor at Goddard Space Flight Center as well as on the serial processing machine VAX 8600, and the results are compared.

  5. A 3D terrain reconstruction method of stereo vision based quadruped robot navigation system

    Science.gov (United States)

    Ge, Zhuo; Zhu, Ying; Liang, Guanhao

    2017-01-01

    To provide 3D environment information for the quadruped robot autonomous navigation system while walking over rough terrain, a novel 3D terrain reconstruction method based on stereo vision is presented. To address the problems that images collected by stereo sensors contain large regions of similar grayscale and that image matching has poor real-time performance, the watershed algorithm and the fuzzy c-means clustering algorithm are combined for contour extraction. To reduce mismatching, dual constraints of region matching and pixel matching are established for matching optimization. Using the matched edge pixel pairs, the 3D coordinates are estimated according to the binocular stereo vision imaging model. Experimental results show that the proposed method yields a high stereo matching ratio and reconstructs the 3D scene quickly and efficiently.
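For a rectified stereo pair, the binocular imaging model used in the final step reduces to triangulation by disparity. A minimal sketch, where the focal length and baseline values are illustrative assumptions:

```python
import numpy as np

def triangulate(xl, xr, y, f=700.0, B=0.12):
    """Pinhole stereo model: depth Z = f*B/disparity for a rectified pair.
    f: focal length in pixels, B: baseline in metres (illustrative values)."""
    d = xl - xr                  # disparity of matched edge pixel pairs
    Z = f * B / d
    X = xl * Z / f
    Y = y * Z / f
    return np.stack([X, Y, Z], axis=-1)

# Two matched edge-pixel pairs (left x, right x, common row).
pts = triangulate(np.array([320.0, 100.0]),
                  np.array([300.0, 90.0]),
                  np.array([240.0, 50.0]))
```

Larger disparities map to nearer points, which is why accurate matching of edge pixels dominates the reconstruction quality.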

  6. Reconstruction of three-dimensional grain structure in polycrystalline iron via an interactive segmentation method

    Science.gov (United States)

    Feng, Min-nan; Wang, Yu-cong; Wang, Hao; Liu, Guo-quan; Xue, Wei-hua

    2017-03-01

    Using a total of 297 segmented sections, we reconstructed the three-dimensional (3D) structure of pure iron and obtained the largest dataset of 16,254 complete 3D grains reported to date. The mean values of equivalent sphere radius and face number of pure iron were observed to be consistent with those of Monte Carlo simulated grains, phase-field simulated grains, Ti-alloy grains, and Ni-based superalloy grains. In this work, by finding a balance between automatic methods and manual refinement, we developed an interactive segmentation method to segment serial sections accurately in the reconstruction of the 3D microstructure; this approach saves time as well as substantially eliminating errors. The segmentation process comprises four operations: image preprocessing, breakpoint detection based on mathematical morphology analysis, optimized automatic connection of the breakpoints, and manual refinement by artificial evaluation.

  7. Missing texture reconstruction method based on error reduction algorithm using Fourier transform magnitude estimation scheme.

    Science.gov (United States)

    Ogawa, Takahiro; Haseyama, Miki

    2013-03-01

    A missing texture reconstruction method based on an error reduction (ER) algorithm, including a novel estimation scheme for Fourier transform magnitudes, is presented in this brief. In our method, the Fourier transform magnitude is estimated for a target patch including missing areas, and the missing intensities are estimated by retrieving the phase based on the ER algorithm. Specifically, by monitoring errors converged in the ER algorithm, known patches whose Fourier transform magnitudes are similar to that of the target patch are selected from the target image. Then, the Fourier transform magnitude of the target patch is estimated from those of the selected known patches and their corresponding errors. Consequently, by using the ER algorithm, we can estimate both the Fourier transform magnitudes and phases to reconstruct the missing areas.
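The ER iteration itself alternates between enforcing the measured Fourier magnitude and the spatial-domain constraints. A generic Gerchberg-Saxton-style sketch with a support/positivity constraint; the paper's patch-selection and magnitude-estimation steps are not reproduced here:

```python
import numpy as np

def error_reduction(magnitude, support, n_iter=200, seed=0):
    """ER sketch: alternately impose the known Fourier magnitude and the
    spatial constraints (non-negativity, zero outside the support)."""
    rng = np.random.default_rng(seed)
    x = rng.random(magnitude.shape) * support
    for _ in range(n_iter):
        X = np.fft.fft2(x)
        X = magnitude * np.exp(1j * np.angle(X))   # keep phase, fix magnitude
        x = np.real(np.fft.ifft2(X))
        x = np.clip(x, 0, None) * support          # spatial-domain projection
    return x

# Toy target: a bright square whose Fourier magnitude is "measured".
true = np.zeros((32, 32)); true[8:16, 8:16] = 1.0
support = np.zeros((32, 32)); support[4:20, 4:20] = 1.0
mag = np.abs(np.fft.fft2(true))
rec = error_reduction(mag, support)
```

Monitoring the residual between the imposed and computed magnitudes over iterations is exactly the error signal the paper uses to rank candidate known patches.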

  8. A reconstruction method of intra-ventricular blood flow using color flow ultrasound: a simulation study

    Science.gov (United States)

    Jang, Jaeseong; Ahn, Chi Young; Jeon, Kiwan; Choi, Jung-il; Lee, Changhoon; Seo, Jin Keun

    2015-03-01

    A reconstruction method is proposed here to quantify the distribution of blood flow velocity fields inside the left ventricle from color Doppler echocardiography measurements. From the 3D incompressible Navier-Stokes equation, a 2D incompressible Navier-Stokes equation with a mass source term is derived to utilize the measurable color flow ultrasound data in a plane, along with the moving boundary condition. The proposed model reflects out-of-plane blood flow on the imaging plane through the mass source term. To demonstrate the feasibility of the proposed method, we performed numerical simulations of the forward problem and numerical analysis of the reconstruction method. First, we constructed a 3D moving LV region having a specific stroke volume. To obtain synthetic intra-ventricular flows, we numerically solved the forward Navier-Stokes problem inside the 3D moving LV, computed the 3D intra-ventricular velocity fields, projected them onto the imaging plane, and took the inner product of the projected 2D velocity fields with the scanline directions to produce synthetic scanline-directional projected velocities at each position. The proposed method used these 2D synthetic projected velocity data to reconstruct the LV blood flow. Computing the difference between the synthetic and reconstructed flow fields, we obtained averaged point-wise errors of 0.06 m/s and 0.02 m/s for the u- and v-components, respectively.

  9. An efficient reconstruction method for bioluminescence tomography based on two-step iterative shrinkage approach

    Science.gov (United States)

    Guo, Wei; Jia, Kebin; Tian, Jie; Han, Dong; Liu, Xueyan; Wu, Ping; Feng, Jinchao; Yang, Xin

    2012-03-01

    Among many molecular imaging modalities, bioluminescence tomography (BLT) is an important optical molecular imaging modality. Due to its unique advantages in specificity, sensitivity, cost-effectiveness and low background noise, BLT is widely studied for live small animal imaging. Since only the photon distribution over the surface is measurable and photon propagation within biological tissue is highly diffusive, BLT is often an ill-posed problem and may admit multiple solutions and aberrant reconstructions in the presence of measurement noise and optical parameter mismatches. In many practical BLT applications, such as early detection of tumors, the volumes of the light sources are very small compared with the whole body. Therefore, L1-norm sparsity regularization has been used to take advantage of this sparsity prior and alleviate the ill-posedness of the problem. The iterative shrinkage (IST) algorithm is an important research achievement in the field of compressed sensing and is widely applied to sparse signal reconstruction. However, the convergence rate of the IST algorithm depends heavily on the linear operator; when the problem is ill-posed, it becomes very slow. In this paper, we present a sparsity regularization reconstruction method for BLT based on a two-step iterated shrinkage approach. By employing a two-step strategy of iterative reweighted shrinkage (IRS) to improve IST, the proposed method shows a faster convergence rate and better adaptability for BLT. Simulation experiments with a mouse atlas were conducted to evaluate the performance of the proposed method, which obtains a stable and comparable reconstruction solution with fewer iterations.
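The underlying IST step that the two-step scheme accelerates is a gradient step followed by soft thresholding. A hedged one-step IST sketch on a toy sparse recovery problem (the paper's two-step variant additionally combines the previous two iterates; the problem sizes and regularization weight below are assumptions):

```python
import numpy as np

def soft(x, t):
    """Soft-thresholding (shrinkage) operator for the L1 penalty."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ist(A, y, lam=0.05, n_iter=300):
    """Plain IST for min_x 0.5*||Ax - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft(x - A.T @ (A @ x - y) / L, lam / L)
    return x

# Toy problem: a sparse "source" recovered from few linear measurements.
rng = np.random.default_rng(2)
A = rng.standard_normal((60, 128)) / np.sqrt(60)
x_true = np.zeros(128)
x_true[[5, 40, 90]] = [1.0, -0.8, 0.6]
y = A @ x_true
x_hat = ist(A, y)
```

With step size 1/L the iteration monotonically decreases the objective, but L is set by the linear operator, which is why ill-conditioned BLT systems motivate the two-step acceleration.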

  10. Four-dimensional cone beam CT reconstruction and enhancement using a temporal nonlocal means method

    Energy Technology Data Exchange (ETDEWEB)

    Jia Xun; Tian Zhen; Lou Yifei; Sonke, Jan-Jakob; Jiang, Steve B. [Center for Advanced Radiotherapy Technologies and Department of Radiation Medicine and Applied Sciences, University of California San Diego, La Jolla, California 92037 (United States); School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, Georgia 30318 (United States); Department of Radiation Oncology, Netherlands Cancer Institute-Antoni van Leeuwenhoek Hospital, Plesmanlaan 121, 1066 CX Amsterdam (Netherlands); Center for Advanced Radiotherapy Technologies and Department of Radiation Medicine and Applied Sciences, University of California San Diego, La Jolla, California 92037 (United States)

    2012-09-15

    Purpose: Four-dimensional cone beam computed tomography (4D-CBCT) has been developed to provide respiratory phase-resolved volumetric imaging in image guided radiation therapy. Conventionally, it is reconstructed by first sorting the x-ray projections into multiple respiratory phase bins according to a breathing signal extracted either from the projection images or from external surrogates, and then reconstructing a 3D CBCT image in each phase bin independently using the FDK algorithm. This method requires an adequate number of projections for each phase, which can be achieved using a slow gantry rotation or multiple gantry rotations. An inadequate number of projections in each phase bin results in low-quality 4D-CBCT images with obvious streaking artifacts. 4D-CBCT images at different breathing phases share a great deal of redundant information, because they represent the same anatomy captured at slightly different temporal points. Taking this redundancy along the temporal dimension into account can in principle facilitate the reconstruction when the number of projection images is inadequate. In this work, the authors propose two novel 4D-CBCT algorithms: an iterative reconstruction algorithm and an enhancement algorithm, utilizing a temporal nonlocal means (TNLM) method. Methods: The authors define a TNLM energy term for a given set of 4D-CBCT images. Minimization of this term favors those 4D-CBCT images in which any anatomical feature at one spatial point at one phase can be found at a nearby spatial point at neighboring phases. 4D-CBCT reconstruction is achieved by minimizing a total energy containing a data fidelity term and the TNLM energy term. As for image enhancement, 4D-CBCT images generated by the FDK algorithm are enhanced by minimizing the TNLM function while keeping the enhanced images close to the FDK results. A forward-backward splitting algorithm and a Gauss-Jacobi iteration method are employed to solve the problems. The algorithms' implementation on

  11. Generation of correlated finite alphabet waveforms using gaussian random variables

    KAUST Repository

    Ahmed, Sajid

    2016-01-13

    Various examples of methods and systems are provided for generation of correlated finite alphabet waveforms using Gaussian random variables in, e.g., radar and communication applications. In one example, a method includes mapping an input signal comprising Gaussian random variables (RVs) onto finite-alphabet non-constant-envelope (FANCE) symbols using a predetermined mapping function, and transmitting FANCE waveforms through a uniform linear array of antenna elements to obtain a corresponding beampattern. The FANCE waveforms can be based upon the mapping of the Gaussian RVs onto the FANCE symbols. In another example, a system includes a memory unit that can store a plurality of digital bit streams corresponding to FANCE symbols and a front end unit that can transmit FANCE waveforms through a uniform linear array of antenna elements to obtain a corresponding beampattern. The system can include a processing unit that can encode the input signal and/or determine the mapping function.
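The core idea of shaping correlated Gaussian RVs into finite-alphabet symbols can be sketched for the simplest (binary) alphabet, where the arcsine law relates the symbol correlation to the underlying Gaussian correlation. All values here are illustrative assumptions; the mapping functions and beampattern design of the record are not reproduced:

```python
import numpy as np

# Correlated Gaussian RVs with a target correlation rho.
rng = np.random.default_rng(3)
rho = 0.6
cov = np.array([[1.0, rho], [rho, 1.0]])
g = rng.multivariate_normal([0.0, 0.0], cov, size=200000)

# Map onto a binary (+1/-1) non-constant-phase alphabet by taking signs.
s = np.sign(g)

# Arcsine law: E[s1*s2] = (2/pi) * arcsin(rho) for sign-quantized Gaussians.
emp = np.mean(s[:, 0] * s[:, 1])
pred = 2 / np.pi * np.arcsin(rho)
```

Designing the Gaussian covariance so that the post-mapping alphabet correlation matches a desired value is the essence of such waveform generation schemes.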

  12. Space-Varying Iterative Restoration of Diffuse Optical Tomograms Reconstructed by the Photon Average Trajectories Method

    Directory of Open Access Journals (Sweden)

    Vladimir V. Lyubimov

    2007-01-01

    Full Text Available The possibility of improving the spatial resolution of diffuse optical tomograms reconstructed by the photon average trajectories (PAT method is substantiated. The PAT method recently presented by us is based on a concept of an average statistical trajectory for transfer of light energy, the photon average trajectory (PAT. The inverse problem of diffuse optical tomography is reduced to a solution of an integral equation with integration along a conditional PAT. As a result, the conventional algorithms of projection computed tomography can be used for fast reconstruction of diffuse optical images. The shortcoming of the PAT method is that it reconstructs the images blurred due to averaging over spatial distributions of photons which form the signal measured by the receiver. To improve the resolution, we apply a spatially variant blur model based on an interpolation of the spatially invariant point spread functions simulated for the different small subregions of the image domain. Two iterative algorithms for solving a system of linear algebraic equations, the conjugate gradient algorithm for least squares problem and the modified residual norm steepest descent algorithm, are used for deblurring. It is shown that a 27% gain in spatial resolution can be obtained.

  13. Three-dimensional reconstruction methods in Single Particle Analysis from transmission electron microscopy data.

    Science.gov (United States)

    Carazo, J M; Sorzano, C O S; Otón, J; Marabini, R; Vargas, J

    2015-09-01

    The Transmission Electron Microscope provides two-dimensional (2D) images of the specimens under study. However, the architecture of these specimens is defined in a three-dimensional (3D) coordinate space, in volumetric terms, making the direct microscope output somewhat "short" in terms of dimensionality. This situation has prompted the development of methods to quantitatively estimate 3D volumes from sets of 2D images, which are usually referred to as "three-dimensional reconstruction methods". These 3D reconstruction methods build on four considerations: (1) the relationship between the 2D images and the 3D volume must be of a particularly simple type, (2) many 2D images are needed to gain 3D volumetric information, (3) the 2D images and the 3D volume have to be in the same coordinate reference frame and (4), in practical terms, the reconstructed 3D volume will only be an approximation to the original 3D volume which gave rise to the 2D projections. In this work we adopt a quite general view, trying to address a large community of interested readers, although some sections are particularly devoted to the 3D analysis of isolated macromolecular complexes in the application area normally referred to as Single Particle Analysis (SPA).

  14. A DATA DRIVEN METHOD FOR BUILDING RECONSTRUCTION FROM LiDAR POINT CLOUDS

    Directory of Open Access Journals (Sweden)

    M. Sajadian

    2014-10-01

    Full Text Available Airborne laser scanning, commonly referred to as LiDAR, is a superior technology for three-dimensional data acquisition from Earth's surface with high speed and density. Building reconstruction is one of the main applications of LiDAR systems and is the focus of this study. For 3D reconstruction of buildings, the building points must first be separated from other points such as ground and vegetation. In this paper, a multi-agent strategy is proposed for simultaneous extraction and segmentation of buildings from LiDAR point clouds. Height values, number of returned pulses, length of triangles, direction of normal vectors, and area are the five criteria utilized in this step. Next, the building edge points are detected using a new method named "Grid Erosion". A RANSAC-based technique is employed for edge line extraction, and regularization constraints are applied to obtain the final lines. Finally, by modelling the roofs and walls, the 3D building model is reconstructed. The results indicate that the proposed method can successfully extract buildings from LiDAR data and generate building models automatically. A qualitative and quantitative assessment of the proposed method is then provided.

  15. Reconstruction methods for sound visualization based on acousto-optic tomography

    DEFF Research Database (Denmark)

    Torras Rosell, Antoni; Lylloff, Oliver; Barrera Figueroa, Salvador;

    2013-01-01

    The visualization of acoustic fields using acousto-optic tomography has recently proved to yield satisfactory results in the audible frequency range. The current implementation of this visualization technique uses a laser Doppler vibrometer (LDV) to measure the acousto-optic effect, that is...... tomographic techniques. The filtered back projection (FBP) method is the most popular reconstruction algorithm used for tomography in many fields of science. The present study takes the performance of the FBP method in sound visualization as a reference and investigates the use of alternative methods commonly...

  16. Progress toward the development and testing of source reconstruction methods for NIF neutron imaging.

    Science.gov (United States)

    Loomis, E N; Grim, G P; Wilde, C; Wilson, D C; Morgan, G; Wilke, M; Tregillis, I; Merrill, F; Clark, D; Finch, J; Fittinghoff, D; Bower, D

    2010-10-01

    Development of analysis techniques for neutron imaging at the National Ignition Facility is an important and difficult task for the detailed understanding of high-neutron-yield inertial confinement fusion implosions. Once developed, these methods must provide accurate images of the hot and cold fuels so that information about the implosion, such as symmetry and areal density, can be extracted. One method under development involves the numerical inversion of the pinhole image using knowledge of neutron transport through the pinhole aperture from Monte Carlo simulations. In this article we present results of source reconstructions based on simulated images that test the method's effectiveness with regard to pinhole misalignment.

  17. Fast Prediction and Evaluation of Gravitational Waveforms Using Surrogate Models

    Directory of Open Access Journals (Sweden)

    Scott E. Field

    2014-07-01

    Full Text Available We propose a solution to the problem of quickly and accurately predicting gravitational waveforms within any given physical model. The method is relevant for both real-time applications and more traditional scenarios where the generation of waveforms using standard methods can be prohibitively expensive. Our approach is based on three offline steps resulting in an accurate reduced order model in both parameter and physical dimensions that can be used as a surrogate for the true or fiducial waveform family. First, a set of m parameter values is determined using a greedy algorithm, from which a reduced basis representation is constructed. Second, these m parameters induce the selection of m time values for interpolating a waveform time series using an empirical interpolant that is built for the fiducial waveform family. Third, a fit in the parameter dimension is performed for the waveform’s value at each of these m times. The cost of predicting L waveform time samples for a generic parameter choice is of order O(mL + mc_{fit}) online operations, where c_{fit} denotes the fitting function operation count and, typically, m≪L. The result is a compact, computationally efficient, and accurate surrogate model that retains the original physics of the fiducial waveform family while also being fast to evaluate. We generate accurate surrogate models for effective-one-body waveforms of nonspinning binary black hole coalescences with durations as long as 10^{5}M, mass ratios from 1 to 10, and for multiple spherical harmonic modes. We find that these surrogates are more than 3 orders of magnitude faster to evaluate as compared to the cost of generating effective-one-body waveforms in standard ways. Surrogate model building for other waveform families and models follows the same steps and has the same low computational online scaling cost. For expensive numerical simulations of binary black hole coalescences, we thus anticipate extremely large speedups in

  18. Algebraic reconstruction combined with the signal space separation method for the inverse magnetoencephalography problem with a dipole-quadrupole source

    Science.gov (United States)

    Nara, T.; Koiwa, K.; Takagi, S.; Oyama, D.; Uehara, G.

    2014-05-01

    This paper presents an algebraic reconstruction method for dipole-quadrupole sources using magnetoencephalography data. Compared to the conventional methods with the equivalent current dipoles source model, our method can more accurately reconstruct two close, oppositely directed sources. Numerical simulations show that two sources on both sides of the longitudinal fissure of cerebrum are stably estimated. The method is verified using a quadrupolar source phantom, which is composed of two isosceles-triangle-coils with parallel bases.

  19. Full Elastic Waveform Search Engine for Near Surface Imaging

    Science.gov (United States)

    Zhang, J.; Zhang, X.

    2014-12-01

    For processing land seismic data, the near-surface problem is often very complex and may severely affect our capability to image the subsurface. The current state-of-the-art technology for near-surface imaging is early-arrival waveform inversion, which solves an acoustic wave-equation problem. However, fitting land seismic data with an acoustic wavefield is sometimes invalid, and performing elastic waveform inversion is very time-consuming. Similar to a web search engine, we develop a full elastic waveform search engine that includes a large database of synthetic elastic waveforms accounting for a wide range of interval velocity models in the CMP domain. With each CMP gather of real data as a query, the search engine applies the Multiple-Randomized K-Dimensional (MRKD) tree method to find approximate best matches to the query in about a second. Interpolation of the velocity models at CMP positions creates 2D or 3D Vp, Vs, and density models for the near-surface area. The method does not return just one solution; it gives a series of best matches in a solution space, so the results can help us examine the resolution and nonuniqueness of the final solution. Further, this full waveform search method avoids the issues of initial-model dependence and cycle skipping that full waveform inversion struggles with.
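The retrieval step can be sketched as a nearest-neighbour query over a database of synthetic waveforms keyed by velocity models. This is toy data and a brute-force scan; the paper uses MRKD trees to serve the same query approximately in about a second:

```python
import numpy as np

# Toy database: each "velocity model" (here a single parameter per entry)
# generates one synthetic waveform of n_samples points.
rng = np.random.default_rng(4)
n_models, n_samples = 500, 64
velocities = rng.uniform(300.0, 1500.0, size=n_models)        # toy Vp values
database = np.sin(np.outer(velocities / 200.0, np.arange(n_samples)))

def best_matches(entry, db, k=5):
    """Indices of the k database waveforms closest to the query entry
    (brute-force L2 scan; a tree index would avoid the full sweep)."""
    d2 = np.sum((db - entry) ** 2, axis=1)
    return np.argsort(d2)[:k]

# Query with one of the stored waveforms as the "real CMP gather".
idx = best_matches(database[123], database)
```

Returning the k best matches rather than a single answer is what lets the engine expose the nonuniqueness of the near-surface solution.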

  20. Validation of a laboratory method for evaluating dynamic properties of reconstructed equine racetrack surfaces.

    Directory of Open Access Journals (Sweden)

    Jacob J Setterbo

    Full Text Available BACKGROUND: Racetrack surface is a risk factor for racehorse injuries and fatalities. Current research indicates that race surface mechanical properties may be influenced by material composition, moisture content, temperature, and maintenance. Race surface mechanical testing in a controlled laboratory setting would allow for objective evaluation of dynamic surface properties and the factors that affect surface behavior. OBJECTIVE: To develop a method for reconstruction of race surfaces in the laboratory and validate the method by comparison with racetrack measurements of dynamic surface properties. METHODS: Track-testing device (TTD) impact tests were conducted to simulate equine hoof impact on dirt and synthetic race surfaces; tests were performed both in situ (racetrack) and using laboratory reconstructions of harvested surface materials. Clegg Hammer in situ measurements were used to guide surface reconstruction in the laboratory. Dynamic surface properties were compared between in situ and laboratory settings. Relationships between racetrack TTD and Clegg Hammer measurements were analyzed using stepwise multiple linear regression. RESULTS: Most dynamic surface property setting differences (racetrack-laboratory) were small relative to surface material type differences (dirt-synthetic). Clegg Hammer measurements were more strongly correlated with TTD measurements on the synthetic surface than on the dirt surface. On the dirt surface, Clegg Hammer decelerations were negatively correlated with TTD forces. CONCLUSIONS: Laboratory reconstruction of racetrack surfaces guided by Clegg Hammer measurements yielded TTD impact measurements similar to in situ values. The negative correlation between TTD and Clegg Hammer measurements confirms the importance of instrument mass when drawing conclusions from testing results. 
Lighter impact devices may be less appropriate for assessing dynamic surface properties compared to testing equipment designed to simulate hoof

  1. LISA parameter estimation using numerical merger waveforms

    Energy Technology Data Exchange (ETDEWEB)

    Thorpe, J I; McWilliams, S T; Kelly, B J; Fahey, R P; Arnaud, K; Baker, J G, E-mail: James.I.Thorpe@nasa.go [NASA Goddard Space Flight Center, 8800 Greenbelt Rd, Greenbelt, MD 20771 (United States)

    2009-05-07

    Recent advances in numerical relativity provide a detailed description of the waveforms of coalescing massive black hole binaries (MBHBs), expected to be the strongest detectable LISA sources. We present a preliminary study of LISA's sensitivity to MBHB parameters using a hybrid numerical/analytic waveform for equal-mass, non-spinning holes. The Synthetic LISA software package is used to simulate the instrument response, and the Fisher information matrix method is used to estimate errors in the parameters. Initial results indicate that inclusion of the merger signal can significantly improve the precision of some parameter estimates. For example, the median parameter errors for an ensemble of systems with total redshifted mass of 10^6 M_☉ at a redshift of z ≈ 1 were found to decrease by a factor of slightly more than two for signals with merger as compared to signals truncated at the Schwarzschild ISCO.

  2. LISA parameter estimation using numerical merger waveforms

    CERN Document Server

    Thorpe, J I; Kelly, B J; Fahey, R P; Arnaud, K; Baker, J G

    2008-01-01

    Recent advances in numerical relativity provide a detailed description of the waveforms of coalescing massive black hole binaries (MBHBs), expected to be the strongest detectable LISA sources. We present a preliminary study of LISA's sensitivity to MBHB parameters using a hybrid numerical/analytic waveform for equal-mass, non-spinning holes. The Synthetic LISA software package is used to simulate the instrument response, and the Fisher information matrix method is used to estimate errors in the parameters. Initial results indicate that inclusion of the merger signal can significantly improve the precision of some parameter estimates. For example, the median parameter errors for an ensemble of systems with total redshifted mass of one million solar masses at a redshift of one were found to decrease by a factor of slightly more than two for signals with merger as compared to signals truncated at the Schwarzschild ISCO.

  3. Reconstruction of RHESSI Solar Flare Images with a Forward Fitting Method

    Science.gov (United States)

    Aschwanden, Markus J.; Schmahl, Ed; RHESSI Team

    2002-11-01

    We describe a forward-fitting method that has been developed to reconstruct hard X-ray images of solar flares from the Ramaty High-Energy Solar Spectroscopic Imager (RHESSI), a Fourier imager with rotation-modulated collimators that was launched on 5 February 2002. The forward-fitting method is based on geometric models that represent a spatial map by a superposition of multiple source structures, which are quantified by circular gaussians (4 parameters per source), elliptical gaussians (6 parameters), or curved ellipticals (7 parameters), designed to characterize real solar flare hard X-ray maps with a minimum number of geometric elements. We describe and demonstrate the use of the forward-fitting algorithm. We perform some 500 simulations of rotation-modulated time profiles of the 9 RHESSI detectors, based on single and multiple source structures, and perform their image reconstruction. We quantify the fidelity of the image reconstruction as a function of photon statistics, and the accuracy of retrieved source positions, widths, and fluxes. We outline applications for which the forward-fitting code is most suitable, such as measurements of the energy-dependent altitude of energy loss near the limb, or footpoint separation during flares.
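The circular-gaussian source element (4 parameters: position x0, y0, width, and flux) can be sketched directly; for brevity the parameters are recovered here by simple image moments rather than by the actual forward fit to rotation-modulated time profiles, and the grid size and source values are illustrative:

```python
import numpy as np

def circular_gaussian(x0, y0, sigma, flux, n=64):
    """Circular-gaussian source model: 4 parameters (x0, y0, sigma, flux)."""
    y, x = np.mgrid[0:n, 0:n]
    g = np.exp(-((x - x0) ** 2 + (y - y0) ** 2) / (2 * sigma ** 2))
    return flux * g / g.sum()          # normalized so the map sums to flux

# Synthetic "flare map" and moment-based recovery of the 4 parameters.
img = circular_gaussian(24.0, 40.0, 3.0, 500.0)
yy, xx = np.mgrid[0:64, 0:64]
flux = img.sum()
x0 = (xx * img).sum() / flux
y0 = (yy * img).sum() / flux
sigma = np.sqrt(((xx - x0) ** 2 * img).sum() / flux)
```

In the real algorithm these same four quantities per source are adjusted until the predicted modulated time profiles match the detector data.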

  4. Using image reconstruction methods to enhance gridded resolution for a newly calibrated passive microwave climate data record

    Science.gov (United States)

    Paget, A. C.; Brodzik, M. J.; Gotberg, J.; Hardman, M.; Long, D. G.

    2014-12-01

    Spanning over 35 years of Earth observations, satellite passive microwave sensors have generated a near-daily, multi-channel brightness temperature record of observations. Critical to describing and understanding Earth system hydrologic and cryospheric parameters, data products derived from the passive microwave record include precipitation, soil moisture, surface water, vegetation, snow water equivalent, sea ice concentration and sea ice motion. While swath data are valuable to oceanographers due to the temporal scales of ocean phenomena, gridded data are more valuable to researchers interested in derived parameters at fixed locations through time and are widely used in climate studies. We are applying recent developments in image reconstruction methods to produce a systematically reprocessed historical time series NASA MEaSUREs Earth System Data Record, at higher spatial resolutions than have previously been available, for the entire SMMR, SSM/I-SSMIS and AMSR-E record. We take advantage of recently released, recalibrated SSM/I-SSMIS swath format Fundamental Climate Data Records. Our presentation will compare and contrast the two candidate image reconstruction techniques we are evaluating: Backus-Gilbert (BG) interpolation and a radiometer version of Scatterometer Image Reconstruction (SIR). Both BG and SIR use regularization to trade off noise and resolution. We discuss our rationale for the respective algorithm parameters we have selected, compare results and computational costs, and include prototype SSM/I images at enhanced resolutions of up to 3 km. We include a sensitivity analysis for estimating sensor measurement response functions critical to both methods.

  5. A novel reconstruction method for giant incisional hernia: Hybrid laparoscopic technique

    Directory of Open Access Journals (Sweden)

    G Ozturk

    2015-01-01

    Full Text Available Background and Objectives: Laparoscopic reconstruction of ventral hernia is a popular technique today. Patients with large defects present various difficulties for the laparoscopic approach. In this study, we aimed to present a new reconstruction technique that combines laparoscopic and open approaches in giant incisional hernias. Materials and Methods: Between January 2006 and August 2012, 28 patients who were operated on consecutively for incisional hernia with a defect size over 10 cm were included in this study and separated into two groups. Group 1 (n = 12) comprises patients operated on with the standard laparoscopic approach, whereas group 2 (n = 16) comprises those treated with the laparoscopic technique combined with an open approach. Patients were evaluated in terms of age, gender, body mass index (BMI), mean operation time, length of hospital stay, surgical site infection (SSI) and recurrence rate. Results: Mean length of hospital stay and SSI rates were similar in both groups. Postoperative seroma formation was observed in six patients in group 1 and in only one patient in group 2. Group 1 had one patient who suffered a recurrence, whereas group 2 had none. Discussion: The laparoscopic technique combined with an open approach may safely be used as an alternative method for reconstruction of giant incisional hernias.

  6. Research on image matching method of big data image of three-dimensional reconstruction

    Science.gov (United States)

    Zhang, Chunsen; Qiu, Zhenguo; Zhu, Shihuan; Wang, Xiqi; Xu, Xiaolei; Zhong, Sidong

    2015-12-01

    Image matching is the main workflow of three-dimensional reconstruction. With the development of computer processing technology, seeking the images to be matched from large image datasets acquired in different formats, at different scales and at different locations places new demands on image matching. To establish three-dimensional reconstruction based on image matching over big data images, this paper puts forward a new, effective matching method based on a visual bag-of-words model. The main steps are building the bag-of-words model and image matching. First, we extract SIFT feature points from the images in the database and cluster the feature points to generate the bag-of-words model. We then establish inverted files based on the bag of words; the inverted files record, for each visual word, all images containing it. We perform image matching only among images sharing the same visual words, which improves matching efficiency. Finally, we build the three-dimensional model from the matched images. Experimental results indicate that this method improves matching efficiency and is suitable for the requirements of big data reconstruction.
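The inverted-file lookup at the core of this retrieval scheme can be sketched with a hedged example; the image names and visual-word ids are toy assumptions, and the SIFT extraction and clustering that produce the word ids are not shown:

```python
from collections import defaultdict

# Each image is represented by the set of visual-word ids it contains
# (in practice: cluster ids of its SIFT descriptors).
images = {
    "img_a": {3, 17, 42},
    "img_b": {5, 17, 99},
    "img_c": {8, 23},
}

# Inverted file: visual word -> images containing that word.
inverted = defaultdict(set)
for name, words in images.items():
    for w in words:
        inverted[w].add(name)

def candidates(query_words):
    """Images sharing at least one visual word with the query,
    ranked by the number of shared words (simple voting)."""
    votes = defaultdict(int)
    for w in query_words:
        for name in inverted.get(w, ()):
            votes[name] += 1
    return sorted(votes, key=votes.get, reverse=True)

matches = candidates({17, 42, 8})
```

Because only images that share a word with the query are ever touched, matching cost scales with the inverted lists rather than with the full database.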

  7. Pelvic support femoral reconstruction using the method of Ilizarov: a case report.

    Science.gov (United States)

    Samchukov, M L; Birch, J G

    1992-01-01

    A 15-year-old boy presented with a fixed, irreducible congenital dislocation of the hip associated with multiple other lower extremity growth disturbances secondary to neonatal multifocal osteomyelitis. The affected hip had very limited abduction, and the patient had a severe Trendelenburg gait secondary to the dislocation. The hip was reconstructed according to the Ilizarov method, by a combination of maximum proximal femoral adduction osteotomy in the subtrochanteric region and distal femoral corticotomy, to permit gradual realignment of the knee into the new weight-bearing axis produced by the proximal osteotomy. Total fixation time for the femoral reconstruction was two months. Five months after removal of the apparatus, the patient had returned to full function with a remarkably improved gait.

  8. Two-Level Bregman Method for MRI Reconstruction with Graph Regularized Sparse Coding

    Institute of Scientific and Technical Information of China (English)

    刘且根; 卢红阳; 张明辉

    2016-01-01

    In this paper, a two-level Bregman method with graph regularized sparse coding is presented for highly undersampled magnetic resonance image reconstruction. The graph regularized sparse coding is incorporated into a two-level Bregman iterative procedure that enforces the sampled-data constraints in the outer level and updates the dictionary and sparse representation in the inner level. Graph regularized sparse coding and simple dictionary updating applied in the inner minimization make the proposed algorithm converge within a relatively small number of iterations. Experimental results demonstrate that the proposed algorithm can consistently reconstruct both simulated MR images and real MR data efficiently, and it outperforms current state-of-the-art approaches in terms of visual comparisons and quantitative measures.

  9. Prediction of a reconstructed α-boron (111) surface by the minima hopping method

    Science.gov (United States)

    Amsler, Maximilian; Goedecker, Stefan; Botti, Silvana; Marques, Miguel A. L.

    2014-03-01

    Boron exhibits an impressive structural variety, and immense efforts have recently been made to explore boron structures of low dimensionality, such as boron fullerenes, two-dimensional boron sheets and boron nanotubes, which are theoretically predicted to exhibit superior electronic properties compared to their carbon analogues. By performing an extensive and systematic ab initio structural search for the (111) surface of α-boron using the minima hopping structure prediction method, we found very strong reconstructions that lead to two-dimensional surface layers. The topmost layer of these low-energy reconstructions is a conductive, nearly perfectly planar boron sheet. If exfoliation were experimentally possible, promising precursors for a large variety of boron nanostructures, such as single-walled boron nanotubes and boron fullerenes, could be obtained.

  10. High-order noise analysis for low dose iterative image reconstruction methods: ASIR, IRIS, and MBAI

    Science.gov (United States)

    Do, Synho; Singh, Sarabjeet; Kalra, Mannudeep K.; Karl, W. Clem; Brady, Thomas J.; Pien, Homer

    2011-03-01

    Iterative reconstruction techniques (IRTs) have been shown to suppress noise significantly in low-dose CT imaging. However, medical doctors hesitate to accept this new technology because the visual impression of IRT images differs from that of full-dose filtered back-projection (FBP) images. The most common noise measurements, such as the mean and standard deviation of a homogeneous region in the image, do not sufficiently characterize the noise statistics when the probability density function becomes non-Gaussian. In this study, we measure the L-moments of intensity values of images acquired at 10% of normal dose and reconstructed by the IRT methods of two state-of-the-art clinical scanners (i.e., GE HDCT and Siemens DSCT Flash), keeping the dose level identical for both. The high- and low-dose scans (i.e., 10% of high dose) were acquired from each scanner, and the L-moments of noise patches were calculated for the comparison.
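L-moments are linear combinations of probability-weighted moments of the order statistics, and unlike ordinary moments they remain well-behaved for heavy-tailed, non-Gaussian noise. A minimal sketch of the first four sample L-moments, following Hosking's standard definitions (not code from the paper):

```python
import numpy as np

def l_moments(x):
    """First four sample L-moments via probability-weighted moments.

    b_r = (1/n) * sum_i [C(i-1, r) / C(n-1, r)] * x_(i), with x sorted.
    """
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    i = np.arange(1, n + 1)
    b0 = x.mean()
    b1 = np.sum((i - 1) / (n - 1) * x) / n
    b2 = np.sum((i - 1) * (i - 2) / ((n - 1) * (n - 2)) * x) / n
    b3 = np.sum((i - 1) * (i - 2) * (i - 3)
                / ((n - 1) * (n - 2) * (n - 3)) * x) / n
    l1 = b0                                # location (the mean)
    l2 = 2 * b1 - b0                       # scale (robust spread)
    l3 = 6 * b2 - 6 * b1 + b0              # unscaled skewness
    l4 = 20 * b3 - 30 * b2 + 12 * b1 - b0  # unscaled kurtosis
    return l1, l2, l3, l4
```

For a perfectly symmetric sample the third L-moment is zero, which is what makes L-moments useful for quantifying how far reconstructed-noise patches depart from Gaussian behavior.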

  11. Accuracy Assessment for Multi-Channel ECG Waveforms Using Soft Computing Methodologies

    Directory of Open Access Journals (Sweden)

    Menta Srinivasulu

    2014-07-01

    Full Text Available Rhythmic analysis of the ECG waveform is very important, and in recent trends ECG waveform analysis applications are available on smart devices. Still, existing methods cannot accomplish a complete accuracy assessment when classifying multi-channel ECG waveforms. In this paper, we propose an accuracy assessment of the classification of multi-channel ECG waveforms using popular Soft Computing algorithms. The main focus of this research is better rule generation for analyzing multi-channel ECG waveforms. The analysis covers Soft Computing methods such as Decision Trees with different pruning strategies, Logistic Model Trees with different regression processes, and a Support Vector Machine with Particle Swarm Optimization (SVM-PSO). All these methods are trained and tested with MIT-BIH 12-channel ECG waveforms. Before training, an MSO-FIR filter is used for data preprocessing to remove noise from the original multi-channel ECG waveforms; the MSO technique automatically finds the cutoff frequency of the multi-channel ECG waveforms used in the low-pass filtering process. The classification performance is discussed in terms of mean squared error, membership function, classification accuracy, design complexity, and area under the curve on the MIT-BIH data. Additionally, this work is extended to samples of multi-channel ECG waveforms from the Scope diagnostic center, Hyderabad. Our study identifies the best-performing Soft Computing method for the analysis of multi-channel ECG waveforms.

  12. Workflow for near-surface velocity automatic estimation: Source-domain full-traveltime inversion followed by waveform inversion

    KAUST Repository

    Liu, Lu

    2017-08-17

    This paper presents a workflow for automatic near-surface velocity estimation using the early arrivals of seismic data. The workflow comprises two methods: source-domain full traveltime inversion (FTI) and early-arrival waveform inversion. Source-domain FTI can automatically generate a background velocity that kinematically matches the reconstructed plane-wave sources of early arrivals with the true plane-wave sources. This method does not require picking first arrivals for inversion, which is one of the most challenging aspects of ray-based first-arrival tomographic inversion. Moreover, compared with conventional Born-based methods, source-domain FTI can distinguish between slower and faster initial model errors by providing the correct sign of the model gradient. In addition, it does not need an estimate of the source wavelet, which is a requirement for receiver-domain wave-equation velocity inversion. The model derived from source-domain FTI is then used as input to early-arrival waveform inversion to obtain the short-wavelength velocity components. We have tested the workflow on synthetic and field seismic data sets. The results show that source-domain FTI can generate reasonable background velocities for early-arrival waveform inversion even when subsurface velocity reversals are present, and that the workflow can produce a high-resolution near-surface velocity model.

  13. Bayesian network reconstruction using systems genetics data: comparison of MCMC methods.

    Science.gov (United States)

    Tasaki, Shinya; Sauerwine, Ben; Hoff, Bruce; Toyoshiba, Hiroyoshi; Gaiteri, Chris; Chaibub Neto, Elias

    2015-04-01

    Reconstructing biological networks using high-throughput technologies has the potential to produce condition-specific interactomes. But are these reconstructed networks a reliable source of biological interactions? Do some network inference methods offer dramatically improved performance on certain types of networks? To facilitate the use of network inference methods in systems biology, we report a large-scale simulation study comparing the ability of Markov chain Monte Carlo (MCMC) samplers to reverse engineer Bayesian networks. The MCMC samplers we investigated included foundational and state-of-the-art Metropolis-Hastings and Gibbs sampling approaches, as well as novel samplers we have designed. To enable a comprehensive comparison, we simulated gene expression and genetics data from known network structures under a range of biologically plausible scenarios. We examine the overall quality of network inference via different methods, as well as how their performance is affected by network characteristics. Our simulations reveal that network size, edge density, and strength of gene-to-gene signaling are major parameters that differentiate the performance of various samplers. Specifically, more recent samplers including our novel methods outperform traditional samplers for highly interconnected large networks with strong gene-to-gene signaling. Our newly developed samplers show comparable or superior performance to the top existing methods. Moreover, this performance gain is strongest in networks with biologically oriented topology, which indicates that our novel samplers are suitable for inferring biological networks. The performance of MCMC samplers in this simulation framework can guide the choice of methods for network reconstruction using systems genetics data.

  14. A sparse reconstruction method for the estimation of multiresolution emission fields via atmospheric inversion

    Directory of Open Access Journals (Sweden)

    J. Ray

    2014-08-01

    Full Text Available We present a sparse reconstruction scheme that can also be used to ensure non-negativity when fitting wavelet-based random field models to limited observations in non-rectangular geometries. The method is relevant when multiresolution fields are estimated using linear inverse problems. Examples include the estimation of emission fields for many anthropogenic pollutants using atmospheric inversion or hydraulic conductivity in aquifers from flow measurements. The scheme is based on three new developments. Firstly, we extend an existing sparse reconstruction method, Stagewise Orthogonal Matching Pursuit (StOMP, to incorporate prior information on the target field. Secondly, we develop an iterative method that uses StOMP to impose non-negativity on the estimated field. Finally, we devise a method, based on compressive sensing, to limit the estimated field within an irregularly shaped domain. We demonstrate the method on the estimation of fossil-fuel CO2 (ffCO2 emissions in the lower 48 states of the US. The application uses a recently developed multiresolution random field model and synthetic observations of ffCO2 concentrations from a limited set of measurement sites. We find that our method for limiting the estimated field within an irregularly shaped region is about a factor of 10 faster than conventional approaches. It also reduces the overall computational cost by a factor of two. Further, the sparse reconstruction scheme imposes non-negativity without introducing strong nonlinearities, such as those introduced by employing log-transformed fields, and thus reaps the benefits of simplicity and computational speed that are characteristic of linear inverse problems.
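The Stagewise Orthogonal Matching Pursuit (StOMP) scheme this record extends can be sketched in its basic form: at each stage, all columns whose residual correlation exceeds a threshold are added to the support, then the coefficients are re-fit by least squares. This is a minimal sketch of plain StOMP only (without the paper's prior-information, non-negativity, or domain-limiting extensions); the threshold factor and stage count are illustrative:

```python
import numpy as np

def stomp(A, y, n_stages=10, t=2.5):
    """Stagewise Orthogonal Matching Pursuit (basic form).

    Each stage: correlate the residual with the columns of A, admit every
    column whose correlation exceeds t times the formal noise level of the
    residual, then refit the admitted coefficients by least squares.
    """
    m, n = A.shape
    support = np.zeros(n, dtype=bool)
    x = np.zeros(n)
    r = y.copy()
    for _ in range(n_stages):
        c = A.T @ r
        thresh = t * np.linalg.norm(r) / np.sqrt(m)  # formal noise level
        new = np.abs(c) > thresh
        if not new.any():
            break
        support |= new
        xs, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        x = np.zeros(n)
        x[support] = xs
        r = y - A @ x
    return x
```

Because each stage admits many columns at once, StOMP typically needs only a handful of stages, which is part of its appeal for large linear inverse problems such as the emission-field estimation described above.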

  15. Validation of the stream function method used for reconstruction of experimental ionospheric convection patterns

    Directory of Open Access Journals (Sweden)

    P.L. Israelevich

    Full Text Available In this study we test a stream function method suggested by Israelevich and Ershkovich for instantaneous reconstruction of global, high-latitude ionospheric convection patterns from a limited set of experimental observations, namely, from electric field or ion drift velocity vector measurements taken along two polar satellite orbits only. These two satellite passes subdivide the polar cap into several adjacent areas. Measured electric fields or ion drifts can be considered as boundary conditions (together with the zero electric potential condition at the low-latitude boundary) for those areas, and the entire ionospheric convection pattern can be reconstructed as a solution of the boundary value problem for the stream function without any preliminary information on ionospheric conductivities. In order to validate the stream function method, we utilized the IZMIRAN electrodynamic model (IZMEM), recently calibrated by the DMSP ionospheric electrostatic potential observations. For the sake of simplicity, we took the modeled electric fields along the noon-midnight and dawn-dusk meridians as the boundary conditions. Then, the solutions of the boundary value problem (i.e., reconstructed potential distributions over the entire polar region) are compared with the original IZMEM/DMSP electric potential distributions, as well as with various cross-cuts of the polar cap. It is found that the reconstructed convection patterns are in good agreement with the original modelled patterns in both the northern and southern polar caps. The analysis is carried out for winter and summer conditions, as well as for a number of configurations of the interplanetary magnetic field.

    Key words: Ionosphere (electric fields and currents; plasma convection; modelling and forecasting
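Reconstructing an interior field from boundary data, as described above, amounts to solving a Dirichlet boundary value problem. A minimal sketch with a Gauss-Seidel Laplace solver on a rectangular grid (the actual method works on the polar-cap geometry with measured boundary data; this toy assumes a square domain and a known harmonic solution for checking):

```python
import numpy as np

def solve_laplace(boundary, n_iter=2000):
    """Gauss-Seidel solution of Laplace's equation with Dirichlet data.

    `boundary` is a 2-D array whose edge values hold the prescribed
    potential (in the record above, values derived from electric-field
    measurements along the satellite passes); the interior is filled in
    as the harmonic interpolant of the boundary.
    """
    u = boundary.copy()
    ny, nx = u.shape
    for _ in range(n_iter):
        for i in range(1, ny - 1):
            for j in range(1, nx - 1):
                # 5-point stencil: each interior value relaxes toward the
                # average of its four neighbours.
                u[i, j] = 0.25 * (u[i - 1, j] + u[i + 1, j]
                                  + u[i, j - 1] + u[i, j + 1])
    return u
```

A production solver would use a sparse direct or multigrid method, but the structure is the same: only boundary values are needed, and no conductivity information enters the problem.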

  16. Application of accelerated acquisition and highly constrained reconstruction methods to MR

    Science.gov (United States)

    Wang, Kang

    2011-12-01

    There are many Magnetic Resonance Imaging (MRI) applications that require rapid data acquisition. In conventional proton MRI, representative applications include real-time dynamic imaging, whole-chest pulmonary perfusion imaging, high-resolution coronary imaging, and MR T1 or T2 mapping. The requirement for fast acquisition and novel reconstruction methods stems from the clinical demand for high temporal resolution, high spatial resolution, or both. Another important category in which fast MRI methods are highly desirable is imaging with hyperpolarized (HP) contrast media, such as HP 3He imaging for evaluation of pulmonary function, and imaging of HP 13C-labeled substrates for the study of in vivo metabolic processes. To address these needs, numerous MR undersampling methods have been developed and combined with novel image reconstruction techniques. This thesis aims to develop novel data acquisition and image reconstruction techniques for the following applications. (1) Ultrashort echo time spectroscopic imaging (UTESI): the need to acquire many echo images in spectroscopic imaging with high spatial resolution usually results in extended scan times, and thus requires k-space undersampling and novel image reconstruction methods to overcome the related artifacts. (2) Dynamic hyperpolarized 13C spectroscopic imaging: HP 13C compounds exhibit non-equilibrium T1 decay and rapidly evolving spectral dynamics, so it is vital to utilize the polarized signal wisely and efficiently to observe the entire temporal dynamics of the injected 13C compounds as well as the corresponding downstream metabolites. (3) Time-resolved contrast-enhanced MR angiography: the diagnosis of vascular diseases often requires large coverage of the body with high spatial resolution and sufficient temporal resolution to separate arterial from venous phases.
The goal of simultaneously achieving high spatial and temporal resolution has

  17. Impact of reconstruction methods and pathological factors on survival after pancreaticoduodenectomy

    Directory of Open Access Journals (Sweden)

    Salah Binziad

    2013-01-01

    Full Text Available Background: Surgery remains the mainstay of therapy for pancreatic head (PH) and periampullary carcinoma (PC) and provides the only chance of cure. Improvements in surgical technique, increased surgical experience and advances in anesthesia, intensive care and parenteral nutrition have substantially decreased surgical complications and increased survival. We evaluate the effects of reconstruction type, complications and pathological factors on survival and quality of life. Materials and Methods: This is a prospective study evaluating the impact of various reconstruction methods of the pancreatic remnant after pancreaticoduodenectomy and the pathological characteristics of PC patients over 3.5 years. Patient characteristics and descriptive analyses of the three reconstruction methods, each with or without a stent, were compared with the Chi-square test. Multivariate analysis was performed with logistic and multinomial logistic regression, and survival was analyzed with the Kaplan-Meier test. Results: Forty-one consecutive patients with PC were enrolled. There were 23 men (56.1%) and 18 women (43.9%), with a median age of 56 years (16 to 70 years). There were 24 cases of PH cancer, eight cases of PC, four cases of distal CBD cancer and five cases of duodenal carcinoma. Nine patients underwent duct-to-mucosa pancreaticojejunostomy (PJ), 17 patients underwent telescoping PJ and 15 patients pancreaticogastrostomy (PG). The pancreatic duct was stented in 30 patients, while in 11 patients the duct was not stented. Duct-to-mucosa PJ caused significantly less leakage, but longer operative and reconstruction times, while telescoping PJ was associated with the shortest hospital stay. There were five postoperative mortalities, and postoperative morbidities included pancreatic fistula (6 patients), delayed gastric emptying (11), GI fistula (3), wound infection (12), burst abdomen (6) and pulmonary infection (2).
Factors

  18. A trajectory and orientation reconstruction method for moving objects based on a moving monocular camera.

    Science.gov (United States)

    Zhou, Jian; Shang, Yang; Zhang, Xiaohu; Yu, Wenxian

    2015-03-09

    We propose a monocular trajectory intersection method to solve the problem that a monocular moving camera cannot be used for three-dimensional reconstruction of a moving object point. The necessary and sufficient condition for this method to have a unique solution is provided. An extended application of the method not only achieves reconstruction of the 3D trajectory, but also captures the orientation of the moving object, which could not be obtained by PnP methods due to the lack of features. It is a breakthrough improvement that develops intersection measurement from the traditional "point intersection" to "trajectory intersection" in videometrics. The trajectory of the object point can be obtained using only linear equations, without any initial value or iteration; the orientation of an object under poor conditions can also be calculated. The condition required for the existence of a definite solution is derived from equivalence relations among the orders of the moving-trajectory equations of the object, which specifies the applicable conditions of the method. Simulation and experimental results show that the method not only applies to objects moving along a straight line, a conic or another simple trajectory, but also provides good results for more complicated trajectories, making it widely applicable.

  19. Reconstruction of multiple gastric electrical wave fronts using potential-based inverse methods.

    Science.gov (United States)

    Kim, J H K; Pullan, A J; Cheng, L K

    2012-08-21

    One approach for non-invasively characterizing gastric electrical activity, commonly used in the field of electrocardiography, involves solving an inverse problem whereby electrical potentials on the stomach surface are directly reconstructed from dense potential measurements on the skin surface. To investigate this problem, an anatomically realistic torso model and an electrical stomach model were used to simulate potentials on the stomach and skin surfaces arising from normal gastric electrical activity. The effectiveness of the Greensite-Tikhonov and Tikhonov inverse methods was compared in the presence of 10% Gaussian noise with either 84 or 204 body surface electrodes. The stability and accuracy of the Greensite-Tikhonov method were further investigated by introducing varying levels of Gaussian signal noise or by increasing or decreasing the size of the stomach by 10%. Results showed that the reconstructed solutions were able to represent the presence of multiple propagating wave fronts, and the Greensite-Tikhonov method with 204 electrodes performed best (correlation coefficient of activation time: 90%; pacemaker localization error: 3 cm). The Greensite-Tikhonov method was stable with Gaussian noise levels up to 20% and a 10% change in stomach size. The use of 204 rather than 84 body surface electrodes improved performance; however, for all investigated cases, the Greensite-Tikhonov method outperformed the Tikhonov method.
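The noise sensitivity that makes regularization necessary in such potential-based inverse problems can be seen in a small truncated-SVD sketch (the same mechanism used in the tsunami-source record at the top of this section). The toy diagonal system below is purely illustrative; a real torso-to-stomach transfer matrix is dense and far larger, but the weak singular values amplify noise in exactly the same way:

```python
import numpy as np

def tsvd_solve(A, b, k):
    """Truncated-SVD solution: keep only the k largest singular values."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return Vt[:k].T @ ((U[:, :k].T @ b) / s[:k])

# Toy ill-posed problem: the third singular value is nearly zero, so plain
# inversion amplifies measurement noise in that mode enormously.
A = np.diag([1.0, 0.5, 1e-8])
x_true = np.array([1.0, -2.0, 0.5])
noise = np.array([1e-6, 1e-6, 1e-6])
b = A @ x_true + noise

x_naive = np.linalg.solve(A, b)  # noise in the weak mode blows up by ~1e8
x_tsvd = tsvd_solve(A, b, k=2)   # stable, at the cost of a small truncation bias
```

Tikhonov regularization replaces the hard truncation with a smooth filter factor s/(s^2 + lambda^2) on each singular value; the Greensite variant additionally decorrelates the temporal structure before applying Tikhonov in space.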

  20. Super-resolution image reconstruction methods applied to GFE-referenced navigation system

    Science.gov (United States)

    Yan, Lei; Lin, Yi; Tong, Qingxi

    2007-11-01

    The overlarge spacing of reference grid data, which biases estimates at unsurveyed points and degrades the accuracy of correlation positioning, has long hindered research on Geophysical Fields of the Earth (GFE) referenced navigation. Super-resolution image reconstruction methods from the remote sensing field offer some inspiration, and one such method, Maximum A Posteriori (MAP) estimation based on Bayesian theory, is transplanted to grid data. The proposed algorithm, named MAP-G, can interpolate the reference data field while reflecting its overall distribution trend. Comparisons with traditional interpolation algorithms and simulation experiments on an underwater terrain/gravity-aided navigation platform indicate that the MAP-G algorithm can effectively improve navigation performance.

  1. Terahertz digital holography using angular spectrum and dual wavelength reconstruction methods.

    Science.gov (United States)

    Heimbeck, Martin S; Kim, Myung K; Gregory, Don A; Everitt, Henry O

    2011-05-09

    Terahertz digital off-axis holography is demonstrated using a Mach-Zehnder interferometer with a highly coherent, frequency tunable, continuous wave terahertz source emitting around 0.7 THz and a single, spatially-scanned Schottky diode detector. The reconstruction of amplitude and phase objects is performed digitally using the angular spectrum method in conjunction with Fourier space filtering to reduce noise from the twin image and DC term. Phase unwrapping is achieved using the dual wavelength method, which offers an automated approach to overcome the 2π phase ambiguity. Potential applications for nondestructive test and evaluation of visually opaque dielectric and composite objects are discussed.
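The angular spectrum reconstruction step named above amounts to a transfer-function multiplication in Fourier space. A minimal free-space propagation sketch (grid size, wavelength, and distances below are illustrative, not the paper's 0.7 THz setup; the Fourier-space filtering of the twin image and DC term is omitted):

```python
import numpy as np

def angular_spectrum(u0, wavelength, dx, z):
    """Propagate a sampled complex field u0 a distance z in free space.

    U(kx, ky; z) = U(kx, ky; 0) * exp(i * kz * z),
    kz = sqrt(k^2 - kx^2 - ky^2); evanescent components (kz^2 < 0)
    are suppressed.
    """
    k = 2 * np.pi / wavelength
    ny, nx = u0.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=dx)
    KX, KY = np.meshgrid(kx, ky)
    kz_sq = k**2 - KX**2 - KY**2
    prop = np.where(kz_sq > 0,
                    np.exp(1j * np.sqrt(np.abs(kz_sq)) * z),
                    0.0)
    return np.fft.ifft2(np.fft.fft2(u0) * prop)
```

Reconstruction of a recorded hologram is the same operation run with a negative z (back-propagation to the object plane); since the transfer function is a pure phase factor for propagating components, forward then backward propagation returns the original field.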

  2. Robust joint full-waveform inversion of time-lapse seismic data sets with total-variation regularization

    CERN Document Server

    Maharramov, Musa

    2014-01-01

    We present a technique for reconstructing subsurface velocity model changes from time-lapse seismic survey data using full-waveform inversion (FWI). The technique is based on simultaneously inverting multiple survey vintages, with model-difference regularization using the total variation (TV) seminorm. We compare the new TV-regularized time-lapse FWI with the $L_2$-regularized joint inversion proposed in our earlier work, using synthetic data sets that exhibit survey repeatability issues. The results demonstrate clear advantages of the proposed TV-regularized joint inversion over alternative methods for recovering production-induced model changes due to both fluid substitution and geomechanical effects.
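The reason the TV seminorm suits production-induced model changes is that it favors piecewise-constant (blocky) differences rather than the smooth ones an $L_2$ penalty produces. A 1-D denoising analogue makes this concrete; this sketch minimizes a smoothed TV functional by plain gradient descent (all parameters are illustrative, and the paper's joint FWI is of course far more involved than this toy):

```python
import numpy as np

def tv_denoise(y, lam=0.5, eps=1e-2, step=0.04, n_iter=3000):
    """Gradient descent on 0.5*||x - y||^2 + lam * sum sqrt((Dx)^2 + eps).

    A smoothed 1-D total-variation functional: the TV term drives x toward
    a piecewise-constant profile (sharp, blocky jumps), unlike an L2
    penalty on Dx, which smears edges.
    """
    x = y.copy()
    for _ in range(n_iter):
        d = np.diff(x)
        w = d / np.sqrt(d**2 + eps)  # derivative of the smoothed |d|
        g = x - y                    # gradient of the data-fit term
        g[:-1] -= lam * w            # d|d_i|/dx_i   = -w_i
        g[1:] += lam * w             # d|d_i|/dx_{i+1} = +w_i
        x = x - step * g
    return x
```

Applied to a noisy step, the result keeps the jump sharp while flattening the noise; an $L_2$ penalty of the same strength would instead round the edge off, which is why the TV-regularized inversion localizes fluid-substitution boundaries better.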

  3. Binary Black Holes: Mergers, Dynamics, and Waveforms

    Science.gov (United States)

    Centrella, Joan

    2007-04-01

    The final merger of two black holes is expected to be the strongest gravitational wave source for ground-based interferometers such as LIGO, VIRGO, and GEO600, as well as the space-based interferometer LISA. Observing these sources with gravitational wave detectors requires that we know the radiation waveforms they emit. Since these mergers take place in regions of extreme gravity, we need to solve Einstein's equations of general relativity on a computer in order to calculate these waveforms. For more than 30 years, scientists have tried to compute black hole mergers using the methods of numerical relativity. The resulting computer codes have been plagued by instabilities, causing them to crash well before the black holes in the binary could complete even a single orbit. Within the past few years, however, this situation has changed dramatically, with a series of remarkable breakthroughs. This talk will focus on new simulations that are revealing the dynamics and waveforms of binary black hole mergers, and their applications in gravitational wave detection, data analysis, and astrophysics.

  4. Full waveform inversion for ultrasonic flaw identification

    Science.gov (United States)

    Seidl, Robert; Rank, Ernst

    2017-02-01

    Ultrasonic nondestructive testing is concerned with detecting flaws inside components without causing physical damage. It is possible to detect flaws using ultrasound measurements, but usually no additional details about the flaw, such as position, dimension or orientation, are available; this information is hidden in the recorded experimental signals. The idea of full waveform inversion is to adapt the parameters of an initial simulation model of the undamaged specimen by minimizing the discrepancy between the simulated signals and experimentally measured signals of the flawed specimen. Flaws in the structure are characterized by a change or deterioration in the material properties. Full waveform inversion is most commonly applied in seismology, on a much larger scale, to infer mechanical properties of the earth. We propose to use acoustic full waveform inversion for structural parameters to visualize the interior of the component. The method is adapted to ultrasonic NDT by combining multiple similar experiments on the test component, as the typically small number of sensors is not sufficient for successful imaging. It is shown that the combination of simulations and multiple experiments can be used to detect flaws and their position, dimension and orientation in emulated simulation cases.

  5. A Parallel Implicit Reconstructed Discontinuous Galerkin Method for Compressible Flows on Hybrid Grids

    Science.gov (United States)

    Xia, Yidong

    The objective of this work is to develop a parallel, implicit reconstructed discontinuous Galerkin (RDG) method using Taylor basis functions for the solution of the compressible Navier-Stokes equations on 3D hybrid grids. This third-order accurate RDG method is based on a hierarchical weighted essentially non-oscillatory reconstruction scheme, termed HWENO(P1P2) to indicate that a quadratic polynomial solution is obtained from the underlying linear polynomial DG solution via a hierarchical WENO reconstruction. The HWENO(P1P2) scheme is designed not only to enhance the accuracy of the underlying DG(P1) method but also to ensure non-linear stability of the RDG method. In this reconstruction scheme, a quadratic polynomial (P2) solution is first reconstructed using a least-squares approach from the underlying linear (P1) discontinuous Galerkin solution. The final quadratic solution is then obtained using a Hermite WENO reconstruction, which is necessary to ensure the linear stability of the RDG method on 3D unstructured grids. The first derivatives of the quadratic polynomial solution are then reconstructed using a WENO reconstruction in order to eliminate spurious oscillations in the vicinity of strong discontinuities, thus ensuring the non-linear stability of the RDG method. The parallelization of the RDG method is based on the message passing interface (MPI) programming paradigm, where the METIS library is used to partition a mesh into subdomain meshes of approximately equal size. Both multi-stage explicit Runge-Kutta and implicit backward Euler methods are implemented for time advancement. In the implicit method, three approaches to obtaining the flux Jacobian matrices are developed and implemented: analytical differentiation, divided differencing (DD), and automatic differentiation (AD).
The automatic differentiation is a set of techniques based on the mechanical application of the chain rule to obtain derivatives of a function given as

  6. A Temporoparietal Fascia Pocket Method in Elevation of Reconstructed Auricle for Microtia.

    Science.gov (United States)

    Kurabayashi, Takashi; Asato, Hirotaka; Suzuki, Yasutoshi; Kaji, Nobuyuki; Mitoma, Yoko

    2017-04-01

    In two-stage procedures for reconstruction of microtia, an axial flap of temporoparietal fascia is widely used to cover the costal cartilage blocks placed behind the framework. Although a temporoparietal fascia flap is undoubtedly reliable, its use is associated with some morbidity and comes at the expense of the option for salvage surgery. The authors devised a simplified procedure for covering the cartilage blocks by creating a pocket in the postauricular temporoparietal fascia. In this procedure, the constructed auricle is elevated from the head superficially to the temporoparietal fascia, and a pocket is created under the temporoparietal fascia and the capsule of the auricle framework. Cartilage blocks are then inserted into the pocket and fixed. A total of 38 reconstructed ears in 38 patients with microtia, ranging in age from 9 to 19 years, were elevated using the authors' method from 2002 to 2014 and followed for at least 5 months. To evaluate the long-term stability of the method, two-way analysis of variance was used to compare a temporoparietal fascia flap method versus the temporoparietal fascia pocket method over long-term follow-up. Good projection of the auricles and creation of well-defined temporoauricular sulci were achieved, and the sulci tended to hold their steep profile over a long period. The temporoparietal fascia pocket method is simple but produces superior results; moreover, pocket creation is less invasive and has the benefit of sparing temporoparietal fascia flap elevation. Level of Evidence: Therapeutic, IV.

  7. Reconstruction of normal and abnormal gastric electrical sources using a potential based inverse method.

    Science.gov (United States)

    Kim, J H K; Du, P; Cheng, L K

    2013-09-01

    The use of cutaneous recordings to non-invasively characterize gastric slow waves has had limited clinical acceptance, primarily due to the uncertainty in relating the recorded signal to the underlying gastric slow waves. In this study we aim to distinguish and quantitatively reconstruct different slow wave patterns using an inverse algorithm. Slow wave patterns corresponding to normal, retrograde and uncoupled activity at different frequencies were imposed on a stomach surface model. Gaussian noise (10% peak-to-peak) was added to the cutaneous potentials, and the Greensite-Tikhonov inverse method was used to reconstruct the potentials on the stomach. The effect of the number and location of electrodes on the accuracy of the inverse solutions was investigated using four different electrode configurations. Results showed that the reconstructed solutions were able to reliably distinguish the different slow wave patterns, and that waves with lower frequencies were better correlated with the known solution than those with higher frequencies. The use of up to 228 electrodes improved the accuracy of the inverse solutions; however, 120 electrodes concentrated around the stomach achieved similar results. The most efficient electrode configuration for our model involved 120 electrodes with an inter-electrode distance of 32 mm.

  8. Information Reconstruction Method for Improved Clustering and Diagnosis of Generic Gearbox Signals

    Directory of Open Access Journals (Sweden)

    Jay Lee

    2011-01-01

    Full Text Available Gearbox is a very complex mechanical system that can generate vibrations from its various elements such as gears, shafts, and bearings. Transmission path effects, signal coupling, and noise contamination can further complicate the development of a prognostics and health management (PHM) system for a gearbox. This paper introduces a novel information reconstruction approach to clustering and diagnosis of gearbox signals under varying operating conditions. First, the vibration signal is transformed from the time domain to the frequency domain with the Fast Fourier Transform (FFT). Then, reconstruction filters are employed to sift the frequency components in the FFT spectrum to retain the information of interest. Features are further extracted to calculate the coefficients of the reconstructed energy expression. Next, correlation analysis (CA) and distance measurement (DM) techniques are utilized to cluster signals under diverse shaft speeds and loads. Finally, the energy coefficients are used as health indicators for fault diagnosis of the rotating elements in the gearbox. The proposed method was applied to the gearbox problem of the 2009 PHM Conference Data Analysis Competition and achieved the best score in both the professional and student categories.
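
The FFT-and-reconstruction-filter step can be sketched as a band mask applied in the frequency domain, with the energy of the reconstructed component serving as a feature. The tone frequencies and band edges below are invented for illustration, not taken from the competition data:

```python
import numpy as np

def band_energy(signal, fs, f_lo, f_hi):
    """Sift one frequency band with an FFT mask (a 'reconstruction filter')
    and return the energy of the reconstructed component."""
    spec = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    mask = (freqs >= f_lo) & (freqs <= f_hi)
    recon = np.fft.irfft(spec * mask, n=len(signal))  # keep only the band
    return float(np.sum(recon ** 2))

fs = 1000.0
t = np.arange(0.0, 1.0, 1.0 / fs)
# Invented example: a gear-mesh tone at 150 Hz and a weaker tone at 320 Hz.
x = np.sin(2 * np.pi * 150 * t) + 0.5 * np.sin(2 * np.pi * 320 * t)

e_mesh = band_energy(x, fs, 100.0, 200.0)    # energy feature for band 1
e_other = band_energy(x, fs, 270.0, 370.0)   # energy feature for band 2
```

Energy coefficients computed this way per band can then feed the correlation analysis and distance measurement stages as a compact signature of the signal.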

  9. Selective capsulotomies of the expanded breast as a remodelling method in two-stage breast reconstruction.

    Science.gov (United States)

    Grimaldi, Luca; Campana, Matteo; Brandi, Cesare; Nisi, Giuseppe; Brafa, Anna; Calabrò, Massimiliano; D'Aniello, Carlo

    2013-06-01

    Two-stage breast reconstruction with a tissue expander and prosthesis is nowadays a common method for achieving a satisfactory appearance in selected patients who have had a mastectomy, but its most common aesthetic drawback is an excessive volumetric increment of the superior half of the reconstructed breast, with a convexity of the profile in that area. A possible solution to limit this effect, and to fill the inferior pole, is to reduce the inferior tissue resistance by means of capsulotomies. This study reports the effects of various types of capsulotomies, performed in 72 patients after removal of the mammary expander, with the aim of emphasising the convexity of the inferior mammary aspect in the expanded breast. Possible solutions are described for each kind of desired modification. On the basis of subjective and objective evaluations, an overall high degree of satisfaction was evident. The described selective capsulotomies, when properly carried out, may significantly improve the aesthetic results in two-stage reconstructed breasts, with no additional scars, minimal risks, and little lengthening of the surgical time.

  10. Testing for causality in reconstructed state spaces by an optimized mixed prediction method

    Science.gov (United States)

    Krakovská, Anna; Hanzely, Filip

    2016-11-01

    In this study, a method of causality detection was designed to reveal coupling between dynamical systems represented by time series. The method is based on predictions in reconstructed state spaces. The results of the proposed method were compared with the outcomes of two other methods: the Granger VAR test of causality and convergent cross-mapping. We used two types of test data. The first test example is a unidirectional coupling of chaotic systems of the Rössler and Lorenz type. The second, a fishery model, is an example of two correlated observables without a causal relationship. The results showed that the proposed method of optimized mixed prediction was able to reveal the presence and direction of coupling and to distinguish causality from mere correlation.
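
The core idea of prediction in reconstructed state spaces can be illustrated with a time-delay embedding and nearest-neighbour prediction. This toy sketch is not the authors' optimized mixed prediction: it uses a lagged copy of a logistic map as the "response", so its reconstructed state space should predict the driver well, while an independent system should not:

```python
import numpy as np

def logistic(n, x0, r=3.8):
    """Chaotic logistic-map time series."""
    out = np.empty(n)
    out[0] = x0
    for i in range(1, n):
        out[i] = r * out[i - 1] * (1.0 - out[i - 1])
    return out

def embed(x, dim=2, tau=1):
    """Time-delay embedding; row t is (x_t, x_{t-tau}, ...)."""
    start = (dim - 1) * tau
    return np.column_stack([x[start - k * tau: len(x) - k * tau] for k in range(dim)])

def cross_map_skill(source, target, dim=2, tau=1, k=3):
    """Predict `target` from nearest neighbours in `source`'s reconstructed
    state space; the prediction-vs-truth correlation is the skill."""
    E = embed(source, dim, tau)
    tgt = target[(dim - 1) * tau:]
    pred = np.empty(len(tgt))
    for i in range(len(E)):
        d = np.linalg.norm(E - E[i], axis=1)
        d[i] = np.inf                       # exclude the point itself
        nn = np.argsort(d)[:k]
        pred[i] = tgt[nn].mean()
    return float(np.corrcoef(pred, tgt)[0, 1])

x = logistic(500, 0.4)                      # "driver"
y = np.concatenate(([x[0]], x[:-1]))        # "response": lagged copy of x
z = logistic(500, 0.7)                      # independent system, no coupling

skill_y = cross_map_skill(y, x)             # should be high
skill_z = cross_map_skill(z, x)             # should be near zero
```

The asymmetry between `skill_y` and `skill_z` is what a state-space prediction test exploits to separate genuine coupling from coincidental similarity.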

  11. Perception of patient appearance following various methods of reconstruction after orbital exenteration.

    Science.gov (United States)

    Kuiper, Justin J; Zimmerman, M Bridget; Pagedar, Nitin A; Carter, Keith D; Allen, Richard C; Shriver, Erin M

    2016-08-01

    This article compares the perception of health and beauty of patients after exenteration reconstruction with a free flap, eyelid-sparing repair, split-thickness skin graft, or a prosthesis. A cross-sectional evaluation was performed through a survey sent to all students enrolled at the University of Iowa Carver College of Medicine. The survey included inquiries about observer comfort, perceived patient health, difficulty of social interactions, and which patient appearance was least bothersome. Responses were scored from 0 to 4 for each method of reconstruction and an orbital prosthesis. A Friedman test was used to compare responses among the methods of repair and the orbital prosthesis for each of the four questions; where significant, post-hoc pairwise comparison was performed with p values adjusted using Bonferroni's method. One hundred thirty-two students responded to the survey and 125 completed all four questions. Favorable responses for all questions were highest for the orbital prosthesis and lowest for the split-thickness skin graft. Patient appearance with an orbital prosthesis had significantly higher scores compared to patient appearance with each of the other methods for all questions (p value < 0.0001). The second highest scores were for the free flap, which were higher than eyelid-sparing repair and significantly higher than split-thickness skin grafting (p value: Question 1: < 0.0001; Question 2: 0.0005; Question 3: 0.006; and Question 4: 0.019). The orbital prosthesis was the preferred post-operative appearance for the exenterated socket for each question. The free flap was the preferred appearance for reconstruction without an orbital prosthesis. The split-thickness skin graft was least preferred for all questions.

  12. Waveform-dependent absorbing metasurfaces

    CERN Document Server

    Wakatsuchi, Hiroki; Rushton, Jeremiah J; Sievenpiper, Daniel F

    2014-01-01

    We present the first use of a waveform-dependent absorbing metasurface for high-power pulsed surface currents. The new type of nonlinear metasurface, composed of circuit elements including diodes, is capable of storing high-power pulse energy and dissipating it between pulses, while allowing propagation of small signals. Interestingly, the absorbing performance varies for high-power pulses but not for high-power continuous waves (CWs), since the capacitors used are then fully charged. Thus, the waveform dependence enables us to distinguish various signal types (i.e., CW or pulse) even at the same frequency, which potentially enables new kinds of microwave technologies and applications.

  13. Cardiac C-arm computed tomography using a 3D + time ROI reconstruction method with spatial and temporal regularization

    Energy Technology Data Exchange (ETDEWEB)

    Mory, Cyril, E-mail: cyril.mory@philips.com [Université de Lyon, CREATIS, CNRS UMR5220, Inserm U1044, INSA-Lyon, Université Lyon 1, F-69621 Villeurbanne Cedex (France); Philips Research Medisys, 33 rue de Verdun, 92156 Suresnes (France); Auvray, Vincent; Zhang, Bo [Philips Research Medisys, 33 rue de Verdun, 92156 Suresnes (France); Grass, Michael; Schäfer, Dirk [Philips Research, Röntgenstrasse 24–26, D-22335 Hamburg (Germany); Chen, S. James; Carroll, John D. [Department of Medicine, Division of Cardiology, University of Colorado Denver, 12605 East 16th Avenue, Aurora, Colorado 80045 (United States); Rit, Simon [Université de Lyon, CREATIS, CNRS UMR5220, Inserm U1044, INSA-Lyon, Université Lyon 1 (France); Centre Léon Bérard, 28 rue Laënnec, F-69373 Lyon (France); Peyrin, Françoise [Université de Lyon, CREATIS, CNRS UMR5220, Inserm U1044, INSA-Lyon, Université Lyon 1, F-69621 Villeurbanne Cedex (France); X-ray Imaging Group, European Synchrotron, Radiation Facility, BP 220, F-38043 Grenoble Cedex (France); Douek, Philippe; Boussel, Loïc [Université de Lyon, CREATIS, CNRS UMR5220, Inserm U1044, INSA-Lyon, Université Lyon 1 (France); Hospices Civils de Lyon, 28 Avenue du Doyen Jean Lépine, 69500 Bron (France)

    2014-02-15

    Purpose: Reconstruction of the beating heart in 3D + time in the catheter laboratory using only the available C-arm system would improve diagnosis, guidance, device sizing, and outcome control for intracardiac interventions, e.g., electrophysiology, valvular disease treatment, structural or congenital heart disease. To obtain such a reconstruction, the patient's electrocardiogram (ECG) must be recorded during the acquisition and used in the reconstruction. In this paper, the authors present a 4D reconstruction method aiming to reconstruct the heart from a single-sweep 10 s acquisition. Methods: The authors introduce the 4D RecOnstructiOn using Spatial and TEmporal Regularization (4D ROOSTER for short) method, which reconstructs all cardiac phases at once, as a 3D + time volume. The algorithm alternates between a reconstruction step based on conjugate gradient and four regularization steps: enforcing positivity, averaging along time outside a motion mask that contains the heart and vessels, 3D spatial total variation minimization, and 1D temporal total variation minimization. Results: 4D ROOSTER recovers the different temporal representations of a moving Shepp-Logan phantom, and outperforms both ECG-gated simultaneous algebraic reconstruction technique and prior image constrained compressed sensing on a clinical case. It generates 3D + time reconstructions with sharp edges which can be used, for example, to estimate the patient's left ventricular ejection fraction. Conclusions: 4D ROOSTER can be applied to human cardiac C-arm CT, and potentially to other dynamic tomography areas. It can easily be adapted to other problems, as regularization is decoupled from projection and back projection.
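
The alternating regularization cycle can be caricatured on a toy (t, x) volume. In this sketch the positivity and motion-mask averaging steps follow the description above, while plain neighbour averaging stands in for the spatial and temporal total variation minimization; the volume, mask, and weights are invented:

```python
import numpy as np

def rooster_regularize(vol, mask, lam=0.3):
    """One regularization cycle in the spirit of 4D ROOSTER on a (t, x)
    volume. Steps 3 and 4 use neighbour averaging as a smoothing surrogate
    for the paper's spatial/temporal total variation minimization."""
    vol = np.maximum(vol, 0.0)                     # 1) enforce positivity
    static = vol.mean(axis=0)                      # 2) average along time ...
    vol = np.where(mask, vol, static)              #    ... outside the motion mask
    sm = vol.copy()                                # 3) spatial smoothing
    sm[:, 1:-1] = (vol[:, :-2] + vol[:, 1:-1] + vol[:, 2:]) / 3.0
    vol = (1.0 - lam) * vol + lam * sm
    tm = vol.copy()                                # 4) temporal smoothing
    tm[1:-1] = (vol[:-2] + vol[1:-1] + vol[2:]) / 3.0
    return (1.0 - lam) * vol + lam * tm

rng = np.random.default_rng(0)
truth = np.zeros((8, 10))                          # 8 cardiac phases, 10 voxels
truth[:, 3:7] = 1.0 + 0.5 * np.sin(np.linspace(0, np.pi, 8))[:, None]  # "beating" ROI
noisy = truth + 0.3 * rng.standard_normal((8, 10))
mask = np.zeros(10, dtype=bool)
mask[3:7] = True                                   # motion mask: heart region only

out = rooster_regularize(noisy, mask)
```

The key property survives the simplification: voxels outside the motion mask become constant in time, while the dynamic region keeps its temporal evolution but with reduced temporal variation.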

  14. Decoupled Method for Reconstruction of Surface Conditions From Internal Temperatures On Ablative Materials With Uncertain Recession Model

    Science.gov (United States)

    Oliver, A. Brandon

    2017-01-01

    Obtaining measurements of flight environments on ablative heat shields is both critical for spacecraft development and extremely challenging due to the harsh heating environment and surface recession. Thermocouples installed several millimeters below the surface are commonly used to measure the heat shield temperature response, but an ill-posed inverse heat conduction problem must be solved to reconstruct the surface heating environment from these measurements. Ablation can contribute substantially to the measurement response, making solutions to the inverse problem strongly dependent on the recession model, which is often poorly characterized. To enable efficient surface reconstruction for recession model sensitivity analysis, a method for decoupling the surface recession evaluation from the inverse heat conduction problem is presented. The decoupled method is shown to provide reconstructions of equivalent accuracy to the traditional coupled method but with substantially reduced computational effort. These methods are applied to reconstruct the environments on the Mars Science Laboratory heat shield using diffusion-limited and kinetically limited recession models.
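
The reason an inverse problem must be solved at all is that the buried thermocouple sees a lagged, attenuated version of the surface drive. A small explicit finite-difference forward model (illustrative material properties, no ablation or recession) shows the effect:

```python
import numpy as np

def conduct_1d(surface_temps, n_nodes=20, dx=5e-4, alpha=1e-6, dt=0.05, probe=8):
    """Explicit finite-difference slab with a prescribed surface temperature
    and an adiabatic back face; returns the in-depth temperature history at
    the probe node. No ablation/recession is modelled here."""
    r = alpha * dt / dx ** 2          # Fourier number; must be <= 0.5 (here 0.2)
    T = np.zeros(n_nodes)
    history = np.empty(len(surface_temps))
    for n, Ts in enumerate(surface_temps):
        T[0] = Ts                                          # driven surface node
        T[1:-1] = T[1:-1] + r * (T[2:] - 2.0 * T[1:-1] + T[:-2])
        T[-1] = T[-2]                                      # adiabatic back face
        history[n] = T[probe]                              # "thermocouple" reading
    return history

surface = np.full(400, 100.0)          # 100-degree surface step held for 20 s
probe_T = conduct_1d(surface)          # lagged, attenuated response at depth
```

Reconstructing `surface` from `probe_T` is the ill-posed inverse problem; when the surface also recedes, the probe's effective depth changes in time, which is why the recession model enters the reconstruction at all.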

  15. Objective evaluation of reconstruction methods for quantitative SPECT imaging in the absence of ground truth

    Science.gov (United States)

    Jha, Abhinav K.; Song, Na; Caffo, Brian; Frey, Eric C.

    2015-03-01

    Quantitative single-photon emission computed tomography (SPECT) imaging is emerging as an important tool in clinical studies and biomedical research. There is thus a need for optimization and evaluation of systems and algorithms that are being developed for quantitative SPECT imaging. An appropriate objective way to evaluate these systems is to compare their performance on the end task required in quantitative SPECT imaging, such as estimating the mean activity concentration in a volume of interest (VOI) in a patient image. This objective evaluation can be performed if the true value of the estimated parameter is known, i.e. we have a gold standard. However, very rarely is this gold standard known in human studies. Thus, no-gold-standard techniques to optimize and evaluate systems and algorithms in the absence of a gold standard are required. In this work, we developed a no-gold-standard technique to objectively evaluate reconstruction methods used in quantitative SPECT when the parameter to be estimated is the mean activity concentration in a VOI. We studied the performance of the technique with realistic simulated image data generated from an object database consisting of five phantom anatomies with all possible combinations of five sets of organ uptakes, where each anatomy consisted of eight different organ VOIs. Results indicate that the method provided accurate ranking of the reconstruction methods. We also demonstrated the application of consistency checks to test the no-gold-standard output.

  16. Signal Separation and Reconstruction Method for Simultaneously Received Multi-System Signals in Flexible Wireless System

    Science.gov (United States)

    Yamada, Takayuki; Lee, Doohwan; Shiba, Hiroyuki; Yamaguchi, Yo; Akabane, Kazunori; Uehara, Kazuhiro

    We previously proposed a unified wireless system called the “Flexible Wireless System”. Comprising flexible access points and a flexible signal processing unit, it collectively receives a wideband spectrum that includes multiple signals from various wireless systems. In cases of simultaneous multiple-signal reception, however, reception performance degrades due to interference among the signals. To address this problem, we propose a new signal separation and reconstruction method for spectrally overlapped signals. The method analyzes spectral information obtained by the short-time Fourier transform to extract amplitude and phase values at each center frequency of the overlapped signals at a flexible signal processing unit. Using these values enables signals from received radio wave data to be separated and reconstructed for simultaneous multi-system reception. In this paper, the BER performance of the proposed method is evaluated using computer simulations. In addition, the performance of the interference suppression is evaluated by analyzing the probability density distribution of the amplitude of the overlapped interference on a symbol of the received signal. Simulation results confirmed the effectiveness of the proposed method.
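
The amplitude-extraction step can be sketched with a short-time Fourier transform over rectangular frames: at each frame, the spectral value at a signal's centre frequency yields its amplitude even when another signal occupies the same wideband capture. The frequencies and frame length below are illustrative, not the system's actual parameters:

```python
import numpy as np

def tone_amplitudes(frames, fs, centers):
    """Per-frame amplitudes at given centre frequencies, read off the
    short-time Fourier transform of rectangular frames."""
    n = frames.shape[1]
    spec = np.fft.rfft(frames, axis=1)
    out = {}
    for f in centers:
        b = int(round(f * n / fs))             # DFT bin of the centre frequency
        out[f] = 2.0 * np.abs(spec[:, b]) / n  # bin magnitude -> tone amplitude
    return out

fs = 1000.0
t = np.arange(2000) / fs
# Two spectrally distinct signals received simultaneously (invented example).
x = 1.0 * np.sin(2 * np.pi * 100 * t) + 0.5 * np.sin(2 * np.pi * 250 * t)
frames = x.reshape(-1, 100)                    # 100-sample analysis frames
amp = tone_amplitudes(frames, fs, [100.0, 250.0])
```

With the per-frame amplitude (and, analogously, phase) in hand, each constituent signal can be resynthesized separately, which is the separation-and-reconstruction idea in miniature.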

  17. A new near-lossless EEG compression method using ANN-based reconstruction technique.

    Science.gov (United States)

    Hejrati, Behzad; Fathi, Abdolhossein; Abdali-Mohammadi, Fardin

    2017-08-01

    Compression algorithms are an essential part of telemedicine systems, needed to store and transmit large amounts of medical signals. Most existing compression methods utilize fixed transforms such as the discrete cosine transform (DCT) and the wavelet transform, and usually cannot efficiently extract signal redundancy, especially for non-stationary signals such as the electroencephalogram (EEG). In this paper, we first propose a learning-based adaptive transform that combines the DCT with an artificial neural network (ANN) reconstruction technique. This adaptive ANN-based transform is applied to the DCT coefficients of EEG data to reduce their dimensionality and also to estimate the original DCT coefficients of the EEG in the reconstruction phase. To develop a new near-lossless compression method, the difference between the original and estimated DCT coefficients is also quantized. The quantized error is coded using arithmetic coding and sent along with the estimated DCT coefficients as compressed data. The proposed method was applied to various datasets and the results show a higher compression rate compared to state-of-the-art methods.
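
The near-lossless structure, estimate the DCT coefficients and then quantize the residual, can be sketched with a truncated DCT standing in for the ANN estimator. The signal and quantization step are invented, and the arithmetic coding of the residual is omitted; the point is the guaranteed per-coefficient error bound:

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis as an n x n matrix."""
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    M = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    M[0] /= np.sqrt(2.0)
    return M

def near_lossless(x, keep=16, q=0.05):
    """Near-lossless codec sketch: a truncated DCT stands in for the ANN
    estimate; the coefficient residual is uniformly quantized with step q,
    bounding every coefficient error by q/2."""
    D = dct_matrix(len(x))
    c = D @ x                                           # DCT coefficients
    est = np.where(np.arange(len(x)) < keep, c, 0.0)    # crude "estimate"
    resid_q = np.round((c - est) / q) * q               # quantized residual
    return D.T @ (est + resid_q)                        # decoder-side signal

rng = np.random.default_rng(1)
eeg = np.cumsum(rng.standard_normal(256)) * 0.1         # synthetic EEG-like drift
rec = near_lossless(eeg)
```

Because the DCT basis is orthonormal, the q/2 bound on each coefficient error translates directly into a bound on the reconstruction error, which is what "near-lossless" promises.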

  18. Three-dimensional reconstruction method of Tang Dynasty building based on point clouds

    Science.gov (United States)

    Wang, Yinghui; Zhang, Huanhuan; Zhao, Yanni; Hao, Wen; Ning, Xiaojuan; Shi, Zhenghao; Zhao, Minghua

    2015-12-01

    We present a method to reconstruct a three-dimensional (3-D) model of a Tang Dynasty building from raw point clouds. Different from previous building modeling techniques, our method is developed for Tang Dynasty buildings, which do not exhibit the planar primitives, facades, and repetitive structural elements of residential low- or high-rise buildings. The proposed method utilizes the structural properties of the Tang Dynasty building to process the original point clouds. First, the raw point clouds are sliced into many parallel layers to generate a top-bottom hierarchical structure, and each layer is resampled to obtain a purified subset of the 3-D point cloud. In addition, a series of different building components are recognized by clustering these purified subsets, and the tree-structured topology of the components is obtained during slicing and clustering. Second, different solutions are explored to reconstruct a 3-D model for each kind of building component, and the overall model of the building is obtained from the building components and the tree-structured topology. Experimental results demonstrate that the proposed method is efficient at generating a highly realistic 3-D model of a Tang Dynasty building.
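
The slice-and-cluster stage can be sketched on a toy cloud: points are binned into horizontal layers, and each layer's x-coordinates are grouped into components by gap thresholding, a crude stand-in for the paper's clustering. The "building" below (two pillars under one roof slab) is invented:

```python
import numpy as np

def slice_layers(points, n_layers):
    """Slice a point cloud into horizontal layers along z."""
    z = points[:, 2]
    edges = np.linspace(z.min(), z.max() + 1e-9, n_layers + 1)
    return [points[(z >= lo) & (z < hi)] for lo, hi in zip(edges[:-1], edges[1:])]

def count_clusters_1d(values, gap=0.5):
    """Group a layer's x-coordinates into components split at large gaps
    (a crude stand-in for component clustering)."""
    v = np.sort(values)
    return 0 if len(v) == 0 else int(np.sum(np.diff(v) > gap)) + 1

rng = np.random.default_rng(2)
# Toy "building": two pillars (x near 0 and 3) carrying one roof slab.
n = 400
px = np.where(rng.random(n) < 0.5, 0.0, 3.0) + 0.1 * rng.random(n)
pillars = np.column_stack([px, rng.random(n), rng.random(n) * 2.0])
roof = np.column_stack([rng.random(200) * 3.2, rng.random(200),
                        2.0 + 0.5 * rng.random(200)])
cloud = np.vstack([pillars, roof])

layers = slice_layers(cloud, 5)
counts = [count_clusters_1d(layer[:, 0]) for layer in layers]
```

Tracking how the component count changes from layer to layer (two pillars merging into one roof) is exactly the kind of information the tree-structured topology records.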

  19. Implementation of a fast running full core pin power reconstruction method in DYN3D

    Energy Technology Data Exchange (ETDEWEB)

    Gomez-Torres, Armando Miguel [Instituto Nacional de Investigaciones Nucleares, Department of Nuclear Systems, Carretera Mexico – Toluca s/n, La Marquesa, 52750 Ocoyoacac (Mexico); Sanchez-Espinoza, Victor Hugo, E-mail: victor.sanchez@kit.edu [Karlsruhe Institute of Technology, Institute for Neutron Physics and Reactor Technology, Hermann-vom-Helmhotz-Platz 1, D-76344 Eggenstein-Leopoldshafen (Germany); Kliem, Sören; Gommlich, Andre [Helmholtz-Zentrum Dresden-Rossendorf, Bautzner Landstraße 400, 01328 Dresden (Germany)

    2014-07-01

    Highlights: • New pin power reconstruction (PPR) method for the nodal diffusion code DYN3D. • Flexible PPR method applicable to a single fuel assembly, a group of fuel assemblies, or a whole core (square, hex). • Combination of nodal with pin-wise solutions (non-conforming geometry). • PPR capabilities shown for a rod ejection accident (REA) in a PWR minicore and for a whole core. - Abstract: This paper presents a substantial extension of the pin power reconstruction (PPR) method used in the reactor dynamics code DYN3D, with the aim of better describing the heterogeneity within the fuel assembly during reactor simulations. The flexibility of the newly implemented PPR permits the local spatial refinement of one fuel assembly, of a cluster of fuel assemblies, of a quarter or an eighth of a core, or even of a whole core. The application of PPR in core regions of interest will pave the way for coupling with sub-channel codes, enabling the prediction of local safety parameters. One of the main advantages of considering regions, and not only a hot fuel assembly (FA), is the fact that the cross flow within this region can be taken into account by the subchannel code. The implementation of the new PPR method has been tested by analysing a rod ejection accident (REA) in a PWR minicore consisting of 3 × 3 FAs. Finally, the new capabilities of DYN3D are demonstrated by analysing a boron dilution transient in a PWR MOX core and the pin power of a VVER-1000 reactor at stationary conditions.

  20. A comparative study of two reconstructive methods and different recommendations in intracavitary brachytherapy

    Directory of Open Access Journals (Sweden)

    KR Muralidhar

    2010-01-01

    Full Text Available Purpose: Intracavitary brachytherapy (ICB) is a widely used technique in the treatment of cervical cancer. In our institute, we use different reconstructive methods in the conventional planning procedure. The main aim of this study was to compare these methods using critical organ doses obtained in various treatment plans. There is a small difference between the ICRU-38 (International Commission on Radiation Units and Measurements) and ABS (American Brachytherapy Society) recommendations for selecting the bladder dose point. The second objective of the study was to find the difference in bladder dose under both recommendations. Material and methods: We selected two methods: the variable angle method (M1) and the orthogonal method (M2). Two orthogonal sets of radiographs were acquired using a conventional simulator. All four radiographs were used in M1 and only two radiographs were used in M2. Bladder and rectum doses were calculated using the ICRU-38 recommendations. For the maximum bladder dose reference point as per the ABS recommendation, 4 to 5 reference points were marked on the Foley balloon. Results: 64% of the plans showed a higher bladder dose and 50% of the plans showed a higher rectum dose in M1 compared to M2. In both methods, many of the plans revealed a maximum bladder dose point other than the ICRU-38 bladder point. The variation exceeded 5% in a considerable number of plans. Conclusions: We observed a difference in critical organ dose between the two studied methods. The variable angle reconstruction method has an advantage in identifying the catheters. It is useful to follow the ABS recommendation to find the maximum bladder dose.

  1. Noninvasive calculation of the aortic blood pressure waveform from the flow velocity waveform: a proof of concept.

    Science.gov (United States)

    Vennin, Samuel; Mayer, Alexia; Li, Ye; Fok, Henry; Clapp, Brian; Alastruey, Jordi; Chowienczyk, Phil

    2015-09-01

    Estimation of aortic and left ventricular (LV) pressure usually requires measurements that are difficult to acquire during the imaging required to obtain concurrent LV dimensions essential for determination of LV mechanical properties. We describe a novel method for deriving aortic pressure from the aortic flow velocity. The target pressure waveform is divided into an early systolic upstroke, determined by the water hammer equation, and a diastolic decay equal to that in the peripheral arterial tree, interposed by a late systolic portion described by a second-order polynomial constrained by conditions of continuity and conservation of mean arterial pressure. Pulse wave velocity (PWV, which can be obtained through imaging), mean arterial pressure, diastolic pressure, and diastolic decay are required inputs for the algorithm. The algorithm was tested using 1) pressure data derived theoretically from prespecified flow waveforms and properties of the arterial tree using a single-tube 1-D model of the arterial tree, and 2) experimental data acquired from a pressure/Doppler flow velocity transducer placed in the ascending aorta in 18 patients (mean ± SD: age 63 ± 11 yr, aortic BP 136 ± 23/73 ± 13 mmHg) at the time of cardiac catheterization. For experimental data, PWV was calculated from measured pressures/flows, and mean and diastolic pressures and diastolic decay were taken from measured pressure (i.e., were assumed to be known). Pressure reconstructed from measured flow agreed well with theoretical pressure: mean ± SD root mean square (RMS) error 0.7 ± 0.1 mmHg. Similarly, for experimental data, pressure reconstructed from measured flow agreed well with measured pressure (mean RMS error 2.4 ± 1.0 mmHg). First systolic shoulder and systolic peak pressures were also accurately rendered (mean ± SD difference 1.4 ± 2.0 mmHg for peak systolic pressure). This is the first noninvasive derivation of aortic pressure based on fluid dynamics (flow and wave speed) in the ascending aorta.
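
The three waveform segments can be sketched directly. In this toy reconstruction the water-hammer relation sets the upstroke, an exponential decay chosen to land on the diastolic pressure at the cycle end sets diastole, and a linear bridge replaces the authors' constrained second-order polynomial; all parameter values are illustrative:

```python
import numpy as np

RHO = 1060.0     # blood density, kg/m^3
MMHG = 133.32    # Pa per mmHg

def pressure_from_flow(t, u, pwv, p_dia, tau, t_es):
    """Three-segment sketch: water-hammer upstroke, exponential diastolic
    decay landing on p_dia at the cycle end, and a linear late-systolic
    bridge replacing the paper's constrained polynomial."""
    i_pk = int(np.argmax(u))                  # end of early-systolic upstroke
    i_es = int(np.searchsorted(t, t_es))      # assumed end of systole
    p = np.empty_like(t)
    # 1) early systole: water hammer, dP = rho * PWV * dU
    p[:i_pk + 1] = p_dia + RHO * pwv * u[:i_pk + 1] / MMHG
    # 3) diastole: exponential decay chosen to end the cycle at p_dia
    p_es = p_dia * np.exp((t[-1] - t[i_es]) / tau)
    p[i_es:] = p_es * np.exp(-(t[i_es:] - t[i_es]) / tau)
    # 2) late systole: linear bridge between upstroke peak and end-systole
    p[i_pk:i_es + 1] = np.linspace(p[i_pk], p_es, i_es - i_pk + 1)
    return p

t = np.linspace(0.0, 0.8, 801)                        # one 0.8 s beat
u = np.where(t < 0.3, np.sin(np.pi * t / 0.3), 0.0)   # aortic flow velocity, m/s
p = pressure_from_flow(t, u, pwv=8.0, p_dia=75.0, tau=1.5, t_es=0.3)
```

With PWV = 8 m/s and a 1 m/s peak flow, the water-hammer term alone contributes roughly 64 mmHg of pulse pressure, which is why PWV is a required input of the algorithm.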

  2. Impact of PET/CT image reconstruction methods and liver uptake normalization strategies on quantitative image analysis.

    Science.gov (United States)

    Kuhnert, Georg; Boellaard, Ronald; Sterzer, Sergej; Kahraman, Deniz; Scheffler, Matthias; Wolf, Jürgen; Dietlein, Markus; Drzezga, Alexander; Kobe, Carsten

    2016-02-01

    In oncological imaging using PET/CT, the standardized uptake value has become the most common parameter used to measure tracer accumulation. The aim of this analysis was to evaluate ultra high definition (UHD) and ordered subset expectation maximization (OSEM) PET/CT reconstructions for their potential impact on quantification. We analyzed 40 PET/CT scans of lung cancer patients who had undergone PET/CT. Standardized uptake values corrected for body weight (SUV) and lean body mass (SUL) were determined in the single hottest lesion in the lung and normalized to the liver for UHD and OSEM reconstruction. Quantitative uptake values and their normalized ratios for the two reconstruction settings were compared using the Wilcoxon test. The distribution of quantitative uptake values and their ratios in relation to the reconstruction method used were demonstrated in the form of frequency distribution curves, box-plots and scatter plots. The agreement between OSEM and UHD reconstructions was assessed through Bland-Altman analysis. A significant difference was observed after OSEM and UHD reconstruction for the SUV and SUL data tested (p < 0.0005 in all cases). The mean values of the ratios after OSEM and UHD reconstruction showed equally significant differences (p < 0.0005 in all cases). Bland-Altman analysis showed that SUV and SUL and their normalized values were, on average, up to 60 % higher after UHD reconstruction than after OSEM reconstruction. OSEM and UHD reconstruction thus yielded significantly different SUV and SUL values, and the differences remained consistently large after normalization to the liver, indicating that standardization of the reconstruction method and the use of comparable SUV measurements are crucial when using PET/CT.
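
The two quantities at the heart of the comparison, body-weight SUV and Bland-Altman agreement, are simple to compute. A sketch with invented paired values follows; the 60% uplift mirrors the magnitude reported above, not the actual study data:

```python
import numpy as np

def suv_bw(activity_bq_per_ml, dose_bq, weight_kg):
    """Body-weight SUV, assuming 1 g/ml tissue density."""
    return activity_bq_per_ml / (dose_bq / (weight_kg * 1000.0))

def bland_altman(a, b):
    """Mean bias and 95% limits of agreement between paired measurements."""
    diff = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Invented paired lesion SUVs under the two reconstruction settings.
suv_osem = np.array([4.0, 6.5, 3.2, 8.1, 5.5])
suv_uhd = 1.6 * suv_osem                   # systematic 60% uplift, illustrative
bias, (lo, hi) = bland_altman(suv_uhd, suv_osem)
```

A systematic bias like this cancels out only in within-method comparisons; cross-method SUV comparisons need either harmonized reconstruction settings or an explicit correction, which is the article's conclusion.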

  3. Impact of PET/CT image reconstruction methods and liver uptake normalization strategies on quantitative image analysis

    Energy Technology Data Exchange (ETDEWEB)

    Kuhnert, Georg; Sterzer, Sergej; Kahraman, Deniz; Dietlein, Markus; Drzezga, Alexander; Kobe, Carsten [University Hospital of Cologne, Department of Nuclear Medicine, Cologne (Germany); Boellaard, Ronald [VU University Medical Centre, Department of Radiology and Nuclear Medicine, Amsterdam (Netherlands); Scheffler, Matthias; Wolf, Juergen [University Hospital of Cologne, Lung Cancer Group Cologne, Department I of Internal Medicine, Center for Integrated Oncology Cologne Bonn, Cologne (Germany)

    2016-02-15

    In oncological imaging using PET/CT, the standardized uptake value has become the most common parameter used to measure tracer accumulation. The aim of this analysis was to evaluate ultra high definition (UHD) and ordered subset expectation maximization (OSEM) PET/CT reconstructions for their potential impact on quantification. We analyzed 40 PET/CT scans of lung cancer patients who had undergone PET/CT. Standardized uptake values corrected for body weight (SUV) and lean body mass (SUL) were determined in the single hottest lesion in the lung and normalized to the liver for UHD and OSEM reconstruction. Quantitative uptake values and their normalized ratios for the two reconstruction settings were compared using the Wilcoxon test. The distribution of quantitative uptake values and their ratios in relation to the reconstruction method used were demonstrated in the form of frequency distribution curves, box-plots and scatter plots. The agreement between OSEM and UHD reconstructions was assessed through Bland-Altman analysis. A significant difference was observed after OSEM and UHD reconstruction for the SUV and SUL data tested (p < 0.0005 in all cases). The mean values of the ratios after OSEM and UHD reconstruction showed equally significant differences (p < 0.0005 in all cases). Bland-Altman analysis showed that SUV and SUL and their normalized values were, on average, up to 60 % higher after UHD reconstruction than after OSEM reconstruction. OSEM and UHD reconstruction thus yielded significantly different SUV and SUL values, and the differences remained consistently large after normalization to the liver, indicating that standardization of the reconstruction method and the use of comparable SUV measurements are crucial when using PET/CT.

  4. Application of a data-driven simulation method to the reconstruction of the coronal magnetic field

    Institute of Scientific and Technical Information of China (English)

    Yu-Liang Fan; Hua-Ning Wang; Han He; Xiao-Shuai Zhu

    2012-01-01

    Ever since the magnetohydrodynamic (MHD) method for extrapolation of the solar coronal magnetic field was first developed to study the dynamic evolution of twisted magnetic flux tubes, it has proven to be efficient in the reconstruction of the solar coronal magnetic field. A recent example is the so-called data-driven simulation method (DDSM), which has been demonstrated to be valid by an application to model analytic solutions such as a force-free equilibrium given by Low and Lou. We use DDSM for the observed magnetograms to reconstruct the magnetic field above an active region. To avoid an unnecessary sensitivity to boundary conditions, we use a classical total variation diminishing Lax-Friedrichs formulation to iteratively compute the full MHD equations. In order to incorporate a magnetogram consistently and stably, the bottom boundary conditions are derived from the characteristic method. In our simulation, we change the tangential fields continually from an initial potential field to the vector magnetogram. In the relaxation, the initial potential field is changed to a nonlinear magnetic field until the MHD equilibrium state is reached. Such a stable equilibrium is expected to be able to represent the solar atmosphere at a specified time. By inputting the magnetograms before and after the X3.4 flare that occurred on 2006 December 13, we find a topological change after comparing the magnetic field before and after the flare. Some discussions are given regarding the change of magnetic configuration and current distribution. Furthermore, we compare the reconstructed field line configuration with the coronal loop observations by XRT onboard Hinode. The comparison shows a relatively good correlation.

  5. Demonstration of a forward iterative method to reconstruct brachytherapy seed configurations from x-ray projections

    Energy Technology Data Exchange (ETDEWEB)

    Murphy, Martin J; Todor, Dorin A [Department of Radiation Oncology, Virginia Commonwealth University, Richmond VA 23298 (United States)

    2005-06-07

    By monitoring brachytherapy seed placement and determining the actual configuration of the seeds in vivo, one can optimize the treatment plan during the process of implantation. Two or more radiographic images from different viewpoints can in principle allow one to reconstruct the configuration of implanted seeds uniquely. However, the reconstruction problem is complicated by several factors: (1) the seeds can overlap and cluster in the images; (2) the images can have distortion that varies with viewpoint when a C-arm fluoroscope is used; (3) there can be uncertainty in the imaging viewpoints; (4) the angular separation of the imaging viewpoints can be small owing to physical space constraints; (5) there can be inconsistency in the number of seeds detected in the images; and (6) the patient can move while being imaged. We propose and conceptually demonstrate a novel reconstruction method that handles all of these complications and uncertainties in a unified process. The method represents the three-dimensional seed and camera configurations as parametrized models that are adjusted iteratively to conform to the observed radiographic images. The morphed model seed configuration that best reproduces the appearance of the seeds in the radiographs is the best estimate of the actual seed configuration. All of the information needed to establish both the seed configuration and the camera model is derived from the seed images without resort to external calibration fixtures. Furthermore, by comparing overall image content rather than individual seed coordinates, the process avoids the need to establish correspondence between seed identities in the several images. The method has been shown to work robustly in simulation tests that simultaneously allow for unknown individual seed positions, uncertainties in the imaging viewpoints and variable image distortion.
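
The forward-iterative idea, morph a parametrized 3-D seed model until its projections reproduce the observed images, can be sketched in an idealized setting with two orthogonal distortion-free views and known seed correspondence (all simplifications the paper's method explicitly avoids relying on):

```python
import numpy as np

def project(seeds):
    """Two idealized orthogonal views: view A drops y, view B drops x."""
    return seeds[:, [0, 2]], seeds[:, [1, 2]]

def fit_seeds(obs_a, obs_b, n_iter=200, lr=0.25):
    """Iteratively morph a model seed configuration until its projections
    reproduce the observed images (gradient descent on the squared
    projection mismatch)."""
    rng = np.random.default_rng(3)
    model = rng.standard_normal((obs_a.shape[0], 3))    # random initial guess
    for _ in range(n_iter):
        pa, pb = project(model)
        grad = np.zeros_like(model)
        grad[:, [0, 2]] += 2.0 * (pa - obs_a)           # view-A mismatch
        grad[:, [1, 2]] += 2.0 * (pb - obs_b)           # view-B mismatch
        model -= lr * grad
    return model

true_seeds = np.array([[1.0, 2.0, 3.0], [0.5, -1.0, 2.0], [-2.0, 0.0, 1.5]])
obs_a, obs_b = project(true_seeds)
est = fit_seeds(obs_a, obs_b)
```

In the full method the "morphing" additionally adjusts camera parameters and compares overall image content rather than individual seed coordinates, which is what lets it tolerate overlapping seeds, distortion, and unknown correspondence.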

  6. Demonstration of a forward iterative method to reconstruct brachytherapy seed configurations from x-ray projections

    Science.gov (United States)

    Murphy, Martin J.; Todor, Dorin A.

    2005-06-01

    By monitoring brachytherapy seed placement and determining the actual configuration of the seeds in vivo, one can optimize the treatment plan during the process of implantation. Two or more radiographic images from different viewpoints can in principle allow one to reconstruct the configuration of implanted seeds uniquely. However, the reconstruction problem is complicated by several factors: (1) the seeds can overlap and cluster in the images; (2) the images can have distortion that varies with viewpoint when a C-arm fluoroscope is used; (3) there can be uncertainty in the imaging viewpoints; (4) the angular separation of the imaging viewpoints can be small owing to physical space constraints; (5) there can be inconsistency in the number of seeds detected in the images; and (6) the patient can move while being imaged. We propose and conceptually demonstrate a novel reconstruction method that handles all of these complications and uncertainties in a unified process. The method represents the three-dimensional seed and camera configurations as parametrized models that are adjusted iteratively to conform to the observed radiographic images. The morphed model seed configuration that best reproduces the appearance of the seeds in the radiographs is the best estimate of the actual seed configuration. All of the information needed to establish both the seed configuration and the camera model is derived from the seed images without resort to external calibration fixtures. Furthermore, by comparing overall image content rather than individual seed coordinates, the process avoids the need to establish correspondence between seed identities in the several images. The method has been shown to work robustly in simulation tests that simultaneously allow for unknown individual seed positions, uncertainties in the imaging viewpoints and variable image distortion.
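
    The multi-view step at the heart of this approach can be illustrated in isolation. The sketch below is not the authors' iterative morphing pipeline; it recovers a single seed position from two views by direct linear triangulation, assuming known, distortion-free 3x4 camera matrices. The geometry and values are invented for illustration.

```python
import numpy as np

def triangulate(P1, P2, u1, u2):
    """Direct linear triangulation of one 3D point from two views.

    P1, P2 : 3x4 camera projection matrices (assumed known and undistorted)
    u1, u2 : (x, y) image coordinates of the same seed in each view
    """
    A = np.vstack([
        u1[0] * P1[2] - P1[0],
        u1[1] * P1[2] - P1[1],
        u2[0] * P2[2] - P2[0],
        u2[1] * P2[2] - P2[1],
    ])
    # Homogeneous least-squares solution: right singular vector belonging
    # to the smallest singular value of A
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Two synthetic views of a seed at (1, 2, 5)
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])              # first viewpoint
R = np.array([[0.0, 0.0, 1.0], [0.0, 1.0, 0.0], [-1.0, 0.0, 0.0]])
P2 = np.hstack([R, np.array([[-5.0], [0.0], [2.0]])])      # rotated second viewpoint
X_true = np.array([1.0, 2.0, 5.0])
X_hat = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
```

    In the paper's setting the camera matrices are themselves unknowns adjusted jointly with the seed model, which is what lets the method work without external calibration fixtures.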

  7. Reduced order model for binary neutron star waveforms with tidal interactions

    Science.gov (United States)

    Lackey, Benjamin; Bernuzzi, Sebastiano; Galley, Chad

    2016-03-01

    Observations of inspiralling binary neutron star (BNS) systems with Advanced LIGO can be used to determine the unknown neutron-star equation of state by measuring the phase shift in the gravitational waveform due to tidal interactions. Unfortunately, this requires computationally efficient waveform models for use in parameter estimation codes that typically require 10^6-10^7 sequential waveform evaluations, as well as accurate waveform models with phase errors less than 1 radian over the entire inspiral to avoid systematic errors in the measured tidal deformability. The effective one body waveform model with l = 2, 3, and 4 tidal multipole moments is currently the most accurate model for BNS systems, but takes several minutes to evaluate. We develop a reduced order model of this waveform by constructing separate orthonormal bases for the amplitude and phase evolution. We find that only 10-20 bases are needed to reconstruct any BNS waveform with a starting frequency of 10 Hz. The coefficients of these bases are found with Chebyshev interpolation over the waveform parameter space. This reduced order model has maximum errors of 0.2 radians, and results in a speedup factor of more than 10^3, allowing parameter estimation codes to run in days to weeks rather than decades.
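
    The basis-construction idea generalizes beyond gravitational waveforms. The hedged sketch below builds an orthonormal basis from the SVD of a training set of smooth parametrized curves (an invented stand-in for the amplitude evolutions, not the actual effective-one-body model) and reconstructs an out-of-sample curve by projection; only a handful of bases are needed.

```python
import numpy as np

# Hypothetical training set: smooth "amplitude evolutions" over a parameter q
t = np.linspace(0.0, 1.0, 500)
params = np.linspace(0.5, 2.0, 40)
train = np.array([np.exp(-q * t) * (1 + 0.1 * np.sin(5 * q * t)) for q in params])

# Orthonormal basis from the SVD of the training matrix; keep enough
# singular vectors to capture essentially all the training-set energy
U, s, Vt = np.linalg.svd(train, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
n_basis = min(np.searchsorted(energy, 1 - 1e-10) + 1, len(s))
B = Vt[:n_basis]                      # rows are orthonormal basis functions

# Reconstruct an out-of-sample curve by projecting onto the basis
q_new = 1.234
w = np.exp(-q_new * t) * (1 + 0.1 * np.sin(5 * q_new * t))
w_rec = B.T @ (B @ w)
err = np.max(np.abs(w - w_rec))
```

    The paper then goes one step further and interpolates the projection coefficients over the parameter space (with Chebyshev interpolation), so that no full waveform evaluation is needed at run time.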

  8. Analytical method for reconstruction pin to pin of the nuclear power density distribution

    Energy Technology Data Exchange (ETDEWEB)

    Pessoa, Paulo O.; Silva, Fernando C.; Martinez, Aquilino S., E-mail: ppessoa@con.ufrj.br, E-mail: fernando@con.ufrj.br, E-mail: aquilino@imp.ufrj.br [Coordenacao dos Programas de Pos-Graduacao em Engenharia (COPPE/UFRJ), Rio de Janeiro, RJ (Brazil)

    2013-07-01

    An accurate and efficient method for pin-by-pin reconstruction of the nuclear power density distribution, involving the analytical solution of the two-dimensional, two-group neutron diffusion equation in homogeneous nodes, is presented. The boundary conditions used for the analytical solution are the four currents or fluxes on the surfaces of the node, obtained by the Nodal Expansion Method (NEM), and the four fluxes at the vertices of the node, calculated using the finite difference method. The analytical solution found is the homogeneous distribution of the neutron flux. Detailed pin-by-pin distributions inside a fuel assembly are estimated by the product of the homogeneous flux distribution and a local heterogeneous form function. Both flux and power form functions are used. The results obtained with this method show good accuracy when compared with reference values. (author)
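
    The final reconstruction step described above, modulating the smooth homogeneous flux by a heterogeneous form function, can be sketched as follows. The assembly size, nodal flux shape, and form-function values are hypothetical placeholders, not data from the paper.

```python
import numpy as np

n = 17                                        # hypothetical 17x17 pin lattice
x = np.linspace(-1.0, 1.0, n)
X, Y = np.meshgrid(x, x)
phi_hom = np.cos(np.pi * X / 4) * np.cos(np.pi * Y / 4)   # smooth nodal flux shape

rng = np.random.default_rng(1)
form = 1.0 + 0.05 * rng.standard_normal((n, n))   # heterogeneous form function
form /= form.mean()                               # normalized to unit mean

# Pin-wise power: homogeneous flux times local form function, renormalized
# so the assembly-average pin power matches the nodal average
pin_power = phi_hom * form
pin_power *= phi_hom.mean() / pin_power.mean()
```

    In practice the form function comes from a single-assembly lattice calculation, and the homogeneous flux from the nodal solution with the boundary conditions described in the abstract.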

  9. A Dictionary Learning Method with Total Generalized Variation for MRI Reconstruction.

    Science.gov (United States)

    Lu, Hongyang; Wei, Jingbo; Liu, Qiegen; Wang, Yuhao; Deng, Xiaohua

    2016-01-01

    Reconstructing images from noisy and incomplete measurements is always a challenge, especially for medical MR images with important details and features. This work proposes a novel dictionary learning model that integrates two sparse regularization methods: the total generalized variation (TGV) approach and adaptive dictionary learning (DL). In the proposed method, the TGV selectively regularizes different image regions at different levels, largely avoiding oil-painting artifacts. At the same time, the dictionary learning adaptively and sparsely represents the image features and effectively recovers image details. The proposed model is solved by a variable splitting technique and the alternating direction method of multipliers. Extensive simulation results demonstrate that the proposed method consistently recovers MR images efficiently and outperforms the current state-of-the-art approaches in terms of higher PSNR and lower HFEN values.
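
    The dictionary-learning term relies on sparse coding of image patches. As a hedged illustration of that subproblem only (the paper itself uses variable splitting and ADMM for the full model), the sketch below solves a small l1-regularized least-squares problem over a random dictionary with the classic ISTA iteration.

```python
import numpy as np

def ista(D, y, lam=0.01, n_iter=500):
    """Minimize 0.5*||D x - y||^2 + lam*||x||_1 by iterative soft-thresholding."""
    step = 1.0 / np.linalg.norm(D, 2) ** 2        # 1/L with L = ||D||_2^2
    x = np.zeros(D.shape[1])
    for _ in range(n_iter):
        g = x - step * (D.T @ (D @ x - y))        # gradient step on the quadratic
        x = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)  # soft threshold
    return x

rng = np.random.default_rng(0)
D = rng.standard_normal((64, 128))
D /= np.linalg.norm(D, axis=0)                    # unit-norm dictionary atoms
x_true = np.zeros(128)
x_true[[3, 40, 99]] = [1.5, -2.0, 1.0]            # a 3-sparse patch code
y = D @ x_true
x_hat = ista(D, y)
```

    With a small penalty, the recovered code concentrates on the true support; in the full model this sparse-coding step alternates with dictionary updates and the TGV-regularized image update.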

  10. Novel Direction Of Arrival Estimation Method Based on Coherent Accumulation Matrix Reconstruction

    Directory of Open Access Journals (Sweden)

    Li Lei

    2015-04-01

    Full Text Available Based on coherent accumulation matrix reconstruction, a novel decorrelation method for Direction Of Arrival (DOA) estimation of coherent signals is proposed for small sample sizes. First, the Signal to Noise Ratio (SNR) is improved by performing a coherent accumulation operation on the array's observed data. Then, according to the structural characteristics of the accumulated snapshot vector, an equivalent covariance matrix, whose rank is the same as the number of array elements, is constructed. The rank of this matrix is proved to be determined only by the number of incident signals, which realizes the decorrelation of coherent signals. Compared with the spatial smoothing method, the proposed method performs better, effectively avoiding aperture loss while retaining high resolution and low computational complexity. Simulation results demonstrate the efficiency of the proposed method.

  11. Optimal current waveforms for brushless permanent magnet motors

    Science.gov (United States)

    Moehle, Nicholas; Boyd, Stephen

    2015-07-01

    In this paper, we give energy-optimal current waveforms for a permanent magnet synchronous motor that result in a desired average torque. Our formulation generalises previous work by including a general back-electromotive force (EMF) wave shape, voltage and current limits, an arbitrary phase winding connection, a simple eddy current loss model, and a trade-off between power loss and torque ripple. Determining the optimal current waveforms requires solving a small convex optimisation problem. We show how to use the alternating direction method of multipliers to find the optimal current in milliseconds or hundreds of microseconds, depending on the processor used, which allows the possibility of generating optimal waveforms in real time. This allows us to adapt in real time to changes in the operating requirements or in the model, such as a change in resistance with winding temperature, or even gross changes like the failure of one winding. Suboptimal waveforms are available in tens or hundreds of microseconds, allowing for quick response after abrupt changes in the desired torque. We demonstrate our approach on a simple numerical example, in which we give the optimal waveforms for a motor with a sinusoidal back-EMF, and for a motor with a more complicated, nonsinusoidal waveform, in both the constant-torque region and constant-power region.
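
    In its simplest unconstrained form, the energy/torque trade-off has a closed-form solution that the paper's convex formulation generalises (adding voltage and current limits, eddy-loss and ripple terms, and general winding connections). Assuming equal phase resistances and a sinusoidal back-EMF at one fixed rotor position (illustrative values only), minimizing resistive loss subject to a torque equality gives a current proportional to the back-EMF shape:

```python
import numpy as np

theta = 0.7                                    # rotor angle (rad), illustrative
k = np.array([np.sin(theta),
              np.sin(theta - 2 * np.pi / 3),
              np.sin(theta + 2 * np.pi / 3)])  # per-phase torque constants (N*m/A)
tau = 2.0                                      # desired average torque (N*m)

# Minimize sum(R * i^2) subject to k @ i = tau (equal phase resistance R):
# the Lagrange / least-norm solution is i = tau * k / ||k||^2.
i_opt = tau * k / (k @ k)
```

    Any other feasible current differs from i_opt by a vector in the null space of k and therefore has strictly higher resistive loss; the paper's ADMM solver handles the cases where limits and extra loss terms make such a closed form unavailable.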

  12. Estimation of airway obstruction using oximeter plethysmograph waveform data

    Directory of Open Access Journals (Sweden)

    Desmond Renee' A

    2005-06-01

    Full Text Available Abstract Background Validated measures to assess the severity of airway obstruction in patients with obstructive airway disease are limited. Changes in the pulse oximeter plethysmograph waveform represent fluctuations in arterial flow. Analysis of these fluctuations might be useful clinically if they represent physiologic perturbations resulting from airway obstruction. We tested the hypothesis that the severity of airway obstruction could be estimated using plethysmograph waveform data. Methods Using a closed airway circuit with adjustable inspiratory and expiratory pressure relief valves, airway obstruction was induced in a prospective convenience sample of 31 healthy adult subjects. Maximal change in airway pressure at the mouthpiece was used as a surrogate measure of the degree of obstruction applied. Plethysmograph waveform data and mouthpiece airway pressure were acquired for 60 seconds at increasing levels of inspiratory and expiratory obstruction. At each level of applied obstruction, mean values for maximal change in waveform area under the curve and height as well as maximal change in mouth pressure were calculated for sequential 7.5 second intervals. Correlations of these waveform variables with mouth pressure values were then performed to determine if the magnitude of changes in these variables indicates the severity of airway obstruction. Results There were significant relationships between maximal change in area under the curve (P Conclusion The findings suggest that mathematic interpretation of plethysmograph waveform data may estimate the severity of airway obstruction and be of clinical utility in objective assessment of patients with obstructive airway diseases.
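
    The per-interval waveform metrics described in the Methods can be sketched on synthetic data. The sample rate, modulation model, and signal shape below are invented for illustration and are not the study's recordings; only the 60-second duration and 7.5-second interval length follow the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

fs = 100                          # sample rate in Hz (hypothetical)
seg = int(7.5 * fs)               # 7.5 s analysis intervals, as in the study
t = np.arange(60 * fs) / fs       # 60 s of data

# Synthetic plethysmograph: a pulse oscillation whose amplitude is modulated
# by a slow component standing in for obstruction-induced fluctuations
modulation = 1.0 + 0.5 * np.sin(2 * np.pi * 0.05 * t)
pleth = modulation * (1.0 + np.sin(2 * np.pi * 1.2 * t)) \
        + 0.01 * rng.standard_normal(t.size)

# Per-interval metrics: area under the curve and peak-to-trough height
n_seg = t.size // seg
chunks = [pleth[i * seg:(i + 1) * seg] for i in range(n_seg)]
auc = np.array([c.sum() / fs for c in chunks])     # rectangle-rule AUC
height = np.array([np.ptp(c) for c in chunks])

# Correlate the height metric with the known modulation level per interval
mod_level = np.array([m.max() for m in np.array_split(modulation, n_seg)])
r = np.corrcoef(height, mod_level)[0, 1]
```

    In the study, the analogous correlation is computed against the measured change in mouth pressure rather than a known modulation.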

  13. Biomass Estimation for Individual Trees using Waveform LiDAR

    Science.gov (United States)

    Wang, K.; Kumar, P.; Dutta, D.

    2015-12-01

    Vegetation biomass information is important for many ecological models that include terrestrial vegetation in their simulations. Biomass strongly influences carbon, water, and nutrient cycles. Traditionally, biomass estimation requires intensive, and often destructive, field measurements. However, with advances in technology, airborne LiDAR has become a convenient tool for acquiring such information on a large scale. In this study, we use infrared full waveform LiDAR to estimate biomass for individual trees in the Sangamon River basin in Illinois, USA. During this process, we also develop automated geolocation calibration algorithms for raw waveform LiDAR data. In the summer of 2014, discrete and waveform LiDAR data were collected over the Sangamon River basin. Field measurements commonly used in biomass equations, such as diameter at breast height and total tree height, were also taken for four sites across the basin. Using discrete LiDAR data, individual trees are delineated. For each tree, a voxelization method is applied to all waveforms associated with the tree to produce a pseudo-waveform. By relating biomass extrapolated from field measurements for a training set of trees to waveform metrics for each corresponding tree, we are able to estimate biomass on an individual tree basis. The results can be especially useful as current models increase in resolution.
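
    The voxelization step, collapsing all waveform samples for one tree into a single pseudo-waveform over height bins, might look like the following sketch. The heights, energies, and 0.5 m bin width are hypothetical, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical return samples for one delineated tree: each waveform sample
# has a height above ground (m) and a received energy value
heights = rng.uniform(0.0, 20.0, 5000)
energy = np.exp(-0.5 * ((heights - 12.0) / 3.0) ** 2) + 0.05 * rng.random(5000)

# Voxelize into 0.5 m height bins: sum the energy falling in each bin to
# form a single pseudo-waveform for the tree
bin_edges = np.arange(0.0, 20.5, 0.5)
pseudo, _ = np.histogram(heights, bins=bin_edges, weights=energy)

# An example waveform metric often regressed against biomass
h_centers = 0.5 * (bin_edges[:-1] + bin_edges[1:])
energy_weighted_height = np.sum(h_centers * pseudo) / np.sum(pseudo)
```

    Metrics such as the energy-weighted height are then regressed against field-derived biomass for the training trees.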

  14. Influential factors for pressure pulse waveform in healthy young adults.

    Science.gov (United States)

    Du, Yi; Wang, Ling; Li, Shuyu; Zhi, Guang; Li, Deyu; Zhang, Chi

    2015-01-01

    The effects of gender and other contributory factors on the pulse waveform are still debated. Given that previous results differ, often owing to limited consideration of possible influential factors, despite general agreement that gender relates to the pulse waveform, this study aims to address the confounding factors interfering with the association between gender and pulse waveform characteristics. A novel method was proposed to noninvasively detect the pressure pulse wave and assess the morphology of the pulse wave. Forty healthy young subjects were included in the present research. Height, weight, systolic blood pressure (SBP), and diastolic blood pressure (DBP) were measured manually, and body mass index (BMI), pulse pressure (PP), and heart rate (HR) were calculated automatically. Student's t test was used to analyze the gender difference and analysis of variance (ANOVA) to examine the effects of intrinsic factors. Univariate regression analysis was performed to assess the main factors affecting the waveform characteristics. Waveform features were found to differ significantly between genders. However, this study indicates that the main factors for time-related and amplitude-related parameters are HR and SBP, respectively. In conclusion, the impact of HR and SBP on pulse waveform features should not be underestimated, especially when analyzing gender differences.
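
    The univariate analysis described above can be sketched with synthetic data in which a time-related feature is driven by HR and an amplitude-related feature by SBP, mirroring the study's conclusion. All effect sizes and noise levels below are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 40                                        # forty subjects, as in the study
hr = rng.normal(70, 8, n)                     # heart rate (bpm), hypothetical
sbp = rng.normal(115, 10, n)                  # systolic BP (mmHg), hypothetical

# Hypothetical waveform features: a time-related parameter scaling with the
# beat period (60/HR) and an amplitude-related parameter scaling with SBP
t_peak = 60.0 / hr * 0.3 + rng.normal(0, 0.005, n)      # s
amp = 0.008 * sbp + rng.normal(0, 0.01, n)              # arbitrary units

def r_squared(x, y):
    """Coefficient of determination of a univariate linear fit."""
    slope, intercept = np.polyfit(x, y, 1)
    resid = y - (slope * x + intercept)
    return 1 - resid.var() / y.var()

r2_time_hr = r_squared(hr, t_peak)
r2_amp_sbp = r_squared(sbp, amp)
```

    With data generated this way, HR explains most of the variance of the time-related feature and SBP most of the amplitude-related one, which is the pattern the study reports.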

  15. ALTERNATIVE METHODS TO MULTIPLE CORRESPONDENCE ANALYSIS IN RECONSTRUCTING THE RELEVANT INFORMATION IN A BURT'S TABLE

    Directory of Open Access Journals (Sweden)

    Sergio Camiz

    2016-04-01

    Full Text Available ABSTRACT In this work, the reconstruction of the Burt table by Greenacre's (1988) Joint Correspondence Analysis (JCA) and Gower and Hand's (1996) Extended Matching Coefficient (EMC) is compared to Multiple Correspondence Analysis (MCA) in order to check the quality of the methods. In particular, the ability to reconstruct the whole table, the diagonal tables, and the off-diagonal tables is considered separately, that is, the ability to describe each character's distribution, the interaction between pairs of characters, or both. The theoretical aspects are discussed first, and then the results obtained in an application are shown and discussed.
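
    For readers unfamiliar with the object being reconstructed: the Burt table is B = Z^T Z for a stacked indicator matrix Z, with per-variable category counts in the diagonal blocks and two-way contingency tables in the off-diagonal blocks (the part JCA fits specifically). A minimal sketch with invented data:

```python
import numpy as np

# Two categorical variables observed on 6 individuals (hypothetical data)
var_a = np.array([0, 1, 1, 0, 2, 2])    # 3 categories
var_b = np.array([1, 0, 1, 1, 0, 0])    # 2 categories

def indicator(codes, n_cat):
    """0/1 indicator (dummy) matrix for one categorical variable."""
    Z = np.zeros((codes.size, n_cat))
    Z[np.arange(codes.size), codes] = 1.0
    return Z

# Stacked indicator matrix and the Burt table B = Z^T Z
Z = np.hstack([indicator(var_a, 3), indicator(var_b, 2)])
B = Z.T @ Z
```

    MCA diagonalizes (a scaled form of) B as a whole, which is why it over-weights the trivial diagonal blocks; JCA and the EMC differ precisely in how they treat those blocks.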

  16. Reconstruction of the number and positions of dipoles and quadrupoles using an algebraic method

    Energy Technology Data Exchange (ETDEWEB)

    Nara, Takaaki [University of Electro-Communications, 1-5-1, Chofugaoka, Chofu-city, Tokyo, 182-8585 (Japan)], E-mail: nara@mce.uec.ac.jp

    2008-11-01

    Localization of dipoles and quadrupoles is important in inverse potential analysis, since they can effectively express spatially extended sources with a small number of parameters. This paper proposes an algebraic method for reconstructing the pole positions, as well as the number of dipole-quadrupoles, without an initial parameter guess or iterative computation of forward solutions. It is also shown that a magnetoencephalography inverse problem with a source model of dipole-quadrupoles in 3D space reduces to the same problem as in 2D space.

  17. Design Consideration and Reconstruction Method for Double-source Double-multislice Spiral CT

    Institute of Scientific and Technical Information of China (English)

    LIU Zun-gang; ZHAO Jun; ZHUANG Tian-ge

    2007-01-01

    To accelerate the scan speed and improve the image quality, a new type of CT configuration, "double-source double-multislice spiral CT" (DSDMS-CT), based on two sets of single-source multislice spiral CT, was proposed together with a special reconstruction algorithm. Simulation results using the fan-beam filtered backprojection algorithm with a special interpolation method were presented for both single-source multislice spiral CT and DSDMS-CT. The results for the new CT model show that it scans faster than the traditional spiral CT and has a better slice sensitivity profile (SSP) at larger pitch values.

  18. A method for brain 3D surface reconstruction from MR images

    Science.gov (United States)

    Zhao, De-xin

    2014-09-01

    Because the encephalic tissues are highly irregular, three-dimensional (3D) modeling of the brain usually leads to complicated computation. In this paper, we explore an efficient method for brain surface reconstruction from magnetic resonance (MR) images of the head, which is helpful for surgical planning and tumor localization. A heuristic algorithm is proposed for surface triangle mesh generation with preserved features, with the diagonal length used as the heuristic information to optimize the shape of the triangles. The experimental results show that our approach not only reduces the computational complexity, but also completes the 3D visualization with good quality.
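
    A diagonal-length heuristic for triangle shape can be illustrated with the common shorter-diagonal rule for splitting a quad. This sketch is a generic stand-in for the idea, not the paper's full mesh-generation algorithm, and the quad coordinates are invented.

```python
import numpy as np

def tri_area(a, b, c):
    """Area of a 2D triangle via the cross-product magnitude."""
    a, b, c = map(np.asarray, (a, b, c))
    u, v = b - a, c - a
    return 0.5 * abs(u[0] * v[1] - u[1] * v[0])

def split_quad(p0, p1, p2, p3):
    """Split a convex quad (vertices in order) into two triangles along the
    shorter diagonal -- a simple heuristic favouring well-shaped triangles."""
    d02 = np.linalg.norm(np.asarray(p0) - np.asarray(p2))
    d13 = np.linalg.norm(np.asarray(p1) - np.asarray(p3))
    if d02 <= d13:
        return [(p0, p1, p2), (p0, p2, p3)]
    return [(p1, p2, p3), (p1, p3, p0)]

# A skewed quad where the choice of diagonal matters
quad = [(0.0, 0.0), (4.0, 0.1), (5.0, 1.0), (0.5, 0.9)]
tris = split_quad(*quad)
```

    Choosing the shorter diagonal avoids the long, thin triangles that the other split would produce, while either split covers the same area.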

  19. Low dose dynamic CT myocardial perfusion imaging using a statistical iterative reconstruction method

    Energy Technology Data Exchange (ETDEWEB)

    Tao, Yinghua [Department of Medical Physics, University of Wisconsin-Madison, Madison, Wisconsin 53705 (United States); Chen, Guang-Hong [Department of Medical Physics and Department of Radiology, University of Wisconsin-Madison, Madison, Wisconsin 53705 (United States); Hacker, Timothy A.; Raval, Amish N. [Department of Medicine, University of Wisconsin-Madison, Madison, Wisconsin 53792 (United States); Van Lysel, Michael S.; Speidel, Michael A., E-mail: speidel@wisc.edu [Department of Medical Physics and Department of Medicine, University of Wisconsin-Madison, Madison, Wisconsin 53705 (United States)

    2014-07-15

    Purpose: Dynamic CT myocardial perfusion imaging has the potential to provide both functional and anatomical information regarding coronary artery stenosis. However, radiation dose can be potentially high due to repeated scanning of the same region. The purpose of this study is to investigate the use of statistical iterative reconstruction to improve parametric maps of myocardial perfusion derived from a low tube current dynamic CT acquisition. Methods: Four pigs underwent high (500 mA) and low (25 mA) dose dynamic CT myocardial perfusion scans with and without coronary occlusion. To delineate the affected myocardial territory, an N-13 ammonia PET perfusion scan was performed for each animal in each occlusion state. Filtered backprojection (FBP) reconstruction was first applied to all CT data sets. Then, a statistical iterative reconstruction (SIR) method was applied to data sets acquired at low dose. Image voxel noise was matched between the low dose SIR and high dose FBP reconstructions. CT perfusion maps were compared among the low dose FBP, low dose SIR and high dose FBP reconstructions. Numerical simulations of a dynamic CT scan at high and low dose (20:1 ratio) were performed to quantitatively evaluate SIR and FBP performance in terms of flow map accuracy, precision, dose efficiency, and spatial resolution. Results: For in vivo studies, the 500 mA FBP maps gave −88.4%, −96.0%, −76.7%, and −65.8% flow change in the occluded anterior region compared to the open-coronary scans (four animals). The percent changes in the 25 mA SIR maps were in good agreement, measuring −94.7%, −81.6%, −84.0%, and −72.2%. The 25 mA FBP maps gave unreliable flow measurements due to streaks caused by photon starvation (percent changes of +137.4%, +71.0%, −11.8%, and −3.5%). Agreement between 25 mA SIR and 500 mA FBP global flow was −9.7%, 8.8%, −3.1%, and 26.4%. The average variability of flow measurements in a nonoccluded region was 16.3%, 24.1%, and 937

  20. A Novel Method to Identify Inrush Current Based on Asymmetric Characteristics of Waveform

    Institute of Scientific and Technical Information of China (English)

    孙洋; 肖勇

    2011-01-01

    Analyzing the waveform characteristics of a transformer in various situations, this paper proposes a new method that uses the parallelism of a quadrilateral to identify whether an inrush current has occurred. When an inrush current occurs, the waveform shows severe asymmetry between the upper and lower as well as the leading and trailing half-waves, so the constructed quadrilateral is highly irregular. In contrast, when an internal fault occurs, the constructed quadrilateral is approximately a parallelogram. The degree of parallelism can therefore effectively distinguish inrush currents from internal faults. Dynamic model test results indicate that the proposed method is reliable, and sensitive even to minor turn-to-turn faults.

  1. Microwave reconstruction method using a circular antenna array cooperating with an internal transmitter

    Science.gov (United States)

    Zhou, Huiyuan; Narayanan, Ram M.; Balasingham, Ilangko

    2016-05-01

    This paper addresses the detection and imaging of a small tumor underneath the inner surface of the human intestine. The proposed system consists of an around-body antenna array cooperating with a capsule carrying a radio frequency (RF) transmitter located within the human body. This paper presents a modified Levenberg-Marquardt algorithm to reconstruct the dielectric profile with this new system architecture. Each antenna around the body acts both as a transmitter and as a receiver for the remaining array elements. In addition, each antenna also acts as a receiver for the capsule transmitter inside the body, collecting additional data that cannot be obtained from the conventional system. In this paper, the synthetic data are collected from biological objects, simulated for circular phantoms using CST Studio software. For the imaging part, the Levenberg-Marquardt algorithm, a Newton-type inversion method, is chosen to reconstruct the dielectric profile of the objects. The imaging process involves a two-part innovation. The first part is the use of a dual mesh method, which builds a dense mesh grid in the region around the transmitter and a coarse mesh for the remaining area. The second part is the modification of the Levenberg-Marquardt method to use the additional data collected from the inside transmitter. The results show that the new system with the new imaging algorithm can obtain high resolution images even for small tumors.
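
    The Levenberg-Marquardt update at the core of such reconstructions, a damped Gauss-Newton step, can be sketched on a toy fitting problem. The exponential model below stands in for the actual forward scattering model; no dual mesh or capsule data is modeled, and all values are illustrative.

```python
import numpy as np

def levenberg_marquardt(model, jac, p0, y, lam=1e-2, n_iter=50):
    """Minimal damped Gauss-Newton loop:
    p <- p + (J^T J + lam*I)^-1 J^T (y - model(p))."""
    p = np.asarray(p0, dtype=float)
    for _ in range(n_iter):
        r = y - model(p)
        J = jac(p)
        step = np.linalg.solve(J.T @ J + lam * np.eye(p.size), J.T @ r)
        p_new = p + step
        if np.sum((y - model(p_new)) ** 2) < np.sum(r ** 2):
            p, lam = p_new, lam / 2     # accept; move toward Gauss-Newton
        else:
            lam *= 4                    # reject; move toward gradient descent
    return p

# Toy "reconstruction": recover the parameters of y = a*exp(-b*t)
t = np.linspace(0.0, 2.0, 30)
p_true = np.array([2.0, 1.3])
y = p_true[0] * np.exp(-p_true[1] * t)

model = lambda p: p[0] * np.exp(-p[1] * t)
jac = lambda p: np.column_stack([np.exp(-p[1] * t),
                                 -p[0] * t * np.exp(-p[1] * t)])
p_hat = levenberg_marquardt(model, jac, [1.0, 0.5], y)
```

    The paper's modification concerns how the residual vector and Jacobian are assembled (adding the rows contributed by the internal transmitter), not the damped update itself.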

  2. A Novel Method of Orbital Floor Reconstruction Using Virtual Planning, 3-Dimensional Printing, and Autologous Bone.

    Science.gov (United States)

    Vehmeijer, Maarten; van Eijnatten, Maureen; Liberton, Niels; Wolff, Jan

    2016-08-01

    Fractures of the orbital floor are often a result of traffic accidents or interpersonal violence. To date, numerous materials and methods have been used to reconstruct the orbital floor. However, simple and cost-effective 3-dimensional (3D) printing technologies for the treatment of orbital floor fractures are still sought. This study describes a simple, precise, cost-effective method of treating orbital fractures using 3D printing technologies in combination with autologous bone. Enophthalmos and diplopia developed in a 64-year-old female patient with an orbital floor fracture. A virtual 3D model of the fracture site was generated from computed tomography images of the patient. The fracture was virtually closed using spline interpolation. Furthermore, a virtual individualized mold of the defect site was created, which was manufactured using an inkjet printer. The tangible mold was subsequently used during surgery to sculpture an individualized autologous orbital floor implant. Virtual reconstruction of the orbital floor and the resulting mold enhanced the overall accuracy and efficiency of the surgical procedure. The sculptured autologous orbital floor implant showed an excellent fit in vivo. The combination of virtual planning and 3D printing offers an accurate and cost-effective treatment method for orbital floor fractures.

  3. Fast and accurate generation method of PSF-based system matrix for PET reconstruction

    Science.gov (United States)

    Sun, Xiao-Li; Liu, Shuang-Quan; Yun, Ming-Kai; Li, Dao-Wu; Gao, Juan; Li, Mo-Han; Chai, Pei; Tang, Hao-Hui; Zhang, Zhi-Ming; Wei, Long

    2017-04-01

    This work investigates the positional single photon incidence response (P-SPIR) to provide an accurate point spread function (PSF)-contained system matrix and its incorporation within the image reconstruction framework. Based on the Geant4 Application for Emission Tomography (GATE) simulation, P-SPIR theory takes both incidence angle and incidence position of the gamma photon into account during crystal subdivision, instead of only taking the former into account, as in single photon incidence response (SPIR). The response distribution obtained in this fashion was validated using Monte Carlo simulations. In addition, two-block penetration and normalization of the response probability are introduced to improve the accuracy of the PSF. With the incorporation of the PSF, the homogenization model is then analyzed to calculate the spread distribution of each line-of-response (LOR). A primate PET scanner, Eplus-260, developed by the Institute of High Energy Physics, Chinese Academy of Sciences (IHEP), was employed to evaluate the proposed method. The reconstructed images indicate that the P-SPIR method can effectively mitigate the depth-of-interaction (DOI) effect, especially at the peripheral area of field-of-view (FOV). Furthermore, the method can be applied to PET scanners with any other structures and list-mode data format with high flexibility and efficiency. Supported by National Natural Science Foundation of China (81301348) and China Postdoctoral Science Foundation (2015M570154)

  4. Evaluation of a direct 4D reconstruction method using generalised linear least squares for estimating nonlinear micro-parametric maps.

    Science.gov (United States)

    Angelis, Georgios I; Matthews, Julian C; Kotasidis, Fotis A; Markiewicz, Pawel J; Lionheart, William R; Reader, Andrew J

    2014-11-01

    Estimation of nonlinear micro-parameters is a computationally demanding and fairly challenging process, since it involves the use of rather slow iterative nonlinear fitting algorithms and it often results in very noisy voxel-wise parametric maps. Direct reconstruction algorithms can provide parametric maps with reduced variance, but usually the overall reconstruction is impractically time consuming with common nonlinear fitting algorithms. In this work we employed a recently proposed direct parametric image reconstruction algorithm to estimate the parametric maps of all micro-parameters of a two-tissue compartment model, used to describe the kinetics of [18F]FDG. The algorithm decouples the tomographic and the kinetic modelling problems, allowing the use of previously developed post-reconstruction methods, such as the generalised linear least squares (GLLS) algorithm. Results on both clinical and simulated data showed that the proposed direct reconstruction method provides considerable quantitative and qualitative improvements for all micro-parameters compared to the conventional post-reconstruction fitting method. Additionally, region-wise comparison of all parametric maps against the well-established filtered back projection followed by post-reconstruction non-linear fitting, as well as the direct Patlak method, showed substantial quantitative agreement in all regions. The proposed direct parametric reconstruction algorithm is a promising approach towards the estimation of all individual micro-parameters of any compartment model. In addition, due to the linearised nature of the GLLS algorithm, the fitting step can be very efficiently implemented and, therefore, it does not considerably affect the overall reconstruction time.

  5. Boundary Element Method for Reconstructing Absorption and Diffusion Coefficients of Biological Tissues in DOT/MicroCT Imaging.

    Science.gov (United States)

    Xie, Wenhao; Deng, Yong; Lian, Lichao; Yan, Dongmei; Yang, Xiaoquan; Luo, Qingming

    2016-01-01

    DOT (Diffuse Optical Tomography)/MicroCT imaging can provide both the functional information of biological tissues, namely the absorption and diffusion coefficients, and their structural information. In this paper, we use the boundary element method to solve the forward problem of DOT based on the structural prior given by the MicroCT, and then reconstruct the absorption and diffusion coefficients of different biological tissues with the Levenberg-Marquardt algorithm. The method only requires surface meshing, reducing the complexity of the calculation; in addition, it reconstructs a single value within each organ, which reduces the ill-posedness of the inverse problem and gives the reconstruction results good noise stability. This indicates that boundary element method-based reconstruction can serve as a new scheme for obtaining absorption and diffusion coefficients in DOT/MicroCT multimodality imaging.

  6. Online monitoring of gas-solid two-phase flow using projected CG method in ECT image reconstruction

    Institute of Scientific and Technical Information of China (English)

    Qi Wang; Chengyi Yang; Huaxiang Wang; Ziqiang Cui; Zhentao Gao

    2013-01-01

    Electrical capacitance tomography (ECT) is a promising technique for multi-phase flow measurement due to its high speed, low cost and non-intrusive sensing. Image reconstruction for ECT is an inverse problem of finding the permittivity distribution of an object by measuring the electrical capacitances between sets of electrodes placed around its periphery. The conjugate gradient (CG) method is a popular image reconstruction method for ECT, in spite of its low convergence rate. In this paper, an advanced version of the CG method, the projected CG method, is used for image reconstruction of an ECT system. The solution space is projected into the Krylov subspace and the inverse problem is solved by the CG method in a low-dimensional specific subspace. Both static and dynamic experiments were carried out for gas-solid two-phase flows. The flow regimes are identified using the reconstructed images obtained with the projected CG method. The results obtained indicate that the projected CG method improves the quality of reconstructed images and dramatically reduces computation time, as compared to the traditional sensitivity, Landweber, and CG methods. Furthermore, the projected CG method was also used to estimate important parameters of the pneumatic conveying process, such as the volume concentration, flow velocity and mass flow rate of the solid phase. Therefore, the projected CG method is considered suitable for online gas-solid two-phase flow measurement.
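
    Solving the linearized ECT inverse problem by CG in a Krylov subspace can be illustrated with conjugate gradients applied to the normal equations of a toy sensing model. The sensitivity matrix below is random rather than a real ECT sensitivity map, and the 12-electrode count is only an assumption for sizing the example.

```python
import numpy as np

def cgnr(A, b, n_iter=50):
    """Conjugate gradients on the normal equations A^T A x = A^T b,
    i.e. CG restricted to the Krylov subspace generated by A^T A."""
    x = np.zeros(A.shape[1])
    r = A.T @ (b - A @ x)          # residual of the normal equations
    p = r.copy()
    rho = r @ r
    for _ in range(n_iter):
        q = A.T @ (A @ p)
        alpha = rho / (p @ q)
        x += alpha * p
        r -= alpha * q
        rho_new = r @ r
        p = r + (rho_new / rho) * p
        rho = rho_new
    return x

rng = np.random.default_rng(0)
# Toy linear sensing model: a 12-electrode system gives 12*11/2 = 66
# independent capacitance measurements for a 100-pixel image
S = rng.standard_normal((66, 100))
g_true = np.zeros(100)
g_true[30:40] = 1.0                # simple "permittivity" profile
c = S @ g_true
g_rec = cgnr(S, c)
```

    Because the problem is underdetermined, CGNR converges to the minimum-norm solution, which is a smoothed but correlated estimate of the true profile; the projected CG method of the paper works in a lower-dimensional subspace to cut this cost further.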

  7. Comparison of Short-term Complications Between 2 Methods of Coracoclavicular Ligament Reconstruction

    Science.gov (United States)

    Rush, Lane N.; Lake, Nicholas; Stiefel, Eric C.; Hobgood, Edward R.; Ramsey, J. Randall; O’Brien, Michael J.; Field, Larry D.; Savoie, Felix H.

    2016-01-01

    Background: Numerous techniques have been used to treat acromioclavicular (AC) joint dislocation, with anatomic reconstruction of the coracoclavicular (CC) ligaments becoming a popular method of fixation. Anatomic CC ligament reconstruction is commonly performed with cortical fixation buttons (CFBs) or tendon grafts (TGs). Purpose: To report and compare short-term complications associated with AC joint stabilization procedures using CFBs or TGs. Study Design: Cohort study; Level of evidence, 3. Methods: We conducted a retrospective review of the operative treatment of AC joint injuries between April 2007 and January 2013 at 2 institutions. Thirty-eight patients who had undergone a procedure for AC joint instability were evaluated. In these 38 patients with a mean age of 36.2 years, 18 shoulders underwent fixation using the CFB technique and 20 shoulders underwent reconstruction using the TG technique. Results: The overall complication rate was 42.1% (16/38). There were 11 complications in the 18 patients in the CFB group (61.1%), including 7 construct failures resulting in a loss of reduction. The most common mode of failure was suture breakage (n = 3), followed by button migration (n = 2) and coracoid fracture (n = 2). There were 5 complications in the TG group (25%), including 3 cases of asymptomatic subluxation, 1 symptomatic suture granuloma, and 1 superficial infection. There were no instances of construct failure seen in TG fixations. CFB fixation was found to have a statistically significant increase in complications (P = .0243) and construct failure (P = .002) compared with TG fixation. Conclusion: CFB fixation was associated with a higher rate of failure and higher rate of early complications when compared with TG fixation. PMID:27504468

  8. An optimal transport approach for seismic tomography: application to 3D full waveform inversion

    Science.gov (United States)

    Métivier, L.; Brossier, R.; Mérigot, Q.; Oudet, E.; Virieux, J.

    2016-11-01

    The use of optimal transport distance has recently yielded significant progress in image processing for pattern recognition, shape identification, and histogram matching. In this study, the use of this distance is investigated for a seismic tomography problem exploiting the complete waveform: full waveform inversion. In its conventional formulation, this high-resolution seismic imaging method is based on the minimization of the L2 distance between predicted and observed data. Application of this method is generally hampered by the local minima of the associated L2 misfit function, which correspond to velocity models matching the data up to one or several phase shifts. Conversely, the optimal transport distance appears as a more suitable tool to compare the misfit between oscillatory signals, for its ability to detect shifted patterns. However, its application to full waveform inversion is not straightforward, as the mass conservation between the compared data cannot be guaranteed, a crucial assumption for optimal transport. In this study, the use of a distance based on the Kantorovich-Rubinstein norm is introduced to overcome this difficulty. Its mathematical link with the optimal transport distance is made clear. An efficient numerical strategy for its computation, based on a proximal splitting technique, is introduced. We demonstrate that each iteration of the corresponding algorithm requires solving the Poisson equation, for which fast solvers can be used, relying either on the fast Fourier transform or on multigrid techniques. The development of this numerical method makes possible applications to industrial-scale data, involving tens of millions of discrete unknowns. The results we obtain on such large-scale synthetic data illustrate the potential of optimal transport for seismic imaging.
Starting from crude initial velocity models, optimal transport based inversion yields significantly better velocity reconstructions than those based on

  9. Analysis of limb function after various reconstruction methods according to tumor location following resection of pediatric malignant bone tumors

    Directory of Open Access Journals (Sweden)

    Tokuhashi Yasuaki

    2010-05-01

    Full Text Available Abstract Background In the reconstruction of the affected limb in pediatric malignant bone tumors, since the loss of joint function affects the limb-length discrepancy expected in the future, reconstruction methods are needed that not only maximally preserve joint function but also maintain good limb function. We analyzed the limb function achieved by different reconstruction methods, by tumor location, following resection of pediatric malignant bone tumors. Patients and methods We classified the tumors according to their location into 3 types by preoperative MRI, and evaluated the reconstruction methods after wide resection, paying attention to whether joint function could be preserved. The mean age of the patients was 10.6 years; osteosarcoma was observed in 26 patients, Ewing's sarcoma in 3, and PNET (primitive neuroectodermal tumor) and chondrosarcoma (grade 1) in 1 each. Results Type I tumors were those located in the diaphysis, and reconstruction was performed using a vascularized fibular graft (VFG). Type II tumors were those in contact with the epiphyseal line or within 1 cm of it; VFG was performed in 1 and distraction osteogenesis in 1. Type III tumors were those extending from the diaphysis to the epiphysis beyond the epiphyseal line, and a Growing Kotz prosthesis was mainly used (10 patients). The mean functional assessment score was highest for Type I (96%; n = 4) by tumor type and for VFG (99%) by reconstruction method. Conclusion The final functional results were most satisfactory for Types I and II according to tumor location. Biological reconstructions such as VFG and distraction osteogenesis, which require no prosthesis, achieved high scores in the MSTS rating system. Therefore, considering the function of the affected limb, a limb reconstruction method allowing the maximal preservation of joint function should be selected after careful evaluation of the effects of chemotherapy and the location of the tumor.

  10. Characteristics and method of synthesis seismic wave based on wavelet reconstruction

    Institute of Scientific and Technical Information of China (English)

    ZOU Li-hua; LIU Ai-ping; YANG Hong; CHAI Xin-jian; SHANG Xin; DAI Su-liang; DONG Bo

    2007-01-01

    A novel method of synthesizing seismic waves using wavelet reconstruction is proposed and compared with the traditional method based on the theory of the Fourier transform. By adjusting the frequency-band energy and taking it as the criterion, the formula for synthesizing seismic waves is deduced. Using the design parameters specified in the Chinese Seismic Design Code for buildings, seismic waves are synthesized. Moreover, the method of selecting wavelet bases for synthesizing seismic waves and the influence of the damping ratio on the synthesis results are analyzed. The results show that seismic waves synthesized using wavelet bases can represent the characteristics of the seismic wave as well as the ground characteristic period, and exhibit good time-frequency non-stationarity.
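
    The band-energy adjustment at the heart of this synthesis can be sketched with a single-level Haar wavelet transform in plain numpy. This is a toy illustration under assumed values (a random seed motion and an arbitrary target band energy of 50.0), not the paper's multi-band procedure:

```python
import numpy as np

def haar_dwt(x):
    # Single-level orthogonal Haar transform: approximation and detail bands.
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return a, d

def haar_idwt(a, d):
    # Inverse single-level Haar transform (perfect reconstruction).
    x = np.empty(2 * a.size)
    x[0::2] = (a + d) / np.sqrt(2.0)
    x[1::2] = (a - d) / np.sqrt(2.0)
    return x

def synthesize(seed_motion, target_detail_energy):
    # Rescale the detail-band energy to the target value, then reconstruct.
    a, d = haar_dwt(seed_motion)
    scale = np.sqrt(target_detail_energy / np.sum(d ** 2))
    return haar_idwt(a, scale * d)

rng = np.random.default_rng(0)
seed = rng.standard_normal(1024)
wave = synthesize(seed, target_detail_energy=50.0)
a, d = haar_dwt(wave)
print(np.sum(d ** 2))  # detail-band energy now matches the target
```

    Because the Haar transform is orthogonal, rescaling one band changes exactly that band's energy while leaving the other untouched; the real method applies the same idea over many wavelet scales.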

  11. The Helmholtz equation least squares method for reconstructing and predicting acoustic radiation

    CERN Document Server

    Wu, Sean F

    2015-01-01

    This book gives a comprehensive introduction to the Helmholtz Equation Least Squares (HELS) method and its use in diagnosing noise and vibration problems. In contrast to the traditional NAH technologies, the HELS method does not seek an exact solution to the acoustic field produced by an arbitrarily shaped structure. Rather, it attempts to obtain the best approximation of an acoustic field through the expansion of certain basis functions. Therefore, it significantly simplifies the complexities of the reconstruction process, yet still enables one to acquire an understanding of the root causes of different noise and vibration problems that involve arbitrarily shaped surfaces in non-free space using far fewer measurement points than either Fourier acoustics or BEM based NAH. The examples given in this book illustrate that the HELS method may potentially become a practical and versatile tool for engineers to tackle a variety of complex noise and vibration issues in engineering applications.
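
    The core HELS idea, expanding the field over a small set of basis functions and fitting the coefficients to sparse measurements by least squares, can be sketched in one dimension. Plain cosines stand in for the spherical wave functions used by the real method, and the field and measurement points below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)
x_meas = np.sort(rng.uniform(0.0, 1.0, 12))      # a few "measurement" points

# A hypothetical acoustic quantity that happens to lie in the basis span.
field = lambda x: 1.5 * np.cos(np.pi * x) - 0.7 * np.cos(3 * np.pi * x)
p_meas = field(x_meas)

# Basis matrix: cos(k*pi*x) for k = 0..4, one column per basis function.
basis = lambda x: np.stack([np.cos(k * np.pi * x) for k in range(5)], axis=-1)

# Least-squares fit of the expansion coefficients to the measurements.
coef, *_ = np.linalg.lstsq(basis(x_meas), p_meas, rcond=None)

# Predict (reconstruct) the field on a dense grid from the fitted expansion.
x_dense = np.linspace(0.0, 1.0, 200)
err = float(np.max(np.abs(basis(x_dense) @ coef - field(x_dense))))
print(err)
```

    The point of the sketch is that far fewer measurement points than grid points suffice once the field is well represented by the chosen basis, which is exactly the advantage the book claims over measurement-dense NAH approaches.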

  12. 3D shape reconstruction of medical images using a perspective shape-from-shading method

    Science.gov (United States)

    Yang, Lei; Han, Jiu-qiang

    2008-06-01

    A 3D shape reconstruction approach for medical images using a shape-from-shading (SFS) method is proposed in this paper. A new reflectance map equation for medical images was analyzed under the assumption that the Lambertian reflectance surface is irradiated by a point light source located at the light center and that the image is formed under perspective projection. The corresponding static Hamilton-Jacobi (H-J) equation of the reflectance map equation was established, so the shape-from-shading problem turns into solving for the viscosity solution of the static H-J equation. Then, using the concept of a vanishing viscosity approximation, the Lax-Friedrichs fast sweeping numerical method was used to compute the viscosity solution of the H-J equation, and a new iterative SFS algorithm was obtained. Finally, experiments on both synthetic images and real medical images were performed to illustrate the efficiency of the proposed SFS method.

  13. A method for precise charge reconstruction with pixel detectors using binary hit information

    CERN Document Server

    Pohl, David-Leon; Hemperek, Tomasz; Hügging, Fabian; Wermes, Norbert

    2014-01-01

    A method is presented to precisely reconstruct charge spectra with pixel detectors using binary hit information of individual pixels. The method is independent of the charge information provided by the readout circuitry and has a resolution mainly limited by the electronic noise. It relies on the ability to change the detection threshold in small steps while counting hits from a particle source. The errors are addressed and the performance of the method is shown based on measurements with the ATLAS pixel chip FE-I4 bump bonded to a 230 µm 3D-silicon sensor. Charge spectra from radioactive sources and from electron beams are presented, serving as examples. It is demonstrated that a charge resolution (σ < 200 e) close to the electronic noise of the ATLAS FE-I4 pixel chip can be achieved.
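
    The threshold-scan principle described here is easy to simulate: the hit count above a threshold is the integral of the charge spectrum, so the spectrum itself is recovered as the negative derivative of counts versus threshold. The sketch below uses invented numbers (a 10000 e mean charge, 150 e noise) and is not the authors' code:

```python
import numpy as np

rng = np.random.default_rng(1)
# Simulated pixel charges (in electrons) from a monoenergetic source,
# smeared by electronic noise; values are illustrative only.
charges = rng.normal(loc=10000.0, scale=150.0, size=200_000)

# Scan the detection threshold in small steps; at each step, count the
# hits above threshold (all a binary pixel readout can report).
thresholds = np.arange(9000.0, 11000.0, 25.0)
counts = np.array([(charges > t).sum() for t in thresholds], dtype=float)

# The integral counts drop by exactly the hits falling inside each
# threshold bin, so the differential spectrum is the negative derivative.
spectrum = -np.gradient(counts, thresholds)
peak = float(thresholds[np.argmax(spectrum)])
print(peak)  # close to the true mean charge of 10000 e
```

    Note that no per-pixel charge measurement is used anywhere; the resolution of the recovered spectrum is set by the threshold step and the noise, matching the abstract's claim.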

  14. Stereoscopic vision-based robotic manipulator extraction method for enhanced soft tissue reconstruction.

    Science.gov (United States)

    Kowalczuk, Jędrzej; Psota, Eric; Pérez, Lance C

    2013-01-01

    The availability of digital stereoscopic video feedback on surgical robotic platforms allows for a variety of enhancements through the application of computer vision. Several of these enhancements, such as augmented reality and semi-automated surgery, benefit significantly from identification of the robotic manipulators within the field of view. A method is presented for the extraction of robotic manipulators from stereoscopic views of the operating field that uses a combination of marker tracking, inverse kinematics, and computer rendering. This method is shown to accurately identify the locations of the manipulators within the views. It is further demonstrated that this method can be used to enhance 3D reconstruction of the operating field and produce augmented views.

  15. An efficient de-convolution reconstruction method for spatiotemporal-encoding single-scan 2D MRI.

    Science.gov (United States)

    Cai, Congbo; Dong, Jiyang; Cai, Shuhui; Li, Jing; Chen, Ying; Bao, Lijun; Chen, Zhong

    2013-03-01

    The spatiotemporal-encoding single-scan MRI method is relatively insensitive to field inhomogeneity compared to the EPI method. The conjugate gradient (CG) method has been used to reconstruct super-resolved images from the original blurred ones based on coarse magnitude calculation. In this article, a new de-convolution reconstruction method is proposed. By removing the quadratic phase modulation from the signal acquired with spatiotemporal-encoding MRI, the signal can be described as a convolution of the desired super-resolved image with a point spread function. The de-convolution method proposed herein is not only simpler than the CG method, but also provides super-resolved images of better quality. This new reconstruction method may make the spatiotemporal-encoding 2D MRI technique more valuable for clinical applications. Copyright © 2013 Elsevier Inc. All rights reserved.
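
    Once the signal is written as a convolution of the desired image with a point spread function, recovery reduces to a regularized de-convolution. A minimal 1D Fourier-domain sketch (with an invented Gaussian profile and PSF, and without the quadratic-phase removal step the real method performs first):

```python
import numpy as np

def deconvolve(blurred, psf, eps=1e-6):
    # Regularized Fourier-domain de-convolution: divide the spectra while
    # damping frequencies where the PSF response is small (Wiener-like).
    B = np.fft.fft(blurred)
    H = np.fft.fft(psf, n=blurred.size)
    return np.real(np.fft.ifft(B * np.conj(H) / (np.abs(H) ** 2 + eps)))

# Toy 1D stand-ins for the super-resolved profile and its blurring PSF.
n = 256
x = np.linspace(-1.0, 1.0, n)
truth = np.exp(-(((x - 0.2) / 0.05) ** 2))
k = np.fft.fftfreq(n) * n                 # sample indices 0, 1, ..., -1
psf = np.exp(-((k / 4.0) ** 2))
psf /= psf.sum()

blurred = np.real(np.fft.ifft(np.fft.fft(truth) * np.fft.fft(psf)))
recovered = deconvolve(blurred, psf)
err = float(np.max(np.abs(recovered - truth)))
print(err)
```

    The `eps` floor keeps the spectral division stable where the PSF carries no energy; in practice it trades a little residual blur for noise robustness.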

  16. Radar Constant-Modulus Waveform Design with Prior Information of the Extended Target and Clutter.

    Science.gov (United States)

    Yue, Wenzhen; Zhang, Yan; Liu, Yimin; Xie, Jingwen

    2016-06-17

    Radar waveform design is of great importance for radar system performance and has drawn considerable attention recently. Constant modulus is an important waveform design consideration, both from the point of view of hardware realization and to allow full utilization of the transmitter's power. In this paper, we consider the problem of constant-modulus waveform design for extended target detection with prior information about the extended target and clutter. First, we propose an arbitrary-phase unimodular waveform design method via joint transmitter-receiver optimization. We exploit a semi-definite relaxation technique to transform an intractable non-convex problem into a convex one, which can then be efficiently solved. Furthermore, a quadrature phase shift keying waveform is designed, which is easier to implement than arbitrary-phase waveforms. Numerical results demonstrate the effectiveness of the proposed methods.

  17. Magnetic anomaly inversion using magnetic dipole reconstruction based on the pipeline section segmentation method

    Science.gov (United States)

    Pan, Qi; Liu, De-Jun; Guo, Zhi-Yong; Fang, Hua-Feng; Feng, Mu-Qun

    2016-06-01

    In the model of a horizontal straight pipeline of finite length, the segmentation of the pipeline elements is a significant factor in the accuracy and rapidity of the forward modeling and inversion processes, but the existing pipeline segmentation method is very time-consuming. This paper proposes a section segmentation method to study the characteristics of pipeline magnetic anomalies—and the effect of model parameters on these magnetic anomalies—as a way to enhance computational performance and accelerate the convergence process of the inversion. Forward models using the piece segmentation method and section segmentation method based on magnetic dipole reconstruction (MDR) are established for comparison. The results show that the magnetic anomalies calculated by these two segmentation methods are almost the same regardless of different measuring heights and variations of the inclination and declination of the pipeline. In the optimized inversion procedure the results of the simulation data calculated by these two methods agree with the synthetic data from the original model, and the inversion accuracies of the burial depths of the two methods are approximately equal. The proposed method is more computationally efficient than the piece segmentation method—in other words, the section segmentation method can meet the requirements for precision in the detection of pipelines by magnetic anomalies and reduce the computation time of the whole process.
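
    The forward model behind both segmentation schemes is the superposition of point-dipole fields, one equivalent dipole per pipeline section. A minimal sketch of that superposition, with an invented geometry (1 m sections at 2 m depth) and an assumed per-section moment:

```python
import numpy as np

MU0_4PI = 1e-7  # mu0 / (4*pi) in SI units

def dipole_field(m, r):
    # Magnetic field (tesla) of a point dipole with moment m (A*m^2)
    # observed at displacement r (meters) from the dipole.
    rn = np.linalg.norm(r)
    return MU0_4PI * (3.0 * r * np.dot(m, r) / rn**5 - m / rn**3)

def pipeline_anomaly(section_centers, section_moment, obs):
    # Section segmentation: replace each pipeline section by one equivalent
    # dipole and superpose their fields at the observation point.
    return sum(dipole_field(section_moment, obs - c) for c in section_centers)

# Horizontal pipeline along x at 2 m depth, cut into 1 m sections
# (geometry and per-section moment are illustrative values only).
centers = np.array([[x + 0.5, 0.0, -2.0] for x in range(-10, 10)])
moment = np.array([10.0, 0.0, 0.0])  # assumed axial magnetization
B = pipeline_anomaly(centers, moment, obs=np.array([0.0, 0.0, 0.0]))
print(B)
```

    Coarser sections mean fewer dipole evaluations per forward model, which is exactly where the paper's speed-up in the inversion loop comes from.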

  18. An MSK Waveform for Radar Applications

    Science.gov (United States)

    Quirk, Kevin J.; Srinivasan, Meera

    2009-01-01

    We introduce a minimum shift keying (MSK) waveform developed for use in radar applications. This waveform is characterized in terms of its spectrum, autocorrelation, and ambiguity function, and is compared with the conventionally used bi-phase coded (BPC) radar signal. It is shown that the MSK waveform has several advantages when compared with the BPC waveform, and is a better candidate for deep-space radar imaging systems such as NASA's Goldstone Solar System Radar.
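
    An MSK waveform is continuous-phase FSK with modulation index 1/2: the phase ramps linearly by ±π/2 over each bit. A short generator sketch (generic MSK, not the specific Goldstone design; the bit sequence and oversampling factor are arbitrary):

```python
import numpy as np

def msk_waveform(bits, samples_per_bit=16):
    # Map bits to +/-1 frequency steps, integrate to get a phase that
    # ramps by +/- pi/2 per bit with no discontinuities, then modulate.
    steps = np.repeat(2.0 * np.asarray(bits) - 1.0, samples_per_bit)
    phase = np.cumsum(steps) * (np.pi / 2.0) / samples_per_bit
    return np.exp(1j * phase)

rng = np.random.default_rng(2)
bits = rng.integers(0, 2, size=64)
s = msk_waveform(bits)
print(np.allclose(np.abs(s), 1.0))  # constant envelope, as radar requires
```

    The continuous phase is what gives MSK its faster spectral roll-off than BPC signals, the property the abstract highlights for deep-space radar.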

  19. Radar Waveform Design in Active Communications Channel

    OpenAIRE

    Ric A. Romero; Shepherd, Kevin D.

    2013-01-01

    In this paper, we investigate spectrally adaptive radar transmit waveform design and its effects on an active communication system. We specifically look at waveform design for point targets. The transmit waveform is optimized by accounting for the modulation spectrum of the communication system while trying to efficiently use the remaining spectrum. With the use of a spectrally-matched radar waveform, we show that the SER detection performance of the communication system ...

  20. Study of reconstruction methods for a time projection chamber with GEM gas amplification system

    Energy Technology Data Exchange (ETDEWEB)

    Diener, R.

    2006-12-15

    A new e⁺e⁻ linear collider with an energy range up to 1 TeV is planned in an international collaboration: the International Linear Collider (ILC). This collider will be able to do precision measurements of the Higgs particle and of physics beyond the Standard Model. In the Large Detector Concept (LDC) - which is one proposal for a detector at the ILC - a Time Projection Chamber (TPC) is foreseen as the main tracking device. To meet the requirements on the resolution and to be able to work in the environment at the ILC, the application of new gas amplification technologies in the TPC is necessary. One option is an amplification system based on Gas Electron Multipliers (GEMs). Due to the - in comparison with older technologies - small spatial width of the signals, this technology poses new requirements on the readout structures and the reconstruction methods. In this work, the performance and the systematics of different reconstruction methods have been studied, based on data measured with a TPC prototype in high magnetic fields of up to 4 T and data from a Monte Carlo simulation. The latest results of the achievable point resolution are presented and their limitations have been investigated. (orig.)

  1. Community Phylogenetics: Assessing Tree Reconstruction Methods and the Utility of DNA Barcodes.

    Science.gov (United States)

    Boyle, Elizabeth E; Adamowicz, Sarah J

    2015-01-01

    Studies examining phylogenetic community structure have become increasingly prevalent, yet little attention has been given to the influence of the input phylogeny on metrics that describe phylogenetic patterns of co-occurrence. Here, we examine the influence of branch length, tree reconstruction method, and amount of sequence data on measures of phylogenetic community structure, as well as the phylogenetic signal (Pagel's λ) in morphological traits, using Trichoptera larval communities from Churchill, Manitoba, Canada. We find that model-based tree reconstruction methods and the use of a backbone family-level phylogeny improve estimations of phylogenetic community structure. In addition, trees built using the barcode region of cytochrome c oxidase subunit I (COI) alone accurately predict metrics of phylogenetic community structure obtained from a multi-gene phylogeny. Input tree did not alter overall conclusions drawn for phylogenetic signal, as significant phylogenetic structure was detected in two body size traits across input trees. As the discipline of community phylogenetics continues to expand, it is important to investigate the best approaches to accurately estimate patterns. Our results suggest that emerging large datasets of DNA barcode sequences provide a vast resource for studying the structure of biological communities.

  2. Hadron energy reconstruction for the ATLAS calorimetry in the framework of the nonparametrical method

    CERN Document Server

    Akhmadaliev, S Z; Ambrosini, G; Amorim, A; Anderson, K; Andrieux, M L; Aubert, Bernard; Augé, E; Badaud, F; Baisin, L; Barreiro, F; Battistoni, G; Bazan, A; Bazizi, K; Belymam, A; Benchekroun, D; Berglund, S R; Berset, J C; Blanchot, G; Bogush, A A; Bohm, C; Boldea, V; Bonivento, W; Bosman, M; Bouhemaid, N; Breton, D; Brette, P; Bromberg, C; Budagov, Yu A; Burdin, S V; Calôba, L P; Camarena, F; Camin, D V; Canton, B; Caprini, M; Carvalho, J; Casado, M P; Castillo, M V; Cavalli, D; Cavalli-Sforza, M; Cavasinni, V; Chadelas, R; Chalifour, M; Chekhtman, A; Chevalley, J L; Chirikov-Zorin, I E; Chlachidze, G; Citterio, M; Cleland, W E; Clément, C; Cobal, M; Cogswell, F; Colas, Jacques; Collot, J; Cologna, S; Constantinescu, S; Costa, G; Costanzo, D; Crouau, M; Daudon, F; David, J; David, M; Davidek, T; Dawson, J; De, K; de La Taille, C; Del Peso, J; Del Prete, T; de Saintignon, P; Di Girolamo, B; Dinkespiler, B; Dita, S; Dodd, J; Dolejsi, J; Dolezal, Z; Downing, R; Dugne, J J; Dzahini, D; Efthymiopoulos, I; Errede, D; Errede, S; Evans, H; Eynard, G; Fassi, F; Fassnacht, P; Ferrari, A; Ferrer, A; Flaminio, Vincenzo; Fournier, D; Fumagalli, G; Gallas, E; Gaspar, M; Giakoumopoulou, V; Gianotti, F; Gildemeister, O; Giokaris, N; Glagolev, V; Glebov, V Yu; Gomes, A; González, V; González de la Hoz, S; Grabskii, V; Graugès-Pous, E; Grenier, P; Hakopian, H H; Haney, M; Hébrard, C; Henriques, A; Hervás, L; Higón, E; Holmgren, Sven Olof; Hostachy, J Y; Hoummada, A; Huston, J; Imbault, D; Ivanyushenkov, Yu M; Jézéquel, S; Johansson, E K; Jon-And, K; Jones, R; Juste, A; Kakurin, S; Karyukhin, A N; Khokhlov, Yu A; Khubua, J I; Klioukhine, V I; Kolachev, G M; Kopikov, S V; Kostrikov, M E; Kozlov, V; Krivkova, P; Kukhtin, V V; Kulagin, M; Kulchitskii, Yu A; Kuzmin, M V; Labarga, L; Laborie, G; Lacour, D; Laforge, B; Lami, S; Lapin, V; Le Dortz, O; Lefebvre, M; Le Flour, T; Leitner, R; Leltchouk, M; Li, J; Liablin, M V; Linossier, O; Lissauer, D; Lobkowicz, F; Lokajícek, M; 
Lomakin, Yu F; López-Amengual, J M; Lund-Jensen, B; Maio, A; Makowiecki, D S; Malyukov, S N; Mandelli, L; Mansoulié, B; Mapelli, Livio P; Marin, C P; Marrocchesi, P S; Marroquim, F; Martin, P; Maslennikov, A L; Massol, N; Mataix, L; Mazzanti, M; Mazzoni, E; Merritt, F S; Michel, B; Miller, R; Minashvili, I A; Miralles, L; Mnatzakanian, E A; Monnier, E; Montarou, G; Mornacchi, Giuseppe; Moynot, M; Muanza, G S; Nayman, P; Némécek, S; Nessi, Marzio; Nicoleau, S; Niculescu, M; Noppe, J M; Onofre, A; Pallin, D; Pantea, D; Paoletti, R; Park, I C; Parrour, G; Parsons, J; Pereira, A; Perini, L; Perlas, J A; Perrodo, P; Pilcher, J E; Pinhão, J; Plothow-Besch, Hartmute; Poggioli, Luc; Poirot, S; Price, L; Protopopov, Yu; Proudfoot, J; Puzo, P; Radeka, V; Rahm, David Charles; Reinmuth, G; Renzoni, G; Rescia, S; Resconi, S; Richards, R; Richer, J P; Roda, C; Rodier, S; Roldán, J; Romance, J B; Romanov, V; Romero, P; Rossel, F; Rusakovitch, N A; Sala, P; Sanchis, E; Sanders, H; Santoni, C; Santos, J; Sauvage, D; Sauvage, G; Sawyer, L; Says, L P; Schaffer, A C; Schwemling, P; Schwindling, J; Seguin-Moreau, N; Seidl, W; Seixas, J M; Selldén, B; Seman, M; Semenov, A; Serin, L; Shaldaev, E; Shochet, M J; Sidorov, V; Silva, J; Simaitis, V J; Simion, S; Sissakian, A N; Snopkov, R; Söderqvist, J; Solodkov, A A; Soloviev, A; Soloviev, I V; Sonderegger, P; Soustruznik, K; Spanó, F; Spiwoks, R; Stanek, R; Starchenko, E A; Stavina, P; Stephens, R; Suk, M; Surkov, A; Sykora, I; Takai, H; Tang, F; Tardell, S; Tartarelli, F; Tas, P; Teiger, J; Thaler, J; Thion, J; Tikhonov, Yu A; Tisserant, S; Tokar, S; Topilin, N D; Trka, Z; Turcotte, M; Valkár, S; Varanda, M J; Vartapetian, A H; Vazeille, F; Vichou, I; Vinogradov, V; Vorozhtsov, S B; Vuillemin, V; White, A; Wielers, M; Wingerter-Seez, I; Wolters, H; Yamdagni, N; Yosef, C; Zaitsev, A; Zitoun, R; Zolnierowski, Y

    2002-01-01

    This paper discusses hadron energy reconstruction for the ATLAS barrel prototype combined calorimeter (consisting of a lead-liquid argon electromagnetic part and an iron-scintillator hadronic part) in the framework of the nonparametrical method. The nonparametrical method utilizes only the known e/h ratios and the electron calibration constants and does not require the determination of any parameters by a minimization technique. Thus, this technique lends itself to an easy use in a first level trigger. The reconstructed mean values of the hadron energies are within ±1% of the true values and the fractional energy resolution is [(58±3)%/√E + (2.5±0.3)%] ⊕ (1.7±0.2)/E. The value of the e/h ratio obtained for the electromagnetic compartment of the combined calorimeter is 1.74±0.04 and agrees with the prediction that e/h > 1.66 for this electromagnetic calorimeter. Results of a study of the longitudinal hadronic shower development are also presented. The data have been taken in the H8 beam...
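
    The quoted energy resolution parametrization (a stochastic term plus a constant term, combined in quadrature with a noise-like 1/E term) can be evaluated numerically. This sketch uses the central values only and assumes E is in GeV; it reflects one common reading of the formula, not code from the paper:

```python
import math

def fractional_resolution(E, a=0.58, b=0.025, c=1.7):
    # sigma/E = [a/sqrt(E) + b] combined in quadrature with c/E,
    # with E in GeV; central values taken from the quoted fit.
    return math.hypot(a / math.sqrt(E) + b, c / E)

for E in (20.0, 100.0, 300.0):
    print(E, round(fractional_resolution(E), 4))
```

    As expected, the stochastic term dominates at low energy and the constant term sets the floor at high energy.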

  3. A new method for reconstruction of cross-sections using Tucker decomposition

    Science.gov (United States)

    Luu, Thi Hieu; Maday, Yvon; Guillo, Matthieu; Guérin, Pierre

    2017-09-01

    The full representation of a d-variate function requires storage that grows exponentially with the dimension d, as well as a high computational cost. In order to reduce these complexities, function approximation methods (called reconstruction in our context) are used, such as interpolation and approximation. Traditional interpolation models, such as the multilinear one, suffer from this dimensionality problem. To deal with it, we propose a new model based on the Tucker format - a low-rank tensor approximation method, called here the Tucker decomposition. The Tucker decomposition is built as a tensor product of one-dimensional spaces whose one-variate basis functions are constructed by an extension of the Karhunen-Loève decomposition to high-dimensional space. Using this technique, we can acquire, direction by direction, the most important information of the function and convert it into a small number of basis functions. Hence, the approximation of a given function needs less data than the multilinear model. Results of a test case on neutron cross-section reconstruction demonstrate that the Tucker decomposition achieves better accuracy while using less data than multilinear interpolation.
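
    The direction-by-direction compression can be sketched with a higher-order SVD (HOSVD), a standard way to build a Tucker decomposition. The trivariate test function below is invented for illustration (it is separable, hence highly compressible); this is not the paper's cross-section data:

```python
import numpy as np

def unfold(T, mode):
    # Mode-n unfolding: mode-n fibres become the columns of a matrix.
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def tucker_hosvd(T, ranks):
    # One truncated SVD per direction gives the factor matrices;
    # projecting T onto them yields the small core tensor.
    factors = [np.linalg.svd(unfold(T, n), full_matrices=False)[0][:, :r]
               for n, r in enumerate(ranks)]
    core = T
    for n, U in enumerate(factors):
        core = np.moveaxis(np.tensordot(U.T, np.moveaxis(core, n, 0), axes=1), 0, n)
    return core, factors

def tucker_rebuild(core, factors):
    T = core
    for n, U in enumerate(factors):
        T = np.moveaxis(np.tensordot(U, np.moveaxis(T, n, 0), axes=1), 0, n)
    return T

# A smooth trivariate "cross-section table" stand-in on a 20^3 grid.
g = np.linspace(0.0, 1.0, 20)
T = np.exp(-np.add.outer(np.add.outer(g, 2.0 * g), 3.0 * g))
core, factors = tucker_hosvd(T, ranks=(3, 3, 3))
err = float(np.linalg.norm(tucker_rebuild(core, factors) - T) / np.linalg.norm(T))
print(err)
```

    Here 8000 grid values are replaced by a 3x3x3 core plus three 20x3 factors (207 numbers) with negligible error, which is the storage saving the abstract describes.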

  4. Best waveform score for diagnosing keratoconus

    Directory of Open Access Journals (Sweden)

    Allan Luz

    2013-12-01

    Full Text Available PURPOSE: To test whether corneal hysteresis (CH) and corneal resistance factor (CRF) can discriminate between keratoconus and normal eyes, and to evaluate whether the averages of two consecutive measurements perform differently from the one with the best waveform score (WS) for diagnosing keratoconus. METHODS: ORA measurements for one eye per individual were selected randomly from 53 normal patients and from 27 patients with keratoconus. Two groups of variables were considered: the average (CH-Avg, CRF-Avg) and best waveform score (CH-WS, CRF-WS) groups. The Mann-Whitney U-test was used to evaluate whether the variables had similar distributions in the Normal and Keratoconus groups. Receiver operating characteristic (ROC) curves were calculated for each parameter to assess the efficacy for diagnosing keratoconus, and the areas obtained for each variable were compared pairwise using the Hanley-McNeil test. RESULTS: The CH-Avg, CRF-Avg, CH-WS and CRF-WS differed significantly between the normal and keratoconus groups (p<0.001). The areas under the ROC curve (AUROC) for CH-Avg, CRF-Avg, CH-WS, and CRF-WS were 0.824, 0.873, 0.891, and 0.931, respectively. CH-WS and CRF-WS had significantly better AUROCs than CH-Avg and CRF-Avg, respectively (p=0.001 and p=0.002). CONCLUSION: The analysis of the biomechanical properties of the cornea through the ORA method has proved to be an important aid in the diagnosis of keratoconus, regardless of the method used. The best waveform score (WS) measurements were superior to the average of consecutive ORA measurements for diagnosing keratoconus.
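
    The AUROC statistic used above has a simple rank interpretation: it equals the probability that a randomly chosen diseased eye scores higher than a randomly chosen normal eye (the Mann-Whitney U statistic, normalized). A small sketch with invented CRF-like readings (not the study's data):

```python
import numpy as np

def auroc(pos, neg):
    # AUROC as the normalized Mann-Whitney U statistic: the fraction of
    # (positive, negative) pairs where the positive scores higher,
    # counting ties as half.
    pos, neg = np.asarray(pos, float), np.asarray(neg, float)
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (pos.size * neg.size)

# Hypothetical CRF readings: keratoconic corneas score lower than normals,
# so we rank by the negated value to keep the AUROC above 0.5.
normal = np.array([10.2, 11.1, 9.8, 10.7, 11.5, 10.0])
keratoconus = np.array([6.5, 7.2, 8.1, 7.8])
print(auroc(-keratoconus, -normal))
```

    Comparing two such areas on the same patients is what the Hanley-McNeil test formalizes; the sketch only computes the areas themselves.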

  5. Generating nonlinear FM chirp waveforms for radar.

    Energy Technology Data Exchange (ETDEWEB)

    Doerry, Armin Walter

    2006-09-01

    Nonlinear FM waveforms offer a radar matched filter output with inherently low range sidelobes. This yields a 1-2 dB advantage in Signal-to-Noise Ratio over the output of a Linear FM waveform with equivalent sidelobe filtering. This report presents design and implementation techniques for Nonlinear FM waveforms.
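
    One common way to design such a waveform (a stationary-phase recipe, sketched here with assumed parameters and not necessarily the report's technique) is to dwell longer at frequencies where a Hamming-like spectral weighting is large, by inverting the window's cumulative integral:

```python
import numpy as np

n = 512
w = np.hamming(n)
cdf = np.cumsum(w) / np.sum(w)                  # dwell fraction vs. frequency
t = np.linspace(0.0, 1.0, n)
# Instantaneous frequency (cycles/sample) spends more time where w is large.
freq = np.interp(t, cdf, np.linspace(-0.25, 0.25, n))
s = np.exp(1j * 2.0 * np.pi * np.cumsum(freq))  # constant-modulus NLFM chirp

# Matched-filter (autocorrelation) output: the tapered spectrum yields low
# range sidelobes without amplitude weighting, preserving full TX power.
mf = np.abs(np.correlate(s, s, mode="full"))
peak_db = 20.0 * np.log10(mf / mf.max() + 1e-12)
print(round(float(peak_db[n - 1]), 1))          # mainlobe peak at zero lag
```

    Compared with a linear FM chirp of the same bandwidth, whose matched-filter sidelobes sit near -13 dB, this shaping pushes the sidelobes well down at the cost of a slightly wider mainlobe, which is the trade the report quantifies.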

  6. Reconstruction of two-dimensional magnetopause structures from Cluster observations: verification of method

    Directory of Open Access Journals (Sweden)

    H. Hasegawa

    2004-04-01

    Full Text Available A recently developed technique for reconstructing approximately two-dimensional (∂/∂z≈0), time-stationary magnetic field structures in space is applied to two magnetopause traversals on the dawnside flank by the four Cluster spacecraft, when the spacecraft separation was about 2000 km. The method consists of solving the Grad-Shafranov equation for magnetohydrostatic structures, using plasma and magnetic field data measured along a single spacecraft trajectory as spatial initial values. We assess the usefulness of this single-spacecraft-based technique by comparing the magnetic field maps produced from one spacecraft with the field vectors that the other spacecraft actually observed. For an optimally selected invariant (z) axis, the correlation between the field components predicted from the reconstructed map and the corresponding measured components reaches more than 0.97. This result indicates that the reconstruction technique predicts conditions at the other spacecraft locations quite well.

    The optimal invariant axis is relatively close to the intermediate variance direction, computed from minimum variance analysis of the measured magnetic field, and is generally well determined with respect to rotations about the maximum variance direction but less well with respect to rotations about the minimum variance direction. In one of the events, field maps recovered individually for two of the spacecraft, which crossed the magnetopause with an interval of a few tens of seconds, show substantial differences in configuration. By comparing these field maps, time evolution of the magnetopause structures, such as the formation of magnetic islands, motion of the structures, and thickening of the magnetopause current layer, is discussed.

    Key words. Magnetospheric physics (magnetopause, cusp, and boundary layers) – Space plasma physics (experimental and mathematical techniques; magnetic reconnection)

  7. High resolution image reconstruction method for a double-plane PET system with changeable spacing

    Science.gov (United States)

    Gu, Xiao-Yue; Zhou, Wei; Li, Lin; Wei, Long; Yin, Peng-Fei; Shang, Lei-Min; Yun, Ming-Kai; Lu, Zhen-Rui; Huang, Xian-Chao

    2016-05-01

    Breast-dedicated positron emission tomography (PET) imaging techniques have been developed in recent years. Their capacities to detect millimeter-sized breast tumors have been the subject of many studies. Some of them have been confirmed with good results in clinical applications. With regard to biopsy application, a double-plane detector arrangement is practicable, as it offers the convenience of breast immobilization. However, the serious blurring effect of the double-plane PET, with changeable spacing for different breast sizes, should be studied. We investigated a high resolution reconstruction method applicable for a double-plane PET. The distance between the detector planes is changeable. Geometric and blurring components were calculated in real-time for different detector distances, and accurate geometric sensitivity was obtained with a new tube area model. Resolution recovery was achieved by estimating blurring effects derived from simulated single gamma response information. The results showed that the new geometric modeling gave a more finite and smooth sensitivity weight in the double-plane PET. The blurring component yielded contrast recovery levels that could not be reached without blurring modeling, and improved visual recovery of the smallest spheres and better delineation of the structures in the reconstructed images were achieved with the blurring component. Statistical noise had lower variance at the voxel level with blurring modeling at matched resolution, compared to without blurring modeling. In distance-changeable double-plane PET, finite resolution modeling during reconstruction achieved resolution recovery, without noise amplification. Supported by Knowledge Innovation Project of The Chinese Academy of Sciences (KJCX2-EW-N06)

  8. High Resolution Image Reconstruction Method for a Double-plane PET System with Changeable Spacing

    CERN Document Server

    Gu, Xiao-Yue; Li, Lin; Yin, Peng-Fei; Shang, Lei-Min; Yun, Ming-Kai; Lu, Zhen-Rui; Huang, Xian-Chao; Wei, Long

    2015-01-01

    Positron Emission Mammography (PEM) imaging systems with the ability to detect millimeter-sized tumors have been developed in recent years, and some of them have been put to good use in clinical applications. In consideration of biopsy application