WorldWideScience

Sample records for filtered backprojection algorithm

  1. Filtered backprojection algorithm in RPCs based PET

    International Nuclear Information System (INIS)

    Cruceru, Ilie; Manea, Ioana; Nicorescu, Carmen; Constantin, Florin

    2003-01-01

    The basis of PET consists in administering a radioactive isotope attached to a tracer, which makes it possible to reveal the tracer's molecular pathways in the human body. A 3-D Whole-Body-Scan is necessary in order to minimize the radiation exposure of the patient and to increase significantly the axial field of view (FOV). A major candidate for gamma-pair detection in a 3-D Whole-Body-Scan appears to be the RPC (Resistive Plate Counter). It consists of a longitudinal microstrip grid 15 mm thick, with strips spaced at 1 mm; the grid is placed between a large electrically resistive glass anode (ρ = 10¹² Ω·cm) and an aluminium cathode; the gap of around 300 μm is filled with a special gas and is polarized at around 6 kV. Several detecting structures based on RPCs are evaluated for use in a positron emission 3-D Whole-Body-Scan tomograph. The coincidence matrix is built for each detecting structure by means of random gamma-pair ray generation, and the filtered backprojection algorithm is then used to reconstruct the original picture. The accuracy of image reconstruction is examined for the four different detecting structures. (authors)
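
    As a schematic illustration of the reconstruction step mentioned above, the following is a minimal parallel-beam filtered backprojection sketch in Python (NumPy/SciPy). The disc phantom, geometry, ramp filter and scaling convention are illustrative assumptions, not the detector-specific implementation used by the authors.

        import numpy as np
        from scipy.ndimage import rotate

        def fbp(sinogram, angles_deg):
            """Minimal parallel-beam FBP: ramp-filter each view, then smear it back."""
            n_det = sinogram.shape[0]
            ramp = np.abs(np.fft.fftfreq(n_det))          # ramp filter, detector-frequency domain
            filtered = np.real(np.fft.ifft(np.fft.fft(sinogram, axis=0) * ramp[:, None], axis=0))
            recon = np.zeros((n_det, n_det))
            for i, theta in enumerate(angles_deg):
                # Backprojection: replicate the filtered view and rotate it into place.
                smear = np.tile(filtered[:, i][:, None], (1, n_det))
                recon += rotate(smear, theta, reshape=False, order=1)
            return recon * np.pi / (2 * len(angles_deg))  # scaling is convention-dependent

        # Toy usage: forward-project a disc phantom by rotating and summing columns.
        n = 128
        y, x = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
        phantom = ((x**2 + y**2) < 0.4**2).astype(float)
        angles = np.linspace(0.0, 180.0, 90, endpoint=False)
        sino = np.stack([rotate(phantom, -a, reshape=False, order=1).sum(axis=1)
                         for a in angles], axis=1)
        image = fbp(sino, angles)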

  2. A filtered backprojection algorithm with characteristics of the iterative Landweber algorithm

    OpenAIRE

    Zeng, Gengsheng L.

    2012-01-01

    Purpose: In order to eventually develop an analytical algorithm with noise characteristics of an iterative algorithm, this technical note develops a window function for the filtered backprojection (FBP) algorithm in tomography that behaves as an iterative Landweber algorithm.
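
    One way to picture this equivalence: under the idealized continuous model in which projection followed by backprojection acts as a 1/|ω| filter on the image, k Landweber iterations with step size α have a closed form that is exactly an FBP whose ramp |ω| is multiplied by the window 1 − (1 − α/|ω|)^k. The sketch below builds that filter; the window actually derived in the note may differ in convention, so treat this as an illustrative assumption rather than the paper's formula.

        import numpy as np

        def landweber_fbp_filter(n_det, k, alpha=None):
            """FBP filter mimicking k Landweber iterations f_k = f_(k-1) + alpha*A^T(p - A f_(k-1)).

            Assumes the idealized model where A^T A is a 1/|w| filter in the image domain.
            """
            w = np.abs(np.fft.fftfreq(n_det))        # detector frequencies
            if alpha is None:
                alpha = w[w > 0].min()               # keeps |1 - alpha/|w|| <= 1 for all bins
            wc = np.maximum(w, alpha)                # clamp so the DC bin stays finite
            window = 1.0 - (1.0 - alpha / wc) ** k
            return w * window                        # ramp * Landweber window

        # Small k suppresses high frequencies (early-stopped, regularized behaviour);
        # as k grows the filter approaches the plain ramp of unwindowed FBP.
        H5, H500 = landweber_fbp_filter(256, 5), landweber_fbp_filter(256, 500)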

  3. A filtered backprojection reconstruction algorithm for Compton camera

    Energy Technology Data Exchange (ETDEWEB)

    Lojacono, Xavier; Maxim, Voichita; Peyrin, Francoise; Prost, Remy [Lyon Univ., Villeurbanne (France). CNRS, Inserm, INSA-Lyon, CREATIS, UMR5220]; Zoglauer, Andreas [California Univ., Berkeley, CA (United States). Space Sciences Lab.]

    2011-07-01

    In this paper we present a filtered backprojection reconstruction algorithm for Compton camera detectors of particles. Compared to iterative methods, widely used for the reconstruction of images from Compton camera data, analytical methods are fast, easy to implement and avoid convergence issues. The method we propose is exact for an idealized Compton camera composed of two parallel plates of infinite dimension. We show that it copes well with the low number of detected photons simulated from a realistic device. Images reconstructed from both synthetic data and realistic data obtained with Monte Carlo simulations demonstrate the efficiency of the algorithm. (orig.)

  4. A study of reconstruction artifacts in cone beam tomography using filtered backprojection and iterative EM algorithms

    International Nuclear Information System (INIS)

    Zeng, G.L.; Gullberg, G.T.

    1990-01-01

    Reconstruction artifacts in cone beam tomography are studied for filtered backprojection (Feldkamp) and iterative EM algorithms. The filtered backprojection algorithm uses a voxel-driven, interpolated backprojection to reconstruct the cone beam data, whereas the iterative EM algorithm performs ray-driven projection and backprojection operations for each iteration. Two weighting schemes for the projection and backprojection operations in the EM algorithm are studied. One weights each voxel by the length of the ray through the voxel; the other equates the value of a voxel to the functional value of the midpoint of the line intersecting the voxel, obtained by interpolating between eight neighboring voxels. Cone beam reconstruction artifacts such as rings, bright vertical extremities, and slice-to-slice cross talk are not found with parallel beam and fan beam geometries.

  5. A reconstruction algorithm for coherent scatter computed tomography based on filtered back-projection

    International Nuclear Information System (INIS)

    Stevendaal, U. van; Schlomka, J.-P.; Harding, A.; Grass, M.

    2003-01-01

    Coherent scatter computed tomography (CSCT) is a reconstructive x-ray imaging technique that yields the spatially resolved coherent-scatter form factor of the investigated object. Reconstruction from coherently scattered x-rays is commonly done using algebraic reconstruction techniques (ART). In this paper, we propose an alternative approach based on filtered back-projection. For the first time, a three-dimensional (3D) filtered back-projection technique using curved 3D back-projection lines is applied to two-dimensional coherent scatter projection data. The proposed algorithm is tested with simulated projection data as well as with projection data acquired with a demonstrator setup similar to a multi-line CT scanner geometry. While yielding image quality comparable to that of ART reconstruction, the modified 3D filtered back-projection algorithm is about two orders of magnitude faster. In contrast to iterative reconstruction schemes, it has the advantage that subfield-of-view reconstruction becomes feasible. This allows a selective reconstruction of the coherent-scatter form factor for a region of interest. The proposed modified 3D filtered back-projection algorithm is a powerful reconstruction technique to be implemented in a CSCT scanning system. This method gives coherent scatter CT the potential of becoming a competitive modality for medical imaging or nondestructive testing.

  6. A cone-beam reconstruction algorithm using shift-variant filtering and cone-beam backprojection

    International Nuclear Information System (INIS)

    Defrise, M.; Clack, R.

    1994-01-01

    An exact inversion formula written in the form of shift-variant filtered-backprojection (FBP) is given for reconstruction from cone-beam data taken from any orbit satisfying Tuy's sufficiency conditions. The method is based on a result of Grangeat, involving the derivative of the three-dimensional (3D) Radon transform, but unlike Grangeat's algorithm, no 3D rebinning step is required. Data redundancy, which occurs when several cone-beam projections supply the same values in the Radon domain, is handled using an elegant weighting function and without discarding data. The algorithm is expressed in a convenient cone-beam detector reference frame, and a specific example for the case of a dual orthogonal circular orbit is presented. When the method is applied to a single circular orbit, it is shown to be equivalent to the well-known algorithm of Feldkamp et al.

  7. An improved cone-beam filtered backprojection reconstruction algorithm based on x-ray angular correction and multiresolution analysis

    International Nuclear Information System (INIS)

    Sun, Y.; Hou, Y.; Yan, Y.

    2004-01-01

    With the extensive application of industrial computed tomography in the field of non-destructive testing, how to improve the quality of the reconstructed image is receiving increasing attention. It is well known that in existing cone-beam filtered backprojection reconstruction algorithms the cone angle is restricted to a narrow range. The reason for this limitation is the incompleteness of the projection data when the cone angle increases, which in turn limits the size of the workpiece that can be tested. Considering the characteristics of the X-ray cone angle, an improved cone-beam filtered back-projection reconstruction algorithm taking account of angular correction is proposed in this paper. The aim of our algorithm is to correct the cone-angle effect resulting from the incompleteness of projection data in the conventional algorithm. The basis of the correction is the angular relationship among the X-ray source, the tested workpiece and the detector. The cone angle is thus not strictly limited, and the algorithm may be used to inspect larger workpieces. Furthermore, an adaptive wavelet filter is used for multiresolution analysis, which can adjust the wavelet decomposition depth adaptively according to the resolution demanded of the local reconstructed area. Therefore the computation and the reconstruction time can be reduced, and the quality of the reconstructed image can also be improved. (author)

  8. Imaging reconstruction based on improved wavelet denoising combined with parallel-beam filtered back-projection algorithm

    Science.gov (United States)

    Ren, Zhong; Liu, Guodong; Huang, Zhen

    2012-11-01

    Image reconstruction is a key step in medical imaging (MI), and the performance of the reconstruction algorithm determines the quality and resolution of the reconstructed image. Although other algorithms have been used, the filtered back-projection (FBP) algorithm remains the classical and most commonly used algorithm in clinical MI. In the FBP algorithm, filtering of the original projection data is a key step for overcoming artifacts in the reconstructed image. Simple use of classical filters such as the Shepp-Logan (SL) and Ram-Lak (RL) filters has drawbacks and limitations in practice, especially for projection data polluted by non-stationary random noise. In this paper, an improved wavelet denoising method combined with the parallel-beam FBP algorithm is therefore used to enhance the quality of the reconstructed image. In the experiments, the reconstruction results of the improved wavelet denoising method were compared with those of other methods (direct FBP, mean-filter FBP and median-filter FBP). To determine the optimum reconstruction, different algorithms and different wavelet bases combined with three filters were tested. Experimental results show that the reconstruction quality of the improved FBP algorithm is better than that of the others. Comparing the results of the different algorithms against two evaluation standards, mean-square error (MSE) and peak signal-to-noise ratio (PSNR), the improved FBP based on db2 and the Hanning filter at decomposition scale 2 performed best: its MSE was lower and its PSNR higher than the others. Therefore, this improved FBP algorithm has potential value in medical imaging.
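
    A minimal sketch of this pipeline, assuming PyWavelets for the db2 decomposition at scale 2, a universal soft threshold on the detail bands, and scikit-image's iradon with a Hann-windowed ramp. The phantom, noise level and thresholding rule are illustrative choices, not the paper's exact settings.

        import numpy as np
        import pywt
        from skimage.data import shepp_logan_phantom
        from skimage.transform import radon, iradon, rescale

        def wavelet_denoise_sinogram(sino, wavelet="db2", level=2):
            """Soft-threshold the detail coefficients of a level-2 db2 decomposition."""
            coeffs = pywt.wavedec2(sino, wavelet, level=level)
            sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745    # noise estimate, finest band
            t = sigma * np.sqrt(2.0 * np.log(sino.size))          # universal threshold
            denoised = [coeffs[0]] + [tuple(pywt.threshold(d, t, mode="soft") for d in band)
                                      for band in coeffs[1:]]
            return pywt.waverec2(denoised, wavelet)[:sino.shape[0], :sino.shape[1]]

        phantom = rescale(shepp_logan_phantom(), 0.25)
        angles = np.linspace(0.0, 180.0, 90, endpoint=False)
        sino = radon(phantom, theta=angles)
        sino += np.random.default_rng(0).normal(0.0, 2.0, sino.shape)   # simulated noise
        recon = iradon(wavelet_denoise_sinogram(sino), theta=angles, filter_name="hann")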

  9. Comparison of 3D Maximum A Posteriori and Filtered Backprojection algorithms for high resolution animal imaging in microPET

    International Nuclear Information System (INIS)

    Chatziioannou, A.; Qi, J.; Moore, A.; Annala, A.; Nguyen, K.; Leahy, R.M.; Cherry, S.R.

    2000-01-01

    We have evaluated the performance of two three-dimensional reconstruction algorithms with data acquired from microPET, a high resolution tomograph dedicated to small animal imaging. The first was a linear filtered-backprojection algorithm (FBP) with reprojection of the missing data, and the second was a statistical maximum a posteriori probability algorithm (MAP). The two algorithms were evaluated in terms of their resolution performance, both in phantoms and in vivo. Sixty independent realizations of a phantom simulating the brain of a baby monkey were acquired, each containing 3 million counts. Each of these realizations was reconstructed independently with both algorithms. The ensemble of the sixty reconstructed realizations was used to estimate the standard deviation as a measure of the noise for each reconstruction algorithm. More detail was recovered in the MAP reconstruction without an increase in noise relative to FBP. Studies in a simple cylindrical compartment phantom demonstrated improved recovery of known activity ratios with MAP. Finally, in vivo studies also demonstrated a clear improvement in spatial resolution using the MAP algorithm. The quantitative accuracy of the MAP reconstruction was also evaluated by comparison with autoradiography and direct well counting of tissue samples and was shown to be superior.

  10. Fan-beam filtered-backprojection reconstruction without backprojection weight

    International Nuclear Information System (INIS)

    Dennerlein, Frank; Noo, Frederic; Hornegger, Joachim; Lauritsch, Guenter

    2007-01-01

    In this paper, we address the problem of two-dimensional image reconstruction from fan-beam data acquired along a full 2π scan. Conventional approaches that follow the filtered-backprojection (FBP) structure require a weighted backprojection with the weight depending on the point to be reconstructed and also on the source position; this weight appears only in the case of divergent beam geometries. Compared to reconstruction from parallel-beam data, the backprojection weight implies an increase in computational effort and is also thought to have some negative impacts on noise properties of the reconstructed images. We demonstrate here that direct FBP reconstruction from full-scan fan-beam data is possible with no backprojection weight. Using computer-simulated, realistic fan-beam data, we compared our novel FBP formula with no backprojection weight to the use of an FBP formula based on equal weighting of all data. Comparisons in terms of signal-to-noise ratio, spatial resolution and computational efficiency are presented. These studies show that the formula we suggest yields images with a reduced noise level, at almost identical spatial resolution. This effect increases quickly with the distance from the center of the field of view, from 0% at the center to 20% less noise at 20 cm, and to 40% less noise at 25 cm. Furthermore, the suggested method is computationally less demanding and reduces computation time with a gain that was found to vary between 12% and 43% on the computers used for evaluation.

  11. Backprojection filtering for variable orbit fan-beam tomography

    International Nuclear Information System (INIS)

    Gullberg, G.T.; Zeng, G.L.

    1995-01-01

    Backprojection filtering algorithms are presented for three variable-orbit fan-beam geometries. Expressions for the fan-beam projection and backprojection operators are given for a flat-detector fan-beam geometry with fixed focal length, with variable focal length, and with fixed focal length and off-center focusing. Backprojection operators are derived for each geometry using a transformation of coordinates that maps a parallel-geometry backprojector to a fan-beam backprojector for the appropriate geometry. The backprojection operator includes a factor which is a function of the coordinates of the projection ray and the coordinates of the pixel in the backprojected image. The backprojection filtering algorithm first backprojects the variable-orbit fan-beam projection data using the appropriately derived backprojector to obtain a 1/r blurring of the original image, then takes the two-dimensional (2D) Fast Fourier Transform (FFT) of the backprojected image, multiplies the transformed image by the 2D ramp filter function, and finally takes the inverse 2D FFT to obtain the reconstructed image. Computer simulations verify that backprojectors with appropriate weighting give artifact-free reconstructions of simulated line-integral projections. It is also shown that it is not necessary to assume a projection model of line integrals; the projector and backprojector can instead be defined to model the physics of the imaging detection process. A backprojector for variable-orbit fan-beam tomography with fixed focal length is derived which includes an additional factor that is a function of the flux density along the flat detector. It is shown that the impulse response of the composite of the projection and backprojection operations is equal to 1/r.
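
    The backproject-then-filter pipeline described here reduces, in the simplest parallel-beam case, to a compact frequency-domain operation: an unfiltered backprojection equals the image convolved with 1/r, and multiplying its 2D spectrum by the rotationally symmetric ramp recovers the image. The sketch below assumes that simple case, not the variable-orbit fan-beam operators derived in the record.

        import numpy as np

        def filter_backprojected_image(bp_image):
            """Deblur a 1/r-blurred backprojection with a 2D ramp filter."""
            ny, nx = bp_image.shape
            fy = np.fft.fftfreq(ny)[:, None]
            fx = np.fft.fftfreq(nx)[None, :]
            ramp_2d = np.sqrt(fx**2 + fy**2)              # |nu|, rotationally symmetric ramp
            return np.real(np.fft.ifft2(np.fft.fft2(bp_image) * ramp_2d))

    In practice the backprojected image is zero-padded before the FFT to reduce wrap-around from the slowly decaying 1/r tails.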

  12. Generalized Filtered Back-Projection for Digital Breast Tomosynthesis Reconstruction

    NARCIS (Netherlands)

    Erhard, K.; Grass, M.; Hitziger, S.; Iske, A.; Nielsen, T.

    2012-01-01

    Filtered back-projection (FBP) has been commonly used as an efficient and robust reconstruction technique in tomographic X-ray imaging during the last decades. For limited angle tomography acquisitions such as digital breast tomosynthesis, however, standard FBP reconstruction algorithms provide poor image quality.

  13. Improving Filtered Backprojection Reconstruction by Data-Dependent Filtering

    NARCIS (Netherlands)

    D.M. Pelt (Daniël); K.J. Batenburg (Joost)

    2014-01-01

    Filtered backprojection, one of the most widely used reconstruction methods in tomography, requires a large number of low-noise projections to yield accurate reconstructions. In many applications of tomography, complete projection data of high quality cannot be obtained, because of practical constraints.

  14. A three-dimensional-weighted cone beam filtered backprojection (CB-FBP) algorithm for image reconstruction in volumetric CT-helical scanning

    International Nuclear Information System (INIS)

    Tang, Xiangyang; Hsieh, Jiang; Nilsen, Roy A.; Dutta, Sandeep; Samsonov, Dmitry; Hagiwara, Akira

    2006-01-01

    Based on the structure of the original helical FDK algorithm, a three-dimensional (3D)-weighted cone beam filtered backprojection (CB-FBP) algorithm is proposed for image reconstruction in volumetric CT under a helical source trajectory. In addition to its dependence on view and fan angles, the 3D weighting utilizes the cone angle dependency of a ray to improve reconstruction accuracy. The 3D weighting is ray-dependent, and the underlying mechanism is to give a favourable weight to the ray with the smaller cone angle out of a pair of conjugate rays, and an unfavourable weight to the ray with the larger cone angle. The proposed 3D-weighted helical CB-FBP reconstruction algorithm is implemented in the cone-parallel geometry, which can improve noise uniformity and image generation speed significantly. Under the cone-parallel geometry, the filtering is naturally carried out along the tangential direction of the helical source trajectory. By exploiting the 3D weighting's dependence on cone angle, the proposed helical 3D-weighted CB-FBP reconstruction algorithm can provide significantly improved reconstruction accuracy at moderate cone angles and high helical pitches. The 3D-weighted CB-FBP algorithm is experimentally evaluated with computer-simulated phantoms and phantoms scanned by a diagnostic volumetric CT system with a detector dimension of 64 x 0.625 mm over various helical pitches. The computer simulation study shows that the 3D weighting enables the proposed algorithm to reach reconstruction accuracy comparable to that of exact CB reconstruction algorithms, such as the Katsevich algorithm, under a moderate cone angle (4 deg.) and various helical pitches. Meanwhile, the experimental evaluation using phantoms scanned by a volumetric CT system shows that the spatial resolution along the z-direction and the noise characteristics of the proposed 3D-weighted helical CB-FBP reconstruction algorithm are maintained very well in comparison to the FDK algorithm.

  15. Filtered backprojection proton CT reconstruction along most likely paths

    Energy Technology Data Exchange (ETDEWEB)

    Rit, Simon; Dedes, George; Freud, Nicolas; Sarrut, David; Letang, Jean Michel [Universite de Lyon, CREATIS, CNRS UMR5220, Inserm U1044, INSA-Lyon, Universite Lyon 1, Centre Leon Berard, 69008 Lyon (France)]

    2013-03-15

    Purpose: Proton CT (pCT) has the potential to accurately measure the electron density map of tissues at low doses, but its spatial resolution is prohibitive if the curved paths of protons in matter are not accounted for. The authors propose to account for an estimate of the most likely path of protons in a filtered backprojection (FBP) reconstruction algorithm. Methods: The energy loss of protons is first binned in several proton radiographs at different distances to the proton source, to exploit the depth dependency of the estimate of the most likely path. This process is named distance-driven binning. A voxel-specific backprojection is then used to select the adequate radiograph in the distance-driven binning, in order to propagate into the pCT image the best achievable spatial resolution in the proton radiographs. The improvement in spatial resolution is demonstrated using Monte Carlo simulations of resolution phantoms. Results: The spatial resolution in the distance-driven binning depended on the distance of the objects from the source and was optimal in the binned radiograph corresponding to that distance. The spatial resolution in the reconstructed pCT images decreased with depth in the scanned object, but it was always better than that of previous FBP algorithms assuming straight line paths. In a water cylinder with 20 cm diameter, the observed range of spatial resolutions was 0.7-1.6 mm, compared to 1.0-2.4 mm at best with a straight line path assumption. The improvement was strongly enhanced in shorter 200° scans. Conclusions: Improved spatial resolution was obtained in pCT images with filtered backprojection reconstruction using most likely path estimates of protons. The improvement in spatial resolution, combined with the practicality of FBP algorithms compared to iterative reconstruction algorithms, makes this new algorithm a candidate of choice for clinical pCT.
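
    The voxel-specific backprojection can be pictured as a per-point lookup: each reconstruction point selects, among the radiographs binned at different source distances, the one binned closest to its own distance from the source. A schematic NumPy sketch under simplified straight-ray assumptions (the actual algorithm follows curved most likely paths):

        import numpy as np

        def distance_driven_backproject(binned_radiographs, bin_distances, point_distances):
            """Pick, per reconstruction point, the radiograph binned closest to its depth.

            binned_radiographs: (n_bins, n_pixels) energy-loss projections, one per distance bin
            bin_distances:      (n_bins,) source distance at which each radiograph was binned
            point_distances:    (n_pixels,) source distance of each reconstruction point
            """
            idx = np.abs(point_distances[None, :] - bin_distances[:, None]).argmin(axis=0)
            return binned_radiographs[idx, np.arange(len(point_distances))]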

  16. A new hybrid-FBP inversion algorithm with inverse distance backprojection weight for CT reconstruction

    Energy Technology Data Exchange (ETDEWEB)

    Narasimhadhan, A.V.; Rajgopal, Kasi

    2011-07-01

    This paper presents a new hybrid filtered backprojection (FBP) algorithm for fan-beam and cone-beam scans. The hybrid reconstruction kernel is the sum of the ramp and Hilbert filters. We modify the redundancy weighting function to reduce the inverse-square distance weighting in the backprojection to an inverse distance weight. The modified weight also eliminates the derivative associated with the Hilbert filter kernel. Thus, the proposed reconstruction algorithm has the advantages of the inverse distance weight in the backprojection. We evaluate the performance of the new algorithm in terms of the magnitude level and uniformity in noise for the fan-beam geometry. The computer simulations show that the spatial resolution is nearly identical to that of the standard fan-beam ramp-filtered algorithm, while the noise is spatially uniform and the noise variance is reduced. (orig.)

  17. Data-parallel tomographic reconstruction : A comparison of filtered backprojection and direct Fourier reconstruction

    NARCIS (Netherlands)

    Roerdink, J.B.T.M.; Westenberg, M.A.

    1998-01-01

    We consider the parallelization of two standard 2D reconstruction algorithms, filtered backprojection and direct Fourier reconstruction, using the data-parallel programming style. The algorithms are implemented on a Connection Machine CM-5 with 16 processors and a peak performance of 2 Gflop/s.

  18. A fast implementation of the incremental backprojection algorithms for parallel beam geometries

    International Nuclear Information System (INIS)

    Chen, C.M.; Wang, C.Y.; Cho, Z.H.

    1996-01-01

    Filtered-backprojection algorithms are the most widely used approaches for reconstruction of computed tomographic (CT) images, such as X-ray CT and positron emission tomographic (PET) images. The incremental backprojection algorithm is a fast backprojection approach based on restructuring the Shepp and Logan algorithm. By exploiting the interdependency (position and values) of adjacent pixels, the incremental algorithm requires only O(N) and O(N²) multiplications, in contrast to O(N²) and O(N³) multiplications for the Shepp and Logan algorithm in two-dimensional (2-D) and three-dimensional (3-D) backprojections, respectively, for each view, where N is the size of the image in each dimension. In addition, it may reduce the number of additions for each pixel computation. The improvement achieved by the incremental algorithm in practice was not, however, as significant as expected. One of the main reasons is that the searching flow scheme originally developed for the incremental algorithm inevitably visits pixels outside the beam. To optimize the implementation of the incremental algorithm, an efficient scheme, namely the coded searching flow scheme, is proposed in this paper to minimize the overhead caused by searching for all pixels in a beam. The key idea of this scheme is to encode the searching flow for all pixels inside each beam. While backprojecting, all pixels may be visited without any overhead by using the coded searching flow as a priori information. The proposed coded searching flow scheme has been implemented on Sun Sparc 10 and Sun Sparc 20 workstations. The implementation results show that the proposed scheme is 1.45-2.0 times faster than the original searching flow scheme for most cases tested.

  19. Precise Aperture-Dependent Motion Compensation with Frequency Domain Fast Back-Projection Algorithm

    Directory of Open Access Journals (Sweden)

    Man Zhang

    2017-10-01

    Precise azimuth-variant motion compensation (MOCO) is an essential and difficult task for high-resolution synthetic aperture radar (SAR) imagery. In conventional post-filtering approaches, residual azimuth-variant motion errors are generally compensated through a set of spatial post-filters, where the coarse-focused image is segmented into overlapped blocks according to the azimuth-dependent residual errors. However, image-domain post-filtering approaches, such as the precise topography- and aperture-dependent motion compensation algorithm (PTA), lose robustness when strong motion errors are present in the coarse-focused image. In that case, capturing the complete motion blurring function within each image block requires enlarging both the block size and the overlap, which inevitably degrades efficiency and robustness. Herein, a frequency domain fast back-projection algorithm (FDFBPA) is introduced to deal with strong azimuth-variant motion errors. FDFBPA compensates the azimuth-variant motion errors based on a precise azimuth spectrum expression in the azimuth wavenumber domain. First, a wavenumber-domain sub-aperture processing strategy is introduced to accelerate computation. After that, the azimuth wavenumber spectrum is partitioned into a set of wavenumber blocks, and each block is formed into a sub-aperture coarse-resolution image via the back-projection integral. The sub-aperture images are then fused together in the azimuth wavenumber domain to obtain a full-resolution image. Moreover, the chirp-Z transform (CZT) is introduced to implement the sub-aperture back-projection integral, increasing the efficiency of the algorithm. By abandoning the image-domain post-filtering strategy, the robustness of the proposed algorithm is improved. Both simulation and real-measured data experiments demonstrate the effectiveness and superiority of the proposed algorithm.
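
    The appeal of the chirp-Z transform here is that it evaluates a spectrum on an arbitrary narrow band at full resolution without a full-size FFT. A generic zoom-spectrum sketch with SciPy's ZoomFFT (a CZT specialization); the sampling rate, band and signal are illustrative placeholders, not the SAR-specific sub-aperture integral.

        import numpy as np
        from scipy.signal import ZoomFFT

        fs = 1000.0                                # sampling rate, Hz (illustrative)
        t = np.arange(2048) / fs
        x = np.cos(2 * np.pi * 123.4 * t)          # toy signal

        # Evaluate 256 spectral samples on the 100-150 Hz band only.
        transform = ZoomFFT(len(x), [100.0, 150.0], 256, fs=fs)
        X = transform(x)
        freqs = np.linspace(100.0, 150.0, 256, endpoint=False)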

  20. Measurement of vascular wall attenuation: Comparison of CT angiography using model-based iterative reconstruction with standard filtered back-projection algorithm CT in vitro

    International Nuclear Information System (INIS)

    Suzuki, Shigeru; Machida, Haruhiko; Tanaka, Isao; Ueno, Eiko

    2012-01-01

    Objectives: To compare the performance of model-based iterative reconstruction (MBIR) with that of standard filtered back projection (FBP) for measuring vascular wall attenuation. Study design: After subjecting 9 vascular models (actual attenuation value of wall, 89 HU) with wall thicknesses of 0.5, 1.0, or 1.5 mm, filled with contrast material of 275, 396, or 542 HU, to scanning using 64-detector computed tomography (CT), we reconstructed images using MBIR and FBP (Bone and Detail kernels) and measured wall attenuation at the center of the wall for each model. We performed attenuation measurements for each model and additional supportive measurements by a differentiation curve. We analyzed statistics using analyses of variance with repeated measures. Results: Using the Bone kernel, the standard deviation of the measurement exceeded 30 HU in most conditions. In measurements at the wall center, the attenuation values obtained using MBIR were comparable to or significantly closer to the actual wall attenuation than those acquired using the Detail kernel. Using differentiation curves, we could measure attenuation for models with walls of 1.0- or 1.5-mm thickness using MBIR but only those of 1.5-mm thickness using the Detail kernel. We detected no significant differences among the attenuation values of the vascular walls of either thickness (MBIR, P = 0.1606) or among the 3 densities of intravascular contrast material (MBIR, P = 0.8185; Detail kernel, P = 0.0802). Conclusions: Compared with FBP, MBIR reduces both reconstruction blur and image noise simultaneously, facilitates recognition of vascular wall boundaries, and can improve accuracy in measuring wall attenuation.

  2. An accelerated threshold-based back-projection algorithm for Compton camera image reconstruction

    International Nuclear Information System (INIS)

    Mundy, Daniel W.; Herman, Michael G.

    2011-01-01

    Purpose: Compton camera imaging (CCI) systems are currently under investigation for radiotherapy dose reconstruction and verification. The ability of such a system to provide real-time images during dose delivery will be limited by the computational speed of the image reconstruction algorithm. In this work, the authors present a fast and simple method by which to generate an initial back-projected image from acquired CCI data, suitable for use in a filtered back-projection algorithm or as a starting point for iterative reconstruction algorithms, and compare its performance to the current state of the art. Methods: Each detector event in a CCI system describes a conical surface that includes the true point of origin of the detected photon. Numerical image reconstruction algorithms require, as a first step, the back-projection of each of these conical surfaces into an image space. The algorithm presented here first generates a solution matrix for each slice of the image space by solving the intersection of the conical surface with the image plane. Each element of the solution matrix is proportional to the distance of the corresponding voxel from the true intersection curve. A threshold function was developed to extract those pixels sufficiently close to the true intersection to generate a binary intersection curve. This process is repeated for each image plane for each CCI detector event, resulting in a three-dimensional back-projection image. The performance of this algorithm was tested against a marching algorithm known for speed and accuracy. Results: The threshold-based algorithm was found to be approximately four times faster than the current state of the art, with minimal deficit to image quality arising from the fact that a generically applicable threshold function cannot provide perfect results in all situations. The algorithm fails to extract a complete intersection curve in image slices near the detector surface for detector event cones having axes nearly parallel to the image plane.
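
    A schematic NumPy sketch of the slice-wise step described above: for a z = const image plane, a distance-like solution matrix is computed from the cone geometry and thresholded to a binary intersection curve. The apex, axis, grid and tolerance are illustrative values, and the paper's actual solution matrix and threshold function are more elaborate than this angular-distance proxy.

        import numpy as np

        def cone_slice_intersection(apex, axis, half_angle, z, n=256, extent=10.0, tol=0.02):
            """Binary intersection of a Compton event cone with the plane z = const."""
            axis = np.asarray(axis, float) / np.linalg.norm(axis)
            xs = np.linspace(-extent, extent, n)
            X, Y = np.meshgrid(xs, xs)
            V = np.stack([X - apex[0], Y - apex[1], np.full_like(X, z - apex[2])], axis=-1)
            V /= np.linalg.norm(V, axis=-1, keepdims=True)
            # Solution matrix: how far each pixel direction lies from the cone surface.
            sol = np.abs(V @ axis - np.cos(half_angle))
            return sol < tol                     # threshold -> binary intersection curve

        slice_img = cone_slice_intersection(apex=(0.0, 0.0, 5.0), axis=(0.0, 0.0, -1.0),
                                            half_angle=np.deg2rad(30.0), z=0.0)

    Repeating this over all z-slices and summing the binary curves over all detected events yields the three-dimensional back-projection image the record describes.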

  3. A cone-beam tomography system with a reduced size planar detector: A backprojection-filtration reconstruction algorithm as well as numerical and practical experiments

    International Nuclear Information System (INIS)

    Li Liang; Chen Zhiqiang; Zhang Li; Xing Yuxiang; Kang Kejun

    2007-01-01

    In a traditional cone-beam computed tomography (CT) system, the costs of production and computation are very high. In this paper, we develop a transversely truncated cone-beam X-ray CT system with a reduced-size detector positioned off-center, in which the X-ray beams cover only half of the object. The existing filtered backprojection (FBP) or backprojection-filtration (BPF) algorithms are not directly applicable in this new system. Hence, we develop a BPF-type direct backprojection algorithm. Unlike traditional rebinning methods, our algorithm directly backprojects the pretreated projection data without rebinning, which makes the algorithm compact and computationally more efficient. Because the interpolation errors of the rebinning process are avoided, higher spatial resolution is obtained. Finally, numerical simulations and practical experiments are performed to validate the proposed algorithm and compare it with the rebinning algorithm.

  4. Dual filtered backprojection for micro-rotation confocal microscopy

    International Nuclear Information System (INIS)

    Laksameethanasan, Danai; Brandt, Sami S; Renaud, Olivier; Shorte, Spencer L

    2009-01-01

    Micro-rotation confocal microscopy is a novel optical imaging technique which employs dielectric fields to trap and rotate individual cells to facilitate 3D fluorescence imaging using a confocal microscope. In contrast to computed tomography (CT), where an image can be modelled as a parallel projection of an object, the ideal confocal image is recorded as a central slice of the object corresponding to the focal plane. In CT, the projection images and the 3D object are related by the Fourier slice theorem, which states that the Fourier transform of a CT image is equal to a central slice of the Fourier transform of the 3D object. In the micro-rotation application, we have a dual form of this setting, i.e. the Fourier transform of the confocal image equals the parallel projection of the Fourier transform of the 3D object. Based on this duality, we present here the dual of the classical filtered back-projection (FBP) algorithm and apply it in micro-rotation confocal imaging. Our experiments on real data demonstrate that the proposed method is a fast and reliable algorithm for the micro-rotation application, as FBP is for CT.

  5. Metal artefact reduction for a dental cone beam CT image using image segmentation and backprojection filters

    International Nuclear Information System (INIS)

    Mohammadi, Mahdi; Khotanlou, Hassan; Mohammadi, Mohammad

    2011-01-01

    Due to low dose delivery and fast scanning, dental cone beam CT (CBCT) is the latest technology being implemented for a range of dental imaging tasks. The presence of metallic objects, including amalgam or gold fillings, in the mouth severely degrades the reconstructed image of the human jaw. The feasibility of a fast and accurate approach to metal artefact reduction for dental CBCT is investigated. The current study addresses metal artefact reduction using image segmentation and modification of the sinograms. To reduce metal effects such as beam hardening, streak artefacts and intense noise, the application of several algorithms is evaluated. The proposed method includes three stages: pre-processing, reconstruction and post-processing. In the pre-processing stage, several phase and frequency filters were applied to reduce the noise level. In the second stage, based on the specific sinogram obtained for each segment, spline interpolation and weighted backprojection filters were applied to reconstruct the original image. A three-dimensional filter was then applied to the reconstructed images to improve image quality. Results showed that, compared to other available filters, standard frequency filters have a significant influence in the pre-processing stage (ΔHU = 48 ± 6). In addition, the presence of streak artefacts increases the probability of beam-hardening artefacts. In the post-processing stage, the application of three-dimensional filters improves the quality of the reconstructed images. Conclusion: The proposed method reduces metal artefacts, especially where more than one metal object is implanted in the region of interest.
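
    The sinogram-modification step is essentially an inpainting problem: detector readings inside the metal trace are discarded and re-estimated, view by view, by spline interpolation from their neighbours. A minimal sketch of that step; the mask is assumed to come from segmenting the metal in a first-pass reconstruction and forward-projecting it, and the rest of the record's pipeline (pre- and post-filters, weighted backprojection) is omitted.

        import numpy as np
        from scipy.interpolate import CubicSpline

        def inpaint_metal_trace(sinogram, metal_mask):
            """Replace metal-affected bins with cubic-spline estimates, one view at a time.

            sinogram:   (n_detectors, n_views) projection data
            metal_mask: boolean array of the same shape, True where a ray crossed metal
            """
            out = sinogram.copy()
            bins = np.arange(sinogram.shape[0])
            for v in range(sinogram.shape[1]):
                bad = metal_mask[:, v]
                if bad.any() and (~bad).sum() >= 2:
                    spline = CubicSpline(bins[~bad], sinogram[~bad, v])
                    out[bad, v] = spline(bins[bad])
            return out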

  6. Filtered backprojection for modifying the impulse response of circular tomosynthesis

    International Nuclear Information System (INIS)

    Stevens, Grant M.; Fahrig, Rebecca; Pelc, Norbert J.

    2001-01-01

    A filtering technique has been developed to modify the three-dimensional impulse response of circular motion tomosynthesis to allow the generation of images whose appearance resembles that of some other imaging geometries. In particular, this technique can reconstruct images with a blurring function which is more homogeneous for off-focal-plane objects than that from circular tomosynthesis. In this paper, we describe the filtering process, and demonstrate the ability to alter the impulse response in circular motion tomosynthesis from a ring to a disk. This filtering may be desirable because the blurred out-of-plane objects appear less structured.

  7. Neural network Hilbert transform based filtered backprojection for fast inline x-ray inspection

    Science.gov (United States)

    Janssens, Eline; De Beenhouwer, Jan; Van Dael, Mattias; De Schryver, Thomas; Van Hoorebeke, Luc; Verboven, Pieter; Nicolai, Bart; Sijbers, Jan

    2018-03-01

    X-ray imaging is an important tool for quality control since it allows the interior of products to be inspected non-destructively. Conventional x-ray imaging, however, is slow and expensive. Inline x-ray inspection, on the other hand, can pave the way towards fast and individual quality control, provided that a sufficiently high throughput can be achieved at a minimal cost. To meet these criteria, an inline inspection acquisition geometry is proposed where the object moves and rotates on a conveyor belt while it passes a fixed source and detector. Moreover, for this acquisition geometry, a new neural-network-based reconstruction algorithm is introduced: the neural network Hilbert transform based filtered backprojection. The proposed algorithm is evaluated both on simulated and real inline x-ray data and has been shown to generate high quality reconstructions of 400 × 400 reconstruction pixels within 200 ms, thereby meeting the high throughput criteria.

  8. Beyond filtered backprojection: A reconstruction software package for ion beam microtomography data

    Science.gov (United States)

    Habchi, C.; Gordillo, N.; Bourret, S.; Barberet, Ph.; Jovet, C.; Moretto, Ph.; Seznec, H.

    2013-01-01

    A new version of the TomoRebuild data reduction software package is presented, for the reconstruction of scanning transmission ion microscopy tomography (STIMT) and particle induced X-ray emission tomography (PIXET) images. First, we review the state of the art of reconstruction codes available for ion beam microtomography. The algorithm proposed here brings several advantages. It is a portable, multi-platform code, designed in C++ with well-separated classes for easier use and evolution. Data reduction is separated into different steps, and the intermediate results may be checked if necessary. Although no additional graphics library or numerical tool is required to run the program as a command line, a user-friendly interface was designed in Java, as an ImageJ plugin. All experimental and reconstruction parameters may be entered either through this plugin or directly in text format files. A simple standard format is proposed for the input of experimental data. Optional graphic applications using the ROOT interface may be used separately to display and fit energy spectra. Regarding the reconstruction process, the filtered backprojection (FBP) algorithm, already present in the previous version of the code, was optimized so that it is now about 10 times faster. In addition, the Maximum Likelihood Expectation Maximization (MLEM) algorithm and its accelerated version, Ordered Subsets Expectation Maximization (OSEM), were implemented. A detailed user guide in English is available. A reconstruction example using experimental data from a biological sample is given. It shows the capability of the code to reduce noise in the sinograms and to deal with incomplete data, which puts a new perspective on tomography using a low number of projections or a limited angular range.
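
    For reference, the MLEM update mentioned here has a compact multiplicative form. The sketch below uses a dense system matrix A for clarity (production codes use ray tracing instead); the sizes and data are illustrative.

        import numpy as np

        def mlem(A, p, n_iter=50, eps=1e-12):
            """Maximum Likelihood Expectation Maximization for p = A f with f >= 0.

            Update: f <- f * A^T(p / (A f)) / (A^T 1)
            """
            f = np.ones(A.shape[1])
            sens = A.T @ np.ones(A.shape[0])            # sensitivity image, A^T 1
            for _ in range(n_iter):
                ratio = p / np.maximum(A @ f, eps)      # measured / estimated projections
                f *= (A.T @ ratio) / np.maximum(sens, eps)
            return f

        # Toy usage with a random nonnegative system and noiseless data:
        rng = np.random.default_rng(0)
        A = rng.random((200, 100))
        f_est = mlem(A, A @ rng.random(100), n_iter=200)

    OSEM accelerates this by applying the same update restricted to ordered subsets of the rows of A in each sub-iteration, giving a speedup roughly proportional to the number of subsets.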

  9. A local region of interest image reconstruction via filtered backprojection for fan-beam differential phase-contrast computed tomography

    International Nuclear Information System (INIS)

    Qi Zhihua; Chen Guanghong

    2007-01-01

    Recently, x-ray differential phase contrast computed tomography (DPC-CT) has been experimentally implemented using a conventional source combined with several gratings. Images were reconstructed using a parallel-beam reconstruction formula. However, parallel-beam reconstruction formulae are not directly applicable for a large image object, where the parallel-beam approximation fails. In this note, we present a new image reconstruction formula for fan-beam DPC-CT. There are two major features in this algorithm: (1) it enables the reconstruction of a local region of interest (ROI) using data acquired from an angular interval shorter than 180° + fan angle and (2) it still preserves the filtered backprojection structure. Numerical simulations have been conducted to validate the image reconstruction algorithm. (note)

  10. Fan-beam and cone-beam image reconstruction via filtering the backprojection image of differentiated projection data

    International Nuclear Information System (INIS)

    Zhuang Tingliang; Leng Shuai; Nett, Brian E; Chen Guanghong

    2004-01-01

    In this paper, a new image reconstruction scheme is presented based on Tuy's cone-beam inversion scheme and its fan-beam counterpart. It is demonstrated that Tuy's inversion scheme may be used to derive a new framework for fan-beam and cone-beam image reconstruction. In this new framework, images are reconstructed via filtering the backprojection image of differentiated projection data. The new framework is mathematically exact and is applicable to a general source trajectory provided the Tuy data sufficiency condition is satisfied. By choosing a piece-wise constant function for one of the components in the factorized weighting function, the filtering kernel is one dimensional, viz. the filtering process is along a straight line. Thus, the derived image reconstruction algorithm is mathematically exact and efficient. In the cone-beam case, the derived reconstruction algorithm is applicable to a large class of source trajectories where the pi-lines or the generalized pi-lines exist. In addition, the new reconstruction scheme survives the super-short scan mode in both the fan-beam and cone-beam cases provided the data are not transversely truncated. Numerical simulations were conducted to validate the new reconstruction scheme for the fan-beam case.

  11. Decoding using back-project algorithm from coded image in ICF

    International Nuclear Information System (INIS)

    Jiang Shaoen; Liu Zhongli; Zheng Zhijian; Tang Daoyuan

    1999-01-01

    The principle of coded imaging and its decoding in inertial confinement fusion (ICF) is briefly described. The authors take the ring aperture microscope as an example and use the back-projection (BP) algorithm to decode the coded image. The decoding program was implemented for numerical simulation. Simulations of two models show that the accuracy of the BP algorithm is high and the reconstruction quality is good. This indicates that the BP algorithm is applicable to decoding coded images in ICF experiments.

  12. Iterative reconstruction or filtered backprojection for semi-quantitative assessment of dopamine D2 receptor SPECT studies?

    International Nuclear Information System (INIS)

    Koch, Walter; Suessmair, Christine; Tatsch, Klaus; Poepperl, Gabriele

    2011-01-01

    In routine clinical practice, striatal dopamine D2 receptor binding is generally assessed using data reconstructed by filtered backprojection (FBP). The aim of this study was to investigate the use of an iterative reconstruction algorithm (ordered subset expectation maximization, OSEM) and to assess whether it may provide comparable or even better results than those obtained by standard FBP. In 56 patients with parkinsonian syndromes, single photon emission computed tomography (SPECT) scans were acquired 2 h after i.v. application of 185 MBq [123I]iodobenzamide (IBZM) using a triple-head gamma camera (Siemens MS 3). The scans were reconstructed both by FBP and by OSEM (3 iterations, 8 subsets) and filtered using a Butterworth filter. After attenuation correction, the studies were automatically fitted to a mean template with a corresponding 3-D volume of interest (VOI) map covering striatum (S), caudate (C), putamen (P) and several reference VOIs using BRASS software. Visual assessment of the fitted studies suggests a better separation between C and P in studies reconstructed by OSEM than by FBP. Unspecific background activity appears more homogeneous after iterative reconstruction. The correlations show good agreement between dopamine receptor binding assessed using FBP and OSEM (intra-class correlation coefficients S: 0.87; C: 0.88; P: 0.84). Receiver-operating characteristic (ROC) analyses show comparable diagnostic power of OSEM and FBP in differentiating between idiopathic parkinsonian syndrome (IPS) and non-IPS. Iterative reconstruction of IBZM SPECT studies for assessment of D2 receptors is feasible in routine clinical practice. Close correlations between FBP and OSEM data suggest that iteratively reconstructed IBZM studies allow reliable quantification of dopamine receptor binding, even though a gain in diagnostic power could not be demonstrated. (orig.)

  13. A general exact method for synthesizing parallel-beam projections from cone-beam projections via filtered backprojection

    International Nuclear Information System (INIS)

    Li Liang; Chen Zhiqiang; Xing Yuxiang; Zhang Li; Kang Kejun; Wang Ge

    2006-01-01

    In recent years, image reconstruction methods for cone-beam computed tomography (CT) have been extensively studied. However, few of these studies discussed computing parallel-beam projections from cone-beam projections. In this paper, we focus on the exact synthesis of complete or incomplete parallel-beam projections from cone-beam projections. First, an extended central slice theorem is described to establish a relationship between the Radon space and the Fourier space. Then, data sufficiency conditions are proposed for computing parallel-beam projection data from cone-beam data. Using these results, a general filtered backprojection algorithm is formulated that can exactly synthesize parallel-beam projection data from cone-beam projection data. As an example, we prove that parallel-beam projections can be exactly synthesized in an angular range in the case of circular cone-beam scanning. Interestingly, this angular range is larger than that derived in the Feldkamp reconstruction framework. Numerical experiments are performed in the circular scanning case to verify our method.

  14. Randomized Filtering Algorithms

    DEFF Research Database (Denmark)

    Katriel, Irit; Van Hentenryck, Pascal

    2008-01-01

    ...of AllDifferent and its generalization, the Global Cardinality Constraint. The first delayed filtering scheme is a Monte Carlo algorithm: its running time is superior, in the worst case, to that of enforcing arc consistency after every domain event, while its filtering effectiveness is analyzed in the expected sense. The second scheme is a Las Vegas algorithm using filtering triggers: its effectiveness is the same as enforcing arc consistency after every domain event, while in the expected case it is faster by a factor of m/n, where n and m are, respectively, the number of nodes and edges...

  15. Task-based detectability in CT image reconstruction by filtered backprojection and penalized likelihood estimation

    Energy Technology Data Exchange (ETDEWEB)

    Gang, Grace J. [Institute of Biomaterials and Biomedical Engineering, University of Toronto, Toronto, Ontario M5G 2M9, Canada and Department of Biomedical Engineering, Johns Hopkins University, Baltimore, Maryland 21205 (United States)]; Stayman, J. Webster; Zbijewski, Wojciech [Department of Biomedical Engineering, Johns Hopkins University, Baltimore, Maryland 21205 (United States)]; Siewerdsen, Jeffrey H., E-mail: jeff.siewerdsen@jhu.edu [Institute of Biomaterials and Biomedical Engineering, University of Toronto, Toronto, Ontario M5G 2M9, Canada and Department of Biomedical Engineering, Johns Hopkins University, Baltimore, Maryland 21205 (United States)]

    2014-08-15

    Purpose: Nonstationarity is an important aspect of imaging performance in CT and cone-beam CT (CBCT), especially for systems employing iterative reconstruction. This work presents a theoretical framework for both filtered-backprojection (FBP) and penalized-likelihood (PL) reconstruction that includes explicit descriptions of nonstationary noise, spatial resolution, and task-based detectability index. Potential utility of the model was demonstrated in the optimal selection of regularization parameters in PL reconstruction. Methods: Analytical models for local modulation transfer function (MTF) and noise-power spectrum (NPS) were investigated for both FBP and PL reconstruction, including explicit dependence on the object and spatial location. For FBP, a cascaded systems analysis framework was adapted to account for nonstationarity by separately calculating fluence and system gains for each ray passing through any given voxel. For PL, the point-spread function and covariance were derived using the implicit function theorem and first-order Taylor expansion according to Fessler [“Mean and variance of implicitly defined biased estimators (such as penalized maximum likelihood): Applications to tomography,” IEEE Trans. Image Process. 5(3), 493–506 (1996)]. Detectability index was calculated for a variety of simple tasks. The model for PL was used in selecting the regularization strength parameter to optimize task-based performance, with both a constant and a spatially varying regularization map. Results: Theoretical models of FBP and PL were validated in 2D simulated fan-beam data and found to yield accurate predictions of local MTF and NPS as a function of the object and the spatial location. The NPS for both FBP and PL exhibit a similar anisotropic nature depending on the pathlength (and therefore the object and spatial location within the object) traversed by each ray, with the PL NPS experiencing greater smoothing along directions with higher noise. The MTF of FBP ...
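
    The task-based detectability index that ties these pieces together can be computed directly from the local MTF, NPS and a task function on a discrete frequency grid. A minimal sketch using the standard non-prewhitening observer form, d'² = [Σ(MTF·W)²·Δf]² / Σ NPS·(MTF·W)²·Δf; the paper also considers other observer models, so this is an illustrative instance rather than its exact estimator.

        import numpy as np

        def detectability_npw(mtf, nps, task, df):
            """Non-prewhitening detectability index on a 2D frequency grid.

            mtf, nps, task: 2D arrays sampled on the same (fx, fy) grid
            df:             area of one frequency bin, so sums approximate integrals
            """
            signal = (mtf * task) ** 2
            numerator = (signal.sum() * df) ** 2
            denominator = (nps * signal).sum() * df
            return np.sqrt(numerator / denominator)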

  16. A backprojection-filtration algorithm for nonstandard spiral cone-beam CT with an n-PI-window

    International Nuclear Information System (INIS)

    Yu Hengyong; Ye Yangbo; Zhao Shiying; Wang Ge

    2005-01-01

    For applications in bolus-chasing computed tomography (CT) angiography and electron-beam micro-CT, the backprojection-filtration (BPF) formula developed by Zou and Pan was recently generalized by Ye et al. to reconstruct images from cone-beam data collected along a rather flexible scanning locus, including a nonstandard spiral. A major implication of the generalized BPF formula is that it can be applied for n-PI-window-based reconstruction in the nonstandard spiral scanning case. In this paper, we design an n-PI-window-based BPF algorithm, and report numerical simulation results with the 3D Shepp-Logan phantom and the Defrise disk phantom. The proposed BPF algorithm consists of three steps: cone-beam data differentiation, weighted backprojection and inverse Hilbert filtration. Our simulated results demonstrate the feasibility and merits of the proposed algorithm.

  17. Signal filtering algorithm for depth-selective diffuse optical topography

    International Nuclear Information System (INIS)

    Fujii, M.; Nakayama, K.

    2009-01-01

    A compact filtered backprojection algorithm that suppresses the undesirable effects of skin circulation for near-infrared diffuse optical topography is proposed. Our approach centers around a depth-selective filtering algorithm that uses an inverse problem technique and extracts target signals from observation data contaminated by noise from a shallow region. The filtering algorithm is reduced to a compact matrix and is therefore easily incorporated into a real-time system. To demonstrate the validity of this method, we developed a demonstration prototype for depth-selective diffuse optical topography and performed both computer simulations and phantom experiments. The results show that the proposed method significantly suppresses the noise from the shallow region with a minimal degradation of the target signal.
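
    The phrase "reduced to a compact matrix" admits a simple reading: if the depth-resolved forward model is a small linear system y = A x, a regularized inverse can be precomputed once offline and applied per frame as a single matrix product. A generic Tikhonov-style sketch of that idea; the paper's actual inverse-problem formulation and weighting are specific to its probe geometry, so A and lam here are placeholders.

        import numpy as np

        def precompute_depth_filter(A, lam=1e-2):
            """One-time Tikhonov pseudo-inverse W = (A^T A + lam*I)^(-1) A^T."""
            n = A.shape[1]
            return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T)

        # Real-time use: x_hat = W @ y for every new measurement vector y.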

  18. Fast algorithm for Morphological Filters

    International Nuclear Information System (INIS)

    Lou Shan; Jiang Xiangqian; Scott, Paul J.

    2011-01-01

    In surface metrology, morphological filters, which evolved from the envelope filtering system (E-system), work well for functional prediction of surface finish in the analysis of surfaces in contact. The naive algorithms are time consuming, especially for areal data, and are not generally adopted in practice. A fast algorithm is proposed based on the alpha shape. The hull obtained by rolling the alpha ball is equivalent to the morphological opening/closing in theory. The algorithm depends on Delaunay triangulation, with time complexity O(n log n). In comparison to the naive algorithms, it generates the opening and closing envelopes without combining dilation and erosion. Edge distortion is corrected by reflective padding for open profiles/surfaces. Spikes in the sample data are detected and points interpolated to prevent singularities. The proposed algorithm works well for both morphological profile and areal filters. Examples are presented to demonstrate the validity of this algorithm and its superior efficiency over the naive algorithm.
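
    For contrast with the alpha-shape method, the naive morphological envelope it accelerates can be written in a couple of lines with SciPy. This sketch uses a flat structuring element for brevity, whereas metrological practice rolls a disc or ball of a given radius, and the profile and element size are illustrative.

        import numpy as np
        from scipy.ndimage import grey_closing, grey_opening

        rng = np.random.default_rng(1)
        profile = np.sin(np.linspace(0, 6 * np.pi, 2000)) + 0.1 * rng.standard_normal(2000)

        radius = 25                                               # half-width, in samples
        closing_env = grey_closing(profile, size=2 * radius + 1)  # upper envelope
        opening_env = grey_opening(profile, size=2 * radius + 1)  # lower envelope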

  19. Comparative study of simultaneous algebraic and filtered backprojection reconstruction methods in digital tomosynthesis for nondestructive testing

    International Nuclear Information System (INIS)

    Kim, Dae Cheon; Youn, Hanbean; Kim, Seung Ho; Kim, Ho Kyung

    2015-01-01

    Filtered backprojection (FBP) and simultaneous algebraic reconstruction technique (SART) algorithms have their own merits and demerits in terms of image quality and reconstruction speed. For industrial applications such as multi-layer printed circuit board (PCB) inspection, automated inspection systems require real-time imaging and high spatial resolution. In this study, we quantitatively evaluate the performance of FBP and SART for planar computed tomography (pCT) systems. The performance measures include contrast and depth resolution. These benefits will be normalized by costs, such as tube loading and speed. Further study is needed to accomplish this. First of all, it should be verified by experiment that the algorithm works correctly. Once we prove the algorithm is correct for the PCB phantom, the reconstructed images will be compared using metric parameters.

  20. Comparative study of simultaneous algebraic and filtered backprojection reconstruction methods in digital tomosynthesis for nondestructive testing

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Dae Cheon; Youn, Hanbean; Kim, Seung Ho; Kim, Ho Kyung [Pusan National University, Busan (Korea, Republic of)

    2015-05-15

    These algorithms have their own merits and demerits in terms of image quality and reconstruction speed. For industrial applications, such as multi-layer printed circuit board (PCB) inspection, automated inspection systems require real-time imaging and high spatial resolution. In this study, we quantitatively evaluate the performance of FBP and SART for planar computed tomography (pCT) systems. The performance metrics include contrast and depth resolution. These benefits will be normalized by costs, such as tube loading and speed. To accomplish this, further study is needed. First of all, it should be verified by experiment that the algorithm works correctly. Once we prove the algorithm is correct for the PCB phantom, the reconstructed images will be compared using metric parameters.

  1. Why do commercial CT scanners still employ traditional, filtered back-projection for image reconstruction?

    International Nuclear Information System (INIS)

    Pan, Xiaochuan; Sidky, Emil Y; Vannier, Michael

    2009-01-01

    Despite major advances in x-ray sources, detector arrays, gantry mechanical design and especially computer performance, one component of computed tomography (CT) scanners has remained virtually constant for the past 25 years—the reconstruction algorithm. Fundamental advances have been made in the solution of inverse problems, especially tomographic reconstruction, but these works have not been translated into clinical and related practice. The reasons are not obvious and seldom discussed. This review seeks to examine the reasons for this discrepancy and provides recommendations on how it can be resolved. We take the example of the field of compressive sensing (CS), summarizing this new area of research from the eyes of practical medical physicists and explaining the disconnection between theoretical and application-oriented research. Using a few issues specific to CT, which engineers have addressed in very specific ways, we try to distill the mathematical problem underlying each of these issues, with the hope of demonstrating that there are interesting mathematical problems of general importance that can result from in-depth analysis of specific issues. We then sketch some unconventional CT-imaging designs that have the potential to impact CT applications, if the link between applied mathematicians and engineers/physicists were stronger. Finally, we close with some observations on how the link could be strengthened. There is, we believe, an important opportunity to rapidly improve the performance of CT and related tomographic imaging techniques by addressing these issues. (topical review)

  2. Potency backprojection

    Science.gov (United States)

    Okuwaki, R.; Kasahara, A.; Yagi, Y.

    2017-12-01

    The backprojection (BP) method has been a powerful tool for tracking the seismic-wave sources of large/mega earthquakes. The BP method projects waveforms onto a possible source point by stacking them with the theoretical travel-time shifts between the source point and the stations. Following the BP method, the hybrid backprojection (HBP) method was developed to enhance the depth resolution of projected images and mitigate the dummy imaging of the depth phases, which are shortcomings of the BP method, by stacking cross-correlation functions of the observed waveforms and theoretically calculated Green's functions (GFs). The signal intensity of the BP/HBP image at a source point is related to how much of the observed wavefield was radiated from that point. Since the amplitude of the GF associated with the slip rate increases with depth, as the rigidity increases with depth, the intensity of the BP/HBP image inherently has a depth dependence. To make a direct comparison of the BP/HBP image with the corresponding slip distribution inferred from a waveform inversion, and to discuss the rupture properties along the fault drawn from the waveforms in high and low frequencies with the BP/HBP methods and the waveform inversion, respectively, it is desirable to have variants of the BP/HBP methods that directly image the potency-rate-density distribution. Here we propose new formulations of the BP/HBP methods, which image the distribution of the potency-rate density by introducing alternative normalizing factors into the conventional formulations. For the BP method, the observed waveform is normalized with the maximum amplitude of the P-phase of the corresponding GF. For the HBP method, we normalize the cross-correlation function with the squared sum of the GF. The normalized waveforms or the cross-correlation functions are then stacked for all the stations to enhance the signal-to-noise ratio. We will present performance tests of the new formulations by using synthetic waveforms and the

  3. Filtering algorithm for dotted interferences

    Energy Technology Data Exchange (ETDEWEB)

    Osterloh, K., E-mail: kurt.osterloh@bam.de [Federal Institute for Materials Research and Testing (BAM), Division VIII.3, Radiological Methods, Unter den Eichen 87, 12205 Berlin (Germany); Buecherl, T.; Lierse von Gostomski, Ch. [Technische Universitaet Muenchen, Lehrstuhl fuer Radiochemie, Walther-Meissner-Str. 3, 85748 Garching (Germany); Zscherpel, U.; Ewert, U. [Federal Institute for Materials Research and Testing (BAM), Division VIII.3, Radiological Methods, Unter den Eichen 87, 12205 Berlin (Germany); Bock, S. [Technische Universitaet Muenchen, Lehrstuhl fuer Radiochemie, Walther-Meissner-Str. 3, 85748 Garching (Germany)

    2011-09-21

    An algorithm has been developed to reliably remove dotted interferences that impair the perceptibility of objects within a radiographic image. This is a particular challenge with neutron radiographs collected at the NECTAR facility, Forschungs-Neutronenquelle Heinz Maier-Leibnitz (FRM II): the resulting images are dominated by features resembling a snow flurry. These artefacts are caused by scattered neutrons, gamma radiation, cosmic radiation, etc., all hitting the detector CCD directly in spite of sophisticated shielding. This makes such images rather useless for further direct evaluation. One approach to this problem of random effects would be to collect a vast number of single images, combine them appropriately and process them with common image filtering procedures. However, it has been shown that, e.g., median filtering, depending on the kernel size in the plane and/or the number of single shots to be combined, is either insufficient or tends to blur sharp line structures. This inevitably makes visually controlled, image-by-image processing unavoidable. Particularly in tomographic studies, it would be far too tedious to treat each single projection this way. Alternatively, it would be not only more comfortable but in many cases the only reasonable approach to filter a stack of images in a batch procedure to get rid of the disturbing interferences. The algorithm presented here meets all these requirements. It reliably frees the images from the snowy pattern described above without the loss of fine structures and without a general blurring of the image. It consists of an iterative, parameter-free filtering algorithm, suitable for batch processing, that aims to eliminate the often complex interfering artefacts while leaving the original information untouched as far as possible.

  4. Filtering algorithm for dotted interferences

    International Nuclear Information System (INIS)

    Osterloh, K.; Buecherl, T.; Lierse von Gostomski, Ch.; Zscherpel, U.; Ewert, U.; Bock, S.

    2011-01-01

    An algorithm has been developed to reliably remove dotted interferences that impair the perceptibility of objects within a radiographic image. This is a particular challenge with neutron radiographs collected at the NECTAR facility, Forschungs-Neutronenquelle Heinz Maier-Leibnitz (FRM II): the resulting images are dominated by features resembling a snow flurry. These artefacts are caused by scattered neutrons, gamma radiation, cosmic radiation, etc., all hitting the detector CCD directly in spite of sophisticated shielding. This makes such images rather useless for further direct evaluation. One approach to this problem of random effects would be to collect a vast number of single images, combine them appropriately and process them with common image filtering procedures. However, it has been shown that, e.g., median filtering, depending on the kernel size in the plane and/or the number of single shots to be combined, is either insufficient or tends to blur sharp line structures. This inevitably makes visually controlled, image-by-image processing unavoidable. Particularly in tomographic studies, it would be far too tedious to treat each single projection this way. Alternatively, it would be not only more comfortable but in many cases the only reasonable approach to filter a stack of images in a batch procedure to get rid of the disturbing interferences. The algorithm presented here meets all these requirements. It reliably frees the images from the snowy pattern described above without the loss of fine structures and without a general blurring of the image. It consists of an iterative, parameter-free filtering algorithm, suitable for batch processing, that aims to eliminate the often complex interfering artefacts while leaving the original information untouched as far as possible.

  5. Effect of number of projections on inverse Radon transform based image reconstruction by using filtered back-projection for parallel beam transmission tomography

    International Nuclear Information System (INIS)

    Qureshi, S.A.; Mirza, S.M.; Arif, M.

    2007-01-01

    This paper presents the effect of the number of projections on inverse Radon transform (IRT) estimation using the filtered back-projection (FBP) technique for parallel beam transmission tomography. The head phantom and the lung phantom have been used in this work. The filters used in this study include the Ram-Lak, Shepp-Logan, Cosine, Hamming and Hanning filters. The slices have been reconstructed by increasing the number of projections through parallel beam transmission tomography, keeping the projections uniformly distributed. The Euclidean and mean squared errors and the peak signal-to-noise ratio (PSNR) have been analyzed for their sensitivity as functions of the number of projections. It has been found that image quality improves with the number of projections, but at the cost of computer time. The error is minimized, yielding the best approximation of the inverse Radon transform (IRT), as the number of projections is increased. The value of PSNR has been found to increase from 8.20 to 24.53 dB as the number of projections is raised from 5 to 180 for the head phantom. (author)
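
    The experiment is straightforward to reproduce in outline with scikit-image's radon/iradon pair. The snippet below is a rough stand-in, assuming the built-in Shepp-Logan phantom and a Hamming-windowed FBP filter rather than the authors' exact phantoms and implementation.

      import numpy as np
      from skimage.data import shepp_logan_phantom
      from skimage.transform import radon, iradon, resize

      img = resize(shepp_logan_phantom(), (128, 128))
      for n_proj in (5, 45, 180):
          theta = np.linspace(0.0, 180.0, n_proj, endpoint=False)  # uniform angles
          rec = iradon(radon(img, theta=theta), theta=theta,
                       filter_name='hamming')  # 'filter_name' in recent scikit-image
          mse = np.mean((rec - img) ** 2)
          psnr = 10.0 * np.log10(img.max() ** 2 / mse)
          print(n_proj, 'projections -> PSNR', round(psnr, 2), 'dB')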

  6. Adaptive Filtering Algorithms and Practical Implementation

    CERN Document Server

    Diniz, Paulo S R

    2013-01-01

    In the fourth edition of Adaptive Filtering: Algorithms and Practical Implementation, author Paulo S.R. Diniz presents the basic concepts of adaptive signal processing and adaptive filtering in a concise and straightforward manner. The main classes of adaptive filtering algorithms are presented in a unified framework, using clear notations that facilitate actual implementation. The main algorithms are described in tables, which are detailed enough to allow the reader to verify the covered concepts. Many examples address problems drawn from actual applications. New material to this edition includes: Analytical and simulation examples in Chapters 4, 5, 6 and 10 Appendix E, which summarizes the analysis of set-membership algorithm Updated problems and references Providing a concise background on adaptive filtering, this book covers the family of LMS, affine projection, RLS and data-selective set-membership algorithms as well as nonlinear, sub-band, blind, IIR adaptive filtering, and more. Several problems are...

  7. Virtual patient 3D dose reconstruction using in air EPID measurements and a back-projection algorithm for IMRT and VMAT treatments.

    Science.gov (United States)

    Olaciregui-Ruiz, Igor; Rozendaal, Roel; van Oers, René F M; Mijnheer, Ben; Mans, Anton

    2017-05-01

    At our institute, a transit back-projection algorithm is used clinically to reconstruct in vivo patient and in-phantom 3D dose distributions using EPID measurements behind a patient or a polystyrene slab phantom, respectively. In this study, an extension to this algorithm is presented whereby in air EPID measurements are used in combination with CT data to reconstruct 'virtual' 3D dose distributions. By combining virtual and in vivo patient verification data for the same treatment, patient-related errors can be separated from machine, planning and model errors. The virtual back-projection algorithm is described and verified against the transit algorithm with measurements made behind a slab phantom, against dose measurements made with an ionization chamber and with the OCTAVIUS 4D system, as well as against TPS patient data. Virtual and in vivo patient dose verification results are also compared. Virtual dose reconstructions agree within 1% with ionization chamber measurements. The average γ-pass rate values (3% global dose/3 mm) in the 3D dose comparison with the OCTAVIUS 4D system and the TPS patient data are 98.5 ± 1.9% (1 SD) and 97.1 ± 2.9% (1 SD), respectively. For virtual patient dose reconstructions, the differences with the TPS in median dose to the PTV remain within 4%. Virtual patient dose reconstruction makes pre-treatment verification based on deviations of DVH parameters feasible and eliminates the need for phantom positioning and re-planning. Virtual patient dose reconstructions have additional value in the inspection of in vivo deviations, particularly in situations where CBCT data is not available (or not conclusive).

  8. SU-F-SPS-06: Implementation of a Back-Projection Algorithm for 2D in Vivo Dosimetry with An EPID System

    Energy Technology Data Exchange (ETDEWEB)

    Hernandez Reyes, B; Rodriguez Perez, E; Sosa Aquino, M [Universidad de Guanajuato, Leon, Guanajuato (Mexico)

    2016-06-15

    Purpose: To implement a back-projection algorithm for 2D dose reconstruction for in vivo dosimetry in radiation therapy using an Electronic Portal Imaging Device (EPID) based on amorphous silicon. Methods: An EPID system was used to determine the dose-response function, pixel sensitivity map, exponential scatter kernels and beam hardening correction for the back-projection algorithm. All measurements were done with a 6 MV beam. A 2D dose reconstruction for an irradiated water phantom (30×30×30 cm³) was done to verify the algorithm implementation. Gamma index evaluation between the 2D reconstructed dose and the dose calculated with a treatment planning system (TPS) was done. Results: A linear fit was found for the dose-response function. The pixel sensitivity map has radial symmetry and was calculated from a profile of the pixel sensitivity variation. The parameters for the scatter kernels were determined only for a 6 MV beam. The primary dose was estimated by applying the scatter kernel within the EPID and the scatter kernel within the patient. The beam hardening coefficient is σ_BH = 3.788×10⁻⁴ cm² and the effective linear attenuation coefficient is µ_AC = 0.06084 cm⁻¹. 95% of the evaluated points had γ values not greater than unity, with gamma criteria of ΔD = 3% and Δd = 3 mm, and within the 50% isodose surface. Conclusion: The use of EPID systems proved to be a fast tool for in vivo dosimetry, but the implementation is more complex than that elaborated for pre-treatment dose verification; therefore, a simpler method should be investigated. The accuracy of this method could be improved by modifying the algorithm in order to compare lower isodose curves.

  9. Improved Collaborative Filtering Algorithm using Topic Model

    Directory of Open Access Journals (Sweden)

    Liu Na

    2016-01-01

    Collaborative filtering algorithms make use of interaction ratings between users and items to generate recommendations. Similarity among users or items is mostly calculated based on ratings, without considering explicit properties of the users or items involved. In this paper, we propose a collaborative filtering algorithm using a topic model. We describe the user-item matrix as a document-word matrix; users are represented as random mixtures over items, and each item is characterized by a distribution over users. The experiments showed that the proposed algorithm achieved better performance compared with other state-of-the-art algorithms on the MovieLens data sets.
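
    In topic-model terms, treating ratings as pseudo-counts lets an off-the-shelf LDA implementation play the role described above; user-topic mixtures and topic-item distributions then yield recommendation scores. The sketch below uses scikit-learn with a random stand-in for the rating matrix; the actual evaluation would use MovieLens data.

      import numpy as np
      from sklearn.decomposition import LatentDirichletAllocation

      # Stand-in user-item rating matrix treated as a document-word count matrix.
      R = np.random.default_rng(1).integers(0, 6, size=(50, 30))
      lda = LatentDirichletAllocation(n_components=5, random_state=0)
      user_topics = lda.fit_transform(R)  # users as mixtures over latent topics
      # Topic-item distributions, normalized per topic.
      item_topics = lda.components_ / lda.components_.sum(axis=1, keepdims=True)
      # Predicted affinity of each user for each item via the topic space.
      scores = user_topics @ item_topics
      print(scores.shape)  # (n_users, n_items); rank each row to recommend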

  10. CF4CF: Recommending Collaborative Filtering algorithms using Collaborative Filtering

    OpenAIRE

    Cunha, Tiago; Soares, Carlos; de Carvalho, André C. P. L. F.

    2018-01-01

    Automatic solutions which enable the selection of the best algorithms for a new problem are commonly found in the literature. One research area which has recently received considerable efforts is Collaborative Filtering. Existing work includes several approaches using Metalearning, which relate the characteristics of datasets with the performance of the algorithms. This work explores an alternative approach to tackle this problem. Since, in essence, both are recommendation problems, this work...

  11. Backprojection of volcanic tremor

    Science.gov (United States)

    Haney, Matthew M.

    2014-01-01

    Backprojection has become a powerful tool for imaging the rupture process of global earthquakes. We demonstrate the ability of backprojection to illuminate and track volcanic sources as well. We apply the method to the seismic network from Okmok Volcano, Alaska, at the time of an escalation in tremor during the 2008 eruption. Although we are able to focus the wavefield close to the location of the active cone, the network array response lacks sufficient resolution to reveal kilometer-scale changes in tremor location. By deconvolving the response in successive backprojection images, we enhance resolution and find that the tremor source moved toward an intracaldera lake prior to its escalation. The increased tremor therefore resulted from magma-water interaction, in agreement with the overall phreatomagmatic character of the eruption. Imaging of eruption tremor shows that time reversal methods, such as backprojection, can provide new insights into the temporal evolution of volcanic sources.
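
    The delay-and-stack kernel behind such imaging fits in a few lines. The function below is a schematic sketch with hypothetical inputs (station waveforms and precomputed travel times to one candidate source point); the deconvolution enhancement used in the study is not included.

      import numpy as np

      def backproject_point(waveforms, dt, travel_times):
          # waveforms: (n_sta, n_samp) array; travel_times: (n_sta,) seconds from
          # the candidate source point to each station (hypothetical values).
          n_sta, n_samp = waveforms.shape
          stack = np.zeros(n_samp)
          for w, tt in zip(waveforms, travel_times):
              shift = int(round(tt / dt))          # assumed smaller than n_samp
              stack[:n_samp - shift] += w[shift:]  # align traces by removing delay
          return stack / n_sta  # beam; stack**2 gives the image intensity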

  12. A New Filtering Algorithm Utilizing Radial Velocity Measurement

    Institute of Scientific and Technical Information of China (English)

    LIU Yan-feng; DU Zi-cheng; PAN Quan

    2005-01-01

    Pulse Doppler radar measurements consist of range, azimuth, elevation and radial velocity. Most radar tracking algorithms in engineering utilize only the position measurements. The extended Kalman filter with radial velocity measurement is presented; then a new filtering algorithm utilizing the radial velocity measurement is proposed to improve tracking results, and a theoretical analysis is also given. Simulation results of the new algorithm, the converted measurement Kalman filter and the extended Kalman filter are compared. The effectiveness of the new algorithm is verified by the simulation results.
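
    The distinguishing ingredient is the nonlinear measurement model that appends radial velocity to the position measurements. A minimal 2D sketch, assuming a state [x, y, vx, vy] (elevation omitted for brevity), is given below; the measurement function and its Jacobian are what an extended Kalman filter linearizes around.

      import numpy as np

      def h(state):
          # Measurement: range, azimuth and radial velocity (r assumed > 0).
          x, y, vx, vy = state
          r = np.hypot(x, y)
          return np.array([r, np.arctan2(y, x), (x * vx + y * vy) / r])

      def H_jacobian(state):
          x, y, vx, vy = state
          r = np.hypot(x, y)
          rdot = (x * vx + y * vy) / r
          return np.array([
              [x / r,                   y / r,                   0.0,   0.0  ],
              [-y / r**2,               x / r**2,                0.0,   0.0  ],
              [(vx - rdot * x / r) / r, (vy - rdot * y / r) / r, x / r, y / r],
          ])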

  13. Kalman Filter Predictor and Initialization Algorithm for PRI Tracking

    National Research Council Canada - National Science Library

    Hock, Melinda

    1998-01-01

    .... The algorithm uses a Kalman filter for prediction combined with a preprocessing routine to determine the period of the stagger sequence and to construct an uncorrupted data set for Kalman filter initialization...

  14. Vectorization of linear discrete filtering algorithms

    Science.gov (United States)

    Schiess, J. R.

    1977-01-01

    Linear filters, including the conventional Kalman filter and versions of square root filters devised by Potter and Carlson, are studied for potential application on streaming computers. The square root filters are known to maintain a positive definite covariance matrix in cases in which the Kalman filter diverges due to ill-conditioning of the matrix. Vectorization of the filters is discussed, and comparisons are made of the number of operations and storage locations required by each filter. The Carlson filter is shown to be the most efficient of the filters on the Control Data STAR-100 computer.

  15. Filter Pattern Search Algorithms for Mixed Variable Constrained Optimization Problems

    National Research Council Canada - National Science Library

    Abramson, Mark A; Audet, Charles; Dennis, Jr, J. E

    2004-01-01

    .... This class combines and extends the Audet-Dennis Generalized Pattern Search (GPS) algorithms for bound constrained mixed variable optimization, and their GPS-filter algorithms for general nonlinear constraints...

  16. Convergence Performance of Adaptive Algorithms of L-Filters

    Directory of Open Access Journals (Sweden)

    Robert Hudec

    2003-01-01

    This paper deals with the determination of convergence parameters of the adaptive algorithms used in adaptive L-filter design. The stability of the adaptation process, the convergence rate or adaptation time, and the behaviour of the convergence curve are among the basic properties of adaptive algorithms. L-filters with a variety of adaptive algorithms were used for their determination. Determining the convergence performance of adaptive filters is important mainly for hardware applications, where real-time filtering or adaptation of filter coefficients with a low volume of input data is required.

  17. A Fuzzy Gravitational Search Algorithm to Design Optimal IIR Filters

    Directory of Open Access Journals (Sweden)

    Danilo Pelusi

    2018-03-01

    The goodness of Infinite Impulse Response (IIR) digital filter design depends on pass band ripple, stop band ripple and transition band values. The main problem is defining a suitable error fitness function that depends on these parameters. This fitness function can be optimized by search algorithms such as evolutionary algorithms. This paper proposes an intelligent algorithm for the design of optimal 8th-order IIR filters. The main contribution is the design of Fuzzy Inference Systems able to tune key parameters of a revisited version of the Gravitational Search Algorithm (GSA). In this way, a Fuzzy Gravitational Search Algorithm (FGSA) is designed. The optimization performance of FGSA is compared with that of Differential Evolution (DE) and GSA. The results show that FGSA is the algorithm that gives the best compromise between goodness, robustness and convergence rate for the design of 8th-order IIR filters. Moreover, FGSA assures good stability of the designed filters.

  18. High performance parallel backprojection on FPGA

    Energy Technology Data Exchange (ETDEWEB)

    Pfanner, Florian; Knaup, Michael; Kachelriess, Marc [Erlangen-Nuernberg Univ., Erlangen (Germany). Inst. of Medical Physics (IMP)

    2011-07-01

    Reconstruction of tomographic images, i.e., images from a computed tomography scanner, is a very time-consuming task. Most of the computational power is needed for the backprojection step. A closer inspection shows that the backprojection algorithm is easy to parallelize. FPGAs are able to execute many operations at the same time, so a highly parallel algorithm is a prerequisite for powerful acceleration. To maximize the data flow rate, we realized the backprojection in a pipelined structure with a data throughput of one clock cycle. Due to the hardware limitations of the FPGA, it is not possible to reconstruct the image as a whole, so it is necessary to split up the image and reconstruct the parts separately. Despite that, a reconstruction of 512 projections into a 512² image is calculated within 13 ms on a Virtex 5 FPGA. To save hardware resources we use fixed-point arithmetic with an accuracy of 23 bits for the calculation. A comparison of the resulting image with an image calculated using floating-point arithmetic on a CPU shows that there are no differences between these images. (orig.)

  19. Improved collaborative filtering recommendation algorithm of similarity measure

    Science.gov (United States)

    Zhang, Baofu; Yuan, Baoping

    2017-05-01

    The collaborative filtering recommendation algorithm is one of the most widely used recommendation algorithms in personalized recommender systems. The key is to find the nearest-neighbor set of the active user by using a similarity measure. However, traditional similarity measures mainly focus on the similarity of the users' common rating items, but ignore the relationship between the common rating items and all items the user rates. Moreover, because the rating matrix is very sparse, the traditional collaborative filtering recommendation algorithm is not highly efficient. In order to obtain better accuracy, based on the consideration of common preference between users, the difference of rating scale and the score of common items, this paper presents an improved similarity measure method, and based on this method, a collaborative filtering recommendation algorithm based on similarity improvement is proposed. Experimental results show that the algorithm can effectively improve the quality of recommendation, thus alleviating the impact of data sparseness.

  20. Collaborative filtering recommendation model based on fuzzy clustering algorithm

    Science.gov (United States)

    Yang, Ye; Zhang, Yunhua

    2018-05-01

    As one of the most widely used algorithms in recommender systems, the collaborative filtering algorithm faces two serious problems: the sparsity of data and poor recommendation quality in a big data environment. In traditional clustering analysis, an object is strictly divided into several classes and the boundaries of this division are very clear. However, for most objects in real life, there is no strict definition of the form and attributes of their class. Concerning the problems above, this paper proposes to improve the traditional collaborative filtering model through the hybrid optimization of an implicit semantic algorithm and a fuzzy clustering algorithm, cooperating with the collaborative filtering algorithm. In this paper, the fuzzy clustering algorithm is introduced to fuzzily cluster the item attribute information, which lets an item belong to different item categories with different membership degrees, increases the density of the data, effectively reduces its sparsity, and solves the problem of low accuracy resulting from inaccurate similarity calculation. Finally, this paper carries out an empirical analysis on the MovieLens dataset and compares the method with the traditional user-based collaborative filtering algorithm. The proposed algorithm greatly improves the recommendation accuracy.

  1. A parallel algorithm for filtering gravitational waves from coalescing binaries

    International Nuclear Information System (INIS)

    Sathyaprakash, B.S.; Dhurandhar, S.V.

    1992-10-01

    Coalescing binary stars are perhaps the most promising sources for the observation of gravitational waves with laser interferometric gravity wave detectors. The waveform from these sources can be predicted with sufficient accuracy for matched filtering techniques to be applied. In this paper we present a parallel algorithm for detecting signals from coalescing compact binaries by the method of matched filtering. We also report the details of its implementation on a 256-node connection machine consisting of a network of transputers. The results of our analysis indicate that parallel processing is a promising approach to on-line analysis of data from gravitational wave detectors to filter out coalescing binary signals. The algorithm described is quite general in that the kernel of the algorithm is applicable to any set of matched filters. (author). 15 refs, 4 figs
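
    The kernel of the search, executed independently for each filter in the bank, is the standard FFT-based matched filter; parallelization assigns different templates (or data segments) to different processors. The sketch below assumes white noise and a unit-normalized template, a simplification of real detector noise weighting.

      import numpy as np

      def matched_filter(data, template):
          # Correlate the data stream with a normalized template via the FFT.
          n = len(data)
          tpl = np.zeros(n)
          tpl[:len(template)] = template / np.linalg.norm(template)
          spec = np.fft.rfft(data) * np.conj(np.fft.rfft(tpl))
          corr = np.fft.irfft(spec, n)
          return corr  # peak value ~ detection statistic, peak index ~ arrival time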

  2. Information filtering via weighted heat conduction algorithm

    Science.gov (United States)

    Liu, Jian-Guo; Guo, Qiang; Zhang, Yi-Cheng

    2011-06-01

    In this paper, by taking into account the effects of user and object correlations on the heat conduction (HC) algorithm, a weighted heat conduction (WHC) algorithm is presented. We argue that the edge weight of the user-object bipartite network should be embedded into the HC algorithm to measure object similarity. The numerical results indicate that both accuracy and diversity can be improved greatly compared with the standard HC algorithm, with the optimal values reached simultaneously. On the MovieLens and Netflix datasets, the algorithmic accuracy, measured by the average ranking score, can be improved by 39.7% and 56.1% in the optimal case, respectively, and the diversity reaches 0.9587 and 0.9317 when the length of the recommendation list equals 5. Further statistical analysis indicates that, in the optimal case, the distributions of the edge weight change to the Poisson form, which may be the reason why the HC algorithm's performance can be improved. This work highlights the effect of edge weight on personalized recommendation studies, as it may be an important factor affecting personalized recommendation performance.

  3. Advancements to the planogram frequency–distance rebinning algorithm

    International Nuclear Information System (INIS)

    Champley, Kyle M; Kinahan, Paul E; Raylman, Raymond R

    2010-01-01

    In this paper we consider the task of image reconstruction in positron emission tomography (PET) with the planogram frequency–distance rebinning (PFDR) algorithm. The PFDR algorithm is a rebinning algorithm for PET systems with panel detectors. The algorithm is derived in the planogram coordinate system, which is a native data format for PET systems with panel detectors. A rebinning algorithm averages over the redundant four-dimensional set of PET data to produce a three-dimensional set of data. Images can be reconstructed from this rebinned three-dimensional set of data. This process enables one to reconstruct PET images more quickly than reconstructing directly from the four-dimensional PET data. The PFDR algorithm is an approximate rebinning algorithm. We show that implementing the PFDR algorithm followed by the (ramp) filtered backprojection (FBP) algorithm in linogram coordinates from multiple views reconstructs a filtered version of our image. We develop an explicit formula for this filter, which can be used to achieve exact reconstruction by means of a modified FBP algorithm applied to the stack of rebinned linograms, and can also be used to quantify the errors introduced by the PFDR algorithm. This filter is similar to the filter in the planogram filtered backprojection algorithm derived by Brasse et al. The planogram filtered backprojection and exact reconstruction with the PFDR algorithm require complete projections, which can be completed with a reprojection algorithm. The PFDR algorithm is similar to the rebinning algorithm developed by Kao et al. By expressing the PFDR algorithm in detector coordinates, we provide a comparative analysis between the two algorithms. Numerical experiments using both simulated data and measured data from a positron emission mammography/tomography (PEM/PET) system are performed. Images are reconstructed by PFDR+FBP (PFDR followed by 2D FBP reconstruction), PFDRX (PFDR followed by the modified FBP algorithm for exact

  4. A Digital Image Denoising Algorithm Based on Gaussian Filtering and Bilateral Filtering

    Directory of Open Access Journals (Sweden)

    Piao Weiying

    2018-01-01

    Bilateral filtering has been widely applied in digital image processing, but in high-gradient regions of an image it may generate a staircase effect. Bilateral filtering can be regarded as one particular form of local mode filtering. Based on this analysis, a mixed image de-noising algorithm is proposed that combines Gaussian filtering and bilateral filtering. First, a Gaussian filter is applied to the noisy image to obtain a reference image; then both the reference image and the noisy image are taken as inputs to the range kernel function of the bilateral filter. The reference image provides the image's low-frequency information, and the noisy image provides its high-frequency information. Comparative experiments between the method in this paper and traditional bilateral filtering showed that the mixed de-noising algorithm can effectively overcome the staircase effect; the filtered image is smoother, its textural features are closer to those of the original image, and it achieves a higher PSNR value, while the computational costs of the two algorithms are basically the same.
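
    Read as a joint (cross) bilateral filter, the scheme can be sketched directly: the Gaussian-prefiltered reference drives the range kernel while the noisy image supplies the values being averaged. The implementation below is a slow, illustrative version (periodic boundary handling via np.roll; parameter values are arbitrary), not the authors' code.

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def mixed_denoise(noisy, sigma_pre=1.5, sigma_s=2.0, sigma_r=0.1, radius=3):
          ref = gaussian_filter(noisy, sigma_pre)  # low-frequency reference image
          out = np.zeros_like(noisy)
          wsum = np.zeros_like(noisy)
          for dy in range(-radius, radius + 1):
              for dx in range(-radius, radius + 1):
                  shifted = np.roll(np.roll(noisy, dy, axis=0), dx, axis=1)
                  ref_sh = np.roll(np.roll(ref, dy, axis=0), dx, axis=1)
                  # Spatial kernel times range kernel evaluated on the reference.
                  w = np.exp(-(dx * dx + dy * dy) / (2 * sigma_s ** 2)
                             - (ref_sh - ref) ** 2 / (2 * sigma_r ** 2))
                  out += w * shifted
                  wsum += w
          return out / wsum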

  5. Comparison of the image qualities of filtered back-projection, adaptive statistical iterative reconstruction, and model-based iterative reconstruction for CT venography at 80 kVp

    International Nuclear Information System (INIS)

    Kim, Jin Hyeok; Choo, Ki Seok; Moon, Tae Yong; Lee, Jun Woo; Jeon, Ung Bae; Kim, Tae Un; Hwang, Jae Yeon; Yun, Myeong-Ja; Jeong, Dong Wook; Lim, Soo Jin

    2016-01-01

    To evaluate the subjective and objective qualities of computed tomography (CT) venography images at 80 kVp using model-based iterative reconstruction (MBIR) and to compare these with those of filtered back projection (FBP) and adaptive statistical iterative reconstruction (ASIR) using the same CT data sets. Forty-four patients (mean age: 56.1 ± 18.1) who underwent 80 kVp CT venography (CTV) for the evaluation of deep vein thrombosis (DVT) over a 4-month period were enrolled in this retrospective study. The same raw data were reconstructed using FBP, ASIR, and MBIR. Objective and subjective image analyses were performed at the inferior vena cava (IVC), femoral vein, and popliteal vein. The mean CNR of MBIR was significantly greater than those of FBP and ASIR, and images reconstructed using MBIR had significantly lower objective image noise (p < .001). Subjective image quality and confidence in detecting DVT with MBIR were significantly greater than with FBP and ASIR (p < .005), and MBIR had the lowest score for subjective image noise (p < .001). CTV at 80 kVp with MBIR was superior to FBP and ASIR regarding subjective and objective image qualities. (orig.)

  6. GPU-based Branchless Distance-Driven Projection and Backprojection.

    Science.gov (United States)

    Liu, Rui; Fu, Lin; De Man, Bruno; Yu, Hengyong

    2017-12-01

    Projection and backprojection operations are essential in a variety of image reconstruction and physical correction algorithms in CT. The distance-driven (DD) projection and backprojection are widely used for their highly sequential memory access pattern and low arithmetic cost. However, a typical DD implementation has an inner loop that adjusts the calculation depending on the relative position between voxel and detector cell boundaries. The irregularity of the branch behavior makes it inefficient to implement on massively parallel computing devices such as graphics processing units (GPUs). Such irregular branch behavior can be eliminated by factorizing the DD operation into three branchless steps: integration, linear interpolation, and differentiation, all of which are highly amenable to massive vectorization. In this paper, we implement and evaluate a highly parallel branchless DD algorithm for 3D cone-beam CT. The algorithm utilizes the texture memory and hardware interpolation on GPUs to achieve fast computational speed. The developed branchless DD algorithm achieved a 137-fold speedup for forward projection and a 188-fold speedup for backprojection relative to a single-thread CPU implementation. Compared with a state-of-the-art 32-thread CPU implementation, the proposed branchless DD achieved 8-fold acceleration for forward projection and 10-fold acceleration for backprojection. The GPU-based branchless DD method was evaluated with iterative reconstruction algorithms on both simulated and real datasets. It produced images visually identical to those of the CPU reference algorithm.
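
    The integrate-interpolate-differentiate factorization is easiest to see in one dimension. The toy function below is a schematic sketch (the 1D setting and names are illustrative; the paper's method operates on 3D cone-beam geometry, with GPU texture hardware performing the interpolation step).

      import numpy as np

      def branchless_dd_1d(voxels, vox_edges, det_edges):
          # Step 1: integration -- cumulative integral of the voxel signal.
          cum = np.concatenate(([0.0], np.cumsum(voxels * np.diff(vox_edges))))
          # Step 2: linear interpolation of the integral at detector-cell
          # boundaries (detector cells assumed to lie within the voxel support).
          cum_at_det = np.interp(det_edges, vox_edges, cum)
          # Step 3: differentiation -- per-cell integral, normalized by cell width.
          return np.diff(cum_at_det) / np.diff(det_edges)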

  7. Optimization of phononic filters via genetic algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Hussein, M I [University of Colorado, Department of Aerospace Engineering Sciences, Boulder, Colorado 80309-0429 (United States); El-Beltagy, M A [Cairo University, Faculty of Computers and Information, 5 Dr. Ahmed Zewail Street, 12613 Giza (Egypt)

    2007-12-15

    A phononic crystal is commonly characterized by its dispersive frequency spectrum. With an appropriate spatial distribution of the constituent material phases, spectral stop bands can be generated. Moreover, it is possible to control the number, the width, and the location of these bands within a frequency range of interest. This study aims at exploring the relationship between unit cell configuration and frequency spectrum characteristics. Focusing on 1D layered phononic crystals, and longitudinal wave propagation in the direction normal to the layering, the unit cell features of interest are the number of layers and the material phase and relative thickness of each layer. An evolutionary search for binary- and ternary-phase cell designs exhibiting a series of stop bands at predetermined frequencies is conducted. A specially formulated representation and set of genetic operators that break the symmetries in the problem are developed for this purpose. An array of optimal designs for a range of ratios in Young's modulus and density is obtained, and the corresponding objective values (the degrees to which the resulting bands match the predetermined targets) are examined as a function of these ratios. It is shown that a rather complex filtering objective can be met with a high degree of success. Structures composed of the designed phononic crystals are excellent candidates for use in a wide range of applications including sound and vibration filtering.

  8. Optimization of phononic filters via genetic algorithms

    International Nuclear Information System (INIS)

    Hussein, M I; El-Beltagy, M A

    2007-01-01

    A phononic crystal is commonly characterized by its dispersive frequency spectrum. With an appropriate spatial distribution of the constituent material phases, spectral stop bands can be generated. Moreover, it is possible to control the number, the width, and the location of these bands within a frequency range of interest. This study aims at exploring the relationship between unit cell configuration and frequency spectrum characteristics. Focusing on 1D layered phononic crystals, and longitudinal wave propagation in the direction normal to the layering, the unit cell features of interest are the number of layers and the material phase and relative thickness of each layer. An evolutionary search for binary- and ternary-phase cell designs exhibiting a series of stop bands at predetermined frequencies is conducted. A specially formulated representation and set of genetic operators that break the symmetries in the problem are developed for this purpose. An array of optimal designs for a range of ratios in Young's modulus and density is obtained, and the corresponding objective values (the degrees to which the resulting bands match the predetermined targets) are examined as a function of these ratios. It is shown that a rather complex filtering objective can be met with a high degree of success. Structures composed of the designed phononic crystals are excellent candidates for use in a wide range of applications including sound and vibration filtering.

  9. Power system static state estimation using Kalman filter algorithm

    Directory of Open Access Journals (Sweden)

    Saikia Anupam

    2016-01-01

    State estimation of a power system is an important tool for the operation, analysis and forecasting of an electric power system. In this paper, a Kalman filter algorithm is presented for static estimation of power system state variables. The IEEE 14-bus system is employed to check the accuracy of this method. A Newton-Raphson load flow study is first carried out on the test system, and a set of data from the output of the load flow program is taken as measurement input. Measurement inputs are simulated by adding Gaussian noise of zero mean. The results of the Kalman estimation are compared with the traditional Weighted Least Squares (WLS) method, and it is observed that the Kalman filter algorithm is numerically more efficient than the traditional WLS method. Estimation accuracy is also tested in the presence of parametric error in the system. In addition, the numerical stability of the Kalman filter algorithm is tested by including zero-mean errors in the initial estimates.
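
    With no process dynamics, the Kalman filter for a static state reduces to a recursive refinement of the estimate as measurement batches arrive. The linearized sketch below is generic (a real power system estimator uses nonlinear power flow measurement functions and their Jacobians in place of the constant H matrices).

      import numpy as np

      def kalman_static(x0, P0, measurements, R):
          # measurements: iterable of (z, H) pairs; R: measurement noise covariance.
          x, P = x0.copy(), P0.copy()
          for z, H in measurements:
              S = H @ P @ H.T + R               # innovation covariance
              K = P @ H.T @ np.linalg.inv(S)    # Kalman gain
              x = x + K @ (z - H @ x)           # state update
              P = (np.eye(len(x)) - K @ H) @ P  # covariance update
          return x, P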

  10. Computed tomography of the cervical spine: comparison of image quality between a standard-dose and a low-dose protocol using filtered back-projection and iterative reconstruction

    Energy Technology Data Exchange (ETDEWEB)

    Becce, Fabio [University of Lausanne, Department of Diagnostic and Interventional Radiology, Centre Hospitalier Universitaire Vaudois, Lausanne (Switzerland); Universite Catholique Louvain, Department of Radiology, Cliniques Universitaires Saint-Luc, Brussels (Belgium); Ben Salah, Yosr; Berg, Bruno C. vande; Lecouvet, Frederic E.; Omoumi, Patrick [Universite Catholique Louvain, Department of Radiology, Cliniques Universitaires Saint-Luc, Brussels (Belgium); Verdun, Francis R. [University of Lausanne, Institute of Radiation Physics, Centre Hospitalier Universitaire Vaudois, Lausanne (Switzerland); Meuli, Reto [University of Lausanne, Department of Diagnostic and Interventional Radiology, Centre Hospitalier Universitaire Vaudois, Lausanne (Switzerland)

    2013-07-15

    To compare image quality of a standard-dose (SD) and a low-dose (LD) cervical spine CT protocol using filtered back-projection (FBP) and iterative reconstruction (IR). Forty patients investigated by cervical spine CT were prospectively randomised into two groups: SD (120 kVp, 275 mAs) and LD (120 kVp, 150 mAs), both applying automatic tube current modulation. Data were reconstructed using both FBP and sinogram-affirmed IR. Image noise, signal-to-noise (SNR) and contrast-to-noise (CNR) ratios were measured. Two radiologists independently and blindly assessed the following anatomical structures at C3-C4 and C6-C7 levels, using a four-point scale: intervertebral disc, content of neural foramina and dural sac, ligaments, soft tissues and vertebrae. They subsequently rated overall image quality using a ten-point scale. For both protocols and at each disc level, IR significantly decreased image noise and increased SNR and CNR, compared with FBP. SNR and CNR were statistically equivalent in LD-IR and SD-FBP protocols. Regardless of the dose and disc level, the qualitative scores with IR compared with FBP, and with LD-IR compared with SD-FBP, were significantly higher or not statistically different for intervertebral discs, neural foramina and ligaments, while significantly lower or not statistically different for soft tissues and vertebrae. The overall image quality scores were significantly higher with IR compared with FBP, and with LD-IR compared with SD-FBP. LD-IR cervical spine CT provides better image quality for intervertebral discs, neural foramina and ligaments, and worse image quality for soft tissues and vertebrae, compared with SD-FBP, while reducing radiation dose by approximately 40 %. (orig.)

  11. Computed tomography of the cervical spine: comparison of image quality between a standard-dose and a low-dose protocol using filtered back-projection and iterative reconstruction

    International Nuclear Information System (INIS)

    Becce, Fabio; Ben Salah, Yosr; Berg, Bruno C. vande; Lecouvet, Frederic E.; Omoumi, Patrick; Verdun, Francis R.; Meuli, Reto

    2013-01-01

    To compare image quality of a standard-dose (SD) and a low-dose (LD) cervical spine CT protocol using filtered back-projection (FBP) and iterative reconstruction (IR). Forty patients investigated by cervical spine CT were prospectively randomised into two groups: SD (120 kVp, 275 mAs) and LD (120 kVp, 150 mAs), both applying automatic tube current modulation. Data were reconstructed using both FBP and sinogram-affirmed IR. Image noise, signal-to-noise (SNR) and contrast-to-noise (CNR) ratios were measured. Two radiologists independently and blindly assessed the following anatomical structures at C3-C4 and C6-C7 levels, using a four-point scale: intervertebral disc, content of neural foramina and dural sac, ligaments, soft tissues and vertebrae. They subsequently rated overall image quality using a ten-point scale. For both protocols and at each disc level, IR significantly decreased image noise and increased SNR and CNR, compared with FBP. SNR and CNR were statistically equivalent in LD-IR and SD-FBP protocols. Regardless of the dose and disc level, the qualitative scores with IR compared with FBP, and with LD-IR compared with SD-FBP, were significantly higher or not statistically different for intervertebral discs, neural foramina and ligaments, while significantly lower or not statistically different for soft tissues and vertebrae. The overall image quality scores were significantly higher with IR compared with FBP, and with LD-IR compared with SD-FBP. LD-IR cervical spine CT provides better image quality for intervertebral discs, neural foramina and ligaments, and worse image quality for soft tissues and vertebrae, compared with SD-FBP, while reducing radiation dose by approximately 40 %. (orig.)

  12. An Adaptive Filtering Algorithm Based on Genetic Algorithm-Backpropagation Network

    Directory of Open Access Journals (Sweden)

    Kai Hu

    2013-01-01

    A new image filtering algorithm is proposed. The GA-BPN algorithm uses a genetic algorithm (GA) to decide the weights in a backpropagation neural network (BPN). It has better global optimization characteristics than traditional optimization algorithms. In this paper, we used GA-BPN for image noise filtering research. First, training samples are used to train the GA-BPN as a noise detector. Then, the well-trained GA-BPN is utilized to recognize noise pixels in the target image. Finally, an adaptive weighted average algorithm is used to restore the noise pixels recognized by the GA-BPN. Experimental data show that this algorithm performs better than other filters.

  13. Filtering algorithm for radial displacement measurements of a dented pipe

    International Nuclear Information System (INIS)

    Hojjati, M.H.; Lukasiewicz, S.A.

    2008-01-01

    Experimental measurements are always affected by noise and errors caused by inherent inaccuracies and deficiencies of the experimental techniques and measuring devices used. In some fields, such as strain calculation in a dented pipe, the results are very sensitive to these errors. This paper presents a filtering algorithm to remove noise and errors from experimental measurements of the radial displacements of a dented pipe. The proposed filter eliminates the errors without harming the measured data. The filtered data can then be used to estimate membrane and bending strains. The method is very effective and easy to use, and provides a helpful practical tool for inspection purposes.

  14. Variable Step Size Maximum Correntropy Criteria Based Adaptive Filtering Algorithm

    Directory of Open Access Journals (Sweden)

    S. Radhika

    2016-04-01

    Maximum correntropy criterion (MCC) based adaptive filters are found to be robust against impulsive interference. This paper proposes a novel MCC-based adaptive filter with a variable step size in order to obtain improved performance in terms of both convergence rate and steady-state error, with robustness against impulsive interference. The optimal variable step size is obtained by minimizing the mean square deviation (MSD) error from one iteration to the next. Simulation results in the context of a highly impulsive system identification scenario show that the proposed algorithm has faster convergence and lower steady-state error than conventional MCC-based adaptive filters.
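
    The fixed-step-size core of an MCC adaptive filter shows where the robustness comes from: a Gaussian kernel of the error de-weights impulsive samples. The sketch below uses a constant step size mu for clarity; the paper's contribution, the per-iteration optimal step size derived from the MSD recursion, is not reproduced here.

      import numpy as np

      def mcc_lms(x, d, order=8, mu=0.05, sigma=1.0):
          # x: input signal; d: desired signal; sigma: correntropy kernel width.
          w = np.zeros(order)
          for n in range(order, len(x)):
              u = x[n - order:n][::-1]  # regressor vector (most recent first)
              e = d[n] - w @ u          # a priori error
              # exp(-e^2 / 2 sigma^2) is near zero for impulsive outliers,
              # so such samples barely move the weights.
              w += mu * np.exp(-e * e / (2 * sigma ** 2)) * e * u
          return w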

  15. Filtered-X Affine Projection Algorithms for Active Noise Control Using Volterra Filters

    Directory of Open Access Journals (Sweden)

    Sicuranza Giovanni L

    2004-01-01

    We consider the use of adaptive Volterra filters, implemented in the form of multichannel filter banks, as nonlinear active noise controllers. In particular, we discuss the derivation of filtered-X affine projection algorithms for homogeneous quadratic filters. According to the multichannel approach, it is then easy to pass from these algorithms to those of a generic Volterra filter. It is shown in the paper that the AP technique offers better convergence and tracking capabilities than the classical LMS and NLMS algorithms usually applied in nonlinear active noise controllers, with a limited complexity increase. This paper extends in two ways the content of a previous contribution published in Proc. IEEE-EURASIP Workshop on Nonlinear Signal and Image Processing (NSIP '03), Grado, Italy, June 2003. First of all, a general adaptation algorithm valid for any order of affine projections is presented. Secondly, a more complete set of experiments is reported. In particular, the effects of using multichannel filter banks with a reduced number of channels are investigated and relevant results are shown.

  16. A novel iris localization algorithm using correlation filtering

    Science.gov (United States)

    Pohit, Mausumi; Sharma, Jitu

    2015-06-01

    Fast and efficient segmentation of the iris from eye images is a primary requirement for robust, database-independent iris recognition. In this paper we present a new algorithm for computing the inner and outer boundaries of the iris and locating the pupil centre. The pupil-iris boundary computation is based on a correlation filtering approach, whereas the iris-sclera boundary is determined through one-dimensional intensity mapping. The proposed approach is computationally less expensive than existing algorithms such as the Hough transform.

  17. IIR Filter Modeling Using an Algorithm Inspired on Electromagnetism

    Directory of Open Access Journals (Sweden)

    Cuevas-Jiménez E.

    2013-01-01

    Infinite impulse response (IIR) filtering provides a powerful approach for solving a variety of problems. However, its design represents a very complicated task: since the error surface of IIR filters is generally multimodal, global optimization techniques are required in order to avoid local minima. In this paper, a new method based on the Electromagnetism-Like Optimization algorithm (EMO) is proposed for IIR filter modeling. EMO originates from the electromagnetism theory of physics, treating potential solutions as electrically charged particles spread around the solution space, where the charge of each particle depends on its objective function value. This algorithm employs a collective attraction-repulsion mechanism to move the particles towards optimality. The experimental results confirm the high performance of the proposed method in solving various benchmark identification problems.

  18. An exact algorithm for optimal MAE stack filter design.

    Science.gov (United States)

    Dellamonica, Domingos; Silva, Paulo J S; Humes, Carlos; Hirata, Nina S T; Barrera, Junior

    2007-02-01

    We propose a new algorithm for optimal MAE stack filter design. It is based on three main ingredients. First, we show that the dual of the integer programming formulation of the filter design problem is a minimum cost network flow problem. Next, we present a decomposition principle that can be used to break this dual problem into smaller subproblems. Finally, we propose a specialization of the network Simplex algorithm based on column generation to solve these smaller subproblems. Using our method, we were able to efficiently solve instances of the filter problem with window size up to 25 pixels. To the best of our knowledge, this is the largest dimension for which this problem was ever solved exactly.

  19. Demosaicking algorithm for the Kodak-RGBW color filter array

    Science.gov (United States)

    Rafinazari, M.; Dubois, E.

    2015-01-01

    Digital cameras capture images through different color filter arrays (CFAs) and then reconstruct the full-color image. Each CFA pixel only captures one primary color component; the other primary components are estimated using information from neighboring pixels. During demosaicking, the unknown color components are estimated at each pixel location. Most demosaicking algorithms use the RGB Bayer CFA pattern with red, green and blue filters. The least-squares luma-chroma demultiplexing method is a state-of-the-art demosaicking method for the Bayer CFA. In this paper we develop a new demosaicking algorithm for the Kodak-RGBW CFA. This particular CFA reduces noise and improves the quality of the reconstructed images by adding white pixels. We have applied non-adaptive and adaptive demosaicking methods using the Kodak-RGBW CFA on the standard Kodak image dataset and the results have been compared with previous work.

  20. Image defog algorithm based on open close filter and gradient domain recursive bilateral filter

    Science.gov (United States)

    Liu, Daqian; Liu, Wanjun; Zhao, Qingguo; Fei, Bowen

    2017-11-01

    To solve the problems of blurred details, color distortion and low brightness in images produced by the dark channel prior defog algorithm, an image defog algorithm based on an open-close filter and a gradient domain recursive bilateral filter, referred to as OCRBF, is put forward. The OCRBF algorithm first makes use of a weighted quadtree to obtain a more accurate global atmospheric value, then applies multiple-structuring-element morphological open and close filters to the minimum channel map to obtain a rough scattering map via the dark channel prior, uses a variogram to correct the transmittance map, and applies the gradient domain recursive bilateral filter for smoothing; finally, it recovers the image through the image degradation model and adjusts the contrast to obtain a bright, clear, fog-free image. A large number of experimental results show that the proposed defog method removes fog well and recovers the color and definition of foggy images containing close-range content, perspective, and bright areas, obtaining clearer and more natural fog-free images with more visible details than other image defog algorithms; what is more, the time complexity of the SIDA algorithm is linearly related to the number of image pixels.

  1. Iterative Mixture Component Pruning Algorithm for Gaussian Mixture PHD Filter

    Directory of Open Access Journals (Sweden)

    Xiaoxi Yan

    2014-01-01

    Concerning the increasing number of mixture components in the Gaussian mixture PHD filter, an iterative mixture component pruning algorithm is proposed. The pruning algorithm is based on maximizing the posterior probability density of the mixture weights. The entropy distribution of the mixture weights is adopted as the prior distribution of the mixture component parameters. The iterative update formulations of the mixture weights are derived via Lagrange multipliers and the Lambert W function. Mixture components whose weights become negative during the iterative procedure are pruned by setting the corresponding mixture weights to zero. In addition, multiple mixture components with similar parameters describing the same PHD peak can be merged into one mixture component in the algorithm. Simulation results show that the proposed iterative mixture component pruning algorithm is superior to the typical pruning algorithm based on thresholds.

  2. Digital filter algorithm study and simulation of SSRF feedback system

    International Nuclear Information System (INIS)

    Han Lifeng; Yuan Renxian; Ye Kairong

    2008-01-01

    Least-squares fitting was used to design a FIR filter for the transverse feedback system of the Shanghai Synchrotron Radiation Facility (SSRF). The algorithm helped us to set appropriate gain and phase at specific frequency points. This reduced the power needed for damping the beam oscillations, which was proved by System View signal simulation. With AT (Accelerator Tool) simulation, the gain calculation and settings for the output signals from the FIR filter were deduced. The relationship between the kicker power and the system damping time is also given. (authors)
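
    A least-squares FIR fit of this general kind amounts to solving a linear system: sample the desired complex response on a frequency grid and solve for the taps. The snippet is a generic sketch, not the SSRF implementation; the grid, target response and tap count are made-up inputs.

      import numpy as np

      def ls_fir(n_taps, freqs, desired):
          # freqs: normalized frequencies in [0, 1] (1 = Nyquist);
          # desired: complex target response (gain and phase) at those points.
          k = np.arange(n_taps)
          A = np.exp(-1j * np.pi * np.outer(freqs, k))
          h, *_ = np.linalg.lstsq(A, desired, rcond=None)
          return h.real  # imaginary part ~ 0 for a realizable linear-phase target

      freqs = np.linspace(0.0, 1.0, 64)
      target = np.where(freqs < 0.3, 1.0, 0.0) * np.exp(-1j * np.pi * freqs * 15.5)
      h = ls_fir(32, freqs, target)  # 32-tap low-pass with linear phase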

  3. Automatic Data Filter Customization Using a Genetic Algorithm

    Science.gov (United States)

    Mandrake, Lukas

    2013-01-01

    This work predicts whether a retrieval algorithm will usefully determine CO2 concentration from an input spectrum of GOSAT (Greenhouse Gases Observing Satellite). This was done to eliminate needless runtime on atmospheric soundings that would never yield useful results. A space of 50 dimensions was examined for predictive power on the final CO2 results. Retrieval algorithms are frequently expensive to run, and wasted effort defeats requirements and expends needless resources. This algorithm could be used to help predict and filter unneeded runs in any computationally expensive regime. Traditional methods such as Fisher discriminant analysis and decision trees can attempt to predict whether a sounding will be properly processed. However, this work sought to detect a subsection of the dimensional space that can simply be filtered out to eliminate unwanted runs. LDAs (linear discriminant analyses) and other systems examine the entire dataset and judge a "best fit," giving equal weight to complex and problematic regions as well as simple, clear-cut regions. In this implementation, a genetic space of "left" and "right" thresholds, outside of which all data are rejected, was defined. These left/right pairs are created for each of the 50 input dimensions. A genetic algorithm then runs through countless potential filter settings using a JPL computer cluster, optimizing the tossed-out data's yield (proper vs. improper run removal) and the number of points tossed. This solution is robust to an arbitrary decision boundary within the data and avoids the global optimization problem of whole-dataset fitting using LDA or decision trees. It filters out runs that would not have produced useful CO2 values, saving needless computation. This would be an algorithmic preprocessing improvement to any computationally expensive system.
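
    A toy version of the evolved left/right threshold filter could look like the sketch below; the penalty weight on lost good runs and the mutation-only evolution scheme are assumptions, not the JPL implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(bounds, X, good):
    """Reward rejecting bad runs while penalizing the loss of good ones.

    bounds: (d, 2) per-dimension [left, right] thresholds;
    X: (n, d) soundings; good: True where the retrieval was useful.
    """
    keep = np.all((X >= bounds[:, 0]) & (X <= bounds[:, 1]), axis=1)
    return np.sum(~keep & ~good) - 5.0 * np.sum(~keep & good)

def evolve(X, good, pop=50, gens=200, sigma=0.1):
    d = X.shape[1]
    lo, hi = X.min(axis=0), X.max(axis=0)
    population = [np.column_stack([lo, hi]) + rng.normal(0, sigma, (d, 2))
                  for _ in range(pop)]
    for _ in range(gens):
        scores = [fitness(b, X, good) for b in population]
        elite = [population[i] for i in np.argsort(scores)[-pop // 2:]]
        # Children: jitter the elite thresholds (mutation-only GA).
        children = [e + rng.normal(0, sigma, e.shape) for e in elite]
        population = elite + children
    return max(population, key=lambda b: fitness(b, X, good))
```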

  4. Portable Wideband Microwave Imaging System for Intracranial Hemorrhage Detection Using Improved Back-projection Algorithm with Model of Effective Head Permittivity

    Science.gov (United States)

    Mobashsher, Ahmed Toaha; Mahmoud, A.; Abbosh, A. M.

    2016-02-01

    Intracranial hemorrhage is a medical emergency that requires rapid detection and medication to keep brain damage to a minimum. Here, an effective wideband microwave head imaging system for on-the-spot detection of intracranial hemorrhage is presented. The operation of the system relies on the dielectric contrast between healthy brain tissues and a hemorrhage, which causes strong microwave scattering. The system uses a compact sensing antenna, which has ultra-wideband operation with directional radiation, and a portable, compact microwave transceiver for signal transmission and data acquisition. The collected data are processed to create a clear image of the brain using an improved back-projection algorithm, which is based on a novel effective head permittivity model. The system is verified in realistic simulation and experimental environments using anatomically and electrically realistic human head phantoms. Quantitative and qualitative comparisons between the images from the proposed and existing algorithms demonstrate significant improvements in detection and localization accuracy. The radiation and thermal safety of the system are examined and verified. Initial human tests were conducted on healthy subjects with different head sizes. The reconstructed images were statistically analyzed, and the absence of false positives indicates the efficacy of the proposed system for future preclinical trials.
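
    At the core of such confocal imaging is a delay-and-sum backprojection kernel. A minimal monostatic sketch is given below; the record's contribution, the effective head permittivity model, would enter through the propagation speed c (here assumed constant), and all names are illustrative.

```python
import numpy as np

def delay_and_sum(signals, t, antennas, grid, c):
    """Confocal delay-and-sum backprojection.

    signals: (n_ant, n_t) scattered time-domain signals (monostatic);
    t: (n_t,) sample times; antennas: (n_ant, 3) positions;
    grid: (n_vox, 3) voxel positions; c: propagation speed in the head.
    """
    image = np.zeros(len(grid))
    dt = t[1] - t[0]
    for sig, ant in zip(signals, antennas):
        # Round-trip delay from the antenna to each voxel and back.
        tau = 2.0 * np.linalg.norm(grid - ant, axis=1) / c
        idx = np.clip(np.round((tau - t[0]) / dt).astype(int), 0, len(t) - 1)
        image += sig[idx]            # coherent summation of delayed samples
    return image ** 2                # intensity image; peaks mark scatterers
```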

  5. Still Image Compression Algorithm Based on Directional Filter Banks

    OpenAIRE

    Chunling Yang; Duanwu Cao; Li Ma

    2010-01-01

    Hybrid wavelet and directional filter banks (HWD) is an effective multi-scale geometrical analysis method. Compared to the wavelet transform, it can better capture the directional information of images. But the ringing artifact, which is caused by coefficient quantization in the transform domain, is the biggest drawback of image compression algorithms in the HWD domain. In this paper, by studying the relationship between directional decomposition and the ringing artifact, an improved decomposition ...

  6. Gravitation search algorithm: Application to the optimal IIR filter design

    Directory of Open Access Journals (Sweden)

    Suman Kumar Saha

    2014-01-01

    This paper presents a global heuristic search optimization technique known as the Gravitational Search Algorithm (GSA) for the design of 8th-order Infinite Impulse Response (IIR) low pass (LP), high pass (HP), band pass (BP) and band stop (BS) filters, considering various non-linear characteristics of the filter design problems. The paper also adopts a novel fitness function that greatly improves the stop band attenuation. In GSA, the law of gravity and mass interactions among particles are adopted for handling the non-linear IIR filter design optimization problem: searcher agents are a collection of masses, and the interactions among them are governed by Newtonian gravity and the laws of motion. The performance of the GSA-based IIR filter designs has proven superior to that obtained by the real coded genetic algorithm (RGA) and standard Particle Swarm Optimization (PSO). Extensive simulation results affirm that the proposed GSA approach outperforms its counterparts not only in quality of output, i.e., sharpness at cut-off, smaller pass band ripple and higher stop band attenuation, but also in convergence speed, with assured stability.
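
    The gravitational update at the heart of GSA can be sketched as follows; the decaying gravitational constant and the velocity memory of the full method are simplified away, so this is a structural illustration only.

```python
import numpy as np

def gsa_step(X, fit, G, rng):
    """One simplified Gravitational Search step over candidate coefficients.

    X: (N, D) agent positions (e.g., IIR coefficient vectors);
    fit: (N,) fitness values (higher is better); G: gravitational constant.
    """
    m = (fit - fit.min()) / (fit.max() - fit.min() + 1e-12)
    M = m / m.sum()                    # normalized masses
    acc = np.zeros_like(X)
    for i in range(len(X)):
        for j in range(len(X)):
            if i != j:
                diff = X[j] - X[i]
                dist = np.linalg.norm(diff) + 1e-12
                # Newtonian-style attraction, randomized per pair.
                acc[i] += rng.random() * G * M[j] * diff / dist
    return X + acc                     # agents drift toward heavy (fit) ones
```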

  7. SAR focusing of P-band ice sounding data using back-projection

    DEFF Research Database (Denmark)

    Kusk, Anders; Dall, Jørgen

    2010-01-01

    … accommodated at the expense of computation time. The back-projection algorithm can easily be parallelized, however, and can advantageously be implemented on a graphics processing unit (GPU). Results from using the back-projection algorithm on POLARIS ice sounder data from North Greenland show that the quality of the data is improved by the processing, and the performance of the GPU implementation allows for very fast focusing.

  8. A Novel Evolutionary Algorithm for Designing Robust Analog Filters

    Directory of Open Access Journals (Sweden)

    Shaobo Li

    2018-03-01

    Designing robust circuits that withstand environmental perturbation and device degradation is critical for many applications. Traditional robust circuit design is mainly done by tuning parameters to improve system robustness. However, the topological structure of a system may set a limit on the robustness achievable through parameter tuning. This paper proposes a new evolutionary algorithm for robust design that exploits the open-ended topological search capability of genetic programming (GP) coupled with bond graph modeling. We applied our GP-based robust design (GPRD) algorithm to evolve robust lowpass and highpass analog filters. Compared with a traditional robust design approach based on a state-of-the-art real-parameter genetic algorithm (GA), our GPRD algorithm, with a fitness criterion rewarding robustness with respect to parameter perturbations, can evolve more robust filters than what is achieved through parameter tuning alone. We also find that inappropriate GA tuning may mislead the search process, and that the multiple-simulation and perturbed fitness evaluation methods for evolving robustness have complementary behaviors, with no absolute advantage of one over the other.

  9. Theory of affine projection algorithms for adaptive filtering

    CERN Document Server

    Ozeki, Kazuhiko

    2016-01-01

    This book focuses on theoretical aspects of the affine projection algorithm (APA) for adaptive filtering. The APA is a natural generalization of the classical, normalized least-mean-squares (NLMS) algorithm. The book first explains how the APA evolved from the NLMS algorithm, where an affine projection view is emphasized. By looking at those adaptation algorithms from such a geometrical point of view, we can find many of the important properties of the APA, e.g., the improvement of the convergence rate over the NLMS algorithm especially for correlated input signals. After the birth of the APA in the mid-1980s, similar algorithms were put forward by other researchers independently from different perspectives. This book shows that they are variants of the APA, forming a family of APAs. Then it surveys research on the convergence behavior of the APA, where statistical analyses play important roles. It also reviews developments of techniques to reduce the computational complexity of the APA, which are important f...

  10. A Voxel-Based Filtering Algorithm for Mobile LIDAR Data

    Science.gov (United States)

    Qin, H.; Guan, G.; Yu, Y.; Zhong, L.

    2018-04-01

    This paper presents a stepwise voxel-based filtering algorithm for mobile LiDAR data. In the first step, to improve computational efficiency, mobile LiDAR points, in xy-plane, are first partitioned into a set of two-dimensional (2-D) blocks with a given block size, in each of which all laser points are further organized into an octree partition structure with a set of three-dimensional (3-D) voxels. Then, a voxel-based upward growing processing is performed to roughly separate terrain from non-terrain points with global and local terrain thresholds. In the second step, the extracted terrain points are refined by computing voxel curvatures. This voxel-based filtering algorithm is comprehensively discussed in the analyses of parameter sensitivity and overall performance. An experimental study performed on multiple point cloud samples, collected by different commercial mobile LiDAR systems, showed that the proposed algorithm provides a promising solution to terrain point extraction from mobile point clouds.

  11. Genetic Algorithm Applied to the Eigenvalue Equalization Filtered-x LMS Algorithm (EE-FXLMS)

    Directory of Open Access Journals (Sweden)

    Stephan P. Lovstedt

    2008-01-01

    The FXLMS algorithm, used extensively in active noise control (ANC), exhibits frequency-dependent convergence behavior. This leads to degraded performance for time-varying tonal noise and noise with multiple stationary tones. Previous work by the authors proposed the eigenvalue equalization filtered-x least mean squares (EE-FXLMS) algorithm. For that algorithm, magnitude coefficients of the secondary path transfer function are modified to decrease variation in the eigenvalues of the filtered-x autocorrelation matrix while preserving the phase, giving faster convergence and increasing overall attenuation. This paper revisits the EE-FXLMS algorithm, using a genetic algorithm to find the magnitude coefficients that give the least variation in eigenvalues. This method overcomes some of the problems in implementing the EE-FXLMS algorithm that arise from the finite resolution of sampled systems. Experimental control results using the original secondary path model, and a modified secondary path model, for both the previous implementation of EE-FXLMS and the genetic algorithm implementation, are compared.
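
    The plain FXLMS loop that the EE variant modifies is sketched below, under the common simulation shortcut that the secondary-path estimate s_hat also plays the true secondary path; the step size and filter length are placeholders.

```python
import numpy as np

def fxlms(x, d, s_hat, L=64, mu=1e-3):
    """Basic FXLMS: adapt w so the anti-noise cancels the disturbance d."""
    w = np.zeros(L)
    x_hist = np.zeros(L)
    fx_hist = np.zeros(L)
    y_hist = np.zeros(len(s_hat))
    e = np.zeros(len(x))
    # Filtered-x: the reference passed through the secondary-path estimate.
    fx = np.convolve(x, s_hat)[:len(x)]
    for n in range(len(x)):
        x_hist = np.roll(x_hist, 1); x_hist[0] = x[n]
        fx_hist = np.roll(fx_hist, 1); fx_hist[0] = fx[n]
        y = w @ x_hist                       # anti-noise sent to the speaker
        y_hist = np.roll(y_hist, 1); y_hist[0] = y
        e[n] = d[n] - s_hat @ y_hist         # residual at the error microphone
        w += mu * e[n] * fx_hist             # LMS update with filtered reference
    return w, e
```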

  12. Implicit Kalman filter algorithm for nuclear reactor analysis

    International Nuclear Information System (INIS)

    Hassberger, J.A.; Lee, J.C.

    1986-01-01

    Artificial intelligence (AI) is currently a hot topic in nuclear power plant diagnostics and control. Recently, researchers have considered the use of simulation as knowledge, in which faster-than-real-time best-estimate simulations based on first principles are tightly coupled with AI systems for analyzing power plant transients on-line. On-line simulations can be improved through a Kalman filter, a mathematical technique for obtaining the optimal estimate of a system state given the information contained in the equations of system dynamics and measurements made on the system. Filtering can be used to systematically adjust the parameters of a low-order simulation model to obtain reasonable agreement between the model and actual plant dynamics. The authors present here a general Kalman filtering algorithm that derives its information about system dynamics implicitly and naturally from the discrete time-step series of state estimates available from a simulation program. Previous research has demonstrated that models adjusted on past data can be coupled with an intelligent controller to predict the future time-course of plant transients.
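
    For reference, the predict/update cycle that any such Kalman-filter-based model adjustment builds on is shown below; how the simulation parameters map into the state vector is the paper's contribution and is not reproduced.

```python
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    """One predict/update cycle of the discrete Kalman filter."""
    # Predict: propagate the state estimate and its covariance.
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update: blend the prediction with the measurement z.
    S = H @ P_pred @ H.T + R               # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)    # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```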

  13. Optimal Design of Passive Power Filters Based on Pseudo-parallel Genetic Algorithm

    Science.gov (United States)

    Li, Pei; Li, Hongbo; Gao, Nannan; Niu, Lin; Guo, Liangfeng; Pei, Ying; Zhang, Yanyan; Xu, Minmin; Chen, Kerui

    2017-05-01

    Economic cost and filter efficiency are taken as the targets for optimizing the parameters of passive filters. A method combining a pseudo-parallel genetic algorithm with an adaptive genetic algorithm is adopted: in the early stages the pseudo-parallel genetic algorithm is introduced to increase population diversity, and in the late stages the adaptive genetic algorithm is used to reduce the workload. At the same time, the migration rate of the pseudo-parallel genetic algorithm is improved to change adaptively with population diversity. Simulation results show that the filter designed by the proposed method has a better filtering effect at lower economic cost and can be used in engineering.

  14. Particle filters for object tracking: enhanced algorithm and efficient implementations

    International Nuclear Information System (INIS)

    Abd El-Halym, H.A.

    2010-01-01

    Object tracking and recognition is a hot research topic. In spite of the extensive research efforts expended, the development of a robust and efficient object tracking algorithm remains unsolved due to the inherent difficulty of the tracking problem. Particle filters (PFs) were recently introduced as a powerful, post-Kalman-filter estimation tool that provides a general framework for the estimation of nonlinear/non-Gaussian dynamic systems. Particle filters were advanced for building robust object trackers capable of operation under severe conditions (small image size, noisy background, occlusions, fast object maneuvers, etc.). The heavy computational load of the particle filter remains a major obstacle towards its wide use. In this thesis, an Excitation Particle Filter (EPF) is introduced for object tracking. A new likelihood model is proposed that depends on multiple functions: position likelihood, gray level intensity likelihood, and similarity likelihood. The PF is also modified into a robust estimator to overcome its well-known sample impoverishment problem; this modification is based on re-exciting the particles if their weights fall below a memorized weight value. The proposed enhanced PF is implemented in software and evaluated, and its results are compared with a single-likelihood-function PF tracker, a Particle Swarm Optimization (PSO) tracker, a correlation tracker, as well as an edge tracker. The experimental results demonstrate the superior performance of the proposed tracker in terms of accuracy, robustness, and occlusion handling compared with the other methods. Efficient novel hardware architectures of the Sampling Importance Resampling Filter (SIRF) and the EPF are implemented. Three novel hardware architectures of the SIRF for object tracking are introduced. The first architecture is a two-step sequential PF machine, where particle generation, weight calculation and normalization are carried out in parallel during the first step, followed by a sequential re…

  15. TV-constrained incremental algorithms for low-intensity CT image reconstruction

    DEFF Research Database (Denmark)

    Rose, Sean D.; Andersen, Martin S.; Sidky, Emil Y.

    2015-01-01

    constraint can be guided by an image reconstructed by filtered backprojection (FBP). We apply our algorithm to low-dose synchrotron X-ray CT data from the Advanced Photon Source (APS) at Argonne National Labs (ANL) to demonstrate its potential utility. We find that the algorithm provides a means of edge-preserving...

  16. An Automated Energy Detection Algorithm Based on Morphological Filter Processing with a Modified Watershed Transform

    Science.gov (United States)

    2018-01-01

    ARL-TR-8270, January 2018, US Army Research Laboratory: An Automated Energy Detection Algorithm Based on Morphological Filter Processing with a Modified Watershed Transform, by Kwok F Tom. Reporting period: 1 October 2016–30 September 2017.

  17. Metal artifact reduction in x-ray computed tomography by using analytical DBP-type algorithm

    Science.gov (United States)

    Wang, Zhen; Kudo, Hiroyuki

    2012-03-01

    This paper investigates the common metal artifact problem in X-ray computed tomography (CT). Artifacts in the reconstructed image may render it non-diagnostic: beam-hardening correction is inaccurate for highly attenuating objects, so a satisfactory image cannot be reconstructed from projections with missing or distorted data. The traditional analytical metal artifact reduction (MAR) method first subtracts the metallic-object part of the projection data from the original projection, then completes the subtracted part by interpolation, and finally reconstructs from the interpolated projection with the filtered back-projection (FBP) algorithm. The interpolation error introduced in the second step can impose unrealistic assumptions on the missing data, leading to DC-shift artifacts in the reconstructed images. We propose a differentiated back-projection (DBP) type MAR method that replaces the FBP algorithm with the DBP algorithm in the third step. In the FBP algorithm, the interpolated projection is filtered at each view angle before back-projection, so the interpolation error propagates through the whole projection. The DBP algorithm, by contrast, allows the filtering to be done after back-projection, along a Hilbert filtering direction, so the effect of the interpolation error is reduced and an improvement in the quality of the reconstructed images can be expected. In other words, choosing the DBP algorithm instead of the FBP algorithm means that less of the projection data used in reconstruction is contaminated by interpolation error. A simulation study with a phantom was performed to evaluate the proposed method.

  18. A New Adaptive H-Infinity Filtering Algorithm for the GPS/INS Integrated Navigation

    Science.gov (United States)

    Jiang, Chen; Zhang, Shu-Bi; Zhang, Qiu-Zhao

    2016-01-01

    The Kalman filter is an optimal estimator with numerous applications in technology, especially in systems with Gaussian-distributed noise. Moreover, adaptive Kalman filtering algorithms, based on the Kalman filter, can control the influence of dynamic model errors. In contrast to adaptive Kalman filtering algorithms, the H-infinity filter is able to address the interference of the stochastic model by minimizing the worst-case estimation error. In this paper, a novel adaptive H-infinity filtering algorithm, which integrates the adaptive Kalman filter and the H-infinity filter into a comprehensive filtering algorithm, is presented. In the proposed algorithm, a robust estimation method is employed to control the influence of outliers. To verify the proposed algorithm, experiments with real data from Global Positioning System (GPS) and Inertial Navigation System (INS) integrated navigation were conducted. The experimental results show that the proposed algorithm has multiple advantages compared with the other filtering algorithms. PMID:27999361

  19. A Performance Weighted Collaborative Filtering algorithm for personalized radiology education.

    Science.gov (United States)

    Lin, Hongli; Yang, Xuedong; Wang, Weisheng; Luo, Jiawei

    2014-10-01

    Devising an accurate prediction algorithm that can predict the difficulty level of cases for individuals and then select suitable cases for them is essential to the development of a personalized training system. In this paper, we propose a novel approach, called Performance Weighted Collaborative Filtering (PWCF), to predict the difficulty level of each case for individuals. The main idea of PWCF is to assign an optimal weight to each rating used for predicting the difficulty level of a target case for a trainee, rather than using an equal weight for all ratings as in traditional collaborative filtering methods. The assigned weight is a function of the performance level of the trainee at which the rating was made. The PWCF method and the traditional method are compared using two datasets, and the experimental data are evaluated by means of the MAE metric. Our experimental results show that PWCF outperforms the traditional methods by 8.12% and 17.05%, respectively, over the two datasets, in terms of prediction precision. This suggests that PWCF is a viable method for the development of personalized training systems in radiology education.
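
    Structurally, PWCF replaces the equal-weight neighbor average of traditional collaborative filtering with performance-dependent weights. A schematic sketch follows; the weight function itself is defined by the paper and only assumed here.

```python
def pwcf_predict(target_user, case, ratings, performance, weight_fn):
    """Predict a case's difficulty for a trainee (performance-weighted CF).

    ratings: dict mapping (user, case) -> difficulty rating;
    performance: dict mapping user -> performance level when rating was made;
    weight_fn: maps a rater's performance level to a weight (assumed form).
    """
    num, den = 0.0, 0.0
    for (user, c), r in ratings.items():
        if c == case and user != target_user:
            w = weight_fn(performance[user])   # unequal weights, unlike plain CF
            num += w * r
            den += w
    return num / den if den > 0 else None
```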

  20. New Collaborative Filtering Algorithms Based on SVD++ and Differential Privacy

    Directory of Open Access Journals (Sweden)

    Zhengzheng Xian

    2017-01-01

    Collaborative filtering technology has been widely used in recommender systems, and its implementation is supported by the large amount of real and reliable user data from the big-data era. However, with the increase of users' information-security awareness, these data are reduced or their quality becomes worse. Singular Value Decomposition (SVD) is one of the common matrix factorization methods used in collaborative filtering; it introduces the bias information of users and items and is realized by algebraic feature extraction. The derived model SVD++ achieves better predictive accuracy due to the addition of implicit feedback information. Differential privacy has a strict, provable definition and has become an effective measure against attackers indirectly deducing personal privacy information from background knowledge. In this paper, differential privacy is applied to the SVD++ model through three approaches: gradient perturbation, objective-function perturbation, and output perturbation. Through theoretical derivation and experimental verification, the proposed algorithms are shown to better protect the privacy of the original data while maintaining predictive accuracy. In addition, an effective scheme is given that can measure the privacy protection strength and predictive accuracy, and a reasonable range for the selection of the differential privacy parameter is provided.

  1. A fast Gaussian filtering algorithm for three-dimensional surface roughness measurements

    International Nuclear Information System (INIS)

    Yuan, Y B; Piao, W Y; Xu, J B

    2007-01-01

    The two-dimensional (2-D) Gaussian filter can be separated into two one-dimensional (1-D) Gaussian filters. The 1-D Gaussian filter can be implemented approximately by cascaded Butterworth filters, and the approximation accuracy improves as the number of cascaded filters increases. A recursive algorithm for Gaussian filtering requires a relatively small number of simple mathematical operations such as addition, subtraction, multiplication, or division, so it has considerable computational efficiency and is very useful for three-dimensional (3-D) surface roughness measurements. The zero-phase-filtering technique is used in this algorithm, so there is no phase distortion in the Gaussian-filtered mean surface. High-order approximation Gaussian filters are proposed for practical use to assure high accuracy of Gaussian filtering of 3-D surface roughness measurements.
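
    A minimal sketch of the idea: approximate the zero-phase Gaussian profile filter with cascaded Butterworth stages run forward and backward (filtfilt). The stage count and order here are assumptions; the paper derives coefficients matching the Gaussian transmission characteristic.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def gaussian_like_mean_surface(z, dx, cutoff, order=2, cascades=3):
    """Approximate Gaussian mean-line filtering of a roughness profile.

    z: profile heights; dx: sampling step; cutoff: cutoff wavelength
    (same units as dx). filtfilt runs forward and backward, so the
    phase is zero, as required for an undistorted mean surface.
    """
    fs = 1.0 / dx                           # spatial sampling frequency
    fc = 1.0 / cutoff                       # spatial cutoff frequency
    b, a = butter(order, fc / (fs / 2.0))   # one low-pass Butterworth stage
    mean_line = np.asarray(z, dtype=float)
    for _ in range(cascades):               # cascading sharpens the Gaussian fit
        mean_line = filtfilt(b, a, mean_line)
    return mean_line, z - mean_line         # mean surface and roughness residual
```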

  3. Coarse Alignment Technology on Moving base for SINS Based on the Improved Quaternion Filter Algorithm.

    Science.gov (United States)

    Zhang, Tao; Zhu, Yongyun; Zhou, Feng; Yan, Yaxiong; Tong, Jinwu

    2017-06-17

    Initial alignment of the strapdown inertial navigation system (SINS) is intended to determine the initial attitude matrix in a short time with a certain accuracy. The alignment accuracy of the quaternion filter algorithm is remarkable, but its convergence rate is slow. To solve this problem, this paper proposes an improved quaternion filter algorithm for faster initial alignment, based on the error model of the quaternion filter algorithm. The improved algorithm constructs the K matrix based on the principle of the optimal quaternion algorithm and rebuilds the measurement model to include acceleration and velocity errors, making the convergence rate faster. A Doppler velocity log (DVL) provides the reference velocity for the improved quaternion filter alignment algorithm. To demonstrate the performance of the improved quaternion filter algorithm in the field, a turntable experiment and a vehicle test were carried out. The results show that the convergence rate of the proposed improved quaternion filter is faster than that of the traditional quaternion filter algorithm; the improved algorithm also demonstrates advantages in terms of correctness, effectiveness, and practicability.

  4. On a New Family of Kalman Filter Algorithms for Integrated Navigation

    Science.gov (United States)

    Mahboub, V.; Saadatseresht, M.; Ardalan, A. A.

    2017-09-01

    Here we present a review of a new family of Kalman filter algorithms recently developed for integrated navigation; they are particularly useful for vision-based navigation due to the type of data. We mainly focus on three algorithms, namely the weighted Total Kalman filter (WTKF), the integrated Kalman filter (IKF) and the constrained integrated Kalman filter (CIKF). The common characteristic of these algorithms is that they can account for the neglected random observed quantities which may appear in the dynamic model. Moreover, our approach makes use of condition equations and straightforward variance propagation rules. The WTKF algorithm can deal with problems with arbitrary weight matrices. Both the observation equations and the system equations can be dynamic errors-in-variables (DEIV) models in the IKF algorithm. In some problems a quadratic constraint may exist; such problems can be solved by the CIKF algorithm. Finally, we compare the four algorithms WTKF, IKF, CIKF and EKF in numerical examples.

  5. Analytical fan-beam and cone-beam reconstruction algorithms with uniform attenuation correction for SPECT

    International Nuclear Information System (INIS)

    Tang Qiulin; Zeng, Gengsheng L; Gullberg, Grant T

    2005-01-01

    In this paper, we developed an analytical fan-beam reconstruction algorithm that compensates for uniform attenuation in SPECT. The new fan-beam algorithm is in the form of backprojection first, then filtering, and is mathematically exact. The algorithm is based on three components. The first one is the established generalized central-slice theorem, which relates the 1D Fourier transform of a set of arbitrary data and the 2D Fourier transform of the backprojected image. The second one is the fact that the backprojection of the fan-beam measurements is identical to the backprojection of the parallel measurements of the same object with the same attenuator. The third one is the stable analytical reconstruction algorithm for uniformly attenuated Radon data, developed by Metz and Pan. The fan-beam algorithm is then extended into a cone-beam reconstruction algorithm, where the orbit of the focal point of the cone-beam imaging geometry is a circle. This orbit geometry does not satisfy Tuy's condition and the obtained cone-beam algorithm is an approximation. In the cone-beam algorithm, the cone-beam data are first backprojected into the 3D image volume; then a slice-by-slice filtering is performed. This slice-by-slice filtering procedure is identical to that of the fan-beam algorithm. Both the fan-beam and cone-beam algorithms are efficient, and computer simulations are presented. The new cone-beam algorithm is compared with Bronnikov's cone-beam algorithm, and it is shown to have better performance with noisy projections

  6. Research and Application on Fractional-Order Darwinian PSO Based Adaptive Extended Kalman Filtering Algorithm

    Directory of Open Access Journals (Sweden)

    Qiguang Zhu

    2014-05-01

    To resolve the difficulty of establishing an accurate a priori noise model for the extended Kalman filtering algorithm, a fractional-order Darwinian particle swarm optimization (PSO) algorithm is proposed and introduced into the fuzzy adaptive extended Kalman filtering algorithm. A natural selection method is adopted to improve the standard particle swarm optimization algorithm, which enhances the diversity of the particles and avoids premature convergence. In addition, fractional calculus is used to improve the evolution speed of the particles. The improved PSO algorithm is applied to train the fuzzy adaptive extended Kalman filter and achieve simultaneous localization and mapping. The simulation results show a great improvement in localization and mapping compared with the fuzzy adaptive extended Kalman filter localization and mapping algorithm trained by geese particle swarm optimization.

  7. Multisensor Distributed Track Fusion Algorithm Based on Strong Tracking Filter and Feedback Integration

    Institute of Scientific and Technical Information of China (English)

    YANG Guo-Sheng; WEN Cheng-Lin; TAN Min

    2004-01-01

    A new multisensor distributed track fusion algorithm is put forward based on combining feedback integration with the strong tracking Kalman filter. Firstly, an effective tracking gate is constructed by taking the intersection of the tracking gates formed before and after feedback. Secondly, on the basis of the constructed effective tracking gate, probabilistic data association and the strong tracking Kalman filter are combined to form the new multisensor distributed track fusion algorithm. Finally, simulations of both the original algorithm and the presented algorithm are performed.

  8. Non-uniform cosine modulated filter banks using meta-heuristic algorithms in CSD space

    Directory of Open Access Journals (Sweden)

    Shaeen Kalathil

    2015-11-01

    This paper presents an efficient design of non-uniform cosine modulated filter banks (CMFB) using canonic signed digit (CSD) coefficients. The CMFB has an easy and efficient design approach: non-uniform decomposition can be obtained simply by merging the appropriate filters of a uniform filter bank, and only the prototype filter needs to be designed and optimized. In this paper, the prototype filter is designed using the window method, weighted Chebyshev approximation, and weighted constrained least-squares approximation. The coefficients are quantized into CSD using a look-up table. The finite-precision CSD rounding deteriorates the filter bank performance, so the performance is improved using suitably modified meta-heuristic algorithms. The meta-heuristic algorithms modified and used in this paper are the Artificial Bee Colony algorithm, the Gravitational Search algorithm, the Harmony Search algorithm and the Genetic algorithm; they result in filter banks with less implementation complexity, power consumption and area requirements compared with conventional continuous-coefficient non-uniform CMFB.
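
    The CSD recoding step itself is the classic non-adjacent-form conversion; a small sketch follows (the fractional word length is an arbitrary choice here):

```python
def to_csd(value, frac_bits=10):
    """Encode a real coefficient as canonic signed digits {+1, 0, -1}.

    Scales to an integer with frac_bits fractional bits, then applies
    non-adjacent-form recoding, so no two nonzero digits are adjacent and
    each coefficient costs a minimal number of shift-and-add operations.
    """
    x = int(round(value * (1 << frac_bits)))
    digits = []                      # least-significant digit first
    while x != 0:
        if x % 2:
            d = 2 - (x % 4)          # +1 if x = 1 (mod 4), -1 if x = 3 (mod 4)
            x -= d
        else:
            d = 0
        digits.append(d)
        x //= 2
    # value == sum(d * 2**(i - frac_bits) for i, d in enumerate(digits))
    return digits
```

    For example, to_csd(0.625, 4) yields [0, 1, 0, 1], i.e., 2**-3 + 2**-1: two shifts and one add in hardware.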

  9. Parallel Backprojection: A Case Study in High-Performance Reconfigurable Computing

    Directory of Open Access Journals (Sweden)

    Cordes Ben

    2009-01-01

    High-performance reconfigurable computing (HPRC) is a novel approach to providing large-scale computing power to modern scientific applications. Using both general-purpose processors and FPGAs allows application designers to exploit fine-grained and coarse-grained parallelism, achieving high degrees of speedup. One scientific application that benefits from this technique is backprojection, an image formation algorithm that can be used as part of a synthetic aperture radar (SAR) processing system. We present an implementation of backprojection for SAR on an HPRC system. Using simulated data taken at a variety of ranges, our implementation runs over 200 times faster than a similar software program, with an overall application speedup better than 50x. The backprojection application is easily parallelizable, achieving near-linear speedup when run on multiple nodes of a clustered HPRC system. The results presented can be applied to other systems and other algorithms with similar characteristics.

  11. Design of application specific long period waveguide grating filters using adaptive particle swarm optimization algorithms

    International Nuclear Information System (INIS)

    Semwal, Girish; Rastogi, Vipul

    2014-01-01

    We present design optimization of wavelength filters based on long period waveguide gratings (LPWGs) using the adaptive particle swarm optimization (APSO) technique. We demonstrate optimization of the LPWG parameters for single-band, wide-band and dual-band rejection filters for testing the convergence of APSO algorithms. After convergence tests on the algorithms, the optimization technique has been implemented to design more complicated application specific filters such as erbium doped fiber amplifier (EDFA) amplified spontaneous emission (ASE) flattening, erbium doped waveguide amplifier (EDWA) gain flattening and pre-defined broadband rejection filters. The technique is useful for designing and optimizing the parameters of LPWGs to achieve complicated application specific spectra. (paper)

  12. Efficient Algorithms and Design for Interpolation Filters in Digital Receiver

    Directory of Open Access Journals (Sweden)

    Xiaowei Niu

    2014-05-01

    Based on polynomial functions, this paper introduces a generalized design method for interpolation filters. Polynomial-based interpolation filters can be implemented efficiently by a modified Farrow structure with an arbitrary frequency response; the filters allow many pass-bands and stop-bands, and for each band the desired amplitude and weight can be set arbitrarily. The optimal coefficients of the interpolation filter in the time domain are obtained by minimizing a weighted mean squared error function, converted into a quadratic programming problem. The optimal coefficients in the frequency domain are obtained by minimizing the maximum (minimax) of the weighted mean squared error function. The degree of the polynomials and the length of the interpolation filter can be selected arbitrarily. Numerical examples verify that the proposed design method not only reduces hardware cost effectively but also guarantees excellent performance.
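
    The Farrow structure evaluates a polynomial in the fractional delay mu with fixed coefficient combinations, so only mu changes at run time. The sketch below uses textbook cubic-Lagrange (Catmull-Rom) coefficients; the record optimizes its own coefficient sets.

```python
def farrow_cubic(x, mu):
    """Farrow-structure cubic interpolator.

    x: the four samples x[n-1], x[n], x[n+1], x[n+2];
    mu: fractional delay in [0, 1) between x[n] and x[n+1].
    """
    c0 = x[1]
    c1 = 0.5 * (x[2] - x[0])
    c2 = x[0] - 2.5 * x[1] + 2.0 * x[2] - 0.5 * x[3]
    c3 = 0.5 * (x[3] - x[0]) + 1.5 * (x[1] - x[2])
    # Horner evaluation: ((c3*mu + c2)*mu + c1)*mu + c0
    return ((c3 * mu + c2) * mu + c1) * mu + c0
```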

  13. A fast method to emulate an iterative POCS image reconstruction algorithm.

    Science.gov (United States)

    Zeng, Gengsheng L

    2017-10-01

    Iterative image reconstruction algorithms are commonly used to optimize an objective function, especially when the objective function is nonquadratic. Generally speaking, iterative algorithms are computationally inefficient. This paper presents a fast algorithm that has one backprojection and no forward projection, and derives a new method to solve an optimization problem. The nonquadratic constraint, for example an edge-preserving denoising constraint, is implemented as a nonlinear filter. The algorithm is derived based on the POCS (projections onto convex sets) approach. A windowed FBP (filtered backprojection) algorithm enforces the data fidelity, while an iterative procedure, divided into segments, enforces edge-enhancement denoising, with each segment performing nonlinear filtering. The derived iterative algorithm is computationally efficient. Low-dose CT data are used for algorithm feasibility studies, with the nonlinearity implemented as an edge-enhancing noise-smoothing filter. The patient study results demonstrate its effectiveness in processing low-dose x-ray CT data. This fast algorithm can be used to replace many iterative algorithms.
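
    As a rough structural illustration, one backprojection up front followed by short nonlinear-filtering segments, consider the sketch below. The median filter and the relaxation toward the FBP image are stand-ins for the paper's edge-enhancing noise-smoothing filter and its derived update, not the actual method.

```python
import numpy as np
from scipy.ndimage import median_filter

def fast_pocs_emulation(fbp_image, n_segments=5, steps_per_segment=3):
    """Emulate an iterative POCS-style reconstruction without re-projecting."""
    img = fbp_image.copy()
    for _ in range(n_segments):
        for _ in range(steps_per_segment):
            img = median_filter(img, size=3)   # denoising constraint (stand-in)
        # Relax toward the FBP image so data fidelity is not filtered away.
        img = 0.5 * img + 0.5 * fbp_image
    return img
```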

  14. Impact of an inferior vena cava filter retrieval algorithm on filter retrieval rates in a cancer population.

    Science.gov (United States)

    Litwin, Robert J; Huang, Steven Y; Sabir, Sharjeel H; Hoang, Quoc B; Ahrar, Kamran; Ahrar, Judy; Tam, Alda L; Mahvash, Armeen; Ensor, Joe E; Kroll, Michael; Gupta, Sanjay

    2017-09-01

    Our primary purpose was to assess the impact of an inferior vena cava filter retrieval algorithm in a cancer population. Because cancer patients are at persistently elevated risk for development of venous thromboembolism (VTE), our secondary purpose was to assess the incidence of recurrent VTE in patients who underwent filter retrieval. Patients with malignant disease who had retrievable filters placed at a tertiary care cancer hospital from August 2010 to July 2014 were retrospectively studied. A filter retrieval algorithm was established in August 2012. Patients and referring physicians were contacted in the postintervention period when review of the medical record indicated that filter retrieval was clinically appropriate. Patients were classified into preintervention (August 2010-July 2012) and postintervention (August 2012-July 2014) study cohorts. Retrieval rates and clinical pathologic records were reviewed. Filter retrieval was attempted in 34 (17.4%) of 195 patients in the preintervention cohort and 66 (32.8%) of 201 patients in the postintervention cohort (P filter retrieval in the preintervention and postintervention cohorts was 60 days (range, 20-428 days) and 107 days (range, 9-600 days), respectively (P = .16). In the preintervention cohort, 49 of 195 (25.1%) patients were lost to follow-up compared with 24 of 201 (11.9%) patients in the postintervention cohort (P filter placement to death, when available. The overall survival for patients whose filters were retrieved was longer compared with the overall survival for patients whose filters were not retrieved (P filter retrieval, two patients (2.5%) suffered from recurrent VTE (n = 1 nonfatal pulmonary embolism; n = 1 deep venous thrombosis). Both patients were treated with anticoagulation without filter replacement. Inferior vena cava filter retrieval rates can be significantly increased in patients with malignant disease with a low rate (2.5%) of recurrent VTE after filter retrieval

  15. A Threshold-Free Filtering Algorithm for Airborne LIDAR Point Clouds Based on Expectation-Maximization

    Science.gov (United States)

    Hui, Z.; Cheng, P.; Ziggah, Y. Y.; Nie, Y.

    2018-04-01

    Filtering is a key step for most applications of airborne LiDAR point clouds. Although many filtering algorithms have been put forward in recent years, most suffer from parameter setting or threshold adjusting, which is time-consuming and reduces the degree of automation of the algorithm. To overcome this problem, this paper proposes a threshold-free filtering algorithm based on expectation-maximization. The algorithm is developed on the assumption that the point cloud is a mixture of Gaussian models, so the separation of ground points from non-ground points can be recast as the separation of a Gaussian mixture. Expectation-maximization (EM) is applied to perform the separation: EM computes maximum likelihood estimates of the mixture parameters, and, using the estimated parameters, the likelihood of each point belonging to ground or object can be computed. After several iterations, points are labelled with the component of larger likelihood. Furthermore, intensity information is utilized to refine the filtering results acquired with the EM method. The proposed algorithm was tested using two different datasets used in practice, and the experimental results show that it filters non-ground points effectively. For quantitative evaluation, the dataset provided by the ISPRS was adopted; the proposed algorithm obtains a 4.48% total error, which is much lower than most of the eight classical filtering algorithms reported by the ISPRS.
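
    The EM separation can be illustrated with a two-component one-dimensional Gaussian mixture over raw point heights; the actual feature set and the intensity-based refinement of the record are omitted, so this is a sketch only.

```python
import numpy as np

def em_ground_filter(h, iters=50):
    """Two-component 1-D Gaussian-mixture EM over point heights h (n,)."""
    mu = np.array([h.min(), h.max()])
    var = np.array([h.var(), h.var()]) + 1e-6
    pi = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: responsibility of each component for each point.
        lik = pi / np.sqrt(2 * np.pi * var) * np.exp(-(h[:, None] - mu) ** 2 / (2 * var))
        r = lik / lik.sum(axis=1, keepdims=True)
        # M-step: maximum-likelihood update of the mixture parameters.
        Nk = r.sum(axis=0)
        mu = (r * h[:, None]).sum(axis=0) / Nk
        var = (r * (h[:, None] - mu) ** 2).sum(axis=0) / Nk + 1e-6
        pi = Nk / len(h)
    return r[:, np.argmin(mu)] > 0.5   # True where the lower (ground) component wins
```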

  16. GPS Signal Offset Detection and Noise Strength Estimation in a Parallel Kalman Filter Algorithm

    National Research Council Canada - National Science Library

    Vanek, Barry

    1999-01-01

    .... The variance of the noise process is estimated and provided to the second algorithm, a parallel Kalman filter structure, which then adapts to changes in the real-world measurement noise strength...

  17. NONLINEAR FILTER METHOD OF GPS DYNAMIC POSITIONING BASED ON BANCROFT ALGORITHM

    Institute of Scientific and Technical Information of China (English)

    ZHANG Qin; TAO Ben-zao; ZHAO Chao-ying; WANG Li

    2005-01-01

    Because of the terms neglected during linearization, the extended Kalman filter (EKF) becomes a form of suboptimal gradient descent algorithm. A divergent tendency exists in the GPS solution when the filter equations are ill-posed, and the resulting deviation in the estimation cannot be avoided. Furthermore, the true solution may be lost in pseudorange positioning because the linearized pseudorange equations are partial solutions. To solve these problems in GPS dynamic positioning with the EKF, a closed-form Kalman filter method called the two-stage algorithm is presented for the nonlinear algebraic solution of GPS dynamic positioning, based on the global nonlinear least-squares closed-form solution, the Bancroft numerical algorithm. The method separates the spatial parts from the temporal parts when processing the GPS filter problems and solves the nonlinear GPS dynamic positioning, thus obtaining stable and reliable dynamic positioning solutions.

  18. Implementation and evaluation of an ordered subsets reconstruction algorithm for transmission PET studies using median root prior and inter-update median filtering

    International Nuclear Information System (INIS)

    Bettinardi, V.; Gilardi, M.C.; Fazio, F.; Alenius, S.; Ruotsalainen, U.; Numminen, P.; Teraes, M.

    2003-01-01

    An ordered subsets (OS) reconstruction algorithm based on the median root prior (MRP) and inter-update median filtering was implemented for the reconstruction of low count statistics transmission (TR) scans. The OS-MRP-TR algorithm was evaluated using an experimental phantom, simulating positron emission tomography (PET) whole-body (WB) studies, as well as patient data. Various experimental conditions, in terms of TR scan time (from 1 h to 1 min), covering a wide range of TR count statistics were evaluated. The performance of the algorithm was assessed by comparing the mean value of the attenuation coefficient (MVAC) of known tissue types and the coefficient of variation (CV) for low-count TR images, reconstructed with the OS-MRP-TR algorithm, with reference values obtained from high-count TR images reconstructed with a filtered back-projection (FBP) algorithm. The reconstructed OS-MRP-TR images were then used for attenuation correction of the corresponding emission (EM) data. EM images reconstructed with attenuation correction generated by OS-MRP-TR images, of low count statistics, were compared with the EM images corrected for attenuation using reference (high statistics) TR data. In all the experimental situations considered, the OS-MRP-TR algorithm showed: (1) a tendency towards a stable solution in terms of MVAC; (2) a difference in the MVAC of within 5% for a TR scan of 1 min reconstructed with the OS-MRP-TR and a TR scan of 1 h reconstructed with the FBP algorithm; (3) effectiveness in noise reduction, particularly for low count statistics data [using a specific parameter configuration the TR images reconstructed with OS-MRP-TR(1 min) had a lower CV than the corresponding TR images of a 1-h scan reconstructed with the FBP algorithm]; (4) a difference of within 3% between the mean counts in the EM images attenuation corrected using the OS-MRP-TR images of 1 min and the mean counts in the EM images attenuation corrected using the OS-MRP-TR images of 1 h; (5

  20. Improvement of the fringe analysis algorithm for wavelength scanning interferometry based on filter parameter optimization.

    Science.gov (United States)

    Zhang, Tao; Gao, Feng; Muhamedsalih, Hussam; Lou, Shan; Martin, Haydn; Jiang, Xiangqian

    2018-03-20

    The phase slope method, which estimates height from the fringe pattern frequency, and the algorithm which estimates height from the fringe phase are the fringe analysis algorithms widely used in interferometry. Generally, both extract the phase information by filtering the signal in the frequency domain after a Fourier transform. Among the numerous papers in the literature about these algorithms, it is found that the design of the filter, which plays an important role, has never been discussed in detail. This paper focuses on the filter design in these algorithms for wavelength scanning interferometry (WSI), optimizing the parameters to acquire the best results. The spectral characteristics of the interference signal are analyzed first: the effective signal is found to be narrow-band (near single frequency), and its central frequency is calculated theoretically, which determines the position of the filter pass-band. The width of the filter window is then optimized in simulation to balance the elimination of noise against the ringing of the filter. Experimental validation of the approach is provided, and the results agree very well with the simulation. The experiments show that accuracy can be improved by optimizing the filter design, especially when the signal quality, i.e., the signal-to-noise ratio (SNR), is low. The proposed method also shows the potential to improve immunity to environmental noise by designing an adaptive filter once the signal SNR can be estimated accurately.
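
    The filtering step under discussion amounts to masking a narrow band around the predicted carrier in the Fourier domain before taking the phase. A sketch with a plain rectangular window follows; the record's contribution is precisely the optimization of this window's width.

```python
import numpy as np

def fringe_phase(signal, f0, width, fs):
    """Fourier-domain band-pass phase extraction for a WSI fringe signal.

    Keeps only positive frequencies within width/2 of the carrier f0; the
    inverse FFT then yields a complex analytic signal whose angle is the
    fringe phase.
    """
    n = len(signal)
    spec = np.fft.fft(signal)
    f = np.fft.fftfreq(n, d=1.0 / fs)
    mask = (f > 0) & (np.abs(f - f0) <= width / 2.0)
    analytic = np.fft.ifft(spec * mask)     # complex-valued after masking
    return np.unwrap(np.angle(analytic))
```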

  1. Hierarchical Threshold-Adaptive Point Cloud Filtering Algorithm Based on Moving Surface Fitting

    Directory of Open Access Journals (Sweden)

    ZHU Xiaoxiao

    2018-02-01

    To improve the accuracy, efficiency and adaptability of point cloud filtering, a hierarchical threshold-adaptive point cloud filtering algorithm based on moving surface fitting is proposed. Firstly, noisy points are removed using a statistical histogram method. Secondly, a grid index is established by grid segmentation, and the surface equation is set up from the lowest points among the neighborhood grids; the true height and the fitted height are calculated, and the difference between the elevation and the threshold determines the classification. Finally, to improve the filtering accuracy, hierarchical filtering changes the grid size and automatically sets the neighborhood size and threshold until the filtering result reaches the accuracy requirement. Test data provided by the International Society for Photogrammetry and Remote Sensing (ISPRS) are used to verify the algorithm: the Type I, Type II and total errors are 7.33%, 10.64% and 6.34%, respectively. The algorithm is compared with the eight classical filtering algorithms published by the ISPRS, and the experimental results show that the method is well adapted and yields highly accurate filtering results.

  2. A collaborative filtering recommendation algorithm based on weighted SimRank and social trust

    Science.gov (United States)

    Su, Chang; Zhang, Butao

    2017-05-01

    Collaborative filtering is one of the most widely used recommendation technologies, but the data sparsity and cold-start problems of collaborative filtering algorithms are difficult to solve effectively. To alleviate the data sparsity problem, a weighted, improved SimRank algorithm is first proposed to compute the rating similarity between users in the rating dataset; the improved SimRank can find more nearest neighbors for target users through the transitivity of rating similarity. Then, a trust network is built and the calculation of trust degree is introduced on the trust relationship dataset. Finally, rating similarity and trust are combined into a comprehensive similarity in order to find more appropriate nearest neighbors for the target user. Experimental results show that the proposed algorithm effectively improves the recommendation precision of collaborative filtering.

  3. APPLYING OF COLLABORATIVE FILTERING ALGORITHM FOR PROCESSING OF MEDICAL DATA

    Directory of Open Access Journals (Sweden)

    Карина Владимировна МЕЛЬНИК

    2015-05-01

    The problem of improving the effectiveness of a medical facility in implementing a social project is considered. There are different approaches to solving this problem, some of which require additional funding, which is usually absent. It is therefore proposed to process and apply patients' data from medical records. The selection of a representative sample of patients is carried out using collaborative filtering. A review of collaborative filtering methods shows that there are three main groups: the first group calculates various measures of similarity between objects, the second group comprises data mining techniques, and the third group is a hybrid approach. The Gower coefficient is considered for calculating the similarity of patients' medical records. A model for disease risk assessment based on collaborative filtering techniques is developed.

  4. Ballistic target tracking algorithm based on improved particle filtering

    Science.gov (United States)

    Ning, Xiao-lei; Chen, Zhan-qi; Li, Xiao-yang

    2015-10-01

    Tracking a ballistic re-entry target is a typical nonlinear filtering problem. In order to track a ballistic re-entry target in nonlinear, non-Gaussian complex environments, a novel chaos map particle filter (CMPF) is used to estimate the target state. The CMPF performs better when estimating the state and parameters of nonlinear, non-Gaussian systems. Monte Carlo simulation results show that this method can effectively mitigate the particle degeneracy and particle impoverishment problems by improving the efficiency of particle sampling, so that better particles take part in the estimation. Meanwhile, the CMPF improves the state estimation precision and convergence speed compared with the EKF, the UKF and the ordinary particle filter.
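
    The baseline that the CMPF improves on is the generic sampling-importance-resampling particle filter. A one-dimensional sketch with Gaussian proposal noise in place of the chaos-map sampling:

```python
import numpy as np

def sir_particle_filter(z, f, h, n_p, x0_sampler, q_std, r_std, rng):
    """Generic SIR particle filter for scalar states.

    z: (T,) measurements; f, h: vectorized state-transition and
    measurement functions; x0_sampler: draws the initial particle cloud.
    """
    x = x0_sampler(n_p)
    est = []
    for zk in z:
        x = f(x) + rng.normal(0.0, q_std, n_p)           # propagate particles
        w = np.exp(-0.5 * ((zk - h(x)) / r_std) ** 2)    # Gaussian likelihood
        w /= w.sum()
        est.append(np.sum(w * x))                        # MMSE state estimate
        # Systematic resampling fights weight degeneracy.
        cdf = np.cumsum(w)
        u = (rng.random() + np.arange(n_p)) / n_p
        x = x[np.minimum(np.searchsorted(cdf, u), n_p - 1)]
    return np.array(est)
```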

  5. A nowcasting technique based on application of the particle filter blending algorithm

    Science.gov (United States)

    Chen, Yuanzhao; Lan, Hongping; Chen, Xunlai; Zhang, Wenhai

    2017-10-01

    To improve the accuracy of nowcasting, a new extrapolation technique called particle filter blending was configured in this study and applied to experimental nowcasting. Radar echo extrapolation was performed using the radar mosaic at an altitude of 2.5 km obtained from the radar images of 12 S-band radars in Guangdong Province, China. A bilateral filter was first applied in the quality control of the radar data; an optical flow method based on the Lucas-Kanade algorithm and the Harris corner detection algorithm was used to track radar echoes and retrieve the echo motion vectors; the motion vectors were then blended with the particle filter blending algorithm to estimate the optimal motion vector of the true echo motions; finally, semi-Lagrangian extrapolation was used for radar echo extrapolation based on the obtained motion vector field. A comparative study of the extrapolated forecasts of four precipitation events in 2016 in Guangdong was conducted. The results indicate that the particle filter blending algorithm could realistically reproduce the spatial pattern, echo intensity, and echo location at 30- and 60-min forecast lead times. The forecasts agreed well with observations, and the results were of operational significance. Quantitative evaluation of the forecasts indicates that the particle filter blending algorithm performed better than the cross-correlation method and the optical flow method. Therefore, the particle filter blending method is superior to the traditional forecasting methods and can be used to enhance the ability of nowcasting in operational weather forecasts.

  6. Use of a Radon Stripping Algorithm for Retrospective Assessment of Air Filter Samples

    International Nuclear Information System (INIS)

    Hayes, Robert

    2009-01-01

    An evaluation of a large number of air sample filters was undertaken using a commercial alpha and beta spectroscopy system employing a passive implanted planar silicon (PIPS) detector. Samples were measured only after air flow through the filters had ceased. A commercial radon stripping algorithm was used to discriminate anthropogenic alpha and beta activity on the filters from the radon progeny. When uncontaminated air filters were evaluated, the results showed a time-dependent bias in both the average estimates and the measurement dispersion, with the relative bias being small compared to the dispersion. When environmental air sample filters were measured simultaneously with electroplated alpha and beta sources, the radon stripping algorithm showed a number of substantial, unexpected deviations. Use of the current algorithm is therefore not recommended for assay applications, and the PIPS detector should be used only for gross counting unless the curve-fitting algorithm is appropriately modified. As a screening method, the radon stripping algorithm might be expected to detect elevated alpha and beta activities on air sample filters (not due to radon progeny) around the 200 dpm level.

  7. Optimization of internet content filtering-Combined with KNN and OCAT algorithms

    Science.gov (United States)

    Guo, Tianze; Wu, Lingjing; Liu, Jiaming

    2018-04-01

    Faced with rampant illegal content on the Internet, the traditional ways to filter information, keyword recognition and manual screening, perform increasingly poorly. Based on this, this paper uses the OCAT algorithm nested with the KNN classification algorithm to construct a corpus training library that can dynamically learn and update, improving the filter corpus for the constantly updated illegal content of the network, including text and pictures, so that illegal content and its sources can be better filtered and traced. Future research will focus on simplifying the updating of the recognition and comparison algorithms and on optimizing the corpus learning ability, in order to improve filtering efficiency and save time and resources.

  8. Effect of filters and reconstruction algorithms on I-124 PET in Siemens Inveon PET scanner

    Science.gov (United States)

    Ram Yu, A.; Kim, Jin Su

    2015-10-01

    Purpose: To assess the effects of filtering and reconstruction on Siemens I-124 PET data. Methods: A Siemens Inveon PET scanner was used. The spatial resolution of I-124 was measured out to a transverse offset of 50 mm from the center. FBP, 2D ordered-subset expectation maximization (OSEM2D), the 3D re-projection algorithm (3DRP), and maximum a posteriori (MAP) methods were tested. Non-uniformity (NU), recovery coefficient (RC), and spillover ratio (SOR) parameterized image quality. Mini deluxe phantom data of I-124 were also assessed. Results: Volumetric resolution was 7.3 mm³ at the transverse FOV center when the FBP reconstruction algorithm with a ramp filter was used. MAP yielded minimal NU with β = 1.5. OSEM2D yielded maximal RC. SOR was below 4% for FBP with ramp, Hamming, Hanning, or Shepp-Logan filters. Based on the mini deluxe phantom results, FBP with Hanning or Parzen filters, or 3DRP with a Hanning filter, yielded feasible I-124 PET data. Conclusions: Reconstruction algorithms and filters were compared. FBP with Hanning or Parzen filters, or 3DRP with a Hanning filter, yielded feasible data for quantifying I-124 PET.
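
    The filters compared in this record differ only in the apodization window applied to the ideal ramp |f| before backprojection. A sketch of how such frequency responses are commonly constructed (textbook window formulas; the scanner software may implement them differently):

    ```python
    import numpy as np

    def ramp_filter(n, window='ramp'):
        """Apodized ramp (Ram-Lak) filter response for FBP."""
        freqs = np.fft.fftfreq(n)          # cycles/sample in [-0.5, 0.5)
        h = np.abs(freqs)                  # ideal ramp |f|
        x = freqs / 0.5                    # frequency normalized to Nyquist
        if window == 'hamming':
            h *= 0.54 + 0.46 * np.cos(np.pi * x)
        elif window == 'hanning':
            h *= 0.5 * (1.0 + np.cos(np.pi * x))
        elif window == 'shepp-logan':
            h *= np.sinc(x / 2.0)
        elif window == 'parzen':
            ax = np.abs(x)
            h *= np.where(ax <= 0.5,
                          1 - 6 * ax**2 + 6 * ax**3,
                          2 * (1 - ax)**3)
        return h   # multiply the FFT of each projection row by h, then invert
    ```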

  9. Image Recommendation Algorithm Using Feature-Based Collaborative Filtering

    Science.gov (United States)

    Kim, Deok-Hwan

    As the multimedia contents market continues its rapid expansion, the amount of image content used in mobile phone services, digital libraries, and catalog services is increasing remarkably. In spite of this rapid growth, users experience high levels of frustration when searching for a desired image. Even though new images are profitable to service providers, traditional collaborative filtering methods cannot recommend them. To solve this problem, in this paper we propose a feature-based collaborative filtering (FBCF) method that reflects the user's most recent preference by representing his purchase sequence in the visual feature space. The proposed approach represents the images purchased in the past as feature clusters in the multi-dimensional feature space and then selects neighbors by using an inter-cluster distance function between their feature clusters. Various experiments using real image data demonstrate that the proposed approach provides higher-quality recommendations and better performance than typical collaborative filtering and content-based filtering techniques.

  10. RB Particle Filter Time Synchronization Algorithm Based on the DPM Model.

    Science.gov (United States)

    Guo, Chunsheng; Shen, Jia; Sun, Yao; Ying, Na

    2015-09-03

    Time synchronization is essential for node localization, target tracking, data fusion, and various other Wireless Sensor Network (WSN) applications. To improve the estimation accuracy of continuous clock offset and skew of mobile nodes in WSNs, we propose a novel time synchronization algorithm, the Rao-Blackwellised (RB) particle filter time synchronization algorithm based on the Dirichlet process mixture (DPM) model. In a state-space equation with a linear substructure, state variables are divided into linear and non-linear variables by the RB particle filter algorithm. These two variables can be estimated using Kalman filter and particle filter, respectively, which improves the computational efficiency more so than if only the particle filter was used. In addition, the DPM model is used to describe the distribution of non-deterministic delays and to automatically adjust the number of Gaussian mixture model components based on the observational data. This improves the estimation accuracy of clock offset and skew, which allows achieving the time synchronization. The time synchronization performance of this algorithm is also validated by computer simulations and experimental measurements. The results show that the proposed algorithm has a higher time synchronization precision than traditional time synchronization algorithms.

  11. Tunable output-frequency filter algorithm for imaging through scattering media under LED illumination

    Science.gov (United States)

    Zhou, Meiling; Singh, Alok Kumar; Pedrini, Giancarlo; Osten, Wolfgang; Min, Junwei; Yao, Baoli

    2018-03-01

    We present a tunable output-frequency filter (TOF) algorithm to reconstruct the object from noisy experimental data under low-power partially coherent illumination, such as an LED, when imaging through scattering media. In the iterative algorithm, we employ Gaussian functions with different filter windows at different stages of the iteration process to reduce corruption from experimental noise and to search for a global minimum in the reconstruction. In comparison with the conventional iterative phase retrieval algorithm, we demonstrate that the proposed TOF algorithm achieves consistent and reliable reconstruction in the presence of experimental noise. Moreover, spatial resolution and distinctive features are retained in the reconstruction, since the filter is applied only to the region outside the object. The feasibility of the proposed method is proven by experimental results.

  12. An improved filtering algorithm for big read datasets and its application to single-cell assembly.

    Science.gov (United States)

    Wedemeyer, Axel; Kliemann, Lasse; Srivastav, Anand; Schielke, Christian; Reusch, Thorsten B; Rosenstiel, Philip

    2017-07-03

    For single-cell or metagenomic sequencing projects, it is necessary to sequence with a very high mean coverage in order to make sure that all parts of the sample DNA get covered by the reads produced. This leads to huge datasets with much redundant data. Filtering this data prior to assembly is advisable. Brown et al. (2012) presented the algorithm Diginorm for this purpose, which filters reads based on the abundance of their k-mers. We present Bignorm, a faster and quality-conscious read filtering algorithm. An important new algorithmic feature is the use of phred quality scores together with a detailed analysis of the k-mer counts to decide which reads to keep. We qualify and recommend parameters for our new read filtering algorithm. Guided by these parameters, we remove a median of 97.15% of the reads while keeping the mean phred score of the filtered dataset high. Using the SPAdes assembler, we produce assemblies of high quality from these filtered datasets in a fraction of the time needed for an assembly from the datasets filtered with Diginorm. We conclude that read filtering is a practical and efficient method for reducing read data and for speeding up the assembly process. This applies not only to single-cell assembly, as shown in this paper, but also to other projects with high mean coverage datasets, such as metagenomic sequencing projects. Our Bignorm algorithm allows assemblies of competitive quality in comparison to Diginorm, while being much faster. Bignorm is available for download at https://git.informatik.uni-kiel.de/axw/Bignorm .
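
    A toy version of abundance-based read filtering shows the idea shared by Diginorm and Bignorm: keep a read only while its k-mers are still under-represented, with Bignorm additionally consulting the phred scores. The sketch below is a deliberate simplification (exact in-memory counting, median-based coverage estimate), not the published implementation:

    ```python
    from collections import Counter

    def filter_reads(reads, quals, k=20, cov_cutoff=20, min_phred=15):
        """reads : list of DNA strings; quals : per-read phred score lists."""
        counts, kept = Counter(), []
        for read, qual in zip(reads, quals):
            # Ignore k-mers containing low-confidence bases.
            kmers = [read[i:i + k] for i in range(len(read) - k + 1)
                     if min(qual[i:i + k]) >= min_phred]
            if not kmers:
                continue
            # The median k-mer count approximates the read's coverage so far.
            median = sorted(counts[km] for km in kmers)[len(kmers) // 2]
            if median < cov_cutoff:        # still novel: keep and count it
                kept.append(read)
                counts.update(kmers)
        return kept
    ```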

  13. Nonlinear Filtering with IMM Algorithm for Ultra-Tight GPS/INS Integration

    Directory of Open Access Journals (Sweden)

    Dah-Jing Jwo

    2013-05-01

    This paper conducts a performance evaluation for the ultra-tight integration of a Global Positioning System (GPS) and an inertial navigation system (INS), using nonlinear filtering approaches with an interacting multiple model (IMM) algorithm. An ultra-tight GPS/INS architecture involves the integration of in-phase and quadrature components from the correlator of a GPS receiver with INS data. An unscented Kalman filter (UKF), which employs a set of sigma points chosen by deterministic sampling, avoids the error caused by linearization as in an extended Kalman filter (EKF). Based on the filter structural adaptation for describing various dynamic behaviours, IMM nonlinear filtering provides an alternative for designing the adaptive filter in the ultra-tight GPS/INS integration. The use of IMM enables tuning of an appropriate value for the process noise covariance so as to maintain good estimation accuracy and tracking capability. Two examples are provided to illustrate the effectiveness of the design and demonstrate the effective improvement in navigation estimation accuracy. A performance comparison among various filtering methods for ultra-tight integration of GPS and INS is also presented. The IMM-based nonlinear filtering approach demonstrates the effectiveness of the algorithm for improved positioning performance.

  14. Accelerometer North Finding System Based on the Wavelet Packet De-noising Algorithm and Filtering Circuit

    Directory of Open Access Journals (Sweden)

    LU Yongle

    2014-07-01

    This paper demonstrates a method and system for north finding with a low-cost piezoelectric accelerometer based on the Coriolis acceleration principle. The proposed setup is based on the choice of an accelerometer with a residual noise of 35 ng·Hz^-1/2. The plane of the north finding system is aligned parallel to the local level, which helps to eliminate the effect of plane error. The Coriolis acceleration, caused by the earth's rotation and the accelerometer's instantaneous velocity, is much weaker than the g-sensitivity acceleration. To achieve high accuracy and a shorter time for north finding, the filtering circuit and the wavelet packet de-noising algorithm are used as follows. First, the hardware is designed so that the filtering circuit passes only alternating currents: the DC component, an interfering signal generated by the earth's gravity, is isolated, and the weak AC signal is amplified. Then, a wavelet packet is used to filter the signal that has passed through the filtering circuit. Finally, the north finding results measured with wavelet packet filtering are compared with those measured with a low-pass filter. The de-noised data show that both wavelet packet filtering and wavelet filtering give high accuracy, with wavelet packet filtering having a stronger ability to remove burst noise and a higher adaptability to engineering environments than wavelet filtering. Experimental results prove the effectiveness and practical feasibility of the accelerometer north finding method based on the wavelet packet de-noising algorithm.
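
    A wavelet packet de-noising stage of the kind described can be sketched with the PyWavelets library. The universal soft threshold below is a common default and our assumption, not necessarily the authors' choice:

    ```python
    import numpy as np
    import pywt

    def wavelet_packet_denoise(signal, wavelet='db4', maxlevel=4):
        """Soft-threshold the terminal nodes of a wavelet packet tree."""
        wp = pywt.WaveletPacket(signal, wavelet=wavelet, maxlevel=maxlevel)
        # Noise level estimated from the highest-frequency detail node.
        sigma = np.median(np.abs(wp['d' * maxlevel].data)) / 0.6745
        thr = sigma * np.sqrt(2.0 * np.log(len(signal)))
        for node in wp.get_level(maxlevel, order='natural'):
            node.data = pywt.threshold(node.data, thr, mode='soft')
        return wp.reconstruct(update=False)
    ```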

  15. High performance cone-beam spiral backprojection with voxel-specific weighting

    International Nuclear Information System (INIS)

    Steckmann, Sven; Knaup, Michael; Kachelriess, Marc

    2009-01-01

    Cone-beam spiral backprojection is computationally highly demanding. At first sight, the backprojection requirements are similar to those of cone-beam backprojection from circular scans such as is performed in the widely used Feldkamp algorithm. However, there is an additional complication: the illumination of each voxel, i.e. the range of angles over which the voxel is seen by the x-ray cone, is a complex function of the voxel position. In general, one needs to multiply by a voxel-specific weight w(x, y, z, α) prior to adding a projection from angle α to a voxel at position x, y, z. Often, the weight function has no analytically closed form and must be determined numerically. Storage of the weights is prohibitive, since the amount of memory required equals the number of voxels per spiral rotation times the number of projections from which a voxel receives contributions, and is therefore of the order of up to 10^12 floating point values for typical spiral scans. We propose a new algorithm that combines the spiral symmetry with the ability of today's 64 bit operating systems to store large amounts of precomputed weights, even above the 4 GB limit. Our trick is to backproject into slices that are rotated in the same manner as the spiral trajectory rotates. Using the spiral symmetry in this way allows one to exploit data-level parallelism and thereby to achieve a very high level of vectorization. An additional postprocessing step rotates these slices back to normal images. Our new backprojection algorithm achieves up to 17 giga voxel updates per second on our systems, which are equipped with four standard Intel X7460 hexa-core CPUs (Intel Xeon 7300 platform, 2.66 GHz, Intel Corporation). This equals the reconstruction of 344 images per second, assuming that each slice consists of 512 x 512 pixels and receives contributions from 512 projections. Thereby, it is an order of magnitude faster than a highly optimized code that does not make use of the spiral symmetry.

  16. High performance cone-beam spiral backprojection with voxel-specific weighting

    Science.gov (United States)

    Steckmann, Sven; Knaup, Michael; Kachelrieß, Marc

    2009-06-01

    Cone-beam spiral backprojection is computationally highly demanding. At first sight, the backprojection requirements are similar to those of cone-beam backprojection from circular scans such as is performed in the widely used Feldkamp algorithm. However, there is an additional complication: the illumination of each voxel, i.e. the range of angles over which the voxel is seen by the x-ray cone, is a complex function of the voxel position. In general, one needs to multiply by a voxel-specific weight w(x, y, z, α) prior to adding a projection from angle α to a voxel at position x, y, z. Often, the weight function has no analytically closed form and must be determined numerically. Storage of the weights is prohibitive, since the amount of memory required equals the number of voxels per spiral rotation times the number of projections from which a voxel receives contributions, and is therefore of the order of up to 10^12 floating point values for typical spiral scans. We propose a new algorithm that combines the spiral symmetry with the ability of today's 64 bit operating systems to store large amounts of precomputed weights, even above the 4 GB limit. Our trick is to backproject into slices that are rotated in the same manner as the spiral trajectory rotates. Using the spiral symmetry in this way allows one to exploit data-level parallelism and thereby to achieve a very high level of vectorization. An additional postprocessing step rotates these slices back to normal images. Our new backprojection algorithm achieves up to 17 giga voxel updates per second on our systems, which are equipped with four standard Intel X7460 hexa-core CPUs (Intel Xeon 7300 platform, 2.66 GHz, Intel Corporation). This equals the reconstruction of 344 images per second, assuming that each slice consists of 512 × 512 pixels and receives contributions from 512 projections. Thereby, it is an order of magnitude faster than a highly optimized code that does not make use of the spiral symmetry.

  17. A hand tracking algorithm with particle filter and improved GVF snake model

    Science.gov (United States)

    Sun, Yi-qi; Wu, Ai-guo; Dong, Na; Shao, Yi-zhe

    2017-07-01

    To solve the problem that accurate hand information cannot be obtained by the particle filter alone, a hand tracking algorithm based on a particle filter combined with a skin-color adaptive gradient vector flow (GVF) snake model is proposed. Adaptive GVF and a skin-color adaptive external guidance force are introduced into the traditional GVF snake model, guiding the curve to converge quickly to the deep concave regions of the hand contour and capturing the complex hand contour accurately. The algorithm performs a real-time correction of the particle filter parameters, avoiding the particle drift phenomenon. Experimental results show that the proposed algorithm reduces the root mean square error of hand tracking by 53% and improves tracking accuracy in the case of complex and moving backgrounds, even with a large range of occlusion.

  18. Truncation correction for oblique filtering lines

    International Nuclear Information System (INIS)

    Hoppe, Stefan; Hornegger, Joachim; Lauritsch, Guenter; Dennerlein, Frank; Noo, Frederic

    2008-01-01

    State-of-the-art filtered backprojection (FBP) algorithms often define the filtering operation to be performed along oblique filtering lines in the detector. A limited scan field of view leads to the truncation of those filtering lines, which causes artifacts in the final reconstructed volume. In contrast to the case where filtering is performed solely along the detector rows, no methods are available for the case of oblique filtering lines. In this work, the authors present two novel truncation correction methods which effectively handle data truncation in this case. Method 1 (basic approach) handles data truncation in two successive preprocessing steps by applying a hybrid data extrapolation method, which is a combination of a water cylinder extrapolation and a Gaussian extrapolation. It is independent of any specific reconstruction algorithm. Method 2 (kink approach) uses similar concepts for data extrapolation as the basic approach but needs to be integrated into the reconstruction algorithm. Experiments are presented from simulated data of the FORBILD head phantom, acquired along a partial-circle-plus-arc trajectory. The theoretically exact M-line algorithm is used for reconstruction. Although the discussion is focused on theoretically exact algorithms, the proposed truncation correction methods can be applied to any FBP algorithm that exposes oblique filtering lines.

  19. Research on Kalman Filtering Algorithm for Deformation Information Series of Similar Single-Difference Model

    Institute of Scientific and Technical Information of China (English)

    Lü Wei-cai; XU Shao-quan

    2004-01-01

    When using similar single-difference methodology (SSDM) to solve for the deformation values of the monitoring points, the deformation information series is sometimes unstable. In order to overcome this shortcoming, a Kalman filtering algorithm for this series is established, and its correctness and validity are verified with test data obtained on a movable platform in the plane. The results show that Kalman filtering can improve the correctness, reliability and stability of the deformation information series.
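
    For a slowly varying deformation series, the Kalman filter reduces to a few scalar recursions under a random-walk state model. A minimal sketch, with illustrative (assumed) noise variances:

    ```python
    import numpy as np

    def kalman_1d(z, q=1e-4, r=1e-2, x0=0.0, p0=1.0):
        """Scalar random-walk Kalman filter for a deformation series z."""
        x, p, out = x0, p0, []
        for zk in z:
            p = p + q               # predict: random-walk state model
            k = p / (p + r)         # Kalman gain
            x = x + k * (zk - x)    # update with the measurement residual
            p = (1.0 - k) * p
            out.append(x)
        return np.array(out)
    ```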

  20. GPU-accelerated back-projection revisited. Squeezing performance by careful tuning

    Energy Technology Data Exchange (ETDEWEB)

    Papenhausen, Eric; Zheng, Ziyi; Mueller, Klaus [Stony Brook Univ., NY (United States). Computer Science Dept.

    2011-07-01

    In recent years, GPUs have become an increasingly popular tool in computed tomography (CT) reconstruction. In this paper, we discuss performance optimization techniques for a GPU-based filtered-backprojection reconstruction implementation. We explore the different optimization techniques we used and explain how those techniques affected performance. Our results show a nearly 50% increase in performance when compared to the current top ranked GPU implementation. (orig.)

  1. An improved particle filtering algorithm for aircraft engine gas-path fault diagnosis

    Directory of Open Access Journals (Sweden)

    Qihang Wang

    2016-07-01

    In this article, an improved particle filter with an electromagnetism-like mechanism algorithm is proposed for aircraft engine gas-path component abrupt fault diagnosis. In order to avoid the particle degeneracy and sample impoverishment of the normal particle filter, the electromagnetism-like mechanism optimization algorithm is introduced into the resampling procedure; it adjusts the positions of the particles by simulating the attraction-repulsion mechanism between charged particles in electromagnetism theory. The improved particle filter can solve the particle degradation problem and ensure the diversity of the particle set. Meanwhile, it enhances the ability to track abrupt faults because it takes the latest measurement information into account. A comparison of the proposed method with three different filter algorithms is carried out on a univariate nonstationary growth model. Simulations on a turbofan engine model indicate that, compared to the normal particle filter, the improved particle filter completes the fault diagnosis within fewer sampling periods and reduces the root mean square error of parameter estimation.

  2. New hybrid genetic particle swarm optimization algorithm to design multi-zone binary filter.

    Science.gov (United States)

    Lin, Jie; Zhao, Hongyang; Ma, Yuan; Tan, Jiubin; Jin, Peng

    2016-05-16

    Binary phase filters have been used to achieve an optical needle with small lateral size. Designing a binary phase filter is still a scientific challenge in such fields. In this paper, a hybrid genetic particle swarm optimization (HGPSO) algorithm is proposed to design the binary phase filter. The HGPSO algorithm includes self-adaptive parameters and the recombination and mutation operations that originated from the genetic algorithm. On the benchmark test, the HGPSO algorithm achieves global optimization and fast convergence. In an easy-to-perform optimization procedure, the number of iterations of HGPSO is decreased to about a quarter of that of the original particle swarm optimization process. A multi-zone binary phase filter is designed using the HGPSO. A long depth of focus and high resolution are achieved simultaneously, with a depth of focus and focal spot transverse size of 6.05λ and 0.41λ, respectively. Therefore, the proposed HGPSO can be applied to the optimization of filters with multiple parameters.

  3. A nonlinear filtering algorithm for denoising HR(S)TEM micrographs

    International Nuclear Information System (INIS)

    Du, Hongchu

    2015-01-01

    Noise reduction of micrographs is often an essential task in high resolution (scanning) transmission electron microscopy (HR(S)TEM), either for higher visual quality or for more accurate quantification. Since HR(S)TEM studies are often aimed at resolving periodic atomistic columns and their non-periodic deviations at defects, it is important to develop a noise reduction algorithm that can simultaneously handle both periodic and non-periodic features properly. In this work, a nonlinear filtering algorithm is developed based on the widely used techniques of low-pass and Wiener filtering, which can efficiently reduce noise without noticeable artifacts, even in HR(S)TEM micrographs with varying background contrast and defects. The developed nonlinear filtering algorithm is particularly suitable for quantitative electron microscopy, and is also of great interest for beam-sensitive samples, in situ analyses, and atomic resolution EFTEM. - Highlights: • A nonlinear filtering algorithm for denoising HR(S)TEM images is developed. • It can simultaneously handle both periodic and non-periodic features properly. • It is particularly suitable for quantitative electron microscopy. • It is of great interest for beam sensitive samples, in situ analyses, and atomic resolution EFTEM
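
    A two-stage combination of a low-pass filter and a locally adaptive Wiener filter, in the spirit of this record, can be approximated in a few lines with SciPy; this generic sketch is not the authors' algorithm:

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter
    from scipy.signal import wiener

    def denoise_micrograph(img, lp_sigma=1.0, wiener_size=5):
        """Gaussian low-pass stage followed by an adaptive Wiener stage."""
        smoothed = gaussian_filter(img.astype(float), sigma=lp_sigma)
        # scipy's wiener estimates local mean/variance in a sliding window.
        return wiener(smoothed, mysize=wiener_size)
    ```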

  4. Zero-crossing detection algorithm for arrays of optical spatial filtering velocimetry sensors

    DEFF Research Database (Denmark)

    Jakobsen, Michael Linde; Pedersen, Finn; Hanson, Steen Grüner

    2008-01-01

    This paper presents a zero-crossing detection algorithm for arrays of compact low-cost optical sensors based on spatial filtering, for measuring fluctuations in the angular velocity of rotating solid structures. The algorithm is applicable to signals with moderate signal-to-noise ratios, and delivers ... repeating the same measurement error for each revolution of the target, and to gain high-performance measurement of angular velocity. The traditional zero-crossing detection is extended by 1) inserting an appropriate band-pass filter before the zero-crossing detection, and 2) measuring the time periods between zero-crossings...
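
    The core of the extended scheme, a band-pass prefilter followed by interpolated zero-crossing timing, might look as follows (the filter order and band edges are placeholders):

    ```python
    import numpy as np
    from scipy.signal import butter, filtfilt

    def zero_crossing_times(sig, fs, f_lo, f_hi):
        """Return rising zero-crossing times (s) of a band-passed signal."""
        b, a = butter(4, [f_lo, f_hi], btype='bandpass', fs=fs)
        y = filtfilt(b, a, sig)                 # zero-phase band-pass
        idx = np.where((y[:-1] < 0) & (y[1:] >= 0))[0]
        # Linear interpolation between the samples bracketing zero.
        frac = -y[idx] / (y[idx + 1] - y[idx])
        return (idx + frac) / fs
    ```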

  5. Performance Evaluation of Glottal Inverse Filtering Algorithms Using a Physiologically Based Articulatory Speech Synthesizer

    Science.gov (United States)

    2017-01-05

    Yu-Ren Chien, Daryush D. Mehta, Jón Guðnason, Matías Zañartu, and Thomas F. Quatieri. Abstract: Glottal inverse filtering aims to ...

  6. Novel algorithm by low complexity filter on retinal vessel segmentation

    Science.gov (United States)

    Rostampour, Samad

    2011-10-01

    This article presents a new method to detect blood vessels in the retina from digital images. Retinal vessel segmentation is important for detecting side effects of diabetic disease, because diabetes can form new capillaries which are very brittle. The research has been done in two phases: preprocessing and processing. The preprocessing phase applies a new filter that produces a suitable output: it shows vessels in dark color on a white background and makes a clear distinction between vessels and background. The complexity is very low and extraneous image content is eliminated. The second phase, processing, uses a method called Bayesian, a supervised classification method. This method uses the mean and variance of pixel intensities to calculate class probabilities. Finally, the pixels of the image are divided into two classes: vessels and background. The images used are from the DRIVE database. After performing this operation, the calculation gives an average efficiency of 95 percent. The method was also applied to an external sample outside the DRIVE database which exhibits retinopathy, and a perfect result was obtained.
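
    The Bayesian step described here amounts to comparing two Gaussian log-likelihoods built from each class's mean and variance. A compact sketch under that reading (all names are hypothetical):

    ```python
    import numpy as np

    def bayes_vessel_mask(img, vessel_px, background_px):
        """Label pixels as vessel where the vessel likelihood wins.

        vessel_px, background_px : 1-D training intensity samples.
        """
        def log_gauss(x, mu, var):
            return -0.5 * (np.log(2 * np.pi * var) + (x - mu) ** 2 / var)

        mu_v, var_v = vessel_px.mean(), vessel_px.var() + 1e-9
        mu_b, var_b = background_px.mean(), background_px.var() + 1e-9
        return log_gauss(img, mu_v, var_v) > log_gauss(img, mu_b, var_b)
    ```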

  7. An Approximate Cone Beam Reconstruction Algorithm for Gantry-Tilted CT Using Tangential Filtering

    Directory of Open Access Journals (Sweden)

    Ming Yan

    2006-01-01

    The FDK algorithm is a well-known 3D (three-dimensional) approximate algorithm for CT (computed tomography) image reconstruction, and is also known to suffer from considerable artifacts when the scanning cone angle is large. Recently, it has been improved by performing the ramp filtering along the tangential direction of the X-ray source helix to deal with the large cone angle problem. In this paper, we present an FDK-type approximate reconstruction algorithm for gantry-tilted CT imaging. The proposed method improves the image reconstruction by filtering the projection data along a proper direction, which is determined by the CT parameters and the gantry tilt angle. As a result, the proposed algorithm for gantry-tilted CT reconstruction provides more scanning flexibility in clinical CT scanning and is computationally efficient. The performance of the proposed algorithm is evaluated with the Turbell clock phantom and a thorax phantom, and compared with the FDK algorithm and a popular 2D (two-dimensional) approximate algorithm. The results show that the proposed algorithm can achieve better image quality for gantry-tilted CT image reconstruction.

  8. A comparison of earthquake backprojection imaging methods for dense local arrays

    Science.gov (United States)

    Beskardes, G. D.; Hole, J. A.; Wang, K.; Michaelides, M.; Wu, Q.; Chapman, M. C.; Davenport, K. K.; Brown, L. D.; Quiros, D. A.

    2018-03-01

    ... therefore recommend backprojecting kurtosis waveforms, followed by a second pass on the detected events using noise-filtered raw waveforms to achieve the best of all criteria.

  9. Regularized iterative integration combined with non-linear diffusion filtering for phase-contrast x-ray computed tomography.

    Science.gov (United States)

    Burger, Karin; Koehler, Thomas; Chabior, Michael; Allner, Sebastian; Marschner, Mathias; Fehringer, Andreas; Willner, Marian; Pfeiffer, Franz; Noël, Peter

    2014-12-29

    Phase-contrast x-ray computed tomography has a high potential to become clinically implemented because of its complementarity to conventional absorption contrast. In this study, we investigate noise-reducing but resolution-preserving analytical reconstruction methods to improve differential phase-contrast imaging. We apply the non-linear Perona-Malik filter on phase-contrast data before or after filtered backprojection reconstruction. Secondly, the Hilbert kernel is replaced by regularized iterative integration followed by ramp-filtered backprojection, as used for absorption-contrast imaging. Combining the Perona-Malik filter with this integration algorithm successfully reveals relevant sample features, quantitatively confirmed by significantly increased structural similarity indices and contrast-to-noise ratios. With this concept, phase-contrast imaging can be performed at considerably lower dose.
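
    The Perona-Malik stage is standard anisotropic diffusion; a reference-style sketch with conventional defaults (the paper's parameter choices are not given in this record):

    ```python
    import numpy as np

    def perona_malik(img, n_iter=20, kappa=30.0, lam=0.2):
        """Edge-preserving anisotropic diffusion (Perona & Malik, 1990)."""
        u = img.astype(float).copy()
        g = lambda d: np.exp(-(d / kappa) ** 2)   # diffusivity, small at edges
        for _ in range(n_iter):
            # Finite differences toward the four neighbours (wrapped edges).
            dn = np.roll(u, -1, 0) - u
            ds = np.roll(u, 1, 0) - u
            de = np.roll(u, -1, 1) - u
            dw = np.roll(u, 1, 1) - u
            u += lam * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
        return u
    ```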

  10. Array diagnostics, spatial resolution, and filtering of undesired radiation with the 3D reconstruction algorithm

    DEFF Research Database (Denmark)

    Cappellin, C.; Pivnenko, Sergey; Jørgensen, E.

    2013-01-01

    This paper focuses on three important features of the 3D reconstruction algorithm of DIATOOL: the identification of improper functioning and failure of array elements, the obtainable spatial resolution of the reconstructed fields and currents, and the filtering of undesired radiation and scattering.

  11. Novel Kalman filter algorithm for statistical monitoring of extensive landscapes with synoptic sensor data

    Science.gov (United States)

    Raymond L. Czaplewski

    2015-01-01

    Wall-to-wall remotely sensed data are increasingly available to monitor landscape dynamics over large geographic areas. However, statistical monitoring programs that use post-stratification cannot fully utilize those sensor data. The Kalman filter (KF) is an alternative statistical estimator. I develop a new KF algorithm that is numerically robust with large numbers of...

  12. Optimization Algorithm for Kalman Filter Exploiting the Numerical Characteristics of SINS/GPS Integrated Navigation Systems.

    Science.gov (United States)

    Hu, Shaoxing; Xu, Shike; Wang, Duhu; Zhang, Aiwu

    2015-11-11

    Aiming at addressing the problem of high computational cost of the traditional Kalman filter in SINS/GPS, a practical optimization algorithm with offline-derivation and parallel processing methods based on the numerical characteristics of the system is presented in this paper. The algorithm exploits the sparseness and/or symmetry of matrices to simplify the computational procedure. Thus plenty of invalid operations can be avoided by offline derivation using a block matrix technique. For enhanced efficiency, a new parallel computational mechanism is established by subdividing and restructuring calculation processes after analyzing the extracted "useful" data. As a result, the algorithm saves about 90% of the CPU processing time and 66% of the memory usage needed in a classical Kalman filter. Meanwhile, the method as a numerical approach needs no precise-loss transformation/approximation of system modules and the accuracy suffers little in comparison with the filter before computational optimization. Furthermore, since no complicated matrix theories are needed, the algorithm can be easily transplanted into other modified filters as a secondary optimization method to achieve further efficiency.

  13. The Improved Locating Algorithm of Particle Filter Based on ROS Robot

    Science.gov (United States)

    Fang, Xun; Fu, Xiaoyang; Sun, Ming

    2018-03-01

    This paper analyzes the basic theory and primary algorithms of real-time locating systems and SLAM technology based on ROS robots. It proposes an improved particle filter locating algorithm that effectively reduces the matching time between the laser radar and the map; additional ultra-wideband technology directly accelerates the global efficiency of the FastSLAM algorithm, which no longer needs to search the global map. Meanwhile, re-sampling is reduced by about 5/6, which directly eliminates the corresponding matching effort in the robot's algorithm.

  14. Mass Conservation and Positivity Preservation with Ensemble-type Kalman Filter Algorithms

    Science.gov (United States)

    Janjic, Tijana; McLaughlin, Dennis B.; Cohn, Stephen E.; Verlaan, Martin

    2013-01-01

    Maintaining conservative physical laws numerically has long been recognized as being important in the development of numerical weather prediction (NWP) models. In the broader context of data assimilation, concerted efforts to maintain conservation laws numerically and to understand the significance of doing so have begun only recently. In order to enforce physically based conservation laws of total mass and positivity in the ensemble Kalman filter, we incorporate constraints to ensure that the filter ensemble members and the ensemble mean conserve mass and remain nonnegative through measurement updates. We show that the analysis steps of the ensemble transform Kalman filter (ETKF) algorithm and the ensemble Kalman filter (EnKF) algorithm can conserve the mass integral, but do not preserve positivity. Further, if localization is applied or if negative values are simply set to zero, then the total mass is not conserved either. In order to ensure mass conservation, a projection matrix that corrects for localization effects is constructed. In order to maintain both mass conservation and positivity preservation through the analysis step, we construct a data assimilation algorithm based on quadratic programming and ensemble Kalman filtering. Mass and positivity are both preserved by formulating the filter update as a set of quadratic programming problems that incorporate constraints. Some simple numerical experiments indicate that this approach can have a significant positive impact on the posterior ensemble distribution, giving results that are more physically plausible both for individual ensemble members and for the ensemble mean. The results show clear improvements in both analyses and forecasts, particularly in the presence of localized features. Behavior of the algorithm is also tested in the presence of model error.

  15. Fast filtering algorithm based on vibration systems and neural information exchange and its application to micro motion robot

    International Nuclear Information System (INIS)

    Gao Wa; Zha Fu-Sheng; Li Man-Tian; Song Bao-Yu

    2014-01-01

    This paper develops a fast filtering algorithm based on vibration systems theory and a neural information exchange approach. Its characteristics, including the derivation process and parameter analysis, are discussed, and its feasibility and effectiveness are verified by comparing its filtering performance with various filtering methods, such as the fast wavelet transform algorithm, the particle filtering method and our previously developed single-degree-of-freedom vibration system filtering algorithm, in both simulation and practical settings. The comparisons indicate that a significant advantage of the proposed fast filtering algorithm is its extremely fast filtering speed with good filtering performance. Further, the developed fast filtering algorithm is applied to the navigation and positioning system of a micro motion robot, whose signal preprocessing has demanding real-time requirements. The preprocessed data are then used to estimate the heading angle error and the attitude angle error of the micro motion robot. The estimation experiments illustrate the high practicality of the proposed fast filtering algorithm. (general)

  16. Evaluation of the OSC-TV iterative reconstruction algorithm for cone-beam optical CT.

    Science.gov (United States)

    Matenine, Dmitri; Mascolo-Fortin, Julia; Goussard, Yves; Després, Philippe

    2015-11-01

    The present work evaluates an iterative reconstruction approach, namely the ordered subsets convex (OSC) algorithm with regularization via total variation (TV) minimization, in the field of cone-beam optical computed tomography (optical CT). One of the uses of optical CT is gel-based 3D dosimetry for radiation therapy, where it is employed to map dose distributions in radiosensitive gels. Model-based iterative reconstruction may improve optical CT image quality and contribute to a wider use of optical CT in clinical gel dosimetry. The algorithm was evaluated using experimental data acquired by a cone-beam optical CT system, as well as complementary numerical simulations. A fast GPU implementation of OSC-TV was used to achieve reconstruction times comparable to those of conventional filtered backprojection. Images obtained via OSC-TV were compared with the corresponding filtered backprojections. Spatial resolution and uniformity phantoms were scanned, and the respective reconstructions were subjected to evaluation of the modulation transfer function, image uniformity, and accuracy. The artifacts due to refraction and total signal loss from opaque objects were also studied. The cone-beam optical CT data reconstructions showed that OSC-TV outperforms filtered backprojection in terms of image quality, thanks to a model-based simulation of the photon attenuation process. It was shown to significantly improve the image spatial resolution and reduce image noise. The accuracy of the estimation of linear attenuation coefficients remained similar to that obtained via filtered backprojection. Certain image artifacts due to opaque objects were reduced. Nevertheless, the common artifact due to the gel container walls could not be eliminated. The use of iterative reconstruction improves cone-beam optical CT image quality in many ways. The comparisons between OSC-TV and filtered backprojection presented in this paper demonstrate that OSC-TV can potentially improve the rendering of ...

  17. Detecting an atomic clock frequency anomaly using an adaptive Kalman filter algorithm

    Science.gov (United States)

    Song, Huijie; Dong, Shaowu; Wu, Wenjun; Jiang, Meng; Wang, Weixiong

    2018-06-01

    The abnormal frequencies of an atomic clock mainly include frequency jumps and frequency drift jumps. Atomic clock frequency anomaly detection is a key technique in time-keeping. The Kalman filter algorithm, as a linear optimal algorithm, has been widely used in real-time detection of abnormal frequency. In order to obtain an optimal state estimation, the observation model and dynamic model of the Kalman filter algorithm should satisfy Gaussian white noise conditions. The detection performance is degraded if anomalies affect the observation model or dynamic model. The adaptive Kalman filter algorithm, applied here to clock frequency anomaly detection, uses the residuals given by the prediction to build an adaptive factor, and the predicted state covariance matrix is corrected in real time by this adaptive factor. The results show that the model error is reduced and the detection performance is improved. The effectiveness of the algorithm is verified on a frequency jump simulation, a frequency drift jump simulation and measured data of the atomic clock, using the chi-square test.

  18. A novel pulse compression algorithm for frequency modulated active thermography using band-pass filter

    Science.gov (United States)

    Chatterjee, Krishnendu; Roy, Deboshree; Tuli, Suneet

    2017-05-01

    This paper proposes a novel pulse compression algorithm, in the context of frequency modulated thermal wave imaging. The compression filter is derived from a predefined reference pixel in a recorded video, which contains direct measurement of the excitation signal alongside the thermal image of a test piece. The filter causes all the phases of the constituent frequencies to be adjusted to nearly zero value, so that on reconstruction a pulse is obtained. Further, due to band-limited nature of the excitation, signal-to-noise ratio is improved by suppressing out-of-band noise. The result is similar to that of a pulsed thermography experiment, although the peak power is drastically reduced. The algorithm is successfully demonstrated on mild steel and carbon fibre reference samples. Objective comparisons of the proposed pulse compression algorithm with the existing techniques are presented.
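
    Deriving the compression filter from a reference pixel that records the excitation, so that all constituent phases are driven to zero, is essentially a matched (phase-conjugate) filter. A hedged sketch under that interpretation, with hypothetical array shapes:

    ```python
    import numpy as np

    def pulse_compress(video, ref_pixel):
        """Cross-correlate each pixel history with the excitation trace.

        video : (T, H, W) thermal recording; ref_pixel : length-T excitation
        signal measured at the reference pixel of the same recording.
        """
        T = video.shape[0]
        ref = np.fft.rfft(ref_pixel - ref_pixel.mean())
        vid = np.fft.rfft(video - video.mean(axis=0), axis=0)
        # Conjugating the reference cancels its phase; the band-limited
        # excitation spectrum suppresses out-of-band noise.
        return np.fft.irfft(vid * np.conj(ref)[:, None, None], n=T, axis=0)
    ```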

  19. Particle Filter-Based Target Tracking Algorithm for Magnetic Resonance-Guided Respiratory Compensation : Robustness and Accuracy Assessment

    NARCIS (Netherlands)

    Bourque, Alexandra E; Bedwani, Stéphane; Carrier, Jean-François; Ménard, Cynthia; Borman, Pim; Bos, Clemens; Raaymakers, Bas W; Mickevicius, Nikolai; Paulson, Eric; Tijssen, Rob H N

    PURPOSE: To assess overall robustness and accuracy of a modified particle filter-based tracking algorithm for magnetic resonance (MR)-guided radiation therapy treatments. METHODS AND MATERIALS: An improved particle filter-based tracking algorithm was implemented, which used a normalized ...

  20. Improved target detection algorithm using Fukunaga-Koontz transform and distance classifier correlation filter

    Science.gov (United States)

    Bal, A.; Alam, M. S.; Aslan, M. S.

    2006-05-01

    Often sensor ego-motion or fast target movement causes the target to temporarily go out of the field of view, leading to the reappearing-target detection problem in target tracking applications. Since the target leaves the current frame and reenters at a later frame, the reentry location and the variations in rotation, scale, and other 3D orientations of the target are not known, which complicates detection. A detection algorithm has been developed using the Fukunaga-Koontz Transform (FKT) and a distance classifier correlation filter (DCCF). The detection algorithm uses target and background information, extracted from training samples, to detect possible candidate target images. The detected candidate target images are then introduced into the second algorithm, DCCF, called the clutter rejection module, to confirm the target; the target coordinates are then determined and the tracking algorithm is initiated. The performance of the proposed FKT-DCCF based target detection algorithm has been tested using real-world forward looking infrared (FLIR) video sequences.

  1. Noise filtering algorithm for the MFTF-B computer based control system

    International Nuclear Information System (INIS)

    Minor, E.G.

    1983-01-01

    An algorithm to reduce the message traffic in the MFTF-B computer based control system is described. The algorithm filters analog inputs to the control system. Its purpose is to distinguish between changes in the inputs due to noise and changes due to significant variations in the quantity being monitored. Noise is rejected while significant changes are reported to the control system data base, thus keeping the data base updated with a minimum number of messages. The algorithm is memory efficient, requiring only four bytes of storage per analog channel, and computationally simple, requiring only subtraction and comparison. Quantitative analysis of the algorithm is presented for the case of additive Gaussian noise. It is shown that the algorithm is stable and tends toward the mean value of the monitored variable over a wide variety of additive noise distributions.
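
    A subtraction-and-comparison deadband filter of the kind described, one stored reference value per channel, fits in a few lines; this generator is an illustrative reconstruction, not the MFTF-B code:

    ```python
    def deadband_filter(samples, threshold):
        """Yield a sample only when it leaves the noise deadband."""
        last = None
        for x in samples:
            # Report the first sample, then only significant changes.
            if last is None or abs(x - last) > threshold:
                last = x
                yield x
    ```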

  2. Multichannel Filtered-X Error Coded Affine Projection-Like Algorithm with Evolving Order

    Directory of Open Access Journals (Sweden)

    J. G. Avalos

    2017-01-01

    Affine projection (AP) algorithms are commonly used to implement active noise control (ANC) systems because they provide fast convergence. However, their high computational complexity can restrict their use in certain practical applications. The Error Coded Affine Projection-Like (ECAP-L) algorithm has been proposed to reduce the computational burden while maintaining the speed of AP, but no version of this algorithm has been derived for active noise control, for which the adaptive structures are very different from those of other configurations. In this paper, we introduce a version of the ECAP-L algorithm for single-channel and multichannel ANC systems. The proposed algorithm is implemented using the conventional filtered-x scheme, which incurs a lower computational cost than the modified filtered-x structure, especially for multichannel systems. Furthermore, we present an evolutionary method that dynamically decreases the projection order in order to reduce the dimensions of the matrix used in the algorithm’s computations. Experimental results demonstrate that the proposed algorithm yields a convergence speed and a final residual error similar to those of AP algorithms. Moreover, it achieves meaningful computational savings, leading to simpler hardware implementation of real-time ANC applications.

  3. Multichannel algorithm for fast 3D reconstruction

    International Nuclear Information System (INIS)

    Rodet, Thomas; Grangeat, Pierre; Desbat, Laurent

    2002-01-01

    Some recent medical imaging applications such as functional imaging (PET and SPECT) or interventional imaging (CT fluoroscopy) involve increasing amounts of data. In order to reduce the image reconstruction time, we develop a new fast 3D reconstruction algorithm based on a divide-and-conquer approach. The proposed multichannel algorithm performs an indirect frequential subband decomposition of the image f to be reconstructed (f = Σ f_j) through the filtering of the projections Rf. The subband images f_j are reconstructed on a downsampled grid without information suppression. In order to reduce the computation time, we do not backproject the null filtered projections and we downsample the number of projections according to the Shannon conditions associated with the subband image. Our algorithm is based on filtering and backprojection operators. Using the same algorithms for these basic operators, our approach is three and a half times faster than a classical FBP algorithm for a 2D 512x512 image and six times faster for a 3D 32x512x512 image. (author)

  4. Evolutionary Cellular Automata for Image Segmentation and Noise Filtering Using Genetic Algorithms

    Directory of Open Access Journals (Sweden)

    Sihem SLATNIA

    2011-01-01

    We use an evolutionary process to seek a specialized set of rules, among a wide range of rules, to be used by Cellular Automata (CA) for a range of tasks: extracting edges in a given gray or colour image, and noise filtering applied to black-and-white images. This best set of local rules determines the future state of the CA in an asynchronous way. The Genetic Algorithm (GA) is applied to search for the best CA rules that can realize the best edge detection and noise filtering.

  5. Evolutionary Cellular Automata for Image Segmentation and Noise Filtering Using Genetic Algorithms

    Directory of Open Access Journals (Sweden)

    Okba Kazar

    2011-01-01

    We use an evolutionary process to seek a specialized set of rules, among a wide range of rules, to be used by Cellular Automata (CA) for a range of tasks: extracting edges in a given gray or colour image, and noise filtering applied to black-and-white images. This best set of local rules determines the future state of the CA in an asynchronous way. The Genetic Algorithm (GA) is applied to search for the best CA rules that can realize the best edge detection and noise filtering.

  6. Study of data filtering algorithms for the KM3NeT neutrino telescope

    Energy Technology Data Exchange (ETDEWEB)

    Herold, B., E-mail: Bjoern.Herold@physik.uni-erlangen.de [Erlangen Centre for Astroparticle Physics, Erwin-Rommel-Str. 1, 91058 Erlangen (Germany); Seitz, T., E-mail: Thomas.Seitz@physik.uni-erlangen.de [Erlangen Centre for Astroparticle Physics, Erwin-Rommel-Str. 1, 91058 Erlangen (Germany); Shanidze, R., E-mail: shanidze@physik.uni-erlangen.de [Erlangen Centre for Astroparticle Physics, Erwin-Rommel-Str. 1, 91058 Erlangen (Germany)

    2011-01-21

    The photomultiplier signals above a defined threshold (hits) are the main data collected from the KM3NeT neutrino telescope. The neutrino and muon events will be reconstructed from these signals. However, in the deep sea the dominant sources of hits are the decays of the ⁴⁰K isotope and marine fauna bioluminescence. The selection of neutrino and muon events requires the implementation of fast and efficient data filtering algorithms for the reduction of accidental background event rates. A possible data filtering scheme for the KM3NeT neutrino telescope is discussed in the paper.

  7. A Nonmonotone Line Search Filter Algorithm for the System of Nonlinear Equations

    Directory of Open Access Journals (Sweden)

    Zhong Jin

    2012-01-01

    We present a new iterative method based on the line search filter method with a nonmonotone strategy to solve systems of nonlinear equations. The equations are divided into two groups: some equations are treated as constraints and the others act as the objective function, and the two groups are updated only at the iterations where this is actually needed. We apply the nonmonotone idea to the sufficient reduction conditions and the filter technique, which leads to flexibility and an acceptance behavior comparable to monotone methods. The new algorithm is shown to be globally convergent, and numerical experiments demonstrate its effectiveness.

  8. Design of reproducible polarized and non-polarized edge filters using genetic algorithm

    International Nuclear Information System (INIS)

    Ejigu, Efrem Kebede; Lacquet, B M

    2010-01-01

    Recent advancement in optical fibre communications technology is partly due to the advancement of optical thin film technology, which includes the development of new optical filter design methods and the improvement of existing ones. The genetic algorithm is one of the new design methods that show promising results for a number of complicated design specifications. A finding of this study is that the genetic algorithm design method, through its optimization capability, can give more reliable and reproducible designs of any specification. The design method in this study optimizes the thickness of each layer to reach the best possible solution. Its capabilities and unavoidable limitations in designing polarized and non-polarized edge filters from absorptive and dispersive materials are well demonstrated. It is also demonstrated that polarized and non-polarized designs from the genetic algorithm are reproducible with great success. This research has accomplished the task of formulating a computer program using the genetic algorithm in a Matlab environment for the design of reproducible polarized and non-polarized filters of any sort from any kind of material.

  9. A non-linear algorithm for current signal filtering and peak detection in SiPM

    International Nuclear Information System (INIS)

    Putignano, M; Intermite, A; Welsch, C P

    2012-01-01

    Read-out of Silicon Photomultipliers is commonly achieved by means of charge integration, a method particularly susceptible to after-pulsing noise and not efficient for low-level light signals. Current signal monitoring, characterized by easier electronic implementation and intrinsically faster than charge integration, is also more suitable for low-level light signals and can potentially result in much reduced after-pulsing noise effects. However, its use is to date limited by the need to develop a suitable read-out algorithm for signal analysis and filtering, able to achieve current peak detection and measurement with the needed precision and accuracy. In this paper we present an original algorithm, based on a piecewise linear-fitting approach, to filter the noise of the current signal and hence efficiently identify and measure current peaks. The proposed algorithm is then compared with the optimal linear filtering algorithm for time-encoded peak detection, based on a moving average routine, and assessed in terms of accuracy, precision, and peak detection efficiency, demonstrating improvements of 1-2 orders of magnitude in all these quality factors.

  10. 3D head pose estimation and tracking using particle filtering and ICP algorithm

    KAUST Repository

    Ben Ghorbel, Mahdi; Baklouti, Malek; Couvet, Serge

    2010-01-01

    This paper addresses the issue of 3D head pose estimation and tracking. Existing approaches generally need a huge database, a training procedure, manual initialization, or manually extracted face features. We propose a framework for estimating the 3D head pose at a fine level and tracking it continuously across multiple degrees of freedom (DOF) based on ICP and particle filtering. We approach the problem, using 3D computational techniques, by aligning a face model to the 3D dense estimation computed by a stereo vision method, and propose a particle filter algorithm to refine and track the posterior estimate of the position of the face. This work makes two contributions: the first concerns the alignment part, where we propose an extended ICP algorithm using an anisotropic scale transformation; the second concerns the tracking part, where we propose the use of the particle filtering algorithm and constrain the search space using the ICP algorithm in the propagation step. The results show that the system is able to fit and track the head properly, and remains accurate on new individuals without manual adaptation or training.

  11. A generic EEG artifact removal algorithm based on the multi-channel Wiener filter

    Science.gov (United States)

    Somers, Ben; Francart, Tom; Bertrand, Alexander

    2018-06-01

    Objective. The electroencephalogram (EEG) is an essential neuro-monitoring tool for both clinical and research purposes, but is susceptible to a wide variety of undesired artifacts. Removal of these artifacts is often done using blind source separation techniques, relying on a purely data-driven transformation, which may sometimes fail to sufficiently isolate artifacts in only one or a few components. Furthermore, some algorithms perform well for specific artifacts, but not for others. In this paper, we aim to develop a generic EEG artifact removal algorithm, which allows the user to annotate a few artifact segments in the EEG recordings to inform the algorithm. Approach. We propose an algorithm based on the multi-channel Wiener filter (MWF), in which the artifact covariance matrix is replaced by a low-rank approximation based on the generalized eigenvalue decomposition. The algorithm is validated using both hybrid and real EEG data, and is compared to other algorithms frequently used for artifact removal. Main results. The MWF-based algorithm successfully removes a wide variety of artifacts with better performance than current state-of-the-art methods. Significance. Current EEG artifact removal techniques often have limited applicability due to their specificity to one kind of artifact, their complexity, or simply because they are too ‘blind’. This paper demonstrates a fast, robust and generic algorithm for removal of EEG artifacts of various types, i.e. those that were annotated as unwanted by the user.
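
    The low-rank replacement of the artifact covariance via the generalized eigenvalue decomposition (GEVD) can be sketched with `scipy.linalg.eigh`. The rank choice and filter assembly below follow the standard MWF formulation and are our assumptions, not a transcription of the paper:

    ```python
    import numpy as np
    from scipy.linalg import eigh

    def mwf_artifact_removal(eeg, artifact_mask, rank=3):
        """eeg : (channels, samples); artifact_mask : per-sample boolean
        flags marking the user-annotated artifact segments."""
        Ryy = np.cov(eeg[:, artifact_mask])    # artifact-segment covariance
        Rnn = np.cov(eeg[:, ~artifact_mask])   # clean-segment covariance
        w, V = eigh(Ryy, Rnn)                  # GEVD: V.T @ Rnn @ V = I
        order = np.argsort(w)[::-1]            # strongest artifact dirs first
        w, V = w[order], V[:, order]
        d = np.zeros_like(w)
        d[:rank] = 1.0 - 1.0 / np.maximum(w[:rank], 1.0)   # Wiener gains
        # Artifact estimate in GEVD coordinates, mapped back to channels.
        artifact = np.linalg.inv(V.T) @ (d[:, None] * (V.T @ eeg))
        return eeg - artifact
    ```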

  12. An image-space parallel convolution filtering algorithm based on shadow map

    Science.gov (United States)

    Li, Hua; Yang, Huamin; Zhao, Jianping

    2017-07-01

    Shadow mapping is commonly used in real-time rendering. In this paper, we present an accurate and efficient method for generating soft shadows from planar area lights. The method first generates a depth map from the light's view and analyzes the depth-discontinuity areas as well as shadow boundaries. These areas are then described as binary values in a texture map called the binary light-visibility map, and a GPU-based parallel convolution filtering algorithm is applied to smooth out the boundaries with a box filter. Experiments show that our algorithm is an effective shadow-map-based method that produces perceptually accurate soft shadows in real time, with more detail at shadow boundaries than previous works.

  13. Adaptive Kalman filter based state of charge estimation algorithm for lithium-ion battery

    International Nuclear Information System (INIS)

    Zheng Hong; Liu Xu; Wei Min

    2015-01-01

    In order to improve the accuracy of battery state of charge (SOC) estimation, this paper takes a lithium-ion battery as an example to study an adaptive Kalman filter based SOC estimation algorithm. First, a second-order battery system model is introduced, and the temperature and charge rate are incorporated into the model. The temperature and charge rate are then used, together with the parameters of the adaptive Kalman filter based estimation model, to estimate the battery SOC. Numerical simulation verifies that, in the ideal case, the accuracy of the SOC estimation is enhanced by adding these two elements. Finally, actual road conditions are simulated with ADVISOR, and the simulation results show that the proposed method improves the accuracy of battery SOC estimation under actual road conditions, which greatly expands its scope of application in engineering. (paper)
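
    A hedged sketch of the kind of innovation-based adaptation involved is given below; the generic state-space matrices stand in for the paper's second-order battery model, and the sliding-window estimate of the measurement-noise covariance is one common adaptive-Kalman-filter scheme, not necessarily the exact one used in the paper.

    ```python
    import numpy as np

    def adaptive_kf_step(x, P, z, F, H, Q, R, innovations, window=25):
        """One predict/update cycle of a Kalman filter whose measurement-noise
        covariance R is re-estimated from the recent innovation sequence."""
        # prediction
        x = F @ x
        P = F @ P @ F.T + Q
        # innovation
        v = z - H @ x
        innovations.append(v)
        if len(innovations) > window:
            innovations.pop(0)
        if len(innovations) > 1:
            # sample innovation covariance C ~ H P H^T + R  =>  adapt R
            C = np.atleast_2d(np.cov(np.array(innovations).T))
            R = C - H @ P @ H.T
            # a practical implementation would also floor R so that it
            # stays positive definite
        # measurement update
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ v
        P = (np.eye(len(x)) - K @ H) @ P
        return x, P, R
    ```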

  14. Improvement of the temporal resolution of cardiac CT reconstruction algorithms using an optimized filtering step

    International Nuclear Information System (INIS)

    Roux, S.; Desbat, L.; Koenig, A.; Grangeat, P.

    2005-01-01

    In this paper we study a property of the filtering step of the multi-cycle reconstruction algorithms used in cardiac CT. We show that the common filtering procedure is not optimal in the case of divergent geometry and slightly degrades the temporal resolution. We propose instead the filtering procedure related to the work of Noo et al. (F. Noo, M. Defrise, R. Clackdoyle, and H. Kudo, Image reconstruction from fan-beam projections on less than a short scan, Phys. Med. Biol., 47:2525-2546, July 2002) and show that this alternative reaches the optimal temporal resolution with the same computational effort. (N.C.)

  15. Optimal filter design with progressive genetic algorithm for local damage detection in rolling bearings

    Science.gov (United States)

    Wodecki, Jacek; Michalak, Anna; Zimroz, Radoslaw

    2018-03-01

    Harsh industrial conditions in underground mining make local damage detection in heavy-duty machinery difficult. For vibration signals, one of the most intuitive ways of obtaining a signal with the expected properties, such as clearly visible informative features, is prefiltration with an appropriately designed filter. The design of such filters is a broad field of research in its own right. In this paper the authors propose a novel approach to dedicated optimal filter design using a progressive genetic algorithm. The presented method is fully data-driven and requires no prior knowledge of the signal. It has been tested against a set of real and simulated data, and its effectiveness has been demonstrated for both the healthy and the damaged case. A termination criterion for the evolution process was developed, and a diagnostic decision-making feature is proposed for determining the final result.

  16. Fault diagnosis for wind turbine planetary ring gear via a meshing resonance based filtering algorithm.

    Science.gov (United States)

    Wang, Tianyang; Chu, Fulei; Han, Qinkai

    2017-03-01

    Identifying differences between the spectra or envelope spectra of a faulty signal and a healthy baseline signal is an efficient strategy for detecting local faults in planetary gearboxes. However, causes other than local faults can also generate the characteristic frequency of a ring gear fault, and this may further affect the detection of a local fault. To address this issue, a new filtering algorithm based on the meshing resonance phenomenon is proposed. In detail, the raw signal is first decomposed into different frequency bands and levels. Then, a new meshing index and an MRgram are constructed to determine which bands belong to the meshing resonance frequency band. Furthermore, an optimal filter band is selected from this MRgram. Finally, the ring gear fault can be detected from the envelope spectrum of the band-pass filtering result.

  17. Efficient Backprojection-Based Synthetic Aperture Radar Computation with Many-Core Processors

    Directory of Open Access Journals (Sweden)

    Jongsoo Park

    2013-01-01

    Tackling computationally challenging problems with high efficiency often requires the combination of algorithmic innovation, advanced architecture, and thorough exploitation of parallelism. We demonstrate this synergy through synthetic aperture radar (SAR) via backprojection, an image reconstruction method that can require hundreds of TFLOPS. Computation cost is significantly reduced by our new algorithm of approximate strength reduction; data movement cost is economized by software locality optimizations facilitated by advanced architecture support; parallelism is fully harnessed in various patterns and granularities. We deliver over 35 billion backprojections per second of throughput per compute node on an Intel® Xeon® processor E5-2670-based cluster equipped with Intel® Xeon Phi™ coprocessors. This corresponds to processing a 3K×3K image within a second using a single node. Our study can be extended to other settings: backprojection is applicable elsewhere, including medical imaging; approximate strength reduction is a general code transformation technique; and many-core processors are emerging as a solution to energy-efficient computing.

  18. Data Assimilation in Air Contaminant Dispersion Using a Particle Filter and Expectation-Maximization Algorithm

    Directory of Open Access Journals (Sweden)

    Rongxiao Wang

    2017-09-01

    The accurate prediction of air contaminant dispersion is essential to air quality monitoring and to the emergency management of contaminant gas leakage incidents in chemical industry parks. Conventional atmospheric dispersion models can seldom give accurate predictions due to inaccurate input parameters. In order to improve the prediction accuracy of dispersion models, two data assimilation methods (i.e., the typical particle filter and the combination of a particle filter with the expectation-maximization algorithm) are proposed to assimilate virtual Unmanned Aerial Vehicle (UAV) observations with measurement error into the atmospheric dispersion model. Two emission cases with different dimensions of state parameters are considered. To test the performance of the proposed methods, two numerical experiments corresponding to the two emission cases are designed and implemented. The results show that the particle filter can effectively estimate the model parameters and improve the accuracy of model predictions when the dimension of the state parameters is relatively low. In contrast, when the dimension of the state parameters becomes higher, the particle filter combined with the expectation-maximization algorithm performs better in terms of parameter estimation accuracy. Therefore, the proposed data assimilation methods are able to effectively support air quality monitoring and emergency management in chemical industry parks.
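
    For illustration, a generic bootstrap particle filter of the kind used here can be sketched as follows; the propagation and likelihood functions are placeholders for the atmospheric dispersion model and the UAV observation model.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def particle_filter(z_seq, propagate, likelihood, x0_sampler, n_particles=1000):
        """Generic bootstrap particle filter.

        propagate(x)     : moves an (N, d) particle array one step (adds noise)
        likelihood(z, x) : p(z | x) for each particle, shape (N,)
        x0_sampler(n)    : draws the initial particle cloud, shape (n, d)
        """
        x = x0_sampler(n_particles)
        estimates = []
        for z in z_seq:
            x = propagate(x)                     # prediction step
            w = likelihood(z, x)                 # weight by the observation
            w = w / w.sum()
            estimates.append(w @ x)              # posterior-mean estimate
            idx = rng.choice(len(x), size=len(x), p=w)   # multinomial resampling
            x = x[idx]
        return np.array(estimates)
    ```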

  19. A content-boosted collaborative filtering algorithm for personalized training in interpretation of radiological imaging.

    Science.gov (United States)

    Lin, Hongli; Yang, Xuedong; Wang, Weisheng

    2014-08-01

    Devising a method that can select cases based on the performance levels of trainees and the characteristics of cases is essential for developing a personalized training program in radiology education. In this paper, we propose a novel hybrid prediction algorithm called content-boosted collaborative filtering (CBCF) to predict the difficulty level of each case for each trainee. CBCF utilizes a content-based filtering (CBF) method to enhance the existing trainee-case rating data and then provides final predictions through a collaborative filtering (CF) algorithm, incorporating the advantages of both CBF and CF while inheriting the disadvantages of neither. The CBCF method is compared with the pure CBF and pure CF approaches on three datasets, and the experimental results are evaluated in terms of the MAE metric. Our experimental results show that CBCF outperforms the pure CBF and CF methods by 13.33% and 12.17%, respectively, in prediction precision. This also suggests that CBCF can be used in the development of personalized training systems in radiology education.

  20. Application of the Trend Filtering Algorithm for Photometric Time Series Data

    Science.gov (United States)

    Gopalan, Giri; Plavchan, Peter; van Eyken, Julian; Ciardi, David; von Braun, Kaspar; Kane, Stephen R.

    2016-08-01

    Detecting transient light curves (e.g., transiting planets) requires high-precision data, and thus it is important to effectively filter systematic trends affecting ground-based wide-field surveys. We apply an implementation of the Trend Filtering Algorithm (TFA) to the 2MASS calibration catalog and selected Palomar Transient Factory (PTF) photometric time series data. TFA is successful at reducing the overall dispersion of light curves, but it may over-filter intrinsic variables and increase “instantaneous” dispersion when the template set is not judiciously chosen. In an attempt to rectify these issues, we modify the original TFA from the literature by incorporating measurement uncertainties into its computation, by including ancillary data correlated with noise, and by algorithmically selecting the template set using clustering algorithms, as suggested by various authors. This approach may be particularly useful for appropriately accounting for surveys with variable photometric precision and/or combined data sets. In summary, our contributions are to provide a MATLAB software implementation of TFA and a number of modifications tested on synthetic and real data, to summarize the performance of TFA and the various modifications on real ground-based data sets (2MASS and PTF), and to assess the efficacy of TFA and the modifications using synthetic light curve tests consisting of transiting and sinusoidal variables. While the transiting-variables test indicates that these modifications confer no advantage to transit detection, the sinusoidal-variables test indicates potential improvements in detection accuracy.
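
    At its core, TFA subtracts a linear least-squares combination of template light curves from each target light curve; the sketch below illustrates this, with an optional uncertainty weighting standing in for the measurement-uncertainty modification discussed above. The function and argument names are illustrative, not those of the MATLAB implementation.

    ```python
    import numpy as np

    def trend_filter(y, templates, sigma=None):
        """Remove systematic trends from a light curve.

        y         : (n_epochs,) target light curve
        templates : (n_templates, n_epochs) trend-template light curves
        sigma     : optional (n_epochs,) measurement uncertainties; when
                    given, the fit becomes weighted least squares
        """
        A = templates.T                     # design matrix (n_epochs, n_templates)
        b = y
        if sigma is not None:
            A = A / sigma[:, None]
            b = y / sigma
        coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
        return y - templates.T @ coeffs     # filtered light curve
    ```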

  1. APPLICABILITY ANALYSIS OF CLOTH SIMULATION FILTERING ALGORITHM FOR MOBILE LIDAR POINT CLOUD

    Directory of Open Access Journals (Sweden)

    S. Cai

    2018-04-01

    Classifying the original point clouds into ground and non-ground points is a key step in LiDAR (light detection and ranging) data post-processing. The cloth simulation filtering (CSF) algorithm, which is based on a physical process, has been validated as an accurate, automatic and easy-to-use algorithm for airborne LiDAR point clouds. As a new three-dimensional data collection technique, mobile laser scanning (MLS) has gradually been applied in various fields, such as the reconstruction of digital terrain models (DTM), 3D building modeling, and forest inventory and management. Compared with airborne LiDAR point clouds, mobile LiDAR point clouds differ in point density, distribution and complexity. Some filtering algorithms for airborne LiDAR data have been applied directly to mobile LiDAR point clouds, but they did not give satisfactory results. In this paper, we explore the ability of the CSF algorithm to handle mobile LiDAR point clouds. Three samples with different terrain shapes are selected to test the performance of the algorithm, yielding total errors of 0.44%, 0.77% and 1.20%, respectively. Additionally, a large-area dataset is tested to further validate the effectiveness of the algorithm, and the results show that it can quickly and accurately separate point clouds into ground and non-ground points. In summary, this algorithm is efficient and reliable for mobile LiDAR point clouds.

  2. Hardware-efficient implementation of digital FIR filter using fast first-order moment algorithm

    Science.gov (United States)

    Cao, Li; Liu, Jianguo; Xiong, Jun; Zhang, Jing

    2018-03-01

    Since the digital finite impulse response (FIR) filter can be transformed into the shift-add form of multiple small-sized first-order moments, this paper presents, based on the existing fast first-order moment algorithm, a novel multiplier-less structure to calculate any number of sequential filtering results in parallel. Theoretical analysis of its hardware and time complexities reveals that, by appropriately setting the degree of parallelism and the decomposition factor for a fixed word width, the proposed structure may achieve better area-time efficiency than the existing two-dimensional (2-D) memoryless-based filter. To evaluate the performance concretely, the proposed designs for different numbers of taps, along with the existing 2-D memoryless-based filters, are synthesized with Synopsys Design Compiler using a 0.18-μm SMIC library. The comparisons show that the proposed design has lower area-time complexity and power consumption when the number of filter taps is larger than 48.
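
    The principle behind the first-order moment form can be illustrated in software as follows (hardware implementations use shifters and adders): with coefficients quantized to small non-negative integers, the inner product sum_k h_k x_(n-k) equals the first-order moment sum_q q*s_q of coefficient-indexed partial sums s_q, which can then be evaluated with additions only. This is a conceptual sketch, not the proposed parallel structure.

    ```python
    def fir_output_via_moment(window, h_int):
        """Compute sum_k h_int[k] * window[k] without multiplications,
        where h_int holds small non-negative integer coefficients.

        window : most-recent input samples, aligned with h_int
        """
        q_max = max(h_int)
        s = [0.0] * (q_max + 1)          # s[q] = sum of samples whose coeff == q
        for xk, hk in zip(window, h_int):
            s[hk] += xk
        # first-order moment sum_q q*s[q] via cumulative additions only
        acc, y = 0.0, 0.0
        for q in range(q_max, 0, -1):
            acc += s[q]                  # acc = sum_{p >= q} s[p]
            y += acc                     # adding acc once per q gives sum_q q*s[q]
        return y
    ```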

  3. Conservation of Mass and Preservation of Positivity with Ensemble-Type Kalman Filter Algorithms

    Science.gov (United States)

    Janjic, Tijana; Mclaughlin, Dennis; Cohn, Stephen E.; Verlaan, Martin

    2014-01-01

    This paper considers the incorporation of constraints to enforce physically based conservation laws in the ensemble Kalman filter. In particular, constraints are used to ensure that the ensemble members and the ensemble mean conserve mass and remain nonnegative through measurement updates. In certain situations, filtering algorithms such as the ensemble Kalman filter (EnKF) and ensemble transform Kalman filter (ETKF) yield updated ensembles that conserve mass but contain negative values, even though the actual states must be nonnegative. In such situations, if negative values are set to zero or a log transform is introduced, the total mass will not be conserved. In this study, mass and positivity are both preserved by formulating the filter update as a set of quadratic programming problems that incorporate non-negativity constraints. Simple numerical experiments indicate that this approach can have a significant positive impact on the posterior ensemble distribution, giving results that are more physically plausible both for individual ensemble members and for the ensemble mean. In two examples, an update that includes a non-negativity constraint is able to properly describe the transport of a sharp feature (e.g., a triangle or cone). A number of implementation questions still need to be addressed, particularly the need to develop a computationally efficient quadratic programming update for large ensembles.
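
    For the special case of projecting a single updated state back onto the admissible set with an identity weighting, the quadratic program min ||v - x||^2 subject to v >= 0 and sum(v) = mass has a closed-form, sort-based solution (Euclidean projection onto a scaled simplex); the sketch below shows this special case only, whereas the paper's formulation is more general.

    ```python
    import numpy as np

    def project_mass_nonneg(x, mass):
        """Solve min ||v - x||^2  s.t.  v >= 0, sum(v) = mass  (mass > 0).

        This is Euclidean projection onto a scaled simplex, computable in
        closed form after a sort, with no iterative QP solver needed.
        """
        u = np.sort(x)[::-1]
        cumsum = np.cumsum(u)
        k = np.arange(1, len(x) + 1)
        # largest index at which the running threshold keeps the entry positive
        rho = np.nonzero(u - (cumsum - mass) / k > 0)[0][-1]
        theta = (cumsum[rho] - mass) / (rho + 1.0)
        return np.maximum(x - theta, 0.0)

    # each updated ensemble member could be projected back onto the
    # physically admissible set after the EnKF analysis step, e.g.:
    # ens[:, i] = project_mass_nonneg(ens[:, i], total_mass)
    ```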

  4. Evaluation of Kalman filters and genetic algorithms for delayed-neutron nondestructive assay data analyses

    International Nuclear Information System (INIS)

    Aumeier, S.E.; Forsmann, J.H.

    1998-01-01

    The ability to nondestructively determine the presence and quantity of fissile/fertile nuclei in various matrices is important in several areas of nuclear applications, including international and domestic safeguards, radioactive waste characterization, and nuclear facility operations. An analysis was performed to determine the feasibility of identifying the masses of individual fissionable isotopes from a cumulative delayed-neutron signal resulting from the neutron irradiation of several uranium and plutonium isotopes. The feasibility of two separate data-processing techniques was studied: Kalman filtering and genetic algorithms. The basis of each technique is reviewed, and the structure of the algorithms as applied to the delayed-neutron analysis problem is presented. The results of parametric studies performed using several variants of the algorithms are presented, and the effect of including additional constraining information, such as additional measurements and known relative isotopic concentrations, is discussed. The parametric studies were conducted using simulated delayed-neutron data representative of the cumulative delayed-neutron response following irradiation of a sample containing 238U, 235U, 239Pu, and 240Pu. The results show that, by processing delayed-neutron data representative of two significantly different fissile/fertile fission ratios, both Kalman filters and genetic algorithms are capable of yielding reasonably accurate estimates of the mass of the individual isotopes contained in a given assay sample

  5. Active filtering applied to radiographic images unfolded by the Richardson-Lucy algorithm

    International Nuclear Information System (INIS)

    Almeida, Gevaldo L. de; Silvani, Maria Ines; Lopes, Ricardo T.

    2011-01-01

    Degradation of images caused by systematic uncertainties can be reduced when one knows the features of the spoiling agent. Typical uncertainties of this kind arise in radiographic images due to the non-zero resolution of the detector used to acquire them, and from the non-punctual character of the source employed in the acquisition, or from the beam divergence when extended sources are used. Both features blur the image: instead of a single point, it exhibits a spot with a vanishing edge, thus reproducing the point spread function (PSF) of the system. Once this spoiling function is known, an inverse-problem approach involving inversion of matrices can be used to retrieve the original image. As these matrices are generally ill-conditioned due to statistical fluctuation and truncation errors, iterative procedures should be applied, such as the Richardson-Lucy algorithm. This algorithm has been applied in this work to unfold radiographic images acquired by transmission of thermal neutrons and gamma rays. After this procedure, the resulting images undergo an active filtering that noticeably improves their final quality at a negligible cost in processing time. The filter ruling the process is based on the matrix of the correction factors for the last iteration of the deconvolution procedure. Synthetic images degraded with a known PSF and subjected to the same treatment have been used as a benchmark to evaluate the soundness of the developed active filtering procedure. The deconvolution and filtering algorithms have been incorporated into a Fortran program written to process real images, generate the synthetic ones and display both. (author)
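
    For reference, a compact NumPy/SciPy sketch of the Richardson-Lucy iteration used in the deconvolution step is given below; the active-filtering stage built on the last iteration's correction factors is specific to the paper's Fortran program and is not reproduced here.

    ```python
    import numpy as np
    from scipy.signal import fftconvolve

    def richardson_lucy(image, psf, n_iter=30, eps=1e-12):
        """Richardson-Lucy deconvolution of `image` by the known `psf`."""
        psf = psf / psf.sum()                  # normalized point spread function
        psf_mirror = psf[::-1, ::-1]
        estimate = np.full_like(image, image.mean(), dtype=float)
        for _ in range(n_iter):
            blurred = fftconvolve(estimate, psf, mode='same')
            ratio = image / np.maximum(blurred, eps)   # correction factors
            estimate *= fftconvolve(ratio, psf_mirror, mode='same')
        return estimate
    ```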

  6. An Image Filter Based on Shearlet Transformation and Particle Swarm Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    Kai Hu

    2015-01-01

    Digital images are frequently polluted by noise, which makes data postprocessing difficult. To remove noise while preserving as much image detail as possible, this paper proposes an image filtering algorithm that combines the merits of the Shearlet transform and the particle swarm optimization (PSO) algorithm. First, we use the classical Shearlet transform to decompose the noised image into many subwavelets over multiple scales and orientations. Second, we assign a weighting factor to each of the subwavelets obtained. Then, using the classical inverse Shearlet transform, we obtain a composite image composed of the weighted subwavelets. After that, we design a fast, rough method to evaluate the noise level of the new image; using this measure as the fitness, we adopt PSO to find the optimal weighting factors. After many iterations, the optimal factors and the inverse Shearlet transform yield the best denoised image. Experimental results show that the proposed algorithm eliminates noise effectively and yields a good peak signal-to-noise ratio (PSNR).
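
    A minimal particle swarm loop of the kind used to tune the subband weighting factors is sketched below; the fitness function is a placeholder for the paper's fast, rough noise-level estimate, and the parameter values are illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def pso(fitness, dim, n_particles=30, n_iter=100, w=0.7, c1=1.5, c2=1.5):
        """Minimize `fitness` over [0, 1]^dim with a basic particle swarm."""
        x = rng.random((n_particles, dim))     # positions, e.g. subband weights
        v = np.zeros_like(x)
        pbest = x.copy()
        pbest_f = np.apply_along_axis(fitness, 1, x)
        g = pbest[np.argmin(pbest_f)]          # global best position
        for _ in range(n_iter):
            r1, r2 = rng.random((2, n_particles, dim))
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
            x = np.clip(x + v, 0.0, 1.0)
            f = np.apply_along_axis(fitness, 1, x)
            better = f < pbest_f
            pbest[better], pbest_f[better] = x[better], f[better]
            g = pbest[np.argmin(pbest_f)]
        return g
    ```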

  7. Low-cost attitude determination system using an extended Kalman filter (EKF) algorithm

    Science.gov (United States)

    Esteves, Fernando M.; Nehmetallah, Georges; Abot, Jandro L.

    2016-05-01

    Attitude determination is one of the most important subsystems in spacecraft, satellite, or scientific balloon missions, since it can be combined with actuators to provide rate stabilization and pointing accuracy for payloads. In this paper, a low-cost attitude determination system with a precision on the order of arc-seconds is presented, using low-cost commercial sensors: a set of uncorrelated MEMS gyroscopes, two clinometers, and a magnetometer arranged in a hierarchical manner. The faster but less precise sensors are updated by the slower but more precise ones through an Extended Kalman Filter (EKF)-based data fusion algorithm. A review of the EKF algorithm fundamentals and their implementation for the current application is presented, along with an analysis of sensor noise. Finally, the results of the data fusion algorithm implementation are discussed in detail.
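
    The generic EKF cycle underlying such a fusion scheme can be sketched as follows; the state propagation and measurement functions, together with their Jacobians, are application-supplied (here they would come from the gyroscope propagation and the clinometer/magnetometer updates).

    ```python
    import numpy as np

    def ekf_step(x, P, u, z, f, F_jac, h, H_jac, Q, R):
        """One extended Kalman filter cycle.

        f(x, u) : nonlinear state propagation;    F_jac(x, u): its Jacobian
        h(x)    : nonlinear measurement function; H_jac(x)  : its Jacobian
        """
        # predict (e.g. integrate gyroscope rates)
        x_pred = f(x, u)
        F = F_jac(x, u)
        P_pred = F @ P @ F.T + Q
        # update with the slower, more precise sensors
        H = H_jac(x_pred)
        S = H @ P_pred @ H.T + R
        K = P_pred @ H.T @ np.linalg.inv(S)
        x_new = x_pred + K @ (z - h(x_pred))
        P_new = (np.eye(len(x)) - K @ H) @ P_pred
        return x_new, P_new
    ```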

  8. Investigation of Backprojection Uncertainties With M6 Earthquakes

    Science.gov (United States)

    Fan, Wenyuan; Shearer, Peter M.

    2017-10-01

    We investigate possible biasing effects of inaccurate timing corrections on teleseismic P wave backprojection imaging of large earthquake ruptures. These errors occur because empirically estimated time shifts based on aligning P wave first arrivals are exact only at the hypocenter and provide approximate corrections for other parts of the rupture. Using the Japan subduction zone as a test region, we analyze 46 M6-M7 earthquakes over a 10 year period, including many aftershocks of the 2011 M9 Tohoku earthquake, performing waveform cross correlation of their initial P wave arrivals to obtain hypocenter timing corrections to global seismic stations. We then compare backprojection images for each earthquake using its own timing corrections with those obtained using the time corrections from other earthquakes. This provides a measure of how well subevents can be resolved with backprojection of a large rupture as a function of distance from the hypocenter. Our results show that backprojection is generally very robust and that the median subevent location error is about 25 km across the entire study region (~700 km). The backprojection coherence loss and location errors do not noticeably converge to zero even when the event pairs are very close (<20 km). This indicates that most of the timing differences are due to 3-D structure close to each of the hypocenter regions, which limits the effectiveness of attempts to refine backprojection images using aftershock calibration, at least in this region.

  9. Raman spectroscopy denoising based on smoothing filter combined with EEMD algorithm

    Science.gov (United States)

    Tian, Dayong; Lv, Xiaoyi; Mo, Jiaqing; Chen, Chen

    2018-02-01

    In the extraction of Raman spectra, the signal is affected by a variety of background noises that weaken the useful information of the spectra or even submerge it entirely, so spectral denoising is very important. The traditional ensemble empirical mode decomposition (EEMD) method removes noise by discarding the IMF components that mainly contain noise; however, this loses some details of the Raman signal. To address this shortcoming of the EEMD algorithm, a denoising method combining a smoothing filter with EEMD is proposed in this paper. First, EEMD is used to decompose the noisy Raman signal into several IMF components. Then, the components mainly containing noise are selected using the autocorrelation function, and the smoothing filter is used to remove the noise from these components. Finally, the sum of the denoised components is added to the remaining components to obtain the final denoised signal. The experimental results show that, compared with the traditional denoising algorithm, the signal-to-noise ratio (SNR), root mean square error (RMSE) and correlation coefficient are significantly improved by the proposed smoothing filter combined with EEMD.

  10. Analysis of Naïve Bayes Algorithm for Email Spam Filtering across Multiple Datasets

    Science.gov (United States)

    Fitriah Rusland, Nurul; Wahid, Norfaradilla; Kasim, Shahreen; Hafit, Hanayanti

    2017-08-01

    E-mail spam continues to be a problem on the Internet. Spammed e-mail may contain many copies of the same message, commercial advertisements or other irrelevant posts such as pornographic content. In previous research, different filtering techniques were used to detect these e-mails, such as Random Forest, Naïve Bayes, Support Vector Machine (SVM) and Neural Network classifiers. In this research, we test the Naïve Bayes algorithm for e-mail spam filtering on two datasets, the Spam Data and SPAMBASE datasets [8], and evaluate its performance. Performance on the datasets is evaluated in terms of accuracy, recall, precision and F-measure. Our research uses the WEKA tool for the evaluation of the Naïve Bayes algorithm for e-mail spam filtering on both datasets. The results show that the type of e-mail and the number of instances in the dataset influence the performance of Naïve Bayes.
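
    A minimal scikit-learn analogue of this experiment is sketched below (the paper itself used the WEKA tool); the four-message corpus is a placeholder for the Spam Data and SPAMBASE datasets.

    ```python
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import classification_report

    # placeholder corpus; 1 = spam, 0 = ham
    emails = ["win money now", "meeting at noon", "cheap pills", "project update"]
    labels = [1, 0, 1, 0]

    X = CountVectorizer().fit_transform(emails)
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, labels, test_size=0.5, random_state=0, stratify=labels)
    clf = MultinomialNB().fit(X_tr, y_tr)
    # precision, recall and F-measure, as in the evaluation above
    print(classification_report(y_te, clf.predict(X_te), zero_division=0))
    ```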

  11. An algorithm for three-dimensional imaging in the positron camera

    International Nuclear Information System (INIS)

    Chen Kun; Ma Mei; Xu Rongfen; Shen Miaohe

    1986-01-01

    A mathematical back-projection filtering algorithm for image reconstruction using two-dimensional signals detected with parallel multiwire proportional chambers is described. The approaches of pseudo-three-dimensional and full three-dimensional image reconstruction are introduced, and the corresponding point response functions are defined as well. The design parameters and computation procedure of the full three-dimensional method are presented.

  12. New distributed fusion filtering algorithm based on covariances over sensor networks with random packet dropouts

    Science.gov (United States)

    Caballero-Águila, R.; Hermoso-Carazo, A.; Linares-Pérez, J.

    2017-07-01

    This paper studies the distributed fusion estimation problem for multisensor measured outputs perturbed by correlated noises and uncertainties modelled by random parameter matrices. Each sensor transmits its outputs to a local processor over a packet-erasure channel and, consequently, random losses may occur during transmission. Different white sequences of Bernoulli variables are introduced to model the transmission losses. For the estimation, each lost output is replaced by its estimate based on the information received previously, and only the covariances of the processes involved are used, without requiring the signal evolution model. First, a recursive algorithm for the local least-squares filters is derived using an innovation approach. Then, the cross-correlation matrices between any two local filters are obtained. Finally, the distributed fusion filter weighted by matrices is obtained from the local filters by applying the least-squares criterion. The performance of the estimators and the influence of both sensor uncertainties and transmission losses on the estimation accuracy are analysed in a numerical example.

  13. Perfect blind restoration of images blurred by multiple filters: theory and efficient algorithms.

    Science.gov (United States)

    Harikumar, G; Bresler, Y

    1999-01-01

    We address the problem of restoring an image from its noisy convolutions with two or more unknown finite impulse response (FIR) filters. We develop theoretical results about the existence and uniqueness of solutions, and show that under some generically true assumptions, both the filters and the image can be determined exactly in the absence of noise, and stably estimated in its presence. We present efficient algorithms to estimate the blur functions and their sizes. These algorithms are of two types, subspace-based and likelihood-based, and are extensions of techniques proposed for the solution of the multichannel blind deconvolution problem in one dimension. We present memory and computation-efficient techniques to handle the very large matrices arising in the two-dimensional (2-D) case. Once the blur functions are determined, they are used in a multichannel deconvolution step to reconstruct the unknown image. The theoretical and practical implications of edge effects, and "weakly exciting" images are examined. Finally, the algorithms are demonstrated on synthetic and real data.

  14. Research on the Filtering Algorithm in Speed and Position Detection of Maglev Trains

    Directory of Open Access Journals (Sweden)

    Chunhui Dai

    2011-07-01

    This paper briefly introduces the traction system of a permanent magnet electrodynamic suspension (EDS) train. The synchronous traction mode based on long stators and track cable is described, and a speed and position detection system is recommended. It is installed on board and used as the feedback end. Restricted by the maglev train's structure, the permanent magnet EDS train uses a non-contact method to detect its position. Because of vibration and the track joints, the position signal sent by the position sensor is always aberrant and noisy. To solve this problem, a linear discrete track-differentiator filtering algorithm is proposed. The filtering characteristics of the track-differentiator (TD) and of TD groups are analyzed. Four TDs in series are used in the signal processing unit. The results show that the track-differentiator performs well and allows the traction system to run normally.

  15. Research on the filtering algorithm in speed and position detection of maglev trains.

    Science.gov (United States)

    Dai, Chunhui; Long, Zhiqiang; Xie, Yunde; Xue, Song

    2011-01-01

    This paper briefly introduces the traction system of a permanent magnet electrodynamic suspension (EDS) train. The synchronous traction mode based on long stators and track cable is described, and a speed and position detection system is recommended. It is installed on board and used as the feedback end. Restricted by the maglev train's structure, the permanent magnet EDS train uses a non-contact method to detect its position. Because of vibration and the track joints, the position signal sent by the position sensor is always aberrant and noisy. To solve this problem, a linear discrete track-differentiator filtering algorithm is proposed. The filtering characteristics of the track-differentiator (TD) and of TD groups are analyzed. Four TDs in series are used in the signal processing unit. The results show that the track-differentiator performs well and allows the traction system to run normally.

  16. A multiresolution hierarchical classification algorithm for filtering airborne LiDAR data

    Science.gov (United States)

    Chen, Chuanfa; Li, Yanyan; Li, Wei; Dai, Honglei

    2013-08-01

    We present a multiresolution hierarchical classification (MHC) algorithm for differentiating ground from non-ground points in LiDAR point clouds based on point residuals from an interpolated raster surface. MHC includes three levels of hierarchy, with the cell resolution and the residual threshold increasing simultaneously from the low to the high level of the hierarchy. At each level, the surface is iteratively interpolated towards the ground using thin plate splines (TPS) until no more ground points are classified, and the classified ground points are used to update the surface in the next iteration. Fifteen groups of benchmark datasets, provided by the International Society for Photogrammetry and Remote Sensing (ISPRS) commission, were used to compare the performance of MHC with that of 17 other published filtering methods. The results indicate that MHC, with an average total error of 4.11% and an average Cohen's kappa coefficient of 86.27%, performs better than all the other filtering methods.

  17. An Efficient Data Fingerprint Query Algorithm Based on Two-Leveled Bloom Filter

    OpenAIRE

    Bin Zhou; Rongbo Zhu; Ying Zhang; Linhui Cheng

    2013-01-01

    A decade ago, the function of the fingerprint-comparison algorithm was to judge whether a newly partitioned data chunk was already present in a storage system. At present, in most de-duplication backup systems, the fingerprints of the data chunks are too numerous to be stored completely in memory, and the performance of the system is unavoidably degraded by accesses to the storage system at the querying stage. Accordingly, a new query mechanism, namely the Two-stage Bloom Filter (TBF) mechanism...
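
    The flavor of a two-level fingerprint query can be sketched as follows: a small, cache-resident first-level filter rejects most absent fingerprints before the larger second level (and, ultimately, the on-disk index) is consulted. This sketch is only loosely inspired by the idea; the class names are hypothetical and the paper's TBF design details differ.

    ```python
    import hashlib

    class BloomFilter:
        def __init__(self, n_bits, n_hashes):
            self.n_bits, self.n_hashes = n_bits, n_hashes
            self.bits = bytearray(n_bits // 8 + 1)

        def _positions(self, key: bytes):
            for i in range(self.n_hashes):
                h = hashlib.sha256(i.to_bytes(2, 'big') + key).digest()
                yield int.from_bytes(h[:8], 'big') % self.n_bits

        def add(self, key: bytes):
            for p in self._positions(key):
                self.bits[p // 8] |= 1 << (p % 8)

        def __contains__(self, key: bytes):
            return all(self.bits[p // 8] >> (p % 8) & 1
                       for p in self._positions(key))

    class TwoLevelBloom:
        """Level 1: small and cache-resident; level 2: larger, consulted
        only when level 1 reports a possible hit."""
        def __init__(self):
            self.l1 = BloomFilter(1 << 16, 4)
            self.l2 = BloomFilter(1 << 22, 6)

        def add(self, fingerprint: bytes):
            self.l1.add(fingerprint)
            self.l2.add(fingerprint)

        def might_contain(self, fingerprint: bytes):
            return fingerprint in self.l1 and fingerprint in self.l2
    ```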

  18. Design of 2-D Recursive Filters Using Self-adaptive Mutation Differential Evolution Algorithm

    Directory of Open Access Journals (Sweden)

    Lianghong Wu

    2011-08-01

    This paper investigates a novel approach to the design of two-dimensional recursive digital filters using the differential evolution (DE) algorithm. The design task is reformulated as a constrained minimization problem and is solved by a Self-adaptive Mutation DE algorithm (SAMDE), which adopts an adaptive mutation operator combining the advantages of the DE/rand/1/bin and DE/best/2/bin strategies. As a result, its convergence performance is greatly improved, a conclusion confirmed by numerical experiments. The proposed SAMDE approach is applied to a numerical example and compared with previous design methods. The computational experiments show that the SAMDE approach obtains better results than previous design methods.

  19. Artificial Fish Swarm Algorithm-Based Particle Filter for Li-Ion Battery Life Prediction

    Directory of Open Access Journals (Sweden)

    Ye Tian

    2014-01-01

    An intelligent online prognostic approach is proposed for predicting the remaining useful life (RUL) of lithium-ion (Li-ion) batteries based on the artificial fish swarm algorithm (AFSA) and the particle filter (PF), an integrated approach combining a model-based method with a data-driven method. The parameters of the empirical model, which is based on the capacity fade trends of Li-ion batteries, are identified by exploiting the tracking ability of the PF. AFSA-PF aims to improve the performance of the basic PF: by driving the prior particles to the domain of high likelihood, it allows global optimization and prevents particle degeneracy, thereby improving the particle distribution and increasing prediction accuracy and algorithm convergence. Data provided by NASA are used to verify this approach and to compare it with the basic PF and the regularized PF. AFSA-PF is shown to be more accurate and precise.

  20. An Extended Kalman Filter-Based Attitude Tracking Algorithm for Star Sensors.

    Science.gov (United States)

    Li, Jian; Wei, Xinguo; Zhang, Guangjun

    2017-08-21

    Efficiency and reliability are key issues when a star sensor operates in tracking mode. In the case of high attitude dynamics, the performance of existing attitude tracking algorithms degrades rapidly. In this paper an extended Kalman filter based attitude tracking algorithm is presented. The star sensor is modeled as a nonlinear stochastic system, with the state estimate providing the three degree-of-freedom attitude quaternion and angular velocity. The star positions in the star image are predicted and measured to estimate the optimal attitude. Furthermore, all the cataloged stars observed in the sensor field of view according to the predicted image motion are accessed using a catalog partition table to speed up the tracking, a technique called star mapping. Software simulations and a night-sky experiment are performed to validate the efficiency and reliability of the proposed method.

  1. Two-dimensional restoration of single photon emission computed tomography images using the Kalman filter

    International Nuclear Information System (INIS)

    Boulfelfel, D.; Rangayyan, R.M.; Kuduvalli, G.R.; Hahn, L.J.; Kloiber, R.

    1994-01-01

    The discrete filtered backprojection (DFBP) algorithm used for the reconstruction of single photon emission computed tomography (SPECT) images affects image quality because of the operations of filtering and discretization. The discretization of the filtered backprojection process can cause the modulation transfer function (MTF) of the SPECT imaging system to be anisotropic and nonstationary, especially near the edges of the camera's field of view. The use of shift-invariant restoration techniques fails to restore large images because these techniques do not account for such variations in the MTF. This study presents the application of a two-dimensional (2-D) shift-variant Kalman filter for post-reconstruction restoration of SPECT slices. This filter was applied to SPECT images of a hollow cylinder phantom; a resolution phantom; and a large, truncated cone phantom containing two types of cold spots, a sphere, and a triangular prism. The images were acquired on an ADAC GENESYS camera. A comparison was performed between results obtained by the Kalman filter and those obtained by shift-invariant filters. Quantitative analysis of the restored images performed through measurement of root mean squared errors shows a considerable reduction in error of Kalman-filtered images over images restored using shift-invariant methods

  2. A simple method to back-project isocenter dose of radiotherapy treatments using EPID transit dosimetry

    Energy Technology Data Exchange (ETDEWEB)

    Silveira, T.B.; Cerbaro, B.Q.; Rosa, L.A.R. da, E-mail: thiago.fisimed@gmail.com, E-mail: tbsilveira@inca.gov.br [Instituto de Radioproteção e Dosimetria (IRD/CNEN-RJ), Rio de Janeiro - RJ (Brazil)

    2017-07-01

    The aim of this work was to implement a simple algorithm to evaluate the isocenter dose in a phantom using the back-projected transmitted dose acquired with an Electronic Portal Imaging Device (EPID) available on a Varian Trilogy accelerator with nominal 6 and 10 MV photon beams. The algorithm was developed in MATLAB to calibrate the EPID-measured dose in absolute dose, using a deconvolution process, and to incorporate all scattering and attenuation contributions due to photon interactions with the phantom. The modeling process was simplified by using empirical curve fits to describe the contributions of the scattering and attenuation effects. The implemented algorithm and method were validated using 19 patient treatment plans with 104 clinical irradiation fields projected onto the phantom. Results for the EPID absolute dose calibration by deconvolution showed percent deviations lower than 1%. The final method validation presented average percent deviations between isocenter doses calculated by back-projection and isocenter doses determined with an ionization chamber of 1.86% (SD of 1.00%) and -0.94% (SD of 0.61%) for 6 and 10 MV, respectively. Normalized field-by-field analysis showed deviations smaller than 2% for 89% of all data for the 6 MV beams and 94% for the 10 MV beams. It was concluded that the proposed algorithm is sufficiently accurate for in vivo dosimetry, being able to detect dose delivery errors larger than 3-4% for conformal and intensity-modulated radiation therapy techniques. (author)

  3. A matched-filter algorithm to detect amperometric spikes resulting from quantal secretion.

    Science.gov (United States)

    Balaji Ramachandran, Supriya; Gillis, Kevin D

    2018-01-01

    Electrochemical microelectrodes located immediately adjacent to the cell surface can detect spikes of amperometric current during exocytosis as the transmitter released from a single vesicle is oxidized on the electrode surface. Automated techniques to detect spikes are needed in order to quantify the spike rate as a measure of the rate of exocytosis. We have developed a Matched Filter (MF) detection algorithm that scans the data set with a library of prototype spike templates while performing a least-squares fit to determine the amplitude and standard error. The ratio of the fit amplitude to the standard error constitutes a criterion score that is assigned for each time point and for each template. A spike is detected when the criterion score exceeds a threshold, and the highest-scoring template and the time of peak score are identified. The search for the next spike commences only after the score falls below a second, lower threshold, to reduce false positives. The approach was extended to detect spikes with double-exponential decays using the sum of two templates. Receiver operating characteristic (ROC) plots demonstrate that the algorithm detects >95% of manually identified spikes with a false-positive rate of ∼2%, and that the MF algorithm performs better than algorithms that detect spikes based on a derivative-threshold approach. The MF approach performs well and leads naturally to methods for identifying spike parameters.
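
    The core of the criterion-score computation can be sketched as follows, assuming a single unit-energy template: at each offset, the least-squares amplitude of the template is its inner product with the data window, and the score is that amplitude divided by its standard error. The function names and the re-arming logic in the trailing comment are illustrative.

    ```python
    import numpy as np

    def matched_filter_scores(trace, template):
        """Criterion score (fit amplitude / standard error) at every offset.

        The template is normalized to unit energy, so the least-squares
        amplitude at each offset is the inner product with the data window.
        """
        t = template / np.linalg.norm(template)
        L = len(t)
        n = len(trace) - L + 1
        scores = np.empty(n)
        for i in range(n):
            w = trace[i:i + L]
            a = t @ w                        # LS amplitude of the template
            rss = w @ w - a * a              # residual sum of squares
            se = np.sqrt(max(rss, 1e-30) / (L - 1))  # std. error of amplitude
            scores[i] = a / se
        return scores

    # spikes are detected where the best template's score exceeds an upper
    # threshold; the search re-arms once the score falls below a lower one
    ```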

  4. Validation of Kalman Filter alignment algorithm with cosmic-ray data using a CMS silicon strip tracker endcap

    CERN Document Server

    Sprenger, D; Adolphi, R; Brauer, R; Feld, L; Klein, K; Ostaptchuk, A; Schael, S; Wittmer, B

    2010-01-01

    A Kalman Filter alignment algorithm has been applied to cosmic-ray data. We discuss the alignment algorithm and an experiment-independent implementation including outlier rejection and treatment of weakly determined parameters. Using this implementation, the algorithm has been applied to data recorded with one CMS silicon tracker endcap. Results are compared to both photogrammetry measurements and data obtained from a dedicated hardware alignment system, and good agreement is observed.

  5. A Cross-Domain Collaborative Filtering Algorithm Based on Feature Construction and Locally Weighted Linear Regression.

    Science.gov (United States)

    Yu, Xu; Lin, Jun-Yu; Jiang, Feng; Du, Jun-Wei; Han, Ji-Zhong

    2018-01-01

    Cross-domain collaborative filtering (CDCF) solves the sparsity problem by transferring rating knowledge from auxiliary domains. Obviously, different auxiliary domains have different importance to the target domain; however, previous works cannot effectively evaluate the significance of the different auxiliary domains. To overcome this drawback, we propose a cross-domain collaborative filtering algorithm based on Feature Construction and Locally Weighted Linear Regression (FCLWLR). We first construct features in the different domains and use these features to represent the different auxiliary domains, so that the weight computation across domains can be converted into a weight computation across features. Then we combine the features in the target domain and in the auxiliary domains and convert the cross-domain recommendation problem into a regression problem. Finally, we employ a Locally Weighted Linear Regression (LWLR) model to solve the regression problem. As LWLR is a nonparametric regression method, it can effectively avoid the underfitting and overfitting problems that occur in parametric regression methods. We conduct extensive experiments to show that the proposed FCLWLR algorithm is effective in addressing the data sparsity problem by transferring useful knowledge from the auxiliary domains, as compared with many state-of-the-art single-domain and cross-domain CF methods.
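
    A minimal sketch of the locally weighted linear regression step follows (the feature-construction stage is omitted); the Gaussian kernel, the bandwidth and the ridge jitter are illustrative choices.

    ```python
    import numpy as np

    def lwlr_predict(x_query, X, y, tau=1.0):
        """Locally weighted linear regression prediction at x_query.

        X   : (n, d) training features (target- and auxiliary-domain features)
        y   : (n,) ratings
        tau : kernel bandwidth
        """
        # Gaussian kernel weights centered on the query point
        w = np.exp(-np.sum((X - x_query) ** 2, axis=1) / (2.0 * tau ** 2))
        A = np.hstack([X, np.ones((len(X), 1))])   # affine design matrix
        q = np.append(x_query, 1.0)
        # weighted normal equations: (A^T W A) beta = A^T W y
        AW = A * w[:, None]
        beta = np.linalg.solve(A.T @ AW + 1e-8 * np.eye(A.shape[1]), AW.T @ y)
        return q @ beta
    ```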

  6. A Cross-Domain Collaborative Filtering Algorithm Based on Feature Construction and Locally Weighted Linear Regression

    Directory of Open Access Journals (Sweden)

    Xu Yu

    2018-01-01

    Cross-domain collaborative filtering (CDCF) solves the sparsity problem by transferring rating knowledge from auxiliary domains. Obviously, different auxiliary domains have different importance to the target domain; however, previous works cannot effectively evaluate the significance of the different auxiliary domains. To overcome this drawback, we propose a cross-domain collaborative filtering algorithm based on Feature Construction and Locally Weighted Linear Regression (FCLWLR). We first construct features in the different domains and use these features to represent the different auxiliary domains, so that the weight computation across domains can be converted into a weight computation across features. Then we combine the features in the target domain and in the auxiliary domains and convert the cross-domain recommendation problem into a regression problem. Finally, we employ a Locally Weighted Linear Regression (LWLR) model to solve the regression problem. As LWLR is a nonparametric regression method, it can effectively avoid the underfitting and overfitting problems that occur in parametric regression methods. We conduct extensive experiments to show that the proposed FCLWLR algorithm is effective in addressing the data sparsity problem by transferring useful knowledge from the auxiliary domains, as compared with many state-of-the-art single-domain and cross-domain CF methods.

  7. Optimal IIR filter design using Gravitational Search Algorithm with Wavelet Mutation

    Directory of Open Access Journals (Sweden)

    S.K. Saha

    2015-01-01

    This paper presents a global heuristic search optimization technique that is a hybridized version of the Gravitational Search Algorithm (GSA) and the Wavelet Mutation (WM) strategy. The Gravitational Search Algorithm with Wavelet Mutation (GSAWM) is adopted for the design of an 8th-order infinite impulse response (IIR) filter. GSA is based on the interaction of masses situated in a small isolated world, guided by an approximation of Newton's laws of gravity and motion. Each mass is represented by four parameters: position, active mass, passive mass and inertial mass. The position of the heaviest mass gives the near-optimal solution. For better exploitation in multidimensional search spaces, the WM strategy is applied to randomly selected particles, which enhances the capability of GSA for finding better near-optimal solutions. An extensive simulation study of low-pass (LP), high-pass (HP), band-pass (BP) and band-stop (BS) IIR filters demonstrates the potential of GSAWM in achieving sharper cut-off frequencies, smaller pass-band and stop-band ripples, narrower transition widths and higher stop-band attenuation with assured stability.

  8. Segmentation of Coronary Angiograms Using Gabor Filters and Boltzmann Univariate Marginal Distribution Algorithm

    Directory of Open Access Journals (Sweden)

    Fernando Cervantes-Sanchez

    2016-01-01

    This paper presents a novel method for improving the training step of single-scale Gabor filters by using the Boltzmann univariate marginal distribution algorithm (BUMDA) in X-ray angiograms. Since the single-scale Gabor filters (SSG) are governed by three parameters, the optimal selection of the SSG parameters is highly desirable in order to maximize the detection performance of coronary arteries while reducing the computational time. To obtain the best set of parameters for the SSG, the area (Az) under the receiver operating characteristic curve is used as the fitness function. Moreover, to classify vessel and nonvessel pixels from the Gabor filter response, the interclass variance thresholding method has been adopted. The experimental results using the proposed method obtained the highest detection rate, with Az=0.9502 over a training set of 40 images and Az=0.9583 with a test set of 40 images. In addition, the experimental results of vessel segmentation provided an accuracy of 0.944 on the test set of angiograms.

  9. A One ppm NDIR Methane Gas Sensor with Single Frequency Filter Denoising Algorithm

    Directory of Open Access Journals (Sweden)

    Binqing Jiang

    2012-09-01

    A non-dispersive infrared (NDIR) methane gas sensor prototype has achieved a minimum detection limit of 1 part per million by volume (ppm). The central idea of the sensor design is to decrease the detection limit by increasing the signal-to-noise ratio (SNR) of the system. In order to decrease the noise level, a single-frequency filter algorithm based on the fast Fourier transform (FFT) is adopted for signal processing. Through simulation and experiment, it is found that the full width at half maximum (FWHM) of the filter narrows as the sampling period lengthens and the lamp modulation frequency increases, and that at the optimum sampling period and modulation frequency the filtered signal maintains a noise-to-signal ratio below 1/10,000. The sensor prototype provides the key techniques for a hand-held methane detector with low cost and high resolution. Such a detector may facilitate the detection of leakage from buried city natural gas pipelines, the monitoring of landfill gas, the monitoring of air quality and so on.

  10. Prognostics 101: A tutorial for particle filter-based prognostics algorithm using Matlab

    International Nuclear Information System (INIS)

    An, Dawn; Choi, Joo-Ho; Kim, Nam Ho

    2013-01-01

    This paper presents a Matlab-based tutorial for model-based prognostics, which combines a physical model with observed data to identify model parameters, from which the remaining useful life (RUL) can be predicted. Among the many model-based prognostics algorithms, the particle filter is used in this tutorial for parameter estimation of a damage or degradation model. The tutorial is presented as a 62-line Matlab script that includes detailed explanations. As examples, a battery degradation model and a crack growth model are used to explain the updating process of model parameters, damage progression, and RUL prediction. To illustrate the results, the RUL at an arbitrary cycle is predicted in the form of a distribution, along with the median and 90% prediction interval. This tutorial will help beginners in prognostics to understand and use prognostics methods, and we hope it provides a standard for particle filter based prognostics.

  11. The mathematics of some tomography algorithms used at JET

    Energy Technology Data Exchange (ETDEWEB)

    Ingesson, L

    2000-03-01

    Mathematical details are given of various tomographic reconstruction algorithms that are in use at JET. These algorithms include constrained optimization (CO) with local basis functions, the Cormack method, methods with natural basis functions and the iterative projection-space reconstruction method. Topics discussed include: derivation of the matrix equation for constrained optimization, variable grid size, basis functions, line integrals, derivative matrices, smoothness matrices, analytical expression of the CO solution, sparse matrix storage, projection-space coordinates, the Cormack method in elliptical coordinates, interpolative generalized natural basis functions and some details of the implementation of the filtered backprojection method. (author)

  12. Distance-driven projection and backprojection in three dimensions

    International Nuclear Information System (INIS)

    De Man, Bruno; Basu, Samit

    2004-01-01

    Projection and backprojection are operations that arise frequently in tomographic imaging. Recently, we proposed a new method for projection and backprojection, which we call distance-driven, and that offers low arithmetic cost and a highly sequential memory access pattern. Furthermore, distance-driven projection and backprojection avoid several artefact-inducing approximations characteristic of some other methods. We have previously demonstrated the application of this method to parallel and fan beam geometries. In this paper, we extend the distance-driven framework to three dimensions and demonstrate its application to cone beam reconstruction. We also present experimental results to demonstrate the computational performance, the artefact characteristics and the noise-resolution characteristics of the distance-driven method in three dimensions

  13. Investigation of Back-Projection Uncertainties with M6 Earthquakes

    Science.gov (United States)

    Fan, W.; Shearer, P. M.

    2017-12-01

    We investigate possible biasing effects of inaccurate timing corrections on teleseismic P-wave back-projection imaging of large earthquake ruptures. These errors occur because empirically-estimated time shifts based on aligning P-wave first arrivals are exact only at the hypocenter and provide approximate corrections for other parts of the rupture. Using the Japan subduction zone as a test region, we analyze 46 M6-7 earthquakes over a ten-year period, including many aftershocks of the 2011 M9 Tohoku earthquake, performing waveform cross-correlation of their initial P-wave arrivals to obtain hypocenter timing corrections to global seismic stations. We then compare back-projection images for each earthquake using its own timing corrections with those obtained using the time corrections for other earthquakes. This provides a measure of how well sub-events can be resolved with back-projection of a large rupture as a function of distance from the hypocenter. Our results show that back-projection is generally very robust and that sub-event location errors average about 20 km across the entire study region (~700 km). The back-projection coherence loss and location errors do not noticeably converge to zero even when the event pairs are very close (<20 km). This indicates that most of the timing differences are due to 3D structure close to each of the hypocenter regions, which limits the effectiveness of attempts to refine back-projection images using aftershock calibration, at least in this region.

  14. Band-pass filtering algorithms for adaptive control of compressor pre-stall modes in aircraft gas-turbine engine

    Science.gov (United States)

    Kuznetsova, T. A.

    2018-05-01

    Methods for increasing the robustness of gas-turbine aircraft engines (GTE) to interference, based on extending the capabilities of their automatic control systems (ACS), are analyzed. Flow pulsations in the suction and discharge lines of the compressor, which may cause stall, are considered as the interference. An algorithmic solution for controlling GTE pre-stall modes near the stability boundary is proposed. The aim of the study is to develop band-pass filtering algorithms that provide the detection of compressor pre-stall modes for the GTE ACS. The characteristic feature of the pre-stall effect is an increase in the pressure pulsation amplitude over the impeller at multiples of the rotor frequency. The method used is based on a band-pass filter combining low-pass and high-pass digital filters; the impulse response of the high-pass filter is determined from a known low-pass filter impulse response by spectral inversion. The resulting transfer function of the second-order band-pass filter (BPF) corresponds to a stable system. Two circuit implementations of the BPF are synthesized, and the designed band-pass filtering algorithms are tested in the MATLAB environment. Comparative analysis of the amplitude-frequency responses of the proposed implementations allows the BPF scheme providing the best filtering quality to be chosen. The BPF response to a periodic sinusoidal signal, simulating the experimentally obtained pressure pulsation function in the pre-stall mode, is considered. The results of the model experiment demonstrate the effectiveness of applying band-pass filtering algorithms as part of the ACS to identify the pre-stall mode of the compressor by detecting the pressure fluctuation peaks that characterize the compressor's approach to the stability boundary.
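
    The filter construction described here (low-pass design, high-pass by spectral inversion, cascade to band-pass) can be sketched with NumPy/SciPy as follows; the cutoff frequencies, sampling rate and tap count are illustrative.

    ```python
    import numpy as np
    from scipy.signal import firwin, lfilter

    def bandpass_by_spectral_inversion(f_lo, f_hi, fs, n_taps=101):
        """Band-pass FIR built as a cascade of a low-pass filter (cutoff f_hi)
        and a high-pass filter obtained from a low-pass (cutoff f_lo) by
        spectral inversion. n_taps must be odd for the inversion trick."""
        lp_hi = firwin(n_taps, f_hi, fs=fs)     # low-pass at the upper cutoff
        lp_lo = firwin(n_taps, f_lo, fs=fs)     # low-pass at the lower cutoff
        hp_lo = -lp_lo
        hp_lo[n_taps // 2] += 1.0               # spectral inversion: delta - lp
        return np.convolve(lp_hi, hp_lo)        # cascade = band-pass taps

    # e.g. isolate pulsation components near a multiple of the rotor frequency:
    # taps = bandpass_by_spectral_inversion(180.0, 220.0, fs=5000.0)
    # filtered = lfilter(taps, 1.0, pressure_signal)
    ```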

  15. Time scale algorithm: Definition of ensemble time and possible uses of the Kalman filter

    Science.gov (United States)

    Tavella, Patrizia; Thomas, Claudine

    1990-01-01

    The comparative study of two time scale algorithms, devised to satisfy different but related requirements, is presented. They are ALGOS(BIPM), producing the international reference TAI at the Bureau International des Poids et Mesures, and AT1(NIST), generating the real-time time scale AT1 at the National Institute of Standards and Technology. In each case, the time scale is a weighted average of clock readings, but the weight determination and the frequency prediction are different because they are adapted to different purposes. The possibility of using a mathematical tool, such as the Kalman filter, together with the definition of the time scale as a weighted average, is also analyzed. Results obtained by simulation are presented.

  16. An inertia-free filter line-search algorithm for large-scale nonlinear programming

    Energy Technology Data Exchange (ETDEWEB)

    Chiang, Nai-Yuan; Zavala, Victor M.

    2016-02-15

    We present a filter line-search algorithm that does not require inertia information of the linear system. This feature enables the use of a wide range of linear algebra strategies and libraries, which is essential to tackle large-scale problems on modern computing architectures. The proposed approach performs curvature tests along the search step to detect negative curvature and to trigger convexification. We prove that the approach is globally convergent and we implement the approach within a parallel interior-point framework to solve large-scale and highly nonlinear problems. Our numerical tests demonstrate that the inertia-free approach is as efficient as inertia detection via symmetric indefinite factorizations. We also demonstrate that the inertia-free approach can lead to reductions in solution time because it reduces the amount of convexification needed.
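
    A toy sketch of the central idea follows: test curvature along the computed step instead of inspecting the inertia of the factorized matrix, and convexify with a multiple of the identity when the test fails. The test form, tolerance, and update rule here are simplified assumptions, not the paper's exact conditions.

```python
import numpy as np

def inertia_free_step(W, grad, alpha=1e-8, delta0=1e-4, max_tries=30):
    """Compute a step without inertia information, convexifying on demand.

    Solve (W + delta*I) d = -grad and test the curvature of the regularized
    matrix along d (equivalently, that d is a sufficient descent direction);
    if the test fails, increase delta and retry.
    """
    n = W.shape[0]
    delta = 0.0
    for _ in range(max_tries):
        d = np.linalg.solve(W + delta * np.eye(n), -grad)
        # d'(W + delta*I)d = -grad'd, so this is the curvature test along d:
        if -grad @ d >= alpha * (d @ d):
            return d, delta
        delta = delta0 if delta == 0.0 else 10.0 * delta   # trigger convexification
    raise RuntimeError("convexification failed")

W = np.array([[1.0, 0.0], [0.0, -2.0]])   # indefinite: inertia detection would flag it
d, delta = inertia_free_step(W, np.array([0.0, 1.0]))
```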

  17. A filtering approach to image reconstruction in 3D SPECT

    International Nuclear Information System (INIS)

    Bronnikov, Andrei V.

    2000-01-01

    We present a new approach to three-dimensional (3D) image reconstruction using analytical inversion of the exponential divergent beam transform, which can serve as a mathematical model for cone-beam 3D SPECT imaging. We apply a circular cone-beam scan and assume constant attenuation inside a convex area with a known boundary, which is satisfactory in brain imaging. The reconstruction problem is reduced to an image restoration problem characterized by a shift-variant point spread function which is given analytically. The method requires two computation steps: backprojection and filtering. The modulation transfer function (MTF) of the filter is derived by means of an original methodology using the 2D Laplace transform. The filter is implemented in the frequency domain and requires 2D Fourier transform of transverse slices. In order to obtain a shift-invariant cone-beam projection-backprojection operator we resort to an approximation, assuming that the collimator has a relatively large focal length. Nevertheless, numerical experiments demonstrate surprisingly good results for detectors with relatively short focal lengths. The use of a wavelet-based filtering algorithm greatly improves the stability to Poisson noise. (author)

  18. Design of low complexity sharp MDFT filter banks with perfect reconstruction using hybrid harmony-gravitational search algorithm

    Directory of Open Access Journals (Sweden)

    V. Sakthivel

    2015-12-01

The design of a low-complexity, sharp transition width Modified Discrete Fourier Transform (MDFT) filter bank with perfect reconstruction (PR) is proposed in this work. Current trends in technology require high data rates and speedy processing along with reduced power consumption, implementation complexity and chip area. Filters with sharp transition width are required for various applications in wireless communication. The frequency response masking (FRM) technique is used to reduce the implementation complexity of sharp MDFT filter banks with PR. To reduce the implementation complexity further, the continuous coefficients of the filters in the MDFT filter bank are represented in discrete space using canonic signed digit (CSD). The multipliers in the filters are replaced by shifters and adders. The number of non-zero bits is reduced in the conversion process to minimize the number of adders and shifters required for the filter implementation. This may degrade the performance of the MDFT filter bank with PR. In this work, the performance of the MDFT filter bank with PR is improved using a hybrid Harmony-Gravitational search algorithm.
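
    The CSD recoding step described above can be illustrated in a few lines. The sketch below converts a quantized filter coefficient to canonic signed-digit form (digits in {-1, 0, +1} with no two adjacent nonzeros) and evaluates the product with shifts and adds only; the coefficient value and word lengths are arbitrary examples, not taken from the paper.

```python
def to_csd(x, nbits):
    """Canonic signed-digit digits of integer x, LSB first, digits in {-1,0,+1}
    with no two adjacent nonzeros -- this minimizes the nonzero-digit count."""
    digits = []
    for _ in range(nbits):
        if x & 1:
            d = 2 - (x & 3)          # +1 if x ends in ...01, -1 if ...11
            x -= d
        else:
            d = 0
        digits.append(d)
        x >>= 1
    return digits

def csd_multiply(sample, digits):
    """Multiplier-free product: one shift and one add per nonzero digit."""
    acc = 0
    for k, d in enumerate(digits):
        if d:
            acc += d * (sample << k)  # a shifter plus an adder in hardware
    return acc

# Quantize a filter coefficient 0.716 to 8 fractional bits and check the recoding
coeff_int = round(0.716 * 256)        # 183
digits = to_csd(coeff_int, 10)
assert csd_multiply(1, digits) == coeff_int
print(sum(d != 0 for d in digits), "nonzero CSD digits vs",
      bin(coeff_int).count("1"), "binary ones")
```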

  19. A multi-reference filtered-x-Newton narrowband algorithm for active isolation of vibration and experimental investigations

    Science.gov (United States)

    Wang, Chun-yu; He, Lin; Li, Yan; Shuai, Chang-geng

    2018-01-01

In engineering applications, ship machinery vibration may be induced by multiple rotational machines sharing a common vibration isolation platform and operating at the same time, so that multiple sinusoidal components may be excited. These components may be located at frequencies with large differences or at very close frequencies. A multi-reference filtered-x Newton narrowband (MRFx-Newton) algorithm is proposed to control these multiple sinusoidal components in an MIMO (multiple input and multiple output) system, especially those located at very close frequencies. The proposed MRFx-Newton algorithm can decouple and suppress multiple sinusoidal components located in the same narrow frequency band even when such components cannot be separated from each other by a narrowband-pass filter. Like the Fx-Newton algorithm, good real-time performance is achieved through the faster convergence provided by the 2nd-order inverse secondary-path filter in the time domain. Experiments are also conducted to verify the feasibility and test the performance of the proposed algorithm, installed in an active-passive vibration isolation system, in suppressing the vibration excited by an artificial source and air compressors. The results show that the proposed algorithm not only has a convergence rate comparable to the Fx-Newton algorithm but also has better real-time performance and robustness than the Fx-Newton algorithm in active control of the vibration induced by multiple sound sources/rotational machines working on a shared platform.

  20. Segmentation of dermatoscopic images by frequency domain filtering and k-means clustering algorithms.

    Science.gov (United States)

    Rajab, Maher I

    2011-11-01

Since the introduction of epiluminescence microscopy (ELM), image analysis tools have been extended to the field of dermatology, in an attempt to algorithmically reproduce clinical evaluation. Accurate image segmentation of skin lesions is one of the key steps for useful, early and non-invasive diagnosis of cutaneous melanomas. This paper proposes two image segmentation algorithms based on frequency domain processing and k-means clustering/fuzzy k-means clustering. The two methods are capable of segmenting and extracting the true border that reveals the global structure irregularity (indentations and protrusions), which may suggest excessive cell growth or regression of a melanoma. As a pre-processing step, Fourier low-pass filtering is applied to reduce the surrounding noise in a skin lesion image. A quantitative comparison of the techniques is enabled by the use of synthetic skin lesion images that model lesions covered with hair to which Gaussian noise is added. The proposed techniques are also compared with an established optimal-based thresholding skin-segmentation method. It is demonstrated that for lesions with a range of different border irregularity properties, the k-means clustering and fuzzy k-means clustering segmentation methods provide the best performance over a range of signal to noise ratios. The proposed segmentation techniques are also demonstrated to have similar performance when tested on real skin lesions representing high-resolution ELM images. This study suggests that the segmentation results obtained using a combination of low-pass frequency filtering and k-means or fuzzy k-means clustering are superior to the result that would be obtained by using k-means or fuzzy k-means clustering segmentation methods alone. © 2011 John Wiley & Sons A/S.
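
    A minimal sketch of the pipeline's two stages follows: Fourier low-pass pre-filtering, then two-class k-means on pixel intensities, applied to a synthetic dark-lesion image. All sizes, cutoffs, and intensities below are assumptions for illustration, not the paper's settings.

```python
import numpy as np

def fourier_lowpass(img, cutoff=0.15):
    """Zero out frequencies above `cutoff` (fraction of Nyquist) to suppress
    hair and other high-frequency clutter before segmentation."""
    F = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    yy, xx = np.mgrid[-h // 2:h - h // 2, -w // 2:w - w // 2]
    mask = np.hypot(yy / (h / 2), xx / (w / 2)) <= cutoff
    return np.real(np.fft.ifft2(np.fft.ifftshift(F * mask)))

def kmeans_2class(values, iters=20):
    """Plain two-class k-means on pixel intensities (lesion vs. skin)."""
    c = np.percentile(values, [25, 75]).astype(float)   # initial centroids
    for _ in range(iters):
        labels = np.abs(values[:, None] - c[None, :]).argmin(axis=1)
        for k in range(2):
            if np.any(labels == k):
                c[k] = values[labels == k].mean()
    return labels

# Synthetic "lesion": dark disc on brighter skin, plus noise
img = np.full((128, 128), 0.8)
yy, xx = np.mgrid[:128, :128]
img[(yy - 64) ** 2 + (xx - 64) ** 2 < 30 ** 2] = 0.3
img += np.random.default_rng(1).normal(0, 0.05, img.shape)

smooth = fourier_lowpass(img)
segmentation = kmeans_2class(smooth.ravel()).reshape(img.shape)
```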

  1. Phase unwrapping algorithm using polynomial phase approximation and linear Kalman filter.

    Science.gov (United States)

    Kulkarni, Rishikesh; Rastogi, Pramod

    2018-02-01

    A noise-robust phase unwrapping algorithm is proposed based on state space analysis and polynomial phase approximation using wrapped phase measurement. The true phase is approximated as a two-dimensional first order polynomial function within a small sized window around each pixel. The estimates of polynomial coefficients provide the measurement of phase and local fringe frequencies. A state space representation of spatial phase evolution and the wrapped phase measurement is considered with the state vector consisting of polynomial coefficients as its elements. Instead of using the traditional nonlinear Kalman filter for the purpose of state estimation, we propose to use the linear Kalman filter operating directly with the wrapped phase measurement. The adaptive window width is selected at each pixel based on the local fringe density to strike a balance between the computation time and the noise robustness. In order to retrieve the unwrapped phase, either a line-scanning approach or a quality guided strategy of pixel selection is used depending on the underlying continuous or discontinuous phase distribution, respectively. Simulation and experimental results are provided to demonstrate the applicability of the proposed method.

  2. Phase Center Interpolation Algorithm for Airborne GPS through the Kalman Filter

    Directory of Open Access Journals (Sweden)

    Edson A. Mitishita

    2005-12-01

Aerial triangulation is a fundamental step in any photogrammetric project. Depending on the region to be mapped, surveying traditional control points still has a high cost. The distribution of control points within the block, and their positional quality, directly influence the resulting precision of the aerial triangulation. The key objectives of the airborne GPS technique are cost reduction and quality improvement of the ground control in modern photogrammetric projects. Nowadays, in Brazil, the largest photogrammetric companies are acquiring airborne GPS systems, but these systems often present operational difficulties because of the skilled personnel demanded by the high technology involved. Within the airborne GPS technique, one of the fundamental steps is the interpolation of the position of the phase center of the GPS antenna at the instant of each photo exposure. Traditionally, low-degree polynomials are used, but recent studies show that their accuracy is reduced in turbulent flights, which are quite common, especially in large-scale flights. This paper presents a solution to that problem through an algorithm based on the Kalman filter, which takes the dynamics of the problem into account. At the end of the paper, the results of a comparison between experiments done with the proposed methodology and a common linear interpolator are shown. These results show a significant accuracy gain over linear interpolation when the Kalman filter is used.

  3. Multi-example feature-constrained back-projection method for image super-resolution

    Institute of Scientific and Technical Information of China (English)

    Junlei Zhang; Dianguang Gai; Xin Zhang; Xuemei Li

    2017-01-01

Example-based super-resolution algorithms, which predict unknown high-resolution image information using a relationship model learnt from known high- and low-resolution image pairs, have attracted considerable interest in the field of image processing. In this paper, we propose a multi-example feature-constrained back-projection method for image super-resolution. Firstly, we take advantage of a feature-constrained polynomial interpolation method to enlarge the low-resolution image. Next, we consider low-frequency images of different resolutions to provide an example pair. Then, we use adaptive kNN search to find similar patches in the low-resolution image for every image patch in the high-resolution low-frequency image, and a regression model between similar patches is learnt. The learnt model is applied to the low-resolution high-frequency image to produce high-resolution high-frequency information. An iterative back-projection algorithm is used as the final step to determine the final high-resolution image (see the sketch below). Experimental results demonstrate that our method improves the visual quality of the high-resolution image.
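
    The final iterative back-projection step admits a compact sketch. The version below assumes a simple box-average decimation model and a nearest-neighbour back-projection kernel, for which the iteration converges immediately; realistic blur kernels need several iterations. It stands in for the paper's pipeline, whose initial enlargement uses feature-constrained interpolation rather than plain upsampling.

```python
import numpy as np

def downsample(x, s):
    """Box-average decimation: the assumed forward imaging model."""
    h, w = x.shape
    return x.reshape(h // s, s, w // s, s).mean(axis=(1, 3))

def upsample(x, s):
    """Nearest-neighbour enlargement: the assumed back-projection kernel."""
    return np.kron(x, np.ones((s, s)))

def iterative_backprojection(lr, s=2, iters=5, step=1.0):
    """Refine a high-resolution estimate by back-projecting the error of the
    simulated low-resolution image onto it."""
    hr = upsample(lr, s)              # plain initial enlargement
    for _ in range(iters):
        err = lr - downsample(hr, s)  # mismatch in low-resolution space
        hr += step * upsample(err, s) # back-project the error
    return hr

lr = np.random.default_rng(0).random((32, 32))
hr = iterative_backprojection(lr)
assert np.allclose(downsample(hr, 2), lr)   # HR image now explains the LR input
```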

  4. Research on the Random Shock Vibration Test Based on the Filter-X LMS Adaptive Inverse Control Algorithm

    Directory of Open Access Journals (Sweden)

    Wang Wei

    2016-01-01

The theory and algorithms of adaptive inverse control are presented, and it is pointed out that an adaptive inverse control strategy can effectively eliminate the influence of noise on system control. A frequency-domain filtered-x LMS adaptive inverse control algorithm is proposed and applied to the random shock vibration control process of a two-exciter hydraulic vibration test system, and the realization of the adaptive inverse control strategy in the random shock vibration test is summarized. Closed-loop and field tests show that the frequency-domain filtered-x LMS adaptive inverse control algorithm achieves high-precision control of the random shock vibration test.
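
    For orientation, here is a minimal time-domain filtered-x LMS sketch (the paper's algorithm operates in the frequency domain, which this does not reproduce): the reference is filtered through a model of the secondary path before driving the LMS update. The path coefficients, step size, and tap count below are illustrative assumptions.

```python
import numpy as np

def fxlms(x, d, s, s_hat, L=16, mu=0.01):
    """Filtered-x LMS.

    x     -- reference signal
    d     -- disturbance measured at the error sensor
    s     -- true secondary path (used only to simulate the plant)
    s_hat -- model of the secondary path used to filter the reference
    """
    w = np.zeros(L)                       # adaptive controller taps
    xbuf = np.zeros(L)                    # reference history
    ybuf = np.zeros(len(s))               # controller-output history
    fbuf = np.zeros(L)                    # filtered-reference history
    xf = np.convolve(x, s_hat)[:len(x)]   # reference filtered through s_hat
    e = np.zeros(len(x))
    for n in range(len(x)):
        xbuf = np.roll(xbuf, 1); xbuf[0] = x[n]
        y = w @ xbuf                      # anti-disturbance output
        ybuf = np.roll(ybuf, 1); ybuf[0] = y
        e[n] = d[n] - s @ ybuf            # residual at the error sensor
        fbuf = np.roll(fbuf, 1); fbuf[0] = xf[n]
        w += mu * e[n] * fbuf             # LMS update with the filtered reference
    return e

rng = np.random.default_rng(0)
x = rng.normal(size=4000)                        # reference (assumed measurable)
p = np.array([0.0, 0.5, 0.25])                   # assumed primary path
d = np.convolve(x, p)[:len(x)]                   # disturbance to cancel
s = np.array([1.0, 0.3]); s_hat = s.copy()       # secondary path and its model
e = fxlms(x, d, s, s_hat)
print(np.mean(e[:200] ** 2), "->", np.mean(e[-200:] ** 2))   # error power drops
```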

  5. Cone-beam and fan-beam image reconstruction algorithms based on spherical and circular harmonics

    International Nuclear Information System (INIS)

    Zeng, Gengsheng L; Gullberg, Grant T

    2004-01-01

    A cone-beam image reconstruction algorithm using spherical harmonic expansions is proposed. The reconstruction algorithm is in the form of a summation of inner products of two discrete arrays of spherical harmonic expansion coefficients at each cone-beam point of acquisition. This form is different from the common filtered backprojection algorithm and the direct Fourier reconstruction algorithm. There is no re-sampling of the data, and spherical harmonic expansions are used instead of Fourier expansions. As a special case, a new fan-beam image reconstruction algorithm is also derived in terms of a circular harmonic expansion. Computer simulation results for both cone-beam and fan-beam algorithms are presented for circular planar orbit acquisitions. The algorithms give accurate reconstructions; however, the implementation of the cone-beam reconstruction algorithm is computationally intensive. A relatively efficient algorithm is proposed for reconstructing the central slice of the image when a circular scanning orbit is used

  6. Efficient Rectangular Maximal-Volume Algorithm for Rating Elicitation in Collaborative Filtering

    KAUST Repository

    Fonarev, Alexander

    2017-02-07

Cold start problem in Collaborative Filtering can be solved by asking new users to rate a small seed set of representative items or by asking representative users to rate a new item. The question is how to build a seed set that can give enough preference information for making good recommendations. One of the most successful approaches, called Representative Based Matrix Factorization, is based on Maxvol algorithm. Unfortunately, this approach has one important limitation - a seed set of a particular size requires a rating matrix factorization of fixed rank that should coincide with that size. This is not necessarily optimal in the general case. In the current paper, we introduce a fast algorithm for an analytical generalization of this approach that we call Rectangular Maxvol. It allows the rank of factorization to be lower than the required size of the seed set. Moreover, the paper includes the theoretical analysis of the method's error, the complexity analysis of the existing methods and the comparison to the state-of-the-art approaches.

  7. An Algorithm Approach for the Analysis of Urban Land-Use/Cover: Logic Filters

    Directory of Open Access Journals (Sweden)

    Şinasi Kaya

    2014-11-01

Accurate classification of land-use/cover based on remotely sensed data is important for interpreters who analyze time- or event-based change in certain areas. Any method that gives the user flexibility in area selection greatly simplifies the analysis, since the analyzer may need to work on a specific area of interest instead of dealing with the entire remotely sensed dataset. The objectives of the paper are to develop an automation algorithm using Matlab & Simulink for user-selected areas, to filter V-I-S (Vegetation, Impervious, Soil) components using the algorithm, to analyze the components according to upper and lower threshold values based on each band histogram, and finally to obtain a land-use/cover map combining the V-I-S components. LANDSAT 5TM satellite data covering the Istanbul and Izmit regions are utilized, and the 4, 3, 2 (RGB) band combination is selected to fulfil the aims of the study. These bands are normalized, and the V-I-S components of each band are determined. This Matlab & Simulink methodology is as successful as unsupervised and supervised methods. Applying these methods to qualitative and quantitative assessments of selected urban areas will further provide important spatial information and data, especially to urban planners and decision-makers.

  8. Unscented Kalman Filter Algorithm for WiFi-PDR Integrated Indoor Positioning

    Directory of Open Access Journals (Sweden)

    CHEN GuoLiang

    2015-12-01

Although it has been widely applied, indoor positioning still faces many fundamental technical problems. A novel indoor positioning technology using a smartphone assisted by the widely available and economical signals of WiFi is proposed, together with the principles and characteristics of indoor positioning. Firstly, the system's accuracy is improved by fusing WiFi fingerprinting positioning and PDR (pedestrian dead reckoning) positioning with a UKF (unscented Kalman filter). Secondly, real-time performance is improved by clustering the WiFi fingerprints with the k-means clustering algorithm. An investigation test was conducted in an indoor environment to evaluate its performance on a HUAWEI P6-U06 smartphone. The result shows that compared to the pattern-matching system without clustering, an average reduction of 51% in time cost can be obtained without degrading the positioning accuracy. When the user is walking, the average positioning error of WiFi is 7.76 m and the average positioning error of PDR is 4.57 m. After UKF fusion, the system's average positioning error is down to 1.24 m. This shows that the algorithm greatly improves the system's real-time performance and positioning accuracy.

  9. Prospective implementation of an algorithm for bedside intravascular ultrasound-guided filter placement in critically ill patients.

    Science.gov (United States)

    Killingsworth, Christopher D; Taylor, Steven M; Patterson, Mark A; Weinberg, Jordan A; McGwin, Gerald; Melton, Sherry M; Reiff, Donald A; Kerby, Jeffrey D; Rue, Loring W; Jordan, William D; Passman, Marc A

    2010-05-01

Although contrast venography is the standard imaging method for inferior vena cava (IVC) filter insertion, intravascular ultrasound (IVUS) imaging is a safe and effective option that allows for bedside filter placement and is especially advantageous for immobilized critically ill patients by limiting resource use, risk of transportation, and cost. This study reviewed the effectiveness of a prospectively implemented algorithm for IVUS-guided IVC filter placement in this high-risk population. Current evidence-based guidelines were used to create a clinical decision algorithm for IVUS-guided IVC filter placement in critically ill patients. After a defined lead-in phase to allow dissemination of techniques, the algorithm was prospectively implemented on January 1, 2008. Data were collected for 1 year using accepted reporting standards and a quality assurance review performed based on intent-to-treat at 6, 12, and 18 months. As defined in the prospectively implemented algorithm, 109 patients met criteria for IVUS-directed bedside IVC filter placement. Technical feasibility was 98.1%. Only 2 patients had inadequate IVUS visualization for bedside filter placement and required subsequent placement in the endovascular suite. Technical success, defined as proper deployment in an infrarenal position, was achieved in 104 of the remaining 107 patients (97.2%). The filter was permanent in 21 (19.6%) and retrievable in 86 (80.3%). The single-puncture technique was used in 101 (94.4%), with additional dual access required in 6 (5.6%). Periprocedural complications were rare but included malpositioning requiring retrieval and repositioning in three patients, filter tilt ≥15 degrees in two, and arteriovenous fistula in one. The 30-day mortality rate for the bedside group was 5.5%, with no filter-related deaths. Successful placement of IVC filters using IVUS-guided imaging at the bedside in critically ill patients can be established through an evidence-based prospectively

  10. Automating "Word of Mouth" to Recommend Classes to Students: An Application of Social Information Filtering Algorithms

    Science.gov (United States)

    Booker, Queen Esther

    2009-01-01

    An approach used to tackle the problem of helping online students find the classes they want and need is a filtering technique called "social information filtering," a general approach to personalized information filtering. Social information filtering essentially automates the process of "word-of-mouth" recommendations: items are recommended to a…

  11. Hyperspectral chemical plume detection algorithms based on multidimensional iterative filtering decomposition.

    Science.gov (United States)

    Cicone, A; Liu, J; Zhou, H

    2016-04-13

    Chemicals released in the air can be extremely dangerous for human beings and the environment. Hyperspectral images can be used to identify chemical plumes, however the task can be extremely challenging. Assuming we know a priori that some chemical plume, with a known frequency spectrum, has been photographed using a hyperspectral sensor, we can use standard techniques such as the so-called matched filter or adaptive cosine estimator, plus a properly chosen threshold value, to identify the position of the chemical plume. However, due to noise and inadequate sensing, the accurate identification of chemical pixels is not easy even in this apparently simple situation. In this paper, we present a post-processing tool that, in a completely adaptive and data-driven fashion, allows us to improve the performance of any classification methods in identifying the boundaries of a plume. This is done using the multidimensional iterative filtering (MIF) algorithm (Cicone et al. 2014 (http://arxiv.org/abs/1411.6051); Cicone & Zhou 2015 (http://arxiv.org/abs/1507.07173)), which is a non-stationary signal decomposition method like the pioneering empirical mode decomposition method (Huang et al. 1998 Proc. R. Soc. Lond. A 454, 903. (doi:10.1098/rspa.1998.0193)). Moreover, based on the MIF technique, we propose also a pre-processing method that allows us to decorrelate and mean-centre a hyperspectral dataset. The cosine similarity measure, which often fails in practice, appears to become a successful and outperforming classifier when equipped with such a pre-processing method. We show some examples of the proposed methods when applied to real-life problems. © 2016 The Author(s).
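
    The matched filter mentioned above as the baseline detector has a standard closed form. A sketch on synthetic data follows; thresholding the returned scores yields the raw plume mask that the MIF-based post-processing would then refine. The covariance regularization constant and the synthetic plume strength are assumptions for illustration.

```python
import numpy as np

def matched_filter(cube, target):
    """Classical matched filter: score each pixel spectrum against a known
    target signature, whitened by the background covariance.

    cube: (num_pixels, num_bands); target: (num_bands,)."""
    mu = cube.mean(axis=0)
    X = cube - mu                                               # mean-centred data
    C = np.cov(X, rowvar=False) + 1e-6 * np.eye(cube.shape[1])  # regularized cov.
    w = np.linalg.solve(C, target - mu)
    return X @ w / ((target - mu) @ w)                          # ~1 on target pixels

# Synthetic cube: background noise plus a faint plume in the first 100 pixels
rng = np.random.default_rng(0)
bands = 50
sig = rng.random(bands)                       # assumed chemical signature shape
cube = rng.normal(0.5, 0.1, (5000, bands))
cube[:100] += 0.05 * sig                      # weak target presence
target = np.full(bands, 0.5) + 0.05 * sig
scores = matched_filter(cube, target)
print(scores[:100].mean(), scores[100:].mean())   # plume pixels score higher
```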

  12. A rapid parallelization of cone-beam projection and back-projection operator based on texture fetching interpolation

    Science.gov (United States)

    Xie, Lizhe; Hu, Yining; Chen, Yang; Shi, Luyao

    2015-03-01

Projection and back-projection are the most computationally consuming parts of Computed Tomography (CT) reconstruction. Parallelization strategies using GPU computing techniques have been introduced. In this paper we present a new parallelization scheme for both projection and back-projection. The proposed method is based on the CUDA technology provided by NVIDIA Corporation. Instead of building a complex model, we aim at optimizing the existing algorithm and making it suitable for CUDA implementation so as to gain fast computation speed. Besides making use of the texture fetching operation, which speeds up interpolation, we fix the number of samples in the projection computation to ensure the synchronization of blocks and threads, thus preventing the latency caused by inconsistent computational complexity. Experimental results demonstrate the computational efficiency and imaging quality of the proposed method.

  13. A fully three-dimensional reconstruction algorithm with the nonstationary filter for improved single-orbit cone beam SPECT

    International Nuclear Information System (INIS)

    Cao, Z.J.; Tsui, B.M.

    1993-01-01

Conventional single-orbit cone beam tomography presents special problems, including incomplete sampling and an inadequate three-dimensional (3D) reconstruction algorithm. The commonly used Feldkamp reconstruction algorithm simply extends the two-dimensional (2D) fan beam algorithm to the 3D cone beam geometry. A truly 3D reconstruction formulation has been derived for single-orbit cone beam SPECT based on the 3D Fourier slice theorem. In the formulation, a nonstationary filter which depends on the distance from the central plane of the cone beam was derived. The filter is applied to the 2D projection data in directions along and normal to the axis of rotation. The 3D reconstruction algorithm with the nonstationary filter was evaluated using both computer simulation and experimental measurements. Significant improvement in image quality was demonstrated in terms of decreased artifacts and distortions in cone beam reconstructed images. However, compared with the Feldkamp algorithm, a five-fold increase in processing time is required. Further improvement in image quality requires complete sampling in frequency space.

  14. Estimation Algorithm of Machine Operational Intention by Bayes Filtering with Self-Organizing Map

    Directory of Open Access Journals (Sweden)

    Satoshi Suzuki

    2012-01-01

We present an intention estimator algorithm that can deal with dynamic changes of the environment in a man-machine system and can be utilized for an autarkical human-assisting system. In the algorithm, the state transition relation of intentions is formed using a self-organizing map (SOM) from the measured data of the operation and environmental variables with the reference intention sequence. The operational intention modes are identified by stochastic computation using a Bayesian particle filter with the trained SOM. This method makes it possible to omit the troublesome process of specifying which types of information should be used to build the estimator. Applying the proposed method to a remote operation task, the estimator's behavior was analyzed, the pros and cons of the method were investigated, and ways for improvement were discussed. As a result, it was confirmed that the estimator can identify the intention modes at 44–94 percent concordance ratios against normal intention modes whose periods could be found by about 70 percent of the human analysts. On the other hand, it was found that the human analysts' discrimination, which was used as canonical data for validation, differed depending on the intention modes. Specifically, an investigation of intention patterns discriminated by eight analysts showed that the estimator could not identify the same modes that human analysts could not discriminate. And, in the analysis of multiple different intentions, it was found that the estimator could identify the same types of intention modes as human-discriminated ones in 62–73 percent of cases when the first and second dominant intention modes were considered.

  15. A Framework of Finite-model Kalman Filter with Case Study: MVDP-FMKF Algorithm

    Institute of Scientific and Technical Information of China (English)

    FENG Bo; MA Hong-Bin; FU Meng-Yin; WANG Shun-Ting

    2013-01-01

Kalman filtering techniques have been widely used in many applications; however, standard Kalman filters for linear Gaussian systems usually cannot work well or even diverge in the presence of large model uncertainty. In practical applications, it is expensive to run a large number of high-cost experiments, or even impossible to obtain an exact system model. Motivated by our previous pioneering work on finite-model adaptive control, a framework of finite-model Kalman filtering is introduced in this paper. This framework presumes that large model uncertainty may be restricted by a finite set of known models which can be very different from each other. Moreover, the number of known models in the set can be flexibly chosen so that the uncertain model may always be approximated by one of the known models; in other words, the large model uncertainty is "covered" by the "convex hull" of the known models. Within the presented framework, according to the idea of adaptive switching via the minimizing vector distance principle, a simple finite-model Kalman filter, MVDP-FMKF, is mathematically formulated and illustrated by extensive simulations. An experiment on MEMS gyroscope drift has verified the effectiveness of the proposed algorithm, indicating that the mechanism of the finite-model Kalman filter is useful and efficient in practical applications of Kalman filters, especially in inertial navigation systems.
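
    A toy scalar sketch of the switching mechanism follows: a bank of Kalman filters, one per known model, with the active filter chosen by the minimizing-vector-distance principle at each step. Keeping the whole bank synchronized on every measurement is one simple design choice here, not necessarily the paper's; the models and noise levels are arbitrary examples.

```python
import numpy as np

class KF1D:
    """Scalar Kalman filter for x' = a*x + w, y = x + v."""
    def __init__(self, a, q=1e-3, r=1e-2):
        self.a, self.q, self.r = a, q, r
        self.x, self.p = 0.0, 1.0
    def predict(self):
        return self.a * self.x
    def update(self, y):
        xp = self.a * self.x
        pp = self.a * self.p * self.a + self.q
        k = pp / (pp + self.r)
        self.x = xp + k * (y - xp)
        self.p = (1 - k) * pp

def mvdp_fmkf(ys, models):
    """Finite-model KF sketch: at each step, switch to the filter whose
    one-step prediction is nearest to the measurement."""
    bank = [KF1D(a) for a in models]
    est = []
    for y in ys:
        best = min(bank, key=lambda f: abs(y - f.predict()))
        for f in bank:
            f.update(y)               # keep the whole bank in sync
        est.append(best.x)
    return np.array(est)

# True system a = 0.95 lies between the two known models {0.8, 1.0}
rng = np.random.default_rng(0)
x, ys = 1.0, []
for _ in range(200):
    x = 0.95 * x + rng.normal(0, 0.03)
    ys.append(x + rng.normal(0, 0.1))
est = mvdp_fmkf(np.array(ys), models=[0.8, 1.0])
```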

  16. Stability Analysis of a Matrix Converter Drive: Effects of Input Filter Type and the Voltage Fed to the Modulation Algorithm

    Directory of Open Access Journals (Sweden)

    M. Hosseini Abardeh

    2015-03-01

Matrix converter instability can cause substantial distortion in the input currents and voltages, which leads to malfunction of the converter. This paper deals with the effects of the input filter type, the grid inductance, the voltage fed to the modulation algorithm, and the synchronous rotating digital filter time constant on the stability and performance of the matrix converter. The studies are carried out using eigenvalues of the linearized system and simulations. The two most common schemes for the input filter (LC and RLC) are analyzed. It is shown that by a proper choice of the voltage input to the modulation algorithm, the structure of the input filter, and its parameters, the need for the digital filter for ensuring stability can be eliminated. Moreover, a detailed model of the system considering switching effects is simulated and the results are used to validate the analytical outcomes. The agreement between simulation and analytical results implies that the system performance is not deteriorated by neglecting the nonlinear switching behavior of the converter. Hence, the eigenvalue analysis of the linearized system can be a proper indicator of system stability.

  17. Filtered Backprojection using Algebraic Filters; Application to Biomedical Micro-CT Data

    NARCIS (Netherlands)

    L. Plantagie (Linda); W. van Aarle (Wim); J. Sijbers (Jan); K.J. Batenburg (Joost)

    2015-01-01

For computerized tomography (CT) imaging in (bio)medical applications, radiation dose reduction is extremely important. This can be achieved simply by reducing the number of projection images taken. In order to obtain accurate reconstructions from few projections, however, common

  18. A new parallel algorithm and its simulation on hypercube simulator for low pass digital image filtering using systolic array

    International Nuclear Information System (INIS)

    Al-Hallaq, A.; Amin, S.

    1998-01-01

This paper introduces a new parallel algorithm and its simulation on a hypercube simulator for low-pass digital image filtering using a systolic array. This new algorithm is faster than the old one (Amin, 1988), due to the fact that the old algorithm carries out the addition operations in a sequential mode. In our new design these addition operations are divided into two groups, which can be performed in parallel: one group is performed on one half of the systolic array and the other on the second half, that is, by folding. This parallelism reduces the time required for the whole process by almost a quarter of the time of the old algorithm.(authors). 18 refs., 3 figs

  19. Emission computerized axial tomography from multiple gamma-camera views using frequency filtering.

    Science.gov (United States)

    Pelletier, J L; Milan, C; Touzery, C; Coitoux, P; Gailliard, P; Budinger, T F

    1980-01-01

    Emission computerized axial tomography is achievable in any nuclear medicine department from multiple gamma camera views. Data are collected by rotating the patient in front of the camera. A simple fast algorithm is implemented, known as the convolution technique: first the projection data are Fourier transformed and then an original filter designed for optimizing resolution and noise suppression is applied; finally the inverse transform of the latter operation is back-projected. This program, which can also take into account the attenuation for single photon events, was executed with good results on phantoms and patients. We think that it can be easily implemented for specific diagnostic problems.
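
    The convolution technique summarized above is the classical filtered backprojection recipe. The sketch below uses a plain Ram-Lak ramp where the paper designs a custom resolution/noise-suppression filter, and it omits attenuation correction; the geometry and scaling are simplified assumptions.

```python
import numpy as np

def fbp(sinogram, angles_deg):
    """Filtered backprojection via the convolution technique: FFT each
    projection, apply a ramp-style filter, inverse FFT, back-project."""
    n_angles, n_det = sinogram.shape
    ramp = np.abs(np.fft.fftfreq(n_det))                       # Ram-Lak filter
    filtered = np.real(np.fft.ifft(np.fft.fft(sinogram, axis=1) * ramp, axis=1))

    n = n_det                                                  # square image grid
    yy, xx = np.mgrid[:n, :n] - (n - 1) / 2.0
    recon = np.zeros((n, n))
    for proj, theta in zip(filtered, np.deg2rad(angles_deg)):
        t = xx * np.cos(theta) + yy * np.sin(theta) + (n - 1) / 2.0
        t0 = np.floor(t).astype(int)
        inside = (t0 >= 0) & (t0 < n - 1)                      # rays on the detector
        t0c = np.clip(t0, 0, n - 2)
        frac = t - t0c
        recon += np.where(inside, (1 - frac) * proj[t0c] + frac * proj[t0c + 1], 0.0)
    return recon * np.pi / n_angles                            # angular step weight

# A centred point source: every projection is a spike at the middle detector bin
angles = np.arange(0.0, 180.0, 1.0)
sino = np.zeros((len(angles), 129))
sino[:, 64] = 1.0
img = fbp(sino, angles)
print(np.unravel_index(img.argmax(), img.shape))               # peak near (64, 64)
```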

  20. Development of GPS Receiver Kalman Filter Algorithms for Stationary, Low-Dynamics, and High-Dynamics Applications

    Science.gov (United States)

    2016-06-01

Filter Algorithms for Stationary, Low-Dynamics, and High-Dynamics Applications Executive Summary The Global Positioning System (GPS) is the primary...software that may need to be developed for performance prediction of current or future systems that incorporate GPS. The ultimate aim is to help inform...Defence Science and Technology Organisation in 1986. His major areas of work were adaptive tracking, signal processing, and radar systems engineering

  1. Impact of the genfit2 Kalman-filter-based algorithms on physics simulations performed with PandaRoot

    Energy Technology Data Exchange (ETDEWEB)

    Prencipe, Elisabetta; Ritman, James [Forschungszentrum Juelich, IKP1, Juelich (Germany); Collaboration: PANDA-Collaboration

    2016-07-01

PANDA is a planned experiment at FAIR (Darmstadt) with a cooled antiproton beam in the range [1.5; 15] GeV/c, allowing a wide physics program in nuclear and particle physics. It is the only experiment worldwide that combines a solenoid field (B=2 T) and a dipole field (B=2 Tm) in an experiment with a fixed-target topology in that energy regime. The tracking system of PANDA involves a high-performance silicon vertex detector, a GEM detector, a straw-tube central tracker, a forward tracking system, and a luminosity monitor. The offline tracking algorithm is developed within the PandaRoot framework, which is part of the FAIRRoot project. The algorithm presented here is based on a tool containing the Kalman filter equations and a deterministic annealing filter (genfit). Kalman-filter-based algorithms have a wide range of applications; among those in particle physics, they can perform extrapolations of track parameters and covariance matrices. The impact on physics simulations performed for the PANDA experiment with the PandaRoot framework is shown for the first time: an improvement of about a factor of 2 is found for those channels where good low-momentum tracking is required (p_T < 400 MeV/c), i.e., D meson and Λ reconstruction.

  2. Integrated Navigation System Design for Micro Planetary Rovers: Comparison of Absolute Heading Estimation Algorithms and Nonlinear Filtering

    Science.gov (United States)

    Ilyas, Muhammad; Hong, Beomjin; Cho, Kuk; Baeg, Seung-Ho; Park, Sangdeok

    2016-01-01

    This paper provides algorithms to fuse relative and absolute microelectromechanical systems (MEMS) navigation sensors, suitable for micro planetary rovers, to provide a more accurate estimation of navigation information, specifically, attitude and position. Planetary rovers have extremely slow speed (~1 cm/s) and lack conventional navigation sensors/systems, hence the general methods of terrestrial navigation may not be applicable to these applications. While relative attitude and position can be tracked in a way similar to those for ground robots, absolute navigation information is hard to achieve on a remote celestial body, like Moon or Mars, in contrast to terrestrial applications. In this study, two absolute attitude estimation algorithms were developed and compared for accuracy and robustness. The estimated absolute attitude was fused with the relative attitude sensors in a framework of nonlinear filters. The nonlinear Extended Kalman filter (EKF) and Unscented Kalman filter (UKF) were compared in pursuit of better accuracy and reliability in this nonlinear estimation problem, using only on-board low cost MEMS sensors. Experimental results confirmed the viability of the proposed algorithms and the sensor suite, for low cost and low weight micro planetary rovers. It is demonstrated that integrating the relative and absolute navigation MEMS sensors reduces the navigation errors to the desired level. PMID:27223293

  3. Information Recovery Algorithm for Ground Objects in Thin Cloud Images by Fusing Guide Filter and Transfer Learning

    Directory of Open Access Journals (Sweden)

    HU Gensheng

    2018-03-01

Ground object information in remote sensing images covered with thin clouds is obscured. An information recovery algorithm for ground objects in thin cloud images is proposed by fusing guided filtering and transfer learning. Firstly, multi-resolution decomposition of the thin cloud target images and the cloud-free guidance images is performed using the multi-directional nonsubsampled dual-tree complex wavelet transform. Then the decomposed low-frequency subbands are processed using a support vector guided filter and transfer learning, respectively. The decomposed high-frequency subbands are enhanced using a modified Laine enhancement function. The low-frequency subbands output by the guided filter and those predicted by the transfer learning model are fused by selection and weighting based on regional energy. Finally, the enhanced high-frequency subbands and the fused low-frequency subbands are reconstructed using the inverse multi-directional nonsubsampled dual-tree complex wavelet transform to obtain the ground object information recovery images. Experimental results on Landsat-8 OLI multispectral images show that the support vector guided filter can effectively preserve the detail information of the target images, that domain-adaptive transfer learning can effectively extend the range of available multi-source and multi-temporal remote sensing images, and that good ground object information recovery is obtained by fusing guided filtering and transfer learning to remove thin clouds from the remote sensing images.

  4. Fault detection and isolation in GPS receiver autonomous integrity monitoring based on chaos particle swarm optimization-particle filter algorithm

    Science.gov (United States)

    Wang, Ershen; Jia, Chaoying; Tong, Gang; Qu, Pingping; Lan, Xiaoyu; Pang, Tao

    2018-03-01

    The receiver autonomous integrity monitoring (RAIM) is one of the most important parts in an avionic navigation system. Two problems need to be addressed to improve this system, namely, the degeneracy phenomenon and lack of samples for the standard particle filter (PF). However, the number of samples cannot adequately express the real distribution of the probability density function (i.e., sample impoverishment). This study presents a GPS receiver autonomous integrity monitoring (RAIM) method based on a chaos particle swarm optimization particle filter (CPSO-PF) algorithm with a log likelihood ratio. The chaos sequence generates a set of chaotic variables, which are mapped to the interval of optimization variables to improve particle quality. This chaos perturbation overcomes the potential for the search to become trapped in a local optimum in the particle swarm optimization (PSO) algorithm. Test statistics are configured based on a likelihood ratio, and satellite fault detection is then conducted by checking the consistency between the state estimate of the main PF and those of the auxiliary PFs. Based on GPS data, the experimental results demonstrate that the proposed algorithm can effectively detect and isolate satellite faults under conditions of non-Gaussian measurement noise. Moreover, the performance of the proposed novel method is better than that of RAIM based on the PF or PSO-PF algorithm.

  5. Comparison of Deconvolution Filters for Photoacoustic Tomography.

    Directory of Open Access Journals (Sweden)

    Dominique Van de Sompel

In this work, we compare the merits of three temporal data deconvolution methods for use in the filtered backprojection algorithm for photoacoustic tomography (PAT). We evaluate the standard Fourier division technique, the Wiener deconvolution filter, and a Tikhonov L-2 norm regularized matrix inversion method. Our experiments were carried out on subjects of various appearances, namely a pencil lead, two man-made phantoms, an in vivo subcutaneous mouse tumor model, and a perfused and excised mouse brain. All subjects were scanned using an imaging system with a rotatable hemispherical bowl, into which 128 ultrasound transducer elements were embedded in a spiral pattern. We characterized the frequency response of each deconvolution method, compared the final image quality achieved by each deconvolution technique, and evaluated each method's robustness to noise. The frequency response was quantified by measuring the accuracy with which each filter recovered the ideal flat frequency spectrum of an experimentally measured impulse response. Image quality under the various scenarios was quantified by computing noise versus resolution curves for a point source phantom, as well as the full width at half maximum (FWHM) and contrast-to-noise ratio (CNR) of selected image features such as dots and linear structures in additional imaging subjects. It was found that the Tikhonov filter yielded the most accurate balance of lower and higher frequency content (as measured by comparing the spectra of deconvolved impulse response signals to the ideal flat frequency spectrum), achieved a competitive image resolution and contrast-to-noise ratio, and yielded the greatest robustness to noise. While the Wiener filter achieved a similar image resolution, it tended to underrepresent the lower frequency content of the deconvolved signals, and hence of the reconstructed images after backprojection. In addition, its robustness to noise was poorer than that of the Tikhonov
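
    For a circulant blur model, all three deconvolution filters compared in this record reduce to closed-form expressions in the frequency domain. Below is a sketch on a synthetic spike train; the impulse response, noise level, and regularization constant are assumptions, and with a flat noise-to-signal ratio the Wiener and Tikhonov forms coincide.

```python
import numpy as np

def deconvolve(y, h, method, k=1e-2):
    """Frequency-domain deconvolution of measurement y by impulse response h.

    'fourier'  -- plain spectral division; amplifies noise where |H| is small
    'wiener'   -- division weighted by a noise-to-signal ratio (flat here)
    'tikhonov' -- L2-regularized inverse; for a circulant model the matrix
                  form (H'H + kI)^-1 H'y reduces to this closed expression
    """
    H = np.fft.fft(h, n=len(y))
    Y = np.fft.fft(y)
    if method == "fourier":
        X = Y / H
    elif method == "wiener":
        X = Y * np.conj(H) / (np.abs(H) ** 2 + k)
    else:  # tikhonov
        X = np.conj(H) * Y / (np.abs(H) ** 2 + k)
    return np.real(np.fft.ifft(X))

# Circularly blur a spike train with an assumed transducer response, add noise
rng = np.random.default_rng(0)
x = np.zeros(256)
x[[60, 130, 135]] = 1.0
h = np.exp(-0.5 * ((np.arange(32) - 8) / 2.0) ** 2)            # assumed response
y = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(h, 256))) + rng.normal(0, 0.01, 256)
for m in ("fourier", "wiener", "tikhonov"):
    print(m, np.linalg.norm(deconvolve(y, h, m) - x))          # division blows up
```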

  6. Attitude Determination Method by Fusing Single Antenna GPS and Low Cost MEMS Sensors Using Intelligent Kalman Filter Algorithm

    Directory of Open Access Journals (Sweden)

    Lei Wang

    2017-01-01

To meet the cost and size demands of micro navigation systems, a combined attitude determination approach with a sensor fusion algorithm and an intelligent Kalman filter (IKF) on a low-cost Micro-Electro-Mechanical System (MEMS) gyroscope, accelerometer, and magnetometer and a single-antenna Global Positioning System (GPS) is proposed. An effective calibration method is performed to compensate for the effect of errors in the low-cost MEMS Inertial Measurement Unit (IMU). Different control strategies fusing the MEMS multisensors are designed. The yaw angle is estimated accurately by an algorithm fusing gyroscope, accelerometer, and magnetometer data under GPS failure and in situations where sideslip information is unavailable. To achieve robust control under uncertain noise statistics, the high gain scale of the IKF is adjusted by a fuzzy controller in the transition process and steady state to achieve faster convergence and accurate estimation. Experiments comparing different MEMS sensors and fusion algorithms are implemented to verify the validity of the proposed approach.

  7. Exergy Analysis and Optimization of an Alpha Type Stirling Engine Using the Implicit Filtering Algorithm

    Directory of Open Access Journals (Sweden)

    James A. Wills

    2017-12-01

This paper presents the exergy analysis and optimization of the Stirling engine, which has enormous potential for use in the renewable energy industry as it is quiet, efficient, and can operate with a variety of different heat sources and, therefore, has multi-fuel capabilities. This work aims to present a method that can be used by a Stirling engine designer to quickly and efficiently find near-optimal or optimal Stirling engine geometry and operating conditions. The model applies the exergy analysis methodology to the ideal-adiabatic Stirling engine model. In the past, this analysis technique has only been applied to highly idealized Stirling cycle models and this study shows its use in the realm of Stirling cycle optimization when applied to a more complex model. The implicit filtering optimization algorithm is used to optimize the engine as it quickly and efficiently computes the optimal geometry and operating frequency that gives maximum net-work output at a fixed energy input. A numerical example of a 1,000 cm3 engine is presented, where the geometry and operating frequency of the engine are optimized for four different regenerator mesh types, varying heater inlet temperature and a fixed energy input of 15 kW. The WN200 mesh is seen to perform best of the four mesh types analyzed, giving the greatest net-work output and efficiency. The optimal values of several different engine parameters are presented in the work. It is shown that the net-work output and efficiency increase with increasing heater inlet temperature. The optimal dead-volume ratio, swept volume ratio, operating frequency, and phase angle are all shown to decrease with increasing heater inlet temperature. In terms of the heat exchanger geometry, the heater and cooler tubes are seen to decrease in size and the cooler and heater effectiveness is seen to decrease with increasing heater temperature, whereas the regenerator is seen to increase in size and effectiveness. In

  8. A Kalman filter-based short baseline RTK algorithm for single-frequency combination of GPS and BDS.

    Science.gov (United States)

    Zhao, Sihao; Cui, Xiaowei; Guan, Feng; Lu, Mingquan

    2014-08-20

    The emerging Global Navigation Satellite Systems (GNSS) including the BeiDou Navigation Satellite System (BDS) offer more visible satellites for positioning users. To employ those new satellites in a real-time kinematic (RTK) algorithm to enhance positioning precision and availability, a data processing model for the dual constellation of GPS and BDS is proposed and analyzed. A Kalman filter-based algorithm is developed to estimate the float ambiguities for short baseline scenarios. The entire work process of the high-precision algorithm based on the proposed model is deeply investigated in detail. The model is validated with real GPS and BDS data recorded from one zero and two short baseline experiments. Results show that the proposed algorithm can generate fixed baseline output with the same precision level as that of either a single GPS or BDS RTK algorithm. The significantly improved fixed rate and time to first fix of the proposed method demonstrates a better availability and effectiveness on processing multi-GNSSs.

  9. A Kalman Filter-Based Short Baseline RTK Algorithm for Single-Frequency Combination of GPS and BDS

    Directory of Open Access Journals (Sweden)

    Sihao Zhao

    2014-08-01

The emerging Global Navigation Satellite Systems (GNSS) including the BeiDou Navigation Satellite System (BDS) offer more visible satellites for positioning users. To employ those new satellites in a real-time kinematic (RTK) algorithm to enhance positioning precision and availability, a data processing model for the dual constellation of GPS and BDS is proposed and analyzed. A Kalman filter-based algorithm is developed to estimate the float ambiguities for short baseline scenarios. The entire work process of the high-precision algorithm based on the proposed model is deeply investigated in detail. The model is validated with real GPS and BDS data recorded from one zero and two short baseline experiments. Results show that the proposed algorithm can generate fixed baseline output with the same precision level as that of either a single GPS or BDS RTK algorithm. The significantly improved fixed rate and time to first fix of the proposed method demonstrates a better availability and effectiveness on processing multi-GNSSs.

  10. Delay Estimator and Improved Proportionate Multi-Delay Adaptive Filtering Algorithm

    Directory of Open Access Journals (Sweden)

    E. Verteletskaya

    2012-04-01

This paper pertains to speech and acoustic signal processing, and particularly to the determination of echo path delay and the operation of echo cancellers. To cancel long echoes, the number of weights in a conventional adaptive filter must be large. The length of the adaptive filter directly affects both the degree of accuracy and the convergence speed of the adaptation process. We present a new adaptive structure which is capable of dealing with multiple dispersive echo paths. An adaptive filter according to the present invention includes means for storing an impulse response in a memory, the impulse response being indicative of the characteristics of a transmission line. It also includes a delay estimator for detecting ranges of samples within the impulse response having a relatively large distribution of echo energy, these ranges of samples being indicative of echoes on the transmission line. The adaptive filter has a plurality of weighted taps, each with an associated tap weight value. A tap allocation/control circuit establishes the tap weight values in response to the detecting means so that only taps within the regions of relatively large distributions of echo energy are turned on. Thus, the convergence speed and the estimation accuracy of the adaptation process can be improved.
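
    The delay-estimator idea, finding the short dispersive regions of the echo path that carry most of the energy and adapting only the taps inside them, can be sketched as a windowed energy search. The window size, path length, and echo locations below are arbitrary examples, not values from the record.

```python
import numpy as np

def active_tap_ranges(impulse_response, win=16, keep=2):
    """Locate the `keep` windows of the echo-path impulse response with the
    largest energy; only taps inside these windows would be adapted."""
    h = np.asarray(impulse_response)
    n_win = len(h) // win
    energy = (h[:n_win * win].reshape(n_win, win) ** 2).sum(axis=1)
    top = np.sort(np.argsort(energy)[-keep:])          # highest-energy windows
    return [(int(i * win), int((i + 1) * win)) for i in top]

# Two dispersive echoes at delays 100 and 400 in a 512-tap path
h = np.zeros(512)
h[100:106] = [0.5, 0.3, -0.2, 0.1, 0.05, 0.02]
h[400:404] = [0.2, -0.15, 0.08, 0.03]
print(active_tap_ranges(h))                            # -> [(96, 112), (400, 416)]
```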

  11. Exact fan-beam image reconstruction algorithm for truncated projection data acquired from an asymmetric half-size detector

    International Nuclear Information System (INIS)

    Leng Shuai; Zhuang Tingliang; Nett, Brian E; Chen Guanghong

    2005-01-01

In this paper, we present a new algorithm designed for a specific data truncation problem in fan-beam CT. We consider a scanning configuration in which the fan-beam projection data are acquired from an asymmetrically positioned half-sized detector. Namely, the asymmetric detector only covers one half of the scanning field of view. Thus, the acquired fan-beam projection data are truncated at every view angle. If an explicit data rebinning process is not invoked, this data acquisition configuration will wreak havoc on many known fan-beam image reconstruction schemes including the standard filtered backprojection (FBP) algorithm and the super-short-scan FBP reconstruction algorithms. However, we demonstrate that a recently developed fan-beam image reconstruction algorithm which reconstructs an image via filtering a backprojection image of differentiated projection data (FBPD) survives the above fan-beam data truncation problem. Namely, we may exactly reconstruct the whole image object using the truncated data acquired in a full scan mode (2π angular range). We may also exactly reconstruct a small region of interest (ROI) using the truncated projection data acquired in a short-scan mode (less than 2π angular range). The most important characteristic of the proposed reconstruction scheme is that an explicit data rebinning process is not introduced. Numerical simulations were conducted to validate the new reconstruction algorithm

  12. Efficient Rectangular Maximal-Volume Algorithm for Rating Elicitation in Collaborative Filtering

    KAUST Repository

    Fonarev, Alexander; Mikhalev, Alexander; Serdyukov, Pavel; Gusev, Gleb; Oseledets, Ivan

    2017-01-01

    preference information for making good recommendations. One of the most successful approaches, called Representative Based Matrix Factorization, is based on Maxvol algorithm. Unfortunately, this approach has one important limitation - a seed set of a

  13. A new approximate algorithm for image reconstruction in cone-beam spiral CT at small cone-angles

    International Nuclear Information System (INIS)

    Schaller, S.; Flohr, T.; Steffen, P.

    1996-01-01

    This paper presents a new approximate algorithm for image reconstruction with cone-beam spiral CT data at relatively small cone-angles. Based on the algorithm of Wang et al., our method combines a special complementary interpolation with filtered backprojection. The presented algorithm has three main advantages over Wang's algorithm: (1) It overcomes the pitch limitation of Wang's algorithm. (2) It significantly improves z-resolution when suitable sampling schemes are applied. (3) It avoids the waste of applied radiation dose inherent to Wang's algorithm. Usage of the total applied dose is an important requirement in medical imaging. Our method has been implemented on a standard workstation. Reconstructions of computer-simulated data of different phantoms, assuming sampling conditions and image quality requirements typical to medical CT, show encouraging results

  14. WE-G-18A-08: Axial Cone Beam DBPF Reconstruction with Three-Dimensional Weighting and Butterfly Filtering

    Energy Technology Data Exchange (ETDEWEB)

Tang, S; Wang, W [School of Automation, Xi'an University of Post and Telecommunication, Xi'an, Shaanxi (China); Tang, X [Emory University School of Medicine, Atlanta, GA (United States)

    2014-06-15

Purpose: The algorithm of differentiated backprojection followed by Hilbert filtering (DBPF), whose major benefit is in dealing with data truncation for ROI reconstruction, was originally derived for image reconstruction from parallel- or fan-beam data. To extend its application to axial CB scans, we proposed the integration of the DBPF algorithm with 3-D weighting. In this work, we further propose the incorporation of Butterfly filtering into the 3-D weighted axial CB-DBPF algorithm and conduct an evaluation to verify its performance. Methods: Given an axial scan, tomographic images are reconstructed by the DBPF algorithm with 3-D weighting, in which streak artifacts exist along the direction of Hilbert filtering. Recognizing this orientation-specific behavior, a pair of orthogonal Butterfly filters is applied to the images reconstructed with horizontal and vertical Hilbert filtering, correspondingly. In addition, the Butterfly filtering can also be utilized for streak artifact suppression in scenarios wherein only partial scan data with an angular range as small as 270° are available. Results: Preliminary data show that, with the correspondingly applied Butterfly filtering, the streak artifacts existing in the images reconstructed by the 3-D weighted DBPF algorithm can be suppressed to an unnoticeable level. Moreover, the Butterfly filtering also works in partial scan scenarios, though the 3-D weighting scheme may have to be dropped because insufficient projection data are available. Conclusion: As an algorithmic step, the incorporation of Butterfly filtering enables the DBPF algorithm for CB image reconstruction from data acquired along either a full or partial axial scan.

  15. WE-G-18A-08: Axial Cone Beam DBPF Reconstruction with Three-Dimensional Weighting and Butterfly Filtering

    International Nuclear Information System (INIS)

    Tang, S; Wang, W; Tang, X

    2014-01-01

Purpose: The algorithm of differentiated backprojection followed by Hilbert filtering (DBPF), whose major benefit is in dealing with data truncation for ROI reconstruction, was originally derived for image reconstruction from parallel- or fan-beam data. To extend its application to axial CB scans, we proposed the integration of the DBPF algorithm with 3-D weighting. In this work, we further propose the incorporation of Butterfly filtering into the 3-D weighted axial CB-DBPF algorithm and conduct an evaluation to verify its performance. Methods: Given an axial scan, tomographic images are reconstructed by the DBPF algorithm with 3-D weighting, in which streak artifacts exist along the direction of Hilbert filtering. Recognizing this orientation-specific behavior, a pair of orthogonal Butterfly filters is applied to the images reconstructed with horizontal and vertical Hilbert filtering, correspondingly. In addition, the Butterfly filtering can also be utilized for streak artifact suppression in scenarios wherein only partial scan data with an angular range as small as 270° are available. Results: Preliminary data show that, with the correspondingly applied Butterfly filtering, the streak artifacts existing in the images reconstructed by the 3-D weighted DBPF algorithm can be suppressed to an unnoticeable level. Moreover, the Butterfly filtering also works in partial scan scenarios, though the 3-D weighting scheme may have to be dropped because insufficient projection data are available. Conclusion: As an algorithmic step, the incorporation of Butterfly filtering enables the DBPF algorithm for CB image reconstruction from data acquired along either a full or partial axial scan.

  16. Integrated WiFi/PDR/Smartphone Using an Adaptive System Noise Extended Kalman Filter Algorithm for Indoor Localization

    Directory of Open Access Journals (Sweden)

    Xin Li

    2016-02-01

    Full Text Available Wireless signal strength is susceptible to interference, jumping, and instability, phenomena that often appear in positioning results based on Wi-Fi field strength fingerprint database technology for indoor positioning. Therefore, a Wi-Fi and PDR (pedestrian dead reckoning) real-time fusion scheme is proposed in this paper to perform the fusing calculation by adaptively determining the dynamic noise of the filtering system according to pedestrian movement (straight or turning), which can effectively restrain the jumping or accumulation phenomena of wireless positioning and the PDR error accumulation problem. Wi-Fi fingerprint matching typically imposes a high computational burden; to reduce the computational complexity of this step, the affinity propagation clustering algorithm is adopted to cluster the fingerprint database and integrate the information of the position domain and signal domain of the respective points. An experiment performed in a fourth-floor corridor at the School of Environment and Spatial Informatics, China University of Mining and Technology, shows that the traverse points of the clustered positioning system decrease by 65%–80%, which greatly improves the time efficiency. In terms of positioning accuracy, the average error is 4.09 m with the Wi-Fi positioning method. However, the positioning error can be reduced to 2.32 m after integration of the PDR algorithm with the adaptive noise extended Kalman filter (EKF).
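
    As an illustration of the clustering step described above, the sketch below groups a hypothetical fingerprint database with scikit-learn's AffinityPropagation and restricts online matching to the winning cluster. The array shapes and synthetic RSS values are assumptions made for the example.

```python
import numpy as np
from sklearn.cluster import AffinityPropagation

# Hypothetical fingerprint database: one row per reference point,
# columns = RSS readings (dBm) from 8 access points.
rng = np.random.default_rng(0)
rss_db = rng.uniform(-90, -40, size=(200, 8))

ap = AffinityPropagation(random_state=0).fit(rss_db)
labels = ap.labels_              # cluster index of every reference point
exemplars = ap.cluster_centers_  # representative fingerprints

# Online phase: match a measured fingerprint against the exemplars first,
# then search only inside the winning cluster.
measured = rss_db[0] + rng.normal(0, 2, 8)
best = np.argmin(np.linalg.norm(exemplars - measured, axis=1))
candidates = np.where(labels == best)[0]
```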

  17. A Comprehensive Motion Estimation Technique for the Improvement of EIS Methods Based on the SURF Algorithm and Kalman Filter.

    Science.gov (United States)

    Cheng, Xuemin; Hao, Qun; Xie, Mengdi

    2016-04-07

    Video stabilization is an important technology for removing undesired motion in videos. This paper presents a comprehensive motion estimation method for electronic image stabilization techniques, integrating the speeded up robust features (SURF) algorithm, modified random sample consensus (RANSAC), and the Kalman filter, and also taking camera scaling and conventional camera translation and rotation into full consideration. Using SURF in sub-pixel space, feature points were located and then matched. Falsely matched points were removed by modified RANSAC. Global motion was estimated by using the feature points and modified cascading parameters, which reduced the accumulated errors in a series of frames and improved the peak signal to noise ratio (PSNR) by 8.2 dB. A specific Kalman filter model was established by considering the movement and scaling of scenes. Finally, video stabilization was achieved with filtered motion parameters using the modified adjacent frame compensation. The experimental results proved that the target images were stabilized even when the vibrating amplitudes of the video became increasingly large.

  18. Quality-aware features-based noise level estimator for block matching and three-dimensional filtering algorithm

    Science.gov (United States)

    Xu, Shaoping; Hu, Lingyan; Yang, Xiaohui

    2016-01-01

    The performance of conventional denoising algorithms is usually controlled by one or several parameters whose optimal settings depend on the contents of the processed images and the characteristics of the noises. Among these parameters, noise level is a fundamental parameter that is always assumed to be known by most of the existing denoising algorithms (so-called nonblind denoising algorithms), which largely limits the applicability of these nonblind denoising algorithms in many applications. Moreover, these nonblind algorithms do not always achieve the best denoised images in visual quality even when fed with the actual noise level parameter. To address these shortcomings, in this paper we propose a new quality-aware features-based noise level estimator (NLE), which consists of quality-aware features extraction and optimal noise level parameter prediction. First, considering that image local contrast features convey important structural information that is closely related to image perceptual quality, we utilize the marginal statistics of two local contrast operators, i.e., the gradient magnitude and the Laplacian of Gaussian (LOG), to extract quality-aware features. The proposed quality-aware features have very low computational complexity, making them well suited for time-constrained applications. Then we propose a learning-based framework where the noise level parameter is estimated based on the quality-aware features. Based on the proposed NLE, we develop a blind block matching and three-dimensional filtering (BBM3D) denoising algorithm which is capable of effectively removing additive white Gaussian noise, even coupled with impulse noise. The noise level parameter of the BBM3D algorithm is automatically tuned according to the quality-aware features, guaranteeing the best performance. As such, the classical block matching and three-dimensional algorithm can be transformed into a blind one in an unsupervised manner. Experimental results demonstrate that the
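
    A rough sketch of the feature extraction idea (marginal statistics of gradient-magnitude and LoG responses) follows; the operator scale, histogram binning, and function name are illustrative choices rather than the paper's exact recipe. In the paper's framework, a trained regressor would then map such features to a noise-level estimate.

```python
import numpy as np
from scipy import ndimage

def quality_aware_features(img, sigma=1.0, bins=32):
    """Marginal statistics of gradient-magnitude and LoG responses."""
    gx = ndimage.gaussian_filter(img, sigma, order=(0, 1))  # d/dx
    gy = ndimage.gaussian_filter(img, sigma, order=(1, 0))  # d/dy
    gm = np.hypot(gx, gy)                                   # gradient magnitude
    log = ndimage.gaussian_laplace(img, sigma)              # LoG response
    h_gm, _ = np.histogram(gm, bins=bins, density=True)
    h_log, _ = np.histogram(log, bins=bins, density=True)
    return np.concatenate([h_gm, h_log])    # feed to a trained regressor
```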

  19. 3D noise power spectrum applied on clinical MDCT scanners: effects of reconstruction algorithms and reconstruction filters

    Science.gov (United States)

    Miéville, Frédéric A.; Bolard, Gregory; Benkreira, Mohamed; Ayestaran, Paul; Gudinchet, François; Bochud, François; Verdun, Francis R.

    2011-03-01

    The noise power spectrum (NPS) is the reference metric for understanding the noise content in computed tomography (CT) images. To evaluate the noise properties of clinical multidetector (MDCT) scanners, local 2D and 3D NPSs were computed for different acquisition reconstruction parameters. A 64- and a 128-slice MDCT scanner were employed. Measurements were performed on a water phantom in axial and helical acquisition modes. The CT dose index was identical for both installations. The influence of parameters such as the pitch, the reconstruction filter (soft, standard and bone) and the reconstruction algorithm (filtered back projection (FBP), adaptive statistical iterative reconstruction (ASIR)) was investigated. Images were also reconstructed in the coronal plane using a reformat process. Then 2D and 3D NPS methods were computed. In axial acquisition mode, the 2D axial NPS showed an important magnitude variation as a function of the z-direction when measured at the phantom center. In helical mode, a directional dependency with lobular shape was observed while the magnitude of the NPS was kept constant. Important effects of the reconstruction filter, pitch and reconstruction algorithm were observed on 3D NPS results for both MDCTs. With ASIR, a reduction of the NPS magnitude and a shift of the NPS peak to the low frequency range were visible. The 2D coronal NPS obtained from the reformatted images was impacted by the interpolation when compared to the 2D coronal NPS obtained from 3D measurements. The noise properties of volumes measured on last-generation MDCTs were studied using the local 3D NPS metric. However, the impact of the non-stationary noise effect may need further investigation.
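
    For reference, a local 2-D NPS can be estimated from an ensemble of detrended, noise-only ROIs as in the textbook estimator sketched below; this is generic background, not the authors' exact processing chain.

```python
import numpy as np

def nps_2d(rois, px, py):
    """2-D NPS from an ensemble of detrended, noise-only ROIs.

    rois: array (n_roi, ny, nx); px, py: pixel sizes in mm.
    NPS(u, v) = px*py / (nx*ny) * <|FFT2(roi - mean)|^2>
    """
    n_roi, ny, nx = rois.shape
    rois = rois - rois.mean(axis=(1, 2), keepdims=True)   # remove per-ROI DC
    spectra = np.abs(np.fft.fft2(rois)) ** 2
    return spectra.mean(axis=0) * px * py / (nx * ny)
```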

  20. MATLAB algorithm to implement soil water data assimilation with the Ensemble Kalman Filter using HYDRUS.

    Science.gov (United States)

    Valdes-Abellan, Javier; Pachepsky, Yakov; Martinez, Gonzalo

    2018-01-01

    Data assimilation is becoming a promising technique in hydrologic modelling, used to update not only model states but also to infer model parameters, specifically soil hydraulic properties in Richards-equation-based soil water models. The Ensemble Kalman Filter (EnKF) method is one of the most widely employed methods among the different data assimilation alternatives. In this study, the complete MATLAB© code used to study soil data assimilation efficiency under different soil and climatic conditions is shown. The code shows how data assimilation through the EnKF was implemented. The Richards equation was solved by use of the Hydrus-1D software, which was run from MATLAB. •MATLAB routines are released to be used/modified without restrictions by other researchers. •Data assimilation Ensemble Kalman Filter method code. •Soil water Richards equation flow solved by Hydrus-1D.
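
    The released MATLAB routines are not reproduced in the record; the following sketch shows a generic stochastic EnKF analysis step of the kind such code implements. The matrix shapes and perturbed-observation form are standard textbook choices, not the authors' exact implementation.

```python
import numpy as np

def enkf_update(ensemble, obs, obs_err_std, H, rng):
    """Stochastic EnKF analysis step.

    ensemble: (n_members, n_state); obs: (n_obs,); H: (n_obs, n_state).
    """
    n = ensemble.shape[0]
    X = ensemble - ensemble.mean(axis=0)            # state anomalies
    Y = X @ H.T                                     # observed-space anomalies
    P_yy = Y.T @ Y / (n - 1) + np.eye(obs.size) * obs_err_std**2
    K = (X.T @ Y / (n - 1)) @ np.linalg.inv(P_yy)   # Kalman gain
    perturbed = obs + rng.normal(0, obs_err_std, size=(n, obs.size))
    return ensemble + (perturbed - ensemble @ H.T) @ K.T
```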

  1. Image restoration technique using median filter combined with decision tree algorithm

    International Nuclear Information System (INIS)

    Sethu, D.; Assadi, H.M.; Hasson, F.N.; Hasson, N.N.

    2007-01-01

    Images are usually corrupted during transmission, principally due to interference in the channel used for transmission. Images can also be impaired by the addition of various forms of noise; salt-and-pepper noise is a common example. Salt-and-pepper noise can be caused by errors in data transmission, malfunctioning pixel elements in camera sensors, and timing errors in the digitization process. During the filtering of a noisy image, important features such as edges, lines and other fine details embedded in the image tend to blur because of the filtering operation. The enhancement of noisy data, however, is a very critical process because the sharpening operation can significantly increase the noise. In this respect, contrast enhancement is often necessary in order to highlight details that have been blurred. In this proposed approach, we aim to develop an image processing technique that meets the requirements of high quality and high speed and that prevents noise accretion during the sharpening of image details; we also compare the images restored via the proposed method with those obtained from other kinds of filters. (author)
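
    As a baseline for the trade-off discussed above, a plain median filter removes salt-and-pepper impulses at the cost of blurring fine detail, which is precisely what the proposed decision-tree stage (not specified in the record) is meant to mitigate. A minimal sketch with synthetic noise:

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)
img = rng.uniform(0.2, 0.8, (128, 128))

# Inject salt-and-pepper noise on 10% of the pixels.
noisy = img.copy()
mask = rng.random(img.shape) < 0.10
noisy[mask] = rng.choice([0.0, 1.0], size=int(mask.sum()))

# A plain 3x3 median filter removes the impulses but also blurs edges
# and fine lines -- the drawback the decision-tree stage targets.
restored = ndimage.median_filter(noisy, size=3)
```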

  2. Ripple FPN reduced algorithm based on temporal high-pass filter and hardware implementation

    Science.gov (United States)

    Li, Yiyang; Li, Shuo; Zhang, Zhipeng; Jin, Weiqi; Wu, Lei; Jin, Minglei

    2016-11-01

    Cooled infrared detector arrays always suffer from undesired Ripple Fixed-Pattern Noise (FPN) when observing sky scenes. Ripple Fixed-Pattern Noise seriously affects the imaging quality of a thermal imager, especially for small target detection and tracking, and it is hard to eliminate with calibration-based techniques or current scene-based nonuniformity correction algorithms. In this paper, we present a modified spatial low-pass and temporal high-pass nonuniformity correction algorithm using an adaptive time-domain threshold (THP&GM). The threshold is designed to significantly reduce ghosting artifacts. We test the algorithm on real infrared data in comparison to several previously published methods. This algorithm not only effectively corrects common FPN such as stripes, but also has an obvious advantage over current methods in terms of detail protection and convergence speed, especially for Ripple FPN correction. Furthermore, we present our architecture with a prototype built on a Xilinx Virtex-5 XC5VLX50T field-programmable gate array (FPGA). The hardware implementation of the algorithm on the FPGA has two advantages: (1) low resource consumption, and (2) small hardware delay (less than 20 lines). The hardware has been successfully applied in an actual system.
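
    The record does not give the algorithm's details; the sketch below shows the general shape of a temporal high-pass nonuniformity correction, with a crude motion threshold standing in for the paper's adaptive time-domain threshold. All parameter values are illustrative.

```python
import numpy as np

def temporal_highpass_nuc(frames, alpha=0.05, thresh=10.0):
    """Temporal high-pass nonuniformity correction, sketch form.

    alpha:  IIR update rate of the per-pixel background estimate.
    thresh: freeze the estimate where a frame deviates strongly
            (a crude stand-in for the adaptive time-domain threshold).
    """
    mean = None
    corrected = []
    for f in frames:
        f = f.astype(float)
        if mean is None:
            mean = f.copy()
        static = np.abs(f - mean) < thresh      # update quasi-static pixels only
        mean[static] += alpha * (f - mean)[static]
        corrected.append(f - mean)              # high-pass output, FPN removed
    return corrected
```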

  3. New Kalman Filtering Algorithm for Narrowband Interference Suppression in Spread Spectrum Systems

    Institute of Scientific and Technical Information of China (English)

    许光辉; 胡光锐

    2005-01-01

    A new Kalman filtering algorithm for suppression of narrowband interference (NBI) in spread spectrum systems is presented; it estimates the spread spectrum signal before interference suppression, exploiting the autoregressive (AR) structure of the interference. Compared with the ACM nonlinear filtering algorithm, simulation results show that the proposed algorithm has preferable performance, with an SNR improvement of about 5 dB on average.

  4. An adaptive compensation algorithm for temperature drift of micro-electro-mechanical systems gyroscopes using a strong tracking Kalman filter.

    Science.gov (United States)

    Feng, Yibo; Li, Xisheng; Zhang, Xiaojuan

    2015-05-13

    We present an adaptive algorithm for a system integrated with micro-electro-mechanical systems (MEMS) gyroscopes and a compass to eliminate the influence from the environment, compensate the temperature drift precisely, and improve the accuracy of the MEMS gyroscope. We use a simplified drift model and changing but appropriate model parameters to implement this algorithm. The model of MEMS gyroscope temperature drift is constructed mostly on the basis of the temperature sensitivity of the gyroscope. As the state variables of a strong tracking Kalman filter (STKF), the parameters of the temperature drift model can be calculated to adapt to the environment under the support of the compass. These parameters change intelligently with the environment to maintain the precision of the MEMS gyroscope in the changing temperature. The heading error is less than 0.6° in the static temperature experiment, and also is kept in the range from 5° to -2° in the dynamic outdoor experiment. This demonstrates that the proposed algorithm exhibits strong adaptability to a changing temperature, and performs significantly better than KF and MLR to compensate the temperature drift of a gyroscope and eliminate the influence of temperature variation.

  5. An Adaptive Compensation Algorithm for Temperature Drift of Micro-Electro-Mechanical Systems Gyroscopes Using a Strong Tracking Kalman Filter

    Directory of Open Access Journals (Sweden)

    Yibo Feng

    2015-05-01

    Full Text Available We present an adaptive algorithm for a system integrated with micro-electro-mechanical systems (MEMS) gyroscopes and a compass to eliminate the influence from the environment, compensate the temperature drift precisely, and improve the accuracy of the MEMS gyroscope. We use a simplified drift model and changing but appropriate model parameters to implement this algorithm. The model of MEMS gyroscope temperature drift is constructed mostly on the basis of the temperature sensitivity of the gyroscope. As the state variables of a strong tracking Kalman filter (STKF), the parameters of the temperature drift model can be calculated to adapt to the environment under the support of the compass. These parameters change intelligently with the environment to maintain the precision of the MEMS gyroscope in the changing temperature. The heading error is less than 0.6° in the static temperature experiment, and also is kept in the range from 5° to −2° in the dynamic outdoor experiment. This demonstrates that the proposed algorithm exhibits strong adaptability to a changing temperature, and performs significantly better than KF and MLR to compensate the temperature drift of a gyroscope and eliminate the influence of temperature variation.

  6. Uncertainty Representation and Interpretation in Model-Based Prognostics Algorithms Based on Kalman Filter Estimation

    Science.gov (United States)

    Galvan, Jose Ramon; Saxena, Abhinav; Goebel, Kai Frank

    2012-01-01

    This article discusses several aspects of uncertainty representation and management for model-based prognostics methodologies based on our experience with Kalman Filters when applied to prognostics for electronics components. In particular, it explores the implications of modeling remaining useful life prediction as a stochastic process, and how it relates to uncertainty representation, management and the role of prognostics in decision-making. A distinction between the interpretations of estimated remaining useful life probability density function is explained and a cautionary argument is provided against mixing interpretations for two while considering prognostics in making critical decisions.

  7. Modification of double vector control algorithm to filter out grid harmonics

    DEFF Research Database (Denmark)

    Awad, Hilmy; Blaabjerg, Frede

    2005-01-01

    ... terminals in the case of distorted grid voltage. Furthermore, a selective harmonic compensation strategy is applied to filter out the grid harmonics. The operation of the SSC under distorted utility conditions and voltage dips is discussed. The validity of the proposed controller is verified by experiments, which have been carried out on a 10-kV SSC laboratory setup. Experimental results have shown the ability of the SSC to mitigate voltage dips and harmonics. It is also shown that the proposed controller has improved the transient performance of the SSC even under distorted utility conditions.

  8. Imaging Seismic Source Variations Using Back-Projection Methods at El Tatio Geyser Field, Northern Chile

    Science.gov (United States)

    Kelly, C. L.; Lawrence, J. F.

    2014-12-01

    During October 2012, 51 geophones and 6 broadband seismometers were deployed in an ~50x50m region surrounding a periodically erupting columnar geyser in the El Tatio Geyser Field, Chile. The dense array served as the seismic framework for a collaborative project to study the mechanics of complex hydrothermal systems. Contemporaneously, complementary geophysical measurements (including down-hole temperature and pressure, discharge rates, thermal imaging, water chemistry, and video) were also collected. Located on the western flanks of the Andes Mountains at an elevation of 4200m, El Tatio is the third largest geyser field in the world. Its non-pristine condition makes it an ideal location to perform minimally invasive geophysical studies. The El Jefe Geyser was chosen for its easily accessible conduit and extremely periodic eruption cycle (~120s). During approximately 2 weeks of continuous recording, we recorded ~2500 nighttime eruptions which lack cultural noise from tourism. With ample data, we aim to study how the source varies spatially and temporally during each phase of the geyser's eruption cycle. We are developing a new back-projection processing technique to improve source imaging for diffuse signals. Our method was previously applied to the Sierra Negra Volcano system, which also exhibits repeating harmonic and diffuse seismic sources. We back-project correlated seismic signals from the receivers back to their sources, assuming linear source to receiver paths and a known velocity model (obtained from ambient noise tomography). We apply polarization filters to isolate individual and concurrent geyser energy associated with P and S phases. We generate 4D, time-lapse images of the geyser source field that illustrate how the source distribution changes through the eruption cycle. We compare images for pre-eruption, co-eruption, post-eruption and quiescent periods. We use our images to assess eruption mechanics in the system (i.e. top-down vs. bottom-up) and

  9. Accuracy improvement of CT reconstruction using tree-structured filter bank

    International Nuclear Information System (INIS)

    Ueda, Kazuhiro; Morimoto, Hiroaki; Morikawa, Yoshitaka; Murakami, Junichi

    2009-01-01

    An accuracy improvement of the 'CT reconstruction algorithm using a TSFB (Tree-Structured Filter Bank)', a high-speed CT reconstruction algorithm, was proposed. The TSFB method greatly reduces the amount of computation in comparison with the CB (Convolution Backprojection) method, but artifacts occur in the reconstructed image because signals outside the reconstruction domain are disregarded during stage processing. In addition, since the whole-band filter, a component of the two-dimensional synthesis filter, is an IIR filter, artifacts also occur at the edges of the reconstructed image. To suppress these artifacts, the proposed method enlarges the processing range of the TSFB method in the outer domain by controlling the width of the specimen lines and adding lines outside the reconstruction domain. Furthermore, to avoid an increase in the amount of computation, an algorithm was proposed that determines the required processing range depending on the number of TSFB processing stages and the degree of incline of the filter, and then updates the position and width of the specimen lines to process the required range. Simulations show that a high-speed and highly accurate CT reconstruction is realized in this way: the quality of the reconstructed image of the proposed method was improved in comparison with the TSFB method and matched the result of the CB method. (T. Tanaka)

  10. A Time-Domain Filtering Scheme for the Modified Root-MUSIC Algorithm

    OpenAIRE

    Yamada, Hiroyoshi; Yamaguchi, Yoshio; Sengoku, Masakazu

    1996-01-01

    A new superresolution technique is proposed for high-resolution scattering analysis. For a complicated multipath propagation environment, it is not enough to estimate only the delay times of the signals; some other information is required to identify each signal path. The proposed method can estimate the frequency characteristic of each signal in addition to its delay time. One method, called the modified (Root) MUSIC algorithm, is known as a technique that can treat both of t...

  11. A parallel implementation of the Wuchty algorithm with additional experimental filters to more thoroughly explore RNA conformational space.

    Directory of Open Access Journals (Sweden)

    Jonathan W Stone

    Full Text Available We present new modifications to the Wuchty algorithm in order to better define and explore possible conformations for an RNA sequence. The new features, including parallelization, energy-independent lonely pair constraints, context-dependent chemical probing constraints, helix filters, and optional multibranch loops, provide useful tools for exploring the landscape of RNA folding. Chemical probing alone may not necessarily define a single unique structure. The helix filters and optional multibranch loops are global constraints on RNA structure that are an especially useful tool for generating models of encapsidated viral RNA for which cryoelectron microscopy or crystallography data may be available. The computations generate a combinatorially complete set of structures near a free energy minimum and thus provide data on the density and diversity of structures near the bottom of a folding funnel for an RNA sequence. The conformational landscapes for some RNA sequences may resemble a low, wide basin rather than a steep funnel that converges to a single structure.

  12. A novel cooperative localization algorithm using enhanced particle filter technique in maritime search and rescue wireless sensor network.

    Science.gov (United States)

    Wu, Huafeng; Mei, Xiaojun; Chen, Xinqiang; Li, Junjun; Wang, Jun; Mohapatra, Prasant

    2018-07-01

    Maritime search and rescue (MSR) plays a significant role in Safety of Life at Sea (SOLAS). However, when utilizing wireless sensor network (WSN) technology in MSR, measurement information can be inaccurate due to the wave shadow effect. In this paper, we develop a Novel Cooperative Localization Algorithm (NCLA) for MSR by using an enhanced particle filter method to reduce measurement errors in the observation model caused by the wave shadow effect. First, we take into account the mobility of nodes at sea to develop a motion model, the Lagrangian model. Furthermore, we introduce both a state model and an observation model to constitute a system model for the particle filter (PF). To address the impact of the wave shadow effect on the observation model, we derive an optimal parameter via the Kullback-Leibler divergence (KLD) to mitigate the error. After the optimal parameter is acquired, an improved likelihood function is presented. Finally, the estimated position is acquired. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
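
    A generic particle-filter update/resample step, of the kind NCLA builds on, might look like the sketch below. The Gaussian likelihood, ESS-based resampling trigger, and all names are textbook assumptions rather than the paper's algorithm, which additionally tunes an observation parameter via the KLD.

```python
import numpy as np

def pf_step(particles, weights, obs, obs_fn, obs_std, rng):
    """One particle-filter update/resample step.

    particles: (n, d) states; obs_fn maps states to predicted measurements.
    """
    pred = obs_fn(particles)
    w = weights * np.exp(-0.5 * ((obs - pred) / obs_std) ** 2)  # likelihood
    w = w / w.sum()
    if 1.0 / np.sum(w**2) < len(w) / 2:              # effective sample size low?
        idx = rng.choice(len(w), size=len(w), p=w)   # multinomial resampling
        particles, w = particles[idx], np.full(len(w), 1.0 / len(w))
    return particles, w
```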

  13. Algorithms

    Indian Academy of Sciences (India)

    polynomial) division have been found in Vedic Mathematics which are dated long before Euclid's algorithm. A programming language is used to describe an algorithm for execution on a computer. An algorithm expressed using a programming.

  14. A New Quaternion-Based Kalman Filter for Real-Time Attitude Estimation Using the Two-Step Geometrically-Intuitive Correction Algorithm.

    Science.gov (United States)

    Feng, Kaiqiang; Li, Jie; Zhang, Xiaoming; Shen, Chong; Bi, Yu; Zheng, Tao; Liu, Jun

    2017-09-19

    In order to reduce the computational complexity, and improve the pitch/roll estimation accuracy of the low-cost attitude heading reference system (AHRS) under conditions of magnetic-distortion, a novel linear Kalman filter, suitable for nonlinear attitude estimation, is proposed in this paper. The new algorithm is the combination of two-step geometrically-intuitive correction (TGIC) and the Kalman filter. In the proposed algorithm, the sequential two-step geometrically-intuitive correction scheme is used to make the current estimation of pitch/roll immune to magnetic distortion. Meanwhile, the TGIC produces a computed quaternion input for the Kalman filter, which avoids the linearization error of measurement equations and reduces the computational complexity. Several experiments have been carried out to validate the performance of the filter design. The results demonstrate that the mean time consumption and the root mean square error (RMSE) of pitch/roll estimation under magnetic disturbances are reduced by 45.9% and 33.8%, respectively, when compared with a standard filter. In addition, the proposed filter is applicable for attitude estimation under various dynamic conditions.

  15. A Refined Self-Tuning Filter-Based Instantaneous Power Theory Algorithm for Indirect Current Controlled Three-Level Inverter-Based Shunt Active Power Filters under Non-sinusoidal Source Voltage Conditions

    Directory of Open Access Journals (Sweden)

    Yap Hoon

    2017-02-01

    Full Text Available In this paper, a refined reference current generation algorithm based on instantaneous power (pq) theory is proposed for operation of an indirect current controlled (ICC) three-level neutral-point diode clamped (NPC) inverter-based shunt active power filter (SAPF) under non-sinusoidal source voltage conditions. The SAPF is recognized as one of the most effective solutions to current harmonics due to its flexibility in dealing with various power system conditions. As for its controller, pq theory has widely been applied to generate the desired reference current due to its simple implementation features. However, the conventional dependency on the self-tuning filter (STF) in generating the reference current has significantly limited the mitigation performance of the SAPF. Besides, the conventional STF-based pq theory algorithm is still considered to possess needless features which increase computational complexity. Furthermore, the conventional algorithm is mostly designed to suit the operation of a direct current controlled (DCC) SAPF, which is incapable of handling switching ripple problems, thereby leading to inefficient mitigation performance. Therefore, three main improvements are performed, which include replacement of the STF with a mathematical-based fundamental real power identifier, removal of redundant features, and generation of a sinusoidal reference current. To validate the effectiveness and feasibility of the proposed algorithm, simulation work in MATLAB-Simulink and laboratory tests utilizing a TMS320F28335 digital signal processor (DSP) are performed. Both simulation and experimental findings demonstrate the superiority of the proposed algorithm over the conventional algorithm.
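
    For context, the core of pq theory computes instantaneous real and imaginary power from Clarke-transformed voltages and currents, as sketched below in textbook form; the paper's refinements, such as the fundamental real power identifier, are not reproduced.

```python
import numpy as np

def clarke(a, b, c):
    """Power-invariant Clarke transform of three-phase quantities."""
    alpha = np.sqrt(2.0 / 3.0) * (a - 0.5 * b - 0.5 * c)
    beta = (1.0 / np.sqrt(2.0)) * (b - c)
    return alpha, beta

def instantaneous_pq(v_abc, i_abc):
    """Instantaneous real power p and imaginary power q of pq theory."""
    va, vb = clarke(*v_abc)
    ia, ib = clarke(*i_abc)
    p = va * ia + vb * ib
    q = vb * ia - va * ib
    return p, q
```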

  16. Image reconstruction for digital breast tomosynthesis (DBT) by using projection-angle-dependent filter functions

    Energy Technology Data Exchange (ETDEWEB)

    Park, Yeonok; Park, Chulkyu; Cho, Hyosung; Je, Uikyu; Hong, Daeki; Lee, Minsik; Cho, Heemoon; Choi, Sungil; Koo, Yangseo [Yonsei University, Wonju (Korea, Republic of)

    2014-09-15

    Digital breast tomosynthesis (DBT) is considered in clinics as a standard three-dimensional imaging modality, allowing the earlier detection of cancer. It typically acquires only 10-30 projections over a limited angular range of 15-60° with a stationary detector and typically uses a computationally-efficient filtered-backprojection (FBP) algorithm for image reconstruction. However, a common FBP algorithm yields poor image quality, resulting from the loss of the average image value due to the elimination of the dc component of the image by the ramp filter, and from the presence of severe image artifacts due to the incomplete data. As an alternative, iterative reconstruction methods are often used in DBT to overcome these difficulties, even though they are still computationally expensive. In this study, as a compromise, we considered a projection-angle-dependent filtering method in which one-dimensional geometry-adapted filter kernels are computed with the aid of a conjugate-gradient method and are incorporated into the standard FBP framework. We implemented the proposed algorithm and performed systematic simulation work to investigate the imaging characteristics. Our results indicate that the proposed method is superior to a conventional FBP method for DBT imaging and has a comparable computational cost, while preserving good image homogeneity and edge sharpening with no serious image artifacts.

  17. Algorithms

    Indian Academy of Sciences (India)

    to as 'divide-and-conquer'. Although there has been a large effort in realizing efficient algorithms, there are not many universally accepted algorithm design paradigms. In this article, we illustrate algorithm design techniques such as balancing, greedy strategy, dynamic programming strategy, and backtracking or traversal of ...

  18. A New Switching-Based Median Filtering Scheme and Algorithm for Removal of High-Density Salt and Pepper Noise in Images

    Directory of Open Access Journals (Sweden)

    Jayaraj V

    2010-01-01

    Full Text Available A new switching-based median filtering scheme for restoration of images that are highly corrupted by salt and pepper noise is proposed. An algorithm based on the scheme is developed. The new scheme introduces the concept of substitution of noisy pixels by linear prediction prior to estimation. A novel simplified linear predictor is developed for this purpose. The objective of the scheme and algorithm is the removal of high-density salt and pepper noise in images. The new algorithm shows significantly better image quality with good PSNR, reduced MSE, good edge preservation, and reduced streaking. The good performance is achieved with reduced computational complexity. A comparison of the performance is made with several existing algorithms in terms of visual and quantitative results. The performance of the proposed scheme and algorithm is demonstrated.
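
    A minimal decision-based (switching) median filter is sketched below: only pixels at the extreme gray levels are treated as noise candidates and replaced. This is the basic scheme such algorithms build on; the paper's linear-prediction stage is omitted here.

```python
import numpy as np

def switching_median(img, lo=0, hi=255, size=3):
    """Decision-based median filter: replace only extreme-valued pixels."""
    out = img.astype(float)
    pad = size // 2
    padded = np.pad(out, pad, mode='reflect')
    noisy = (img == lo) | (img == hi)           # salt/pepper candidates
    for r, c in zip(*np.nonzero(noisy)):
        win = padded[r:r + size, c:c + size]
        good = win[(win != lo) & (win != hi)]   # uncorrupted neighbours
        out[r, c] = np.median(good) if good.size else np.median(win)
    return out
```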

  19. Integrated WiFi/PDR/Smartphone Using an Unscented Kalman Filter Algorithm for 3D Indoor Localization.

    Science.gov (United States)

    Chen, Guoliang; Meng, Xiaolin; Wang, Yunjia; Zhang, Yanzhe; Tian, Peng; Yang, Huachao

    2015-09-23

    Because of the high calculation cost and poor performance of a traditional planar map when dealing with complicated indoor geographic information, a WiFi fingerprint indoor positioning system cannot be widely employed on a smartphone platform. By making full use of the hardware sensors embedded in the smartphone, this study proposes an integrated approach to a three-dimensional (3D) indoor positioning system. First, an improved K-means clustering method is adopted to reduce the fingerprint database retrieval time and enhance positioning efficiency. Next, with the mobile phone's acceleration sensor, a new step counting method based on auto-correlation analysis is proposed to achieve cell phone inertial navigation positioning. Furthermore, the integration of WiFi positioning with Pedestrian Dead Reckoning (PDR) obtains higher positional accuracy with the help of the Unscented Kalman Filter algorithm. Finally, a hybrid 3D positioning system based on Unity 3D, which can carry out real-time positioning for targets in 3D scenes, is designed for the fluent operation of mobile terminals.
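
    The auto-correlation step-counting idea can be sketched as follows; the accelerometer-magnitude input, sampling rate, and period bounds are assumptions for illustration, not the paper's tuned values.

```python
import numpy as np

def count_steps(acc_mag, fs, min_period=0.4, max_period=1.0):
    """Step count from the auto-correlation of accelerometer magnitude.

    acc_mag: 1-D array of |acceleration|; fs: sampling rate (Hz).
    """
    x = acc_mag - acc_mag.mean()
    ac = np.correlate(x, x, mode='full')[x.size - 1:]   # non-negative lags
    lo, hi = int(min_period * fs), int(max_period * fs)
    lag = lo + int(np.argmax(ac[lo:hi]))                # dominant step period
    return x.size // lag, lag / fs                      # (steps, period in s)
```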

  20. Integrated WiFi/PDR/Smartphone Using an Unscented Kalman Filter Algorithm for 3D Indoor Localization

    Directory of Open Access Journals (Sweden)

    Guoliang Chen

    2015-09-01

    Full Text Available Because of the high calculation cost and poor performance of a traditional planar map when dealing with complicated indoor geographic information, a WiFi fingerprint indoor positioning system cannot be widely employed on a smartphone platform. By making full use of the hardware sensors embedded in the smartphone, this study proposes an integrated approach to a three-dimensional (3D) indoor positioning system. First, an improved K-means clustering method is adopted to reduce the fingerprint database retrieval time and enhance positioning efficiency. Next, with the mobile phone’s acceleration sensor, a new step counting method based on auto-correlation analysis is proposed to achieve cell phone inertial navigation positioning. Furthermore, the integration of WiFi positioning with Pedestrian Dead Reckoning (PDR) obtains higher positional accuracy with the help of the Unscented Kalman Filter algorithm. Finally, a hybrid 3D positioning system based on Unity 3D, which can carry out real-time positioning for targets in 3D scenes, is designed for the fluent operation of mobile terminals.

  1. Very Fast Algorithms and Detection Performance of Multi-Channel and 2-D Parametric Adaptive Matched Filters for Airborne Radar

    National Research Council Canada - National Science Library

    Marple, Jr., S. L; Corbell, Phillip M; Rangaswamy, Muralidhar

    2007-01-01

    ...) detection statistics under exactly known covariance (the clairvoyant case). Improved versions of the two original multichannel PAMF algorithms, one new multichannel PAMF algorithm, and a new two-dimensional (2D) PAMF algorithm...

  2. Application of digital tomosynthesis (DTS) of optimal deblurring filters for dental X-ray imaging

    International Nuclear Information System (INIS)

    Oh, J. E.; Cho, H. S.; Kim, D. S.; Choi, S. I.; Je, U. K.

    2012-01-01

    Digital tomosynthesis (DTS) is a limited-angle tomographic technique that provides some of the tomographic benefits of computed tomography (CT) but at reduced dose and cost. Thus, the potential for application of DTS to dental X-ray imaging seems promising. As a continuation of our dental radiography R&D, we developed an effective DTS reconstruction algorithm and implemented it in conjunction with a commercial dental CT system for potential use in dental implant placement. The reconstruction algorithm employed a backprojection filtering (BPF) method based upon optimal deblurring filters to effectively suppress both the blur artifacts originating from the out-of-focus planes and the high-frequency noise. To verify the usefulness of the reconstruction algorithm, we performed systematic simulation work and evaluated the image characteristics. We also performed experimental work in which DTS images of enhanced anatomical resolution were successfully obtained by using the algorithm; the results are promising for our ongoing applications to dental X-ray imaging. In this paper, our approach to the development of the DTS reconstruction algorithm and the results are described in detail.

  3. A New Polar Transfer Alignment Algorithm with the Aid of a Star Sensor and Based on an Adaptive Unscented Kalman Filter

    Directory of Open Access Journals (Sweden)

    Jianhua Cheng

    2017-10-01

    Full Text Available Because of the harsh polar environment, the master strapdown inertial navigation system (SINS) has low accuracy and the system model information becomes abnormal. In this case, existing polar transfer alignment (TA) algorithms which use the measurement information provided by the master SINS would lose their effectiveness. In this paper, a new polar TA algorithm with the aid of a star sensor and based on an adaptive unscented Kalman filter (AUKF) is proposed to deal with these problems. Since the measurement information provided by the master SINS is inaccurate, the accurate information provided by the star sensor is chosen as the measurement. With the compensation of the lever-arm effect and the model of the star sensor, the nonlinear navigation equations are derived. Combined with the attitude matching method, the filter models for polar TA are designed. An AUKF is introduced to handle the abnormal information of the system model. Then, the AUKF is used to estimate the states of TA. Results have demonstrated that the performance of the new polar TA algorithm is better than the state-of-the-art polar TA algorithms. Therefore, the new polar TA algorithm proposed in this paper effectively ensures and improves the accuracy of TA in the harsh polar environment.

  4. A New Polar Transfer Alignment Algorithm with the Aid of a Star Sensor and Based on an Adaptive Unscented Kalman Filter.

    Science.gov (United States)

    Cheng, Jianhua; Wang, Tongda; Wang, Lu; Wang, Zhenmin

    2017-10-23

    Because of the harsh polar environment, the master strapdown inertial navigation system (SINS) has low accuracy and the system model information becomes abnormal. In this case, existing polar transfer alignment (TA) algorithms which use the measurement information provided by the master SINS would lose their effectiveness. In this paper, a new polar TA algorithm with the aid of a star sensor and based on an adaptive unscented Kalman filter (AUKF) is proposed to deal with these problems. Since the measurement information provided by the master SINS is inaccurate, the accurate information provided by the star sensor is chosen as the measurement. With the compensation of the lever-arm effect and the model of the star sensor, the nonlinear navigation equations are derived. Combined with the attitude matching method, the filter models for polar TA are designed. An AUKF is introduced to handle the abnormal information of the system model. Then, the AUKF is used to estimate the states of TA. Results have demonstrated that the performance of the new polar TA algorithm is better than the state-of-the-art polar TA algorithms. Therefore, the new polar TA algorithm proposed in this paper effectively ensures and improves the accuracy of TA in the harsh polar environment.

  5. Modified compensation algorithm of lever-arm effect and flexural deformation for polar shipborne transfer alignment based on improved adaptive Kalman filter

    International Nuclear Information System (INIS)

    Wang, Tongda; Cheng, Jianhua; Guan, Dongxue; Kang, Yingyao; Zhang, Wei

    2017-01-01

    Due to the lever-arm effect and flexural deformation in the practical application of transfer alignment (TA), TA performance is degraded. The existing polar TA algorithm only compensates for a fixed lever-arm without considering the dynamic lever-arm caused by flexural deformation; traditional non-polar TA algorithms also have some limitations. Thus, the performance of existing compensation algorithms is unsatisfactory. In this paper, a modified compensation algorithm for the lever-arm effect and flexural deformation is proposed to improve the accuracy and speed of polar TA. On the basis of a dynamic lever-arm model and a noise compensation method for flexural deformation, polar TA equations are derived in grid frames. Based on the velocity-plus-attitude matching method, the filter models of polar TA are designed. An adaptive Kalman filter (AKF) is improved to enhance the robustness and accuracy of the system, and then applied to the estimation of the misalignment angles. Simulation and experiment results have demonstrated that the modified compensation algorithm based on the improved AKF can effectively compensate for the lever-arm effect and flexural deformation, and thereby improve the accuracy and speed of TA in the polar region. (paper)

  6. X-ray differential phase-contrast tomographic reconstruction with a phase line integral retrieval filter

    International Nuclear Information System (INIS)

    Fu, Jian; Hu, Xinhua; Li, Chen

    2015-01-01

    We report an alternative reconstruction technique for x-ray differential phase-contrast computed tomography (DPC-CT). This approach is based on a new phase line integral projection retrieval filter, which is rooted in the derivative property of the Fourier transform and counteracts the differential nature of the DPC-CT projections. It first retrieves the phase line integrals from the DPC-CT projections. Then the standard filtered back-projection (FBP) algorithms popular in x-ray absorption-contrast CT are directly applied to the retrieved phase line integrals to reconstruct the DPC-CT images. Compared with the conventional DPC-CT reconstruction algorithms, the proposed method removes the Hilbert imaginary filter and allows for the direct use of absorption-contrast FBP algorithms. Consequently, FBP-oriented image processing techniques and reconstruction acceleration software that have already been successfully used in absorption-contrast CT can be directly adopted to improve the DPC-CT image quality and speed up the reconstruction
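
    The retrieval filter exploits the derivative property of the Fourier transform, F{dφ/dx}(f) = i2πf·F{φ}(f), so a line integral is recovered by dividing the projection spectrum by i2πf. A minimal sketch (the handling of the DC term is an illustrative choice):

```python
import numpy as np

def retrieve_line_integrals(dpc_proj):
    """Phase line integrals from differential projections (last axis)."""
    n = dpc_proj.shape[-1]
    f = np.fft.fftfreq(n)
    inv = np.zeros(n, dtype=complex)
    inv[f != 0] = 1.0 / (2j * np.pi * f[f != 0])   # divide by i*2*pi*f
    # DC term is left at zero: the constant offset is not recoverable.
    spec = np.fft.fft(dpc_proj, axis=-1) * inv
    return np.real(np.fft.ifft(spec, axis=-1))
```

    Standard absorption-contrast FBP can then be run on the retrieved line integrals, which is the practical benefit the abstract emphasizes.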

  7. Algorithms

    Indian Academy of Sciences (India)

    ticians but also forms the foundation of computer science. Two ... with methods of developing algorithms for solving a variety of problems but ... applications of computers in science and engineering ... numerical calculus are as important. We will ...

  8. Quantitative analysis of emphysema and airway measurements according to iterative reconstruction algorithms: comparison of filtered back projection, adaptive statistical iterative reconstruction and model-based iterative reconstruction

    International Nuclear Information System (INIS)

    Choo, Ji Yung; Goo, Jin Mo; Park, Chang Min; Park, Sang Joon; Lee, Chang Hyun; Shim, Mi-Suk

    2014-01-01

    To evaluate filtered back projection (FBP) and two iterative reconstruction (IR) algorithms and their effects on the quantitative analysis of lung parenchyma and airway measurements on computed tomography (CT) images. Low-dose chest CT obtained in 281 adult patients were reconstructed using three algorithms: FBP, adaptive statistical IR (ASIR) and model-based IR (MBIR). Measurements of each dataset were compared: total lung volume, emphysema index (EI), airway measurements of the lumen and wall area as well as average wall thickness. Accuracy of airway measurements of each algorithm was also evaluated using an airway phantom. EI using a threshold of -950 HU was significantly different among the three algorithms in decreasing order of FBP (2.30 %), ASIR (1.49 %) and MBIR (1.20 %) (P < 0.01). Wall thickness was also significantly different among the three algorithms with FBP (2.09 mm) demonstrating thicker walls than ASIR (2.00 mm) and MBIR (1.88 mm) (P < 0.01). Airway phantom analysis revealed that MBIR showed the most accurate value for airway measurements. The three algorithms presented different EIs and wall thicknesses, decreasing in the order of FBP, ASIR and MBIR. Thus, care should be taken in selecting the appropriate IR algorithm on quantitative analysis of the lung. (orig.)
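
    For reference, the emphysema index used above is simply the percentage of lung voxels below the -950 HU threshold; a one-line sketch, with the lung mask assumed given:

```python
import numpy as np

def emphysema_index(hu_volume, lung_mask, threshold=-950):
    """Percentage of lung voxels below the emphysema threshold (HU)."""
    return 100.0 * np.mean(hu_volume[lung_mask] < threshold)
```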

  9. Quantitative analysis of emphysema and airway measurements according to iterative reconstruction algorithms: comparison of filtered back projection, adaptive statistical iterative reconstruction and model-based iterative reconstruction

    Energy Technology Data Exchange (ETDEWEB)

    Choo, Ji Yung [Seoul National University Medical Research Center, Department of Radiology, Seoul National University College of Medicine, and Institute of Radiation Medicine, Seoul (Korea, Republic of); Korea University Ansan Hospital, Ansan-si, Department of Radiology, Gyeonggi-do (Korea, Republic of); Goo, Jin Mo; Park, Chang Min; Park, Sang Joon [Seoul National University Medical Research Center, Department of Radiology, Seoul National University College of Medicine, and Institute of Radiation Medicine, Seoul (Korea, Republic of); Seoul National University, Cancer Research Institute, Seoul (Korea, Republic of); Lee, Chang Hyun; Shim, Mi-Suk [Seoul National University Medical Research Center, Department of Radiology, Seoul National University College of Medicine, and Institute of Radiation Medicine, Seoul (Korea, Republic of)

    2014-04-15

    To evaluate filtered back projection (FBP) and two iterative reconstruction (IR) algorithms and their effects on the quantitative analysis of lung parenchyma and airway measurements on computed tomography (CT) images. Low-dose chest CT obtained in 281 adult patients were reconstructed using three algorithms: FBP, adaptive statistical IR (ASIR) and model-based IR (MBIR). Measurements of each dataset were compared: total lung volume, emphysema index (EI), airway measurements of the lumen and wall area as well as average wall thickness. Accuracy of airway measurements of each algorithm was also evaluated using an airway phantom. EI using a threshold of -950 HU was significantly different among the three algorithms in decreasing order of FBP (2.30 %), ASIR (1.49 %) and MBIR (1.20 %) (P < 0.01). Wall thickness was also significantly different among the three algorithms with FBP (2.09 mm) demonstrating thicker walls than ASIR (2.00 mm) and MBIR (1.88 mm) (P < 0.01). Airway phantom analysis revealed that MBIR showed the most accurate value for airway measurements. The three algorithms presented different EIs and wall thicknesses, decreasing in the order of FBP, ASIR and MBIR. Thus, care should be taken in selecting the appropriate IR algorithm on quantitative analysis of the lung. (orig.)

  10. Implementation techniques and acceleration of DBPF reconstruction algorithm based on GPGPU for helical cone beam CT

    International Nuclear Information System (INIS)

    Shen Le; Xing Yuxiang

    2010-01-01

    The derivative back-projection filtered (DBPF) algorithm for helical cone-beam CT is a newly developed exact reconstruction method. Due to its large computational complexity, the reconstruction is rather slow for practical use. A general purpose graphics processing unit (GPGPU) is an SIMD parallel hardware architecture with powerful floating-point operation capacity. In this paper, we propose a new method for PI-line choice and sampling grid, and a parallel PI-line reconstruction algorithm implemented on NVIDIA's Compute Unified Device Architecture (CUDA). Numerical simulation studies are carried out to validate our method. Compared with a conventional CPU implementation, the CUDA-accelerated method provides images of the same quality with a speedup factor of 318. Optimization strategies for the GPU acceleration are presented. Finally, the influence of the parameters of the PI-line samples on the reconstruction speed and image quality is discussed. (authors)

  11. Evaluation of digital breast tomosynthesis reconstruction algorithms using synchrotron radiation in standard geometry

    International Nuclear Information System (INIS)

    Bliznakova, K.; Kolitsi, Z.; Speller, R. D.; Horrocks, J. A.; Tromba, G.; Pallikarakis, N.

    2010-01-01

    Purpose: In this article, the image quality of volumes reconstructed by four algorithms for digital tomosynthesis, applied in the case of the breast, is investigated using synchrotron radiation. Methods: An angular data set of 21 images of a complex phantom with heterogeneous tissue-mimicking background was obtained using the SYRMEP beamline at the ELETTRA Synchrotron Light Laboratory, Trieste, Italy. The irradiated part was reconstructed using the multiple projection algorithm (MPA), filtered backprojection with a ramp filter followed by a Hamming window (FBP-RH) and filtered backprojection with a ramp filter alone (FBP-R). Additionally, an algorithm for reducing the noise in reconstructed planes, based on noise mask subtraction from the planes of the volume originally reconstructed using the MPA (MPA-NM), has been further developed. The reconstruction techniques were evaluated in terms of calculation and comparison of the contrast-to-noise ratio (CNR) and the artifact spread function. Results: It was found that the MPA-NM resulted in higher CNR, comparable with the CNR of FBP-RH for high contrast details. Low contrast objects are well visualized and characterized by high CNR using the simple MPA and the MPA-NM. In addition, an evaluation of the image quality of the reconstructed features in terms of CNR and visual appearance as a function of the initial number of projection images and the reconstruction arc was carried out. Slices reconstructed with more input projection images result in fewer reconstruction artifacts and higher detail CNR, while those reconstructed from projection images acquired over a reduced angular range show pronounced streak artifacts. Conclusions: Of the reconstruction algorithms implemented, the MPA-NM and MPA are a good choice for detecting low contrast objects, while the FBP-RH, FBP-R, and MPA-NM provide high CNR and well outlined edges in the case of microcalcifications.
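
    As a reminder of what the FBP-RH filtering step looks like, the sketch below builds a ramp filter apodized by a Hamming window and applies it row-wise to a sinogram; the discretization details are illustrative.

```python
import numpy as np

def ramp_hamming(n_det):
    """Ramp filter apodized by a Hamming window (frequency response)."""
    f = np.fft.fftfreq(n_det)
    hamming = 0.54 + 0.46 * np.cos(2 * np.pi * f)   # 1 at DC, 0.08 at Nyquist
    return np.abs(f) * hamming

def filter_sinogram(sino):
    """Row-wise frequency-domain filtering of a (n_views, n_det) sinogram."""
    H = ramp_hamming(sino.shape[-1])
    return np.real(np.fft.ifft(np.fft.fft(sino, axis=-1) * H, axis=-1))
```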

  12. Comparison of the effects of model-based iterative reconstruction and filtered back projection algorithms on software measurements in pulmonary subsolid nodules.

    Science.gov (United States)

    Cohen, Julien G; Kim, Hyungjin; Park, Su Bin; van Ginneken, Bram; Ferretti, Gilbert R; Lee, Chang Hyun; Goo, Jin Mo; Park, Chang Min

    2017-08-01

    To evaluate the differences between filtered back projection (FBP) and model-based iterative reconstruction (MBIR) algorithms on semi-automatic measurements in subsolid nodules (SSNs). Unenhanced CT scans of 73 SSNs obtained using the same protocol and reconstructed with both FBP and MBIR algorithms were evaluated by two radiologists. Diameter, mean attenuation, mass and volume of whole nodules and their solid components were measured. Intra- and interobserver variability and differences between FBP and MBIR were then evaluated using the Bland-Altman method and Wilcoxon tests. Longest diameter, volume and mass of nodules and those of their solid components were significantly higher using MBIR (p < 0.05); the two algorithms thus differed with respect to the diameter, volume and mass of nodules and their solid components. There were no significant differences in intra- or interobserver variability between FBP and MBIR (p > 0.05). Semi-automatic measurements of SSNs significantly differed between FBP and MBIR; however, the differences were within the range of measurement variability. • Intra- and interobserver reproducibility of measurements did not differ between FBP and MBIR. • Differences in SSNs' semi-automatic measurement induced by reconstruction algorithms were not clinically significant. • Semi-automatic measurement may be conducted regardless of reconstruction algorithm. • SSNs' semi-automated classification agreement (pure vs. part-solid) did not significantly differ between algorithms.

  13. Algorithms

    Indian Academy of Sciences (India)

    algorithm design technique called 'divide-and-conquer'. One of ... Turtle graphics, September 1996. 5. ... whole list named 'PO' is a pointer to the first element of the list; ... Program for computing matrices X and Y and placing the result in C *).

  14. Algorithms

    Indian Academy of Sciences (India)

    algorithm that it is implicitly understood that we know how to generate the next natural ... Explicit comparisons are made in line (1) where maximum and minimum is ... It can be shown that the function T(n) = 3n/2 - 2 is the solution to the above ...
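
    The T(n) = 3n/2 - 2 bound comes from processing elements in pairs: one comparison inside each pair, then one comparison against the running maximum and one against the running minimum. A sketch:

```python
def max_min(a):
    """Maximum and minimum in about 3n/2 - 2 comparisons, pairwise."""
    n = len(a)
    if n % 2:                       # odd length: seed with one element
        mx = mn = a[0]; i = 1
    else:                           # even length: seed with a compared pair
        mx, mn = (a[0], a[1]) if a[0] > a[1] else (a[1], a[0]); i = 2
    while i < n:
        hi, lo = (a[i], a[i + 1]) if a[i] > a[i + 1] else (a[i + 1], a[i])
        if hi > mx: mx = hi          # one comparison against the max
        if lo < mn: mn = lo          # one comparison against the min
        i += 2
    return mx, mn
```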

  15. Algorithms

    Indian Academy of Sciences (India)

    will become clear in the next article when we discuss a simple logo like programming language. ... Rod B may be used as an auxiliary store. The problem is to find an algorithm which performs this task. ... N0 disks are moved from A to B using C as the auxiliary rod. • move_disk(A, C); the (N0 + 1)th disk is moved from A to C directly ...
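
    The recursive scheme the snippet outlines, in runnable form (names follow the snippet, lightly regularized):

```python
def move_disk(src, dst):
    print(f"move top disk from {src} to {dst}")

def hanoi(n, src="A", dst="C", aux="B"):
    """Move n disks from src to dst, using aux as the auxiliary rod."""
    if n == 0:
        return
    hanoi(n - 1, src, aux, dst)   # clear the top n-1 disks onto aux
    move_disk(src, dst)           # the nth disk goes from src to dst directly
    hanoi(n - 1, aux, dst, src)   # restack the n-1 disks on top of it
```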

  16. Finite Element Modelling of a Field-Sensed Magnetic Suspended System for Accurate Proximity Measurement Based on a Sensor Fusion Algorithm with Unscented Kalman Filter.

    Science.gov (United States)

    Chowdhury, Amor; Sarjaš, Andrej

    2016-09-15

    The presented paper describes accurate distance measurement for a field-sensed magnetic suspension system. The proximity measurement is based on a Hall effect sensor. The proximity sensor is installed directly on the lower surface of the electro-magnet, which means that it is very sensitive to external magnetic influences and disturbances. External disturbances interfere with the information signal and reduce the usability and reliability of the proximity measurements and, consequently, the whole application operation. A sensor fusion algorithm is deployed for the aforementioned reasons. The sensor fusion algorithm is based on the Unscented Kalman Filter, where a nonlinear dynamic model was derived with the Finite Element Modelling approach. The advantage of such modelling is a more accurate dynamic model parameter estimation, especially in the case when the real structure, materials and dimensions of the real-time application are known. The novelty of the paper is the design of a compact electro-magnetic actuator with a built-in low cost proximity sensor for accurate proximity measurement of the magnetic object. The paper successively presents a modelling procedure with the finite element method, design and parameter settings of a sensor fusion algorithm with Unscented Kalman Filter and, finally, the implementation procedure and results of real-time operation.

  17. The influence of image reconstruction algorithms on linear thorax EIT image analysis of ventilation

    International Nuclear Information System (INIS)

    Zhao, Zhanqi; Möller, Knut; Frerichs, Inéz; Pulletz, Sven; Müller-Lisse, Ullrich

    2014-01-01

    Analysis methods of electrical impedance tomography (EIT) images based on different reconstruction algorithms were examined. EIT measurements were performed on eight mechanically ventilated patients with acute respiratory distress syndrome. A maneuver with step increase of airway pressure was performed. EIT raw data were reconstructed offline with (1) filtered back-projection (BP); (2) the Dräger algorithm based on linearized Newton–Raphson (DR); (3) the GREIT (Graz consensus reconstruction algorithm for EIT) reconstruction algorithm with a circular forward model (GRC) and (4) GREIT with individual thorax geometry (GRT). Individual thorax contours were automatically determined from the routine computed tomography images. Five indices were calculated on the resulting EIT images respectively: (a) the ratio between tidal and deep inflation impedance changes; (b) tidal impedance changes in the right and left lungs; (c) center of gravity; (d) the global inhomogeneity index and (e) ventilation delay at mid-dorsal regions. No significant differences were found in any of the examined indices among the four reconstruction algorithms (p > 0.2, Kruskal–Wallis test). The algorithms examined for EIT image reconstruction do not influence the selected indices derived from the EIT image analysis. Indices that were validated for images from one reconstruction algorithm are also valid for other reconstruction algorithms. (paper)
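
    Of the five indices, the global inhomogeneity (GI) index has a compact closed form: the sum of absolute deviations of the tidal impedance changes from their median, normalized by their sum. A sketch, with the lung mask assumed given:

```python
import numpy as np

def gi_index(tidal_image, lung_mask):
    """Global inhomogeneity index of a tidal EIT image."""
    di = tidal_image[lung_mask]                  # tidal impedance changes
    return np.sum(np.abs(di - np.median(di))) / np.sum(di)
```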

  18. The influence of image reconstruction algorithms on linear thorax EIT image analysis of ventilation.

    Science.gov (United States)

    Zhao, Zhanqi; Frerichs, Inéz; Pulletz, Sven; Müller-Lisse, Ullrich; Möller, Knut

    2014-06-01

    Analysis methods of electrical impedance tomography (EIT) images based on different reconstruction algorithms were examined. EIT measurements were performed on eight mechanically ventilated patients with acute respiratory distress syndrome. A maneuver with step increase of airway pressure was performed. EIT raw data were reconstructed offline with (1) filtered back-projection (BP); (2) the Dräger algorithm based on linearized Newton-Raphson (DR); (3) the GREIT (Graz consensus reconstruction algorithm for EIT) reconstruction algorithm with a circular forward model (GR(C)) and (4) GREIT with individual thorax geometry (GR(T)). Individual thorax contours were automatically determined from the routine computed tomography images. Five indices were calculated on the resulting EIT images respectively: (a) the ratio between tidal and deep inflation impedance changes; (b) tidal impedance changes in the right and left lungs; (c) center of gravity; (d) the global inhomogeneity index and (e) ventilation delay at mid-dorsal regions. No significant differences were found in any of the examined indices among the four reconstruction algorithms (p > 0.2, Kruskal-Wallis test). The algorithms examined for EIT image reconstruction do not influence the selected indices derived from the EIT image analysis. Indices that were validated for images from one reconstruction algorithm are also valid for other reconstruction algorithms.

  19. SU-F-T-545: Dosimetric and Radiobiological Evaluation of Dose Calculation Algorithms On Prostate Stereotactic Body Radiotherapy Using Conventional Flattened and Flattening-Filter-Free Beam

    International Nuclear Information System (INIS)

    Kang, S; Suh, T; Chung, J; Eom, K; Lee, J

    2016-01-01

    Purpose: The purpose of this study is to evaluate the dosimetric and radiobiological impact of Acuros XB (AXB) and Anisotropic Analytic Algorithm (AAA) dose calculation algorithms on prostate stereotactic body radiation therapy plans with both conventional flattened (FF) and flattening-filter free (FFF) modes. Methods: For thirteen patients with prostate cancer, SBRT planning was performed using 10-MV photon beam with FF and FFF modes. The total dose prescribed to the PTV was 42.7 Gy in 7 fractions. All plans were initially calculated using the AAA algorithm in the Eclipse treatment planning system (11.0.34), and then were re-calculated using AXB with the same MUs and MLC files. The four types of plans for different algorithms and beam modes were compared in terms of homogeneity and conformity. To evaluate the radiobiological impact, tumor control probability (TCP) and normal tissue complication probability (NTCP) calculations were performed. Results: For the PTV, both calculation algorithms and beam modes led to comparable homogeneity and conformity. However, the averaged TCP values in AXB plans were always lower than in AAA plans, with an average difference of 5.3% and 6.1% for the 10-MV FFF and FF beam, respectively. In addition, the averaged NTCP values for organs at risk (OARs) were comparable. Conclusion: This study showed that prostate SBRT plans had comparable dosimetric results across dose calculation algorithms and delivery beam modes. For the biological results, even though NTCP values for both calculation algorithms and beam modes were similar, AXB plans produced slightly lower TCP compared to the AAA plans.

  20. SU-F-T-545: Dosimetric and Radiobiological Evaluation of Dose Calculation Algorithms On Prostate Stereotactic Body Radiotherapy Using Conventional Flattened and Flattening-Filter-Free Beam

    Energy Technology Data Exchange (ETDEWEB)

    Kang, S; Suh, T [The catholic university of Korea, Seoul (Korea, Republic of); Chung, J; Eom, K [Seoul National University Bundang Hospital (Korea, Republic of); Lee, J [Konkuk University Medical Center (Korea, Republic of)

    2016-06-15

    Purpose: The purpose of this study is to evaluate the dosimetric and radiobiological impact of Acuros XB (AXB) and Anisotropic Analytic Algorithm (AAA) dose calculation algorithms on prostate stereotactic body radiation therapy plans with both conventional flattened (FF) and flattening-filter free (FFF) modes. Methods: For thirteen patients with prostate cancer, SBRT planning was performed using 10-MV photon beam with FF and FFF modes. The total dose prescribed to the PTV was 42.7 Gy in 7 fractions. All plans were initially calculated using the AAA algorithm in the Eclipse treatment planning system (11.0.34), and then were re-calculated using AXB with the same MUs and MLC files. The four types of plans for different algorithms and beam modes were compared in terms of homogeneity and conformity. To evaluate the radiobiological impact, tumor control probability (TCP) and normal tissue complication probability (NTCP) calculations were performed. Results: For the PTV, both calculation algorithms and beam modes led to comparable homogeneity and conformity. However, the averaged TCP values in AXB plans were always lower than in AAA plans, with an average difference of 5.3% and 6.1% for the 10-MV FFF and FF beam, respectively. In addition, the averaged NTCP values for organs at risk (OARs) were comparable. Conclusion: This study showed that prostate SBRT plans had comparable dosimetric results across dose calculation algorithms and delivery beam modes. For the biological results, even though NTCP values for both calculation algorithms and beam modes were similar, AXB plans produced slightly lower TCP compared to the AAA plans.
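
    The abstract does not specify the TCP model used; a common choice, shown below as a hedged sketch, is the Poisson TCP under the linear-quadratic model. All parameter values (alpha, beta, clonogen number) are illustrative placeholders, not the study's.

        import numpy as np

        def tcp_poisson_lq(dose, n_fractions, alpha=0.15, beta=0.05, n0=1e7):
            """Poisson TCP with linear-quadratic cell survival.
            dose: array of total dose D (Gy) per tumour voxel; d = D / n."""
            D = np.asarray(dose, dtype=float)
            d = D / n_fractions
            survival = np.exp(-alpha * D - beta * D * d)   # LQ surviving fraction
            clonogens = n0 / D.size                        # clonogens per voxel
            return float(np.exp(-np.sum(clonogens * survival)))

        # 42.7 Gy in 7 fractions, as prescribed in the study, on a uniform toy PTV
        print(tcp_poisson_lq(np.full(1000, 42.7), 7))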

  1. A search algorithm to meta-optimize the parameters for an extended Kalman filter to improve classification on hyper-temporal images

    CSIR Research Space (South Africa)

    Salmon

    2012-07-01

    Full Text Available. Abstract only: IEEE International Geoscience and Remote Sensing Symposium, Munich, Germany, 22-27 July 2012. A search algorithm to meta-optimize the parameters for an extended Kalman filter to improve classification on hyper-temporal images. B.P. Salmon, ...

  2. Analytic reconstruction algorithms for triple-source CT with horizontal data truncation

    International Nuclear Information System (INIS)

    Chen, Ming; Yu, Hengyong

    2015-01-01

    Purpose: This paper explores a triple-source imaging method with horizontal data truncation to enlarge the field of view (FOV) for big objects. Methods: The study is conducted by using theoretical analysis, mathematical deduction, and numerical simulations. The proposed algorithms are implemented in C++ and MATLAB. While the basic platform is constructed in MATLAB, the computationally intensive segments are coded in C++ and linked via a MEX interface. Results: A triple-source circular scanning configuration with horizontal data truncation is developed, where three pairs of x-ray sources and detectors are unevenly distributed on the same circle to cover the whole imaging object. For this triple-source configuration, a fan-beam filtered backprojection-type algorithm is derived for truncated full-scan projections without data rebinning. The algorithm is also extended for horizontally truncated half-scan projections and cone-beam projections in a Feldkamp-type framework. Using this method, the FOV is enlarged twofold to threefold to scan bigger objects with high speed and quality. The numerical simulation results confirm the correctness and effectiveness of the developed algorithms. Conclusions: The triple-source scanning configuration with horizontal data truncation can not only keep most of the advantages of a traditional multisource system but also cover a larger FOV for big imaging objects. In addition, because the filtering is shift-invariant, the proposed algorithms are very fast and easily parallelized on graphics processing units.
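
    The shift-invariant filtering that makes such algorithms fast is the same ramp filtering used in ordinary filtered backprojection. The sketch below shows the generic parallel-beam FBP structure (ramp filter per projection, then backprojection); it is not the paper's triple-source fan-beam algorithm, just the structure it builds on.

        import numpy as np

        def fbp_parallel(sinogram, angles_deg):
            """Generic parallel-beam FBP: ramp-filter each projection in the
            Fourier domain (shift-invariant), then backproject pixel by pixel."""
            n_ang, n_det = sinogram.shape
            ramp = np.abs(np.fft.fftfreq(n_det))                  # ideal ramp |w|
            filtered = np.real(np.fft.ifft(np.fft.fft(sinogram, axis=1) * ramp,
                                           axis=1))
            recon = np.zeros((n_det, n_det))
            c = n_det // 2
            y, x = np.mgrid[:n_det, :n_det] - c
            for proj, ang in zip(filtered, np.deg2rad(angles_deg)):
                t = x * np.cos(ang) + y * np.sin(ang) + c         # detector coord
                recon += proj[np.clip(np.round(t).astype(int), 0, n_det - 1)]
            return recon * np.pi / n_ang

        sino = np.zeros((180, 128))
        sino[:, 64] = 1.0                       # toy sinogram of a central point
        image = fbp_parallel(sino, np.arange(180))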

  3. Development of an optimal automatic control law and filter algorithm for steep glideslope capture and glideslope tracking

    Science.gov (United States)

    Halyo, N.

    1976-01-01

    A digital automatic control law to capture a steep glideslope and track the glideslope to a specified altitude is developed for the longitudinal/vertical dynamics of a CTOL aircraft using modern estimation and control techniques. The control law uses a constant gain Kalman filter to process guidance information from the microwave landing system and acceleration data from body-mounted accelerometers. The filter outputs navigation data and wind velocity estimates which are used in controlling the aircraft. Results from a digital simulation of the aircraft dynamics and the control law are presented for various wind conditions.
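
    A constant gain Kalman filter of the kind described reduces each cycle to a fixed predict/correct step, since the gain is computed offline. The sketch below illustrates this structure; the matrices are illustrative placeholders, not the CTOL aircraft model.

        import numpy as np

        # Constant-gain (steady-state) Kalman filter: K is fixed offline, so
        # each step is just a model prediction plus a residual correction.
        A = np.array([[1.0, 0.1], [0.0, 1.0]])   # state transition (toy model)
        B = np.array([[0.0], [0.1]])             # control input
        H = np.array([[1.0, 0.0]])               # we only measure the first state
        K = np.array([[0.5], [0.9]])             # precomputed steady-state gain

        def step(x, u, z):
            x_pred = A @ x + B @ u                # propagate with the model
            return x_pred + K @ (z - H @ x_pred)  # correct with the residual

        x = np.zeros((2, 1))
        for z in [10.2, 9.8, 9.5]:                # noisy measurements
            x = step(x, np.array([[0.0]]), np.array([[z]]))
        print(x.ravel())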

  4. Genetic Algorithm-Based Design of the Active Damping for an LCL-Filter Three-Phase Active Rectifier

    DEFF Research Database (Denmark)

    Liserre, Marco; Aquila, Antonio Dell; Blaabjerg, Frede

    2004-01-01

    Active rectifiers/inverters are increasingly used in regenerative systems and distributed power systems. Typically, the interface between the grid and the rectifier is either an inductor or an LCL-filter. The use of an LCL-filter mitigates the switching ripple injected in the grid...... by a three-phase active rectifier. However, stability problems can arise in the current control loop. In order to overcome them, a damping resistor can be inserted, at the price of a reduction of efficiency. The use of active damping by means of control may seem attractive, but it is often limited by the use...

  5. Fast backprojection-based reconstruction of spectral-spatial EPR images from projections with the constant sweep of a magnetic field.

    Science.gov (United States)

    Komarov, Denis A; Hirata, Hiroshi

    2017-08-01

    In this paper, we introduce a procedure for the reconstruction of spectral-spatial EPR images using projections acquired with the constant sweep of a magnetic field. The application of a constant field-sweep and a predetermined data sampling rate simplifies the requirements for EPR imaging instrumentation and facilitates the backprojection-based reconstruction of spectral-spatial images. The proposed approach was applied to the reconstruction of a four-dimensional numerical phantom and to actual spectral-spatial EPR measurements. Image reconstruction using projections with a constant field-sweep was three times faster than the conventional approach with the application of a pseudo-angle and a scan range that depends on the applied field gradient. Spectral-spatial EPR imaging with a constant field-sweep for data acquisition only slightly reduces the signal-to-noise ratio or functional resolution of the resultant images and can be applied together with any common backprojection-based reconstruction algorithm. Copyright © 2017 Elsevier Inc. All rights reserved.

  6. SU-E-T-339: Dosimetric Verification of Acuros XB Dose Calculation Algorithm On An Air Cavity for 6-MV Flattening Filter-Free Beam

    International Nuclear Information System (INIS)

    Kang, S; Suh, T; Chung, J

    2015-01-01

    Purpose: This study was designed to verify the accuracy of the Acuros XB (AXB) dose calculation algorithm in an air cavity for a single radiation field using a 6-MV flattening filter-free (FFF) beam. Methods: A rectangular slab phantom containing an air cavity was made for this study. The CT images of the phantom for dose calculation were scanned with and without film at the measurement depths (4.5, 5.5, 6.5 and 7.5 cm). The central axis doses (CADs) and the off-axis doses (OADs) were measured by film and calculated with the Analytical Anisotropic Algorithm (AAA) and AXB for field sizes ranging from 2 × 2 to 5 × 5 cm² of 6-MV FFF beams. Each algorithm was labeled AXB-w or AAA-w when the film was included in the phantom for dose calculation, and AXB-w/o or AAA-w/o when calculated without the film. The calculated OADs for both algorithms were compared with the measured OADs, and difference values were determined using root mean square error (RMSE) and gamma evaluation. Results: The percentage differences (%Diffs) between the measured and calculated CADs showed the best agreement for AXB-w. For both algorithms, the %Diffs were smaller with the film than without it. The %Diffs for both algorithms decreased with increasing field size and increased with depth. RMSEs for AXB-w were within 10.32% for both the inner profile and the penumbra, while the corresponding values for AAA-w reached 96.50%. Conclusion: This study demonstrated that dose calculation with AXB within an air cavity is more accurate than with AAA when compared to the measured dose. Furthermore, we found that AXB-w was superior to AXB-w/o in this region when compared against the measurements.

  7. An Improved Global Harmony Search Algorithm for the Identification of Nonlinear Discrete-Time Systems Based on Volterra Filter Modeling

    Directory of Open Access Journals (Sweden)

    Zongyan Li

    2016-01-01

    Full Text Available This paper describes an improved global harmony search (IGHS) algorithm for identifying nonlinear discrete-time systems based on a second-order Volterra model. The IGHS is an improved version of the novel global harmony search (NGHS) algorithm, and it makes two significant improvements on the NGHS. First, the genetic mutation operation is modified by combining the normal distribution and the Cauchy distribution, which enables the IGHS to fully explore and exploit the solution space. Second, opposition-based learning (OBL) is introduced and modified to improve the quality of harmony vectors. The IGHS algorithm is applied to two numerical examples: a nonlinear discrete-time rational system and a real heat exchanger. The results of the IGHS are compared with those of three other methods, and the IGHS is verified to be more effective at solving the above two problems with different input signals and system memory sizes.
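
    The normal/Cauchy mixed mutation can be sketched as below: each mutated component draws its perturbation from a normal distribution (local exploitation) or a Cauchy distribution (heavy-tailed exploration) with equal probability. Parameter names and values are illustrative; the paper's exact update rule may differ.

        import numpy as np

        rng = np.random.default_rng(0)

        def mixed_mutation(x, lo, hi, pm=0.1, scale=0.05):
            """Perturb each component with probability pm, drawing from a normal
            (exploitation) or Cauchy (heavy-tailed exploration) distribution."""
            x = x.copy()
            for i in range(x.size):
                if rng.random() < pm:
                    noise = (rng.standard_normal() if rng.random() < 0.5
                             else rng.standard_cauchy())
                    x[i] += scale * (hi[i] - lo[i]) * noise
            return np.clip(x, lo, hi)

        x = mixed_mutation(np.zeros(5), lo=np.full(5, -1.0), hi=np.full(5, 1.0))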

  8. Adaptive iterative dose reduction algorithm in CT: Effect on image quality compared with filtered back projection in body phantoms of different sizes

    International Nuclear Information System (INIS)

    Kim, Milim; Lee, Jeong Min; Son, Hyo Shin; Han, Joon Koo; Choi, Byung Ihn; Yoon, Jeong Hee; Choi, Jin Woo

    2014-01-01

    To evaluate the impact of the adaptive iterative dose reduction (AIDR) three-dimensional (3D) algorithm in CT on noise reduction and image quality compared to the filtered back projection (FBP) algorithm, and to compare the effectiveness of AIDR 3D on noise reduction according to body habitus using phantoms of different sizes. Three different-sized phantoms with diameters of 24 cm, 30 cm, and 40 cm were built up using the American College of Radiology CT accreditation phantom and layers of pork belly fat. Each phantom was scanned eight times using different mAs. Images were reconstructed using FBP and three different strengths of AIDR 3D. The image noise, the contrast-to-noise ratio (CNR) and the signal-to-noise ratio (SNR) of the phantom were assessed. Two radiologists assessed the image quality of the 4 image sets in consensus. The effectiveness of AIDR 3D on noise reduction compared with FBP was also compared according to the phantom sizes. Adaptive iterative dose reduction 3D significantly reduced the image noise compared with FBP and enhanced the SNR and CNR (p < 0.05) with improved image quality (p < 0.05). When a stronger reconstruction algorithm was used, a greater increase of SNR and CNR as well as greater noise reduction was achieved (p < 0.05). The noise reduction effect of AIDR 3D was significantly greater in the 40-cm phantom than in the 24-cm or 30-cm phantoms (p < 0.05). The AIDR 3D algorithm is effective at reducing image noise as well as improving image-quality parameters compared with the FBP algorithm, and its effectiveness may increase as the phantom size increases.
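
    The noise, SNR and CNR figures used in such comparisons follow the usual ROI-based definitions, sketched below; the ROI placement and image values are illustrative, not the study's.

        import numpy as np

        def snr_cnr(image, roi_obj, roi_bg):
            """SNR and CNR from boolean object/background ROI masks."""
            mu_o, mu_b = image[roi_obj].mean(), image[roi_bg].mean()
            sd_b = image[roi_bg].std()                 # image noise estimate
            return mu_o / sd_b, abs(mu_o - mu_b) / sd_b

        img = np.random.normal(100.0, 10.0, (64, 64))
        img[20:40, 20:40] += 50.0                      # object insert
        obj = np.zeros(img.shape, dtype=bool); obj[22:38, 22:38] = True
        bg = np.zeros(img.shape, dtype=bool); bg[:10, :10] = True
        print(snr_cnr(img, obj, bg))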

  9. Adaptive iterative dose reduction algorithm in CT: Effect on image quality compared with filtered back projection in body phantoms of different sizes

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Milim; Lee, Jeong Min; Son, Hyo Shin; Han, Joon Koo; Choi, Byung Ihn [College of Medicine, Seoul National University, Seoul (Korea, Republic of); Yoon, Jeong Hee; Choi, Jin Woo [Dept. of Radiology, Seoul National University Hospital, Seoul (Korea, Republic of)

    2014-04-15

    To evaluate the impact of the adaptive iterative dose reduction (AIDR) three-dimensional (3D) algorithm in CT on noise reduction and image quality compared to the filtered back projection (FBP) algorithm, and to compare the effectiveness of AIDR 3D on noise reduction according to body habitus using phantoms of different sizes. Three different-sized phantoms with diameters of 24 cm, 30 cm, and 40 cm were built up using the American College of Radiology CT accreditation phantom and layers of pork belly fat. Each phantom was scanned eight times using different mAs. Images were reconstructed using FBP and three different strengths of AIDR 3D. The image noise, the contrast-to-noise ratio (CNR) and the signal-to-noise ratio (SNR) of the phantom were assessed. Two radiologists assessed the image quality of the 4 image sets in consensus. The effectiveness of AIDR 3D on noise reduction compared with FBP was also compared according to the phantom sizes. Adaptive iterative dose reduction 3D significantly reduced the image noise compared with FBP and enhanced the SNR and CNR (p < 0.05) with improved image quality (p < 0.05). When a stronger reconstruction algorithm was used, a greater increase of SNR and CNR as well as greater noise reduction was achieved (p < 0.05). The noise reduction effect of AIDR 3D was significantly greater in the 40-cm phantom than in the 24-cm or 30-cm phantoms (p < 0.05). The AIDR 3D algorithm is effective at reducing image noise as well as improving image-quality parameters compared with the FBP algorithm, and its effectiveness may increase as the phantom size increases.

  10. Comparison of the effects of model-based iterative reconstruction and filtered back projection algorithms on software measurements in pulmonary subsolid nodules

    Energy Technology Data Exchange (ETDEWEB)

    Cohen, Julien G. [Seoul National University College of Medicine, Department of Radiology, Seoul (Korea, Republic of); Seoul National University Medical Research Center, Institute of Radiation Medicine, Seoul (Korea, Republic of); Centre Hospitalier Universitaire de Grenoble, Clinique Universitaire de Radiologie et Imagerie Medicale (CURIM), Universite Grenoble Alpes, Grenoble Cedex 9 (France); Kim, Hyungjin; Park, Su Bin [Seoul National University College of Medicine, Department of Radiology, Seoul (Korea, Republic of); Seoul National University Medical Research Center, Institute of Radiation Medicine, Seoul (Korea, Republic of); Ginneken, Bram van [Radboud University Nijmegen Medical Center, Department of Radiology and Nuclear Medicine, Nijmegen (Netherlands); Ferretti, Gilbert R. [Centre Hospitalier Universitaire de Grenoble, Clinique Universitaire de Radiologie et Imagerie Medicale (CURIM), Universite Grenoble Alpes, Grenoble Cedex 9 (France); Institut A Bonniot, INSERM U 823, La Tronche (France); Lee, Chang Hyun [Seoul National University College of Medicine, Department of Radiology, Seoul (Korea, Republic of); Goo, Jin Mo; Park, Chang Min [Seoul National University College of Medicine, Department of Radiology, Seoul (Korea, Republic of); Seoul National University Medical Research Center, Institute of Radiation Medicine, Seoul (Korea, Republic of); Seoul National University College of Medicine, Cancer Research Institute, Seoul (Korea, Republic of)

    2017-08-15

    To evaluate the differences between filtered back projection (FBP) and model-based iterative reconstruction (MBIR) algorithms on semi-automatic measurements in subsolid nodules (SSNs). Unenhanced CT scans of 73 SSNs obtained using the same protocol and reconstructed with both FBP and MBIR algorithms were evaluated by two radiologists. Diameter, mean attenuation, mass and volume of whole nodules and their solid components were measured. Intra- and interobserver variability and differences between FBP and MBIR were then evaluated using the Bland-Altman method and Wilcoxon tests. Longest diameter, volume and mass of nodules and those of their solid components were significantly higher using MBIR (p < 0.05) with mean differences of 1.1% (limits of agreement, -6.4 to 8.5%), 3.2% (-20.9 to 27.3%) and 2.9% (-16.9 to 22.7%) and 3.2% (-20.5 to 27%), 6.3% (-51.9 to 64.6%), 6.6% (-50.1 to 63.3%), respectively. The limits of agreement between FBP and MBIR were within the range of intra- and interobserver variability for both algorithms with respect to the diameter, volume and mass of nodules and their solid components. There were no significant differences in intra- or interobserver variability between FBP and MBIR (p > 0.05). Semi-automatic measurements of SSNs significantly differed between FBP and MBIR; however, the differences were within the range of measurement variability. (orig.)

  11. Joint Estimation of the Electric Vehicle Power Battery State of Charge Based on the Least Squares Method and the Kalman Filter Algorithm

    Directory of Open Access Journals (Sweden)

    Xiangwei Guo

    2016-02-01

    Full Text Available An estimation of the power battery state of charge (SOC) is related to the energy management, the battery cycle life and the use cost of electric vehicles. When a lithium-ion power battery is used in an electric vehicle, the SOC displays a very strong time-dependent nonlinearity under the influence of random factors, such as the working conditions and the environment. Hence, research on estimating the SOC of a power battery for an electric vehicle is of great theoretical significance and application value. In this paper, according to the dynamic response of the power battery terminal voltage during a discharging process, the second-order RC circuit is first used as the equivalent model of the power battery. Subsequently, on the basis of this model, the least squares method (LS) with a forgetting factor and the adaptive unscented Kalman filter (AUKF) algorithm are used jointly in the estimation of the power battery SOC. Simulation experiments show that the joint estimation algorithm proposed in this paper has higher precision and better convergence from an initial value error than a single AUKF algorithm.
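
    The least squares identification with a forgetting factor referred to here is typically run recursively. The sketch below shows one recursive least squares (RLS) update with forgetting factor lam; the toy regression at the end is illustrative, not the second-order RC battery model.

        import numpy as np

        def rls_step(theta, P, phi, y, lam=0.98):
            """One recursive least squares update with forgetting factor lam."""
            phi = phi.reshape(-1, 1)
            k = P @ phi / (lam + phi.T @ P @ phi)          # gain vector
            theta = theta + k.ravel() * (y - (phi.T @ theta).item())
            P = (P - k @ phi.T @ P) / lam
            return theta, P

        # toy identification of y = 2*x1 - x2 from noisy samples
        theta, P = np.zeros(2), np.eye(2) * 100.0
        rng = np.random.default_rng(1)
        for _ in range(200):
            phi = rng.standard_normal(2)
            y = 2.0 * phi[0] - phi[1] + 0.01 * rng.standard_normal()
            theta, P = rls_step(theta, P, phi, y)
        print(theta)                                       # approx. [2, -1]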

  12. Selection vector filter framework

    Science.gov (United States)

    Lukac, Rastislav; Plataniotis, Konstantinos N.; Smolka, Bogdan; Venetsanopoulos, Anastasios N.

    2003-10-01

    We provide a unified framework of nonlinear vector techniques outputting the lowest ranked vector. The proposed framework constitutes a generalized filter class for multichannel signal processing. A new class of nonlinear selection filters is based on robust order-statistic theory and the minimization of a weighted distance function to the other input samples. The proposed method can be designed to perform a variety of filtering operations, including previously developed filtering techniques such as the vector median, the basic vector directional filter, the directional distance filter, weighted vector median filters and weighted directional filters. A wide range of filtering operations is guaranteed by the filter structure, with two independent weight vectors for the angular and distance domains of the vector space. In order to adapt the filter parameters to varying signal and noise statistics, we also provide generalized optimization algorithms that take advantage of weighted median filters and of the relationship between the standard median filter and the vector median filter. Thus, we can deal with both statistical and deterministic aspects of the filter design process. It is shown that the proposed method has the required properties: the capability of modelling the underlying system in the application at hand, robustness with respect to errors in the model of the underlying system, the availability of a training procedure and, finally, simplicity of filter representation, analysis, design and implementation. Simulation studies also indicate that the new filters are computationally attractive and have excellent performance in environments corrupted by bit errors and impulsive noise.
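
    The vector median filter, the simplest member of this selection-filter class, picks the input vector minimizing the aggregate distance to all other vectors in the window, as sketched below.

        import numpy as np

        def vector_median(window):
            """Return the vector in the window minimizing the sum of L2
            distances to all other vectors (the vector median)."""
            d = np.linalg.norm(window[:, None, :] - window[None, :, :], axis=2)
            return window[np.argmin(d.sum(axis=1))]

        # 3x3 RGB window flattened to 9 vectors, one corrupted by impulse noise
        w = np.tile([120.0, 80.0, 60.0], (9, 1))
        w[4] = [255.0, 0.0, 255.0]
        print(vector_median(w))            # the outlier is never selected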

  13. An Idle-State Detection Algorithm for SSVEP-Based Brain-Computer Interfaces Using a Maximum Evoked Response Spatial Filter.

    Science.gov (United States)

    Zhang, Dan; Huang, Bisheng; Wu, Wei; Li, Siliang

    2015-11-01

    Although accurate recognition of the idle state is essential for the application of brain-computer interfaces (BCIs) in real-world situations, it remains a challenging task due to the variability of the idle state. In this study, a novel algorithm was proposed for idle state detection in a steady-state visual evoked potential (SSVEP)-based BCI. The proposed algorithm aims to solve the idle state detection problem by constructing a better model of the control states. For feature extraction, a maximum evoked response (MER) spatial filter was developed to extract neurophysiologically plausible SSVEP responses by finding the combination of multi-channel electroencephalogram (EEG) signals that maximized the evoked responses while suppressing the unrelated background EEG. The extracted SSVEP responses at the frequencies of both the attended and the unattended stimuli were then used to form feature vectors, and a series of binary classifiers for recognition of each control state and the idle state was constructed. EEG data from nine subjects in a three-target SSVEP BCI experiment with a variety of idle state conditions were used to evaluate the proposed algorithm. Compared to the most popular canonical correlation analysis-based algorithm and the conventional power spectrum-based algorithm, the proposed algorithm outperformed them by achieving an offline control state classification accuracy of 88.0 ± 11.1% and idle state false positive rates (FPRs) ranging from 7.4 ± 5.6% to 14.2 ± 10.1%, depending on the specific idle state conditions. Moreover, the online simulation reported BCI performance close to practical use: 22.0 ± 2.9 out of the 24 control commands were correctly recognized, and the FPRs were as low as approximately 0.5 events/min in the idle state conditions with eyes open and 0.05 events/min in the idle state condition with eyes closed. These results demonstrate the potential of the proposed algorithm for implementing practical SSVEP BCI systems.

  14. Derivation and implementation of a cone-beam reconstruction algorithm for nonplanar orbits

    International Nuclear Information System (INIS)

    Kudo, Hiroyuki; Saito, Tsuneo

    1994-01-01

    Smith and Grangeat derived a cone-beam inversion formula that can be applied when a nonplanar orbit satisfying the completeness condition is used. Although Grangeat's inversion formula is mathematically different from Smith's, they have similar overall structures. The contribution of this paper is two-fold. First, based on the derivation of Smith, the authors point out that Grangeat's inversion formula and Smith's can be conveniently described using a single formula (the Smith-Grangeat inversion formula) that is in the form of space-variant filtering followed by cone-beam backprojection. Furthermore, the resulting formula is reformulated for data acquisition systems with a planar detector to obtain a new reconstruction algorithm. Second, the authors make two significant modifications to the new algorithm to reduce artifacts and numerical errors encountered in its direct implementation. As for the exactness of the new algorithm, the following fact can be stated. The algorithm based on Grangeat's intermediate function is exact for any complete orbit, whereas that based on Smith's intermediate function should be considered an approximate inverse except in the special case where almost every plane in 3-D space meets the orbit. The validity of the new algorithm is demonstrated by simulation studies

  15. Application of an adaptive Kalman filtering algorithm in accelerometer self-calibration

    Institute of Scientific and Technical Information of China (English)

    Ye Jun; Chen Jian; Shi Guoxiang

    2011-01-01

    To address abnormal data in the test data of a self-calibrating accelerometer unit on a dynamic base, an adaptive Kalman filtering algorithm is derived and applied to eliminate the abnormal data. Comparison of the mean and standard deviation of the self-calibration precision obtained with different Kalman filtering algorithms shows that the adaptive Kalman filtering algorithm is more effective.
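
    One common way to realize such adaptive filtering, sketched below under that assumption, is an innovation-based chi-square gate that rejects abnormal measurements before the Kalman update; the paper's exact scheme is not reproduced here, and all matrices are illustrative.

        import numpy as np

        def adaptive_update(x, P, z, H, R, chi2_gate=6.63):
            """Kalman measurement update with an innovation chi-square test;
            measurements failing the gate are treated as abnormal and skipped."""
            v = z - H @ x                          # innovation
            S = H @ P @ H.T + R                    # innovation covariance
            if float(v.T @ np.linalg.solve(S, v)) > chi2_gate:
                return x, P                        # reject abnormal data
            K = P @ H.T @ np.linalg.inv(S)
            return x + K @ v, (np.eye(len(x)) - K @ H) @ P

        x, P = np.zeros((2, 1)), np.eye(2)
        H, R = np.array([[1.0, 0.0]]), np.array([[0.04]])
        x, P = adaptive_update(x, P, np.array([[0.1]]), H, R)    # accepted
        x, P = adaptive_update(x, P, np.array([[50.0]]), H, R)   # rejected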

  16. A Novel Grouping Method for Lithium Iron Phosphate Batteries Based on a Fractional Joint Kalman Filter and a New Modified K-Means Clustering Algorithm

    Directory of Open Access Journals (Sweden)

    Xiaoyu Li

    2015-07-01

    Full Text Available This paper presents a novel grouping method for lithium iron phosphate batteries. In this method, a simplified electrochemical impedance spectroscopy (EIS) model is utilized to describe the battery characteristics. Dynamic stress tests (DST) and a fractional joint Kalman filter (FJKF) are used to extract battery model parameters. In order to realize equal-number grouping of batteries, a new modified K-means clustering algorithm is proposed. Two rules are designed to equalize the numbers of elements in each group and to exchange samples among groups. In this paper, the principles of battery model selection, the physical meaning and identification method of the model parameters, data preprocessing and the equal-number clustering method for battery grouping are comprehensively described. Additionally, experiments for battery grouping and method validation are designed. This method is meaningful for applications involving the grouping of fresh batteries for electric vehicles (EVs) and the screening of aged batteries for recycling.
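
    A generic equal-number clustering heuristic in the spirit of the abstract's modified K-means is sketched below: points are assigned greedily in order of their distance margin, with each cluster capped at ceil(n/k) members. This is an illustrative stand-in, not the paper's two exchange rules.

        import numpy as np

        def equal_size_kmeans(X, k, iters=50, seed=0):
            """K-means with capped cluster sizes; groups are exactly equal
            when len(X) is divisible by k."""
            rng = np.random.default_rng(seed)
            n, cap = len(X), int(np.ceil(len(X) / k))
            centers = X[rng.choice(n, k, replace=False)]
            for _ in range(iters):
                d = np.linalg.norm(X[:, None] - centers[None], axis=2)  # (n, k)
                labels, counts = np.full(n, -1), np.zeros(k, dtype=int)
                # assign the most "decided" points first (largest margin)
                for i in np.argsort(d.min(axis=1) - d.max(axis=1)):
                    for c in np.argsort(d[i]):
                        if counts[c] < cap:
                            labels[i], counts[c] = c, counts[c] + 1
                            break
                centers = np.array([X[labels == c].mean(axis=0)
                                    for c in range(k)])
            return labels, centers

        rng = np.random.default_rng(1)
        X = np.vstack([rng.normal(m, 0.3, (20, 2)) for m in (0.0, 3.0, 6.0)])
        labels, _ = equal_size_kmeans(X, 3)
        print(np.bincount(labels))                  # -> [20 20 20]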

  17. A data fusion Kalman filter algorithm to estimate leaf area index evolution by using MODIS LAI and PROBA-V top of canopy synthesis data

    Science.gov (United States)

    Novelli, Antonio

    2016-08-01

    Leaf Area Index (LAI) is essential in ecosystem and agronomic studies, since it measures energy and gas exchanges between vegetation and atmosphere. In recent decades, LAI values have been widely estimated from passive remotely sensed data. Common approaches are based on semi-empirical/statistical techniques or on radiative transfer model inversion. Although the scientific community has provided several LAI retrieval methods, the estimated results are often affected by noise and measurement uncertainties. Sequential data assimilation theory provides a theoretical framework for combining an imperfect model with incomplete observation data. In this document a data fusion Kalman filter algorithm is proposed in order to estimate the time evolution of LAI by combining MODIS LAI data and PROBA-V surface reflectance data. The reflectance data were linked to LAI by using the Reduced Simple Ratio index. The main working hypothesis was that the input data necessary for climatic models and canopy reflectance models were lacking.

  18. A Multiple-Model Particle Filter Fusion Algorithm for GNSS/DR Slide Error Detection and Compensation

    Directory of Open Access Journals (Sweden)

    Rafael Toledo-Moreo

    2018-03-01

    Full Text Available Continuous accurate positioning is a key element for the deployment of many advanced driver assistance systems (ADAS) and autonomous vehicle navigation. To achieve the necessary performance, global navigation satellite systems (GNSS) must be combined with other technologies. A common onboard sensor set that keeps the cost low features a GNSS unit, odometry, and inertial sensors, such as a gyro. Odometry and inertial sensors compensate for GNSS flaws in many situations and, in normal conditions, their errors can be easily characterized, thus making the whole solution not only more accurate but also higher in integrity. However, odometers do not behave properly when friction conditions make the tires slide. If this is not properly considered, the perceived position will not be sound. This article introduces a hybridization approach that takes sliding situations into consideration by means of a multiple model particle filter (MMPF). Tests with real datasets demonstrate the effectiveness of the proposal.

  19. Implementation of Kalman filter algorithm on models reduced using singular perturbation approximation method and its application to measurement of water level

    Science.gov (United States)

    Rachmawati, Vimala; Khusnul Arif, Didik; Adzkiya, Dieky

    2018-03-01

    Real-world systems often have large order, so their mathematical models contain many state variables, which increases computation time. In addition, not all variables are generally known, so estimation is needed to determine quantities of the system that cannot be measured directly. In this paper, we discuss model reduction and the estimation of state variables in a river system in order to measure the water level. Model reduction approximates a system by one of lower order that introduces no significant errors and whose dynamic behaviour is similar to that of the original system. The Singular Perturbation Approximation method is one such model reduction method, in which all state variables of the equilibrium system are partitioned into fast and slow modes. The Kalman filter algorithm is then used to estimate the state variables of stochastic dynamic systems, where estimates are computed by predicting the state variables based on the system dynamics and the measurement data. Kalman filters are used to estimate the state variables in both the original system and the reduced system. We then compare the state estimation results and the computation times of the original and reduced systems.

  20. Manipulation Robustness of Collaborative Filtering

    OpenAIRE

    Benjamin Van Roy; Xiang Yan

    2010-01-01

    A collaborative filtering system recommends to users products that similar users like. Collaborative filtering systems influence purchase decisions and hence have become targets of manipulation by unscrupulous vendors. We demonstrate that nearest neighbors algorithms, which are widely used in commercial systems, are highly susceptible to manipulation and introduce new collaborative filtering algorithms that are relatively robust.

  1. Iterative concurrent reconstruction algorithms for emission computed tomography

    International Nuclear Information System (INIS)

    Brown, J.K.; Hasegawa, B.H.; Lang, T.F.

    1994-01-01

    Direct reconstruction techniques, such as those based on filtered backprojection, are typically used for emission computed tomography (ECT), even though it has been argued that iterative reconstruction methods may produce better clinical images. The major disadvantage of iterative reconstruction algorithms, and a significant reason for their lack of clinical acceptance, is their computational burden. We outline a new class of 'concurrent' iterative reconstruction techniques for ECT in which the reconstruction process is reorganized such that a significant fraction of the computational processing occurs concurrently with the acquisition of ECT projection data. These new algorithms use the 10-30 min required for acquisition of a typical SPECT scan to iteratively process the available projection data, significantly reducing the requirements for post-acquisition processing. These algorithms are tested on SPECT projection data from a Hoffman brain phantom acquired with 2 × 10^5 counts in 64 views, each having 64 projections. The SPECT images are reconstructed as 64 × 64 tomograms, starting with six angular views. Other angular views are added to the reconstruction process sequentially, in a manner that reflects their availability for a typical acquisition protocol. The results suggest that if T seconds of concurrent processing are used, the reconstruction processing time required after completion of the data acquisition can be reduced by at least T/3 seconds. (Author)

  2. External force back-projective composition and globally deformable optimization for 3-D coronary artery reconstruction

    International Nuclear Information System (INIS)

    Yang, Jian; Cong, Weijian; Fan, Jingfan; Liu, Yue; Wang, Yongtian; Chen, Yang

    2014-01-01

    The 3D reconstruction of a coronary artery is of great clinical value for the diagnosis and interventional treatment of cardiovascular diseases. This work proposes a method based on a deformable model for reconstructing coronary arteries from two monoplane angiographic images acquired from different angles. First, an external force back-projective composition model is developed to determine the external force, for which the force distributions in different views are back-projected to the 3D space and composited in the same coordinate system based on the perspective projection principle of x-ray imaging. The elasticity and bending forces are composited as an internal force to maintain the smoothness of the deformable curve. Second, the deformable curve evolves rapidly toward the true vascular centerlines in 3D space and angiographic images under the combination of internal and external forces. Third, densely matched correspondence among vessel centerlines is constructed using a curve alignment method. The bundle adjustment method is then utilized for the global optimization of the projection parameters and the 3D structures. The proposed method is validated on phantom data and routine angiographic images with consideration for space and re-projection image errors. Experimental results demonstrate the effectiveness and robustness of the proposed method for the reconstruction of coronary arteries from two monoplane angiographic images. The proposed method can achieve a mean space error of 0.564 mm and a mean re-projection error of 0.349 mm. (paper)

  3. A theoretically exact reconstruction algorithm for helical cone-beam differential phase-contrast computed tomography

    International Nuclear Information System (INIS)

    Li Jing; Sun Yi; Zhu Peiping

    2013-01-01

    Differential phase-contrast computed tomography (DPC-CT) reconstruction problems are usually solved by using parallel-, fan- or cone-beam algorithms. For rod-shaped objects, the x-ray beams cannot recover all the slices of the sample at the same time. Thus, if a rod-shaped sample is required to be reconstructed by the above algorithms, one should alternately perform translation and rotation on this sample, which leads to lower efficiency. Helical cone-beam CT may significantly improve scanning efficiency for rod-shaped objects over other algorithms. In this paper, we propose a theoretically exact filtered-backprojection algorithm for helical cone-beam DPC-CT, which can be applied to reconstruct the refractive index decrement distribution of the samples directly from two-dimensional differential phase-contrast images. Numerical simulations are conducted to verify the proposed algorithm. Our work provides a potential solution for inspecting rod-shaped samples using DPC-CT, which may become applicable as DPC-CT equipment evolves. (paper)

  4. Spectral CT metal artifact reduction with an optimization-based reconstruction algorithm

    Science.gov (United States)

    Gilat Schmidt, Taly; Barber, Rina F.; Sidky, Emil Y.

    2017-03-01

    Metal objects cause artifacts in computed tomography (CT) images. This work investigated the feasibility of a spectral CT method to reduce metal artifacts. Spectral CT acquisition combined with optimization-based reconstruction is proposed to reduce artifacts by modeling the physical effects that cause metal artifacts and by providing the flexibility to selectively remove corrupted spectral measurements in the spectral-sinogram space. The proposed Constrained 'One-Step' Spectral CT Image Reconstruction (cOSSCIR) algorithm directly estimates the basis material maps while enforcing convex constraints. The incorporation of constraints on the reconstructed basis material maps is expected to mitigate undersampling effects that occur when corrupted data is excluded from reconstruction. The feasibility of the cOSSCIR algorithm to reduce metal artifacts was investigated through simulations of a pelvis phantom. The cOSSCIR algorithm was investigated with and without the use of a third basis material representing metal. The effects of excluding data corrupted by metal were also investigated. The results demonstrated that the proposed cOSSCIR algorithm reduced metal artifacts and improved CT number accuracy. For example, CT number error in a bright shading artifact region was reduced from 403 HU in the reference filtered backprojection reconstruction to 33 HU using the proposed algorithm in simulation. In the dark shading regions, the error was reduced from 1141 HU to 25 HU. Of the investigated approaches, decomposing the data into three basis material maps and excluding the corrupted data demonstrated the greatest reduction in metal artifacts.

  5. Optimization, evaluation, and comparison of standard algorithms for image reconstruction with the VIP-PET.

    Science.gov (United States)

    Mikhaylova, E; Kolstein, M; De Lorenzo, G; Chmeissani, M

    2014-07-01

    A novel positron emission tomography (PET) scanner design based on a room-temperature pixelated CdTe solid-state detector is being developed within the framework of the Voxel Imaging PET (VIP) Pathfinder project [1]. The simulation results show a great potential of the VIP to produce high-resolution images even in extremely challenging conditions such as the screening of a human head [2]. With an unprecedentedly high channel density (450 channels/cm³), image reconstruction is a challenge. Therefore, optimization is needed to find the best algorithm in order to correctly exploit the promising detector potential. The following reconstruction algorithms are evaluated: 2-D Filtered Backprojection (FBP), Ordered Subset Expectation Maximization (OSEM), List-Mode OSEM (LM-OSEM), and the Origin Ensemble (OE) algorithm. The evaluation is based on the comparison of a true image phantom with a set of reconstructed images obtained by each algorithm. This is achieved by calculation of image quality merit parameters such as the bias, the variance and the mean square error (MSE). A systematic optimization of each algorithm is performed by varying the reconstruction parameters, such as the cutoff frequency of the noise filters and the number of iterations. The region of interest (ROI) analysis of the reconstructed phantom is also performed for each algorithm and the results are compared. Additionally, the performance of the image reconstruction methods is compared by calculating the modulation transfer function (MTF). The reconstruction time is also taken into account to choose the optimal algorithm. The analysis is based on a GAMOS [3] simulation including the expected CdTe and electronics specifics.
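
    The bias, variance and MSE merit parameters referred to here have standard definitions; a minimal sketch, assuming the true phantom image is available on the same grid as the reconstruction, is given below.

        import numpy as np

        def image_merits(recon, truth):
            """Bias, variance and mean square error of a reconstruction
            relative to the true phantom image."""
            err = np.asarray(recon, dtype=float) - np.asarray(truth, dtype=float)
            return err.mean(), err.var(), (err ** 2).mean()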

  6. Rupture process of the 2013 Okhotsk deep mega earthquake from iterative backprojection and compressive sensing methods

    Science.gov (United States)

    Qin, W.; Yin, J.; Yao, H.

    2013-12-01

    On May 24th 2013 a Mw 8.3 normal faulting earthquake occurred at a depth of approximately 600 km beneath the sea of Okhotsk, Russia. Mega earthquakes occurring at such great depths are rare. We use the time-domain iterative backprojection (IBP) method [1] and also the frequency-domain compressive sensing (CS) technique [2] to investigate the rupture process and energy radiation of this mega earthquake. We currently use the teleseismic P-wave data from about 350 stations of USArray. IBP is an improved version of the traditional backprojection method that more accurately locates subevents (energy bursts) during earthquake rupture and determines the rupture speeds. The total rupture duration of this earthquake is about 35 s with a nearly N-S rupture direction. We find that the rupture is bilateral in the first 15 seconds with slow rupture speeds: about 2.5 km/s for the northward rupture and about 2 km/s for the southward rupture. After that, the northward rupture stopped while the rupture towards the south continued. The average southward rupture speed between 20-35 s is approximately 5 km/s, lower than the shear wave speed (about 5.5 km/s) at the hypocenter depth. The total rupture length is about 140 km, in a nearly N-S direction, with a southward rupture length of about 100 km and a northward rupture length of about 40 km. We also use the CS method, a sparse source inversion technique, to study the frequency-dependent seismic radiation of this mega earthquake. We observe clear along-strike frequency dependence of the spatial and temporal distribution of seismic radiation and rupture process. The results from both methods are generally similar. In the next step, we will use data from dense arrays in southwest China and also global stations for further analysis in order to more comprehensively study the rupture process of this deep mega earthquake. Reference [1] Yao H, Shearer P M, Gerstoft P. Subevent location and rupture imaging using iterative backprojection for

  7. Algorithming the Algorithm

    DEFF Research Database (Denmark)

    Mahnke, Martina; Uprichard, Emma

    2014-01-01

    Imagine sailing across the ocean. The sun is shining, vastness all around you. And suddenly [BOOM] you’ve hit an invisible wall. Welcome to the Truman Show! Ever since Eli Pariser published his thoughts on a potential filter bubble, this movie scenario seems to have become reality, just with slight...... changes: it’s not the ocean, it’s the internet we’re talking about, and it’s not a TV show producer, but algorithms that constitute a sort of invisible wall. Building on this assumption, most research is trying to ‘tame the algorithmic tiger’. While this is a valuable and often inspiring approach, we...

  8. Performance comparison of OpenCL and CUDA by benchmarking an optimized perspective backprojection

    Energy Technology Data Exchange (ETDEWEB)

    Swall, Stefan; Ritschl, Ludwig; Knaup, Michael; Kachelriess, Marc [Erlangen-Nuernberg Univ., Erlangen (Germany). Inst. of Medical Physics (IMP)

    2011-07-01

    The increase in performance of Graphical Processing Units (GPUs) and the continued development of dedicated software tools within the last decade allow performance-demanding computations to be transferred from the Central Processing Unit (CPU) to the GPU, speeding up certain tasks by utilizing the massively parallel architecture of these devices. The Compute Unified Device Architecture (CUDA) developed by NVIDIA provides an easy and hence effective way to develop applications that target NVIDIA GPUs. It has become one of the cardinal software tools for this purpose. Recently the Open Computing Language (OpenCL) became available, which is neither vendor-specific nor limited to GPUs only. As the benefits of CUDA-based image reconstruction are well known, we aim at providing a comparison between the performance that can be achieved with CUDA and with OpenCL by benchmarking the time required to perform a simple but computationally demanding task: the perspective backprojection. (orig.)

  9. Navigating Earthquake Physics with High-Resolution Array Back-Projection

    Science.gov (United States)

    Meng, Lingsen

    Understanding earthquake source dynamics is a fundamental goal of geophysics. Progress toward this goal has been slow due to the gap between state-of-the-art earthquake simulations and the limited source imaging techniques based on conventional low-frequency finite fault inversions. Seismic array processing is an alternative source imaging technique that employs the higher frequency content of the earthquakes and provides finer detail of the source process with few prior assumptions. While back-projection provides key observations of previous large earthquakes, the standard beamforming back-projection suffers from low resolution and severe artifacts. This thesis introduces the MUSIC technique, a high-resolution array processing method that aims to narrow the gap between the seismic observations and earthquake simulations. MUSIC is a high-resolution method taking advantage of higher order signal statistics. The method has not been widely used in seismology yet because of the nonstationary and incoherent nature of the seismic signal. We adapt MUSIC to transient seismic signals by incorporating Multitaper cross-spectrum estimates. We also adopt a "reference window" strategy that mitigates the "swimming artifact," a systematic drift effect in back projection. The improved MUSIC back projections allow the imaging of recent large earthquakes in finer detail, which gives rise to new perspectives on dynamic simulations. In the 2011 Tohoku-Oki earthquake, we observe frequency-dependent rupture behaviors which relate to the material variation along the dip of the subduction interface. In the 2012 off-Sumatra earthquake, we image the complicated ruptures involving an orthogonal fault system and an unusual branching direction. This result along with our complementary dynamic simulations probes the pressure-insensitive strength of the deep oceanic lithosphere. In another example, back projection is applied to the 2010 M7 Haiti earthquake recorded at regional distance. The

  10. Development of computed tomography system and image reconstruction algorithm

    International Nuclear Information System (INIS)

    Khairiah Yazid; Mohd Ashhar Khalid; Azaman Ahmad; Khairul Anuar Mohd Salleh; Ab Razak Hamzah

    2006-01-01

    Computed tomography is one of the most advanced and powerful nondestructive inspection techniques and is currently used in many different industries. In several CT systems, detection has been performed with a combination of an X-ray image intensifier and a charge-coupled device (CCD) camera, or with a line array detector. The recent development of the X-ray flat panel detector has made fast CT imaging feasible and practical. This paper therefore describes the arrangement of a new detection system, which uses the existing high resolution (127 μm pixel size) flat panel detector at MINT, and the image reconstruction technique developed. The aim of the project is to develop a prototype flat panel detector based CT imaging system for NDE. The prototype consists of an X-ray tube, a flat panel detector system, a rotation table and a computer system to control the sample motion and image acquisition. Hence this project is divided into two major tasks: firstly, to develop the image reconstruction algorithm, and secondly, to integrate the X-ray imaging components into one CT system. The image reconstruction algorithm, using the filtered back-projection method, is developed and compared to other techniques. MATLAB is the tool used for the simulations and computations in this project. (Author)

  11. A fast rebinning algorithm for 3D positron emission tomography using John's equation

    Science.gov (United States)

    Defrise, Michel; Liu, Xuan

    1999-08-01

    Volume imaging in positron emission tomography (PET) requires the inversion of the three-dimensional (3D) x-ray transform. The usual solution to this problem is based on 3D filtered-backprojection (FBP), but is slow. Alternative methods have been proposed which factor the 3D data into independent 2D data sets corresponding to the 2D Radon transforms of a stack of parallel slices. Each slice is then reconstructed using 2D FBP. These so-called rebinning methods are numerically efficient but are approximate. In this paper a new exact rebinning method is derived by exploiting the fact that the 3D x-ray transform of a function is the solution to the second-order partial differential equation first studied by John. The method is proposed for two sampling schemes, one corresponding to a pair of infinite plane detectors and another one corresponding to a cylindrical multi-ring PET scanner. The new FORE-J algorithm has been implemented for this latter geometry and was compared with the approximate Fourier rebinning algorithm FORE and with another exact rebinning algorithm, FOREX. Results with simulated data demonstrate a significant improvement in accuracy compared to FORE, while the reconstruction time is doubled. Compared to FOREX, the FORE-J algorithm is slightly less accurate but more than three times faster.
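
    For reference, John's equation is the consistency (range) condition satisfied by the 3D x-ray transform in a two-point parametrization of the lines; the LaTeX fragment below states the generic form, and the scanner-specific form used for rebinning in FORE-J is a reparametrization of this.

        % Line integrals of f parametrized by two points x, y in R^3:
        g(\mathbf{x},\mathbf{y}) = \int_{-\infty}^{\infty}
            f\bigl(\mathbf{x} + t(\mathbf{y}-\mathbf{x})\bigr)\,\mathrm{d}t
        % John's equation (ultrahyperbolic consistency condition):
        \frac{\partial^2 g}{\partial x_i \, \partial y_j}
          = \frac{\partial^2 g}{\partial x_j \, \partial y_i},
        \qquad i \neq j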

  12. Comparison of forward- and back-projection in vivo EPID dosimetry for VMAT treatment of the prostate

    Science.gov (United States)

    Bedford, James L.; Hanson, Ian M.; Hansen, Vibeke N.

    2018-01-01

    In the forward-projection method of portal dosimetry for volumetric modulated arc therapy (VMAT), the integrated signal at the electronic portal imaging device (EPID) is predicted at the time of treatment planning, against which the measured integrated image is compared. In the back-projection method, the measured signal at each gantry angle is back-projected through the patient CT scan to give a measure of total dose to the patient. This study aims to investigate the practical agreement between the two types of EPID dosimetry for prostate radiotherapy. The AutoBeam treatment planning system produced VMAT plans together with corresponding predicted portal images, and a total of 46 sets of gantry-resolved portal images were acquired in 13 patients using an iViewGT portal imager. For the forward-projection method, each acquisition of gantry-resolved images was combined into a single integrated image and compared with the predicted image. For the back-projection method, iViewDose was used to calculate the dose distribution in the patient for comparison with the planned dose. A gamma index for 3% and 3 mm was used for both methods. The results were investigated by delivering the same plans to a phantom and repeating some of the deliveries with deliberately introduced errors. The strongest agreement between forward- and back-projection methods is seen in the isocentric intensity/dose difference, with moderate agreement in the mean gamma. The strongest correlation is observed within a given patient, with less correlation between patients, the latter representing the accuracy of prediction of the two methods. The error study shows that each of the two methods has its own distinct sensitivity to errors, but that overall the response is similar. The forward- and back-projection EPID dosimetry methods show moderate agreement in this series of prostate VMAT patients, indicating that both methods can contribute to the verification of dose delivered to the patient.
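
    The 3%/3 mm gamma index used by both methods combines a dose-difference criterion with a distance-to-agreement criterion. A minimal 1-D sketch is given below; clinical tools perform the 2-D/3-D search with interpolation, so this is illustrative only.

        import numpy as np

        def gamma_1d(ref, meas, x, dose_tol=0.03, dist_tol=3.0):
            """Global 1-D gamma index (3%/3 mm by default): for each reference
            point, minimize the combined dose/distance metric over the
            measured profile. ref, meas: doses at positions x (mm)."""
            dmax = ref.max()
            gam = np.empty_like(ref)
            for i, (xi, di) in enumerate(zip(x, ref)):
                dd = (meas - di) / (dose_tol * dmax)     # dose difference term
                dx = (x - xi) / dist_tol                 # distance term
                gam[i] = np.sqrt(dx ** 2 + dd ** 2).min()
            return gam                                   # pass where gamma <= 1

        x = np.linspace(-10.0, 10.0, 201)
        ref = np.exp(-x ** 2 / 20.0)
        meas = np.exp(-(x - 0.5) ** 2 / 20.0)            # 0.5 mm shifted profile
        print((gamma_1d(ref, meas, x) <= 1.0).mean())    # gamma pass rate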

  13. Comparison study of reconstruction algorithms for prototype digital breast tomosynthesis using various breast phantoms.

    Science.gov (United States)

    Kim, Ye-seul; Park, Hye-suk; Lee, Haeng-Hwa; Choi, Young-Wook; Choi, Jae-Gu; Kim, Hak Hee; Kim, Hee-Joung

    2016-02-01

    Digital breast tomosynthesis (DBT) is a recently developed system for three-dimensional imaging that offers the potential to reduce the false positives of mammography by preventing tissue overlap. Many qualitative evaluations of digital breast tomosynthesis were previously performed by using a phantom with an unrealistic model and with heterogeneous background and noise, which is not representative of real breasts. The purpose of the present work was to compare reconstruction algorithms for DBT by using various breast phantoms; validation was also performed by using patient images. DBT was performed by using a prototype unit that was optimized for very low exposures and rapid readout. Three algorithms were compared: a back-projection (BP) algorithm, a filtered BP (FBP) algorithm, and an iterative expectation maximization (EM) algorithm. To compare the algorithms, three types of breast phantoms (homogeneous background phantom, heterogeneous background phantom, and anthropomorphic breast phantom) were evaluated, and clinical images were also reconstructed by using the different reconstruction algorithms. The in-plane image quality was evaluated based on the line profile and the contrast-to-noise ratio (CNR), and out-of-plane artifacts were evaluated by means of the artifact spread function (ASF). Parenchymal texture features of contrast and homogeneity were computed based on reconstructed images of an anthropomorphic breast phantom. The clinical images were studied to validate the effect of reconstruction algorithms. The results showed that the CNRs of masses reconstructed by using the EM algorithm were slightly higher than those obtained by using the BP algorithm, whereas the FBP algorithm yielded much lower CNR due to its high fluctuations of background noise. The FBP algorithm provides the best conspicuity for larger calcifications by enhancing their contrast and sharpness more than the other algorithms; however, in the case of small-size and low

  14. Analytical algorithm for the generation of polygonal projection data for tomographic reconstruction

    International Nuclear Information System (INIS)

    Davis, G.R.

    1996-01-01

    Tomographic reconstruction algorithms and filters can be tested using a mathematical phantom, that is, a computer program which takes numerical data as its input and outputs derived projection data. The input data is usually in the form of pixel 'densities' over a regular grid, or position and dimensions of simple, geometrical objects. The former technique allows a greater variety of objects to be simulated, but is less suitable in the case when very small (relative to the ray-spacing) features are to be simulated. The second technique is normally used to simulate biological specimens, typically a human skull, modelled as a number of ellipses. This is not suitable for simulating non-biological specimens with features such as straight edges and fine cracks. We have therefore devised an algorithm for simulating objects described as a series of polygons. These polygons, or parts of them, may be smaller than the ray-spacing and there is no limit, except that imposed by computing resources, on the complexity, number or superposition of polygons. A simple test of such a phantom, reconstructed using the filtered back-projection method, revealed reconstruction artefacts not normally seen with 'biological' phantoms. (orig.)

  15. Low dose CT reconstruction via L1 norm dictionary learning using alternating minimization algorithm and balancing principle.

    Science.gov (United States)

    Wu, Junfeng; Dai, Fang; Hu, Gang; Mou, Xuanqin

    2018-04-18

    Excessive radiation exposure in computed tomography (CT) scans increases the chance of developing cancer and has become a major clinical concern. Recently, statistical iterative reconstruction (SIR) with l0-norm dictionary learning regularization has been developed to reconstruct CT images from low dose and few-view datasets in order to reduce radiation dose. Nonetheless, the sparse regularization term adopted in this approach is the l0-norm, which cannot guarantee the global convergence of the proposed algorithm. To address this problem, in this study we introduced the l1-norm dictionary learning penalty into the SIR framework for low dose CT image reconstruction, and developed an alternating minimization algorithm to minimize the associated objective function, which transforms the CT image reconstruction problem into a sparse coding subproblem and an image updating subproblem. During the image updating process, an efficient model function approach based on the balancing principle is applied to choose the regularization parameters. The proposed alternating minimization algorithm was evaluated first using real projection data of a sheep lung CT perfusion and then using numerical simulations based on a sheep lung CT image and a chest image. Both visual assessment and quantitative comparison in terms of root mean square error (RMSE) and the structural similarity (SSIM) index demonstrated that the new image reconstruction algorithm yielded performance similar to that of the l0-norm dictionary learning penalty and outperformed the conventional filtered backprojection (FBP) and total variation (TV) minimization algorithms.
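    The l1 sparse-coding subproblem that the alternating scheme isolates has a well-known proximal structure. The following ISTA sketch shows that subproblem only, with the dictionary D held fixed; it is a generic illustration, not the authors' algorithm, and the balancing-principle parameter choice is not reproduced.

    ```python
    import numpy as np

    def soft_threshold(z, tau):
        """Proximal operator of tau*||.||_1, the closed-form core of l1 sparse coding."""
        return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

    def sparse_code(patches, D, tau, n_iter=50):
        """ISTA for min_A 0.5*||patches - D @ A||_F^2 + tau*||A||_1."""
        L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
        A = np.zeros((D.shape[1], patches.shape[1]))
        for _ in range(n_iter):
            grad = D.T @ (D @ A - patches)     # gradient of the data-fit term
            A = soft_threshold(A - grad / L, tau / L)
        return A
    ```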

  16. Evaluation of a mechanistic algorithm to calculate the influence of a shallow water table on hydrology, sediment and pesticide transport through vegetative filter strips

    Science.gov (United States)

    Lauvernet, C.; Munoz-Carpena, R.; Carluer, N.

    2012-04-01

    Natural or introduced areas of vegetation, also known as vegetative filter strips (VFS), are a common environmental control practice to protect surface water bodies from human influence. In Europe, VFS are placed along the water network to protect from agrochemical drift during applications, in addition to runoff control. Their bottomland placement next to the streams often implies the presence of a seasonal shallow water table, which can have a profound impact on the efficiency of the buffer zone (Lacas et al. 2005). A physically-based algorithm describing ponded infiltration into soils bounded by a water table, proposed by Salvucci and Entekhabi (1995), was further developed to simulate VFS dynamics by making it explicit in time, accounting for unsteady rainfall conditions, and coupling it to a numerical overland flow and transport model (VFSMOD) (Munoz-Carpena et al., submitted). In this study, we evaluate the importance of the presence of a shallow water table on filter efficiency (reductions in runoff, sediment and pesticide mass), in the context of all other input factors used to describe the system. Global sensitivity analysis (GSA) was used to rank the important input factors and the presence of interactions, as well as the contribution of the important factors to the output variance. GSA of VFSMOD modified for a shallow water table was implemented on two sites selected in France because they represent different agro-pedo-climatic conditions for which we can compare the role of the factors influencing the performance of grassed buffer strips for surface runoff, sediment and pesticide removal. The first site at Morcille watershed in the Beaujolais vineyard (Rhône-Alpes) contains a very permeable sandy-clay soil with water table depth varying with the season (very deep in summer and shallow in winter), with a high slope (20 to 30%), and is subject to strong seasonal storms (semi-continental, Mediterranean climate). The second site at La Jailliere (Loire-Atlantique, ARVALIS

  17. Vascular diameter measurement in CT angiography: comparison of model-based iterative reconstruction and standard filtered back projection algorithms in vitro.

    Science.gov (United States)

    Suzuki, Shigeru; Machida, Haruhiko; Tanaka, Isao; Ueno, Eiko

    2013-03-01

    The purpose of this study was to evaluate the performance of model-based iterative reconstruction (MBIR) in measurement of the inner diameter of models of blood vessels and to compare performance between MBIR and a standard filtered back projection (FBP) algorithm. Vascular models with wall thicknesses of 0.5, 1.0, and 1.5 mm were scanned with a 64-MDCT unit and densities of contrast material yielding 275, 396, and 542 HU. Images were reconstructed by MBIR and FBP, and the mean diameter of each model vessel was measured by software automation. Twenty separate measurements were repeated for each vessel, and variance among the repeated measures was analyzed for determination of measurement error. For all nine model vessels, CT attenuation profiles were compared along a line passing through the luminal center on axial images reconstructed with FBP and MBIR, and the 10-90% edge rise distances at the boundary between the vascular wall and the lumen were evaluated. For images reconstructed with FBP, measurement errors were smallest for models with 1.5-mm wall thickness, except those filled with 275-HU contrast material, and errors grew as the density of the contrast material decreased. Measurement errors with MBIR were comparable to or less than those with FBP. In CT attenuation profiles of images reconstructed with MBIR, the 10-90% edge rise distances at the boundary between the lumen and vascular wall were relatively short for each vascular model compared with those of the profile curves of FBP images. MBIR is better than standard FBP for reducing reconstruction blur and improving the accuracy of diameter measurement at CT angiography.
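    The 10-90% edge rise distance used here as a blur metric can be computed from a sampled attenuation profile by linear interpolation. A minimal sketch, assuming the samples are ordered so that the profile increases monotonically across the edge:

    ```python
    import numpy as np

    def edge_rise_distance(profile, x, lo=0.10, hi=0.90):
        """Distance between the 10% and 90% crossings of an edge profile
        sampled at positions x (profile assumed increasing across the edge)."""
        low, high = profile.min(), profile.max()
        t_lo = low + lo * (high - low)
        t_hi = low + hi * (high - low)
        # np.interp inverts the (monotonic) profile to find crossing positions
        x_lo = np.interp(t_lo, profile, x)
        x_hi = np.interp(t_hi, profile, x)
        return abs(x_hi - x_lo)
    ```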

  18. A unified analysis of FBP-based algorithms in helical cone-beam and circular cone- and fan-beam scans

    International Nuclear Information System (INIS)

    Pan Xiaochuan; Xia Dan; Zou Yu; Yu Lifeng

    2004-01-01

    A circular scanning trajectory is and will likely remain a popular choice of trajectory in computed tomography (CT) imaging because it is easy to implement and control. Filtered-backprojection (FBP)-based algorithms have been developed previously for approximate and exact reconstruction of the entire image or a region of interest within the image in circular cone-beam and fan-beam cases. Recently, we have developed a 3D FBP-based algorithm for image reconstruction on PI-line segments in a helical cone-beam scan. In this work, we demonstrated that the 3D FBP-based algorithm indeed provided a rather general formulation for image reconstruction from divergent projections (such as cone-beam and fan-beam projections). On the basis of this formulation we derived new approximate or exact algorithms for image reconstruction in circular cone-beam or fan-beam scans, which can be interpreted as special cases of the helical scan. Existing algorithms corresponding to the derived algorithms were identified. We also performed a preliminary numerical study to verify our theoretical results in each of the cases. The results in this work can readily be generalized to other non-circular trajectories.
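    For orientation, the filter-then-backproject structure that these divergent-beam formulas generalize is easiest to see in the parallel-beam case. The sketch below is a textbook illustration, not the paper's PI-line or cone-beam algorithm; the ideal ramp filter and the pi/N normalization are standard choices.

    ```python
    import numpy as np

    def fbp_parallel(sinogram, angles_deg):
        """Minimal parallel-beam FBP: ramp-filter each row (one view per row),
        then backproject by interpolating along the detector coordinate."""
        n_det = sinogram.shape[1]
        ramp = np.abs(np.fft.fftfreq(n_det))                  # ideal ramp filter
        filtered = np.real(np.fft.ifft(np.fft.fft(sinogram, axis=1) * ramp, axis=1))

        grid = np.arange(n_det) - n_det / 2.0
        xx, yy = np.meshgrid(grid, grid)
        recon = np.zeros((n_det, n_det))
        for proj, theta in zip(filtered, np.deg2rad(angles_deg)):
            # detector coordinate of every pixel for this view
            t = xx * np.cos(theta) + yy * np.sin(theta) + n_det / 2.0
            recon += np.interp(t, np.arange(n_det), proj, left=0.0, right=0.0)
        return recon * np.pi / len(angles_deg)
    ```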

  19. Correction of computed tomography motion artifacts using pixel-specific back-projection

    International Nuclear Information System (INIS)

    Ritchie, C.J.; Crawford, C.R.; Godwin, J.D.; Kim, Y.; King, K.F.

    1996-01-01

    Cardiac and respiratory motion can cause artifacts in computed tomography scans of the chest. The authors describe a new method for reducing these artifacts called pixel-specific back-projection (PSBP). PSBP reduces artifacts caused by in-plane motion by reconstructing each pixel in a frame of reference that moves with the in-plane motion in the volume being scanned. The motion of the frame of reference is specified by constructing maps that describe the motion of each pixel in the image at the time each projection was measured; these maps are based on measurements of the in-plane motion. PSBP has been tested in computer simulations and with volunteer data. In computer simulations, PSBP removed the structured artifacts caused by motion. In scans of two volunteers, PSBP reduced doubling and streaking in chest scans to a level that made the images clinically useful. PSBP corrections of liver scans were less satisfactory because the motion of the liver is predominantly superior-inferior (S-I). PSBP uses a unique set of motion parameters to describe the motion at each point in the chest as opposed to requiring that the motion be described by a single set of parameters. Therefore, PSBP may be more useful in correcting clinical scans than are other correction techniques previously described.
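    The idea of PSBP amounts to a one-line change to the backprojection: each pixel is displaced by its own motion vector before its detector coordinate is computed. A hedged sketch in parallel-beam geometry (the paper works with clinical fan-beam CT; the motion maps and the projection filtering are taken as given):

    ```python
    import numpy as np

    def psbp(filtered_projs, angles, motion_maps):
        """Pixel-specific backprojection: motion_maps[k] = (dx, dy) arrays giving
        each pixel's in-plane displacement at the time projection k was measured."""
        n = filtered_projs.shape[1]
        grid = np.arange(n) - n / 2.0
        xx, yy = np.meshgrid(grid, grid)
        recon = np.zeros((n, n))
        for k, theta in enumerate(angles):
            dx, dy = motion_maps[k]                  # each of shape (n, n)
            # displace each pixel into its own moving frame of reference
            t = (xx + dx) * np.cos(theta) + (yy + dy) * np.sin(theta) + n / 2.0
            recon += np.interp(t, np.arange(n), filtered_projs[k], left=0.0, right=0.0)
        return recon * np.pi / len(angles)
    ```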

  20. Statistical noise with the weighted backprojection method for single photon emission computed tomography

    International Nuclear Information System (INIS)

    Murayama, Hideo; Tanaka, Eiichi; Toyama, Hinako.

    1985-01-01

    The weighted backprojection (WBP) method and the radial post-correction (RPC) method were compared with several other attenuation correction methods for single photon emission computed tomography by computer simulation. These methods are the pre-correction method with arithmetic means of opposing projections, the post-correction method with a correction matrix, and the inverse attenuated Radon transform method. Statistical mean square noise in a reconstructed image was formulated, and was displayed two-dimensionally for typical simulated phantoms. The noise image for the WBP method was dependent on several parameters, namely the size of the attenuating object, the distribution of activity, the attenuation coefficient, the choice of the reconstruction index k, and the position of the reconstruction origin. The noise image for the WBP method with k=0 was almost the same as that for the RPC method. It has been shown that the position of the reconstruction origin has to be chosen appropriately in order to improve the noise properties of the reconstructed image for the WBP method as well as the RPC method. Comparison of the different attenuation correction methods, using both the reconstructed images and the statistical noise images with the same mathematical phantom and convolving function, led to the conclusion that the WBP method and the RPC method were more amenable to arbitrary radioisotope distributions than the other methods, and had the advantage of flexibility in improving the image noise at any local position. (author)

  1. Metal artifact reduction algorithm based on model images and spatial information

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Jay [Institute of Radiological Science, Central Taiwan University of Science and Technology, Taichung, Taiwan (China); Shih, Cheng-Ting [Department of Biomedical Engineering and Environmental Sciences, National Tsing-Hua University, Hsinchu, Taiwan (China); Chang, Shu-Jun [Health Physics Division, Institute of Nuclear Energy Research, Taoyuan, Taiwan (China); Huang, Tzung-Chi [Department of Biomedical Imaging and Radiological Science, China Medical University, Taichung, Taiwan (China); Sun, Jing-Yi [Institute of Radiological Science, Central Taiwan University of Science and Technology, Taichung, Taiwan (China); Wu, Tung-Hsin, E-mail: tung@ym.edu.tw [Department of Biomedical Imaging and Radiological Sciences, National Yang-Ming University, No.155, Sec. 2, Linong Street, Taipei 112, Taiwan (China)

    2011-10-01

    Computed tomography (CT) has become one of the most favorable choices for diagnosis of trauma. However, high-density metal implants can induce metal artifacts in CT images, compromising image quality. In this study, we proposed a model-based metal artifact reduction (MAR) algorithm. First, we built a model image using the k-means clustering technique with spatial information and calculated the difference between the original image and the model image. Then, the projection data of these two images were combined using an exponential weighting function. Finally, the corrected image was reconstructed using the filtered back-projection algorithm. Two metal-artifact contaminated images were studied. For the cylindrical water phantom image, the metal artifact was effectively removed. The mean CT number of water was improved from -28.95±97.97 to -4.76±4.28. For the clinical pelvic CT image, the dark band and the metal line were removed, and the continuity and uniformity of the soft tissue were recovered as well. These results indicate that the proposed MAR algorithm is useful for reducing metal artifacts and could improve the diagnostic value of metal-artifact contaminated CT images.
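    The pipeline described above (cluster into a model image, blend the two sinograms with an exponential weight, reconstruct) can be sketched with standard libraries. Everything below is an assumption-laden illustration: the weighting function, alpha, and the number of tissue classes are placeholders, and scikit-image's radon/iradon stand in for the authors' projector and FBP.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans
    from skimage.transform import radon, iradon

    def mar_model_based(image, theta, alpha=0.05, n_classes=3):
        """Model-image MAR sketch: piecewise-constant model via k-means,
        exponentially weighted blend of original and model sinograms, then FBP."""
        labels = KMeans(n_clusters=n_classes, n_init=10).fit_predict(
            image.reshape(-1, 1)).reshape(image.shape)
        model = np.zeros_like(image)
        for c in range(n_classes):                   # class-mean model image
            model[labels == c] = image[labels == c].mean()

        sino_orig, sino_model = radon(image, theta), radon(model, theta)
        w = np.exp(-alpha * np.abs(sino_orig - sino_model))  # trust data less where they disagree
        return iradon(w * sino_orig + (1.0 - w) * sino_model, theta=theta)
    ```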

  2. An efficient reconstruction algorithm for differential phase-contrast tomographic images from a limited number of views

    International Nuclear Information System (INIS)

    Sunaguchi, Naoki; Yuasa, Tetsuya; Gupta, Rajiv; Ando, Masami

    2015-01-01

    The main focus of this paper is the reconstruction of a tomographic phase-contrast image from a set of projections. We propose an efficient reconstruction algorithm for differential phase-contrast computed tomography that can considerably reduce the number of projections required for reconstruction. The key result underlying this research is a projection theorem that states that the second derivative of the projection set is linearly related to the Laplacian of the tomographic image. The proposed algorithm first reconstructs the Laplacian image of the phase-shift distribution from the second derivative of the projections using total variation regularization. The second step is to obtain the phase-shift distribution by solving a Poisson equation whose source is the previously reconstructed Laplacian image, under the Dirichlet condition. We demonstrate the efficacy of this algorithm using both synthetically generated simulation data and projection data acquired experimentally at a synchrotron. The experimental phase data were acquired from a human coronary artery specimen using dark-field-imaging optics pioneered by our group. Our results demonstrate that the proposed algorithm can reduce the number of projections to approximately 33% of that required by the conventional filtered backprojection method, without any detrimental effect on the image quality.
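    The second step, recovering the phase-shift map from its Laplacian, is a standard Poisson solve. A minimal finite-difference sketch with homogeneous Dirichlet boundary values (the paper's boundary data may differ); the 5-point stencil is assembled sparsely and solved directly:

    ```python
    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import spsolve

    def solve_poisson_dirichlet(lap, h=1.0):
        """Solve  Delta phi = lap  on the interior nodes with phi = 0 on the
        boundary, using the 5-point Laplacian on a regular grid of spacing h."""
        ny, nx = lap.shape
        w = nx - 2                                   # interior row width
        n = (ny - 2) * w                             # number of unknowns
        main = -4.0 * np.ones(n)
        ex = np.ones(n - 1)
        ex[np.arange(1, n) % w == 0] = 0.0           # no coupling across row ends
        ey = np.ones(n - w)
        A = sp.diags([main, ex, ex, ey, ey], [0, 1, -1, w, -w], format='csc') / h**2
        phi = np.zeros_like(lap)
        phi[1:-1, 1:-1] = spsolve(A, lap[1:-1, 1:-1].ravel()).reshape(ny - 2, w)
        return phi
    ```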

  3. Fast volume reconstruction in positron emission tomography: Implementation of four algorithms on a high-performance scalable parallel platform

    International Nuclear Information System (INIS)

    Egger, M.L.; Scheurer, A.H.; Joseph, C.

    1996-01-01

    The issue of long reconstruction times in PET has been addressed from several points of view, resulting in an affordable dedicated system capable of handling routine 3D reconstruction in a few minutes per frame: on the hardware side using fast processors and a parallel architecture, and on the software side, using efficient implementations of computationally less intensive algorithms. Execution times obtained for the PRT-1 data set on a parallel system of five hybrid nodes, each combining an Alpha processor for computation and a transputer for communication, are the following (256 sinograms of 96 views by 128 radial samples): Ramp algorithm 56 s, Favor 81 s and reprojection algorithm of Kinahan and Rogers 187 s. The implementation of fast rebinning algorithms has shown our hardware platform to become communications-limited; they execute faster on a conventional single-processor Alpha workstation: single-slice rebinning 7 s, Fourier rebinning 22 s, 2D filtered backprojection 5 s. The scalability of the system has been demonstrated, and a saturation effect at network sizes above ten nodes has become visible; new T9000-based products lifting most of the constraints on network topology and link throughput are expected to result in improved parallel efficiency and scalability properties.

  4. A three-dimensional reconstruction algorithm for an inverse-geometry volumetric CT system

    International Nuclear Information System (INIS)

    Schmidt, Taly Gilat; Fahrig, Rebecca; Pelc, Norbert J.

    2005-01-01

    An inverse-geometry volumetric computed tomography (IGCT) system has been proposed that is capable of rapidly acquiring sufficient data to reconstruct a thick volume in one circular scan. The system uses a large-area scanned source opposite a smaller detector. The source and detector have the same extent in the axial, or slice, direction, thus providing sufficient volumetric sampling and avoiding cone-beam artifacts. This paper describes a reconstruction algorithm for the IGCT system. The algorithm first rebins the acquired data into two-dimensional (2D) parallel-ray projections at multiple tilt and azimuthal angles, followed by a 3D filtered backprojection. The rebinning step is performed by gridding the data onto a Cartesian grid in a 4D projection space. We present a new method for correcting the gridding error caused by the finite and asymmetric sampling in the neighborhood of each output grid point in the projection space. The reconstruction algorithm was implemented and tested on simulated IGCT data. Results show that the gridding correction reduces the gridding errors to below one Hounsfield unit. With this correction, the reconstruction algorithm does not introduce significant artifacts or blurring when compared to images reconstructed from simulated 2D parallel-ray projections. We also present an investigation of the noise behavior of the method which verifies that the proposed reconstruction algorithm utilizes cross-plane rays as efficiently as in-plane rays and can provide noise comparable to an in-plane parallel-ray geometry for the same number of photons. Simulations of a resolution test pattern and the modulation transfer function demonstrate that the IGCT system, using the proposed algorithm, is capable of 0.4 mm isotropic resolution. The successful implementation of the reconstruction algorithm is an important step in establishing the feasibility of the IGCT system.

  5. Modified Clipped LMS Algorithm

    Directory of Open Access Journals (Sweden)

    Lotfizad Mojtaba

    2005-01-01

    Full Text Available A new algorithm is proposed for updating the weights of an adaptive filter. The proposed algorithm is a modification of an existing method, namely the clipped LMS, and uses a three-level quantization scheme that involves threshold clipping of the input signals in the filter weight update formula. Mathematical analysis shows the convergence of the filter weights to the optimum Wiener filter weights. Also, it can be proved that the proposed modified clipped LMS (MCLMS) algorithm has better tracking than the LMS algorithm. In addition, this algorithm has reduced computational complexity relative to the unmodified one. By using a suitable threshold, it is possible to increase the tracking capability of the MCLMS algorithm compared to the LMS algorithm, but this causes slower convergence. Computer simulations confirm the mathematical analysis presented.
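    The three-level clipping and the resulting weight update are compact enough to state directly. A sketch under the assumption that the quantizer replaces the input vector in the LMS update (the paper's exact MCLMS variant may differ in detail):

    ```python
    import numpy as np

    def clip3(u, delta):
        """Three-level quantizer: +1 above delta, -1 below -delta, else 0."""
        return np.where(u > delta, 1.0, np.where(u < -delta, -1.0, 0.0))

    def clipped_lms(x, d, n_taps, mu, delta):
        """LMS with threshold-clipped input in the weight update formula."""
        w = np.zeros(n_taps)
        y = np.zeros(len(x))
        for n in range(n_taps, len(x)):
            u = x[n - n_taps:n][::-1]            # current input vector
            y[n] = w @ u                         # filter output
            e = d[n] - y[n]                      # a-priori error
            w += mu * e * clip3(u, delta)        # quantized-input update
        return w, y
    ```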

  6. Implementation of a cone-beam reconstruction algorithm for the single-circle source orbit with embedded misalignment correction using homogeneous coordinates

    International Nuclear Information System (INIS)

    Karolczak, Marek; Schaller, Stefan; Engelke, Klaus; Lutz, Andreas; Taubenreuther, Ulrike; Wiesent, Karl; Kalender, Willi

    2001-01-01

    We present an efficient implementation of an approximate cone-beam image reconstruction algorithm for application in tomography, which accounts for scanner mechanical misalignment. The implementation is based on the algorithm proposed by Feldkamp et al. [J. Opt. Soc. Am. A 1, 612-619 (1984)] and is directed at circular scan paths. The algorithm has been developed for the purpose of reconstructing volume data from projections acquired in an experimental x-ray microtomography (μCT) scanner [Engelke et al., Der Radiologe 39, 203-212 (1999)]. To model misalignment mathematically, we use matrix notation with homogeneous coordinates to describe the scanner geometry, its misalignment, and the acquisition process. For convenience the analysis is carried out for x-ray CT scanners, but it is applicable to any tomographic modality where two-dimensional projection acquisition in cone-beam geometry takes place, e.g., single photon emission computerized tomography. We derive an algorithm assuming misalignment errors to be small enough to weight and filter the original projections and to embed compensation for misalignment in the backprojection. We verify the algorithm on simulations of virtual phantoms and scans of a physical multidisk (Defrise) phantom.

  7. Adaptive digital filters

    CERN Document Server

    Kovačević, Branko; Milosavljević, Milan

    2013-01-01

    “Adaptive Digital Filters” presents an important discipline applied to the domain of speech processing. The book first makes the reader acquainted with the basic terms of filtering and adaptive filtering, before introducing the field of advanced modern algorithms, some of which are contributed by the authors themselves. Working in the field of adaptive signal processing requires the use of complex mathematical tools. The book offers a detailed presentation of the mathematical models that is clear and consistent, an approach that allows everyone with a college level of mathematics knowledge to successfully follow the mathematical derivations and descriptions of algorithms.   The algorithms are presented in flow charts, which facilitates their practical implementation. The book presents many experimental results and treats the aspects of practical application of adaptive filtering in real systems, making it a valuable resource for both undergraduate and graduate students, and for all others interested in m...

  8. Development of regularized expectation maximization algorithms for fan-beam SPECT data

    International Nuclear Information System (INIS)

    Kim, Soo Mee; Lee, Jae Sung; Lee, Dong Soo; Lee, Soo Jin; Kim, Kyeong Min

    2005-01-01

    SPECT using a fan-beam collimator improves spatial resolution and sensitivity. For the reconstruction from fan-beam projections, it is necessary to implement direct fan-beam reconstruction methods without transforming the data into the parallel geometry. In this study, various fan-beam reconstruction algorithms were implemented and their performances were compared. The projector for fan-beam SPECT was implemented using a ray-tracing method. The direct reconstruction algorithms implemented for fan-beam projection data were FBP (filtered backprojection), EM (expectation maximization), OS-EM (ordered subsets EM) and MAP-EM OSL (maximum a posteriori EM using the one-step late method) with membrane and thin-plate models as priors. For comparison, the fan-beam projection data were also rebinned into the parallel data using various interpolation methods, such as the nearest neighbor, bilinear and bicubic interpolations, and reconstructed using the conventional EM algorithm for parallel data. Noiseless and noisy projection data from the digital Hoffman brain and Shepp/Logan phantoms were reconstructed using the above algorithms. The reconstructed images were compared in terms of a percent error metric. For the fan-beam data with Poisson noise, the MAP-EM OSL algorithm with the thin-plate prior showed the best result in both percent error and stability. Bilinear interpolation was the most effective method for rebinning from the fan-beam to parallel geometry when the accuracy and computation load were considered. Direct fan-beam EM reconstructions were more accurate than the standard EM reconstructions obtained from rebinned parallel data. Direct fan-beam reconstruction algorithms were implemented, which provided significantly improved reconstructions.
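    The EM-family updates compared above share one multiplicative form once a system matrix is available. A geometry-agnostic OS-EM sketch, with a dense numpy matrix A standing in for the ray-tracing fan-beam projector and an interleaved subset partition chosen arbitrarily; n_subsets = 1 recovers plain EM:

    ```python
    import numpy as np

    def osem(A, y, n_subsets=4, n_iter=10):
        """OS-EM: multiplicative update x <- x * A_s^T(y_s / A_s x) / A_s^T 1,
        cycled over projection-row subsets s."""
        x = np.ones(A.shape[1])
        subsets = [np.arange(s, A.shape[0], n_subsets) for s in range(n_subsets)]
        for _ in range(n_iter):
            for rows in subsets:
                As = A[rows]
                ratio = y[rows] / np.maximum(As @ x, 1e-12)   # measured / expected
                x *= (As.T @ ratio) / np.maximum(As.sum(axis=0), 1e-12)
            # nonnegativity is preserved automatically by the multiplicative form
        return x
    ```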

  9. Block matching 3D random noise filtering for absorption optical projection tomography

    International Nuclear Information System (INIS)

    Fumene Feruglio, P; Vinegoni, C; Weissleder, R; Gros, J; Sbarbati, A

    2010-01-01

    Absorption and emission optical projection tomography (OPT), alternatively referred to as optical computed tomography (optical-CT) and optical-emission computed tomography (optical-ECT), are recently developed three-dimensional imaging techniques with value for developmental biology and ex vivo gene expression studies. The techniques' principles are similar to the ones used for x-ray computed tomography and are based on the approximation of negligible light scattering in optically cleared samples. The optical clearing is achieved by a chemical procedure which aims at substituting the cellular fluids within the sample with a solution matching the refractive index of the cell membranes. Once cleared, the sample presents very low scattering and is then illuminated with a collimated light beam whose intensity is captured in transillumination mode by a CCD camera. Different projection images of the sample are subsequently obtained over a full 360° rotation, and a standard backprojection algorithm can be used in a similar fashion as for x-ray tomography in order to obtain absorption maps. Because not all biological samples present significant absorption contrast, it is not always possible to obtain projections with a good signal-to-noise ratio, a condition necessary to achieve high-quality tomographic reconstructions. Such is the case, for example, for early-stage embryos. In this work we demonstrate how, through the use of a random noise removal algorithm, the image quality of the reconstructions can be considerably improved even when the noise is strongly present in the acquired projections. Specifically, we implemented a block matching 3D (BM3D) filter, applying it separately to each acquired transillumination projection before performing a complete three-dimensional tomographic reconstruction. To test the efficiency of the adopted filtering scheme, a phantom and a real biological sample were processed. In both cases, the BM3D filter led to a signal-to-noise ratio increment of over 30 dB.

  10. Evaluation of Origin Ensemble algorithm for image reconstruction for pixelated solid-state detectors with large number of channels

    Science.gov (United States)

    Kolstein, M.; De Lorenzo, G.; Mikhaylova, E.; Chmeissani, M.; Ariño, G.; Calderón, Y.; Ozsahin, I.; Uzun, D.

    2013-04-01

    The Voxel Imaging PET (VIP) Pathfinder project intends to show the advantages of using pixelated solid-state technology for nuclear medicine applications. It proposes designs for Positron Emission Tomography (PET), Positron Emission Mammography (PEM) and Compton gamma camera detectors with a large number of signal channels (of the order of 10^6). For PET scanners, conventional algorithms like Filtered Back-Projection (FBP) and Ordered Subset Expectation Maximization (OSEM) are straightforward to use and give good results. However, FBP presents difficulties for detectors with limited angular coverage like PEM and Compton gamma cameras, whereas OSEM has an impractically large time and memory consumption for a Compton gamma camera with a large number of channels. In this article, the Origin Ensemble (OE) algorithm is evaluated as an alternative algorithm for image reconstruction. Monte Carlo simulations of the PET design are used to compare the performance of OE, FBP and OSEM in terms of the bias, variance and average mean squared error (MSE) image quality metrics. For the PEM and Compton camera designs, results obtained with OE are presented.

  11. Optimization of image quality and acquisition time for lab-based X-ray microtomography using an iterative reconstruction algorithm

    Science.gov (United States)

    Lin, Qingyang; Andrew, Matthew; Thompson, William; Blunt, Martin J.; Bijeljic, Branko

    2018-05-01

    Non-invasive laboratory-based X-ray microtomography has been widely applied in many industrial and research disciplines. However, the main barrier to the use of laboratory systems compared to a synchrotron beamline is their much longer image acquisition time (hours per scan compared to seconds to minutes at a synchrotron), which results in limited application for dynamic in situ processes. Therefore, the majority of existing laboratory X-ray microtomography is limited to static imaging; relatively fast imaging (tens of minutes per scan) can only be achieved by sacrificing imaging quality, e.g. reducing exposure time or number of projections. To alleviate this barrier, we introduce an optimized implementation of a well-known iterative reconstruction algorithm that allows users to reconstruct tomographic images with reasonable image quality, but requires lower X-ray signal counts and fewer projections than conventional methods. Quantitative analysis and comparison between the iterative and the conventional filtered back-projection reconstruction algorithm was performed using a sandstone rock sample with and without liquid phases in the pore space. Overall, by implementing the iterative reconstruction algorithm, the required image acquisition time for samples such as this, with sparse object structure, can be reduced by a factor of up to 4 without measurable loss of sharpness or signal to noise ratio.
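    The record does not name the iterative algorithm, so as a stand-in here is a minimal SIRT-type iteration, one common choice for few-projection reconstruction; the system matrix A, data b and relaxation factor are placeholders rather than the authors' optimized implementation:

    ```python
    import numpy as np

    def sirt(A, b, n_iter=100, relax=1.0):
        """SIRT: x <- x + relax * C A^T R (b - A x), with R and C the inverse
        row-sum and column-sum diagonal scalings (A assumed nonnegative)."""
        row = np.maximum(A.sum(axis=1), 1e-12)
        col = np.maximum(A.sum(axis=0), 1e-12)
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            x += relax * (A.T @ ((b - A @ x) / row)) / col
        return x
    ```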

  12. Evaluation of 3D reconstruction algorithms for a small animal PET camera

    International Nuclear Information System (INIS)

    Johnson, C.A.; Gandler, W.R.; Seidel, J.

    1996-01-01

    The use of paired, opposing position-sensitive phototube scintillation cameras (SCs) operating in coincidence for small animal imaging with positron emitters is currently under study. Because of the low sensitivity of the system even in 3D mode and the need to produce images with high resolution, it was postulated that a 3D expectation maximization (EM) reconstruction algorithm might be well suited for this application. We investigated four reconstruction algorithms for the 3D SC PET camera: 2D filtered back-projection (FBP), 2D ordered subset EM (OSEM), 3D reprojection (3DRP), and 3D OSEM. Noise was assessed for all slices by the coefficient of variation in a simulated uniform cylinder. Resolution was assessed from a simulation of 15 point sources in the warm background of the uniform cylinder. At comparable noise levels, the resolution achieved with OSEM (0.9-mm to 1.2-mm) is significantly better than that obtained with FBP or 3DRP (1.5-mm to 2.0-mm.) Images of a rat skull labeled with 18 F-fluoride suggest that 3D OSEM can improve image quality of a small animal PET camera

  13. X-ray dose reduction in abdominal computed tomography using advanced iterative reconstruction algorithms.

    Directory of Open Access Journals (Sweden)

    Peigang Ning

    Full Text Available OBJECTIVE: This work aims to explore the effects of adaptive statistical iterative reconstruction (ASiR) and model-based iterative reconstruction (MBIR) algorithms in reducing computed tomography (CT) radiation dosages in abdominal imaging. METHODS: CT scans on a standard male phantom were performed at different tube currents. Images at the different tube currents were reconstructed with the filtered back-projection (FBP), 50% ASiR and MBIR algorithms and compared. The CT value, image noise and contrast-to-noise ratios (CNRs) of the reconstructed abdominal images were measured. Volumetric CT dose indexes (CTDIvol) were recorded. RESULTS: At different tube currents, 50% ASiR and MBIR significantly reduced image noise and increased the CNR when compared with FBP. The minimal tube current values required by FBP, 50% ASiR, and MBIR to achieve acceptable image quality using this phantom were 200, 140, and 80 mA, respectively. At identical image quality, 50% ASiR and MBIR reduced the radiation dose by 35.9% and 59.9% respectively when compared with FBP. CONCLUSIONS: Advanced iterative reconstruction techniques are able to reduce image noise and increase image CNRs. Compared with FBP, 50% ASiR and MBIR reduced radiation doses by 35.9% and 59.9%, respectively.

  14. Filter arrays

    Science.gov (United States)

    Page, Ralph H.; Doty, Patrick F.

    2017-08-01

    The various technologies presented herein relate to a tiled filter array that can be used in connection with performance of spatial sampling of optical signals. The filter array comprises filter tiles, wherein a first plurality of filter tiles are formed from a first material, the first material being configured such that only photons having wavelengths in a first wavelength band pass therethrough. A second plurality of filter tiles is formed from a second material, the second material being configured such that only photons having wavelengths in a second wavelength band pass therethrough. The first plurality of filter tiles and the second plurality of filter tiles can be interspersed to form the filter array comprising an alternating arrangement of first filter tiles and second filter tiles.
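    One simple way to intersperse the two tile types described above is a checkerboard; the text does not fix the pattern, so this numpy toy layout is purely illustrative (0 marks tiles passing the first wavelength band, 1 the second):

    ```python
    import numpy as np

    def checkerboard_tiles(rows, cols):
        """Alternating arrangement of two filter-tile types as a 0/1 array."""
        r, c = np.indices((rows, cols))
        return (r + c) % 2
    ```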

  15. Adaptable Iterative and Recursive Kalman Filter Schemes

    Science.gov (United States)

    Zanetti, Renato

    2014-01-01

    Nonlinear filters are often very computationally expensive and usually not suitable for real-time applications. Real-time navigation algorithms are typically based on linear estimators, such as the extended Kalman filter (EKF) and, to a much lesser extent, the unscented Kalman filter. The Iterated Kalman filter (IKF) and the Recursive Update Filter (RUF) are two algorithms that reduce the consequences of the linearization assumption of the EKF by performing N updates for each new measurement, where N is the number of recursions, a tuning parameter. This paper introduces an adaptable RUF algorithm to calculate N on the fly; a similar technique can be used for the IKF as well.
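    A sketch of the iterated measurement update underlying the IKF, with N recursions as the tuning parameter mentioned above; with n_iter = 1 it reduces to the ordinary EKF update. The RUF dampens each step differently, and none of the linear algebra below is taken from the paper.

    ```python
    import numpy as np

    def ikf_update(x0, P, z, h, H_jac, R, n_iter=5):
        """Iterated Kalman filter update: relinearize the measurement model h
        about the current estimate for n_iter Gauss-Newton recursions."""
        x = x0.copy()
        for _ in range(n_iter):
            H = H_jac(x)                               # Jacobian at current iterate
            K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
            x = x0 + K @ (z - h(x) - H @ (x0 - x))     # Gauss-Newton step
        P_new = (np.eye(len(x0)) - K @ H) @ P
        return x, P_new
    ```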

  16. Variability of left ventricular ejection fraction and volumes with quantitative gated SPECT: influence of algorithm, pixel size and reconstruction parameters in small and normal-sized hearts

    International Nuclear Information System (INIS)

    Hambye, Anne-Sophie; Vervaet, Ann; Dobbeleir, Andre

    2004-01-01

    Several software packages are commercially available for quantification of left ventricular ejection fraction (LVEF) and volumes from myocardial gated single-photon emission computed tomography (SPECT), all of which display a high reproducibility. However, their accuracy has been questioned in patients with a small heart. This study aimed to evaluate the performances of different software packages and the influence of modifications in acquisition or reconstruction parameters on LVEF and volume measurements, depending on the heart size. In 31 patients referred for gated SPECT, 64² and 128² matrix acquisitions were consecutively obtained. After reconstruction by filtered back-projection (Butterworth, 0.4, 0.5 or 0.6 cycles/cm cut-off, order 6), LVEF and volumes were computed with different software [three versions of Quantitative Gated SPECT (QGS), the Emory Cardiac Toolbox (ECT) and the Stanford University (SU-Segami) Medical School algorithm] and processing workstations. Depending upon their end-systolic volume (ESV), patients were classified into two groups: group I (ESV >30 ml, n=14) and group II (ESV <30 ml, n=17). Changes from 64² to 128² matrices were associated with significantly larger volumes as well as lower LVEF values. Increasing the filter cut-off frequency had the same effect. With SU-Segami, a larger matrix was associated with larger end-diastolic volumes and smaller ESVs, resulting in a highly significant increase in LVEF. Increasing the filter sharpness, on the other hand, had no influence on LVEF though the measured volumes were significantly larger. (orig.)

  17. Statistically-Efficient Filtering in Impulsive Environments: Weighted Myriad Filters

    Directory of Open Access Journals (Sweden)

    Juan G. Gonzalez

    2002-01-01

    Full Text Available Linear filtering theory has been largely motivated by the characteristics of Gaussian signals. In the same manner, the proposed Myriad Filtering methods are motivated by the need for a flexible filter class with high statistical efficiency in non-Gaussian impulsive environments that can appear in practice. Myriad filters have a solid theoretical basis, are inherently more powerful than median filters, and are very general, subsuming traditional linear FIR filters. The foundation of the proposed filtering algorithms lies in the definition of the myriad as a tunable estimator of location derived from the theory of robust statistics. We prove several fundamental properties of this estimator and show its optimality in practical impulsive models such as the α-stable and generalized-t. We then extend the myriad estimation framework to allow the use of weights. In the same way as linear FIR filters become a powerful generalization of the mean filter, filters based on running myriads reach all of their potential when a weighting scheme is utilized. We derive the “normal” equations for the optimal myriad filter, and introduce a suboptimal methodology for filter tuning and design. The strong potential of myriad filtering and estimation in impulsive environments is illustrated with several examples.
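    The myriad described above is a one-parameter location estimator that can be evaluated directly. A minimal sketch of the weighted sample myriad by grid search over the sample range (where the minimizer is known to lie); the grid resolution is an arbitrary choice, and k is the linearity parameter (large k approaches the weighted mean, small k gives the highly robust, mode-like behavior).

    ```python
    import numpy as np

    def weighted_myriad(x, w, k, n_grid=2001):
        """argmin over theta of sum_i log(k^2 + w_i * (x_i - theta)^2)."""
        grid = np.linspace(x.min(), x.max(), n_grid)
        cost = np.log(k**2 + w[:, None] * (x[:, None] - grid[None, :])**2).sum(axis=0)
        return grid[np.argmin(cost)]
    ```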

  18. Optimal Nonlinear Filter for INS Alignment

    Institute of Scientific and Technical Information of China (English)

    赵瑞; 顾启泰

    2002-01-01

    All of the methods used in the past to handle inertial navigation system (INS) alignment were sub-optimal. In this paper, particle filtering (PF) as an optimal method is used for solving the problem of INS alignment. A sub-optimal two-step filtering algorithm is presented to improve the real-time performance of the PF. The approach combines particle filtering with Kalman filtering (KF). Simulation results illustrate the superior performance of these approaches when compared with extended Kalman filtering (EKF).

  19. Stack filter classifiers

    Energy Technology Data Exchange (ETDEWEB)

    Porter, Reid B [Los Alamos National Laboratory; Hush, Don [Los Alamos National Laboratory

    2009-01-01

    Just as linear models generalize the sample mean and weighted average, weighted order statistic models generalize the sample median and weighted median. This analogy can be continued informally to generalized additive models in the case of the mean, and Stack Filters in the case of the median. Both of these model classes have been extensively studied for signal and image processing, but it is surprising to find that for pattern classification their treatment has been significantly one-sided. Generalized additive models are now a major tool in pattern classification and many different learning algorithms have been developed to fit model parameters to finite data. However, Stack Filters remain largely confined to signal and image processing, and learning algorithms for classification are yet to be seen. This paper is a step towards Stack Filter Classifiers and it shows that the approach is interesting from both a theoretical and a practical perspective.

  20. Comment on "Localized water reverberation phases and its impact on back-projection images" by Yue et al. [2017]

    Science.gov (United States)

    Fan, W.; Shearer, P. M.

    2017-12-01

    Fan and Shearer [2016] analyzed the 2012 Mw 7.2 Sumatra earthquake and reported that the earthquake dynamically triggered early aftershock/aftershocks 150 km away from the mainshock and 50 s later. The early aftershock/aftershocks were detected with teleseismic P-wave back-projection, coincided with passing surface waves, and showed observable seismic waveforms in a wide frequency range (0.02-5 Hz). Recently, however, Yue et al. [2017] interpreted these coda arrivals as water reverberations from the mainshock, based mostly on EGF analysis of a nearby M6 earthquake and a water-phase synthetic test. Here, we show detailed back-projection and waveform analysis of three M6 earthquakes within 100 km of the Mw 7.2 earthquake, including the EGF event analyzed in Yue et al. [2017]. In addition, we examine the waveforms of three M5.5 reverse-faulting earthquakes close to our detected early aftershock landward of the trench. Our results show that the coda energy in question is more likely caused by a separate earthquake near the trench than by a mainshock water reverberation phase, thus supporting our earlier conclusion that the detected coherent radiators are likely to be dynamically triggered early aftershock/aftershocks.

  1. Derivative free filtering using Kalmtool

    DEFF Research Database (Denmark)

    Bayramoglu, Enis; Hansen, Søren; Ravn, Ole

    2010-01-01

    In this paper we present a toolbox enabling easy evaluation and comparison of different filtering algorithms. The toolbox is called Kalmtool 4 and is a set of MATLAB tools for state estimation of nonlinear systems. The toolbox contains functions for extended Kalman filtering as well as for DD1 fi...

  2. Study of the Algorithm of Backtracking Decoupling and Adaptive Extended Kalman Filter Based on the Quaternion Expanded to the State Variable for Underwater Glider Navigation

    Directory of Open Access Journals (Sweden)

    Haoqian Huang

    2014-12-01

    Full Text Available High accuracy attitude and position determination is very important for underwater gliders. The cross-coupling among three attitude angles (heading angle, pitch angle and roll angle) becomes more serious when pitch or roll motion occurs. This cross-coupling makes attitude angles inaccurate or even erroneous. Therefore, high accuracy attitude and position determination becomes a difficult problem for a practical underwater glider. To solve this problem, this paper proposes backtracking decoupling and an adaptive extended Kalman filter (EKF) based on the quaternion expanded to the state variable (BD-AEKF). The backtracking decoupling can effectively eliminate the cross-coupling among the three attitudes when pitch or roll motion occurs. After decoupling, the adaptive extended Kalman filter (AEKF) based on the quaternion expanded to the state variable further smoothes the filtering output to improve the accuracy and stability of attitude and position determination. In order to evaluate the performance of the proposed BD-AEKF method, the pitch and roll motion are simulated and the performance of the proposed method is analyzed and compared with the traditional method. Simulation results demonstrate that the proposed BD-AEKF performs better. Furthermore, for further verification, a new underwater navigation system is designed, and three-axis non-magnetic turntable experiments and vehicle experiments are performed. The results show that the proposed BD-AEKF is effective in eliminating cross-coupling and reducing the errors compared with the conventional method.

  3. Study of the algorithm of backtracking decoupling and adaptive extended Kalman filter based on the quaternion expanded to the state variable for underwater glider navigation.

    Science.gov (United States)

    Huang, Haoqian; Chen, Xiyuan; Zhou, Zhikai; Xu, Yuan; Lv, Caiping

    2014-12-03

    High accuracy attitude and position determination is very important for underwater gliders. The cross-coupling among three attitude angles (heading angle, pitch angle and roll angle) becomes more serious when pitch or roll motion occurs. This cross-coupling makes attitude angles inaccurate or even erroneous. Therefore, high accuracy attitude and position determination becomes a difficult problem for a practical underwater glider. To solve this problem, this paper proposes backtracking decoupling and an adaptive extended Kalman filter (EKF) based on the quaternion expanded to the state variable (BD-AEKF). The backtracking decoupling can effectively eliminate the cross-coupling among the three attitudes when pitch or roll motion occurs. After decoupling, the adaptive extended Kalman filter (AEKF) based on the quaternion expanded to the state variable further smoothes the filtering output to improve the accuracy and stability of attitude and position determination. In order to evaluate the performance of the proposed BD-AEKF method, the pitch and roll motion are simulated and the performance of the proposed method is analyzed and compared with the traditional method. Simulation results demonstrate that the proposed BD-AEKF performs better. Furthermore, for further verification, a new underwater navigation system is designed, and three-axis non-magnetic turntable experiments and vehicle experiments are performed. The results show that the proposed BD-AEKF is effective in eliminating cross-coupling and reducing the errors compared with the conventional method.

  4. Study of the Algorithm of Backtracking Decoupling and Adaptive Extended Kalman Filter Based on the Quaternion Expanded to the State Variable for Underwater Glider Navigation

    Science.gov (United States)

    Huang, Haoqian; Chen, Xiyuan; Zhou, Zhikai; Xu, Yuan; Lv, Caiping

    2014-01-01

    High accuracy attitude and position determination is very important for underwater gliders. The cross-coupling among three attitude angles (heading angle, pitch angle and roll angle) becomes more serious when pitch or roll motion occurs. This cross-coupling makes attitude angles inaccurate or even erroneous. Therefore, high accuracy attitude and position determination becomes a difficult problem for a practical underwater glider. To solve this problem, this paper proposes backtracking decoupling and an adaptive extended Kalman filter (EKF) based on the quaternion expanded to the state variable (BD-AEKF). The backtracking decoupling can effectively eliminate the cross-coupling among the three attitudes when pitch or roll motion occurs. After decoupling, the adaptive extended Kalman filter (AEKF) based on the quaternion expanded to the state variable further smoothes the filtering output to improve the accuracy and stability of attitude and position determination. In order to evaluate the performance of the proposed BD-AEKF method, the pitch and roll motion are simulated and the performance of the proposed method is analyzed and compared with the traditional method. Simulation results demonstrate that the proposed BD-AEKF performs better. Furthermore, for further verification, a new underwater navigation system is designed, and three-axis non-magnetic turntable experiments and vehicle experiments are performed. The results show that the proposed BD-AEKF is effective in eliminating cross-coupling and reducing the errors compared with the conventional method. PMID:25479331

  5. Perspectives on Nonlinear Filtering

    KAUST Repository

    Law, Kody

    2015-01-01

    The solution to the problem of nonlinear filtering may be given either as an estimate of the signal (and ideally some measure of concentration), or as a full posterior distribution. Similarly, one may evaluate the fidelity of the filter either by its ability to track the signal or by its proximity to the posterior filtering distribution. Hence, the field enjoys a lively symbiosis between probability and control theory, and there are plenty of applications which benefit from algorithmic advances, from signal processing, to econometrics, to large-scale ocean, atmosphere, and climate modeling. This talk will survey some recent theoretical results involving accurate signal tracking with noise-free (degenerate) dynamics in high dimensions (infinite, in principle, but say d between 10^3 and 10^8, depending on the size of your application and your computer), and high-fidelity approximations of the filtering distribution in low dimensions (say d between 1 and several tens).

  6. Perspectives on Nonlinear Filtering

    KAUST Repository

    Law, Kody

    2015-01-07

    The solution to the problem of nonlinear filtering may be given either as an estimate of the signal (and ideally some measure of concentration), or as a full posterior distribution. Similarly, one may evaluate the fidelity of the filter either by its ability to track the signal or by its proximity to the posterior filtering distribution. Hence, the field enjoys a lively symbiosis between probability and control theory, and there are plenty of applications which benefit from algorithmic advances, from signal processing, to econometrics, to large-scale ocean, atmosphere, and climate modeling. This talk will survey some recent theoretical results involving accurate signal tracking with noise-free (degenerate) dynamics in high dimensions (infinite, in principle, but say d between 10^3 and 10^8, depending on the size of your application and your computer), and high-fidelity approximations of the filtering distribution in low dimensions (say d between 1 and several tens).

  7. Rectifier Filters

    Directory of Open Access Journals (Sweden)

    Y. A. Bladyko

    2010-01-01

    Full Text Available The paper contains the definition of a smoothing factor which is suitable for any rectifier filter. Formulae for complex smoothing factors have been developed for simple and complex passive filters. The paper shows the conditions for application of the calculation formulae and filters.

  8. Mitigating artifacts in back-projection source imaging with implications for frequency-dependent properties of the Tohoku-Oki earthquake

    Science.gov (United States)

    Meng, Lingsen; Ampuero, Jean-Paul; Luo, Yingdi; Wu, Wenbo; Ni, Sidao

    2012-12-01

    Comparing teleseismic array back-projection source images of the 2011 Tohoku-Oki earthquake with results from static and kinematic finite source inversions has revealed little overlap between the regions of high- and low-frequency slip. Motivated by this interesting observation, back-projection studies extended to intermediate frequencies, down to about 0.1 Hz, have suggested that a progressive transition of rupture properties as a function of frequency is observable. Here, by adapting the concept of array response function to non-stationary signals, we demonstrate that the "swimming artifact", a systematic drift resulting from signal non-stationarity, induces significant bias on beamforming back-projection at low frequencies. We introduce a "reference window strategy" into the multitaper-MUSIC back-projection technique and significantly mitigate the "swimming artifact" at high frequencies (1 s to 4 s). At lower frequencies, this modification yields notable, but significantly smaller, artifacts than time-domain stacking. We perform extensive synthetic tests that include a 3D regional velocity model for Japan. We analyze the recordings of the Tohoku-Oki earthquake at the USArray and at the European array at periods from 1 s to 16 s. The migration of the source location as a function of period, regardless of the back-projection methods, has characteristics that are consistent with the expected effect of the "swimming artifact". In particular, the apparent up-dip migration as a function of frequency obtained with the USArray can be explained by the "swimming artifact". This indicates that the most substantial frequency-dependence of the Tohoku-Oki earthquake source occurs at periods longer than 16 s. Thus, low-frequency back-projection needs to be further tested and validated in order to contribute to the characterization of frequency-dependent rupture properties.

  9. A search algorithm to meta-optimize the parameters for an extended Kalman filter to improve classification on hyper-temporal images

    CSIR Research Space (South Africa)

    Salmon, BP

    2012-07-01

    Full Text Available A SEARCH ALGORITHM TO META... the spectral bands separately and introduced a meta-optimization method for the EKF that will be called the Bias Variance Equilibrium Point (BVEP) in this paper. The objective of this paper is to introduce an unsupervised search algorithm called the Bias...

  10. Novel Simplex Unscented Transform and Filter

    Institute of Scientific and Technical Information of China (English)

    Wan-Chun Li; Ping Wei; Xian-Ci Xiao

    2008-01-01

    In this paper, a new simplex unscented transform (UT) based on a Schmidt orthogonalization algorithm, and a new filter method based on this transform, are proposed. This filter has lower computation consumption than the UKF (unscented Kalman filter), SUKF (simplex unscented Kalman filter) and EKF (extended Kalman filter). Computer simulation shows that this filter has the same performance as the UKF and SUKF, and according to the analysis of the computational requirements of the EKF, UKF and SUKF, this filter has considerable practical value. Finally, the appendix demonstrates the efficiency of this UT.

  11. Kalman Filtering with Real-Time Applications

    CERN Document Server

    Chui, Charles K

    2009-01-01

    Kalman Filtering with Real-Time Applications presents a thorough discussion of the mathematical theory and computational schemes of Kalman filtering. The filtering algorithms are derived via different approaches, including a direct method consisting of a series of elementary steps, and an indirect method based on innovation projection. Other topics include Kalman filtering for systems with correlated noise or colored noise, limiting Kalman filtering for time-invariant systems, extended Kalman filtering for nonlinear systems, interval Kalman filtering for uncertain systems, and wavelet Kalman filtering for multiresolution analysis of random signals. Most filtering algorithms are illustrated by using simplified radar tracking examples. The style of the book is informal, and the mathematics is elementary but rigorous. The text is self-contained, suitable for self-study, and accessible to all readers with a minimum knowledge of linear algebra, probability theory, and system engineering.

  12. A two-level approach to VLBI terrestrial and celestial reference frames using both least-squares adjustment and Kalman filter algorithms

    Science.gov (United States)

    Soja, B.; Krasna, H.; Boehm, J.; Gross, R. S.; Abbondanza, C.; Chin, T. M.; Heflin, M. B.; Parker, J. W.; Wu, X.

    2017-12-01

    The most recent realizations of the ITRS include several innovations, two of which are especially relevant to this study. On the one hand, the IERS ITRS combination center at DGFI-TUM introduced a two-level approach with DTRF2014, consisting of a classical deterministic frame based on normal equations and an optional coordinate time series of non-tidal displacements calculated from geophysical loading models. On the other hand, the JTRF2014 by the combination center at JPL is a time series representation of the ITRF determined by Kalman filtering. Both the JTRF2014 and the second level of the DTRF2014 are thus able to take into account short-term variations in the station coordinates. In this study, based on VLBI data, we combine these two approaches, applying them to the determination of both terrestrial and celestial reference frames. Our product has two levels like DTRF2014, with the second level being a Kalman filter solution like JTRF2014. First, we compute a classical TRF and CRF in a global least-squares adjustment by stacking normal equations from 5446 VLBI sessions between 1979 and 2016 using the Vienna VLBI and Satellite Software VieVS (solution level 1). Next, we obtain coordinate residuals from the global adjustment by applying the level-1 TRF and CRF in the single-session analysis and estimating coordinate offsets. These residuals are fed into a Kalman filter and smoother, taking into account the stochastic properties of the individual stations and radio sources. The resulting coordinate time series (solution level 2) serve as an additional layer representing irregular variations not considered in the first level of our approach. Both levels of our solution are implemented in VieVS in order to test their individual and combined performance regarding the repeatabilities of estimated baseline lengths, EOP, and radio source coordinates.

  13. A Discussion on Uncertainty Representation and Interpretation in Model-Based Prognostics Algorithms based on Kalman Filter Estimation Applied to Prognostics of Electronics Components

    Science.gov (United States)

    Celaya, Jose R.; Saxena, Abhinav; Goebel, Kai

    2012-01-01

    This article discusses several aspects of uncertainty representation and management for model-based prognostics methodologies based on our experience with Kalman Filters when applied to prognostics for electronics components. In particular, it explores the implications of modeling remaining useful life prediction as a stochastic process and how it relates to uncertainty representation, management, and the role of prognostics in decision-making. A distinction between the interpretations of estimated remaining useful life probability density function and the true remaining useful life probability density function is explained and a cautionary argument is provided against mixing interpretations for the two while considering prognostics in making critical decisions.

  14. Inter-Dye Distance Distributions Studied by a Combination of Single-Molecule FRET-Filtered Lifetime Measurements and a Weighted Accessible Volume (wAV) Algorithm

    Directory of Open Access Journals (Sweden)

    Henning Höfig

    2014-11-01

    Förster resonance energy transfer (FRET) is an important tool for studying the structural and dynamical properties of biomolecules. The fact that both the internal dynamics of the biomolecule and the movements of the biomolecule-attached dyes can occur on similar timescales of nanoseconds is an inherent problem in FRET studies. By performing single-molecule FRET-filtered lifetime measurements, we are able to characterize the amplitude of the motions of fluorescent probes attached to double-stranded DNA standards by means of flexible linkers. With respect to previously proposed experimental approaches, we improved the precision and the accuracy of the inter-dye distance distribution parameters by filtering out the donor-only population with pulsed interleaved excitation. A coarse-grained model is employed to reproduce the experimentally determined inter-dye distance distributions. This approach can easily be extended to intrinsically flexible proteins, making it possible, under certain conditions, to decouple the amplitude of the macromolecule's motions from the contribution of the dye linkers.

  15. The second order extended Kalman filter and Markov nonlinear filter for data processing in interferometric systems

    International Nuclear Information System (INIS)

    Ermolaev, P; Volynsky, M

    2014-01-01

    Recurrent stochastic data processing algorithms that represent an interferometric signal as the output of a dynamic system, whose state is described by a vector of parameters, are in some cases more effective than conventional algorithms. Interferometric signals depend nonlinearly on phase; consequently, it is expedient to apply nonlinear stochastic filtering algorithms, such as Kalman-type filters. An application of the second-order extended Kalman filter and a Markov nonlinear filter that minimizes the estimation error is described. Experimental results of signal processing are illustrated, and a comparison of the algorithms is presented and discussed.
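
    As a rough illustration of the Kalman-type filtering described above, the following sketch tracks the phase of a simulated interferometric signal with a first-order extended Kalman filter (the paper's second-order and Markov filters are not reproduced). The signal model s[n] = A·cos(φ[n]) + noise, the constant-rate phase state, and all noise levels are assumptions for the demo.

```python
import numpy as np

# Minimal first-order EKF sketch for tracking the phase of an
# interferometric signal s[n] = A*cos(phi[n]) + noise, with
# state x = [phi, dphi] (phase and its per-sample increment).
A = 1.0                                  # assumed known fringe amplitude
F = np.array([[1.0, 1.0], [0.0, 1.0]])   # constant-rate phase model
Q = np.diag([1e-6, 1e-8])                # process noise (tuning assumption)
R = 0.01                                 # measurement noise variance

def ekf_phase(measurements, x0, P0):
    x, P = x0.copy(), P0.copy()
    track = []
    for z in measurements:
        # predict
        x = F @ x
        P = F @ P @ F.T + Q
        # linearize h(x) = A*cos(phi) around the prediction
        H = np.array([[-A * np.sin(x[0]), 0.0]])
        S = H @ P @ H.T + R
        K = (P @ H.T) / S
        x = x + (K * (z - A * np.cos(x[0]))).ravel()
        P = (np.eye(2) - K @ H) @ P
        track.append(x[0])
    return np.array(track)

# synthetic fringe signal with slowly advancing phase
n = np.arange(400)
true_phi = 0.05 * n
z = A * np.cos(true_phi) + 0.1 * np.random.randn(n.size)
est = ekf_phase(z, x0=np.array([0.0, 0.04]), P0=np.eye(2))
```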

  16. Adaptive filtering and change detection

    CERN Document Server

    Gustafsson, Fredrik

    2003-01-01

    Adaptive filtering is a classical branch of digital signal processing (DSP). Industrial interest in adaptive filtering grows continuously with the increase in computer performance that allows ever more complex algorithms to be run in real time. Change detection is a type of adaptive filtering for non-stationary signals and is also the basic tool in fault detection and diagnosis. Often considered separate subjects, adaptive filtering and change detection are given a unified treatment here that bridges a gap in the literature, emphasizing that change detection is a natural extension of adaptive filtering.

  17. SU-E-T-131: Dosimetric Impact and Evaluation of Different Heterogeneity Algorithms in Volumetric Modulated Arc Therapy Plan for Stereotactic Ablative Radiotherapy Lung Treatment with the Flattening Filter Free Beam

    Energy Technology Data Exchange (ETDEWEB)

    Chung, J; Kim, J [Seoul National University Bundang Hospital, Seongnam, Kyeonggi-do (Korea, Republic of); Lee, J [Konkuk University Medical Center, Seoul, Seoul (Korea, Republic of); Kim, Y [Choonhae College of Health Sciences, Ulsan (Korea, Republic of)

    2014-06-01

    Purpose: The present study aimed to investigate the dosimetric impact of the anisotropic analytic algorithm (AAA) and the Acuros XB (AXB) algorithm in lung stereotactic ablative radiation therapy plans using flattening filter-free (FFF) beams. Methods: We retrospectively analyzed 10 patients. The dosimetric parameters for the target and organs at risk (OARs) from the treatment plans calculated with these dose calculation algorithms were compared. The technical parameters, such as the computation times and the total monitor units (MUs), were also evaluated. Results: A comparison of DVHs from AXB and AAA showed that the AXB plans produced a higher maximum PTV dose, by an average of 4.40% with statistical significance, but a slightly lower mean PTV dose, by an average of 5.20%, compared with the AAA plans. The maximum dose to the lung was slightly higher with AXB than with AAA. For both algorithms, the values of V5, V10, and V20 for the ipsilateral lung were higher in the AXB plans than in the AAA plans, whereas these parameters for the contralateral lung were comparable. The differences in maximum dose to the spinal cord and heart were also small. The computation time of AXB was shorter than that of AAA, with a relative difference of 13.7%. The average number of monitor units (MUs) over all patients was higher in the AXB plans than in the AAA plans. These results indicate that the differences between AXB and AAA are large in heterogeneous regions with low density. Conclusion: AXB provided advantages such as calculation accuracy and reduced computation time in lung stereotactic ablative radiotherapy (SABR) with FFF beams, especially for VMAT planning. Therefore, when calculating dose in media of different densities, careful attention should be paid to the impact of different heterogeneity correction algorithms. The authors report no conflicts of interest.

  18. SU-E-T-131: Dosimetric Impact and Evaluation of Different Heterogeneity Algorithms in Volumetric Modulated Arc Therapy Plan for Stereotactic Ablative Radiotherapy Lung Treatment with the Flattening Filter Free Beam

    International Nuclear Information System (INIS)

    Chung, J; Kim, J; Lee, J; Kim, Y

    2014-01-01

    Purpose: The present study aimed to investigate the dosimetric impact of the anisotropic analytic algorithm (AAA) and the Acuros XB (AXB) algorithm in lung stereotactic ablative radiation therapy plans using flattening filter-free (FFF) beams. Methods: We retrospectively analyzed 10 patients. The dosimetric parameters for the target and organs at risk (OARs) from the treatment plans calculated with these dose calculation algorithms were compared. The technical parameters, such as the computation times and the total monitor units (MUs), were also evaluated. Results: A comparison of DVHs from AXB and AAA showed that the AXB plans produced a higher maximum PTV dose, by an average of 4.40% with statistical significance, but a slightly lower mean PTV dose, by an average of 5.20%, compared with the AAA plans. The maximum dose to the lung was slightly higher with AXB than with AAA. For both algorithms, the values of V5, V10, and V20 for the ipsilateral lung were higher in the AXB plans than in the AAA plans, whereas these parameters for the contralateral lung were comparable. The differences in maximum dose to the spinal cord and heart were also small. The computation time of AXB was shorter than that of AAA, with a relative difference of 13.7%. The average number of monitor units (MUs) over all patients was higher in the AXB plans than in the AAA plans. These results indicate that the differences between AXB and AAA are large in heterogeneous regions with low density. Conclusion: AXB provided advantages such as calculation accuracy and reduced computation time in lung stereotactic ablative radiotherapy (SABR) with FFF beams, especially for VMAT planning. Therefore, when calculating dose in media of different densities, careful attention should be paid to the impact of different heterogeneity correction algorithms. The authors report no conflicts of interest.

  19. Multi-Array Back-Projections of The 2015 Gorkha Earthquake With Physics-Based Aftershock Calibrations

    Science.gov (United States)

    Meng, L.; Zhang, A.; Yagi, Y.

    2015-12-01

    The 2015 Mw 7.8 Nepal-Gorkha earthquake, with casualties of over 9,000 people, is the most devastating disaster to strike Nepal since the 1934 Nepal-Bihar earthquake. Its rupture process is well imaged by teleseismic MUSIC back-projections (BP). Here, we perform independent back-projections of high-frequency recordings (0.5-2 Hz) from the Australian seismic network (AU), the North America network (NA), and the European seismic network (EU), located in complementary orientations. Our results from all three arrays show a unilateral, linear rupture path to the east of the hypocenter, but the propagation directions and inferred rupture speeds differ significantly among the arrays. To understand the spatial uncertainties of the BP analysis, we image four moderate-size (M5~6) aftershocks based on timing corrections derived from the alignment of the initial P wave of the mainshock. We find that the apparent source locations inferred from BP are systematically biased along the source-array orientation, which can be explained by the uncertainty of the 3D velocity structure deviating from the 1D reference model (e.g., IASP91). We introduce a slowness error term in travel time as a first-order calibration that successfully mitigates the source location discrepancies among arrays. The calibrated BP results of the three arrays are mutually consistent and reveal a unilateral rupture propagating eastward at a speed of 2.7 km/s along the down-dip edge of the locked Himalayan thrust zone over ~150 km, in agreement with a narrow slip distribution inferred from finite source inversions.

  20. Manipulation Robustness of Collaborative Filtering Systems

    OpenAIRE

    Benjamin Van Roy; Xiang Yan

    2009-01-01

    A collaborative filtering system recommends to users products that similar users like. Collaborative filtering systems influence purchase decisions, and hence have become targets of manipulation by unscrupulous vendors. We provide theoretical and empirical results demonstrating that while common nearest neighbor algorithms, which are widely used in commercial systems, can be highly susceptible to manipulation, two classes of collaborative filtering algorithms, which we refer to as linear and asymptotically linear, are relatively robust to manipulation.

  1. Kalman filtering with real-time applications

    CERN Document Server

    Chui, Charles K

    2017-01-01

    This new edition presents a thorough discussion of the mathematical theory and computational schemes of Kalman filtering. The filtering algorithms are derived via different approaches, including a direct method consisting of a series of elementary steps, and an indirect method based on innovation projection. Other topics include Kalman filtering for systems with correlated noise or colored noise, limiting Kalman filtering for time-invariant systems, extended Kalman filtering for nonlinear systems, interval Kalman filtering for uncertain systems, and wavelet Kalman filtering for multiresolution analysis of random signals. Most filtering algorithms are illustrated by using simplified radar tracking examples. The style of the book is informal, and the mathematics is elementary but rigorous. The text is self-contained, suitable for self-study, and accessible to all readers with a minimum knowledge of linear algebra, probability theory, and system engineering. Over 100 exercises and problems with solutions help deepen the reader's understanding of the material.

  2. Fast parallel algorithm for three-dimensional distance-driven model in iterative computed tomography reconstruction

    International Nuclear Information System (INIS)

    Chen Jian-Lin; Li Lei; Wang Lin-Yuan; Cai Ai-Long; Xi Xiao-Qi; Zhang Han-Ming; Li Jian-Xin; Yan Bin

    2015-01-01

    The projection matrix model is used to describe the physical relationship between the reconstructed object and the projection. Such a model has a strong influence on projection and backprojection, two vital operations in iterative computed tomographic reconstruction. The distance-driven model (DDM) is a state-of-the-art technique for simulating forward and back projections. This model has low computational complexity and relatively high spatial resolution; however, only a few methods exist for running it as a parallel operation with a matched projector/backprojector scheme. This study introduces a fast, parallelizable algorithm that improves the traditional DDM for computing the parallel projection and backprojection operations. Our proposed model has been implemented on a GPU (graphics processing unit) platform and has achieved satisfactory computational efficiency with no approximation. The runtimes for the projection and backprojection operations with our model are approximately 4.5 s and 10.5 s per loop, respectively, for an image size of 256×256×256 and 360 projections of size 512×512. We compare several general algorithms that have been proposed for maximizing GPU efficiency by using unmatched projection/backprojection models in parallel computation. The imaging resolution is not sacrificed and remains accurate during computed tomographic reconstruction. (paper)
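
    The record does not include the implementation; the following is a hedged one-dimensional sketch of the overlap-weighting idea at the heart of the distance-driven model, with pixel boundaries assumed to be already mapped onto the detector axis. The matched backprojector would be the transpose of this operation; all names are illustrative.

```python
import numpy as np

def dd_project_1d(image_row, pix_bounds, det_bounds):
    """Distance-driven-style projection of one image row onto a detector row.

    pix_bounds: increasing pixel boundary coordinates, already mapped onto
                the detector axis (len = npix + 1).
    det_bounds: detector cell boundary coordinates (len = ndet + 1).
    Each pixel's value is distributed to detector cells in proportion to
    the overlap of their boundary intervals.
    """
    ndet = len(det_bounds) - 1
    proj = np.zeros(ndet)
    for i, v in enumerate(image_row):
        lo, hi = pix_bounds[i], pix_bounds[i + 1]
        for j in range(ndet):
            overlap = min(hi, det_bounds[j + 1]) - max(lo, det_bounds[j])
            if overlap > 0:
                proj[j] += v * overlap / (det_bounds[j + 1] - det_bounds[j])
    return proj

row = np.array([1.0, 2.0, 3.0, 4.0])
# four unit pixels mapped onto two detector cells of width 2
print(dd_project_1d(row, np.linspace(0, 4, 5), np.linspace(0, 4, 3)))  # [1.5 3.5]
```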

  3. Adaptive projective filters

    International Nuclear Information System (INIS)

    Dikusar, N.D.

    1993-01-01

    A new approach to the track-finding problem is proposed. The method is based on Discrete Projective Transformations (DPT) and Least Squares Fitting (LSF), and uses information feedback in tracing linear or quadratic track segments (TS). A fast recurrent algorithm, stable with respect to measurement errors and background points, is suggested. The algorithm realizes a family of digital adaptive projective filters (APF) with known nonlinear weight functions (projective invariants). APF can be used in control systems for the collection, processing, and compression of data, including tracking problems for a wide class of detectors. 10 refs.; 9 figs

  4. Filter apparatus

    International Nuclear Information System (INIS)

    Butterworth, D.J.

    1980-01-01

    This invention relates to liquid filters, precoated by replaceable powders, which are used in the production of ultra pure water required for steam generation of electricity. The filter elements are capable of being installed and removed by remote control so that they can be used in nuclear power reactors. (UK)

  5. Nonlinear Principal Component Analysis Using Strong Tracking Filter

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    The paper analyzes the problem of blind source separation (BSS) based on the nonlinear principal component analysis (NPCA) criterion. An adaptive strong tracking filter (STF) based algorithm was developed, which is immune to system model mismatches. Simulations demonstrate that the algorithm converges quickly and has satisfactory steady-state accuracy. The Kalman filtering algorithm and the recursive least-squares type algorithm are shown to be special cases of the STF algorithm. Since the forgetting factor is adaptively updated by adjustment of the Kalman gain, the STF scheme provides more powerful tracking capability than the Kalman filtering and recursive least-squares algorithms.

  6. RSSI based indoor tracking in sensor networks using Kalman filters

    DEFF Research Database (Denmark)

    Tøgersen, Frede Aakmann; Skjøth, Flemming; Munksgaard, Lene

    2010-01-01

    We propose an algorithm for estimating positions of devices in a sensor network using Kalman filtering techniques. The specific area of application is monitoring the movements of cows in a barn. The algorithm consists of two filters. The first filter enhances the signal-to-noise ratio...
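
    The record is truncated before the filter details; as a generic illustration of Kalman-based position tracking of the kind described, here is a minimal constant-velocity tracker for noisy 2-D position fixes (e.g., positions derived from RSSI). It is not the authors' two-filter design; the model matrices and noise levels are assumptions.

```python
import numpy as np

dt = 1.0
F = np.eye(4); F[0, 2] = F[1, 3] = dt        # state: [x, y, vx, vy]
H = np.zeros((2, 4)); H[0, 0] = H[1, 1] = 1.0
Q = 0.01 * np.eye(4)                         # process noise (tuning assumption)
R = 4.0 * np.eye(2)                          # measurement noise (tuning assumption)

def track(positions):
    """Kalman-filter a sequence of noisy (x, y) fixes; returns smoothed path."""
    x = np.zeros(4)
    P = 100.0 * np.eye(4)                    # diffuse initial uncertainty
    out = []
    for z in positions:
        x = F @ x                            # predict
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + R                  # update
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (z - H @ x)
        P = (np.eye(4) - K @ H) @ P
        out.append(x[:2].copy())
    return np.array(out)

zs = np.cumsum(np.random.randn(50, 2), axis=0)   # noisy walk as stand-in data
path = track(zs)
```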

  7. A realization of the RAM digital filter. [Random Access Memory

    Science.gov (United States)

    Zohar, S.

    1976-01-01

    The digital filtering algorithm of W. D. Little, which employs a large RAM to obtain high speed, is implemented in a simple hardware configuration. The nonrecursive version of this filter is compared to the counting digital filter and found to be competitive for low-order filters up to order 7 (8 coefficients).

  8. Time signal filtering by relative neighborhood graph localized linear approximation

    DEFF Research Database (Denmark)

    Sørensen, John Aasted

    1994-01-01

    A time signal filtering algorithm based on the relative neighborhood graph (RNG) used for localization of linear filters is proposed. The filter is constructed from a training signal during two stages. During the first stage an RNG is constructed. During the second stage, localized linear filters...

  9. Evaluation of the image quality in digital breast tomosynthesis (DBT) employed with a compressed-sensing (CS)-based reconstruction algorithm by using the mammographic accreditation phantom

    Energy Technology Data Exchange (ETDEWEB)

    Park, Yeonok; Cho, Heemoon; Je, Uikyu; Cho, Hyosung, E-mail: hscho1@yonsei.ac.kr; Park, Chulkyu; Lim, Hyunwoo; Kim, Kyuseok; Kim, Guna; Park, Soyoung; Woo, Taeho; Choi, Sungil

    2015-12-21

    In this work, we have developed a prototype digital breast tomosynthesis (DBT) system which mainly consists of an x-ray generator (28 kVp, 7 mAs), a CMOS-type flat-panel detector (70-μm pixel size, 230.5 × 339 mm² active area), and a rotational arm to move the x-ray generator in an arc. We employed a compressed-sensing (CS)-based reconstruction algorithm, rather than a common filtered-backprojection (FBP) one, for more accurate DBT reconstruction. CS is a state-of-the-art mathematical framework for solving inverse problems that exploits the sparsity of the image to achieve substantially high accuracy. We evaluated the reconstruction quality in terms of detectability, contrast-to-noise ratio (CNR), and slice sensitivity profile (SSP) by using the mammographic accreditation phantom (Model 015, CIRS Inc.) and compared it to the FBP-based quality. The CS-based algorithm yielded much better image quality, preserving superior image homogeneity, edge sharpening, and cross-plane resolution, compared to the FBP-based one. - Highlights: • A prototype digital breast tomosynthesis (DBT) system is developed. • A compressed-sensing (CS) based reconstruction framework is employed. • We reconstructed high-quality DBT images by using the proposed reconstruction framework.

  10. Filter systems

    International Nuclear Information System (INIS)

    Vanin, V.R.

    1990-01-01

    Multidetector systems for high-resolution gamma spectroscopy are presented. The observable parameters for identifying nuclides produced simultaneously in the reaction are analysed, and the efficiency of filter systems is discussed. (M.C.K.)

  11. z-transform DFT filters and FFT's

    DEFF Research Database (Denmark)

    Bruun, G.

    1978-01-01

    The paper shows how discrete Fourier transformation can be implemented as a filter bank in a way which reduces the number of filter coefficients. A particular implementation of such a filter bank is directly related to the normal complex FFT algorithm. The principle, developed further, leads to types of DFT filter banks which utilize a minimum of complex coefficients. These implementations lead to new forms of FFTs, among which is a cos/sin FFT for a real signal which employs only real coefficients. The new FFT algorithms use only half as many real multiplications as does the classical FFT.

  12. Face Recognition using Gabor Filters

    Directory of Open Access Journals (Sweden)

    Sajjad MOHSIN

    2011-01-01

    An Elastic Bunch Graph Map (EBGM) algorithm is proposed in this research paper that implements face recognition using Gabor filters. The proposed system applies 40 different Gabor filters to an image, producing 40 filtered images with different angles and orientations. Next, maximum-intensity points in each filtered image are calculated and marked as fiducial points. The system reduces these points according to the distance between them. The next step is calculating the distances between the reduced points using the distance formula. Finally, the distances are compared with the database; if a match occurs, the image is recognized.

  13. Concurrent computation of attribute filters on shared memory parallel machines

    NARCIS (Netherlands)

    Wilkinson, Michael H.F.; Gao, Hui; Hesselink, Wim H.; Jonker, Jan-Eppo; Meijster, Arnold

    2008-01-01

    Morphological attribute filters have not previously been parallelized, mainly because they are both global and nonseparable. We propose a parallel algorithm that achieves efficient parallelism for a large class of attribute filters, including attribute openings, closings, thinnings, and thickenings.

  14. Helium-3 MR q-space imaging with radial acquisition and iterative highly constrained back-projection.

    Science.gov (United States)

    O'Halloran, Rafael L; Holmes, James H; Wu, Yu-Chien; Alexander, Andrew; Fain, Sean B

    2010-01-01

    An undersampled diffusion-weighted stack-of-stars acquisition is combined with iterative highly constrained back-projection to perform hyperpolarized helium-3 MR q-space imaging with combined regional correction of radiofrequency- and T1-related signal loss in a single breath-held scan. The technique is tested in computer simulations and phantom experiments and demonstrated in a healthy human volunteer with whole-lung coverage in a 13-sec breath-hold. Measures of lung microstructure at three different lung volumes are evaluated using inhaled gas volumes of 500 mL, 1000 mL, and 1500 mL to demonstrate feasibility. Phantom results demonstrate that the proposed technique is in agreement with theoretical values, as well as with a fully sampled two-dimensional Cartesian acquisition. Results from the volunteer study demonstrate that the root mean squared diffusion distance increased significantly from the 500-mL volume to the 1000-mL volume. This technique represents the first demonstration of a spatially resolved hyperpolarized helium-3 q-space imaging technique and shows promise for microstructural evaluation of lung disease in three dimensions.

  15. SU-E-T-800: Verification of Acurose XB Dose Calculation Algorithm at Air Cavity-Tissue Interface Using Film Measurement for Small Fields of 6-MV Flattening Filter-Free Beams

    International Nuclear Information System (INIS)

    Kang, S; Suh, T; Chung, J

    2015-01-01

    Purpose: To verify the dose accuracy of the Acuros XB (AXB) dose calculation algorithm at the air-tissue interface using an inhomogeneous phantom for 6-MV flattening filter-free (FFF) beams. Methods: An inhomogeneous phantom including an air cavity was manufactured for verifying dose accuracy at the air-tissue interface. The phantom included air cavities of 1 and 3 cm thickness. To evaluate the central axis doses (CAD) and dose profiles at the interface, dose calculations were performed for 3 × 3 and 4 × 4 cm² fields of 6-MV FFF beams with AAA and AXB in the Eclipse treatment planning system. Measurements in this region were performed with Gafchromic film. The root mean square errors (RMSE) were analyzed from the calculated and measured dose profiles. Dose profiles were divided into an inner-dose profile (>80%) and a penumbra (20% to 80%) region for evaluating RMSE. To quantify the distribution difference, gamma evaluation was used and agreement was determined with 3%/3 mm criteria. Results: For the percentage differences (%Diffs) between measured and calculated CAD at the interface, AXB showed better agreement than AAA. The %Diffs increased with increasing air cavity thickness, similarly for both algorithms. In RMSEs of the inner profile, AXB was more accurate than AAA; the difference was up to six times, owing to overestimation by AAA. RMSEs in the penumbra increased with measurement depth. Gamma evaluation also showed that the passing rates decreased in the penumbra. Conclusion: This study demonstrated that dose calculation with AXB is more accurate than with AAA at the air-tissue interface. The 2D dose distributions with AXB for both the inner profile and the penumbra showed better agreement than with AAA over the range of measurement depths and air cavity sizes.

  16. Three phase active power filter with selective harmonics elimination

    Directory of Open Access Journals (Sweden)

    Sozański Krzysztof

    2016-03-01

    This paper describes a three-phase shunt active power filter with selective harmonics elimination. The control algorithm is based on a digital filter bank, with the moving discrete Fourier transform used as the analysis filter bank. The correctness of the algorithm has been verified by simulation and experimental research. The paper includes exemplary results of current waveforms and their spectra from a three-phase active power filter.
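
    A moving (sliding) DFT of the kind used here as an analysis filter bank can be computed recursively, one bin per harmonic of interest. The sketch below extracts the 5th harmonic of a simulated load current; the sampling rate, window length, and signal are illustrative assumptions, not details from the paper.

```python
import numpy as np

def sliding_dft_bin(x, N, k):
    """Moving-window DFT of bin k (window length N) via the sliding DFT
    recurrence X_k[n] = (X_k[n-1] + x[n] - x[n-N]) * exp(j*2*pi*k/N).
    Returns the complex bin value at every sample (valid once the window fills).
    """
    w = np.exp(2j * np.pi * k / N)
    xpad = np.concatenate([np.zeros(N), x])   # zero history before the signal
    X = 0.0 + 0.0j
    out = np.empty(len(x), dtype=complex)
    for n in range(len(x)):
        X = (X + xpad[n + N] - xpad[n]) * w   # add new sample, drop oldest
        out[n] = X
    return out

# extract the 5th harmonic of a 50 Hz load current sampled at 3.2 kHz
fs, f0, N = 3200, 50, 64                      # one fundamental period per window
t = np.arange(2 * N) / fs
i_load = np.sin(2 * np.pi * f0 * t) + 0.2 * np.sin(2 * np.pi * 5 * f0 * t)
h5 = sliding_dft_bin(i_load, N, k=5)          # bin 5 == 250 Hz since fs == N*f0
print(2 * np.abs(h5[-1]) / N)                 # ~0.2, the 5th-harmonic amplitude
```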

  17. Computed Tomography Image Quality Evaluation of a New Iterative Reconstruction Algorithm in the Abdomen (Adaptive Statistical Iterative Reconstruction-V) a Comparison With Model-Based Iterative Reconstruction, Adaptive Statistical Iterative Reconstruction, and Filtered Back Projection Reconstructions.

    Science.gov (United States)

    Goodenberger, Martin H; Wagner-Bartak, Nicolaus A; Gupta, Shiva; Liu, Xinming; Yap, Ramon Q; Sun, Jia; Tamm, Eric P; Jensen, Corey T

    The purpose of this study was to compare abdominopelvic computed tomography images reconstructed with adaptive statistical iterative reconstruction-V (ASIR-V) with model-based iterative reconstruction (Veo 3.0), ASIR, and filtered back projection (FBP). Abdominopelvic computed tomography scans for 36 patients (26 males and 10 females) were reconstructed using FBP, ASIR (80%), Veo 3.0, and ASIR-V (30%, 60%, 90%). Mean ± SD patient age was 32 ± 10 years with mean ± SD body mass index of 26.9 ± 4.4 kg/m². Images were reviewed by 2 independent readers in a blinded, randomized fashion. Hounsfield unit, noise, and contrast-to-noise ratio (CNR) values were calculated for each reconstruction algorithm for further comparison. Phantom evaluation of low-contrast detectability (LCD) and high-contrast resolution was performed. Adaptive statistical iterative reconstruction-V 30%, ASIR-V 60%, and ASIR 80% were generally superior qualitatively compared with ASIR-V 90%, Veo 3.0, and FBP (P < 0.05), with respective CNR values of 5.54 ± 2.39, 8.78 ± 3.15, and 3.49 ± 1.77 (P < 0.05). Veo 3.0 and ASIR 80% had the best and worst spatial resolution, respectively. Adaptive statistical iterative reconstruction-V 30% and ASIR-V 60% provided the best combination of qualitative and quantitative performance. Adaptive statistical iterative reconstruction 80% was equivalent qualitatively, but demonstrated inferior spatial resolution and LCD.

  18. Texture classification using autoregressive filtering

    Science.gov (United States)

    Lawton, W. M.; Lee, M.

    1984-01-01

    A general theory of image texture models is proposed and its applicability to the problem of scene segmentation using texture classification is discussed. An algorithm, based on half-plane autoregressive filtering, which optimally utilizes second-order statistics to discriminate between texture classes represented by arbitrary wide-sense stationary random fields is described. Empirical results of applying this algorithm to natural and synthesized scenes are presented and future research is outlined.

  19. Median prior constrained TV algorithm for sparse view low-dose CT reconstruction.

    Science.gov (United States)

    Liu, Yi; Shangguan, Hong; Zhang, Quan; Zhu, Hongqing; Shu, Huazhong; Gui, Zhiguo

    2015-05-01

    It is known that lowering the X-ray tube current (mAs) or tube voltage (kVp) while simultaneously reducing the total number of X-ray views (sparse view) is an effective means of achieving low-dose computed tomography (CT) scans. However, the image quality from conventional filtered back-projection (FBP) usually degrades under these conditions due to excessive quantum noise. Although sparse-view CT reconstruction via total variation (TV), in scanning protocols with reduced X-ray tube current, has been demonstrated to achieve significant radiation dose reduction while maintaining image quality, noticeable patchy artifacts still exist in the reconstructed images. In this study, to address the problem of patchy artifacts, we propose a median prior constrained TV regularization that retains image quality by introducing an auxiliary vector m in register with the object. Specifically, the approximate action of m is to draw, in each iteration, an object voxel toward its own local median, aiming to improve low-dose image quality with sparse-view projection measurements. An alternating optimization algorithm is then adopted to optimize the associated objective function. We refer to the median prior constrained TV regularization as "TV_MP" for simplicity. Experimental results on digital phantoms and a clinical phantom demonstrated that the proposed TV_MP with appropriate control parameters can ensure not only a higher signal-to-noise ratio (SNR) of the reconstructed image but also better resolution than the original TV method.
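
    The authors' algorithm operates on projection data with an alternating optimization; as a toy illustration of the median-prior idea only, the sketch below runs plain gradient descent on a smoothed-TV denoising objective with an extra pull of each pixel toward its 3×3 local median. All parameter values are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import median_filter

def tv_median_denoise(f, lam=0.1, beta=0.2, eps=1e-3, iters=100, step=0.2):
    """Toy gradient descent on 0.5*||u - f||^2 + lam*TV_eps(u), plus a pull
    of each pixel toward its 3x3 local median (a simplified 'median prior')."""
    u = f.copy()
    for _ in range(iters):
        ux = np.roll(u, -1, axis=1) - u              # forward differences
        uy = np.roll(u, -1, axis=0) - u
        mag = np.sqrt(ux**2 + uy**2 + eps)           # smoothed gradient norm
        px, py = ux / mag, uy / mag
        # divergence of (grad u / |grad u|): gradient of the smoothed TV term
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        m = median_filter(u, size=3)
        u -= step * ((u - f) - lam * div + beta * (u - m))
    return u

noisy = np.zeros((64, 64)); noisy[16:48, 16:48] = 1.0
noisy += 0.2 * np.random.randn(64, 64)
clean = tv_median_denoise(noisy)
```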

  20. Generalised Filtering

    Directory of Open Access Journals (Sweden)

    Karl Friston

    2010-01-01

    We describe a Bayesian filtering scheme for nonlinear state-space models in continuous time. This scheme is called Generalised Filtering and furnishes posterior (conditional) densities on hidden states and unknown parameters generating observed data. Crucially, the scheme operates online, assimilating data to optimize the conditional density on time-varying states and time-invariant parameters. In contrast to Kalman and particle smoothing, Generalised Filtering does not require a backwards pass. In contrast to variational schemes, it does not assume conditional independence between the states and parameters. Generalised Filtering optimises the conditional density with respect to a free-energy bound on the model's log-evidence. This optimisation uses the generalised motion of hidden states and parameters, under the prior assumption that the motion of the parameters is small. We describe the scheme, present comparative evaluations with a fixed-form variational version, and conclude with an illustrative application to a nonlinear state-space model of brain imaging time-series.

  1. Filter This

    Directory of Open Access Journals (Sweden)

    Audrey Barbakoff

    2011-03-01

    In the Library with the Lead Pipe welcomes Audrey Barbakoff, a librarian at the Milwaukee Public Library, and Ahniwa Ferrari, Virtual Experience Manager at the Pierce County Library System in Washington, for a point-counterpoint piece on filtering in libraries. The opinions expressed here are those of the authors, and are not endorsed by their employers. [...]

  2. Images of gravitational and magnetic phenomena derived from two-dimensional back-projection Doppler tomography of interacting binary stars

    International Nuclear Information System (INIS)

    Richards, Mercedes T.; Cocking, Alexander S.; Fisher, John G.; Conover, Marshall J.

    2014-01-01

    We have used two-dimensional back-projection Doppler tomography as a tool to examine the influence of gravitational and magnetic phenomena in interacting binaries that undergo mass transfer from a magnetically active star onto a non-magnetic main-sequence star. This multitiered study of over 1300 time-resolved spectra of 13 Algol binaries involved calculations of the predicted dynamical behavior of the gravitational flow and the dynamics at the impact site, analysis of the velocity images constructed from tomography, and the influence on the tomograms of orbital inclination, systemic velocity, orbital coverage, and shadowing. The Hα tomograms revealed eight sources: chromospheric emission, a gas stream along the gravitational trajectory, a star-stream impact region, a bulge of absorption or emission around the mass-gaining star, a Keplerian accretion disk, an absorption zone associated with hotter gas, a disk-stream impact region, and a hot spot where the stream strikes the edge of a disk. We described several methods used to extract the physical properties of the emission sources directly from the velocity images, including S-wave analysis, the creation of simulated velocity tomograms from hydrodynamic simulations, and the use of synthetic spectra with tomography to sequentially extract the separate sources of emission from the velocity image. In summary, the tomography images have revealed results that cannot be explained solely by gravitational effects: chromospheric emission moving with the mass-losing star, a gas stream deflected from the gravitational trajectory, and alternating behavior between stream state and disk state. Our results demonstrate that magnetic effects cannot be ignored in these interacting binaries.

  3. Motion tolerant iterative reconstruction algorithm for cone-beam helical CT imaging

    Energy Technology Data Exchange (ETDEWEB)

    Takahashi, Hisashi; Goto, Taiga; Hirokawa, Koichi; Miyazaki, Osamu [Hitachi Medical Corporation, Chiba-ken (Japan). CT System Div.

    2011-07-01

    We have developed a new advanced iterative reconstruction algorithm for cone-beam helical CT. The features of this algorithm are: (a) it uses the separable paraboloidal surrogate (SPS) technique as a foundation for reconstruction to reduce noise and cone-beam artifact; (b) it uses a view weight in the back-projection process to reduce motion artifact. To confirm the improvement of our proposed algorithm over existing algorithms such as the Feldkamp-Davis-Kress (FDK) and SPS algorithms, we compared motion artifact reduction, image noise reduction (standard deviation of CT number), and cone-beam artifact reduction on simulated and clinical data sets. Our results demonstrate that the proposed algorithm dramatically reduces motion artifacts compared with the SPS algorithm, and decreases image noise compared with the FDK algorithm. In addition, the proposed algorithm potentially improves the time resolution of iterative reconstruction. (orig.)

  4. Kalman Filter Algorithm for Smoothing Measurement Data [Algoritma Filter Kalman untuk Menghaluskan Data Pengukuran]

    OpenAIRE

    Rudiyanto; Setiawan, Budi Indra; Saptomo, Satyanto Krido

    2006-01-01

    The objective of this paper is to apply a simple Kalman filter algorithm, known as a noise-data filter. The computer program was written as a Visual Basic macro in MS Excel. Tests were carried out on available temperature, water level, and force data, and the results were compared with the moving average method. The results show that the algorithm performed better, with smaller deviation, than the moving average.

  5. Kalman Filter Algorithm for Smoothing Measurement Data [Algoritma Filter Kalman untuk Menghaluskan Data Pengukuran]

    Directory of Open Access Journals (Sweden)

    Rudiyanto

    2006-12-01

    The objective of this paper is to apply a simple Kalman filter algorithm, known as a noise-data filter. The computer program was written as a Visual Basic macro in MS Excel. Tests were carried out on available temperature, water level, and force data, and the results were compared with the moving average method. The results show that the algorithm performed better, with smaller deviation, than the moving average.
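
    A scalar random-walk Kalman filter of the kind the paper applies to smooth noisy measurements can be written in a few lines (sketched here in Python rather than the paper's Excel/Visual Basic macro; the noise variances q and r are tuning assumptions):

```python
def kalman_smooth(measurements, q=1e-3, r=0.5, x0=None, p0=1.0):
    """Scalar random-walk Kalman filter for smoothing a noisy series.
    q: process noise variance (how fast the true level may drift);
    r: measurement noise variance. A larger r/q gives heavier smoothing.
    """
    x = measurements[0] if x0 is None else x0
    p = p0
    out = []
    for z in measurements:
        p = p + q                 # predict (random-walk state model)
        k = p / (p + r)           # Kalman gain
        x = x + k * (z - x)       # update with measurement z
        p = (1 - k) * p
        out.append(x)
    return out

noisy = [20.1, 20.4, 19.8, 20.9, 21.5, 21.2, 21.9, 22.3]  # e.g., temperatures
print(kalman_smooth(noisy))
```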

  6. A Distributional Representation Model For Collaborative Filtering

    OpenAIRE

    Junlin, Zhang; Heng, Cai; Tongwen, Huang; Huiping, Xue

    2015-01-01

    In this paper, we propose a very concise deep learning approach for collaborative filtering that jointly models distributional representations for users and items. The proposed framework obtains better performance when compared against current state-of-the-art algorithms, making the distributional representation model a promising direction for further research in collaborative filtering.

  7. Preliminary Study of Image Reconstruction Algorithm on a Digital Signal Processor

    Science.gov (United States)

    2014-03-01

    5.2 Comparison of CPU-GPU, CPU-FPGA, and CPU-DSP Designs. The work of implementing a VHDL description of the back-projection algorithm on a physical...FPGA was not complete. Hence, the DSP implementation results are compared with the simulated results for the VHDL design. Simulating VHDL provides an...rather than at the software level. Depending on an application's characteristics, FPGA implementations can provide a significant performance

  8. On-Line QRS Complex Detection Using Wavelet Filtering

    National Research Council Canada - National Science Library

    Szilagyi, L

    2001-01-01

    ...: first, wavelet transform filtering is applied to the signal; then QRS complex localization is performed using a maximum detection and peak classification algorithm. The algorithm has been tested...

  9. Evaluating low pass filters on SPECT reconstructed cardiac orientation estimation

    Science.gov (United States)

    Dwivedi, Shekhar

    2009-02-01

    Low-pass filters can affect the quality of clinical SPECT images by smoothing. Appropriate filter and parameter selection leads to optimum smoothing, which in turn leads to better quantification followed by correct diagnosis and accurate interpretation by the physician. This study aims at evaluating low-pass filters used with SPECT reconstruction algorithms. The criterion for evaluating the filters is the estimation of the SPECT-reconstructed cardiac azimuth and elevation angles. The low-pass filters studied are Butterworth, Gaussian, Hamming, Hanning, and Parzen. Experiments are conducted using three reconstruction algorithms, FBP (filtered back projection), MLEM (maximum likelihood expectation maximization), and OSEM (ordered subsets expectation maximization), on four gated cardiac patient projections (two patients, each with stress and rest projections). Each filter is applied with varying cutoff and order for each reconstruction algorithm (only Butterworth is used for MLEM and OSEM). The azimuth and elevation angles are calculated from the reconstructed volume, and the variation observed in the angles with varying filter parameters is reported. Our results demonstrate that the behavior of the Hamming, Hanning, and Parzen filters (used with FBP) with varying cutoff is similar for all the datasets. The Butterworth filter (cutoff > 0.4) behaves in a similar fashion for all the datasets using all the algorithms, whereas with OSEM for a cutoff < 0.4 it fails to generate the cardiac orientation due to oversmoothing, and it gives an unstable response with FBP and MLEM. This study on evaluating the effect of low-pass filter cutoff and order on cardiac orientation using three different reconstruction algorithms provides an interesting insight into the optimal selection of filter parameters.
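
    For intuition about how cutoff and order interact, the following one-dimensional sketch applies a zero-phase Butterworth low-pass with SciPy; the study itself applies the filters within SPECT reconstruction, which is not reproduced here, and the signal and parameter values are illustrative.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def butterworth_smooth(signal, cutoff, order):
    """Zero-phase Butterworth low-pass filtering; cutoff is a fraction of
    the Nyquist frequency (0 < cutoff < 1), mirroring the cutoff/order
    parameters varied in the study."""
    b, a = butter(order, cutoff)
    return filtfilt(b, a, signal)            # forward-backward: no phase lag

x = np.sin(np.linspace(0, 8 * np.pi, 256)) + 0.3 * np.random.randn(256)
heavy = butterworth_smooth(x, cutoff=0.2, order=5)   # strong smoothing
light = butterworth_smooth(x, cutoff=0.6, order=3)   # mild smoothing
```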

  10. Taxonomy-based collaborative filtering algorithms

    NARCIS (Netherlands)

    Gracia, J.; Lozano, E.; Liem, J.; Collarana, D.; Corcho, O.; Gómez-Pérez, A.; Villazón, B.

    2011-01-01

    In DynaLearn, the semantics of the QR model ingredients is made explicit by representing them as terms in ontologies. That eases the task of exploring the knowledge contained in the models, enabling rich comparisons among them. The facts that the user explicitly represents in the model constitute the

  11. Update on the non-prewhitening model observer in computed tomography for the assessment of the adaptive statistical and model-based iterative reconstruction algorithms

    Science.gov (United States)

    Ott, Julien G.; Becce, Fabio; Monnin, Pascal; Schmidt, Sabine; Bochud, François O.; Verdun, Francis R.

    2014-08-01

    The state of the art to describe image quality in medical imaging is to assess the performance of an observer conducting a task of clinical interest. This can be done by using a model observer leading to a figure of merit such as the signal-to-noise ratio (SNR). Using the non-prewhitening (NPW) model observer, we objectively characterised the evolution of its figure of merit in various acquisition conditions. The NPW model observer usually requires the use of the modulation transfer function (MTF) as well as noise power spectra. However, although the computation of the MTF poses no problem when dealing with the traditional filtered back-projection (FBP) algorithm, this is not the case when using iterative reconstruction (IR) algorithms, such as adaptive statistical iterative reconstruction (ASIR) or model-based iterative reconstruction (MBIR). Given that the target transfer function (TTF) had already shown it could accurately express the system resolution even with non-linear algorithms, we decided to tune the NPW model observer, replacing the standard MTF by the TTF. It was estimated using a custom-made phantom containing cylindrical inserts surrounded by water. The contrast differences between the inserts and water were plotted for each acquisition condition. Then, mathematical transformations were performed leading to the TTF. As expected, the first results showed a dependency of the image contrast and noise levels on the TTF for both ASIR and MBIR. Moreover, FBP also proved to be dependent of the contrast and noise when using the lung kernel. Those results were then introduced in the NPW model observer. We observed an enhancement of SNR every time we switched from FBP to ASIR to MBIR. IR algorithms greatly improve image quality, especially in low-dose conditions. Based on our results, the use of MBIR could lead to further dose reduction in several clinical applications.

  12. Particle Filter Tracking without Dynamics

    Directory of Open Access Journals (Sweden)

    Jaime Ortegon-Aguilar

    2007-01-01

    People tracking is an interesting topic in computer vision. It has applications in industrial areas such as surveillance or human-machine interaction. Particle filtering is a common algorithm for people tracking; challenging situations occur when the target's motion is poorly modelled or when unexpected motions occur. In this paper, an alternative approach to people tracking is presented. The proposed algorithm is based on particle filters, but instead of using a dynamical model, it uses background subtraction to predict future locations of particles. The algorithm is able to track people in omnidirectional sequences with a low frame rate (one or two frames per second). Our approach can tackle unexpected discontinuities and changes in the direction of the motion. The main goal of the paper is to track people in laboratories, but it has applications in surveillance, mainly in controlled environments.
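
    The paper's particle filter replaces the dynamical prediction with background subtraction, which requires image data; the generic bootstrap particle filter below (1-D random-walk state, Gaussian likelihood, multinomial resampling) only illustrates the predict-weight-resample loop that such trackers share. All models and constants are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter(observations, n=500, q=1.0, r=2.0):
    """Minimal bootstrap particle filter for a 1-D random-walk state
    observed in Gaussian noise: predict, weight, resample."""
    particles = rng.normal(0.0, 10.0, n)      # diffuse initial cloud
    estimates = []
    for z in observations:
        particles += rng.normal(0.0, q, n)    # predict (random walk)
        w = np.exp(-0.5 * ((z - particles) / r) ** 2)
        w /= w.sum()                          # normalized likelihood weights
        idx = rng.choice(n, size=n, p=w)      # multinomial resampling
        particles = particles[idx]
        estimates.append(particles.mean())
    return np.array(estimates)

truth = np.cumsum(rng.normal(0, 1.0, 100))    # hidden random walk
obs = truth + rng.normal(0, 2.0, 100)         # noisy observations
est = particle_filter(obs)
```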

  13. Filtering observations without the initial guess

    Science.gov (United States)

    Chin, T. M.; Abbondanza, C.; Gross, R. S.; Heflin, M. B.; Parker, J. W.; Soja, B.; Wu, X.

    2017-12-01

    Noisy geophysical observations sampled irregularly over space and time are often numerically "analyzed" or "filtered" before scientific usage. The standard analysis and filtering techniques based on the Bayesian principle requires "a priori" joint distribution of all the geophysical parameters of interest. However, such prior distributions are seldom known fully in practice, and best-guess mean values (e.g., "climatology" or "background" data if available) accompanied by some arbitrarily set covariance values are often used in lieu. It is therefore desirable to be able to exploit efficient (time sequential) Bayesian algorithms like the Kalman filter while not forced to provide a prior distribution (i.e., initial mean and covariance). An example of this is the estimation of the terrestrial reference frame (TRF) where requirement for numerical precision is such that any use of a priori constraints on the observation data needs to be minimized. We will present the Information Filter algorithm, a variant of the Kalman filter that does not require an initial distribution, and apply the algorithm (and an accompanying smoothing algorithm) to the TRF estimation problem. We show that the information filter allows temporal propagation of partial information on the distribution (marginal distribution of a transformed version of the state vector), instead of the full distribution (mean and covariance) required by the standard Kalman filter. The information filter appears to be a natural choice for the task of filtering observational data in general cases where prior assumption on the initial estimate is not available and/or desirable. For application to data assimilation problems, reduced-order approximations of both the information filter and square-root information filter (SRIF) have been published, and the former has previously been applied to a regional configuration of the HYCOM ocean general circulation model. Such approximation approaches are also briefed in the
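
    The key property described here, filtering without an initial guess, is easy to see in the information form: starting from a zero information matrix encodes the absence of a prior exactly, and measurement updates are purely additive. The static-estimation sketch below (no dynamics, for brevity) illustrates this; the matrices and values are illustrative assumptions.

```python
import numpy as np

# Information-form estimation sketch: maintain Y = P^{-1} (information
# matrix) and y = Y x. Starting from Y = 0 encodes "no prior" exactly,
# which a covariance-form Kalman filter cannot represent.
dim = 2
Y = np.zeros((dim, dim))      # zero information: no initial guess needed
y = np.zeros(dim)

def info_update(Y, y, H, R, z):
    """Fuse measurement z = H x + v, v ~ N(0, R); updates are additive."""
    Ri = np.linalg.inv(R)
    return Y + H.T @ Ri @ H, y + H.T @ Ri @ z

# two scalar measurements of a 2-D state, each observing one component
Y, y = info_update(Y, y, np.array([[1.0, 0.0]]), np.array([[0.5]]), np.array([3.0]))
Y, y = info_update(Y, y, np.array([[0.0, 1.0]]), np.array([[0.5]]), np.array([-1.0]))
x_hat = np.linalg.solve(Y, y)  # recoverable once Y is full rank
print(x_hat)                   # -> [3., -1.]
```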

  14. Impact of bowtie filter and object position on the two-dimensional noise power spectrum of a clinical MDCT system

    Energy Technology Data Exchange (ETDEWEB)

    Gomez-Cardona, Daniel; Cruz-Bastida, Juan Pablo [Department of Medical Physics, University of Wisconsin-Madison School of Medicine and Public Health, 1111 Highland Avenue, Madison, Wisconsin 53705 (United States); Li, Ke; Chen, Guang-Hong, E-mail: gchen7@wisc.edu [Department of Medical Physics, University of Wisconsin-Madison School of Medicine and Public Health, 1111 Highland Avenue, Madison, Wisconsin 53705 and Department of Radiology, University of Wisconsin-Madison School of Medicine and Public Health, 600 Highland Avenue, Madison, Wisconsin 53792 (United States); Budde, Adam; Hsieh, Jiang [Department of Medical Physics, University of Wisconsin-Madison School of Medicine and Public Health, 1111 Highland Avenue, Madison, Wisconsin 53705 and GE Healthcare, 3000 N Grandview Boulevard, Waukesha, Wisconsin 53188 (United States)

    2016-08-15

    Purpose: Noise characteristics of clinical multidetector CT (MDCT) systems can be quantified by the noise power spectrum (NPS). Although the NPS of CT has been extensively studied in the past few decades, the joint impact of the bowtie filter and object position on the NPS has not been systematically investigated. This work studies the interplay of these two factors on the two dimensional (2D) local NPS of a clinical CT system that uses the filtered backprojection algorithm for image reconstruction. Methods: A generalized NPS model was developed to account for the impact of the bowtie filter and image object location in the scan field-of-view (SFOV). For a given bowtie filter, image object, and its location in the SFOV, the shape and rotational symmetries of the 2D local NPS were directly computed from the NPS model without going through the image reconstruction process. The obtained NPS was then compared with the measured NPSs from the reconstructed noise-only CT images in both numerical phantom simulation studies and experimental phantom studies using a clinical MDCT scanner. The shape and the associated symmetry of the 2D NPS were classified by borrowing the well-known atomic spectral symbols s, p, and d, which correspond to circular, dumbbell, and cloverleaf symmetries, respectively, of the wave function of electrons in an atom. Finally, simulated bar patterns were embedded into experimentally acquired noise backgrounds to demonstrate the impact of different NPS symmetries on the visual perception of the object. Results: (1) For a central region in a centered cylindrical object, an s-wave symmetry was always present in the NPS, no matter whether the bowtie filter was present or not. In contrast, for a peripheral region in a centered object, the symmetry of its NPS was highly dependent on the bowtie filter, and both p-wave symmetry and d-wave symmetry were observed in the NPS. (2) For a centered region-of-interest (ROI) in an off-centered object, the symmetry of

  15. Impact of bowtie filter and object position on the two-dimensional noise power spectrum of a clinical MDCT system

    International Nuclear Information System (INIS)

    Gomez-Cardona, Daniel; Cruz-Bastida, Juan Pablo; Li, Ke; Chen, Guang-Hong; Budde, Adam; Hsieh, Jiang

    2016-01-01

    Purpose: Noise characteristics of clinical multidetector CT (MDCT) systems can be quantified by the noise power spectrum (NPS). Although the NPS of CT has been extensively studied in the past few decades, the joint impact of the bowtie filter and object position on the NPS has not been systematically investigated. This work studies the interplay of these two factors on the two dimensional (2D) local NPS of a clinical CT system that uses the filtered backprojection algorithm for image reconstruction. Methods: A generalized NPS model was developed to account for the impact of the bowtie filter and image object location in the scan field-of-view (SFOV). For a given bowtie filter, image object, and its location in the SFOV, the shape and rotational symmetries of the 2D local NPS were directly computed from the NPS model without going through the image reconstruction process. The obtained NPS was then compared with the measured NPSs from the reconstructed noise-only CT images in both numerical phantom simulation studies and experimental phantom studies using a clinical MDCT scanner. The shape and the associated symmetry of the 2D NPS were classified by borrowing the well-known atomic spectral symbols s, p, and d, which correspond to circular, dumbbell, and cloverleaf symmetries, respectively, of the wave function of electrons in an atom. Finally, simulated bar patterns were embedded into experimentally acquired noise backgrounds to demonstrate the impact of different NPS symmetries on the visual perception of the object. Results: (1) For a central region in a centered cylindrical object, an s-wave symmetry was always present in the NPS, no matter whether the bowtie filter was present or not. In contrast, for a peripheral region in a centered object, the symmetry of its NPS was highly dependent on the bowtie filter, and both p-wave symmetry and d-wave symmetry were observed in the NPS. (2) For a centered region-of-interest (ROI) in an off-centered object, the symmetry of

  16. Design and implementation of predictive filtering system for current reference generation of active power filter

    Energy Technology Data Exchange (ETDEWEB)

    Kilic, Tomislav; Milun, Stanko; Petrovic, Goran [FESB University of Split, Faculty of Electrical Engineering, Machine Engineering and Naval Architecture, R. Boskovica bb, 21000, Split (Croatia)

    2007-02-15

    Shunt active power filters are used to attenuate harmonic currents in power systems by injecting equal but opposite compensating currents. Successful control of an active filter requires an accurate current reference. In this paper, current reference determination based on a predictive filtering structure is presented. The current reference is obtained by taking the difference between the load current and its fundamental harmonic. For fundamental harmonic determination with no time delay, a combination of a digital predictive filter and a low-pass filter is used. The proposed method was implemented on a laboratory prototype of a three-phase active power filter. The algorithm for current reference determination was adapted and implemented on a DSP controller. Simulation and experimental results show that the active power filter with the implemented predictive filtering structure gives satisfactory performance in power system harmonic attenuation. (author)

  17. High-resolution backprojection at regional distance: Application to the Haiti M7.0 earthquake and comparisons with finite source studies

    Science.gov (United States)

    Meng, L.; Ampuero, J.-P.; Sladen, A.; Rendon, H.

    2012-04-01

    A catastrophic Mw7 earthquake ruptured on 12 January 2010 on a complex fault system near Port-au-Prince, Haiti. Offshore rupture is suggested by aftershock locations and marine geophysics studies, but its extent remains difficult to define using geodetic and teleseismic observations. Here we perform the multitaper multiple signal classification (MUSIC) analysis, a high-resolution array technique, at regional distance with recordings from the Venezuela National Seismic Network to resolve high-frequency (about 0.4 Hz) aspects of the earthquake process. Our results indicate westward rupture with two subevents, roughly 35 km apart. In comparison, a lower-frequency finite source inversion with fault geometry based on new geologic and aftershock data shows two slip patches with centroids 21 km apart. Apparent source time functions from USArray further constrain the intersubevent time delay, implying a rupture speed of 3.3 km/s. The tips of the slip zones coincide with subevents imaged by backprojections. The different subevent locations found by backprojection and source inversion suggest spatial complementarity between high- and low-frequency source radiation consistent with high-frequency radiation originating from rupture arrest phases at the edges of main slip areas. The centroid moment tensor (CMT) solution and a geodetic-only inversion have similar moment, indicating most of the moment released is captured by geodetic observations and no additional rupture is required beyond where it is imaged in our preferred model. Our results demonstrate the contribution of backprojections of regional seismic array data for earthquakes down to M ≈ 7, especially when incomplete coverage of seismic and geodetic data implies large uncertainties in source inversions.

  18. Simulation for noise cancellation using LMS adaptive filter

    Science.gov (United States)

    Lee, Jia-Haw; Ooi, Lu-Ean; Ko, Ying-Hao; Teoh, Choe-Yung

    2017-06-01

    In this paper, the fundamental noise cancellation algorithm, the least mean square (LMS) algorithm, is studied and enhanced with an adaptive filter. A simulation of noise cancellation using the LMS adaptive filter algorithm is developed. A noise-corrupted speech signal and an engine noise signal are used as inputs for the LMS adaptive filter algorithm. The filtered signal is compared to the original noise-free speech signal in order to highlight the level of attenuation of the noise signal. The result shows that the noise signal is successfully cancelled by the developed adaptive filter. The difference between the noise-free speech signal and the filtered signal is calculated, and the outcome implies that the filtered signal approaches the noise-free speech signal as the adaptive filtering proceeds. The frequency range of the noise successfully cancelled by the LMS adaptive filter algorithm is determined by performing a Fast Fourier Transform (FFT) on the signals. The LMS adaptive filter algorithm shows significant noise cancellation in the lower frequency range.
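
    A minimal version of the LMS noise canceller described here takes a primary channel (speech plus noise) and a correlated noise reference, adapts an FIR filter so the filtered reference matches the noise in the primary, and outputs the error as the cleaned signal. The signal models, filter order, and step size below are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def lms_cancel(primary, reference, order=32, mu=0.01):
    """LMS adaptive noise canceller: adapt FIR weights w so that w * reference
    tracks the noise in the primary channel; the error e[n] is the output."""
    w = np.zeros(order)
    e = np.zeros(len(primary))
    for n in range(order - 1, len(primary)):
        x = reference[n - order + 1:n + 1][::-1]   # most recent sample first
        y = w @ x                                  # noise estimate
        e[n] = primary[n] - y                      # cleaned sample
        w += mu * e[n] * x                         # LMS weight update
    return e

fs = 8000
t = np.arange(2 * fs) / fs
speech = 0.5 * np.sin(2 * np.pi * 440 * t)         # stand-in for speech
noise = np.sin(2 * np.pi * 120 * t)                # engine-like hum
primary = speech + noise
reference = np.roll(noise, 3)                      # correlated noise pickup
clean = lms_cancel(primary, reference)
```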

  19. Q-Method Extended Kalman Filter

    Science.gov (United States)

    Zanetti, Renato; Ainscough, Thomas; Christian, John; Spanos, Pol D.

    2012-01-01

    A new algorithm is proposed that smoothly integrates non-linear estimation of the attitude quaternion using Davenport's q-method and estimation of non-attitude states through an extended Kalman filter. The new method is compared to a similar existing algorithm, showing its similarities and differences. The validity of the proposed approach is confirmed through numerical simulations.

  20. Bag filters

    Energy Technology Data Exchange (ETDEWEB)

    Yoshida, M; Komeda, I; Takizaki, K

    1982-01-01

    Bag filters are widely used throughout the cement industry for recovering raw materials and products and for improving the environment. Their general mechanism, performance and advantages are shown in a classification table, and there are comparisons and explanations. The outer and inner sectional construction of the Shinto ultra-jet collector for pulverized coal is illustrated and there are detailed descriptions of dust cloud prevention, of measures used against possible sources of ignition, of oxygen supply and of other topics. Finally, explanations are given of matters that require careful and comprehensive study when selecting equipment.

  1. Digital filters

    CERN Document Server

    Hamming, Richard W

    1997-01-01

    Digital signals occur in an increasing number of applications: in telephone communications; in radio, television, and stereo sound systems; and in spacecraft transmissions, to name just a few. This introductory text examines digital filtering, the processes of smoothing, predicting, differentiating, integrating, and separating signals, as well as the removal of noise from a signal. The processes bear particular relevance to computer applications, one of the focuses of this book.Readers will find Hamming's analysis accessible and engaging, in recognition of the fact that many people with the s

  2. Gabor filter based fingerprint image enhancement

    Science.gov (United States)

    Wang, Jin-Xiang

    2013-03-01

    Fingerprint recognition has become the most reliable biometric technology owing to the uniqueness and invariance of fingerprints, making it one of the most convenient and dependable techniques for personal authentication. The development of automated fingerprint identification systems is an urgent need for modern information security, and the fingerprint preprocessing stage plays an important part in such systems. This article introduces the general steps of fingerprint recognition, namely image input, preprocessing, feature recognition, and fingerprint image enhancement. As a key step, fingerprint image enhancement affects the accuracy of the whole system. The article focuses on the characteristics of the fingerprint image, the Gabor filter algorithm for fingerprint image enhancement, the theoretical basis of Gabor filters, and a demonstration of the filter. The enhancement algorithm is demonstrated on the Windows XP platform with MATLAB 6.5 as the development tool. The results show that the Gabor filter is effective for fingerprint image enhancement.
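
    As a sketch of the filter the article demonstrates, the snippet below builds a real, even-symmetric Gabor kernel tuned to a local ridge orientation and frequency; in a full enhancement pipeline each image block would be convolved with the kernel matched to its own orientation and frequency estimates. This is a generic Python illustration (parameter names are assumptions), not the article's MATLAB code.

        import numpy as np

        def gabor_kernel(ksize, theta, freq, sigma):
            # theta: ridge orientation (radians), freq: ridge frequency (cycles/pixel)
            half = ksize // 2
            y, x = np.mgrid[-half:half + 1, -half:half + 1]
            # rotate coordinates so x_t runs perpendicular to the ridges
            x_t = x * np.cos(theta) + y * np.sin(theta)
            y_t = -x * np.sin(theta) + y * np.cos(theta)
            envelope = np.exp(-(x_t**2 + y_t**2) / (2 * sigma**2))
            return envelope * np.cos(2 * np.pi * freq * x_t)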

  3. MR image reconstruction via guided filter.

    Science.gov (United States)

    Huang, Heyan; Yang, Hang; Wang, Kang

    2018-04-01

    Magnetic resonance imaging (MRI) reconstruction from the smallest possible set of Fourier samples has been a difficult problem in the medical imaging field. In our paper, we present a new approach based on a guided filter for an efficient MRI recovery algorithm. The guided filter is an edge-preserving smoothing operator and behaves better near edges than the bilateral filter. Our reconstruction method consists of two steps. First, we propose two cost functions that can be computed efficiently, yielding two different images. Second, the guided filter is applied to these two images for efficient edge-preserving filtering: one image serves as the guidance image and the other as the input image to be filtered. By introducing the guided filter, our reconstruction algorithm recovers more detail. We compare our reconstruction algorithm with some competitive MRI reconstruction techniques in terms of PSNR and visual quality. Simulation results are given to show the performance of our new method.
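
    For reference, the standard gray-scale guided filter can be written in a few lines of Python. This is a generic sketch of the operator the paper builds on (window radius and regularization values are illustrative), not the authors' reconstruction code:

        import numpy as np
        from scipy.ndimage import uniform_filter

        def guided_filter(I, p, r=4, eps=1e-3):
            # I: guidance image, p: image to filter (float arrays of equal shape)
            size = 2 * r + 1
            mean_I = uniform_filter(I, size)
            mean_p = uniform_filter(p, size)
            cov_Ip = uniform_filter(I * p, size) - mean_I * mean_p
            var_I = uniform_filter(I * I, size) - mean_I * mean_I
            a = cov_Ip / (var_I + eps)        # per-window linear coefficients
            b = mean_p - a * mean_I
            # average the coefficients, then apply the local linear model
            return uniform_filter(a, size) * I + uniform_filter(b, size)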

  4. Hybrid employment recommendation algorithm based on Spark

    Science.gov (United States)

    Li, Zuoquan; Lin, Yubei; Zhang, Xingming

    2017-08-01

    Aiming at real-time application of the collaborative filtering (CF) employment recommendation algorithm, a clustering collaborative filtering recommendation algorithm (CCF) is developed, which applies hierarchical clustering to CF and narrows the query range for neighbour items. In addition, to solve the cold-start problem of the content-based recommendation algorithm (CB), a content-based algorithm with users' information (CBUI) is introduced for job recommendation. Furthermore, a hybrid recommendation algorithm (HRA) combining the CCF and CBUI algorithms is proposed and implemented on the Spark platform. The experimental results show that HRA can overcome the problems of cold start and data sparsity, and achieves good recommendation accuracy and scalability for employment recommendation.
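
    The core CCF idea, clustering items so that the neighbour search is confined to one cluster, can be sketched in Python as follows; this is a toy in-memory version with an assumed matrix layout and scoring rule, not the Spark implementation:

        import numpy as np
        from scipy.cluster.hierarchy import linkage, fcluster

        def ccf_scores(ratings, user, k_clusters=10):
            # ratings: (users x items) matrix, 0 = unrated
            labels = fcluster(linkage(ratings.T, method='ward'),
                              k_clusters, criterion='maxclust')
            scores = np.zeros(ratings.shape[1])
            for i in np.nonzero(ratings[user])[0]:        # items the user rated
                same = np.nonzero(labels == labels[i])[0] # narrowed query range
                for j in same:
                    if ratings[user, j] == 0:             # candidate items only
                        num = ratings[:, i] @ ratings[:, j]
                        den = (np.linalg.norm(ratings[:, i]) *
                               np.linalg.norm(ratings[:, j]) + 1e-9)
                        scores[j] += (num / den) * ratings[user, i]
            return scores                                 # rank descending to recommend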

  5. The 3-D alignment of objects in dynamic PET scans using filtered sinusoidal trajectories of sinogram

    International Nuclear Information System (INIS)

    Kostopoulos, Aristotelis E.; Happonen, Antti P.; Ruotsalainen, Ulla

    2006-01-01

    In this study, our goal is to employ a novel 3-D alignment method for dynamic positron emission tomography (PET) scans. Because the acquired data (i.e. sinograms) often contain considerable noise, filtering the data prior to alignment presumably improves the final results. In this study, we utilized a novel 3-D stackgram domain approach. In the stackgram domain, the signals along the sinusoidal trajectories of the sinogram can be processed separately. In this work, we performed angular stackgram domain filtering by employing well-known 1-D filters: the Gaussian low-pass filter and the median filter. In addition, we employed two wavelet de-noising techniques. After filtering we performed alignment of objects in the stackgram domain. The local alignment technique we used is based on similarity comparisons between locus vectors (i.e. the signals along the sinusoidal trajectories of the sinogram) in a 3-D neighborhood of sequences of the stackgrams. Aligned stackgrams can be transformed back to sinograms (Method 1), or alternatively directly to filtered back-projected images (Method 2). In order to evaluate the alignment process, simulated data with different kinds of additive noise were used. The results indicated that filtering prior to alignment can be important for accuracy.

  6. A Rapid Introduction to Adaptive Filtering

    CERN Document Server

    Vega, Leonardo Rey

    2013-01-01

    In this book, the authors provide insights into the basics of adaptive filtering, which are particularly useful for students taking their first steps into this field. They start by studying the problem of minimum mean-square-error filtering, i.e., Wiener filtering. Then, they analyze iterative methods for solving the optimization problem, e.g., the Method of Steepest Descent. By proposing stochastic approximations, several basic adaptive algorithms are derived, including Least Mean Squares (LMS), Normalized Least Mean Squares (NLMS) and Sign-error algorithms. The authors provide a general framework to study the stability and steady-state performance of these algorithms. The Affine Projection Algorithm (APA), which provides faster convergence at the expense of computational complexity (although fast implementations can be used), is also presented. In addition, the Least Squares (LS) method and its recursive version (RLS), including fast implementations, are discussed. The book closes with the discussion of severa...

  7. Combination of Wiener filtering and singular value decomposition filtering for volume imaging PET

    International Nuclear Information System (INIS)

    Shao, L.; Lewitt, R.M.; Karp, J.S.

    1995-01-01

    Although the three-dimensional (3D) multi-slice rebinning (MSRB) algorithm in PET is fast and practical, and provides an accurate reconstruction, the MSRB image in general suffers from the noise amplified by its singular value decomposition (SVD) filtering operation in the axial direction. Their aim in this study is to combine the use of the Wiener filter (WF) with the SVD to decrease the noise and improve the image quality. The SVD filtering "deconvolves" the spatially variant axial response function, while the WF suppresses the noise and reduces the blurring not modeled by the axial SVD filter but included in the system modulation transfer function. Therefore, the synthesis of these two techniques combines the advantages of both filters. The authors applied this approach to the volume imaging HEAD PENN-PET brain scanner with an axial extent of 256 mm. This combined filter was evaluated in terms of spatial resolution, image contrast, and signal-to-noise ratio with several phantoms, such as a cold sphere phantom and a 3D brain phantom. Specifically, the authors studied both the SVD filter with an axial Wiener filter and the SVD filter with a 3D Wiener filter, and compared the filtered images to those from the 3D reprojection (3DRP) reconstruction algorithm. Their results indicate that the Wiener filter increases the signal-to-noise ratio and also improves the contrast. For the MSRB images of the 3D brain phantom, after 3D WF, the Gray/White and Gray/Ventricle ratios were improved from 1.8 to 2.8 and from 2.1 to 4.1, respectively. In addition, the image quality with the MSRB algorithm is close to that of the 3DRP algorithm with 3D WF applied to both image reconstructions
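
    For orientation, the classic frequency-domain Wiener restoration step that such combinations build on can be written compactly in Python; this shows only the generic operator (the transfer function H and the noise-to-signal ratio are assumed inputs), not the paper's axial SVD plus Wiener combination:

        import numpy as np

        def wiener_restore(blurred, H, nsr):
            # H: system transfer function (FFT of the PSF), nsr: noise-to-signal ratio
            B = np.fft.fft2(blurred)
            W = np.conj(H) / (np.abs(H) ** 2 + nsr)   # Wiener deconvolution kernel
            return np.real(np.fft.ifft2(W * B))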

  8. Metal artifact reduction image reconstruction algorithm for CT of implanted metal orthopedic devices: a work in progress

    International Nuclear Information System (INIS)

    Liu, Patrick T.; Pavlicek, William P.; Peter, Mary B.; Roberts, Catherine C.; Paden, Robert G.; Spangehl, Mark J.

    2009-01-01

    Despite recent advances in CT technology, metal orthopedic implants continue to cause significant artifacts on many CT exams, often obscuring diagnostic information. We performed this prospective study to evaluate the effectiveness of an experimental metal artifact reduction (MAR) image reconstruction program for CT. We examined image quality on CT exams performed in patients with hip arthroplasties as well as other types of implanted metal orthopedic devices. The exam raw data were reconstructed using two different methods, the standard filtered backprojection (FBP) program and the MAR program. Images were evaluated for quality of the metal-cement-bone interfaces, trabeculae ≤1 cm from the metal, trabeculae 5 cm apart from the metal, streak artifact, and overall soft tissue detail. The Wilcoxon Rank Sum test was used to compare the image scores from the large and small prostheses. Interobserver agreement was calculated. When all patients were grouped together, the MAR images showed mild to moderate improvement over the FBP images. However, when the cases were divided by implant size, the MAR images consistently received higher image quality scores than the FBP images for large metal implants (total hip prostheses). For small metal implants (screws, plates, staples), conversely, the MAR images received lower image quality scores than the FBP images due to blurring artifact. The difference of image scores for the large and small implants was significant (p=0.002). Interobserver agreement was found to be high for all measures of image quality (k>0.9). The experimental MAR reconstruction algorithm significantly improved CT image quality for patients with large metal implants. However, the MAR algorithm introduced blurring artifact that reduced image quality with small metal implants. (orig.)

  9. Particle Kalman Filtering: A Nonlinear Bayesian Framework for Ensemble Kalman Filters*

    KAUST Repository

    Hoteit, Ibrahim

    2012-02-01

    This paper investigates an approximation scheme of the optimal nonlinear Bayesian filter based on the Gaussian mixture representation of the state probability distribution function. The resulting filter is similar to the particle filter, but is different from it in that the standard weight-type correction in the particle filter is complemented by the Kalman-type correction with the associated covariance matrices in the Gaussian mixture. The authors show that this filter is an algorithm in between the Kalman filter and the particle filter, and therefore is referred to as the particle Kalman filter (PKF). In the PKF, the solution of a nonlinear filtering problem is expressed as the weighted average of an “ensemble of Kalman filters” operating in parallel. Running an ensemble of Kalman filters is, however, computationally prohibitive for realistic atmospheric and oceanic data assimilation problems. For this reason, the authors consider the construction of the PKF through an “ensemble” of ensemble Kalman filters (EnKFs) instead, and call the implementation the particle EnKF (PEnKF). It is shown that different types of the EnKFs can be considered as special cases of the PEnKF. Similar to the situation in the particle filter, the authors also introduce a resampling step to the PEnKF in order to reduce the risk of weights collapse and improve the performance of the filter. Numerical experiments with the strongly nonlinear Lorenz-96 model are presented and discussed.

  10. Star-sensor-based predictive Kalman filter for satellite attitude estimation

    Institute of Scientific and Technical Information of China (English)

    林玉荣; 邓正隆

    2002-01-01

    A real-time attitude estimation algorithm, namely the predictive Kalman filter, is presented. This algorithm can accurately estimate the three-axis attitude of a satellite using only star sensor measurements. The implementation of the filter includes two steps: first, predicting the torque modeling error, and then estimating the attitude. Simulation results indicate that the predictive Kalman filter provides robust performance in the presence of both significant errors in the assumed model and in the initial conditions.

  11. Evaluation of an aSi-EPID with flattening filter free beams: Applicability to the GLAaS algorithm for portal dosimetry and first experience for pretreatment QA of RapidArc

    International Nuclear Information System (INIS)

    Nicolini, G.; Clivio, A.; Vanetti, E.; Cozzi, L.; Fogliata, A.; Krauss, H.; Fenoglietto, P.

    2013-01-01

    Purpose: To demonstrate the feasibility of portal dosimetry with an amorphous silicon mega voltage imager for flattening filter free (FFF) photon beams by means of the GLAaS methodology and to validate it for pretreatment quality assurance of volumetric modulated arc therapy (RapidArc).Methods: The GLAaS algorithm, developed for flattened beams, was applied to FFF beams of nominal energy of 6 and 10 MV generated by a Varian TrueBeam (TB). The amorphous silicon electronic portal imager [named mega voltage imager (MVI) on TB] was used to generate integrated images that were converted into matrices of absorbed dose to water. To enable GLAaS use under the increased dose-per-pulse and dose-rate conditions of the FFF beams, new operational source-detector-distance (SDD) was identified to solve detector saturation issues. Empirical corrections were defined to account for the shape of the profiles of the FFF beams to expand the original methodology of beam profile and arm backscattering correction. GLAaS for FFF beams was validated on pretreatment verification of RapidArc plans for three different TB linacs. In addition, the first pretreatment results from clinical experience on 74 arcs were reported in terms of γ analysis.Results: MVI saturates at 100 cm SDD for FFF beams but this can be avoided if images are acquired at 150 cm for all nominal dose rates of FFF beams. Rotational stability of the gantry-imager system was tested and resulted in a minimal apparent imager displacement during rotation of 0.2 ± 0.2 mm at SDD = 150 cm. The accuracy of this approach was tested with three different Varian TrueBeam linacs from different institutes. Data were stratified per energy and machine and showed no dependence with beam quality and MLC model. The results from clinical pretreatment quality assurance, provided a gamma agreement index (GAI) in the field area for six and ten FFF beams of (99.8 ± 0.3)% and (99.5 ± 0.6)% with distance to agreement and dose difference criteria

  12. Research on Palmprint Identification Method Based on Quantum Algorithms

    Directory of Open Access Journals (Sweden)

    Hui Li

    2014-01-01

    Full Text Available Quantum image recognition is a technology that uses quantum algorithms to process image information, and it can obtain better results than classical algorithms. In this paper, four different quantum algorithms are used in the three stages of palmprint recognition. First, a quantum adaptive median filtering algorithm is presented for palmprint filtering; comparison shows that the quantum filtering algorithm achieves a better filtering result than the classical algorithm. Next, the quantum Fourier transform (QFT) is used to extract pattern features in only one operation, owing to quantum parallelism. The proposed algorithm exhibits an exponential speed-up compared with the discrete Fourier transform in the feature extraction. Finally, quantum set operations and Grover's algorithm are used in palmprint matching. According to the experimental results, the quantum algorithm needs only on the order of the square root of N operations to find the target palmprint, whereas the traditional method needs N calculations. At the same time, the matching accuracy of the quantum algorithm is almost 100%.

  13. Modified unscented Kalman filter using modified filter gain and variance scale factor for highly maneuvering target tracking

    Institute of Scientific and Technical Information of China (English)

    Changyun Liu; Penglang Shui; Gang Wei; Song Li

    2014-01-01

    To improve the low tracking precision caused by lagged filter gain or imprecise state noise when the target maneuvers strongly, a modified unscented Kalman filter algorithm based on an improved filter gain and an adaptive scale factor for the state noise is presented. In every filter step, the estimated scale factor is used to update the state noise covariance Qk, and the improved filter gain is obtained in the filter process of the unscented Kalman filter (UKF) via the predicted variance Pk|k-1, in a manner similar to the standard Kalman filter. Simulation results show that the proposed algorithm provides better accuracy and a better ability to adapt to a highly maneuvering target compared with the standard UKF.

  14. On filtering over Itô-Volterra observations

    Directory of Open Access Journals (Sweden)

    Michael V. Basin

    2000-01-01

    Full Text Available In this paper, the Kalman-Bucy filter is designed for an Itô-Volterra process over Itô-Volterra observations that cannot be reduced to the case of a differential observation equation. The Kalman-Bucy filter is then designed for an Itô-Volterra process over discontinuous Itô-Volterra observations. Based on the obtained results, the filtering problem over discrete observations with delays is solved. Proofs of the theorems substantiating the filtering algorithms are given.

  15. Adaptive filtering primer with Matlab

    CERN Document Server

    Poularikas, Alexander D

    2006-01-01

    Contents: Introduction (Signal Processing; An Example; Outline of the Text). Discrete-Time Signal Processing (Discrete-Time Signals; Transform-Domain Representation of Discrete-Time Signals; The Z-Transform; Discrete-Time Systems; Problems; Hints-Solutions-Suggestions). Random Variables, Sequences, and Stochastic Processes (Random Signals and Distributions; Averages; Stationary Processes; Special Random Signals and Probability Density Functions; Wiener-Khinchin Relations; Filtering Random Processes; Special Types of Random Processes; Nonparametric Spectra Estimation; Parametric Methods of Power Spectral Estimation; Problems; Hints-Solutions-Suggestions). Wiener Filters (The Mean-Square Error; The FIR Wiener Filter; The Wiener Solution; Wiener Filtering Examples; Problems; Hints-Solutions-Suggestions). Eigenvalues of Rx - Properties of the Error Surface (The Eigenvalues of the Correlation Matrix; Geometrical Properties of the Error Surface; Problems; Hints-Solutions-Suggestions). Newton and Steepest-Descent Method (One-Dimensional Gradient Search Method; Steepest-Descent Algorithm; Problems; Hints-Sol...)

  16. Efficient Scalable Median Filtering Using Histogram-Based Operations.

    Science.gov (United States)

    Green, Oded

    2018-05-01

    Median filtering is a smoothing technique for noise removal in images. While there are various implementations of median filtering for a single-core CPU, there are few implementations for accelerators and multi-core systems. Many parallel implementations of median filtering use a sorting algorithm for rearranging the values within a filtering window and taking the median of the sorted values. While using sorting algorithms allows for simple parallel implementations, the cost of the sorting becomes prohibitive as the filtering windows grow. This makes such algorithms, sequential and parallel alike, inefficient. In this work, we introduce the first software parallel median filtering that is not sorting-based. The new algorithm uses efficient histogram-based operations. These reduce the computational requirements of the new algorithm while also accessing the image fewer times. We show an implementation of our algorithm for both the CPU and NVIDIA's CUDA-supported graphics processing unit (GPU). The new algorithm is compared with several other leading CPU and GPU implementations. The CPU implementation has near perfect linear scaling with a speedup on a quad-core system. The GPU implementation is several orders of magnitude faster than the other GPU implementations for mid-size median filters. For small kernels, comparison-based approaches are preferable, as fewer operations are required. Lastly, the new algorithm is open-source and can be found in the OpenCV library.
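
    The histogram idea can be illustrated with a 1-D sliding-median sketch in the spirit of Huang's classic running-median algorithm: the window's 256-bin histogram is updated incrementally instead of re-sorting at every pixel. This sketch (ours, for 8-bit data with clamped borders) is not the paper's parallel implementation:

        import numpy as np

        def running_median_1d(row, radius):
            # row: 1-D uint8 array; window size 2*radius + 1
            n = len(row)
            out = np.empty(n, dtype=np.uint8)
            hist = np.zeros(256, dtype=int)
            for k in range(-radius, radius + 1):          # histogram of first window
                hist[row[min(max(k, 0), n - 1)]] += 1
            rank = radius + 1                             # rank of the median
            for i in range(n):
                cum = 0
                for v in range(256):                      # read median off the histogram
                    cum += hist[v]
                    if cum >= rank:
                        out[i] = v
                        break
                # slide: drop the leftmost sample, add the incoming one
                hist[row[min(max(i - radius, 0), n - 1)]] -= 1
                hist[row[min(max(i + radius + 1, 0), n - 1)]] += 1
            return out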

  17. MR fingerprinting reconstruction with Kalman filter.

    Science.gov (United States)

    Zhang, Xiaodi; Zhou, Zechen; Chen, Shiyang; Chen, Shuo; Li, Rui; Hu, Xiaoping

    2017-09-01

    Magnetic resonance fingerprinting (MR fingerprinting or MRF) is a newly introduced quantitative magnetic resonance imaging technique, which enables simultaneous multi-parameter mapping in a single acquisition with improved time efficiency. The current MRF reconstruction method is based on dictionary matching, which may be limited by the discrete and finite nature of the dictionary and the computational cost associated with dictionary construction, storage and matching. In this paper, we describe a reconstruction method based on the Kalman filter for MRF, which avoids the use of a dictionary to obtain continuous MR parameter measurements. Within this Kalman filter framework, the Bloch equation of the inversion-recovery balanced steady-state free-precession (IR-bSSFP) MRF sequence was derived to predict the signal evolution, and the acquired signal was used to update the prediction. The algorithm gradually estimates the accurate MR parameters during the recursive calculation. Single-pixel and numerical brain phantom simulations were implemented with the Kalman filter, and the results were compared with those from the dictionary matching reconstruction algorithm to demonstrate the feasibility and assess the performance of the Kalman filter algorithm. The results demonstrated that the Kalman filter algorithm is applicable for MRF reconstruction, eliminating the need for a predefined dictionary and obtaining continuous MR parameters, in contrast to the dictionary matching algorithm. Copyright © 2017 Elsevier Inc. All rights reserved.

  18. Efficient Filtering of Noisy Fingerprint Images

    Directory of Open Access Journals (Sweden)

    Maria Liliana Costin

    2016-01-01

    Full Text Available Fingerprint identification is an important field in the wide domain of biometrics, with many applications in different areas such as the judiciary, mobile phones, access systems and airports. There are many elaborate algorithms for fingerprint identification, but none of them can guarantee that the results of identification are always 100% accurate. A first step in analysing a fingerprint image consists of pre-processing or filtering. If the result of this step is not of good quality, the subsequent identification process can fail. A major difficulty arises in fingerprint identification when the images to be identified against a fingerprint image database are corrupted by different types of noise. The objectives of the paper are: the successful completion of noisy digital image filtering, a novel, more robust algorithm for identifying the best filtering algorithm, and the classification and ranking of the images. The choice of the best filtered images from the set of 9 algorithms is made with a dual method combining fuzzy and aggregation models. Through this paper we propose a set of 9 newly designed filters for processing digital images using the following methods: quartiles, medians, averages, thresholds and histogram equalization, applied either over the whole image or locally on small areas. Finally, the statistics reveal the classification and ranking of the best algorithms.

  19. Convergent Filter Bases

    Directory of Open Access Journals (Sweden)

    Coghetto Roland

    2015-09-01

    Full Text Available We are inspired by the work of Henri Cartan [16], Bourbaki [10] (TG. I Filtres) and Claude Wagschal [34]. We define the base of a filter, the image filter, convergent filter bases, the limit filter and the filter base of tails (fr: filtre des sections).

  20. A tool for automatic generation of RTL-level VHDL description of RNS FIR filters

    DEFF Research Database (Denmark)

    Re, Andrea Del; Nannarelli, Alberto; Re, Marco

    2004-01-01

    Although digital filters based on the Residue Number System (RNS) show high performance and low power dissipation, RNS filters are not widely used in DSP systems, because of the complexity of the algorithms involved. We present a tool to design RNS FIR filters which hides the RNS algorithms to th...

  1. Applicability of a set of tomographic reconstruction algorithms for quantitative SPECT on irradiated nuclear fuel assemblies

    Energy Technology Data Exchange (ETDEWEB)

    Jacobsson Svärd, Staffan, E-mail: staffan.jacobsson_svard@physics.uu.se; Holcombe, Scott; Grape, Sophie

    2015-05-21

    assessment, which may be particularly useful in the latter application. Two main classes of algorithms are covered: (1) analytic filtered back-projection algorithms, and (2) a group of model-based or algebraic algorithms. For the former class, a basic algorithm has been implemented, which does not take attenuation in the materials of the fuel assemblies into account and which assumes an idealized imaging geometry. In addition, a novel methodology has been presented for introducing a first-order correction to the obtained images for these deficits; in particular, the effects of attenuation are taken into account by modelling the response for an object with a homogeneous mix of fuel materials in the image area. Neither the basic algorithm nor the correction method requires prior knowledge of the fuel geometry, but they result in images of the assembly's internal activity distribution. Image analysis is then applied to deduce quantitative information. Two algebraic algorithms are also presented, which model attenuation in the fuel assemblies to different degrees: either assuming a homogeneous mix of materials in the image area without a priori information, or utilizing known information about the assembly geometry and its position in the measuring setup to model the gamma-ray attenuation in detail. Both algorithms model the detection system in detail. The former algorithm returns an image of the cross-section of the object, from which quantitative information is extracted, whereas the latter returns conclusive relative rod-by-rod data. Here, all reconstruction methods are demonstrated on simulated data of a 96-rod fuel assembly in a tomographic measurement setup. The assembly was simulated with the same activity content in all rods for evaluation purposes. Based on the results, it is argued that the choice of algorithm to a large degree depends on the application, and also that a combination of reconstruction methods may be useful. A discussion on alternative analysis
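
    To make the first class concrete, a minimal parallel-beam filtered back-projection (ramp filtering of each projection followed by pixel-driven backprojection, with no attenuation correction) can be sketched in Python; the geometry and nearest-neighbour interpolation are simplifying assumptions, not the study's implementation:

        import numpy as np

        def fbp(sinogram, thetas):
            # sinogram: (n_angles x n_detectors), thetas in radians
            n_ang, n_det = sinogram.shape
            # ramp-filter each projection in the Fourier domain
            ramp = np.abs(np.fft.fftfreq(n_det))
            filtered = np.real(np.fft.ifft(np.fft.fft(sinogram, axis=1) * ramp, axis=1))
            # pixel-driven backprojection with nearest-neighbour interpolation
            centre = n_det // 2
            xs = np.arange(n_det) - centre
            X, Y = np.meshgrid(xs, xs)
            img = np.zeros((n_det, n_det))
            for proj, theta in zip(filtered, thetas):
                t = X * np.cos(theta) + Y * np.sin(theta)   # detector coordinate of each pixel
                idx = np.clip(np.rint(t).astype(int) + centre, 0, n_det - 1)
                img += proj[idx]
            return img * np.pi / n_ang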

  2. The SRT reconstruction algorithm for semiquantification in PET imaging

    Energy Technology Data Exchange (ETDEWEB)

    Kastis, George A., E-mail: gkastis@academyofathens.gr [Research Center of Mathematics, Academy of Athens, Athens 11527 (Greece); Gaitanis, Anastasios [Biomedical Research Foundation of the Academy of Athens (BRFAA), Athens 11527 (Greece); Samartzis, Alexandros P. [Nuclear Medicine Department, Evangelismos General Hospital, Athens 10676 (Greece); Fokas, Athanasios S. [Department of Applied Mathematics and Theoretical Physics, University of Cambridge, Cambridge CB30WA, United Kingdom and Research Center of Mathematics, Academy of Athens, Athens 11527 (Greece)

    2015-10-15

    Purpose: The spline reconstruction technique (SRT) is a new, fast algorithm based on a novel numerical implementation of an analytic representation of the inverse Radon transform. The mathematical details of this algorithm and comparisons with filtered backprojection were presented earlier in the literature. In this study, the authors present a comparison between SRT and the ordered-subsets expectation–maximization (OSEM) algorithm for determining contrast and semiquantitative indices of 18F-FDG uptake. Methods: The authors implemented SRT in the software for tomographic image reconstruction (STIR) open-source platform and evaluated this technique using simulated and real sinograms obtained from the GE Discovery ST positron emission tomography/computed tomography scanner. All simulations and reconstructions were performed in STIR. For OSEM, the authors used the clinical protocol of their scanner, namely, 21 subsets and two iterations. The authors also examined images at one, four, six, and ten iterations. For the simulation studies, the authors analyzed an image-quality phantom with cold and hot lesions. Two different versions of the phantom were employed at two different hot-sphere lesion-to-background ratios (LBRs), namely, 2:1 and 4:1. For each noiseless sinogram, 20 Poisson realizations were created at five different noise levels. In addition to making visual comparisons of the reconstructed images, the authors determined contrast and bias as a function of the background image roughness (IR). For the real-data studies, sinograms of an image-quality phantom simulating the human torso were employed. The authors determined contrast and LBR as a function of the background IR. Finally, the authors present plots of contrast as a function of IR after smoothing each reconstructed image with Gaussian filters of six different sizes. Statistical significance was determined by employing the Wilcoxon rank-sum test. Results: In both simulated and real studies, SRT

  3. The SRT reconstruction algorithm for semiquantification in PET imaging

    International Nuclear Information System (INIS)

    Kastis, George A.; Gaitanis, Anastasios; Samartzis, Alexandros P.; Fokas, Athanasios S.

    2015-01-01

    Purpose: The spline reconstruction technique (SRT) is a new, fast algorithm based on a novel numerical implementation of an analytic representation of the inverse Radon transform. The mathematical details of this algorithm and comparisons with filtered backprojection were presented earlier in the literature. In this study, the authors present a comparison between SRT and the ordered-subsets expectation–maximization (OSEM) algorithm for determining contrast and semiquantitative indices of 18F-FDG uptake. Methods: The authors implemented SRT in the software for tomographic image reconstruction (STIR) open-source platform and evaluated this technique using simulated and real sinograms obtained from the GE Discovery ST positron emission tomography/computed tomography scanner. All simulations and reconstructions were performed in STIR. For OSEM, the authors used the clinical protocol of their scanner, namely, 21 subsets and two iterations. The authors also examined images at one, four, six, and ten iterations. For the simulation studies, the authors analyzed an image-quality phantom with cold and hot lesions. Two different versions of the phantom were employed at two different hot-sphere lesion-to-background ratios (LBRs), namely, 2:1 and 4:1. For each noiseless sinogram, 20 Poisson realizations were created at five different noise levels. In addition to making visual comparisons of the reconstructed images, the authors determined contrast and bias as a function of the background image roughness (IR). For the real-data studies, sinograms of an image-quality phantom simulating the human torso were employed. The authors determined contrast and LBR as a function of the background IR. Finally, the authors present plots of contrast as a function of IR after smoothing each reconstructed image with Gaussian filters of six different sizes. Statistical significance was determined by employing the Wilcoxon rank-sum test. Results: In both simulated and real studies, SRT

  4. MO-FG-204-03: Using Edge-Preserving Algorithm for Significantly Improved Image-Domain Material Decomposition in Dual Energy CT

    International Nuclear Information System (INIS)

    Zhao, W; Niu, T; Xing, L; Xiong, G; Elmore, K; Min, J; Zhu, J; Wang, L

    2015-01-01

    Purpose: To significantly improve dual energy CT (DECT) imaging by establishing a new theoretical framework of image-domain material decomposition that incorporates edge-preserving techniques. Methods: The proposed algorithm, HYPR-NLM, combines the edge-preserving non-local means filter (NLM) with the HYPR-LR (Local HighlY constrained backPRojection Reconstruction) framework. Image denoising using the HYPR-LR framework depends on the noise level of the composite image, which is the average of the different energy images; for DECT, the composite image is the average of the high- and low-energy images. To further reduce noise, one may want to increase the window size of the HYPR-LR filter, leading to resolution degradation. By incorporating NLM filtering into the HYPR-LR framework, HYPR-NLM reduces the boosted material-decomposition noise using energy information redundancies as well as the non-local means. We demonstrate the noise reduction and resolution preservation of the algorithm with both an iodine concentration numerical phantom and clinical patient data, by comparing the HYPR-NLM algorithm to direct matrix inversion, HYPR-LR and iterative image-domain material decomposition (Iter-DECT). Results: The results show that the iterative material decomposition method reduces noise to the lowest level and provides improved DECT images. HYPR-NLM significantly reduces noise while preserving the accuracy of quantitative measurement and resolution. For the iodine concentration numerical phantom, the averaged noise levels are about 2.0, 0.7, 0.2 and 0.4 for direct inversion, HYPR-LR, Iter-DECT and HYPR-NLM, respectively. For the patient data, the noise levels of the water images are about 0.36, 0.16, 0.12 and 0.13 for direct inversion, HYPR-LR, Iter-DECT and HYPR-NLM, respectively. Difference images of both HYPR-LR and Iter-DECT show edge effects, while no significant edge effect is shown for HYPR-NLM, suggesting spatial resolution is well preserved by HYPR-NLM. Conclusion: HYPR

  5. Improved Kalman Filter-Based Speech Enhancement with Perceptual Post-Filtering

    Institute of Scientific and Technical Information of China (English)

    WEIJianqiang; DULimin; YANZhaoli; ZENGHui

    2004-01-01

    In this paper, a Kalman filter-based speech enhancement algorithm with some improvements over previous work is presented. A new technique based on spectral subtraction is used for separating speech and noise characteristics from noisy speech and for computing the speech and noise autoregressive (AR) parameters. In order to obtain Kalman filter output with high audible quality, a perceptual post-filter is placed at the output of the Kalman filter to smooth the enhanced speech spectra. Extensive experiments indicate that this newly proposed method works well.

  6. GPU Accelerated Vector Median Filter

    Science.gov (United States)

    Aras, Rifat; Shen, Yuzhong

    2011-01-01

    Noise reduction is an important step for most image processing tasks. For three-channel color images, a widely used technique is the vector median filter, in which the color values of pixels are treated as 3-component vectors. Vector median filters are computationally expensive: for a window size of n x n, each of the n² vectors has to be compared with the other n² - 1 vectors in terms of distance. General-purpose computation on graphics processing units (GPUs) is the paradigm of utilizing high-performance many-core GPU architectures for computation tasks that are normally handled by CPUs. In this work, NVIDIA's Compute Unified Device Architecture (CUDA) paradigm is used to accelerate vector median filtering, which, to the best of our knowledge, has never been done before. The performance of the GPU-accelerated vector median filter is compared to that of the CPU and MPI-based versions for different image and window sizes. Initial findings of the study showed a 100x performance improvement of the vector median filter implementation on GPUs over CPU implementations, and further speed-up is expected after more extensive optimization of the GPU algorithm.
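
    The kernel of the method is easy to state: the vector median of a window is the pixel whose summed distance to all other pixels in the window is smallest. A brute-force Python sketch of that inner step (our illustration, not the CUDA code) is:

        import numpy as np

        def vector_median(window):
            # window: (m, 3) array of colour vectors from one n x n neighbourhood
            d = np.linalg.norm(window[:, None, :] - window[None, :, :], axis=2)
            return window[np.argmin(d.sum(axis=1))]   # pixel closest to all others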

  7. Miniaturized dielectric waveguide filters

    OpenAIRE

    Sandhu, MY; Hunter, IC

    2016-01-01

    Design techniques for a new class of integrated monolithic high-permittivity ceramic waveguide filters are presented. These filters enable a size reduction of 50% compared to air-filled transverse electromagnetic filters with the same unloaded Q-factor. Designs for Chebyshev and asymmetric generalised Chebyshev filter and a diplexer are presented with experimental results for an 1800 MHz Chebyshev filter and a 1700 MHz generalised Chebyshev filter showing excellent agreement with theory.

  8. A parallel implementation of 3-d CT image reconstruction on a hypercube multiprocessor

    International Nuclear Information System (INIS)

    Chen, C.M.; Lee, S.Y.; Cho, Z.H.

    1990-01-01

    In this paper, the authors describe how image reconstruction in computerized tomography (CT) can be parallelized on a message-passing multiprocessor. In particular, the results obtained from a parallel implementation of 3-D CT image reconstruction for parallel beam geometries on the Intel hypercube, iPSC/2, are presented. A two-stage pipelining approach is employed for filtering (convolution) and backprojection. The conventional sequential convolution algorithm is modified so that the symmetry of the filter kernel is fully utilized for parallelization. In the backprojection stage, the 3-D incremental algorithm, the authors' recently developed backprojection scheme shown to be faster than the conventional algorithm, is parallelized

  9. Optimization-based particle filter for state and parameter estimation

    Institute of Scientific and Technical Information of China (English)

    Li Fu; Qi Fei; Shi Guangming; Zhang Li

    2009-01-01

    In recent years, the theory of the particle filter has been developed and widely used for state and parameter estimation in nonlinear/non-Gaussian systems. Choosing a good importance density is a critical issue in particle filter design. In order to improve the approximation of the posterior distribution, this paper provides an optimization-based algorithm (the steepest descent method) to generate the proposal distribution and then sample particles from that distribution. The algorithm is applied to a 1-D case, and the simulation results show that the proposed particle filter performs better than the extended Kalman filter (EKF), the standard particle filter (PF), the extended Kalman particle filter (PF-EKF) and the unscented particle filter (UPF), both in efficiency and in estimation precision.
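
    For context, the baseline that such proposal refinements improve upon is the bootstrap particle filter, where particles are proposed from the prior dynamics and weighted by the measurement likelihood. A scalar-state Python sketch (a generic illustration with assumed Gaussian noise models, not the paper's optimized proposal) is:

        import numpy as np

        def bootstrap_pf(y, f, h, q_std, r_std, n_particles=500):
            # y: measurements; f, h: vectorized dynamics and measurement functions
            particles = np.random.randn(n_particles)
            estimates = []
            for yk in y:
                # propose from the prior dynamics (the step the paper refines)
                particles = f(particles) + q_std * np.random.randn(n_particles)
                # weight by the Gaussian measurement likelihood
                w = np.exp(-0.5 * ((yk - h(particles)) / r_std) ** 2) + 1e-300
                w /= w.sum()
                estimates.append(np.sum(w * particles))
                # multinomial resampling to avoid weight degeneracy
                particles = particles[np.random.choice(n_particles, n_particles, p=w)]
            return np.array(estimates)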

  10. Particle Filtering Applied to Musical Tempo Tracking

    Directory of Open Access Journals (Sweden)

    Macleod Malcolm D

    2004-01-01

    Full Text Available This paper explores the use of particle filters for beat tracking in musical audio examples. The aim is to estimate the time-varying tempo process and to find the time locations of beats, as defined by human perception. Two alternative algorithms are presented, one which performs Rao-Blackwellisation to produce an almost deterministic formulation while the second is a formulation which models tempo as a Brownian motion process. The algorithms have been tested on a large and varied database of examples and results are comparable with the current state of the art. The deterministic algorithm gives the better performance of the two algorithms.

  11. Stochastic global optimization as a filtering problem

    International Nuclear Information System (INIS)

    Stinis, Panos

    2012-01-01

    We present a reformulation of stochastic global optimization as a filtering problem. The motivation behind this reformulation comes from the fact that for many optimization problems we cannot evaluate exactly the objective function to be optimized. Similarly, we may not be able to evaluate exactly the functions involved in iterative optimization algorithms. For example, we may only have access to noisy measurements of the functions or statistical estimates provided through Monte Carlo sampling. This makes iterative optimization algorithms behave like stochastic maps. Naive global optimization amounts to evolving a collection of realizations of this stochastic map and picking the realization with the best properties. This motivates the use of filtering techniques to allow focusing on realizations that are more promising than others. In particular, we present a filtering reformulation of global optimization in terms of a special case of sequential importance sampling methods called particle filters. The increasing popularity of particle filters is based on the simplicity of their implementation and their flexibility. We utilize the flexibility of particle filters to construct a stochastic global optimization algorithm which can converge to the optimal solution appreciably faster than naive global optimization. Several examples of parametric exponential density estimation are provided to demonstrate the efficiency of the approach.

  12. RAPID TRANSFER ALIGNMENT USING FEDERATED KALMAN FILTER

    Institute of Scientific and Technical Information of China (English)

    GU Dong-qing; QIN Yong-yuan; PENG Rong; LI Xin

    2005-01-01

    The dimension of the centralized Kalman filter (CKF) for rapid transfer alignment (TA) is as high as 21 if the aircraft wing flexure motion is considered. The 21-dimensional CKF imposes a heavy calculation burden on the computer and makes it difficult to meet the high filter updating rate desired for rapid TA. A federated Kalman filter (FKF) for rapid TA is proposed to solve this dilemma. The structure and the algorithm of the FKF, which can perform parallel computation and has a smaller calculation burden, are designed. The wing flexure motion is modeled, and then the 12-order velocity-matching local filter and the 15-order attitude-matching local filter are devised. Simulation results show that the proposed FKF for rapid TA has almost the same performance as the CKF; thus the calculation burden of the proposed FKF for rapid TA is markedly decreased.

  13. Particle filters for random set models

    CERN Document Server

    Ristic, Branko

    2013-01-01

    “Particle Filters for Random Set Models” presents coverage of state estimation of stochastic dynamic systems from noisy measurements, specifically sequential Bayesian estimation and nonlinear or stochastic filtering. The class of solutions presented in this book is based  on the Monte Carlo statistical method. The resulting  algorithms, known as particle filters, in the last decade have become one of the essential tools for stochastic filtering, with applications ranging from  navigation and autonomous vehicles to bio-informatics and finance. While particle filters have been around for more than a decade, the recent theoretical developments of sequential Bayesian estimation in the framework of random set theory have provided new opportunities which are not widely known and are covered in this book. These recent developments have dramatically widened the scope of applications, from single to multiple appearing/disappearing objects, from precise to imprecise measurements and measurement models. This book...

  14. Nonlinear filtering for LIDAR signal processing

    Directory of Open Access Journals (Sweden)

    D. G. Lainiotis

    1996-01-01

    Full Text Available LIDAR (Laser Integrated Radar) is an engineering problem of great practical importance in the environmental monitoring sciences. Signal processing for LIDAR applications involves highly nonlinear models and consequently nonlinear filtering. Optimal nonlinear filters, however, are practically unrealizable. In this paper, Lainiotis's multi-model partitioning methodology and the related approximate but effective nonlinear filtering algorithms are reviewed and applied to LIDAR signal processing. Extensive simulation and performance evaluation of the multi-model partitioning approach and its application to LIDAR signal processing show that the nonlinear partitioning methods are very effective and significantly superior to the nonlinear extended Kalman filter (EKF), which has been the standard nonlinear filter in past engineering applications.

  15. Partial update least-square adaptive filtering

    CERN Document Server

    Xie, Bei

    2014-01-01

    Adaptive filters play an important role in fields related to digital signal processing and communication, such as system identification, noise cancellation, channel equalization, and beamforming. In practical applications, the computational complexity of an adaptive filter is an important consideration. The Least Mean Square (LMS) algorithm is widely used because of its low computational complexity (O(N)) and simplicity in implementation. The least squares algorithms, such as Recursive Least Squares (RLS), Conjugate Gradient (CG), and Euclidean Direction Search (EDS), can converge faster...

  16. Sparse adaptive filters for echo cancellation

    CERN Document Server

    Paleologu, Constantin

    2011-01-01

    Adaptive filters with a large number of coefficients are usually involved in both network and acoustic echo cancellation. Consequently, it is important to improve the convergence rate and tracking of the conventional algorithms used for these applications. This can be achieved by exploiting the sparseness character of the echo paths. Identification of sparse impulse responses was addressed mainly in the last decade with the development of the so-called "proportionate"-type algorithms. The goal of this book is to present the most important sparse adaptive filters developed for echo cancellation.

  17. Kalman filter-based gap conductance modeling

    International Nuclear Information System (INIS)

    Tylee, J.L.

    1983-01-01

    Geometric and thermal property uncertainties contribute greatly to the problem of determining conductance within the fuel-clad gas gap of a nuclear fuel pin. Accurate conductance values are needed for power plant licensing transient analysis and for test analyses at research facilities. Recent work by Meek, Doerner, and Adams has shown that use of Kalman filters to estimate gap conductance is a promising approach. A Kalman filter is simply a mathematical algorithm that employs available system measurements and assumed dynamic models to generate optimal system state vector estimates. This summary addresses another Kalman filter approach to gap conductance estimation and subsequent identification of an empirical conductance model
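
    The machinery referred to here fits in a few lines: a predict/update cycle of the discrete linear Kalman filter in Python. This is a generic textbook sketch with assumed model matrices, not the gap-conductance estimator itself:

        import numpy as np

        def kalman_step(x, P, z, F, H, Q, R):
            # predict with the assumed dynamic model
            x = F @ x
            P = F @ P @ F.T + Q
            # update with the new measurement z
            S = H @ P @ H.T + R                    # innovation covariance
            K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
            x = x + K @ (z - H @ x)
            P = (np.eye(len(x)) - K @ H) @ P
            return x, P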

  18. Power Line Interference Removal from Electrocardiogram Using a Simplified Lattice Based Adaptive IIR Notch Filter

    National Research Council Canada - National Science Library

    Dhillon, Santpal

    2001-01-01

    ...) notch filter with a simplified adaptation algorithm for removal of the power line frequency from ECG signals. The performance of this filter is better compared to a second-order infinite impulse response (IIR...
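
    For reference, a fixed (non-adaptive) second-order IIR notch of the kind such filters adapt can be sketched in Python; the sampling rate, notch frequency and pole radius below are illustrative assumptions, and the paper's lattice structure additionally adapts the notch frequency on-line:

        import numpy as np

        def notch_coeffs(f0, fs, r=0.95):
            # zeros on the unit circle at +/- f0, poles just inside at radius r
            w0 = 2 * np.pi * f0 / fs
            b = np.array([1.0, -2.0 * np.cos(w0), 1.0])
            a = np.array([1.0, -2.0 * r * np.cos(w0), r * r])
            return b, a

        def iir2(b, a, x):
            # direct-form difference equation for one biquad section
            y = np.zeros(len(x))
            for n in range(len(x)):
                y[n] = b[0] * x[n]
                if n >= 1:
                    y[n] += b[1] * x[n - 1] - a[1] * y[n - 1]
                if n >= 2:
                    y[n] += b[2] * x[n - 2] - a[2] * y[n - 2]
            return y

        # e.g. b, a = notch_coeffs(50.0, 500.0)  # 50 Hz notch at 500 Hz sampling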

  19. A Performance Comparison Between Extended Kalman Filter and Unscented Kalman Filter in Power System Dynamic State Estimation

    DEFF Research Database (Denmark)

    Khazraj, Hesam; Silva, Filipe Miguel Faria da; Bak, Claus Leth

    2016-01-01

    Dynamic State Estimation (DSE) is a critical tool for the analysis, monitoring and planning of a power system. The concept of DSE involves designing state estimation with Extended Kalman Filter (EKF) or Unscented Kalman Filter (UKF) methods, which can be used by wide area monitoring to improve... A non-linear state estimator is developed in MatLab to solve the states by applying the unscented Kalman filter (UKF) and Extended Kalman Filter (EKF) algorithms. Finally, a DSE model is built for a 14-bus power system network to evaluate the proposed algorithms. This article will focus on comparing

  20. Recirculating electric air filter

    Science.gov (United States)

    Bergman, W.

    1985-01-09

    An electric air filter cartridge has a cylindrical inner high voltage electrode, a layer of filter material, and an outer ground electrode formed of a plurality of segments moveably connected together. The outer electrode can be easily opened to remove or insert filter material. Air flows through the two electrodes and the filter material and is exhausted from the center of the inner electrode.

  1. Passive Power Filters

    CERN Document Server

    Künzi, R.

    2015-06-15

    Power converters require passive low-pass filters which are capable of reducing voltage ripples effectively. In contrast to signal filters, the components of power filters must carry large currents or withstand large voltages, respectively. In this paper, three different suitable filter structures for d.c./d.c. power converters with inductive load are introduced. The formulas needed to calculate the filter components are derived step by step and practical examples are given. The behaviour of the three discussed filters is compared by means of the examples. Practical aspects for the realization of power filters are also discussed.
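
    As a taste of the kind of component formula involved, the corner frequency of a single-stage LC low-pass filter follows from the resonance of L and C; a small Python helper (example values are ours) is:

        import math

        def lc_lowpass_cutoff(L, C):
            # corner frequency of a single LC stage: f_c = 1 / (2*pi*sqrt(L*C))
            return 1.0 / (2.0 * math.pi * math.sqrt(L * C))

        # e.g. L = 1 mH and C = 100 uF give f_c of roughly 503 Hz
        print(lc_lowpass_cutoff(1e-3, 100e-6))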

  2. Machine learning of radial basis function neural network based on Kalman filter: Introduction

    Directory of Open Access Journals (Sweden)

    Vuković Najdan L.

    2014-01-01

    Full Text Available This paper analyzes machine learning of radial basis function neural networks based on Kalman filtering. Three algorithms are derived: the linearized Kalman filter, the linearized information filter and the unscented Kalman filter. We emphasize the basic properties of these estimation algorithms, demonstrate how their advantages can be used for the optimization of network parameters, derive the mathematical models and show how they can be applied to model problems in engineering practice.

  3. Filter replacement lifetime prediction

    Science.gov (United States)

    Hamann, Hendrik F.; Klein, Levente I.; Manzer, Dennis G.; Marianno, Fernando J.

    2017-10-25

    Methods and systems for predicting a filter lifetime include building a filter effectiveness history based on contaminant sensor information associated with a filter; determining a rate of filter consumption with a processor based on the filter effectiveness history; and determining a remaining filter lifetime based on the determined rate of filter consumption. Methods and systems for increasing filter economy include measuring contaminants in an internal and an external environment; determining a cost of a corrosion rate increase if unfiltered external air intake is increased for cooling; determining a cost of increased air pressure to filter external air; and if the cost of filtering external air exceeds the cost of the corrosion rate increase, increasing an intake of unfiltered external air.

  4. The Reduced Rank of Ensemble Kalman Filter to Estimate the Temperature of Non Isothermal Continue Stirred Tank Reactor

    OpenAIRE

    Erna Apriliani; Dieky Adzkiya; Arief Baihaqi

    2011-01-01

    The Kalman filter is an algorithm to estimate the state variables of a stochastic dynamical system. The square root ensemble Kalman filter is a modification of the Kalman filter, proposed to keep computational stability and reduce computational time. In this paper we study the efficiency of the reduced-rank ensemble Kalman filter. We apply this algorithm to a non-isothermal continuous stirred tank reactor problem. We decompose the covariance of the ensemble...

  5. Optimization of filter loading

    International Nuclear Information System (INIS)

    Turney, J.H.; Gardiner, D.E. (Sacramento Municipal Utility District, Herald, CA)

    1985-01-01

    The introduction of 10 CFR Part 61 has created potential difficulties in the disposal of spent cartridge filters. When this report was prepared, Rancho Seco had no method of packaging and disposing of class B or C filters. This work examined methods to minimize the total operating cost of cartridge filters while maintaining them below the class A limit. It was found that by encapsulating filters in cement the filter operating costs could be minimized

  6. Scheme of adaptive polarization filtering based on Kalman model

    Institute of Scientific and Technical Information of China (English)

    Song Lizhong; Qi Haiming; Qiao Xiaolin; Meng Xiande

    2006-01-01

    A new adaptive polarization filtering algorithm is presented to suppress angle-cheating interference in active guidance radar. The polarization characteristic of the interference is dynamically tracked with a Kalman estimator in environments that vary with time. The polarization filter parameters are designed according to the polarization characteristic of the interference, and the polarization filtering is performed in the target cell. The system scheme of the adaptive polarization filter is studied, and the tracking performance of the polarization filter and the improvement in angle measurement precision are simulated. The research results demonstrate that this technology can effectively suppress angle-cheating interference in guidance radar and is feasible in engineering.

  7. Adaptive Federal Kalman Filtering for SINS/GPS Integrated System

    Institute of Scientific and Technical Information of China (English)

    杨勇; 缪玲娟

    2003-01-01

    A new adaptive federal Kalman filter for a strapdown inertial navigation system/global positioning system (SINS/GPS) is given. The developed federal Kalman filter is based on the trace operation of the parameter estimation's error covariance matrix and the spectral radius of the update measurement noise variance-covariance matrix for the proper choice of the filter weights and hence the filter gain factors. Theoretical analysis and results from a simulation in which the SINS/GPS was compared to a conventional Kalman filter are presented. The results show that the algorithm of this adaptive federal Kalman filter is simpler than that of the conventional one. Furthermore, it outperforms the conventional Kalman filter when the system experiences measurement malfunctions, owing to its adaptive ability. This filter can be used in vehicle integrated navigation systems.

  8. Sound algorithms

    OpenAIRE

    De Götzen , Amalia; Mion , Luca; Tache , Olivier

    2007-01-01

    We call sound algorithms the categories of algorithms that deal with digital sound signals. Sound algorithms appeared in the very infancy of the computer. Sound algorithms present strong specificities that are the consequence of two dual considerations: the properties of the digital sound signal itself and its uses, and the properties of auditory perception.

  9. Genetic algorithms

    Science.gov (United States)

    Wang, Lui; Bayer, Steven E.

    1991-01-01

    Genetic algorithms are mathematical, highly parallel, adaptive search procedures (i.e., problem solving methods) based loosely on the processes of natural genetics and Darwinian survival of the fittest. Basic genetic algorithms concepts are introduced, genetic algorithm applications are introduced, and results are presented from a project to develop a software tool that will enable the widespread use of genetic algorithm technology.

  10. Fast bilateral filtering of CT-images

    Energy Technology Data Exchange (ETDEWEB)

    Steckmann, Sven; Baer, Matthias; Kachelriess, Marc [Erlangen-Nuernberg Univ., Erlangen (Germany). Inst. of Medical Physics (IMP)]

    2011-07-01

    The bilateral filter is able to reach a lower noise level while retaining the edges in images. The downside of the bilateral filter is the high order of the problem itself: for a volume of size N in d dimensions and a filter window of size r, the problem is of order N^d · r^d. In the literature there are some proposals for speeding this up by reducing the order through approximating a component of the filter. This leads to inaccurate results, which often imply unacceptable artifacts for medical imaging. A better way for medical imaging is to speed up the filter itself while leaving its basic structure intact. This is the route our implementation takes. We solve the problem of calculating the function e^(-x) efficiently on modern architectures, and the problem of vectorizing the filtering process. As a result we implemented a filter which is 2.5 times faster than the highly optimized basic approach. Comparing the basic analytical approach with the final algorithm, the differences in quality of the computed images are negligible to the human eye. We are able to process a volume of 512³ voxels with a filter of 25 x 25 x 1 in 21 s on a modern Intel Xeon platform with two X5590 processors running at 3.33 GHz. (orig.)
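
    The O(N^d · r^d) structure the authors speed up is easiest to see in the brute-force form of the filter; a small Python sketch of the basic (unoptimized) 2-D bilateral filter, with assumed parameter values, is:

        import numpy as np

        def bilateral(img, radius=2, sigma_s=2.0, sigma_r=25.0):
            # brute-force version: every pixel visits its whole window
            img = np.asarray(img, dtype=float)
            h, w = img.shape
            ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
            spatial = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))   # domain weights
            pad = np.pad(img, radius, mode='edge')
            out = np.empty_like(img)
            for i in range(h):
                for j in range(w):
                    win = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
                    # range weights from intensity differences, e.g. exp(-x)
                    weights = spatial * np.exp(-(win - img[i, j])**2 / (2 * sigma_r**2))
                    out[i, j] = (weights * win).sum() / weights.sum()
            return out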

  11. The research of radar target tracking observed information linear filter method

    Science.gov (United States)

    Chen, Zheng; Zhao, Xuanzhi; Zhang, Wen

    2018-05-01

    Aiming at the low precision, or even divergence, caused by the nonlinear observation equation in radar target tracking, a new filtering algorithm is proposed in this paper. In this algorithm, local linearization is carried out on the observed distance and angle data separately, and a Kalman filter is then applied to the linearized data. A mapping operation on the filtered data provides the posterior estimate of the target state. Extensive simulation results show that this algorithm solves the above problems effectively and performs better than traditional filtering algorithms for nonlinear dynamic systems.

  12. An Unbiased Unscented Transform Based Kalman Filter for 3D Radar

    Institute of Scientific and Technical Information of China (English)

    WANG Guohong; XIU Jianjuan; HE You

    2004-01-01

    As a derivative-free alternative to the extended Kalman filter (EKF) in the framework of state estimation, the unscented Kalman filter (UKF) has potential applications in nonlinear filtering. Noting that the unscented transform is generally biased when converting radar measurements from spherical into Cartesian coordinates, a new filtering algorithm for 3D radar, called the unbiased unscented Kalman filter (UUKF), is proposed. The new algorithm is validated by Monte Carlo simulation runs. Simulation results show that the UUKF is more effective than the UKF, the EKF and the converted-measurement Kalman filter (CMKF).
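
    To illustrate the conversion the record refers to, here is a hedged 2-D sketch of the plain (biased) unscented transform from (range, azimuth) to Cartesian coordinates; the UUKF's debiasing correction itself is not reproduced, and all names and defaults are illustrative.

        import numpy as np

        def ut_polar_to_cartesian(r, az, var_r, var_az, kappa=1.0):
            # Plain unscented transform: propagate 2n+1 sigma points through
            # the nonlinear polar-to-Cartesian map and recombine. This is the
            # step that is generally biased.
            n, mean, P = 2, np.array([r, az]), np.diag([var_r, var_az])
            S = np.linalg.cholesky((n + kappa) * P)
            sigma = [mean] + [mean + S[:, i] for i in range(n)] \
                           + [mean - S[:, i] for i in range(n)]
            w = np.array([kappa / (n + kappa)] + [0.5 / (n + kappa)] * (2 * n))
            pts = np.array([[s[0] * np.cos(s[1]), s[0] * np.sin(s[1])] for s in sigma])
            m = w @ pts
            cov = sum(wi * np.outer(p - m, p - m) for wi, p in zip(w, pts))
            return m, cov

        # e.g. m, C = ut_polar_to_cartesian(1000.0, np.pi / 4, 25.0, np.radians(0.5) ** 2)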

  13. Intelligent medical information filtering.

    Science.gov (United States)

    Quintana, Y

    1998-01-01

    This paper describes an intelligent information filtering system that helps users stay notified of new and relevant medical information. Among the major problems users face are the large volume of medical information generated each day and the need to filter and retrieve the relevant portion. The Internet has dramatically increased the amount of electronically accessible medical information and reduced the cost and time needed to publish. The opportunity the Internet offers the medical profession and consumers is access to more information for decision making, which could potentially lead to better medical decisions and outcomes. However, without the assistance of professional medical librarians, retrieving new and relevant information from databases and the Internet remains a challenge. Many physicians do not have access to the services of a medical librarian, and most indicate on surveys that they prefer not to retrieve the literature themselves or visit libraries, because of the lack of recent materials, poor organisation and indexing of materials, lack of appropriate and available material, and lack of time. The information filtering system described in this paper records the online web browsing behaviour of each user and creates a user profile of the index terms found on the web pages visited. A relevance-ranking algorithm then matches the user profiles to the index terms of new health care web pages added each day, and the system creates customised summaries of new information for each user. A user can then connect to the web site to read the new information. Relevance feedback buttons on each page ask the user to rate the usefulness of the page to their immediate information needs. Errors in relevance ranking are reduced in this system by representing both the user profile and the medical information in the same representation language using a controlled vocabulary. This system also updates the user profiles
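
    The matching step lends itself to a small sketch. Below is a hedged illustration of profile-to-page relevance ranking using cosine similarity over controlled-vocabulary index terms; the function names and the similarity measure are assumptions for illustration, not the paper's published algorithm.

        from collections import Counter
        from math import sqrt

        def cosine(profile, page_terms):
            # profile and page_terms: lists of controlled-vocabulary index terms
            a, b = Counter(profile), Counter(page_terms)
            dot = sum(a[t] * b[t] for t in a.keys() & b.keys())
            norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
            return dot / norm if norm else 0.0

        def rank_new_pages(profile, pages):
            # pages: {url: [index terms]}; returns urls most relevant first
            return sorted(pages, key=lambda u: cosine(profile, pages[u]), reverse=True)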

  14. Impulsive noise removal from color video with morphological filtering

    Science.gov (United States)

    Ruchay, Alexey; Kober, Vitaly

    2017-09-01

    This paper deals with impulse noise removal from color video. The proposed algorithm employs switching filtering for denoising: corrupted pixels are detected by means of a novel morphological filtering and then replaced on the basis of estimates from uncorrupted pixels in the previous scenes. With the help of computer simulation, we show that the proposed algorithm removes impulse noise from color video well. Its performance is compared, in terms of image restoration metrics, with that of common successful algorithms.
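
    A hedged sketch of the switching idea using generic morphological tools: the detector below is a standard opening/closing envelope test, not the paper's novel detector, and it works frame by frame on a single channel rather than using previous scenes.

        import numpy as np
        from scipy.ndimage import grey_opening, grey_closing, median_filter

        def switching_denoise(frame, thresh=30.0):
            # Opening flattens bright (salt) impulses and closing flattens dark
            # (pepper) ones, so pixels far outside that envelope are flagged
            # and only those are replaced by a local median (switching filter).
            f = frame.astype(float)
            low = grey_opening(f, size=3)
            high = grey_closing(f, size=3)
            impulses = (f - low > thresh) | (high - f > thresh)
            med = median_filter(f, size=3)
            return np.where(impulses, med, f)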

  15. Magnetic resonance image enhancement using V-filter

    International Nuclear Information System (INIS)

    Yamamoto, H.; Sugita, K.; Kanzaki, N.; Johja, I.; Hiraki, Y.

    1990-01-01

    The purpose of this study is to present a boundary-enhancement algorithm for magnetic resonance images using a V-filter. The boundary of the brain tumor was precisely extracted by region segmentation techniques

  16. Model for optimising the execution of anti-spam filters

    Directory of Open Access Journals (Sweden)

    David Ruano-Ordás

    2016-12-01

    In recent years, the combination of several filtering techniques for building anti-spam systems has gained enormous popularity. Although the accuracy achieved by these models has increased considerably, their use has raised new challenges, such as the need to reduce the excessive use of computational resources, to increase filtering speed and to adjust the weights used for combining several filtering techniques. To achieve these goals we refined several aspects, including: (i) the design and development of small technical improvements to increase the overall performance of the filter, (ii) the application of genetic algorithms to increase filtering accuracy and (iii) the use of scheduling algorithms to improve filtering throughput.

  17. Laboratory for filter testing

    Energy Technology Data Exchange (ETDEWEB)

    Paluch, W.

    1987-07-01

    Filters used for mine draining in brown coal surface mines are tested by the Mine Draining Department of Poltegor. Laboratory tests of new types of filters developed by Poltegor are analyzed. Two types of tests are used: tests of scale filter models and tests of experimental units of new filters. Design and operation of the test stands used for testing mechanical properties and hydraulic properties of filters for coal mines are described: dimensions, pressure fluctuations, hydraulic equipment. Examples of testing large-diameter filters for brown coal mines are discussed.

  18. Orchard navigation using derivative free Kalman filtering

    DEFF Research Database (Denmark)

    Hansen, Søren; Bayramoglu, Enis; Andersen, Jens Christian

    2011-01-01

    This paper describes the use of derivative free filters for mobile robot localization and navigation in an orchard. The localization algorithm fuses odometry and gyro measurements with line features representing the surrounding fruit trees of the orchard. The line features are created on the basis of 2...

  19. Back-Projection Imaging of extended, high-frequency pre-, co-, and post-eruptive seismicity at El Jefe Geyser, El Tatio Geyser Field, Chile

    Science.gov (United States)

    Kelly, C. L.; Lawrence, J. F.; Beroza, G. C.

    2017-12-01

    El Tatio Geyser Field in northern Chile is the third largest geyser field in the world. It comprises 3 basins that span 10 km x 10 km at an average elevation of 4250 m and contains at least 80 active geysers. Heavy tourist traffic and previous geothermal exploration make the field relatively non-pristine and ideal for performing minimally invasive geophysical experiments. We deployed a dense array of 51 L-28 3-component geophones (1-10 m spacing, corner frequency 4.5 Hz, 1000 Hz sample rate) and 6 Trillium 120 broadband seismometers (2-20 m spacing, long period corner 120 s, 500 Hz sample rate) in a 50 m x 50 m grid in the central Upper Geyser Basin (the largest basin in area at 5 km x 5 km) during October 2012 as part of a collaborative study of hydrothermal systems between Stanford University; U.C. Berkeley; U. of Chile, Santiago; U. of Tokyo; and the USGS. The seismic array was designed to target El Jefe Geyser (EJG), a columnar geyser (eruption height 1-1.5 m) with a consistent periodic eruption cycle of 132 +/- 3 s. Seismicity at EJG was recorded continuously for 9 days, during which 6000 total eruptions occurred. Excluding periods of high anthropogenic noise (i.e. tourist visits, field work), the array recorded 2000 eruptions that we use to create 4D time-lapse images of the evolution of seismic source locations before, during and after EJG eruptions. We use a new back-projection processing technique to locate geyser signals, which tend to be harmonic and diffuse in nature, during characteristic phases of the EJG eruption cycle. We obtain Vp and Vs from ambient-field tomography and estimates of P and S propagation from a hammer source recorded by the array. We use these velocities to back-project and correlate seismic signals from all available receiver pairs to all potential source locations in a subsurface model assuming straight-line raypaths. We analyze results for individual and concurrent geyser sources throughout an entire EJG eruption cycle
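
    Since the record's central tool is back-projection of continuous signals, a compact delay-and-stack sketch may help. This hedged numpy example stacks traces along straight-ray traveltimes to a grid of candidate sources, a simpler cousin of the receiver-pair correlation approach described above; all names and parameters are illustrative.

        import numpy as np

        def backproject(traces, rec_xy, grid_xy, v, dt):
            # traces: (n_rec, n_samp) array of band-passed seismograms
            # rec_xy: (n_rec, 2) receiver positions; grid_xy: trial sources
            # Delay each trace by the straight-ray traveltime to a trial source
            # and stack: true sources stack coherently and show high power.
            power = np.zeros(len(grid_xy))
            for k, src in enumerate(np.asarray(grid_xy)):
                t = np.linalg.norm(rec_xy - src, axis=1) / v     # traveltimes (s)
                shifts = np.round(t / dt).astype(int)
                n = traces.shape[1] - shifts.max()               # usable window
                stack = sum(tr[s:s + n] for tr, s in zip(traces, shifts))
                power[k] = np.mean(stack ** 2)
            return power                     # back-projected power over grid_xy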

  20. Sensory Pollution from Bag Filters, Carbon Filters and Combinations

    DEFF Research Database (Denmark)

    Bekö, Gabriel; Clausen, Geo; Weschler, Charles J.

    2008-01-01

    by an upstream pre-filter (changed monthly), an EU7 filter protected by an upstream activated carbon (AC) filter, and EU7 filters with an AC filter either downstream or both upstream and downstream. In addition, two types of stand-alone combination filters were evaluated: a bag-type fiberglass filter...... that contained AC and a synthetic fiber cartridge filter that contained AC. Air that had passed through used filters was most acceptable for those sets in which an AC filter was used downstream of the particle filter. Comparable air quality was achieved with the stand-alone bag filter that contained AC...

  1. HEPA Filter Vulnerability Assessment

    International Nuclear Information System (INIS)

    GUSTAVSON, R.D.

    2000-01-01

    This assessment of High Efficiency Particulate Air (HEPA) filter vulnerability was requested by the USDOE Office of River Protection (ORP) to satisfy a DOE-HQ directive to evaluate the effect of filter degradation on the facility authorization basis assumptions. Within the scope of this assessment are ventilation system HEPA filters that are classified as Safety-Class (SC) or Safety-Significant (SS) components that perform an accident mitigation function. The objective of the assessment is to verify whether HEPA filters that perform a safety function during an accident are likely to perform as intended to limit release of hazardous or radioactive materials, considering factors that could degrade the filters. Filter degradation factors considered include aging, wetting of filters, exposure to high temperature, exposure to corrosive or reactive chemicals, and exposure to radiation. Screening and evaluation criteria were developed by a site-wide group of HVAC engineers and HEPA filter experts from published empirical data. For River Protection Project (RPP) filters, the only degradation factor that exceeded the screening threshold was for filter aging. Subsequent evaluation of the effect of filter aging on the filter strength was conducted, and the results were compared with required performance to meet the conditions assumed in the RPP Authorization Basis (AB). It was found that the reduction in filter strength due to aging does not affect the filter performance requirements as specified in the AB. A portion of the HEPA filter vulnerability assessment is being conducted by the ORP and is not part of the scope of this study. The ORP is conducting an assessment of the existing policies and programs relating to maintenance, testing, and change-out of HEPA filters used for SC/SS service. This document presents the results of a HEPA filter vulnerability assessment conducted for the River protection project as requested by the DOE Office of River Protection

  2. Digital, realizable Wiener filtering in two-dimensions

    International Nuclear Information System (INIS)

    Ekstrom, M.P.

    1979-01-01

    The extension of Wiener's classical mean-square filtering theory to the estimation of two-dimensional (2-D), discrete random fields is discussed. In analogy with the 1-D case, the optimal realizable filter is derived by solution of a 2-D discrete Wiener–Hopf equation using a spectral factorization procedure. Computational algorithms for performing the required calculations are discussed.
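
    As a hedged illustration of the frequency-domain side of the problem, the sketch below implements the simpler non-causal 2-D Wiener estimator H = Pss / (Pss + Pnn); the realizable filter the record derives additionally requires the 2-D spectral factorization step, which is not shown.

        import numpy as np

        def wiener_2d(noisy, signal_psd, noise_psd):
            # Non-causal frequency-domain Wiener estimator. signal_psd and
            # noise_psd are power spectra sampled on the same 2-D DFT grid
            # as fft2(noisy).
            H = signal_psd / (signal_psd + noise_psd)
            return np.real(np.fft.ifft2(np.fft.fft2(noisy) * H))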

  3. Algorithmic cryptanalysis

    CERN Document Server

    Joux, Antoine

    2009-01-01

    Illustrating the power of algorithms, Algorithmic Cryptanalysis describes algorithmic methods with cryptographically relevant examples. Focusing on both private- and public-key cryptographic algorithms, it presents each algorithm either as a textual description, in pseudo-code, or in a C code program. Divided into three parts, the book begins with a short introduction to cryptography and a background chapter on elementary number theory and algebra. It then moves on to algorithms, with each chapter in this section dedicated to a single topic and often illustrated with simple cryptographic applications

  4. Bias aware Kalman filters

    DEFF Research Database (Denmark)

    Drecourt, J.-P.; Madsen, H.; Rosbjerg, Dan

    2006-01-01

    This paper reviews two different approaches that have been proposed to tackle the problem of model bias with the Kalman filter: the use of a colored noise model and the implementation of a separate bias filter. Both filters are implemented with and without feedback of the bias into the model state. The colored noise filter formulation is extended to correct both time-correlated and uncorrelated model error components. A more stable version of the separate filter without feedback is presented. The filters are implemented in an ensemble framework using Latin hypercube sampling. The techniques are illustrated on a simple one-dimensional groundwater problem. The results show that the presented filters outperform the standard Kalman filter and that the implementations with bias feedback work in more general conditions than the implementations without feedback.
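
    A minimal sketch of one bias-aware formulation, assuming a scalar signal with an additive random-walk bias folded into an augmented state; this generic augmented-state construction is for illustration only, not the exact colored-noise or separate-bias filters of the paper.

        import numpy as np

        def augmented_kalman_step(x, P, z, a, q, r, q_bias):
            # State is [signal, bias]; the bias is modelled as a random walk
            # that the measurement sees on top of the signal, one common way
            # to make a Kalman filter bias-aware.
            F = np.array([[a, 0.0], [0.0, 1.0]])
            H = np.array([[1.0, 1.0]])            # measurement: signal + bias
            Q = np.diag([q, q_bias])
            x = F @ x
            P = F @ P @ F.T + Q
            S = H @ P @ H.T + r                   # innovation covariance
            K = P @ H.T / S                       # Kalman gain
            x = x + (K * (z - H @ x)).ravel()
            P = (np.eye(2) - K @ H) @ P
            return x, P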

  5. Simon-nitinol filter

    International Nuclear Information System (INIS)

    Simon, M.; Kim, D.; Porter, D.H.; Kleshinski, S.

    1989-01-01

    This paper discusses a filter that exploits the thermal shape-memory properties of the nitinol alloy to achieve an optimized filter shape and a fine-bore introducer. Experimental methods and materials are given and results are analyzed

  6. Filtering Photogrammetric Point Clouds Using Standard LIDAR Filters Towards DTM Generation

    Science.gov (United States)

    Zhang, Z.; Gerke, M.; Vosselman, G.; Yang, M. Y.

    2018-05-01

    Digital Terrain Models (DTMs) can be generated from point clouds acquired by laser scanning or photogrammetric dense matching. During the last two decades, much effort has been devoted to developing robust filtering algorithms for airborne laser scanning (ALS) data. With the quality of point clouds from dense image matching (DIM) getting better and better, the research question that arises is whether standard Lidar filters can be used to filter photogrammetric point clouds as well. Experiments are implemented to filter two dense matching point clouds with different noise levels. Results show that the standard Lidar filter is robust to random noise. However, artefacts and blunders in the DIM points often appear due to low contrast or poor texture in the images, and filtering is erroneous in these locations. Filtering DIM points pre-processed by a ranking filter brings a higher Type II error (i.e. non-ground points actually labelled as ground points) but a much lower Type I error (i.e. bare-ground points labelled as non-ground points). Finally, the potential DTM accuracy that can be achieved with DIM points is evaluated. Two DIM point clouds derived with Pix4Dmapper and SURE are compared. On grassland, dense matching generates points above the true terrain surface, which results in incorrectly elevated DTMs. The application of the ranking filter leads to a reduced bias in the DTM height, but a slightly increased noise level.

  7. MST Filterability Tests

    Energy Technology Data Exchange (ETDEWEB)

    Poirier, M. R. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Burket, P. R. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Duignan, M. R. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)

    2015-03-12

    The Savannah River Site (SRS) is currently treating radioactive liquid waste with the Actinide Removal Process (ARP) and the Modular Caustic Side Solvent Extraction Unit (MCU). The low filter flux through the ARP has limited the rate at which radioactive liquid waste can be treated. Recent filter flux has averaged approximately 5 gallons per minute (gpm). Salt Batch 6 has had a lower processing rate and required frequent filter cleaning. Savannah River Remediation (SRR) has a desire to understand the causes of the low filter flux and to increase ARP/MCU throughput. In addition, at the time the testing started, SRR was assessing the impact of replacing the 0.1 micron filter with a 0.5 micron filter. This report describes testing of MST filterability to investigate the impact of filter pore size and MST particle size on filter flux and testing of filter enhancers to attempt to increase filter flux. The authors constructed a laboratory-scale crossflow filter apparatus with two crossflow filters operating in parallel. One filter was a 0.1 micron Mott sintered SS filter and the other was a 0.5 micron Mott sintered SS filter. The authors also constructed a dead-end filtration apparatus to conduct screening tests with potential filter aids and body feeds, referred to as filter enhancers. The original baseline for ARP was 5.6 M sodium salt solution with a free hydroxide concentration of approximately 1.7 M. ARP has been operating with a sodium concentration of approximately 6.4 M and a free hydroxide concentration of approximately 2.5 M. SRNL conducted tests varying the concentration of sodium and free hydroxide to determine whether those changes had a significant effect on filter flux. The feed slurries for the MST filterability tests were composed of simple salts (NaOH, NaNO2, and NaNO3) and MST (0.2 – 4.8 g/L). The feed slurry for the filter enhancer tests contained simulated salt batch 6 supernate, MST, and filter enhancers.

  8. A comparative study of Kalman filter and Linear Matrix Inequality based H infinity filter for SPND delay compensation

    International Nuclear Information System (INIS)

    Tamboli, P.K.; Duttagupta, Siddhartha P.; Roy, Kallol

    2016-01-01

    Highlights:
    • Derivation of a delay compensation algorithm using a recursive Kalman filter.
    • Derivation of a delay compensation algorithm using a Linear Matrix Inequality based H-infinity filter.
    • Process modeling suitable for delay compensation.
    • Dynamic tuning of the delay compensation algorithm for both the Kalman and H-infinity filters.
    • Simulations and trade-off curves for the Kalman and H-infinity filters.

    Abstract: This paper deals with delay compensation of vanadium Self Powered Neutron Detectors (SPNDs) using a Linear Matrix Inequality (LMI) based H-infinity filtering method and compares the results with the Kalman filtering method. The entire study is set within the framework of neutron flux estimation in a large-core Pressurized Heavy Water Reactor (PHWR), in which delayed SPNDs such as vanadium SPNDs are used as in-core flux monitoring detectors. Although vanadium SPNDs provide a better signal-to-noise ratio than other prompt SPNDs, their use is limited to 3-D flux mapping because of the small prompt component in their signal. An appropriate delay compensation technique has always been considered an effective strategy for building a prompt and accurate estimate of the neutron flux. We also present the noise-response trade-off curve for both techniques. Since all delay compensation algorithms suffer from noise amplification, we propose an efficient adaptive parameter-tuning technique that improves the performance of the filtering algorithm against measurement noise.
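
    To make the compensation idea concrete, here is a hedged toy sketch: the detector is modelled as reading a mix of prompt flux and a first-order delayed component, and a Kalman filter inverts that dynamic to estimate the flux. The two-state model, the prompt fraction p and all tuning constants are illustrative assumptions, not the paper's process model.

        import numpy as np

        def spnd_kalman(z, lam=0.0031, p=0.05, dt=1.0, q=1e-4, r=1e-2):
            # z: measured detector current samples. State: [flux, delayed level].
            # The delayed level relaxes toward the flux with rate lam (roughly
            # the V-52 decay constant in 1/s, here illustrative); the detector
            # reads p*flux + (1-p)*delayed, p being the prompt fraction.
            a = np.exp(-lam * dt)
            F = np.array([[1.0, 0.0], [1.0 - a, a]])   # flux: random walk
            H = np.array([[p, 1.0 - p]])
            Q = np.diag([q, 1e-2 * q])
            x, P = np.zeros(2), np.eye(2)
            flux = []
            for zk in z:
                x, P = F @ x, F @ P @ F.T + Q              # predict
                S = (H @ P @ H.T).item() + r               # innovation variance
                K = P @ H.T / S
                x = x + (K * (zk - (H @ x).item())).ravel()
                P = (np.eye(2) - K @ H) @ P                # update
                flux.append(x[0])
            return np.array(flux)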

  9. Optimal design of active EMC filters

    Science.gov (United States)

    Chand, B.; Kut, T.; Dickmann, S.

    2013-07-01

    A recent trend in the automotive industry is adding electrical drive systems to conventional drives. This electrification broadens the range of energy sources and provides great opportunities for environmentally friendly mobility. The electrical powertrain and its components can also cause disturbances which couple into nearby electronic control units and communication cables, so communication can be degraded or even permanently disrupted. To minimize these interferences, different approaches are possible; one is to use EMC filters. However, the diversity of filters is very large and determining an appropriate filter for each application is time-consuming. Therefore, the filter design is determined using a simulation tool that includes an effective optimization algorithm. This method leads to improvements in terms of weight, volume and cost.

  10. Algorithmic mathematics

    CERN Document Server

    Hougardy, Stefan

    2016-01-01

    Algorithms play an increasingly important role in nearly all fields of mathematics. This book allows readers to develop basic mathematical abilities, in particular those concerning the design and analysis of algorithms as well as their implementation. It presents not only fundamental algorithms like the sieve of Eratosthenes, the Euclidean algorithm, sorting algorithms, algorithms on graphs, and Gaussian elimination, but also discusses elementary data structures, basic graph theory, and numerical questions. In addition, it provides an introduction to programming and demonstrates in detail how to implement algorithms in C++. This textbook is suitable for students who are new to the subject and covers a basic mathematical lecture course, complementing traditional courses on analysis and linear algebra. Both authors have given this "Algorithmic Mathematics" course at the University of Bonn several times in recent years.

  11. Filtering in Hybrid Dynamic Bayesian Networks

    Science.gov (United States)

    Andersen, Morten Nonboe; Andersen, Rasmus Orum; Wheeler, Kevin

    2000-01-01

    We implement a 2-time-slice dynamic Bayesian network (2T-DBN) framework and make a 1-D state estimation simulation, an extension of the experiment in (v.d. Merwe et al., 2000), and compare different filtering techniques. Furthermore, we demonstrate experimentally that inference in a complex hybrid DBN is possible by simulating fault detection in a watertank system, an extension of the experiment in (Koller & Lerner, 2000), using a hybrid 2T-DBN. In both experiments, we perform approximate inference using standard filtering techniques, Monte Carlo methods and combinations of these. In the watertank simulation, we also demonstrate the use of 'non-strict' Rao-Blackwellisation. We show that the unscented Kalman filter (UKF) and the UKF in a particle filtering framework (PFUKF) outperform the generic particle filter, the extended Kalman filter (EKF) and the EKF in a particle filtering framework with respect to estimation accuracy (RMSE) and sensitivity to the choice of network structure. In particular, we demonstrate the superiority of the UKF in a PF framework when our beliefs about how the data were generated are wrong. Furthermore, we investigate the influence of data noise in the watertank simulation using the UKF and PFUKF, and show that the algorithms are more sensitive to changes in the measurement noise level than in the process noise level. Theory and implementation are based on (v.d. Merwe et al., 2000).
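
    For reference, the generic particle filter used as a baseline in such comparisons can be sketched in a few lines; this bootstrap variant with multinomial resampling is a textbook construction under assumed scalar dynamics, not the authors' implementation.

        import numpy as np

        def bootstrap_pf(z, f, h, q, r, n=500):
            # Bootstrap particle filter: propagate particles through the state
            # transition f, weight by the Gaussian likelihood of measurement
            # model h, then resample. Returns the posterior-mean trajectory.
            x = np.random.randn(n)                     # initial particle cloud
            est = []
            for zk in z:
                x = f(x) + np.sqrt(q) * np.random.randn(n)
                w = np.exp(-0.5 * (zk - h(x)) ** 2 / r) + 1e-300
                w /= w.sum()
                x = x[np.random.choice(n, n, p=w)]     # multinomial resampling
                est.append(x.mean())
            return np.array(est)

        # e.g. est = bootstrap_pf(z, f=lambda x: 0.9 * x, h=lambda x: x, q=1.0, r=0.5)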

  12. Total algorithms

    NARCIS (Netherlands)

    Tel, G.

    We define the notion of total algorithms for networks of processes. A total algorithm enforces that a "decision" is taken by a subset of the processes, and that participation of all processes is required to reach this decision. Total algorithms are an important building block in the design of

  13. Preconditioning Filter Bank Decomposition Using Structured Normalized Tight Frames

    Directory of Open Access Journals (Sweden)

    Martin Ehler

    2015-01-01

    We turn a given filter bank into a filtering scheme that provides perfect reconstruction, in which synthesis is the adjoint of the analysis part (a so-called unitary filter bank), all filters have equal norm, and the essential features of the original filter bank are preserved. Unitary filter banks providing perfect reconstruction are induced by tight generalized frames, which enable signal decomposition using a set of linear operators. If, in addition, the frame elements have equal norm, then the signal energy is spread through the various filter bank channels in a more uniform fashion, which is often more suitable for further signal processing. We start with a given generalized frame whose elements allow for fast matrix-vector multiplication, as, for instance, convolution operators, and compute a normalized tight frame for which signal analysis and synthesis still preserve those fast algorithmic schemes.

  14. Rotationally invariant correlation filtering

    International Nuclear Information System (INIS)

    Schils, G.F.; Sweeney, D.W.

    1985-01-01

    A method is presented for analyzing and designing optical correlation filters that have tailored rotational invariance properties. The concept of a correlation of an image with a rotation of itself is introduced. A unified theory of rotation-invariant filtering is then formulated. The unified approach describes matched filters (with no rotation invariance) and circular-harmonic filters (with full rotation invariance) as special cases. The continuum of intermediate cases is described in terms of a cyclic convolution operation over angle. The angular filtering approach allows an exact choice for the continuous trade-off between loss of the correlation energy (or specificity regarding the image) and the amount of rotational invariance desired

  15. A statistical rain attenuation prediction model with application to the advanced communication technology satellite project. 3: A stochastic rain fade control algorithm for satellite link power via nonlinear Markov filtering theory

    Science.gov (United States)

    Manning, Robert M.

    1991-01-01

    The dynamic and composite nature of propagation impairments that are incurred on Earth-space communications links at frequencies in and above 30/20 GHz Ka band, i.e., rain attenuation, cloud and/or clear air scintillation, etc., combined with the need to counter such degradations after the small link margins have been exceeded, necessitate the use of dynamic statistical identification and prediction processing of the fading signal in order to optimally estimate and predict the levels of each of the deleterious attenuation components. Such requirements are being met in NASA's Advanced Communications Technology Satellite (ACTS) Project by the implementation of optimal processing schemes derived through the use of the Rain Attenuation Prediction Model and nonlinear Markov filtering theory.

  16. Computation of nuclear reactor parameters using a stretch Kalman filtering

    International Nuclear Information System (INIS)

    Zwingelstein, G.; Poujol, A.

    1976-01-01

    A method of nonlinear stochastic filtering, the stretched Kalman filter, is used for the estimation of two basic parameters involved in the control of nuclear reactor start-up. The corresponding algorithm is stored in a small Multi-8 computer and tested with data recorded at the Ulysse reactor (I.N.S.T.N.). The various practical problems involved in using the algorithm are examined: filter initialization, influence of the model, etc. The quality and time savings obtained in the computation make real-time operation possible, with the computer connected to the reactor.

  17. Retina-Inspired Filter.

    Science.gov (United States)

    Doutsi, Effrosyni; Fillatre, Lionel; Antonini, Marc; Gaulmin, Julien

    2018-07-01

    This paper introduces a novel filter inspired by the human retina. The human retina consists of three different layers: the outer plexiform layer (OPL), the inner plexiform layer, and the ganglionic layer. Our inspiration is the linear transform that takes place in the OPL, which has been mathematically described by the neuroscientific model "virtual retina." This model is the cornerstone for deriving the non-separable spatio-temporal OPL retina-inspired filter, referred to simply as the retina-inspired filter, studied in this paper. The filter is connected to the dynamic behavior of the retina, which allows the retina to increase the sharpness of the visual stimulus during filtering, before its transmission to the brain. We establish that this retina-inspired transform forms a group of spatio-temporal Weighted Difference of Gaussian (WDoG) filters when applied to a still image visible for a given time. We analyze the spatial frequency bandwidth of the retina-inspired filter with respect to time and show that the WDoG spectrum varies from lowpass to bandpass. Therefore, as time increases, the retina-inspired filter extracts different kinds of information from the input image. Finally, we discuss the benefits of the retina-inspired filter in image processing applications such as edge detection and compression.
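
    A hedged, spatial-only sketch of the WDoG idea: the kernel below is a weighted difference of two Gaussians whose surround weight can be ramped over time to move the response from lowpass to bandpass. The actual retina-inspired filter is non-separable and spatio-temporal; function and parameter names here are illustrative.

        import numpy as np

        def wdog_kernel(size, sigma_c, sigma_s, w_c, w_s):
            # Weighted Difference of Gaussians: w_c*G(sigma_c) - w_s*G(sigma_s).
            # With w_s ~ 0 the kernel is lowpass; as w_s grows toward w_c the
            # response becomes bandpass, mimicking the temporal evolution.
            ax = np.arange(size) - size // 2
            xx, yy = np.meshgrid(ax, ax)
            g = lambda s: np.exp(-(xx**2 + yy**2) / (2 * s**2)) / (2 * np.pi * s**2)
            return w_c * g(sigma_c) - w_s * g(sigma_s)

        k_early = wdog_kernel(21, 1.0, 3.0, 1.0, 0.0)   # lowpass-like response
        k_late = wdog_kernel(21, 1.0, 3.0, 1.0, 0.9)    # bandpass-like response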

  18. Study of different filters

    International Nuclear Information System (INIS)

    Cochinal, R.; Rouby, R.

    1959-01-01

    This note first presents terminology related to filters and their operation, then gives an overview of general filter characteristics such as pressure drop as a function of gas flow rate, efficiency, and clogging as a function of filter pollution. It also indicates which standard aerosols are generally used, how they are dosed, and how efficiency is determined with a standard aerosol. Then, after a presentation of the filtration principle, the note reports studies of several filters: glass wool, filter papers provided by different companies, Teflon foam, English filters, Teflon wool, sintered Teflonite, quartz wool, polyvinyl chloride foam, a synthetic filter, and sintered bronze. The third part reports the study of some aerosol and dust separators.

  19. Changing ventilation filters

    International Nuclear Information System (INIS)

    Hackney, S.

    1980-01-01

    A filter changing unit has a door which interlocks with the door of a filter chamber, so as to prevent contamination of the outer surfaces of the doors by radioactive material collected on the filter element, and a movable support onto which a filter chamber can be stored within the unit, in such a way that the doors of the unit and the filter chamber can be replaced. The door pivots and interlocks with the other door by means of a bolt; a seal around the peripheral lip of the first door engages the periphery of the second door to close the gap. A support pivots into a lower filter element storage position. Inspection windows and glove ports are provided. The unit is releasably connected to the filter chamber by bolts engaging a flange provided around an opening. (author)

  20. Balanced microwave filters

    CERN Document Server

    Hong, Jiasheng; Medina, Francisco; Martín, Ferran

    2018-01-01

    This book presents and discusses strategies for the design and implementation of common-mode-suppressed balanced microwave filters, including narrowband, wideband, and ultra-wideband filters. It examines differential-mode, or balanced, microwave filters by discussing several practical realizations of these passive components. Topics covered include selective mode suppression, designs based on distributed and semi-lumped approaches, multilayer technologies, defected ground structures, coupled resonators, metamaterials, interference techniques, and substrate integrated waveguides, among others. Divided into five parts, Balanced Microwave Filters begins with an introduction that presents the fundamentals of balanced lines, circuits, and networks. Part 2 covers balanced transmission lines with common-mode noise suppression, including several types of common-mode filters and the application of such filters to enhance common-mode suppression in balanced bandpass filters. Next, Part 3 exa...