WorldWideScience

Sample records for iterative algorithm based

  1. Multi-objective mixture-based iterated density estimation evolutionary algorithms

    NARCIS (Netherlands)

    Thierens, D.; Bosman, P.A.N.

    2001-01-01

    We propose an algorithm for multi-objective optimization using a mixture-based iterated density estimation evolutionary algorithm (MIDEA). The MIDEA algorithm is a probabilistic model building evolutionary algorithm that constructs at each generation a mixture of factorized probability

  2. An iterative algorithm for fuzzy mixed production planning based on the cumulative membership function

    Directory of Open Access Journals (Sweden)

    Juan Carlos Figueroa García

    2011-12-01

    The presented approach uses an iterative algorithm that finds stable solutions to problems with fuzzy parameters on both sides of an FLP problem. The algorithm is based on the soft constraints method proposed by Zimmermann, combined with an iterative procedure that yields a single optimal solution.

  3. Phase-only stereoscopic hologram calculation based on Gerchberg–Saxton iterative algorithm

    International Nuclear Information System (INIS)

    Xia Xinyi; Xia Jun

    2016-01-01

    A phase-only computer-generated holography (CGH) calculation method for stereoscopic holography is proposed in this paper. The two-dimensional (2D) perspective projection views of the three-dimensional (3D) object are generated by computer graphics rendering techniques. Based on these views, a phase-only hologram is calculated using the Gerchberg–Saxton (GS) iterative algorithm. Compared with the non-iterative algorithm used in conventional stereoscopic holography, the proposed method improves the holographic image quality, especially for the phase-only hologram encoded from the complex distribution. Both simulation and optical experiment results demonstrate that our proposed method gives higher quality reconstruction than the traditional method.

  4. Iterative Observer-based Estimation Algorithms for Steady-State Elliptic Partial Differential Equation Systems

    KAUST Repository

    Majeed, Muhammad Usman

    2017-07-19

    Steady-state elliptic partial differential equations (PDEs) are frequently used to model a diverse range of physical phenomena. The source and boundary data estimation problems for such PDE systems are of prime interest in various engineering disciplines including biomedical engineering, mechanics of materials and earth sciences. Almost all existing solution strategies for such problems can be broadly classified as optimization-based techniques, which are computationally heavy, especially when the problems are formulated on higher dimensional space domains. In this dissertation, however, feedback-based state estimation algorithms, known as state observers, are developed to solve such steady-state problems by treating one of the space variables as time-like. In this regard, first, an iterative observer algorithm is developed that sweeps over regular-shaped domains and solves boundary estimation problems for the steady-state Laplace equation. It is well known that source and boundary estimation problems for elliptic PDEs are highly sensitive to noise in the data. To address this, an optimal iterative observer algorithm, a robust counterpart of the iterative observer, is presented to tackle the ill-posedness due to noise. The iterative observer algorithm and the optimal iterative algorithm are then used to solve source localization and estimation problems for the Poisson equation for the noise-free and noisy data cases, respectively. Next, a divide-and-conquer approach is developed for three-dimensional domains with two congruent parallel surfaces to solve the boundary and source data estimation problems for steady-state Laplace- and Poisson-type systems, respectively. Theoretical results are shown using a functional analysis framework, and consistent numerical simulation results are presented for several test cases using finite difference discretization schemes.

  5. A Study on GPU-based Iterative ML-EM Reconstruction Algorithm for Emission Computed Tomographic Imaging Systems

    Energy Technology Data Exchange (ETDEWEB)

    Ha, Woo Seok; Kim, Soo Mee; Park, Min Jae; Lee, Dong Soo; Lee, Jae Sung [Seoul National University, Seoul (Korea, Republic of)]

    2009-10-15

    The maximum likelihood-expectation maximization (ML-EM) algorithm is a statistical reconstruction algorithm derived from a probabilistic model of the emission and detection processes. Although ML-EM has many advantages in accuracy and utility, its use is limited by the computational burden of iterative processing on a CPU (central processing unit). In this study, we developed a parallel computing technique on a GPU (graphics processing unit) for the ML-EM algorithm. Using a GeForce 9800 GTX+ graphics card and CUDA (compute unified device architecture), the projection and backprojection steps of the ML-EM algorithm were parallelized. The computation times for projection, for the errors between measured and estimated data, and for backprojection in one iteration were measured. Total time included the latency of data transmission between RAM and GPU memory. The total computation times of the CPU- and GPU-based ML-EM with 32 iterations were 3.83 and 0.26 s, respectively, a speedup of about 15 on the GPU. When the number of iterations increased to 1024, the CPU- and GPU-based computations took 18 min and 8 s in total, respectively. This improvement of about 135 times was caused by the growing delay of CPU-based computing after a certain number of iterations, whereas the GPU-based computation showed very little variation in time per iteration owing to the use of shared memory. The GPU-based parallel computation for ML-EM significantly improved computing speed and stability. The developed GPU-based ML-EM algorithm could easily be modified for other imaging geometries.

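    For reference, the core ML-EM update that this record parallelizes on the GPU can be sketched in a few lines of NumPy. This is a generic sketch of the textbook algorithm with a dense stand-in system matrix, not the authors' CUDA implementation:

      import numpy as np

      def ml_em(A, y, n_iters=32, eps=1e-12):
          # A: (n_bins, n_voxels) system matrix; y: measured counts per bin.
          x = np.ones(A.shape[1])            # uniform initial image
          sens = A.T @ np.ones(A.shape[0])   # sensitivity image, A^T 1
          for _ in range(n_iters):
              proj = A @ x                   # forward projection
              ratio = y / (proj + eps)       # measured / estimated counts
              x *= (A.T @ ratio) / (sens + eps)  # backproject and normalize
          return x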

  6. Iterative Mixture Component Pruning Algorithm for Gaussian Mixture PHD Filter

    Directory of Open Access Journals (Sweden)

    Xiaoxi Yan

    2014-01-01

    To address the increasing number of mixture components in the Gaussian mixture PHD filter, an iterative mixture component pruning algorithm is proposed. The pruning algorithm is based on maximizing the posterior probability density of the mixture weights. The entropy distribution of the mixture weights is adopted as the prior distribution of the mixture component parameters. The iterative update formulas for the mixture weights are derived via a Lagrange multiplier and the Lambert W function. Mixture components whose weights become negative during the iterative procedure are pruned by setting the corresponding mixture weights to zero. In addition, multiple mixture components with similar parameters describing the same PHD peak can be merged into one mixture component in the algorithm. Simulation results show that the proposed iterative mixture component pruning algorithm is superior to the typical threshold-based pruning algorithm.

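    For context, the typical threshold-based pruning and merging step that this record compares against can be sketched as follows. This is the standard GM-PHD prune/merge baseline with illustrative threshold values, not the paper's posterior-maximizing iteration:

      import numpy as np

      def prune_and_merge(weights, means, covs,
                          trunc_thresh=1e-5, merge_thresh=4.0, max_comps=100):
          # Discard components with negligible weight.
          keep = [i for i, w in enumerate(weights) if w > trunc_thresh]
          out = []
          while keep:
              j = max(keep, key=lambda i: weights[i])   # strongest component
              d = {i: means[i] - means[j] for i in keep}
              close = [i for i in keep                  # Mahalanobis-close
                       if d[i] @ np.linalg.solve(covs[i], d[i]) <= merge_thresh]
              w = sum(weights[i] for i in close)        # moment-matched merge
              m = sum(weights[i] * means[i] for i in close) / w
              P = sum(weights[i] * (covs[i] + np.outer(m - means[i], m - means[i]))
                      for i in close) / w
              out.append((w, m, P))
              keep = [i for i in keep if i not in close]
          out.sort(key=lambda c: -c[0])
          return out[:max_comps]     # cap the number of components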

  7. A filtered backprojection algorithm with characteristics of the iterative Landweber algorithm

    OpenAIRE

    Zeng, Gengsheng L.

    2012-01-01

    Purpose: In order to eventually develop an analytical algorithm with the noise characteristics of an iterative algorithm, this technical note develops a window function for the filtered backprojection (FBP) algorithm in tomography that makes it behave like the iterative Landweber algorithm.
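
    The connection the note exploits can be seen from standard regularization theory: k Landweber iterations act on each singular component of the system with a closed-form filter factor, which is exactly the kind of window that can be grafted onto an FBP ramp filter. A minimal sketch of those factors (generic, not the paper's specific window):

      import numpy as np

      def landweber_filter_factors(s, alpha, k):
          # Landweber: x_{j+1} = x_j + alpha * A.T @ (y - A @ x_j).
          # After k iterations, each singular value s of A is reproduced
          # with factor 1 - (1 - alpha*s**2)**k (requires alpha < 2/s_max**2).
          s = np.asarray(s, dtype=float)
          return 1.0 - (1.0 - alpha * s**2) ** k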

  8. Compressively sampled MR image reconstruction using generalized thresholding iterative algorithm

    Science.gov (United States)

    Elahi, Sana; Kaleem, Muhammad; Omer, Hammad

    2018-01-01

    Compressed sensing (CS) is an emerging area of interest in Magnetic Resonance Imaging (MRI). CS is used for the reconstruction of images from a very limited number of samples in k-space, which significantly reduces the MRI data acquisition time. One important requirement for signal recovery in CS is the use of an appropriate non-linear reconstruction algorithm. It is a challenging task to choose a reconstruction algorithm that accurately reconstructs MR images from under-sampled k-space data. Various algorithms have been used to solve the system of non-linear equations for better image quality and reconstruction speed in CS. In the recent past, the iterative soft thresholding algorithm (ISTA) has been introduced in CS-MRI. This algorithm directly cancels the incoherent artifacts produced by the undersampling in k-space. This paper introduces an improved iterative algorithm based on a p-thresholding technique for CS-MRI image reconstruction. The use of a p-thresholding function promotes sparsity in the image, which is a key factor for CS-based image reconstruction. The p-thresholding-based iterative algorithm is a modification of ISTA and minimizes non-convex functions. It is shown that the proposed p-thresholding iterative algorithm can effectively recover the fully sampled image from under-sampled MRI data. The performance of the proposed method is verified using simulated and actual MRI data taken at St. Mary's Hospital, London. The quality of the reconstructed images is measured in terms of peak signal-to-noise ratio (PSNR), artifact power (AP), and structural similarity index measure (SSIM). The proposed approach shows improved performance when compared to other iterative algorithms based on log-thresholding, soft-thresholding and hard-thresholding techniques at different reduction factors.
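
    The ISTA baseline that this record modifies can be sketched as below; the Fourier encoding operator of CS-MRI is replaced here by a generic dense matrix A, and the p-shrinkage rule shown is one common form from the literature, not necessarily the exact operator of the paper:

      import numpy as np

      def ista(A, y, lam, n_iters=100, p=None):
          L = np.linalg.norm(A, 2) ** 2      # Lipschitz constant of the gradient
          x = np.zeros(A.shape[1])
          t = lam / L
          for _ in range(n_iters):
              z = x - (A.T @ (A @ x - y)) / L        # gradient step on data term
              if p is None:                          # classical soft thresholding
                  x = np.sign(z) * np.maximum(np.abs(z) - t, 0.0)
              else:                                  # p-shrinkage variant, 0 < p < 1
                  shrink = t * (np.abs(z) + 1e-12) ** (p - 1.0)
                  x = np.sign(z) * np.maximum(np.abs(z) - shrink, 0.0)
          return x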

  9. Array architectures for iterative algorithms

    Science.gov (United States)

    Jagadish, Hosagrahar V.; Rao, Sailesh K.; Kailath, Thomas

    1987-01-01

    Regular mesh-connected arrays are shown to be isomorphic to a class of so-called regular iterative algorithms. For a wide variety of problems it is shown how to obtain appropriate iterative algorithms and then how to translate these algorithms into arrays in a systematic fashion. Several 'systolic' arrays presented in the literature are shown to be specific cases of the variety of architectures that can be derived by the techniques presented here. These include arrays for Fourier Transform, Matrix Multiplication, and Sorting.

  10. A fast method to emulate an iterative POCS image reconstruction algorithm.

    Science.gov (United States)

    Zeng, Gengsheng L

    2017-10-01

    Iterative image reconstruction algorithms are commonly used to optimize an objective function, especially when the objective function is nonquadratic. Generally speaking, iterative algorithms are computationally inefficient. This paper presents a fast algorithm that has one backprojection and no forward projection, and derives a new method to solve an optimization problem. The nonquadratic constraint, for example an edge-preserving denoising constraint, is implemented as a nonlinear filter. The algorithm is derived based on the POCS (projections onto convex sets) approach. A windowed FBP (filtered backprojection) algorithm enforces the data fidelity. An iterative procedure, divided into segments, enforces edge-enhancement denoising, with each segment performing nonlinear filtering. The derived iterative algorithm is computationally efficient: it contains only one backprojection and no forward projection. Low-dose CT data are used for algorithm feasibility studies. The nonlinearity is implemented as an edge-enhancing noise-smoothing filter. The patient study results demonstrate its effectiveness in processing low-dose x-ray CT data. This fast algorithm can be used to replace many iterative algorithms. © 2017 American Association of Physicists in Medicine.

  11. A New Pose Estimation Algorithm Using a Perspective-Ray-Based Scaled Orthographic Projection with Iteration.

    Directory of Open Access Journals (Sweden)

    Pengfei Sun

    Pose estimation aims at measuring the position and orientation of a calibrated camera using known image features. The pinhole model is the dominant camera model in this field; however, its imaging precision is not sufficient for an advanced pose estimation algorithm. In this paper, a new camera model, called the incident ray tracking model, is introduced. More importantly, an advanced pose estimation algorithm based on the perspective ray in the new camera model is proposed. The perspective ray, determined by two positioning points, is an abstract mathematical equivalent of the incident ray. In the proposed pose estimation algorithm, called perspective-ray-based scaled orthographic projection with iteration (PRSOI), an approximate ray-based projection is calculated by a linear system and refined by iteration. Experiments on the PRSOI have been conducted, and the results demonstrate that it is highly accurate over the six degrees of freedom (DOF) of motion and outperforms three other state-of-the-art algorithms in accuracy in comparison experiments.

  12. An image hiding method based on cascaded iterative Fourier transform and public-key encryption algorithm

    Science.gov (United States)

    Zhang, B.; Sang, Jun; Alam, Mohammad S.

    2013-03-01

    An image hiding method based on the cascaded iterative Fourier transform and a public-key encryption algorithm is proposed. First, the original secret image is encrypted into two phase-only masks M1 and M2 via the cascaded iterative Fourier transform (CIFT) algorithm. Then, the public-key encryption algorithm RSA is adopted to encrypt M2 into M2'. Finally, a host image is enlarged by extending one pixel into 2×2 pixels, and each element in M1 and M2' is multiplied by a superimposition coefficient and added to or subtracted from two different elements in the 2×2 pixels of the enlarged host image. To recover the secret image, the two masks are extracted from the stego-image without the original host image. By applying a public-key encryption algorithm, key distribution is facilitated; moreover, compared with image hiding methods based on optical interference, the proposed method may reach higher robustness by employing the characteristics of the CIFT algorithm. Computer simulations show that this method has good robustness against image processing.

  13. Iterative group splitting algorithm for opportunistic scheduling systems

    KAUST Repository

    Nam, Haewoon; Alouini, Mohamed-Slim

    2014-05-01

    An efficient feedback algorithm for opportunistic scheduling systems based on iterative group splitting is proposed in this paper. Similar to the opportunistic splitting algorithm, the proposed algorithm adjusts (or lowers) the feedback threshold during a guard period if no user sends a feedback. However, when a feedback collision occurs at any point in time, the proposed algorithm no longer updates the threshold but narrows down the user search space by dividing the users into multiple groups iteratively, whereas the opportunistic splitting algorithm keeps adjusting the threshold until a single user is found. Since the threshold is only updated when no user sends a feedback, it is shown that the proposed algorithm significantly alleviates the signaling overhead of distributing the threshold to the users by the scheduler. More importantly, the proposed algorithm requires fewer mini-slots than the opportunistic splitting algorithm to make a user selection at a given level of scheduling outage probability, or provides a higher ergodic capacity for a given number of mini-slots. © 2013 IEEE.
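
    A toy simulation of the feedback rounds described above might look like the following. All numbers (threshold, step, group size) are illustrative; a real system would drive them from the scheduling outage target:

      import numpy as np

      rng = np.random.default_rng(0)

      def select_user(snrs, threshold, step=0.5):
          candidates = list(range(len(snrs)))
          mini_slots = 0
          while True:
              mini_slots += 1
              above = [u for u in candidates if snrs[u] >= threshold]
              if len(above) == 1:            # unique feedback: user found
                  return above[0], mini_slots
              if not above:                  # no feedback: lower the threshold
                  threshold -= step
                  continue
              # Collision: freeze the threshold and split the contenders,
              # polling one sub-group in the next mini-slot.
              candidates = above[: max(1, len(above) // 2)]

      snrs = rng.rayleigh(scale=1.0, size=32)   # toy per-user channel quality
      best, slots = select_user(snrs, threshold=2.5)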

  14. Hybrid iterative phase retrieval algorithm based on fusion of intensity information in three defocused planes.

    Science.gov (United States)

    Zeng, Fa; Tan, Qiaofeng; Yan, Yingbai; Jin, Guofan

    2007-10-01

    The study of phase retrieval is meaningful for its wide applications in many domains, such as adaptive optics, assessment of laser quality and precise measurement of optical surfaces. Here a hybrid iterative phase retrieval algorithm is proposed, based on fusion of the intensity information in three defocused planes. First, the conjugate gradient algorithm is adapted to achieve a coarse solution of the phase distribution in the input plane; then the iterative angular spectrum method is applied in succession for a better retrieval result. This algorithm is applicable even when the exact shape and size of the aperture in the input plane are unknown. Moreover, the algorithm always exhibits good convergence, i.e., the retrieved results are insensitive to the chosen positions of the three defocused planes and to the initial guess of the complex amplitude in the input plane, as proved by both simulations and further experiments.

  15. A new iterative algorithm to reconstruct the refractive index.

    Science.gov (United States)

    Liu, Y J; Zhu, P P; Chen, B; Wang, J Y; Yuan, Q X; Huang, W X; Shu, H; Li, E R; Liu, X S; Zhang, K; Ming, H; Wu, Z Y

    2007-06-21

    The latest developments in x-ray imaging are associated with techniques based on phase contrast. However, the image reconstruction procedures demand significant improvements of the traditional methods, and/or new algorithms have to be introduced to take advantage of the high contrast and sensitivity of the new experimental techniques. In this letter, an improved iterative reconstruction algorithm based on the maximum likelihood expectation maximization technique is presented and discussed in order to reconstruct the distribution of the refractive index from data collected by an analyzer-based imaging setup. The technique probes the partial derivative of the refractive index with respect to an axis lying in the meridional plane and perpendicular to the propagation direction. Computer simulations confirm the reliability of the proposed algorithm. In addition, a comparison between an analytical reconstruction algorithm and the iterative method is discussed, together with the convergence characteristics of the latter algorithm. Finally, we show how the proposed algorithm may be applied to reconstruct the distribution of the refractive index of an epoxy cylinder containing small air bubbles about 300 μm in diameter.

  16. Retinal biometrics based on Iterative Closest Point algorithm.

    Science.gov (United States)

    Hatanaka, Yuji; Tajima, Mikiya; Kawasaki, Ryo; Saito, Koko; Ogohara, Kazunori; Muramatsu, Chisako; Sunayama, Wataru; Fujita, Hiroshi

    2017-07-01

    The pattern of blood vessels in the eye is unique to each person and rarely changes over time; therefore, it is well known that retinal blood vessels are useful for biometrics. This paper describes a biometrics method using the Jaccard similarity coefficient (JSC) based on blood vessel regions in retinal image pairs. The retinal image pairs were roughly matched by the centers of their optic discs. The image pairs were then aligned using the Iterative Closest Point algorithm based on detailed blood vessel skeletons, applying a perspective transform to the retinal images for registration. Finally, the pairs were classified as either correct or incorrect using the JSC of the blood vessel regions in the image pairs. The proposed method was applied to temporal retinal images obtained in 2009 (695 images) and 2013 (87 images); the 87 images acquired in 2013 were all from persons already examined in 2009. The accuracy of the proposed method reached 100%.
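
    The decision statistic itself is simple; a sketch for binary vessel masks (assumed already aligned by the ICP step) might read:

      import numpy as np

      def jaccard_similarity(mask_a, mask_b):
          # Boolean arrays of equal shape, True on vessel pixels.
          a, b = np.asarray(mask_a, bool), np.asarray(mask_b, bool)
          union = np.logical_or(a, b).sum()
          if union == 0:
              return 1.0                   # two empty masks are identical
          return np.logical_and(a, b).sum() / union

      # A pair is accepted as the same person when the JSC exceeds a
      # tuned threshold; the threshold value is not given in this record.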

  17. Feasibility study of the iterative x-ray phase retrieval algorithm

    International Nuclear Information System (INIS)

    Meng Fanbo; Liu Hong; Wu Xizeng

    2009-01-01

    An iterative phase retrieval algorithm was previously investigated for in-line x-ray phase imaging. Through detailed theoretical analysis and computer simulations, we now discuss the limitations, robustness, and efficiency of the algorithm. The iterative algorithm proved robust against imaging noise but sensitive to variations of several system parameters; it is also efficient in terms of calculation time. It is shown that the algorithm can be applied to phase retrieval based on one phase-contrast image and one attenuation image, or on two phase-contrast images; in both cases, the two images can be obtained either by one detector in two exposures, or by two detectors in only one exposure as in the dual-detector scheme.

  18. A framelet-based iterative maximum-likelihood reconstruction algorithm for spectral CT

    Science.gov (United States)

    Wang, Yingmei; Wang, Ge; Mao, Shuwei; Cong, Wenxiang; Ji, Zhilong; Cai, Jian-Feng; Ye, Yangbo

    2016-11-01

    Standard computed tomography (CT) cannot reproduce spectral information of an object. Hardware solutions include dual-energy CT, which scans the object twice at different x-ray energy levels, and energy-discriminative detectors, which can separate lower and higher energy levels from a single x-ray scan. In this paper, we propose a software solution and give an iterative algorithm that reconstructs an image with spectral information from just one scan with a standard energy-integrating detector. The spectral information obtained can be used to produce color CT images, spectral curves of the attenuation coefficient μ(r,E) at points inside the object, and photoelectric images, which are all valuable imaging tools in cancer diagnosis. Our software solution requires no change to the hardware of a CT machine. With the Shepp-Logan phantom, we have found that although the photoelectric and Compton components were not perfectly reconstructed, their composite effect was reconstructed very accurately as compared to the ground truth and the dual-energy CT counterpart. This means that our proposed method has an intrinsic benefit in beam hardening correction and metal artifact reduction. The algorithm is based on a nonlinear polychromatic acquisition model for x-ray CT. The key technique is a sparse representation of iterations in a framelet system. Convergence of the algorithm is studied. This is believed to be the first application of framelet imaging tools to a nonlinear inverse problem.

  1. Iterative concurrent reconstruction algorithms for emission computed tomography

    International Nuclear Information System (INIS)

    Brown, J.K.; Hasegawa, B.H.; Lang, T.F.

    1994-01-01

    Direct reconstruction techniques, such as those based on filtered backprojection, are typically used for emission computed tomography (ECT), even though it has been argued that iterative reconstruction methods may produce better clinical images. The major disadvantage of iterative reconstruction algorithms, and a significant reason for their lack of clinical acceptance, is their computational burden. We outline a new class of "concurrent" iterative reconstruction techniques for ECT in which the reconstruction process is reorganized such that a significant fraction of the computational processing occurs concurrently with the acquisition of ECT projection data. These new algorithms use the 10-30 min required for acquisition of a typical SPECT scan to iteratively process the available projection data, significantly reducing the requirements for post-acquisition processing. These algorithms are tested on SPECT projection data from a Hoffman brain phantom acquired with 2 × 10^5 counts in 64 views, each having 64 projections. The SPECT images are reconstructed as 64 × 64 tomograms, starting with six angular views. Other angular views are added to the reconstruction process sequentially, in a manner that reflects their availability for a typical acquisition protocol. The results suggest that if T s of concurrent processing are used, the reconstruction processing time required after completion of the data acquisition can be reduced by at least T/3 s.

  2. Perturbation resilience and superiorization of iterative algorithms

    International Nuclear Information System (INIS)

    Censor, Y; Davidi, R; Herman, G T

    2010-01-01

    Iterative algorithms aimed at solving some problems are discussed. For certain problems, such as finding a common point in the intersection of a finite number of convex sets, there often exist iterative algorithms that impose very little demand on computer resources. For other problems, such as finding that point in the intersection at which the value of a given function is optimal, algorithms tend to need more computer memory and longer execution time. A methodology is presented whose aim is to produce automatically, for an iterative algorithm of the first kind, a 'superiorized version' of it that retains its computational efficiency but nevertheless goes a long way toward solving an optimization problem. This is possible to do if the original algorithm is 'perturbation resilient', which is shown to be the case for various projection algorithms for solving the consistent convex feasibility problem. The superiorized versions of such algorithms use perturbations that steer the process in the direction of a superior feasible point, which is not necessarily optimal, with respect to the given function. After presenting these intuitive ideas in a precise mathematical form, they are illustrated in image reconstruction from projections for two different projection algorithms superiorized for the function whose value is the total variation of the image.
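
    Schematically, a superiorized version interleaves small, summable perturbations that reduce the given function with the feasibility-seeking steps of the original algorithm. A minimal sketch, with generic stand-ins for the projection sweep and the function's gradient:

      import numpy as np

      def superiorize(project, grad_phi, x0, n_iters=200, a=0.99):
          # project:  one feasibility-seeking sweep of the base algorithm
          # grad_phi: gradient of the function phi to be reduced (e.g. TV)
          x = np.asarray(x0, dtype=float)
          for k in range(n_iters):
              g = grad_phi(x)
              gnorm = np.linalg.norm(g)
              if gnorm > 0:
                  x = x - (a ** k) * g / gnorm   # summable perturbation
              x = project(x)                     # feasibility-seeking step
          return x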

  3. Performance comparison between total variation (TV)-based compressed sensing and statistical iterative reconstruction algorithms

    International Nuclear Information System (INIS)

    Tang Jie; Nett, Brian E; Chen Guanghong

    2009-01-01

    Of all available reconstruction methods, statistical iterative reconstruction algorithms appear particularly promising since they enable accurate physical noise modeling. The newly developed compressive sampling/compressed sensing (CS) algorithm has shown the potential to accurately reconstruct images from highly undersampled data. The CS algorithm can be implemented in the statistical reconstruction framework as well. In this study, we compared the performance of two standard statistical reconstruction algorithms (penalized weighted least squares and q-GGMRF) to the CS algorithm. In assessing the image quality using these iterative reconstructions, it is critical to utilize realistic background anatomy as the reconstruction results are object dependent. A cadaver head was scanned on a Varian Trilogy system at different dose levels. Several figures of merit, including the relative root mean square error and a quality factor that accounts for the noise performance and the spatial resolution, were introduced to objectively evaluate reconstruction performance. A comparison is presented between the three algorithms at several dose levels for a constant undersampling factor. To facilitate this comparison, the original CS method was formulated in the framework of the statistical image reconstruction algorithms. Important conclusions of the measurements from our studies are that (1) for realistic neuro-anatomy, over 100 projections are required to avoid streak artifacts in the reconstructed images even with CS reconstruction, (2) regardless of the algorithm employed, it is beneficial to distribute the total dose to more views as long as each view remains quantum noise limited, and (3) the total variation-based CS method is not appropriate for very low dose levels because, while it can mitigate streaking artifacts, the images exhibit patchy behavior, which is potentially harmful for medical diagnosis.

  4. Implementation of the Iterative Proportion Fitting Algorithm for Geostatistical Facies Modeling

    International Nuclear Information System (INIS)

    Li Yupeng; Deutsch, Clayton V.

    2012-01-01

    In geostatistics, most stochastic algorithms for the simulation of categorical variables, such as facies or rock types, require a conditional probability distribution. The multivariate probability distribution of all the grouped locations, including the unsampled location, permits calculation of the conditional probability directly from its definition. In this article, the iterative proportion fitting (IPF) algorithm is implemented to infer this multivariate probability. Using the IPF algorithm, the multivariate probability is obtained by iterative modification of an initial estimate of the multivariate probability using lower order bivariate probabilities as constraints. The imposed bivariate marginal probabilities are inferred from profiles along drill holes or wells. In the IPF process, a sparse matrix is used to calculate the marginal probabilities from the multivariate probability, which makes the iterative fitting more tractable and practical. The algorithm can be extended to higher order marginal probability constraints, as used in multiple point statistics. The theoretical framework is developed and illustrated with an estimation and a simulation example.
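
    The classic bivariate form of IPF, which the record generalizes to multivariate tables with bivariate constraints, fits a joint probability table to imposed marginals by alternating row and column rescaling:

      import numpy as np

      def ipf(p0, row_marg, col_marg, n_iters=50, eps=1e-12):
          p = np.array(p0, dtype=float)
          for _ in range(n_iters):
              p *= (row_marg / (p.sum(axis=1) + eps))[:, None]  # fit row sums
              p *= (col_marg / (p.sum(axis=0) + eps))[None, :]  # fit column sums
          return p

      # Example: start from a uniform 3x3 table and impose the marginals.
      p = ipf(np.full((3, 3), 1/9),
              np.array([0.5, 0.3, 0.2]), np.array([0.4, 0.4, 0.2]))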

  5. New perspectives in face correlation: discrimination enhancement in face recognition based on iterative algorithm

    Science.gov (United States)

    Wang, Q.; Alfalou, A.; Brosseau, C.

    2016-04-01

    Here, we report a brief review of recent developments in correlation algorithms. Several implementation schemes and specific applications proposed in recent years are also given to illustrate powerful applications of these methods. Following a discussion and comparison of the implementation of these schemes, we believe that all-numerical implementation is the most practical choice for application of the correlation method, because the advantages of optical processing cannot compensate for the technical and/or financial cost of an optical implementation platform. We also present a simple iterative algorithm to optimize the training images of composite correlation filters. Using three or four iterations, the peak-to-correlation energy (PCE) value of the correlation plane can be significantly enhanced. A simulation test using the Pointing Head Pose Image Database (PHPID) illustrates the effectiveness of this statement. Our method can be applied to many composite filters based on linear composition of training images as an optimization means.

  6. A fast iterative soft-thresholding algorithm for few-view CT reconstruction

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Junfeng; Mou, Xuanqin; Zhang, Yanbo [Jiaotong Univ., Xi'an (China). Inst. of Image Processing and Pattern Recognition]

    2011-07-01

    Iterative soft-thresholding algorithms with total variation regularization can produce high-quality reconstructions from few views and even in the presence of noise. However, these algorithms are known to converge quite slowly, with a theoretically proven global convergence rate of O(1/k), where k is the iteration number. In this paper, we present a fast iterative soft-thresholding algorithm for few-view fan-beam CT reconstruction with a global convergence rate of O(1/k^2), which is significantly faster than the iterative soft-thresholding algorithm. Simulation results demonstrate the superior performance of the proposed algorithm in terms of convergence speed and reconstruction quality.
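
    The O(1/k^2) rate is the hallmark of a FISTA-type momentum step. A minimal sketch with an l1 penalty standing in for the paper's total variation term, and a dense matrix standing in for the fan-beam projector:

      import numpy as np

      def fista(A, y, lam, n_iters=100):
          L = np.linalg.norm(A, 2) ** 2    # Lipschitz constant of the gradient
          x = np.zeros(A.shape[1])
          z, t = x.copy(), 1.0
          for _ in range(n_iters):
              x_new = z - (A.T @ (A @ z - y)) / L            # gradient step
              x_new = np.sign(x_new) * np.maximum(np.abs(x_new) - lam / L, 0.0)
              t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
              z = x_new + ((t - 1.0) / t_new) * (x_new - x)  # momentum step
              x, t = x_new, t_new
          return x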

  7. The Fixpoint-Iteration Algorithm for Parity Games

    Directory of Open Access Journals (Sweden)

    Florian Bruse

    2014-08-01

    It is known that the model checking problem for the modal mu-calculus reduces to the problem of solving a parity game and vice versa. The latter is realised by the Walukiewicz formulas, which are satisfied by a node in a parity game iff player 0 wins the game from this node. Thus, they define her winning region, and any model checking algorithm for the modal mu-calculus, suitably specialised to the Walukiewicz formulas, yields an algorithm for solving parity games. In this paper we study the effect of employing the most straightforward mu-calculus model checking algorithm: fixpoint iteration. This is also one of the few algorithms, if not the only one, that was not originally devised for parity game solving. While an empirical study quickly shows that this does not yield an algorithm that works well in practice, it is interesting from a theoretical point of view for two reasons: first, it is exponential on virtually all families of games that were designed as lower bounds for very particular algorithms, suggesting that fixpoint iteration is connected to all of those. Second, fixpoint iteration does not compute positional winning strategies. Note that the Walukiewicz formulas only define winning regions; some additional work is needed in order to make this algorithm compute winning strategies. We show that these are particular exponential-space strategies which we call eventually-positional, and we show how positional ones can be extracted from them.

  8. Introduction: a brief overview of iterative algorithms in X-ray computed tomography.

    Science.gov (United States)

    Soleimani, M; Pengpen, T

    2015-06-13

    This paper presents a brief overview of some basic iterative algorithms; more sophisticated methods are presented in the research papers in this issue. A range of algebraic iterative algorithms are covered here, including ART, SART and OS-SART. A major limitation of the traditional iterative methods is their computational time. Krylov subspace-based methods, such as the conjugate gradients (CG) algorithm and its variants, can be used to solve linear systems of equations arising from large-scale CT, with possible implementation using modern high-performance computing tools. The overall aim of this theme issue is to stimulate international efforts to develop the next generation of X-ray computed tomography (CT) image reconstruction software. © 2015 The Author(s) Published by the Royal Society. All rights reserved.

  9. The irace package: Iterated racing for automatic algorithm configuration

    Directory of Open Access Journals (Sweden)

    Manuel López-Ibáñez

    2016-01-01

    Modern optimization algorithms typically require the setting of a large number of parameters to optimize their performance. The immediate goal of automatic algorithm configuration is to find, automatically, the best parameter settings of an optimizer. Ultimately, automatic algorithm configuration has the potential to lead to new design paradigms for optimization software. The irace package is a software package that implements a number of automatic configuration procedures. In particular, it offers iterated racing procedures, which have been used successfully to automatically configure various state-of-the-art algorithms. The iterated racing procedures implemented in irace include the iterated F-race algorithm and several extensions and improvements over it. In this paper, we describe the rationale underlying the iterated racing procedures and introduce a number of recent extensions. Among these, we introduce a restart mechanism to avoid premature convergence, the use of truncated sampling distributions to correctly handle parameter bounds, and an elitist racing procedure for ensuring that the best configurations returned are also those evaluated in the highest number of training instances. We experimentally evaluate the most recent version of irace and demonstrate with a number of example applications the use and potential of irace in particular, and of automatic algorithm configuration in general.

  10. Phase extraction based on iterative algorithm using five-frame crossed fringes in phase measuring deflectometry

    Science.gov (United States)

    Jin, Chengying; Li, Dahai; Kewei, E.; Li, Mengyang; Chen, Pengyu; Wang, Ruiyang; Xiong, Zhao

    2018-06-01

    In phase measuring deflectometry, two orthogonal sinusoidal fringe patterns are separately projected on the test surface and the distorted fringes reflected by the surface are recorded, each with a sequential phase shift. The two components of the local surface gradients are then obtained by triangulation. This usually involves complicated and time-consuming procedures (fringe projection in the two orthogonal directions). In addition, the digital light devices (e.g. LCD screen and CCD camera) are not error free; there are quantization errors for each pixel of both the LCD and the CCD. Therefore, to avoid the complex process and improve the reliability of the phase distribution, a phase extraction algorithm using five-frame crossed fringes is presented in this paper, based on a least-squares iterative process. Using the proposed algorithm, phase distributions and phase shift amounts in two orthogonal directions can be determined simultaneously through an iterative procedure. Both a numerical simulation and a preliminary experiment are conducted to verify the validity and performance of this algorithm. Experimental results obtained by our method are shown and compared both with those obtained by the traditional 16-step phase-shifting algorithm and with those measured by a Fizeau interferometer.

  11. Discrete-Time Nonzero-Sum Games for Multiplayer Using Policy-Iteration-Based Adaptive Dynamic Programming Algorithms.

    Science.gov (United States)

    Zhang, Huaguang; Jiang, He; Luo, Chaomin; Xiao, Geyang

    2017-10-01

    In this paper, we investigate nonzero-sum games for a class of discrete-time (DT) nonlinear systems by using a novel policy iteration (PI) adaptive dynamic programming (ADP) method. The main idea of our proposed PI scheme is to utilize the iterative ADP algorithm to obtain iterative control policies that not only ensure system stability but also minimize the performance index function for each player. This paper integrates game theory, optimal control theory, and reinforcement learning techniques to formulate and handle DT nonzero-sum games for multiple players. First, we design three actor-critic algorithms, an offline one and two online ones, for the PI scheme. Subsequently, neural networks are employed to implement these algorithms, and the corresponding stability analysis is provided via Lyapunov theory. Finally, a numerical simulation example is presented to demonstrate the effectiveness of our proposed approach.

  12. Accuracy Improvement for Light-Emitting-Diode-Based Colorimeter by Iterative Algorithm

    Science.gov (United States)

    Yang, Pao-Keng

    2011-09-01

    We present a simple algorithm, combining an interpolating method with an iterative calculation, to enhance the resolution of a spectral reflectance measurement by removing the spectral broadening effect due to the finite bandwidth of the light-emitting diode (LED). The proposed algorithm can be used to improve the accuracy of a reflective colorimeter that uses multicolor LEDs as probing light sources. It is also applicable when the probing LEDs have different bandwidths in different spectral ranges, a case to which the powerful deconvolution method cannot be applied.
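
    One standard way to realize such an iterative de-broadening is a Van Cittert-type correction. The sketch below is our illustration of the idea, not the paper's exact formulation, and assumes the LED band responses have already been interpolated onto a common sampling grid as the matrix B:

      import numpy as np

      def debroaden(measured, B, n_iters=20):
          # measured: broadened reflectance samples, one per LED channel.
          # B[i, j]: normalized response of LED i at the peak wavelength
          # of LED j (a square matrix under our assumption).
          estimate = measured.astype(float).copy()
          for _ in range(n_iters):
              estimate = estimate + (measured - B @ estimate)  # Van Cittert step
          return np.clip(estimate, 0.0, 1.0)   # keep reflectance in [0, 1]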

  13. Iterative Nonlinear Tikhonov Algorithm with Constraints for Electromagnetic Tomography

    Science.gov (United States)

    Xu, Feng; Deshpande, Manohar

    2012-01-01

    Low-frequency electromagnetic tomography, such as electrical capacitance tomography (ECT), has been proposed for monitoring and mass-gauging of gas-liquid two-phase systems under microgravity conditions in NASA's future long-term space missions. Due to the ill-posed inverse problem of ECT, images reconstructed using conventional linear algorithms often suffer from limitations such as low resolution and blurred edges. Hence, new efficient high-resolution nonlinear imaging algorithms are needed for accurate two-phase imaging. The proposed Iterative Nonlinear Tikhonov Regularized Algorithm with Constraints (INTAC) is based on an efficient finite element method (FEM) forward model of the quasi-static electromagnetic problem. It iteratively minimizes the discrepancy between FEM-simulated and actually measured capacitances by adjusting the reconstructed image using the Tikhonov regularized method. More importantly, in each iteration it enforces the known permittivities of the two phases on the unknown pixels that exceed the reasonable range of permittivity. This strategy not only stabilizes the converging process but also produces sharper images. Simulations show that a resolution improvement of over 2 times can be achieved by INTAC with respect to conventional approaches. Strategies to further improve spatial imaging resolution are suggested, as well as techniques to accelerate the nonlinear forward model and thus increase the temporal resolution.

  14. A new simple iterative reconstruction algorithm for SPECT transmission measurement

    International Nuclear Information System (INIS)

    Hwang, D.S.; Zeng, G.L.

    2005-01-01

    This paper proposes a new iterative reconstruction algorithm for transmission tomography and compares this algorithm with several other methods. The new algorithm is simple and resembles the emission ML-EM algorithm in form. Due to its simplicity, it is easy to implement and fast to compute a new update at each iteration, and it always guarantees non-negative solutions. Evaluations are performed using simulation studies and real phantom data. Comparisons with other algorithms, such as convex, gradient, and logMLEM, show that the proposed algorithm is as good as the others and performs better in some cases.

  15. Iterative Object Localization Algorithm Using Visual Images with a Reference Coordinate

    Directory of Open Access Journals (Sweden)

    We-Duke Cho

    2008-09-01

    We present a simplified algorithm for localizing an object using multiple visual images that are obtained from widely used digital imaging devices. We use a parallel projection model which supports both zooming and panning of the imaging devices. Our proposed algorithm is based on a virtual viewable plane for creating a relationship between an object position and a reference coordinate. The reference point is obtained from a rough estimate which may be obtained from a pre-estimation process. The algorithm minimizes the localization error through an iterative process with relatively low computational complexity. In addition, nonlinear distortion of the digital imaging devices is compensated during the iterative process. Finally, the performance in several scenarios is evaluated and analyzed in both indoor and outdoor environments.

  16. Jointly-check iterative decoding algorithm for quantum sparse graph codes

    International Nuclear Information System (INIS)

    Jun-Hu, Shao; Bao-Ming, Bai; Wei, Lin; Lin, Zhou

    2010-01-01

    For quantum sparse graph codes with the stabilizer formalism, the unavoidable girth-four cycles in their Tanner graphs greatly degrade the iterative decoding performance of a standard belief-propagation (BP) algorithm. In this paper, we present a jointly-check iterative algorithm suitable for decoding quantum sparse graph codes efficiently. Numerical simulations show that this modified method outperforms the standard BP algorithm with an obvious performance improvement.

  17. Parallelization of the model-based iterative reconstruction algorithm DIRA

    International Nuclear Information System (INIS)

    Oertenberg, A.; Sandborg, M.; Alm Carlsson, G.; Malusek, A.; Magnusson, M.

    2016-01-01

    New paradigms for parallel programming have been devised to simplify software development on multi-core processors and many-core graphical processing units (GPU). Despite their obvious benefits, the parallelization of existing computer programs is not an easy task. In this work, the use of the Open Multiprocessing (OpenMP) and Open Computing Language (OpenCL) frameworks is considered for the parallelization of the model-based iterative reconstruction algorithm DIRA, with the aim of significantly shortening the code's execution time. Selected routines were parallelized using the OpenMP and OpenCL libraries; some routines were converted from MATLAB to C and optimised. Parallelization of the code with OpenMP was easy and resulted in an overall speedup of 15 on a 16-core computer. Parallelization with OpenCL was more difficult owing to differences between the central processing unit and GPU architectures. The resulting speedup was substantially lower than the theoretical peak performance of the GPU; the cause is explained.

  18. Subroutine MLTGRD: a multigrid algorithm based on multiplicative correction and implicit non-stationary iteration

    International Nuclear Information System (INIS)

    Barry, J.M.; Pollard, J.P.

    1986-11-01

    A FORTRAN subroutine MLTGRD is provided to efficiently solve the large systems of linear equations arising from a five-point finite difference discretisation of some elliptic partial differential equations. MLTGRD is a multigrid algorithm which provides multiplicative correction to iterative solution estimates from successively reduced systems of linear equations. It uses the method of implicit non-stationary iteration for all grid levels.

  19. Noise propagation in iterative reconstruction algorithms with line searches

    International Nuclear Information System (INIS)

    Qi, Jinyi

    2003-01-01

    In this paper we analyze the propagation of noise in iterative image reconstruction algorithms. We derive theoretical expressions for the general form of preconditioned gradient algorithms with line searches. The results are applicable to a wide range of iterative reconstruction problems, such as emission tomography, transmission tomography, and image restoration. A unique contribution of this paper compared with our previous work [1] is that the line search is explicitly modeled and we do not use the approximation that the gradient of the objective function is zero. As a result, the error in the estimate of noise at early iterations is significantly reduced.

  20. Comparison between iterative wavefront control algorithm and direct gradient wavefront control algorithm for adaptive optics system

    International Nuclear Information System (INIS)

    Cheng Sheng-Yi; Liu Wen-Jin; Chen Shan-Qiu; Dong Li-Zhi; Yang Ping; Xu Bing

    2015-01-01

    Among the wavefront control algorithms used in adaptive optics systems, the direct gradient wavefront control algorithm is the most widespread and common method. This control algorithm obtains the actuator voltages directly from wavefront slopes by pre-measuring the relational matrix between the deformable mirror actuators and the Hartmann wavefront sensor, with excellent real-time performance and stability. However, as the numbers of sub-apertures in the wavefront sensor and of deformable mirror actuators increase, the matrix operation in the direct gradient algorithm takes too much time, which becomes a major factor limiting the control performance of adaptive optics systems. In this paper we apply an iterative wavefront control algorithm to high-resolution adaptive optics systems, in which the voltages of each actuator are obtained through iterative arithmetic, which gains a great advantage in computation and storage. For an AO system with thousands of actuators, the computational complexity is about O(n^2) to O(n^3) for the direct gradient wavefront control algorithm, while it is about O(n) to O(n^{3/2}) for the iterative wavefront control algorithm, where n is the number of actuators of the AO system. The greater the numbers of sub-apertures and deformable mirror actuators, the more significant the advantage the iterative wavefront control algorithm exhibits.

  1. Efficient fractal-based mutation in evolutionary algorithms from iterated function systems

    Science.gov (United States)

    Salcedo-Sanz, S.; Aybar-Ruíz, A.; Camacho-Gómez, C.; Pereira, E.

    2018-03-01

    In this paper we present a new mutation procedure for Evolutionary Programming (EP) approaches, based on Iterated Function Systems (IFSs). The proposed mutation procedure considers a set of IFSs which are able to generate fractal structures in a two-dimensional phase space, and uses them to modify a current individual of the EP algorithm, instead of using random numbers from different probability density functions. We test this new proposal on a set of benchmark functions for continuous optimization problems, comparing the proposed mutation against classical Evolutionary Programming approaches with mutations based on Gaussian, Cauchy and chaotic maps. We also include a discussion of the IFS-based mutation in a real application of Tuned Mass Damper (TMD) location and optimization for vibration cancellation in buildings. In both practical cases, the proposed EP with the IFS-based mutation obtained extremely competitive results compared to alternative classical mutation operators.
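
    The gist of the mutation can be sketched as a chaos-game sampler whose points replace Gaussian or Cauchy draws; the two affine maps below are illustrative stand-ins, not the IFSs used in the paper:

      import numpy as np

      rng = np.random.default_rng(42)

      IFS_MAPS = [                                  # contractive affine maps
          (np.array([[0.5, 0.0], [0.0, 0.5]]), np.array([0.0, 0.0])),
          (np.array([[0.4, -0.3], [0.3, 0.4]]), np.array([0.5, 0.2])),
      ]

      def ifs_point(n_steps=50):
          # Chaos game: iterate randomly chosen maps; the orbit settles
          # onto the fractal attractor of the IFS.
          p = np.zeros(2)
          for _ in range(n_steps):
              M, b = IFS_MAPS[rng.integers(len(IFS_MAPS))]
              p = M @ p + b
          return p

      def ifs_mutate(parent, scale=0.1):
          # Perturb two randomly chosen genes with one fractal sample.
          child = np.array(parent, dtype=float)
          i, j = rng.choice(len(child), size=2, replace=False)
          dx, dy = ifs_point()
          child[i] += scale * dx
          child[j] += scale * dy
          return child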

  2. Convergence of iterative image reconstruction algorithms for Digital Breast Tomosynthesis

    DEFF Research Database (Denmark)

    Sidky, Emil; Jørgensen, Jakob Heide; Pan, Xiaochuan

    2012-01-01

    Most iterative image reconstruction algorithms are based on some form of optimization, such as minimization of a data-fidelity term plus an image regularizing penalty term. While achieving the solution of these optimization problems may not directly be clinically relevant, accurate optimization s...

  3. Optimization of image quality and acquisition time for lab-based X-ray microtomography using an iterative reconstruction algorithm

    Science.gov (United States)

    Lin, Qingyang; Andrew, Matthew; Thompson, William; Blunt, Martin J.; Bijeljic, Branko

    2018-05-01

    Non-invasive laboratory-based X-ray microtomography has been widely applied in many industrial and research disciplines. However, the main barrier to the use of laboratory systems compared to a synchrotron beamline is the much longer image acquisition time (hours per scan compared to seconds or minutes at a synchrotron), which results in limited application to dynamic in situ processes. The majority of existing laboratory X-ray microtomography is therefore limited to static imaging; relatively fast imaging (tens of minutes per scan) can only be achieved by sacrificing image quality, e.g. by reducing the exposure time or the number of projections. To alleviate this barrier, we introduce an optimized implementation of a well-known iterative reconstruction algorithm that allows users to reconstruct tomographic images with reasonable image quality, but requires lower X-ray signal counts and fewer projections than conventional methods. Quantitative analysis and comparison between the iterative and the conventional filtered back-projection reconstruction algorithms were performed using a sandstone rock sample with and without liquid phases in the pore space. Overall, by implementing the iterative reconstruction algorithm, the required image acquisition time for samples such as this, with sparse object structure, can be reduced by a factor of up to 4 without measurable loss of sharpness or signal-to-noise ratio.

  4. Regularization iteration imaging algorithm for electrical capacitance tomography

    Science.gov (United States)

    Tong, Guowei; Liu, Shi; Chen, Hongyan; Wang, Xueyao

    2018-03-01

    The image reconstruction method plays a crucial role in real-world applications of the electrical capacitance tomography technique. In this study, a new cost function that simultaneously considers the sparsity and low-rank properties of the imaging targets is proposed to improve the quality of the reconstructed images, with the image reconstruction task converted into an optimization problem. Within the framework of the split Bregman algorithm, an iterative scheme that splits a complicated optimization problem into several simpler sub-tasks is developed to solve the proposed cost function efficiently, and the fast iterative shrinkage-thresholding algorithm is introduced to accelerate convergence. Numerical experiment results verify the effectiveness of the proposed algorithm in improving reconstruction precision and robustness.

  5. A faster ordered-subset convex algorithm for iterative reconstruction in a rotation-free micro-CT system

    International Nuclear Information System (INIS)

    Quan, E; Lalush, D S

    2009-01-01

    We present a faster iterative reconstruction algorithm based on the ordered-subset convex (OSC) algorithm for transmission CT. The OSC algorithm was modified to calculate the normalization term before the iterative process in order to save computational cost. The modified version requires only one backprojection per iteration, as compared to the two required for the original OSC. We applied the modified OSC (MOSC) algorithm to a rotation-free micro-CT system that we proposed previously, observed its performance, and compared it with the OSC algorithm for 3D cone-beam reconstruction. Measurements on the reconstructed images as well as the point spread functions show that MOSC is quite similar to OSC; in the noise-resolution trade-off, MOSC is comparable with OSC in a regular-noise situation and slightly worse than OSC in an extremely high-noise situation. The timing record shows that MOSC saves 25-30% CPU time, depending on the number of iterations used. We conclude that the MOSC algorithm is more efficient than OSC and provides comparable images.

  6. Iterative channel decoding of FEC-based multiple-description codes.

    Science.gov (United States)

    Chang, Seok-Ho; Cosman, Pamela C; Milstein, Laurence B

    2012-03-01

    Multiple description coding has been receiving attention as a robust transmission framework for multimedia services. This paper studies the iterative decoding of FEC-based multiple description codes. The proposed decoding algorithms take advantage of the error detection capability of Reed-Solomon (RS) erasure codes. The information of correctly decoded RS codewords is exploited to enhance the error correction capability of the Viterbi algorithm at the next iteration of decoding. In the proposed algorithm, an intradescription interleaver is synergistically combined with the iterative decoder. The interleaver does not affect the performance of noniterative decoding but greatly enhances the performance when the system is iteratively decoded. We also address the optimal allocation of RS parity symbols for unequal error protection. For the optimal allocation in iterative decoding, we derive mathematical equations from which the probability distributions of description erasures can be generated in a simple way. The performance of the algorithm is evaluated over an orthogonal frequency-division multiplexing system. The results show that the performance of the multiple description codes is significantly enhanced.

  7. Quantitative analysis of emphysema and airway measurements according to iterative reconstruction algorithms: comparison of filtered back projection, adaptive statistical iterative reconstruction and model-based iterative reconstruction

    International Nuclear Information System (INIS)

    Choo, Ji Yung; Goo, Jin Mo; Park, Chang Min; Park, Sang Joon; Lee, Chang Hyun; Shim, Mi-Suk

    2014-01-01

    To evaluate filtered back projection (FBP) and two iterative reconstruction (IR) algorithms and their effects on the quantitative analysis of lung parenchyma and airway measurements on computed tomography (CT) images. Low-dose chest CT scans obtained in 281 adult patients were reconstructed using three algorithms: FBP, adaptive statistical IR (ASIR) and model-based IR (MBIR). Measurements of each dataset were compared: total lung volume, emphysema index (EI), airway measurements of the lumen and wall area, as well as average wall thickness. The accuracy of the airway measurements of each algorithm was also evaluated using an airway phantom. EI using a threshold of -950 HU was significantly different among the three algorithms, in decreasing order of FBP (2.30 %), ASIR (1.49 %) and MBIR (1.20 %) (P < 0.01). Wall thickness was also significantly different among the three algorithms, with FBP (2.09 mm) demonstrating thicker walls than ASIR (2.00 mm) and MBIR (1.88 mm) (P < 0.01). Airway phantom analysis revealed that MBIR showed the most accurate values for airway measurements. The three algorithms presented different EIs and wall thicknesses, decreasing in the order of FBP, ASIR and MBIR. Thus, care should be taken in selecting the appropriate IR algorithm for quantitative analysis of the lung. (orig.)

  8. Quantitative analysis of emphysema and airway measurements according to iterative reconstruction algorithms: comparison of filtered back projection, adaptive statistical iterative reconstruction and model-based iterative reconstruction

    Energy Technology Data Exchange (ETDEWEB)

    Choo, Ji Yung [Seoul National University Medical Research Center, Department of Radiology, Seoul National University College of Medicine, and Institute of Radiation Medicine, Seoul (Korea, Republic of); Korea University Ansan Hospital, Ansan-si, Department of Radiology, Gyeonggi-do (Korea, Republic of); Goo, Jin Mo; Park, Chang Min; Park, Sang Joon [Seoul National University Medical Research Center, Department of Radiology, Seoul National University College of Medicine, and Institute of Radiation Medicine, Seoul (Korea, Republic of); Seoul National University, Cancer Research Institute, Seoul (Korea, Republic of); Lee, Chang Hyun; Shim, Mi-Suk [Seoul National University Medical Research Center, Department of Radiology, Seoul National University College of Medicine, and Institute of Radiation Medicine, Seoul (Korea, Republic of)

    2014-04-15

    To evaluate filtered back projection (FBP) and two iterative reconstruction (IR) algorithms and their effects on the quantitative analysis of lung parenchyma and airway measurements on computed tomography (CT) images. Low-dose chest CT scans obtained in 281 adult patients were reconstructed using three algorithms: FBP, adaptive statistical IR (ASIR) and model-based IR (MBIR). Measurements of each dataset were compared: total lung volume, emphysema index (EI), airway measurements of the lumen and wall area, as well as average wall thickness. The accuracy of the airway measurements of each algorithm was also evaluated using an airway phantom. EI using a threshold of -950 HU was significantly different among the three algorithms, in decreasing order of FBP (2.30 %), ASIR (1.49 %) and MBIR (1.20 %) (P < 0.01). Wall thickness was also significantly different among the three algorithms, with FBP (2.09 mm) demonstrating thicker walls than ASIR (2.00 mm) and MBIR (1.88 mm) (P < 0.01). Airway phantom analysis revealed that MBIR showed the most accurate values for airway measurements. The three algorithms presented different EIs and wall thicknesses, decreasing in the order of FBP, ASIR and MBIR. Thus, care should be taken in selecting the appropriate IR algorithm for quantitative analysis of the lung. (orig.)

  9. Iterative algorithm for joint zero diagonalization with application in blind source separation.

    Science.gov (United States)

    Zhang, Wei-Tao; Lou, Shun-Tian

    2011-07-01

    A new iterative algorithm for the nonunitary joint zero diagonalization of a set of matrices is proposed for blind source separation applications. On one hand, since the zero diagonalizer of the proposed algorithm is constructed iteratively by successive multiplications of an invertible matrix, the singular solutions that occur in the existing nonunitary iterative algorithms are naturally avoided. On the other hand, compared to the algebraic method for joint zero diagonalization, the proposed algorithm requires fewer matrices to be zero diagonalized to yield even better performance. The extension of the algorithm to the complex and nonsquare mixing cases is also addressed. Numerical simulations on both synthetic data and blind source separation using time-frequency distributions illustrate the performance of the algorithm and provide a comparison to the leading joint zero diagonalization schemes.

  10. Improved Iterative Hard- and Soft-Reliability Based Majority-Logic Decoding Algorithms for Non-Binary Low-Density Parity-Check Codes

    Science.gov (United States)

    Xiong, Chenrong; Yan, Zhiyuan

    2014-10-01

    Non-binary low-density parity-check (LDPC) codes have some advantages over their binary counterparts, but unfortunately their decoding complexity is a significant challenge. The iterative hard- and soft-reliability based majority-logic decoding algorithms are attractive for non-binary LDPC codes, since they involve only finite field additions and multiplications as well as integer operations and hence have significantly lower complexity than other algorithms. In this paper, we propose two improvements to the majority-logic decoding algorithms. Instead of the accumulation of reliability information in the existing majority-logic decoding algorithms, our first improvement is a new reliability information update. The new update not only results in better error performance and fewer iterations on average, but also further reduces computational complexity. Since existing majority-logic decoding algorithms tend to have a high error floor for codes whose parity check matrices have low column weights, our second improvement is a re-selection scheme, which leads to much lower error floors, at the expense of more finite field operations and integer operations, by identifying periodic points, re-selecting intermediate hard decisions, and changing reliability information.

  11. Comparison between iterative wavefront control algorithm and direct gradient wavefront control algorithm for adaptive optics system

    Science.gov (United States)

    Cheng, Sheng-Yi; Liu, Wen-Jin; Chen, Shan-Qiu; Dong, Li-Zhi; Yang, Ping; Xu, Bing

    2015-08-01

    Among the wavefront control algorithms used in adaptive optics (AO) systems, the direct gradient wavefront control algorithm is the most widespread and common method. This control algorithm obtains the actuator voltages directly from wavefront slopes through a pre-measured relational matrix between the deformable mirror actuators and the Hartmann wavefront sensor, with excellent real-time characteristics and stability. However, as the numbers of sub-apertures in the wavefront sensor and of deformable mirror actuators increase, the matrix operation in the direct gradient algorithm takes too much time, which becomes a major factor limiting the control performance of AO systems. In this paper we apply an iterative wavefront control algorithm to high-resolution AO systems, in which the voltage of each actuator is obtained through iteration, giving a great advantage in computation and storage. For an AO system with thousands of actuators, the computational complexity is about O(n^2)-O(n^3) for the direct gradient wavefront control algorithm, but about O(n)-O(n^(3/2)) for the iterative wavefront control algorithm, where n is the number of actuators in the AO system. The larger the numbers of sub-apertures and deformable mirror actuators, the more significant the advantage the iterative wavefront control algorithm exhibits. Project supported by the National Key Scientific and Research Equipment Development Project of China (Grant No. ZDYZ2013-2), the National Natural Science Foundation of China (Grant No. 11173008), and the Sichuan Provincial Outstanding Youth Academic Technology Leaders Program, China (Grant No. 2012JQ0012).
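
    A hedged sketch of the iterative alternative: instead of applying a dense precomputed reconstructor (an O(n^2) matrix-vector product per frame), the actuator voltages can be obtained with a few conjugate-gradient iterations on the normal equations, warm-started from the previous frame. The influence matrix D and the warm-start usage are assumptions for illustration, not the paper's specific solver:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

def solve_voltages(D, slopes, v0=None, n_iter=20):
    """Solve D v ~= slopes in the least-squares sense via CG on the normal
    equations D^T D v = D^T s, applied matrix-free so only (sparse) products
    with D are needed; warm-starting from the previous frame's v0 is what
    makes a handful of iterations per frame sufficient."""
    n = D.shape[1]
    normal = LinearOperator((n, n), matvec=lambda v: D.T @ (D @ v))
    rhs = D.T @ slopes
    v, _info = cg(normal, rhs, x0=v0, maxiter=n_iter)
    return v
```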

  12. An efficient iterative grand canonical Monte Carlo algorithm to determine individual ionic chemical potentials in electrolytes.

    Science.gov (United States)

    Malasics, Attila; Boda, Dezso

    2010-06-28

    Two iterative procedures have been proposed recently to calculate the chemical potentials corresponding to prescribed concentrations from grand canonical Monte Carlo (GCMC) simulations. Both are based on repeated GCMC simulations with updated excess chemical potentials until the desired concentrations are established. In this paper, we propose combining our robust and fast converging iteration algorithm [Malasics, Gillespie, and Boda, J. Chem. Phys. 128, 124102 (2008)] with the suggestion of Lamperski [Mol. Simul. 33, 1193 (2007)] to average the chemical potentials over the iterations (instead of just using the chemical potentials obtained in the last iteration). We apply the unified method to various electrolyte solutions and show that our algorithm is more efficient if we use the averaging procedure. We discuss the convergence problems arising from violation of charge neutrality when inserting/deleting individual ions instead of neutral groups of ions (salts). We suggest a correction term to the iteration procedure that makes the algorithm efficient for determining the chemical potentials of individual ions as well.
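
    A minimal sketch of the combined iteration described above, with a stub run_gcmc function standing in for a full grand canonical Monte Carlo run; the fixed-point update and the running average over iterates follow the scheme the abstract outlines, and all names are illustrative:

```python
import numpy as np

def iterate_chemical_potentials(run_gcmc, mu0, rho_target, kT=1.0, n_iter=30):
    """Repeated GCMC runs with updated chemical potentials; the next guess is
    the running average of all previous iterates (Lamperski-style averaging)
    rather than the last iterate alone."""
    mu = np.asarray(mu0, dtype=float)
    rho_target = np.asarray(rho_target, dtype=float)
    history = []
    for _ in range(n_iter):
        rho = run_gcmc(mu)                       # expensive GCMC run (stub)
        mu = mu + kT * np.log(rho_target / rho)  # fixed-point update
        history.append(mu.copy())
        mu = np.mean(history, axis=0)            # average over all iterates
    return mu
```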

  13. Performance of direct and iterative algorithms on an optical systolic processor

    Science.gov (United States)

    Ghosh, A. K.; Casasent, D.; Neuman, C. P.

    1985-11-01

    The frequency-multiplexed optical linear algebra processor (OLAP) is treated in detail with attention to its performance in the solution of systems of linear algebraic equations (LAEs). General guidelines suitable for most OLAPs, including digital-optical processors, are advanced concerning system and component error source models, guidelines for appropriate use of direct and iterative algorithms, the dominant error sources, and the effect of multiple simultaneous error sources. Specific results are advanced on the quantitative performance of both direct and iterative algorithms in the solution of systems of LAEs and in the solution of nonlinear matrix equations. Acoustic attenuation is found to dominate iterative algorithms and detector noise to dominate direct algorithms. The effect of multiple spatial errors is found to be additive. A theoretical expression for the amount of acoustic attenuation allowed is advanced and verified. Simulations and experimental data are included.

  14. On the Convergence of Iterative Receiver Algorithms Utilizing Hard Decisions

    Directory of Open Access Journals (Sweden)

    Jürgen F. Rößler

    2009-01-01

    The convergence of receivers performing iterative hard decision interference cancellation (IHDIC) is analyzed in a general framework for ASK, PSK, and QAM constellations. We first give an overview of IHDIC algorithms known from the literature applied to linear modulation and DS-CDMA-based transmission systems and show the relation to Hopfield neural network theory. It is proven analytically that IHDIC with a serial update scheme always converges to a stable state in the estimated values in the course of the iterations and that IHDIC with a parallel update scheme converges to cycles of length 2. Additionally, we visualize the convergence behavior with the aid of convergence charts. Doing so, we give insight into possible errors occurring in IHDIC, which turn out to be caused by locked error situations. The derived results can directly be applied to those iterative soft decision interference cancellation (ISDIC) receivers whose soft decision functions approach hard decision functions in the course of the iterations.

  15. Parallelizing flow-accumulation calculations on graphics processing units—From iterative DEM preprocessing algorithm to recursive multiple-flow-direction algorithm

    Science.gov (United States)

    Qin, Cheng-Zhi; Zhan, Lijun

    2012-06-01

    As one of the important tasks in digital terrain analysis, the calculation of flow accumulations from gridded digital elevation models (DEMs) usually involves two steps in a real application: (1) using an iterative DEM preprocessing algorithm to remove the depressions and flat areas commonly contained in real DEMs, and (2) using a recursive flow-direction algorithm to calculate the flow accumulation for every cell in the DEM. Because both algorithms are computationally intensive, quick calculation of the flow accumulations from a DEM (especially for a large area) presents a practical challenge to personal computer (PC) users. In recent years, rapid increases in hardware capacity of the graphics processing units (GPUs) provided in modern PCs have made it possible to meet this challenge in a PC environment. Parallel computing on GPUs using a compute-unified-device-architecture (CUDA) programming model has been explored to speed up the execution of the single-flow-direction algorithm (SFD). However, the parallel implementation on a GPU of the multiple-flow-direction (MFD) algorithm, which generally performs better than the SFD algorithm, has not been reported. Moreover, GPU-based parallelization of the DEM preprocessing step in the flow-accumulation calculations has not been addressed. This paper proposes a parallel approach to calculate flow accumulations (including both iterative DEM preprocessing and a recursive MFD algorithm) on a CUDA-compatible GPU. For the parallelization of an MFD algorithm (MFD-md), two different parallelization strategies using a GPU are explored. The first parallelization strategy, which has been used in the existing parallel SFD algorithm on GPU, suffers from redundant computation. Therefore, we designed a parallelization strategy based on graph theory. The application results show that the proposed parallel approach to calculate flow accumulations on a GPU performs much faster than either sequential algorithms or other parallel GPU-based algorithms.

  16. Simulating prescribed particle densities in the grand canonical ensemble using iterative algorithms.

    Science.gov (United States)

    Malasics, Attila; Gillespie, Dirk; Boda, Dezso

    2008-03-28

    We present two efficient iterative Monte Carlo algorithms in the grand canonical ensemble with which the chemical potentials corresponding to prescribed (targeted) partial densities can be determined. The first algorithm works by always using the targeted densities in the kT log(rho(i)) (ideal gas) terms and updating the excess chemical potentials from the previous iteration. The second algorithm extrapolates the chemical potentials in the next iteration from the results of the previous iteration using a first order series expansion of the densities. The coefficients of the series, the derivatives of the densities with respect to the chemical potentials, are obtained from the simulations by fluctuation formulas. The convergence of this procedure is shown for the examples of a homogeneous Lennard-Jones mixture and a NaCl-CaCl(2) electrolyte mixture in the primitive model. The methods are quite robust under the conditions investigated. The first algorithm is less sensitive to initial conditions.
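
    The second algorithm's extrapolation step can be sketched as follows. The fluctuation formula relating density derivatives to particle-number covariances is standard for the grand canonical ensemble, while the function and variable names here are illustrative:

```python
import numpy as np

def extrapolate_mu(mu, rho, rho_target, cov_N, volume, kT=1.0):
    """First-order extrapolation of the chemical potentials using density
    derivatives obtained from particle-number fluctuations. `cov_N` is the
    covariance matrix <N_i N_j> - <N_i><N_j> measured in the previous
    GCMC run."""
    # Fluctuation formula: d(rho_i)/d(mu_j) = cov(N_i, N_j) / (V * kT).
    J = np.asarray(cov_N, dtype=float) / (volume * kT)
    # Solve J * delta_mu = rho_target - rho for the chemical potential step.
    delta_mu = np.linalg.solve(J, np.asarray(rho_target) - np.asarray(rho))
    return np.asarray(mu, dtype=float) + delta_mu
```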

  17. Motion tolerant iterative reconstruction algorithm for cone-beam helical CT imaging

    Energy Technology Data Exchange (ETDEWEB)

    Takahashi, Hisashi; Goto, Taiga; Hirokawa, Koichi; Miyazaki, Osamu [Hitachi Medical Corporation, Chiba-ken (Japan). CT System Div.

    2011-07-01

    We have developed a new advanced iterative reconstruction algorithm for cone-beam helical CT. The features of this algorithm are: (a) it uses the separable paraboloidal surrogate (SPS) technique as a foundation for reconstruction to reduce noise and cone-beam artifacts, (b) it uses a view weight in the back-projection process to reduce motion artifacts. To confirm the improvement of our proposed algorithm over existing algorithms, such as the Feldkamp-Davis-Kress (FDK) or SPS algorithm, we compared the motion artifact reduction, image noise reduction (standard deviation of CT number), and cone-beam artifact reduction on simulated and clinical data sets. Our results demonstrate that the proposed algorithm dramatically reduces motion artifacts compared with the SPS algorithm, and decreases image noise compared with the FDK algorithm. In addition, the proposed algorithm potentially improves the time resolution of iterative reconstruction. (orig.)

  18. Mixed error compensation in a heterodyne interferometer using the iterated dual-EKF algorithm

    International Nuclear Information System (INIS)

    Lee, Woo Ram; Kim, Chang Rai; You, Kwan Ho

    2010-01-01

    The heterodyne laser interferometer has been widely used in the field of precise measurements. The limited measurement accuracy of a heterodyne laser interferometer arises from the periodic nonlinearity caused by non-ideal laser sources and imperfect optical components. In this paper, the iterated dual-EKF algorithm is used to compensate for the error caused by nonlinearity and external noise. With the iterated dual-EKF algorithm, the weight filter estimates the parameter uncertainties in the state equation caused by nonlinearity errors and has a high convergence rate of weight values due to the iteration process. To verify the performance of the proposed compensation algorithm, we present experimental results obtained by using the iterated dual-EKF algorithm and compare them with the results obtained by using a capacitance displacement sensor.

  19. Mixed error compensation in a heterodyne interferometer using the iterated dual-EKF algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Woo Ram; Kim, Chang Rai; You, Kwan Ho [Sungkyunkwan University, Suwon (Korea, Republic of)

    2010-10-15

    The heterodyne laser interferometer has been widely used in the field of precise measurements. The limited measurement accuracy of a heterodyne laser interferometer arises from the periodic nonlinearity caused by non-ideal laser sources and imperfect optical components. In this paper, the iterated dual-EKF algorithm is used to compensate for the error caused by nonlinearity and external noise. With the iterated dual-EKF algorithm, the weight filter estimates the parameter uncertainties in the state equation caused by nonlinearity errors and has a high convergence rate of weight values due to the iteration process. To verify the performance of the proposed compensation algorithm, we present experimental results obtained by using the iterated dual-EKF algorithm and compare them with the results obtained by using a capacitance displacement sensor.

  20. An optimal iterative algorithm to solve Cauchy problem for Laplace equation

    KAUST Repository

    Majeed, Muhammad Usman

    2015-05-25

    An optimal mean square error minimizer algorithm is developed to solve the severely ill-posed Cauchy problem for the Laplace equation on an annulus domain. The mathematical problem is presented as a first order state space-like system, and an optimal iterative algorithm is developed that minimizes the mean square error in the states. Finite difference discretization schemes are used to discretize the first order system. After numerical discretization, the algorithm equations are derived taking inspiration from the Kalman filter, but using one of the space variables as a time-like variable. The given Dirichlet and Neumann boundary conditions are used on the Cauchy data boundary, and fictitious points are introduced on the unknown solution boundary. The algorithm is run for a number of iterations, using the solution of the previous iteration as the guess for the next one. The method is highly robust to noise in the Cauchy data, and numerically efficient results are illustrated.

  1. A very fast implementation of 2D iterative reconstruction algorithms

    DEFF Research Database (Denmark)

    Toft, Peter Aundal; Jensen, Peter James

    1996-01-01

    It is shown that iterative reconstruction algorithms can be implemented and run almost as fast as direct reconstruction algorithms. The method has been implemented in a software package that is available for free, providing reconstruction algorithms using ART, EM, and the Least Squares Conjugate Gradient Method...

  2. Computed Tomography Image Quality Evaluation of a New Iterative Reconstruction Algorithm in the Abdomen (Adaptive Statistical Iterative Reconstruction-V) a Comparison With Model-Based Iterative Reconstruction, Adaptive Statistical Iterative Reconstruction, and Filtered Back Projection Reconstructions.

    Science.gov (United States)

    Goodenberger, Martin H; Wagner-Bartak, Nicolaus A; Gupta, Shiva; Liu, Xinming; Yap, Ramon Q; Sun, Jia; Tamm, Eric P; Jensen, Corey T

    The purpose of this study was to compare abdominopelvic computed tomography images reconstructed with adaptive statistical iterative reconstruction-V (ASIR-V) with model-based iterative reconstruction (Veo 3.0), ASIR, and filtered back projection (FBP). Abdominopelvic computed tomography scans for 36 patients (26 males and 10 females) were reconstructed using FBP, ASIR (80%), Veo 3.0, and ASIR-V (30%, 60%, 90%). Mean ± SD patient age was 32 ± 10 years with mean ± SD body mass index of 26.9 ± 4.4 kg/m². Images were reviewed by 2 independent readers in a blinded, randomized fashion. Hounsfield unit, noise, and contrast-to-noise ratio (CNR) values were calculated for each reconstruction algorithm for further comparison. Phantom evaluation of low-contrast detectability (LCD) and high-contrast resolution was performed. Adaptive statistical iterative reconstruction-V 30%, ASIR-V 60%, and ASIR 80% were generally superior qualitatively compared with ASIR-V 90%, Veo 3.0, and FBP (P < [...]). [...] ASIR-V 60% with respective CNR values of 5.54 ± 2.39, 8.78 ± 3.15, and 3.49 ± 1.77 (P < [...]). [...] and ASIR 80% had the best and worst spatial resolution, respectively. Adaptive statistical iterative reconstruction-V 30% and ASIR-V 60% provided the best combination of qualitative and quantitative performance. Adaptive statistical iterative reconstruction 80% was equivalent qualitatively, but demonstrated inferior spatial resolution and LCD.
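
    For reference, the contrast-to-noise ratio used in studies like this one is typically computed from ROI statistics as below; the exact ROI definitions of this particular study are not reproduced here:

```python
import numpy as np

def cnr(roi, background):
    """CNR = (mean(ROI) - mean(background)) / std(background); the noise term
    is the standard deviation of HU values in the background ROI."""
    roi = np.asarray(roi, dtype=float)
    background = np.asarray(background, dtype=float)
    return (roi.mean() - background.mean()) / background.std()
```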

  3. Sparse Adaptive Iteratively-Weighted Thresholding Algorithm (SAITA) for Lp-Regularization Using the Multiple Sub-Dictionary Representation.

    Science.gov (United States)

    Li, Yunyi; Zhang, Jie; Fan, Shangang; Yang, Jie; Xiong, Jian; Cheng, Xiefeng; Sari, Hikmet; Adachi, Fumiyuki; Gui, Guan

    2017-12-15

    Both L1/2 and L2/3 are two typical non-convex regularizations of Lp (0 < p < 1), which can be employed to obtain a sparser solution than the L1 regularization. Recently, the multiple-state sparse transformation strategy has been developed to exploit the sparsity in L1 regularization for sparse signal recovery, which combines the iterative reweighted algorithms. To further exploit the sparse structure of signal and image, this paper adopts multiple dictionary sparse transform strategies for the two typical cases p ∈ {1/2, 2/3} based on an iterative Lp thresholding algorithm and then proposes a sparse adaptive iterative-weighted Lp thresholding algorithm (SAITA). Moreover, a simple yet effective regularization parameter is proposed to weight each sub-dictionary-based Lp regularizer. Simulation results have shown that the proposed SAITA not only performs better than the corresponding L1 algorithms but can also obtain a better recovery performance and achieve faster convergence than the conventional single-dictionary sparse transform-based Lp case. Moreover, we conduct some applications on sparse image recovery and obtain good results by comparison with related work.

  4. Generalized phase retrieval algorithm based on information measures

    OpenAIRE

    Shioya, Hiroyuki; Gohara, Kazutoshi

    2006-01-01

    An iterative phase retrieval algorithm based on the maximum entropy method (MEM) is presented. Introducing a new generalized information measure, we derive a novel class of algorithms which includes the conventionally used error reduction algorithm and a MEM-type iterative algorithm which is presented for the first time. These different phase retrieval methods are unified on the basis of the framework of information measures used in information theory.

  5. Sparse Adaptive Iteratively-Weighted Thresholding Algorithm (SAITA for L p -Regularization Using the Multiple Sub-Dictionary Representation

    Directory of Open Access Journals (Sweden)

    Yunyi Li

    2017-12-01

    Both L1/2 and L2/3 are two typical non-convex regularizations of Lp (0 < p < 1), which can be employed to obtain a sparser solution than the L1 regularization. Recently, the multiple-state sparse transformation strategy has been developed to exploit the sparsity in L1 regularization for sparse signal recovery, which combines the iterative reweighted algorithms. To further exploit the sparse structure of signal and image, this paper adopts multiple dictionary sparse transform strategies for the two typical cases p ∈ {1/2, 2/3} based on an iterative Lp thresholding algorithm and then proposes a sparse adaptive iterative-weighted Lp thresholding algorithm (SAITA). Moreover, a simple yet effective regularization parameter is proposed to weight each sub-dictionary-based Lp regularizer. Simulation results have shown that the proposed SAITA not only performs better than the corresponding L1 algorithms but can also obtain a better recovery performance and achieve faster convergence than the conventional single-dictionary sparse transform-based Lp case. Moreover, we conduct some applications on sparse image recovery and obtain good results by comparison with related work.
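
    A hedged sketch of the iteratively-reweighted Lp thresholding idea, not the exact SAITA updates (which additionally involve multiple sub-dictionaries): each iteration takes a gradient step on the data term and soft-thresholds with weights that majorize the non-convex Lp penalty:

```python
import numpy as np

def irl1_lp(A, b, p=0.5, lam=1e-3, n_iter=100, eps=1e-8):
    """Iteratively-reweighted thresholding for min ||Ax-b||^2 + lam*||x||_p^p:
    the weights w_i ~ p / (|x_i| + eps)^(1-p) linearize the Lp penalty around
    the current iterate, so each step reduces to weighted soft-thresholding."""
    x = np.zeros(A.shape[1])
    step = 1.0 / np.linalg.norm(A, 2) ** 2     # 1/L for the data-term gradient
    for _ in range(n_iter):
        x = x - step * (A.T @ (A @ x - b))     # gradient step on data fidelity
        w = p / (np.abs(x) + eps) ** (1.0 - p) # reweighting from current x
        x = np.sign(x) * np.maximum(np.abs(x) - step * lam * w, 0.0)
    return x
```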

  6. Strategy iteration is strongly polynomial for 2-player turn-based stochastic games with a constant discount factor

    DEFF Research Database (Denmark)

    Hansen, Thomas Dueholm; Miltersen, Peter Bro; Zwick, Uri

    2011-01-01

    Ye showed recently that the simplex method with Dantzig's pivoting rule, as well as Howard's policy iteration algorithm, solve discounted Markov decision processes (MDPs), with a constant discount factor, in strongly polynomial time. More precisely, Ye showed that both algorithms terminate after at most O((mn/(1−γ)) log(n/(1−γ))) iterations. We improve Ye's analysis in two respects. First, we show that Howard's policy iteration algorithm actually terminates after at most O((m/(1−γ)) log(n/(1−γ))) iterations. Second, and more importantly, we show that the same bound applies to the number of iterations performed by the strategy iteration (or strategy improvement) algorithm, a generalization of Howard's policy iteration algorithm used for solving 2-player turn-based stochastic games with discounted zero-sum rewards.

  7. Strategy Iteration Is Strongly Polynomial for 2-Player Turn-Based Stochastic Games with a Constant Discount Factor

    DEFF Research Database (Denmark)

    Hansen, Thomas Dueholm; Miltersen, Peter Bro; Zwick, Uri

    2013-01-01

    Ye [2011] showed recently that the simplex method with Dantzig’s pivoting rule, as well as Howard’s policy iteration algorithm, solve discounted Markov decision processes (MDPs), with a constant discount factor, in strongly polynomial time. More precisely, Ye showed that both algorithms terminate after at most O((mn/(1−γ)) log(n/(1−γ))) iterations. We improve Ye’s analysis in two respects. First, we show that Howard’s policy iteration algorithm actually terminates after at most O((m/(1−γ)) log(n/(1−γ))) iterations. Second, and more importantly, we show that the same bound applies to the number of iterations performed by the strategy iteration (or strategy improvement) algorithm, a generalization of Howard’s policy iteration algorithm used for solving 2-player turn-based stochastic games with discounted zero-sum rewards. This yields the first strongly polynomial bound for 2-player turn-based stochastic games; it is strongly polynomial for a fixed discount factor, and exponential otherwise.

  8. ISTA-Net: Iterative Shrinkage-Thresholding Algorithm Inspired Deep Network for Image Compressive Sensing

    KAUST Repository

    Zhang, Jian; Ghanem, Bernard

    2017-01-01

    In this paper, we combine the merits of both types of CS methods: the structure insights of optimization-based methods and the performance/speed of network-based ones. We propose a novel structured deep network, dubbed ISTA-Net, which is inspired by the Iterative Shrinkage-Thresholding Algorithm (ISTA) for optimizing a general $l_1$ norm CS reconstruction model. ISTA-Net essentially implements a truncated form of ISTA, where all ISTA-Net parameters are learned end-to-end to minimize a reconstruction error in training.

  9. ISTA-Net: Iterative Shrinkage-Thresholding Algorithm Inspired Deep Network for Image Compressive Sensing

    KAUST Repository

    Zhang, Jian

    2017-06-24

    Traditional methods for image compressive sensing (CS) reconstruction solve a well-defined inverse problem that is based on a predefined CS model, which defines the underlying structure of the problem and is generally solved by employing convergent iterative solvers. These optimization-based CS methods face the challenge of choosing optimal transforms and tuning parameters in their solvers, while also suffering from high computational complexity in most cases. Recently, some deep network based CS algorithms have been proposed to improve CS reconstruction performance, while dramatically reducing time complexity as compared to optimization-based methods. Despite their impressive results, the proposed networks (either with fully-connected or repetitive convolutional layers) lack any structural diversity and they are trained as a black box, void of any insights from the CS domain. In this paper, we combine the merits of both types of CS methods: the structure insights of optimization-based method and the performance/speed of network-based ones. We propose a novel structured deep network, dubbed ISTA-Net, which is inspired by the Iterative Shrinkage-Thresholding Algorithm (ISTA) for optimizing a general $l_1$ norm CS reconstruction model. ISTA-Net essentially implements a truncated form of ISTA, where all ISTA-Net parameters are learned end-to-end to minimize a reconstruction error in training. Borrowing more insights from the optimization realm, we propose an accelerated version of ISTA-Net, dubbed FISTA-Net, which is inspired by the fast iterative shrinkage-thresholding algorithm (FISTA). Interestingly, this acceleration naturally leads to skip connections in the underlying network design. Extensive CS experiments demonstrate that the proposed ISTA-Net and FISTA-Net outperform existing optimization-based and network-based CS methods by large margins, while maintaining a fast runtime.

  10. Iterative algorithms for large sparse linear systems on parallel computers

    Science.gov (United States)

    Adams, L. M.

    1982-01-01

    Algorithms are developed for assembling in parallel the sparse system of linear equations that results from finite difference or finite element discretizations of elliptic partial differential equations, such as those that arise in structural engineering. Parallel linear stationary iterative algorithms and parallel preconditioned conjugate gradient algorithms are developed for solving these systems. In addition, a model for comparing parallel algorithms on array architectures is developed, and results of this model for the algorithms are given.
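
    Jacobi iteration is the canonical example of such a parallel stationary method: each sweep updates all components independently from the previous iterate, so the whole update maps directly onto array hardware. A minimal sketch (convergence assumes, for example, diagonal dominance of A):

```python
import numpy as np

def jacobi(A, b, n_iter=200):
    """Jacobi iteration for A x = b. Every component update uses only values
    from the previous sweep, so all n updates are independent and can run
    in parallel."""
    D = np.diag(A)                 # diagonal part of A
    R = A - np.diagflat(D)         # off-diagonal remainder
    x = np.zeros_like(b, dtype=float)
    for _ in range(n_iter):
        x = (b - R @ x) / D        # one fully data-parallel sweep
    return x
```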

  11. An optimal iterative algorithm to solve Cauchy problem for Laplace equation

    KAUST Repository

    Majeed, Muhammad Usman; Laleg-Kirati, Taous-Meriem

    2015-01-01

    An optimal mean square error minimizer algorithm is developed to solve the severely ill-posed Cauchy problem for the Laplace equation on an annulus domain. The mathematical problem is presented as a first order state space-like system, and an optimal iterative algorithm is developed that minimizes the mean square error in the states. Finite difference discretization schemes are used to discretize the first order system. After numerical discretization, the algorithm equations are derived taking inspiration from the Kalman filter, but using one of the space variables as a time-like variable.

  12. A fast iterative recursive least squares algorithm for Wiener model identification of highly nonlinear systems.

    Science.gov (United States)

    Kazemi, Mahdi; Arefi, Mohammad Mehdi

    2017-03-01

    In this paper, an online identification algorithm is presented for nonlinear systems in the presence of output colored noise. The proposed method is based on the extended recursive least squares (ERLS) algorithm, where the identified system is in polynomial Wiener form. To this end, an unknown intermediate signal is estimated by using an inner iterative algorithm. The iterative recursive algorithm adaptively modifies the parameter vector of the presented Wiener model when the system parameters vary. In addition, to increase the robustness of the proposed method against variations, a robust RLS algorithm is applied to the model. Simulation results are provided to show the effectiveness of the proposed approach. The results confirm that the proposed method has a fast convergence rate and robust characteristics, which increase the efficiency of the proposed model and identification approach. For instance, the FIT criterion reaches 92% for a CSTR process in which about 400 data points are used. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
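
    The recursive least squares core that such a scheme builds on can be sketched as below. This is the standard exponentially-weighted RLS update, not the paper's full ERLS-with-inner-iteration loop, and all names are illustrative:

```python
import numpy as np

class RLS:
    """Exponentially-weighted recursive least squares for y ~= phi^T theta."""
    def __init__(self, n_params, lam=0.99, delta=1e3):
        self.theta = np.zeros(n_params)    # parameter estimates
        self.P = delta * np.eye(n_params)  # inverse correlation matrix
        self.lam = lam                     # forgetting factor

    def update(self, phi, y):
        Pphi = self.P @ phi
        k = Pphi / (self.lam + phi @ Pphi)        # gain vector
        self.theta += k * (y - phi @ self.theta)  # correct with prediction error
        self.P = (self.P - np.outer(k, Pphi)) / self.lam
        return self.theta
```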

  13. Iterating skeletons

    DEFF Research Database (Denmark)

    Dieterle, Mischa; Horstmeyer, Thomas; Berthold, Jost

    2012-01-01

    Skeleton-based programming is an area of increasing relevance with upcoming highly parallel hardware, since it substantially facilitates parallel programming and separates concerns. When parallel algorithms expressed by skeletons involve iterations – applying the same algorithm repeatedly – preparing a particular skeleton ad-hoc for repeated execution turns out to be considerably complicated, and raises general questions about introducing state into a stateless parallel computation. In addition, one would strongly prefer an approach which leaves the original skeleton intact, and only uses it as a building block inside a bigger structure. In this work, we present a general framework for skeleton iteration and discuss requirements and variations of iteration control and iteration body. Skeleton iteration is expressed by synchronising a parallel iteration body skeleton with a (likewise parallel) state...

  14. The Normalized-Rate Iterative Algorithm: A Practical Dynamic Spectrum Management Method for DSL

    Science.gov (United States)

    Statovci, Driton; Nordström, Tomas; Nilsson, Rickard

    2006-12-01

    We present a practical solution for dynamic spectrum management (DSM) in digital subscriber line systems: the normalized-rate iterative algorithm (NRIA). Supported by a novel optimization problem formulation, the NRIA is the only DSM algorithm that jointly addresses spectrum balancing for frequency division duplexing systems and power allocation for the users sharing a common cable bundle. With a focus on being implementable rather than obtaining the highest possible theoretical performance, the NRIA is designed to efficiently solve the DSM optimization problem with the operators' business models in mind. This is achieved with the help of two types of parameters: the desired network asymmetry and the desired user priorities. The NRIA is a centralized DSM algorithm based on the iterative water-filling algorithm (IWFA) for finding efficient power allocations, but extends the IWFA by finding the achievable bitrates and by optimizing the bandplan. It is compared with three other DSM proposals: the IWFA, the optimal spectrum balancing algorithm (OSBA), and the bidirectional IWFA (bi-IWFA). We show that the NRIA achieves better bitrate performance than the IWFA and the bi-IWFA. It can even achieve performance almost as good as the OSBA, but with dramatically lower requirements on complexity. Additionally, the NRIA can achieve bitrate combinations that cannot be supported by any other DSM algorithm.

  15. Accelerated fast iterative shrinkage thresholding algorithms for sparsity-regularized cone-beam CT image reconstruction

    International Nuclear Information System (INIS)

    Xu, Qiaofeng; Sawatzky, Alex; Anastasio, Mark A.; Yang, Deshan; Tan, Jun

    2016-01-01

    Purpose: The development of iterative image reconstruction algorithms for cone-beam computed tomography (CBCT) remains an active and important research area. Even with hardware acceleration, the overwhelming majority of the available 3D iterative algorithms that implement nonsmooth regularizers remain computationally burdensome and have not been translated for routine use in time-sensitive applications such as image-guided radiation therapy (IGRT). In this work, two variants of the fast iterative shrinkage thresholding algorithm (FISTA) are proposed and investigated for accelerated iterative image reconstruction in CBCT. Methods: Algorithm acceleration was achieved by replacing the original gradient-descent step in the FISTAs by a subproblem that is solved by use of the ordered subset simultaneous algebraic reconstruction technique (OS-SART). Due to the preconditioning matrix adopted in the OS-SART method, two new weighted proximal problems were introduced and corresponding fast gradient projection-type algorithms were developed for solving them. We also provided efficient numerical implementations of the proposed algorithms that exploit the massive data parallelism of multiple graphics processing units. Results: The improved rates of convergence of the proposed algorithms were quantified in computer-simulation studies and by use of clinical projection data corresponding to an IGRT study. The accelerated FISTAs were shown to possess dramatically improved convergence properties as compared to the standard FISTAs. For example, the number of iterations to achieve a specified reconstruction error could be reduced by an order of magnitude. Volumetric images reconstructed from clinical data were produced in under 4 min. Conclusions: The FISTA achieves a quadratic convergence rate and can therefore potentially reduce the number of iterations required to produce an image of a specified image quality as compared to first-order methods. We have proposed and investigated two accelerated FISTAs for iterative image reconstruction in CBCT.
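
    The momentum step that distinguishes FISTA from plain proximal-gradient methods looks as follows in generic form; here grad and prox are stand-ins for the CBCT data-fidelity gradient (or its OS-SART surrogate described above) and the regularizer's proximal map:

```python
import numpy as np

def fista(grad, prox, x0, L, n_iter=100):
    """Generic FISTA loop: a proximal gradient step followed by the momentum
    extrapolation that yields the accelerated O(1/k^2) objective decay."""
    x_prev = x0.copy()
    y = x0.copy()
    t = 1.0
    for _ in range(n_iter):
        x = prox(y - grad(y) / L, 1.0 / L)           # proximal gradient step
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x + ((t - 1.0) / t_next) * (x - x_prev)  # momentum extrapolation
        x_prev, t = x, t_next
    return x_prev
```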

  16. Accelerated fast iterative shrinkage thresholding algorithms for sparsity-regularized cone-beam CT image reconstruction

    Science.gov (United States)

    Xu, Qiaofeng; Yang, Deshan; Tan, Jun; Sawatzky, Alex; Anastasio, Mark A.

    2016-01-01

    Purpose: The development of iterative image reconstruction algorithms for cone-beam computed tomography (CBCT) remains an active and important research area. Even with hardware acceleration, the overwhelming majority of the available 3D iterative algorithms that implement nonsmooth regularizers remain computationally burdensome and have not been translated for routine use in time-sensitive applications such as image-guided radiation therapy (IGRT). In this work, two variants of the fast iterative shrinkage thresholding algorithm (FISTA) are proposed and investigated for accelerated iterative image reconstruction in CBCT. Methods: Algorithm acceleration was achieved by replacing the original gradient-descent step in the FISTAs by a subproblem that is solved by use of the ordered subset simultaneous algebraic reconstruction technique (OS-SART). Due to the preconditioning matrix adopted in the OS-SART method, two new weighted proximal problems were introduced and corresponding fast gradient projection-type algorithms were developed for solving them. We also provided efficient numerical implementations of the proposed algorithms that exploit the massive data parallelism of multiple graphics processing units. Results: The improved rates of convergence of the proposed algorithms were quantified in computer-simulation studies and by use of clinical projection data corresponding to an IGRT study. The accelerated FISTAs were shown to possess dramatically improved convergence properties as compared to the standard FISTAs. For example, the number of iterations to achieve a specified reconstruction error could be reduced by an order of magnitude. Volumetric images reconstructed from clinical data were produced in under 4 min. Conclusions: The FISTA achieves a quadratic convergence rate and can therefore potentially reduce the number of iterations required to produce an image of a specified image quality as compared to first-order methods. We have proposed and investigated two accelerated FISTAs for iterative image reconstruction in CBCT.

  17. The Normalized-Rate Iterative Algorithm: A Practical Dynamic Spectrum Management Method for DSL

    Directory of Open Access Journals (Sweden)

    Statovci Driton

    2006-01-01

    We present a practical solution for dynamic spectrum management (DSM) in digital subscriber line systems: the normalized-rate iterative algorithm (NRIA). Supported by a novel optimization problem formulation, the NRIA is the only DSM algorithm that jointly addresses spectrum balancing for frequency division duplexing systems and power allocation for the users sharing a common cable bundle. With a focus on being implementable rather than obtaining the highest possible theoretical performance, the NRIA is designed to efficiently solve the DSM optimization problem with the operators' business models in mind. This is achieved with the help of two types of parameters: the desired network asymmetry and the desired user priorities. The NRIA is a centralized DSM algorithm based on the iterative water-filling algorithm (IWFA) for finding efficient power allocations, but extends the IWFA by finding the achievable bitrates and by optimizing the bandplan. It is compared with three other DSM proposals: the IWFA, the optimal spectrum balancing algorithm (OSBA), and the bidirectional IWFA (bi-IWFA). We show that the NRIA achieves better bitrate performance than the IWFA and the bi-IWFA. It can even achieve performance almost as good as the OSBA, but with dramatically lower requirements on complexity. Additionally, the NRIA can achieve bitrate combinations that cannot be supported by any other DSM algorithm.

  18. Iterative algorithm for the volume integral method for magnetostatics problems

    International Nuclear Information System (INIS)

    Pasciak, J.E.

    1980-11-01

    Volume integral methods for solving nonlinear magnetostatics problems are considered in this paper. The integral method is discretized by a Galerkin technique. Estimates are given which show that the linearized problems are well conditioned and hence easily solved using iterative techniques. Comparisons of iterative algorithms with the elimination method of GFUN3D show that the iterative method gives an order of magnitude improvement in computational time as well as memory requirements for large problems. Computational experiments for a test problem as well as a double layer dipole magnet are given. Error estimates for the linearized problem are also derived.

  19. Strong and Weak Convergence Criteria of Composite Iterative Algorithms for Systems of Generalized Equilibria

    Directory of Open Access Journals (Sweden)

    Lu-Chuan Ceng

    2014-01-01

    We first introduce and analyze one iterative algorithm by using the composite shrinking projection method for finding a solution of the system of generalized equilibria with constraints of several problems: a generalized mixed equilibrium problem, finitely many variational inequalities, and the common fixed point problem of an asymptotically strict pseudocontractive mapping in the intermediate sense and infinitely many nonexpansive mappings in a real Hilbert space. We prove a strong convergence theorem for the iterative algorithm under suitable conditions. On the other hand, we also propose another iterative algorithm involving no shrinking projection method and derive its weak convergence under mild assumptions. Our results improve and extend the corresponding results in the earlier and recent literature.

  20. Fast iterative censoring CFAR algorithm for ship detection from SAR images

    Science.gov (United States)

    Gu, Dandan; Yue, Hui; Zhang, Yuan; Gao, Pengcheng

    2017-11-01

    Ship detection is one of the essential techniques for ship recognition from synthetic aperture radar (SAR) images. This paper presents a fast iterative detection procedure to eliminate the influence of target returns on the estimation of local sea clutter distributions for constant false alarm rate (CFAR) detectors. A fast block detector is first employed to extract potential target sub-images; and then, an iterative censoring CFAR algorithm is used to detect ship candidates from each target blocks adaptively and efficiently, where parallel detection is available, and statistical parameters of G0 distribution fitting local sea clutter well can be quickly estimated based on an integral image operator. Experimental results of TerraSAR-X images demonstrate the effectiveness of the proposed technique.
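
    The integral-image operator mentioned above enables O(1)-per-pixel window statistics, which is what makes repeated re-estimation of local clutter parameters cheap. A sketch for the local mean follows; the window size and edge handling are illustrative choices:

```python
import numpy as np

def local_mean(img, half=16):
    """Local clutter mean over a (2*half+1)^2 sliding window computed with an
    integral image: each output pixel needs only four table lookups."""
    h, w = img.shape
    S = np.zeros((h + 1, w + 1))
    S[1:, 1:] = np.cumsum(np.cumsum(img, axis=0), axis=1)  # integral image
    r, c = np.arange(h), np.arange(w)
    r0 = np.clip(r - half, 0, h); r1 = np.clip(r + half + 1, 0, h)
    c0 = np.clip(c - half, 0, w); c1 = np.clip(c + half + 1, 0, w)
    # Rectangle sums via four integral-image lookups per pixel.
    win = (S[np.ix_(r1, c1)] - S[np.ix_(r0, c1)]
           - S[np.ix_(r1, c0)] + S[np.ix_(r0, c0)])
    area = np.outer(r1 - r0, c1 - c0)  # true window area near the borders
    return win / area
```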

  1. Inertial measurement unit–based iterative pose compensation algorithm for low-cost modular manipulator

    Directory of Open Access Journals (Sweden)

    Yunhan Lin

    2016-01-01

    End-effector pose correction and compensation are necessary means of realizing accurate motion control of a manipulator. In this article, we first establish the kinematic model and error model of the modular manipulator (WUST-ARM), and then discuss the measurement methods and precision of the inertial measurement unit sensor. The inertial measurement unit sensor is mounted on the end-effector of the modular manipulator to obtain the real-time pose of the end-effector. Finally, a new inertial measurement unit-based iterative pose compensation algorithm is proposed. Applying this algorithm in pose compensation experiments on a modular manipulator composed of low-cost rotation joints shows that the inertial measurement unit obtains higher precision in the static state; after a brief delay when the end-effector moves to the target point, it feeds an accurate error compensation angle back to the control system. After compensation, the precision errors of the roll, pitch, and yaw angles reach 0.05°, 0.01°, and 0.27°, respectively. This proves that this low-cost method provides a new solution for improving the end-effector pose of low-cost modular manipulators.

  2. Improved image quality with simultaneously reduced radiation exposure: Knowledge-based iterative model reconstruction algorithms for coronary CT angiography in a clinical setting.

    Science.gov (United States)

    André, Florian; Fortner, Philipp; Vembar, Mani; Mueller, Dirk; Stiller, Wolfram; Buss, Sebastian J; Kauczor, Hans-Ulrich; Katus, Hugo A; Korosoglou, Grigorios

    The aim of this study was to assess the potential for radiation dose reduction using knowledge-based iterative model reconstruction (K-IMR) algorithms in combination with ultra-low dose (ULD) body mass index (BMI)-adapted protocols in coronary CT angiography (coronary CTA). Forty patients undergoing clinically indicated coronary CTA were randomly assigned to two groups with BMI-adapted protocols (I: [...]). Image quality was significantly better in the ULD group using K-IMR CR 1 compared to FBP, iD 2 and iD 5 in the LD group, resulting in fewer non-diagnostic coronary segments (2.4% vs. 11.6%, 9.2% and 6.1%; p < [...]) and in better image quality compared to LD protocols with FBP or hybrid iterative algorithms. Therefore, K-IMR allows for coronary CTA examinations with high diagnostic value and very low radiation exposure in clinical routine. Copyright © 2017 Society of Cardiovascular Computed Tomography. Published by Elsevier Inc. All rights reserved.

  3. Pediatric 320-row cardiac computed tomography using electrocardiogram-gated model-based full iterative reconstruction

    Energy Technology Data Exchange (ETDEWEB)

    Shirota, Go; Maeda, Eriko; Namiki, Yoko; Bari, Razibul; Abe, Osamu [The University of Tokyo, Department of Radiology, Graduate School of Medicine, Tokyo (Japan); Ino, Kenji [The University of Tokyo Hospital, Imaging Center, Tokyo (Japan); Torigoe, Rumiko [Toshiba Medical Systems, Tokyo (Japan)

    2017-10-15

    Full iterative reconstruction algorithm is available, but its diagnostic quality in pediatric cardiac CT is unknown. To compare the imaging quality of two algorithms, full and hybrid iterative reconstruction, in pediatric cardiac CT. We included 49 children with congenital cardiac anomalies who underwent cardiac CT. We compared quality of images reconstructed using the two algorithms (full and hybrid iterative reconstruction) based on a 3-point scale for the delineation of the following anatomical structures: atrial septum, ventricular septum, right atrium, right ventricle, left atrium, left ventricle, main pulmonary artery, ascending aorta, aortic arch including the patent ductus arteriosus, descending aorta, right coronary artery and left main trunk. We evaluated beam-hardening artifacts from contrast-enhancement material using a 3-point scale, and we evaluated the overall image quality using a 5-point scale. We also compared image noise, signal-to-noise ratio and contrast-to-noise ratio between the algorithms. The overall image quality was significantly higher with full iterative reconstruction than with hybrid iterative reconstruction (3.67±0.79 vs. 3.31±0.89, P=0.0072). The evaluation scores for most of the gross structures were higher with full iterative reconstruction than with hybrid iterative reconstruction. There was no significant difference between full and hybrid iterative reconstruction for the presence of beam-hardening artifacts. Image noise was significantly lower in full iterative reconstruction, while signal-to-noise ratio and contrast-to-noise ratio were significantly higher in full iterative reconstruction. The diagnostic quality was superior in images with cardiac CT reconstructed with electrocardiogram-gated full iterative reconstruction. (orig.)

  4. Pediatric 320-row cardiac computed tomography using electrocardiogram-gated model-based full iterative reconstruction

    International Nuclear Information System (INIS)

    Shirota, Go; Maeda, Eriko; Namiki, Yoko; Bari, Razibul; Abe, Osamu; Ino, Kenji; Torigoe, Rumiko

    2017-01-01

    Full iterative reconstruction algorithm is available, but its diagnostic quality in pediatric cardiac CT is unknown. To compare the imaging quality of two algorithms, full and hybrid iterative reconstruction, in pediatric cardiac CT. We included 49 children with congenital cardiac anomalies who underwent cardiac CT. We compared quality of images reconstructed using the two algorithms (full and hybrid iterative reconstruction) based on a 3-point scale for the delineation of the following anatomical structures: atrial septum, ventricular septum, right atrium, right ventricle, left atrium, left ventricle, main pulmonary artery, ascending aorta, aortic arch including the patent ductus arteriosus, descending aorta, right coronary artery and left main trunk. We evaluated beam-hardening artifacts from contrast-enhancement material using a 3-point scale, and we evaluated the overall image quality using a 5-point scale. We also compared image noise, signal-to-noise ratio and contrast-to-noise ratio between the algorithms. The overall image quality was significantly higher with full iterative reconstruction than with hybrid iterative reconstruction (3.67±0.79 vs. 3.31±0.89, P=0.0072). The evaluation scores for most of the gross structures were higher with full iterative reconstruction than with hybrid iterative reconstruction. There was no significant difference between full and hybrid iterative reconstruction for the presence of beam-hardening artifacts. Image noise was significantly lower in full iterative reconstruction, while signal-to-noise ratio and contrast-to-noise ratio were significantly higher in full iterative reconstruction. The diagnostic quality was superior in images with cardiac CT reconstructed with electrocardiogram-gated full iterative reconstruction. (orig.)

  5. Improving head and neck CTA with hybrid and model-based iterative reconstruction techniques

    NARCIS (Netherlands)

    Niesten, J. M.; van der Schaaf, I. C.; Vos, P. C.; Willemink, MJ; Velthuis, B. K.

    2015-01-01

    AIM: To compare image quality of head and neck computed tomography angiography (CTA) reconstructed with filtered back projection (FBP), hybrid iterative reconstruction (HIR) and model-based iterative reconstruction (MIR) algorithms. MATERIALS AND METHODS: The raw data of 34 studies were reconstructed with FBP, HIR and MIR...

  6. An iterated Laplacian based semi-supervised dimensionality reduction for classification of breast cancer on ultrasound images.

    Science.gov (United States)

    Liu, Xiao; Shi, Jun; Zhou, Shichong; Lu, Minhua

    2014-01-01

    The dimensionality reduction is an important step in ultrasound image based computer-aided diagnosis (CAD) for breast cancer. A newly proposed l2,1 regularized correntropy algorithm for robust feature selection (CRFS) has achieved good performance for noise-corrupted data. Therefore, it has the potential to reduce the dimensions of ultrasound image features. However, in clinical practice, the collection of labeled instances is usually expensive and time-consuming, while it is relatively easy to acquire unlabeled or undetermined instances. Therefore, semi-supervised learning is very suitable for clinical CAD. The iterated Laplacian regularization (Iter-LR) is a new regularization method, which has been proved to outperform the traditional graph Laplacian regularization in semi-supervised classification and ranking. In this study, to augment the classification accuracy of breast ultrasound CAD based on texture features, we propose an Iter-LR-based semi-supervised CRFS (Iter-LR-CRFS) algorithm, and then apply it to reduce the feature dimensions of ultrasound images for breast CAD. We compared the Iter-LR-CRFS with LR-CRFS, the original supervised CRFS, and principal component analysis. The experimental results indicate that the proposed Iter-LR-CRFS significantly outperforms all other algorithms.

  7. Update on the non-prewhitening model observer in computed tomography for the assessment of the adaptive statistical and model-based iterative reconstruction algorithms

    Science.gov (United States)

    Ott, Julien G.; Becce, Fabio; Monnin, Pascal; Schmidt, Sabine; Bochud, François O.; Verdun, Francis R.

    2014-08-01

    The state of the art to describe image quality in medical imaging is to assess the performance of an observer conducting a task of clinical interest. This can be done by using a model observer leading to a figure of merit such as the signal-to-noise ratio (SNR). Using the non-prewhitening (NPW) model observer, we objectively characterised the evolution of its figure of merit in various acquisition conditions. The NPW model observer usually requires the use of the modulation transfer function (MTF) as well as noise power spectra. However, although the computation of the MTF poses no problem when dealing with the traditional filtered back-projection (FBP) algorithm, this is not the case when using iterative reconstruction (IR) algorithms, such as adaptive statistical iterative reconstruction (ASIR) or model-based iterative reconstruction (MBIR). Given that the target transfer function (TTF) had already shown it could accurately express the system resolution even with non-linear algorithms, we decided to tune the NPW model observer, replacing the standard MTF by the TTF. It was estimated using a custom-made phantom containing cylindrical inserts surrounded by water. The contrast differences between the inserts and water were plotted for each acquisition condition. Then, mathematical transformations were performed leading to the TTF. As expected, the first results showed a dependency of the image contrast and noise levels on the TTF for both ASIR and MBIR. Moreover, FBP also proved to be dependent of the contrast and noise when using the lung kernel. Those results were then introduced in the NPW model observer. We observed an enhancement of SNR every time we switched from FBP to ASIR to MBIR. IR algorithms greatly improve image quality, especially in low-dose conditions. Based on our results, the use of MBIR could lead to further dose reduction in several clinical applications.

  8. A multiresolution approach to iterative reconstruction algorithms in X-ray computed tomography.

    Science.gov (United States)

    De Witte, Yoni; Vlassenbroeck, Jelle; Van Hoorebeke, Luc

    2010-09-01

    In computed tomography, the application of iterative reconstruction methods in practical situations is impeded by their high computational demands. Especially in high resolution X-ray computed tomography, where reconstruction volumes contain a high number of volume elements (several giga voxels), this computational burden prevents their actual breakthrough. Besides the large amount of calculations, iterative algorithms require the entire volume to be kept in memory during reconstruction, which quickly becomes cumbersome for large data sets. To overcome this obstacle, we present a novel multiresolution reconstruction, which greatly reduces the required amount of memory without significantly affecting the reconstructed image quality. It is shown that, combined with an efficient implementation on a graphical processing unit, the multiresolution approach enables the application of iterative algorithms in the reconstruction of large volumes at an acceptable speed using only limited resources.

  9. Iterative volume morphing and learning for mobile tumor based on 4DCT.

    Science.gov (United States)

    Mao, Songan; Wu, Huanmei; Sandison, George; Fang, Shiaofen

    2017-02-21

    During image-guided cancer radiation treatment, three-dimensional (3D) tumor volumetric information is important for treatment success. However, it is typically not feasible to image a patient's 3D tumor continuously in real time during treatment due to concern over excessive patient radiation dose. We present a new iterative morphing algorithm to predict the real-time 3D tumor volume based on time-resolved four-dimensional computed tomography (4DCT) acquired before treatment. An offline iterative learning process has been designed to derive a target volumetric deformation function from one breathing phase to another. Real-time volumetric prediction is performed to derive the target 3D volume during treatment delivery. The proposed iterative deformable approach for tumor volume morphing and prediction based on 4DCT is innovative because it makes three major contributions: (1) a novel approach to landmark selection on 3D tumor surfaces using a minimum bounding box; (2) an iterative morphing algorithm to generate the 3D tumor volume using mapped landmarks; and (3) an online tumor volume prediction strategy based on previously trained deformation functions utilizing 4DCT. Experiments showed maximum morphing deviations of 0.27% and 1.25% for original patient data and artificially generated data, respectively, which is promising. This newly developed algorithm and implementation will have important applications for treatment planning, dose calculation and treatment validation in cancer radiation treatment.

  10. X-ray dose reduction in abdominal computed tomography using advanced iterative reconstruction algorithms.

    Directory of Open Access Journals (Sweden)

    Peigang Ning

    Full Text Available OBJECTIVE: This work aims to explore the effects of adaptive statistical iterative reconstruction (ASiR) and model-based iterative reconstruction (MBIR) algorithms in reducing computed tomography (CT) radiation doses in abdominal imaging. METHODS: CT scans of a standard male phantom were performed at different tube currents. Images at the different tube currents were reconstructed with the filtered back-projection (FBP), 50% ASiR and MBIR algorithms and compared. The CT value, image noise and contrast-to-noise ratios (CNRs) of the reconstructed abdominal images were measured. Volumetric CT dose indexes (CTDIvol) were recorded. RESULTS: At different tube currents, 50% ASiR and MBIR significantly reduced image noise and increased the CNR when compared with FBP. The minimal tube current values required by FBP, 50% ASiR, and MBIR to achieve acceptable image quality using this phantom were 200, 140, and 80 mA, respectively. At identical image quality, 50% ASiR and MBIR reduced the radiation dose by 35.9% and 59.9%, respectively, when compared with FBP. CONCLUSIONS: Advanced iterative reconstruction techniques are able to reduce image noise and increase image CNRs. Compared with FBP, 50% ASiR and MBIR reduced radiation doses by 35.9% and 59.9%, respectively.
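
    The CNR figure used throughout this comparison is simple to reproduce; the sketch below computes it from two regions of interest, with hypothetical Hounsfield-unit samples standing in for real phantom measurements.

        import numpy as np

        def cnr(roi, background):
            """Contrast-to-noise ratio between a target ROI and a
            background ROI, as used to compare FBP/ASiR/MBIR images."""
            return abs(roi.mean() - background.mean()) / background.std(ddof=1)

        rng = np.random.default_rng(0)
        roi = rng.normal(60.0, 12.0, 400)          # lesion ROI, ~60 HU
        background = rng.normal(40.0, 12.0, 400)   # background ROI, ~40 HU
        print(f"CNR = {cnr(roi, background):.2f}")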

  11. Verification-Based Interval-Passing Algorithm for Compressed Sensing

    OpenAIRE

    Wu, Xiaofu; Yang, Zhen

    2013-01-01

    We propose a verification-based Interval-Passing (IP) algorithm for the iterative reconstruction of nonnegative sparse signals using parity check matrices of low-density parity check (LDPC) codes as measurement matrices. The proposed algorithm can be considered an improved IP algorithm that further incorporates the mechanism of the verification algorithm. It is proved that the proposed algorithm always performs better than either the IP algorithm or the verification algorithm. Simulation resul...

  12. Q-learning-based adjustable fixed-phase quantum Grover search algorithm

    International Nuclear Information System (INIS)

    Guo Ying; Shi Wensha; Wang Yijun; Hu, Jiankun

    2017-01-01

    We demonstrate that the rotation phase can be suitably chosen to increase the efficiency of the phase-based quantum search algorithm, leading to a dynamic balance between iterations and success probabilities of the fixed-phase quantum Grover search algorithm with Q-learning for a given number of solutions. In this search algorithm, the proposed Q-learning algorithm, which is in essence a model-free reinforcement learning strategy, is used to perform a matching algorithm based on the fraction of marked items λ and the rotation phase α. After establishing the policy function α = π(λ), we complete the fixed-phase Grover algorithm, where the phase parameter is selected via the learned policy. Simulation results show that the Q-learning-based Grover search algorithm (QLGA) requires fewer iterations and yields higher success probabilities. Compared with the conventional Grover algorithms, it avoids local optima, thereby enabling success probabilities to approach one. (author)
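
    For reference, the conventional Grover baseline against which QLGA is compared can be computed in closed form: with a fraction λ of marked items, the success probability after j iterations is sin^2((2j+1)·arcsin(√λ)). The sketch below evaluates this baseline; the learned policy α = π(λ) itself is not reproduced.

        import math

        def grover_success(lam, j):
            """Success probability of conventional (phase = pi) Grover
            search after j iterations, for a fraction lam of marked items."""
            theta = math.asin(math.sqrt(lam))
            return math.sin((2 * j + 1) * theta) ** 2

        lam = 1 / 64                      # one marked item out of 64
        j_opt = round(math.pi / (4 * math.asin(math.sqrt(lam))) - 0.5)
        print(j_opt, grover_success(lam, j_opt))   # 6 iterations, p ~ 0.997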

  13. Improved event positioning in a gamma ray detector using an iterative position-weighted centre-of-gravity algorithm.

    Science.gov (United States)

    Liu, Chen-Yi; Goertzen, Andrew L

    2013-07-21

    An iterative position-weighted centre-of-gravity algorithm was developed and tested for positioning events in a silicon photomultiplier (SiPM)-based scintillation detector for positron emission tomography. The algorithm used a Gaussian-based weighting function centred at the current estimate of the event location. The algorithm was applied to the signals from a 4 × 4 array of SiPM detectors that used individual channel readout and a LYSO:Ce scintillator array. Three scintillator array configurations were tested: single layer with 3.17 mm crystal pitch, matched to the SiPM size; single layer with 1.5 mm crystal pitch; and dual layer with 1.67 mm crystal pitch and a ½ crystal offset in the X and Y directions between the two layers. The flood histograms generated by this algorithm were shown to be superior to those generated by the standard centre of gravity. The width of the Gaussian weighting function of the algorithm was optimized for different scintillator array setups. The optimal width of the Gaussian curve was found to depend on the amount of light spread. The algorithm required less than 20 iterations to calculate the position of an event. The rapid convergence of this algorithm will readily allow for implementation on a front-end detector processing field programmable gate array for use in improved real-time event positioning and identification.
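
    A minimal sketch of the core update follows: starting from the plain centre of gravity, each pass re-weights the channel signals with a Gaussian centred on the current position estimate. The 4 x 4 array geometry, light-spread model and sigma value are illustrative assumptions rather than the paper's calibrated values.

        import numpy as np

        def iterative_weighted_cog(signals, xy, sigma=1.6, n_iter=20):
            """Iterative position-weighted centre of gravity: re-weight
            the SiPM signals by a Gaussian centred on the running estimate."""
            pos = (signals[:, None] * xy).sum(0) / signals.sum()  # CoG seed
            for _ in range(n_iter):
                w = signals * np.exp(-((xy - pos) ** 2).sum(1)
                                     / (2 * sigma ** 2))
                pos = (w[:, None] * xy).sum(0) / w.sum()
            return pos

        pitch = 3.17                                   # mm, matched array
        gx, gy = np.meshgrid(np.arange(4) * pitch, np.arange(4) * pitch)
        xy = np.column_stack([gx.ravel(), gy.ravel()])
        true = np.array([4.5, 2.0])                    # simulated event
        signals = np.exp(-((xy - true) ** 2).sum(1) / (2 * 2.0 ** 2)) + 0.01
        print(iterative_weighted_cog(signals, xy))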

  14. A study of reconstruction artifacts in cone beam tomography using filtered backprojection and iterative EM algorithms

    International Nuclear Information System (INIS)

    Zeng, G.L.; Gullberg, G.T.

    1990-01-01

    Reconstruction artifacts in cone beam tomography are studied for the filtered backprojection (Feldkamp) and iterative EM algorithms. The filtered backprojection algorithm uses a voxel-driven, interpolated backprojection to reconstruct the cone beam data, whereas the iterative EM algorithm performs ray-driven projection and backprojection operations at each iteration. Two weighting schemes for the projection and backprojection operations in the EM algorithm are studied. One weights each voxel by the length of the ray through the voxel, and the other equates the value of a voxel to the functional value of the midpoint of the line intersecting the voxel, obtained by interpolating between eight neighboring voxels. Cone beam reconstruction artifacts such as rings, bright vertical extremities, and slice-to-slice crosstalk are not found with parallel beam and fan beam geometries

  15. Identifying elementary iterated systems through algorithmic inference: The Cantor set example

    Energy Technology Data Exchange (ETDEWEB)

    Apolloni, Bruno [Dipartimento di Scienze dell' Informazione, Universita degli Studi di Milano, Via Comelico 39/41, 20135 Milan (Italy)]. E-mail: apolloni@dsi.unimi.it; Bassis, Simone [Dipartimento di Scienze dell' Informazione, Universita degli Studi di Milano, Via Comelico 39/41, 20135 Milan (Italy)]. E-mail: bassis@dsi.unimi.it

    2006-10-15

    We come back to the old problem of fractal identification within the new framework of algorithmic inference. The key points are: (i) to identify sufficient statistics to be put in connection with the unknown values of the fractal parameters, and (ii) to manage the timing of the iterated process through spatial statistics. We accomplish these tasks successfully for Cantor sets: we are able to compute confidence intervals for both the scaling parameter θ and the iteration number n at which we are observing a set. We check numerically the coverage of these intervals and delineate a general strategy for addressing more complex iterated systems.
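
    The object of inference here is easy to generate: the sketch below produces the intervals of the n-th iterate of a Cantor-like construction with scaling parameter theta, the two quantities the paper infers from observed sets.

        def cantor_intervals(theta=1/3, n=5):
            """Intervals of the n-th iterate of a Cantor-like set with
            scaling parameter theta (theta = 1/3 is the middle-third set)."""
            intervals = [(0.0, 1.0)]
            for _ in range(n):
                nxt = []
                for a, b in intervals:
                    length = (b - a) * theta
                    nxt.append((a, a + length))        # left contraction
                    nxt.append((b - length, b))        # right contraction
                intervals = nxt
            return intervals

        print(len(cantor_intervals(n=5)))              # 2**5 = 32 intervals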

  16. Gradient-Based Iterative Solutions for Sylvester Tensor Equations

    Directory of Open Access Journals (Sweden)

    Zhen Chen

    2013-01-01

    Based on the gradient-based iterative method proposed by Ding and Chen (2005), and by using tensor arithmetic concepts, an iterative algorithm and its modification are established to solve the Sylvester tensor equation. Convergence analysis indicates that the iterative solutions always converge to the exact solution for an arbitrary initial value. Finally, some examples are provided to show that the proposed algorithms are effective.
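
    As a sketch of the idea in the matrix special case AX + XB = C (the tensor version replaces the matrix products with mode-n products), the iteration below takes steepest-descent steps on the residual with a conservative constant step size; the step-size rule and test problem are illustrative choices.

        import numpy as np

        def sylvester_gradient(A, B, C, n_iter=2000):
            """Gradient-based iteration for AX + XB = C: step along the
            negative gradient of 0.5 * ||AX + XB - C||_F^2."""
            mu = 1.0 / (np.linalg.norm(A, 2) + np.linalg.norm(B, 2)) ** 2
            X = np.zeros((A.shape[1], B.shape[0]))
            for _ in range(n_iter):
                R = C - A @ X - X @ B                  # current residual
                X = X + mu * (A.T @ R + R @ B.T)       # descent step
            return X

        rng = np.random.default_rng(1)
        A = rng.normal(size=(4, 4)) + 4 * np.eye(4)    # well-conditioned test
        B = rng.normal(size=(4, 4)) + 4 * np.eye(4)
        X_true = rng.normal(size=(4, 4))
        X = sylvester_gradient(A, B, A @ X_true + X_true @ B)
        print(np.linalg.norm(X - X_true))              # should be near zero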

  17. Study on the algorithm for Newton-Raphson iteration interpolation of NURBS curve and simulation

    Science.gov (United States)

    Zhang, Wanjun; Gao, Shanping; Cheng, Xiyan; Zhang, Feng

    2017-04-01

    The Newton-Raphson iteration interpolation method for NURBS curves suffers from long interpolation times, complicated calculations, and step errors that are difficult to control. To address these problems, this paper studies an algorithm for Newton-Raphson iteration interpolation of NURBS curves together with its simulation. Newton-Raphson iteration is used to calculate the interpolation points (xi, yi, zi). Simulation results show that the proposed NURBS curve interpolator meets the high-speed and high-accuracy interpolation requirements of CNC systems, and that the algorithm is correct and consistent with the NURBS curve interpolation requirements.
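
    The Newton-Raphson step at the heart of such interpolators is a scalar root-finding on the curve parameter. The sketch below shows the same iteration in its point-projection form on a simple polynomial curve standing in for a full NURBS evaluator; the curve and starting value are illustrative assumptions.

        import numpy as np

        # stand-in planar cubic instead of a full NURBS evaluator;
        # the Newton-Raphson step is identical in both cases
        def c(u):   return np.array([u, u ** 3 - 0.5 * u])    # curve point
        def dc(u):  return np.array([1.0, 3 * u ** 2 - 0.5])  # 1st derivative
        def d2c(u): return np.array([0.0, 6 * u])             # 2nd derivative

        def project(p, u0=0.5, tol=1e-10, n_iter=50):
            """Find u where C(u) is closest to p: Newton-Raphson on the
            root of g(u) = C'(u) . (C(u) - p)."""
            u = u0
            for _ in range(n_iter):
                r = c(u) - p
                g = dc(u) @ r
                dg = d2c(u) @ r + dc(u) @ dc(u)
                step = g / dg
                u -= step
                if abs(step) < tol:
                    break
            return u

        u = project(np.array([0.6, 0.1]))
        print(u, c(u))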

  18. An iterative algorithm for calculating stylus radius unambiguously

    International Nuclear Information System (INIS)

    Vorburger, T V; Zheng, A; Renegar, T B; Song, J-F; Ma, L

    2011-01-01

    The stylus radius is an important specification for stylus instruments and is commonly provided by instrument manufacturers. However, it is difficult to measure the stylus radius unambiguously. Accurate profiles of the stylus tip may be obtained by profiling over an object sharper than itself, such as a razor blade. However, the stylus profile thus obtained is a partial arc, and unless the shape of the stylus tip is a perfect sphere or circle, the effective value of the radius depends on the length of the tip profile over which the radius is determined. We have developed an iterative, least squares algorithm aimed at determining the effective least squares stylus radius unambiguously. So far, the algorithm converges to reasonable results for the least squares stylus radius. We suggest that the algorithm be considered for adoption in documentary standards describing the properties of stylus instruments.
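
    A minimal sketch of such an iteration follows: an algebraic (Kasa) least-squares circle fit is alternated with a re-selection of the profile span from the current radius estimate. The windowing rule (half a radius either side of the apex) and the simulated razor-blade trace are illustrative assumptions, not the paper's procedure.

        import numpy as np

        def kasa_fit(x, z):
            """Algebraic least-squares circle fit through profile points."""
            M = np.column_stack([x, z, np.ones_like(x)])
            c0, c1, c2 = np.linalg.lstsq(M, x ** 2 + z ** 2, rcond=None)[0]
            xc, zc = c0 / 2, c1 / 2
            return xc, zc, np.sqrt(c2 + xc ** 2 + zc ** 2)

        def stylus_radius(x, z, n_iter=20):
            """Iterate the fit, re-selecting the lateral span of the
            profile from the current radius estimate each pass."""
            xc, zc, r = kasa_fit(x, z)
            for _ in range(n_iter):
                apex = x[np.argmin(z)]
                keep = np.abs(x - apex) <= r / 2       # radius-based window
                xc, zc, r = kasa_fit(x[keep], z[keep])
            return r

        t = np.linspace(-1.2, 1.2, 400)                # angle about apex
        x = 2.0 * np.sin(t)                            # 2 um tip radius
        z = 2.0 * (1 - np.cos(t)) + np.random.normal(0, 0.01, t.size)
        print(stylus_radius(x, z))                     # close to 2.0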

  19. Submillisievert Computed Tomography of the Chest Using Model-Based Iterative Algorithm: Optimization of Tube Voltage With Regard to Patient Size.

    Science.gov (United States)

    Deák, Zsuzsanna; Maertz, Friedrich; Meurer, Felix; Notohamiprodjo, Susan; Mueck, Fabian; Geyer, Lucas L; Reiser, Maximilian F; Wirth, Stefan

    The aim of this study was to define the optimal tube potential for soft tissue and vessel visualization in dose-reduced chest CT protocols using a model-based iterative algorithm in average and overweight patients. Thirty-six patients receiving chest CT according to 3 protocols (120 kVp/noise index [NI], 60; 100 kVp/NI, 65; 80 kVp/NI, 70) were included in this prospective study, approved by the ethics committee. Patients' physical parameters and dose descriptors were recorded. Images were reconstructed with the model-based algorithm. Two radiologists evaluated image quality and lesion conspicuity; the protocols were intraindividually compared with a preceding control CT reconstructed with the statistical algorithm (120 kVp/NI, 20). The mean and standard deviation of the attenuation of muscle and fat tissues and the signal-to-noise ratio of the aorta were measured. Diagnostic images (lesion conspicuity, 95%-100%) were acquired in average and overweight patients at 1.34, 1.02, and 1.08 mGy and at 3.41, 3.20, and 2.88 mGy at 120, 100, and 80 kVp, respectively. Data are given as CT dose index volume values. The model-based algorithm allows for submillisievert chest CT in average patients; the use of 100 kVp is recommended.

  20. High resolution reconstruction of PET images using the iterative OSEM algorithm

    International Nuclear Information System (INIS)

    Doll, J.; Bublitz, O.; Werling, A.; Haberkorn, U.; Semmler, W.; Adam, L.E.; Pennsylvania Univ., Philadelphia, PA; Brix, G.

    2004-01-01

    Aim: Improvement of the spatial resolution in positron emission tomography (PET) by incorporation of the image-forming characteristics of the scanner into the process of iterative image reconstruction. Methods: All measurements were performed on the whole-body PET system ECAT EXACT HR + in 3D mode. The acquired 3D sinograms were sorted into 2D sinograms by means of the Fourier rebinning (FORE) algorithm, which allows the usage of 2D algorithms for image reconstruction. The scanner characteristics were described by a spatially variant line-spread function (LSF), which was determined from activated copper-64 line sources. This information was used to model the physical degradation processes in PET measurements during the course of 2D image reconstruction with the iterative OSEM algorithm. To assess the performance of the high-resolution OSEM algorithm, phantom measurements performed on a cylinder phantom, the hotspot Jaszczak phantom, and the 3D Hoffman brain phantom as well as different patient examinations were analyzed. Results: Scanner characteristics could be described by a Gaussian-shaped LSF with a full-width at half-maximum increasing from 4.8 mm at the center to 5.5 mm at a radial distance of 10.5 cm. Incorporation of the LSF into the iteration formula resulted in a markedly improved resolution of 3.0 and 3.5 mm, respectively. The evaluation of phantom and patient studies showed that the high-resolution OSEM algorithm not only led to a better contrast resolution in the reconstructed activity distributions but also to an improved accuracy in the quantification of activity concentrations in small structures, without leading to an amplification of image noise or even the occurrence of image artifacts. Conclusion: The spatial and contrast resolution of PET scans can be markedly improved by the presented image restoration algorithm, which is of special interest for the examination of both patients with brain disorders and small animals. (orig.)
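
    The structure of such a resolution-modelling OSEM update is easy to show in one dimension: the sketch below folds a Gaussian line-spread function into the forward and back projectors of a toy OSEM loop with interleaved subsets. The geometry-free 1D model and the parameter values are illustrative simplifications of the paper's 2D reconstruction.

        import numpy as np
        from scipy.ndimage import gaussian_filter1d

        def osem(y, sigma, n_sub=4, n_iter=8):
            """Toy 1D OSEM with a Gaussian LSF folded into the projector."""
            fwd = lambda x: gaussian_filter1d(x, sigma)   # projector + LSF
            bck = fwd                                     # Gaussian is symmetric
            x = np.ones_like(y)
            subsets = [np.arange(s, y.size, n_sub) for s in range(n_sub)]
            for _ in range(n_iter):
                for s in subsets:
                    ratio = np.zeros_like(y)
                    ratio[s] = y[s] / np.maximum(fwd(x)[s], 1e-12)
                    sens = np.zeros_like(y)
                    sens[s] = 1.0
                    x = x * bck(ratio) / np.maximum(bck(sens), 1e-12)
            return x

        truth = np.zeros(128)
        truth[60:68] = 10.0                               # small hot structure
        y = np.random.poisson(gaussian_filter1d(truth, 3.0) + 0.5).astype(float)
        print(osem(y, sigma=3.0).max())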

  1. A methodology for finding the optimal iteration number of the SIRT algorithm for quantitative Electron Tomography

    Energy Technology Data Exchange (ETDEWEB)

    Okariz, Ana, E-mail: ana.okariz@ehu.es [eMERG, Fisika Aplikatua I Saila, Faculty of Engineering, University of the Basque Country, UPV/EHU, Rafael Moreno “Pitxitxi” Pasealekua 3, 48013 Bilbao (Spain); Guraya, Teresa [eMERG, Departamento de Ingeniería Minera y Metalúrgica y Ciencia de los Materiales, Faculty of Engineering, University of the Basque Country, UPV/EHU, Rafael Moreno “Pitxitxi” Pasealekua 3, 48013 Bilbao (Spain); Iturrondobeitia, Maider [eMERG, Departamento de Expresión Gráfica y Proyectos de Ingeniería, Faculty of Engineering, University of the Basque Country, UPV/EHU, Rafael Moreno “Pitxitxi” Pasealekua 3, 48013 Bilbao (Spain); Ibarretxe, Julen [eMERG, Fisika Aplikatua I Saila, Faculty of Engineering,University of the Basque Country, UPV/EHU, Rafael Moreno “Pitxitxi” Pasealekua 2, 48013 Bilbao (Spain)

    2017-02-15

    The SIRT (Simultaneous Iterative Reconstruction Technique) algorithm is commonly used in Electron Tomography to calculate the original volume of the sample from noisy images, but the results provided by this iterative procedure are strongly dependent on the specific implementation of the algorithm, as well as on the number of iterations employed for the reconstruction. In this work, a methodology for selecting the iteration number of the SIRT reconstruction that provides the most accurate segmentation is proposed. The methodology is based on the statistical analysis of the intensity profiles at the edge of the objects in the reconstructed volume. A phantom which resembles a carbon black aggregate has been created to validate the methodology, and the SIRT implementations of two free software packages (TOMOJ and TOMO3D) have been used. - Highlights: • The non-uniformity of the resolution in electron tomography reconstructions has been demonstrated. • An overall resolution for the evaluation of the quality of electron tomography reconstructions has been defined. • Parameters for estimating an overall resolution across the reconstructed volume have been proposed. • The overall resolution of the reconstructions of a phantom has been estimated from the probability density functions. • It has been proven that reconstructions with the best overall resolutions have provided the most accurate segmentations.

  2. A methodology for finding the optimal iteration number of the SIRT algorithm for quantitative Electron Tomography

    International Nuclear Information System (INIS)

    Okariz, Ana; Guraya, Teresa; Iturrondobeitia, Maider; Ibarretxe, Julen

    2017-01-01

    The SIRT (Simultaneous Iterative Reconstruction Technique) algorithm is commonly used in Electron Tomography to calculate the original volume of the sample from noisy images, but the results provided by this iterative procedure are strongly dependent on the specific implementation of the algorithm, as well as on the number of iterations employed for the reconstruction. In this work, a methodology for selecting the iteration number of the SIRT reconstruction that provides the most accurate segmentation is proposed. The methodology is based on the statistical analysis of the intensity profiles at the edge of the objects in the reconstructed volume. A phantom which resembles a carbon black aggregate has been created to validate the methodology, and the SIRT implementations of two free software packages (TOMOJ and TOMO3D) have been used. - Highlights: • The non-uniformity of the resolution in electron tomography reconstructions has been demonstrated. • An overall resolution for the evaluation of the quality of electron tomography reconstructions has been defined. • Parameters for estimating an overall resolution across the reconstructed volume have been proposed. • The overall resolution of the reconstructions of a phantom has been estimated from the probability density functions. • It has been proven that reconstructions with the best overall resolutions have provided the most accurate segmentations.

  3. Effective Iterated Greedy Algorithm for Flow-Shop Scheduling Problems with Time lags

    Science.gov (United States)

    ZHAO, Ning; YE, Song; LI, Kaidian; CHEN, Siyu

    2017-05-01

    Flow-shop scheduling with time lags is a practical scheduling problem that has attracted many studies. The permutation problem (PFSP with time lags) has received considerable attention, but the non-permutation problem (non-PFSP with time lags) seems to have been neglected. With the aim of minimizing the makespan while satisfying the time-lag constraints, efficient algorithms for the PFSP and non-PFSP problems are proposed: an iterated greedy algorithm for the permutation case (IGTLP) and an iterated greedy algorithm for the non-permutation case (IGTLNP). The proposed algorithms are verified on well-known simple and complex instances of permutation and non-permutation problems with various time-lag ranges. The permutation results indicate that the proposed IGTLP can reach a near-optimal solution within roughly 11% of the computational time of the traditional GA approach. The non-permutation results indicate that the proposed IG can reach nearly the same solution within less than 1% of the computational time of the traditional GA approach. The proposed research treats the PFSP and non-PFSP together, with minimal and maximal time-lag consideration, which provides an interesting viewpoint for industrial implementation.
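
    The destruction-construction loop that defines iterated greedy is compact enough to sketch. The version below works on the plain permutation flow shop (the time-lag constraints of IGTLP/IGTLNP are omitted for brevity) with random destruction and best-position greedy reinsertion; the instance and parameter values are illustrative.

        import random

        def makespan(perm, p):
            """Permutation flow-shop makespan; p[j][m] is the processing
            time of job j on machine m (time lags omitted in this toy)."""
            m = len(p[0])
            c = [0.0] * m
            for j in perm:
                c[0] += p[j][0]
                for k in range(1, m):
                    c[k] = max(c[k], c[k - 1]) + p[j][k]
            return c[-1]

        def iterated_greedy(p, d=2, n_iter=200, seed=0):
            """Destroy d random jobs, greedily re-insert each at its best
            position, keep the new sequence if the makespan improves."""
            rng = random.Random(seed)
            best = list(range(len(p)))
            for _ in range(n_iter):
                cur = best[:]
                removed = [cur.pop(rng.randrange(len(cur))) for _ in range(d)]
                for j in removed:
                    cand = [cur[:i] + [j] + cur[i:] for i in range(len(cur) + 1)]
                    cur = min(cand, key=lambda s: makespan(s, p))
                if makespan(cur, p) < makespan(best, p):
                    best = cur
            return best, makespan(best, p)

        rng = random.Random(42)
        p = [[rng.uniform(1, 9) for _ in range(3)] for _ in range(8)]
        print(iterated_greedy(p))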

  4. Precise fixpoint computation through strategy iteration

    DEFF Research Database (Denmark)

    Gawlitza, Thomas; Seidl, Helmut

    2007-01-01

    We present a practical algorithm for computing least solutions of systems of equations over the integers with addition, multiplication with positive constants, maximum and minimum. The algorithm is based on strategy iteration. Its run-time (w.r.t. the uniform cost measure) is independent of the sizes of the occurring numbers.

  5. A proximity algorithm accelerated by Gauss-Seidel iterations for L1/TV denoising models

    Science.gov (United States)

    Li, Qia; Micchelli, Charles A.; Shen, Lixin; Xu, Yuesheng

    2012-09-01

    Our goal in this paper is to improve the computational performance of the proximity algorithms for the L1/TV denoising model. This leads us to a new characterization of all solutions to the L1/TV model via fixed-point equations expressed in terms of the proximity operators. Based upon this observation we develop an algorithm for solving the model and establish its convergence. Furthermore, we demonstrate that the proposed algorithm can be accelerated through the use of the componentwise Gauss-Seidel iteration so that the CPU time consumed is significantly reduced. Numerical experiments using the proposed algorithm for impulsive noise removal are included, with a comparison to three recently developed algorithms. The numerical results show that while the proposed algorithm enjoys a high quality of the restored images, as the other three known algorithms do, it performs significantly better in terms of computational efficiency measured in the CPU time consumed.

  6. A proximity algorithm accelerated by Gauss–Seidel iterations for L1/TV denoising models

    International Nuclear Information System (INIS)

    Li, Qia; Shen, Lixin; Xu, Yuesheng; Micchelli, Charles A

    2012-01-01

    Our goal in this paper is to improve the computational performance of the proximity algorithms for the L1/TV denoising model. This leads us to a new characterization of all solutions to the L1/TV model via fixed-point equations expressed in terms of the proximity operators. Based upon this observation we develop an algorithm for solving the model and establish its convergence. Furthermore, we demonstrate that the proposed algorithm can be accelerated through the use of the componentwise Gauss–Seidel iteration so that the CPU time consumed is significantly reduced. Numerical experiments using the proposed algorithm for impulsive noise removal are included, with a comparison to three recently developed algorithms. The numerical results show that while the proposed algorithm enjoys a high quality of the restored images, as the other three known algorithms do, it performs significantly better in terms of computational efficiency measured in the CPU time consumed. (paper)

  7. Image quality of iterative reconstruction in cranial CT imaging: comparison of model-based iterative reconstruction (MBIR) and adaptive statistical iterative reconstruction (ASiR).

    Science.gov (United States)

    Notohamiprodjo, S; Deak, Z; Meurer, F; Maertz, F; Mueck, F G; Geyer, L L; Wirth, S

    2015-01-01

    The purpose of this study was to compare cranial CT (CCT) image quality (IQ) of the MBIR algorithm with standard iterative reconstruction (ASiR). In this institutional review board (IRB)-approved study, raw data sets of 100 unenhanced CCT examinations (120 kV, 50-260 mAs, 20 mm collimation, 0.984 pitch) were reconstructed with both ASiR and MBIR. Signal-to-noise (SNR) and contrast-to-noise (CNR) ratios were calculated from attenuation values measured in the caudate nucleus, frontal white matter, anterior ventricle horn, fourth ventricle, and pons. Two radiologists, who were blinded to the reconstruction algorithms, evaluated anonymized multiplanar reformations of 2.5 mm with respect to the depiction of different parenchymal structures and the impact of artefacts on IQ using a five-point scale (0: unacceptable, 1: less than average, 2: average, 3: above average, 4: excellent). MBIR decreased artefacts more effectively than ASiR, and MBIR images were rated with significantly higher IQ scores than those reconstructed with ASiR. As CCT is an examination that is frequently required, the use of MBIR may allow for substantial reduction of radiation exposure caused by medical diagnostics. • Model-based iterative reconstruction (MBIR) effectively decreased artefacts in cranial CT. • MBIR reconstructed images were rated with significantly higher scores for image quality. • Model-based iterative reconstruction may allow reduced-dose diagnostic examination protocols.

  8. A kind of iteration algorithm for fast wave heating

    International Nuclear Information System (INIS)

    Zhu Xueguang; Kuang Guangli; Zhao Yanping; Li Youyi; Xie Jikang

    1998-03-01

    A standard normal distribution for particles in tokamak geometry is usually assumed in fast wave heating. In fact, due to the quasi-linear diffusion effect, the parallel and perpendicular temperatures of resonant particles are not equal, so this assumption introduces some error. For this case, the Fokker-Planck equation is introduced, and an iteration algorithm is adopted to solve the problem.

  9. Iterated non-linear model predictive control based on tubes and contractive constraints.

    Science.gov (United States)

    Murillo, M; Sánchez, G; Giovanini, L

    2016-05-01

    This paper presents a predictive control algorithm for non-linear systems based on successive linearizations of the non-linear dynamics around a given trajectory. A linear time-varying model is obtained, and the non-convex constrained optimization problem is transformed into a sequence of locally convex ones. The robustness of the proposed algorithm is addressed by adding a convex contractive constraint. To account for linearization errors and to obtain more accurate results, an inner iteration loop is added to the algorithm. A simple methodology to obtain an outer bounding tube for state trajectories is also presented. The convergence of the iterative process and the stability of the closed-loop system are analyzed. The simulation results show the effectiveness of the proposed algorithm in controlling a quadcopter-type unmanned aerial vehicle. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.

  10. Evaluation of the OSC-TV iterative reconstruction algorithm for cone-beam optical CT.

    Science.gov (United States)

    Matenine, Dmitri; Mascolo-Fortin, Julia; Goussard, Yves; Després, Philippe

    2015-11-01

    The present work evaluates an iterative reconstruction approach, namely, the ordered subsets convex (OSC) algorithm with regularization via total variation (TV) minimization in the field of cone-beam optical computed tomography (optical CT). One of the uses of optical CT is gel-based 3D dosimetry for radiation therapy, where it is employed to map dose distributions in radiosensitive gels. Model-based iterative reconstruction may improve optical CT image quality and contribute to a wider use of optical CT in clinical gel dosimetry. This algorithm was evaluated using experimental data acquired by a cone-beam optical CT system, as well as complementary numerical simulations. A fast GPU implementation of OSC-TV was used to achieve reconstruction times comparable to those of conventional filtered backprojection. Images obtained via OSC-TV were compared with the corresponding filtered backprojections. Spatial resolution and uniformity phantoms were scanned and respective reconstructions were subject to evaluation of the modulation transfer function, image uniformity, and accuracy. The artifacts due to refraction and total signal loss from opaque objects were also studied. The cone-beam optical CT data reconstructions showed that OSC-TV outperforms filtered backprojection in terms of image quality, thanks to a model-based simulation of the photon attenuation process. It was shown to significantly improve the image spatial resolution and reduce image noise. The accuracy of the estimation of linear attenuation coefficients remained similar to that obtained via filtered backprojection. Certain image artifacts due to opaque objects were reduced. Nevertheless, the common artifact due to the gel container walls could not be eliminated. The use of iterative reconstruction improves cone-beam optical CT image quality in many ways. The comparisons between OSC-TV and filtered backprojection presented in this paper demonstrate that OSC-TV can potentially improve the rendering of

  11. An iterative algorithm for solving the multidimensional neutron diffusion nodal method equations on parallel computers

    International Nuclear Information System (INIS)

    Kirk, B.L.; Azmy, Y.Y.

    1992-01-01

    In this paper the one-group, steady-state neutron diffusion equation in two-dimensional Cartesian geometry is solved using the nodal integral method. The discrete variable equations comprise loosely coupled sets of equations representing the nodal balance of neutrons, as well as neutron current continuity along rows or columns of computational cells. An iterative algorithm that is more suitable for solving large problems concurrently is derived based on the decomposition of the spatial domain and is accelerated using successive overrelaxation. This algorithm is very well suited for parallel computers, especially since the spatial domain decomposition occurs naturally, so that the number of iterations required for convergence does not depend on the number of processors participating in the calculation. Implementation of the authors' algorithm on the Intel iPSC/2 hypercube and Sequent Balance 8000 parallel computer is presented, and measured speedup and efficiency for test problems are reported. The results suggest that the efficiency of the hypercube quickly deteriorates when many processors are used, while the Sequent Balance retains very high efficiency for a comparable number of participating processors. This leads to the conjecture that message-passing parallel computers are not as well suited for this algorithm as shared-memory machines

  12. Computational Analysis of Distance Operators for the Iterative Closest Point Algorithm.

    Directory of Open Access Journals (Sweden)

    Higinio Mora

    Full Text Available The Iterative Closest Point (ICP) algorithm is currently one of the most popular methods for rigid registration, to the point that it has become the standard in the Robotics and Computer Vision communities. Many applications take advantage of it to align 2D/3D surfaces due to its popularity and simplicity. Nevertheless, some of its phases present a high computational cost, rendering some of its applications impossible. In this work, an efficient approach for the matching phase of the Iterative Closest Point algorithm is proposed. This stage is the main bottleneck of the method, so any efficiency improvement has a great positive impact on the performance of the algorithm. The proposal consists in using low-computational-cost point-to-point distance metrics instead of the classic Euclidean one. The candidates analysed are the Chebyshev and Manhattan distance metrics due to their simpler formulation. The experiments carried out have validated the performance, robustness and quality of the proposal. Different experimental cases and configurations have been set up, including a heterogeneous set of 3D figures and several scenarios with partial data and random noise. The results prove that an average speed-up of 14% can be obtained while preserving the convergence properties of the algorithm and the quality of the final results.
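
    The swap the paper evaluates is confined to the matching phase, sketched below: for each source point, find the closest model point under an L1 (Manhattan) or Linf (Chebyshev) metric instead of the Euclidean one. The brute-force search and random point sets are illustrative; practical ICP implementations typically use spatial indexing.

        import numpy as np

        def nearest_indices(src, dst, metric="manhattan"):
            """Matching phase of ICP: index of the closest model point
            in dst for every point in src, under a cheap metric."""
            diff = np.abs(src[:, None, :] - dst[None, :, :])  # (Ns, Nd, dim)
            if metric == "manhattan":        # L1: sum of absolute differences
                d = diff.sum(axis=2)
            elif metric == "chebyshev":      # Linf: max absolute difference
                d = diff.max(axis=2)
            else:                            # reference squared Euclidean
                d = (diff ** 2).sum(axis=2)
            return d.argmin(axis=1)

        src = np.random.rand(100, 3)
        dst = np.random.rand(120, 3)
        print(nearest_indices(src, dst, "chebyshev")[:5])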

  13. Backtracking-Based Iterative Regularization Method for Image Compressive Sensing Recovery

    Directory of Open Access Journals (Sweden)

    Lingjun Liu

    2017-01-01

    Full Text Available This paper presents a variant of the iterative shrinkage-thresholding (IST) algorithm, called backtracking-based adaptive IST (BAIST), for image compressive sensing (CS) reconstruction. As iterations increase, IST usually yields a smoothing of the solution and converges prematurely. To add back more details, the BAIST method backtracks to the previous noisy image using L2 norm minimization, i.e., minimizing the Euclidean distance between the current solution and the previous ones. Through this modification, the BAIST method achieves superior performance while maintaining the low complexity of IST-type methods. In addition, BAIST uses a nonlocal regularization with an adaptive regularizer to automatically detect the sparsity level of an image. Experimental results show that our algorithm outperforms the original IST method and several excellent CS techniques.
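
    For orientation, the plain IST baseline that BAIST modifies is shown below: a gradient step on the data-fidelity term followed by soft thresholding. The backtracking step and nonlocal adaptive regularizer of BAIST are not reproduced; the sensing matrix and sparsity level are synthetic.

        import numpy as np

        def ist(y, A, lam=0.05, n_iter=300):
            """Baseline iterative shrinkage-thresholding for y = Ax:
            gradient step on ||y - Ax||^2, then soft threshold."""
            step = 1.0 / np.linalg.norm(A, 2) ** 2     # 1/L for stability
            x = np.zeros(A.shape[1])
            for _ in range(n_iter):
                g = x + step * A.T @ (y - A @ x)       # gradient step
                x = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)
            return x

        rng = np.random.default_rng(2)
        A = rng.normal(size=(64, 256)) / 8.0           # random sensing matrix
        x0 = np.zeros(256)
        x0[rng.choice(256, 8, replace=False)] = rng.normal(size=8)
        x_hat = ist(A @ x0, A)
        print(np.linalg.norm(x_hat - x0) / np.linalg.norm(x0))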

  14. A Robust and Accurate Two-Step Auto-Labeling Conditional Iterative Closest Points (TACICP) Algorithm for Three-Dimensional Multi-Modal Carotid Image Registration.

    Directory of Open Access Journals (Sweden)

    Hengkai Guo

    Full Text Available Atherosclerosis is among the leading causes of death and disability. Combining information from multi-modal vascular images is an effective and efficient way to diagnose and monitor atherosclerosis, in which image registration is a key technique. In this paper a feature-based registration algorithm, the Two-step Auto-labeling Conditional Iterative Closest Points (TACICP) algorithm, is proposed to align three-dimensional carotid image datasets from ultrasound (US) and magnetic resonance (MR). Based on 2D segmented contours, a coarse-to-fine strategy is employed with two steps: a rigid initialization step and a non-rigid refinement step. The Conditional Iterative Closest Points (CICP) algorithm is used in the rigid initialization step to obtain the robust rigid transformation and label configurations. Then the labels and the CICP algorithm with a non-rigid thin-plate-spline (TPS) transformation model are introduced to solve non-rigid carotid deformation between different body positions. The results demonstrate that the proposed TACICP algorithm has achieved an average registration error of less than 0.2 mm with no failure case, which is superior to the state-of-the-art feature-based methods.

  15. Non-homogeneous updates for the iterative coordinate descent algorithm

    Science.gov (United States)

    Yu, Zhou; Thibault, Jean-Baptiste; Bouman, Charles A.; Sauer, Ken D.; Hsieh, Jiang

    2007-02-01

    Statistical reconstruction methods show great promise for improving resolution, and reducing noise and artifacts in helical X-ray CT. In fact, statistical reconstruction seems to be particularly valuable in maintaining reconstructed image quality when the dosage is low and the noise is therefore high. However, high computational cost and long reconstruction times remain as a barrier to the use of statistical reconstruction in practical applications. Among the various iterative methods that have been studied for statistical reconstruction, iterative coordinate descent (ICD) has been found to have relatively low overall computational requirements due to its fast convergence. This paper presents a novel method for further speeding the convergence of the ICD algorithm, and therefore reducing the overall reconstruction time for statistical reconstruction. The method, which we call non-homogeneous iterative coordinate descent (NH-ICD), uses spatially non-homogeneous updates to speed convergence by focusing computation where it is most needed. Experimental results with real data indicate that the method speeds reconstruction by roughly a factor of two for typical 3D multi-slice geometries.

  16. Design of 4D x-ray tomography experiments for reconstruction using regularized iterative algorithms

    Science.gov (United States)

    Mohan, K. Aditya

    2017-10-01

    4D X-ray computed tomography (4D-XCT) is widely used to perform non-destructive characterization of time varying physical processes in various materials. The conventional approach to improving temporal resolution in 4D-XCT involves the development of expensive and complex instrumentation that acquire data faster with reduced noise. It is customary to acquire data with many tomographic views at a high signal to noise ratio. Instead, temporal resolution can be improved using regularized iterative algorithms that are less sensitive to noise and limited views. These algorithms benefit from optimization of other parameters such as the view sampling strategy while improving temporal resolution by reducing the total number of views or the detector exposure time. This paper presents the design principles of 4D-XCT experiments when using regularized iterative algorithms derived using the framework of model-based reconstruction. A strategy for performing 4D-XCT experiments is presented that allows for improving the temporal resolution by progressively reducing the number of views or the detector exposure time. Theoretical analysis of the effect of the data acquisition parameters on the detector signal to noise ratio, spatial reconstruction resolution, and temporal reconstruction resolution is also presented in this paper.

  17. Automatic Detection and Quantification of WBCs and RBCs Using Iterative Structured Circle Detection Algorithm

    Directory of Open Access Journals (Sweden)

    Yazan M. Alomari

    2014-01-01

    Full Text Available Segmentation and counting of blood cells are considered an important step that helps to extract features to diagnose certain diseases like malaria or leukemia. The manual counting of white blood cells (WBCs) and red blood cells (RBCs) in microscopic images is an extremely tedious, time-consuming, and inaccurate process. Automatic analysis will allow hematologist experts to perform faster and more accurately. The proposed method uses an iterative structured circle detection algorithm for the segmentation and counting of WBCs and RBCs. The separation of WBCs from RBCs was achieved by thresholding, and specific preprocessing steps were developed for each cell type. Counting was performed for each image using the proposed method based on modified circle detection, which automatically counted the cells. Several modifications were made to the basic randomized circle detection (RCD) algorithm to solve the initialization problem, detect irregular circles (cells), select the optimal circle from the candidate circles, and determine the number of iterations in a fully dynamic way, so as to enhance the algorithm's detection quality and running time. The validation method used to determine segmentation accuracy was a quantitative analysis that included Precision, Recall, and F-measurement tests. The average accuracy of the proposed method was 95.3% for RBCs and 98.4% for WBCs.

  18. Distributed interference alignment iterative algorithms in symmetric wireless network

    Directory of Open Access Journals (Sweden)

    YANG Jingwen

    2015-02-01

    Full Text Available Interference alignment is a novel interference management technique that has attracted wide attention. By precoding, interference alignment overlaps the interference in the same signal subspace at the receiving terminal so as to thoroughly eliminate the influence of interference on the expected signals, thus allowing the desired user to achieve the maximum degrees of freedom. In this paper we study three typical algorithms for realizing interference alignment: minimizing the leakage interference, maximizing the signal-to-interference-plus-noise ratio (SINR), and minimizing the mean square error (MSE). All of these algorithms utilize the reciprocity of the wireless network and iterate the precoders between the original network and the reverse network so as to achieve interference alignment. We use the uplink transmit rate to analyze the performance of these three algorithms. Numerical simulation results show the advantages of these algorithms, which form a foundation for further study. The feasibility and future of interference alignment are also discussed.

  19. SACFIR: SDN-Based Application-Aware Centralized Adaptive Flow Iterative Reconfiguring Routing Protocol for WSNs.

    Science.gov (United States)

    Aslam, Muhammad; Hu, Xiaopeng; Wang, Fan

    2017-12-13

    Smart reconfiguration of a dynamic networking environment is offered by the central control of Software-Defined Networking (SDN). Centralized SDN-based management architectures are capable of retrieving global topology intelligence and decoupling the forwarding plane from the control plane. Routing protocols developed for conventional Wireless Sensor Networks (WSNs) utilize limited iterative reconfiguration methods to optimize environmental reporting. However, the challenging networking scenarios of WSNs involve a performance overhead due to constant periodic iterative reconfigurations. In this paper, we propose the SDN-based Application-aware Centralized adaptive Flow Iterative Reconfiguring (SACFIR) routing protocol with the centralized SDN iterative solver controller to maintain the load-balancing between flow reconfigurations and flow allocation cost. The proposed SACFIR's routing protocol offers a unique iterative path-selection algorithm, which initially computes suitable clustering based on residual resources at the control layer and then implements application-aware threshold-based multi-hop report transmissions on the forwarding plane. The operation of the SACFIR algorithm is centrally supervised by the SDN controller residing at the Base Station (BS). This paper extends SACFIR to SDN-based Application-aware Main-value Centralized adaptive Flow Iterative Reconfiguring (SAMCFIR) to establish both proactive and reactive reporting. The SAMCFIR transmission phase enables sensor nodes to trigger direct transmissions for main-value reports, while in the case of SACFIR, all reports follow computed routes. Our SDN-enabled proposed models adjust the reconfiguration period according to the traffic burden on sensor nodes, which results in heterogeneity awareness, load-balancing and application-specific reconfigurations of WSNs. Extensive experimental simulation-based results show that SACFIR and SAMCFIR yield the maximum scalability, network lifetime and stability

  20. SACFIR: SDN-Based Application-Aware Centralized Adaptive Flow Iterative Reconfiguring Routing Protocol for WSNs

    Directory of Open Access Journals (Sweden)

    Muhammad Aslam

    2017-12-01

    Full Text Available Smart reconfiguration of a dynamic networking environment is offered by the central control of Software-Defined Networking (SDN). Centralized SDN-based management architectures are capable of retrieving global topology intelligence and decoupling the forwarding plane from the control plane. Routing protocols developed for conventional Wireless Sensor Networks (WSNs) utilize limited iterative reconfiguration methods to optimize environmental reporting. However, the challenging networking scenarios of WSNs involve a performance overhead due to constant periodic iterative reconfigurations. In this paper, we propose the SDN-based Application-aware Centralized adaptive Flow Iterative Reconfiguring (SACFIR) routing protocol with the centralized SDN iterative solver controller to maintain the load-balancing between flow reconfigurations and flow allocation cost. The proposed SACFIR's routing protocol offers a unique iterative path-selection algorithm, which initially computes suitable clustering based on residual resources at the control layer and then implements application-aware threshold-based multi-hop report transmissions on the forwarding plane. The operation of the SACFIR algorithm is centrally supervised by the SDN controller residing at the Base Station (BS). This paper extends SACFIR to the SDN-based Application-aware Main-value Centralized adaptive Flow Iterative Reconfiguring (SAMCFIR) protocol to establish both proactive and reactive reporting. The SAMCFIR transmission phase enables sensor nodes to trigger direct transmissions for main-value reports, while in the case of SACFIR, all reports follow computed routes. Our SDN-enabled proposed models adjust the reconfiguration period according to the traffic burden on sensor nodes, which results in heterogeneity awareness, load-balancing and application-specific reconfigurations of WSNs. Extensive experimental simulation-based results show that SACFIR and SAMCFIR yield the maximum scalability, network lifetime and stability.

  1. Assessment of the dose reduction potential of a model-based iterative reconstruction algorithm using a task-based performance metrology

    International Nuclear Information System (INIS)

    Samei, Ehsan; Richard, Samuel

    2015-01-01

    The results indicated a 46%–84% dose reduction potential, depending on task, without compromising the modeled detection performance. Conclusions: The presented methodology based on ACR phantom measurements extends current possibilities for the assessment of CT image quality under the complex resolution and noise characteristics exhibited with statistical and iterative reconstruction algorithms. The findings further suggest that MBIR can potentially make better use of the projection data to reduce CT dose by approximately a factor of 2. Alternatively, if the dose is held unchanged, it can improve image quality by different degrees for different tasks.

  2. Assessment of the dose reduction potential of a model-based iterative reconstruction algorithm using a task-based performance metrology

    Energy Technology Data Exchange (ETDEWEB)

    Samei, Ehsan, E-mail: samei@duke.edu [Carl E. Ravin Advanced Imaging Laboratories, Clinical Imaging Physics Group, Departments of Radiology, Physics, Biomedical Engineering, and Electrical and Computer Engineering, Medical Physics Graduate Program, Duke University, Durham, North Carolina 27710 (United States); Richard, Samuel [Carl E. Ravin Advanced Imaging Laboratories, Department of Radiology, Duke University, Durham, North Carolina 27710 (United States)

    2015-01-15

    The results indicated a 46%–84% dose reduction potential, depending on task, without compromising the modeled detection performance. Conclusions: The presented methodology based on ACR phantom measurements extends current possibilities for the assessment of CT image quality under the complex resolution and noise characteristics exhibited with statistical and iterative reconstruction algorithms. The findings further suggest that MBIR can potentially make better use of the projection data to reduce CT dose by approximately a factor of 2. Alternatively, if the dose is held unchanged, it can improve image quality by different degrees for different tasks.

  3. Iterative Algorithm for Solving a Class of Quaternion Matrix Equation over the Generalized (P,Q)-Reflexive Matrices

    Directory of Open Access Journals (Sweden)

    Ning Li

    2013-01-01

    Full Text Available The matrix equation ∑_{l=1}^{u} A_l X B_l + ∑_{s=1}^{v} C_s X^T D_s = F, which includes some frequently investigated matrix equations as its special cases, plays important roles in system theory. In this paper, we propose an iterative algorithm for solving this quaternion matrix equation over generalized (P,Q)-reflexive matrices. The proposed iterative algorithm automatically determines the solvability of the quaternion matrix equation over generalized (P,Q)-reflexive matrices. When the matrix equation is consistent over generalized (P,Q)-reflexive matrices, the sequence {X(k)} generated by the introduced algorithm converges to a generalized (P,Q)-reflexive solution of the quaternion matrix equation. The sequence {X(k)} converges to the least Frobenius norm generalized (P,Q)-reflexive solution of the quaternion matrix equation when an appropriate initial iterative matrix is chosen. Furthermore, the optimal approximate generalized (P,Q)-reflexive solution for a given generalized (P,Q)-reflexive matrix X0 can be derived. The numerical results indicate that the iterative algorithm is quite efficient.

  4. Iterative reconstruction of transcriptional regulatory networks: an algorithmic approach.

    Directory of Open Access Journals (Sweden)

    Christian L Barrett

    2006-05-01

    Full Text Available The number of complete, publicly available genome sequences is now greater than 200, and this number is expected to rapidly grow in the near future as metagenomic and environmental sequencing efforts escalate and the cost of sequencing drops. In order to make use of these data for understanding particular organisms and for discerning general principles about how organisms function, it will be necessary to reconstruct their various biochemical reaction networks. Principal among these will be transcriptional regulatory networks. Given the physical and logical complexity of these networks, the various sources of (often noisy) data that can be utilized for their elucidation, the monetary costs involved, and the huge number of potential experiments (approximately 10^12) that can be performed, experiment design algorithms will be necessary for synthesizing the various computational and experimental data to maximize the efficiency of regulatory network reconstruction. This paper presents an algorithm for experimental design to systematically and efficiently reconstruct transcriptional regulatory networks. It is meant to be applied iteratively in conjunction with an experimental laboratory component. The algorithm is presented here in the context of reconstructing transcriptional regulation for metabolism in Escherichia coli, and, through a retrospective analysis with previously performed experiments, we show that the produced experiment designs conform to how a human would design experiments. The algorithm is able to utilize probability estimates based on a wide range of computational and experimental sources to suggest experiments with the highest potential of discovering the greatest amount of new regulatory knowledge.

  5. Convergence of SART + OS + TV iterative reconstruction algorithm for optical CT imaging of gel dosimeters

    International Nuclear Information System (INIS)

    Du, Yi; Yu, Gongyi; Xiang, Xincheng; Wang, Xiangang; De Deene, Yves

    2017-01-01

    Computational simulations are used to investigate the convergence of a hybrid iterative algorithm for optical CT reconstruction, i.e. the simultaneous algebraic reconstruction technique (SART) integrated with ordered-subsets (OS) iteration and total variation (TV) minimization regularization, or SART+OS+TV for short. The influence of parameter selection on convergence, spatial dose gradient integrity, the MTF and the convergence speed are discussed. It is shown that the results of the SART+OS+TV algorithm converge to the true values without significant bias, and that the MTF and convergence speed are affected by the different parameter sets used for the iterative calculation. In conclusion, the performance of SART+OS+TV depends on parameter selection, which also implies that careful parameter tuning is required and necessary for proper spatial performance and fast convergence. (paper)
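
    The hybrid structure under study alternates two ingredients that are simple to sketch: ordered-subsets SART sweeps over ray subsets, followed by a few descent steps on a smoothed total variation. The toy system matrix, subset count and step sizes below are illustrative stand-ins for the optical CT geometry.

        import numpy as np

        def sart_os_tv(A, y, shape, n_sub=4, n_iter=20, tv_steps=5, alpha=0.1):
            """Toy SART + ordered subsets + TV minimization."""
            x = np.zeros(A.shape[1])
            subsets = [np.arange(s, A.shape[0], n_sub) for s in range(n_sub)]
            for _ in range(n_iter):
                for s in subsets:                          # OS-SART sweep
                    As = A[s]
                    r = (y[s] - As @ x) / np.maximum(As.sum(1), 1e-12)
                    x = x + As.T @ r / np.maximum(As.sum(0), 1e-12)
                for _ in range(tv_steps):                  # TV descent steps
                    u = x.reshape(shape)
                    gx = np.diff(u, axis=0, append=u[-1:, :])
                    gy = np.diff(u, axis=1, append=u[:, -1:])
                    mag = np.sqrt(gx ** 2 + gy ** 2 + 1e-8)
                    div = (gx / mag - np.roll(gx / mag, 1, axis=0)
                           + gy / mag - np.roll(gy / mag, 1, axis=1))
                    x = np.maximum((u + alpha * div).ravel(), 0.0)
            return x.reshape(shape)

        rng = np.random.default_rng(3)
        shape = (16, 16)
        A = (rng.random((200, 256)) < 0.1).astype(float)   # stand-in ray matrix
        truth = np.zeros(shape)
        truth[4:12, 4:12] = 1.0
        rec = sart_os_tv(A, A @ truth.ravel(), shape)
        print(np.abs(rec - truth).mean())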

  6. Adaptive dynamic programming for discrete-time linear quadratic regulation based on multirate generalised policy iteration

    Science.gov (United States)

    Chun, Tae Yoon; Lee, Jae Young; Park, Jin Bae; Choi, Yoon Ho

    2018-06-01

    In this paper, we propose two multirate generalised policy iteration (GPI) algorithms applied to discrete-time linear quadratic regulation problems. The proposed algorithms are extensions of the existing GPI algorithm that consists of the approximate policy evaluation and policy improvement steps. The two proposed schemes, named heuristic dynamic programming (HDP) and dual HDP (DHP), based on multirate GPI, use multi-step estimation (M-step Bellman equation) at the approximate policy evaluation step for estimating the value function and its gradient, called the costate, respectively. Then, we show that these two methods with the same update horizon can be considered equivalent in the iteration domain. Furthermore, monotonically increasing and decreasing convergences, the so-called value iteration (VI)-mode and policy iteration (PI)-mode convergences, are proved to hold for the proposed multirate GPIs. Further, general convergence properties in terms of eigenvalues are also studied. The data-driven online implementation methods for the proposed HDP and DHP are demonstrated and finally, we present the results of numerical simulations performed to verify the effectiveness of the proposed methods.

  7. Improved Savitzky-Golay-method-based fluorescence subtraction algorithm for rapid recovery of Raman spectra.

    Science.gov (United States)

    Chen, Kun; Zhang, Hongyuan; Wei, Haoyun; Li, Yan

    2014-08-20

    In this paper, we propose an improved subtraction algorithm for rapid recovery of Raman spectra that can substantially reduce the computation time. This algorithm is based on an improved Savitzky-Golay (SG) iterative smoothing method, which involves two key novel approaches: (a) the use of the Gauss-Seidel method and (b) the introduction of a relaxation factor into the iterative procedure. By applying a novel successive relaxation (SG-SR) iterative method to the relaxation factor, additional improvement in the convergence speed over the standard Savitzky-Golay procedure is realized. The proposed improved algorithm (the RIA-SG-SR algorithm), which uses SG-SR-based iteration instead of Savitzky-Golay iteration, has been optimized and validated with a mathematically simulated Raman spectrum, as well as experimentally measured Raman spectra from non-biological and biological samples. The method results in a significant reduction in computing cost while yielding consistent rejection of fluorescence and noise for spectra with low signal-to-fluorescence ratios and varied baselines. In the simulation, RIA-SG-SR achieved 1 order of magnitude improvement in iteration number and 2 orders of magnitude improvement in computation time compared with the range-independent background-subtraction algorithm (RIA). Furthermore, the computation time for processing an experimentally measured raw Raman spectrum from skin tissue decreased from 6.72 to 0.094 s. In general, the processing of the SG-SR method can be conducted within dozens of milliseconds, which enables a real-time procedure in practical situations.
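
    The underlying peak-stripping idea is easy to sketch: repeatedly smooth the spectrum with a Savitzky-Golay filter and clip it to the smoothed curve, so narrow Raman lines are removed while the broad fluorescence baseline survives. The over-relaxation factor below only gestures at the paper's SG-SR scheme (the Gauss-Seidel formulation is not reproduced), and the window, polynomial order and synthetic spectrum are illustrative choices.

        import numpy as np
        from scipy.signal import savgol_filter

        def sg_baseline(spectrum, window=101, poly=3, n_iter=100, relax=1.2):
            """Iterative SG baseline: smooth, then clip the running
            estimate to the smoothed curve (relax = 1 is the standard
            iteration; relax > 1 over-relaxes to speed convergence)."""
            base = spectrum.astype(float).copy()
            for _ in range(n_iter):
                smooth = savgol_filter(base, window, poly)
                clipped = np.minimum(base, smooth)      # strip peaks
                base = base + relax * (clipped - base)  # relaxed update
            return base

        x = np.linspace(0.0, 1.0, 1000)
        fluor = 40 * np.exp(-(x - 0.3) ** 2 / 0.5)      # broad fluorescence
        raman = 5 * np.exp(-(x - 0.55) ** 2 / 1e-4)     # narrow Raman line
        spec = fluor + raman + np.random.normal(0, 0.2, x.size)
        recovered = spec - sg_baseline(spec)            # fluorescence removed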

  8. Variable aperture-based ptychographical iterative engine method

    Science.gov (United States)

    Sun, Aihui; Kong, Yan; Meng, Xin; He, Xiaoliang; Du, Ruijun; Jiang, Zhilong; Liu, Fei; Xue, Liang; Wang, Shouyu; Liu, Cheng

    2018-02-01

    A variable aperture-based ptychographical iterative engine (vaPIE) is demonstrated both numerically and experimentally to reconstruct the sample phase and amplitude rapidly. By adjusting the size of a tiny aperture under the illumination of a parallel light beam to change the illumination on the sample step by step, and recording the corresponding diffraction patterns sequentially, both the sample phase and amplitude can be faithfully reconstructed with a modified ptychographical iterative engine (PIE) algorithm. Since many fewer diffraction patterns are required than in common PIE, and the shape, size, and position of the aperture need not be known exactly, the proposed vaPIE method remarkably reduces the data acquisition time and makes PIE less dependent on the mechanical accuracy of the translation stage; therefore, the proposed technique can potentially be applied in various fields of scientific research.

  9. Comparison of the effects of model-based iterative reconstruction and filtered back projection algorithms on software measurements in pulmonary subsolid nodules.

    Science.gov (United States)

    Cohen, Julien G; Kim, Hyungjin; Park, Su Bin; van Ginneken, Bram; Ferretti, Gilbert R; Lee, Chang Hyun; Goo, Jin Mo; Park, Chang Min

    2017-08-01

    To evaluate the differences between filtered back projection (FBP) and model-based iterative reconstruction (MBIR) algorithms on semi-automatic measurements in subsolid nodules (SSNs). Unenhanced CT scans of 73 SSNs obtained using the same protocol and reconstructed with both FBP and MBIR algorithms were evaluated by two radiologists. Diameter, mean attenuation, mass and volume of whole nodules and their solid components were measured. Intra- and interobserver variability and differences between FBP and MBIR were then evaluated using the Bland-Altman method and Wilcoxon tests. Longest diameter, volume and mass of nodules and those of their solid components were significantly higher using MBIR than FBP (p < 0.05). There were no significant differences in intra- or interobserver variability between FBP and MBIR (p > 0.05). Semi-automatic measurements of SSNs significantly differed between FBP and MBIR; however, the differences were within the range of measurement variability. • Intra- and interobserver reproducibility of measurements did not differ between FBP and MBIR. • Differences in SSNs' semi-automatic measurement induced by reconstruction algorithms were not clinically significant. • Semi-automatic measurement may be conducted regardless of reconstruction algorithm. • SSNs' semi-automated classification agreement (pure vs. part-solid) did not significantly differ between algorithms.

  10. Mean-variance analysis of block-iterative reconstruction algorithms modeling 3D detector response in SPECT

    Science.gov (United States)

    Lalush, D. S.; Tsui, B. M. W.

    1998-06-01

    We study the statistical convergence properties of two fast iterative reconstruction algorithms, the rescaled block-iterative (RBI) and ordered subset (OS) EM algorithms, in the context of cardiac SPECT with 3D detector response modeling. The Monte Carlo method was used to generate nearly noise-free projection data modeling the effects of attenuation, detector response, and scatter from the MCAT phantom. One thousand noise realizations were generated with an average count level approximating a typical Tl-201 cardiac study. Each noise realization was reconstructed using the RBI and OS algorithms for cases with and without detector response modeling. For each iteration up to twenty, we generated mean and variance images, as well as covariance images for six specific locations. Both OS and RBI converged in the mean to results that were close to the noise-free ML-EM result using the same projection model. When detector response was not modeled in the reconstruction, RBI exhibited considerably lower noise variance than OS for the same resolution. When 3D detector response was modeled, RBI-EM provided a small improvement in the tradeoff between noise level and resolution recovery, primarily in the axial direction, while OS required about half the number of iterations of RBI to reach the same resolution. We conclude that OS is faster than RBI, but may be sensitive to errors in the projection model. Both OS-EM and RBI-EM are effective alternatives to the ML-EM algorithm, but noise level and speed of convergence depend on the projection model used.

  11. Novel Fourier-based iterative reconstruction for sparse fan projection using alternating direction total variation minimization

    International Nuclear Information System (INIS)

    Jin Zhao; Zhang Han-Ming; Yan Bin; Li Lei; Wang Lin-Yuan; Cai Ai-Long

    2016-01-01

    Sparse-view x-ray computed tomography (CT) imaging is an interesting topic in the CT field and can efficiently decrease the radiation dose. Compared with spatial-domain reconstruction, a Fourier-based algorithm has advantages in reconstruction speed and memory usage. A novel Fourier-based iterative reconstruction technique that utilizes the non-uniform fast Fourier transform (NUFFT) is presented in this work, along with advanced total variation (TV) regularization, for sparse-view fan-beam CT. The introduction of a selective matrix contributes to improved reconstruction quality. The new method employs the NUFFT and its adjoint to iterate back and forth between the Fourier and image spaces. The performance of the proposed algorithm is demonstrated through a series of digital simulations and experimental phantom studies. Results of the proposed algorithm are compared with those of existing TV-regularized techniques based on the compressed sensing method, as well as with the basic algebraic reconstruction technique. Compared with the existing TV-regularized techniques, the proposed Fourier-based technique significantly improves the convergence rate and reduces the memory requirement. (paper)
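
    The iterate-back-and-forth structure described here can be demonstrated with an ordinary FFT on a Cartesian grid standing in for the NUFFT on fan-beam data. In the sketch below the sampling mask and phantom are made up, and the TV regularization step is omitted for brevity:

```python
import numpy as np

rng = np.random.default_rng(0)
img = np.zeros((128, 128))
img[40:90, 50:80] = 1.0                      # toy object
mask = rng.random(img.shape) < 0.25          # sparse Fourier-domain samples
data = np.fft.fft2(img)                      # "measured" Fourier data

x = np.zeros_like(img)
for _ in range(200):
    F = np.fft.fft2(x)
    F[mask] = data[mask]                     # enforce measured data (Fourier space)
    x = np.fft.ifft2(F).real
    x = np.clip(x, 0, None)                  # physical constraint (image space)
    # (the paper additionally applies TV regularization at this step)
```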

  12. Improving the efficiency of molecular replacement by utilizing a new iterative transform phasing algorithm

    Energy Technology Data Exchange (ETDEWEB)

    He, Hongxing; Fang, Hengrui [Department of Physics and Texas Center for Superconductivity, University of Houston, Houston, Texas 77204 (United States); Miller, Mitchell D. [Department of BioSciences, Rice University, Houston, Texas 77005 (United States); Phillips, George N. Jr [Department of BioSciences, Rice University, Houston, Texas 77005 (United States); Department of Chemistry, Rice University, Houston, Texas 77005 (United States); Department of Biochemistry, University of Wisconsin-Madison, Madison, Wisconsin 53706 (United States); Su, Wu-Pei, E-mail: wpsu@uh.edu [Department of Physics and Texas Center for Superconductivity, University of Houston, Houston, Texas 77204 (United States)

    2016-07-15

    An iterative transform algorithm is proposed to improve the conventional molecular-replacement method for solving the phase problem in X-ray crystallography. Several examples of successful trial calculations carried out with real diffraction data are presented. An iterative transform method proposed previously for direct phasing of high-solvent-content protein crystals is employed for enhancing the molecular-replacement (MR) algorithm in protein crystallography. Target structures that are resistant to conventional MR due to insufficient similarity between the template and target structures might be tractable with this modified phasing method. Trial calculations involving three different structures are described to test and illustrate the methodology. The relationship of the approach to PHENIX Phaser-MR and MR-Rosetta is discussed.

  13. New matrix bounds and iterative algorithms for the discrete coupled algebraic Riccati equation

    Science.gov (United States)

    Liu, Jianzhou; Wang, Li; Zhang, Juan

    2017-11-01

    The discrete coupled algebraic Riccati equation (DCARE) has wide applications in control theory and linear systems. In general, previous treatments of the DCARE handle each term of the coupled term separately; in this paper, we consider the coupled term as a whole, which differs from recent results, and our method incurs less error when eigenvalue inequalities are applied to the coupled term. Using the properties of special matrices and eigenvalue inequalities, we propose several upper and lower matrix bounds for the solution of the DCARE. We then discuss iterative algorithms for its solution; in the fixed-point iterative algorithms, the admissible range of the Lipschitz factor is wider than in recent results. Finally, we offer corresponding numerical examples to illustrate the effectiveness of the derived results.
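
    For the uncoupled case, the fixed-point iteration that such algorithms generalize takes only a few lines. The matrices below are illustrative; the coupled DCARE versions analysed in the paper add coupling terms to this same pattern.

```python
import numpy as np

# Fixed-point iteration for the (uncoupled) discrete algebraic Riccati
# equation X = A'XA - A'XB (R + B'XB)^{-1} B'XA + Q (illustrative matrices).
A = np.array([[0.9, 0.2], [0.0, 0.8]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])

X = Q.copy()
for k in range(1000):
    gain = np.linalg.solve(R + B.T @ X @ B, B.T @ X @ A)
    X_new = A.T @ X @ A - A.T @ X @ B @ gain + Q
    if np.linalg.norm(X_new - X, ord='fro') < 1e-12:
        break
    X = X_new

print("iterations:", k)
print("X =\n", X)
```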

  14. Iterative schemes for parallel Sn algorithms in a shared-memory computing environment

    International Nuclear Information System (INIS)

    Haghighat, A.; Hunter, M.A.; Mattis, R.E.

    1995-01-01

    Several two-dimensional spatial domain partitioning Sn transport theory algorithms are developed on the basis of different iterative schemes. These algorithms are incorporated into TWOTRAN-II and tested on the shared-memory CRAY Y-MP C90 computer. For a series of fixed-source r-z geometry homogeneous problems, it is demonstrated that the concurrent red-black algorithms may result in large parallel efficiencies (>60%) on the C90. It is also demonstrated that for a realistic shielding problem, the use of the negative flux fixup causes high load imbalance, which results in a significant loss of parallel efficiency.

  15. Iterative learning control an optimization paradigm

    CERN Document Server

    Owens, David H

    2016-01-01

    This book develops a coherent theoretical approach to algorithm design for iterative learning control based on the use of optimization concepts. Concentrating initially on linear, discrete-time systems, the author gives the reader access to theories based on either signal or parameter optimization. Although the two approaches are shown to be related in a formal mathematical sense, the text presents them separately because their relevant algorithm design issues are distinct and give rise to different performance capabilities. Together with algorithm design, the text demonstrates that there are new algorithms capable of incorporating input and output constraints, enabling the algorithm to reconfigure systematically in order to meet the requirements of different reference signals, and also supporting new algorithms for local convergence of nonlinear iterative control. Simulation and application studies are used to illustrate algorithm properties and performance in systems like gantry robots and other elect...

  16. LiDAR-IMU Time Delay Calibration Based on Iterative Closest Point and Iterated Sigma Point Kalman Filter.

    Science.gov (United States)

    Liu, Wanli

    2017-03-08

    The time delay calibration between Light Detection and Ranging (LiDAR) and Inertial Measurement Units (IMUs) is an essential prerequisite for their combined applications. However, the correspondences between LiDAR and IMU measurements are usually unknown, and thus cannot be computed directly for the time delay calibration. In order to solve the problem of LiDAR-IMU time delay calibration, this paper presents a fusion method based on iterative closest point (ICP) and iterated sigma point Kalman filter (ISPKF), which combines the advantages of both: the ICP algorithm can precisely determine the unknown transformation between LiDAR and IMU, and the ISPKF algorithm can optimally estimate the time delay calibration parameters. First, the coordinate transformation from the LiDAR frame to the IMU frame is realized. Second, the measurement model and time delay error model of LiDAR and IMU are established. Third, the methodology of the ICP and ISPKF procedure is presented for LiDAR-IMU time delay calibration. Experimental results are presented that validate the proposed method and demonstrate that the time delay error can be accurately calibrated.

  17. Radiation dose reduction in medical x-ray CT via Fourier-based iterative reconstruction.

    Science.gov (United States)

    Fahimian, Benjamin P; Zhao, Yunzhe; Huang, Zhifeng; Fung, Russell; Mao, Yu; Zhu, Chun; Khatonabadi, Maryam; DeMarco, John J; Osher, Stanley J; McNitt-Gray, Michael F; Miao, Jianwei

    2013-03-01

    A Fourier-based iterative reconstruction technique, termed Equally Sloped Tomography (EST), is developed in conjunction with advanced mathematical regularization to investigate radiation dose reduction in x-ray CT. The method is experimentally implemented on fan-beam CT and evaluated as a function of imaging dose on a series of image quality phantoms and anonymous pediatric patient data sets. Numerical simulation experiments are also performed to explore the extension of EST to helical cone-beam geometry. EST is a Fourier-based iterative algorithm, which iterates back and forth between real and Fourier space utilizing the algebraically exact pseudopolar fast Fourier transform (PPFFT). In each iteration, physical constraints and mathematical regularization are applied in real space, while the measured data are enforced in Fourier space. The algorithm is automatically terminated when a proposed termination criterion is met. Experimentally, fan-beam projections were acquired by the Siemens z-flying focal spot technology, and subsequently interleaved and rebinned to a pseudopolar grid. Image quality phantoms were scanned at systematically varied mAs settings, reconstructed by EST and conventional reconstruction methods such as filtered back projection (FBP), and quantified using metrics including resolution, signal-to-noise ratios (SNRs), and contrast-to-noise ratios (CNRs). Pediatric data sets were reconstructed at their original acquisition settings and additionally simulated to lower dose settings for comparison and evaluation of the potential for radiation dose reduction. Numerical experiments were conducted to quantify EST and other iterative methods in terms of image quality and computation time. The extension of EST to helical cone-beam CT was implemented by using the advanced single-slice rebinning (ASSR) method. Based on the phantom and pediatric patient fan-beam CT data, it is demonstrated that EST reconstructions with the lowest scanner flux setting of 39 mAs ...

  18. Radiation dose reduction in medical x-ray CT via Fourier-based iterative reconstruction

    International Nuclear Information System (INIS)

    Fahimian, Benjamin P.; Zhao Yunzhe; Huang Zhifeng; Fung, Russell; Zhu Chun; Miao Jianwei; Mao Yu; Khatonabadi, Maryam; DeMarco, John J.; McNitt-Gray, Michael F.; Osher, Stanley J.

    2013-01-01

    Purpose: A Fourier-based iterative reconstruction technique, termed Equally Sloped Tomography (EST), is developed in conjunction with advanced mathematical regularization to investigate radiation dose reduction in x-ray CT. The method is experimentally implemented on fan-beam CT and evaluated as a function of imaging dose on a series of image quality phantoms and anonymous pediatric patient data sets. Numerical simulation experiments are also performed to explore the extension of EST to helical cone-beam geometry. Methods: EST is a Fourier based iterative algorithm, which iterates back and forth between real and Fourier space utilizing the algebraically exact pseudopolar fast Fourier transform (PPFFT). In each iteration, physical constraints and mathematical regularization are applied in real space, while the measured data are enforced in Fourier space. The algorithm is automatically terminated when a proposed termination criterion is met. Experimentally, fan-beam projections were acquired by the Siemens z-flying focal spot technology, and subsequently interleaved and rebinned to a pseudopolar grid. Image quality phantoms were scanned at systematically varied mAs settings, reconstructed by EST and conventional reconstruction methods such as filtered back projection (FBP), and quantified using metrics including resolution, signal-to-noise ratios (SNRs), and contrast-to-noise ratios (CNRs). Pediatric data sets were reconstructed at their original acquisition settings and additionally simulated to lower dose settings for comparison and evaluation of the potential for radiation dose reduction. Numerical experiments were conducted to quantify EST and other iterative methods in terms of image quality and computation time. The extension of EST to helical cone-beam CT was implemented by using the advanced single-slice rebinning (ASSR) method. Results: Based on the phantom and pediatric patient fan-beam CT data, it is demonstrated that EST reconstructions with the lowest

  19. A Superlinearly Convergent O(square root of nL)-Iteration Algorithm for Linear Programming

    National Research Council Canada - National Science Library

    Ye, Y; Tapia, Richard A; Zhang, Y

    1991-01-01

    .... We demonstrate that the modified algorithm maintains its O(square root of nL)-iteration complexity, while exhibiting superlinear convergence for general problems and quadratic convergence for nondegenerate problems...

  1. Convergence and resolution recovery of block-iterative EM algorithms modeling 3D detector response in SPECT

    International Nuclear Information System (INIS)

    Lalush, D.S.; Tsui, B.M.W.; Karimi, S.S.

    1996-01-01

    We evaluate fast reconstruction algorithms including ordered subsets-EM (OS-EM) and Rescaled Block Iterative EM (RBI-EM) in fully 3D SPECT applications on the basis of their convergence and resolution recovery properties as iterations proceed. Using a 3D computer-simulated phantom consisting of 3D Gaussian objects, we simulated projection data that includes only the effects of sampling and detector response of a parallel-hole collimator. Reconstructions were performed using each of the three algorithms (ML-EM, OS-EM, and RBI-EM) modeling the 3D detector response in the projection function. Resolution recovery was evaluated by fitting Gaussians to each of the four objects in the iterated image estimates at selected intervals. Results show that OS-EM and RBI-EM behave identically in this case; their resolution recovery results are virtually indistinguishable. Their resolution behavior appears to be very similar to that of ML-EM, but accelerated by a factor of twenty. For all three algorithms, smaller objects take more iterations to converge. Next, we consider the effect noise has on convergence. For both noise-free and noisy data, we evaluate the log likelihood function at each subiteration of OS-EM and RBI-EM, and at each iteration of ML-EM. With noisy data, both OS-EM and RBI-EM give results for which the log-likelihood function oscillates. Especially for 180-degree acquisitions, RBI-EM oscillates less than OS-EM. Both OS-EM and RBI-EM appear to converge to solutions, but not to the ML solution. We conclude that both OS-EM and RBI-EM can be effective algorithms for fully 3D SPECT reconstruction. Both recover resolution similarly to ML-EM, only more quickly
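
    The multiplicative ML-EM update that OS-EM and RBI-EM accelerate can be sketched with a generic system matrix; the toy matrix below stands in for the SPECT projector with 3D detector response, and the data are made up.

```python
import numpy as np

def ml_em(H, y, n_iter=50):
    """Classic ML-EM for emission tomography: x <- x * H'(y / Hx) / H'1.

    H is the system (projection) matrix and y the measured counts. OS-EM
    and RBI-EM apply the same multiplicative update over subsets of rows.
    """
    x = np.ones(H.shape[1])
    sens = H.T @ np.ones(H.shape[0])          # sensitivity image H'1
    for _ in range(n_iter):
        proj = H @ x
        ratio = np.divide(y, proj, out=np.zeros_like(y), where=proj > 0)
        x *= (H.T @ ratio) / np.maximum(sens, 1e-12)
    return x

# Toy example with a random nonnegative system matrix
rng = np.random.default_rng(1)
H = rng.random((200, 64))
x_true = rng.random(64)
y = rng.poisson(H @ x_true).astype(float)     # Poisson-noisy "projections"
x_hat = ml_em(H, y)
```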

  2. Hyperspectral chemical plume detection algorithms based on multidimensional iterative filtering decomposition.

    Science.gov (United States)

    Cicone, A; Liu, J; Zhou, H

    2016-04-13

    Chemicals released in the air can be extremely dangerous for human beings and the environment. Hyperspectral images can be used to identify chemical plumes, however the task can be extremely challenging. Assuming we know a priori that some chemical plume, with a known frequency spectrum, has been photographed using a hyperspectral sensor, we can use standard techniques such as the so-called matched filter or adaptive cosine estimator, plus a properly chosen threshold value, to identify the position of the chemical plume. However, due to noise and inadequate sensing, the accurate identification of chemical pixels is not easy even in this apparently simple situation. In this paper, we present a post-processing tool that, in a completely adaptive and data-driven fashion, allows us to improve the performance of any classification methods in identifying the boundaries of a plume. This is done using the multidimensional iterative filtering (MIF) algorithm (Cicone et al. 2014 (http://arxiv.org/abs/1411.6051); Cicone & Zhou 2015 (http://arxiv.org/abs/1507.07173)), which is a non-stationary signal decomposition method like the pioneering empirical mode decomposition method (Huang et al. 1998 Proc. R. Soc. Lond. A 454, 903. (doi:10.1098/rspa.1998.0193)). Moreover, based on the MIF technique, we propose also a pre-processing method that allows us to decorrelate and mean-centre a hyperspectral dataset. The cosine similarity measure, which often fails in practice, appears to become a successful and outperforming classifier when equipped with such a pre-processing method. We show some examples of the proposed methods when applied to real-life problems. © 2016 The Author(s).

  3. Comparing two iteration algorithms of Broyden electron density mixing through an atomic electronic structure computation

    International Nuclear Information System (INIS)

    Zhang Man-Hong

    2016-01-01

    By performing an electronic structure computation of a Si atom, we compare two iteration algorithms for Broyden electron density mixing in the literature. One was proposed by Johnson and implemented in the well-known VASP code; the other was given by Eyert. We solve the Kohn-Sham equation using a conventional outward/inward integration of the differential equation and then connect the two parts of the solution at the classical turning points, which differs from the matrix eigenvalue solution method used in the VASP code. Compared with Johnson's algorithm, the one proposed by Eyert requires fewer total iterations. (paper)

  4. Single-step reinitialization and extending algorithms for level-set based multi-phase flow simulations

    Science.gov (United States)

    Fu, Lin; Hu, Xiangyu Y.; Adams, Nikolaus A.

    2017-12-01

    We propose efficient single-step formulations for reinitialization and extending algorithms, which are critical components of level-set based interface-tracking methods. The level-set field is reinitialized with a single-step (non-iterative) "forward tracing" algorithm. A minimum set of cells is defined that describes the interface, and reinitialization employs only data from these cells. Fluid states are extrapolated or extended across the interface by a single-step "backward tracing" algorithm. Both algorithms, which are motivated by analogy to ray-tracing, avoid the multiple block-boundary data exchanges that are inevitable for iterative reinitialization and extending approaches within a parallel-computing environment. The single-step algorithms are combined with a multi-resolution conservative sharp-interface method and validated by a wide range of benchmark test cases. We demonstrate that the proposed reinitialization method achieves second-order accuracy in conserving the volume of each phase. The interface location is invariant to reapplication of the single-step reinitialization. Generally, we observe smaller absolute errors than for standard iterative reinitialization on the same grid. The computational efficiency is higher than for the standard and typical high-order iterative reinitialization methods; we observe a 2- to 6-times efficiency improvement over the standard method for serial execution. The proposed single-step extending algorithm, which is commonly employed for assigning data to ghost cells with ghost-fluid or conservative interface interaction methods, shows about a 10-times efficiency improvement over the standard method while maintaining the same accuracy. Despite their simplicity, the proposed algorithms offer an efficient and robust alternative to iterative reinitialization and extending methods for level-set based multi-phase simulations.

  5. MPPT-Based Control Algorithm for PV System Using iteration-PSO under Irregular shadow Conditions

    Directory of Open Access Journals (Sweden)

    M. Abdulkadir

    2017-02-01

    Full Text Available Conventional maximum power point tracking (MPPT) techniques can hardly track the global maximum power point (GMPP) because the power-voltage characteristic of photovoltaic (PV) arrays exhibits multiple local peaks under irregular shadow, so they easily settle on a local maximum power point. To tackle this deficiency, an efficient iteration particle swarm optimization (IPSO) has been developed that improves the solution quality and convergence speed of the traditional PSO, so that the GMPP can be tracked effectively under irregular shadow conditions. The proposed technique has the advantages of a simple structure, fast response, strong robustness and convenient implementation, and it is applied to MPPT control of a PV system under irregular shadow to solve the multi-peak optimization problem caused by partial shading. Recently, the dynamic MPPT performance under varying irradiance conditions has received much attention in the PV community; since the release of the European standard EN 50530, which defines recommended varying irradiance profiles, researchers have been expected to improve dynamic MPPT performance accordingly. This paper therefore evaluates the dynamic MPPT performance using the EN 50530 standard. The simulation results show that the iteration-PSO method can quickly track the global MPP, with faster tracking speed and higher dynamic MPPT efficiency under EN 50530 than conventional PSO.
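
    A bare-bones PSO tracking the global peak of a made-up two-peak P-V curve illustrates why swarm methods escape local MPPs. This is standard PSO with fixed coefficients, not the paper's iteration-PSO variant, and the curve and parameters are invented for illustration.

```python
import numpy as np

def pv_power(v):
    # Toy P-V characteristic: local peak near v = 0.3, global peak near v = 0.7
    return np.exp(-((v - 0.3) ** 2) / 0.005) + 1.4 * np.exp(-((v - 0.7) ** 2) / 0.004)

rng = np.random.default_rng(2)
n, w, c1, c2 = 20, 0.7, 1.5, 1.5          # swarm size and PSO coefficients
v = rng.uniform(0, 1, n)                  # particle positions (voltages)
vel = np.zeros(n)
pbest, pbest_val = v.copy(), pv_power(v)
gbest = pbest[np.argmax(pbest_val)]

for _ in range(100):
    r1, r2 = rng.random(n), rng.random(n)
    vel = w * vel + c1 * r1 * (pbest - v) + c2 * r2 * (gbest - v)
    v = np.clip(v + vel, 0, 1)
    val = pv_power(v)
    better = val > pbest_val
    pbest[better], pbest_val[better] = v[better], val[better]
    gbest = pbest[np.argmax(pbest_val)]

print("global MPP voltage ~", gbest)
```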

  6. Iterative-Transform Phase Retrieval Using Adaptive Diversity

    Science.gov (United States)

    Dean, Bruce H.

    2007-01-01

    A phase-diverse iterative-transform phase-retrieval algorithm enables high-spatial-frequency, high-dynamic-range, image-based wavefront sensing. [The terms phase-diverse, phase retrieval, image-based, and wavefront sensing are defined in the first of the two immediately preceding articles, Broadband Phase Retrieval for Image-Based Wavefront Sensing (GSC-14899-1).] As described below, no prior phase-retrieval algorithm has offered both high dynamic range and the capability to recover high-spatial-frequency components. Each of the previously developed image-based phase-retrieval techniques can be classified into one of two categories: iterative transform or parametric. Among the modifications of the original iterative-transform approach has been the introduction of a defocus diversity function (also defined in the cited companion article). Modifications of the original parametric approach have included minimizing alternative objective functions as well as implementing a variety of nonlinear optimization methods. The iterative-transform approach offers the ability to recover low, middle, and high spatial frequencies, but has the disadvantage of a dynamic range limited to one wavelength or less. In contrast, parametric phase retrieval offers the advantage of high dynamic range, but is poorly suited for recovering higher-spatial-frequency aberrations. The present phase-diverse iterative-transform phase-retrieval algorithm offers both the high-spatial-frequency capability of the iterative-transform approach and the high dynamic range of parametric phase-recovery techniques. In implementation, this is a focus-diverse iterative-transform phase-retrieval algorithm that incorporates an adaptive diversity function, which makes it possible to avoid phase unwrapping while preserving high-spatial-frequency recovery. The algorithm includes an inner and an outer loop (see figure). An initial estimate of phase is used to start the algorithm on the inner loop, wherein ...

  7. Metal-induced streak artifact reduction using iterative reconstruction algorithms in x-ray computed tomography image of the dentoalveolar region.

    Science.gov (United States)

    Dong, Jian; Hayakawa, Yoshihiko; Kannenberg, Sven; Kober, Cornelia

    2013-02-01

    The objective of this study was to reduce metal-induced streak artifacts in oral and maxillofacial x-ray computed tomography (CT) images by developing a fast statistical image reconstruction system using iterative reconstruction algorithms. Adjacent CT images often depict similar anatomical structures in thin slices, so, first, images were reconstructed using the same projection data as an artifact-free image. Second, images were processed by the successive iterative restoration method, in which projection data were generated from the reconstructed image in sequence. Besides the maximum likelihood-expectation maximization (ML-EM) algorithm, the ordered subset-expectation maximization (OS-EM) algorithm was examined. A small region of interest (ROI) setting and reverse processing were also applied to improve performance. Both algorithms reduced artifacts while slightly decreasing gray levels. The OS-EM and small ROI reduced the processing duration without apparent detriment. Sequential and reverse processing did not show apparent effects. The two alternatives in iterative reconstruction methods were effective for artifact reduction, and the OS-EM algorithm and small ROI setting improved the performance. Copyright © 2012 Elsevier Inc. All rights reserved.

  8. Reconstruction of sparse-view X-ray computed tomography using adaptive iterative algorithms.

    Science.gov (United States)

    Liu, Li; Lin, Weikai; Jin, Mingwu

    2015-01-01

    In this paper, we propose two reconstruction algorithms for sparse-view X-ray computed tomography (CT). Treating the reconstruction problems as data-fidelity-constrained total variation (TV) minimization, both algorithms adopt an alternating two-stage strategy: projection onto convex sets (POCS) for the data fidelity and non-negativity constraints, and steepest descent for TV minimization. The novelty of this work is to determine the iterative parameters automatically from the data, thus avoiding tedious manual parameter tuning. In the TV minimization, the step sizes of steepest descent are adaptively adjusted according to the difference from the POCS update in either the projection domain or the image domain, while the step size of the algebraic reconstruction technique (ART) in POCS is determined based on the data noise level. In addition, projection errors are compared with the error bound to decide whether to perform ART, so as to reduce computational costs. The performance of the proposed methods is studied and evaluated using both simulated and physical phantom data. Our methods with automatic parameter tuning achieve similar, if not better, reconstruction performance compared with a representative two-stage algorithm. Copyright © 2014 Elsevier Ltd. All rights reserved.

  9. Transmission-less attenuation correction in time-of-flight PET: analysis of a discrete iterative algorithm

    International Nuclear Information System (INIS)

    Defrise, Michel; Rezaei, Ahmadreza; Nuyts, Johan

    2014-01-01

    The maximum likelihood attenuation correction factors (MLACF) algorithm has been developed to calculate the maximum-likelihood estimate of the activity image and the attenuation sinogram in time-of-flight (TOF) positron emission tomography, using only emission data without prior information on the attenuation. We consider the case of a Poisson model of the data, in the absence of scatter or random background. In this case the maximization with respect to the attenuation factors can be achieved in a closed form and the MLACF algorithm works by updating the activity. Despite promising numerical results, the convergence of this algorithm has not been analysed. In this paper we derive the algorithm and demonstrate that the MLACF algorithm monotonically increases the likelihood, is asymptotically regular, and that the limit points of the iteration are stationary points of the likelihood. Because the problem is not convex, however, the limit points might be saddle points or local maxima. To obtain some empirical insight into the latter question, we present data obtained by applying MLACF to 2D simulated TOF data, using a large number of iterations and different initializations. (paper)

  10. A flexibility-based method via the iterated improved reduction system and the cuckoo optimization algorithm for damage quantification with limited sensors

    International Nuclear Information System (INIS)

    Zare Hosseinzadeh, Ali; Ghodrati Amiri, Gholamreza; Bagheri, Abdollah; Koo, Ki-Young

    2014-01-01

    In this paper, a novel and effective damage diagnosis algorithm is proposed to localize and quantify structural damage using incomplete modal data, considering the limited number of sensors attached to a structure. The damage detection problem is formulated as an optimization problem by computing static displacements in the reduced model of a structure subjected to a single static load. The static responses are computed through the flexibility matrix of the damaged structure, obtained from its incomplete modal data. In the algorithm, an iterated improved reduction system method is applied to prepare an accurate reduced model of the structure. The optimization problem is solved via a new evolutionary optimization algorithm called the cuckoo optimization algorithm. The efficiency and robustness of the presented method are demonstrated through three numerical examples. Moreover, the efficiency of the method is verified by an experimental study of a five-story shear building structure on a shaking table considering only two sensors. The damage identification results obtained for the numerical and experimental studies show the suitable and stable performance of the proposed damage identification method for structures with limited sensors. (paper)

  11. Iterative image reconstruction algorithms in coronary CT angiography improve the detection of lipid-core plaque - a comparison with histology

    International Nuclear Information System (INIS)

    Puchner, Stefan B.; Ferencik, Maros; Maurovich-Horvat, Pal; Nakano, Masataka; Otsuka, Fumiyuki; Virmani, Renu; Kauczor, Hans-Ulrich; Hoffmann, Udo; Schlett, Christopher L.

    2015-01-01

    To evaluate whether iterative reconstruction algorithms improve the diagnostic accuracy of coronary CT angiography (CCTA) for detection of lipid-core plaque (LCP) compared to histology. CCTA and histological data were acquired from three ex vivo hearts. CCTA images were reconstructed using filtered back projection (FBP), adaptive-statistical (ASIR) and model-based (MBIR) iterative algorithms. Vessel cross-sections were co-registered between FBP/ASIR/MBIR and histology. Plaque area <60 HU was larger in cross-sections with LCP than in those without (mm2: 5.78 ± 2.29 vs. 3.39 ± 1.68 for FBP; 5.92 ± 1.87 vs. 3.43 ± 1.62 for ASIR; 6.40 ± 1.55 vs. 3.49 ± 1.50 for MBIR; all p < 0.0001). The AUC for detecting LCP was 0.803/0.850/0.903 for FBP/ASIR/MBIR and was significantly higher for MBIR compared to FBP (p = 0.01). MBIR increased sensitivity for detection of LCP by CCTA. Plaque area <60 HU in CCTA was associated with LCP in histology regardless of the reconstruction algorithm. However, MBIR demonstrated higher accuracy for detecting LCP, which may improve vulnerable plaque detection by CCTA. (orig.)

  12. Alignment Condition-Based Robust Adaptive Iterative Learning Control of Uncertain Robot System

    Directory of Open Access Journals (Sweden)

    Guofeng Tong

    2014-04-01

    Full Text Available This paper proposes an adaptive iterative learning control strategy integrated with saturation-based robust control for an uncertain robot system in the presence of modelling uncertainties, unknown parameters, and external disturbance under the alignment condition. An important merit is that it achieves adaptive switching of the gain matrix in both the conventional PD-type feedforward control and the robust adaptive control in the iteration domain simultaneously. The convergence analysis of the proposed control law is based on Lyapunov's direct method under the alignment initial condition. Simulation results demonstrate the faster learning rate and better robust performance of the proposed algorithm in comparison with other existing robust controllers. An actual experiment on a three-DOF robot manipulator shows its practical effectiveness.

  13. Iterative algorithm of discrete Fourier transform for processing randomly sampled NMR data sets

    International Nuclear Information System (INIS)

    Stanek, Jan; Kozminski, Wiktor

    2010-01-01

    Spectra obtained by application of multidimensional Fourier transformation (MFT) to sparsely sampled nD NMR signals are usually corrupted due to missing data. In the present paper this phenomenon is investigated on simulations and experiments. An effective iterative algorithm for artifact suppression for sparse on-grid NMR data sets is discussed in detail; it includes automated peak recognition based on statistical methods. The results enable one to study NMR spectra with a high dynamic range of peak intensities while preserving the benefits of random sampling, namely the superior resolution in indirectly measured dimensions. Experimental examples include 3D 15N- and 13C-edited NOESY-HSQC spectra of human ubiquitin.

  14. Iterated Local Search Algorithm with Strategic Oscillation for School Bus Routing Problem with Bus Stop Selection

    Directory of Open Access Journals (Sweden)

    Mohammad Saied Fallah Niasar

    2017-02-01

    Full Text Available The school bus routing problem (SBRP) represents a variant of the well-known vehicle routing problem. The main goal of this study is to pick up students allocated to bus stops and generate routes, including the selected stops, in order to carry the students to school. In this paper, we propose a simple but effective metaheuristic approach with two features: first, it utilizes large neighborhood structures for a deeper exploration of the search space; second, it executes an efficient transition between the feasible and infeasible portions of the search space, where exploration of the infeasible area is controlled by a dynamic penalty function that converts infeasible solutions into feasible ones. Two metaheuristics, called N-ILS (a variant of Nearest Neighbourhood with Iterated Local Search) and I-ILS (a variant of Insertion with Iterated Local Search), are proposed to solve the SBRP. Our experimental procedure is based on two data sets. The results show that N-ILS is able to obtain better solutions in shorter computing times; additionally, N-ILS appears to be very competitive in comparison with the best existing metaheuristics suggested for the SBRP.
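
    The ILS backbone shared by N-ILS and I-ILS can be sketched generically. The 2-opt local search and double-bridge perturbation below are common stand-ins, not the paper's SBRP-specific neighborhoods or penalty scheme; `dist` is any distance matrix given as nested lists.

```python
import random

def tour_cost(tour, dist):
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def two_opt(tour, dist):
    """Descend to a 2-opt local optimum by segment reversal."""
    improved = True
    while improved:
        improved = False
        for i in range(1, len(tour) - 1):
            for j in range(i + 1, len(tour)):
                cand = tour[:i] + tour[i:j][::-1] + tour[j:]
                if tour_cost(cand, dist) < tour_cost(tour, dist):
                    tour, improved = cand, True
    return tour

def double_bridge(tour):
    """Classic ILS perturbation: cut the tour in four and reconnect."""
    a, b, c = sorted(random.sample(range(1, len(tour)), 3))
    return tour[:a] + tour[b:c] + tour[a:b] + tour[c:]

def ils(dist, n_iter=200):
    best = two_opt(list(range(len(dist))), dist)
    for _ in range(n_iter):
        cand = two_opt(double_bridge(best), dist)
        if tour_cost(cand, dist) < tour_cost(best, dist):
            best = cand                     # accept only improving restarts
    return best
```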

  15. Modeling design iteration in product design and development and its solution by a novel artificial bee colony algorithm.

    Science.gov (United States)

    Chen, Tinggui; Xiao, Renbin

    2014-01-01

    Due to fierce market competition, how to improve product quality and reduce development cost determines the core competitiveness of enterprises. Design iteration generally increases product cost and delays development time, so how to identify and model couplings among tasks in product design and development has become an important issue for enterprises to settle. In this paper, the shortcomings of the classic work transformation matrix (WTM) model are discussed, and the tearing approach together with an inner iteration method is used to complement it; the artificial bee colony (ABC) algorithm is also introduced to find optimal decoupling schemes. First, the tearing approach and the inner iteration method are analyzed for solving coupled sets. Second, a hybrid iteration model combining these two techniques is set up. Third, a high-performance swarm intelligence algorithm, artificial bee colony, is adopted to realize problem-solving. Finally, an engineering design of a chemical processing system is given in order to verify its reasonability and effectiveness.

  16. Multi-stage phase retrieval algorithm based upon the gyrator transform.

    Science.gov (United States)

    Rodrigo, José A; Duadi, Hamootal; Alieva, Tatiana; Zalevsky, Zeev

    2010-01-18

    The gyrator transform is a useful tool for optical information processing applications. In this work we propose a multi-stage phase retrieval approach based on this operation as well as on the well-known Gerchberg-Saxton algorithm. It results in an iterative algorithm able to retrieve the phase information using several measurements of the gyrator transform power spectrum. The viability and performance of the proposed algorithm is demonstrated by means of several numerical simulations and experimental results.
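
    The underlying Gerchberg-Saxton iteration is compact. The sketch below alternates between two planes linked by an ordinary Fourier transform, whereas the paper uses gyrator transforms at several rotation angles and several power-spectrum measurements; the object and magnitudes here are synthetic.

```python
import numpy as np

rng = np.random.default_rng(3)
amp = 0.5 + rng.random((64, 64))                      # unknown object amplitude
obj = amp * np.exp(1j * 2 * np.pi * rng.random((64, 64)))
mag_in = np.abs(obj)                                  # known input-plane magnitude
mag_out = np.abs(np.fft.fft2(obj))                    # measured spectrum magnitude

# Gerchberg-Saxton: alternately impose the magnitude constraint in each plane
field = mag_in * np.exp(1j * 2 * np.pi * rng.random(obj.shape))
for _ in range(500):
    F = np.fft.fft2(field)
    F = mag_out * np.exp(1j * np.angle(F))            # impose measured magnitude
    field = np.fft.ifft2(F)
    field = mag_in * np.exp(1j * np.angle(field))     # impose known magnitude
```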

  17. Multi-stage phase retrieval algorithm based upon the gyrator transform

    OpenAIRE

    Rodrigo Martín-Romo, José Augusto; Duadi, Hamootal; Alieva, Tatiana Krasheninnikova; Zalevsky, Zeev

    2010-01-01

    The gyrator transform is a useful tool for optical information processing applications. In this work we propose a multi-stage phase retrieval approach based on this operation as well as on the well-known Gerchberg-Saxton algorithm. It results in an iterative algorithm able to retrieve the phase information using several measurements of the gyrator transform power spectrum. The viability and performance of the proposed algorithm is demonstrated by means of several numerical simulations and exp...

  18. Multi-level iteration optimization for diffusive critical calculation

    International Nuclear Information System (INIS)

    Li Yunzhao; Wu Hongchun; Cao Liangzhi; Zheng Youqi

    2013-01-01

    In nuclear reactor core neutron diffusion calculations, there are usually at least three levels of iteration, namely the fission source iteration, the multi-group scattering source iteration and the within-group iteration. Unnecessary calculations occur if the inner iterations are converged extremely tightly, but the convergence of the outer iteration may be affected if the inner ones are converged insufficiently tightly. Thus, a common scheme suited to most problems is proposed in this work to find the optimized settings automatically. The basic idea is to optimize the relative error tolerance of the inner iteration based on the corresponding convergence rate of the outer iteration. Numerical results for a typical thermal neutron reactor core problem and a fast neutron reactor core problem demonstrate the effectiveness of this algorithm in the variational nodal method code NODAL with the Gauss-Seidel left-preconditioned multi-group GMRES algorithm. The multi-level iteration optimization scheme reduces the number of multi-group and within-group iterations by factors of about 1-2 and 5-21, respectively. (authors)

  19. A novel block cryptosystem based on iterating a chaotic map

    International Nuclear Information System (INIS)

    Xiang Tao; Liao Xiaofeng; Tang Guoping; Chen Yong; Wong, Kwok-wo

    2006-01-01

    A block cryptographic scheme based on iterating a chaotic map is proposed. With random binary sequences generated from the real-valued chaotic map, the plaintext block is permuted by a key-dependent shift approach and then encrypted by the classical chaotic masking technique. Simulation results show that performance and security of the proposed cryptographic scheme are better than those of existing algorithms. Advantages and security of our scheme are also discussed in detail
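
    A toy version of the idea, with a logistic-map keystream driving a key-dependent shift and masking, looks as follows. This illustrates the structure only; it is not the paper's scheme and is certainly not secure.

```python
def logistic_bytes(x0, n, r=3.99):
    """Quantize a logistic-map orbit x <- r*x*(1-x) to a byte keystream."""
    x, out = x0, []
    for _ in range(n):
        x = r * x * (1.0 - x)
        out.append(int(x * 256) & 0xFF)
    return out

def encrypt_block(block: bytes, key_x0: float) -> bytes:
    ks = logistic_bytes(key_x0, len(block) + 1)
    shift = ks[0] % len(block)
    permuted = block[shift:] + block[:shift]               # key-dependent shift
    return bytes(p ^ k for p, k in zip(permuted, ks[1:]))  # chaotic masking

ciphertext = encrypt_block(b"ATTACK AT DAWN!!", 0.3141592653)
```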

  20. Predictive Variable Gain Iterative Learning Control for PMSM

    Directory of Open Access Journals (Sweden)

    Huimin Xu

    2015-01-01

    Full Text Available A predictive variable gain strategy for iterative learning control (ILC) is introduced. Predictive variable gain iterative learning control is constructed to improve the performance of trajectory tracking, and a scheme based on it is proposed for eliminating undesirable vibrations of a PMSM system. The basic idea is that undesirable vibrations of the PMSM system are eliminated from two aspects, the iteration domain and the time domain. The predictive method is utilized to determine the learning gain in the ILC algorithm, and the contraction mapping principle is used to prove the convergence of the algorithm. Simulation results demonstrate that the predictive variable gain is superior to a constant gain and to other variable gains.

  1. Iterative algorithms for the input and state recovery from the approximate inverse of strictly proper multivariable systems

    Science.gov (United States)

    Chen, Liwen; Xu, Qiang

    2018-02-01

    This paper proposes new iterative algorithms for recovering the unknown input and state from the system outputs using an approximate inverse of a strictly proper linear time-invariant (LTI) multivariable system. A unique advantage over previous system-inversion algorithms is that output differentiation is not required. The approximate system inverse is stable owing to the systematic optimal design of a dummy feedthrough D matrix in the state-space model via feedback stabilization. The optimal design procedure avoids the trial and error otherwise needed to identify such a D matrix, which saves a tremendous amount of effort. By the derived and proved convergence criteria, such an optimal D matrix also guarantees the convergence of the algorithms. Illustrative examples show significant improvement in reference input signal tracking by the algorithms with the optimal D design over non-iterative counterparts on controllable or stabilizable LTI systems, respectively. Case studies of two Boeing-767 aircraft aerodynamic models further demonstrate the capability of the proposed methods.

  2. List-Based Simulated Annealing Algorithm for Traveling Salesman Problem

    Directory of Open Access Journals (Sweden)

    Shi-hua Zhan

    2016-01-01

    Full Text Available The simulated annealing (SA) algorithm is a popular intelligent optimization algorithm which has been successfully applied in many fields. Parameter setting is a key factor in its performance, but it is also tedious work. To simplify parameter setting, we present a list-based simulated annealing (LBSA) algorithm to solve the traveling salesman problem (TSP). The LBSA algorithm uses a novel list-based cooling schedule to control the decrease of temperature: a list of temperatures is created first, and then the maximum temperature in the list is used by the Metropolis acceptance criterion to decide whether to accept a candidate solution. The temperature list is adapted iteratively according to the topology of the solution space of the problem. The effectiveness and parameter sensitivity of the list-based cooling schedule are illustrated through benchmark TSP problems. The LBSA algorithm, whose performance is robust over a wide range of parameter values, shows competitive performance compared with some other state-of-the-art algorithms.
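
    A simplified reading of the list-based schedule for TSP: the Metropolis test uses the maximum temperature in the list, and accepted worsening moves feed implied temperatures back into the list. The 2-opt neighbourhood and the exact adaptation rule below are stand-ins, not the paper's precise scheme.

```python
import math
import random

def lbsa_tsp(dist, list_len=30, n_outer=500, p0=0.9):
    """Sketch of list-based SA for TSP with a 2-opt neighbourhood."""
    n = len(dist)

    def cost(t):
        return sum(dist[t[i]][t[(i + 1) % n]] for i in range(n))

    def neighbour(t):  # 2-opt: reverse a random segment
        i, j = sorted(random.sample(range(n), 2))
        return t[:i] + t[i:j][::-1] + t[j:]

    cur = list(range(n))
    cur_c = cost(cur)
    # Seed the temperature list so random moves are accepted w.p. about p0
    temps = sorted(max(abs(cost(neighbour(cur)) - cur_c), 1e-9) / -math.log(p0)
                   for _ in range(list_len))
    best, best_c = cur, cur_c
    for _ in range(n_outer):
        t_max = temps[-1]          # Metropolis uses the list maximum
        accepted = []
        for _ in range(n):
            cand = neighbour(cur)
            d = cost(cand) - cur_c
            if d <= 0 or random.random() < math.exp(-d / t_max):
                if d > 0:          # record the temperature this move implies
                    accepted.append(-d / math.log(max(random.random(), 1e-12)))
                cur, cur_c = cand, cur_c + d
                if cur_c < best_c:
                    best, best_c = cur, cur_c
        if accepted:               # adapt the list: replace the current max
            temps[-1] = sum(accepted) / len(accepted)
            temps.sort()
    return best, best_c
```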

  3. An Iterative Algorithm to Determine the Dynamic User Equilibrium in a Traffic Simulation Model

    Science.gov (United States)

    Gawron, C.

    An iterative algorithm to determine the dynamic user equilibrium with respect to link costs defined by a traffic simulation model is presented. Each driver's route choice is modeled by a discrete probability distribution which is used to select a route in the simulation. After each simulation run, the probability distribution is adapted to minimize the travel costs. Although the algorithm does not depend on the simulation model, a queuing model is used for performance reasons. The stability of the algorithm is analyzed for a simple example network. As an application example, a dynamic version of Braess's paradox is studied.

  4. Single-Iteration Learning Algorithm for Feed-Forward Neural Networks

    Energy Technology Data Exchange (ETDEWEB)

    Barhen, J.; Cogswell, R.; Protopopescu, V.

    1999-07-31

    A new methodology for neural learning is presented, whereby only a single iteration is required to train a feed-forward network with near-optimal results. To this aim, a virtual input layer is added to the multi-layer architecture. The virtual input layer is connected to the nominal input layer by a special nonlinear transfer function, and to the first hidden layer by regular (linear) synapses. A sequence of alternating-direction singular value decompositions is then used to determine precisely the inter-layer synaptic weights. This algorithm exploits the known separability of the linear (inter-layer propagation) and nonlinear (neuron activation) aspects of information transfer within a neural network.
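
    In the same one-shot spirit, though not the paper's virtual-input-layer construction, the linear synapses of a fixed random hidden layer can be solved exactly with a single SVD-based least-squares solve; the network sizes and target function below are made up.

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.uniform(-1, 1, (200, 2))                  # inputs
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2          # target function

W_h = rng.normal(size=(2, 50))                    # fixed random hidden weights
H = np.tanh(X @ W_h)                              # nonlinear hidden activations
W_out, *_ = np.linalg.lstsq(H, y, rcond=None)     # one "iteration": SVD solve

y_hat = np.tanh(X @ W_h) @ W_out
print("train MSE:", np.mean((y_hat - y) ** 2))
```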

  5. An iterative algorithm for the finite element approximation to convection-diffusion problems

    International Nuclear Information System (INIS)

    Buscaglia, Gustavo; Basombrio, Fernando

    1988-01-01

    An iterative algorithm for steady convection-diffusion is presented which avoids unsymmetric matrices by means of an equivalent mixed formulation. Upwinding is introduced by adding a balancing dissipation in the flow direction, but the global matrix does not depend on the velocity field. Convergence is demonstrated on standard test cases. Advantages of its use in coupled calculations of more complex problems are discussed. (Author)

  6. An ART iterative reconstruction algorithm for computed tomography of diffraction enhanced imaging

    International Nuclear Information System (INIS)

    Wang Zhentian; Zhang Li; Huang Zhifeng; Kang Kejun; Chen Zhiqiang; Fang Qiaoguang; Zhu Peiping

    2009-01-01

    X-ray diffraction enhanced imaging (DEI) has extremely high sensitivity for weakly absorbing low-Z samples in medical and biological fields. In this paper, we propose an algebraic reconstruction technique (ART) iterative algorithm for computed tomography of diffraction enhanced imaging (DEI-CT). An ordered subsets (OS) technique is used to accelerate the ART reconstruction. Few-view reconstruction is also studied, and a partial differential equation (PDE)-type filter with edge-preserving and denoising properties is used to improve the image quality and eliminate artifacts. The proposed algorithm is validated with both numerical simulations and an experiment at the Beijing Synchrotron Radiation Facility (BSRF). (authors)
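
    The row-by-row ART (Kaczmarz) update at the heart of such reconstructions is shown below on a toy system; ordered subsets simply apply the update over groups of rows per pass. The matrix and data are made up.

```python
import numpy as np

def art(A, b, n_sweeps=20, relax=0.5):
    """Basic ART (Kaczmarz): project the estimate onto one row's hyperplane
    at a time, with an under-relaxation factor for noisy data."""
    x = np.zeros(A.shape[1])
    row_norms = np.einsum('ij,ij->i', A, A)    # squared row norms
    for _ in range(n_sweeps):
        for i in range(A.shape[0]):
            if row_norms[i] > 0:
                x += relax * (b[i] - A[i] @ x) / row_norms[i] * A[i]
    return x

rng = np.random.default_rng(5)
A = rng.random((120, 60))        # toy projection matrix
x_true = rng.random(60)
x_hat = art(A, A @ x_true)       # reconstruct from noise-free projections
```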

  7. An efficient parallel algorithm: Poststack and prestack Kirchhoff 3D depth migration using flexi-depth iterations

    Science.gov (United States)

    Rastogi, Richa; Srivastava, Abhishek; Khonde, Kiran; Sirasala, Kirannmayi M.; Londhe, Ashutosh; Chavhan, Hitesh

    2015-07-01

    This paper presents an efficient parallel 3D Kirchhoff depth migration algorithm suitable for the current class of multicore architectures. The fundamental Kirchhoff depth migration algorithm exhibits inherent parallelism; however, when it comes to 3D data migration, as the data size increases, the resource requirements of the algorithm also increase. This challenges its practical implementation even on current-generation high performance computing systems, so a smart parallelization approach is essential to handle 3D data for migration. The most compute-intensive part of the Kirchhoff depth migration algorithm is the calculation of traveltime tables, due to its resource requirements such as memory/storage and I/O. In the current research work, we target this area and develop a competent parallel algorithm for poststack and prestack 3D Kirchhoff depth migration, using hybrid MPI+OpenMP programming techniques. We introduce a concept of flexi-depth iterations while depth-migrating data in parallel imaging space, using optimized traveltime table computations. This concept provides flexibility to the algorithm by migrating data in a number of depth iterations, which depends upon the available node memory and the size of the data to be migrated during runtime. Furthermore, it minimizes the requirements of storage, I/O and inter-node communication, thus making it advantageous over conventional parallelization approaches. The developed parallel algorithm is demonstrated and analysed on Yuva II, a PARAM-series supercomputer. Optimization, performance and scalability experiment results, along with the migration outcome, show the effectiveness of the parallel algorithm.

  8. Estimation of POL-iteration methods in fast running DNBR code

    Energy Technology Data Exchange (ETDEWEB)

    Kwon, Hyuk; Kim, S. J.; Seo, K. W.; Hwang, D. H. [KAERI, Daejeon (Korea, Republic of)

    2016-05-15

    In this study, various root-finding methods are applied to the POL-iteration module in SCOMS, and the POL-iteration efficiency is compared with that of the reference method; on the basis of these results, the optimum POL-iteration algorithm is selected. The POL iterates until the present local power reaches the limiting power, and the search for the limiting power is equivalent to finding the root of a nonlinear equation. The POL iteration in the online monitoring system uses a variant of the bisection method, the most robust algorithm for finding the root of a nonlinear equation; the method, which includes an interval-accelerating factor and a routine for escaping ill-posed conditions, assures the robustness of the SCOMS system. The POL-iteration module in SCOMS must satisfy a minimum-calculation-time requirement; to meet it, a non-iterative algorithm, a few-channel model and a simple steam table are implemented in SCOMS to improve the calculation time. MDNBR evaluation at a given operating condition requires the DNBR calculation at all axial locations, so an increasing number of POL iterations increases the calculation load of SCOMS significantly; the calculation efficiency of SCOMS is therefore strongly dependent on the POL iteration number. In the case study, the methods show superlinear convergence in finding the limiting power, while Brent's method shows quadratic convergence. These methods are effective and better than the reference bisection algorithm.
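
    The limiting-power search is a one-dimensional root find, so the bisection-versus-Brent comparison can be reproduced with SciPy on a made-up monotone surrogate g(p) standing in for "MDNBR at power p minus the DNBR limit":

```python
from scipy.optimize import bisect, brentq

def g(p):
    """Toy monotone surrogate: MDNBR(p) minus the DNBR limit."""
    return 2.0 / (1.0 + 0.01 * p) - 1.3

calls = {"bisect": 0, "brentq": 0}

def g_counted(name):
    def f(p):
        calls[name] += 1          # count function evaluations per solver
        return g(p)
    return f

p_bisect = bisect(g_counted("bisect"), 0.0, 200.0, xtol=1e-8)
p_brent = brentq(g_counted("brentq"), 0.0, 200.0, xtol=1e-8)
print(p_bisect, p_brent)   # same limiting power, about 53.85
print(calls)               # Brent typically needs far fewer evaluations
```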

  9. Bounded-Angle Iterative Decoding of LDPC Codes

    Science.gov (United States)

    Dolinar, Samuel; Andrews, Kenneth; Pollara, Fabrizio; Divsalar, Dariush

    2009-01-01

    Bounded-angle iterative decoding is a modified version of conventional iterative decoding, conceived as a means of reducing undetected-error rates for short low-density parity-check (LDPC) codes. For a given code, bounded-angle iterative decoding can be implemented by means of a simple modification of the decoder algorithm, without redesigning the code. Bounded-angle iterative decoding is based on a representation of received words and code words as vectors in an n-dimensional Euclidean space (where n is an integer).

  10. Iterative metal artefact reduction (MAR) in postsurgical chest CT: comparison of three iMAR-algorithms.

    Science.gov (United States)

    Aissa, Joel; Boos, Johannes; Sawicki, Lino Morris; Heinzler, Niklas; Krzymyk, Karl; Sedlmair, Martin; Kröpil, Patric; Antoch, Gerald; Thomas, Christoph

    2017-11-01

    The purpose of this study was to evaluate the impact of three novel iterative metal artefact reduction (iMAR) algorithms on image quality and artefact degree in chest CT of patients with a variety of thoracic metallic implants. 27 postsurgical patients with thoracic implants who underwent clinical chest CT between March and May 2015 in clinical routine were retrospectively included. Images were retrospectively reconstructed with standard weighted filtered back projection (WFBP) and with three iMAR algorithms (iMAR-Algo1 = Cardiac algorithm, iMAR-Algo2 = Pacemaker algorithm and iMAR-Algo3 = ThoracicCoils algorithm). The subjective and objective image quality was assessed. Averaged over all artefacts, the artefact degree was significantly lower for iMAR-Algo1 (58.9 ± 48.5 HU), iMAR-Algo2 (52.7 ± 46.8 HU) and iMAR-Algo3 (51.9 ± 46.1 HU) compared with WFBP (91.6 ± 81.6 HU; p < 0.01 for all three algorithms). iMAR-Algo2 and iMAR-Algo3 reconstructions decreased mild and moderate artefacts compared with WFBP and iMAR-Algo1. All iMAR algorithms led to a significant reduction of metal artefacts and an increase in overall image quality compared with WFBP in chest CT of patients with metallic implants in subjective and objective analysis. iMAR-Algo2 and iMAR-Algo3 were best for mild artefacts; iMAR-Algo1 was superior for severe artefacts. Advances in knowledge: Iterative MAR led to significant artefact reduction and increased image quality compared with WFBP in CT after implantation of thoracic devices. Adjusting iMAR algorithms to patients' metallic implants can help to improve image quality in CT.

  11. Objective task-based assessment of low-contrast detectability in iterative reconstruction

    International Nuclear Information System (INIS)

    Racine, Damien; Ott, Julien G.; Ba, Alexandre; Ryckx, Nick; Bochud, Francois O.; Verdun, Francis R.

    2016-01-01

    Evaluating image quality by using receiver operating characteristic studies is time consuming and difficult to implement. This work assesses a new iterative algorithm using a channelised Hotelling observer (CHO). For this purpose, an anthropomorphic abdomen phantom with spheres of various sizes and contrasts was scanned at 3 volume computed tomography dose index (CTDIvol) levels on a GE Revolution CT. Images were reconstructed using the iterative reconstruction method adaptive statistical iterative reconstruction-V (ASIR-V) at ASIR-V 0, 50 and 70 % and assessed by applying a CHO with dense difference-of-Gaussian channels and internal noise. Both CHO and human observers (HO) were compared based on a four-alternative forced-choice experiment, using the percentage correct as a figure of merit. The results showed accordance between CHO and HO. Moreover, an improvement in low-contrast detection was observed when switching from ASIR-V 0 to 50 %. The results underpin the finding that ASIR-V allows dose reduction. (authors)
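
    The CHO computation itself can be sketched as follows, assuming synthetic signal-present/absent patches and difference-of-Gaussian channels; all data and parameters are illustrative, not the study's phantom pipeline.

```python
# Minimal sketch of a channelised Hotelling observer with difference-of-
# Gaussian (DoG) channels on synthetic signal-present/absent patches.
import numpy as np

rng = np.random.default_rng(0)
N = 32  # patch size


def dog_channels(n, sigmas=(1.0, 2.0, 4.0, 8.0)):
    """Radially symmetric DoG channel profiles, flattened to column vectors."""
    y, x = np.mgrid[:n, :n] - (n - 1) / 2.0
    r2 = x**2 + y**2
    chans = [np.exp(-r2 / (2 * s**2)) - np.exp(-r2 / (2 * (1.66 * s) ** 2))
             for s in sigmas]
    return np.stack([c.ravel() for c in chans], axis=1)  # (n*n, n_channels)


U = dog_channels(N)
signal = np.exp(-((np.mgrid[:N, :N] - (N - 1) / 2.0) ** 2).sum(0) / 8.0).ravel()
absent = rng.normal(0.0, 1.0, (200, N * N))
present = rng.normal(0.0, 1.0, (200, N * N)) + signal

va, vp = absent @ U, present @ U                 # channel outputs
S = 0.5 * (np.cov(va.T) + np.cov(vp.T))          # pooled channel covariance
w = np.linalg.solve(S, vp.mean(0) - va.mean(0))  # Hotelling template

ta, tp = va @ w, vp @ w                          # template responses
d_prime = (tp.mean() - ta.mean()) / np.sqrt(0.5 * (ta.var() + tp.var()))
print(f"detectability d' ~ {d_prime:.2f}")
```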

  12. A neutron spectrum unfolding code based on iterative procedures

    International Nuclear Information System (INIS)

    Ortiz R, J. M.; Vega C, H. R.

    2012-10-01

    In this work, version 3.0 of the neutron spectrum unfolding code called Neutron Spectrometry and Dosimetry from Universidad Autonoma de Zacatecas (NSDUAZ) is presented. This code was designed in a graphical interface under the LabVIEW programming environment and is based on the SPUNIT iterative algorithm, using as input data only the count rates obtained with 7 Bonner spheres based on a ⁶LiI(Eu) neutron detector. The main features of the code are: it is intuitive and friendly to the user; it has a programming routine which automatically selects the initial guess spectrum by using a set of neutron spectra compiled by the International Atomic Energy Agency. Besides the neutron spectrum, this code calculates the total flux, the mean energy, H*(10), h*(10), 15 dosimetric quantities for radiation protection purposes and 7 survey meter responses, in four energy grids, based on the International Atomic Energy Agency compilation. This code generates a full report in html format with all relevant information. In this work, the neutron spectrum of a ²⁴¹AmBe neutron source in air, located at 150 cm from the detector, is unfolded. (Author)

  13. Iterative methods for weighted least-squares

    Energy Technology Data Exchange (ETDEWEB)

    Bobrovnikova, E.Y.; Vavasis, S.A. [Cornell Univ., Ithaca, NY (United States)]

    1996-12-31

    A weighted least-squares problem with a very ill-conditioned weight matrix arises in many applications. Because of round-off errors, the standard conjugate gradient method for solving this system does not give the correct answer even after n iterations. In this paper we propose an iterative algorithm based on a new type of reorthogonalization that converges to the solution.
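
    The abstract does not specify the new reorthogonalization, but the general idea can be sketched by running conjugate gradients on the weighted normal equations while explicitly reorthogonalizing each new residual against the previous ones; this generic variant only illustrates the idea.

```python
# Minimal sketch: conjugate gradients on the weighted normal equations
# (A^T W A) x = A^T W b with full reorthogonalization of each new residual
# against the earlier ones (a generic stand-in, not the paper's method).
import numpy as np


def cg_reorth(M, rhs, iters=60, tol=1e-12):
    x = np.zeros_like(rhs)
    r = rhs - M @ x
    p = r.copy()
    basis = []  # normalized residual history
    for _ in range(iters):
        Mp = M @ p
        alpha = (r @ r) / (p @ Mp)
        x += alpha * p
        r_new = r - alpha * Mp
        for q in basis:                      # fight round-off: reorthogonalize
            r_new -= (r_new @ q) * q
        if np.linalg.norm(r_new) < tol:
            break
        beta = (r_new @ r_new) / (r @ r)
        basis.append(r / np.linalg.norm(r))
        p = r_new + beta * p
        r = r_new
    return x


rng = np.random.default_rng(1)
A = rng.normal(size=(100, 5))
W = np.diag(10.0 ** np.linspace(0, 8, 100))   # very ill-conditioned weights
x_true = np.arange(1.0, 6.0)
x_hat = cg_reorth(A.T @ W @ A, A.T @ W @ (A @ x_true))
print(np.round(x_hat, 4))                     # should recover [1, 2, 3, 4, 5]
```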

  14. Local fractional variational iteration algorithm III for the diffusion model associated with non-differentiable heat transfer

    Directory of Open Access Journals (Sweden)

    Meng Zhi-Jun

    2016-01-01

    Full Text Available This paper addresses a new application of the local fractional variational iteration algorithm III to solve the local fractional diffusion equation defined on Cantor sets associated with non-differentiable heat transfer.

  15. Increasing feasibility of the field-programmable gate array implementation of an iterative image registration using a kernel-warping algorithm

    Science.gov (United States)

    Nguyen, An Hung; Guillemette, Thomas; Lambert, Andrew J.; Pickering, Mark R.; Garratt, Matthew A.

    2017-09-01

    Image registration is a fundamental image processing technique. It is used to spatially align two or more images that have been captured at different times, from different sensors, or from different viewpoints. There have been many algorithms proposed for this task, the most common of these being the well-known Lucas-Kanade (LK) and Horn-Schunck approaches. However, the main limitation of these approaches is the computational complexity required to implement the large number of iterations necessary for successful alignment of the images. Previously, a multi-pass image interpolation algorithm (MP-I2A) was developed to considerably reduce the number of iterations required for successful registration compared with the LK algorithm. This paper develops a kernel-warping algorithm (KWA), a modified version of the MP-I2A, which requires fewer iterations to successfully register two images and less memory space for the field-programmable gate array (FPGA) implementation than the MP-I2A. These reductions increase the feasibility of implementing the proposed algorithm on FPGAs with very limited memory space and other hardware resources. A two-FPGA system, rather than a single-FPGA system, is successfully developed to implement the KWA in order to compensate for the insufficient hardware resources of a single FPGA, and to increase the parallel processing ability and scalability of the system.

  16. Computed tomography depiction of small pediatric vessels with model-based iterative reconstruction

    Energy Technology Data Exchange (ETDEWEB)

    Koc, Gonca; Courtier, Jesse L.; Phelps, Andrew; Marcovici, Peter A.; MacKenzie, John D. [UCSF Benioff Children' s Hospital, Department of Radiology and Biomedical Imaging, San Francisco, CA (United States)

    2014-07-15

    Computed tomography (CT) is extremely important in characterizing blood vessel anatomy and vascular lesions in children. Recent advances in CT reconstruction technology hold promise for improved image quality and also reductions in radiation dose. This report evaluates potential improvements in image quality for the depiction of small pediatric vessels with model-based iterative reconstruction (Veo™), a technique developed to improve image quality and reduce noise. To evaluate Veo™ as an improved method when compared to adaptive statistical iterative reconstruction (ASIR™) for the depiction of small vessels on pediatric CT. Seventeen patients (mean age: 3.4 years, range: 2 days to 10.0 years; 6 girls, 11 boys) underwent contrast-enhanced CT examinations of the chest and abdomen in this HIPAA-compliant and institutional review board approved study. Raw data were reconstructed into separate image datasets using Veo™ and ASIR™ algorithms (GE Medical Systems, Milwaukee, WI). Four blinded radiologists subjectively evaluated image quality. The pulmonary, hepatic, splenic and renal arteries were evaluated for the length and number of branches depicted. Datasets were compared with parametric and non-parametric statistical tests. Readers stated a preference for Veo™ over ASIR™ images when subjectively evaluating image quality criteria for vessel definition, image noise and resolution of small anatomical structures. The mean image noise in the aorta and fat was significantly less for Veo™ vs. ASIR™ reconstructed images. Quantitative measurements of mean vessel lengths and number of branch vessels delineated were significantly different for Veo™ and ASIR™ images. Veo™ consistently showed more of the vessel anatomy: longer vessel length and more branching vessels. When compared to the more established adaptive statistical iterative reconstruction algorithm, model-based iterative reconstruction improved the depiction of small vessels on pediatric CT.

  17. Computed tomography depiction of small pediatric vessels with model-based iterative reconstruction

    International Nuclear Information System (INIS)

    Koc, Gonca; Courtier, Jesse L.; Phelps, Andrew; Marcovici, Peter A.; MacKenzie, John D.

    2014-01-01

    Computed tomography (CT) is extremely important in characterizing blood vessel anatomy and vascular lesions in children. Recent advances in CT reconstruction technology hold promise for improved image quality and also reductions in radiation dose. This report evaluates potential improvements in image quality for the depiction of small pediatric vessels with model-based iterative reconstruction (Veo™), a technique developed to improve image quality and reduce noise. To evaluate Veo™ as an improved method when compared to adaptive statistical iterative reconstruction (ASIR™) for the depiction of small vessels on pediatric CT. Seventeen patients (mean age: 3.4 years, range: 2 days to 10.0 years; 6 girls, 11 boys) underwent contrast-enhanced CT examinations of the chest and abdomen in this HIPAA-compliant and institutional review board approved study. Raw data were reconstructed into separate image datasets using Veo™ and ASIR™ algorithms (GE Medical Systems, Milwaukee, WI). Four blinded radiologists subjectively evaluated image quality. The pulmonary, hepatic, splenic and renal arteries were evaluated for the length and number of branches depicted. Datasets were compared with parametric and non-parametric statistical tests. Readers stated a preference for Veo™ over ASIR™ images when subjectively evaluating image quality criteria for vessel definition, image noise and resolution of small anatomical structures. The mean image noise in the aorta and fat was significantly less for Veo™ vs. ASIR™ reconstructed images. Quantitative measurements of mean vessel lengths and number of branch vessels delineated were significantly different for Veo™ and ASIR™ images. Veo™ consistently showed more of the vessel anatomy: longer vessel length and more branching vessels. When compared to the more established adaptive statistical iterative reconstruction algorithm, model-based iterative reconstruction improved the depiction of small vessels on pediatric CT.

  18. NUFFT-Based Iterative Image Reconstruction via Alternating Direction Total Variation Minimization for Sparse-View CT

    Directory of Open Access Journals (Sweden)

    Bin Yan

    2015-01-01

    Full Text Available Sparse-view imaging is a promising scanning method which can reduce the radiation dose in X-ray computed tomography (CT). The reconstruction algorithm for a sparse-view imaging system is of significant importance. The adoption of a spatial-domain iterative algorithm for CT image reconstruction entails low operational efficiency and high computational requirements. A novel Fourier-based iterative reconstruction technique that utilizes the nonuniform fast Fourier transform is presented in this study, along with advanced total variation (TV) regularization, for sparse-view CT. Combined with the alternating direction method, the proposed approach shows excellent efficiency and a rapid convergence property. Numerical simulations and real data experiments are performed on a parallel beam CT. Experimental results validate that the proposed method has higher computational efficiency and better reconstruction quality than conventional algorithms, such as the simultaneous algebraic reconstruction technique using TV and the alternating direction total variation minimization approach, within the same time duration. The proposed method appears to have extensive applications in X-ray CT imaging.

  19. Boosting iterative stochastic ensemble method for nonlinear calibration of subsurface flow models

    KAUST Repository

    Elsheikh, Ahmed H.

    2013-06-01

    A novel parameter estimation algorithm is proposed. The inverse problem is formulated as a sequential data integration problem in which Gaussian process regression (GPR) is used to integrate the prior knowledge (static data). The search space is further parameterized using a Karhunen-Loève expansion to build a set of basis functions that spans the search space. Optimal weights of the reduced basis functions are estimated by an iterative stochastic ensemble method (ISEM). ISEM employs directional derivatives within a Gauss-Newton iteration for efficient gradient estimation. The resulting update equation relies on the inverse of the output covariance matrix, which is rank deficient. In the proposed algorithm we use an iterative regularization based on the ℓ2 Boosting algorithm. ℓ2 Boosting iteratively fits the residual, and the amount of regularization is controlled by the number of iterations. A termination criterion based on the Akaike information criterion (AIC) is utilized. This regularization method is very attractive in terms of performance and simplicity of implementation. The proposed algorithm combining ISEM and ℓ2 Boosting is evaluated on several nonlinear subsurface flow parameter estimation problems. The efficiency of the proposed algorithm is demonstrated by the small size of the utilized ensembles and in terms of error convergence rates. © 2013 Elsevier B.V.
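
    The ℓ2 Boosting step can be sketched in isolation as componentwise boosting for a linear model, with a fixed iteration budget standing in for the AIC-based termination rule; the coupling to ISEM and GPR is not reproduced.

```python
# Minimal sketch of componentwise L2 Boosting: repeatedly fit the current
# residual with the single best predictor and take a shrunk step; the number
# of iterations controls the amount of regularization.
import numpy as np


def l2_boost(X, y, steps=300, nu=0.1):
    beta = np.zeros(X.shape[1])
    r = y.astype(float).copy()  # the residual carries the regularization
    for _ in range(steps):
        coefs = (X.T @ r) / (X**2).sum(axis=0)            # per-column LS fits
        sse = ((r[:, None] - X * coefs) ** 2).sum(axis=0)
        j = int(np.argmin(sse))                           # best single predictor
        beta[j] += nu * coefs[j]                          # shrunk update
        r -= nu * coefs[j] * X[:, j]
    return beta


rng = np.random.default_rng(2)
X = rng.normal(size=(120, 10))
truth = np.array([3.0, 0, 0, -2.0, 0, 0, 0, 0, 1.0, 0])
y = X @ truth + 0.1 * rng.normal(size=120)
print(np.round(l2_boost(X, y), 2))  # nonzero mostly at positions 0, 3, 8
```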

  20. Discrete-Time Local Value Iteration Adaptive Dynamic Programming: Admissibility and Termination Analysis.

    Science.gov (United States)

    Wei, Qinglai; Liu, Derong; Lin, Qiao

    In this paper, a novel local value iteration adaptive dynamic programming (ADP) algorithm is developed to solve infinite horizon optimal control problems for discrete-time nonlinear systems. The focuses of this paper are to study admissibility properties and the termination criteria of discrete-time local value iteration ADP algorithms. In the discrete-time local value iteration ADP algorithm, the iterative value functions and the iterative control laws are both updated in a given subset of the state space in each iteration, instead of the whole state space. For the first time, admissibility properties of iterative control laws are analyzed for the local value iteration ADP algorithm. New termination criteria are established, which terminate the iterative local ADP algorithm with an admissible approximate optimal control law. Finally, simulation results are given to illustrate the performance of the developed algorithm.
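
    The idea of updating values only on a subset of states per iteration can be sketched on a toy random MDP; the subset schedule and model below are illustrative stand-ins for the paper's nonlinear-system ADP setting.

```python
# Minimal sketch: value iteration that sweeps only a subset of states per
# iteration ("local" updates) instead of the whole state space.
import numpy as np

n_states, n_actions, gamma = 10, 2, 0.9
rng = np.random.default_rng(3)
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))  # P[s, a, s']
R = rng.normal(size=(n_states, n_actions))

V = np.zeros(n_states)
for sweep in range(200):
    local = np.arange(sweep % 5, n_states, 5)  # this sweep's local subset
    Q = R[local] + gamma * P[local] @ V        # backups only on the subset
    V[local] = Q.max(axis=1)

policy = (R + gamma * P @ V).argmax(axis=1)    # greedy control law
print(np.round(V, 3), policy)
```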

  1. Parallel S/sub n/ iteration schemes

    International Nuclear Information System (INIS)

    Wienke, B.R.; Hiromoto, R.E.

    1986-01-01

    The iterative, multigroup, discrete ordinates (S/sub n/) technique for solving the linear transport equation enjoys widespread usage and appeal. Serial iteration schemes and numerical algorithms developed over the years provide a timely framework for parallel extension. On the Denelcor HEP, the authors investigate three parallel iteration schemes for solving the one-dimensional S/sub n/ transport equation. The multigroup representation and serial iteration methods are also reviewed. This analysis represents a first attempt to extend serial S/sub n/ algorithms to parallel environments and provides good baseline estimates on ease of parallel implementation, relative algorithm efficiency, comparative speedup, and some future directions. The authors examine ordered and chaotic versions of these strategies, with and without concurrent rebalance and diffusion acceleration. Two strategies efficiently support high degrees of parallelization and appear to be robust parallel iteration techniques. The third strategy is a weaker parallel algorithm. Chaotic iteration, difficult to simulate on serial machines, holds promise and converges faster than ordered versions of the schemes. Actual parallel speedup and efficiency are high and payoff appears substantial

  2. CAS algorithm-based optimum design of PID controller in AVR system

    International Nuclear Information System (INIS)

    Zhu Hui; Li Lixiang; Zhao Ying; Guo Yu; Yang Yixian

    2009-01-01

    This paper presents a novel design method for determining the optimal PID controller parameters of an automatic voltage regulator (AVR) system using the chaotic ant swarm (CAS) algorithm. In the tuning process of parameters, the CAS algorithm is iterated to give the optimal parameters of the PID controller based on the fitness theory, where the position vector of each ant in the CAS algorithm corresponds to the parameter vector of the PID controller. The proposed CAS-PID controllers can ensure better control system performance with respect to the reference input in comparison with GA-PID controllers. Numerical simulations are provided to verify the effectiveness and feasibility of PID controller based on CAS algorithm.

  3. Comparison of the effects of model-based iterative reconstruction and filtered back projection algorithms on software measurements in pulmonary subsolid nodules

    Energy Technology Data Exchange (ETDEWEB)

    Cohen, Julien G. [Seoul National University College of Medicine, Department of Radiology, Seoul (Korea, Republic of); Seoul National University Medical Research Center, Institute of Radiation Medicine, Seoul (Korea, Republic of); Centre Hospitalier Universitaire de Grenoble, Clinique Universitaire de Radiologie et Imagerie Medicale (CURIM), Universite Grenoble Alpes, Grenoble Cedex 9 (France); Kim, Hyungjin; Park, Su Bin [Seoul National University College of Medicine, Department of Radiology, Seoul (Korea, Republic of); Seoul National University Medical Research Center, Institute of Radiation Medicine, Seoul (Korea, Republic of); Ginneken, Bram van [Radboud University Nijmegen Medical Center, Department of Radiology and Nuclear Medicine, Nijmegen (Netherlands); Ferretti, Gilbert R. [Centre Hospitalier Universitaire de Grenoble, Clinique Universitaire de Radiologie et Imagerie Medicale (CURIM), Universite Grenoble Alpes, Grenoble Cedex 9 (France); Institut A Bonniot, INSERM U 823, La Tronche (France); Lee, Chang Hyun [Seoul National University College of Medicine, Department of Radiology, Seoul (Korea, Republic of); Goo, Jin Mo; Park, Chang Min [Seoul National University College of Medicine, Department of Radiology, Seoul (Korea, Republic of); Seoul National University Medical Research Center, Institute of Radiation Medicine, Seoul (Korea, Republic of); Seoul National University College of Medicine, Cancer Research Institute, Seoul (Korea, Republic of)

    2017-08-15

    To evaluate the differences between filtered back projection (FBP) and model-based iterative reconstruction (MBIR) algorithms on semi-automatic measurements in subsolid nodules (SSNs). Unenhanced CT scans of 73 SSNs obtained using the same protocol and reconstructed with both FBP and MBIR algorithms were evaluated by two radiologists. Diameter, mean attenuation, mass and volume of whole nodules and their solid components were measured. Intra- and interobserver variability and differences between FBP and MBIR were then evaluated using Bland-Altman method and Wilcoxon tests. Longest diameter, volume and mass of nodules and those of their solid components were significantly higher using MBIR (p < 0.05) with mean differences of 1.1% (limits of agreement, -6.4 to 8.5%), 3.2% (-20.9 to 27.3%) and 2.9% (-16.9 to 22.7%) and 3.2% (-20.5 to 27%), 6.3% (-51.9 to 64.6%), 6.6% (-50.1 to 63.3%), respectively. The limits of agreement between FBP and MBIR were within the range of intra- and interobserver variability for both algorithms with respect to the diameter, volume and mass of nodules and their solid components. There were no significant differences in intra- or interobserver variability between FBP and MBIR (p > 0.05). Semi-automatic measurements of SSNs significantly differed between FBP and MBIR; however, the differences were within the range of measurement variability. (orig.)
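
    The Bland-Altman limits of agreement used in such comparisons can be computed as sketched below; the paired FBP/MBIR values are synthetic stand-ins, not the study's measurements.

```python
# Minimal sketch: Bland-Altman bias and 95% limits of agreement for paired
# FBP vs. MBIR measurements, expressed as percent differences.
import numpy as np

fbp = np.array([10.2, 8.7, 12.1, 9.5, 11.3, 10.8])    # e.g. nodule volumes
mbir = np.array([10.5, 8.9, 12.6, 9.4, 11.9, 11.1])

diff = 100.0 * (mbir - fbp) / ((mbir + fbp) / 2.0)    # percent differences
bias = diff.mean()
half_width = 1.96 * diff.std(ddof=1)                  # 95% limits of agreement
print(f"bias {bias:.1f}%, limits of agreement "
      f"[{bias - half_width:.1f}%, {bias + half_width:.1f}%]")
```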

  4. Parallel GPU implementation of iterative PCA algorithms.

    Science.gov (United States)

    Andrecut, M

    2009-11-01

    Principal component analysis (PCA) is a key statistical technique for multivariate data analysis. For large data sets, the common approach to PCA computation is based on the standard NIPALS-PCA algorithm, which unfortunately suffers from loss of orthogonality, and therefore its applicability is usually limited to the estimation of the first few components. Here we present an algorithm based on Gram-Schmidt orthogonalization (called GS-PCA), which eliminates this shortcoming of NIPALS-PCA. Also, we discuss the GPU (Graphics Processing Unit) parallel implementation of both NIPALS-PCA and GS-PCA algorithms. The numerical results show that the GPU parallel optimized versions, based on CUBLAS (NVIDIA), are substantially faster (up to 12 times) than the CPU optimized versions based on CBLAS (GNU Scientific Library).
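
    A CPU-only sketch of the GS-PCA idea (NIPALS power iterations with Gram-Schmidt reorthogonalization of each new loading against those already extracted) is given below; the paper's CUBLAS/CBLAS implementations are not reproduced.

```python
# Minimal sketch of GS-PCA: NIPALS iterations plus Gram-Schmidt
# reorthogonalization, which prevents the loss of orthogonality of NIPALS-PCA.
import numpy as np


def gs_pca(X, k, iters=100):
    X = X - X.mean(axis=0)
    n, p = X.shape
    T, P = np.zeros((n, k)), np.zeros((p, k))
    R = X.copy()
    for j in range(k):
        t = R[:, 0].copy()
        for _ in range(iters):
            w = R.T @ t
            w -= P[:, :j] @ (P[:, :j].T @ w)   # Gram-Schmidt vs. old loadings
            w /= np.linalg.norm(w)
            t = R @ w
        T[:, j], P[:, j] = t, w
        R = R - np.outer(t, w)                 # deflate the fitted component
    return T, P


X = np.random.default_rng(4).normal(size=(50, 8))
T, P = gs_pca(X, 3)
print(np.round(P.T @ P, 6))   # near-identity: loadings stay orthonormal
```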

  5. ITERATION FREE FRACTAL COMPRESSION USING GENETIC ALGORITHM FOR STILL COLOUR IMAGES

    Directory of Open Access Journals (Sweden)

    A.R. Nadira Banu Kamal

    2014-02-01

    Full Text Available The storage requirements for images can be excessive if true color and a high perceived image quality are desired. An RGB image may be viewed as a stack of three gray-scale images that, when fed into the red, green and blue inputs of a color monitor, produce a color image on the screen. The large size of many images leads to long, costly transmission times. Hence, an iteration-free fractal algorithm is proposed in this research paper to design an efficient search of the domain pools for colour image compression using a Genetic Algorithm (GA). The proposed methodology reduces the coding process time and intensive computation tasks. Parameters such as image quality, compression ratio and coding time are analyzed. It is observed that the proposed method achieves excellent performance in image quality with a reduction in storage space.

  6. Sensor-based vibration signal feature extraction using an improved composite dictionary matching pursuit algorithm.

    Science.gov (United States)

    Cui, Lingli; Wu, Na; Wang, Wenjing; Kang, Chenhui

    2014-09-09

    This paper presents a new method for a composite dictionary matching pursuit algorithm, which is applied to vibration sensor signal feature extraction and fault diagnosis of a gearbox. Three advantages are highlighted in the new method. First, the composite dictionary in the algorithm has been changed from multi-atom matching to single-atom matching. Compared to non-composite dictionary single-atom matching, the original composite dictionary multi-atom matching pursuit (CD-MaMP) algorithm can achieve noise reduction in the reconstruction stage, but it cannot dramatically reduce the computational cost and improve the efficiency in the decomposition stage. Therefore, the optimized composite dictionary single-atom matching algorithm (CD-SaMP) is proposed. Second, the termination condition of iteration based on the attenuation coefficient is put forward to improve the sparsity and efficiency of the algorithm, which adjusts the parameters of the termination condition constantly in the process of decomposition to avoid noise. Third, composite dictionaries are enriched with the modulation dictionary, which is one of the important structural characteristics of gear fault signals. Meanwhile, the termination condition of iteration settings, sub-feature dictionary selections and operation efficiency between CD-MaMP and CD-SaMP are discussed, aiming at gear simulation vibration signals with noise. The simulation sensor-based vibration signal results show that the termination condition of iteration based on the attenuation coefficient enhances decomposition sparsity greatly and achieves a good effect of noise reduction. Furthermore, the modulation dictionary achieves a better matching effect compared to the Fourier dictionary, and CD-SaMP has a great advantage of sparsity and efficiency compared with the CD-MaMP. The sensor-based vibration signals measured from practical engineering gearbox analyses have further shown that the CD-SaMP decomposition and reconstruction algorithm is effective in practical applications.

  7. Sensor-Based Vibration Signal Feature Extraction Using an Improved Composite Dictionary Matching Pursuit Algorithm

    Directory of Open Access Journals (Sweden)

    Lingli Cui

    2014-09-01

    Full Text Available This paper presents a new method for a composite dictionary matching pursuit algorithm, which is applied to vibration sensor signal feature extraction and fault diagnosis of a gearbox. Three advantages are highlighted in the new method. First, the composite dictionary in the algorithm has been changed from multi-atom matching to single-atom matching. Compared to non-composite dictionary single-atom matching, the original composite dictionary multi-atom matching pursuit (CD-MaMP) algorithm can achieve noise reduction in the reconstruction stage, but it cannot dramatically reduce the computational cost and improve the efficiency in the decomposition stage. Therefore, the optimized composite dictionary single-atom matching algorithm (CD-SaMP) is proposed. Second, the termination condition of iteration based on the attenuation coefficient is put forward to improve the sparsity and efficiency of the algorithm, which adjusts the parameters of the termination condition constantly in the process of decomposition to avoid noise. Third, composite dictionaries are enriched with the modulation dictionary, which is one of the important structural characteristics of gear fault signals. Meanwhile, the termination condition of iteration settings, sub-feature dictionary selections and operation efficiency between CD-MaMP and CD-SaMP are discussed, aiming at gear simulation vibration signals with noise. The simulation sensor-based vibration signal results show that the termination condition of iteration based on the attenuation coefficient enhances decomposition sparsity greatly and achieves a good effect of noise reduction. Furthermore, the modulation dictionary achieves a better matching effect compared to the Fourier dictionary, and CD-SaMP has a great advantage of sparsity and efficiency compared with the CD-MaMP. The sensor-based vibration signals measured from practical engineering gearbox analyses have further shown that the CD-SaMP decomposition and reconstruction algorithm is effective in practical applications.

  8. Game Algorithm for Resource Allocation Based on Intelligent Gradient in HetNet

    Directory of Open Access Journals (Sweden)

    Fang Ye

    2017-02-01

    Full Text Available In order to improve system performance such as throughput, the heterogeneous network (HetNet) has become an effective solution in Long Term Evolution-Advanced (LTE-A). However, co-channel interference leads to degradation of HetNet throughput, because femtocells are always arranged to share the spectrum with the macro base station. In this paper, in view of the serious cross-layer interference in a two-layer HetNet, the Stackelberg game model is adopted to analyze the resource allocation methods of the network. Unlike traditional system models focusing only on macro base station performance improvement, we take into account the overall system performance and build a revenue function with convexity. System utility functions are defined as the average throughput, and no frequency spectrum trading method is adopted, so as to avoid excessive signaling overhead. Because the Nash equilibrium of the built game model takes values in a continuous range, a gradient iterative algorithm is introduced to reduce the computational complexity. For the solution of the Nash equilibrium, a gradient iterative algorithm is proposed that is able to intelligently choose adjustment factors; the Nash equilibrium can be quickly solved, and the step of presetting adjustment factors according to network parameters, required in the traditional linear iterative model, is avoided. Simulation results show that the proposed algorithm enhances the overall performance of the system.

  9. Otsu Based Optimal Multilevel Image Thresholding Using Firefly Algorithm

    Directory of Open Access Journals (Sweden)

    N. Sri Madhava Raja

    2014-01-01

    Full Text Available A histogram-based multilevel thresholding approach is proposed using a Brownian distribution (BD) guided firefly algorithm (FA). A bounded search technique is also presented to improve the optimization accuracy with fewer search iterations. Otsu's between-class variance function is maximized to obtain the optimal threshold levels for gray-scale images. The performance of the proposed algorithm is demonstrated by considering twelve benchmark images and is compared with that of existing FA algorithms such as the Lévy flight (LF) guided FA and the random operator guided FA. The performance assessment comparison between the proposed and existing firefly algorithms is carried out using prevailing parameters such as the objective function, standard deviation, peak signal-to-noise ratio (PSNR), structural similarity (SSIM) index, and CPU search time. The results show that the BD guided FA provides better objective function, PSNR, and SSIM values, whereas the LF-based FA provides faster convergence with relatively lower CPU time.
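
    The objective that the firefly search maximizes can be sketched directly; below, a brute-force search over a coarse two-level grid stands in for the FA/BD-FA optimizers of the paper.

```python
# Minimal sketch: Otsu's between-class variance for a set of thresholds, the
# objective a multilevel thresholding optimizer maximizes.
import numpy as np
from itertools import combinations


def between_class_variance(hist, thresholds):
    p = hist / hist.sum()
    levels = np.arange(len(hist))
    edges = [0, *thresholds, len(hist)]
    mu_total = (p * levels).sum()
    var = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        w = p[lo:hi].sum()  # class probability
        if w > 0:
            mu = (p[lo:hi] * levels[lo:hi]).sum() / w  # class mean
            var += w * (mu - mu_total) ** 2
    return var


hist = np.bincount(np.random.default_rng(5).integers(0, 256, 10000), minlength=256)
best = max(combinations(range(4, 256, 4), 2),
           key=lambda t: between_class_variance(hist, t))
print("best two-level thresholds on the coarse grid:", best)
```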

  10. A Nodes Deployment Algorithm in Wireless Sensor Network Based on Distribution

    Directory of Open Access Journals (Sweden)

    Song Yuli

    2014-07-01

    Full Text Available Wireless sensor network coverage is a basic problem of wireless sensor networks. In this paper, we propose a wireless sensor network node deployment algorithm based on distribution in order to form an efficient wireless sensor network. An iterative greedy algorithm is used to activate priority nodes until the entire network is covered by wireless sensor nodes and the whole network is multiply connected. The simulation results show that the distributed wireless sensor network node deployment algorithm can form a multiply connected wireless sensor network.

  11. An Algorithm for Isolating the Real Solutions of Piecewise Algebraic Curves

    Directory of Open Access Journals (Sweden)

    Jinming Wu

    2011-01-01

    An algorithm is presented to compute the real solutions of two piecewise algebraic curves. It is primarily based on the Krawczyk-Moore iterative algorithm and a good initial iterative interval searching algorithm. The proposed algorithm is relatively easy to implement.

  12. Geomagnetic matching navigation algorithm based on robust estimation

    Science.gov (United States)

    Xie, Weinan; Huang, Liping; Qu, Zhenshen; Wang, Zhenhuan

    2017-08-01

    The outliers in geomagnetic survey data seriously affect the precision of geomagnetic matching navigation and badly disrupt its reliability. A novel algorithm which can eliminate the influence of outliers is investigated in this paper. First, the weight function is designed and its principle of robust estimation is introduced. By combining the relation equation between the matching trajectory and the reference trajectory with the Taylor series expansion for geomagnetic information, a mathematical expression for the longitude, latitude and heading errors is acquired. The robust target function is obtained from the weight function and this mathematical expression. The geomagnetic matching problem is thereby converted to the solution of nonlinear equations, and Newton iteration is applied to implement the novel algorithm. Simulation results show that the matching error of the novel algorithm is reduced to 7.75% of that of the conventional mean square difference (MSD) algorithm, and to 18.39% of that of the conventional iterative contour matching algorithm, when the outlier is 40 nT. Meanwhile, the position error of the novel algorithm is 0.017° when the outlier is 400 nT, while the other two algorithms fail to match.
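
    The robust-estimation core, a down-weighting function inside an iteratively reweighted solve, can be sketched on a generic linear model; the Huber-type weight function below is a standard choice, and the geomagnetic matching equations themselves are not reproduced.

```python
# Minimal sketch: a Huber weight function inside an iteratively reweighted
# least-squares loop, which suppresses the influence of outliers.
import numpy as np


def huber_weights(r, k=1.345):
    a = np.abs(r)
    return np.where(a <= k, 1.0, k / a)  # unit weight in the core, decaying tails


def irls(A, b, iters=20):
    x = np.linalg.lstsq(A, b, rcond=None)[0]
    for _ in range(iters):
        r = b - A @ x
        scale = np.median(np.abs(r)) / 0.6745 + 1e-12  # robust scale estimate
        w = huber_weights(r / scale)
        Aw = A * w[:, None]                            # row-weighted design
        x = np.linalg.solve(A.T @ Aw, Aw.T @ b)        # solve A^T W A x = A^T W b
    return x


rng = np.random.default_rng(6)
A = np.column_stack([np.ones(100), rng.normal(size=100)])
b = A @ np.array([2.0, -1.0]) + 0.05 * rng.normal(size=100)
b[::10] += 40.0  # gross outliers, analogous to large geomagnetic spikes
print(np.round(irls(A, b), 3))  # close to [2, -1] despite the outliers
```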

  13. Influence of Extrinsic Information Scaling Coefficient on Double-Iterative Decoding Algorithm for Space-Time Turbo Codes with Large Number of Antennas

    Directory of Open Access Journals (Sweden)

    TRIFINA, L.

    2011-02-01

    Full Text Available This paper analyzes the extrinsic information scaling coefficient influence on double-iterative decoding algorithm for space-time turbo codes with large number of antennas. The max-log-APP algorithm is used, scaling both the extrinsic information in the turbo decoder and the one used at the input of the interference-canceling block. Scaling coefficients of 0.7 or 0.75 lead to a 0.5 dB coding gain compared to the no-scaling case, for one or more iterations to cancel the spatial interferences.

  14. 3D dictionary learning based iterative cone beam CT reconstruction

    Directory of Open Access Journals (Sweden)

    Ti Bai

    2014-03-01

    Full Text Available Purpose: This work is to develop a 3D dictionary learning based cone beam CT (CBCT) reconstruction algorithm on graphics processing units (GPU) to improve the quality of sparse-view CBCT reconstruction with high efficiency. Methods: A 3D dictionary containing 256 small volumes (atoms) of 3 × 3 × 3 was trained from a large number of blocks extracted from a high quality volume image. On this basis, we utilized a Cholesky decomposition based orthogonal matching pursuit algorithm to find the sparse representation of each block. To accelerate the time-consuming sparse coding in the 3D case, we implemented the sparse coding in a parallel fashion by taking advantage of the tremendous computational power of the GPU. A conjugate gradient least squares algorithm was adopted to minimize the data fidelity term. Evaluations are performed based on a head-neck patient case. FDK reconstruction with the full dataset of 364 projections is used as the reference. We compared the proposed 3D dictionary learning based method with tight frame (TF) by performing reconstructions on a subset of 121 projections. Results: Compared to TF based CBCT reconstruction, which shows good overall performance, our experiments indicated that 3D dictionary learning based CBCT reconstruction is able to recover finer structures, remove more streaking artifacts and also induce fewer blocky artifacts. Conclusion: The 3D dictionary learning based CBCT reconstruction algorithm is able to sense the structural information while suppressing the noise, and hence to achieve high quality reconstruction in the sparse-view case. The GPU realization of the whole algorithm offers a significant efficiency enhancement, making this algorithm more feasible for potential clinical application.

  15. Scheduling of Iterative Algorithms with Matrix Operations for Efficient FPGA Design—Implementation of Finite Interval Constant Modulus Algorithm

    Czech Academy of Sciences Publication Activity Database

    Šůcha, P.; Hanzálek, Z.; Heřmánek, Antonín; Schier, Jan

    2007-01-01

    Roč. 46, č. 1 (2007), s. 35-53 ISSN 0922-5773 R&D Projects: GA AV ČR(CZ) 1ET300750402; GA MŠk(CZ) 1M0567; GA MPO(CZ) FD-K3/082 Institutional research plan: CEZ:AV0Z10750506 Keywords : high-level synthesis * cyclic scheduling * iterative algorithms * imperfectly nested loops * integer linear programming * FPGA * VLSI design * blind equalization * implementation Subject RIV: BA - General Mathematics Impact factor: 0.449, year: 2007 http://www.springerlink.com/content/t217kg0822538014/fulltext.pdf

  16. A general class of preconditioners for statistical iterative reconstruction of emission computed tomography

    International Nuclear Information System (INIS)

    Chinn, G.; Huang, S.C.

    1997-01-01

    A major drawback of statistical iterative image reconstruction for emission computed tomography is its high computational cost. The ill-posed nature of tomography leads to slow convergence for standard gradient-based iterative approaches such as the steepest descent or the conjugate gradient algorithm. In this paper, new theory and methods for a class of preconditioners are developed for accelerating the convergence rate of iterative reconstruction. To demonstrate the potential of this class of preconditioners, a preconditioned conjugate gradient (PCG) iterative algorithm for weighted least squares (WLS) reconstruction was formulated for emission tomography. Using simulated positron emission tomography (PET) data of the Hoffman brain phantom, it was shown that the PCG can reduce the number of iterations of the standard conjugate gradient algorithm by a factor of 2-8, depending on the convergence criterion.
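
    A generic member of this class, a Jacobi (diagonal) preconditioner inside PCG for a weighted least-squares system, can be sketched as follows; the tomography-specific preconditioners of the paper are not reproduced.

```python
# Minimal sketch of preconditioned conjugate gradients for a WLS system
# (A^T W A) x = A^T W b with a Jacobi (diagonal) preconditioner.
import numpy as np


def pcg(M, rhs, iters=200, tol=1e-10):
    x = np.zeros_like(rhs)
    r = rhs - M @ x
    d_inv = 1.0 / np.diag(M)    # Jacobi preconditioner
    z = d_inv * r
    p = z.copy()
    for _ in range(iters):
        Mp = M @ p
        alpha = (r @ z) / (p @ Mp)
        x += alpha * p
        r_new = r - alpha * Mp
        if np.linalg.norm(r_new) < tol:
            break
        z_new = d_inv * r_new
        beta = (r_new @ z_new) / (r @ z)
        p = z_new + beta * p
        r, z = r_new, z_new
    return x


rng = np.random.default_rng(7)
A = rng.normal(size=(300, 40))
W = np.diag(rng.uniform(0.1, 10.0, 300))
x = pcg(A.T @ W @ A, A.T @ W @ (A @ np.ones(40)))
print(f"max deviation from the true solution: {np.abs(x - 1).max():.2e}")
```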

  17. Robust Huber-based iterated divided difference filtering with application to cooperative localization of autonomous underwater vehicles.

    Science.gov (United States)

    Gao, Wei; Liu, Yalong; Xu, Bo

    2014-12-19

    A new algorithm called Huber-based iterated divided difference filtering (HIDDF) is derived and applied to the cooperative localization of autonomous underwater vehicles (AUVs) supported by a single surface leader. The position states are estimated using acoustic range measurements relative to the leader, in which some disadvantages such as weak observability, large initial error and measurements contaminated with outliers are inherent. By integrating the merits of both iterated divided difference filtering (IDDF) and Huber's M-estimation methodology, the new filtering method not only achieves more accurate estimation and faster convergence than standard divided difference filtering (DDF) under conditions of weak observability and large initial error, but also exhibits robustness with respect to outlier measurements, for which the standard IDDF would exhibit severe degradation in estimation accuracy. The correctness as well as validity of the algorithm is demonstrated through experimental results.

  18. Application of a dual-resolution voxelization scheme to compressed-sensing (CS)-based iterative reconstruction in digital tomosynthesis (DTS)

    Science.gov (United States)

    Park, S. Y.; Kim, G. A.; Cho, H. S.; Park, C. K.; Lee, D. Y.; Lim, H. W.; Lee, H. W.; Kim, K. S.; Kang, S. Y.; Park, J. E.; Kim, W. S.; Jeon, D. H.; Je, U. K.; Woo, T. H.; Oh, J. E.

    2018-02-01

    In recent digital tomosynthesis (DTS), iterative reconstruction methods are often used owing to their potential to provide multiplanar images of superior image quality to conventional filtered-backprojection (FBP)-based methods. However, they require enormous computational cost in the iterative process, which has remained an obstacle to their practical use. In this work, we propose a new DTS reconstruction method incorporating a dual-resolution voxelization scheme in an attempt to overcome these difficulties, in which the voxels outside a small region-of-interest (ROI) containing the diagnostic target are binned by 2 × 2 × 2 while the voxels inside the ROI remain unbinned. We considered a compressed-sensing (CS)-based iterative algorithm with a dual-constraint strategy for more accurate DTS reconstruction. We implemented the proposed algorithm and performed a systematic simulation and experiment to demonstrate its viability. Our results indicate that the proposed method is effective for considerably reducing the computational cost of iterative DTS reconstruction while keeping the image quality inside the ROI only slightly degraded. A binning size of 2 × 2 × 2 required only about 31.9% of the computational memory and about 2.6% of the reconstruction time, compared to the case with no binning. The reconstruction quality was evaluated in terms of the root-mean-square error (RMSE), the contrast-to-noise ratio (CNR), and the universal quality index (UQI).

  19. Encryption and display of multiple-image information using computer-generated holography with modified GS iterative algorithm

    Science.gov (United States)

    Xiao, Dan; Li, Xiaowei; Liu, Su-Juan; Wang, Qiong-Hua

    2018-03-01

    In this paper, a new scheme of multiple-image encryption and display based on computer-generated holography (CGH) and maximum length cellular automata (MLCA) is presented. With the scheme, the computer-generated hologram, which has the information of the three primitive images, is generated by modified Gerchberg-Saxton (GS) iterative algorithm using three different fractional orders in fractional Fourier domain firstly. Then the hologram is encrypted using MLCA mask. The ciphertext can be decrypted combined with the fractional orders and the rules of MLCA. Numerical simulations and experimental display results have been carried out to verify the validity and feasibility of the proposed scheme.
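
    The plain Gerchberg-Saxton loop at the heart of the scheme can be sketched with ordinary FFTs; the fractional Fourier orders and the MLCA encryption of the paper are omitted, and the target image is a toy pattern.

```python
# Minimal sketch of a Gerchberg-Saxton loop for a phase-only hologram:
# alternate FFTs between the hologram and image planes, enforcing unit
# amplitude in the hologram plane and the target amplitude in the image plane.
import numpy as np


def gs_phase_only(target_amp, iters=50, seed=8):
    phase = np.random.default_rng(seed).uniform(0, 2 * np.pi, target_amp.shape)
    field = target_amp * np.exp(1j * phase)
    for _ in range(iters):
        holo = np.fft.ifft2(field)
        holo = np.exp(1j * np.angle(holo))                 # phase-only constraint
        field = np.fft.fft2(holo)
        field = target_amp * np.exp(1j * np.angle(field))  # amplitude constraint
    return np.angle(holo)


target = np.zeros((64, 64))
target[24:40, 24:40] = 1.0                    # toy image: a bright square
phi = gs_phase_only(target)
recon = np.abs(np.fft.fft2(np.exp(1j * phi)))
corr = np.corrcoef(recon.ravel(), target.ravel())[0, 1]
print(f"correlation of reconstruction with target: {corr:.3f}")
```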

  20. Iterative algorithms for computing the feedback Nash equilibrium point for positive systems

    Science.gov (United States)

    Ivanov, I.; Imsland, Lars; Bogdanova, B.

    2017-03-01

    The paper studies N-player linear quadratic differential games on an infinite time horizon with deterministic feedback information structure. It introduces two iterative methods (the Newton method as well as its accelerated modification) in order to compute the stabilising solution of a set of generalised algebraic Riccati equations. The latter is related to the Nash equilibrium point of the considered game model. Moreover, we derive the sufficient conditions for convergence of the proposed methods. Finally, we discuss two numerical examples so as to illustrate the performance of both of the algorithms.

  1. GPU-based parallel algorithm for blind image restoration using midfrequency-based methods

    Science.gov (United States)

    Xie, Lang; Luo, Yi-han; Bao, Qi-liang

    2013-08-01

    GPU-based general-purpose computing is a new branch of modern parallel computing, so the study of parallel algorithms specially designed for the GPU hardware architecture is of great significance. In order to solve the problem of high computational complexity and poor real-time performance in blind image restoration, the midfrequency-based algorithm for blind image restoration was analyzed and improved in this paper. Furthermore, a midfrequency-based filtering method is also used to restore the image with hardly any recursion or iteration. Combining the algorithm with data intensiveness, data-parallel computing and the GPU execution model of single instruction and multiple threads, a new parallel midfrequency-based algorithm for blind image restoration is proposed in this paper, which is suitable for stream computing on a GPU. In this algorithm, the GPU is utilized to accelerate the estimation of class-G point spread functions and midfrequency-based filtering. Aiming at better management of the GPU threads, the threads in a grid are scheduled according to the decomposition of the filtering data in the frequency domain, after optimization of data access and of the communication between the host and the device. The kernel parallelism structure is determined by the decomposition of the filtering data to ensure that the transmission rate gets around the memory bandwidth limitation. The results show that, with the new algorithm, the operational speed is significantly increased and the real-time performance of image restoration is effectively improved, especially for high-resolution images.

  2. Sparse BLIP: BLind Iterative Parallel imaging reconstruction using compressed sensing.

    Science.gov (United States)

    She, Huajun; Chen, Rong-Rong; Liang, Dong; DiBella, Edward V R; Ying, Leslie

    2014-02-01

    To develop a sensitivity-based parallel imaging reconstruction method to reconstruct iteratively both the coil sensitivities and the MR image simultaneously, based on their prior information. The parallel magnetic resonance imaging reconstruction problem can be formulated as a multichannel sampling problem where solutions are sought analytically. However, the channel functions given by the coil sensitivities in parallel imaging are not known exactly, and the estimation error usually leads to artifacts. In this study, we propose a new reconstruction algorithm, termed Sparse BLIP (Sparse BLind Iterative Parallel imaging), for blind iterative parallel imaging reconstruction using compressed sensing. The proposed algorithm reconstructs both the sensitivity functions and the image simultaneously from undersampled data. It enforces the sparseness constraint on the image as done in compressed sensing, but differs from compressed sensing in that the sensing matrix is unknown and an additional constraint is enforced on the sensitivities as well. Both phantom and in vivo imaging experiments were carried out with retrospective undersampling to evaluate the performance of the proposed method. Experiments show improvement in Sparse BLIP reconstruction when compared with Sparse SENSE, JSENSE, IRGN-TV, and L1-SPIRiT reconstructions with the same number of measurements. The proposed Sparse BLIP algorithm reduces the reconstruction errors when compared to the state-of-the-art parallel imaging methods. Copyright © 2013 Wiley Periodicals, Inc.

  3. MAPCUMBA: A fast iterative multi-grid map-making algorithm for CMB experiments

    Science.gov (United States)

    Doré, O.; Teyssier, R.; Bouchet, F. R.; Vibert, D.; Prunet, S.

    2001-07-01

    The data analysis of current Cosmic Microwave Background (CMB) experiments like BOOMERanG or MAXIMA poses severe challenges which already stretch the limits of current (super-) computer capabilities, if brute force methods are used. In this paper we present a practical solution for the optimal map making problem which can be used directly for next generation CMB experiments like ARCHEOPS and TopHat, and can probably be extended relatively easily to the full PLANCK case. This solution is based on an iterative multi-grid Jacobi algorithm which is both fast and memory sparing. Indeed, if there are Ntod data points along the one-dimensional timeline to analyse, the number of operations is O(Ntod ln Ntod) and the memory requirement is O(Ntod). Timing and accuracy issues have been analysed on simulated ARCHEOPS and TopHat data, and we discuss as well the issue of the joint evaluation of the signal and noise statistical properties.
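
    The damped Jacobi smoother at the heart of such a multi-grid map-maker can be sketched on a generic symmetric positive-definite system; the timeline/map multi-grid hierarchy is omitted, and the test system below is illustrative.

```python
# Minimal sketch of a damped Jacobi iteration, the basic smoother used inside
# multi-grid solvers, applied to a generic SPD system M x = rhs.
import numpy as np


def damped_jacobi(M, rhs, iters=500, omega=0.7):
    d_inv = 1.0 / np.diag(M)
    x = np.zeros_like(rhs)
    for _ in range(iters):
        x = x + omega * d_inv * (rhs - M @ x)  # relax toward the solution
    return x


rng = np.random.default_rng(9)
B = rng.normal(size=(80, 20))
M = B.T @ B + 20.0 * np.eye(20)   # well-posed SPD test system
x_true = rng.normal(size=20)
x = damped_jacobi(M, M @ x_true)
print(f"max error: {np.abs(x - x_true).max():.2e}")
```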

  4. Higher-order force gradient symplectic algorithms

    Science.gov (United States)

    Chin, Siu A.; Kidwell, Donald W.

    2000-12-01

    We show that a recently discovered fourth order symplectic algorithm, which requires one evaluation of the force gradient in addition to three evaluations of the force, when iterated to higher order, yields algorithms that are far superior to similarly iterated higher order algorithms based on the standard Forest-Ruth algorithm. We gauge the accuracy of each algorithm by comparing the step-size independent error functions associated with energy conservation and the rotation of the Laplace-Runge-Lenz vector when solving a highly eccentric Kepler problem. For orders 6, 8, 10, and 12, the new algorithms are approximately a factor of 10³, 10⁴, 10⁴, and 10⁵ better.
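
    For reference, the standard fourth-order Forest-Ruth composition that the force-gradient schemes are measured against can be sketched on a Kepler-like orbit; the force-gradient correction itself is not implemented here, and the orbit parameters are illustrative.

```python
# Minimal sketch of the fourth-order Forest-Ruth symplectic composition
# integrating a Kepler-like orbit (GM = 1); energy drift stays bounded.
import numpy as np

THETA = 1.0 / (2.0 - 2.0 ** (1.0 / 3.0))
C = (THETA / 2, (1 - THETA) / 2, (1 - THETA) / 2, THETA / 2)  # drift weights
D = (THETA, 1 - 2 * THETA, THETA, 0.0)                        # kick weights


def accel(x):
    return -x / np.linalg.norm(x) ** 3   # inverse-square attraction


def forest_ruth_step(x, v, dt):
    for c, d in zip(C, D):
        x = x + c * dt * v
        v = v + d * dt * accel(x)
    return x, v


x, v = np.array([1.0, 0.0]), np.array([0.0, 0.5])   # eccentric bound orbit
E0 = 0.5 * v @ v - 1.0 / np.linalg.norm(x)
for _ in range(20000):
    x, v = forest_ruth_step(x, v, 1e-3)
E1 = 0.5 * v @ v - 1.0 / np.linalg.norm(x)
print(f"relative energy drift: {abs((E1 - E0) / E0):.2e}")
```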

  5. Some Matrix Iterations for Computing Generalized Inverses and Balancing Chemical Equations

    OpenAIRE

    Soleimani, Farahnaz; Stanimirović, Predrag; Soleymani, Fazlollah

    2015-01-01

    An application of iterative methods for computing the Moore–Penrose inverse in balancing chemical equations is considered. With the aim to illustrate proposed algorithms, an improved high order hyper-power matrix iterative method for computing generalized inverses is introduced and applied. The improvements of the hyper-power iterative scheme are based on its proper factorization, as well as on the possibility to accelerate the iterations in the initial phase of the convergence. Although the ...

  6. A Multi-Scale Settlement Matching Algorithm Based on ARG

    Science.gov (United States)

    Yue, Han; Zhu, Xinyan; Chen, Di; Liu, Lingjia

    2016-06-01

    Homonymous entity matching is an important part of multi-source spatial data integration, automatic updating and change detection. Considering the low accuracy of existing methods in matching multi-scale settlement data, an algorithm based on the Attributed Relational Graph (ARG) is proposed. The algorithm first divides two settlement scenes at different scales into blocks by the small-scale road network and constructs local ARGs in each block. Then, it ascertains candidate sets by merging procedures and obtains the optimal matching pairs by comparing the similarity of the ARGs iteratively. Finally, the corresponding relations between settlements at large and small scales are identified. At the end of this article, a demonstration is presented and the results indicate that the proposed algorithm is capable of handling sophisticated cases.

  7. A time domain phase-gradient based ISAR autofocus algorithm

    CSIR Research Space (South Africa)

    Nel, W

    2011-10-01

    Full Text Available. Results on simulated and measured data show that the algorithm performs well. Unlike many other ISAR autofocus techniques, the algorithm does not make use of several computationally intensive iterations between the data and image domains as part...

  8. Iterative local Chi2 alignment algorithm for the ATLAS Pixel detector

    CERN Document Server

    Göttfert, Tobias

    The existing local chi2 alignment approach for the ATLAS SCT detector was extended to the alignment of the ATLAS Pixel detector. This approach is linear, aligns modules separately, and uses distance of closest approach residuals and iterations. The derivation and underlying concepts of the approach are presented. To show the feasibility of the approach for Pixel modules, a simplified, stand-alone track simulation, together with the alignment algorithm, was developed with the ROOT analysis software package. The Pixel alignment software was integrated into Athena, the ATLAS software framework. First results and the achievable accuracy for this approach with a simulated dataset are presented.

  9. A Region-Based GeneSIS Segmentation Algorithm for the Classification of Remotely Sensed Images

    Directory of Open Access Journals (Sweden)

    Stelios K. Mylonas

    2015-03-01

    Full Text Available This paper proposes an object-based segmentation/classification scheme for remotely sensed images, based on a novel variant of the recently proposed Genetic Sequential Image Segmentation (GeneSIS) algorithm. GeneSIS segments the image in an iterative manner, whereby at each iteration a single object is extracted via a genetic-based object extraction algorithm. Contrary to the previous pixel-based GeneSIS, where the candidate objects to be extracted were evaluated through the fuzzy content of their included pixels, in the newly developed region-based GeneSIS algorithm, a watershed-driven fine segmentation map is initially obtained from the original image, which serves as the basis for the forthcoming GeneSIS segmentation. Furthermore, in order to enhance the spatial search capabilities, we introduce a more descriptive encoding scheme in the object extraction algorithm, where the structural search modules are represented by polygonal shapes. Our objectives in the new framework are posed as follows: enhance the flexibility of the algorithm in extracting more flexible object shapes, assure high level classification accuracies, and reduce the execution time of the segmentation, while at the same time preserving all the inherent attributes of the GeneSIS approach. Finally, exploiting the inherent attribute of GeneSIS to produce multiple segmentations, we also propose two segmentation fusion schemes that operate on the ensemble of segmentations generated by GeneSIS. Our approaches are tested on an urban and two agricultural images. The results show that region-based GeneSIS has considerably lower computational demands compared to the pixel-based one. Furthermore, the suggested methods achieve higher classification accuracies and good segmentation maps compared to a series of existing algorithms.

  10. Evaluation of global synchronization for iterative algebra algorithms on many-core

    KAUST Repository

    ul Hasan Khan, Ayaz; Al-Mouhamed, Mayez; Firdaus, Lutfi A.

    2015-01-01

    © 2015 IEEE. Massively parallel computing is applied extensively in various scientific and engineering domains. With the growing interest in many-core architectures and due to the lack of explicit support for inter-block synchronization specifically in GPUs, synchronization becomes necessary to minimize inter-block communication time. In this paper, we have proposed two new inter-block synchronization techniques: 1) Relaxed Synchronization, and 2) Block-Query Synchronization. These schemes are used in implementing numerical iterative solvers, where computation/communication overlapping is one optimization used to enhance application performance. We have evaluated and analyzed the performance of the proposed synchronization techniques using the Jacobi Iterative Solver in comparison to state-of-the-art inter-block lock-free synchronization techniques. We have achieved about 1-8% performance improvement in terms of execution time over lock-free synchronization, depending on the problem size and the number of thread blocks. We have also evaluated the proposed algorithm on GPU and MIC architectures and obtained about 8-26% performance improvement over the barrier synchronization available in the OpenMP programming environment, depending on the problem size and number of cores used.

  11. Evaluation of global synchronization for iterative algebra algorithms on many-core

    KAUST Repository

    ul Hasan Khan, Ayaz

    2015-06-01

    © 2015 IEEE. Massively parallel computing is applied extensively in various scientific and engineering domains. With the growing interest in many-core architectures and due to the lack of explicit support for inter-block synchronization specifically in GPUs, synchronization becomes necessary to minimize inter-block communication time. In this paper, we have proposed two new inter-block synchronization techniques: 1) Relaxed Synchronization, and 2) Block-Query Synchronization. These schemes are used in implementing numerical iterative solvers, where computation/communication overlapping is one optimization used to enhance application performance. We have evaluated and analyzed the performance of the proposed synchronization techniques using the Jacobi Iterative Solver in comparison to state-of-the-art inter-block lock-free synchronization techniques. We have achieved about 1-8% performance improvement in terms of execution time over lock-free synchronization, depending on the problem size and the number of thread blocks. We have also evaluated the proposed algorithm on GPU and MIC architectures and obtained about 8-26% performance improvement over the barrier synchronization available in the OpenMP programming environment, depending on the problem size and number of cores used.

  12. SU-E-I-93: Improved Imaging Quality for Multislice Helical CT Via Sparsity Regularized Iterative Image Reconstruction Method Based On Tensor Framelet

    International Nuclear Information System (INIS)

    Nam, H; Guo, M; Lee, K; Li, R; Xing, L; Gao, H

    2014-01-01

    Purpose: Inspired by compressive sensing, sparsity regularized iterative reconstruction methods have been extensively studied. However, their utility pertinent to multislice helical 4D CT for radiotherapy with respect to imaging quality, dose, and time has not been thoroughly addressed. As the beginning of such an investigation, this work carries out an initial comparison of reconstructed imaging quality between a sparsity regularized iterative method and analytic methods through static phantom studies using a state-of-the-art 128-channel multi-slice Siemens helical CT scanner. Methods: In our iterative method, the tensor framelet (TF) is chosen as the regularization for its superior performance over total variation regularization in terms of reduced piecewise-constant artifacts and improved imaging quality, as demonstrated in our prior work. On the other hand, X-ray transforms and their adjoints are computed on-the-fly through a GPU implementation using our previously developed fast parallel algorithms with O(1) complexity per computing thread. For comparison, both the FDK (approximate analytic method) and the Katsevich algorithm (exact analytic method) are used for multislice helical CT image reconstruction. Results: The phantom experimental data with different imaging doses were acquired using a state-of-the-art 128-channel multi-slice Siemens helical CT scanner. The reconstructed image quality was compared between the TF-based iterative method, FDK and the Katsevich algorithm with quantitative analysis characterizing the signal-to-noise ratio, image contrast, and spatial resolution of high-contrast and low-contrast objects. Conclusion: The experimental results suggest that our tensor framelet regularized iterative reconstruction algorithm improves the helical CT imaging quality relative to the FDK and Katsevich algorithms for the static experimental phantom studies that have been performed.

  13. Improved iterative image reconstruction algorithm for the exterior problem of computed tomography

    International Nuclear Information System (INIS)

    Guo, Yumeng; Zeng, Li

    2017-01-01

    In industrial applications that are limited by the angle of a fan-beam and the length of a detector, the exterior problem of computed tomography (CT) uses only the projection data that correspond to the external annulus of the objects to reconstruct an image. Because the reconstructions are not affected by the projection data that correspond to the interior of the objects, the exterior problem is widely applied to detect cracks in the outer wall of large-sized objects, such as in-service pipelines. However, image reconstruction in the exterior problem is still a challenging problem due to truncated projection data and beam-hardening, both of which can lead to distortions and artifacts. Thus, developing an effective algorithm and adopting a scanning trajectory suited for the exterior problem may be valuable. In this study, an improved iterative algorithm that combines total variation minimization (TVM) with a region scalable fitting (RSF) model was developed for a unilateral off-centered scanning trajectory and can be utilized to inspect large-sized objects for defects. Experiments involving simulated phantoms and real projection data were conducted to validate the practicality of our algorithm. Furthermore, comparative experiments show that our algorithm outperforms others in suppressing the artifacts caused by truncated projection data and beam-hardening.

  14. Improved iterative image reconstruction algorithm for the exterior problem of computed tomography

    Energy Technology Data Exchange (ETDEWEB)

    Guo, Yumeng [Chongqing University, College of Mathematics and Statistics, Chongqing 401331 (China); Chongqing University, ICT Research Center, Key Laboratory of Optoelectronic Technology and System of the Education Ministry of China, Chongqing 400044 (China); Zeng, Li, E-mail: drlizeng@cqu.edu.cn [Chongqing University, College of Mathematics and Statistics, Chongqing 401331 (China); Chongqing University, ICT Research Center, Key Laboratory of Optoelectronic Technology and System of the Education Ministry of China, Chongqing 400044 (China)

    2017-01-11

    In industrial applications that are limited by the angle of a fan-beam and the length of a detector, the exterior problem of computed tomography (CT) uses only the projection data that correspond to the external annulus of the objects to reconstruct an image. Because the reconstructions are not affected by the projection data that correspond to the interior of the objects, the exterior problem is widely applied to detect cracks in the outer wall of large-sized objects, such as in-service pipelines. However, image reconstruction in the exterior problem is still a challenging problem due to truncated projection data and beam-hardening, both of which can lead to distortions and artifacts. Thus, developing an effective algorithm and adopting a scanning trajectory suited for the exterior problem may be valuable. In this study, an improved iterative algorithm that combines total variation minimization (TVM) with a region scalable fitting (RSF) model was developed for a unilateral off-centered scanning trajectory and can be utilized to inspect large-sized objects for defects. Experiments involving simulated phantoms and real projection data were conducted to validate the practicality of our algorithm. Furthermore, comparative experiments show that our algorithm outperforms others in suppressing the artifacts caused by truncated projection data and beam-hardening.

  15. Distributed 3-D iterative reconstruction for quantitative SPECT

    International Nuclear Information System (INIS)

    Ju, Z.W.; Frey, E.C.; Tsui, B.M.W.

    1995-01-01

    The authors describe a distributed three dimensional (3-D) iterative reconstruction library for quantitative single-photon emission computed tomography (SPECT). This library includes 3-D projector-backprojector pairs (PBPs) and distributed 3-D iterative reconstruction algorithms. The 3-D PBPs accurately and efficiently model various combinations of the image degrading factors including attenuation, detector response and scatter response. These PBPs were validated by comparing projection data computed using the projectors with those from direct Monte Carlo (MC) simulations. The distributed 3-D iterative algorithms spread the projection-backprojection operations for all the projection angles over a heterogeneous network of single or multi-processor computers to reduce the reconstruction time. Based on a master/slave paradigm, these distributed algorithms provide dynamic load balancing and fault tolerance. The distributed algorithms were verified by comparing images reconstructed using both the distributed and non-distributed algorithms. Computation times for distributed 3-D reconstructions running on up to four identical processors were reduced by a factor of approximately 80-90% of the number of processors participating, compared to those for non-distributed 3-D reconstructions running on a single processor. When combined with faster affordable computers, this library provides an efficient means for implementing accurate reconstruction and compensation methods to improve quality and quantitative accuracy in SPECT images
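
    The master/slave angle-splitting idea can be sketched in a few lines; `pbp_op` below is a hypothetical callable standing in for the library's 3-D projector-backprojector pairs, and the static chunking is a simplification of the dynamic load balancing described above.

    ```python
    from concurrent.futures import ProcessPoolExecutor
    import numpy as np

    def backproject_chunk(pbp_op, image_shape, projections, angles):
        # Per-worker task: accumulate the backprojection of a subset of angles.
        acc = np.zeros(image_shape)
        for proj, theta in zip(projections, angles):
            acc += pbp_op(proj, theta)
        return acc

    def distributed_backprojection(pbp_op, image_shape, projections, angles,
                                   n_workers=4):
        # Master/slave split of the projection angles across a worker pool.
        chunks = np.array_split(np.arange(len(angles)), n_workers)
        with ProcessPoolExecutor(max_workers=n_workers) as pool:
            futures = [pool.submit(backproject_chunk, pbp_op, image_shape,
                                   [projections[i] for i in c],
                                   [angles[i] for i in c]) for c in chunks]
            return sum(f.result() for f in futures)
    ```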

  16. An Efficient Topology-Based Algorithm for Transient Analysis of Power Grid

    KAUST Repository

    Yang, Lan

    2015-08-10

    In the design flow of integrated circuits, chip-level verification is an important step that checks that the performance is as expected. Power grid verification is one of the most expensive and time-consuming steps of chip-level verification, due to the extremely large size of the grid. Efficient power grid analysis technology is highly demanded as it saves computing resources and enables faster design iteration. In this paper, a topology-based power grid transient analysis algorithm is proposed. Nodal analysis is adopted to analyze the topology, which is mathematically equivalent to iteratively solving a positive semi-definite linear system. The convergence of the method is proved.

  17. Iterative Adaptive Dynamic Programming for Solving Unknown Nonlinear Zero-Sum Game Based on Online Data.

    Science.gov (United States)

    Zhu, Yuanheng; Zhao, Dongbin; Li, Xiangjun

    2017-03-01

    H∞ control is a powerful method to solve the disturbance attenuation problems that occur in some control systems. The design of such controllers relies on solving the zero-sum game (ZSG). But in practical applications, the exact dynamics are mostly unknown. Identification of the dynamics also produces errors that are detrimental to the control performance. To overcome this problem, an iterative adaptive dynamic programming algorithm is proposed in this paper to solve the continuous-time, unknown nonlinear ZSG with only online data. A model-free approach to the Hamilton-Jacobi-Isaacs equation is developed based on the policy iteration method. The control and disturbance policies, as well as the value function, are approximated by neural networks (NNs) under the critic-actor-disturber structure. The NN weights are solved by the least-squares method. According to the theoretical analysis, our algorithm is equivalent to a Gauss-Newton method solving an optimization problem, and it converges uniformly to the optimal solution. The online data can also be used repeatedly, which is highly efficient. Simulation results demonstrate its feasibility to solve the unknown nonlinear ZSG. When compared with other algorithms, it saves a significant amount of online measurement time.

  18. A fast image encryption algorithm based on chaotic map

    Science.gov (United States)

    Liu, Wenhao; Sun, Kehui; Zhu, Congxu

    2016-09-01

    Derived from the Sine map and the iterative chaotic map with infinite collapse (ICMIC), a new two-dimensional Sine ICMIC modulation map (2D-SIMM) is proposed based on a closed-loop modulation coupling (CMC) model, and its chaotic performance is analyzed by means of phase diagram, Lyapunov exponent spectrum and complexity. The results show that this map has good ergodicity, hyperchaotic behavior, a large maximum Lyapunov exponent and high complexity. Based on this map, a fast image encryption algorithm is proposed. In this algorithm, the confusion and diffusion processes are combined into one stage. Chaotic shift transform (CST) is proposed to efficiently change the image pixel positions, and the row and column substitutions are applied to scramble the pixel values simultaneously. The simulation and analysis results show that this algorithm has high security, low time complexity, and the abilities of resisting statistical analysis, differential, brute-force, known-plaintext and chosen-plaintext attacks.
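
    For illustration only, a minimal confusion-plus-diffusion loop in the same spirit is sketched below, using a plain one-dimensional sine map as a stand-in for the paper's 2D-SIMM map (whose equations are not reproduced here); the seed x0 plays the role of the key.

    ```python
    import numpy as np

    def sine_map_keystream(x0, n, a=0.99):
        """Stand-in 1-D sine map (NOT the paper's 2D-SIMM) keystream."""
        seq = np.empty(n)
        x = x0
        for i in range(n):
            x = a * np.sin(np.pi * x)      # simple chaotic iteration
            seq[i] = x
        return seq

    def encrypt(img, x0=0.376):
        flat = img.astype(np.uint8).flatten()
        ks = sine_map_keystream(x0, flat.size)
        perm = np.argsort(ks)              # confusion: chaotic permutation
        key = (ks * 1e6 % 256).astype(np.uint8)
        return np.bitwise_xor(flat[perm], key).reshape(img.shape)  # diffusion
    ```

    Decryption regenerates the keystream from the same seed, undoes the XOR, and applies the inverse permutation.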

  19. Reducing dose calculation time for accurate iterative IMRT planning

    International Nuclear Information System (INIS)

    Siebers, Jeffrey V.; Lauterbach, Marc; Tong, Shidong; Wu Qiuwen; Mohan, Radhe

    2002-01-01

    A time-consuming component of IMRT optimization is the dose computation required in each iteration for the evaluation of the objective function. Accurate superposition/convolution (SC) and Monte Carlo (MC) dose calculations are currently considered too time-consuming for iterative IMRT dose calculation. Thus, fast, but less accurate algorithms such as pencil beam (PB) algorithms are typically used in most current IMRT systems. This paper describes two hybrid methods that utilize the speed of fast PB algorithms yet achieve the accuracy of optimizing based upon SC algorithms via the application of dose correction matrices. In one method, the ratio method, an infrequently computed voxel-by-voxel dose ratio matrix (R = D_SC / D_PB) is applied for each beam to the dose distributions calculated with the PB method during the optimization. That is, D_PB × R is used for the dose calculation during the optimization. The optimization proceeds until both the IMRT beam intensities and the dose correction ratio matrix converge. In the second method, the correction method, a periodically computed voxel-by-voxel correction matrix for each beam, defined to be the difference between the SC and PB dose computations, is used to correct PB dose distributions. To validate the methods, IMRT treatment plans developed with the hybrid methods are compared with those obtained when the SC algorithm is used for all optimization iterations and with those obtained when PB-based optimization is followed by SC-based optimization. In the 12 patient cases studied, no clinically significant differences exist in the final treatment plans developed with each of the dose computation methodologies. However, the number of time-consuming SC iterations is reduced from 6-32 for pure SC optimization to four or less for the ratio matrix method and five or less for the correction method. Because the PB algorithm is faster at computing dose, this reduces the inverse planning optimization time for our implementation
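
    The outer loop of the ratio method is compact; in the sketch below, `dose_pb`, `dose_sc`, and `inner_optimize` are hypothetical callables standing in for the pencil-beam engine, the superposition/convolution engine, and the intensity optimizer.

    ```python
    import numpy as np

    def ratio_method(dose_pb, dose_sc, inner_optimize, w, n_outer=5):
        """Sketch of the ratio method: cheap PB doses are rescaled by an
        infrequently recomputed voxel-by-voxel ratio R = D_SC / D_PB."""
        R = 1.0                                  # initial correction
        for _ in range(n_outer):                 # only a few costly SC passes
            w = inner_optimize(lambda ww: dose_pb(ww) * R, w)
            d_pb, d_sc = dose_pb(w), dose_sc(w)
            R = np.where(d_pb > 0, d_sc / d_pb, 1.0)   # refresh correction
        return w                                 # stop once w and R settle
    ```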

  20. A Total Variation Regularization Based Super-Resolution Reconstruction Algorithm for Digital Video

    Directory of Open Access Journals (Sweden)

    Zhang Liangpei

    2007-01-01

    Full Text Available Super-resolution (SR) reconstruction techniques are capable of producing a high-resolution image from a sequence of low-resolution images. In this paper, we study an efficient SR algorithm for digital video. To effectively deal with the intractable problems in SR video reconstruction, such as inevitable motion estimation errors, noise, blurring, missing regions, and compression artifacts, the total variation (TV) regularization is employed in the reconstruction model. We use the fixed-point iteration method and preconditioning techniques to efficiently solve the associated nonlinear Euler-Lagrange equations of the corresponding variational problem in SR. The proposed algorithm has been tested in several cases of motion and degradation. It is also compared with the Laplacian regularization-based SR algorithm and other TV-based SR algorithms. Experimental results are presented to illustrate the effectiveness of the proposed algorithm.

  1. Robust Adaptive LCMV Beamformer Based On An Iterative Suboptimal Solution

    Directory of Open Access Journals (Sweden)

    Xiansheng Guo

    2015-06-01

    Full Text Available The main drawback of the closed-form solution of the linearly constrained minimum variance (CF-LCMV) beamformer is the dilemma between acquiring a long observation time for stable covariance matrix estimates and a short observation time to track the dynamic behavior of targets, leading to poor performance under low signal-to-noise ratio (SNR), low jammer-to-noise ratios (JNRs) and small numbers of snapshots. Additionally, CF-LCMV suffers from a heavy computational burden, which mainly comes from the two matrix inverse operations needed to compute the optimal weight vector. In this paper, we derive a low-complexity Robust Adaptive LCMV beamformer based on an Iterative Suboptimal solution (RAIS-LCMV) using the conjugate gradient (CG) optimization method. The merit of our proposed method is threefold. Firstly, the RAIS-LCMV beamformer reduces the complexity of CF-LCMV remarkably. Secondly, the RAIS-LCMV beamformer can adjust its output adaptively based on the measurements, with comparable convergence speed. Finally, the RAIS-LCMV algorithm has robust performance against low SNR, low JNRs, and small numbers of snapshots. Simulation results demonstrate the superiority of our proposed algorithms.
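
    The key numerical ingredient, replacing the explicit covariance inverse with a few conjugate gradient steps, can be sketched as follows; this is a textbook CG solve for R x = c with Hermitian positive-definite R, not the paper's exact recursion.

    ```python
    import numpy as np

    def conjugate_gradient(R, c, n_iter=20, tol=1e-10):
        """Solve R x = c iteratively, avoiding the explicit inverse in the
        closed-form LCMV weights w = R^{-1} C (C^H R^{-1} C)^{-1} f."""
        x = np.zeros_like(c)
        r = c - R @ x
        p = r.copy()
        rs = np.vdot(r, r)
        for _ in range(n_iter):
            Rp = R @ p
            alpha = rs / np.vdot(p, Rp)
            x = x + alpha * p
            r = r - alpha * Rp
            rs_new = np.vdot(r, r)
            if rs_new.real < tol:
                break
            p = r + (rs_new / rs) * p
            rs = rs_new
        return x
    ```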

  2. An iterative fast sweeping based eikonal solver for tilted orthorhombic media

    KAUST Repository

    Waheed, Umair bin

    2014-08-01

    Computing first-arrival traveltimes of quasi-P waves in the presence of anisotropy is important for high-end near-surface modeling, microseismic-source localization, and fractured-reservoir characterization, and requires solving an anisotropic eikonal equation. Anisotropy deviating from elliptical anisotropy introduces higher-order nonlinearity into the eikonal equation, which makes solving the eikonal equation a challenge. We address this challenge by iteratively solving a sequence of simpler tilted elliptically anisotropic eikonal equations. At each iteration, the source function is updated to capture the effects of the higher-order nonlinear terms. We use Aitken extrapolation to speed up the convergence rate of the iterative algorithm. The result is an algorithm for first-arrival traveltime computations in tilted anisotropic media. We demonstrate our method on tilted transversely isotropic media and tilted orthorhombic media. Our numerical tests demonstrate that the proposed method can match the first arrivals obtained by wavefield extrapolation, even for strong anisotropy and complex structures. Therefore, for the cases where one- or two-point ray tracing fails, our method may be a potential substitute for computing traveltimes. Our approach can be extended to anisotropic media with lower symmetries, such as monoclinic or even triclinic media.
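
    The fixed-point loop, including the Aitken delta-squared acceleration mentioned above, has roughly the following shape; `elliptic_solve` and `update_source` are hypothetical callables standing in for the tilted-elliptic fast-sweeping solver and the source-function update.

    ```python
    import numpy as np

    def iterative_eikonal(elliptic_solve, update_source, tau0, n_iter=10):
        """Each pass solves a *tilted elliptically* anisotropic eikonal
        equation whose source term absorbs the higher-order nonlinearity
        from the previous traveltime iterate."""
        taus = [tau0]
        for _ in range(n_iter):
            taus.append(elliptic_solve(update_source(taus[-1])))
            if len(taus) >= 3:               # Aitken's delta-squared step
                t0, t1, t2 = taus[-3], taus[-2], taus[-1]
                denom = t2 - 2.0 * t1 + t0
                taus[-1] = np.where(np.abs(denom) > 1e-12,
                                    t2 - (t2 - t1) ** 2 / denom, t2)
        return taus[-1]
    ```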

  3. An iterative fast sweeping based eikonal solver for tilted orthorhombic media

    KAUST Repository

    Waheed, Umair bin; Yarman, Can Evren; Flagg, Garret

    2014-01-01

    Computing first-arrival traveltimes of quasi-P waves in the presence of anisotropy is important for high-end near-surface modeling, microseismic-source localization, and fractured-reservoir characterization, and requires solving an anisotropic eikonal equation. Anisotropy deviating from elliptical anisotropy introduces higher-order nonlinearity into the eikonal equation, which makes solving the eikonal equation a challenge. We address this challenge by iteratively solving a sequence of simpler tilted elliptically anisotropic eikonal equations. At each iteration, the source function is updated to capture the effects of the higher-order nonlinear terms. We use Aitken extrapolation to speed up the convergence rate of the iterative algorithm. The result is an algorithm for first-arrival traveltime computations in tilted anisotropic media. We demonstrate our method on tilted transversely isotropic media and tilted orthorhombic media. Our numerical tests demonstrate that the proposed method can match the first arrivals obtained by wavefield extrapolation, even for strong anisotropy and complex structures. Therefore, for the cases where one- or two-point ray tracing fails, our method may be a potential substitute for computing traveltimes. Our approach can be extended to anisotropic media with lower symmetries, such as monoclinic or even triclinic media.

  4. Head-to-head comparison of adaptive statistical and model-based iterative reconstruction algorithms for submillisievert coronary CT angiography.

    Science.gov (United States)

    Benz, Dominik C; Fuchs, Tobias A; Gräni, Christoph; Studer Bruengger, Annina A; Clerc, Olivier F; Mikulicic, Fran; Messerli, Michael; Stehli, Julia; Possner, Mathias; Pazhenkottil, Aju P; Gaemperli, Oliver; Kaufmann, Philipp A; Buechel, Ronny R

    2018-02-01

    Iterative reconstruction (IR) algorithms allow for a significant reduction in radiation dose of coronary computed tomography angiography (CCTA). We performed a head-to-head comparison of adaptive statistical IR (ASiR) and model-based IR (MBIR) algorithms to assess their impact on quantitative image parameters and diagnostic accuracy for submillisievert CCTA. CCTA datasets of 91 patients were reconstructed using filtered back projection (FBP), increasing contributions of ASiR (20, 40, 60, 80, and 100%), and MBIR. Signal and noise were measured in the aortic root to calculate signal-to-noise ratio (SNR). In a subgroup of 36 patients, diagnostic accuracy of ASiR 40%, ASiR 100%, and MBIR for diagnosis of coronary artery disease (CAD) was compared with invasive coronary angiography. Median radiation dose was 0.21 mSv for CCTA. While increasing levels of ASiR gradually reduced image noise compared with FBP (by up to 48%), MBIR provided a further noise reduction (59% compared with ASiR 100%). ASiR 40% and ASiR 100% resulted in substantially lower diagnostic accuracy to detect CAD as diagnosed by invasive coronary angiography compared with MBIR: sensitivity and specificity were 100 and 37%, 100 and 57%, and 100 and 74% for ASiR 40%, ASiR 100%, and MBIR, respectively. MBIR offers substantial noise reduction with increased SNR, paving the way for implementation of submillisievert CCTA protocols in clinical routine. In contrast, the inferior noise reduction of ASiR negatively affects the diagnostic accuracy of submillisievert CCTA for CAD detection. Published on behalf of the European Society of Cardiology. All rights reserved. © The Author 2017. For permissions, please email: journals.permissions@oup.com.

  5. Receiver Architectures for MIMO-OFDM Based on a Combined VMP-SP Algorithm

    DEFF Research Database (Denmark)

    Manchón, Carles Navarro; Kirkelund, Gunvor Elisabeth; Riegler, Erwin

    2011-01-01

    Iterative information processing, either based on heuristics or analytical frameworks, has been shown to be a very powerful tool for the design of efficient, yet feasible, wireless receiver architectures. Within this context, algorithms performing message-passing on a probabilistic graph, such as the sum-product (SP) and variational message passing (VMP) algorithms, have become increasingly popular. In this contribution, we apply a combined VMP-SP message-passing technique to the design of receivers for MIMO-OFDM systems. The message-passing equations of the combined scheme can be obtained from... ...assessment of our solutions, based on Monte Carlo simulations, corroborates the high performance of the proposed algorithms and their superiority to heuristic approaches.

  6. Acoustical source reconstruction from non-synchronous sequential measurements by Fast Iterative Shrinkage Thresholding Algorithm

    Science.gov (United States)

    Yu, Liang; Antoni, Jerome; Leclere, Quentin; Jiang, Weikang

    2017-11-01

    Acoustical source reconstruction is a typical inverse problem, whose minimum frequency of reconstruction hinges on the size of the array and whose maximum frequency depends on the spacing distance between the microphones. For the sake of enlarging the frequency range of reconstruction and reducing the cost of the acquisition system, Cyclic Projection (CP), a method of sequential measurements without reference, was recently investigated (JSV, 2016, 372:31-49). In this paper, the Propagation-based Fast Iterative Shrinkage Thresholding Algorithm (Propagation-FISTA) is introduced, which improves CP in two aspects: (1) the number of acoustic sources is no longer needed and the only assumption made is that of a "weakly sparse" eigenvalue spectrum; (2) the construction of the spatial basis is much easier and adaptive to practical scenarios of acoustical measurements, benefiting from the introduction of the propagation-based spatial basis. The proposed Propagation-FISTA is first investigated with different simulations and experimental setups and is next illustrated with an industrial case.
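
    For reference, the generic FISTA iteration for an l1-regularized least-squares problem is sketched below; the propagation-based spatial basis and the sequential-measurement model of the paper are not reproduced, so this shows only the shrinkage-plus-momentum core on a real-valued problem.

    ```python
    import numpy as np

    def fista(A, b, lam, n_iter=200):
        """Textbook FISTA for min_x 0.5*||A x - b||^2 + lam*||x||_1."""
        L = np.linalg.norm(A, 2) ** 2    # Lipschitz constant of the gradient
        x = np.zeros(A.shape[1])
        y, t = x.copy(), 1.0
        soft = lambda v, s: np.sign(v) * np.maximum(np.abs(v) - s, 0.0)
        for _ in range(n_iter):
            grad = A.T @ (A @ y - b)
            x_new = soft(y - grad / L, lam / L)          # shrinkage step
            t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
            y = x_new + ((t - 1) / t_new) * (x_new - x)  # momentum
            x, t = x_new, t_new
        return x
    ```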

  7. Iterative h-minima-based marker-controlled watershed for cell nucleus segmentation.

    Science.gov (United States)

    Koyuncu, Can Fahrettin; Akhan, Ece; Ersahin, Tulin; Cetin-Atalay, Rengul; Gunduz-Demir, Cigdem

    2016-04-01

    Automated microscopy imaging systems facilitate high-throughput screening in molecular cellular biology research. The first step of these systems is cell nucleus segmentation, which has a great impact on the success of the overall system. The marker-controlled watershed is a technique commonly used by previous studies for nucleus segmentation. These studies define their markers by finding regional minima on the intensity/gradient and/or distance transform maps. They typically use the h-minima transform beforehand to suppress noise on these maps. The selection of the h value is critical; unnecessarily small values do not sufficiently suppress the noise, resulting in false and oversegmented markers, and unnecessarily large ones suppress too many pixels, causing missing and undersegmented markers. Because cell nuclei show different characteristics within an image, the same h value may not work to define correct markers for all the nuclei. To address this issue, in this work, we propose a new watershed algorithm that iteratively identifies its markers, considering a set of different h values. In each iteration, the proposed algorithm defines a set of candidates using a particular h value and selects the markers from those candidates provided that they fulfill the size requirement. Working with widefield fluorescence microscopy images, our experiments reveal that the use of multiple h values in our iterative algorithm leads to better segmentation results, compared to its counterparts. © 2016 International Society for Advancement of Cytometry.
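
    A minimal sketch of the multi-h marker idea, assuming a distance-map input, scikit-image's h-minima and watershed routines, and illustrative size bounds, could read:

    ```python
    import numpy as np
    from scipy import ndimage as ndi
    from skimage.morphology import h_minima
    from skimage.segmentation import watershed

    def iterative_marker_watershed(dist_map, h_values=(2, 4, 8, 16),
                                   min_size=50, max_size=5000):
        """Gather markers over several h values, keeping only candidates that
        satisfy the size requirement, instead of committing to a single h."""
        markers = np.zeros(dist_map.shape, dtype=int)
        next_label = 1
        for h in h_values:                               # one pass per h
            cand, n = ndi.label(h_minima(-dist_map, h))  # maxima of dist_map
            for lab in range(1, n + 1):
                region = cand == lab
                if (min_size <= region.sum() <= max_size
                        and not markers[region].any()):
                    markers[region] = next_label
                    next_label += 1
        return watershed(-dist_map, markers, mask=dist_map > 0)
    ```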

  8. A Reputation-based Distributed District Scheduling Algorithm for Smart Grids

    Directory of Open Access Journals (Sweden)

    D. Borra

    2015-05-01

    Full Text Available In this paper we develop and test a distributed algorithm providing Energy Consumption Schedules (ECS) in smart grids for a residential district. The goal is to achieve a given aggregate load profile. The NP-hard constrained optimization problem reduces to a distributed unconstrained formulation by means of the Lagrangian relaxation technique and a meta-heuristic algorithm based on a quantum-inspired particle swarm with Lévy flights. A centralized iterative reputation-reward mechanism is proposed for end-users to cooperate to avoid power peaks and reduce global overload, based on random distributions simulating human behaviors and on penalties for the effective ECS differing from the suggested ECS. Numerical results show the protocol's effectiveness.

  9. Real-time estimation of prostate tumor rotation and translation with a kV imaging system based on an iterative closest point algorithm

    International Nuclear Information System (INIS)

    Tehrani, Joubin Nasehi; O’Brien, Ricky T; Keall, Paul; Poulsen, Per Rugaard

    2013-01-01

    Previous studies have shown that during cancer radiotherapy a small translation or rotation of the tumor can lead to errors in dose delivery. Current best practice in radiotherapy accounts for tumor translations, but is unable to address rotation due to the lack of a reliable real-time estimate. We have developed a method based on the iterative closest point (ICP) algorithm that can compute rotation from kilovoltage x-ray images acquired during radiation treatment delivery. A total of 11 748 kilovoltage (kV) images acquired from ten patients (one fraction for each patient) were used to evaluate our tumor rotation algorithm. For each kV image, the three dimensional coordinates of three fiducial markers inside the prostate were calculated. The three dimensional coordinates were used as input to the ICP algorithm to calculate the real-time tumor rotation and translation around three axes. The results show that the root mean square error for real-time calculation of tumor displacement improved from a mean of 0.97 mm with stand-alone translation to a mean of 0.16 mm when rotation and translation were estimated together with the ICP algorithm. The standard deviation (SD) of rotation for the ten patients was 2.3°, 0.89° and 0.72° for rotation around the right–left (RL), anterior–posterior (AP) and superior–inferior (SI) directions respectively. The correlation between all six degrees of freedom showed that the highest correlation belonged to the AP and SI translations, with a correlation of 0.67. The second highest correlation in our study was between the rotation around RL and the rotation around AP, with a correlation of −0.33. Our real-time algorithm for calculation of rotation also confirms previous studies showing that the maximum SD belongs to AP translation and rotation around RL. ICP is a reliable and fast algorithm for estimating real-time tumor rotation, which could create a pathway to investigational clinical treatment studies.
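
    With three fiducial markers and known correspondence, each update reduces to the closed-form least-squares rigid fit (the Kabsch/SVD solution at the heart of an ICP step). A minimal sketch, assuming marker coordinates as rows of P (planned) and Q (measured):

    ```python
    import numpy as np

    def rigid_fit(P, Q):
        """Least-squares rotation R and translation t mapping points P -> Q."""
        cP, cQ = P.mean(axis=0), Q.mean(axis=0)
        H = (P - cP).T @ (Q - cQ)                 # cross-covariance matrix
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T                        # proper rotation (det = +1)
        t = cQ - R @ cP
        return R, t
    ```

    Rotations around the RL, AP, and SI axes can then be read off R by standard Euler-angle extraction.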

  10. A Design Algorithm using External Perturbation to Improve Iterative Feedback Tuning Convergence

    DEFF Research Database (Denmark)

    Huusom, Jakob Kjøbsted; Hjalmarsson, Håkan; Poulsen, Niels Kjølstad

    2011-01-01

    Iterative Feedback Tuning constitutes an attractive control loop tuning method for processes in the absence of process insight. It is a purely data-driven approach for optimization of the loop performance. The standard formulation ensures an unbiased estimate of the loop performance cost function gradient, which is used in a search algorithm for minimizing the performance cost. A slow rate of convergence of the tuning method is often experienced when tuning for disturbance rejection. This is due to a poor signal-to-noise ratio in the process data. A method is proposed for increasing the data...

  11. Finding the magnetic size distribution of magnetic nanoparticles from magnetization measurements via the iterative Kaczmarz algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Schmidt, Daniel, E-mail: frank.wiekhorst@ptb.de; Eberbeck, Dietmar; Steinhoff, Uwe; Wiekhorst, Frank

    2017-06-01

    The characterization of the size distribution of magnetic nanoparticles is an important step in the evaluation of their suitability for many different applications like magnetic hyperthermia, drug targeting or Magnetic Particle Imaging. We present a new method based on the iterative Kaczmarz algorithm that enables the reconstruction of the size distribution from magnetization measurements without a priori knowledge of the distribution form. We show in simulations that the method is capable of very exact reconstructions of a given size distribution and, at that, is highly robust to noise contamination. Moreover, we applied the method to the well characterized FeraSpin™ series and obtained results that were in accordance with the literature and with boundary conditions based on their synthesis via separation of the original suspension FeraSpin R. It is therefore concluded that this method is a powerful and intuitive tool for reconstructing particle size distributions from magnetization measurements. - Highlights: • A new method for the size distribution fit of magnetic nanoparticles is proposed. • The employed Kaczmarz algorithm needs no a priori input or eigenvalue regularization. • The method is highly robust to noise contamination. • Size distributions are reconstructed from simulated and measured magnetization curves.
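
    The row-action iteration itself is short; below is a plain cyclic Kaczmarz sketch for a linear model A w = b relating the size distribution w to the measured magnetization curve, with an assumed (physically natural) non-negativity projection on w.

    ```python
    import numpy as np

    def kaczmarz(A, b, n_sweeps=100, nonneg=True):
        """Cyclic Kaczmarz iteration for A w = b."""
        m, n = A.shape
        w = np.zeros(n)
        for _ in range(n_sweeps):
            for i in range(m):                        # cycle through the rows
                a = A[i]
                w = w + (b[i] - a @ w) / (a @ a) * a  # project onto row i
                if nonneg:
                    w = np.maximum(w, 0.0)            # keep distribution >= 0
        return w
    ```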

  12. First experiences with model based iterative reconstructions influence on quantitative plaque volume and intensity measurements in coronary computed tomography angiography

    DEFF Research Database (Denmark)

    Precht, Helle; Kitslaar, Pieter H.; Broersen, Alexander

    2017-01-01

    Purpose: Investigate the influence of adaptive statistical iterative reconstruction (ASIR) and the model-based IR (Veo) reconstruction algorithm in coronary computed tomography angiography (CCTA) images on quantitative measurements in coronary arteries for plaque volumes and intensities. Methods...

  13. Iterative algorithms to approximate canonical Gabor windows: Computational aspects

    DEFF Research Database (Denmark)

    Janssen, A. J. E. M.; Søndergaard, Peter Lempel

    2007-01-01

    In this article we investigate the computational aspects of some recently proposed iterative methods for approximating the canonical tight and canonical dual window of a Gabor frame (g, a, b). The iterations start with the window g, while the iteration steps comprise the window g, the k-th iterand...

  14. A Greedy Algorithm for Neighborhood Overlap-Based Community Detection

    Directory of Open Access Journals (Sweden)

    Natarajan Meghanathan

    2016-01-01

    Full Text Available The neighborhood overlap (NOVER) of an edge u-v is defined as the ratio of the number of nodes that are neighbors of both u and v to the number of nodes that are neighbors of at least u or v. In this paper, we hypothesize that an edge u-v with a lower NOVER score bridges two or more sets of vertices, with very few edges (other than u-v) connecting vertices from one set to another. Accordingly, we propose a greedy algorithm that iteratively removes the edges of a network in increasing order of their neighborhood overlap and calculates the modularity score of the resulting network component(s) after the removal of each edge. The network component(s) with the largest cumulative modularity score are identified as the different communities of the network. We evaluate the performance of the proposed NOVER-based community detection algorithm on nine real-world network graphs and compare it against the multi-level aggregation-based Louvain algorithm, as well as the original and time-efficient versions of the edge betweenness-based Girvan-Newman (GN) community detection algorithm.
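
    A minimal sketch of the greedy procedure, using networkx and tracking the best-modularity partition (a simplification of the cumulative-modularity criterion described above), might look like this:

    ```python
    import networkx as nx
    from networkx.algorithms.community import modularity

    def nover(G, u, v):
        """Neighborhood overlap of edge u-v as defined above."""
        Nu, Nv = set(G[u]) - {v}, set(G[v]) - {u}
        union = Nu | Nv
        return len(Nu & Nv) / len(union) if union else 0.0

    def nover_communities(G):
        H = G.copy()
        edges = sorted(G.edges(), key=lambda e: nover(G, *e))  # low NOVER first
        best, best_q = [set(G)], modularity(G, [set(G)])
        for u, v in edges:
            H.remove_edge(u, v)
            parts = [set(c) for c in nx.connected_components(H)]
            q = modularity(G, parts)       # scored on the original graph
            if q > best_q:
                best, best_q = parts, q
        return best
    ```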

  15. A Multi-Scale Settlement Matching Algorithm Based on ARG

    Directory of Open Access Journals (Sweden)

    H. Yue

    2016-06-01

    Full Text Available Homonymous entity matching is an important part of multi-source spatial data integration, automatic updating and change detection. Considering the low accuracy of existing methods in matching multi-scale settlement data, an algorithm based on the Attributed Relational Graph (ARG) is proposed. The algorithm first divides two settlement scenes at different scales into blocks by the small-scale road network and constructs local ARGs in each block. Then, it ascertains candidate sets by merging procedures and obtains the optimal matching pairs by comparing the similarity of the ARGs iteratively. Finally, the corresponding relations between settlements at large and small scales are identified. At the end of this article, a demonstration is presented and the results indicate that the proposed algorithm is capable of handling sophisticated cases.

  16. Error-source effects on the performance of direct and iterative algorithms on an optical matrix-vector processor

    Science.gov (United States)

    Perlee, Caroline J.; Casasent, David P.

    1990-09-01

    Error sources in an optical matrix-vector processor are analyzed in terms of their effect on the performance of the algorithms used to solve a set of nonlinear and linear algebraic equations. A direct and an iterative algorithm are used to solve a nonlinear time-dependent case study from computational fluid dynamics. A simulator which emulates the data flow and number representation of the OLAP is used to study these error effects. The ability of each algorithm to tolerate or correct the error sources is quantified. These results are extended to the general case of solving nonlinear and linear algebraic equations on the optical system.

  17. Research on Multiple Particle Swarm Algorithm Based on Analysis of Scientific Materials

    Directory of Open Access Journals (Sweden)

    Zhao Hongwei

    2017-01-01

    Full Text Available This paper proposes an improved particle swarm optimization algorithm based on analysis of scientific materials. The core idea of MPSO (Multiple Particle Swarm Algorithm) is to improve the single-population PSO to interactive multi-swarms, which addresses the problem of being trapped in local minima during later iterations due to a lack of diversity. The simulation results show that the convergence rate is fast, the search performance is good, and very good results have been achieved.

  18. PARALLEL ITERATIVE RECONSTRUCTION OF PHANTOM CATPHAN ON EXPERIMENTAL DATA

    Directory of Open Access Journals (Sweden)

    M. A. Mirzavand

    2016-01-01

    Full Text Available The principles of fast parallel iterative algorithms based on the use of graphics accelerators and the OpenGL library are considered in the paper. The proposed approach provides simultaneous minimization of the residuals of the desired solution and of the total variation of the reconstructed three-dimensional image. The amount of necessary input data, i.e. cone-beam X-ray projections, can be reduced several-fold, which makes it possible to reduce the patient's radiation exposure by a corresponding factor while maintaining the necessary contrast and spatial resolution of the three-dimensional image of the patient. The heuristic iterative algorithm can be used as an alternative to the well-known three-dimensional Feldkamp algorithm.

  19. Iterative Observer-based Estimation Algorithms for Steady-State Elliptic Partial Differential Equation Systems

    KAUST Repository

    Majeed, Muhammad Usman

    2017-01-01

    the problems are formulated on higher dimensional space domains. However, in this dissertation, feedback based state estimation algorithms, known as state observers, are developed to solve such steady-state problems using one of the space variables as time

  20. A Novel Compressed Sensing Method for Magnetic Resonance Imaging: Exponential Wavelet Iterative Shrinkage-Thresholding Algorithm with Random Shift

    Directory of Open Access Journals (Sweden)

    Yudong Zhang

    2016-01-01

    Full Text Available Aim. It can help improve the hospital throughput to accelerate magnetic resonance imaging (MRI) scanning. Patients will benefit from less waiting time. Task. In the last decade, various rapid MRI techniques on the basis of compressed sensing (CS) were proposed. However, both computation time and reconstruction quality of traditional CS-MRI did not meet the requirement of clinical use. Method. In this study, a novel method was proposed with the name of exponential wavelet iterative shrinkage-thresholding algorithm with random shift (abbreviated as EWISTARS). It is composed of three successful components: (i) exponential wavelet transform, (ii) iterative shrinkage-thresholding algorithm, and (iii) random shift. Results. Experimental results validated that, compared to state-of-the-art approaches, EWISTARS obtained the least mean absolute error, the least mean-squared error, and the highest peak signal-to-noise ratio. Conclusion. EWISTARS is superior to state-of-the-art approaches.

  1. A Novel Compressed Sensing Method for Magnetic Resonance Imaging: Exponential Wavelet Iterative Shrinkage-Thresholding Algorithm with Random Shift

    Science.gov (United States)

    Zhang, Yudong; Yang, Jiquan; Yang, Jianfei; Liu, Aijun; Sun, Ping

    2016-01-01

    Aim. It can help improve the hospital throughput to accelerate magnetic resonance imaging (MRI) scanning. Patients will benefit from less waiting time. Task. In the last decade, various rapid MRI techniques on the basis of compressed sensing (CS) were proposed. However, both computation time and reconstruction quality of traditional CS-MRI did not meet the requirement of clinical use. Method. In this study, a novel method was proposed with the name of exponential wavelet iterative shrinkage-thresholding algorithm with random shift (abbreviated as EWISTARS). It is composed of three successful components: (i) exponential wavelet transform, (ii) iterative shrinkage-thresholding algorithm, and (iii) random shift. Results. Experimental results validated that, compared to state-of-the-art approaches, EWISTARS obtained the least mean absolute error, the least mean-squared error, and the highest peak signal-to-noise ratio. Conclusion. EWISTARS is superior to state-of-the-art approaches. PMID:27066068

  2. Iterative image reconstruction algorithms in coronary CT angiography improve the detection of lipid-core plaque - a comparison with histology

    Energy Technology Data Exchange (ETDEWEB)

    Puchner, Stefan B. [Massachusetts General Hospital, Harvard Medical School, Cardiac MR PET CT Program, Department of Radiology, Boston, MA (United States); Medical University of Vienna, Department of Biomedical Imaging and Image-Guided Therapy, Vienna (Austria); Ferencik, Maros [Massachusetts General Hospital, Harvard Medical School, Cardiac MR PET CT Program, Department of Radiology, Boston, MA (United States); Harvard Medical School, Division of Cardiology, Massachusetts General Hospital, Boston, MA (United States); Maurovich-Horvat, Pal [Massachusetts General Hospital, Harvard Medical School, Cardiac MR PET CT Program, Department of Radiology, Boston, MA (United States); Semmelweis University, MTA-SE Lenduelet Cardiovascular Imaging Research Group, Heart and Vascular Center, Budapest (Hungary); Nakano, Masataka; Otsuka, Fumiyuki; Virmani, Renu [CV Path Institute Inc., Gaithersburg, MD (United States); Kauczor, Hans-Ulrich [University Hospital Heidelberg, Ruprecht-Karls-University of Heidelberg, Department of Diagnostic and Interventional Radiology, Heidelberg (Germany); Hoffmann, Udo [Massachusetts General Hospital, Harvard Medical School, Cardiac MR PET CT Program, Department of Radiology, Boston, MA (United States); Schlett, Christopher L. [Massachusetts General Hospital, Harvard Medical School, Cardiac MR PET CT Program, Department of Radiology, Boston, MA (United States); University Hospital Heidelberg, Ruprecht-Karls-University of Heidelberg, Department of Diagnostic and Interventional Radiology, Heidelberg (Germany)

    2015-01-15

    To evaluate whether iterative reconstruction algorithms improve the diagnostic accuracy of coronary CT angiography (CCTA) for detection of lipid-core plaque (LCP) compared to histology. CCTA and histological data were acquired from three ex vivo hearts. CCTA images were reconstructed using filtered back projection (FBP), adaptive-statistical (ASIR) and model-based (MBIR) iterative algorithms. Vessel cross-sections were co-registered between FBP/ASIR/MBIR and histology. Plaque area <60 HU was semiautomatically quantified in CCTA. LCP was defined by histology as fibroatheroma with a large lipid/necrotic core. Area under the curve (AUC) was derived from logistic regression analysis as a measure of diagnostic accuracy. Overall, 173 CCTA triplets (FBP/ASIR/MBIR) were co-registered with histology. LCP was present in 26 cross-sections. Average measured plaque area <60 HU was significantly larger in LCP compared to non-LCP cross-sections (mm²: 5.78 ± 2.29 vs. 3.39 ± 1.68 FBP; 5.92 ± 1.87 vs. 3.43 ± 1.62 ASIR; 6.40 ± 1.55 vs. 3.49 ± 1.50 MBIR; all p < 0.0001). AUC for detecting LCP was 0.803/0.850/0.903 for FBP/ASIR/MBIR and was significantly higher for MBIR compared to FBP (p = 0.01). MBIR increased sensitivity for detection of LCP by CCTA. Plaque area <60 HU in CCTA was associated with LCP in histology regardless of the reconstruction algorithm. However, MBIR demonstrated higher accuracy for detecting LCP, which may improve vulnerable plaque detection by CCTA. (orig.)

  3. Ant-Based Phylogenetic Reconstruction (ABPR: A new distance algorithm for phylogenetic estimation based on ant colony optimization

    Directory of Open Access Journals (Sweden)

    Karla Vittori

    2008-12-01

    Full Text Available We propose a new distance algorithm for phylogenetic estimation based on Ant Colony Optimization (ACO), named Ant-Based Phylogenetic Reconstruction (ABPR). ABPR joins two taxa iteratively based on the evolutionary distance among sequences, while also accounting for the quality of the phylogenetic tree built according to the total length of the tree. Similar to optimization algorithms for phylogenetic estimation, the algorithm allows exploration of a larger set of nearly optimal solutions. We applied the algorithm to four empirical data sets of mitochondrial DNA ranging from 12 to 186 sequences, and from 898 to 16,608 base pairs, and covering taxonomic levels from populations to orders. We show that ABPR performs better than the commonly used Neighbor-Joining algorithm, except when sequences are too closely related (e.g., population-level sequences). The phylogenetic relationships recovered at and above species level by ABPR agree with conventional views. However, like other algorithms of phylogenetic estimation, the proposed algorithm failed to recover expected relationships when distances are too similar or when rates of evolution are very variable, leading to the problem of long-branch attraction. ABPR, as well as other ACO-based algorithms, is emerging as a fast and accurate alternative method of phylogenetic estimation for large data sets.

  4. An automatic algorithm for blink-artifact suppression based on iterative template matching: application to single channel recording of cortical auditory evoked potentials

    Science.gov (United States)

    Valderrama, Joaquin T.; de la Torre, Angel; Van Dun, Bram

    2018-02-01

    Objective. Artifact reduction in electroencephalogram (EEG) signals is usually necessary to carry out data analysis appropriately. Despite the large amount of denoising techniques available with a multichannel setup, there is a lack of efficient algorithms that remove (not only detect) blink-artifacts from a single channel EEG, which is of interest in many clinical and research applications. This paper describes and evaluates the iterative template matching and suppression (ITMS), a new method proposed for detecting and suppressing the artifact associated with the blink activity from a single channel EEG. Approach. The approach of ITMS consists of (a) an iterative process in which blink-events are detected and the blink-artifact waveform of the analyzed subject is estimated, (b) generation of a signal modeling the blink-artifact, and (c) suppression of this signal from the raw EEG. The performance of ITMS is compared with the multi-window summation of derivatives within a window (MSDW) technique using both synthesized and real EEG data. Main results. Results suggest that ITMS presents an adequate performance in detecting and suppressing blink-artifacts from a single channel EEG. When applied to the analysis of cortical auditory evoked potentials (CAEPs), ITMS provides a significant quality improvement in the resulting responses, i.e. in a cohort of 30 adults, the mean correlation coefficient improved from 0.37 to 0.65 when the blink-artifacts were detected and suppressed by ITMS. Significance. ITMS is an efficient solution to the problem of denoising blink-artifacts in single-channel EEG applications, both in clinical and research fields. The proposed ITMS algorithm is stable; automatic, since it does not require human intervention; low-invasive, because the EEG segments not contaminated by blink-artifacts remain unaltered; and easy to implement, as can be observed in the Matlab script implementing the algorithm, provided as supporting material.

  5. Self-Adaptive Step Firefly Algorithm

    Directory of Open Access Journals (Sweden)

    Shuhao Yu

    2013-01-01

    Full Text Available In the standard firefly algorithm, each firefly has the same step settings and its value decreases from iteration to iteration. Therefore, it may fall into a local optimum. Furthermore, the decrease of the step is constrained by the maximum number of iterations, which influences the convergence speed and precision. In order to avoid falling into local optima and to reduce the impact of the maximum number of iterations, a self-adaptive step firefly algorithm is proposed in this paper. Its core idea is to set the step of each firefly so that it varies with the iteration, according to each firefly's historical information and current situation. Experiments are made to show the performance of our approach compared with the standard FA, based on sixteen standard benchmark test functions. The results reveal that our method can prevent premature convergence and improve the convergence speed and accuracy.

  6. An accurate projection algorithm for array processor based SPECT systems

    International Nuclear Information System (INIS)

    King, M.A.; Schwinger, R.B.; Cool, S.L.

    1985-01-01

    A data re-projection algorithm has been developed for use in single photon emission computed tomography (SPECT) on an array-processor-based computer system. The algorithm makes use of an accurate representation of pixel activity (uniform square pixel model of intensity distribution), and is rapidly performed due to the efficient handling of an array-based algorithm and the Fast Fourier Transform (FFT) on parallel processing hardware. The algorithm consists of using a pixel-driven nearest-neighbour projection operation onto an array of subdivided projection bins. This result is then convolved with the projected uniform square pixel distribution before being compressed to the original bin size. This distribution varies with projection angle and is explicitly calculated. The FFT combined with a frequency-space multiplication is used instead of a spatial convolution for more rapid execution. The new algorithm was tested against other commonly used projection algorithms by comparing the accuracy of projections of a simulated transverse section of the abdomen against analytically determined projections of that transverse section. The new algorithm was found to yield comparable or better standard error and yet result in easier and more efficient implementation on parallel hardware. Applications of the algorithm include iterative reconstruction and attenuation correction schemes and evaluation of regions of interest in dynamic and gated SPECT

  7. A generalization of Takane's algorithm for DEDICOM

    NARCIS (Netherlands)

    Kiers, Henk A.L.; ten Berge, Jos M.F.; Takane, Yoshio; de Leeuw, Jan

    An algorithm is described for fitting the DEDICOM model for the analysis of asymmetric data matrices. This algorithm generalizes an algorithm suggested by Takane in that it uses a damping parameter in the iterative process. Takane's algorithm does not always converge monotonically. Based on the

  8. Value Iteration Adaptive Dynamic Programming for Optimal Control of Discrete-Time Nonlinear Systems.

    Science.gov (United States)

    Wei, Qinglai; Liu, Derong; Lin, Hanquan

    2016-03-01

    In this paper, a value iteration adaptive dynamic programming (ADP) algorithm is developed to solve infinite horizon undiscounted optimal control problems for discrete-time nonlinear systems. The present value iteration ADP algorithm permits an arbitrary positive semi-definite function to initialize the algorithm. A novel convergence analysis is developed to guarantee that the iterative value function converges to the optimal performance index function. Initialized by different initial functions, it is proven that the iterative value function will be monotonically nonincreasing, monotonically nondecreasing, or nonmonotonic and will converge to the optimum. In this paper, for the first time, the admissibility properties of the iterative control laws are developed for value iteration algorithms. It is emphasized that new termination criteria are established to guarantee the effectiveness of the iterative control laws. Neural networks are used to approximate the iterative value function and compute the iterative control law, respectively, for facilitating the implementation of the iterative ADP algorithm. Finally, two simulation examples are given to illustrate the performance of the present method.
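
    As a point of reference, the undiscounted value-iteration recursion has the following shape on a toy finite deterministic MDP; the paper's setting replaces the tables below with neural-network approximators over continuous states, and its convergence analysis covers conditions this sketch simply assumes (e.g., a reachable zero-cost absorbing state).

    ```python
    import numpy as np

    def value_iteration(P, r, n_iter=200):
        """V_{k+1}(s) = min_a [ r(s,a) + V_k(s') ], with s' = P[s, a].
        P: integer successor table, r: stage-cost table (assumed forms)."""
        V = np.zeros(r.shape[0])   # zero is one admissible initialization
        for _ in range(n_iter):
            Q = r + V[P]           # one-step lookahead, no discounting
            V = Q.min(axis=1)
        return V, Q.argmin(axis=1)  # value and greedy control law
    ```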

  9. Numerical simulation and comparison of nonlinear self-focusing based on iteration and ray tracing

    Science.gov (United States)

    Li, Xiaotong; Chen, Hao; Wang, Weiwei; Ruan, Wangchao; Zhang, Luwei; Cen, Zhaofeng

    2017-05-01

    Self-focusing is observed in nonlinear materials owing to the interaction between laser and matter as a laser beam propagates. Numerical simulation strategies such as the beam propagation method (BPM), based on the nonlinear Schrödinger equation, and ray tracing methods, based on Fermat's principle, have been applied to simulate the self-focusing process. In this paper we present an iterative nonlinear ray tracing method in which the nonlinear material is cut into many slices, just like in the existing approaches, but instead of using the paraxial approximation and the split-step Fourier transform, a large quantity of sampled real rays are traced step by step through the system, with the refractive index and laser intensity updated by iteration. In this process a smoothing treatment is employed to generate a laser density distribution at each slice to decrease the error caused by under-sampling. The characteristic of this method is that the nonlinear refractive indices of the points on the current slice are calculated by iteration, so as to solve the problem of unknown parameters in the material caused by the causal relationship between laser intensity and nonlinear refractive index. Compared with the beam propagation method, this algorithm is more suitable for engineering applications, has lower time complexity, and is capable of numerical simulation of the self-focusing process in systems that include both linear and nonlinear optical media. If the sampled rays are traced with their complex amplitudes and light paths or phases, it will be possible to simulate the superposition effects of different beams. At the end of the paper, the advantages and disadvantages of this algorithm are discussed.

  10. Weighted-Bit-Flipping-Based Sequential Scheduling Decoding Algorithms for LDPC Codes

    Directory of Open Access Journals (Sweden)

    Qing Zhu

    2013-01-01

    Full Text Available Low-density parity-check (LDPC) codes can be applied in a lot of different scenarios such as video broadcasting and satellite communications. LDPC codes are commonly decoded by an iterative algorithm called belief propagation (BP) over the corresponding Tanner graph. The original BP updates all the variable-nodes simultaneously, followed by all the check-nodes simultaneously as well. We propose a sequential scheduling algorithm based on the weighted bit-flipping (WBF) algorithm for the sake of improving the convergence speed. Notably, WBF is a low-complexity and simple algorithm. We combine it with BP to obtain the advantages of these two algorithms. The flipping function used in WBF is borrowed to determine the priority of scheduling. Simulation results show that it can provide a good tradeoff between FER performance and computation complexity for short-length LDPC codes.
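
    A minimal weighted-bit-flipping decoder, the building block whose flipping function supplies the scheduling priority, can be sketched as follows; the per-check reliability weights `reliab` are assumed precomputed (e.g., the minimum channel magnitude among the bits of each check).

    ```python
    import numpy as np

    def wbf_decode(H, y_hard, reliab, max_iter=50):
        """H: binary parity-check matrix, y_hard: hard-decision bits."""
        x = y_hard.copy()
        for _ in range(max_iter):
            s = H @ x % 2                    # syndrome: 1 marks a failed check
            if not s.any():
                break                        # valid codeword found
            E = (H * (2 * s - 1)[:, None] * reliab[:, None]).sum(axis=0)
            x[np.argmax(E)] ^= 1             # flip the most suspicious bit
        return x
    ```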

  11. SU-D-17A-02: Four-Dimensional CBCT Using Conventional CBCT Dataset and Iterative Subtraction Algorithm of a Lung Patient

    International Nuclear Information System (INIS)

    Hu, E; Lasio, G; Yi, B

    2014-01-01

    Purpose: The Iterative Subtraction Algorithm (ISA) method retrospectively generates a pre-selected motion-phase cone-beam CT image from a full-motion cone-beam CT acquired at standard rotation speed. This work evaluates the ISA method with real lung patient data. Methods: The goal of the ISA algorithm is to extract motion and no-motion components from the full reconstruction CBCT. The workflow consists of subtracting from the full CBCT all of the undesired motion phases to obtain a motion-deblurred single-phase CBCT image, followed by iteration of this subtraction process. ISA is realized as follows: 1) the projections are sorted into various phases, and from all phases a full reconstruction is performed to generate an image CTM; 2) forward projections of CTM are generated at the desired phase projection angles; the subtraction of each projection and its forward projection reconstructs CTSub1, in which the desired phase component is diminished; 3) by adding CTSub1 back to CTM, a no-motion CBCT, CTS1, can be computed; 4) CTS1 still contains a residual motion component; 5) this residual motion component can be further reduced by iteration. The ISA 4DCBCT technique was implemented using the Varian Trilogy accelerator OBI system. To evaluate the method, a lung patient CBCT dataset was used. The reconstruction algorithm is FDK. Results: The single-phase CBCT reconstruction generated via ISA successfully isolates the desired motion phase from the full-motion CBCT, effectively reducing motion blur. It also shows improved image quality, with reduced streak artifacts with respect to reconstructions from unprocessed phase-sorted projections only. Conclusion: A CBCT motion-deblurring algorithm, ISA, has been developed and evaluated with lung patient data. The algorithm allows improved visualization of a single motion phase extracted from a standard CBCT dataset. This study has been supported by the National Institutes of Health through R01CA133539

  12. Joint 2D-DOA and Frequency Estimation for L-Shaped Array Using Iterative Least Squares Method

    Directory of Open Access Journals (Sweden)

    Ling-yun Xu

    2012-01-01

    Full Text Available We introduce an iterative least squares method (ILS) for estimating the 2D-DOA and frequency based on an L-shaped array. The ILS iteratively finds the direction matrix and delay matrix, and then the 2D-DOA and frequency can be obtained by the least squares method. Without spectral peak searching and pairing, this algorithm works well and pairs the parameters automatically. Moreover, our algorithm has better performance than the conventional ESPRIT algorithm and the propagator method. The useful behavior of the proposed algorithm is verified by simulations.

  13. Complex amplitude reconstruction by iterative amplitude-phase retrieval algorithm with reference

    Science.gov (United States)

    Shen, Cheng; Guo, Cheng; Tan, Jiubin; Liu, Shutian; Liu, Zhengjun

    2018-06-01

    Multi-image iterative phase retrieval methods have been successfully applied in plenty of research fields due to their simple but efficient implementation. However, there is a mismatch between the measurement of the first long imaging distance and the sequential interval. In this paper, an amplitude-phase retrieval algorithm with reference is put forward without additional measurements or a priori knowledge. It gets rid of measuring the first imaging distance. With a designed update formula, it significantly raises the convergence speed and the reconstruction fidelity, especially in phase retrieval. Its superiority over the original amplitude-phase retrieval (APR) method is validated by numerical analysis and experiments. Furthermore, it provides a conceptual design of a compact holographic image sensor, which can achieve numerical refocusing easily.

  14. Nonuniform Sparse Data Clustering Cascade Algorithm Based on Dynamic Cumulative Entropy

    Directory of Open Access Journals (Sweden)

    Ning Li

    2016-01-01

    Full Text Available A small amount of prior knowledge and randomly chosen initial cluster centers have a direct impact on the accuracy of the performance of an iterative clustering algorithm. In this paper we propose a new algorithm to compute initial cluster centers for k-means clustering and the best number of clusters, with little prior knowledge, and to optimize the clustering result. It constructs a Euclidean distance control factor based on the aggregation density sparse degree to select the initial cluster centers of nonuniform sparse data and obtains initial data clusters by multidimensional diffusion density distribution. A multiobjective clustering approach based on dynamic cumulative entropy is adopted to optimize the initial data clusters and the best number of clusters. The experimental results show that the newly proposed algorithm performs well in obtaining the initial cluster centers for the k-means algorithm and effectively improves the clustering accuracy of nonuniform sparse data by about 5%.

  15. Iterative Schemes for Convex Minimization Problems with Constraints

    Directory of Open Access Journals (Sweden)

    Lu-Chuan Ceng

    2014-01-01

    Full Text Available We first introduce and analyze one implicit iterative algorithm for finding a solution of the minimization problem for a convex and continuously Fréchet differentiable functional, with constraints of several problems: the generalized mixed equilibrium problem, the system of generalized equilibrium problems, and finitely many variational inclusions in a real Hilbert space. We prove a strong convergence theorem for the iterative algorithm under suitable conditions. On the other hand, we also propose another implicit iterative algorithm for finding a fixed point of infinitely many nonexpansive mappings with the same constraints, and derive its strong convergence under mild assumptions.

  16. A new column-generation-based algorithm for VMAT treatment plan optimization

    International Nuclear Information System (INIS)

    Peng Fei; Epelman, Marina A; Romeijn, H Edwin; Jia Xun; Gu Xuejun; Jiang, Steve B

    2012-01-01

    We study the treatment plan optimization problem for volumetric modulated arc therapy (VMAT). We propose a new column-generation-based algorithm that takes into account bounds on the gantry speed and dose rate, as well as an upper bound on the rate of change of the gantry speed, in addition to MLC constraints. The algorithm iteratively adds one aperture at each control point along the treatment arc. In each iteration, a restricted problem optimizing intensities at previously selected apertures is solved, and its solution is used to formulate a pricing problem, which selects an aperture at another control point that is compatible with previously selected apertures and leads to the largest rate of improvement in the objective function value of the restricted problem. Once a complete set of apertures is obtained, their intensities are optimized and the gantry speeds and dose rates are adjusted to minimize treatment time while satisfying all machine restrictions. Comparisons of treatment plans obtained by our algorithm to idealized 177-beam IMRT plans on five clinical prostate cancer cases demonstrate high quality with respect to clinical dose–volume criteria. For all cases, our algorithm yields treatment plans that can be delivered in around 2 min. Implementation on a graphics processing unit enables us to finish the optimization of a VMAT plan in 25–55 s. (paper)

  17. Hybrid Firefly Variants Algorithm for Localization Optimization in WSN

    Directory of Open Access Journals (Sweden)

    P. SrideviPonmalar

    2017-01-01

    Full Text Available Localization is one of the key issues in wireless sensor networks. Several algorithms and techniques have been introduced for localization, which is the procedure of estimating sensor node locations. In this paper, three novel hybrid firefly-based algorithms are proposed for the localization problem. The Hybrid Genetic Algorithm-Firefly Localization Algorithm (GA-FFLA), Hybrid Differential Evolution-Firefly Localization Algorithm (DE-FFLA) and Hybrid Particle Swarm Optimization-Firefly Localization Algorithm (PSO-FFLA) are analyzed, designed and implemented to optimize the localization error. The localization algorithms are compared based on the accuracy of location estimation, time complexity and the number of iterations required to achieve that accuracy. All three algorithms achieve one hundred percent estimation accuracy, but differ in the number of fireflies required, time complexity and the number of iterations needed. Keywords: Localization; Genetic Algorithm; Differential Evolution; Particle Swarm Optimization

  18. An algebraic iterative reconstruction technique for differential X-ray phase-contrast computed tomography.

    Science.gov (United States)

    Fu, Jian; Schleede, Simone; Tan, Renbo; Chen, Liyuan; Bech, Martin; Achterhold, Klaus; Gifford, Martin; Loewen, Rod; Ruth, Ronald; Pfeiffer, Franz

    2013-09-01

    Iterative reconstruction has a wide spectrum of proven advantages in the field of conventional X-ray absorption-based computed tomography (CT). In this paper, we report on an algebraic iterative reconstruction technique for grating-based differential phase-contrast CT (DPC-CT). Due to the differential nature of DPC-CT projections, a differential operator and a smoothing operator are added to the iterative reconstruction, compared to the one commonly used for absorption-based CT data. This work comprises a numerical study of the algorithm and its experimental verification using a dataset measured at a two-grating interferometer setup. Since the algorithm is easy to implement and allows for the extension to various regularization possibilities, we expect a significant impact of the method for improving future medical and industrial DPC-CT applications. Copyright © 2012. Published by Elsevier GmbH.

  19. A fast and robust iterative algorithm for prediction of RNA pseudoknotted secondary structures

    Science.gov (United States)

    2014-01-01

    Background Improving accuracy and efficiency of computational methods that predict pseudoknotted RNA secondary structures is an ongoing challenge. Existing methods based on free energy minimization tend to be very slow and are limited in the types of pseudoknots that they can predict. Incorporating known structural information can improve prediction accuracy; however, there are not many methods for prediction of pseudoknotted structures that can incorporate structural information as input. There is even less understanding of the relative robustness of these methods with respect to partial information. Results We present a new method, Iterative HFold, for pseudoknotted RNA secondary structure prediction. Iterative HFold takes as input a pseudoknot-free structure, and produces a possibly pseudoknotted structure whose energy is at least as low as that of any (density-2) pseudoknotted structure containing the input structure. Iterative HFold leverages strengths of earlier methods, namely the fast running time of HFold, a method that is based on the hierarchical folding hypothesis, and the energy parameters of HotKnots V2.0. Our experimental evaluation on a large data set shows that Iterative HFold is robust with respect to partial information, with average accuracy on pseudoknotted structures steadily increasing from roughly 54% to 79% as the user provides up to 40% of the input structure. Iterative HFold is much faster than HotKnots V2.0, while having comparable accuracy. Iterative HFold also has significantly better accuracy than IPknot on our HK-PK and IP-pk168 data sets. Conclusions Iterative HFold is a robust method for prediction of pseudoknotted RNA secondary structures, whose accuracy with more than 5% information about true pseudoknot-free structures is better than that of IPknot, and with about 35% information about true pseudoknot-free structures compares well with that of HotKnots V2.0 while being significantly faster. Iterative HFold and all data used in

  20. Algorithms for worst-case tolerance optimization

    DEFF Research Database (Denmark)

    Schjær-Jacobsen, Hans; Madsen, Kaj

    1979-01-01

    New algorithms are presented for the solution of optimum tolerance assignment problems. The problems considered are defined mathematically as a worst-case problem (WCP), a fixed tolerance problem (FTP), and a variable tolerance problem (VTP). The basic optimization problem without tolerances is denoted the zero tolerance problem (ZTP). For solution of the WCP we suggest application of interval arithmetic and also alternative methods. For solution of the FTP an algorithm is suggested which is conceptually similar to algorithms previously developed by the authors for the ZTP. Finally, the VTP is solved by a double-iterative algorithm in which the inner iteration is performed by the FTP algorithm. The application of the algorithm is demonstrated by means of relatively simple numerical examples. Basic properties, such as convergence properties, are displayed based on the examples.

  1. Integrating a Genetic Algorithm Into a Knowledge-Based System for Ordering Complex Design Processes

    Science.gov (United States)

    Rogers, James L.; McCulley, Collin M.; Bloebaum, Christina L.

    1996-01-01

    The design cycle associated with large engineering systems requires an initial decomposition of the complex system into design processes which are coupled through the transference of output data. Some of these design processes may be grouped into iterative subcycles. In analyzing or optimizing such a coupled system, it is essential to be able to determine the best ordering of the processes within these subcycles to reduce design cycle time and cost. Many decomposition approaches assume the capability is available to determine what design processes and couplings exist and what order of execution will be imposed during the design cycle. Unfortunately, this is often a complex problem and beyond the capabilities of a human design manager. A new feature, a genetic algorithm, has been added to DeMAID (Design Manager's Aid for Intelligent Decomposition) to allow the design manager to rapidly examine many different combinations of ordering processes in an iterative subcycle and to optimize the ordering based on cost, time, and iteration requirements. Two sample test cases are presented to show the effects of optimizing the ordering with a genetic algorithm.

  2. Information-theoretic discrepancy based iterative reconstructions (IDIR) for polychromatic x-ray tomography

    International Nuclear Information System (INIS)

    Jang, Kwang Eun; Lee, Jongha; Sung, Younghun; Lee, SeongDeok

    2013-01-01

    Purpose: X-ray photons generated from a typical x-ray source for clinical applications exhibit a broad range of wavelengths, and the interactions between individual particles and biological substances depend on particles' energy levels. Most existing reconstruction methods for transmission tomography, however, neglect this polychromatic nature of measurements and rely on the monochromatic approximation. In this study, we developed a new family of iterative methods that incorporates the exact polychromatic model into tomographic image recovery, which improves the accuracy and quality of reconstruction. Methods: The generalized information-theoretic discrepancy (GID) was employed as a new metric for quantifying the distance between the measured and synthetic data. By using special features of the GID, the objective function for polychromatic reconstruction which contains a double integral over the wavelength and the trajectory of incident x-rays was simplified to a paraboloidal form without using the monochromatic approximation. More specifically, the original GID was replaced with a surrogate function with two auxiliary, energy-dependent variables. Subsequently, the alternating minimization technique was applied to solve the double minimization problem. Based on the optimization transfer principle, the objective function was further simplified to the paraboloidal equation, which leads to a closed-form update formula. Numerical experiments on the beam-hardening correction and material-selective reconstruction were conducted to compare and assess the performance of conventional methods and the proposed algorithms. Results: The authors found that the GID determines the distance between its two arguments in a flexible manner. In this study, three groups of GIDs with distinct data representations were considered. The authors demonstrated that one type of GIDs that comprises “raw” data can be viewed as an extension of existing statistical reconstructions; under a

  3. Iterative CT reconstruction via minimizing adaptively reweighted total variation.

    Science.gov (United States)

    Zhu, Lei; Niu, Tianye; Petrongolo, Michael

    2014-01-01

    Iterative reconstruction via total variation (TV) minimization has demonstrated great success in accurate CT imaging from under-sampled projections. When projections are further reduced, over-smoothing artifacts appear in the reconstruction, especially around structure boundaries. We propose a practical algorithm to improve TV-minimization based CT reconstruction on very few projection data. Based on the theory of compressed sensing, the L0 norm is more desirable for further reducing the projection views. To overcome the computational difficulty of the non-convex L0-norm optimization, we implement an adaptive weighting scheme that approximates the solution via a series of TV minimizations for practical use in CT reconstruction. The weights on TV are initialized as uniform, and are automatically updated based on the gradient of the reconstructed image from the previous iteration. The iteration stops when a small difference between the weighted TV values is observed on two consecutive reconstructed images. We evaluate the proposed algorithm on both a digital phantom and a physical phantom. Using 20 equiangular projections, our method reduces the reconstruction errors of conventional TV minimization by a factor of more than 5, with improved spatial resolution. By adaptively reweighting TV in iterative CT reconstruction, we successfully further reduce the projection number needed for the same or better image quality.
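
    The reweighting step lends itself to a compact sketch. Below, `recon_step` is a placeholder for any weighted-TV CT solver; the gradient stencil, the epsilon constant, and the exact weight formula are illustrative assumptions, chosen so that large gradients (edges) receive small weights, mimicking an L0-like penalty.

```python
import numpy as np

def update_tv_weights(img, eps=1e-3):
    """Reweighting: weights shrink where the image gradient is large,
    so edges are penalized less on the next TV pass."""
    gx = np.diff(img, axis=0, append=img[-1:, :])
    gy = np.diff(img, axis=1, append=img[:, -1:])
    return 1.0 / (np.hypot(gx, gy) + eps)

def reweighted_tv(recon_step, y, shape, n_outer=5, tol=1e-4):
    """Outer loop: solve a weighted-TV problem with the user-supplied
    solver `recon_step(y, weights)`, reweight, and stop when the
    weighted-TV value stabilizes between consecutive reconstructions."""
    w = np.ones(shape)
    prev = np.inf
    for _ in range(n_outer):
        img = recon_step(y, w)
        w = update_tv_weights(img)
        gx = np.diff(img, axis=0, append=img[-1:, :])
        gy = np.diff(img, axis=1, append=img[:, -1:])
        wtv = float((w * np.hypot(gx, gy)).sum())
        if abs(prev - wtv) < tol * max(prev, 1.0):
            break
        prev = wtv
    return img
```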

  4. Improvement of image quality of holographic projection on tilted plane using iterative algorithm

    Science.gov (United States)

    Pang, Hui; Cao, Axiu; Wang, Jiazhou; Zhang, Man; Deng, Qiling

    2017-12-01

    Holographic image projection on a tilted plane has important application prospects. In this paper, we propose a method to compute a phase-only hologram that can reconstruct a clear image on a tilted plane. By adding a constant phase to the target image on the inclined plane, the corresponding light field distribution on the plane parallel to the hologram plane is derived through a tilted diffraction calculation. The phase distribution of the hologram is then obtained by an iterative algorithm with amplitude and phase constraints. Simulation and optical experiments are performed to show the effectiveness of the proposed method.
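
    For reference, the core Gerchberg–Saxton iteration has the shape below; this is a minimal far-field version using a plain FFT, so the paper's tilted-plane propagation and constant-phase preprocessing would replace the transform used here.

```python
import numpy as np

def gerchberg_saxton(target_amp, n_iter=100, seed=0):
    """Classic GS loop for a phase-only hologram of a far-field target:
    alternate between the hologram plane (unit amplitude) and the image
    plane (target amplitude), keeping only the phase each time."""
    rng = np.random.default_rng(seed)
    phase = rng.uniform(0, 2 * np.pi, target_amp.shape)
    for _ in range(n_iter):
        field = np.exp(1j * phase)                     # phase-only hologram
        img = np.fft.fft2(field)                       # propagate to image plane
        img = target_amp * np.exp(1j * np.angle(img))  # impose target amplitude
        field = np.fft.ifft2(img)                      # propagate back
        phase = np.angle(field)                        # impose phase-only constraint
    return phase
```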

  5. Intra-patient comparison of reduced-dose model-based iterative reconstruction with standard-dose adaptive statistical iterative reconstruction in the CT diagnosis and follow-up of urolithiasis

    Energy Technology Data Exchange (ETDEWEB)

    Tenant, Sean; Pang, Chun Lap; Dissanayake, Prageeth [Peninsula Radiology Academy, Plymouth (United Kingdom); Vardhanabhuti, Varut [Plymouth University Peninsula Schools of Medicine and Dentistry, Plymouth (United Kingdom); University of Hong Kong, Department of Diagnostic Radiology, Li Ka Shing Faculty of Medicine, Pokfulam (China); Stuckey, Colin; Gutteridge, Catherine [Plymouth Hospitals NHS Trust, Plymouth (United Kingdom); Hyde, Christopher [University of Exeter Medical School, St Luke' s Campus, Exeter (United Kingdom); Roobottom, Carl [Plymouth University Peninsula Schools of Medicine and Dentistry, Plymouth (United Kingdom); Plymouth Hospitals NHS Trust, Plymouth (United Kingdom)

    2017-10-15

    To evaluate the accuracy of reduced-dose CT scans reconstructed using a new generation of model-based iterative reconstruction (MBIR) in the imaging of urinary tract stone disease, compared with a standard-dose CT using 30% adaptive statistical iterative reconstruction. This single-institution prospective study recruited 125 patients presenting either with acute renal colic or for follow-up of known urinary tract stones. They underwent two immediately consecutive scans, one at standard dose settings and one at the lowest dose (highest noise index) the scanner would allow. The reduced-dose scans were reconstructed using both ASIR 30% and MBIR algorithms and reviewed independently by two radiologists. Objective and subjective image quality measures as well as diagnostic data were obtained. The reduced-dose MBIR scan was 100% concordant with the reference standard for the assessment of ureteric stones. It was extremely accurate at identifying calculi of 3 mm and above. The algorithm allowed a dose reduction of 58% without any loss of scan quality. A reduced-dose CT scan using MBIR is accurate in acute imaging for renal colic symptoms and for urolithiasis follow-up and allows a significant reduction in dose. (orig.)

  6. Deblending of simultaneous-source data using iterative seislet frame thresholding based on a robust slope estimation

    Science.gov (United States)

    Zhou, Yatong; Han, Chunying; Chi, Yue

    2018-06-01

    In a simultaneous-source survey, no limitation is imposed on the shot scheduling of nearby sources, so a large gain in acquisition efficiency can be obtained, but at the cost of recorded seismic data contaminated by strong blending interference. In this paper, we propose a multi-dip seislet frame based sparse inversion algorithm to iteratively separate simultaneous sources. We overcome two inherent drawbacks of the traditional seislet transform. For the multi-dip problem, we propose to apply a multi-dip seislet frame thresholding strategy instead of the traditional seislet transform for deblending simultaneous-source data that contains multiple dips, e.g., multiple reflections. The multi-dip seislet frame strategy solves the conflicting-dip problem that degrades the performance of the traditional seislet transform. For the noise issue, we propose a robust dip estimation algorithm based on velocity-slope transformation. Instead of calculating the local slope directly using the plane-wave destruction (PWD) based method, we first apply NMO-based velocity analysis and obtain NMO velocities for multi-dip components that correspond to multiples of different orders; a fairly accurate slope estimation can then be obtained using the velocity-slope conversion equation. An iterative deblending framework is given and validated through a comprehensive analysis of both numerical synthetic and field data examples.

  7. Evaluation of iterative algorithms for tomography image reconstruction: A study using a third generation industrial tomography system

    Energy Technology Data Exchange (ETDEWEB)

    Velo, Alexandre F.; Carvalho, Diego V.; Alvarez, Alexandre G.; Hamada, Margarida M.; Mesquita, Carlos H., E-mail: afvelo@usp.br [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil)

    2017-07-01

    The greatest impact of tomography technology currently occurs in medicine. This success is due to the fact that the human body presents standardized dimensions with well-established composition. These conditions are not found in industrial objects. In industry, there is much interest in using tomography in order to know the interior of (1) manufactured industrial objects and (2) machines and their means of production. In these cases, the purpose of tomography is (a) to control the quality of the final product and (b) to optimize production, contributing to the pilot phase of projects and analyzing the quality of the means of production. This scanning system is a non-destructive, efficient and fast method for providing sectional images of industrial objects, and is able to show the dynamic processes and the dispersion of material structures within these objects. In this context, it is important that the reconstructed image presents high spatial resolution with satisfactory temporal resolution, and the reconstruction algorithm has to meet these requirements. This work consists of the analysis of three different iterative algorithms: the Maximum Likelihood Expectation Maximization (MLEM) method, the Maximum Likelihood Transmission (MLTR) method and the Simultaneous Iterative Reconstruction Technique (SIRT). The analysis consists of measuring the contrast-to-noise ratio (CNR), the root mean square error (RMSE) and the Modulation Transfer Function (MTF), to determine which algorithm best fits the conditions needed to optimize the system. The algorithms and the image quality analysis were implemented in Matlab® 2013b. (author)
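
    Of the three methods, MLEM has the most compact textbook form; a minimal dense-matrix sketch is shown below. The system matrix A and count data y are generic placeholders, and MLTR/SIRT follow analogous additive or transmission-specific updates not shown here.

```python
import numpy as np

def mlem(A, y, n_iter=50):
    """Textbook MLEM: multiplicative update x <- x * A^T(y / Ax) / (A^T 1)."""
    y = np.asarray(y, dtype=float)
    x = np.ones(A.shape[1])
    sens = A.T @ np.ones(A.shape[0])                  # sensitivity image A^T 1
    for _ in range(n_iter):
        proj = A @ x                                  # forward projection
        ratio = np.divide(y, proj, out=np.zeros_like(y), where=proj > 0)
        x *= (A.T @ ratio) / np.maximum(sens, 1e-12)  # multiplicative correction
    return x
```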

  8. Evaluation of iterative algorithms for tomography image reconstruction: A study using a third generation industrial tomography system

    International Nuclear Information System (INIS)

    Velo, Alexandre F.; Carvalho, Diego V.; Alvarez, Alexandre G.; Hamada, Margarida M.; Mesquita, Carlos H.

    2017-01-01

    The greatest impact of tomography technology currently occurs in medicine. This success is due to the fact that the human body presents standardized dimensions with well-established composition. These conditions are not found in industrial objects. In industry, there is much interest in using tomography in order to know the interior of (1) manufactured industrial objects and (2) machines and their means of production. In these cases, the purpose of tomography is (a) to control the quality of the final product and (b) to optimize production, contributing to the pilot phase of projects and analyzing the quality of the means of production. This scanning system is a non-destructive, efficient and fast method for providing sectional images of industrial objects, and is able to show the dynamic processes and the dispersion of material structures within these objects. In this context, it is important that the reconstructed image presents high spatial resolution with satisfactory temporal resolution, and the reconstruction algorithm has to meet these requirements. This work consists of the analysis of three different iterative algorithms: the Maximum Likelihood Expectation Maximization (MLEM) method, the Maximum Likelihood Transmission (MLTR) method and the Simultaneous Iterative Reconstruction Technique (SIRT). The analysis consists of measuring the contrast-to-noise ratio (CNR), the root mean square error (RMSE) and the Modulation Transfer Function (MTF), to determine which algorithm best fits the conditions needed to optimize the system. The algorithms and the image quality analysis were implemented in Matlab® 2013b. (author)

  9. A New DG Multiobjective Optimization Method Based on an Improved Evolutionary Algorithm

    Directory of Open Access Journals (Sweden)

    Wanxing Sheng

    2013-01-01

    Full Text Available A distributed generation (DG) multiobjective optimization method based on an improved Pareto evolutionary algorithm is investigated in this paper. The improved Pareto evolutionary algorithm introduces a penalty factor into the objective function constraints, uses an adaptive crossover and a mutation operator in the evolutionary process, and incorporates a simulated annealing iterative process. The proposed algorithm is used to optimize DG injection models so as to maximize DG utilization while minimizing system loss and environmental pollution. A revised IEEE 33-bus system with multiple DG units was used to test the multiobjective optimization algorithm in a distribution power system. The proposed algorithm was implemented and compared with the strength Pareto evolutionary algorithm 2 (SPEA2), a particle swarm optimization (PSO) algorithm, and the nondominated sorting genetic algorithm II (NSGA-II). The comparison of the results demonstrates the validity and practicality of utilizing DG units in terms of economic dispatch and optimal operation in a distribution power system.

  10. A reconstruction algorithm for electrical impedance tomography based on sparsity regularization

    KAUST Repository

    Jin, Bangti

    2011-08-24

    This paper develops a novel sparse reconstruction algorithm for the electrical impedance tomography problem of determining a conductivity parameter from boundary measurements. The sparsity of the 'inhomogeneity' with respect to a certain basis is assumed a priori. The proposed approach is motivated by a Tikhonov functional incorporating a sparsity-promoting ℓ1-penalty term, and it allows us to obtain quantitative results when the assumption is valid. A novel iterative algorithm of soft shrinkage type is proposed. Numerical results for several two-dimensional problems with both single and multiple convex and nonconvex inclusions are presented to illustrate the features of the proposed algorithm and are compared with a conventional approach based on smoothness regularization. © 2011 John Wiley & Sons, Ltd.
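
    In the linear case, a "soft shrinkage type" iteration reduces to the classic ISTA scheme sketched below; the EIT forward map is nonlinear, so in the paper the analogous step acts on a linearization around the current iterate. The step-size rule and iteration count are standard choices, not taken from the paper.

```python
import numpy as np

def soft_shrink(v, t):
    """Soft-thresholding, the proximal map of the l1 penalty."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, y, lam, n_iter=200):
    """Iterative soft shrinkage for min ||Ax - y||^2 + lam * ||x||_1:
    a gradient step on the data term followed by soft-thresholding."""
    x = np.zeros(A.shape[1])
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L with L = sigma_max(A)^2
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)
        x = soft_shrink(x - step * grad, step * lam)
    return x
```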

  11. Iterative optimization of quantum error correcting codes

    International Nuclear Information System (INIS)

    Reimpell, M.; Werner, R.F.

    2005-01-01

    We introduce a convergent iterative algorithm for finding the optimal coding and decoding operations for an arbitrary noisy quantum channel. This algorithm does not require any error syndrome to be corrected completely, and hence also finds codes outside the usual Knill-Laflamme definition of error correcting codes. The iteration is shown to improve the figure of merit 'channel fidelity' in every step

  12. Study on Huizhou architecture of point cloud registration based on optimized ICP algorithm

    Science.gov (United States)

    Zhang, Runmei; Wu, Yulu; Zhang, Guangbin; Zhou, Wei; Tao, Yuqian

    2018-03-01

    Current point cloud registration software has high hardware requirements and a heavy workload involving multiple interactive definitions, and the source code of software with better processing performance is not open. In view of this, a two-step registration method combining a normal-vector distribution feature with a coarse-feature-based iterative closest point (ICP) algorithm is proposed in this paper. The method combines the fast point feature histogram (FPFH) algorithm with a model of the point cloud adjacency region and the normal-vector distribution, setting up a local coordinate system for each key point and obtaining the transformation matrix to complete coarse registration; the coarse registration results of the two stations are then accurately registered using the ICP algorithm. Experimental results show that, compared with the traditional ICP algorithm, the method used in this paper has obvious time and precision advantages for large point clouds.
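
    The fine-registration stage is the standard point-to-point ICP loop sketched below (brute-force nearest neighbours plus an SVD-based Procrustes solve); the FPFH/normal-vector coarse alignment from the paper is assumed to have been applied to `src` beforehand, and a k-d tree would replace the brute-force matching in practice.

```python
import numpy as np

def icp(src, dst, n_iter=50, tol=1e-8):
    """Point-to-point ICP: match each source point to its nearest target
    point, solve for the rigid transform by SVD, apply, and repeat."""
    src = src.copy()
    R_total, t_total = np.eye(3), np.zeros(3)
    prev_err = np.inf
    for _ in range(n_iter):
        d = np.linalg.norm(src[:, None, :] - dst[None, :, :], axis=-1)
        nn = d.argmin(axis=1)
        matched = dst[nn]
        mu_s, mu_d = src.mean(axis=0), matched.mean(axis=0)
        H = (src - mu_s).T @ (matched - mu_d)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T                      # proper rotation (det = +1)
        t = mu_d - R @ mu_s
        src = src @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
        err = d[np.arange(len(src)), nn].mean()
        if abs(prev_err - err) < tol:
            break
        prev_err = err
    return R_total, t_total
```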

  13. FAST ITERATIVE KILOVOLTAGE CONE BEAM TOMOGRAPHY

    Directory of Open Access Journals (Sweden)

    S. A. Zolotarev

    2015-01-01

    Full Text Available Creating fast parallel iterative tomographic algorithms based on graphics accelerators, which simultaneously minimize the residual and the total variation of the reconstructed image, is an important and urgent task of great scientific and practical significance. Such algorithms can be used, for example, in radiation therapy, where a computed tomography scan of the patient is always performed beforehand in order to better identify the regions that will subsequently be subjected to radiation exposure.

  14. Novel aspects of plasma control in ITER

    Energy Technology Data Exchange (ETDEWEB)

    Humphreys, D.; Jackson, G.; Walker, M.; Welander, A. [General Atomics P.O. Box 85608, San Diego, California 92186-5608 (United States); Ambrosino, G.; Pironti, A. [CREATE/University of Naples Federico II, Napoli (Italy); Vries, P. de; Kim, S. H.; Snipes, J.; Winter, A.; Zabeo, L. [ITER Organization, St. Paul Lez durance Cedex (France); Felici, F. [Eindhoven University of Technology, Eindhoven (Netherlands); Kallenbach, A.; Raupp, G.; Treutterer, W. [Max-Planck Institut für Plasmaphysik, Garching (Germany); Kolemen, E. [Princeton Plasma Physics Laboratory, Princeton, New Jersey 08543-0451 (United States); Lister, J.; Sauter, O. [Centre de Recherches en Physique des Plasmas, Ecole Polytechnique Federale de Lausanne, Lausanne (Switzerland); Moreau, D. [CEA, IRFM, 13108 St. Paul-lez Durance (France); Schuster, E. [Lehigh University, Bethlehem, Pennsylvania (United States)

    2015-02-15

    ITER plasma control design solutions and performance requirements are strongly driven by its nuclear mission, aggressive commissioning constraints, and limited number of operational discharges. In addition, high plasma energy content, heat fluxes, neutron fluxes, and very long pulse operation place novel demands on control performance in many areas ranging from plasma boundary and divertor regulation to plasma kinetics and stability control. Both commissioning and experimental operations schedules provide limited time for tuning of control algorithms relative to operating devices. Although many aspects of the control solutions required by ITER have been well-demonstrated in present devices and even designed satisfactorily for ITER application, many elements unique to ITER including various crucial integration issues are presently under development. We describe selected novel aspects of plasma control in ITER, identifying unique parts of the control problem and highlighting some key areas of research remaining. Novel control areas described include control physics understanding (e.g., current profile regulation, tearing mode (TM) suppression), control mathematics (e.g., algorithmic and simulation approaches to high confidence robust performance), and integration solutions (e.g., methods for management of highly subscribed control resources). We identify unique aspects of the ITER TM suppression scheme, which will pulse gyrotrons to drive current within a magnetic island, and turn the drive off following suppression in order to minimize use of auxiliary power and maximize fusion gain. The potential role of active current profile control and approaches to design in ITER are discussed. Issues and approaches to fault handling algorithms are described, along with novel aspects of actuator sharing in ITER.

  15. Hybrid direct and iterative solvers for h refined grids with singularities

    KAUST Repository

    Paszyński, Maciej R.

    2015-04-27

    This paper describes a hybrid direct and iterative solver for two and three dimensional h adaptive grids with point singularities. The point singularities are eliminated by using a sequential linear computational cost solver O(N) on CPU [1]. The remaining Schur complements are submitted to an incomplete LU preconditioned conjugate gradient (ILUPCG) iterative solver. The approach is compared to the standard algorithm performing static condensation over the entire mesh and executing the ILUPCG algorithm on top of it. The hybrid solver is applied to two and three dimensional grids automatically h refined towards point or edge singularities. The automatic refinement is based on the relative error estimations between the coarse and fine mesh solutions [2], and the optimal refinements are selected using projection-based interpolation. The computational mesh is partitioned into sub-meshes with local point and edge singularities separated, using a greedy algorithm.

  16. Swarm size and iteration number effects to the performance of PSO algorithm in RFID tag coverage optimization

    Science.gov (United States)

    Prathabrao, M.; Nawawi, Azli; Sidek, Noor Azizah

    2017-04-01

    Radio Frequency Identification (RFID) systems have multiple benefits which can improve the operational efficiency of an organization. The advantages are the ability to record data systematically and quickly, reduce human and system errors, and update the database automatically and efficiently. Often, more than one reader is needed for installation purposes in an RFID system, which makes the system more complex. As a result, an RFID network planning process is needed to ensure the RFID system works properly. The planning process is also an optimization and power-adjustment process, because the coordinates of each RFID reader must be determined. Therefore, nature-inspired algorithms are often used. In this study, the PSO algorithm is used because it has a small number of parameters, fast simulation times, and is easy to use and very practical. However, the PSO parameters must be adjusted correctly for robust and efficient use of PSO; failure to do so may degrade performance and worsen the optimization results. To ensure the efficiency of PSO, this study examines the effects of two parameters on the performance of the PSO algorithm in RFID tag coverage optimization: the swarm size and the iteration number. In addition, the study recommends the most suitable settings for both parameters, namely 200 iterations and a swarm size of 800. These results will enable PSO to operate more efficiently in optimizing RFID network planning.
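
    A plain global-best PSO with the update rules whose parameters the study tunes looks like the sketch below; the defaults mirror the recommended settings (swarm size 800, 200 iterations), while the inertia and acceleration constants w, c1, c2 are common textbook values, not taken from the paper.

```python
import numpy as np

def pso(f, bounds, n_particles=800, n_iter=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Global-best PSO minimizing f over a box: velocities blend inertia,
    attraction to each particle's best, and attraction to the swarm best."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds[0], float), np.asarray(bounds[1], float)
    x = rng.uniform(lo, hi, (n_particles, lo.size))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_val = np.apply_along_axis(f, 1, x)
    g = pbest[pbest_val.argmin()].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        vals = np.apply_along_axis(f, 1, x)
        better = vals < pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        g = pbest[pbest_val.argmin()].copy()
    return g, pbest_val.min()
```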

  17. Leapfrog variants of iterative methods for linear algebra equations

    Science.gov (United States)

    Saylor, Paul E.

    1988-01-01

    Two iterative methods are considered, Richardson's method and a general second order method. For both methods, a variant of the method is derived for which only even numbered iterates are computed. The variant is called a leapfrog method. Comparisons between the conventional form of the methods and the leapfrog form are made under the assumption that the number of unknowns is large. In the case of Richardson's method, it is possible to express the final iterate in terms of only the initial approximation, a variant of the iteration called the grand-leap method. In the case of the grand-leap variant, a set of parameters is required. An algorithm is presented to compute these parameters that is related to algorithms to compute the weights and abscissas for Gaussian quadrature. General algorithms to implement the leapfrog and grand-leap methods are presented. Algorithms for the important special case of the Chebyshev method are also given.
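
    For the stationary Richardson case, the leapfrog idea of forming only even-numbered iterates admits a two-line derivation: composing two steps x <- x + omega*(b - A x) gives x_{2k+2} = x_{2k} + 2*omega*r - omega^2 * A r with r = b - A x_{2k}. A minimal sketch of that stationary case follows; the paper treats general parameter sequences and second-order methods as well.

```python
import numpy as np

def richardson_leapfrog(A, b, omega, n_steps=100, tol=1e-10):
    """Stationary Richardson iteration advanced two iterates at a time,
    so only even-numbered iterates are ever formed."""
    x = np.zeros_like(b, dtype=float)
    for _ in range(n_steps):
        r = b - A @ x                                 # residual at iterate 2k
        # two Richardson steps at once: x_{2k+2} = x_{2k} + 2w r - w^2 A r
        x = x + 2.0 * omega * r - omega**2 * (A @ r)
        if np.linalg.norm(r) < tol:
            break
    return x
```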

  18. Iterated unscented Kalman filter for phase unwrapping of interferometric fringes.

    Science.gov (United States)

    Xie, Xianming

    2016-08-22

    A novel phase unwrapping algorithm based on an iterated unscented Kalman filter is proposed to estimate the unambiguous unwrapped phase of interferometric fringes. The method combines an iterated unscented Kalman filter with a robust phase gradient estimator based on an amended matrix pencil model, and an efficient quality-guided strategy based on heap sort. The iterated unscented Kalman filter, one of the most robust methods in the Bayesian framework for non-linear signal processing to date, is applied for the first time to perform noise suppression and phase unwrapping of interferometric fringes simultaneously, which reduces the complexity and difficulty of a pre-filtering procedure followed by a phase unwrapping procedure, and can even remove the pre-filtering procedure altogether. The robust phase gradient estimator is used to efficiently and accurately obtain the phase gradient information from interferometric fringes that is needed by the iterated unscented Kalman filtering phase unwrapping model. The efficient quality-guided strategy ensures that the proposed method quickly unwraps pixels along the path from high-quality to low-quality areas of the wrapped phase images, which greatly improves the efficiency of phase unwrapping. Results obtained from synthetic and real data show that the proposed method obtains better solutions with acceptable time consumption, with respect to some of the most used algorithms.

  19. Rare itemsets mining algorithm based on RP-Tree and spark framework

    Science.gov (United States)

    Liu, Sainan; Pan, Haoan

    2018-05-01

    To address the problem of rare itemset mining in big data, this paper proposes a rare itemset mining algorithm based on RP-Tree and the Spark framework. First, the data are arranged vertically according to the transaction identifier to avoid repeated scans of the entire dataset, and the vertical datasets are divided into frequent vertical datasets and rare vertical datasets. Then, the RP-Tree algorithm is adopted to construct the frequent pattern tree that contains rare items and to generate rare 1-itemsets. After that, the support of itemsets is calculated by scanning the two vertical datasets; finally, an iterative process is used to generate rare itemsets. Experiments show that the algorithm can effectively mine rare itemsets and has great superiority in execution time.

  20. Performance study of LMS based adaptive algorithms for unknown system identification

    Energy Technology Data Exchange (ETDEWEB)

    Javed, Shazia; Ahmad, Noor Atinah [School of Mathematical Sciences, Universiti Sains Malaysia, 11800 Penang (Malaysia)

    2014-07-10

    Adaptive filtering techniques have gained much popularity in the modeling of the unknown system identification problem. These techniques can be classified as either iterative or direct; iterative techniques include the stochastic descent method and its improved versions in affine space. In this paper we present a comparative study of the least mean square (LMS) algorithm and some improved versions of LMS, more precisely the normalized LMS (NLMS), LMS-Newton, transform domain LMS (TDLMS) and affine projection algorithm (APA). The performance evaluation of these algorithms is carried out using an adaptive system identification (ASI) model with random input signals, in which the unknown (measured) signal is assumed to be contaminated by output noise. Simulation results are recorded to compare the performance in terms of convergence speed, robustness, misalignment, and sensitivity to the spectral properties of the input signals. The main objective of this comparative study is to observe the effects of the fast convergence rate of the improved versions of LMS on their robustness and misalignment.
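
    As one concrete instance of the compared family, a minimal NLMS system-identification loop is sketched below; the filter order, step size mu, and regularizer eps are illustrative choices, and x, d are assumed to be the input signal and the noisy plant output as float arrays.

```python
import numpy as np

def nlms(x, d, order=8, mu=0.5, eps=1e-8):
    """Textbook NLMS: w <- w + mu * e * u / (eps + ||u||^2), where u is
    the regressor of the most recent input samples."""
    x, d = np.asarray(x, float), np.asarray(d, float)
    w = np.zeros(order)
    e = np.zeros_like(d)
    for n in range(order, len(x)):
        u = x[n - order:n][::-1]          # regressor, most recent sample first
        e[n] = d[n] - w @ u               # a priori estimation error
        w += (mu / (eps + u @ u)) * e[n] * u
    return w, e
```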

  1. Performance study of LMS based adaptive algorithms for unknown system identification

    International Nuclear Information System (INIS)

    Javed, Shazia; Ahmad, Noor Atinah

    2014-01-01

    Adaptive filtering techniques have gained much popularity in the modeling of the unknown system identification problem. These techniques can be classified as either iterative or direct; iterative techniques include the stochastic descent method and its improved versions in affine space. In this paper we present a comparative study of the least mean square (LMS) algorithm and some improved versions of LMS, more precisely the normalized LMS (NLMS), LMS-Newton, transform domain LMS (TDLMS) and affine projection algorithm (APA). The performance evaluation of these algorithms is carried out using an adaptive system identification (ASI) model with random input signals, in which the unknown (measured) signal is assumed to be contaminated by output noise. Simulation results are recorded to compare the performance in terms of convergence speed, robustness, misalignment, and sensitivity to the spectral properties of the input signals. The main objective of this comparative study is to observe the effects of the fast convergence rate of the improved versions of LMS on their robustness and misalignment.

  2. Model-based iterative learning control of Parkinsonian state in thalamic relay neuron

    Science.gov (United States)

    Liu, Chen; Wang, Jiang; Li, Huiyan; Xue, Zhiqin; Deng, Bin; Wei, Xile

    2014-09-01

    Although the beneficial effects of chronic deep brain stimulation on Parkinson's disease motor symptoms are now largely confirmed, the underlying mechanisms remain unclear and under debate, so the selection of stimulation parameters is full of challenges. Additionally, due to the complexity of the neural system, together with omnipresent noise, an accurate model of the thalamic relay neuron is unknown. Thus, iterative learning control of the thalamic relay neuron's Parkinsonian state based on various variables is presented. Combining iterative learning control with a typical proportional-integral control algorithm, a novel and efficient control strategy is proposed which does not require any particular knowledge of the detailed physiological characteristics of the cortico-basal ganglia-thalamocortical loop and can automatically adjust the stimulation parameters. Simulation results demonstrate the feasibility of the proposed control strategy to restore the fidelity of thalamic relay in the Parkinsonian condition. Furthermore, by changing an important parameter, the maximum ionic conductance density of the low-threshold calcium current, the independence of the proposed method from an accurate model is further verified.
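
    A minimal P-type iterative learning control update, the trial-to-trial mechanism at the heart of such schemes, is sketched below on a toy linear plant; the paper's combination with a PI controller and its neuronal plant model are not reproduced, and the gain value is illustrative.

```python
import numpy as np

def run_p_type_ilc(G, y_ref, n_trials=30, gain=0.5):
    """Repeat a trial, then feed the tracking error back into the next
    trial's input: u_{k+1} = u_k + gain * e_k (P-type ILC). G is a toy
    linear map from the input waveform to the plant output."""
    u = np.zeros_like(y_ref)
    e = y_ref.copy()
    for _ in range(n_trials):
        y = G @ u                          # one trial of the plant
        e = y_ref - y                      # tracking error over the trial
        u = u + gain * e                   # learning update for the next trial
    return u, e
```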

  3. Iterative solution of a nonlinear system arising in phase change problems

    International Nuclear Information System (INIS)

    Williams, M.A.

    1987-01-01

    We consider several iterative methods for solving the nonlinear system arising from an enthalpy formulation of a phase change problem. We present the formulation of the problem. Implicit discretization of the governing equations results in a mildly nonlinear system at each time step. We discuss solving this system using Jacobi, Gauss-Seidel, and SOR iterations and a new modified preconditioned conjugate gradient (MPCG) algorithm. The new MPCG algorithm and its properties are discussed in detail. Numerical results are presented comparing the performance of the SOR algorithm and the MPCG algorithm with 1-step SSOR preconditioning. The MPCG algorithm exhibits a superlinear rate of convergence. The SOR algorithm exhibits a linear rate of convergence. Thus, the MPCG algorithm requires fewer iterations to converge than the SOR algorithm. However in most cases, the SOR algorithm requires less total computation time than the MPCG algorithm. Hence, the SOR algorithm appears to be more appropriate for the class of problems considered. 27 refs., 11 figs

  4. Some Matrix Iterations for Computing Generalized Inverses and Balancing Chemical Equations

    Directory of Open Access Journals (Sweden)

    Farahnaz Soleimani

    2015-11-01

    Full Text Available An application of iterative methods for computing the Moore–Penrose inverse to balancing chemical equations is considered. To illustrate the proposed algorithms, an improved high-order hyper-power matrix iterative method for computing generalized inverses is introduced and applied. The improvements of the hyper-power iterative scheme are based on its proper factorization, as well as on the possibility of accelerating the iterations in the initial phase of convergence. The effectiveness of the approach is confirmed from the theoretical point of view, and numerical comparisons are furnished for balancing chemical equations as well as for randomly generated matrices.
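
    The second-order member of the hyper-power family is the Newton–Schulz iteration sketched below; the initialization uses the standard bound sigma_max(A)^2 <= ||A||_1 * ||A||_inf to guarantee convergence, while the paper's higher-order factorized variants and acceleration are not reproduced. Once the pseudoinverse is available, balancing reduces to reading integer vectors out of the null-space projector I - A^+ A of the formula matrix.

```python
import numpy as np

def newton_schulz_pinv(A, n_iter=60):
    """Second-order hyper-power iteration X <- X (2I - A X), which
    converges to the Moore-Penrose inverse A^+ from the safely scaled
    start X_0 = A^T / (||A||_1 ||A||_inf)."""
    A = np.asarray(A, dtype=float)
    X = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))
    I = np.eye(A.shape[0])
    for _ in range(n_iter):
        X = X @ (2.0 * I - A @ X)
    return X
```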

  5. Low-memory iterative density fitting.

    Science.gov (United States)

    Grajciar, Lukáš

    2015-07-30

    A new low-memory modification of the density fitting approximation based on a combination of a continuous fast multipole method (CFMM) and a preconditioned conjugate gradient solver is presented. Iterative conjugate gradient solver uses preconditioners formed from blocks of the Coulomb metric matrix that decrease the number of iterations needed for convergence by up to one order of magnitude. The matrix-vector products needed within the iterative algorithm are calculated using CFMM, which evaluates them with the linear scaling memory requirements only. Compared with the standard density fitting implementation, up to 15-fold reduction of the memory requirements is achieved for the most efficient preconditioner at a cost of only 25% increase in computational time. The potential of the method is demonstrated by performing density functional theory calculations for zeolite fragment with 2592 atoms and 121,248 auxiliary basis functions on a single 12-core CPU workstation. © 2015 Wiley Periodicals, Inc.

  6. Nonlinear Microwave Imaging for Breast-Cancer Screening Using Gauss–Newton's Method and the CGLS Inversion Algorithm

    DEFF Research Database (Denmark)

    Rubæk, Tonny; Meaney, P. M.; Meincke, Peter

    2007-01-01

    An algorithm for solving the Gauss–Newton update problem is presented which is based on the conjugate gradient least squares (CGLS) algorithm. The iterative CGLS algorithm is capable of solving the update problem by operating on just the Jacobian, and the regularizing effects of the algorithm can easily be controlled by adjusting the number of iterations. The new

  7. Influence of model based iterative reconstruction algorithm on image quality of multiplanar reformations in reduced dose chest CT

    International Nuclear Information System (INIS)

    Barras, Heloise; Dunet, Vincent; Hachulla, Anne-Lise; Grimm, Jochen; Beigelman-Aubry, Catherine

    2016-01-01

    Model-based iterative reconstruction (MBIR) reduces image noise and improves image quality (IQ), but its influence on post-processing tools, including maximum intensity projection (MIP) and minimum intensity projection (mIP), remains unknown. The aim was to evaluate the influence of MBIR on the IQ of native, mIP and MIP axial and coronal reformats of reduced-dose chest CT (RD-CT) acquisitions. Raw data of 50 patients, who underwent a standard-dose CT (SD-CT) and a follow-up RD-CT with a CT dose index (CTDI) of 2–3 mGy, were reconstructed by MBIR and FBP. Native slices, 4-mm-thick MIP, and 3-mm-thick mIP axial and coronal reformats were generated. The relative IQ, subjective IQ, image noise, and number of artifacts were determined in order to compare the different reconstructions of RD-CT with the reference SD-CT. The lowest noise was observed with MBIR. RD-CT reconstructed by MBIR exhibited the best relative and subjective IQ on coronal views regardless of the post-processing tool. MBIR generated the lowest rate of artifacts on coronal mIP/MIP reformats and the highest on axial reformats, mainly distortions and stair-step artifacts. The MBIR algorithm reduces image noise but generates more artifacts than FBP on axial mIP and MIP reformats of RD-CT. Conversely, it significantly improves IQ on coronal views, without increasing artifacts, regardless of the post-processing technique

  8. Variable Step Size Maximum Correntropy Criteria Based Adaptive Filtering Algorithm

    Directory of Open Access Journals (Sweden)

    S. Radhika

    2016-04-01

    Full Text Available Maximum correntropy criterion (MCC) based adaptive filters are found to be robust against impulsive interference. This paper proposes a novel MCC-based adaptive filter with variable step size in order to obtain improved performance in terms of both convergence rate and steady-state error, with robustness against impulsive interference. The optimal variable step size is obtained by minimizing the mean square deviation (MSD) error from one iteration to the next. Simulation results in the context of a highly impulsive system identification scenario show that the proposed algorithm has faster convergence and lower steady-state error than conventional MCC-based adaptive filters.

  9. Clinical assessment using an algorithm based on clustering Fuzzy c-means

    NARCIS (Netherlands)

    Guijarro-Rodriguez, A.; Cevallos-Torres, L.; Yepez-Holguin, J.; Botto-Tobar, M.; Valencia-García, R.; Lagos-Ortiz, K.; Alcaraz-Mármol, G.; Del Cioppo, J.; Vera-Lucio, N.; Bucaram-Leverone, M.

    2017-01-01

    The Fuzzy c-means (FCM) algorithms define a grouping criterion through an objective function, which is iteratively minimized until an optimal fuzzy partition is obtained. In executing this algorithm, each element is related to the clusters determined in the same n-dimensional space,
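
    The standard FCM alternation the record refers to can be sketched as follows; the fuzzifier m = 2 and the convergence tolerance are conventional defaults rather than values from the paper.

```python
import numpy as np

def fuzzy_c_means(X, c, m=2.0, n_iter=100, tol=1e-5, seed=0):
    """Standard FCM: update memberships from distances to the centers,
    recompute centers as membership-weighted means, and repeat until
    the fuzzy partition stabilizes."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)                 # fuzzy memberships
    for _ in range(n_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=-1)
        d = np.maximum(d, 1e-12)                      # avoid division by zero
        U_new = 1.0 / (d ** (2.0 / (m - 1.0)))        # inverse-distance weights
        U_new /= U_new.sum(axis=1, keepdims=True)
        converged = np.abs(U_new - U).max() < tol
        U = U_new
        if converged:
            break
    return centers, U
```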

  10. Influence of radiation dose and iterative reconstruction algorithms for measurement accuracy and reproducibility of pulmonary nodule volumetry: A phantom study.

    Science.gov (United States)

    Kim, Hyungjin; Park, Chang Min; Song, Yong Sub; Lee, Sang Min; Goo, Jin Mo

    2014-05-01

    To evaluate the influence of radiation dose settings and reconstruction algorithms on the measurement accuracy and reproducibility of semi-automated pulmonary nodule volumetry. CT scans were performed on a chest phantom containing various nodules (10 and 12mm; +100, -630 and -800HU) at 120kVp with tube current-time settings of 10, 20, 50, and 100mAs. Each CT was reconstructed using filtered back projection (FBP), iDose(4) and iterative model reconstruction (IMR). Semi-automated volumetry was performed by two radiologists using commercial volumetry software for nodules in each CT dataset. Noise, contrast-to-noise ratio and signal-to-noise ratio of the CT images were also obtained. The absolute percentage measurement errors and differences were then calculated for volume and mass. The influence of radiation dose and reconstruction algorithm on measurement accuracy, reproducibility and objective image quality metrics was analyzed using generalized estimating equations. Measurement accuracy and reproducibility of nodule volume and mass were not significantly associated with CT radiation dose settings or reconstruction algorithms (p>0.05). Objective image quality metrics were superior for IMR compared with FBP or iDose(4) at all radiation dose settings. Semi-automated nodule volumetry can therefore be applied to low- or ultralow-dose chest CT with a novel iterative reconstruction algorithm without losing measurement accuracy and reproducibility. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  11. A dimension decomposition approach based on iterative observer design for an elliptic Cauchy problem

    KAUST Repository

    Majeed, Muhammad Usman; Laleg-Kirati, Taous-Meriem

    2015-01-01

    A state observer inspired iterative algorithm is presented to solve a boundary estimation problem for the Laplace equation, using one of the space variables as a time-like variable. Three dimensional domain with two congruent parallel surfaces

  12. Sparse calibration of subsurface flow models using nonlinear orthogonal matching pursuit and an iterative stochastic ensemble method

    KAUST Repository

    Elsheikh, Ahmed H.

    2013-06-01

    We introduce a nonlinear orthogonal matching pursuit (NOMP) for sparse calibration of subsurface flow models. Sparse calibration is a challenging problem as the unknowns are both the non-zero components of the solution and their associated weights. NOMP is a greedy algorithm that discovers at each iteration the basis function most correlated with the residual from a large pool of basis functions. The discovered basis (aka support) is augmented across the nonlinear iterations. Once a set of basis functions is selected, the solution is obtained by applying Tikhonov regularization. The proposed algorithm relies on a stochastically approximated gradient using an iterative stochastic ensemble method (ISEM). In the current study, the search space is parameterized using an overcomplete dictionary of basis functions built using the K-SVD algorithm. The proposed algorithm is the first ensemble-based algorithm that tackles the sparse nonlinear parameter estimation problem. © 2013 Elsevier Ltd.
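
    The linear OMP core that NOMP generalizes is short enough to quote; a minimal sketch follows, assuming a dictionary D with unit-norm columns and a target sparsity k, with the nonlinear, ensemble-gradient machinery of the paper omitted.

```python
import numpy as np

def omp(D, y, k):
    """Greedy OMP: pick the dictionary atom most correlated with the
    residual, refit all selected atoms by least squares, and repeat."""
    residual = y.astype(float).copy()
    support = []
    for _ in range(k):
        corr = np.abs(D.T @ residual)
        corr[support] = -np.inf                  # never re-pick an atom
        support.append(int(np.argmax(corr)))
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x
```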

  13. Segment-based dose optimization using a genetic algorithm

    International Nuclear Information System (INIS)

    Cotrutz, Cristian; Xing Lei

    2003-01-01

    Intensity modulated radiation therapy (IMRT) inverse planning is conventionally done in two steps. Firstly, the intensity maps of the treatment beams are optimized using a dose optimization algorithm. Each of them is then decomposed into a number of segments using a leaf-sequencing algorithm for delivery. An alternative approach is to pre-assign a fixed number of field apertures and optimize directly the shapes and weights of the apertures. While the latter approach has the advantage of eliminating the leaf-sequencing step, the optimization of aperture shapes is less straightforward than that of beamlet-based optimization because of the complex dependence of the dose on the field shapes, and their weights. In this work we report a genetic algorithm for segment-based optimization. Different from a gradient iterative approach or simulated annealing, the algorithm finds the optimum solution from a population of candidate plans. In this technique, each solution is encoded using three chromosomes: one for the position of the left-bank leaves of each segment, the second for the position of the right-bank and the third for the weights of the segments defined by the first two chromosomes. The convergence towards the optimum is realized by crossover and mutation operators that ensure proper exchange of information between the three chromosomes of all the solutions in the population. The algorithm is applied to a phantom and a prostate case and the results are compared with those obtained using beamlet-based optimization. The main conclusion drawn from this study is that the genetic optimization of segment shapes and weights can produce highly conformal dose distribution. In addition, our study also confirms previous findings that fewer segments are generally needed to generate plans that are comparable with the plans obtained using beamlet-based optimization. Thus the technique may have useful applications in facilitating IMRT treatment planning

  14. A novel iterative scheme and its application to differential equations.

    Science.gov (United States)

    Khan, Yasir; Naeem, F; Šmarda, Zdeněk

    2014-01-01

    The purpose of this paper is to employ an alternative approach to reconstruct the standard variational iteration algorithm II proposed by He, including the Lagrange multiplier, and to give a simpler formulation of the Adomian decomposition and modified Adomian decomposition methods in terms of the newly proposed variational iteration method-II (VIM). Through careful investigation of the earlier variational iteration algorithm and the Adomian decomposition method, we find unnecessary calculations of the Lagrange multiplier and repeated calculations in each iteration, respectively. Several examples are given to verify the reliability and efficiency of the method.

  15. Iterative solution of fluid flow in finned tubes

    International Nuclear Information System (INIS)

    Syed, S.K.; Tuphome, E.G.; Wood, S.A.

    2004-01-01

    A difference-based numerical algorithm is developed to efficiently solve a class of elliptic boundary value problems up to any desired order of accuracy. Through multi-level discretization the algorithm uses the multigrid concept of nested iterations to accelerate the convergence rate at higher discretization levels and exploits the advantages of extrapolation methods to achieve higher order accuracy with less computational work. The algorithm employs the SOR method to solve the discrete problem at each discretization level by using an estimated optimum value of the relaxation parameter. The advantages of the algorithm are shown through comparison with the simple discrete method for simulations of fluid flows in finned circular ducts. (author)

  16. An ensemble based nonlinear orthogonal matching pursuit algorithm for sparse history matching of reservoir models

    KAUST Repository

    Elsheikh, Ahmed H.

    2013-01-01

    A nonlinear orthogonal matching pursuit (NOMP) for sparse calibration of reservoir models is presented. Sparse calibration is a challenging problem as the unknowns are both the non-zero components of the solution and their associated weights. NOMP is a greedy algorithm that discovers at each iteration the most correlated components of the basis functions with the residual. The discovered basis (aka support) is augmented across the nonlinear iterations. Once the basis functions are selected from the dictionary, the solution is obtained by applying Tikhonov regularization. The proposed algorithm relies on approximate gradient estimation using an iterative stochastic ensemble method (ISEM). ISEM utilizes an ensemble of directional derivatives to efficiently approximate gradients. In the current study, the search space is parameterized using an overcomplete dictionary of basis functions built using the K-SVD algorithm.

  17. DIII-D Integrated plasma control solutions for ITER and next-generation tokamaks

    International Nuclear Information System (INIS)

    Humphreys, D.A.; Ferron, J.R.; Hyatt, A.W.; La Haye, R.J.; Leuer, J.A.; Penaflor, B.G.; Walker, M.L.; Welander, A.S.; In, Y.

    2008-01-01

    Plasma control design approaches and solutions developed at DIII-D to address its control-intensive advanced tokamak (AT) mission are applicable to many problems facing ITER and other next-generation devices. A systematic approach to algorithm design, termed 'integrated plasma control,' enables new tokamak controllers to be applied operationally with minimal machine time required for tuning. Such high confidence plasma control algorithms are designed using relatively simple ('control-level') models validated against experimental response data and are verified in simulation prior to operational use. A key element of DIII-D integrated plasma control, also required in the ITER baseline control approach, is the ability to verify both controller performance and implementation by running simulations that connect directly to the actual plasma control system (PCS) that is used to operate the tokamak itself. The DIII-D PCS comprises a powerful and flexible C-based realtime code and programming infrastructure, as well as an arbitrarily scalable hardware and realtime network architecture. This software infrastructure provides a general platform for implementation and verification of realtime algorithms with arbitrary complexity, limited only by speed of execution requirements. We present a complete suite of tools (known collectively as TokSys) supporting the integrated plasma control design process, along with recent examples of control algorithms designed for the DIII-D PCS. The use of validated physics-based models and a systematic model-based design and verification process enables these control solutions to be directly applied to ITER and other next-generation tokamaks

  18. A superlinear interior points algorithm for engineering design optimization

    Science.gov (United States)

    Herskovits, J.; Asquier, J.

    1990-01-01

We present a quasi-Newton interior points algorithm for nonlinear constrained optimization. It is based on a general approach consisting of the iterative solution, in the primal and dual spaces, of the equalities in the Karush-Kuhn-Tucker optimality conditions. This is done in such a way as to have primal and dual feasibility at each iteration, which ensures satisfaction of those optimality conditions at the limit points. This approach is very strong and efficient, since at each iteration it only requires the solution of two linear systems with the same matrix, instead of quadratic programming subproblems. It is also particularly appropriate for engineering design optimization, inasmuch as a feasible design is obtained at each iteration. The present algorithm uses a quasi-Newton approximation of the second derivative of the Lagrangian function in order to have superlinear asymptotic convergence. We discuss theoretical aspects of the algorithm and its computer implementation.

  19. MO-FG-204-04: How Iterative Reconstruction Algorithms Affect the NPS of CT Images

    International Nuclear Information System (INIS)

    Li, G; Liu, X; Dodge, C; Jensen, C; Rong, J

    2015-01-01

Purpose: To evaluate how the third generation model based iterative reconstruction (MBIR) compares with filtered back-projection (FBP), adaptive statistical iterative reconstruction (ASiR), and the second generation MBIR based on noise power spectrum (NPS) analysis over a wide range of clinically applicable dose levels. Methods: The Catphan 600 CTP515 module, surrounded by an oval, fat-equivalent ring to mimic patient size/shape, was scanned on a GE HD750 CT scanner at 1, 2, 3, 6, 12 and 19 mGy CTDIvol levels with typical patient scan parameters: 120 kVp, 0.8 s, 40 mm beam width, large SFOV, 0.984 pitch and reconstructed thickness 2.5 mm (VEO3.0: Abd/Pelvis with Texture and NR05). At each CTDIvol level, 10 repeated scans were acquired to achieve sufficient data sampling. The images were reconstructed using the Standard kernel with FBP; 20%, 40% and 70% ASiR; and two versions of MBIR (VEO2.0 and 3.0). To evaluate the effect of ROI spatial location on the resulting NPS, four ROI groups were categorized based on their distances from the center of the phantom. Results: VEO3.0 performed worse than VEO2.0 in NPS values over all dose levels. On the other hand, at low dose levels (less than 3 mGy) it clearly outperformed ASiR and FBP; thus, the relative performance of MBIR improves as the dose level decreases. However, the shapes of the NPS show substantial differences in the horizontal and vertical sampling dimensions. These differences may determine the characteristics of the noise/texture features in images and hence play an important role in image interpretation. Conclusion: The third generation MBIR did not improve over the second generation MBIR in terms of NPS analysis. The overall performance of both versions of MBIR improved relative to the other reconstruction algorithms as dose was reduced. The shapes of the NPS curves provided additional value for future characterization of the image noise/texture features

  20. MO-FG-204-04: How Iterative Reconstruction Algorithms Affect the NPS of CT Images

    Energy Technology Data Exchange (ETDEWEB)

    Li, G; Liu, X; Dodge, C; Jensen, C; Rong, J [UT MD Anderson Cancer Center, Houston, TX (United States)

    2015-06-15

Purpose: To evaluate how the third generation model based iterative reconstruction (MBIR) compares with filtered back-projection (FBP), adaptive statistical iterative reconstruction (ASiR), and the second generation MBIR based on noise power spectrum (NPS) analysis over a wide range of clinically applicable dose levels. Methods: The Catphan 600 CTP515 module, surrounded by an oval, fat-equivalent ring to mimic patient size/shape, was scanned on a GE HD750 CT scanner at 1, 2, 3, 6, 12 and 19 mGy CTDIvol levels with typical patient scan parameters: 120 kVp, 0.8 s, 40 mm beam width, large SFOV, 0.984 pitch and reconstructed thickness 2.5 mm (VEO3.0: Abd/Pelvis with Texture and NR05). At each CTDIvol level, 10 repeated scans were acquired to achieve sufficient data sampling. The images were reconstructed using the Standard kernel with FBP; 20%, 40% and 70% ASiR; and two versions of MBIR (VEO2.0 and 3.0). To evaluate the effect of ROI spatial location on the resulting NPS, four ROI groups were categorized based on their distances from the center of the phantom. Results: VEO3.0 performed worse than VEO2.0 in NPS values over all dose levels. On the other hand, at low dose levels (less than 3 mGy) it clearly outperformed ASiR and FBP; thus, the relative performance of MBIR improves as the dose level decreases. However, the shapes of the NPS show substantial differences in the horizontal and vertical sampling dimensions. These differences may determine the characteristics of the noise/texture features in images and hence play an important role in image interpretation. Conclusion: The third generation MBIR did not improve over the second generation MBIR in terms of NPS analysis. The overall performance of both versions of MBIR improved relative to the other reconstruction algorithms as dose was reduced. The shapes of the NPS curves provided additional value for future characterization of the image noise/texture features.

  1. The Research of Multiple Attenuation Based on Feedback Iteration and Independent Component Analysis

    Science.gov (United States)

    Xu, X.; Tong, S.; Wang, L.

    2017-12-01

Multiple suppression is a difficult problem in seismic data processing. The traditional technology for multiple attenuation is based on the principle of minimum output energy of the seismic signal; this criterion relies on second-order statistics and cannot attenuate multiples when the primaries and multiples are non-orthogonal. To solve this problem, we combine the feedback iteration method based on the wave equation with an improved independent component analysis (ICA) based on higher-order statistics to suppress the multiples. We first use the iterative feedback method to predict the free-surface multiples of each order. Then, to match the predicted multiples to the real multiples in amplitude and phase, we design an expanded pseudo-multichannel matching filter to obtain a more accurate matching result. Finally, we apply an improved Fast ICA algorithm, based on the maximum non-Gaussianity criterion of the output signal, to the matched multiples and obtain a better separation of the primaries and the multiples. The advantage of our method is that no prior information is needed to predict the multiples, and a better separation result can be achieved. The method has been applied to several synthetic datasets generated by the finite-difference modeling technique and to the Sigsbee2B model multiple data, in which the primaries and multiples are non-orthogonal. The experiments show that after three to four iterations we obtain accurate multiple predictions. Using our matching method and Fast ICA adaptive multiple subtraction, we can not only effectively preserve the primary energy in the seismic records but also effectively suppress the free-surface multiples, especially those related to the middle and deep sections.
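
    A minimal sketch of the one-unit FastICA iteration that underlies the separation step, assuming whitened input data (components x samples) and a tanh nonlinearity; the paper's specific improvements to Fast ICA are not reproduced.

        import numpy as np

        def fastica_one_unit(Z, max_iter=200, tol=1e-8):
            """One-unit FastICA on whitened data Z (components x samples):
            w <- E[z*g(w'z)] - E[g'(w'z)]*w, renormalized each step."""
            rng = np.random.default_rng(0)
            w = rng.standard_normal(Z.shape[0])
            w /= np.linalg.norm(w)
            for _ in range(max_iter):
                wz = w @ Z
                g = np.tanh(wz)                  # non-quadratic contrast derivative
                g_prime = 1.0 - g ** 2
                w_new = (Z * g).mean(axis=1) - g_prime.mean() * w
                w_new /= np.linalg.norm(w_new)
                if abs(abs(w_new @ w) - 1.0) < tol:  # converged up to sign
                    return w_new
                w = w_new
            return w  # unmixing direction; estimated source = w @ Z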

  2. Diagonalization of complex symmetric matrices: Generalized Householder reflections, iterative deflation and implicit shifts

    Science.gov (United States)

    Noble, J. H.; Lubasch, M.; Stevens, J.; Jentschura, U. D.

    2017-12-01

We describe a matrix diagonalization algorithm for complex symmetric (not Hermitian) matrices, A = A^T, which is based on a two-step algorithm involving generalized Householder reflections based on the indefinite inner product ⟨u, v⟩* = Σ_i u_i v_i. This inner product is linear in both arguments and avoids complex conjugation. The complex symmetric input matrix is transformed to tridiagonal form using generalized Householder transformations (first step). An iterative, generalized QL decomposition of the tridiagonal matrix employing an implicit shift converges toward diagonal form (second step). The QL algorithm employs iterative deflation techniques when a machine-precision zero is encountered "prematurely" on the super-/sub-diagonal. The algorithm allows for a reliable and computationally efficient computation of resonance and antiresonance energies which emerge from complex-scaled Hamiltonians, and for the numerical determination of the real energy eigenvalues of pseudo-Hermitian and PT-symmetric Hamilton matrices. Numerical reference values are provided.
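
    A schematic of the first step (tridiagonalization with conjugation-free reflections) might look as follows in Python; breakdown handling and the QL stage are omitted, and the tolerance choices are illustrative.

        import numpy as np

        def tridiagonalize_complex_symmetric(A):
            """Reduce a complex symmetric matrix (A == A.T, not Hermitian) to
            tridiagonal form with reflections built on the bilinear product
            <u, v> = sum_i u_i v_i (no complex conjugation)."""
            A = np.array(A, dtype=complex)
            n = A.shape[0]
            for k in range(n - 2):
                x = A[k + 1:, k].copy()
                alpha = np.sqrt(np.sum(x * x))       # bilinear "norm"; complex in general
                if abs(alpha) < 1e-14:
                    continue                         # column already reduced (or isotropic)
                v = x.copy()
                # pick the sign that avoids cancellation in the leading entry
                v[0] += alpha if abs(x[0] + alpha) >= abs(x[0] - alpha) else -alpha
                vtv = np.sum(v * v)                  # bilinear, may vanish: breakdown
                if abs(vtv) < 1e-14:
                    raise RuntimeError("breakdown: isotropic Householder vector")
                H = np.eye(n - k - 1, dtype=complex) - 2.0 * np.outer(v, v) / vtv
                A[k + 1:, k:] = H @ A[k + 1:, k:]    # apply reflection from the left
                A[k:, k + 1:] = A[k:, k + 1:] @ H    # and from the right (H = H.T)
            return A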

  3. A density based algorithm to detect cavities and holes from planar points

    Science.gov (United States)

    Zhu, Jie; Sun, Yizhong; Pang, Yueyong

    2017-12-01

Delaunay-based shape reconstruction algorithms are widely used in approximating the shape from planar points. However, these algorithms cannot ensure the optimality of varied reconstructed cavity boundaries and hole boundaries. This inadequate reconstruction can be primarily attributed to the lack of an efficient mathematical formulation for the two structures (hole and cavity). In this paper, we develop an efficient algorithm for generating cavities and holes from planar points. The algorithm yields the final boundary based on an iterative removal of the Delaunay triangulation. Our algorithm is divided into two steps, namely rough and refined shape reconstruction. The rough shape reconstruction is controlled by a relative parameter. Based on the rough result, the refined shape reconstruction aims to detect holes and pure cavities. A cavity or hole is conceptualized as a low-density region surrounded by a high-density region, and is characterized by a mathematical quantity, the compactness of a point, formed from the length variation of the edges incident to that point in the Delaunay triangulation. The boundaries of cavities and holes are then found by locating sharp changes in compactness over the point set. Experimental comparison with other shape reconstruction approaches shows that the proposed algorithm accurately yields the boundaries of cavities and holes for varying point set densities and distributions.
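
    A rough sketch of the compactness idea using SciPy's Delaunay triangulation; the coefficient of variation of incident edge lengths and the threshold below are illustrative stand-ins for the paper's exact formulation.

        import numpy as np
        from scipy.spatial import Delaunay

        def point_compactness(points):
            """Per-point coefficient of variation of incident Delaunay edge
            lengths; rim points of cavities/holes mix short intra-cluster and
            long cross-void edges, so their value jumps."""
            tri = Delaunay(points)
            incident = [[] for _ in range(len(points))]
            for simplex in tri.simplices:
                for i in range(3):
                    a, b = simplex[i], simplex[(i + 1) % 3]
                    d = np.linalg.norm(points[a] - points[b])
                    incident[a].append(d)
                    incident[b].append(d)
            return np.array([np.std(e) / np.mean(e) if e else 0.0 for e in incident])

        pts = np.random.default_rng(1).random((500, 2))
        candidates = np.where(point_compactness(pts) > 0.6)[0]  # assumed threshold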

  4. Exponential Lower Bounds For Policy Iteration

    OpenAIRE

    Fearnley, John

    2010-01-01

We study policy iteration for infinite-horizon Markov decision processes. It has recently been shown that policy-iteration-style algorithms have exponential lower bounds in a two-player game setting. We extend these lower bounds to Markov decision processes with the total-reward and average-reward optimality criteria.
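
    For reference, classical policy iteration for a finite discounted MDP (the discounted criterion is shown for simplicity; the paper treats the total- and average-reward criteria).

        import numpy as np

        def policy_iteration(P, R, gamma=0.95):
            """P[a] is the (S x S) transition matrix and R[a] the length-S reward
            vector of action a; returns an optimal deterministic policy."""
            n_actions, n_states = len(P), P[0].shape[0]
            policy = np.zeros(n_states, dtype=int)
            while True:
                # policy evaluation: solve (I - gamma * P_pi) v = r_pi exactly
                P_pi = np.array([P[policy[s]][s] for s in range(n_states)])
                r_pi = np.array([R[policy[s]][s] for s in range(n_states)])
                v = np.linalg.solve(np.eye(n_states) - gamma * P_pi, r_pi)
                # policy improvement: greedy one-step lookahead
                q = np.array([R[a] + gamma * P[a] @ v for a in range(n_actions)])
                new_policy = q.argmax(axis=0)
                if np.array_equal(new_policy, policy):
                    return policy, v
                policy = new_policy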

  5. Verifying large modular systems using iterative abstraction refinement

    International Nuclear Information System (INIS)

    Lahtinen, Jussi; Kuismin, Tuomas; Heljanko, Keijo

    2015-01-01

Digital instrumentation and control (I&C) systems are increasingly used in the nuclear engineering domain. The exhaustive verification of these systems is challenging, and the usual verification methods such as testing and simulation are typically insufficient. Model checking is a formal method that is able to exhaustively analyse the behaviour of a model against a formally written specification. If the model checking tool detects a violation of the specification, it produces a counter-example that demonstrates how the specification is violated in the system. Unfortunately, real-life system designs are often too large to be directly analysed by traditional model checking techniques. We have developed an iterative technique for model checking large modular systems. The technique uses abstraction-based over-approximations of the model behaviour, combined with iterative refinement. The main contribution of the work is the concrete abstraction refinement technique, based on the modular structure of the model, the dependency graph of the model, and a refinement sampling heuristic similar to delta debugging. The technique is geared towards proving properties, and it outperforms BDD-based model checking, the k-induction technique, and the property directed reachability algorithm (PDR) in our experiments. - Highlights: • We have developed an iterative technique for model checking large modular systems. • The technique uses BDD-based model checking, k-induction, and PDR in parallel. • We have tested our algorithm by verifying two models with it. • The technique outperforms classical model checking methods in our experiments

  6. A comparison in the reconstruction of neutron spectrums using classical iterative techniques

    International Nuclear Information System (INIS)

    Ortiz R, J. M.; Martinez B, M. R.; Vega C, H. R.; Gallego, E.

    2009-10-01

A key drawback of the BUNKI code is that the reconstruction of the spectrum starts from a priori knowledge as close as possible to the solution that is sought: the user has to specify the initial spectrum, or generate one through a subroutine called MAXIET that computes a Maxwellian and a 1/E spectrum as the initial guess. Because iterative procedures for reconstructing a neutron spectrum require an initial spectrum, new methods for selecting it are needed. Based on the experience gained with BUNKI, a widely used reconstruction method, a new computational tool for neutron spectrometry and dosimetry has been developed, which operates by means of an iterative algorithm for the reconstruction of neutron spectra. The main feature of this tool is that, unlike existing iterative codes, the choice of the initial spectrum is performed automatically by the program through a catalog of neutron spectra. To develop the code, the iterative routine SPUNIT was selected for the computational tool, together with the UTA4 response matrix for 31 energy groups. (author)

  7. Guidelines for Interactive Reliability-Based Structural Optimization using Quasi-Newton Algorithms

    DEFF Research Database (Denmark)

    Pedersen, C.; Thoft-Christensen, Palle

Guidelines for interactive reliability-based structural optimization problems are outlined in terms of modifications of standard quasi-Newton algorithms. The proposed modifications minimize the condition number of the approximate Hessian matrix in each iteration, restrict the relative and absolute increase of the condition number, and preserve positive definiteness without discarding previously obtained information. All proposed modifications are also valid for non-interactive optimization problems. Heuristic rules from various optimization problems concerning when and how to impose interactions...

  8. Adaptable Iterative and Recursive Kalman Filter Schemes

    Science.gov (United States)

    Zanetti, Renato

    2014-01-01

Nonlinear filters are often very computationally expensive and usually not suitable for real-time applications. Real-time navigation algorithms are typically based on linear estimators, such as the extended Kalman filter (EKF) and, to a much lesser extent, the unscented Kalman filter. The iterated Kalman filter (IKF) and the recursive update filter (RUF) are two algorithms that reduce the consequences of the linearization assumption of the EKF by performing N updates for each new measurement, where the number of recursions N is a tuning parameter. This paper introduces an adaptable RUF algorithm that calculates N on the go; a similar technique can be used for the IKF as well.
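
    A minimal sketch of the iterated measurement update with a fixed recursion count N; the paper's adaptive choice of N is not reproduced, and the function names are illustrative.

        import numpy as np

        def iterated_kalman_update(x0, P, z, h, H_jac, R, n_iter=5):
            """Iterated EKF measurement update: relinearize the measurement
            function h about the current iterate rather than the prior mean."""
            x = x0.copy()
            for _ in range(n_iter):
                H = H_jac(x)                             # Jacobian at current iterate
                K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
                x = x0 + K @ (z - h(x) - H @ (x0 - x))   # IEKF iterate
            P_new = (np.eye(len(x0)) - K @ H_jac(x)) @ P # covariance from final gain
            return x, P_new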

  9. An iterative, fast-sweeping-based eikonal solver for 3D tilted anisotropic media

    KAUST Repository

    Waheed, Umair bin; Yarman, Can Evren; Flagg, Garret

    2015-01-01

Computation of first-arrival traveltimes for quasi-P waves in the presence of anisotropy is important for high-end near-surface modeling, microseismic-source localization, and fractured-reservoir characterization - and it requires solving an anisotropic eikonal equation. Anisotropy deviating from elliptical anisotropy introduces higher order nonlinearity into the eikonal equation, which makes solving the eikonal equation a challenge. We addressed this challenge by iteratively solving a sequence of simpler tilted elliptically anisotropic eikonal equations. At each iteration, the source function was updated to capture the effects of the higher order nonlinear terms. We used Aitken's extrapolation to speed up the convergence rate of the iterative algorithm. The result is an algorithm for computing first-arrival traveltimes in tilted anisotropic media. We evaluated the applicability and usefulness of our method on tilted transversely isotropic media and tilted orthorhombic media. Our numerical tests determined that the proposed method matches the first arrivals obtained by wavefield extrapolation, even for strongly anisotropic and highly complex subsurface structures. Thus, for the cases where two-point ray tracing fails, our method can be a potential substitute for computing traveltimes. The approach presented here can be easily extended to compute first-arrival traveltimes for anisotropic media with lower symmetries, such as monoclinic or even the triclinic media.
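
    The Aitken extrapolation credited with accelerating the fixed-point loop can be illustrated on a scalar fixed-point map; this is a generic sketch, not the eikonal solver itself.

        import math

        def aitken_fixed_point(g, x0, tol=1e-12, max_iter=100):
            """Accelerate the fixed-point iteration x = g(x) with Aitken's
            delta-squared extrapolation (Steffensen's method)."""
            x = x0
            for _ in range(max_iter):
                x1 = g(x)
                x2 = g(x1)
                denom = x2 - 2.0 * x1 + x
                if abs(denom) < 1e-300:            # differences vanished: converged
                    return x2
                x_acc = x - (x1 - x) ** 2 / denom  # Aitken's delta-squared step
                if abs(x_acc - x) < tol:
                    return x_acc
                x = x_acc
            return x

        root = aitken_fixed_point(math.cos, 1.0)  # example: solve x = cos(x)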

  10. An iterative, fast-sweeping-based eikonal solver for 3D tilted anisotropic media

    KAUST Repository

    Waheed, Umair bin

    2015-03-30

Computation of first-arrival traveltimes for quasi-P waves in the presence of anisotropy is important for high-end near-surface modeling, microseismic-source localization, and fractured-reservoir characterization - and it requires solving an anisotropic eikonal equation. Anisotropy deviating from elliptical anisotropy introduces higher order nonlinearity into the eikonal equation, which makes solving the eikonal equation a challenge. We addressed this challenge by iteratively solving a sequence of simpler tilted elliptically anisotropic eikonal equations. At each iteration, the source function was updated to capture the effects of the higher order nonlinear terms. We used Aitken's extrapolation to speed up the convergence rate of the iterative algorithm. The result is an algorithm for computing first-arrival traveltimes in tilted anisotropic media. We evaluated the applicability and usefulness of our method on tilted transversely isotropic media and tilted orthorhombic media. Our numerical tests determined that the proposed method matches the first arrivals obtained by wavefield extrapolation, even for strongly anisotropic and highly complex subsurface structures. Thus, for the cases where two-point ray tracing fails, our method can be a potential substitute for computing traveltimes. The approach presented here can be easily extended to compute first-arrival traveltimes for anisotropic media with lower symmetries, such as monoclinic or even the triclinic media.

  11. Output Information Based Fault-Tolerant Iterative Learning Control for Dual-Rate Sampling Process with Disturbances and Output Delay

    Directory of Open Access Journals (Sweden)

    Hongfeng Tao

    2018-01-01

Full Text Available For a class of single-input single-output (SISO) dual-rate sampling processes with disturbances and output delay, this paper presents a robust fault-tolerant iterative learning control algorithm based on output information. Firstly, the dual-rate sampling process with output delay is transformed, using lifting technology, into a discrete state-space model with a slow sampling rate and no time delay; an output-information-based fault-tolerant iterative learning control scheme is then designed, and the control process is turned into an equivalent two-dimensional (2D) repetitive process. Moreover, based on repetitive process stability theory, sufficient conditions for the stability of the system and a design method for the robust controller are given in terms of the linear matrix inequality (LMI) technique. Finally, flow control simulations of two flow tanks in series demonstrate the feasibility and effectiveness of the proposed method.
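
    A minimal sketch of the plain P-type ILC update that such schemes build on; the paper's LMI-based fault-tolerant design is not reproduced, and the toy plant and learning gain are illustrative assumptions.

        import numpy as np

        def p_type_ilc(plant, r, L, n_trials=30):
            """P-type ILC: u_{k+1}[t] = u_k[t] + L * e_k[t+1], using the
            one-step-ahead error to account for the plant's unit delay."""
            u = np.zeros_like(r)
            for _ in range(n_trials):
                y = plant(u)
                e = r - y            # tracking error on this trial
                u[:-1] += L * e[1:]  # learning update carried to the next trial
            return u

        def plant(u):
            """Toy first-order discrete plant: y[t+1] = 0.9*y[t] + 0.5*u[t]."""
            y = np.zeros(len(u))
            for t in range(len(u) - 1):
                y[t + 1] = 0.9 * y[t] + 0.5 * u[t]
            return y

        u_final = p_type_ilc(plant, r=np.ones(50), L=0.8)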

  12. Inpainting for Fringe Projection Profilometry Based on Geometrically Guided Iterative Regularization.

    Science.gov (United States)

    Budianto; Lun, Daniel P K

    2015-12-01

Conventional fringe projection profilometry methods often have difficulty in reconstructing the 3D model of objects when the fringe images have so-called highlight regions due to strong illumination from nearby light sources. Within a highlight region, the fringe pattern is often overwhelmed by the strong reflected light. Thus, the 3D information of the object, which is originally embedded in the fringe pattern, can no longer be retrieved. In this paper, a novel inpainting algorithm is proposed to restore the fringe images in the presence of highlights. The proposed method first detects the highlight regions based on a Gaussian mixture model. Then, a geometric sketch of the missing fringes is made and used as the initial guess of an iterative regularization procedure for regenerating the missing fringes. The simulation and experimental results show that the proposed algorithm can accurately reconstruct the 3D model of objects even when their fringe images have large highlight regions. It significantly outperforms the traditional approaches in both quantitative and qualitative evaluations.

  13. An Efficient ABC_DE_Based Hybrid Algorithm for Protein–Ligand Docking

    Directory of Open Access Journals (Sweden)

    Boxin Guan

    2018-04-01

Full Text Available Protein–ligand docking is a process of searching for the optimal binding conformation between the receptor and the ligand. Automated docking plays an important role in drug design, and an efficient search algorithm is needed to tackle the docking problem. To tackle the protein–ligand docking problem more efficiently, an ABC_DE_based hybrid algorithm (ADHDOCK), integrating the artificial bee colony (ABC) algorithm and the differential evolution (DE) algorithm, is proposed in this article. ADHDOCK applies an adaptive population partition (APP) mechanism to reasonably allocate the computational resources of the population in each iteration, which helps the method make better use of the advantages of ABC and DE. The experiments tested fifty protein–ligand docking problems to compare the performance of ADHDOCK with ABC, DE, the Lamarckian genetic algorithm (LGA), the running-history-information-guided genetic algorithm (HIGA), and swarm optimization for highly flexible protein–ligand docking (SODOCK). The results clearly exhibit the capability of ADHDOCK to find the lowest energy and the smallest root-mean-square deviation (RMSD) on most of the protein–ligand docking problems, compared with the other five algorithms.

  14. Total-variation based velocity inversion with Bregmanized operator splitting algorithm

    Science.gov (United States)

    Zand, Toktam; Gholami, Ali

    2018-04-01

Many problems in applied geophysics can be formulated as linear inverse problems. The associated problems, however, are large-scale and ill-conditioned, so regularization techniques must be employed to generate a stable and acceptable solution. We consider numerical methods for solving such problems in this paper. In order to tackle the ill-conditioning of the problem, we use blockiness as prior information on the subsurface parameters and formulate the problem as a constrained total variation (TV) regularization. The Bregmanized operator splitting (BOS) algorithm, a combination of the Bregman iteration and the proximal forward-backward operator splitting method, is developed to solve the resulting problem. Two main advantages of this algorithm are that no matrix inversion is required and that a discrepancy stopping criterion is used to stop the iterations, which allows efficient solution of large-scale problems. The high performance of the proposed TV regularization method is demonstrated using two different experiments: 1) velocity inversion from (synthetic) seismic data based on the Born approximation, and 2) computation of interval velocities from RMS velocities via the Dix formula. Numerical examples are presented to verify the feasibility of the proposed method for high-resolution velocity inversion.

  15. A Single LiDAR-Based Feature Fusion Indoor Localization Algorithm.

    Science.gov (United States)

    Wang, Yun-Ting; Peng, Chao-Chung; Ravankar, Ankit A; Ravankar, Abhijeet

    2018-04-23

In past years, there has been significant progress in the field of indoor robot localization. To precisely recover their position, robots usually rely on multiple on-board sensors. Nevertheless, this affects the overall system cost and increases computation. In this research work, we consider a light detection and ranging (LiDAR) device as the only sensor for detecting surroundings and propose an efficient indoor localization algorithm. To reduce the computational effort and preserve localization robustness, a weighted parallel iterative closest point (WP-ICP) algorithm with interpolation is presented. Compared to traditional ICP, the point cloud is first processed to extract corner and line features before applying point registration. Points labeled as corners are then matched only with the corner candidates, and points labeled as lines only with the line candidates. Moreover, their ICP confidence levels are fused in the algorithm, which makes the pose estimation less sensitive to environment uncertainties. The proposed WP-ICP architecture reduces the probability of mismatch and thereby reduces the number of ICP iterations. Finally, based on well-constructed indoor layouts, experimental comparisons are carried out under both clean and perturbed environments. The proposed method is shown to be effective in significantly reducing computational effort while preserving localization precision.
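
    For orientation, a plain point-to-point ICP loop in two dimensions with a closed-form SVD alignment step; the corner/line feature extraction and confidence weighting of WP-ICP are not reproduced.

        import numpy as np
        from scipy.spatial import cKDTree

        def icp_2d(src, dst, n_iter=50, tol=1e-9):
            """Point-to-point ICP: match each (transformed) source point to its
            nearest target point, then solve the rigid transform via SVD."""
            R, t = np.eye(2), np.zeros(2)
            tree = cKDTree(dst)
            prev_err = np.inf
            for _ in range(n_iter):
                moved = src @ R.T + t
                dist, idx = tree.query(moved)           # nearest-neighbor matches
                matched = dst[idx]
                mu_s, mu_d = moved.mean(axis=0), matched.mean(axis=0)
                H = (moved - mu_s).T @ (matched - mu_d) # cross-covariance
                U, _, Vt = np.linalg.svd(H)
                R_step = Vt.T @ U.T
                if np.linalg.det(R_step) < 0:           # guard against reflection
                    Vt[-1] *= -1
                    R_step = Vt.T @ U.T
                R = R_step @ R                          # compose with running transform
                t = R_step @ t + mu_d - R_step @ mu_s
                err = dist.mean()
                if abs(prev_err - err) < tol:
                    break
                prev_err = err
            return R, t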

  16. A Single LiDAR-Based Feature Fusion Indoor Localization Algorithm

    Directory of Open Access Journals (Sweden)

    Yun-Ting Wang

    2018-04-01

Full Text Available In past years, there has been significant progress in the field of indoor robot localization. To precisely recover their position, robots usually rely on multiple on-board sensors. Nevertheless, this affects the overall system cost and increases computation. In this research work, we consider a light detection and ranging (LiDAR) device as the only sensor for detecting surroundings and propose an efficient indoor localization algorithm. To reduce the computational effort and preserve localization robustness, a weighted parallel iterative closest point (WP-ICP) algorithm with interpolation is presented. Compared to traditional ICP, the point cloud is first processed to extract corner and line features before applying point registration. Points labeled as corners are then matched only with the corner candidates, and points labeled as lines only with the line candidates. Moreover, their ICP confidence levels are fused in the algorithm, which makes the pose estimation less sensitive to environment uncertainties. The proposed WP-ICP architecture reduces the probability of mismatch and thereby reduces the number of ICP iterations. Finally, based on well-constructed indoor layouts, experimental comparisons are carried out under both clean and perturbed environments. The proposed method is shown to be effective in significantly reducing computational effort while preserving localization precision.

  17. Discounted semi-Markov decision processes : linear programming and policy iteration

    NARCIS (Netherlands)

    Wessels, J.; van Nunen, J.A.E.E.

    1975-01-01

    For semi-Markov decision processes with discounted rewards we derive the well known results regarding the structure of optimal strategies (nonrandomized, stationary Markov strategies) and the standard algorithms (linear programming, policy iteration). Our analysis is completely based on a primal

  18. Discounted semi-Markov decision processes : linear programming and policy iteration

    NARCIS (Netherlands)

    Wessels, J.; van Nunen, J.A.E.E.

    1974-01-01

    For semi-Markov decision processes with discounted rewards we derive the well known results regarding the structure of optimal strategies (nonrandomized, stationary Markov strategies) and the standard algorithms (linear programming, policy iteration). Our analysis is completely based on a primal

  19. Variability and accuracy of coronary CT angiography including use of iterative reconstruction algorithms for plaque burden assessment as compared with intravascular ultrasound - an ex vivo study

    Energy Technology Data Exchange (ETDEWEB)

    Stolzmann, Paul [Massachusetts General Hospital and Harvard Medical School, Cardiac MR PET CT Program, Boston, MA (United States); University Hospital Zurich, Institute of Diagnostic and Interventional Radiology, Zurich (Switzerland); Schlett, Christopher L.; Maurovich-Horvat, Pal; Scheffel, Hans; Engel, Leif-Christopher; Karolyi, Mihaly; Hoffmann, Udo [Massachusetts General Hospital and Harvard Medical School, Cardiac MR PET CT Program, Boston, MA (United States); Maehara, Akiko; Ma, Shixin; Mintz, Gary S. [Columbia University Medical Center, Cardiovascular Research Foundation, New York, NY (United States)

    2012-10-15

To systematically assess inter-technique and inter-/intra-reader variability of coronary CT angiography (CTA) to measure plaque burden compared with intravascular ultrasound (IVUS) and to determine whether iterative reconstruction algorithms affect variability. IVUS and CTA data were acquired from nine human coronary arteries ex vivo. CT images were reconstructed using filtered back projection (FBPR) and iterative reconstruction algorithms: adaptive-statistical (ASIR) and model-based (MBIR). After co-registration of 284 cross-sections between IVUS and CTA, two readers manually delineated the cross-sectional plaque area in all images presented in random order. Average plaque burden by IVUS was 63.7 ± 10.7% and correlated significantly with all CTA measurements (r = 0.45-0.52; P < 0.001), while CTA overestimated the burden by 10 ± 10%. There were no significant differences among FBPR, ASIR and MBIR (P > 0.05). Increased overestimation was associated with smaller plaques, eccentricity and calcification (P < 0.001). Reproducibility of plaque burden by CTA and IVUS datasets was excellent with a low mean intra-/inter-reader variability of <1/<4% for CTA and <0.5/<1% for IVUS respectively (P < 0.05) with no significant difference between CT reconstruction algorithms (P > 0.05). In ex vivo coronary arteries, plaque burden by coronary CTA had extremely low inter-/intra-reader variability and correlated significantly with IVUS measurements. Accuracy as well as reader reliability were independent of CT image reconstruction algorithm. (orig.)

  20. Local fractional variational iteration algorithm II for non-homogeneous model associated with the non-differentiable heat flow

    Directory of Open Access Journals (Sweden)

    Yu Zhang

    2015-10-01

Full Text Available In this article, we begin with the non-homogeneous model for the non-differentiable heat flow, described using the local fractional vector calculus, from the point of view of the first law of thermodynamics in fractal media. We employ the local fractional variational iteration algorithm II to solve the fractal heat equations. The obtained results show the non-differentiable behavior of the temperature fields of fractal heat flow defined on Cantor sets.

  1. Clustered iterative stochastic ensemble method for multi-modal calibration of subsurface flow models

    KAUST Repository

    Elsheikh, Ahmed H.

    2013-05-01

A novel multi-modal parameter estimation algorithm is introduced. Parameter estimation is an ill-posed inverse problem that might admit many different solutions. This is attributed to the limited amount of measured data used to constrain the inverse problem. The proposed multi-modal model calibration algorithm uses an iterative stochastic ensemble method (ISEM) for parameter estimation. ISEM employs an ensemble of directional derivatives within a Gauss-Newton iteration for nonlinear parameter estimation. ISEM is augmented with a clustering step based on the k-means algorithm to form sub-ensembles. These sub-ensembles are used to explore different parts of the search space. Clusters are updated at regular intervals of the algorithm to allow merging of close clusters approaching the same local minimum. Numerical testing demonstrates the potential of the proposed algorithm in dealing with multi-modal nonlinear parameter estimation for subsurface flow models. © 2013 Elsevier B.V.

  2. Virtual fringe projection system with nonparallel illumination based on iteration

    International Nuclear Information System (INIS)

    Zhou, Duo; Wang, Zhangying; Gao, Nan; Zhang, Zonghua; Jiang, Xiangqian

    2017-01-01

Fringe projection profilometry has been widely applied in many fields. To set up an ideal measuring system, a virtual fringe projection technique has been studied to assist in the design of hardware configurations. However, existing virtual fringe projection systems use parallel illumination and have a fixed optical framework. This paper presents a virtual fringe projection system with nonparallel illumination. Using an iterative method to calculate intersection points between rays and reference planes or object surfaces, the proposed system can simulate projected fringe patterns and captured images. A new explicit calibration method is presented to validate the precision of the system. Simulated results indicate that the proposed iterative method outperforms previous systems. Our virtual system can be applied to error analysis and algorithm optimization, and can help operators find ideal system parameter settings for actual measurements. (paper)

  3. New algorithms for the symmetric tridiagonal eigenvalue computation

    Energy Technology Data Exchange (ETDEWEB)

    Pan, V. [City Univ. of New York, Bronx, NY (United States)]|[International Computer Sciences Institute, Berkeley, CA (United States)

    1994-12-31

The author presents new algorithms that accelerate the bisection method for the symmetric eigenvalue problem. The algorithms rely on some new techniques, which include acceleration of Newton's iteration and can also be further applied to acceleration of some other iterative processes, in particular, of iterative algorithms for approximating polynomial zeros.
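
    A sketch of the underlying bisection method via Sturm sequence counts for a symmetric tridiagonal matrix; the acceleration techniques of the paper are not reproduced.

        import numpy as np

        def count_below(d, e, sigma):
            """Sturm-sequence count: number of eigenvalues of the symmetric
            tridiagonal matrix (diagonal d, off-diagonal e) below sigma."""
            count = 0
            q = d[0] - sigma
            if q < 0:
                count += 1
            for i in range(1, len(d)):
                q = d[i] - sigma - e[i - 1] ** 2 / (q if q != 0.0 else 1e-300)
                if q < 0:
                    count += 1
            return count

        def kth_eigenvalue(d, e, k, tol=1e-12):
            """Bisection on the Sturm count to isolate the k-th smallest eigenvalue."""
            radius = np.max(np.abs(d)) + 2.0 * np.max(np.abs(e))  # Gershgorin bound
            lo, hi = -radius, radius
            while hi - lo > tol:
                mid = 0.5 * (lo + hi)
                if count_below(d, e, mid) >= k + 1:
                    hi = mid
                else:
                    lo = mid
            return 0.5 * (lo + hi)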

  4. Direct and iterative algorithms for the parallel solution of the one-dimensional macroscopic Navier-Stokes equations

    International Nuclear Information System (INIS)

    Doster, J.M.; Sills, E.D.

    1986-01-01

Current efforts are under way to develop and evaluate numerical algorithms for the parallel solution of the large sparse matrix equations associated with the finite difference representation of the macroscopic Navier-Stokes equations. Previous work has shown that these equations can be cast into smaller coupled matrix equations suitable for solution utilizing multiple computer processors operating in parallel. The individual processors themselves may exhibit parallelism through the use of vector pipelines. This work has concentrated on the one-dimensional drift flux form of the Navier-Stokes equations. Direct and iterative algorithms that may be suitable for implementation on parallel computer architectures are evaluated in terms of accuracy and overall execution speed. This work has application to engineering and training simulations, on-line process control systems, and engineering workstations where increased computational speeds are required

  5. A Line-Based Adaptive-Weight Matching Algorithm Using Loopy Belief Propagation

    Directory of Open Access Journals (Sweden)

    Hui Li

    2015-01-01

Full Text Available In traditional adaptive-weight stereo matching, the rectangular support region requires excessive memory and time. We propose a novel line-based stereo matching algorithm for obtaining a more accurate disparity map with low computational complexity. This algorithm can be divided into two steps: disparity map initialization and disparity map refinement. In the initialization step, a new adaptive-weight model based on the linear support region is put forward for cost aggregation. In this model, a neural network is used to evaluate the spatial proximity, and the mean-shift segmentation method is used to improve the accuracy of color similarity; the Birchfield pixel dissimilarity function and the census transform are adopted to establish the dissimilarity measurement function. The initial disparity map is then obtained by loopy belief propagation. In the refinement step, the disparity map is optimized by an iterative left-right consistency checking method and a segmentation voting method. The parameter values involved in this algorithm are determined through extensive simulation experiments to further improve the matching effect. Simulation results indicate that the new matching method performs well on standard stereo benchmarks and that its running time is markedly lower than that of algorithms with rectangular support regions.

  6. Fractional Fourier domain optical image hiding using phase retrieval algorithm based on iterative nonlinear double random phase encoding.

    Science.gov (United States)

    Wang, Xiaogang; Chen, Wen; Chen, Xudong

    2014-09-22

    We present a novel image hiding method based on phase retrieval algorithm under the framework of nonlinear double random phase encoding in fractional Fourier domain. Two phase-only masks (POMs) are efficiently determined by using the phase retrieval algorithm, in which two cascaded phase-truncated fractional Fourier transforms (FrFTs) are involved. No undesired information disclosure, post-processing of the POMs or digital inverse computation appears in our proposed method. In order to achieve the reduction in key transmission, a modified image hiding method based on the modified phase retrieval algorithm and logistic map is further proposed in this paper, in which the fractional orders and the parameters with respect to the logistic map are regarded as encryption keys. Numerical results have demonstrated the feasibility and effectiveness of the proposed algorithms.
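
    A skeleton of the classic two-domain phase retrieval loop, with a plain FFT standing in for the cascaded phase-truncated fractional Fourier transforms used in the paper; amplitude arrays and iteration count are illustrative assumptions.

        import numpy as np

        def gerchberg_saxton(amp_in, amp_out, n_iter=200):
            """Two-domain Gerchberg-Saxton loop: enforce the known amplitude
            in each domain while keeping the evolving phase."""
            rng = np.random.default_rng(0)
            field = amp_in * np.exp(1j * 2 * np.pi * rng.random(amp_in.shape))
            for _ in range(n_iter):
                F = np.fft.fft2(field)
                F = amp_out * np.exp(1j * np.angle(F))         # impose target amplitude
                field = np.fft.ifft2(F)
                field = amp_in * np.exp(1j * np.angle(field))  # impose input amplitude
            return np.angle(field)                             # recovered phase mask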

  7. Golden Sine Algorithm: A Novel Math-Inspired Algorithm

    Directory of Open Access Journals (Sweden)

    TANYILDIZI, E.

    2017-05-01

Full Text Available In this study, the Golden Sine Algorithm (Gold-SA) is presented as a new metaheuristic method for solving optimization problems. Gold-SA is a population-based search algorithm inspired by the sine function. In the algorithm, random individuals, one per search agent, are created with uniform distribution in each dimension. The Gold-SA operator searches for a better solution in each iteration by trying to bring the current position closer to the target value. The solution space is narrowed by the golden section, so that only the regions expected to give good results are scanned instead of the whole solution space. In the tests performed, Gold-SA obtains better results than other population-based methods. In addition, Gold-SA has fewer algorithm-dependent parameters and operators than other metaheuristic methods and provides faster convergence, increasing the importance of this new method.

  8. Influence of radiation dose and iterative reconstruction algorithms for measurement accuracy and reproducibility of pulmonary nodule volumetry: A phantom study

    International Nuclear Information System (INIS)

    Kim, Hyungjin; Park, Chang Min; Song, Yong Sub; Lee, Sang Min; Goo, Jin Mo

    2014-01-01

Purpose: To evaluate the influence of radiation dose settings and reconstruction algorithms on the measurement accuracy and reproducibility of semi-automated pulmonary nodule volumetry. Materials and methods: CT scans were performed on a chest phantom containing various nodules (10 and 12 mm; +100, −630 and −800 HU) at 120 kVp with tube current–time settings of 10, 20, 50, and 100 mAs. Each CT was reconstructed using filtered back projection (FBP), iDose4 and iterative model reconstruction (IMR). Semi-automated volumetry was performed by two radiologists using commercial volumetry software for nodules at each CT dataset. Noise, contrast-to-noise ratio and signal-to-noise ratio of CT images were also obtained. The absolute percentage measurement errors and differences were then calculated for volume and mass. The influence of radiation dose and reconstruction algorithm on measurement accuracy, reproducibility and objective image quality metrics was analyzed using generalized estimating equations. Results: Measurement accuracy and reproducibility of nodule volume and mass were not significantly associated with CT radiation dose settings or reconstruction algorithms (p > 0.05). Objective image quality metrics of CT images were superior in IMR than in FBP or iDose4 at all radiation dose settings (p < 0.05). Conclusion: Semi-automated nodule volumetry can be applied to low- or ultralow-dose chest CT with usage of a novel iterative reconstruction algorithm without losing measurement accuracy and reproducibility

  9. Influence of radiation dose and iterative reconstruction algorithms for measurement accuracy and reproducibility of pulmonary nodule volumetry: A phantom study

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Hyungjin, E-mail: khj.snuh@gmail.com [Department of Radiology, Seoul National University College of Medicine, Institute of Radiation Medicine, Seoul National University Medical Research Center, 101, Daehangno, Jongno-gu, Seoul 110-744 (Korea, Republic of); Park, Chang Min, E-mail: cmpark@radiol.snu.ac.kr [Department of Radiology, Seoul National University College of Medicine, Institute of Radiation Medicine, Seoul National University Medical Research Center, 101, Daehangno, Jongno-gu, Seoul 110-744 (Korea, Republic of); Cancer Research Institute, Seoul National University, 101, Daehangno, Jongno-gu, Seoul 110-744 (Korea, Republic of); Song, Yong Sub, E-mail: terasong@gmail.com [Department of Radiology, Seoul National University College of Medicine, Institute of Radiation Medicine, Seoul National University Medical Research Center, 101, Daehangno, Jongno-gu, Seoul 110-744 (Korea, Republic of); Lee, Sang Min, E-mail: sangmin.lee.md@gmail.com [Department of Radiology, Seoul National University College of Medicine, Institute of Radiation Medicine, Seoul National University Medical Research Center, 101, Daehangno, Jongno-gu, Seoul 110-744 (Korea, Republic of); Goo, Jin Mo, E-mail: jmgoo@plaza.snu.ac.kr [Department of Radiology, Seoul National University College of Medicine, Institute of Radiation Medicine, Seoul National University Medical Research Center, 101, Daehangno, Jongno-gu, Seoul 110-744 (Korea, Republic of); Cancer Research Institute, Seoul National University, 101, Daehangno, Jongno-gu, Seoul 110-744 (Korea, Republic of)

    2014-05-15

Purpose: To evaluate the influence of radiation dose settings and reconstruction algorithms on the measurement accuracy and reproducibility of semi-automated pulmonary nodule volumetry. Materials and methods: CT scans were performed on a chest phantom containing various nodules (10 and 12 mm; +100, −630 and −800 HU) at 120 kVp with tube current–time settings of 10, 20, 50, and 100 mAs. Each CT was reconstructed using filtered back projection (FBP), iDose4 and iterative model reconstruction (IMR). Semi-automated volumetry was performed by two radiologists using commercial volumetry software for nodules at each CT dataset. Noise, contrast-to-noise ratio and signal-to-noise ratio of CT images were also obtained. The absolute percentage measurement errors and differences were then calculated for volume and mass. The influence of radiation dose and reconstruction algorithm on measurement accuracy, reproducibility and objective image quality metrics was analyzed using generalized estimating equations. Results: Measurement accuracy and reproducibility of nodule volume and mass were not significantly associated with CT radiation dose settings or reconstruction algorithms (p > 0.05). Objective image quality metrics of CT images were superior in IMR than in FBP or iDose4 at all radiation dose settings (p < 0.05). Conclusion: Semi-automated nodule volumetry can be applied to low- or ultralow-dose chest CT with usage of a novel iterative reconstruction algorithm without losing measurement accuracy and reproducibility.

  10. High-speed parallel implementation of a modified PBR algorithm on DSP-based EH topology

    Science.gov (United States)

    Rajan, K.; Patnaik, L. M.; Ramakrishna, J.

    1997-08-01

Algebraic Reconstruction Technique (ART) is an age-old method used for solving the problem of three-dimensional (3-D) reconstruction from projections in electron microscopy and radiology. In medical applications, direct 3-D reconstruction is at the forefront of investigation. The simultaneous iterative reconstruction technique (SIRT) is an ART-type algorithm with the potential of generating in a few iterations tomographic images of a quality comparable to that of convolution backprojection (CBP) methods. Pixel-based reconstruction (PBR) is similar to SIRT reconstruction, and it has been shown that PBR algorithms give better quality pictures compared to those produced by SIRT algorithms. In this work, we propose a few modifications to the PBR algorithms. The modified algorithms are shown to give better quality pictures compared to PBR algorithms. The PBR algorithm and the modified PBR algorithms are highly compute intensive. Not many attempts have been made to reconstruct objects in the true 3-D sense because of the high computational overhead. In this study, we have developed parallel two-dimensional (2-D) and 3-D reconstruction algorithms based on modified PBR. We attempt to solve the two problems encountered by the PBR and modified PBR algorithms, i.e., the long computational time and the large memory requirements, by parallelizing the algorithm on a multiprocessor system. We investigate the possible task and data partitioning schemes by exploiting the potential parallelism in the PBR algorithm subject to minimizing the memory requirement. We have implemented an extended hypercube (EH) architecture for the high-speed execution of the 3-D reconstruction algorithm using the commercially available fast floating point digital signal processor (DSP) chips as the processing elements (PEs) and dual-port random access memories (DPR) as channels between the PEs. We discuss and compare the performances of the PBR algorithm on an IBM 6000 RISC workstation, on a Silicon

  11. Adaptive algorithms for a self-shielding wavelet-based Galerkin method

    International Nuclear Information System (INIS)

    Fournier, D.; Le Tellier, R.

    2009-01-01

    The treatment of the energy variable in deterministic neutron transport methods is based on a multigroup discretization, considering the flux and cross-sections to be constant within a group. In this case, a self-shielding calculation is mandatory to correct sections of resonant isotopes. In this paper, a different approach based on a finite element discretization on a wavelet basis is used. We propose adaptive algorithms constructed from error estimates. Such an approach is applied to within-group scattering source iterations. A first implementation is presented in the special case of the fine structure equation for an infinite homogeneous medium. Extension to spatially-dependent cases is discussed. (authors)

  12. Low contrast detectability and spatial resolution with model-based iterative reconstructions of MDCT images: a phantom and cadaveric study

    Energy Technology Data Exchange (ETDEWEB)

    Millon, Domitille; Coche, Emmanuel E. [Universite Catholique de Louvain, Department of Radiology and Medical Imaging, Cliniques Universitaires Saint Luc, Brussels (Belgium); Vlassenbroek, Alain [Philips Healthcare, Brussels (Belgium); Maanen, Aline G. van; Cambier, Samantha E. [Universite Catholique de Louvain, Statistics Unit, King Albert II Cancer Institute, Brussels (Belgium)

    2017-03-15

To compare the image quality [low contrast (LC) detectability, noise, contrast-to-noise ratio (CNR) and spatial resolution (SR)] of MDCT images reconstructed with an iterative reconstruction (IR) algorithm and a filtered back projection (FBP) algorithm. The experimental study was performed on a 256-slice MDCT. LC detectability, noise, CNR and SR were measured on a Catphan phantom scanned with decreasing doses (48.8 down to 0.7 mGy) and parameters typical of a chest CT examination. Images were reconstructed with FBP and a model-based IR algorithm. Additionally, human chest cadavers were scanned and reconstructed using the same technical parameters. Images were analyzed to illustrate the phantom results. LC detectability and noise were statistically significantly different between the techniques, favoring the model-based IR algorithm (p < 0.0001). At low doses, the noise in FBP images only enabled SR measurements of high contrast objects. The superior CNR of the model-based IR algorithm enabled lower dose measurements, which showed that SR was dose and contrast dependent. Cadaver images reconstructed with model-based IR illustrated that the visibility and delineation of anatomical structure edges could deteriorate at low doses. Model-based IR improved LC detectability and enabled dose reduction. At low dose, SR became dose and contrast dependent. (orig.)

  13. Preconditioned iterations to calculate extreme eigenvalues

    Energy Technology Data Exchange (ETDEWEB)

    Brand, C.W.; Petrova, S. [Institut fuer Angewandte Mathematik, Leoben (Austria)

    1994-12-31

Common iterative algorithms to calculate a few extreme eigenvalues of a large, sparse matrix are Lanczos methods or power iterations. They converge at a rate proportional to the separation of the extreme eigenvalues from the rest of the spectrum. Appropriate preconditioning improves the separation of the eigenvalues. Davidson's method and its generalizations exploit this fact. The authors examine a preconditioned iteration that resembles a truncated version of Davidson's method with a different preconditioning strategy.
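
    A compact sketch of the classical Davidson iteration with a diagonal preconditioner for the lowest eigenpair of a symmetric matrix; the authors' alternative preconditioning strategy is not reproduced.

        import numpy as np

        def davidson_lowest(A, n_iter=50, tol=1e-8):
            """Davidson iteration for the smallest eigenvalue of a symmetric
            matrix: expand a subspace with diagonally preconditioned residuals."""
            n = A.shape[0]
            diag = np.diag(A)
            V = np.zeros((n, 0))
            t = np.random.default_rng(0).standard_normal(n)
            for _ in range(n_iter):
                t -= V @ (V.T @ t)                    # orthogonalize against basis
                norm = np.linalg.norm(t)
                if norm < 1e-12:
                    break
                V = np.column_stack([V, t / norm])
                H = V.T @ A @ V                       # Rayleigh-Ritz projection
                theta, s = np.linalg.eigh(H)
                theta, s = theta[0], s[:, 0]          # lowest Ritz pair
                u = V @ s
                r = A @ u - theta * u                 # residual
                if np.linalg.norm(r) < tol:
                    break
                t = r / (theta - diag + 1e-12)        # diagonal preconditioner
            return theta, u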

  14. Low Complexity Tree Searching-Based Iterative Precoding Techniques for Multiuser MIMO Broadcast Channel

    Science.gov (United States)

    Cha, Jongsub; Park, Kyungho; Kang, Joonhyuk; Park, Hyuncheol

In this letter, we propose two computationally efficient precoding algorithms that achieve near-ML performance for the multiuser MIMO downlink. The proposed algorithms perform tree expansion after lattice reduction. The first full expansion is attempted by selecting the first-level node with the minimum metric, which constitutes a reference metric. To find an optimal sequence, the algorithms iteratively visit each node and terminate the expansion by comparing node metrics with the calculated reference metric, significantly reducing the number of undesirable node visits. Monte-Carlo simulations show that both proposed algorithms yield near-ML performance with a considerable reduction in complexity compared with conventional schemes such as sphere encoding.

  15. A Method for Speeding Up Value Iteration in Partially Observable Markov Decision Processes

    OpenAIRE

    Zhang, Nevin Lianwen; Lee, Stephen S.; Zhang, Weihong

    2013-01-01

We present a technique for speeding up the convergence of value iteration for partially observable Markov decision processes (POMDPs). The underlying idea is similar to that behind modified policy iteration for fully observable Markov decision processes (MDPs). The technique can be easily incorporated into any existing POMDP value iteration algorithm. Experiments have been conducted on several test problems with one POMDP value iteration algorithm called incremental pruning. We find that th...

  16. A parallel algorithm for the non-symmetric eigenvalue problem

    International Nuclear Information System (INIS)

    Sidani, M.M.

    1991-01-01

    An algorithm is presented for the solution of the non-symmetric eigenvalue problem. The algorithm is based on a divide-and-conquer procedure that provides initial approximations to the eigenpairs, which are then refined using Newton iterations. Since the smaller subproblems can be solved independently, and since Newton iterations with different initial guesses can be started simultaneously, the algorithm - unlike the standard QR method - is ideal for parallel computers. The author also reports on his investigation of deflation methods designed to obtain further eigenpairs if needed. Numerical results from implementations on a host of parallel machines (distributed and shared-memory) are presented

  17. Iterative and range test methods for an inverse source problem for acoustic waves

    International Nuclear Information System (INIS)

    Alves, Carlos; Kress, Rainer; Serranho, Pedro

    2009-01-01

We propose two methods for solving an inverse source problem for time-harmonic acoustic waves. Based on the reciprocity gap principle, a nonlinear equation is presented for the locations and intensities of the point sources, which can be solved via Newton iterations. To provide an initial guess for this iteration, we suggest a range test algorithm for approximating the source locations. We give a mathematical foundation for the range test and exhibit its feasibility in connection with the iteration method through some numerical examples.

  18. Clinical application and validation of an iterative forward projection matching algorithm for permanent brachytherapy seed localization from conebeam-CT x-ray projections

    Energy Technology Data Exchange (ETDEWEB)

    Pokhrel, Damodar; Murphy, Martin J.; Todor, Dorin A.; Weiss, Elisabeth; Williamson, Jeffrey F. [Department of Radiation Oncology, School of Medicine, Virginia Commonwealth University, Richmond, Virginia 23298 (United States)

    2010-09-15

Purpose: To experimentally validate a new algorithm for reconstructing the 3D positions of implanted brachytherapy seeds from postoperatively acquired 2D conebeam-CT (CBCT) projection images. Methods: The iterative forward projection matching (IFPM) algorithm finds the 3D seed geometry that minimizes the sum of the squared intensity differences between computed projections of an initial estimate of the seed configuration and radiographic projections of the implant. In-house machined phantoms, containing arrays of 12 and 72 seeds, respectively, are used to validate this method. Also, four 103Pd postimplant patients are scanned using an ACUITY digital simulator. Three to ten x-ray images are selected from the CBCT projection set and processed to create binary seed-only images. To quantify IFPM accuracy, the reconstructed seed positions are forward projected and overlaid on the measured seed images to find the nearest-neighbor distance between measured and computed seed positions for each image pair. Also, the estimated 3D seed coordinates are compared to known seed positions in the phantom and clinically obtained VariSeed planning coordinates for the patient data. Results: For the phantom study, seed localization error is (0.58 ± 0.33) mm. For all four patient cases, the mean registration error is better than 1 mm when compared against the measured seed projections. IFPM converges in 20-28 iterations, with a computation time of about 1.9-2.8 min/iteration on a 1 GHz processor. Conclusions: The IFPM algorithm avoids the need to match corresponding seeds in each projection as required by standard back-projection methods. The authors' results demonstrate ≈1 mm accuracy in reconstructing the 3D positions of brachytherapy seeds from the measured 2D projections. This algorithm also successfully localizes overlapping clustered and highly migrated seeds in the implant.

  19. Clinical application and validation of an iterative forward projection matching algorithm for permanent brachytherapy seed localization from conebeam-CT x-ray projections.

    Science.gov (United States)

    Pokhrel, Damodar; Murphy, Martin J; Todor, Dorin A; Weiss, Elisabeth; Williamson, Jeffrey F

    2010-09-01

    To experimentally validate a new algorithm for reconstructing the 3D positions of implanted brachytherapy seeds from postoperatively acquired 2D conebeam-CT (CBCT) projection images. The iterative forward projection matching (IFPM) algorithm finds the 3D seed geometry that minimizes the sum of the squared intensity differences between computed projections of an initial estimate of the seed configuration and radiographic projections of the implant. In-house machined phantoms, containing arrays of 12 and 72 seeds, respectively, are used to validate this method. Also, four 103Pd postimplant patients are scanned using an ACUITY digital simulator. Three to ten x-ray images are selected from the CBCT projection set and processed to create binary seed-only images. To quantify IFPM accuracy, the reconstructed seed positions are forward projected and overlaid on the measured seed images to find the nearest-neighbor distance between measured and computed seed positions for each image pair. Also, the estimated 3D seed coordinates are compared to known seed positions in the phantom and clinically obtained VariSeed planning coordinates for the patient data. For the phantom study, seed localization error is (0.58 ± 0.33) mm. For all four patient cases, the mean registration error is better than 1 mm when compared against the measured seed projections. IFPM converges in 20-28 iterations, with a computation time of about 1.9-2.8 min/iteration on a 1 GHz processor. The IFPM algorithm avoids the need to match corresponding seeds in each projection as required by standard back-projection methods. The authors' results demonstrate approximately 1 mm accuracy in reconstructing the 3D positions of brachytherapy seeds from the measured 2D projections. This algorithm also successfully localizes overlapping clustered and highly migrated seeds in the implant.
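
    To make the IFPM objective concrete, the sketch below (a schematic with a hypothetical projector interface, not the authors' CBCT forward projector) evaluates the sum of squared intensity differences between computed and measured seed projections; a general-purpose optimizer such as scipy.optimize.minimize could then drive it over the seed coordinates.

```python
import numpy as np

def ifpm_cost(seeds_flat, projectors, measured):
    """Sum of squared intensity differences between computed and measured
    seed projections. `projectors` is a list of callables mapping an
    (n_seeds, 3) array of 3D positions to a 2D image (hypothetical
    interface; the paper forward-projects through the CBCT geometry)."""
    seeds = seeds_flat.reshape(-1, 3)
    return sum(np.sum((proj(seeds) - img) ** 2)
               for proj, img in zip(projectors, measured))

def ortho_projector(seeds, shape=(64, 64)):
    """Placeholder 'projector' that simply drops the z-coordinate, only to
    show the call pattern expected by ifpm_cost."""
    img = np.zeros(shape)
    ys = np.clip(np.round(seeds[:, 1]).astype(int), 0, shape[0] - 1)
    xs = np.clip(np.round(seeds[:, 0]).astype(int), 0, shape[1] - 1)
    np.add.at(img, (ys, xs), 1.0)
    return img
```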

  20. Performance of a cavity-method-based algorithm for the prize-collecting Steiner tree problem on graphs

    Science.gov (United States)

    Biazzo, Indaco; Braunstein, Alfredo; Zecchina, Riccardo

    2012-08-01

    We study the behavior of an algorithm derived from the cavity method for the prize-collecting Steiner tree (PCST) problem on graphs. The algorithm is based on the zero temperature limit of the cavity equations and as such is formally simple (a fixed point equation resolved by iteration) and distributed (parallelizable). We provide a detailed comparison with state-of-the-art algorithms on a wide range of existing benchmarks, networks, and random graphs. Specifically, we consider an enhanced derivative of the Goemans-Williamson heuristics and the dhea solver, a branch-and-cut integer linear programming based approach. The comparison shows that the cavity algorithm outperforms the two algorithms in most large instances both in running time and quality of the solution. Finally, we prove a few optimality properties of the solutions provided by our algorithm, including optimality under the two postprocessing procedures defined in the Goemans-Williamson derivative and global optimality in some limit cases.

  1. Fast parallel MR image reconstruction via B1-based, adaptive restart, iterative soft thresholding algorithms (BARISTA).

    Science.gov (United States)

    Muckley, Matthew J; Noll, Douglas C; Fessler, Jeffrey A

    2015-02-01

    Sparsity-promoting regularization is useful for combining compressed sensing assumptions with parallel MRI for reducing scan time while preserving image quality. Variable splitting algorithms are the current state-of-the-art algorithms for SENSE-type MR image reconstruction with sparsity-promoting regularization. These methods are very general and have been observed to work with almost any regularizer; however, the tuning of associated convergence parameters is a commonly-cited hindrance in their adoption. Conversely, majorize-minimize algorithms based on a single Lipschitz constant have been observed to be slow in shift-variant applications such as SENSE-type MR image reconstruction since the associated Lipschitz constants are loose bounds for the shift-variant behavior. This paper bridges the gap between the Lipschitz constant and the shift-variant aspects of SENSE-type MR imaging by introducing majorizing matrices in the range of the regularizer matrix. The proposed majorize-minimize methods (called BARISTA) converge faster than state-of-the-art variable splitting algorithms when combined with momentum acceleration and adaptive momentum restarting. Furthermore, the tuning parameters associated with the proposed methods are unitless convergence tolerances that are easier to choose than the constraint penalty parameters required by variable splitting algorithms.
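
    The momentum-plus-adaptive-restart idea can be illustrated with a generic proximal-gradient (FISTA-style) loop for l1-regularized least squares; this toy deliberately uses a single Lipschitz constant, which is exactly what BARISTA improves upon with matrix-valued majorizers, so it is a sketch of the acceleration mechanism rather than the authors' method.

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def fista_restart(A, b, lam, n_iter=200):
    """Proximal gradient with momentum and adaptive (function-value) restart
    for min_x 0.5||Ax - b||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = z = np.zeros(A.shape[1])
    t, f_prev = 1.0, np.inf
    for _ in range(n_iter):
        x_new = soft_threshold(z - A.T @ (A @ z - b) / L, lam / L)
        f = 0.5 * np.sum((A @ x_new - b) ** 2) + lam * np.sum(np.abs(x_new))
        if f > f_prev:                     # adaptive restart: drop momentum
            t, z = 1.0, x
            continue
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        z = x_new + ((t - 1.0) / t_new) * (x_new - x)
        x, t, f_prev = x_new, t_new, f
    return x
```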

  2. Mahalanobis Distance Based Iterative Closest Point

    DEFF Research Database (Denmark)

    Hansen, Mads Fogtmann; Blas, Morten Rufus; Larsen, Rasmus

    2007-01-01

    the notion of a Mahalanobis distance map upon a point set with associated covariance matrices which in addition to providing correlation weighted distance implicitly provides a method for assigning correspondence during alignment. This distance map provides an easy formulation of the ICP problem that permits...... a fast optimization. Initially, the covariance matrices are set to the identity matrix, and all shapes are aligned to a randomly selected shape (equivalent to standard ICP). From this point the algorithm iterates between the steps: (a) obtain mean shape and new estimates of the covariance matrices from...... the aligned shapes, (b) align shapes to the mean shape. Three different methods for estimating the mean shape with associated covariance matrices are explored in the paper. The proposed methods are validated experimentally on two separate datasets (IMM face dataset and femur-bones). The superiority of ICP...

  3. A Novel Parallel Algorithm for Edit Distance Computation

    Directory of Open Access Journals (Sweden)

    Muhammad Murtaza Yousaf

    2018-01-01

    Full Text Available The edit distance between two sequences is the minimum number of weighted transformation operations that are required to transform one string into the other. The weighted transformation operations are insert, remove, and substitute. A dynamic programming solution to find the edit distance exists, but it becomes computationally intensive when the lengths of the strings become very large. This work presents a novel parallel algorithm to solve the edit distance problem of string matching. The algorithm is based on resolving dependencies in the dynamic programming solution of the problem, and it is able to compute each row of the edit distance table in parallel. In this way, it becomes possible to compute the complete table in min(m,n) iterations for strings of size m and n, whereas the state-of-the-art parallel algorithm solves the problem in max(m,n) iterations. The proposed algorithm also increases the amount of parallelism in each of its iterations. The algorithm is also capable of exploiting spatial locality during its implementation. Additionally, the algorithm works in a load-balanced way that further improves its performance. The algorithm is implemented for multicore systems having shared memory. Implementation of the algorithm in OpenMP shows linear speedup and better execution time compared to the state-of-the-art parallel approach. The efficiency of the algorithm is also proven better in comparison to its competitor.
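
    For reference, the recurrence the paper parallelizes is the classic row-by-row edit distance computation; the sequential baseline below makes the dependency structure explicit, with the paper's contribution being the removal of the intra-row (left-to-right) dependency so each row can be filled in parallel.

```python
def edit_distance(s, t):
    """Classic dynamic-programming edit distance (unit weights).
    prev/curr hold consecutive rows of the DP table; curr[j] depends on
    prev[j], prev[j-1] and curr[j-1]."""
    m, n = len(s), len(t)
    prev = list(range(n + 1))
    for i in range(1, m + 1):
        curr = [i] + [0] * n
        for j in range(1, n + 1):
            sub = prev[j - 1] + (s[i - 1] != t[j - 1])
            curr[j] = min(prev[j] + 1, curr[j - 1] + 1, sub)
        prev = curr
    return prev[n]

assert edit_distance("kitten", "sitting") == 3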

  4. A quadratic approximation-based algorithm for the solution of multiparametric mixed-integer nonlinear programming problems

    KAUST Repository

    Domí nguez, Luis F.; Pistikopoulos, Efstratios N.

    2012-01-01

    An algorithm for the solution of convex multiparametric mixed-integer nonlinear programming problems arising in process engineering problems under uncertainty is introduced. The proposed algorithm iterates between a multiparametric nonlinear

  5. Approximate convex hull of affine iterated function system attractors

    International Nuclear Information System (INIS)

    Mishkinis, Anton; Gentil, Christian; Lanquetin, Sandrine; Sokolov, Dmitry

    2012-01-01

    Highlights: ► We present an iterative algorithm to approximate affine IFS attractor convex hull. ► Elimination of the interior points significantly reduces the complexity. ► To optimize calculations, we merge the convex hull images at each iteration. ► Approximation by ellipses increases speed of convergence to the exact convex hull. ► We present a method of the output convex hull simplification. - Abstract: In this paper, we present an algorithm to construct an approximate convex hull of the attractors of an affine iterated function system (IFS). We construct a sequence of convex hull approximations for any required precision using the self-similarity property of the attractor in order to optimize calculations. Due to the affine properties of IFS transformations, the number of points considered in the construction is reduced. The time complexity of our algorithm is a linear function of the number of iterations and the number of points in the output approximate convex hull. The number of iterations and the execution time increase logarithmically with increasing accuracy. In addition, we introduce a method to simplify the approximate convex hull without loss of accuracy.
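
    A brute-force counterpart to the record's approach (not the authors' algorithm) is to sample the attractor with the chaos game and take the hull of the samples, assuming SciPy is available; the paper's method instead exploits self-similarity to avoid this kind of dense sampling.

```python
import numpy as np
from scipy.spatial import ConvexHull

def chaos_game_hull(maps, n_pts=20000, seed=0):
    """Approximate the convex hull of an affine IFS attractor by sampling
    points with the chaos game; `maps` is a list of (A, b) pairs for the
    affine contractions x -> A x + b."""
    rng = np.random.default_rng(seed)
    x = np.zeros(2)
    pts = np.empty((n_pts, 2))
    for k in range(n_pts):
        A, b = maps[rng.integers(len(maps))]
        x = A @ x + b
        pts[k] = x
    return ConvexHull(pts[100:])           # discard burn-in iterates

# Sierpinski triangle IFS: three half-scale contractions.
half = np.eye(2) * 0.5
maps = [(half, np.zeros(2)), (half, np.array([0.5, 0.0])),
        (half, np.array([0.25, 0.5]))]
hull = chaos_game_hull(maps)
```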

  6. Knowledge-based iterative model reconstruction: comparative image quality and radiation dose with a pediatric computed tomography phantom

    International Nuclear Information System (INIS)

    Ryu, Young Jin; Choi, Young Hun; Cheon, Jung-Eun; Kim, Woo Sun; Kim, In-One; Ha, Seongmin

    2016-01-01

    CT of pediatric phantoms can provide useful guidance to the optimization of knowledge-based iterative reconstruction CT. To compare radiation dose and image quality of CT images obtained at different radiation doses reconstructed with knowledge-based iterative reconstruction, hybrid iterative reconstruction and filtered back-projection. We scanned a 5-year anthropomorphic phantom at seven levels of radiation. We then reconstructed CT data with knowledge-based iterative reconstruction (iterative model reconstruction [IMR] levels 1, 2 and 3; Philips Healthcare, Andover, MA), hybrid iterative reconstruction (iDose 4, levels 3 and 7; Philips Healthcare, Andover, MA) and filtered back-projection. The noise, signal-to-noise ratio and contrast-to-noise ratio were calculated. We evaluated low-contrast resolutions and detectability by low-contrast targets and subjective and objective spatial resolutions by the line pairs and wire. With radiation at 100 peak kVp and 100 mAs (3.64 mSv), the relative doses ranged from 5% (0.19 mSv) to 150% (5.46 mSv). Lower noise and higher signal-to-noise, contrast-to-noise and objective spatial resolution were generally achieved in ascending order of filtered back-projection, iDose 4 levels 3 and 7, and IMR levels 1, 2 and 3, at all radiation dose levels. Compared with filtered back-projection at 100% dose, similar noise levels were obtained on IMR level 2 images at 24% dose and iDose 4 level 3 images at 50% dose, respectively. Regarding low-contrast resolution, low-contrast detectability and objective spatial resolution, IMR level 2 images at 24% dose showed comparable image quality with filtered back-projection at 100% dose. Subjective spatial resolution was not greatly affected by reconstruction algorithm. Reduced-dose IMR obtained at 0.92 mSv (24%) showed similar image quality to routine-dose filtered back-projection obtained at 3.64 mSv (100%), and half-dose iDose 4 obtained at 1.81 mSv. (orig.)

  7. Knowledge-based iterative model reconstruction: comparative image quality and radiation dose with a pediatric computed tomography phantom.

    Science.gov (United States)

    Ryu, Young Jin; Choi, Young Hun; Cheon, Jung-Eun; Ha, Seongmin; Kim, Woo Sun; Kim, In-One

    2016-03-01

    CT of pediatric phantoms can provide useful guidance to the optimization of knowledge-based iterative reconstruction CT. To compare radiation dose and image quality of CT images obtained at different radiation doses reconstructed with knowledge-based iterative reconstruction, hybrid iterative reconstruction and filtered back-projection. We scanned a 5-year anthropomorphic phantom at seven levels of radiation. We then reconstructed CT data with knowledge-based iterative reconstruction (iterative model reconstruction [IMR] levels 1, 2 and 3; Philips Healthcare, Andover, MA), hybrid iterative reconstruction (iDose(4), levels 3 and 7; Philips Healthcare, Andover, MA) and filtered back-projection. The noise, signal-to-noise ratio and contrast-to-noise ratio were calculated. We evaluated low-contrast resolutions and detectability by low-contrast targets and subjective and objective spatial resolutions by the line pairs and wire. With radiation at 100 peak kVp and 100 mAs (3.64 mSv), the relative doses ranged from 5% (0.19 mSv) to 150% (5.46 mSv). Lower noise and higher signal-to-noise, contrast-to-noise and objective spatial resolution were generally achieved in ascending order of filtered back-projection, iDose(4) levels 3 and 7, and IMR levels 1, 2 and 3, at all radiation dose levels. Compared with filtered back-projection at 100% dose, similar noise levels were obtained on IMR level 2 images at 24% dose and iDose(4) level 3 images at 50% dose, respectively. Regarding low-contrast resolution, low-contrast detectability and objective spatial resolution, IMR level 2 images at 24% dose showed comparable image quality with filtered back-projection at 100% dose. Subjective spatial resolution was not greatly affected by reconstruction algorithm. Reduced-dose IMR obtained at 0.92 mSv (24%) showed similar image quality to routine-dose filtered back-projection obtained at 3.64 mSv (100%), and half-dose iDose(4) obtained at 1.81 mSv.

  8. Joint Calibration of 3D Laser Scanner and Digital Camera Based on DLT Algorithm

    Science.gov (United States)

    Gao, X.; Li, M.; Xing, L.; Liu, Y.

    2018-04-01

    We design a calibration target that can be scanned by a 3D laser scanner while being photographed by a digital camera, yielding a point cloud and photos of the same target. A method to jointly calibrate the 3D laser scanner and digital camera based on the Direct Linear Transformation (DLT) algorithm is proposed. This method adds a digital camera distortion model to the traditional DLT algorithm; after repeated iteration, it can solve the interior and exterior orientation elements of the camera as well as the joint calibration of the 3D laser scanner and digital camera. Experiments prove that this method is reliable.
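
    For orientation, the core of the classical DLT step (without the distortion model this record adds) estimates the 3x4 projection matrix from 3D-2D correspondences as the null vector of a stacked linear system, a minimal sketch of which follows.

```python
import numpy as np

def dlt_projection_matrix(X, u):
    """Estimate the 3x4 camera projection matrix P from n >= 6
    correspondences between 3D points X (n x 3) and pixels u (n x 2).
    Each correspondence contributes two homogeneous DLT equations; P is
    the right singular vector of the smallest singular value."""
    rows = []
    for (x, y, z), (px, py) in zip(X, u):
        Xh = [x, y, z, 1.0]
        rows.append([*Xh, 0.0, 0.0, 0.0, 0.0, *(-px * np.array(Xh))])
        rows.append([0.0, 0.0, 0.0, 0.0, *Xh, *(-py * np.array(Xh))])
    _, _, Vt = np.linalg.svd(np.asarray(rows, dtype=float))
    return Vt[-1].reshape(3, 4)
```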

  9. Iterative Neighbour-Information Gathering for Ranking Nodes in Complex Networks

    Science.gov (United States)

    Xu, Shuang; Wang, Pei; Lü, Jinhu

    2017-01-01

    Designing node influence ranking algorithms can provide insights into network dynamics, functions and structures. Increasing evidence reveals that a node's spreading ability largely depends on its neighbours. We introduce an iterative neighbour-information gathering (Ing) process with three parameters, including a transformation matrix, a priori information and an iteration time. The Ing process iteratively combines priori information from neighbours via the transformation matrix, and iteratively assigns an Ing score to each node to evaluate its influence. The algorithm is appropriate for any type of network, and includes some traditional centralities as special cases, such as degree, semi-local, and LeaderRank. The Ing process converges in strongly connected networks with speed relying on the first two largest eigenvalues of the transformation matrix. Interestingly, the eigenvector centrality corresponds to a limit case of the algorithm. By comparing with eight renowned centralities, simulations of the susceptible-infected-removed (SIR) model on real-world networks reveal that the Ing can offer more exact rankings, even without a priori information. We also observe that an optimal iteration time always exists to best characterize node influence. The proposed algorithms bridge the gaps among some existing measures, and may have potential applications in infectious disease control and the design of optimal information spreading strategies.
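
    A schematic reading of the Ing process (with the transformation matrix, prior and iteration count all chosen arbitrarily here, since the paper treats them as free parameters) is a repeated neighbour-score propagation:

```python
import numpy as np

def ing_scores(A, prior=None, n_iter=3):
    """Iterative neighbour-information gathering, schematically: s <- W s
    at each step, here with W taken to be the adjacency matrix A itself
    (the paper parameterizes W, the prior and the iteration time)."""
    s = np.ones(A.shape[0]) if prior is None else prior.astype(float)
    for _ in range(n_iter):
        s = A @ s                  # each node gathers its neighbours' scores
        s /= s.sum()               # normalization keeps the scores bounded
    return s
```

    With the adjacency matrix as W and a uniform prior, a single iteration reproduces degree centrality, while repeated iteration approaches eigenvector centrality, consistent with the special and limit cases noted in the abstract.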

  10. Photonic circuits for iterative decoding of a class of low-density parity-check codes

    International Nuclear Information System (INIS)

    Pavlichin, Dmitri S; Mabuchi, Hideo

    2014-01-01

    Photonic circuits in which stateful components are coupled via guided electromagnetic fields are natural candidates for resource-efficient implementation of iterative stochastic algorithms based on propagation of information around a graph. Conversely, such message-passing algorithms suggest novel circuit architectures for signal processing and computation that are well matched to nanophotonic device physics. Here, we construct and analyze a quantum optical model of a photonic circuit for iterative decoding of a class of low-density parity-check (LDPC) codes called expander codes. Our circuit can be understood as an open quantum system whose autonomous dynamics map straightforwardly onto the subroutines of an LDPC decoding scheme, with several attractive features: it can operate in the ultra-low power regime of photonics in which quantum fluctuations become significant, it is robust to noise and component imperfections, it achieves comparable performance to known iterative algorithms for this class of codes, and it provides an instructive example of how nanophotonic cavity quantum electrodynamic components can enable useful new information technology even if the solid-state qubits on which they are based are heavily dephased and cannot support large-scale entanglement. (paper)

  11. Iterative Reconstruction Methods for Hybrid Inverse Problems in Impedance Tomography

    DEFF Research Database (Denmark)

    Hoffmann, Kristoffer; Knudsen, Kim

    2014-01-01

    For a general formulation of hybrid inverse problems in impedance tomography the Picard and Newton iterative schemes are adapted and four iterative reconstruction algorithms are developed. The general problem formulation includes several existing hybrid imaging modalities such as current density...... impedance imaging, magnetic resonance electrical impedance tomography, and ultrasound modulated electrical impedance tomography, and the unified approach to the reconstruction problem encompasses several algorithms suggested in the literature. The four proposed algorithms are implemented numerically in two...

  12. Performance evaluation of an algorithm for fast optimization of beam weights in anatomy-based intensity modulated radiotherapy

    International Nuclear Information System (INIS)

    Ranganathan, Vaitheeswaran; Sathiya Narayanan, V.K.; Bhangle, Janhavi R.; Gupta, Kamlesh K.; Basu, Sumit; Maiya, Vikram; Joseph, Jolly; Nirhali, Amit

    2010-01-01

    This study aims to evaluate the performance of a new algorithm for optimization of beam weights in anatomy-based intensity modulated radiotherapy (IMRT). The algorithm uses a numerical technique called Gaussian elimination that derives the optimum beam weights in an exact or non-iterative way. The distinct feature of the algorithm is that it takes only a fraction of a second to optimize the beam weights, irrespective of the complexity of the given case. The algorithm has been implemented using MATLAB with a Graphical User Interface (GUI) option for convenient specification of dose constraints and penalties to different structures. We have tested the numerical and clinical capabilities of the proposed algorithm in several patient cases in comparison with the KonRad inverse planning system. The comparative analysis shows that the algorithm can generate anatomy-based IMRT plans with about 50% reduction in the number of MUs and 60% reduction in the number of apertures, while producing dose distributions comparable to those of beamlet-based IMRT plans. Hence, it is clearly evident from the study that the proposed algorithm can be effectively used for clinical applications. (author)
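
    The exact, non-iterative character of the approach can be illustrated with a hypothetical toy problem: if a dose matrix D maps beam weights to doses at constraint points, the weights follow from one direct solve (NumPy's solver performs the Gaussian elimination internally). The matrix and prescription below are invented for illustration only.

```python
import numpy as np

# Hypothetical 3-beam, 3-point example: D[i, j] is the dose delivered to
# point i by unit weight of beam j; prescribing doses d gives D w = d,
# solvable exactly in one step, with no iterations.
D = np.array([[1.0, 0.2, 0.1],
              [0.3, 1.0, 0.2],
              [0.1, 0.4, 1.0]])
d = np.array([60.0, 60.0, 40.0])   # prescribed point doses (Gy)
w = np.linalg.solve(D, d)          # LU factorization, i.e. Gaussian elimination
```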

  13. A Dynamic Neighborhood Learning-Based Gravitational Search Algorithm.

    Science.gov (United States)

    Zhang, Aizhu; Sun, Genyun; Ren, Jinchang; Li, Xiaodong; Wang, Zhenjie; Jia, Xiuping

    2018-01-01

    Balancing exploration and exploitation according to evolutionary states is crucial to meta-heuristic search (M-HS) algorithms. Owing to its simplicity in theory and effectiveness in global optimization, the gravitational search algorithm (GSA) has attracted increasing attention in recent years. However, the tradeoff between exploration and exploitation in GSA is achieved mainly by adjusting the size of an archive, named Kbest, which stores those superior agents after fitness sorting in each iteration. Since the global property of Kbest remains unchanged in the whole evolutionary process, GSA emphasizes exploitation over exploration and suffers from rapid loss of diversity and premature convergence. To address these problems, in this paper, we propose a dynamic neighborhood learning (DNL) strategy to replace the Kbest model and thereby present a DNL-based GSA (DNLGSA). The method incorporates the local and global neighborhood topologies for enhancing the exploration and obtaining an adaptive balance between exploration and exploitation. The local neighborhoods are dynamically formed based on evolutionary states. To delineate the evolutionary states, two convergence criteria, named limit value and population diversity, are introduced. Moreover, a mutation operator is designed for escaping from local optima on the basis of evolutionary states. The proposed algorithm was evaluated on 27 benchmark problems with different characteristics and various difficulties. The results reveal that DNLGSA exhibits competitive performance when compared with a variety of state-of-the-art M-HS algorithms. Moreover, the incorporation of the local neighborhood topology reduces the number of calculations of gravitational force and thus alleviates the high computational cost of GSA.

  14. The optimal algorithm for Multi-source RS image fusion.

    Science.gov (United States)

    Fu, Wei; Huang, Shui-Guang; Li, Zeng-Shun; Shen, Hao; Li, Jun-Shuai; Wang, Peng-Yuan

    2016-01-01

    In order to solve the issue that the fusion rules of available fusion methods cannot be self-adaptively adjusted according to the subsequent processing requirements of Remote Sensing (RS) images, this paper puts forward GSDA (genetic-iterative self-organizing data analysis algorithm), which integrates the merits of genetic algorithms with the advantages of the iterative self-organizing data analysis algorithm for multi-source RS image fusion. The proposed algorithm takes the translation-invariant wavelet transform as the model operator and the contrast pyramid conversion as the observation operator. The algorithm then designs the objective function as the weighted sum of evaluation indices, and optimizes the objective function by employing GSDA so as to obtain a higher-resolution RS image. The bullet points of the text are summarized as follows.•The contribution proposes the iterative self-organizing data analysis algorithm for multi-source RS image fusion.•This article presents the GSDA algorithm for the self-adaptive adjustment of the fusion rules.•This text proposes the model operator and the observation operator as the fusion scheme of RS images based on GSDA. The proposed algorithm opens up a novel algorithmic pathway for multi-source RS image fusion by means of GSDA.

  15. Volumetric quantification of lung nodules in CT with iterative reconstruction (ASiR and MBIR).

    Science.gov (United States)

    Chen, Baiyu; Barnhart, Huiman; Richard, Samuel; Robins, Marthony; Colsher, James; Samei, Ehsan

    2013-11-01

    Volume quantification of lung nodules with multidetector computed tomography (CT) images provides useful information for monitoring nodule development. The accuracy and precision of the volume quantification, however, can be impacted by imaging and reconstruction parameters. This study aimed to investigate the impact of iterative reconstruction algorithms on the accuracy and precision of volume quantification with dose and slice thickness as additional variables. Repeated CT images were acquired from an anthropomorphic chest phantom with synthetic nodules (9.5 and 4.8 mm) at six dose levels, and reconstructed with three reconstruction algorithms [filtered backprojection (FBP), adaptive statistical iterative reconstruction (ASiR), and model-based iterative reconstruction (MBIR)] into three slice thicknesses. The nodule volumes were measured with two clinical software packages (A: Lung VCAR, B: iNtuition), and analyzed for accuracy and precision. Precision was found to be generally comparable between FBP and iterative reconstruction, with no statistically significant difference noted for different dose levels, slice thicknesses, and segmentation software. Accuracy was found to be more variable. For large nodules, the accuracy was significantly different between ASiR and FBP for all slice thicknesses with both software packages, and significantly different between MBIR and FBP for 0.625 mm slice thickness with Software A and for all slice thicknesses with Software B. For small nodules, the accuracy was more similar between FBP and iterative reconstruction, with the exception of ASiR vs FBP at 1.25 mm with Software A and MBIR vs FBP at 0.625 mm with Software A. The systematic difference between the accuracy of FBP and iterative reconstructions highlights the importance of extending current segmentation software to accommodate the image characteristics of iterative reconstructions. In addition, a calibration process may help reduce the dependency of accuracy on reconstruction algorithms.

  16. MO-DE-207A-07: Filtered Iterative Reconstruction (FIR) Via Proximal Forward-Backward Splitting: A Synergy of Analytical and Iterative Reconstruction Method for CT

    International Nuclear Information System (INIS)

    Gao, H

    2016-01-01

    Purpose: This work is to develop a general framework, namely the filtered iterative reconstruction (FIR) method, to incorporate analytical reconstruction (AR) methods into iterative reconstruction (IR) methods, for enhanced CT image quality. Methods: FIR is formulated as a combination of filtered data fidelity and sparsity regularization, and is then solved by the proximal forward-backward splitting (PFBS) algorithm. As a result, the image reconstruction decouples data fidelity and image regularization with a two-step iterative scheme, during which an AR-projection step updates the filtered data fidelity term, while a denoising solver updates the sparsity regularization term. During the AR-projection step, the image is projected to the data domain to form the data residual, and then reconstructed by a certain AR to a residual image which is in turn weighted together with the previous image iterate to form the next image iterate. Since the eigenvalues of the AR-projection operator are close to unity, PFBS-based FIR has fast convergence. Results: The proposed FIR method is validated in the setting of circular cone-beam CT with AR being FDK and total-variation sparsity regularization, and has improved image quality over both AR and IR. For example, FIR has improved visual assessment and quantitative measurement in terms of both contrast and resolution, and reduced axial and half-fan artifacts. Conclusion: FIR is proposed to incorporate AR into IR, with an efficient image reconstruction algorithm based on PFBS. The CBCT results suggest that FIR synergizes AR and IR with improved image quality and reduced axial and half-fan artifacts. The author was partially supported by the NSFC (#11405105), the 973 Program (#2015CB856000), and the Shanghai Pujiang Talent Program (#14PJ1404500).
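
    The two-step PFBS structure described above can be sketched generically, with the AR-projection abstracted to a callable and a soft-threshold standing in for the paper's total-variation denoising step; the operators and step sizes below are placeholders, not the FDK-based implementation.

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def pfbs(x0, forward, adjoint_ar, data, step, reg, n_iter=50):
    """Proximal forward-backward splitting: a gradient-type step on the
    (filtered) data fidelity followed by a denoising prox step. `adjoint_ar`
    stands in for the AR-projection operator of the paper; the prox here is
    plain soft-thresholding rather than a TV solver."""
    x = x0.copy()
    for _ in range(n_iter):
        residual = forward(x) - data          # project image to data domain
        x = x - step * adjoint_ar(residual)   # AR step on the data residual
        x = soft_threshold(x, step * reg)     # denoising/regularization step
    return x
```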

  17. An iterative bidirectional heuristic placement algorithm for solving the two-dimensional knapsack packing problem

    Science.gov (United States)

    Shiangjen, Kanokwatt; Chaijaruwanich, Jeerayut; Srisujjalertwaja, Wijak; Unachak, Prakarn; Somhom, Samerkae

    2018-02-01

    This article presents an efficient heuristic placement algorithm, namely, a bidirectional heuristic placement, for solving the two-dimensional rectangular knapsack packing problem. The heuristic demonstrates ways to maximize space utilization by fitting the appropriate rectangle from both sides of the wall of the current residual space layer by layer. The iterative local search along with a shift strategy is developed and applied to the heuristic to balance the exploitation and exploration tasks in the solution space without the tuning of any parameters. The experimental results on many scales of packing problems show that this approach can produce high-quality solutions for most of the benchmark datasets, especially for large-scale problems, within a reasonable duration of computational time.

  18. Application of the perturbation iteration method to boundary layer type problems.

    Science.gov (United States)

    Pakdemirli, Mehmet

    2016-01-01

    The recently developed perturbation iteration method is applied to boundary layer type singular problems for the first time. As a preliminary work on the topic, the simplest algorithm, PIA(1,1), is employed in the calculations. Linear and nonlinear problems are solved to outline the basic ideas of the new solution technique. The inner and outer solutions are determined with the iteration algorithm and matched to construct a composite expansion valid within all parts of the domain. The solutions are contrasted with the available exact or numerical solutions. It is shown that the perturbation iteration algorithm can be effectively used for solving boundary layer type problems.

  19. Three dimensional iterative beam propagation method for optical waveguide devices

    Science.gov (United States)

    Ma, Changbao; Van Keuren, Edward

    2006-10-01

    The finite difference beam propagation method (FD-BPM) is an effective model for simulating a wide range of optical waveguide structures. The classical FD-BPMs are based on the Crank-Nicolson scheme, and in tridiagonal form can be solved using the Thomas method. We present a different type of algorithm for 3-D structures. In this algorithm, the wave equation is formulated into a large sparse matrix equation which can be solved using iterative methods. The simulation window shifting scheme and threshold technique introduced in our earlier work are utilized to overcome the convergence problem of iterative methods for large sparse matrix equations and wide-angle simulations. This method enables us to develop higher-order 3-D wide-angle (WA-) BPMs based on Padé approximant operators and the multistep method, which are commonly used in WA-BPMs for 2-D structures. Simulations using the new methods are compared to analytical results to assure their effectiveness and applicability.
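
    As background for the tridiagonal remark, a minimal Thomas-method solver (the O(n) forward-elimination and back-substitution used by classical Crank-Nicolson FD-BPMs) might look as follows.

```python
import numpy as np

def thomas(a, b, c, d):
    """Solve a tridiagonal system with sub-, main- and super-diagonals
    a (length n-1), b (length n), c (length n-1) and right-hand side d."""
    n = len(b)
    cp, dp = np.empty(n - 1), np.empty(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                      # forward elimination
        denom = b[i] - a[i - 1] * cp[i - 1]
        if i < n - 1:
            cp[i] = c[i] / denom
        dp[i] = (d[i] - a[i - 1] * dp[i - 1]) / denom
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):             # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Toy check: [[2, 1], [1, 2]] x = [3, 3] has solution x = [1, 1].
assert np.allclose(thomas([1.0], [2.0, 2.0], [1.0], [3.0, 3.0]), [1.0, 1.0])
```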

  20. An Improved Phase Gradient Autofocus Algorithm Used in Real-time Processing

    Directory of Open Access Journals (Sweden)

    Qing Ji-ming

    2015-10-01

    Full Text Available The Phase Gradient Autofocus (PGA) algorithm can remove high-order phase error effectively, which is of great significance for obtaining high-resolution images in real-time processing. However, PGA usually needs iteration, which requires long processing times. In addition, the performance of the algorithm is not stable across different scene applications. This severely constrains the application of PGA in real-time processing. Isolated scatterer selection and windowing are two important algorithmic steps of the Phase Gradient Autofocus algorithm. Therefore, this paper presents an isolated scatterer selection method based on the sample mean and a windowing method based on the pulse envelope. These two methods are highly adaptable to data, which makes the algorithm more stable and reduces the number of iterations needed. The adaptability of the improved PGA is demonstrated with experimental results on real radar data.

  1. Incoherent beam combining based on the momentum SPGD algorithm

    Science.gov (United States)

    Yang, Guoqing; Liu, Lisheng; Jiang, Zhenhua; Guo, Jin; Wang, Tingfeng

    2018-05-01

    Incoherent beam combining (ICBC) technology is one of the most promising ways to achieve high-energy, near-diffraction-limited laser output. In this paper, the momentum method is proposed as a modification of the stochastic parallel gradient descent (SPGD) algorithm. The momentum method can efficiently improve the convergence speed of the combining system. An analytical method is employed to interpret the principle of the momentum method. Furthermore, the proposed algorithm is verified through simulations as well as experiments. The results of the simulations and the experiments show that the proposed algorithm not only accelerates the iteration, but also maintains the stability of the combining process. The feasibility of the proposed algorithm in the beam combining system is thereby verified.
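
    The proposed modification can be sketched as an SPGD loop with a momentum term; the gain, momentum factor and perturbation size below are arbitrary toy values, not the paper's settings, and the quadratic metric is a stand-in for the measured combining metric.

```python
import numpy as np

def momentum_spgd(J, u0, gain=0.5, beta=0.7, delta=1e-2, n_iter=500, seed=0):
    """Stochastic parallel gradient descent with a momentum term. Each
    iteration applies a random perturbation du, measures the metric change
    dJ, and steps along dJ*du; momentum reuses the previous step."""
    rng = np.random.default_rng(seed)
    u, v = u0.astype(float), np.zeros_like(u0, dtype=float)
    for _ in range(n_iter):
        du = delta * rng.choice([-1.0, 1.0], size=u.shape)
        dJ = J(u + du) - J(u - du)         # two-sided metric perturbation
        v = beta * v + gain * dJ * du      # momentum-accelerated SPGD step
        u = u + v
    return u

# Toy metric to maximize: J(u) = -||u - 1||^2, optimum at u = 1.
u_opt = momentum_spgd(lambda u: -np.sum((u - 1.0) ** 2), np.zeros(4))
```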

  2. A physics-based algorithm for real-time simulation of electrosurgery procedures in minimally invasive surgery.

    Science.gov (United States)

    Lu, Zhonghua; Arikatla, Venkata S; Han, Zhongqing; Allen, Brian F; De, Suvranu

    2014-12-01

    High-frequency electricity is used in the majority of surgical interventions. However, modern computer-based training and simulation systems rely on physically unrealistic models that fail to capture the interplay of the electrical, mechanical and thermal properties of biological tissue. We present a real-time and physically realistic simulation of electrosurgery by modelling the electrical, thermal and mechanical properties as three iteratively solved finite element models. To provide subfinite-element graphical rendering of vaporized tissue, a dual-mesh dynamic triangulation algorithm based on isotherms is proposed. The block compressed row storage (BCRS) structure is shown to be critical in allowing computationally efficient changes in the tissue topology due to vaporization. We have demonstrated our physics-based electrosurgery cutting algorithm through various examples. Our matrix manipulation algorithms designed for topology changes have shown low computational cost. Our simulator offers substantially greater physical fidelity compared to previous simulators that use simple geometry-based heat characterization. Copyright © 2013 John Wiley & Sons, Ltd.

  3. Iterative approach as alternative to S-matrix in modal methods

    Science.gov (United States)

    Semenikhin, Igor; Zanuccoli, Mauro

    2014-12-01

    The continuously increasing complexity of opto-electronic devices and the rising demands on simulation accuracy lead to the need to solve very large systems of linear equations, making iterative methods promising and attractive from the computational point of view with respect to direct methods. In particular, the iterative approach potentially enables the reduction of the computational time required to solve Maxwell's equations by Eigenmode Expansion algorithms. Regardless of the particular eigenmode-finding method used, the expansion coefficients are computed as a rule by the scattering matrix (S-matrix) approach or similar techniques requiring on the order of M^3 operations. In this work we consider alternatives to the S-matrix technique which are based on pure iterative or mixed direct-iterative approaches. The possibility to diminish the impact of M^3-order calculations on the overall time, and in some cases even to reduce the number of arithmetic operations to M^2 by applying iterative techniques, is discussed. Numerical results are illustrated to discuss the validity and potentiality of the proposed approaches.

  4. Low Complexity V-BLAST MIMO-OFDM Detector by Successive Iterations Reduction

    Directory of Open Access Journals (Sweden)

    AHMED, K.

    2015-02-01

    Full Text Available The V-BLAST detection method suffers from large computational complexity due to its successive detection of symbols. In this paper, we propose a modified V-BLAST algorithm that decreases the computational complexity by reducing the number of detection iterations required in MIMO communication systems. We begin by showing the existence of a maximum number of iterations, beyond which no significant improvement is obtained. We establish a criterion for the maximum number of effective iterations. We propose a modified algorithm that uses the measured SNR to dynamically set the number of iterations to achieve an acceptable bit-error rate. Then, we replace the feedback algorithm with an approximate linear function to reduce the complexity. Simulations show that a significant reduction in computational complexity is achieved compared to the ordinary V-BLAST, while maintaining good BER performance.

  5. A New Multi-Step Iterative Algorithm for Approximating Common Fixed Points of a Finite Family of Multi-Valued Bregman Relatively Nonexpansive Mappings

    Directory of Open Access Journals (Sweden)

    Wiyada Kumam

    2016-05-01

    Full Text Available In this article, we introduce a new multi-step iteration for approximating a common fixed point of a finite class of multi-valued Bregman relatively nonexpansive mappings in the setting of reflexive Banach spaces. We prove a strong convergence theorem for the proposed iterative algorithm under certain hypotheses. Additionally, we also use our results for the solution of variational inequality problems and to find the zero points of maximal monotone operators. The theorems furnished in this work are new and well-established and generalize many well-known recent research works in this field.

  6. An Iterative Load Disaggregation Approach Based on Appliance Consumption Pattern

    Directory of Open Access Journals (Sweden)

    Huijuan Wang

    2018-04-01

    Full Text Available Non-intrusive load monitoring (NILM), monitoring single-appliance consumption levels by decomposing the aggregated energy consumption, is a novel and economical technology that is beneficial to energy utilities and the development of energy demand management strategies. The hardware cost of high-frequency sampling and the computational complexity of the algorithms have hampered large-scale application of NILM; however, low-frequency sampling data show poor performance in event detection when multiple appliances are turned on simultaneously. In this paper, we contribute an iterative load disaggregation approach that is based on the appliance consumption pattern (ILDACP). Our approach combines the fuzzy C-means clustering algorithm, which provides an initial appliance operating status, with sub-sequence-searching dynamic time warping, which retrieves individual energy consumption based on the typical power consumption pattern. Results show that the proposed approach disaggregates power consumption accurately and is suitable for situations where different appliances are operated simultaneously. Also, the approach has lower computational complexity than the hidden Markov model method and is easy to implement in the household without installing special equipment.

  7. First-order convex feasibility algorithms for x-ray CT

    DEFF Research Database (Denmark)

    Sidky, Emil Y.; Jørgensen, Jakob Heide; Pan, Xiaochuan

    2013-01-01

    Purpose: Iterative image reconstruction (IIR) algorithms in computed tomography (CT) are based on algorithms for solving a particular optimization problem. Design of the IIR algorithm, therefore, is aided by knowledge of the solution to the optimization problem on which it is based. Often times...... problems. Conclusions: Formulation of convex feasibility problems can provide a useful alternative to unconstrained optimization when designing IIR algorithms for CT. The approach is amenable to recent methods for accelerating first-order algorithms which may be particularly useful for CT with limited...

  8. Low complexity variational Bayes iterative receiver for MIMO-OFDM systems

    DEFF Research Database (Denmark)

    Xiong, Chunlin; Wang, Hua; Zhang, Xiaoying

    2009-01-01

    A low complexity iterative receiver is proposed in this paper for MIMO-OFDM systems in time-varying multi-path channels based on the variational Bayes (VB) method. According to the VB method, the estimation algorithms of the signal distribution and the channel distribution are derived for the rece

  9. Comparison of sorting algorithms to increase the range of Hartmann-Shack aberrometry.

    Science.gov (United States)

    Bedggood, Phillip; Metha, Andrew

    2010-01-01

    Recently many software-based approaches have been suggested for improving the range and accuracy of Hartmann-Shack aberrometry. We compare the performance of four representative algorithms, with a focus on aberrometry for the human eye. Algorithms vary in complexity from the simplistic traditional approach to iterative spline extrapolation based on prior spot measurements. Range is assessed for a variety of aberration types in isolation using computer modeling, and also for complex wavefront shapes using a real adaptive optics system. The effects of common sources of error for ocular wavefront sensing are explored. The results show that the simplest possible iterative algorithm produces comparable range and robustness compared to the more complicated algorithms, while keeping processing time minimal to afford real-time analysis.

  10. NITSOL: A Newton iterative solver for nonlinear systems

    Energy Technology Data Exchange (ETDEWEB)

    Pernice, M. [Univ. of Utah, Salt Lake City, UT (United States); Walker, H.F. [Utah State Univ., Logan, UT (United States)

    1996-12-31

    Newton iterative methods, also known as truncated Newton methods, are implementations of Newton's method in which the linear systems that characterize Newton steps are solved approximately using iterative linear algebra methods. Here, we outline a well-developed Newton iterative algorithm together with a Fortran implementation called NITSOL. The basic algorithm is an inexact Newton method globalized by backtracking, in which each initial trial step is determined by applying an iterative linear solver until an inexact Newton criterion is satisfied. In the implementation, the user can specify inexact Newton criteria in several ways and select an iterative linear solver from among several popular "transpose-free" Krylov subspace methods. Jacobian-vector products used by the Krylov solver can be either evaluated analytically with a user-supplied routine or approximated using finite differences of function values. A flexible interface permits a wide variety of preconditioning strategies and allows the user to define a preconditioner and optionally update it periodically. We give details of these and other features and demonstrate the performance of the implementation on a representative set of test problems.
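
    NITSOL itself is a Fortran package; the Python sketch below mirrors only the structure the abstract describes (inexact Newton with a Krylov inner solve and backtracking globalization), using finite-difference Jacobian-vector products, and is not the NITSOL interface.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def inexact_newton(F, x0, tol=1e-8, max_iter=50):
    """Inexact Newton method: each step solves J dx = -F approximately with
    GMRES using matrix-free finite-difference J*v products, then applies a
    simple halving backtracking line search."""
    x = x0.astype(float)
    for _ in range(max_iter):
        Fx = F(x)
        norm0 = np.linalg.norm(Fx)
        if norm0 < tol:
            break
        eps = 1e-7 * (1.0 + np.linalg.norm(x))
        Jv = lambda v: (F(x + eps * v) - Fx) / eps          # matrix-free J*v
        J = LinearOperator((x.size, x.size), matvec=Jv, dtype=float)
        dx, _ = gmres(J, -Fx)                               # inexact Newton step
        t = 1.0
        while np.linalg.norm(F(x + t * dx)) > (1 - 1e-4 * t) * norm0 and t > 1e-4:
            t *= 0.5                                        # backtracking
        x = x + t * dx
    return x

# Toy system: F(x) = [x0^2 + x1 - 3, x0 + x1^2 - 5], with a root at (1, 2).
root = inexact_newton(lambda x: np.array([x[0]**2 + x[1] - 3,
                                          x[0] + x[1]**2 - 5]),
                      np.array([2.0, 1.0]))
```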

  11. Determination of quantitative tissue composition by iterative reconstruction on 3D DECT volumes

    Energy Technology Data Exchange (ETDEWEB)

    Magnusson, Maria [Linkoeping Univ. (Sweden). Dept. of Electrical Engineering; Linkoeping Univ. (Sweden). Dept. of Medical and Health Sciences, Radiation Physics; Linkoeping Univ. (Sweden). Center for Medical Image Science and Visualization (CMIV); Malusek, Alexandr [Linkoeping Univ. (Sweden). Dept. of Medical and Health Sciences, Radiation Physics; Linkoeping Univ. (Sweden). Center for Medical Image Science and Visualization (CMIV); Nuclear Physics Institute AS CR, Prague (Czech Republic). Dept. of Radiation Dosimetry; Muhammad, Arif [Linkoeping Univ. (Sweden). Dept. of Medical and Health Sciences, Radiation Physics; Carlsson, Gudrun Alm [Linkoeping Univ. (Sweden). Dept. of Medical and Health Sciences, Radiation Physics; Linkoeping Univ. (Sweden). Center for Medical Image Science and Visualization (CMIV)

    2011-07-01

    Quantitative tissue classification using dual-energy CT has the potential to improve accuracy in radiation therapy dose planning as it provides more information about the material composition of scanned objects than the currently used methods based on single-energy CT. One problem that hinders successful application of both single- and dual-energy CT is the presence of beam hardening and scatter artifacts in reconstructed data. Current pre- and post-correction methods used for image reconstruction often bias CT attenuation values and thus limit their applicability for quantitative tissue classification. Here we demonstrate simulation studies with a novel iterative algorithm that decomposes every soft-tissue voxel into three base materials: water, protein, and adipose. The results demonstrate that beam hardening artifacts can effectively be removed and accurate estimation of the mass fractions of each base material can be achieved. Our iterative algorithm starts by calculating parallel projections on two DECT volumes previously reconstructed from fan-beam or helical projections with a small cone-beam angle. The parallel projections are then used in an iterative loop. Future developments include segmentation of soft and bone tissue and subsequent determination of bone composition. (orig.)

  12. Iterative Decoding of Concatenated Codes: A Tutorial

    Directory of Open Access Journals (Sweden)

    Phillip A. Regalia

    2005-05-01

    Full Text Available The turbo decoding algorithm of a decade ago constituted a milestone in error-correction coding for digital communications, and has inspired extensions to generalized receiver topologies, including turbo equalization, turbo synchronization, and turbo CDMA, among others. Despite an accrued understanding of iterative decoding over the years, the “turbo principle” remains elusive to master analytically, thereby inciting interest from researchers outside the communications domain. In this spirit, we develop a tutorial presentation of iterative decoding for parallel and serial concatenated codes, in terms hopefully accessible to a broader audience. We motivate iterative decoding as a computationally tractable attempt to approach maximum-likelihood decoding, and characterize fixed points in terms of a “consensus” property between constituent decoders. We review how the decoding algorithm for both parallel and serial concatenated codes coincides with an alternating projection algorithm, which allows one to identify conditions under which the algorithm indeed converges to a maximum-likelihood solution, in terms of particular likelihood functions factoring into the product of their marginals. The presentation emphasizes a common framework applicable to both parallel and serial concatenated codes.

  13. Dose reduction in pediatric abdominal CT: use of iterative reconstruction techniques across different CT platforms

    International Nuclear Information System (INIS)

    Khawaja, Ranish Deedar Ali; Singh, Sarabjeet; Otrakji, Alexi; Padole, Atul; Lim, Ruth; Nimkin, Katherine; Westra, Sjirk; Kalra, Mannudeep K.; Gee, Michael S.

    2015-01-01

    Dose reduction in children undergoing CT scanning is an important priority for the radiology community and public at large. Drawbacks of radiation reduction are increased image noise and artifacts, which can affect image interpretation. Iterative reconstruction techniques have been developed to reduce noise and artifacts from reduced-dose CT examinations, although reconstruction algorithm, magnitude of dose reduction and effects on image quality vary. We review the reconstruction principles, radiation dose potential and effects on image quality of several iterative reconstruction techniques commonly used in clinical settings, including 3-D adaptive iterative dose reduction (AIDR-3D), adaptive statistical iterative reconstruction (ASIR), iDose, sinogram-affirmed iterative reconstruction (SAFIRE) and model-based iterative reconstruction (MBIR). We also discuss clinical applications of iterative reconstruction techniques in pediatric abdominal CT. (orig.)

  14. Dose reduction in pediatric abdominal CT: use of iterative reconstruction techniques across different CT platforms

    Energy Technology Data Exchange (ETDEWEB)

    Khawaja, Ranish Deedar Ali; Singh, Sarabjeet; Otrakji, Alexi; Padole, Atul; Lim, Ruth; Nimkin, Katherine; Westra, Sjirk; Kalra, Mannudeep K.; Gee, Michael S. [MGH Imaging, Massachusetts General Hospital, Harvard Medical School, Boston, MA (United States)

    2015-07-15

    Dose reduction in children undergoing CT scanning is an important priority for the radiology community and public at large. Drawbacks of radiation reduction are increased image noise and artifacts, which can affect image interpretation. Iterative reconstruction techniques have been developed to reduce noise and artifacts from reduced-dose CT examinations, although reconstruction algorithm, magnitude of dose reduction and effects on image quality vary. We review the reconstruction principles, radiation dose potential and effects on image quality of several iterative reconstruction techniques commonly used in clinical settings, including 3-D adaptive iterative dose reduction (AIDR-3D), adaptive statistical iterative reconstruction (ASIR), iDose, sinogram-affirmed iterative reconstruction (SAFIRE) and model-based iterative reconstruction (MBIR). We also discuss clinical applications of iterative reconstruction techniques in pediatric abdominal CT. (orig.)

  15. Iterative Estimation in Turbo Equalization Process

    Directory of Open Access Journals (Sweden)

    MORGOS Lucian

    2014-05-01

    Full Text Available This paper presents the iterative estimation in the turbo equalization process. Turbo equalization is a reception process in which equalization and decoding are performed together, not as separate processes. For the equalizer to work properly, it must receive, before equalization, accurate information about the value of the channel impulse response. This estimation of the channel impulse response is done by transmission of a training sequence known at the receiver. Knowing both the transmitted and received sequences, an estimate of the channel impulse response can be calculated using one of the well-known estimation algorithms. The estimate can also be iteratively recalculated based on the data sequence available at the output of the channel and the estimated data sequence coming from the turbo equalizer output, thereby refining the obtained results.
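
    A common training-based estimator of the kind the abstract alludes to is the least-squares fit of the channel taps from the known training sequence; the sketch below is an illustration of that idea, not necessarily the paper's specific algorithm.

```python
import numpy as np
from scipy.linalg import toeplitz

def ls_channel_estimate(training, received, L):
    """Least-squares estimate of an L-tap channel impulse response from a
    known training sequence: received ~ X @ h, where X is the convolution
    (Toeplitz) matrix of the training symbols."""
    col = np.asarray(training, dtype=float)
    X = toeplitz(col, np.r_[col[0], np.zeros(L - 1)])
    h, *_ = np.linalg.lstsq(X, received[:len(col)], rcond=None)
    return h

# Toy check: recover a 3-tap channel from noiseless training data.
h_true = np.array([1.0, 0.5, 0.2])
train = np.random.default_rng(1).choice([-1.0, 1.0], size=32)
rx = np.convolve(train, h_true)[:32]
h_hat = ls_channel_estimate(train, rx, L=3)
```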

  16. Prototype Implementation of Two Efficient Low-Complexity Digital Predistortion Algorithms

    Directory of Open Access Journals (Sweden)

    Timo I. Laakso

    2008-01-01

    Full Text Available Predistortion (PD) lineariser for microwave power amplifiers (PAs) is an important topic of research. With larger and larger bandwidth as it appears today in modern WiMax standards as well as in multichannel base stations for 3GPP standards, the relatively simple nonlinear effect of a PA becomes a complex memory-including function, severely distorting the output signal. In this contribution, two digital PD algorithms are investigated for the linearisation of microwave PAs in mobile communications. The first one is an efficient and low-complexity algorithm based on a memoryless model, called the simplicial canonical piecewise linear (SCPWL) function, that describes the static nonlinear characteristic of the PA. The second algorithm is more general, approximating the pre-inverse filter of a nonlinear PA iteratively using a Volterra model. The first simpler algorithm is suitable for compensation of amplitude compression and amplitude-to-phase conversion, for example, in mobile units with relatively small bandwidths. The second algorithm can be used to linearise PAs operating with larger bandwidths, thus exhibiting memory effects, for example, in multichannel base stations. A measurement testbed which includes a transmitter-receiver chain with a microwave PA is built for testing and prototyping of the proposed PD algorithms. In the testing phase, the PD algorithms are implemented using MATLAB (floating-point representation) and tested in record-and-playback mode. The iterative PD algorithm is then implemented on a Field Programmable Gate Array (FPGA) using fixed-point representation. The FPGA implementation allows the pre-inverse filter to be tested in a real-time mode. Measurement results show excellent linearisation capabilities of both the proposed algorithms in terms of adjacent channel power suppression. It is also shown that the fixed-point FPGA implementation of the iterative algorithm performs as well as the floating-point implementation.

  17. Combined algorithms in nonlinear problems of magnetostatics

    International Nuclear Information System (INIS)

    Gregus, M.; Khoromskij, B.N.; Mazurkevich, G.E.; Zhidkov, E.P.

    1988-01-01

    To solve boundary problems of magnetostatics in unbounded two- and three-dimensional regions, we construct combined algorithms based on a combination of the method of boundary integral equations with grid methods. We study the substantiation of the combined method for the nonlinear magnetostatic problem without preliminary discretization of the equations and give some results on the convergence of the iterative processes that arise in nonlinear cases. We also discuss economical iterative processes and algorithms that solve boundary integral equations on certain surfaces. Finally, examples of numerical solutions of magnetostatic problems that arose when modelling the fields of electrophysical installations are given. 14 refs.; 2 figs.; 1 tab

  18. Research of beam hardening correction method for CL system based on SART algorithm

    International Nuclear Information System (INIS)

    Cao Daquan; Wang Yaxiao; Que Jiemin; Sun Cuili; Wei Cunfeng; Wei Long

    2014-01-01

    Computed laminography (CL) is a non-destructive testing technique for large objects, especially planar objects. Beam hardening artifacts were widely observed in the CL system and significantly reduce the image quality. This study proposed a novel simultaneous algebraic reconstruction technique (SART) based beam hardening correction (BHC) method for the CL system, namely the SART-BHC algorithm for short. The SART-BHC algorithm takes the polychromatic attenuation process into account to formulate the iterative reconstruction update. A novel projection matrix calculation method, which differs from the conventional cone-beam or fan-beam geometry, was also studied for the CL system. The proposed method was evaluated with simulation data and experimental data, which were generated using the Monte Carlo simulation toolkit Geant4 and a bench-top CL system, respectively. All projection data were reconstructed with the SART-BHC algorithm and the standard filtered back projection (FBP) algorithm. The reconstructed images show that beam hardening artifacts are greatly reduced with the SART-BHC algorithm compared to the FBP algorithm. The SART-BHC algorithm doesn't need any prior knowledge about the object or the X-ray spectrum and it can also mitigate interlayer aliasing. (authors)
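
    For context, the core SART update that SART-BHC builds on (shown here without the polychromatic beam-hardening model that is the paper's contribution) can be sketched as follows, with a toy system matrix standing in for the CL projection geometry.

```python
import numpy as np

def sart(A, b, n_iter=20, relax=0.5):
    """Basic SART iteration: x += relax * A^T((b - Ax)/row_sums)/col_sums,
    i.e. residuals normalized by ray lengths, backprojected, and normalized
    by pixel weights. SART-BHC additionally models polychromatic
    attenuation inside this update; that part is omitted here."""
    row_sums = A.sum(axis=1); row_sums[row_sums == 0] = 1.0
    col_sums = A.sum(axis=0); col_sums[col_sums == 0] = 1.0
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = x + relax * (A.T @ ((b - A @ x) / row_sums)) / col_sums
    return x

# Toy 2x2-pixel "scan": each row of A sums pixels along one of four rays.
A = np.array([[1, 1, 0, 0], [0, 0, 1, 1], [1, 0, 1, 0], [0, 1, 0, 1]], float)
x_true = np.array([1.0, 2.0, 3.0, 4.0])
x_rec = sart(A, A @ x_true)
```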

  19. A Novel Pairwise Comparison-Based Method to Determine Radiation Dose Reduction Potentials of Iterative Reconstruction Algorithms, Exemplified Through Circle of Willis Computed Tomography Angiography.

    Science.gov (United States)

    Ellmann, Stephan; Kammerer, Ferdinand; Brand, Michael; Allmendinger, Thomas; May, Matthias S; Uder, Michael; Lell, Michael M; Kramer, Manuel

    2016-05-01

    The aim of this study was to determine the dose reduction potential of iterative reconstruction (IR) algorithms in computed tomography angiography (CTA) of the circle of Willis using a novel method of evaluating the quality of radiation dose-reduced images. This study relied on ReconCT, a proprietary reconstruction software that allows simulating CT scans acquired with reduced radiation dose based on the raw data of true scans. To evaluate the performance of ReconCT in this regard, a phantom study was performed to compare the image noise of true and simulated scans within simulated vessels of a head phantom. Following that, 10 patients scheduled for CTA of the circle of Willis were scanned according to our institute's standard protocol (100 kV, 145 reference mAs). Subsequently, CTA images of these patients were reconstructed either as a full-dose weighted filtered back projection or with radiation dose reductions down to 10% of the full-dose level and Sinogram-Affirmed Iterative Reconstruction (SAFIRE) with either strength 3 or 5. Images were marked with arrows pointing at vessels of different sizes, and image pairs were presented to observers. Five readers assessed image quality with 2-alternative forced choice comparisons. In the phantom study, no significant differences were observed between the noise levels of simulated and true scans in filtered back projection, SAFIRE 3, and SAFIRE 5 reconstructions. The dose reduction potential for patient scans showed a strong dependence on IR strength as well as on the size of the vessel of interest. Thus, the potential radiation dose reductions ranged from 84.4% for the evaluation of great vessels reconstructed with SAFIRE 5 to 40.9% for the evaluation of small vessels reconstructed with SAFIRE 3. This study provides a novel image quality evaluation method based on 2-alternative forced choice comparisons. In CTA of the circle of Willis, higher IR strengths and greater vessel sizes allowed higher degrees of radiation dose

  20. 3D algebraic iterative reconstruction for cone-beam x-ray differential phase-contrast computed tomography.

    Science.gov (United States)

    Fu, Jian; Hu, Xinhua; Velroyen, Astrid; Bech, Martin; Jiang, Ming; Pfeiffer, Franz

    2015-01-01

    Due to the potential of compact imaging systems with magnified spatial resolution and contrast, cone-beam x-ray differential phase-contrast computed tomography (DPC-CT) has attracted significant interest. The currently proposed FDK reconstruction algorithm with the Hilbert imaginary filter induces severe cone-beam artifacts when the cone-beam angle becomes large. In this paper, we propose an algebraic iterative reconstruction (AIR) method for cone-beam DPC-CT and report its experimental results. This approach considers the reconstruction process as the optimization of a discrete representation of the object function to satisfy a system of equations that describes the cone-beam DPC-CT imaging modality. Unlike the conventional iterative algorithms for absorption-based CT, it applies a derivative operation to the forward projections of the reconstructed intermediate image to take into account the differential nature of the DPC projections. This method is based on the algebraic reconstruction technique, reconstructs the image ray by ray, and is expected to provide better derivative estimates during the iterations. This work comprises a numerical study of the algorithm and its experimental verification using a dataset measured with a three-grating interferometer and a mini-focus x-ray tube source. It is shown that the proposed method can reduce the cone-beam artifacts and performs better than FDK under large cone-beam angles. This algorithm is of interest for future cone-beam DPC-CT applications.

  1. MICADO: Parallel implementation of a 2D-1D iterative algorithm for the 3D neutron transport problem in prismatic geometries

    International Nuclear Information System (INIS)

    Fevotte, F.; Lathuiliere, B.

    2013-01-01

    The large increase in computing power over the past few years now makes it possible to consider developing 3D full-core heterogeneous deterministic neutron transport solvers for reference calculations. Among all approaches presented in the literature, the method first introduced in [1] seems very promising. It consists in iterating over resolutions of 2D and 1D MOC problems by taking advantage of prismatic geometries without introducing approximations of a low-order operator such as diffusion. However, before developing a solver with all industrial options at EDF, several points needed to be clarified. In this work, we first prove the convergence of this iterative process, under some assumptions. We then present our high-performance, parallel implementation of this algorithm in the MICADO solver. Benchmarking the solver against the Takeda case shows that the 2D-1D coupling algorithm does not seem to affect the spatial convergence order of the MOC solver. As for performance issues, our study shows that even though the data distribution is suited to the 2D solver part, the efficiency of the 1D part is sufficient to ensure a good parallel efficiency of the global algorithm. After this study, the main remaining implementation difficulty concerns the memory requirement of a vector used for initialization. An efficient acceleration operator will also need to be developed. (authors)

  2. Feasibility of low-dose CT with model-based iterative image reconstruction in follow-up of patients with testicular cancer

    International Nuclear Information System (INIS)

    Murphy, Kevin P.; Crush, Lee; O’Neill, Siobhan B.; Foody, James; Breen, Micheál; Brady, Adrian; Kelly, Paul J.; Power, Derek G.; Sweeney, Paul; Bye, Jackie; O’Connor, Owen J.; Maher, Michael M.; O’Regan, Kevin N.

    2016-01-01

    •Radiologists should endeavour to minimise radiation exposure to patients with testicular cancer.•Iterative reconstruction algorithms permit CT imaging at lower radiation doses.•Image quality for reduced-dose CT–MBIR is at least comparable to conventional dose.•No loss of diagnostic accuracy apparent with reduced-dose CT–MBIR. We examine the performance of pure model-based iterative reconstruction with reduced-dose CT in follow-up of patients with early-stage testicular cancer. Sixteen patients (mean age 35.6 ± 7.4 years) with stage I or II testicular cancer underwent conventional dose (CD) and low-dose (LD) CT acquisition during CT surveillance. LD data were reconstructed with model-based iterative reconstruction (LD–MBIR). Datasets were objectively and subjectively analysed at 8 anatomical levels. Two blinded clinical reads were compared to gold-standard assessment for diagnostic accuracy. A mean radiation dose reduction of 67.1% was recorded. Mean dose measurements for LD–MBIR were: thorax – 66 ± 11 mGy cm (DLP), 1.0 ± 0.2 mSv (ED), 2.0 ± 0.4 mGy (SSDE); abdominopelvic – 128 ± 38 mGy cm (DLP), 1.9 ± 0.6 mSv (ED), 3.0 ± 0.6 mGy (SSDE). Objective noise and signal-to-noise ratio values were comparable between the CD and LD–MBIR images. LD–MBIR images were superior (p < 0.001) with regard to subjective noise, streak artefact, 2-plane contrast resolution, 2-plane spatial resolution and diagnostic acceptability. All patients were correctly categorised as positive, indeterminate or negative for metastatic disease by 2 readers on LD–MBIR and CD datasets. MBIR facilitated a 67% reduction in radiation dose whilst maintaining image quality and diagnostic accuracy.

  3. Research of Subgraph Estimation Page Rank Algorithm for Web Page Rank

    Directory of Open Access Journals (Sweden)

    LI Lan-yin

    2017-04-01

    Full Text Available The traditional PageRank algorithm cannot efficiently handle the ranking of Web pages at large data scales. This paper proposes an accelerated algorithm named topK-Rank, which is based on PageRank on the MapReduce platform. It can find the top k nodes efficiently for a given graph without sacrificing accuracy. In order to identify the top k nodes, the topK-Rank algorithm prunes unnecessary nodes and edges in each iteration to dynamically construct subgraphs, and iteratively estimates lower/upper bounds of the PageRank scores through the subgraphs. Theoretical analysis shows that this method guarantees result exactness. Experiments show that the topK-Rank algorithm can find the top k nodes much faster than existing approaches.
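
    For orientation, the sketch below shows plain PageRank power iteration followed by a final top-k selection; the per-iteration pruning and bound estimation that distinguish topK-Rank are only noted in a comment, and the dangling-node correction is omitted. All names are illustrative assumptions.

        import numpy as np

        def pagerank_topk(adj, k, d=0.85, tol=1e-8, max_iter=100):
            """Plain power iteration for PageRank, returning the top-k nodes.
            adj[i, j] = 1 if page i links to page j. topK-Rank additionally
            prunes nodes/edges each iteration using lower/upper score bounds;
            dangling-node mass is ignored here for brevity."""
            n = adj.shape[0]
            out_deg = np.maximum(adj.sum(axis=1), 1)    # avoid division by zero
            r = np.full(n, 1.0 / n)
            for _ in range(max_iter):
                r_new = (1 - d) / n + d * (adj.T @ (r / out_deg))
                if np.abs(r_new - r).sum() < tol:
                    break
                r = r_new
            return np.argsort(r)[::-1][:k]              # indices of top-k scores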

  4. Fast Multi-Symbol Based Iterative Detectors for UWB Communications

    Directory of Open Access Journals (Sweden)

    Lottici Vincenzo

    2010-01-01

    Full Text Available Ultra-wideband (UWB) impulse radios have shown great potential in wireless local area networks for localization, coexistence with other services, and low probability of interception and detection. However, low transmission power and strong multipath effects make the detection of UWB signals challenging. Recently, multi-symbol based detection has attracted attention for UWB communications because it provides good performance and does not require explicit channel estimation. Most of the existing multi-symbol based methods incur a higher computational cost than can be afforded in the envisioned UWB systems. In this paper, we propose an iterative multi-symbol based method that has low complexity and provides near-optimal performance. Our method uses only one initial symbol to start and applies a decision-directed approach to iteratively update a filter template and the information symbols. Simulations show that our method converges in only a few iterations (less than 5), and that when the number of symbols increases, the performance of our method approaches that of the ideal Rake receiver.
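
    A minimal sketch of the decision-directed idea, assuming binary antipodal symbols, one pilot-derived initial template and a matrix of received symbol waveforms; the actual receiver operates on more elaborate statistics, so all names and shapes here are assumptions.

        import numpy as np

        def dd_template_detection(rx, init_template, n_iters=5):
            """rx is an (n_symbols, n_samples) array of received symbol
            waveforms; init_template is the waveform estimate from the single
            pilot symbol. Each pass decides the symbols by correlation, then
            re-estimates the template from polarity-corrected waveforms."""
            template = init_template.copy()
            for _ in range(n_iters):
                bits = np.sign(rx @ template)              # correlate and decide
                template = (bits[:, None] * rx).mean(axis=0)
                template /= np.linalg.norm(template) + 1e-12
            return bits, template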

  5. Volumetric quantification of lung nodules in CT with iterative reconstruction (ASiR and MBIR)

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Baiyu [Medical Physics Graduate Program, Duke University, Durham, North Carolina 27705 and Carl E. Ravin Advanced Imaging Laboratories, Duke University, Durham, North Carolina 27705 (United States); Barnhart, Huiman [Department of Biostatistics and Bioinformatics, Duke University, Durham, North Carolina 27705 (United States); Richard, Samuel [Carl E. Ravin Advanced Imaging Laboratories, Duke University, Durham, North Carolina 27705 and Department of Radiology, Duke University, Durham, North Carolina 27705 (United States); Robins, Marthony [Medical Physics Graduate Program, Duke University, Durham, North Carolina 27705 (United States); Colsher, James [Department of Radiology, Duke University, Durham, North Carolina 27705 (United States); Samei, Ehsan [Medical Physics Graduate Program, Duke University, Durham, North Carolina 27705 (United States); Carl E. Ravin Advanced Imaging Laboratories, Duke University, Durham, North Carolina 27705 (United States); Department of Radiology, Duke University, Durham, North Carolina 27705 (United States); Department of Physics, Department of Biomedical Engineering, and Department of Electronic and Computer Engineering, Duke University, Durham, North Carolina 27705 (United States)

    2013-11-15

    Purpose: Volume quantifications of lung nodules with multidetector computed tomography (CT) images provide useful information for monitoring nodule developments. The accuracy and precision of the volume quantification, however, can be impacted by imaging and reconstruction parameters. This study aimed to investigate the impact of iterative reconstruction algorithms on the accuracy and precision of volume quantification with dose and slice thickness as additional variables. Methods: Repeated CT images were acquired from an anthropomorphic chest phantom with synthetic nodules (9.5 and 4.8 mm) at six dose levels, and reconstructed with three reconstruction algorithms [filtered backprojection (FBP), adaptive statistical iterative reconstruction (ASiR), and model based iterative reconstruction (MBIR)] into three slice thicknesses. The nodule volumes were measured with two clinical software packages (A: Lung VCAR, B: iNtuition), and analyzed for accuracy and precision. Results: Precision was found to be generally comparable between FBP and iterative reconstruction, with no statistically significant difference noted for different dose levels, slice thicknesses, and segmentation software. Accuracy was found to be more variable. For large nodules, the accuracy was significantly different between ASiR and FBP for all slice thicknesses with both software packages, and significantly different between MBIR and FBP for 0.625 mm slice thickness with Software A and for all slice thicknesses with Software B. For small nodules, the accuracy was more similar between FBP and iterative reconstruction, with the exception of ASiR vs FBP at 1.25 mm with Software A and MBIR vs FBP at 0.625 mm with Software A. Conclusions: The systematic difference between the accuracy of FBP and iterative reconstructions highlights the importance of extending current segmentation software to accommodate the image characteristics of iterative reconstructions. In addition, a calibration process may help reduce the dependency of quantification accuracy on the reconstruction algorithm.

  6. Volumetric quantification of lung nodules in CT with iterative reconstruction (ASiR and MBIR)

    International Nuclear Information System (INIS)

    Chen, Baiyu; Barnhart, Huiman; Richard, Samuel; Robins, Marthony; Colsher, James; Samei, Ehsan

    2013-01-01

    Purpose: Volume quantifications of lung nodules with multidetector computed tomography (CT) images provide useful information for monitoring nodule developments. The accuracy and precision of the volume quantification, however, can be impacted by imaging and reconstruction parameters. This study aimed to investigate the impact of iterative reconstruction algorithms on the accuracy and precision of volume quantification with dose and slice thickness as additional variables. Methods: Repeated CT images were acquired from an anthropomorphic chest phantom with synthetic nodules (9.5 and 4.8 mm) at six dose levels, and reconstructed with three reconstruction algorithms [filtered backprojection (FBP), adaptive statistical iterative reconstruction (ASiR), and model based iterative reconstruction (MBIR)] into three slice thicknesses. The nodule volumes were measured with two clinical software packages (A: Lung VCAR, B: iNtuition), and analyzed for accuracy and precision. Results: Precision was found to be generally comparable between FBP and iterative reconstruction, with no statistically significant difference noted for different dose levels, slice thicknesses, and segmentation software. Accuracy was found to be more variable. For large nodules, the accuracy was significantly different between ASiR and FBP for all slice thicknesses with both software packages, and significantly different between MBIR and FBP for 0.625 mm slice thickness with Software A and for all slice thicknesses with Software B. For small nodules, the accuracy was more similar between FBP and iterative reconstruction, with the exception of ASiR vs FBP at 1.25 mm with Software A and MBIR vs FBP at 0.625 mm with Software A. Conclusions: The systematic difference between the accuracy of FBP and iterative reconstructions highlights the importance of extending current segmentation software to accommodate the image characteristics of iterative reconstructions. In addition, a calibration process may help reduce the dependency of quantification accuracy on the reconstruction algorithm.

  7. FIREWORKS ALGORITHM FOR UNCONSTRAINED FUNCTION OPTIMIZATION PROBLEMS

    Directory of Open Access Journals (Sweden)

    Evans BAIDOO

    2017-03-01

    Full Text Available Many modern real-world science and engineering problems can be classified as multi-objective optimisation problems, which demand expedient and efficient stochastic algorithms. This paper presents an object-oriented software application that implements a fireworks optimization algorithm for function optimization problems. The algorithm, a kind of parallel diffuse optimization algorithm, is based on the explosive phenomenon of fireworks. The algorithm showed promising results when compared to other population-based or iterative meta-heuristic algorithms on five standard benchmark problems. The software application was implemented in Java with an interactive interface which allows easy modification and extended experimentation. Additionally, this paper validates the effect of runtime on the algorithm's performance.

  8. COMPARISON OF HOLOGRAPHIC AND ITERATIVE METHODS FOR AMPLITUDE OBJECT RECONSTRUCTION

    Directory of Open Access Journals (Sweden)

    I. A. Shevkunov

    2015-01-01

    Full Text Available An experimental comparison of four methods for wavefront reconstruction is presented. We considered two iterative and two holographic methods with different mathematical models and recovery algorithms. The first two of these methods do not use a reference wave in the recording scheme, which reduces the stability requirements of the setup. A major role in phase information reconstruction by such methods is played by a set of spatial intensity distributions, recorded as the recording matrix is moved along the optical axis. The obtained data are used sequentially for wavefront reconstruction in an iterative procedure. In the course of this procedure, the wavefront is numerically propagated between the planes; the phase information of the wavefront is thus retained in every plane, while the calculated amplitude distributions are replaced by the measured ones in these planes. In the first of the compared methods, a two-dimensional Fresnel transform and an iterative calculation in the object plane are used as the mathematical model. In the second approach, an angular spectrum method is used for numerical wavefront propagation, and the iterative calculation is carried out only between closely located planes of data registration. Two digital holography methods, which use a reference wave in the recording scheme and differ from each other in the numerical reconstruction algorithm for the digital holograms, are compared with the first two methods. The comparison showed that the iterative method based on the 2D Fresnel transform gives results comparable with those of the common holographic method with Fourier filtering. It is also shown that, among the considered methods, the holographic method performs best for reconstructing the complex amplitude of amplitude objects.

  9. Convergence properties of iterative algorithms for solving the nodal diffusion equations

    International Nuclear Information System (INIS)

    Azmy, Y.Y.; Kirk, B.L.

    1990-01-01

    We derive the five-point form of the nodal diffusion equations in two-dimensional Cartesian geometry and develop three iterative schemes to solve the discrete-variable equations: the unaccelerated, partial Successive Over-Relaxation (SOR), and full SOR methods. By decomposing the iteration error into its Fourier modes, we determine the spectral radius of each method for infinite-medium, uniform model problems, and for the unaccelerated and partial SOR methods for finite-medium, uniform model problems. Also, for the two variants of the SOR method we determine the optimal relaxation factor that results in the smallest number of iterations required for convergence. Our results indicate that the number of iterations for the unaccelerated and partial SOR methods is second order in the number of nodes per dimension, while for the full SOR this behavior is first order, resulting in much faster convergence for very large problems. We successfully verify the results of the spectral analysis against those of numerical experiments, and we show that for the full SOR method the linear dependence of the number of iterations on the number of nodes per dimension is relatively insensitive to the value of the relaxation parameter, and that it remains linear even for heterogeneous problems. 14 refs., 1 fig
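
    For reference, a compact sketch of the full SOR scheme on a five-point stencil is given below, here for a Poisson model problem with zero Dirichlet boundaries rather than the nodal diffusion equations themselves (which add material-dependent coupling coefficients). Names and the default relaxation factor are illustrative.

        import numpy as np

        def sor_poisson(f, h, omega=1.8, tol=1e-6, max_iter=10000):
            """Full SOR for the five-point discretization of -Laplace(u) = f
            on a square grid with zero Dirichlet boundaries; omega is the
            relaxation factor (omega = 1 recovers Gauss-Seidel)."""
            u = np.zeros_like(f)
            for _ in range(max_iter):
                diff = 0.0
                for i in range(1, f.shape[0] - 1):
                    for j in range(1, f.shape[1] - 1):
                        gs = 0.25 * (u[i-1, j] + u[i+1, j] + u[i, j-1]
                                     + u[i, j+1] + h * h * f[i, j])
                        new = (1 - omega) * u[i, j] + omega * gs
                        diff = max(diff, abs(new - u[i, j]))
                        u[i, j] = new
                if diff < tol:      # converged: largest update below tolerance
                    break
            return u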

  10. A Multiuser Detector Based on Artificial Bee Colony Algorithm for DS-UWB Systems

    Directory of Open Access Journals (Sweden)

    Zhendong Yin

    2013-01-01

    Full Text Available The Artificial Bee Colony (ABC) algorithm is an optimization algorithm based on the intelligent behavior of honey bee swarms. The ABC algorithm was developed to solve numerical optimization problems and showed promising results in processing time and solution quality. In ABC, a colony of artificial bees searches for rich artificial food sources; the numerical optimization problem is converted into the problem of finding the best parameter vector which minimizes an objective function. The artificial bees randomly generate a population of initial solutions and then iteratively improve them, moving towards better solutions by means of a neighbor search mechanism while abandoning poor solutions. In this paper, an efficient multiuser detector based on a suboptimal code mapping multiuser detector and the artificial bee colony algorithm (SCM-ABC-MUD) is proposed and implemented in direct-sequence ultra-wideband (DS-UWB) systems under the additive white Gaussian noise (AWGN) channel. The simulation results demonstrate that the BER and near-far effect resistance performances of this proposed algorithm are quite close to those of the optimum multiuser detector (OMD) while its computational complexity is much lower than that of the OMD. Furthermore, the BER performance of SCM-ABC-MUD is not sensitive to the number of active users and can provide a large system capacity.

  11. Spline based iterative phase retrieval algorithm for X-ray differential phase contrast radiography.

    Science.gov (United States)

    Nilchian, Masih; Wang, Zhentian; Thuering, Thomas; Unser, Michael; Stampanoni, Marco

    2015-04-20

    Differential phase contrast imaging using a grating interferometer is a promising alternative to conventional X-ray radiographic methods. It provides the absorption, differential phase and scattering information of the underlying sample simultaneously. Phase retrieval from the differential phase signal is an essential problem for quantitative analysis in medical imaging. In this paper, we formulate phase retrieval as a regularized inverse problem, and propose a novel discretization scheme for the derivative operator based on B-spline calculus. The inverse problem is then solved by a constrained regularized weighted-norm algorithm (CRWN) which exploits the properties of B-splines and ensures a fast implementation. The method is evaluated with a tomographic dataset and differential phase contrast mammography data. We demonstrate that the proposed method is able to produce phase images with enhanced soft-tissue contrast compared to the conventional absorption-based approach, which can potentially provide useful information for mammographic investigations.

  12. Minimizing inner product data dependencies in conjugate gradient iteration

    Science.gov (United States)

    Vanrosendale, J.

    1983-01-01

    The amount of concurrency available in conjugate gradient iteration is limited by the summations required in the inner product computations. The inner product of two vectors of length N requires time c log(N), if N or more processors are available. This paper describes an algebraic restructuring of the conjugate gradient algorithm which minimizes data dependencies due to inner product calculations. After an initial start up, the new algorithm can perform a conjugate gradient iteration in time c*log(log(N)).
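
    For context, textbook CG is sketched below with the two per-iteration inner products marked; these are the global reductions whose data dependencies the restructured algorithm seeks to minimize. The code is a generic reference implementation, not the paper's restructured variant.

        import numpy as np

        def conjugate_gradient(A, b, tol=1e-8, max_iter=1000):
            """Textbook CG for a symmetric positive definite A. The two inner
            products per iteration (marked below) serialize the computation on
            parallel machines, motivating the restructuring described above."""
            x = np.zeros(len(b))
            r = b - A @ x
            p = r.copy()
            rs = r @ r                         # inner product 1 (reduction)
            for _ in range(max_iter):
                Ap = A @ p
                alpha = rs / (p @ Ap)          # inner product 2 (reduction)
                x += alpha * p
                r -= alpha * Ap
                rs_new = r @ r
                if np.sqrt(rs_new) < tol:
                    break
                p = r + (rs_new / rs) * p      # new search direction
                rs = rs_new
            return x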

  13. Image based rendering of iterated function systems

    NARCIS (Netherlands)

    Wijk, van J.J.; Saupe, D.

    2004-01-01

    A fast method to generate fractal imagery is presented. Iterated function systems (IFS) are based on repeatedly copying transformed images. We show that this can be directly translated into standard graphics operations: each image is generated by texture mapping and blending copies of the previous image.
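
    The attractor of an IFS can also be rendered point-wise with the classical chaos game, sketched below for the Sierpinski triangle; the paper's method instead rasterizes whole transformed copies of the previous image via texture mapping, which converges to the same attractor. The maps and point count here are illustrative.

        import numpy as np

        # Sierpinski triangle IFS: three contractive affine maps, each
        # halving the distance to one corner of a triangle.
        corners = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 1.0]])

        def chaos_game(n_points=100000, seed=0):
            rng = np.random.default_rng(seed)
            pts = np.empty((n_points, 2))
            p = np.array([0.5, 0.5])
            for i in range(n_points):
                p = 0.5 * (p + corners[rng.integers(3)])   # apply a random map
                pts[i] = p
            return pts    # scatter-plot these points to see the attractor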

  14. Multiple depots vehicle routing based on the ant colony with the genetic algorithm

    Directory of Open Access Journals (Sweden)

    ChunYing Liu

    2013-09-01

    Full Text Available Purpose: the number of feasible distribution routing plans in the multi-depot vehicle scheduling problem grows exponentially as customers are added, so solving the vehicle scheduling problem with heuristic algorithms has become an important research direction. Building on a model of the multi-depot vehicle scheduling problem, and in order to improve the efficiency of multi-depot vehicle routing, this paper puts forward a fusion algorithm for multi-depot vehicle routing based on the ant colony algorithm combined with a genetic algorithm. Design/methodology/approach: to achieve this objective, the genetic algorithm optimizes the parameters of the ant colony algorithm, and the resulting fusion algorithm for multi-depot vehicle routing is proposed. Findings: simulation experiments indicate that the results of the fusion algorithm are better than those of the other algorithms, and the improved algorithm has better convergence behaviour and global search ability. Research limitations/implications: this research makes some assumptions that might affect the accuracy of the model, such as the pheromone volatility factor, the heuristic factor in each period, and the selected depots. These assumptions can be relaxed in future work. Originality/value: in this research, a new method for multi-depot vehicle routing is proposed. The fusion algorithm eliminates the influence of parameter selection by optimizing the heuristic factor, the evaporation factor and the initial pheromone distribution, and has strong global search ability. The ant colony algorithm incorporates crossover and mutation operators that act on the first- and second-best solutions in every iteration, and retains the best solution. The crossover and mutation operators extend the solution space and improve the convergence behaviour and the global search ability. This research shows that combining the ant colony and genetic algorithms improves multi-depot vehicle routing.

  15. Modified multiblock partial least squares path modeling algorithm with backpropagation neural networks approach

    Science.gov (United States)

    Yuniarto, Budi; Kurniawan, Robert

    2017-03-01

    PLS Path Modeling (PLS-PM) differs from covariance-based SEM in that it uses an approach based on variance or components; PLS-PM is therefore also known as component-based SEM. Multiblock Partial Least Squares (MBPLS) is a PLS regression method that can be used in PLS Path Modeling, where it is known as Multiblock PLS Path Modeling (MBPLS-PM). This method uses an iterative procedure in its algorithm. This research aims to modify MBPLS-PM with a Back Propagation Neural Network approach. The result is that the MBPLS-PM algorithm can be modified using the Back Propagation Neural Network approach to replace the iterative process in the backward and forward steps used to obtain the matrices t and u in the algorithm. With this modification, the model parameters obtained do not differ significantly from those obtained by the original MBPLS-PM algorithm.

  16. Multistep Hybrid Iterations for Systems of Generalized Equilibria with Constraints of Several Problems

    Directory of Open Access Journals (Sweden)

    Lu-Chuan Ceng

    2014-01-01

    Full Text Available We first introduce and analyze a multistep iterative algorithm based on the hybrid shrinking projection method for finding a solution of a system of generalized equilibria with constraints of several problems: the generalized mixed equilibrium problem, finitely many variational inclusions, the minimization problem for a convex and continuously Fréchet differentiable functional, and the fixed-point problem of an asymptotically strict pseudocontractive mapping in the intermediate sense in a real Hilbert space. We prove a strong convergence theorem for the iterative algorithm under suitable conditions. On the other hand, we also propose another multistep iterative algorithm involving no shrinking projection method and derive its weak convergence under mild assumptions.

  17. Development of an iterative 3D reconstruction method for the control of heavy-ion oncotherapy with PET

    International Nuclear Information System (INIS)

    Lauckner, K.

    1999-06-01

    The dissertation reports the approach and work for developing and implementing an image-space reconstruction method that allows checking the 3D activity distribution and detecting possible deviations from irradiation planning data. Unlike usual PET scanners, the BASTEI instrument is equipped with two detectors positioned at opposite sides above and below the patient, so that there is enough space for suitable positioning of patient and radiation source. Due to the restricted field of view of the positron camera, the 3D imaging process is subject to displacement-dependent variations, creating poor reconstruction conditions. In addition, the counting rate is two to three orders of magnitude lower than the usual counting rates of nuclear-medicine PET applications. This is why an iterative 3D algorithm is needed. Two iterative methods known from conventional PET were examined for their suitability and compared with respect to results. The MLEM algorithm proposed by Shepp and Vardi interprets the measured data as a random sample of independent Poisson-distributed variables, which is used to estimate the unknown activity distribution. A disadvantage of this algorithm is the considerable computational effort required. To minimize the computational effort, and to make iterative statistical methods applicable to measured 3D data, Daube-Witherspoon and Muehllehner developed the Iterative Image Space Reconstruction Algorithm (ISRA), derived by modifying the sequence of steps of the MLEM algorithm. ISRA solves the problem in the least-squares sense, whereas the MLEM algorithm uses the maximum-likelihood criterion. (orig./CB) [de]
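
    The two update rules can be stated compactly. The sketch below gives one iteration of each for a generic system matrix A and measured data p; names are illustrative, and small constants guard the divisions.

        import numpy as np

        def mlem_step(x, A, p):
            """One MLEM update (maximum likelihood for Poisson data):
            x <- x / (A^T 1) * A^T (p / (A x))."""
            sens = A.T @ np.ones(A.shape[0])            # sensitivity image
            ratio = p / np.maximum(A @ x, 1e-12)
            return x * (A.T @ ratio) / np.maximum(sens, 1e-12)

        def isra_step(x, A, p):
            """One ISRA update (least-squares criterion):
            x <- x * (A^T p) / (A^T A x)."""
            return x * (A.T @ p) / np.maximum(A.T @ (A @ x), 1e-12)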

  18. Image quality in children with low-radiation chest CT using adaptive statistical iterative reconstruction and model-based iterative reconstruction.

    Directory of Open Access Journals (Sweden)

    Jihang Sun

    Full Text Available OBJECTIVE: To evaluate noise reduction and image quality improvement in low-radiation-dose chest CT images in children using adaptive statistical iterative reconstruction (ASIR) and a full model-based iterative reconstruction (MBIR) algorithm. METHODS: Forty-five children (ages ranging from 28 days to 6 years, median 1.8 years) who received low-dose chest CT scans were included. An age-dependent noise index (NI) was used for acquisition. Images were retrospectively reconstructed using three methods: MBIR, a blend of 60% ASIR and 40% conventional filtered back-projection (FBP), and FBP. The subjective quality of the images was independently evaluated by two radiologists. Objective noise in the left ventricle (LV), muscle, fat, descending aorta and lung field was measured at the level with the largest cross-sectional area of the LV, with the region of interest about one fourth to half of the area of the descending aorta. The signal-to-noise ratio (SNR) was calculated. RESULTS: In terms of subjective quality, MBIR images were significantly better than ASIR and FBP in image noise and visibility of tiny structures, but blurred edges were observed. In terms of objective noise, MBIR and ASIR reconstruction decreased the image noise by 55.2% and 31.8%, respectively, for the LV compared with FBP. Similarly, MBIR and ASIR reconstruction increased the SNR by 124.0% and 46.2%, respectively, compared with FBP. CONCLUSION: Compared with FBP and ASIR, overall image quality and noise reduction were significantly improved by MBIR. MBIR can produce diagnostically acceptable chest CT images in children at a lower radiation dose.

  19. The serial message-passing schedule for LDPC decoding algorithms

    Science.gov (United States)

    Liu, Mingshan; Liu, Shanshan; Zhou, Yuan; Jiang, Xue

    2015-12-01

    The conventional message-passing schedule for LDPC decoding algorithms is the so-called flooding schedule. It has the disadvantage that updated messages cannot be used until the next iteration, thus reducing the convergence speed. To address this, the layered belief propagation (LBP) algorithm, based on a serial message-passing schedule, was proposed. In this paper the decoding principle of the LBP algorithm is briefly introduced, and two improved algorithms are proposed: the grouped serial decoding algorithm (Grouped LBP) and the semi-serial decoding algorithm. They improve the LBP algorithm's decoding speed while maintaining good decoding performance.
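
    A minimal dense-matrix sketch of layered decoding with the common min-sum approximation: the posterior LLRs are refreshed after every check row, which is exactly what the flooding schedule postpones to the end of the iteration. Processing several rows per layer would correspond roughly to the grouped variant. All names are illustrative.

        import numpy as np

        def layered_minsum(H, llr, n_iters=20):
            """Layered (serial) min-sum LDPC decoding. H is a dense 0/1
            parity-check matrix, llr the channel LLRs (positive means bit 0).
            Later checks within an iteration already see refreshed messages."""
            m, n = H.shape
            R = np.zeros((m, n))                 # check-to-variable messages
            Q = llr.astype(float).copy()         # posterior LLRs
            for _ in range(n_iters):
                for chk in range(m):             # one "layer" per check row
                    idx = np.flatnonzero(H[chk])
                    t = Q[idx] - R[chk, idx]     # variable-to-check messages
                    for j, v in enumerate(idx):  # min-sum extrinsic update
                        others = np.delete(t, j)
                        R[chk, v] = np.prod(np.sign(others)) * np.abs(others).min()
                    Q[idx] = t + R[chk, idx]     # immediate posterior refresh
                hard = (Q < 0).astype(int)
                if not (H @ hard % 2).any():     # all parity checks satisfied
                    break
            return hard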

  20. Preconditioners based on the Alternating-Direction-Implicit algorithm for the 2D steady-state diffusion equation with orthotropic heterogeneous coefficients

    KAUST Repository

    Gao, Longfei; Calo, Victor M.

    2015-01-01

    In this paper, we combine the Alternating Direction Implicit (ADI) algorithm with the concept of preconditioning and apply it to linear systems discretized from the 2D steady-state diffusion equations with orthotropic heterogeneous coefficients by the finite element method assuming tensor product basis functions. Specifically, we adopt the compound iteration idea and use ADI iterations as the preconditioner for the outside Krylov subspace method that is used to solve the preconditioned linear system. An efficient algorithm to perform each ADI iteration is crucial to the efficiency of the overall iterative scheme. We exploit the Kronecker product structure in the matrices, inherited from the tensor product basis functions, to achieve high efficiency in each ADI iteration. Meanwhile, in order to reduce the number of Krylov subspace iterations, we incorporate partially the coefficient information into the preconditioner by exploiting the local support property of the finite element basis functions. Numerical results demonstrated the efficiency and quality of the proposed preconditioner. © 2014 Elsevier B.V. All rights reserved.
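
    The key kernel is applying (and, in the ADI half-sweeps, inverting) Kronecker-structured operators without ever forming them. A sketch of the matrix-vector product is given below, assuming column-stacking vectorization; names are illustrative.

        import numpy as np

        def kron_matvec(A, B, x):
            """Apply (A kron B) @ x without forming the Kronecker product,
            via the identity (A kron B) vec(X) = vec(B @ X @ A.T), where vec
            stacks columns (Fortran order)."""
            n1, n2 = A.shape[1], B.shape[1]
            X = x.reshape((n2, n1), order="F")
            return (B @ X @ A.T).reshape(-1, order="F")

        # Self-check against the explicit Kronecker product on a small case.
        rng = np.random.default_rng(0)
        A, B = rng.standard_normal((3, 3)), rng.standard_normal((4, 4))
        x = rng.standard_normal(12)
        assert np.allclose(np.kron(A, B) @ x, kron_matvec(A, B, x))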

  1. Satellite lithium-ion battery remaining useful life estimation with an iterative updated RVM fused with the KF algorithm

    Institute of Scientific and Technical Information of China (English)

    Yuchen SONG; Datong LIU; Yandong HOU; Jinxiang YU; Yu PENG

    2018-01-01

    Lithium-ion batteries have become the third-generation space batteries and are widely utilized in a series of spacecraft. Remaining Useful Life (RUL) estimation is essential to a spacecraft as the battery is a critical part and determines the lifetime and reliability. The Relevance Vector Machine (RVM) is a data-driven algorithm used to estimate a battery's RUL due to its sparse feature and uncertainty management capability. In particular, some regression cases indicate that the RVM obtains better short-term than long-term prediction performance. As a nonlinear kernel learning algorithm, the coefficient matrix and relevance vectors are fixed once the RVM training is conducted. Moreover, the RVM can easily be influenced by noise in the training data. Thus, this work proposes an iteratively updated approach to improve the long-term prediction performance for a battery's RUL prediction. Firstly, when a new estimate is output by the RVM, the Kalman filter is applied to optimize this estimate with a physical degradation model. Then, this optimized estimate is added into the training set as an on-line sample, the RVM model is re-trained, and the coefficient matrix and relevance vectors can be dynamically adjusted to make the next iterative prediction. Experimental results with a commercial battery test data set and a satellite battery data set both indicate that the proposed method can achieve a better performance for RUL estimation.

  2. Satellite lithium-ion battery remaining useful life estimation with an iterative updated RVM fused with the KF algorithm

    Directory of Open Access Journals (Sweden)

    Yuchen SONG

    2018-01-01

    Full Text Available Lithium-ion batteries have become the third-generation space batteries and are widely utilized in a series of spacecraft. Remaining Useful Life (RUL) estimation is essential to a spacecraft as the battery is a critical part and determines the lifetime and reliability. The Relevance Vector Machine (RVM) is a data-driven algorithm used to estimate a battery’s RUL due to its sparse feature and uncertainty management capability. In particular, some regression cases indicate that the RVM obtains better short-term than long-term prediction performance. As a nonlinear kernel learning algorithm, the coefficient matrix and relevance vectors are fixed once the RVM training is conducted. Moreover, the RVM can easily be influenced by noise in the training data. Thus, this work proposes an iteratively updated approach to improve the long-term prediction performance for a battery’s RUL prediction. Firstly, when a new estimate is output by the RVM, the Kalman filter is applied to optimize this estimate with a physical degradation model. Then, this optimized estimate is added into the training set as an on-line sample, the RVM model is re-trained, and the coefficient matrix and relevance vectors can be dynamically adjusted to make the next iterative prediction. Experimental results with a commercial battery test data set and a satellite battery data set both indicate that the proposed method can achieve a better performance for RUL estimation.
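
    A schematic of the iterate-filter-retrain loop, with several loud assumptions: scikit-learn's SVR stands in for the RVM, the degradation model is a crude linear trend rather than a physical one, and the Kalman filter is scalar with assumed variances q and r.

        import numpy as np
        from sklearn.svm import SVR   # stand-in regressor; the paper uses an RVM

        def iterative_rul_prediction(t_train, y_train, t_future, q=1e-4, r=1e-2):
            """Predict one step ahead, smooth the prediction with a scalar
            Kalman filter driven by a linear degradation trend, append the
            filtered value as an on-line sample, and re-train before the
            next step. q and r are assumed process/measurement variances."""
            t, y = list(t_train), list(y_train)
            x, P = y[-1], 1.0                          # KF state: capacity estimate
            slope = (y[-1] - y[0]) / (t[-1] - t[0])    # crude degradation rate
            preds = []
            for tf in t_future:
                model = SVR().fit(np.array(t).reshape(-1, 1), y)
                z = model.predict([[tf]])[0]           # data-driven estimate
                x, P = x + slope * (tf - t[-1]), P + q # KF predict (model)
                K = P / (P + r)                        # KF gain
                x, P = x + K * (z - x), (1 - K) * P    # KF update (estimator)
                t.append(tf); y.append(x)              # on-line sample, re-train next
                preds.append(x)
            return preds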

  3. Improved Iterative Parallel Interference Cancellation Receiver for Future Wireless DS-CDMA Systems

    Directory of Open Access Journals (Sweden)

    Andrea Bernacchioni

    2005-04-01

    Full Text Available We present a new turbo multiuser detector for turbo-coded direct-sequence code division multiple access (DS-CDMA) systems. The proposed detector is based on a parallel interference cancellation (PIC) stage and a bank of turbo decoders. The PIC is broken up so as to perform interference cancellation after each constituent decoder of the turbo decoding scheme. Moreover, we propose a new enhanced algorithm that provides a more accurate estimate of the signal-to-noise-plus-interference ratio used in the tentative decision device and in the MAP decoding algorithm. The performance of the proposed receiver is evaluated by means of computer simulations for medium to very high system loads, in AWGN and multipath fading channels, and compared to recently proposed interference-cancellation-based iterative MUDs, taking into account the number of iterations and the complexity involved. The results show that the proposed receiver outperforms the others, especially for highly loaded systems.

  4. An Online Dictionary Learning-Based Compressive Data Gathering Algorithm in Wireless Sensor Networks.

    Science.gov (United States)

    Wang, Donghao; Wan, Jiangwen; Chen, Junying; Zhang, Qiang

    2016-09-22

    To adapt to sensing signals of enormous diversity and dynamics, and to decrease the reconstruction errors caused by ambient noise, a novel online dictionary learning method-based compressive data gathering (ODL-CDG) algorithm is proposed. The dictionary is learned through a two-stage iterative procedure, alternating between a sparse coding step and a dictionary update step. The self-coherence of the learned dictionary is introduced as a penalty term during the dictionary update procedure, and the dictionary is also constrained to have a sparse structure. It is theoretically demonstrated that the sensing matrix satisfies the restricted isometry property (RIP) with high probability. In addition, the lower bound on the number of measurements necessary for compressive sensing (CS) reconstruction is given. Simulation results show that the proposed ODL-CDG algorithm can enhance the recovery accuracy in the presence of noise, and reduce the energy consumption in comparison with other dictionary-based data gathering methods.

  5. Image quality of CT angiography using model-based iterative reconstruction in infants with congenital heart disease: Comparison with filtered back projection and hybrid iterative reconstruction

    Energy Technology Data Exchange (ETDEWEB)

    Jia, Qianjun, E-mail: jiaqianjun@126.com [Southern Medical University, Guangzhou, Guangdong (China); Department of Radiology, Guangdong General Hospital, Guangdong Academy of Medical Sciences, Guangzhou, Guangdong (China); Department of Catheterization Lab, Guangdong Cardiovascular Institute, Guangdong Provincial Key Laboratory of South China Structural Heart Disease, Guangdong General Hospital, Guangdong Academy of Medical Sciences, Guangzhou, Guangdong (China); Zhuang, Jian, E-mail: zhuangjian5413@tom.com [Department of Cardiac Surgery, Guangdong Cardiovascular Institute, Guangdong Provincial Key Laboratory of South China Structural Heart Disease, Guangdong General Hospital, Guangdong Academy of Medical Sciences, Guangzhou, Guangdong (China); Jiang, Jun, E-mail: 81711587@qq.com [Department of Radiology, Shenzhen Second People’s Hospital, Shenzhen, Guangdong (China); Li, Jiahua, E-mail: 970872804@qq.com [Department of Catheterization Lab, Guangdong Cardiovascular Institute, Guangdong Provincial Key Laboratory of South China Structural Heart Disease, Guangdong General Hospital, Guangdong Academy of Medical Sciences, Guangzhou, Guangdong (China); Huang, Meiping, E-mail: huangmeiping_vip@163.com [Department of Catheterization Lab, Guangdong Cardiovascular Institute, Guangdong Provincial Key Laboratory of South China Structural Heart Disease, Guangdong General Hospital, Guangdong Academy of Medical Sciences, Guangzhou, Guangdong (China); Southern Medical University, Guangzhou, Guangdong (China); Liang, Changhong, E-mail: cjr.lchh@vip.163.com [Department of Radiology, Guangdong General Hospital, Guangdong Academy of Medical Sciences, Guangzhou, Guangdong (China); Southern Medical University, Guangzhou, Guangdong (China)

    2017-01-15

    Purpose: To compare the image quality, rate of coronary artery visualization and diagnostic accuracy of 256-slice multi-detector computed tomography angiography (CTA) with prospective electrocardiographic (ECG) triggering at a tube voltage of 80 kVp between 3 reconstruction algorithms (filtered back projection (FBP), hybrid iterative reconstruction (iDose⁴) and iterative model reconstruction (IMR)) in infants with congenital heart disease (CHD). Methods: Fifty-one infants with CHD who underwent cardiac CTA in our institution between December 2014 and March 2015 were included. The effective radiation doses were calculated. Imaging data were reconstructed using the FBP, iDose⁴ and IMR algorithms. Parameters of objective image quality (noise, signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR)); subjective image quality (overall image quality, image noise and margin sharpness); coronary artery visibility; and diagnostic accuracy for the three algorithms were measured and compared. Results: The mean effective radiation dose was 0.61 ± 0.32 mSv. Compared to FBP and iDose⁴, IMR yielded significantly lower noise (P < 0.01), higher SNR and CNR values (P < 0.01), and a greater subjective image quality score (P < 0.01). The total number of coronary segments visualized was significantly higher for both iDose⁴ and IMR than for FBP (P = 0.002 and P = 0.025, respectively), but there was no significant difference in this parameter between iDose⁴ and IMR (P = 0.397). There was no significant difference in the diagnostic accuracy between the FBP, iDose⁴ and IMR algorithms (χ² = 0.343, P = 0.842). Conclusions: For infants with CHD undergoing cardiac CTA, the IMR reconstruction algorithm provided significantly increased objective and subjective image quality compared with the FBP and iDose⁴ algorithms. However, IMR did not improve the diagnostic accuracy or coronary artery visualization compared with iDose⁴.

  6. Toward Generalization of Iterative Small Molecule Synthesis.

    Science.gov (United States)

    Lehmann, Jonathan W; Blair, Daniel J; Burke, Martin D

    2018-02-01

    Small molecules have extensive untapped potential to benefit society, but access to this potential is too often restricted by limitations inherent to the customized approach currently used to synthesize this class of chemical matter. In contrast, the "building block approach", i.e., generalized iterative assembly of interchangeable parts, has now proven to be a highly efficient and flexible way to construct things ranging all the way from skyscrapers to macromolecules to artificial intelligence algorithms. The structural redundancy found in many small molecules suggests that they possess a similar capacity for generalized building block-based construction. It is also encouraging that many customized iterative synthesis methods have been developed that improve access to specific classes of small molecules. There has also been substantial recent progress toward the iterative assembly of many different types of small molecules, including complex natural products, pharmaceuticals, biological probes, and materials, using common building blocks and coupling chemistry. Collectively, these advances suggest that a generalized building block approach for small molecule synthesis may be within reach.

  7. Toward Generalization of Iterative Small Molecule Synthesis

    Science.gov (United States)

    Lehmann, Jonathan W.; Blair, Daniel J.; Burke, Martin D.

    2018-01-01

    Small molecules have extensive untapped potential to benefit society, but access to this potential is too often restricted by limitations inherent to the customized approach currently used to synthesize this class of chemical matter. In contrast, the “building block approach”, i.e., generalized iterative assembly of interchangeable parts, has now proven to be a highly efficient and flexible way to construct things ranging all the way from skyscrapers to macromolecules to artificial intelligence algorithms. The structural redundancy found in many small molecules suggests that they possess a similar capacity for generalized building block-based construction. It is also encouraging that many customized iterative synthesis methods have been developed that improve access to specific classes of small molecules. There has also been substantial recent progress toward the iterative assembly of many different types of small molecules, including complex natural products, pharmaceuticals, biological probes, and materials, using common building blocks and coupling chemistry. Collectively, these advances suggest that a generalized building block approach for small molecule synthesis may be within reach. PMID:29696152

  8. An improved reconstruction algorithm based on multi-user detection for uplink grant-free NOMA

    Directory of Open Access Journals (Sweden)

    Hou Chengyan

    2017-01-01

    Full Text Available The traditional orthogonal matching pursuit (OMP) algorithm used for multi-user detection (MUD) in uplink grant-free NOMA has poor BER performance, so in this paper we propose a temporal-correlation orthogonal matching pursuit algorithm (TOMP) for multi-user detection. The core idea of TOMP is to use the temporal correlation of the active user sets to perform user activity and data detection over a number of consecutive time slots. We use the active user set estimated in the current time slot as a priori information to estimate the active user set for the next slot. The algorithm maintains an active user set T̂l of size K (where K is the number of users), which is modified in each iteration: a user that is believed reliable in one iteration but shown to be in error in another can be added to the set T̂l or removed from it. Theoretical analysis of the improved algorithm guarantees that the multiple users can be successfully detected with high probability. The simulation results show that the proposed scheme can achieve better bit error rate (BER) performance in the uplink grant-free NOMA system.
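
    For reference, the baseline OMP recovery that TOMP builds on is sketched below; TOMP additionally seeds each slot's support with the support recovered in the previous slot, which is only noted in the comment. Names are illustrative.

        import numpy as np

        def omp(Phi, y, k):
            """Standard orthogonal matching pursuit: greedily pick the column
            of Phi most correlated with the residual, then re-fit by least
            squares. TOMP would initialize `support` with the previous slot's
            estimate instead of starting empty."""
            residual, support = y.copy(), []
            for _ in range(k):
                j = int(np.argmax(np.abs(Phi.T @ residual)))
                if j not in support:
                    support.append(j)
                x_s, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
                residual = y - Phi[:, support] @ x_s
            x = np.zeros(Phi.shape[1])
            x[support] = x_s
            return x, support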

  9. Totally parallel multilevel algorithms

    Science.gov (United States)

    Frederickson, Paul O.

    1988-01-01

    Four totally parallel algorithms for the solution of a sparse linear system have common characteristics which become quite apparent when they are implemented on a highly parallel hypercube such as the CM2. These four algorithms are the Parallel Superconvergent Multigrid (PSMG) of Frederickson and McBryan, the Robust Multigrid (RMG) of Hackbusch, the FFT-based Spectral Algorithm, and Parallel Cyclic Reduction. In fact, all four can be formulated as particular cases of the same totally parallel multilevel algorithm, which is referred to as TPMA. In certain cases the spectral radius of TPMA is zero, and it is recognized to be a direct algorithm. In many other cases the spectral radius, although not zero, is small enough that a single iteration per timestep keeps the local error within the required tolerance.

  10. Adaptive iterative dose reduction algorithm in CT: Effect on image quality compared with filtered back projection in body phantoms of different sizes

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Milim; Lee, Jeong Min; Son, Hyo Shin; Han, Joon Koo; Choi, Byung Ihn [College of Medicine, Seoul National University, Seoul (Korea, Republic of); Yoon, Jeong Hee; Choi, Jin Woo [Dept. of Radiology, Seoul National University Hospital, Seoul (Korea, Republic of)

    2014-04-15

    To evaluate the impact of the adaptive iterative dose reduction (AIDR) three-dimensional (3D) algorithm in CT on noise reduction and image quality compared to the filtered back projection (FBP) algorithm, and to compare the effectiveness of AIDR 3D on noise reduction according to body habitus using phantoms of different sizes. Three different-sized phantoms with diameters of 24 cm, 30 cm, and 40 cm were built up using the American College of Radiology CT accreditation phantom and layers of pork belly fat. Each phantom was scanned eight times using different mAs settings. Images were reconstructed using FBP and three different strengths of AIDR 3D. The image noise, contrast-to-noise ratio (CNR) and signal-to-noise ratio (SNR) of the phantom were assessed. Two radiologists assessed the image quality of the 4 image sets in consensus. The effectiveness of AIDR 3D on noise reduction compared with FBP was also compared according to phantom size. Adaptive iterative dose reduction 3D significantly reduced the image noise compared with FBP and enhanced the SNR and CNR (p < 0.05) with improved image quality (p < 0.05). When a stronger reconstruction algorithm was used, a greater increase of SNR and CNR as well as noise reduction was achieved (p < 0.05). The noise reduction effect of AIDR 3D was significantly greater in the 40-cm phantom than in the 24-cm or 30-cm phantoms (p < 0.05). The AIDR 3D algorithm is effective in reducing image noise and improving image-quality parameters compared with the FBP algorithm, and its effectiveness may increase as phantom size increases.

  11. Adaptive iterative dose reduction algorithm in CT: Effect on image quality compared with filtered back projection in body phantoms of different sizes

    International Nuclear Information System (INIS)

    Kim, Milim; Lee, Jeong Min; Son, Hyo Shin; Han, Joon Koo; Choi, Byung Ihn; Yoon, Jeong Hee; Choi, Jin Woo

    2014-01-01

    To evaluate the impact of the adaptive iterative dose reduction (AIDR) three-dimensional (3D) algorithm in CT on noise reduction and image quality compared to the filtered back projection (FBP) algorithm, and to compare the effectiveness of AIDR 3D on noise reduction according to body habitus using phantoms of different sizes. Three different-sized phantoms with diameters of 24 cm, 30 cm, and 40 cm were built up using the American College of Radiology CT accreditation phantom and layers of pork belly fat. Each phantom was scanned eight times using different mAs settings. Images were reconstructed using FBP and three different strengths of AIDR 3D. The image noise, contrast-to-noise ratio (CNR) and signal-to-noise ratio (SNR) of the phantom were assessed. Two radiologists assessed the image quality of the 4 image sets in consensus. The effectiveness of AIDR 3D on noise reduction compared with FBP was also compared according to phantom size. Adaptive iterative dose reduction 3D significantly reduced the image noise compared with FBP and enhanced the SNR and CNR (p < 0.05) with improved image quality (p < 0.05). When a stronger reconstruction algorithm was used, a greater increase of SNR and CNR as well as noise reduction was achieved (p < 0.05). The noise reduction effect of AIDR 3D was significantly greater in the 40-cm phantom than in the 24-cm or 30-cm phantoms (p < 0.05). The AIDR 3D algorithm is effective in reducing image noise and improving image-quality parameters compared with the FBP algorithm, and its effectiveness may increase as phantom size increases.

  12. Fast parallel algorithm for three-dimensional distance-driven model in iterative computed tomography reconstruction

    International Nuclear Information System (INIS)

    Chen Jian-Lin; Li Lei; Wang Lin-Yuan; Cai Ai-Long; Xi Xiao-Qi; Zhang Han-Ming; Li Jian-Xin; Yan Bin

    2015-01-01

    The projection matrix model is used to describe the physical relationship between the reconstructed object and the projections. Such a model has a strong influence on projection and backprojection, two vital operations in iterative computed tomographic reconstruction. The distance-driven model (DDM) is a state-of-the-art technique for simulating forward and back projections. This model has a low computational complexity and a relatively high spatial resolution; however, few parallel implementations with a matched projector/backprojector pair have been reported. This study introduces a fast and parallelizable algorithm to improve the traditional DDM for computing the parallel projection and backprojection operations. Our proposed model has been implemented on a GPU (graphics processing unit) platform and has achieved satisfactory computational efficiency with no approximation. The runtime for the projection and backprojection operations with our model is approximately 4.5 s and 10.5 s per loop, respectively, with an image size of 256×256×256 and 360 projections of size 512×512. We compare several general algorithms that have been proposed for maximizing GPU efficiency by using unmatched projection/backprojection models in a parallel computation. The imaging resolution is not sacrificed and remains accurate during computed tomographic reconstruction. (paper)

  13. Scaling Sparse Matrices for Optimization Algorithms

    OpenAIRE

    Gajulapalli Ravindra S; Lasdon Leon S

    2006-01-01

    To iteratively solve large-scale optimization problems in various contexts like planning, operations, design, etc., we need to generate descent directions that are based on linear system solutions. Irrespective of the optimization algorithm or the solution method employed for the linear systems, ill conditioning introduced by problem characteristics or the algorithm or both needs to be addressed. In [GL01] we used an intuitive heuristic approach in scaling linear systems that improved performance...

  14. A Theoretical Framework for Soft-Information-Based Synchronization in Iterative (Turbo) Receivers

    Directory of Open Access Journals (Sweden)

    Lottici Vincenzo

    2005-01-01

    Full Text Available This contribution considers turbo synchronization, that is to say, the use of soft data information to estimate parameters like the carrier phase, frequency, or timing offsets of a modulated signal within an iterative data demodulator. In turbo synchronization, the receiver exploits the soft decisions computed at each turbo decoding iteration to provide a reliable estimate of some signal parameters. The aim of our paper is to show that such a "turbo-estimation" approach can be regarded as a special case of the expectation-maximization (EM) algorithm. This leads to a general theoretical framework for turbo synchronization that allows deriving parameter estimation procedures for carrier phase and frequency offset, as well as for timing offset and signal amplitude. The proposed mathematical framework is illustrated by simulation results reported for the particular case of carrier phase and frequency offset estimation of a turbo-coded 16-QAM signal.
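
    A compact instance of the idea for the carrier phase: the decoder's symbol posteriors give soft symbol means (playing the role of the E-step), and the phase estimate follows in closed form (the M-step). Names and shapes are illustrative assumptions, not the paper's exact derivation.

        import numpy as np

        def em_phase_estimate(r, post, constellation):
            """One EM-style carrier phase re-estimation: r holds the received
            samples, post[k, i] is the decoder's posterior probability that
            symbol k equals constellation[i] (soft info from the last turbo
            iteration)."""
            soft = post @ constellation                  # posterior symbol means
            return np.angle(np.sum(r * np.conj(soft)))   # closed-form phase

        # Each turbo iteration would derotate r by exp(-1j * theta_hat) and
        # decode again, interleaving estimation and decoding as described above.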

  15. Accelerating nuclear configuration interaction calculations through a preconditioned block iterative eigensolver

    Science.gov (United States)

    Shao, Meiyue; Aktulga, H. Metin; Yang, Chao; Ng, Esmond G.; Maris, Pieter; Vary, James P.

    2018-01-01

    We describe a number of recently developed techniques for improving the performance of large-scale nuclear configuration interaction calculations on high performance parallel computers. We show the benefit of using a preconditioned block iterative method to replace the Lanczos algorithm that has traditionally been used to perform this type of computation. The rapid convergence of the block iterative method is achieved by a proper choice of starting guesses of the eigenvectors and the construction of an effective preconditioner. These acceleration techniques take advantage of special structure of the nuclear configuration interaction problem which we discuss in detail. The use of a block method also allows us to improve the concurrency of the computation, and take advantage of the memory hierarchy of modern microprocessors to increase the arithmetic intensity of the computation relative to data movement. We also discuss the implementation details that are critical to achieving high performance on massively parallel multi-core supercomputers, and demonstrate that the new block iterative solver is two to three times faster than the Lanczos based algorithm for problems of moderate sizes on a Cray XC30 system.
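
    The behaviour described can be reproduced with a generic preconditioned block eigensolver such as SciPy's LOBPCG. The sketch below uses a random sparse stand-in matrix and a simple diagonal preconditioner; it is not the paper's nuclear Hamiltonian, starting-guess construction, or tailored preconditioner.

        import numpy as np
        from scipy.sparse import diags, random as sprandom
        from scipy.sparse.linalg import lobpcg

        # Stand-in symmetric sparse matrix with a dominant, spread-out diagonal.
        n, nev = 2000, 5
        A = sprandom(n, n, density=1e-3, random_state=0)
        A = (A + A.T) + diags(np.arange(1.0, n + 1))
        X = np.random.default_rng(0).standard_normal((n, nev))  # starting block
        M = diags(1.0 / A.diagonal())       # simple diagonal preconditioner
        vals, vecs = lobpcg(A, X, M=M, tol=1e-6, maxiter=200, largest=False)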

  16. A Parallel Particle Swarm Optimization Algorithm Accelerated by Asynchronous Evaluations

    Science.gov (United States)

    Venter, Gerhard; Sobieszczanski-Sobieski, Jaroslaw

    2005-01-01

    A parallel Particle Swarm Optimization (PSO) algorithm is presented. Particle swarm optimization is a fairly recent addition to the family of non-gradient-based, probabilistic search algorithms that is based on a simplified social model and is closely tied to swarming theory. Although PSO algorithms present several attractive properties to the designer, they are plagued by high computational cost as measured by elapsed time. One approach to reducing the elapsed time is to make use of coarse-grained parallelization to evaluate the design points. Previous parallel PSO algorithms were mostly implemented in a synchronous manner, where all design points within a design iteration are evaluated before the next iteration is started. This approach leads to poor parallel speedup in cases where a heterogeneous parallel environment is used and/or where the analysis time depends on the design point being analyzed. This paper introduces an asynchronous parallel PSO algorithm that greatly improves the parallel efficiency. The asynchronous algorithm is benchmarked on a cluster assembled from Apple Macintosh G5 desktop computers, using the multi-disciplinary optimization of a typical transport aircraft wing as an example.
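
    A serial sketch that mimics the asynchronous update rule: each particle's velocity uses whatever swarm best is current at the moment that particle is evaluated, rather than waiting for an end-of-iteration barrier. Parameter values are conventional defaults, not those of the paper.

        import numpy as np

        def pso(f, lo, hi, n_particles=20, n_iters=100, w=0.7, c1=1.5, c2=1.5):
            rng = np.random.default_rng(0)
            lo, hi = np.asarray(lo, float), np.asarray(hi, float)
            x = rng.uniform(lo, hi, size=(n_particles, lo.size))
            v = np.zeros_like(x)
            pbest, pbest_f = x.copy(), np.array([f(p) for p in x])
            i_best = int(np.argmin(pbest_f))
            g, g_f = pbest[i_best].copy(), pbest_f[i_best]
            for _ in range(n_iters):
                for i in range(n_particles):   # update immediately, no barrier
                    r1, r2 = rng.random(2)
                    v[i] = (w * v[i] + c1 * r1 * (pbest[i] - x[i])
                            + c2 * r2 * (g - x[i]))
                    x[i] = np.clip(x[i] + v[i], lo, hi)
                    fi = f(x[i])
                    if fi < pbest_f[i]:
                        pbest[i], pbest_f[i] = x[i].copy(), fi
                        if fi < g_f:           # swarm best refreshed mid-sweep
                            g, g_f = x[i].copy(), fi
            return g, g_f

        # Example: minimize the sphere function over [-5, 5]^2.
        best_x, best_f = pso(lambda p: float(np.sum(p * p)), [-5, -5], [5, 5])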

  17. Comparison of adaptive statistical iterative reconstruction (ASiR™) and model-based iterative reconstruction (Veo™) for paediatric abdominal CT examinations: an observer performance study of diagnostic image quality

    International Nuclear Information System (INIS)

    Hultenmo, Maria; Caisander, Haakan; Mack, Karsten; Thilander-Klang, Anne

    2016-01-01

    The diagnostic image quality of 75 paediatric abdominal computed tomography (CT) examinations reconstructed with two different iterative reconstruction (IR) algorithms, adaptive statistical IR (ASiR™) and model-based IR (Veo™), was compared. Axial and coronal images were reconstructed with 70 % ASiR with the Soft™ convolution kernel and with the Veo algorithm. The thickness of the reconstructed images was 2.5 or 5 mm depending on the scanning protocol used. Four radiologists graded the delineation of six abdominal structures and the diagnostic usefulness of the image quality. The Veo reconstruction significantly improved the visibility of most of the structures compared with ASiR in all subgroups of images. For coronal images, the Veo reconstruction resulted in significantly improved ratings of the diagnostic use of the image quality compared with the ASiR reconstruction. This was not seen for the axial images. The greatest improvement using Veo reconstruction was observed for the 2.5 mm coronal slices. (authors)

  18. Making the error-controlling algorithm of observable operator models constructive.

    Science.gov (United States)

    Zhao, Ming-Jie; Jaeger, Herbert; Thon, Michael

    2009-12-01

    Observable operator models (OOMs) are a class of models for stochastic processes that properly subsumes the class that can be modeled by finite-dimensional hidden Markov models (HMMs). One of the main advantages of OOMs over HMMs is that they admit asymptotically correct learning algorithms. A series of learning algorithms has been developed, with increasing computational and statistical efficiency, whose recent culmination was the error-controlling (EC) algorithm developed by the first author. The EC algorithm is an iterative, asymptotically correct algorithm that yields (and minimizes) an assured upper bound on the modeling error. It runs at least one order of magnitude faster than EM-based HMM learning algorithms and yields significantly more accurate models than the latter. Here we present a significant improvement of the EC algorithm: the constructive error-controlling (CEC) algorithm. CEC inherits from EC the main idea of minimizing an upper bound on the modeling error but is constructive where EC needs iterations. As a consequence, we obtain further gains in learning speed without loss in modeling accuracy.

  19. Explorations of the implementation of a parallel IDW interpolation algorithm in a Linux cluster-based parallel GIS

    Science.gov (United States)

    Huang, Fang; Liu, Dingsheng; Tan, Xicheng; Wang, Jian; Chen, Yunping; He, Binbin

    2011-04-01

    To design and implement an open-source parallel GIS (OP-GIS) based on a Linux cluster, the parallel inverse distance weighting (IDW) interpolation algorithm has been chosen as an example to explore the working model and the principle of algorithm parallel pattern (APP), one of the parallelization patterns for OP-GIS. Based on an analysis of the serial IDW interpolation algorithm of GRASS GIS, this paper has proposed and designed a specific parallel IDW interpolation algorithm, incorporating both single process, multiple data (SPMD) and master/slave (M/S) programming modes. The main steps of the parallel IDW interpolation algorithm are: (1) the master node packages the related information, and then broadcasts it to the slave nodes; (2) each node calculates its assigned data extent along one row using the serial algorithm; (3) the master node gathers the data from all nodes; and (4) iterations continue until all rows have been processed, after which the results are output. According to the experiments performed in the course of this work, the parallel IDW interpolation algorithm can attain an efficiency greater than 0.93 compared with similar algorithms, which indicates that the parallel algorithm can greatly reduce processing time and maximize speed and performance.
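
    The per-row work handed to the slave nodes reduces to a serial IDW kernel like the sketch below; the function and grid layout are illustrative and are not taken from the GRASS GIS code.

        import numpy as np

        def idw_row(grid_x, row_y, samples, values, power=2.0):
            # Serial IDW for one output row; in the M/S pattern each slave
            # node runs this on the rows assigned to it by the master.
            out = np.empty(len(grid_x))
            for j, gx in enumerate(grid_x):
                d = np.hypot(samples[:, 0] - gx, samples[:, 1] - row_y)
                if np.any(d == 0):               # grid node coincides with a sample
                    out[j] = values[np.argmin(d)]
                else:
                    w = 1.0 / d ** power
                    out[j] = np.sum(w * values) / np.sum(w)
            return out

        # Hypothetical usage: scattered points interpolated onto a 100 x 100 unit grid.
        rng = np.random.default_rng(1)
        samples = rng.uniform(0, 1, (50, 2))
        values = np.sin(6 * samples[:, 0]) + samples[:, 1]
        grid_x = np.linspace(0, 1, 100)
        grid = np.vstack([idw_row(grid_x, y, samples, values)
                          for y in np.linspace(0, 1, 100)])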

  20. Sub-OBB based object recognition and localization algorithm using range images

    International Nuclear Information System (INIS)

    Hoang, Dinh-Cuong; Chen, Liang-Chia; Nguyen, Thanh-Hung

    2017-01-01

    This paper presents a novel approach to recognizing and estimating the pose of 3D objects in cluttered range images. The key technical breakthrough of the developed approach is that it enables robust object recognition and localization under undesirable conditions such as environmental illumination variation and optical occlusion that leaves the object only partially visible. First, the acquired point clouds are segmented into individual object point clouds based on the developed 3D object segmentation for randomly stacked objects. Second, an efficient shape-matching algorithm called Sub-OBB based object recognition, using the proposed oriented bounding box (OBB) regional area-based descriptor, is performed to reliably recognize the object. Then, the 3D position and orientation of the object can be roughly estimated by aligning the OBB of the segmented object point cloud with the OBB of the matched point cloud in a database generated from a CAD model and a 3D virtual camera. To estimate the pose of the object accurately, the iterative closest point (ICP) algorithm is used to match the object model with the segmented point clouds. From feasibility tests over several scenarios, the developed approach is verified to be feasible for object pose recognition and localization. (paper)
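
    For the final refinement step, a minimal point-to-point ICP iteration looks like the following sketch (brute-force nearest neighbours for clarity; a k-d tree would be used in practice, and accumulation of the overall rigid transform is omitted).

        import numpy as np

        def icp(src, dst, n_iters=50):
            src = src.copy()
            for _ in range(n_iters):
                # Nearest-neighbour correspondences (brute force for clarity).
                idx = np.argmin(((src[:, None] - dst[None]) ** 2).sum(-1), axis=1)
                d = dst[idx]
                # Closed-form rigid alignment of the matched pairs (Kabsch/SVD).
                mu_s, mu_d = src.mean(0), d.mean(0)
                U, _, Vt = np.linalg.svd((src - mu_s).T @ (d - mu_d))
                R = Vt.T @ U.T
                if np.linalg.det(R) < 0:         # guard against reflections
                    Vt[-1] *= -1
                    R = Vt.T @ U.T
                src = (src - mu_s) @ R.T + mu_d  # apply the incremental transform
            return src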

  1. A sparse matrix based full-configuration interaction algorithm

    International Nuclear Information System (INIS)

    Rolik, Zoltan; Szabados, Agnes; Surjan, Peter R.

    2008-01-01

    We present an algorithm related to the full-configuration interaction (FCI) method that makes complete use of the sparse nature of the coefficient vector representing the many-electron wave function in a determinantal basis. The main achievements of the presented sparse FCI (SFCI) algorithm are (i) the development of an iteration procedure that avoids the storage of FCI-size vectors; and (ii) the development of an efficient algorithm to evaluate the effect of the Hamiltonian when both the initial and the product vectors are sparse. As a result of point (i), large disk operations, which may otherwise be a bottleneck of the procedure, can be skipped. For point (ii) we progress by adapting the implementation of the linear transformation by Olsen et al. [J. Chem. Phys. 89, 2185 (1988)] to the sparse case, making the algorithm applicable to larger systems and faster at the same time. The error of an SFCI calculation depends only on the dropout thresholds for the sparse vectors, and can be tuned by controlling the amount of system memory passed to the procedure. The algorithm permits FCI calculations to be performed on single-node workstations for systems previously accessible only to supercomputers.
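
    The heart of such a sparse-sparse iteration can be sketched as follows: apply the sparse Hamiltonian to a sparse coefficient vector and prune entries below the dropout threshold. The surrounding shifted power iteration is only an illustrative driver, not the paper's actual iteration procedure.

        import numpy as np
        import scipy.sparse as sp
        from scipy.sparse.linalg import norm

        def sparse_apply(H, c, drop_tol=1e-8):
            # Sparse operator times sparse vector, followed by dropout:
            # the threshold controls both the error and the memory footprint.
            out = H @ c
            out.data[np.abs(out.data) < drop_tol] = 0.0
            out.eliminate_zeros()
            return out

        n = 10000
        H = sp.random(n, n, density=1e-4, format='csr', random_state=1)
        H = H + H.T + sp.diags(np.linspace(1.0, 50.0, n))
        c = sp.csr_matrix(([1.0], ([0], [0])), shape=(n, 1))  # sparse start vector
        for _ in range(50):      # shifted power iteration toward the lowest state
            c = sparse_apply(60.0 * sp.identity(n, format='csr') - H, c)
            c = c / norm(c)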

  2. Sum of the Magnitude for Hard Decision Decoding Algorithm Based on Loop Update Detection

    Science.gov (United States)

    Meng, Jiahui; Zhao, Danfeng; Tian, Hai; Zhang, Liang

    2018-01-01

    In order to improve the performance of the hard decision decoding algorithm for non-binary low-density parity check (LDPC) codes and to reduce the complexity of decoding, a sum of the magnitude for hard decision decoding algorithm based on loop update detection is proposed. This will also ensure the reliability, stability and high transmission rate of 5G mobile communication. The algorithm is based on the hard decision decoding algorithm (HDA) and uses the soft information from the channel to calculate the reliability, while the sum of the variable nodes’ (VN) magnitude is excluded for computing the reliability of the parity checks. At the same time, the reliability information of the variable node is considered and the loop update detection algorithm is introduced. The bits of the erroneous code word are flipped multiple times, searching in order of most likely error probability until the correct code word is finally found. Simulation results show that the performance of one of the improved schemes is better than the weighted symbol flipping (WSF) algorithm under different hexadecimal numbers by about 2.2 dB and 2.35 dB at a bit error rate (BER) of 10−5 over an additive white Gaussian noise (AWGN) channel, respectively. Furthermore, the average number of decoding iterations is significantly reduced.

  3. Sum of the Magnitude for Hard Decision Decoding Algorithm Based on Loop Update Detection.

    Science.gov (United States)

    Meng, Jiahui; Zhao, Danfeng; Tian, Hai; Zhang, Liang

    2018-01-15

    In order to improve the performance of the hard decision decoding algorithm for non-binary low-density parity check (LDPC) codes and to reduce the complexity of decoding, a sum of the magnitude for hard decision decoding algorithm based on loop update detection is proposed. This will also ensure the reliability, stability and high transmission rate of 5G mobile communication. The algorithm is based on the hard decision decoding algorithm (HDA) and uses the soft information from the channel to calculate the reliability, while the sum of the variable nodes' (VN) magnitude is excluded for computing the reliability of the parity checks. At the same time, the reliability information of the variable node is considered and the loop update detection algorithm is introduced. The bits of the erroneous code word are flipped multiple times, searching in order of most likely error probability until the correct code word is finally found. Simulation results show that the performance of one of the improved schemes is better than the weighted symbol flipping (WSF) algorithm under different hexadecimal numbers by about 2.2 dB and 2.35 dB at a bit error rate (BER) of 10−5 over an additive white Gaussian noise (AWGN) channel, respectively. Furthermore, the average number of decoding iterations is significantly reduced.

  4. Sum of the Magnitude for Hard Decision Decoding Algorithm Based on Loop Update Detection

    Directory of Open Access Journals (Sweden)

    Jiahui Meng

    2018-01-01

    Full Text Available In order to improve the performance of the hard decision decoding algorithm for non-binary low-density parity check (LDPC) codes and to reduce the complexity of decoding, a sum of the magnitude for hard decision decoding algorithm based on loop update detection is proposed. This will also ensure the reliability, stability and high transmission rate of 5G mobile communication. The algorithm is based on the hard decision decoding algorithm (HDA) and uses the soft information from the channel to calculate the reliability, while the sum of the variable nodes’ (VN) magnitude is excluded for computing the reliability of the parity checks. At the same time, the reliability information of the variable node is considered and the loop update detection algorithm is introduced. The bits of the erroneous code word are flipped multiple times, searching in order of most likely error probability until the correct code word is finally found. Simulation results show that the performance of one of the improved schemes is better than the weighted symbol flipping (WSF) algorithm under different hexadecimal numbers by about 2.2 dB and 2.35 dB at a bit error rate (BER) of 10−5 over an additive white Gaussian noise (AWGN) channel, respectively. Furthermore, the average number of decoding iterations is significantly reduced.
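
    As a simplified binary analogue of such symbol-flipping decoders, a Gallager-style bit-flipping sketch is given below; the stopping rule (all checks satisfied) and the flip criterion (bits involved in the most unsatisfied checks) mirror the structure described above, while the non-binary magnitude sums and loop update detection are omitted.

        import numpy as np

        def bit_flip_decode(H, y, max_iters=50):
            # H: binary parity-check matrix (m x n); y: hard-decision word (0/1 ints).
            x = y.copy()
            for _ in range(max_iters):
                syndrome = H @ x % 2
                if not syndrome.any():
                    return x, True               # all parity checks satisfied
                # Count, per bit, the unsatisfied checks it participates in.
                unsat = syndrome @ H
                x[unsat == unsat.max()] ^= 1     # flip the most suspect bits
            return x, False                      # flipping all maxima at once can oscillate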

  5. [Automatic Sleep Stage Classification Based on an Improved K-means Clustering Algorithm].

    Science.gov (United States)

    Xiao, Shuyuan; Wang, Bei; Zhang, Jian; Zhang, Qunfeng; Zou, Junzhong

    2016-10-01

    Sleep stage scoring is a hotspot in the field of medicine and neuroscience. Visual inspection of sleep is laborious and the results may be subjective to different clinicians. An automatic sleep stage classification algorithm can be used to reduce the manual workload. However, there are still limitations when it encounters complicated and changeable clinical cases. The purpose of this paper is to develop an automatic sleep staging algorithm based on the characteristics of actual sleep data. In the proposed improved K-means clustering algorithm, points were selected as the initial centers by using a concept of density to avoid the randomness of the original K-means algorithm. Meanwhile, the cluster centers were updated according to the 'Three-Sigma Rule' during the iteration to abate the influence of the outliers. The proposed method was tested and analyzed on the overnight sleep data of healthy persons and patients with sleep disorders after continuous positive airway pressure (CPAP) treatment. The automatic sleep stage classification results were compared with the visual inspection by qualified clinicians and the averaged accuracy reached 76%. With the analysis of the morphological diversity of sleep data, it was proved that the proposed improved K-means algorithm was feasible and valid for clinical practice.
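
    A minimal sketch of the two modifications, assuming a user-chosen density radius (the paper's exact initialization and outlier rules are not reproduced): density-based selection of the initial centers, and center updates that exclude points lying outside three standard deviations.

        import numpy as np

        def density_init(X, k, radius):
            # Pick dense, mutually separated points as initial centers; assumes
            # the radius is chosen so that k such points exist.
            counts = np.array([np.sum(np.linalg.norm(X - p, axis=1) < radius) for p in X])
            centers, chosen = [], []
            for i in np.argsort(-counts):
                if all(np.linalg.norm(X[i] - X[j]) > radius for j in chosen):
                    chosen.append(i)
                    centers.append(X[i])
                if len(centers) == k:
                    break
            return np.array(centers)

        def robust_kmeans(X, k, radius, n_iters=100):
            centers = density_init(X, k, radius)
            for _ in range(n_iters):
                labels = np.argmin(np.linalg.norm(X[:, None, :] - centers[None], axis=2), axis=1)
                for c in range(k):
                    pts = X[labels == c]
                    if len(pts) == 0:
                        continue
                    # 'Three-sigma rule': drop outliers before updating the center.
                    mu, sd = pts.mean(axis=0), pts.std(axis=0) + 1e-12
                    keep = np.all(np.abs(pts - mu) <= 3 * sd, axis=1)
                    centers[c] = pts[keep].mean(axis=0) if keep.any() else mu
            return labels, centers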

  6. A novel iterative energy calibration method for composite germanium detectors

    International Nuclear Information System (INIS)

    Pattabiraman, N.S.; Chintalapudi, S.N.; Ghugre, S.S.

    2004-01-01

    An automatic method for energy calibration of the observed experimental spectrum has been developed. The method presented is based on an iterative algorithm and presents an efficient way to perform energy calibrations after establishing the weights of the calibration data. An application of this novel technique to data acquired using composite detectors in an in-beam γ-ray spectroscopy experiment is presented.

  7. A novel iterative energy calibration method for composite germanium detectors

    Energy Technology Data Exchange (ETDEWEB)

    Pattabiraman, N.S.; Chintalapudi, S.N.; Ghugre, S.S. E-mail: ssg@alpha.iuc.res.in

    2004-07-01

    An automatic method for energy calibration of the observed experimental spectrum has been developed. The method presented is based on an iterative algorithm and presents an efficient way to perform energy calibrations after establishing the weights of the calibration data. An application of this novel technique to data acquired using composite detectors in an in-beam γ-ray spectroscopy experiment is presented.

  8. Evaluating the performance of two neutron spectrum unfolding codes based on iterative procedures and artificial neural networks

    International Nuclear Information System (INIS)

    Ortiz-Rodríguez, J. M.; Reyes Alfaro, A.; Reyes Haro, A.; Solís Sánches, L. O.; Miranda, R. Castañeda; Cervantes Viramontes, J. M.; Vega-Carrillo, H. R.

    2013-01-01

    In this work the performance of two neutron spectrum unfolding codes based on iterative procedures and artificial neural networks is evaluated. The first code, based on traditional iterative procedures and called Neutron spectrometry and dosimetry from the Universidad Autonoma de Zacatecas (NSDUAZ), uses the SPUNIT iterative algorithm and was designed to unfold the neutron spectrum and calculate 15 dosimetric quantities and the readings of 7 IAEA survey meters. The main feature of this code is the automated selection of the initial guess spectrum through a compendium of neutron spectra compiled by the IAEA. The second code, known as Neutron spectrometry and dosimetry with artificial neural networks (NSDann), is designed using neural network technology. The artificial intelligence approach of a neural net does not solve mathematical equations. By using the knowledge stored in the synaptic weights of a properly trained neural net, the code is capable of unfolding the neutron spectrum and simultaneously calculating 15 dosimetric quantities, needing as input data only the count rates measured with a Bonner sphere system. Similarities of both the NSDUAZ and NSDann codes are: they follow the same easy and intuitive user philosophy and were designed in a graphical interface under the LabVIEW programming environment. Both codes unfold the neutron spectrum expressed in 60 energy bins, calculate 15 dosimetric quantities and generate a full report in HTML format. Differences between these codes are: the NSDUAZ code was designed using classical iterative approaches and needs an initial guess spectrum in order to initiate the iterative procedure. In NSDUAZ, a programming routine was designed to calculate 7 IAEA instrument survey meters using the fluence-dose conversion coefficients. The NSDann code uses artificial neural networks to solve the ill-conditioned equation system of the neutron spectrometry problem through the synaptic weights of a properly trained neural network. Contrary to iterative procedures, in neural

  9. Evaluating the performance of two neutron spectrum unfolding codes based on iterative procedures and artificial neural networks

    Science.gov (United States)

    Ortiz-Rodríguez, J. M.; Reyes Alfaro, A.; Reyes Haro, A.; Solís Sánches, L. O.; Miranda, R. Castañeda; Cervantes Viramontes, J. M.; Vega-Carrillo, H. R.

    2013-07-01

    In this work the performance of two neutron spectrum unfolding codes based on iterative procedures and artificial neural networks is evaluated. The first code, based on traditional iterative procedures and called Neutron spectrometry and dosimetry from the Universidad Autonoma de Zacatecas (NSDUAZ), uses the SPUNIT iterative algorithm and was designed to unfold the neutron spectrum and calculate 15 dosimetric quantities and the readings of 7 IAEA survey meters. The main feature of this code is the automated selection of the initial guess spectrum through a compendium of neutron spectra compiled by the IAEA. The second code, known as Neutron spectrometry and dosimetry with artificial neural networks (NSDann), is designed using neural network technology. The artificial intelligence approach of a neural net does not solve mathematical equations. By using the knowledge stored in the synaptic weights of a properly trained neural net, the code is capable of unfolding the neutron spectrum and simultaneously calculating 15 dosimetric quantities, needing as input data only the count rates measured with a Bonner sphere system. Similarities of both the NSDUAZ and NSDann codes are: they follow the same easy and intuitive user philosophy and were designed in a graphical interface under the LabVIEW programming environment. Both codes unfold the neutron spectrum expressed in 60 energy bins, calculate 15 dosimetric quantities and generate a full report in HTML format. Differences between these codes are: the NSDUAZ code was designed using classical iterative approaches and needs an initial guess spectrum in order to initiate the iterative procedure. In NSDUAZ, a programming routine was designed to calculate 7 IAEA instrument survey meters using the fluence-dose conversion coefficients. The NSDann code uses artificial neural networks to solve the ill-conditioned equation system of the neutron spectrometry problem through the synaptic weights of a properly trained neural network. Contrary to iterative procedures, in neural

  10. Evaluating the performance of two neutron spectrum unfolding codes based on iterative procedures and artificial neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Ortiz-Rodriguez, J. M.; Reyes Alfaro, A.; Reyes Haro, A.; Solis Sanches, L. O.; Miranda, R. Castaneda; Cervantes Viramontes, J. M. [Universidad Autonoma de Zacatecas, Unidad Academica de Ingenieria Electrica. Av. Ramon Lopez Velarde 801. Col. Centro Zacatecas, Zac (Mexico); Vega-Carrillo, H. R. [Universidad Autonoma de Zacatecas, Unidad Academica de Ingenieria Electrica. Av. Ramon Lopez Velarde 801. Col. Centro Zacatecas, Zac., Mexico. and Unidad Academica de Estudios Nucleares. C. Cip (Mexico)

    2013-07-03

    In this work the performance of two neutron spectrum unfolding codes based on iterative procedures and artificial neural networks is evaluated. The first code, based on traditional iterative procedures and called Neutron spectrometry and dosimetry from the Universidad Autonoma de Zacatecas (NSDUAZ), uses the SPUNIT iterative algorithm and was designed to unfold the neutron spectrum and calculate 15 dosimetric quantities and the readings of 7 IAEA survey meters. The main feature of this code is the automated selection of the initial guess spectrum through a compendium of neutron spectra compiled by the IAEA. The second code, known as Neutron spectrometry and dosimetry with artificial neural networks (NSDann), is designed using neural network technology. The artificial intelligence approach of a neural net does not solve mathematical equations. By using the knowledge stored in the synaptic weights of a properly trained neural net, the code is capable of unfolding the neutron spectrum and simultaneously calculating 15 dosimetric quantities, needing as input data only the count rates measured with a Bonner sphere system. Similarities of both the NSDUAZ and NSDann codes are: they follow the same easy and intuitive user philosophy and were designed in a graphical interface under the LabVIEW programming environment. Both codes unfold the neutron spectrum expressed in 60 energy bins, calculate 15 dosimetric quantities and generate a full report in HTML format. Differences between these codes are: the NSDUAZ code was designed using classical iterative approaches and needs an initial guess spectrum in order to initiate the iterative procedure. In NSDUAZ, a programming routine was designed to calculate 7 IAEA instrument survey meters using the fluence-dose conversion coefficients. The NSDann code uses artificial neural networks to solve the ill-conditioned equation system of the neutron spectrometry problem through the synaptic weights of a properly trained neural network. Contrary to iterative procedures, in

  11. Parallel iterative procedures for approximate solutions of wave propagation by finite element and finite difference methods

    Energy Technology Data Exchange (ETDEWEB)

    Kim, S. [Purdue Univ., West Lafayette, IN (United States)

    1994-12-31

    Parallel iterative procedures based on domain decomposition techniques are defined and analyzed for the numerical solution of wave propagation by finite element and finite difference methods. For finite element methods, in a Lagrangian framework, an efficient way of choosing the algorithm parameter is indicated and the convergence of the algorithm is established. Some heuristic arguments for finding the algorithm parameter for finite difference schemes are addressed. Numerical results are presented to indicate the effectiveness of the methods.

  12. Fisher's method of scoring in statistical image reconstruction: comparison of Jacobi and Gauss-Seidel iterative schemes.

    Science.gov (United States)

    Hudson, H M; Ma, J; Green, P

    1994-01-01

    Many algorithms for medical image reconstruction adopt versions of the expectation-maximization (EM) algorithm. In this approach, parameter estimates are obtained which maximize a complete data likelihood or penalized likelihood, in each iteration. Implicitly (and sometimes explicitly) penalized algorithms require smoothing of the current reconstruction in the image domain as part of their iteration scheme. In this paper, we discuss alternatives to EM which adapt Fisher's method of scoring (FS) and other methods for direct maximization of the incomplete data likelihood. Jacobi and Gauss-Seidel methods for non-linear optimization provide efficient algorithms applying FS in tomography. One approach uses smoothed projection data in its iterations. We investigate the convergence of Jacobi and Gauss-Seidel algorithms with clinical tomographic projection data.
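
    The difference between the two update schemes is easiest to see on a linear system: Jacobi updates every component from the previous iterate (fully parallel), while Gauss-Seidel reuses freshly updated components within a sweep (sequential, but usually faster to converge). A minimal sketch:

        import numpy as np

        def jacobi_step(A, b, x):
            # Every component is updated from the previous iterate.
            D = np.diag(A)
            return (b - (A @ x - D * x)) / D

        def gauss_seidel_step(A, b, x):
            # Components are updated in place, immediately reusing new values.
            x = x.copy()
            for i in range(len(b)):
                x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
            return x

        # Compare on a small diagonally dominant system.
        A = np.array([[4.0, -1, 0], [-1, 4, -1], [0, -1, 4]])
        b = np.array([1.0, 2, 3])
        xj = xg = np.zeros(3)
        for _ in range(25):
            xj, xg = jacobi_step(A, b, xj), gauss_seidel_step(A, b, xg)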

  13. Iterative Adaptive Sampling For Accurate Direct Illumination

    National Research Council Canada - National Science Library

    Donikian, Michael

    2004-01-01

    This thesis introduces a new multipass algorithm, Iterative Adaptive Sampling, for efficiently computing the direct illumination in scenes with many lights, including area lights that cause realistic soft shadows...

  14. Assessing image quality and dose reduction of a new x-ray computed tomography iterative reconstruction algorithm using model observers

    International Nuclear Information System (INIS)

    Tseng, Hsin-Wu; Kupinski, Matthew A.; Fan, Jiahua; Sainath, Paavana; Hsieh, Jiang

    2014-01-01

    Purpose: A number of different techniques have been developed to reduce radiation dose in x-ray computed tomography (CT) imaging. In this paper, the authors compare task-based measures of image quality of CT images reconstructed by two algorithms: conventional filtered back projection (FBP), and a new iterative reconstruction algorithm (IR). Methods: To assess image quality, the authors used the performance of a channelized Hotelling observer acting on reconstructed image slices. The selected channels are dense difference-of-Gaussian (DDOG) channels. A body phantom and a head phantom were imaged 50 times at different dose levels to obtain the data needed to assess image quality. The phantoms consisted of uniform backgrounds with low-contrast signals embedded at various locations. The tasks the observer model performed included (1) detection of a signal of known location and shape, and (2) detection and localization of a signal of known shape. The employed DDOG channels are based on the response of the human visual system. Performance was assessed using the areas under ROC curves and areas under localization ROC curves. Results: For the signal known exactly (SKE) and location unknown/signal shape known tasks with circular signals of different sizes and contrasts, the authors’ task-based measures showed that image quality equivalent to FBP can be achieved at lower dose levels using the IR algorithm. For the SKE case, the range of dose reduction is 50%–67% (head phantom) and 68%–82% (body phantom). For the location unknown/signal shape known study, the dose reduction range is 67%–75% for the head phantom and 67%–77% for the body phantom. These results suggest that IR images at lower dose settings can reach the same image quality as full dose conventional FBP images. Conclusions: The work presented provides an objective way to quantitatively assess the image quality of a newly introduced CT IR algorithm. The performance of the
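
    The core channelized Hotelling computation for the SKE task can be sketched as below, with ch_signal and ch_noise standing for channelized signal-present and signal-absent image data; channel generation and the localization variant are omitted, and the names are assumptions.

        import numpy as np

        def hotelling_auc(ch_signal, ch_noise):
            # Rows are trials, columns are channel outputs.
            mean_diff = ch_signal.mean(0) - ch_noise.mean(0)
            S = 0.5 * (np.cov(ch_signal.T) + np.cov(ch_noise.T))
            w = np.linalg.solve(S, mean_diff)           # Hotelling template
            t1, t0 = ch_signal @ w, ch_noise @ w        # decision variables
            # Area under the ROC curve via the Mann-Whitney statistic.
            return (t1[:, None] > t0[None, :]).mean()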

  15. Uniform convergence of multigrid V-cycle iterations for indefinite and nonsymmetric problems

    Science.gov (United States)

    Bramble, James H.; Kwak, Do Y.; Pasciak, Joseph E.

    1993-01-01

    In this paper, we present an analysis of a multigrid method for nonsymmetric and/or indefinite elliptic problems. In this multigrid method various types of smoothers may be used. One type of smoother which we consider is defined in terms of an associated symmetric problem and includes point and line Jacobi and Gauss-Seidel iterations. We also study smoothers based entirely on the original operator. One is based on the normal form, that is, the product of the operator and its transpose. Other smoothers studied include point and line Jacobi and Gauss-Seidel iterations. We show that the uniform estimates for symmetric positive definite problems carry over to these algorithms. More precisely, the multigrid iteration for the nonsymmetric and/or indefinite problem is shown to converge at a uniform rate provided that the coarsest grid in the multilevel iteration is sufficiently fine (but not depending on the number of multigrid levels).
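
    For the symmetric positive definite model problem, a V-cycle has the familiar recursive shape sketched below (1D Poisson with Gauss-Seidel smoothing and n = 2^k - 1 interior unknowns); the point of the paper is that, with suitable smoothers, such cycles still converge uniformly in the nonsymmetric/indefinite case.

        import numpy as np

        def v_cycle(u, f, h, n_smooth=3):
            # One V-cycle for -u'' = f with zero Dirichlet boundaries;
            # u and f hold the n = 2^k - 1 interior values.
            n = len(u)

            def smooth(u, sweeps):                     # Gauss-Seidel relaxation
                for _ in range(sweeps):
                    for i in range(n):
                        left = u[i - 1] if i > 0 else 0.0
                        right = u[i + 1] if i < n - 1 else 0.0
                        u[i] = 0.5 * (left + right + h * h * f[i])
                return u

            if n == 1:                                 # coarsest grid: solve exactly
                u[0] = 0.5 * h * h * f[0]
                return u

            u = smooth(u, n_smooth)                    # pre-smoothing
            Au = (2 * u - np.r_[0.0, u[:-1]] - np.r_[u[1:], 0.0]) / (h * h)
            r = f - Au                                 # fine-grid residual
            rc = 0.25 * (r[0:-2:2] + 2 * r[1:-1:2] + r[2::2])  # full weighting
            ec = v_cycle(np.zeros_like(rc), rc, 2 * h, n_smooth)
            e = np.zeros(n)                            # linear interpolation back
            e[1::2] = ec
            ext = np.r_[0.0, ec, 0.0]
            e[0::2] = 0.5 * (ext[:-1] + ext[1:])
            return smooth(u + e, n_smooth)             # correct, then post-smooth

        n, h = 2**7 - 1, 1.0 / 2**7
        u, f = np.zeros(n), np.ones(n)
        for _ in range(10):                            # each cycle contracts the error
            u = v_cycle(u, f, h)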

  16. a Threshold-Free Filtering Algorithm for Airborne LIDAR Point Clouds Based on Expectation-Maximization

    Science.gov (United States)

    Hui, Z.; Cheng, P.; Ziggah, Y. Y.; Nie, Y.

    2018-04-01

    Filtering is a key step for most applications of airborne LiDAR point clouds. Although many filtering algorithms have been put forward in recent years, most of them require parameter setting or threshold adjusting, which is time-consuming and reduces the degree of automation of the algorithm. To overcome this problem, this paper proposes a threshold-free filtering algorithm based on expectation-maximization. The proposed algorithm is developed on the assumption that the point cloud can be seen as a mixture of Gaussian models, so that separating ground points from non-ground points can be recast as separating a mixed Gaussian model. Expectation-maximization (EM) is applied to realize the separation: EM is used to calculate maximum likelihood estimates of the mixture parameters, and, using the estimated parameters, the likelihood of each point belonging to ground or object can be computed. After several iterations, each point is labelled with the component of larger likelihood. Furthermore, intensity information is utilized to optimize the filtering results acquired using the EM method. The proposed algorithm was tested using two different datasets used in practice. Experimental results showed that the proposed method can filter non-ground points effectively. To quantitatively evaluate the proposed method, this paper adopted the dataset provided by the ISPRS for the test. The proposed algorithm obtains a 4.48 % total error, which is much lower than that of most of the eight classical filtering algorithms reported by the ISPRS.
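
    The EM separation step can be sketched for a one-dimensional per-point feature (here, hypothetically, height above a coarse ground estimate; the paper formulates the mixture on the point cloud directly):

        import numpy as np

        def em_two_gaussians(z, n_iters=100):
            # Fit a two-component 1D Gaussian mixture and label each point.
            mu = np.array([z.min(), z.max()])
            var = np.array([z.var(), z.var()]) + 1e-9
            pi = np.array([0.5, 0.5])
            for _ in range(n_iters):
                # E-step: posterior responsibility of each component per point.
                lik = pi / np.sqrt(2 * np.pi * var) * np.exp(-0.5 * (z[:, None] - mu) ** 2 / var)
                resp = lik / (lik.sum(axis=1, keepdims=True) + 1e-300)
                # M-step: maximum-likelihood update of the mixture parameters.
                nk = resp.sum(axis=0)
                mu = (resp * z[:, None]).sum(axis=0) / nk
                var = (resp * (z[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-9
                pi = nk / len(z)
            return resp.argmax(axis=1)   # label by the component of larger likelihood

        z = np.r_[np.random.normal(0.0, 0.1, 500),   # ground-like heights
                  np.random.normal(2.0, 0.5, 200)]   # object-like heights
        labels = em_two_gaussians(z)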

  17. WE-D-18A-04: How Iterative Reconstruction Algorithms Affect the MTFs of Variable-Contrast Targets in CT Images

    Energy Technology Data Exchange (ETDEWEB)

    Dodge, C.T.; Rong, J. [MD Anderson Cancer Center, Houston, TX (United States); Dodge, C.W. [Methodist Hospital, Houston, TX (United States)

    2014-06-15

    Purpose: To determine how filtered back-projection (FBP), adaptive statistical (ASiR), and model based (MBIR) iterative reconstruction algorithms affect the measured modulation transfer functions (MTFs) of variable-contrast targets over a wide range of clinically applicable dose levels. Methods: The Catphan 600 CTP401 module, surrounded by an oval, fat-equivalent ring to mimic patient size/shape, was scanned on a GE HD750 CT scanner at 1, 2, 3, 6, 12 and 24 mGy CTDIvol levels with typical patient scan parameters: 120 kVp, 0.8 s, 40 mm beam width, large SFOV, 2.5 mm thickness, 0.984 pitch. The images were reconstructed using GE's Standard kernel with FBP; 20%, 40% and 70% ASiR; and MBIR. A task-based MTF (MTFtask) was computed for six cylindrical targets: 2 low-contrast (Polystyrene, LDPE), 2 medium-contrast (Delrin, PMP), and 2 high-contrast (Teflon, air). MTFtask was used to compare the performance of the reconstruction algorithms as CTDIvol decreases from 24 mGy, which is currently used in the clinic. Results: For the air target and 75% dose savings (6 mGy), MBIR MTFtask at 5 lp/cm measured 0.24, compared to 0.20 for 70% ASiR and 0.11 for FBP. Overall, for both high-contrast targets, MBIR MTFtask improved with increasing CTDIvol and consistently outperformed ASiR and FBP near the system's Nyquist frequency. Conversely, for Polystyrene at 6 mGy, MBIR (0.10) and 70% ASiR (0.07) MTFtask was lower than for FBP (0.18). For medium- and low-contrast targets, FBP remains the best overall algorithm for improved resolution at low CTDIvol (1–6 mGy) levels, whereas MBIR is comparable at higher dose levels (12–24 mGy). Conclusion: MBIR improved the MTF of small, high-contrast targets compared to FBP and ASiR at doses of 50%–12.5% of those currently used in the clinic. However, for imaging low- and medium-contrast targets, FBP performed the best across all dose levels. For assessing the MTF from different reconstruction algorithms, task-based MTF measurements are necessary.

  18. WE-D-18A-04: How Iterative Reconstruction Algorithms Affect the MTFs of Variable-Contrast Targets in CT Images

    International Nuclear Information System (INIS)

    Dodge, C.T.; Rong, J.; Dodge, C.W.

    2014-01-01

    Purpose: To determine how filtered back-projection (FBP), adaptive statistical (ASiR), and model based (MBIR) iterative reconstruction algorithms affect the measured modulation transfer functions (MTFs) of variable-contrast targets over a wide range of clinically applicable dose levels. Methods: The Catphan 600 CTP401 module, surrounded by an oval, fat-equivalent ring to mimic patient size/shape, was scanned on a GE HD750 CT scanner at 1, 2, 3, 6, 12 and 24 mGy CTDIvol levels with typical patient scan parameters: 120 kVp, 0.8 s, 40 mm beam width, large SFOV, 2.5 mm thickness, 0.984 pitch. The images were reconstructed using GE's Standard kernel with FBP; 20%, 40% and 70% ASiR; and MBIR. A task-based MTF (MTFtask) was computed for six cylindrical targets: 2 low-contrast (Polystyrene, LDPE), 2 medium-contrast (Delrin, PMP), and 2 high-contrast (Teflon, air). MTFtask was used to compare the performance of the reconstruction algorithms as CTDIvol decreases from 24 mGy, which is currently used in the clinic. Results: For the air target and 75% dose savings (6 mGy), MBIR MTFtask at 5 lp/cm measured 0.24, compared to 0.20 for 70% ASiR and 0.11 for FBP. Overall, for both high-contrast targets, MBIR MTFtask improved with increasing CTDIvol and consistently outperformed ASiR and FBP near the system's Nyquist frequency. Conversely, for Polystyrene at 6 mGy, MBIR (0.10) and 70% ASiR (0.07) MTFtask was lower than for FBP (0.18). For medium- and low-contrast targets, FBP remains the best overall algorithm for improved resolution at low CTDIvol (1–6 mGy) levels, whereas MBIR is comparable at higher dose levels (12–24 mGy). Conclusion: MBIR improved the MTF of small, high-contrast targets compared to FBP and ASiR at doses of 50%–12.5% of those currently used in the clinic. However, for imaging low- and medium-contrast targets, FBP performed the best across all dose levels. For assessing the MTF from different reconstruction algorithms, task-based MTF measurements are necessary.

  19. On a Hopping-Points SVD and Hough Transform-Based Line Detection Algorithm for Robot Localization and Mapping

    Directory of Open Access Journals (Sweden)

    Abhijeet Ravankar

    2016-05-01

    Full Text Available Line detection is an important problem in computer vision, graphics and autonomous robot navigation. Lines detected using a laser range sensor (LRS) mounted on a robot can be used as features to build a map of the environment, and later to localize the robot in the map, in a process known as Simultaneous Localization and Mapping (SLAM). We propose an efficient algorithm for line detection from LRS data using a novel hopping-points Singular Value Decomposition (SVD) and Hough transform-based algorithm, in which SVD is applied to intermittent LRS points to accelerate the algorithm. A reverse-hop mechanism ensures that the end points of the line segments are accurately extracted. Line segments extracted from the proposed algorithm are used to form a map and, subsequently, LRS data points are matched with the line segments to localize the robot. The proposed algorithm eliminates the drawbacks of point-based matching algorithms like the Iterative Closest Points (ICP) algorithm, the performance of which degrades with an increasing number of points. We tested the proposed algorithm for mapping and localization in both simulated and real environments, and found it to detect lines accurately and build maps with good self-localization.
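
    The SVD step amounts to a total-least-squares line fit on a subset of scan points; below is a sketch of a coarse fit on every other ("hopped") point followed by an inlier refinement. The threshold and data are illustrative assumptions.

        import numpy as np

        def fit_line_svd(points):
            # Total least squares: the dominant right singular vector of the
            # centered points is the line direction; the other (in 2D) is the normal.
            centroid = points.mean(axis=0)
            _, _, vt = np.linalg.svd(points - centroid)
            return centroid, vt[0], vt[-1]

        rng = np.random.default_rng(0)
        t = np.linspace(0, 1, 200)
        scan = np.column_stack([t, 0.5 * t + 0.01 * rng.standard_normal(200)])
        c, d, n = fit_line_svd(scan[::2])            # coarse fit on hopped points
        dist = np.abs((scan - c) @ n)                # point-to-line distances
        c, d, n = fit_line_svd(scan[dist < 0.03])    # refined fit on inliers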

  20. Solving large test-day models by iteration on data and preconditioned conjugate gradient.

    Science.gov (United States)

    Lidauer, M; Strandén, I; Mäntysaari, E A; Pösö, J; Kettunen, A

    1999-12-01

    A preconditioned conjugate gradient method was implemented in an iteration-on-data program for the estimation of breeding values, and its convergence characteristics were studied. An algorithm was used as a reference in which one fixed effect was solved by the Gauss-Seidel method, and the other effects were solved by a second-order Jacobi method. Implementation of the preconditioned conjugate gradient required storing four vectors (of size equal to the number of unknowns in the mixed model equations) in random access memory and reading the data at each round of iteration. The preconditioner comprised diagonal blocks of the coefficient matrix. Comparison of the algorithms was based on solutions of the mixed model equations obtained with a single-trait animal model and a single-trait, random regression test-day model. Data sets for both models used milk yield records of primiparous Finnish dairy cows. The animal model data comprised 665,629 lactation milk yields, and the random regression test-day model data 6,732,765 test-day milk yields. Both models included pedigree information on 1,099,622 animals. The animal model [random regression test-day model] required 122 [305] rounds of iteration to converge with the reference algorithm, but only 88 [149] with the preconditioned conjugate gradient. Solving the random regression test-day model with the preconditioned conjugate gradient required 237 megabytes of random access memory and took 14% of the computation time needed by the reference algorithm.
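
    A generic preconditioned conjugate gradient loop with a (block-)diagonal preconditioner, of the kind used above, can be sketched as follows; a random SPD matrix stands in for the mixed-model coefficient matrix, and single diagonal entries stand in for the diagonal blocks.

        import numpy as np

        def pcg(A, b, M_inv, tol=1e-8, max_iters=1000):
            # M_inv applies the inverse of the preconditioner to a vector.
            x = np.zeros_like(b)
            r = b - A @ x
            z = M_inv(r)
            p = z.copy()
            rz = r @ z
            for k in range(max_iters):
                Ap = A @ p
                alpha = rz / (p @ Ap)
                x += alpha * p
                r -= alpha * Ap
                if np.linalg.norm(r) < tol * np.linalg.norm(b):
                    return x, k + 1
                z = M_inv(r)
                rz_new = r @ z
                p = z + (rz_new / rz) * p
                rz = rz_new
            return x, max_iters

        rng = np.random.default_rng(2)
        Q = rng.standard_normal((200, 200))
        A = Q @ Q.T + 200 * np.eye(200)                   # SPD test system
        b = rng.standard_normal(200)
        x, iters = pcg(A, b, lambda r: r / np.diag(A))    # Jacobi preconditioner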

  1. An Online Dictionary Learning-Based Compressive Data Gathering Algorithm in Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Donghao Wang

    2016-09-01

    Full Text Available To adapt to sensed signals of enormous diversity and dynamics, and to decrease the reconstruction errors caused by ambient noise, a novel online dictionary learning method-based compressive data gathering (ODL-CDG) algorithm is proposed. The proposed dictionary is learned from a two-stage iterative procedure, alternating between a sparse coding step and a dictionary update step. The self-coherence of the learned dictionary is introduced as a penalty term during the dictionary update procedure. The dictionary is also constrained to have a sparse structure. It is theoretically demonstrated that the sensing matrix satisfies the restricted isometry property (RIP) with high probability. In addition, the lower bound on the necessary number of measurements for compressive sensing (CS) reconstruction is given. Simulation results show that the proposed ODL-CDG algorithm can enhance the recovery accuracy in the presence of noise, and reduce the energy consumption in comparison with other dictionary-based data gathering methods.

  2. An Iterative Brinkman penalization for particle vortex methods

    DEFF Research Database (Denmark)

    Walther, Jens Honore; Hejlesen, Mads Mølholm; Leonard, A.

    2013-01-01

    We present an iterative Brinkman penalization method for the enforcement of the no-slip boundary condition in vortex particle methods. This is achieved by implementing a penalization of the velocity field using iteration of the penalized vorticity. We show that using the conventional Brinkman...... condition. These are: the impulsively started flow past a cylinder, the impulsively started flow normal to a flat plate, and the uniformly accelerated flow normal to a flat plate. The iterative penalization algorithm is shown to give significantly improved results compared to the conventional penalization...

  3. Principal component analysis networks and algorithms

    CERN Document Server

    Kong, Xiangyu; Duan, Zhansheng

    2017-01-01

    This book not only provides a comprehensive introduction to neural-based PCA methods in control science, but also presents many novel PCA algorithms and their extensions and generalizations, e.g., dual purpose, coupled PCA, GED, neural based SVD algorithms, etc. It also discusses in detail various analysis methods for the convergence, stabilizing, self-stabilizing property of algorithms, and introduces the deterministic discrete-time systems method to analyze the convergence of PCA/MCA algorithms. Readers should be familiar with numerical analysis and the fundamentals of statistics, such as the basics of least squares and stochastic algorithms. Although it focuses on neural networks, the book only presents their learning law, which is simply an iterative algorithm. Therefore, no a priori knowledge of neural networks is required. This book will be of interest and serve as a reference source to researchers and students in applied mathematics, statistics, engineering, and other related fields.

  4. Worst-case Analysis of Strategy Iteration and the Simplex Method

    DEFF Research Database (Denmark)

    Hansen, Thomas Dueholm

    In this dissertation we study strategy iteration (also known as policy iteration) algorithms for solving Markov decision processes (MDPs) and two-player turn-based stochastic games (2TBSGs). MDPs provide a mathematical model for sequential decision making under uncertainty. They are widely used...... to model stochastic optimization problems in various areas ranging from operations research, machine learning, artificial intelligence, economics and game theory. The class of two-player turn-based stochastic games is a natural generalization of Markov decision processes that is obtained by introducing...... in the size of the problem (the bounds have subexponential form). Utilizing a tight connection between MDPs and linear programming, it is shown that the same bounds apply to the corresponding pivoting rules for the simplex method for solving linear programs. Prior to this result no super-polynomial lower...

  5. Matrix completion via a low rank factorization model and an Augmented Lagrangean Successive Overrelaxation Algorithm

    Directory of Open Access Journals (Sweden)

    Hugo Lara

    2014-12-01

    Full Text Available The matrix completion problem (MC) has been approximated by using the nuclear norm relaxation. Some algorithms based on this strategy require the computationally expensive singular value decomposition (SVD) at each iteration. One way to avoid SVD calculations is to use alternating methods, which pursue the completion through matrix factorization with a low rank condition. In this work an augmented Lagrangean-type alternating algorithm is proposed. The new algorithm uses duality information to define the iterations, in contrast to the solely primal LMaFit algorithm, which employs a successive overrelaxation scheme. The convergence result is studied. Some numerical experiments are given to compare the numerical performance of both proposals.
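
    A minimal SVD-free member of this family is alternating least squares on the factorization M ≈ XY over the observed entries, sketched below; this plain primal scheme illustrates only the low-rank factorization model, not the paper's augmented Lagrangean dual updates.

        import numpy as np

        def low_rank_complete(M, mask, rank, n_iters=200, reg=1e-6):
            # mask is a boolean array marking the observed entries of M.
            m, n = M.shape
            rng = np.random.default_rng(0)
            X = rng.standard_normal((m, rank))
            Y = rng.standard_normal((rank, n))
            for _ in range(n_iters):
                for i in range(m):                 # fix Y, solve for each row of X
                    cols = mask[i]
                    Yi = Y[:, cols]
                    X[i] = np.linalg.solve(Yi @ Yi.T + reg * np.eye(rank), Yi @ M[i, cols])
                for j in range(n):                 # fix X, solve for each column of Y
                    rows = mask[:, j]
                    Xj = X[rows]
                    Y[:, j] = np.linalg.solve(Xj.T @ Xj + reg * np.eye(rank), Xj.T @ M[rows, j])
            return X @ Y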

  6. Simulation-based algorithms for Markov decision processes

    CERN Document Server

    Chang, Hyeong Soo; Fu, Michael C; Marcus, Steven I

    2013-01-01

    Markov decision process (MDP) models are widely used for modeling sequential decision-making problems that arise in engineering, economics, computer science, and the social sciences.  Many real-world problems modeled by MDPs have huge state and/or action spaces, giving an opening to the curse of dimensionality and so making practical solution of the resulting models intractable.  In other cases, the system of interest is too complex to allow explicit specification of some of the MDP model parameters, but simulation samples are readily available (e.g., for random transitions and costs). For these settings, various sampling and population-based algorithms have been developed to overcome the difficulties of computing an optimal solution in terms of a policy and/or value function.  Specific approaches include adaptive sampling, evolutionary policy iteration, evolutionary random policy search, and model reference adaptive search. This substantially enlarged new edition reflects the latest developments in novel ...

  7. An efficient iteration strategy for the solution of the Euler equations

    Science.gov (United States)

    Walters, R. W.; Dwoyer, D. L.

    1985-01-01

    A line Gauss-Seidel (LGS) relaxation algorithm in conjunction with a one-parameter family of upwind discretizations of the Euler equations in two-dimensions is described. The basic algorithm has the property that convergence to the steady-state is quadratic for fully supersonic flows and linear otherwise. This is in contrast to the block ADI methods (either central or upwind differenced) and the upwind biased relaxation schemes, all of which converge linearly, independent of the flow regime. Moreover, the algorithm presented here is easily enhanced to detect regions of subsonic flow embedded in supersonic flow. This allows marching by lines in the supersonic regions, converging each line quadratically, and iterating in the subsonic regions, thus yielding a very efficient iteration strategy. Numerical results are presented for two-dimensional supersonic and transonic flows containing both oblique and normal shock waves which confirm the efficiency of the iteration strategy.

  8. Parallel algorithms for unconstrained optimization by multisplitting with inexact subspace search - the abstract

    Energy Technology Data Exchange (ETDEWEB)

    Renaut, R.; He, Q. [Arizona State Univ., Tempe, AZ (United States)

    1994-12-31

    A new parallel iterative algorithm for unconstrained optimization by multisplitting is proposed. In this algorithm the original problem is split into a set of small optimization subproblems which are solved using well known sequential algorithms. These algorithms are iterative in nature, e.g. the DFP variable metric method. Here the authors use sequential algorithms based on an inexact subspace search, which is an extension of the usual idea of an inexact line search. Essentially the idea of the inexact line search for nonlinear minimization is that at each iteration the authors only find an approximate minimum in the line search direction. Hence by inexact subspace search, they mean that, instead of finding the minimum of the subproblem at each iteration, they do an incomplete downhill search to give an approximate minimum. Some convergence and numerical results for this algorithm will be presented. Further, the original theory will be generalized to the situation with a singular Hessian. Applications for nonlinear least squares problems will be presented. Experimental results will be presented for implementations on an Intel iPSC/860 Hypercube with 64 nodes as well as on the Intel Paragon.

  9. Parallel conjugate gradient algorithms for manipulator dynamic simulation

    Science.gov (United States)

    Fijany, Amir; Scheld, Robert E.

    1989-01-01

    Parallel conjugate gradient algorithms for the computation of multibody dynamics are developed for the specialized case of a robot manipulator. For an n-dimensional positive-definite linear system, the Classical Conjugate Gradient (CCG) algorithm is guaranteed to converge in n iterations, each with a computation cost of O(n); this leads to a total computational cost of O(n²) on a serial processor. Conjugate gradient algorithms are presented that provide greater efficiency by using a preconditioner, which reduces the number of iterations required, and by exploiting parallelism, which reduces the cost of each iteration. Two Preconditioned Conjugate Gradient (PCG) algorithms are proposed which respectively use a diagonal and a tridiagonal matrix, composed of the diagonal and tridiagonal elements of the mass matrix, as preconditioners. Parallel algorithms are developed to compute the preconditioners and their inversions in O(log₂ n) steps using n processors. A parallel algorithm is also presented which, on the same architecture, achieves a computational time of O(log₂ n) for each iteration. Simulation results for a seven degree-of-freedom manipulator are presented. Variants of the proposed algorithms are also developed which can be efficiently implemented on the Robot Mathematics Processor (RMP).

  10. Non-iterative distance constraints enforcement for cloth drapes simulation

    Science.gov (United States)

    Hidajat, R. L. L. G.; Wibowo, Arifin, Z.; Suyitno

    2016-03-01

    Cloth simulation, which represents the behavior of cloth objects such as flags, tablecloths, or garments, has applications in clothing animation for games and virtual shops. Elastically deformable models are widely used to provide realistic and efficient simulation; however, the problem of overstretching is encountered. We introduce a new cloth simulation algorithm that replaces iterative distance constraint enforcement steps with non-iterative ones to prevent overstretching in a spring-mass system for cloth modeling. Our method is based on a simple position correction procedure applied at one end of a spring. In our experiments, we developed a rectangular cloth model which is initially at a horizontal position with one point fixed, and it is allowed to drape under its own weight. Our simulation achieves a plausible cloth drape, as in reality. This paper aims to demonstrate the reliability of our approach in overcoming overstretching while decreasing the computational cost of the constraint enforcement process, since the iterative procedure is eliminated.
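
    The replaced step can be sketched as a single pass of position corrections over the springs, each applied at one end only; processing the springs in order of distance from the fixed point (an assumption made here) lets a single pass propagate the corrections outward.

        import numpy as np

        def enforce_distances(pos, springs, rest, stiffness=1.0):
            # Non-iterative enforcement: every spring is corrected exactly once
            # by moving its free end toward the rest length.
            for (i, j), L in zip(springs, rest):
                d = pos[j] - pos[i]
                dist = np.linalg.norm(d)
                if dist > 1e-12:
                    pos[j] -= stiffness * (dist - L) * d / dist
            return pos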

  11. Impact of a New Adaptive Statistical Iterative Reconstruction (ASIR)-V Algorithm on Image Quality in Coronary Computed Tomography Angiography.

    Science.gov (United States)

    Pontone, Gianluca; Muscogiuri, Giuseppe; Andreini, Daniele; Guaricci, Andrea I; Guglielmo, Marco; Baggiano, Andrea; Fazzari, Fabio; Mushtaq, Saima; Conte, Edoardo; Annoni, Andrea; Formenti, Alberto; Mancini, Elisabetta; Verdecchia, Massimo; Campari, Alessandro; Martini, Chiara; Gatti, Marco; Fusini, Laura; Bonfanti, Lorenzo; Consiglio, Elisa; Rabbat, Mark G; Bartorelli, Antonio L; Pepi, Mauro

    2018-03-27

    A new postprocessing algorithm named adaptive statistical iterative reconstruction (ASIR)-V has been recently introduced. The aim of this article was to analyze the impact of the ASIR-V algorithm on signal, noise, and image quality of coronary computed tomography angiography. Fifty consecutive patients underwent clinically indicated coronary computed tomography angiography (Revolution CT; GE Healthcare, Milwaukee, WI). Images were reconstructed using filtered back projection and ASIR-V 0%, and a combination of filtered back projection and ASIR-V 20%-80% and ASIR-V 100%. Image noise, signal-to-noise ratio (SNR), and contrast-to-noise ratio (CNR) were calculated for the left main coronary artery (LM), left anterior descending artery (LAD), left circumflex artery (LCX), and right coronary artery (RCA) and were compared between the different postprocessing algorithms used. Similarly, a four-point Likert image quality score of coronary segments was graded for each dataset and compared. A cutoff value of P < 0.05 was considered statistically significant. Compared to ASIR-V 0%, ASIR-V 100% demonstrated a significant reduction of image noise in all coronaries. Compared to ASIR-V 0%, SNR was significantly higher with ASIR-V 60% in LM, and CNR for ASIR-V ≥60% was significantly improved in LM; image quality also improved with ASIR-V ≥80%. ASIR-V 60% had significantly better Likert image quality scores compared to ASIR-V 0% in segment-, vessel-, and patient-based analyses. In conclusion, ASIR-V 60% provides the optimal balance between image noise, SNR, CNR, and image quality. Copyright © 2018 The Association of University Radiologists. Published by Elsevier Inc. All rights reserved.

  12. Vascular diameter measurement in CT angiography: comparison of model-based iterative reconstruction and standard filtered back projection algorithms in vitro.

    Science.gov (United States)

    Suzuki, Shigeru; Machida, Haruhiko; Tanaka, Isao; Ueno, Eiko

    2013-03-01

    The purpose of this study was to evaluate the performance of model-based iterative reconstruction (MBIR) in measurement of the inner diameter of vessel models and to compare performance between MBIR and a standard filtered back projection (FBP) algorithm. Vascular models with wall thicknesses of 0.5, 1.0, and 1.5 mm were scanned with a 64-MDCT unit with densities of contrast material yielding 275, 396, and 542 HU. Images were reconstructed by MBIR and FBP, and the mean diameter of each model vessel was measured by software automation. Twenty separate measurements were repeated for each vessel, and variance among the repeated measures was analyzed to determine the measurement error. For all nine model vessels, CT attenuation profiles were compared along a line passing through the luminal center on axial images reconstructed with FBP and MBIR, and the 10–90% edge rise distances at the boundary between the vascular wall and the lumen were evaluated. For images reconstructed with FBP, measurement errors were smallest for models with 1.5-mm wall thickness, except those filled with 275-HU contrast material, and errors grew as the density of the contrast material decreased. Measurement errors with MBIR were comparable to or less than those with FBP. In the CT attenuation profiles of images reconstructed with MBIR, the 10–90% edge rise distances at the boundary between the lumen and the vascular wall were relatively short for each vascular model compared with those of the profile curves of FBP images. MBIR is better than standard FBP for reducing reconstruction blur and improving the accuracy of diameter measurement at CT angiography.

  13. Hybrid phase retrieval algorithm for solving the twin image problem in in-line digital holography

    Science.gov (United States)

    Zhao, Jie; Wang, Dayong; Zhang, Fucai; Wang, Yunxin

    2010-10-01

    For reconstruction in in-line digital holography, there are three terms overlapping with each other on the image plane, named the zero-order term, the real image, and the twin image, respectively. The unwanted twin image degrades the real image seriously. A hybrid phase retrieval algorithm is presented to address this problem, which combines the advantages of two popular phase retrieval algorithms. One is an improved version of the universal iterative algorithm (UIA), called the phase flipping-based UIA (PFB-UIA). The key point of this algorithm is to flip the phase of the object iteratively. It is proved that the PFB-UIA is able to find the support of a complicated object. The other is the Fienup algorithm, a well-developed algorithm that uses the support of the object as the constraint during the iteration procedure. Thus, by following the Fienup algorithm immediately after the PFB-UIA, it is possible to produce the amplitude and phase distributions of the object with high fidelity. Preliminary simulation results showed that the proposed algorithm is powerful for solving the twin image problem in in-line digital holography.
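
    Once a support is available, the Fienup-stage iteration reduces to the classic error-reduction loop sketched below, alternating the measured Fourier magnitude with an object-domain support constraint; the PFB-UIA phase-flipping stage itself is not reproduced here.

        import numpy as np

        def error_reduction(magnitude, support, n_iters=200, seed=0):
            rng = np.random.default_rng(seed)
            g = rng.standard_normal(magnitude.shape)        # random initial object
            for _ in range(n_iters):
                G = np.fft.fft2(g)
                G = magnitude * np.exp(1j * np.angle(G))    # impose measured magnitude
                g = np.fft.ifft2(G).real
                g[~support] = 0.0                           # impose the object support
                g[g < 0] = 0.0                              # optional non-negativity
            return g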

  14. New developments in iterated rounding

    NARCIS (Netherlands)

    Bansal, N.; Raman, V.; Suresh, S.P.

    2014-01-01

    Iterated rounding is a relatively recent technique in algorithm design, that despite its simplicity has led to several remarkable new results and also simpler proofs of many previous results. We will briefly survey some applications of the method, including some recent developments and giving a high

  15. A new randomized Kaczmarz based kernel canonical correlation analysis algorithm with applications to information retrieval.

    Science.gov (United States)

    Cai, Jia; Tang, Yi

    2018-02-01

    Canonical correlation analysis (CCA) is a powerful statistical tool for detecting the linear relationship between two sets of multivariate variables. A kernel generalization of it, namely kernel CCA, has been proposed to describe nonlinear relationships between two variables. Although kernel CCA can achieve dimensionality reduction for high-dimensional data feature selection problems, it also suffers from the so-called over-fitting phenomenon. In this paper, we consider a new kernel CCA algorithm via the randomized Kaczmarz method. The main contributions of the paper are: (1) a new kernel CCA algorithm is developed, (2) theoretical convergence of the proposed algorithm is addressed by means of the scaled condition number, (3) a lower bound on the minimum number of iterations required is presented. We test on both a synthetic dataset and several real-world datasets in cross-language document retrieval and content-based image retrieval to demonstrate the effectiveness of the proposed algorithm. Numerical results imply the performance and efficiency of the new algorithm, which is competitive with several state-of-the-art kernel CCA methods. Copyright © 2017 Elsevier Ltd. All rights reserved.
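
    The randomized Kaczmarz core referred to above is a one-equation-at-a-time projection method; a minimal sketch for a plain linear system (without the kernel CCA wrapping) follows.

        import numpy as np

        def randomized_kaczmarz(A, b, n_iters=10000, seed=0):
            # Project the iterate onto one equation at a time, sampling rows
            # with probability proportional to their squared norms.
            rng = np.random.default_rng(seed)
            m, n = A.shape
            row_norms = np.einsum('ij,ij->i', A, A)
            probs = row_norms / row_norms.sum()
            x = np.zeros(n)
            for _ in range(n_iters):
                i = rng.choice(m, p=probs)
                x += (b[i] - A[i] @ x) / row_norms[i] * A[i]
            return x

        rng = np.random.default_rng(1)
        A = rng.standard_normal((300, 50))
        x_true = rng.standard_normal(50)
        x = randomized_kaczmarz(A, A @ x_true)   # consistent system: x approaches x_true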

  16. Simulation-based design process for the verification of ITER remote handling systems

    International Nuclear Information System (INIS)

    Sibois, Romain; Määttä, Timo; Siuko, Mikko; Mattila, Jouni

    2014-01-01

    Highlights: •Verification and validation process for ITER remote handling systems. •Simulation-based design process for early verification of ITER RH systems. •Design process centralized around a simulation lifecycle management system. •Verification and validation roadmap for the digital modelling phase. -- Abstract: The work behind this paper takes place in the EFDA's European Goal Oriented Training programme on Remote Handling (RH) "GOT-RH". The programme aims to train engineers for activities supporting the ITER project and the long-term fusion programme. One of the projects of this programme focuses on the verification and validation (V and V) of ITER RH system requirements using digital mock-ups (DMU). The purpose of this project is to study and develop an efficient approach to using DMUs in the V and V process of ITER RH system design, utilizing a Systems Engineering (SE) framework. Complex engineering systems such as the ITER facilities lead to a substantial rise in cost when manufacturing full-scale prototypes. In the V and V process for ITER RH equipment, physical tests are required to ensure compliance of the system with the required operation. It is therefore essential to verify the developed system virtually before starting the prototype manufacturing phase. This paper gives an overview of current trends in using digital mock-ups within product design processes. It suggests a simulation-based design process centralized around a simulation lifecycle management system. The purpose of this paper is to describe possible improvements in the formalization of the ITER RH design process and V and V processes, in order to increase their cost efficiency and reliability.

  17. Coordinated Active Power Dispatch for a Microgrid via Distributed Lambda Iteration

    DEFF Research Database (Denmark)

    Hu, Jianqiang; Z. Q. Chen, Michael; Cao, Jinde

    2017-01-01

    A novel distributed optimal dispatch algorithm is proposed for coordinating the operation of multiple micro units in a microgrid, which incorporates the distributed consensus algorithm from multi-agent systems and the λ-iteration optimization algorithm from economic dispatch of power systems. Spec...
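
    The distributed consensus variant described in the abstract is paper-specific, but the underlying λ-iteration of classic economic dispatch is standard: for quadratic generator costs C_i(P) = a_i P² + b_i P, each unit follows its marginal cost P_i(λ) = (λ − b_i)/(2a_i) clipped to its limits, and λ is bisected until total generation meets demand. A minimal centralized sketch (all coefficients invented):

    ```python
    import numpy as np

    def lambda_iteration(a, b, p_min, p_max, demand, tol=1e-6):
        """Classic lambda-iteration economic dispatch for quadratic costs
        C_i(P) = a_i P^2 + b_i P. Each unit output follows its marginal cost
        P_i(lam) = (lam - b_i) / (2 a_i), clipped to its limits; lam is
        bisected until total generation matches demand.
        """
        def dispatch(lam):
            return np.clip((lam - b) / (2 * a), p_min, p_max)

        lo, hi = 0.0, 1000.0  # bracket for lambda, problem-specific
        while hi - lo > tol:
            lam = 0.5 * (lo + hi)
            if dispatch(lam).sum() < demand:
                lo = lam
            else:
                hi = lam
        return dispatch(lam), lam

    # Three illustrative units (all coefficients are made up).
    a = np.array([0.008, 0.009, 0.007])
    b = np.array([7.0, 6.3, 6.8])
    p, lam = lambda_iteration(a, b, np.array([10.0, 10.0, 10.0]),
                              np.array([85.0, 80.0, 70.0]), demand=150.0)
    print(lam, p, p.sum())
    ```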

  18. Iterative learning control with sampled-data feedback for robot manipulators

    Directory of Open Access Journals (Sweden)

    Delchev Kamen

    2014-09-01

    Full Text Available This paper deals with the improvement of the stability of sampled-data (SD) feedback control for nonlinear multiple-input multiple-output time-varying systems, such as robotic manipulators, by incorporating an off-line model-based nonlinear iterative learning controller. The proposed scheme of nonlinear iterative learning control (NILC) with SD feedback is applicable to a large class of robots because sampled-data feedback is required for model-based feedback controllers, especially for robotic manipulators with complicated dynamics (6 or 7 DOF, or more), while the feedforward control from the off-line iterative learning controller can be assumed to be continuous. The robustness and convergence of the proposed NILC law with SD feedback is proven, and the derived sufficient condition for convergence is the same as the condition for an NILC with a continuous feedback control input. With respect to the presented NILC algorithm applied to a virtual PUMA 560 robot, simulation results are presented in order to verify the convergence and applicability of the proposed learning controller with the SD feedback controller attached

  19. Physics research needs for ITER

    International Nuclear Information System (INIS)

    Sauthoff, N.R.

    1995-01-01

    Design of ITER entails the application of physics design tools that have been validated against the world-wide data base of fusion research. In many cases, these tools do not yet exist and must be developed as part of the ITER physics program. ITER's considerable increases in power and size demand significant extrapolations from the current data base; in several cases, new physical effects are projected to dominate the behavior of the ITER plasma. This paper focuses on those design tools and data that have been identified by the ITER team and are not yet available; these needs serve as the basis for the ITER Physics Research Needs, which have been developed jointly by the ITER Physics Expert Groups and the ITER design team. Development of the tools and the supporting data base is an on-going activity that constitutes a significant opportunity for contributions to the ITER program by fusion research programs world-wide

  20. Trellises and Trellis-Based Decoding Algorithms for Linear Block Codes

    Science.gov (United States)

    Lin, Shu

    1998-01-01

    sectionalization of trellises. Chapter 7 discusses trellis decomposition and subtrellises for low-weight codewords. Chapter 8 first presents well-known methods for constructing long powerful codes from short component codes or component codes of smaller dimensions, and then provides methods for constructing their trellises, which include Shannon and Cartesian product techniques. Chapter 9 deals with convolutional codes, puncturing, zero-tail termination and tail-biting. Chapters 10 through 13 present various trellis-based decoding algorithms, old and new. Chapter 10 first discusses the application of the well-known Viterbi decoding algorithm to linear block codes, optimum sectionalization of a code trellis to minimize computation complexity, and design issues for IC (integrated circuit) implementation of a Viterbi decoder. Then it presents a new decoding algorithm for convolutional codes, named the Differential Trellis Decoding (DTD) algorithm. Chapter 12 presents a suboptimum reliability-based iterative decoding algorithm with a low-weight trellis search for the most likely codeword. This decoding algorithm provides a good trade-off between error performance and decoding complexity. All the decoding algorithms presented in Chapters 10 through 12 are devised to minimize word error probability. Chapter 13 presents decoding algorithms that minimize bit error probability and provide the corresponding soft (reliability) information at the output of the decoder. The decoding algorithms presented are the MAP (maximum a posteriori probability) decoding algorithm and the Soft-Output Viterbi Algorithm (SOVA). Finally, the minimization of bit error probability in trellis-based MLD is discussed.
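
    As a small companion to the decoding chapters, the sketch below is a hard-decision Viterbi decoder for the rate-1/2, constraint-length-3 convolutional code with generators (7, 5) octal, not one of the book's block-code trellises; with free distance 5 it should correct the single flipped channel bit in the example:

    ```python
    # Hard-decision Viterbi decoding of the rate-1/2, constraint-length-3
    # convolutional code with generators (7, 5) in octal.
    G = [0b111, 0b101]          # generator polynomials
    N_STATES = 4                # 2^(K-1) trellis states

    def encode_bit(state, bit):
        reg = (bit << 2) | state            # 3-bit register: new bit + state
        out = [bin(reg & g).count("1") % 2 for g in G]
        return out, (reg >> 1)              # output pair, next state

    def viterbi(received):
        INF = 10**9
        metric = [0] + [INF] * (N_STATES - 1)   # encoder starts in state 0
        paths = [[] for _ in range(N_STATES)]
        for r in received:                       # r is a pair of channel bits
            new_metric = [INF] * N_STATES
            new_paths = [None] * N_STATES
            for s in range(N_STATES):
                if metric[s] >= INF:
                    continue
                for bit in (0, 1):
                    out, ns = encode_bit(s, bit)
                    m = metric[s] + (out[0] != r[0]) + (out[1] != r[1])
                    if m < new_metric[ns]:      # keep the survivor path
                        new_metric[ns] = m
                        new_paths[ns] = paths[s] + [bit]
            metric, paths = new_metric, new_paths
        best = min(range(N_STATES), key=lambda s: metric[s])
        return paths[best]

    msg = [1, 0, 1, 1, 0, 0]                     # includes zero tail
    state, coded = 0, []
    for bit in msg:
        out, state = encode_bit(state, bit)
        coded.append(out)
    coded[3][0] ^= 1                             # flip one channel bit
    print(viterbi(coded))                        # recovers msg
    ```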

  1. An iterative two-step algorithm for American option pricing

    Czech Academy of Sciences Publication Activity Database

    Siddiqi, A. H.; Manchanda, P.; Kočvara, Michal

    2000-01-01

    Roč. 11, č. 2 (2000), s. 71-84 ISSN 0953-0061 R&D Projects: GA AV ČR IAA1075707 Institutional research plan: AV0Z1075907 Keywords : American option pricing * linear complementarity * iterative methods Subject RIV: AH - Economics

  2. A Rapid Convergent Low Complexity Interference Alignment Algorithm for Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Lihui Jiang

    2015-07-01

    Full Text Available Interference alignment (IA) is a novel technique that can effectively eliminate the interference and approach the sum capacity of wireless sensor networks (WSNs) when the signal-to-noise ratio (SNR) is high, by casting the desired signal and interference into different signal subspaces. The traditional alternating minimization interference leakage (AMIL) algorithm for IA shows good performance in high SNR regimes; however, the complexity of the AMIL algorithm increases dramatically as the number of users and antennas increases, posing limits to its applications in practical systems. In this paper, a novel IA algorithm, called the directional quartic optimal (DQO) algorithm, is proposed to minimize the interference leakage with rapid convergence and low complexity. The properties of the AMIL algorithm are investigated, and it is discovered that the difference between two consecutive iteration results of the AMIL algorithm will approximately point to the convergence solution when the precoding and decoding matrices obtained from the intermediate iterations are sufficiently close to their convergence values. Based on this important property, the proposed DQO algorithm employs a line search procedure so that it can converge to the destination directly. In addition, the optimal step size can be determined analytically by optimizing a quartic function. Numerical results show that the proposed DQO algorithm can suppress the interference leakage more rapidly than the traditional AMIL algorithm, and can achieve the same level of sum rate as the AMIL algorithm with far fewer iterations and less execution time.

  3. A Rapid Convergent Low Complexity Interference Alignment Algorithm for Wireless Sensor Networks.

    Science.gov (United States)

    Jiang, Lihui; Wu, Zhilu; Ren, Guanghui; Wang, Gangyi; Zhao, Nan

    2015-07-29

    Interference alignment (IA) is a novel technique that can effectively eliminate the interference and approach the sum capacity of wireless sensor networks (WSNs) when the signal-to-noise ratio (SNR) is high, by casting the desired signal and interference into different signal subspaces. The traditional alternating minimization interference leakage (AMIL) algorithm for IA shows good performance in high SNR regimes; however, the complexity of the AMIL algorithm increases dramatically as the number of users and antennas increases, posing limits to its applications in practical systems. In this paper, a novel IA algorithm, called the directional quartic optimal (DQO) algorithm, is proposed to minimize the interference leakage with rapid convergence and low complexity. The properties of the AMIL algorithm are investigated, and it is discovered that the difference between two consecutive iteration results of the AMIL algorithm will approximately point to the convergence solution when the precoding and decoding matrices obtained from the intermediate iterations are sufficiently close to their convergence values. Based on this important property, the proposed DQO algorithm employs a line search procedure so that it can converge to the destination directly. In addition, the optimal step size can be determined analytically by optimizing a quartic function. Numerical results show that the proposed DQO algorithm can suppress the interference leakage more rapidly than the traditional AMIL algorithm, and can achieve the same level of sum rate as the AMIL algorithm with far fewer iterations and less execution time.
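
    The analytic step-size determination mentioned in both records can be illustrated in isolation: once the interference leakage along a search direction is written as a quartic q(t), the optimal step is a real root of the cubic q'(t) = 0. A hedged sketch (the quartic coefficients here are invented; assembling them from the precoding matrices is specific to the paper):

    ```python
    import numpy as np

    def quartic_step(c):
        """Given interference leakage along a search direction expressed as
        a quartic q(t) = c4 t^4 + c3 t^3 + c2 t^2 + c1 t + c0, return the
        step size minimizing q by solving the cubic q'(t) = 0 analytically.
        """
        c4, c3, c2, c1, _ = c
        roots = np.roots([4 * c4, 3 * c3, 2 * c2, c1])   # stationary points
        real = roots[np.isreal(roots)].real
        q = np.polyval(c, real)
        return real[np.argmin(q)]

    # Illustrative quartic (coefficients made up; c4 > 0 so a minimum exists).
    c = np.array([1.0, -2.0, -3.0, 4.0, 5.0])
    t_opt = quartic_step(c)
    print(t_opt, np.polyval(c, t_opt))
    ```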

  4. Scenario-based fitted Q-iteration for adaptive control of water reservoir systems under uncertainty

    Science.gov (United States)

    Bertoni, Federica; Giuliani, Matteo; Castelletti, Andrea

    2017-04-01

    Over recent years, mathematical models have largely been used to support planning and management of water resources systems. Yet, the increasing uncertainties in their inputs - due to increased variability in the hydrological regimes - are a major challenge to the optimal operations of these systems. Such uncertainty, boosted by projected changing climate, violates the stationarity principle generally used for describing hydro-meteorological processes, which assumes time persisting statistical characteristics of a given variable as inferred by historical data. As this principle is unlikely to be valid in the future, the probability density function used for modeling stochastic disturbances (e.g., inflows) becomes an additional uncertain parameter of the problem, which can be described in a deterministic and set-membership based fashion. This study contributes a novel method for designing optimal, adaptive policies for controlling water reservoir systems under climate-related uncertainty. The proposed method, called scenario-based Fitted Q-Iteration (sFQI), extends the original Fitted Q-Iteration algorithm by enlarging the state space to include the space of the uncertain system's parameters (i.e., the uncertain climate scenarios). As a result, sFQI embeds the set-membership uncertainty of the future inflow scenarios in the action-value function and is able to approximate, with a single learning process, the optimal control policy associated to any scenario included in the uncertainty set. The method is demonstrated on a synthetic water system, consisting of a regulated lake operated for ensuring reliable water supply to downstream users. Numerical results show that the sFQI algorithm successfully identifies adaptive solutions to operate the system under different inflow scenarios, which outperform the control policy designed under historical conditions. Moreover, the sFQI policy generalizes over inflow scenarios not directly experienced during the policy design
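
    As a rough illustration of the sFQI idea, the sketch below runs a generic batch fitted Q-iteration in which the state vector already carries the scenario parameter, so a single learned Q-function generalizes across scenarios. The toy dynamics, regressor choice, and all parameters are invented for illustration:

    ```python
    import numpy as np
    from sklearn.ensemble import ExtraTreesRegressor

    def fitted_q_iteration(transitions, actions, gamma=0.95, n_iter=30):
        """Fitted Q-iteration on a batch of (state, action, reward,
        next_state) tuples. In an sFQI-style variant, the state vector
        already includes the uncertain scenario parameter, so one learned
        Q generalizes over scenarios.
        """
        s, a, r, s_next = transitions
        X = np.column_stack([s, a])
        q = ExtraTreesRegressor(n_estimators=50, random_state=0).fit(X, r)
        for _ in range(n_iter):
            # Bellman targets: r + gamma * max_a' Q(s', a')
            q_next = np.max(
                [q.predict(np.column_stack([s_next, np.full(len(r), u)]))
                 for u in actions], axis=0)
            q = ExtraTreesRegressor(n_estimators=50, random_state=0).fit(
                X, r + gamma * q_next)
        return q

    # Toy 1D problem: state = (level, scenario), reward peaks at level 0.
    rng = np.random.default_rng(0)
    n = 2000
    s = rng.uniform(-1, 1, size=(n, 2))          # column 1 is the scenario
    a = rng.choice([-0.1, 0.0, 0.1], size=n)
    s_next = s.copy()
    s_next[:, 0] = np.clip(s[:, 0] + a + 0.05 * s[:, 1], -1, 1)
    r = -s_next[:, 0] ** 2
    q = fitted_q_iteration((s, a, r, s_next), actions=[-0.1, 0.0, 0.1])
    ```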

  5. Sparse Nonlinear Electromagnetic Imaging Accelerated With Projected Steepest Descent Algorithm

    KAUST Repository

    Desmal, Abdulla; Bagci, Hakan

    2017-01-01

    steepest descent algorithm. The algorithm uses a projection operator to enforce the sparsity constraint by thresholding the solution at every iteration. Thresholding level and iteration step are selected carefully to increase the efficiency without
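
    The fragment above describes a projection operator that thresholds the solution at every iteration. The same idea is easiest to see on a linear sparse recovery problem, where the scheme reduces to iterative hard thresholding: a steepest-descent step followed by keeping only the k largest-magnitude entries. A minimal sketch (the abstract's actual problem is nonlinear electromagnetic inversion):

    ```python
    import numpy as np

    def projected_steepest_descent(A, y, k, n_iter=200):
        """Sparse recovery by steepest descent with a hard-thresholding
        projection: each gradient step is projected onto the set of
        k-sparse vectors.
        """
        x = np.zeros(A.shape[1])
        mu = 1.0 / np.linalg.norm(A, 2) ** 2      # safe step size
        for _ in range(n_iter):
            x = x + mu * A.T @ (y - A @ x)        # steepest-descent step
            keep = np.argsort(np.abs(x))[-k:]     # projection: keep k largest
            mask = np.zeros_like(x, dtype=bool)
            mask[keep] = True
            x[~mask] = 0.0
        return x

    rng = np.random.default_rng(0)
    A = rng.standard_normal((80, 200)) / np.sqrt(80)
    x_true = np.zeros(200)
    x_true[[5, 50, 120]] = [1.0, -2.0, 1.5]
    x_hat = projected_steepest_descent(A, A @ x_true, k=3)
    print(np.nonzero(x_hat)[0])  # ideally {5, 50, 120}
    ```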

  6. Iterative raw measurements restoration method with penalized weighted least squares approach for low-dose CT

    Science.gov (United States)

    Takahashi, Hisashi; Goto, Taiga; Hirokawa, Koichi; Miyazaki, Osamu

    2014-03-01

    Statistical iterative reconstruction and post-log data restoration algorithms for CT noise reduction have been widely studied, and these techniques have enabled us to reduce irradiation doses while maintaining image quality. In low-dose scanning, electronic noise becomes significant and results in some non-positive signals in the raw measurements. A non-positive signal must be converted to a positive signal so that it can be log-transformed. Since conventional conversion methods do not consider the local variance on the sinogram, they have difficulty controlling the strength of the filtering. Thus, in this work, we propose a method to convert the non-positive signal to a positive signal mainly by controlling the local variance. The method is implemented in two separate steps. First, an iterative restoration algorithm based on penalized weighted least squares is used to mitigate the effect of electronic noise. The algorithm preserves the local mean and reduces the local variance induced by the electronic noise. Second, the raw measurements smoothed by the iterative algorithm are converted to positive signals according to a function which replaces each non-positive signal with its local mean. In phantom studies, we confirm that the proposed method properly preserves the local mean and reduces the variance induced by the electronic noise. Our technique results in dramatically reduced shading artifacts and can also successfully cooperate with the post-log data filter to reduce streak artifacts.
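
    A toy 1-D analogue of the first step may help: penalized weighted least squares with weights inversely proportional to the local noise variance pulls low-confidence samples toward their neighbors while preserving the local mean. This sketch is not the paper's sinogram algorithm; the quadratic roughness penalty and Gauss-Seidel solver are illustrative choices:

    ```python
    import numpy as np

    def pwls_restore(y, var, beta=2.0, n_iter=50):
        """Iterative penalized weighted least-squares restoration of a 1-D
        raw-measurement profile: minimizes
            sum_i w_i (x_i - y_i)^2 + beta * sum_i (x_i - x_{i-1})^2
        with weights w_i = 1 / var_i, via Gauss-Seidel sweeps. High-variance
        samples are pulled toward their neighbors, damping noise variance.
        """
        w = 1.0 / var
        x = y.copy()
        for _ in range(n_iter):
            for i in range(len(x)):
                num, den = w[i] * y[i], w[i]
                if i > 0:
                    num += beta * x[i - 1]
                    den += beta
                if i < len(x) - 1:
                    num += beta * x[i + 1]
                    den += beta
                x[i] = num / den        # exact 1-D coordinate minimizer
        return x

    rng = np.random.default_rng(0)
    clean = np.full(100, 5.0)
    var = np.full(100, 0.25)
    var[40:60] = 4.0                     # noisy low-signal region
    y = clean + rng.normal(0, np.sqrt(var))
    print(pwls_restore(y, var)[40:60].round(2))
    ```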

  7. Physics of the conceptual design of the ITER plasma control system

    Energy Technology Data Exchange (ETDEWEB)

    Snipes, J.A., E-mail: Joseph.Snipes@iter.org [ITER Organization, Route de Vinon sur Verdon, 13115 St Paul-lez-Durance (France); Bremond, S. [CEA-IRFM, 13108 St Paul-lez-Durance (France); Campbell, D.J. [ITER Organization, Route de Vinon sur Verdon, 13115 St Paul-lez-Durance (France); Casper, T. [1166 Bordeaux St, Pleasanton, CA 94566 (United States); Douai, D. [CEA-IRFM, 13108 St Paul-lez-Durance (France); Gribov, Y. [ITER Organization, Route de Vinon sur Verdon, 13115 St Paul-lez-Durance (France); Humphreys, D. [General Atomics, San Diego, CA 92186 (United States); Lister, J. [Association EURATOM-Confédération Suisse, Ecole Polytechnique Fédérale de Lausanne (EPFL), CRPP, Lausanne CH-1015 (Switzerland); Loarte, A.; Pitts, R. [ITER Organization, Route de Vinon sur Verdon, 13115 St Paul-lez-Durance (France); Sugihara, M., E-mail: Sugihara_ma@yahoo.co.jp [Japan (Japan); Winter, A.; Zabeo, L. [ITER Organization, Route de Vinon sur Verdon, 13115 St Paul-lez-Durance (France)

    2014-05-15

    Highlights: • ITER plasma control system conceptual design has been finalized. • ITER's plasma control system will evolve with the ITER research plan. • A sophisticated actuator sharing scheme is being developed to apply multiple coupled control actions simultaneously with a limited set of actuators. - Abstract: The ITER plasma control system (PCS) will play a central role in enabling the experimental program to attempt to sustain DT plasmas with Q = 10 for several hundred seconds and also support research toward the development of steady-state operation in ITER. The PCS is now in the final phase of its conceptual design. The PCS relies on about 45 diagnostic systems to assess real-time plasma conditions and about 20 actuator systems for overall control of ITER plasmas. It will integrate algorithms required for active control of a wide range of plasma parameters with sophisticated event forecasting and handling functions, which will enable appropriate transitions to be implemented, in real-time, in response to plasma evolution or actuator constraints. In specifying the PCS conceptual design, it is essential to define requirements related to all phases of plasma operation, ranging from early (non-active) H/He plasmas through high fusion gain inductive plasmas to fully non-inductive steady-state operation, to ensure that the PCS control functionality and architecture will be capable of satisfying the demands of the ITER research plan. The scope of the control functionality required of the PCS includes plasma equilibrium and density control commonly utilized in existing experiments, control of the plasma heat exhaust, control of a range of MHD instabilities (including mitigation of disruptions), and aspects such as control of the non-inductive current and the current profile required to maintain stable plasmas in steady-state scenarios. Control areas are often strongly coupled and the integrated control of the plasma to reach and sustain high plasma

  8. Physics of the conceptual design of the ITER plasma control system

    International Nuclear Information System (INIS)

    Snipes, J.A.; Bremond, S.; Campbell, D.J.; Casper, T.; Douai, D.; Gribov, Y.; Humphreys, D.; Lister, J.; Loarte, A.; Pitts, R.; Sugihara, M.; Winter, A.; Zabeo, L.

    2014-01-01

    Highlights: • ITER plasma control system conceptual design has been finalized. • ITER's plasma control system will evolve with the ITER research plan. • A sophisticated actuator sharing scheme is being developed to apply multiple coupled control actions simultaneously with a limited set of actuators. - Abstract: The ITER plasma control system (PCS) will play a central role in enabling the experimental program to attempt to sustain DT plasmas with Q = 10 for several hundred seconds and also support research toward the development of steady-state operation in ITER. The PCS is now in the final phase of its conceptual design. The PCS relies on about 45 diagnostic systems to assess real-time plasma conditions and about 20 actuator systems for overall control of ITER plasmas. It will integrate algorithms required for active control of a wide range of plasma parameters with sophisticated event forecasting and handling functions, which will enable appropriate transitions to be implemented, in real-time, in response to plasma evolution or actuator constraints. In specifying the PCS conceptual design, it is essential to define requirements related to all phases of plasma operation, ranging from early (non-active) H/He plasmas through high fusion gain inductive plasmas to fully non-inductive steady-state operation, to ensure that the PCS control functionality and architecture will be capable of satisfying the demands of the ITER research plan. The scope of the control functionality required of the PCS includes plasma equilibrium and density control commonly utilized in existing experiments, control of the plasma heat exhaust, control of a range of MHD instabilities (including mitigation of disruptions), and aspects such as control of the non-inductive current and the current profile required to maintain stable plasmas in steady-state scenarios. Control areas are often strongly coupled and the integrated control of the plasma to reach and sustain high plasma

  9. Solving large mixed linear models using preconditioned conjugate gradient iteration.

    Science.gov (United States)

    Strandén, I; Lidauer, M

    1999-12-01

    Continuous evaluation of dairy cattle with a random regression test-day model requires a fast solving method and algorithm. A new computing technique feasible in Jacobi and conjugate gradient based iterative methods using iteration on data is presented. In the new computing technique, the multiplication of a vector by a matrix was reorganized into three steps instead of the commonly used two steps. The three-step method was implemented in a general mixed linear model program that used preconditioned conjugate gradient iteration. Performance of this program in comparison to other general solving programs was assessed via estimation of breeding values using univariate, multivariate, and random regression test-day models. Central processing unit time per iteration with the new three-step technique was, at best, one-third of that needed with the old technique. Performance was best with the test-day model, which was the largest and most complex model used. The new program did well in comparison to other general software. Programs keeping the mixed model equations in random access memory required at least 20 and 435% more time to solve the univariate and multivariate animal models, respectively. Computations of the second-best iteration-on-data program took approximately three and five times longer for the animal and test-day models, respectively, than did the new program. The good performance was due to fast computing time per iteration and quick convergence to the final solutions. Use of preconditioned conjugate gradient based methods in solving large breeding value problems is supported by our findings.
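
    The three-step matrix-times-vector reorganization is implementation-specific, but the surrounding preconditioned conjugate gradient iteration is standard; a minimal sketch with a Jacobi (diagonal) preconditioner, as commonly used for mixed-model equations:

    ```python
    import numpy as np

    def pcg(A, b, tol=1e-10, max_iter=1000):
        """Preconditioned conjugate gradient for a symmetric positive
        definite A with a Jacobi (diagonal) preconditioner.
        """
        M_inv = 1.0 / np.diag(A)            # preconditioner: diag(A)^-1
        x = np.zeros_like(b)
        r = b - A @ x
        z = M_inv * r
        p = z.copy()
        rz = r @ z
        for _ in range(max_iter):
            Ap = A @ p
            alpha = rz / (p @ Ap)
            x += alpha * p
            r -= alpha * Ap
            if np.linalg.norm(r) < tol:
                break
            z = M_inv * r
            rz_new = r @ z
            p = z + (rz_new / rz) * p       # new search direction
            rz = rz_new
        return x

    rng = np.random.default_rng(0)
    B = rng.standard_normal((100, 100))
    A = B @ B.T + 100 * np.eye(100)         # well-conditioned S.P.D. test
    b = rng.standard_normal(100)
    print(np.linalg.norm(A @ pcg(A, b) - b))
    ```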

  10. Optical image encryption using password key based on phase retrieval algorithm

    Science.gov (United States)

    Zhao, Tieyu; Ran, Qiwen; Yuan, Lin; Chi, Yingying; Ma, Jing

    2016-04-01

    A novel optical image encryption system is proposed using password key based on phase retrieval algorithm (PRA). In the encryption process, a shared image is taken as a symmetric key and the plaintext is encoded into the phase-only mask based on the iterative PRA. The linear relationship between the plaintext and ciphertext is broken using the password key, which can resist the known plaintext attack. The symmetric key and the retrieved phase are imported into the input plane and Fourier plane of 4f system during the decryption, respectively, so as to obtain the plaintext on the CCD. Finally, we analyse the key space of the password key, and the results show that the proposed scheme can resist a brute force attack due to the flexibility of the password key.
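
    The password-key construction is specific to the paper, but the iterative PRA it builds on is the classic Gerchberg-Saxton loop: alternately enforce the known amplitude in the input plane and in the Fourier plane, keeping only the phase from each transform. A minimal two-plane sketch (target pattern and sizes are illustrative):

    ```python
    import numpy as np

    def gerchberg_saxton(amp_in, amp_out, n_iter=200, seed=0):
        """Classic Gerchberg-Saxton iteration: alternately enforce the
        known amplitudes in the input plane and in the Fourier plane,
        keeping only the phase produced by each transform.
        """
        rng = np.random.default_rng(seed)
        phase = np.exp(1j * rng.uniform(0, 2 * np.pi, amp_in.shape))
        field = amp_in * phase
        for _ in range(n_iter):
            F = np.fft.fft2(field)
            F = amp_out * np.exp(1j * np.angle(F))     # Fourier amplitude
            field = np.fft.ifft2(F)
            field = amp_in * np.exp(1j * np.angle(field))  # input amplitude
        return np.angle(field)                          # retrieved phase mask

    # Toy target: uniform input beam, desired Fourier amplitude from an image.
    amp_in = np.ones((64, 64))
    target = np.zeros((64, 64))
    target[20:44, 20:44] = 1.0
    amp_out = np.abs(np.fft.fft2(target)) + 1e-6
    phase_mask = gerchberg_saxton(amp_in, amp_out)
    ```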

  11. Comment on “Variational Iteration Method for Fractional Calculus Using He’s Polynomials”

    Directory of Open Access Journals (Sweden)

    Ji-Huan He

    2012-01-01

    boundary value problems. This note concludes that the method is a modified variational iteration method using He’s polynomials. A standard variational iteration algorithm for fractional differential equations is suggested.

  12. IHadoop: Asynchronous iterations for MapReduce

    KAUST Repository

    Elnikety, Eslam Mohamed Ibrahim

    2011-11-01

    MapReduce is a distributed programming framework designed to ease the development of scalable data-intensive applications for large clusters of commodity machines. Most machine learning and data mining applications involve iterative computations over large datasets, such as the Web hyperlink structures and social network graphs. Yet, the MapReduce model does not efficiently support this important class of applications. The architecture of MapReduce, most critically its dataflow techniques and task scheduling, is completely unaware of the nature of iterative applications; tasks are scheduled according to a policy that optimizes the execution for a single iteration which wastes bandwidth, I/O, and CPU cycles when compared with an optimal execution for a consecutive set of iterations. This work presents iHadoop, a modified MapReduce model, and an associated implementation, optimized for iterative computations. The iHadoop model schedules iterations asynchronously. It connects the output of one iteration to the next, allowing both to process their data concurrently. iHadoop's task scheduler exploits inter-iteration data locality by scheduling tasks that exhibit a producer/consumer relation on the same physical machine allowing a fast local data transfer. For those iterative applications that require satisfying certain criteria before termination, iHadoop runs the check concurrently during the execution of the subsequent iteration to further reduce the application's latency. This paper also describes our implementation of the iHadoop model, and evaluates its performance against Hadoop, the widely used open source implementation of MapReduce. Experiments using different data analysis applications over real-world and synthetic datasets show that iHadoop performs better than Hadoop for iterative algorithms, reducing execution time of iterative applications by 25% on average. Furthermore, integrating iHadoop with HaLoop, a variant Hadoop implementation that caches

  13. IHadoop: Asynchronous iterations for MapReduce

    KAUST Repository

    Elnikety, Eslam Mohamed Ibrahim; El Sayed, Tamer S.; Ramadan, Hany E.

    2011-01-01

    MapReduce is a distributed programming framework designed to ease the development of scalable data-intensive applications for large clusters of commodity machines. Most machine learning and data mining applications involve iterative computations over large datasets, such as the Web hyperlink structures and social network graphs. Yet, the MapReduce model does not efficiently support this important class of applications. The architecture of MapReduce, most critically its dataflow techniques and task scheduling, is completely unaware of the nature of iterative applications; tasks are scheduled according to a policy that optimizes the execution for a single iteration which wastes bandwidth, I/O, and CPU cycles when compared with an optimal execution for a consecutive set of iterations. This work presents iHadoop, a modified MapReduce model, and an associated implementation, optimized for iterative computations. The iHadoop model schedules iterations asynchronously. It connects the output of one iteration to the next, allowing both to process their data concurrently. iHadoop's task scheduler exploits inter-iteration data locality by scheduling tasks that exhibit a producer/consumer relation on the same physical machine allowing a fast local data transfer. For those iterative applications that require satisfying certain criteria before termination, iHadoop runs the check concurrently during the execution of the subsequent iteration to further reduce the application's latency. This paper also describes our implementation of the iHadoop model, and evaluates its performance against Hadoop, the widely used open source implementation of MapReduce. Experiments using different data analysis applications over real-world and synthetic datasets show that iHadoop performs better than Hadoop for iterative algorithms, reducing execution time of iterative applications by 25% on average. Furthermore, integrating iHadoop with HaLoop, a variant Hadoop implementation that caches

  14. Velocity Tracking Control of Wheeled Mobile Robots by Iterative Learning Control

    Directory of Open Access Journals (Sweden)

    Xiaochun Lu

    2016-05-01

    Full Text Available This paper presents an iterative learning control (ILC) strategy to resolve the trajectory tracking problem of wheeled mobile robots (WMRs) based on the dynamic model. In previous studies of WMR trajectory tracking, ILC was usually applied to the kinematic model of WMRs under the assumption that the desired velocity can be tracked immediately. However, this assumption cannot be realized in the real world. The kinematic and dynamic models of WMRs are deduced in this paper, and a novel combination of a D-type ILC algorithm and the dynamic model of a WMR with random bounded disturbances is presented. To analyze the convergence of the algorithm, the contraction mapping method is adopted, which shows that the designed controller can make the velocity tracking errors converge to zero as the number of iterations tends to infinity. Simulation results show the effectiveness of D-type ILC for the trajectory tracking problem of WMRs, demonstrating the effectiveness and robustness of the algorithm under random bounded disturbances. A comparative study conducted between D-type ILC and a compound cosine function neural network (NN) controller also demonstrates the effectiveness of the ILC strategy.
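
    The D-type update law itself is compact enough to sketch: the feedforward input is refined trial by trial with the derivative of the tracking error, u_{k+1}(t) = u_k(t) + Γ ė_k(t). The toy first-order velocity dynamics and gain below are invented, not the paper's WMR model:

    ```python
    import numpy as np

    def d_type_ilc(plant_step, v_ref, dt, gamma, n_trials=30):
        """D-type ILC on a velocity-tracking task: the feedforward input is
        updated trial by trial with the derivative of the tracking error,
            u_{k+1}(t) = u_k(t) + gamma * d e_k(t) / dt.
        `plant_step(v, u)` returns the next velocity of the (unknown) plant.
        """
        T = len(v_ref)
        u = np.zeros(T)
        for _ in range(n_trials):
            v = np.zeros(T)
            for t in range(T - 1):
                v[t + 1] = plant_step(v[t], u[t])
            e = v_ref - v
            u = u + gamma * np.gradient(e, dt)   # D-type learning update
        return u, np.max(np.abs(e))

    # Toy first-order "wheel" dynamics with a random bounded disturbance.
    rng = np.random.default_rng(0)
    dt = 0.01
    step = lambda v, u: v + dt * (-2.0 * v + u + 0.01 * rng.uniform(-1, 1))
    t = np.arange(0, 2, dt)
    v_ref = np.sin(np.pi * t)
    u, err = d_type_ilc(step, v_ref, dt, gamma=0.5)
    print(err)   # final-trial tracking error, shrinks over trials
    ```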

  15. A Combined Approach to Cartographic Displacement for Buildings Based on Skeleton and Improved Elastic Beam Algorithm

    Science.gov (United States)

    Liu, Yuangang; Guo, Qingsheng; Sun, Yageng; Ma, Xiaoya

    2014-01-01

    Scale reduction from source to target maps inevitably leads to conflicts of map symbols in cartography and geographic information systems (GIS). Displacement is one of the most important map generalization operators and it can be used to resolve the problems that arise from conflict among two or more map objects. In this paper, we propose a combined approach based on constraint Delaunay triangulation (CDT) skeleton and improved elastic beam algorithm for automated building displacement. In this approach, map data sets are first partitioned. Then the displacement operation is conducted in each partition as a cyclic and iterative process of conflict detection and resolution. In the iteration, the skeleton of the gap spaces is extracted using CDT. It then serves as an enhanced data model to detect conflicts and construct the proximity graph. Then, the proximity graph is adjusted using local grouping information. Under the action of forces derived from the detected conflicts, the proximity graph is deformed using the improved elastic beam algorithm. In this way, buildings are displaced to find an optimal compromise between related cartographic constraints. To validate this approach, two topographic map data sets (i.e., urban and suburban areas) were tested. The results were reasonable with respect to each constraint when the density of the map was not extremely high. In summary, the improvements include (1) an automated parameter-setting method for elastic beams, (2) explicit enforcement regarding the positional accuracy constraint, added by introducing drag forces, (3) preservation of local building groups through displacement over an adjusted proximity graph, and (4) an iterative strategy that is more likely to resolve the proximity conflicts than the one used in the existing elastic beam algorithm. PMID:25470727

  16. Equivalent charge source model based iterative maximum neighbor weight for sparse EEG source localization.

    Science.gov (United States)

    Xu, Peng; Tian, Yin; Lei, Xu; Hu, Xiao; Yao, Dezhong

    2008-12-01

    How to localize the neural electric activities within the brain effectively and precisely from scalp electroencephalogram (EEG) recordings is a critical issue in clinical neurology and cognitive neuroscience. In this paper, based on the charge source model and the iterative re-weighted strategy, we propose a new maximum neighbor weight based iterative sparse source imaging method, termed CMOSS (Charge source model based Maximum neighbOr weight Sparse Solution). Different from the weight used in the focal underdetermined system solver (FOCUSS), where the weight for each point in the discrete solution space is independently updated between iterations, the newly designed weight for each point in each iteration is determined by the source solution of the last iteration at both the point and its neighbors. Using such a new weight, the next iteration has a better chance of rectifying the local source location bias present in the previous iteration's solution. Simulation studies with comparison to FOCUSS and LORETA for various source configurations were conducted on a realistic 3-shell head model, and the results confirmed the validity of CMOSS for sparse EEG source localization. Finally, CMOSS was applied to localize sources elicited in a visual stimuli experiment, and the result was consistent with the source areas involved in visual processing reported in previous studies.

  17. Pediatric chest HRCT using the iDose4 Hybrid Iterative Reconstruction Algorithm: Which iDose level to choose?

    International Nuclear Information System (INIS)

    Smarda, M; Alexopoulou, E; Mazioti, A; Kordolaimi, S; Ploussi, A; Efstathopoulos, E; Priftis, K

    2015-01-01

    The purpose of the study is to determine the appropriate iterative reconstruction (IR) algorithm level that combines image quality and diagnostic confidence for pediatric patients undergoing high-resolution computed tomography (HRCT). During the last 2 years, a total of 20 children up to 10 years old with a clinical presentation of chronic bronchitis underwent HRCT in our department's 64-detector row CT scanner using the iDose IR algorithm, with almost identical image settings (80 kVp, 40-50 mAs). CT images were reconstructed with all iDose levels (level 1 to 7) as well as with the filtered back projection (FBP) algorithm. Subjective image quality was evaluated by 2 experienced radiologists in terms of image noise, sharpness, contrast and diagnostic acceptability using a 5-point scale (1 = excellent image, 5 = non-acceptable image). The presence of artifacts was also noted. All mean scores from both radiologists corresponded to satisfactory image quality (score ≤3), even with the FBP algorithm. Almost excellent (score <2) overall image quality was achieved with iDose levels 5 to 7, but oversmoothing artifacts appearing with iDose levels 6 and 7 affected the diagnostic confidence. In conclusion, the use of iDose level 5 enables almost excellent image quality without considerable artifacts affecting the diagnosis. Further evaluation is needed in order to draw more precise conclusions. (paper)

  18. Understandings of the Concept of Iteration in Design-Based Research

    DEFF Research Database (Denmark)

    Gundersen, Peter Bukovica

    2017-01-01

    The paper is the first in a series of papers addressing design in design-based research. The series looks into the question of how this research approach is connected to design. What happens when educational researchers adopt designerly ways of working? This paper provides an overview of design-based research and from there on discusses one key characteristic, namely iterations, which are fundamental to educational design research, in relation to how designers operate and why. The paper concludes that, in general, iteration is not a particularly well-described aspect in the reporting of DBR projects. Half... and usually after long periods of testing design solutions in practice.

  19. Dataflow-Based Mapping of Computer Vision Algorithms onto FPGAs

    Directory of Open Access Journals (Sweden)

    Ivan Corretjer

    2007-01-01

    Full Text Available We develop a design methodology for mapping computer vision algorithms onto an FPGA through the use of coarse-grain reconfigurable dataflow graphs as a representation to guide the designer. We first describe a new dataflow modeling technique called homogeneous parameterized dataflow (HPDF), which effectively captures the structure of an important class of computer vision applications. This form of dynamic dataflow takes advantage of the property that in a large number of image processing applications, data production and consumption rates can vary, but are equal across dataflow graph edges for any particular application iteration. After motivating and defining the HPDF model of computation, we develop an HPDF-based design methodology that offers useful properties in terms of verifying correctness and exposing performance-enhancing transformations; we discuss and address various challenges in efficiently mapping an HPDF-based application representation into target-specific HDL code; and we present experimental results pertaining to the mapping of a gesture recognition application onto the Xilinx Virtex II FPGA.

  20. A new subspace based approach to iterative learning control

    NARCIS (Netherlands)

    Nijsse, G.; Verhaegen, M.; Doelman, N.J.

    2001-01-01

    This paper presents an iterative learning control (ILC) procedure based on an inverse model of the plant under control. Our first contribution is that we formulate the inversion procedure as a Kalman smoothing problem: based on a compact state space model of a possibly non-minimum phase system,

  1. Generation of a statistical shape model with probabilistic point correspondences and the expectation maximization-iterative closest point algorithm

    International Nuclear Information System (INIS)

    Hufnagel, Heike; Pennec, Xavier; Ayache, Nicholas; Ehrhardt, Jan; Handels, Heinz

    2008-01-01

    Identification of point correspondences between shapes is required for statistical analysis of organ shape differences. Since manual identification of landmarks is not a feasible option in 3D, several methods were developed to automatically find one-to-one correspondences on shape surfaces. For unstructured point sets, however, one-to-one correspondences do not exist, but correspondence probabilities can be determined. A method was developed to compute a statistical shape model based on shapes which are represented by unstructured point sets with arbitrary point numbers. A fundamental problem when computing statistical shape models is the determination of correspondences between the points of the shape observations of the training data set. In the absence of landmarks, exact correspondences can only be determined between continuous surfaces, not between unstructured point sets. To overcome this problem, we introduce correspondence probabilities instead of exact correspondences. The correspondence probabilities are found by aligning the observation shapes with the affine expectation maximization-iterative closest point (EM-ICP) registration algorithm. In a second step, the correspondence probabilities are used as input to compute a mean shape (represented once again by an unstructured point set). Both steps are unified in a single optimization criterion which depends on the two parameters 'registration transformation' and 'mean shape'. In a last step, a variability model which best represents the variability in the training data set is computed. Experiments on synthetic data sets and in vivo brain structure data sets (MRI) are then designed to evaluate the performance of our algorithm. The new method was applied to brain MRI data sets, and the estimated point correspondences were compared to a statistical shape model built on exact correspondences. Based on established measures of 'generalization ability' and 'specificity', the estimates were very satisfactory

  2. Iterative metal artifact reduction for x-ray computed tomography using unmatched projector/backprojector pairs

    International Nuclear Information System (INIS)

    Zhang, Hanming; Wang, Linyuan; Li, Lei; Cai, Ailong; Hu, Guoen; Yan, Bin

    2016-01-01

    Purpose: Metal artifact reduction (MAR) is a major problem and a challenging issue in x-ray computed tomography (CT) examinations. Iterative reconstruction from sinograms unaffected by metals shows promising potential in detail recovery. This reconstruction has been the subject of much research in recent years. However, conventional iterative reconstruction methods easily introduce new artifacts around metal implants because of incomplete data reconstruction and inconsistencies in practical data acquisition. Hence, this work aims at developing a method to suppress newly introduced artifacts and improve the image quality around metal implants for the iterative MAR scheme. Methods: The proposed method consists of two steps based on the general iterative MAR framework. An uncorrected image is initially reconstructed, and the corresponding metal trace is obtained. The iterative reconstruction method is then used to reconstruct images from the unaffected sinogram. In the reconstruction step of this work, an iterative strategy utilizing unmatched projector/backprojector pairs is used. A ramp filter is introduced into the back-projection procedure to restrain the inconsistency components in low frequencies and generate more reliable images of the regions around metals. Furthermore, a constrained total variation (TV) minimization model is also incorporated to enhance efficiency. The proposed strategy is implemented based on an iterative FBP and an alternating direction minimization (ADM) scheme, respectively. The developed algorithms are referred to as “iFBP-TV” and “TV-FADM,” respectively. Two projection-completion-based MAR methods and three iterative MAR methods are performed simultaneously for comparison. Results: The proposed method performs reasonably on both simulation and real CT-scanned datasets. This approach could reduce streak metal artifacts effectively and avoid the mentioned effects in the vicinity of the metals. The improvements are evaluated by

  3. Parallel CT image reconstruction based on GPUs

    International Nuclear Information System (INIS)

    Flores, Liubov A.; Vidal, Vicent; Mayo, Patricia; Rodenas, Francisco; Verdú, Gumersindo

    2014-01-01

    In X-ray computed tomography (CT), iterative methods are more suitable for the reconstruction of images with high contrast and precision in noisy conditions from a small number of projections. However, in practice, these methods are not widely used due to the high computational cost of their implementation. Nowadays technology provides the possibility to reduce this drawback effectively. The goal of this work is to develop a fast GPU-based algorithm to reconstruct high quality images from undersampled and noisy projection data. - Highlights: • We developed a GPU-based iterative algorithm to reconstruct images. • Iterative algorithms are capable of reconstructing images from an undersampled set of projections. • The computational cost of the implementation of the developed algorithm is low. • The efficiency of the algorithm increases for large scale problems

  4. Bayesian Maximum Entropy Based Algorithm for Digital X-ray Mammogram Processing

    Directory of Open Access Journals (Sweden)

    Radu Mutihac

    2009-06-01

    Full Text Available Basics of Bayesian statistics in inverse problems using the maximum entropy principle are summarized in connection with the restoration of positive, additive images from various types of data like X-ray digital mammograms. An efficient iterative algorithm for image restoration from large data sets based on the conjugate gradient method and Lagrange multipliers in nonlinear optimization of a specific potential function was developed. The point spread function of the imaging system was determined by numerical simulations of inhomogeneous breast-like tissue with microcalcification inclusions of various opacities. The processed digital and digitized mammograms resulted superior in comparison with their raw counterparts in terms of contrast, resolution, noise, and visibility of details.

  5. On Newton-Raphson formulation and algorithm for displacement based structural dynamics problem with quadratic damping nonlinearity

    Directory of Open Access Journals (Sweden)

    Koh Kim Jie

    2017-01-01

    Full Text Available Quadratic damping nonlinearity is challenging for displacement-based structural dynamics problems, as the problem is nonlinear in the time derivative of the primitive variable. For such nonlinearity, the formulation of the tangent stiffness matrix is not lucid in the literature. Consequently, ambiguity related to the kinematics update arises when implementing the time integration-iterative algorithm. In the present work, an Euler-Bernoulli beam vibration problem with quadratic damping nonlinearity is addressed, as the main source of quadratic damping nonlinearity arises from drag force estimation, which is generally valid only for slender structures. Employing the Newton-Raphson formulation, the tangent stiffness components associated with quadratic damping nonlinearity require velocity input for evaluation purposes. For this reason, two mathematically equivalent algorithm structures with different kinematics arrangements are tested. Both algorithm structures result in the same accuracy and convergence characteristics of the solution.
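
    The tangent-operator issue the abstract raises can be shown on a single degree of freedom: for m v̇ + c v|v| + k x = f, the quadratic damping term contributes 2c|v| to the tangent used in each Newton-Raphson iteration. A hedged backward-Euler sketch (scheme and parameters are illustrative, not the paper's beam formulation):

    ```python
    def step_quadratic_damping(x_n, v_n, dt, m, c, k, f, tol=1e-12):
        """One backward-Euler step of  m v' + c v|v| + k x = f  solved with
        Newton-Raphson on the unknown velocity. The quadratic damping term
        c*v*|v| contributes 2*c*|v| to the tangent operator -- the piece
        the abstract notes is often left implicit in the literature.
        """
        v = v_n                              # initial iterate
        for _ in range(50):
            x = x_n + dt * v                 # kinematics update per iterate
            R = m * (v - v_n) / dt + c * v * abs(v) + k * x - f  # residual
            K_t = m / dt + 2 * c * abs(v) + k * dt               # tangent
            dv = -R / K_t
            v += dv
            if abs(dv) < tol:
                break
        return x_n + dt * v, v

    # Free vibration of a toy SDOF system (all parameters are illustrative).
    x, v = 1.0, 0.0
    for _ in range(1000):
        x, v = step_quadratic_damping(x, v, dt=0.01, m=1.0, c=0.5, k=4.0, f=0.0)
    print(x, v)   # decays toward rest
    ```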

  6. Effects of systematic phase errors on optimized quantum random-walk search algorithm

    International Nuclear Information System (INIS)

    Zhang Yu-Chao; Bao Wan-Su; Wang Xiang; Fu Xiang-Qun

    2015-01-01

    This study investigates the effects of systematic errors in phase inversions on the success rate and number of iterations in the optimized quantum random-walk search algorithm. Using the geometric description of this algorithm, a model of the algorithm with phase errors is established, and the relationship between the success rate of the algorithm, the database size, the number of iterations, and the phase error is determined. For a given database size, we obtain both the maximum success rate of the algorithm and the required number of iterations when phase errors are present in the algorithm. Analyses and numerical simulations show that the optimized quantum random-walk search algorithm is more robust against phase errors than Grover’s algorithm. (paper)

  7. INTELLIGENT FRACTIONAL ORDER ITERATIVE LEARNING CONTROL USING FEEDBACK LINEARIZATION FOR A SINGLE-LINK ROBOT

    Directory of Open Access Journals (Sweden)

    Iman Ghasemi

    2017-05-01

    Full Text Available In this paper, iterative learning control (ILC) is combined with an optimal fractional order derivative (BBO-Dα-type ILC) and optimal fractional and proportional-derivative (BBO-PDα-type ILC). In the update law of Arimoto's derivative iterative learning control, a first-order derivative of the tracking error signal is used. In the proposed method, a fractional order derivative of the error signal, expressed in terms of s^α, is used to update the iterative learning control law. Two types of fractional order iterative learning control, namely PDα-type ILC and Dα-type ILC, are obtained for different values of α. In order to improve the performance of the closed-loop control system, the coefficients of both learning laws, i.e., the proportional and derivative gains and the fractional order α, are optimized using the Biogeography-Based Optimization (BBO) algorithm. The simulation results are compared with those of the conventional fractional order iterative learning control to verify the effectiveness of BBO-Dα-type ILC and BBO-PDα-type ILC

  8. Influence of adaptive statistical iterative reconstruction algorithm on image quality in coronary computed tomography angiography.

    Science.gov (United States)

    Precht, Helle; Thygesen, Jesper; Gerke, Oke; Egstrup, Kenneth; Waaler, Dag; Lambrechtsen, Jess

    2016-12-01

    Coronary computed tomography angiography (CCTA) requires high spatial and temporal resolution, increased low contrast resolution for the assessment of coronary artery stenosis, plaque detection, and/or non-coronary pathology. Therefore, new reconstruction algorithms, particularly iterative reconstruction (IR) techniques, have been developed in an attempt to improve image quality with no cost in radiation exposure. To evaluate whether adaptive statistical iterative reconstruction (ASIR) enhances perceived image quality in CCTA compared to filtered back projection (FBP). Thirty patients underwent CCTA due to suspected coronary artery disease. Images were reconstructed using FBP, 30% ASIR, and 60% ASIR. Ninety image sets were evaluated by five observers using the subjective visual grading analysis (VGA) and assessed by proportional odds modeling. Objective quality assessment (contrast, noise, and the contrast-to-noise ratio [CNR]) was analyzed with linear mixed effects modeling on log-transformed data. The need for ethical approval was waived by the local ethics committee as the study only involved anonymously collected clinical data. VGA showed significant improvements in sharpness by comparing FBP with ASIR, resulting in odds ratios of 1.54 for 30% ASIR and 1.89 for 60% ASIR (P = 0.004). The objective measures showed significant differences between FBP and 60% ASIR (P < 0.0001) for noise, with an estimated ratio of 0.82, and for CNR, with an estimated ratio of 1.26. ASIR improved the subjective image quality of parameter sharpness and, objectively, reduced noise and increased CNR.

  9. A simple iterative independent component analysis algorithm for vibration source signal identification of complex structures

    Directory of Open Access Journals (Sweden)

    Dong-Sup Lee

    2015-01-01

    Full Text Available Independent Component Analysis (ICA), one of the blind source separation methods, can be applied to extract unknown source signals only from received signals. This is accomplished by finding statistical independence of signal mixtures and has been successfully applied to myriad fields such as medical science, image processing, and numerous others. Nevertheless, there are inherent problems that have been reported when using this technique: instability and invalid ordering of separated signals, particularly when using a conventional ICA technique in vibratory source signal identification of complex structures. In this study, a simple iterative algorithm based on the conventional ICA has been proposed to mitigate these problems. The proposed method to extract more stable source signals having valid order includes an iterative and reordering process of the extracted mixing matrix to reconstruct the finally converged source signals, referring to the magnitudes of correlation coefficients between the intermediately separated signals and the signals measured on or near the sources. In order to review the problems of the conventional ICA technique and to validate the proposed method, numerical analyses have been carried out for a virtual response model and a 30 m class submarine model. Moreover, in order to investigate the applicability of the proposed method to real problems of complex structures, an experiment has been carried out for a scaled submarine mockup. The results show that the proposed method can resolve the inherent problems of a conventional ICA technique.

  10. D-Iteration: diffusion approach for solving PageRank

    OpenAIRE

    Hong, Dohy; Huynh, The Dang; Mathieu, Fabien

    2015-01-01

    In this paper we present a new method that can accelerate the computation of the PageRank importance vector. Our method, called D-Iteration (DI), is based on a decomposition of the matrix-vector product that can be seen as a fluid diffusion model and is potentially suited to asynchronous implementation. We give theoretical results about the convergence of our algorithm and we show through experiments on a real Web graph that DI can improve the computation efficiency compared to other ...
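
    A hedged reading of the diffusion idea: each node holds a fluid value F, and popping a node moves its fluid into an accumulated history H while pushing a damped share to its successors; the history converges to the (unnormalized) PageRank vector regardless of the pop order. A minimal sketch (scheduling and dangling-node handling are illustrative choices, not necessarily the paper's):

    ```python
    import numpy as np

    def diffusion_pagerank(out_links, d=0.85, tol=1e-10):
        """Fluid-diffusion computation of PageRank in the spirit of
        D-Iteration: popping node i adds its fluid F[i] to its history H[i]
        and pushes d * F[i] / outdeg to its successors. The accumulated
        history, normalized, approximates the PageRank vector.
        """
        n = len(out_links)
        F = np.full(n, (1.0 - d) / n)     # initial fluid
        H = np.zeros(n)
        while F.sum() > tol:
            i = int(np.argmax(F))         # any scheduling order converges
            f, F[i] = F[i], 0.0
            H[i] += f
            # Dangling nodes spread their fluid uniformly (one common choice).
            succ = out_links[i] if out_links[i] else list(range(n))
            for j in succ:
                F[j] += d * f / len(succ)
        return H / H.sum()

    # Tiny 4-node graph given as adjacency lists.
    out_links = [[1, 2], [2], [0], [0, 2]]
    print(diffusion_pagerank(out_links))
    ```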

  11. Iterative algorithms to approximate canonical Gabor windows: Computational aspects

    DEFF Research Database (Denmark)

    Janssen, A.J.E.M; Søndergaard, Peter Lempel

    in the iteration step: norm scaling, where in each step the windows are normalized, and initial scaling where we only scale in the very beginning. Norm scaling leads to fast, but conditionally convergent methods, while initial scaling leads to unconditionally convergent methods, but with possibly suboptimal...

  12. A Globally Convergent Parallel SSLE Algorithm for Inequality Constrained Optimization

    Directory of Open Access Journals (Sweden)

    Zhijun Luo

    2014-01-01

    Full Text Available A new parallel variable distribution algorithm based on an interior point SSLE algorithm is proposed for solving inequality constrained optimization problems under the condition that the constraints are block-separable, using the technique of sequential systems of linear equations. Each iteration of this algorithm only needs to solve three systems of linear equations with the same coefficient matrix to obtain the descent direction. Furthermore, under certain conditions, global convergence is achieved.

  13. TurboFold: Iterative probabilistic estimation of secondary structures for multiple RNA sequences

    Directory of Open Access Journals (Sweden)

    Sharma Gaurav

    2011-04-01

    significance threshold are shown to be more accurate for TurboFold than for alternative methods that estimate base pairing probabilities. TurboFold-MEA, which uses base pairing probabilities from TurboFold in a maximum expected accuracy algorithm for secondary structure prediction, has accuracy comparable to the best performing secondary structure prediction methods. The computational and memory requirements for TurboFold are modest and, in terms of sequence length and number of sequences, scale much more favorably than joint alignment and folding algorithms. Conclusions TurboFold is an iterative probabilistic method for predicting secondary structures for multiple RNA sequences that efficiently and accurately combines the information from the comparative analysis between sequences with the thermodynamic folding model. Unlike most other multi-sequence structure prediction methods, TurboFold does not enforce strict commonality of structures and is therefore useful for predicting structures for homologous sequences that have diverged significantly. TurboFold can be downloaded as part of the RNAstructure package at http://rna.urmc.rochester.edu.

  14. An efficient IMPES-based, shifting matrix algorithm to simulate two-phase, immiscible flow in porous media with application to CO 2 sequestration in the subsurface

    KAUST Repository

    Salama, Amgad; Sun, Shuyu; El-Amin, Mohamed

    2012-01-01

    algorithms to this problem are based on discretizing the governing laws on a generic cell and then proceeding to the other cells within loops. Therefore, it is expected that calling and iterating over these loops several times can take a significant amount of CPU time

  15. Iterative image reconstruction for positron emission tomography based on a detector response function estimated from point source measurements

    International Nuclear Information System (INIS)

    Tohme, Michel S; Qi Jinyi

    2009-01-01

    The accuracy of the system model in an iterative reconstruction algorithm greatly affects the quality of reconstructed positron emission tomography (PET) images. For efficient computation in reconstruction, the system model in PET can be factored into a product of a geometric projection matrix and sinogram blurring matrix, where the former is often computed based on analytical calculation, and the latter is estimated using Monte Carlo simulations. Direct measurement of a sinogram blurring matrix is difficult in practice because of the requirement of a collimated source. In this work, we propose a method to estimate the 2D blurring kernels from uncollimated point source measurements. Since the resulting sinogram blurring matrix stems from actual measurements, it can take into account the physical effects in the photon detection process that are difficult or impossible to model in a Monte Carlo (MC) simulation, and hence provide a more accurate system model. Another advantage of the proposed method over MC simulation is that it can easily be applied to data that have undergone a transformation to reduce the data size (e.g., Fourier rebinning). Point source measurements were acquired with high count statistics in a relatively fine grid inside the microPET II scanner using a high-precision 2D motion stage. A monotonically convergent iterative algorithm has been derived to estimate the detector blurring matrix from the point source measurements. The algorithm takes advantage of the rotational symmetry of the PET scanner and explicitly models the detector block structure. The resulting sinogram blurring matrix is incorporated into a maximum a posteriori (MAP) image reconstruction algorithm. The proposed method has been validated using a 3 x 3 line phantom, an ultra-micro resolution phantom and a 22 Na point source superimposed on a warm background. The results of the proposed method show improvements in both resolution and contrast ratio when compared with the MAP

  16. GNSS troposphere tomography based on two-step reconstructions using GPS observations and COSMIC profiles

    Directory of Open Access Journals (Sweden)

    P. Xia

    2013-10-01

    Full Text Available Traditionally, balloon-based radiosonde soundings are used to study the spatial distribution of atmospheric water vapour. However, this approach cannot be employed frequently due to its high cost. In contrast, the GPS tomography technique can obtain water vapour at a high temporal resolution. In the tomography technique, an iterative or non-iterative reconstruction algorithm is usually utilised to overcome the rank deficiency of the observation equations for water vapour inversion. However, both iterative and non-iterative reconstruction algorithms have their limitations. For instance, the iterative reconstruction algorithm requires accurate initial values of water vapour, while the non-iterative reconstruction algorithm needs proper constraint conditions. To overcome these drawbacks, we present a combined iterative and non-iterative reconstruction approach for three-dimensional (3-D) water vapour inversion using GPS observations and COSMIC profiles. In this approach, the non-iterative reconstruction algorithm is first used to estimate water vapour density based on a priori water vapour information derived from COSMIC radio occultation data. The estimates are then employed as initial values in the iterative reconstruction algorithm. The largest advantage of this approach is that precise initial values of water vapour density, which are essential in the iterative reconstruction algorithm, can be obtained. This combined reconstruction algorithm (CRA) is evaluated using 10 days of GPS observations in Hong Kong and COSMIC profiles. The test results indicate that the water vapour accuracy from the CRA is 16 and 14% higher than that of the iterative and non-iterative reconstruction approaches, respectively. In addition, the tomography results obtained from the CRA are further validated using radiosonde data. Results indicate that water vapour densities derived from the CRA agree with radiosonde results very well at altitudes above 2.5 km. The average RMS value of their

  17. Fitness Estimation Based Particle Swarm Optimization Algorithm for Layout Design of Truss Structures

    Directory of Open Access Journals (Sweden)

    Ayang Xiao

    2014-01-01

    Full Text Available Because vastly different variables and constraints must be considered simultaneously, truss layout optimization is a typical example of a difficult constrained mixed-integer nonlinear program. Moreover, the computational cost of truss analysis is often quite high. In this paper, a novel fitness-estimation-based particle swarm optimization algorithm with an adaptive penalty function approach (FEPSO-AP) is proposed to handle this problem. FEPSO-AP adopts a special fitness estimation strategy to evaluate similar particles in the current population, with the aim of reducing the computational cost. Furthermore, FEPSO-AP employs a concise adaptive penalty function that handles multiple constraints effectively by making good use of historical iteration information. Four benchmark examples with fixed topologies and up to 44 design dimensions were studied to verify the generality and efficiency of the proposed algorithm. Numerical results of the present work, compared with results of other state-of-the-art hybrid algorithms reported in the literature, demonstrate that the convergence rate and the solution quality of FEPSO-AP are competitive.
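
    To make the two ingredients concrete, the following hypothetical Python sketch combines fitness estimation (reusing archived exact evaluations for near-duplicate particles) with an adaptive penalty that reacts to the swarm's feasibility history. Here obj and cons stand in for the expensive truss analysis; none of the constants are taken from the paper, and this is not FEPSO-AP itself.

        import numpy as np

        def fepso_ap(obj, cons, lb, ub, n_particles=30, n_iter=100,
                     sim_tol=1e-3, seed=0):
            rng = np.random.default_rng(seed)
            lb, ub = np.asarray(lb, float), np.asarray(ub, float)
            x = rng.uniform(lb, ub, (n_particles, len(lb)))
            v = np.zeros_like(x)
            archive = []                       # (design, objective, violation)

            def raw_eval(p):
                # fitness estimation: reuse the record of a near-identical design
                for xa, fa, ga in archive:
                    if np.linalg.norm(p - xa) < sim_tol:
                        return fa, ga
                fa, ga = obj(p), np.maximum(cons(p), 0.0).sum()
                archive.append((p.copy(), fa, ga))   # exact, expensive evaluation
                return fa, ga

            penalty, gbest, gbest_f = 1.0, None, np.inf
            pbest, pbest_f = x.copy(), np.full(n_particles, np.inf)
            for _ in range(n_iter):
                raw = [raw_eval(p) for p in x]
                # adaptive penalty: tighten while most of the swarm is infeasible
                infeasible = np.mean([g > 0.0 for _, g in raw])
                penalty *= 1.5 if infeasible > 0.5 else 0.95
                fit = np.array([f + penalty * g for f, g in raw])
                better = fit < pbest_f
                pbest[better], pbest_f[better] = x[better], fit[better]
                if pbest_f.min() < gbest_f:
                    gbest_f = pbest_f.min()
                    gbest = pbest[pbest_f.argmin()].copy()
                r1, r2 = rng.random((2, n_particles, len(lb)))
                v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
                x = np.clip(x + v, lb, ub)
            return gbest, gbest_f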

  18. An ensemble based nonlinear orthogonal matching pursuit algorithm for sparse history matching of reservoir models

    KAUST Repository

    Elsheikh, Ahmed H.; Wheeler, Mary Fanett; Hoteit, Ibrahim

    2013-01-01

    the dictionary, the solution is obtained by applying Tikhonov regularization. The proposed algorithm relies on approximate gradient estimation using an iterative stochastic ensemble method (ISEM). ISEM utilizes an ensemble of directional derivatives
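
    Although only a fragment of the abstract is shown, its two named ingredients can be illustrated: an ensemble estimate of directional derivatives (a stochastic-ensemble-style Jacobian) and a Tikhonov-regularised update of the coefficients of the selected dictionary atoms. The Python sketch below is a generic reconstruction under those assumptions, not the authors' algorithm.

        import numpy as np

        def ensemble_jacobian(forward, m, n_ens=40, sigma=0.01, seed=0):
            """Jacobian of `forward` at m estimated from an ensemble of
            random directional derivatives."""
            rng = np.random.default_rng(seed)
            f0 = forward(m)
            dM = sigma * rng.standard_normal((n_ens, m.size))   # perturbations
            dF = np.stack([forward(m + d) - f0 for d in dM])    # responses
            # least-squares linear fit dF ~ dM @ J.T  =>  J = (pinv(dM) @ dF).T
            return f0, (np.linalg.pinv(dM) @ dF).T

        def fit_dictionary_coefficients(forward, D, c, d_obs, alpha=1e-2, n_gn=5):
            """Tikhonov-regularised Gauss-Newton fit of coefficients c so that
            forward(D @ c) ~ d_obs, where D's columns are the chosen atoms."""
            for _ in range(n_gn):
                f0, Jm = ensemble_jacobian(forward, D @ c)
                Jc = Jm @ D                                     # chain rule to c
                lhs = Jc.T @ Jc + alpha * np.eye(len(c))
                rhs = Jc.T @ (d_obs - f0) - alpha * c
                c = c + np.linalg.solve(lhs, rhs)
            return c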

  19. Variational Iteration Method for Fifth-Order Boundary Value Problems Using He's Polynomials

    Directory of Open Access Journals (Sweden)

    Muhammad Aslam Noor

    2008-01-01

    Full Text Available We apply the variational iteration method using He's polynomials (VIMHP) for solving fifth-order boundary value problems. The proposed method is an elegant combination of the variational iteration and the homotopy perturbation methods and is mainly due to Ghorbani (2007). The suggested algorithm is quite efficient and is practically well suited for use in these problems. The proposed iterative scheme finds the solution without any discretization, linearization, or restrictive assumptions. Several examples are given to verify the reliability and efficiency of the method. The fact that the proposed technique solves nonlinear problems without using Adomian's polynomials can be considered a clear advantage of this algorithm over the decomposition method.
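
    For orientation, the standard VIM correction functional for a general fifth-order problem u^(5)(x) = f(x, u) takes the textbook form (stated here for context, not quoted from the paper), in LaTeX notation:

        u_{n+1}(x) = u_n(x) + \int_0^x \lambda(s)\left[ u_n^{(5)}(s) - f\big(s, \tilde{u}_n(s)\big) \right] ds,
        \qquad \lambda(s) = -\frac{(s - x)^4}{4!},

    where \tilde{u}_n denotes a restricted variation. In VIMHP the nonlinear term is expanded in He's polynomials H_k, obtained from the homotopy expansion u = \sum_{k \ge 0} p^k u_k via H_k = \frac{1}{k!} \frac{\partial^k}{\partial p^k} f\big(x, \sum_i p^i u_i\big)\Big|_{p=0}.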

  20. An adaptive Gaussian process-based iterative ensemble smoother for data assimilation

    Science.gov (United States)

    Ju, Lei; Zhang, Jiangjiang; Meng, Long; Wu, Laosheng; Zeng, Lingzao

    2018-05-01

    Accurate characterization of subsurface hydraulic conductivity is vital for modeling subsurface flow and transport. The iterative ensemble smoother (IES) has been proposed to estimate such heterogeneous parameter fields. As a Monte Carlo-based method, IES requires a relatively large ensemble size to guarantee its performance. To improve computational efficiency, we propose an adaptive Gaussian process (GP)-based iterative ensemble smoother (GPIES) in this study. At each iteration, the GP surrogate is adaptively refined by adding a few new base points chosen from the updated parameter realizations. The sensitivity information between model parameters and measurements is then calculated from a large number of realizations generated by the GP surrogate at virtually no computational cost. Since the original model evaluations are required only for the base points, whose number is much smaller than the ensemble size, the computational cost is significantly reduced. The applicability of GPIES to estimating heterogeneous conductivity is evaluated on both saturated and unsaturated flow problems. Without sacrificing estimation accuracy, GPIES achieves roughly an order-of-magnitude speed-up over the standard IES. Although subsurface flow problems are considered in this study, the proposed method can be equally applied to other hydrological models.
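
    One GPIES-style iteration can be sketched as follows, assuming scikit-learn's GaussianProcessRegressor as the surrogate (which treats multiple outputs independently) and a stochastic ensemble-smoother update. The paper's surrogate refinement and update equations may differ in detail; all names here are illustrative.

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor

        def gpies_iteration(forward, X, d_obs, obs_std, n_base=20,
                            n_virtual=2000, seed=0):
            """One surrogate-accelerated smoother update (hedged sketch).

            forward : expensive model mapping a parameter vector to predicted data
            X       : (n_ens, n_par) current parameter ensemble
            """
            rng = np.random.default_rng(seed)
            # 1. refine the surrogate on a few base points from the ensemble
            base = X[rng.choice(len(X), n_base, replace=False)]
            Y_base = np.array([forward(p) for p in base])   # only expensive calls
            gp = GaussianProcessRegressor(normalize_y=True).fit(base, Y_base)

            # 2. large 'virtual' ensemble evaluated with the cheap surrogate
            Xv = rng.multivariate_normal(X.mean(0), np.cov(X.T), n_virtual)
            Yv = gp.predict(Xv)

            # 3. ensemble-smoother update built from surrogate cross-covariances
            dX, dY = Xv - Xv.mean(0), Yv - Yv.mean(0)
            Cxy = dX.T @ dY / (n_virtual - 1)
            Cyy = dY.T @ dY / (n_virtual - 1) + obs_std**2 * np.eye(len(d_obs))
            K = Cxy @ np.linalg.inv(Cyy)                    # Kalman-type gain
            D = d_obs + obs_std * rng.standard_normal((len(X), len(d_obs)))
            return X + (D - gp.predict(X)) @ K.T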