WorldWideScience

Sample records for policy iteration algorithm

  1. Approximate iterative algorithms

    CERN Document Server

    Almudevar, Anthony Louis

    2014-01-01

    Iterative algorithms often rely on approximate evaluation techniques, which may include statistical estimation, computer simulation or functional approximation. This volume presents methods for the study of approximate iterative algorithms, providing tools for the derivation of error bounds and convergence rates, and for the optimal design of such algorithms. Techniques of functional analysis are used to derive analytical relationships between approximation methods and convergence properties for general classes of algorithms. This work provides the necessary background in functional analysis a

  2. Discrete-Time Nonzero-Sum Games for Multiplayer Using Policy-Iteration-Based Adaptive Dynamic Programming Algorithms.

    Science.gov (United States)

    Zhang, Huaguang; Jiang, He; Luo, Chaomin; Xiao, Geyang

    2017-10-01

    In this paper, we investigate the nonzero-sum games for a class of discrete-time (DT) nonlinear systems by using a novel policy iteration (PI) adaptive dynamic programming (ADP) method. The main idea of our proposed PI scheme is to utilize the iterative ADP algorithm to obtain the iterative control policies, which not only ensure that the system achieves stability but also minimize the performance index function for each player. This paper integrates game theory, optimal control theory, and reinforcement learning techniques to formulate and handle the DT nonzero-sum games for multiple players. First, we design three actor-critic algorithms, an offline one and two online ones, for the PI scheme. Subsequently, neural networks are employed to implement these algorithms and the corresponding stability analysis is also provided via the Lyapunov theory. Finally, a numerical simulation example is presented to demonstrate the effectiveness of our proposed approach.

  3. Iterative Algorithms for Nonexpansive Mappings

    Directory of Open Access Journals (Sweden)

    Yao Yonghong

    2008-01-01

    Full Text Available Abstract We suggest and analyze two new iterative algorithms for a nonexpansive mapping in Banach spaces. We prove that the proposed iterative algorithms converge strongly to some fixed point of the mapping.

  4. Exponential Lower Bounds For Policy Iteration

    OpenAIRE

    Fearnley, John

    2010-01-01

    We study policy iteration for infinite-horizon Markov decision processes. It has recently been shown that policy iteration style algorithms have exponential lower bounds in a two-player game setting. We extend these lower bounds to Markov decision processes with the total-reward and average-reward optimality criteria.
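
    For context, here is a minimal sketch of textbook policy iteration for a finite, discounted MDP, written in Python/NumPy. The transition array P, reward array R and discount factor are illustrative assumptions; the paper's lower bounds concern total-reward and average-reward variants and adversarial instances, which this sketch does not reproduce.

        import numpy as np

        def policy_iteration(P, R, gamma=0.9):
            """Textbook policy iteration for a finite discounted MDP.

            P: (S, A, S) transition probabilities, R: (S, A) expected rewards.
            Returns the final (optimal) deterministic policy and its value function.
            """
            S, A, _ = P.shape
            policy = np.zeros(S, dtype=int)              # arbitrary initial policy
            while True:
                # Policy evaluation: solve (I - gamma * P_pi) v = r_pi exactly.
                P_pi = P[np.arange(S), policy]           # (S, S) under the current policy
                r_pi = R[np.arange(S), policy]           # (S,)
                v = np.linalg.solve(np.eye(S) - gamma * P_pi, r_pi)
                # Policy improvement: greedy one-step lookahead on v.
                q = R + gamma * np.einsum('sat,t->sa', P, v)
                new_policy = q.argmax(axis=1)
                if np.array_equal(new_policy, policy):
                    return policy, v
                policy = new_policy

    Each sweep improves the policy until a fixed point is reached; the lower bounds above show that the number of such sweeps can grow exponentially for suitably constructed instances.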

  5. Rollout sampling approximate policy iteration

    NARCIS (Netherlands)

    Dimitrakakis, C.; Lagoudakis, M.G.

    2008-01-01

    Several researchers have recently investigated the connection between reinforcement learning and classification. We are motivated by proposals of approximate policy iteration schemes without value functions, which focus on policy representation using classifiers and address policy learning as a

  6. Perturbation resilience and superiorization of iterative algorithms

    International Nuclear Information System (INIS)

    Censor, Y; Davidi, R; Herman, G T

    2010-01-01

    Iterative algorithms aimed at solving some problems are discussed. For certain problems, such as finding a common point in the intersection of a finite number of convex sets, there often exist iterative algorithms that impose very little demand on computer resources. For other problems, such as finding that point in the intersection at which the value of a given function is optimal, algorithms tend to need more computer memory and longer execution time. A methodology is presented whose aim is to produce automatically for an iterative algorithm of the first kind a 'superiorized version' of it that retains its computational efficiency but nevertheless goes a long way toward solving an optimization problem. This is possible to do if the original algorithm is 'perturbation resilient', which is shown to be the case for various projection algorithms for solving the consistent convex feasibility problem. The superiorized versions of such algorithms use perturbations that steer the process in the direction of a superior feasible point, which is not necessarily optimal, with respect to the given function. After presenting these intuitive ideas in a precise mathematical form, they are illustrated in image reconstruction from projections for two different projection algorithms superiorized for the function whose value is the total variation of the image

  7. Convergence of iterative image reconstruction algorithms for Digital Breast Tomosynthesis

    DEFF Research Database (Denmark)

    Sidky, Emil; Jørgensen, Jakob Heide; Pan, Xiaochuan

    2012-01-01

    Most iterative image reconstruction algorithms are based on some form of optimization, such as minimization of a data-fidelity term plus an image regularizing penalty term. While achieving the solution of these optimization problems may not directly be clinically relevant, accurate optimization...... solutions can aid in iterative image reconstruction algorithm design. This issue is particularly acute for iterative image reconstruction in Digital Breast Tomosynthesis (DBT), where the corresponding data model IS particularly poorly conditioned. The impact of this poor conditioning is that iterative...... algorithms applied to this system can be slow to converge. Recent developments in first-order algorithms are now beginning to allow for accurate solutions to optimization problems of interest to tomographic imaging in general. In particular, we investigate an algorithm developed by Chambolle and Pock (2011 J...

  8. Iterative Mixture Component Pruning Algorithm for Gaussian Mixture PHD Filter

    Directory of Open Access Journals (Sweden)

    Xiaoxi Yan

    2014-01-01

    Full Text Available As far as the increasing number of mixture components in the Gaussian mixture PHD filter is concerned, an iterative mixture component pruning algorithm is proposed. The pruning algorithm is based on maximizing the posterior probability density of the mixture weights. The entropy distribution of the mixture weights is adopted as the prior distribution of the mixture component parameters. The iterative update formulations of the mixture weights are derived by the Lagrange multiplier method and the Lambert W function. Mixture components whose weights become negative during the iterative procedure are pruned by setting the corresponding mixture weights to zero. In addition, multiple mixture components with similar parameters describing the same PHD peak can be merged into one mixture component in the algorithm. Simulation results show that the proposed iterative mixture component pruning algorithm is superior to the typical pruning algorithm based on thresholds.

  9. Iterative algorithms for large sparse linear systems on parallel computers

    Science.gov (United States)

    Adams, L. M.

    1982-01-01

    Algorithms are developed for assembling in parallel the sparse system of linear equations that results from finite difference or finite element discretizations of elliptic partial differential equations, such as those that arise in structural engineering. Parallel linear stationary iterative algorithms and parallel preconditioned conjugate gradient algorithms are developed for solving these systems. In addition, a model for comparing parallel algorithms on array architectures is developed and results of this model for the algorithms are given.

  10. Noise propagation in iterative reconstruction algorithms with line searches

    International Nuclear Information System (INIS)

    Qi, Jinyi

    2003-01-01

    In this paper we analyze the propagation of noise in iterative image reconstruction algorithms. We derive theoretical expressions for the general form of preconditioned gradient algorithms with line searches. The results are applicable to a wide range of iterative reconstruction problems, such as emission tomography, transmission tomography, and image restoration. A unique contribution of this paper compared to our previous work [1] is that the line search is explicitly modeled and we do not use the approximation that the gradient of the objective function is zero. As a result, the error in the estimate of noise at early iterations is significantly reduced.

  11. Iterative projection algorithms for ab initio phasing in virus crystallography.

    Science.gov (United States)

    Lo, Victor L; Kingston, Richard L; Millane, Rick P

    2016-12-01

    Iterative projection algorithms are proposed as a tool for ab initio phasing in virus crystallography. The good global convergence properties of these algorithms, coupled with the spherical shape and high structural redundancy of icosahedral viruses, allow high-resolution phases to be determined with no initial phase information. This approach is demonstrated by determining the electron density of a virus crystal with 5-fold non-crystallographic symmetry, starting with only a spherical shell envelope. The electron density obtained is sufficiently accurate for model building. The results indicate that iterative projection algorithms should be routinely applicable in virus crystallography, without the need for ancillary phase information. Copyright © 2016 Elsevier Inc. All rights reserved.

  12. The irace package: Iterated racing for automatic algorithm configuration

    Directory of Open Access Journals (Sweden)

    Manuel López-Ibáñez

    2016-01-01

    Full Text Available Modern optimization algorithms typically require the setting of a large number of parameters to optimize their performance. The immediate goal of automatic algorithm configuration is to find, automatically, the best parameter settings of an optimizer. Ultimately, automatic algorithm configuration has the potential to lead to new design paradigms for optimization software. The irace package is a software package that implements a number of automatic configuration procedures. In particular, it offers iterated racing procedures, which have been used successfully to automatically configure various state-of-the-art algorithms. The iterated racing procedures implemented in irace include the iterated F-race algorithm and several extensions and improvements over it. In this paper, we describe the rationale underlying the iterated racing procedures and introduce a number of recent extensions. Among these, we introduce a restart mechanism to avoid premature convergence, the use of truncated sampling distributions to correctly handle parameter bounds, and an elitist racing procedure for ensuring that the best configurations returned are also those evaluated on the highest number of training instances. We experimentally evaluate the most recent version of irace and demonstrate with a number of example applications the use and potential of irace, in particular, and automatic algorithm configuration, in general.

  13. Parallel GPU implementation of iterative PCA algorithms.

    Science.gov (United States)

    Andrecut, M

    2009-11-01

    Principal component analysis (PCA) is a key statistical technique for multivariate data analysis. For large data sets, the common approach to PCA computation is based on the standard NIPALS-PCA algorithm, which unfortunately suffers from loss of orthogonality, and therefore its applicability is usually limited to the estimation of the first few components. Here we present an algorithm based on Gram-Schmidt orthogonalization (called GS-PCA), which eliminates this shortcoming of NIPALS-PCA. Also, we discuss the GPU (Graphics Processing Unit) parallel implementation of both NIPALS-PCA and GS-PCA algorithms. The numerical results show that the GPU parallel optimized versions, based on CUBLAS (NVIDIA), are substantially faster (up to 12 times) than the CPU optimized versions based on CBLAS (GNU Scientific Library).
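
    As a rough, hedged illustration of the Gram-Schmidt idea behind GS-PCA, the NumPy sketch below re-orthogonalizes each new loading vector against those already found and deflates the data matrix after every component; it is only an assumption-level sketch and does not reproduce the CUBLAS/GPU implementation described in the record.

        import numpy as np

        def gs_pca(X, n_components, n_iter=500, tol=1e-10):
            """NIPALS-style iterative PCA with Gram-Schmidt re-orthogonalization
            of each new loading vector against the loadings already found."""
            E = X - X.mean(axis=0)                         # centred copy, deflated below
            n, m = E.shape
            scores = np.zeros((n, n_components))
            loadings = np.zeros((m, n_components))
            for k in range(n_components):
                t = E[:, np.argmax((E ** 2).sum(axis=0))].copy()   # strongest column as seed
                for _ in range(n_iter):
                    p = E.T @ t
                    p -= loadings[:, :k] @ (loadings[:, :k].T @ p) # Gram-Schmidt step
                    p /= np.linalg.norm(p)
                    t_new = E @ p
                    if np.linalg.norm(t_new - t) <= tol * np.linalg.norm(t_new):
                        t = t_new
                        break
                    t = t_new
                scores[:, k], loadings[:, k] = t, p
                E = E - np.outer(t, p)                     # deflate before the next component
            return scores, loadings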

  14. A very fast implementation of 2D iterative reconstruction algorithms

    DEFF Research Database (Denmark)

    Toft, Peter Aundal; Jensen, Peter James

    1996-01-01

    The key idea of the authors' method is to generate the huge system matrix only once, and store it using sparse matrix techniques. From the sparse matrix one can perform the matrix-vector products very fast, which implies a major acceleration of the reconstruction algorithms. Here, the authors demonstrate...... that iterative reconstruction algorithms can be implemented and run almost as fast as direct reconstruction algorithms. The method has been implemented in a software package that is available for free, providing reconstruction algorithms using ART, EM, and the Least Squares Conjugate Gradient Method...
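
    A hedged sketch of the stored-sparse-matrix idea follows: the system matrix is assembled once as a SciPy sparse matrix, after which every iteration reduces to two sparse matrix-vector products. The SIRT-style update and the toy matrix are illustrative assumptions, not the ART/EM/CG implementations of the authors' package.

        import numpy as np
        from scipy.sparse import csr_matrix

        def sirt(A, b, n_iter=50):
            """Simple SIRT-style reconstruction using a precomputed sparse system matrix A."""
            row = np.asarray(A.sum(axis=1)).ravel(); row[row == 0] = 1.0
            col = np.asarray(A.sum(axis=0)).ravel(); col[col == 0] = 1.0
            x = np.zeros(A.shape[1])
            for _ in range(n_iter):
                r = (b - A @ x) / row          # row-normalized residual in projection space
                x += (A.T @ r) / col           # backproject and update the image estimate
            return x

        # Toy 2-pixel, 3-ray example; a real system matrix is assembled once and reused.
        A = csr_matrix(np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]))
        print(sirt(A, np.array([1.0, 2.0, 3.0])))          # converges towards [1, 2]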

  15. On the Convergence of Iterative Receiver Algorithms Utilizing Hard Decisions

    Directory of Open Access Journals (Sweden)

    Jürgen F. Rößler

    2009-01-01

    Full Text Available The convergence of receivers performing iterative hard decision interference cancellation (IHDIC) is analyzed in a general framework for ASK, PSK, and QAM constellations. We first give an overview of IHDIC algorithms known from the literature applied to linear modulation and DS-CDMA-based transmission systems and show the relation to Hopfield neural network theory. It is proven analytically that IHDIC with a serial update scheme always converges to a stable state in the estimated values in the course of the iterations and that IHDIC with a parallel update scheme converges to cycles of length 2. Additionally, we visualize the convergence behavior with the aid of convergence charts. Doing so, we give insight into possible errors occurring in IHDIC which turn out to be caused by locked error situations. The derived results can directly be applied to those iterative soft decision interference cancellation (ISDIC) receivers whose soft decision functions approach hard decision functions in the course of the iterations.

  16. A kind of iteration algorithm for fast wave heating

    International Nuclear Information System (INIS)

    Zhu Xueguang; Kuang Guangli; Zhao Yanping; Li Youyi; Xie Jikang

    1998-03-01

    The standard normal distribution for particles in Tokamak geometry is usually assumed in fast wave heating. In fact, due to the quasi-linear diffusion effect, the parallel and perpendicular temperatures of resonant particles are not equal, so this assumption introduces some error. For this case, the Fokker-Planck equation is introduced, and an iteration algorithm is adopted to solve the problem well.

  17. Iterative group splitting algorithm for opportunistic scheduling systems

    KAUST Repository

    Nam, Haewoon

    2014-05-01

    An efficient feedback algorithm for opportunistic scheduling systems based on iterative group splitting is proposed in this paper. Similar to the opportunistic splitting algorithm, the proposed algorithm adjusts (or lowers) the feedback threshold during a guard period if no user sends a feedback. However, when a feedback collision occurs at any point of time, the proposed algorithm no longer updates the threshold but narrows down the user search space by dividing the users into multiple groups iteratively, whereas the opportunistic splitting algorithm keeps adjusting the threshold until a single user is found. Since the threshold is only updated when no user sends a feedback, it is shown that the proposed algorithm significantly alleviates the signaling overhead for the threshold distribution to the users by the scheduler. More importantly, the proposed algorithm requires fewer mini-slots than the opportunistic splitting algorithm to make a user selection with a given level of scheduling outage probability, or provides a higher ergodic capacity given a certain number of mini-slots. © 2013 IEEE.

  18. Monte Carlo Alpha Iteration Algorithm for a Subcritical System Analysis

    Directory of Open Access Journals (Sweden)

    Hyung Jin Shim

    2015-01-01

    Full Text Available The α-k iteration method, which searches the fundamental mode alpha-eigenvalue via iterative updates of the fission source distribution, has been successfully used for the Monte Carlo (MC) alpha-static calculations of supercritical systems. However, the α-k iteration method for deep subcritical system analysis suffers from a gigantic number of neutron generations or a huge neutron weight, which leads to an abnormal termination of the MC calculations. In order to stably estimate the prompt neutron decay constant (α) of prompt subcritical systems regardless of subcriticality, we propose a new MC alpha-static calculation method named the α iteration algorithm. The new method is derived by directly applying the power method to the α-mode eigenvalue equation and its calculation stability is achieved by controlling the number of time source neutrons, which are generated in proportion to α divided by the neutron speed in MC neutron transport simulations. The effectiveness of the α iteration algorithm is demonstrated for two-group homogeneous problems with varying subcriticality by comparisons with analytic solutions. The applicability of the proposed method is evaluated for an experimental benchmark of the thorium-loaded accelerator-driven system.
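
    The power-method principle that the α iteration builds on can be illustrated on an ordinary matrix eigenproblem; the sketch below is a deliberate simplification and does not attempt the Monte Carlo neutron transport or the time-source-neutron control described in the record.

        import numpy as np

        def power_method(M, n_iter=500, tol=1e-12):
            """Estimate the dominant eigenpair of a square matrix M by power iteration."""
            v = np.ones(M.shape[0]) / np.sqrt(M.shape[0])
            lam = 0.0
            for _ in range(n_iter):
                w = M @ v
                v = w / np.linalg.norm(w)          # renormalize the iterate
                lam_new = v @ (M @ v)              # Rayleigh quotient eigenvalue estimate
                if abs(lam_new - lam) <= tol * max(1.0, abs(lam_new)):
                    break
                lam = lam_new
            return lam, v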

  19. Regularization iteration imaging algorithm for electrical capacitance tomography

    Science.gov (United States)

    Tong, Guowei; Liu, Shi; Chen, Hongyan; Wang, Xueyao

    2018-03-01

    The image reconstruction method plays a crucial role in real-world applications of the electrical capacitance tomography technique. In this study, a new cost function that simultaneously considers the sparsity and low-rank properties of the imaging targets is proposed to improve the quality of the reconstructed images, in which the image reconstruction task is converted into an optimization problem. Within the framework of the split Bregman algorithm, an iterative scheme that splits a complicated optimization problem into several simpler sub-tasks is developed to solve the proposed cost function efficiently, in which the fast iterative shrinkage-thresholding algorithm is introduced to accelerate the convergence. Numerical experiment results verify the effectiveness of the proposed algorithm in improving the reconstruction precision and robustness.

  20. Iterative concurrent reconstruction algorithms for emission computed tomography

    International Nuclear Information System (INIS)

    Brown, J.K.; Hasegawa, B.H.; Lang, T.F.

    1994-01-01

    Direct reconstruction techniques, such as those based on filtered backprojection, are typically used for emission computed tomography (ECT), even though it has been argued that iterative reconstruction methods may produce better clinical images. The major disadvantage of iterative reconstruction algorithms, and a significant reason for their lack of clinical acceptance, is their computational burden. We outline a new class of "concurrent" iterative reconstruction techniques for ECT in which the reconstruction process is reorganized such that a significant fraction of the computational processing occurs concurrently with the acquisition of ECT projection data. These new algorithms use the 10-30 min required for acquisition of a typical SPECT scan to iteratively process the available projection data, significantly reducing the requirements for post-acquisition processing. These algorithms are tested on SPECT projection data from a Hoffman brain phantom acquired with 2 x 10^5 counts in 64 views, each having 64 projections. The SPECT images are reconstructed as 64 x 64 tomograms, starting with six angular views. Other angular views are added to the reconstruction process sequentially, in a manner that reflects their availability for a typical acquisition protocol. The results suggest that if T s of concurrent processing are used, the reconstruction processing time required after completion of the data acquisition can be reduced by at least T/3 s. (Author)

  1. A new iterative algorithm to reconstruct the refractive index.

    Science.gov (United States)

    Liu, Y J; Zhu, P P; Chen, B; Wang, J Y; Yuan, Q X; Huang, W X; Shu, H; Li, E R; Liu, X S; Zhang, K; Ming, H; Wu, Z Y

    2007-06-21

    The latest developments in x-ray imaging are associated with techniques based on the phase contrast. However, the image reconstruction procedures demand significant improvements of the traditional methods, and/or new algorithms have to be introduced to take advantage of the high contrast and sensitivity of the new experimental techniques. In this letter, an improved iterative reconstruction algorithm based on the maximum likelihood expectation maximization technique is presented and discussed in order to reconstruct the distribution of the refractive index from data collected by an analyzer-based imaging setup. The technique considered probes the partial derivative of the refractive index with respect to an axis lying in the meridional plane and perpendicular to the propagation direction. Computer simulations confirm the reliability of the proposed algorithm. In addition, the comparison between an analytical reconstruction algorithm and the iterative method has also been discussed, together with the convergence characteristics of the latter algorithm. Finally, we will show how the proposed algorithm may be applied to reconstruct the distribution of the refractive index of an epoxy cylinder containing small air bubbles of about 300 µm in diameter.

  2. Gaussian beam shooting algorithm based on iterative frame decomposition

    OpenAIRE

    Ghannoum, Ihssan; Letrou, Christine; Beauquet, Gilles

    2010-01-01

    Adaptive beam re-shooting is proposed as a solution to overcome essential limitations of the Gaussian Beam Shooting technique. The proposed algorithm is based on iterative frame decompositions of beam fields in situations where usual paraxial formulas fail to give accurate enough results, such as interactions with finite obstacle edges. Collimated beam fields are successively re-expanded on narrow and wide window frames, allowing for re-shooting and further propagation...

  3. Weighted iterative algorithm for beam alignment in scanning beam interference lithography.

    Science.gov (United States)

    Song, Ying; Wang, Wei; Jiang, Shan; Bayanheshig; Zhang, Ning

    2017-11-01

    To obtain low phase errors and good interference fringe contrast, an automated beam alignment system is used in scanning beam interference lithography. In the original iterative algorithm, if the initial beam deviation is large or the optical parameters are inappropriate, the beam angle (or position) overshoot may exceed the detector's range. To solve this problem, a weighted iterative algorithm is proposed in which the beam angle and position overshoots can be suppressed by adjusting the weighting coefficients. The original iterative algorithm is introduced. The weighted iterative algorithm is then presented and its convergence is analyzed. Simulation and experimental results show that the proposed weighted iterative algorithm can reduce the beam angle and position overshoots at the expense of convergence speed, avoiding the alignment failure caused by exceeding the detector's range. Besides, the original and weighted iterative algorithms can be combined to optimize the iteration.

  4. Distributed interference alignment iterative algorithms in symmetric wireless network

    Directory of Open Access Journals (Sweden)

    YANG Jingwen

    2015-02-01

    Full Text Available Interference alignment is a novel interference management technique that has attracted wide attention worldwide. Interference alignment overlaps interference in the same signal subspace at the receiving terminal by precoding, so as to thoroughly eliminate the influence of interference on the expected signals, thus allowing the desired user to achieve the maximum degrees of freedom. In this paper we research three typical algorithms for realizing interference alignment: minimizing the leakage interference, maximizing the signal-to-interference-plus-noise ratio (SINR), and minimizing the mean square error (MSE). All of these algorithms utilize the reciprocity of the wireless network and iterate the precoders between the original network and the reverse network so as to achieve interference alignment. We use the uplink transmit rate to analyze the performance of these three algorithms. Numerical simulation results show the advantages of these algorithms, which forms the foundation for further study in the future. The feasibility and future of interference alignment are also discussed.

  5. Iterative reconstruction of transcriptional regulatory networks: an algorithmic approach.

    Directory of Open Access Journals (Sweden)

    Christian L Barrett

    2006-05-01

    Full Text Available The number of complete, publicly available genome sequences is now greater than 200, and this number is expected to grow rapidly in the near future as metagenomic and environmental sequencing efforts escalate and the cost of sequencing drops. In order to make use of this data for understanding particular organisms and for discerning general principles about how organisms function, it will be necessary to reconstruct their various biochemical reaction networks. Principal among these will be transcriptional regulatory networks. Given the physical and logical complexity of these networks, the various sources of (often noisy) data that can be utilized for their elucidation, the monetary costs involved, and the huge number of potential experiments (approximately 10^12) that can be performed, experiment design algorithms will be necessary for synthesizing the various computational and experimental data to maximize the efficiency of regulatory network reconstruction. This paper presents an algorithm for experimental design to systematically and efficiently reconstruct transcriptional regulatory networks. It is meant to be applied iteratively in conjunction with an experimental laboratory component. The algorithm is presented here in the context of reconstructing transcriptional regulation for metabolism in Escherichia coli, and, through a retrospective analysis with previously performed experiments, we show that the produced experiment designs conform to how a human would design experiments. The algorithm is able to utilize probability estimates based on a wide range of computational and experimental sources to suggest experiments with the highest potential of discovering the greatest amount of new regulatory knowledge.

  6. Research on Transformer Direct Magnetic Bias Current Calculation Method Based on Field Circuit Iterative Algorithm

    OpenAIRE

    Ning Yao

    2014-01-01

    In order to analyze the DC magnetic bias effect on neutral-grounded AC transformers around a convertor station grounding electrode, this article proposes a new calculation method, the field circuit iterative algorithm. The method includes a partial iterative algorithm and a concentrated iterative algorithm, and builds on direct injection current calculation methods, the field-circuit coupling method and the resistor network method. Not only the effect of direct convertor station grounding elect...

  7. Least Squares Based Iterative Algorithm for the Coupled Sylvester Matrix Equations

    Directory of Open Access Journals (Sweden)

    Hongcai Yin

    2014-01-01

    Full Text Available By analyzing the eigenvalues of the related matrices, the convergence analysis of the least squares based iteration is given for solving the coupled Sylvester equations AX+YB=C and DX+YE=F in this paper. The analysis shows that the optimal convergence factor of this iterative algorithm is 1. In addition, the proposed iterative algorithm can solve the generalized Sylvester equation AXB+CXD=F. The analysis demonstrates that if the matrix equation has a unique solution then the least squares based iterative solution converges to the exact solution for any initial values. A numerical example illustrates the effectiveness of the proposed algorithm.

  8. Policy Iteration for Continuous-Time Average Reward Markov Decision Processes in Polish Spaces

    Directory of Open Access Journals (Sweden)

    Quanxin Zhu

    2009-01-01

    Full Text Available We study the policy iteration algorithm (PIA) for continuous-time jump Markov decision processes in general state and action spaces. The corresponding transition rates are allowed to be unbounded, and the reward rates may have neither upper nor lower bounds. The criterion that we are concerned with is expected average reward. We propose a set of conditions under which we first establish the average reward optimality equation and present the PIA. Then under two slightly different sets of conditions we show that the PIA yields the optimal (maximum) reward, an average optimal stationary policy, and a solution to the average reward optimality equation.
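
    For intuition, the following is a finite-state, unichain caricature of average-reward policy iteration (gain/bias evaluation with a reference state). It is a standard textbook construction offered only as an assumption-level illustration of the far more general Polish-space, continuous-time setting treated in the paper.

        import numpy as np

        def average_reward_policy_iteration(P, R, ref=0):
            """Policy iteration for a finite MDP under the average-reward criterion.
            Assumes the chain induced by every policy is unichain.
            P: (S, A, S) transition probabilities, R: (S, A) expected rewards.
            Returns the final policy, its gain g and a bias vector h with h[ref] = 0."""
            S, A, _ = P.shape
            policy = np.zeros(S, dtype=int)
            while True:
                P_pi = P[np.arange(S), policy]
                r_pi = R[np.arange(S), policy]
                # Evaluation: g + h(s) = r_pi(s) + sum_t P_pi(s,t) h(t), with h(ref) = 0.
                M = np.eye(S) - P_pi
                M[:, ref] = 1.0                    # this column now carries the unknown gain g
                sol = np.linalg.solve(M, r_pi)
                g, h = sol[ref], sol.copy()
                h[ref] = 0.0
                # Improvement: change an action only if it strictly improves the lookahead.
                q = R + np.einsum('sat,t->sa', P, h)
                new_policy = policy.copy()
                for s in range(S):
                    best = int(q[s].argmax())
                    if q[s, best] > q[s, policy[s]] + 1e-12:
                        new_policy[s] = best
                if np.array_equal(new_policy, policy):
                    return policy, g, h
                policy = new_policy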

  9. An iterative algorithm for fuzzy mixed production planning based on the cumulative membership function

    Directory of Open Access Journals (Sweden)

    Juan Carlos Figueroa García

    2011-12-01

    The presented approach uses an iterative algorithm which finds stable solutions to problems with fuzzy parameters on both sides of an FLP problem. The algorithm is based on the soft constraints method proposed by Zimmermann, combined with an iterative procedure which yields a single optimal solution.

  10. Near-Optimal Controller for Nonlinear Continuous-Time Systems With Unknown Dynamics Using Policy Iteration.

    Science.gov (United States)

    Dutta, Samrat; Patchaikani, Prem Kumar; Behera, Laxmidhar

    2016-07-01

    This paper presents a single-network adaptive critic-based controller for continuous-time systems with unknown dynamics in a policy iteration (PI) framework. It is assumed that the unknown dynamics can be estimated using the Takagi-Sugeno-Kang fuzzy model with arbitrary precision. The successful implementation of a PI scheme depends on the effective learning of the critic network parameters. Network parameters must stabilize the system in each iteration in addition to approximating the critic and the cost. It is found that critic updates according to the Hamilton-Jacobi-Bellman formulation sometimes lead to instability of the closed-loop system. In the proposed work, a novel critic network parameter update scheme is adopted, which not only approximates the critic at the current iteration but also provides feasible solutions that keep the policy stable in the next step of training, by combining a Lyapunov-based linear matrix inequalities approach with PI. The critic modeling technique presented here is the first of its kind to address this issue. Although multiple studies discuss the convergence of PI, to the best of our knowledge none focuses on the effect of the critic network parameters on the convergence. The computational complexity of the proposed algorithm is reduced to the order of (F_z)^(n-1), where n is the fuzzy state dimensionality and F_z is the number of fuzzy zones in the state space. A genetic algorithm toolbox of MATLAB is used for searching stable parameters while minimizing the training error. The proposed algorithm also provides a way to solve for the initial stable control policy in the PI scheme. The algorithm is validated through a real-time experiment on a commercial robotic manipulator. Results show that the algorithm successfully finds stable critic network parameters in real time for a highly nonlinear system.

  11. Research on Transformer Direct Magnetic Bias Current Calculation Method Based on Field Circuit Iterative Algorithm

    Directory of Open Access Journals (Sweden)

    Ning Yao

    2014-08-01

    Full Text Available In order to analyze the DC magnetic bias effect on neutral-grounded AC transformers around a convertor station grounding electrode, this article proposes a new calculation method, the field circuit iterative algorithm. The method includes a partial iterative algorithm and a concentrated iterative algorithm, and builds on direct injection current calculation methods, the field-circuit coupling method and the resistor network method. The field circuit iterative algorithm considers not only the effect of the direct convertor station grounding electrode current on the substation grounding grid potential, but also the effect of the current of each substation grounding grid on the grounding grid potentials of the other substations. Analysis and comparison of calculation models show that the field circuit iterative algorithm is more accurate and adaptable than the field-circuit coupling method and the resistor network method in AC power systems modelled by an equivalent resistance circuit DC path when calculating the DC current component of the transformer.

  12. Parallelization of the model-based iterative reconstruction algorithm DIRA

    International Nuclear Information System (INIS)

    Oertenberg, A.; Sandborg, M.; Alm Carlsson, G.; Malusek, A.; Magnusson, M.

    2016-01-01

    New paradigms for parallel programming have been devised to simplify software development on multi-core processors and many-core graphical processing units (GPU). Despite their obvious benefits, the parallelization of existing computer programs is not an easy task. In this work, the use of the Open Multiprocessing (OpenMP) and Open Computing Language (OpenCL) frameworks is considered for the parallelization of the model-based iterative reconstruction algorithm DIRA with the aim to significantly shorten the code's execution time. Selected routines were parallelized using OpenMP and OpenCL libraries; some routines were converted from MATLAB to C and optimised. Parallelization of the code with the OpenMP was easy and resulted in an overall speedup of 15 on a 16-core computer. Parallelization with OpenCL was more difficult owing to differences between the central processing unit and GPU architectures. The resulting speedup was substantially lower than the theoretical peak performance of the GPU; the cause was explained. (authors)

  13. Iterative algorithms to approximate canonical Gabor windows: Computational aspects

    DEFF Research Database (Denmark)

    Janssen, A.J.E.M; Søndergaard, Peter Lempel

    In this paper we investigate the computational aspects of some recently proposed iterative methods for approximating the canonical tight and canonical dual window of a Gabor frame (g,a,b). The iterations start with the window g while the iteration steps comprise the window g, the k^th iterand...... convergence constants. The iterations, initially formulated for time-continuous Gabor systems, are considered and tested in a discrete setting in which one passes to the appropriately sampled-and-periodized windows and frame operators. Furthermore, they are compared with respect to accuracy and efficiency...

  14. Iterative algorithms to approximate canonical Gabor windows: Computational aspects

    DEFF Research Database (Denmark)

    Janssen, A. J. E. M.; Søndergaard, Peter Lempel

    2007-01-01

    In this article we investigate the computational aspects of some recently proposed iterative methods for approximating the canonical tight and canonical dual window of a Gabor frame (g, a, b). The iterations start with the window g while the iteration steps comprise the window g, the k^th iterand...... convergence constants. The iterations, initially formulated for time-continuous Gabor systems, are considered and tested in a discrete setting in which one passes to the appropriately sampled-and-periodized windows and frame operators. Furthermore, they are compared with respect to accuracy and efficiency...

  15. Multi-objective mixture-based iterated density estimation evolutionary algorithms

    NARCIS (Netherlands)

    Thierens, D.; Bosman, P.A.N.

    2001-01-01

    We propose an algorithm for multi-objective optimization using a mixture-based iterated density estimation evolutionary algorithm (MIDEA). The MIDEA algorithm is a probabilistic model building evolutionary algorithm that constructs at each generation a mixture of factorized probability

  16. Simulating prescribed particle densities in the grand canonical ensemble using iterative algorithms.

    Science.gov (United States)

    Malasics, Attila; Gillespie, Dirk; Boda, Dezso

    2008-03-28

    We present two efficient iterative Monte Carlo algorithms in the grand canonical ensemble with which the chemical potentials corresponding to prescribed (targeted) partial densities can be determined. The first algorithm works by always using the targeted densities in the kT log(ρ_i) (ideal gas) terms and updating the excess chemical potentials from the previous iteration. The second algorithm extrapolates the chemical potentials in the next iteration from the results of the previous iteration using a first order series expansion of the densities. The coefficients of the series, the derivatives of the densities with respect to the chemical potentials, are obtained from the simulations by fluctuation formulas. The convergence of this procedure is shown for the examples of a homogeneous Lennard-Jones mixture and a NaCl-CaCl2 electrolyte mixture in the primitive model. The methods are quite robust under the conditions investigated. The first algorithm is less sensitive to initial conditions.
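
    The first update rule described above has a very compact form. The sketch below is schematic: measure_densities is a hypothetical placeholder standing in for a full GCMC run at the current chemical potentials, and all numerical values are left to the caller.

        import numpy as np

        def iterate_chemical_potentials(mu_init, rho_target, measure_densities,
                                        kT=1.0, n_iter=20):
            """Iterate mu_i = kT*log(rho_target_i) + mu_excess_i until the simulated
            densities match the targets. measure_densities(mu) must run a GCMC
            simulation at chemical potentials mu and return the partial densities."""
            mu = np.array(mu_init, dtype=float)
            rho_target = np.asarray(rho_target, dtype=float)
            for _ in range(n_iter):
                rho = measure_densities(mu)                # densities from the simulation
                mu_excess = mu - kT * np.log(rho)          # excess part implied by this run
                mu = kT * np.log(rho_target) + mu_excess   # keep the targeted ideal-gas term
            return mu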

  17. A Superlinearly Convergent O(√nL)-Iteration Algorithm for Linear Programming

    National Research Council Canada - National Science Library

    Ye, Y; Tapia, Richard A; Zhang, Y

    1991-01-01

    ... We demonstrate that the modified algorithm maintains its O(√nL)-iteration complexity, while exhibiting superlinear convergence for general problems and quadratic convergence for nondegenerate problems...

  18. Data loss for PLC of nonlinear systems Iterative Learning Control Algorithm

    Directory of Open Access Journals (Sweden)

    Zhang Yinjun

    2016-01-01

    Full Text Available When a power line is used as the data carrier, data packets are lost frequently due to the complexity of the PLC network environment. This paper therefore deals with iterative learning control for a class of nonlinear systems with measurement dropouts in PLC, and studies the convergence of the P-type iterative learning control algorithm. The data packet loss is described as a stochastic Bernoulli process, and on this basis convergence conditions for the P-type iterative learning control algorithm are given. The theoretical analysis is supported by the simulation of a numerical example; the convergence of ILC can be guaranteed when some output measurements are missing.
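
    A minimal sketch of a P-type update with Bernoulli packet loss is given below; the first-order toy plant, the learning gain and the loss probability are assumptions chosen for illustration, not the class of systems studied in the paper.

        import numpy as np

        def p_type_ilc_with_dropouts(y_ref, gamma=0.5, p_loss=0.3, n_trials=60, seed=0):
            """P-type ILC u_{k+1}(t) = u_k(t) + gamma * e_k(t+1) on a toy plant
            y(t+1) = 0.8*y(t) + u(t); a dropped measurement simply produces no update."""
            rng = np.random.default_rng(seed)
            T = len(y_ref)
            u = np.zeros(T)
            y = np.zeros(T)
            for _ in range(n_trials):
                y = np.zeros(T)
                for t in range(T - 1):
                    y[t + 1] = 0.8 * y[t] + u[t]
                received = rng.random(T) > p_loss          # Bernoulli measurement arrivals
                e = np.where(received, y_ref - y, 0.0)     # lost samples give zero error
                u[:-1] = u[:-1] + gamma * e[1:]            # P-type learning update
            return u, y

        # Example: learn to track a sine reference over 50 samples.
        u, y = p_type_ilc_with_dropouts(np.sin(np.linspace(0.0, np.pi, 50)))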

  19. An iterative two-step algorithm for American option pricing

    Czech Academy of Sciences Publication Activity Database

    Siddiqi, A. H.; Manchanda, P.; Kočvara, Michal

    2000-01-01

    Vol. 11, No. 2 (2000), pp. 71-84. ISSN 0953-0061. R&D Projects: GA AV ČR IAA1075707. Institutional research plan: AV0Z1075907. Keywords: American option pricing * linear complementarity * iterative methods. Subject RIV: AH - Economics

  20. A Fast Iterative Bayesian Inference Algorithm for Sparse Channel Estimation

    DEFF Research Database (Denmark)

    Pedersen, Niels Lovmand; Manchón, Carles Navarro; Fleury, Bernard Henri

    2013-01-01

    representation of the Bessel K probability density function; a highly efficient, fast iterative Bayesian inference method is then applied to the proposed model. The resulting estimator outperforms other state-of-the-art Bayesian and non-Bayesian estimators, either by yielding lower mean squared estimation error...

  1. [A fast iterative algorithm for adaptive histogram equalization].

    Science.gov (United States)

    Cao, X; Liu, X; Deng, Z; Jiang, D; Zheng, C

    1997-01-01

    In this paper, we propose an iterative algorithm called FAHE, which is based on the relation between the current local histogram and the one before the sliding window moves. Compared with basic AHE, the computing time of FAHE is decreased from 5 hours to 4 minutes on a 486DX/33-compatible computer when using a 65 x 65 sliding window on a 512 x 512 image with an 8-bit gray-level range.
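
    The speedup comes from updating the local histogram incrementally as the window slides, instead of rebuilding it at every pixel. A hedged sketch of that bookkeeping for a 2-D integer image is shown below; the window size and gray-level count echo the abstract, but the code itself is an illustrative assumption, not the published FAHE implementation.

        import numpy as np

        def sliding_window_histograms(image, w=65, levels=256):
            """Yield (row, col, hist) for every pixel of an integer-valued image,
            updating the local histogram incrementally: when the window moves one
            column right, subtract the leaving column and add the entering one."""
            img = np.pad(image, w // 2, mode='reflect')
            H, W = image.shape
            for r in range(H):
                hist = np.bincount(img[r:r + w, 0:w].ravel(), minlength=levels)
                yield r, 0, hist
                for c in range(1, W):
                    leaving = img[r:r + w, c - 1]          # column that just left the window
                    entering = img[r:r + w, c + w - 1]     # column that just entered
                    np.subtract.at(hist, leaving, 1)
                    np.add.at(hist, entering, 1)
                    yield r, c, hist

    The equalized value of each centre pixel is then obtained from the cumulative sum of its local histogram, exactly as in ordinary adaptive histogram equalization.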

  2. An optimal iterative algorithm to solve Cauchy problem for Laplace equation

    KAUST Repository

    Majeed, Muhammad Usman

    2015-05-25

    An optimal mean square error minimizer algorithm is developed to solve the severely ill-posed Cauchy problem for the Laplace equation on an annulus domain. The mathematical problem is presented as a first order state space-like system and an optimal iterative algorithm is developed that minimizes the mean square error in the states. Finite difference discretization schemes are used to discretize the first order system. After numerical discretization, the algorithm equations are derived taking inspiration from the Kalman filter, but using one of the space variables as a time-like variable. The given Dirichlet and Neumann boundary conditions are used on the Cauchy data boundary and fictitious points are introduced on the unknown solution boundary. The algorithm is run for a number of iterations using the solution of the previous iteration as a guess for the next one. The method developed is highly robust to noise in the Cauchy data, and numerically efficient results are illustrated.

  3. Performance of direct and iterative algorithms on an optical systolic processor

    Science.gov (United States)

    Ghosh, A. K.; Casasent, D.; Neuman, C. P.

    1985-11-01

    The frequency-multiplexed optical linear algebra processor (OLAP) is treated in detail with attention to its performance in the solution of systems of linear algebraic equations (LAEs). General guidelines suitable for most OLAPs, including digital-optical processors, are advanced concerning system and component error source models, guidelines for appropriate use of direct and iterative algorithms, the dominant error sources, and the effect of multiple simultaneous error sources. Specific results are advanced on the quantitative performance of both direct and iterative algorithms in the solution of systems of LAEs and in the solution of nonlinear matrix equations. Acoustic attenuation is found to dominate iterative algorithms and detector noise to dominate direct algorithms. The effect of multiple spatial errors is found to be additive. A theoretical expression for the amount of acoustic attenuation allowed is advanced and verified. Simulations and experimental data are included.

  4. Implementation of the Iterative Proportion Fitting Algorithm for Geostatistical Facies Modeling

    International Nuclear Information System (INIS)

    Li Yupeng; Deutsch, Clayton V.

    2012-01-01

    In geostatistics, most stochastic algorithms for the simulation of categorical variables such as facies or rock types require a conditional probability distribution. The multivariate probability distribution of all the grouped locations, including the unsampled location, permits calculation of the conditional probability directly based on its definition. In this article, the iterative proportion fitting (IPF) algorithm is implemented to infer this multivariate probability. Using the IPF algorithm, the multivariate probability is obtained by iterative modification to an initial estimated multivariate probability using lower order bivariate probabilities as constraints. The imposed bivariate marginal probabilities are inferred from profiles along drill holes or wells. In the IPF process, a sparse matrix is used to calculate the marginal probabilities from the multivariate probability, which makes the iterative fitting more tractable and practical. This algorithm can be extended to higher order marginal probability constraints as used in multiple point statistics. The theoretical framework is developed and illustrated with an estimation and simulation example.
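
    Classical iterative proportional fitting on a two-way table already shows the mechanics that the article extends to multivariate facies probabilities; the starting table and target marginals in the commented example are arbitrary assumptions.

        import numpy as np

        def ipf(joint, row_marginal, col_marginal, n_iter=100, tol=1e-10):
            """Fit a 2-D probability table to prescribed row/column marginals by
            alternately rescaling rows and columns (iterative proportional fitting)."""
            p = np.array(joint, dtype=float)
            for _ in range(n_iter):
                p *= (row_marginal / p.sum(axis=1))[:, None]   # match the row marginals
                p *= (col_marginal / p.sum(axis=0))[None, :]   # match the column marginals
                if (np.abs(p.sum(axis=1) - row_marginal).max() < tol and
                        np.abs(p.sum(axis=0) - col_marginal).max() < tol):
                    break
            return p

        # Example: start from a uniform 3x3 table and impose new marginals.
        # ipf(np.full((3, 3), 1/9), np.array([0.5, 0.3, 0.2]), np.array([0.4, 0.4, 0.2]))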

  5. A study of reconstruction artifacts in cone beam tomography using filtered backprojection and iterative EM algorithms

    International Nuclear Information System (INIS)

    Zeng, G.L.; Gullberg, G.T.

    1990-01-01

    Reconstruction artifacts in cone beam tomography are studied for filtered backprojection (Feldkamp) and iterative EM algorithms. The filtered backprojection algorithm uses a voxel-driven, interpolated backprojection to reconstruct the cone beam data; whereas, the iterative EM algorithm performs ray-driven projection and backprojection operations for each iteration. Two weighting schemes for the projection and backprojection operations in the EM algorithm are studied. One weights each voxel by the length of the ray through the voxel and the other equates the value of a voxel to the functional value of the midpoint of the line intersecting the voxel, which is obtained by interpolating between eight neighboring voxels. Cone beam reconstruction artifacts such as rings, bright vertical extremities, and slice-to-slice cross talk are not found with parallel beam and fan beam geometries.

  6. Quantitative analysis of emphysema and airway measurements according to iterative reconstruction algorithms: comparison of filtered back projection, adaptive statistical iterative reconstruction and model-based iterative reconstruction

    Energy Technology Data Exchange (ETDEWEB)

    Choo, Ji Yung [Seoul National University Medical Research Center, Department of Radiology, Seoul National University College of Medicine, and Institute of Radiation Medicine, Seoul (Korea, Republic of); Korea University Ansan Hospital, Ansan-si, Department of Radiology, Gyeonggi-do (Korea, Republic of); Goo, Jin Mo; Park, Chang Min; Park, Sang Joon [Seoul National University Medical Research Center, Department of Radiology, Seoul National University College of Medicine, and Institute of Radiation Medicine, Seoul (Korea, Republic of); Seoul National University, Cancer Research Institute, Seoul (Korea, Republic of); Lee, Chang Hyun; Shim, Mi-Suk [Seoul National University Medical Research Center, Department of Radiology, Seoul National University College of Medicine, and Institute of Radiation Medicine, Seoul (Korea, Republic of)

    2014-04-15

    To evaluate filtered back projection (FBP) and two iterative reconstruction (IR) algorithms and their effects on the quantitative analysis of lung parenchyma and airway measurements on computed tomography (CT) images. Low-dose chest CT examinations obtained in 281 adult patients were reconstructed using three algorithms: FBP, adaptive statistical IR (ASIR) and model-based IR (MBIR). Measurements of each dataset were compared: total lung volume, emphysema index (EI), airway measurements of the lumen and wall area, as well as average wall thickness. The accuracy of the airway measurements of each algorithm was also evaluated using an airway phantom. EI using a threshold of -950 HU was significantly different among the three algorithms, in decreasing order of FBP (2.30 %), ASIR (1.49 %) and MBIR (1.20 %) (P < 0.01). Wall thickness was also significantly different among the three algorithms, with FBP (2.09 mm) demonstrating thicker walls than ASIR (2.00 mm) and MBIR (1.88 mm) (P < 0.01). Airway phantom analysis revealed that MBIR showed the most accurate values for airway measurements. The three algorithms presented different EIs and wall thicknesses, decreasing in the order of FBP, ASIR and MBIR. Thus, care should be taken in selecting the appropriate IR algorithm for quantitative analysis of the lung. (orig.)

  7. Introduction: a brief overview of iterative algorithms in X-ray computed tomography.

    Science.gov (United States)

    Soleimani, M; Pengpen, T

    2015-06-13

    This paper presents a brief overview of some basic iterative algorithms, and more sophisticated methods are presented in the research papers in this issue. A range of algebraic iterative algorithms are covered here including ART, SART and OS-SART. A major limitation of the traditional iterative methods is their computational time. The Krylov subspace based methods such as the conjugate gradients (CG) algorithm and its variants can be used to solve linear systems of equations arising from large-scale CT with possible implementation using modern high-performance computing tools. The overall aim of this theme issue is to stimulate international efforts to develop the next generation of X-ray computed tomography (CT) image reconstruction software. © 2015 The Author(s) Published by the Royal Society. All rights reserved.

  8. Mapping Iterative Medical Imaging Algorithm on Cell Accelerator

    Directory of Open Access Journals (Sweden)

    Meilian Xu

    2011-01-01

    architectures that exploit data parallel applications, medical imaging algorithms such as OS-SART can be studied to produce increased performance. In this paper, we map OS-SART on the cell broadband engine (Cell BE). We effectively use the architectural features of the Cell BE to provide an efficient mapping. The Cell BE consists of one PowerPC processor element (PPE) and eight SIMD coprocessors known as synergistic processor elements (SPEs). The limited memory storage on each of the SPEs makes the mapping challenging. Therefore, we present optimization techniques to efficiently map the algorithm on the Cell BE for improved performance over the CPU version. We compare the performance of our proposed algorithm on the Cell BE to that of the Sun Fire X4600, a shared memory machine. The Cell BE is five times faster than the AMD Opteron dual-core processor. The speedup of the algorithm on the Cell BE increases with the increase in the number of SPEs. We also experiment with various parameters, such as the number of subsets, number of processing elements, and number of DMA transfers between main memory and local memory, that impact the performance of the algorithm.

  9. Comparing two iteration algorithms of Broyden electron density mixing through an atomic electronic structure computation

    International Nuclear Information System (INIS)

    Zhang Man-Hong

    2016-01-01

    By performing the electronic structure computation of a Si atom, we compare two iteration algorithms of Broyden electron density mixing in the literature. One was proposed by Johnson and implemented in the well-known VASP code. The other was given by Eyert. We solve the Kohn-Sham equation by using a conventional outward/inward integration of the differential equation and then connect the two parts of the solution at the classical turning points, which is different from the matrix eigenvalue solution method used in the VASP code. Compared to Johnson's algorithm, the one proposed by Eyert requires fewer total iterations. (paper)

  10. Iteration Capping For Discrete Choice Models Using the EM Algorithm

    NARCIS (Netherlands)

    Kabatek, J.

    2013-01-01

    The Expectation-Maximization (EM) algorithm is a well-established estimation procedure which is used in many domains of econometric analysis. Recent application in a discrete choice framework (Train, 2008) facilitated estimation of latent class models allowing for very flexible treatment of unobserved

  11. Iter

    Science.gov (United States)

    Iotti, Robert

    2015-04-01

    ITER is an international experimental facility being built by seven Parties to demonstrate the long term potential of fusion energy. The ITER Joint Implementation Agreement (JIA) defines the structure and governance model of such cooperation. There are a number of necessary conditions for such international projects to be successful: a complete design, strong systems engineering working with an agreed set of requirements, an experienced organization with systems and plans in place to manage the project, a cost estimate backed by industry, and someone in charge. Unfortunately for ITER many of these conditions were not present. The paper discusses the priorities in the JIA which led to setting up the project with a Central Integrating Organization (IO) in Cadarache, France as the ITER HQ, and seven Domestic Agencies (DAs) located in the countries of the Parties, responsible for delivering 90%+ of the project hardware as Contributions-in-Kind and also financial contributions to the IO, as ``Contributions-in-Cash.'' Theoretically the Director General (DG) is responsible for everything. In practice the DG does not have the power to control the work of the DAs, and there is not an effective management structure enabling the IO and the DAs to arbitrate disputes, so the project is not really managed, but is a loose collaboration of competing interests. Any DA can effectively block a decision reached by the DG. Inefficiencies in completing design while setting up a competent organization from scratch contributed to the delays and cost increases during the initial few years. So did the fact that the original estimate was not developed from industry input. Unforeseen inflation and market demand on certain commodities/materials further exacerbated the cost increases. Since then, improvements are debatable. Does this mean that the governance model of ITER is a wrong model for international scientific cooperation? I do not believe so. Had the necessary conditions for success

  12. Improving search for low energy protein structures with an iterative niche genetic algorithm

    DEFF Research Database (Denmark)

    Helles, Glennie

    2010-01-01

    In attempts to predict the tertiary structure of proteins we use almost exclusively metaheuristics. However, despite known differences in performance of metaheuristics for different problems, the effect of the choice of metaheuristic has received precious little attention in this field...... Particularly parallel implementations have been demonstrated to generally outperform their sequential counterparts, but they are nevertheless used to a much lesser extent for protein structure prediction. In this work we focus strictly on parallel algorithms for protein structure prediction and propose...... that the iterative niche algorithm converges much faster at lower energy structures than both the traditional niche genetic algorithm and the parallel tempering algorithm.

  13. An efficient iterative grand canonical Monte Carlo algorithm to determine individual ionic chemical potentials in electrolytes.

    Science.gov (United States)

    Malasics, Attila; Boda, Dezso

    2010-06-28

    Two iterative procedures have been proposed recently to calculate the chemical potentials corresponding to prescribed concentrations from grand canonical Monte Carlo (GCMC) simulations. Both are based on repeated GCMC simulations with updated excess chemical potentials until the desired concentrations are established. In this paper, we propose combining our robust and fast converging iteration algorithm [Malasics, Gillespie, and Boda, J. Chem. Phys. 128, 124102 (2008)] with the suggestion of Lamperski [Mol. Simul. 33, 1193 (2007)] to average the chemical potentials over the iterations (instead of just using the chemical potentials obtained in the last iteration). We apply the unified method to various electrolyte solutions and show that our algorithm is more efficient when the averaging procedure is used. We discuss the convergence problems arising from violation of charge neutrality when inserting/deleting individual ions instead of neutral groups of ions (salts). We suggest a correction term to the iteration procedure that makes the algorithm efficient for determining the chemical potentials of individual ions as well.

  14. SDP Policy Iteration-Based Energy Management Strategy Using Traffic Information for Commuter Hybrid Electric Vehicles

    Directory of Open Access Journals (Sweden)

    Xiaohong Jiao

    2014-07-01

    Full Text Available This paper demonstrates an energy management method using traffic information for commuter hybrid electric vehicles. A control strategy based on stochastic dynamic programming (SDP) is developed, which minimizes on average the equivalent fuel consumption while satisfying the battery charge-sustaining constraints and the overall vehicle power demand for drivability. First, according to the sample information of the traffic speed profiles, the regular route is divided into several segments and the statistical characteristics of the different segments are constructed from gathered data on the averaged vehicle speeds. Then, the energy management problem is formulated as a stochastic nonlinear and constrained optimal control problem, and a modified policy iteration algorithm is utilized to generate a time-invariant state-dependent power split strategy. Finally, simulation results over some driving cycles are presented to demonstrate the effectiveness of the proposed energy management strategy.

  15. Hybrid iterative phase retrieval algorithm based on fusion of intensity information in three defocused planes.

    Science.gov (United States)

    Zeng, Fa; Tan, Qiaofeng; Yan, Yingbai; Jin, Guofan

    2007-10-01

    Phase retrieval is of wide interest because of its applications in many domains, such as adaptive optics, assessment of laser quality, and precise measurement of optical surfaces. Here a hybrid iterative phase retrieval algorithm is proposed, based on fusion of the intensity information in three defocused planes. First the conjugate gradient algorithm is adapted to achieve a coarse solution of the phase distribution in the input plane; then the iterative angular spectrum method is applied in succession for a better retrieval result. This algorithm is still applicable even when the exact shape and size of the aperture in the input plane are unknown. Moreover, the algorithm always exhibits good convergence, i.e., the retrieved results are insensitive to the chosen positions of the three defocused planes and to the initial guess of the complex amplitude in the input plane, as proved by both simulations and experiments.

  16. Feasibility study of the iterative x-ray phase retrieval algorithm

    International Nuclear Information System (INIS)

    Meng Fanbo; Liu Hong; Wu Xizeng

    2009-01-01

    An iterative phase retrieval algorithm was previously investigated for in-line x-ray phase imaging. Through detailed theoretical analysis and computer simulations, we now discuss the limitations, robustness, and efficiency of the algorithm. The iterative algorithm was proved robust against imaging noise but sensitive to the variations of several system parameters. It is also efficient in terms of calculation time. It was shown that the algorithm can be applied to phase retrieval based on one phase-contrast image and one attenuation image, or two phase-contrast images; in both cases, the two images can be obtained either by one detector in two exposures, or by two detectors in only one exposure as in the dual-detector scheme

  17. Registration of range data using a hybrid simulated annealing and iterative closest point algorithm

    Energy Technology Data Exchange (ETDEWEB)

    LUCK,JASON; LITTLE,CHARLES Q.; HOFF,WILLIAM

    2000-04-17

    The need to register data is abundant in applications such as: world modeling, part inspection and manufacturing, object recognition, pose estimation, robotic navigation, and reverse engineering. Registration occurs by aligning the regions that are common to multiple images. The largest difficulty in performing this registration is dealing with outliers and local minima while remaining efficient. A commonly used technique, iterative closest point, is efficient but is unable to deal with outliers or avoid local minima. Another commonly used optimization algorithm, simulated annealing, is effective at dealing with local minima but is very slow. Therefore, the algorithm developed in this paper is a hybrid algorithm that combines the speed of iterative closest point with the robustness of simulated annealing. Additionally, a robust error function is incorporated to deal with outliers. This algorithm is incorporated into a complete modeling system that inputs two sets of range data, registers the sets, and outputs a composite model.

  18. Study on the algorithm for Newton-Raphson iteration interpolation of NURBS curve and simulation

    Science.gov (United States)

    Zhang, Wanjun; Gao, Shanping; Cheng, Xiyan; Zhang, Feng

    2017-04-01

    The Newton-Raphson iteration interpolation method for NURBS curves suffers from long interpolation times, complicated calculations, and step errors that are not easy to control. This paper studies an algorithm for Newton-Raphson iterative interpolation of NURBS curves and verifies it by simulation. Newton-Raphson iteration is used to calculate the interpolation points (xi, yi, zi). Simulation results show that the proposed NURBS curve interpolator meets the high-speed and high-accuracy interpolation requirements of CNC systems and that the algorithm is correct and consistent with NURBS curve interpolation requirements.

  19. Convergence of SART + OS + TV iterative reconstruction algorithm for optical CT imaging of gel dosimeters

    International Nuclear Information System (INIS)

    Du, Yi; Yu, Gongyi; Xiang, Xincheng; Wang, Xiangang; De Deene, Yves

    2017-01-01

    Computational simulations are used to investigate the convergence of a hybrid iterative algorithm for optical CT reconstruction, i.e. the simultaneous algebraic reconstruction technique (SART) integrated with ordered subsets (OS) iteration and total variation (TV) minimization regularization, or SART+OS+TV for short. The influence of parameter selection on convergence, spatial dose gradient integrity, MTF and convergence speed is discussed. It is shown that the results of the SART+OS+TV algorithm converge to the true values without significant bias, and that MTF and convergence speed are affected by the parameter sets used for the iterative calculation. In conclusion, the performance of SART+OS+TV depends on parameter selection, which also implies that careful parameter tuning is required for proper spatial performance and fast convergence.
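
    A heavily simplified sketch of this kind of alternation — an algebraic (SART-type) update over ordered subsets of rays followed by a few total-variation gradient steps — is shown below for a generic dense system matrix. The smoothed TV gradient, the step sizes and the subset handling are illustrative choices and do not reproduce the authors' implementation.

```python
import numpy as np

def tv_gradient(img, eps=1e-8):
    """Gradient of a smoothed (isotropic) total-variation term for a 2D image."""
    dx = np.diff(img, axis=1, append=img[:, -1:])
    dy = np.diff(img, axis=0, append=img[-1:, :])
    mag = np.sqrt(dx**2 + dy**2 + eps)
    px, py = dx / mag, dy / mag
    # Negative divergence of the normalized gradient field.
    div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
    return -div

def sart_os_tv(A, b, shape, n_subsets=10, n_iter=20, lam=1.0, tv_steps=5, tv_strength=0.01):
    """Simplified SART + ordered-subsets + TV-minimization loop (dense matrix A)."""
    m, n = A.shape
    x = np.zeros(n)
    col_sum = A.sum(axis=0) + 1e-12
    subsets = np.array_split(np.random.permutation(m), n_subsets)
    for _ in range(n_iter):
        for rows in subsets:
            As, bs = A[rows], b[rows]
            row_sum = As.sum(axis=1) + 1e-12
            # SART update restricted to the current subset of rays.
            x += lam * As.T.dot((bs - As.dot(x)) / row_sum) / col_sum
            x = np.clip(x, 0, None)              # enforce non-negativity
        for _ in range(tv_steps):                # a few TV gradient-descent steps
            img = x.reshape(shape)
            x = (img - tv_strength * tv_gradient(img)).ravel()
    return x.reshape(shape)
```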

  20. Homotopy Iteration Algorithm for Crack Parameters Identification with Composite Element Method

    Directory of Open Access Journals (Sweden)

    Ling Huang

    2013-01-01

    An approach based on a homotopy iteration algorithm is proposed to identify crack parameters in beam structures. In the forward problem, a fully open crack model with the composite element method is employed for the vibration analysis. The dynamic responses of the cracked beam in the time domain are obtained from the Newmark direct integration method. In the inverse analysis, an identification approach based on the homotopy iteration algorithm is studied to identify the location and the depth of a crack in a beam. The identification equation is derived by minimizing the error between the calculated acceleration response and the simulated measured one. The Newton iterative method with the homotopy equation is employed to track the correct path and improve the convergence of the crack parameters. Two numerical examples are conducted to illustrate the correctness and efficiency of the proposed method, and the effects of the influencing parameters, such as measurement time duration, measurement points, division of the homotopy parameter and measurement noise, are studied.

  2. Implicit and explicit iterative algorithms for hierarchical variational inequality in uniformly smooth Banach spaces.

    Science.gov (United States)

    Ceng, Lu-Chuan; Lur, Yung-Yih; Wen, Ching-Feng

    2017-01-01

    The purpose of this paper is to solve the hierarchical variational inequality with the constraint of a general system of variational inequalities in a uniformly convex and 2-uniformly smooth Banach space. We introduce implicit and explicit iterative algorithms which converge strongly to a unique solution of the hierarchical variational inequality problem. Our results improve and extend the corresponding results announced by some authors.

  3. A new iteration algorithm for solving the diffusion problem in non-differentiable heat transfer

    Directory of Open Access Journals (Sweden)

    Yang Zhifeng

    2015-01-01

    In this article, the variational iteration algorithm LFVIA-II is implemented to solve the diffusion equation occurring in non-differentiable heat transfer. The operators are taken in the sense of the local fractional operators. The obtained results show the fractal behavior of heat transfer with non-differentiability.

  4. Iterative Observer-based Estimation Algorithms for Steady-State Elliptic Partial Differential Equation Systems

    KAUST Repository

    Majeed, Muhammad Usman

    2017-07-19

    Steady-state elliptic partial differential equations (PDEs) are frequently used to model a diverse range of physical phenomena. The source and boundary data estimation problems for such PDE systems are of prime interest in various engineering disciplines including biomedical engineering, mechanics of materials and earth sciences. Almost all existing solution strategies for such problems can be broadly classified as optimization-based techniques, which are computationally heavy especially when the problems are formulated on higher dimensional space domains. However, in this dissertation, feedback based state estimation algorithms, known as state observers, are developed to solve such steady-state problems using one of the space variables as time-like. In this regard, first, an iterative observer algorithm is developed that sweeps over regular-shaped domains and solves boundary estimation problems for steady-state Laplace equation. It is well-known that source and boundary estimation problems for the elliptic PDEs are highly sensitive to noise in the data. For this, an optimal iterative observer algorithm, which is a robust counterpart of the iterative observer, is presented to tackle the ill-posedness due to noise. The iterative observer algorithm and the optimal iterative algorithm are then used to solve source localization and estimation problems for Poisson equation for noise-free and noisy data cases respectively. Next, a divide and conquer approach is developed for three-dimensional domains with two congruent parallel surfaces to solve the boundary and the source data estimation problems for the steady-state Laplace and Poisson kind of systems respectively. Theoretical results are shown using a functional analysis framework, and consistent numerical simulation results are presented for several test cases using finite difference discretization schemes.

  5. Accelerated fast iterative shrinkage thresholding algorithms for sparsity-regularized cone-beam CT image reconstruction

    Science.gov (United States)

    Xu, Qiaofeng; Yang, Deshan; Tan, Jun; Sawatzky, Alex; Anastasio, Mark A.

    2016-01-01

    Purpose: The development of iterative image reconstruction algorithms for cone-beam computed tomography (CBCT) remains an active and important research area. Even with hardware acceleration, the overwhelming majority of the available 3D iterative algorithms that implement nonsmooth regularizers remain computationally burdensome and have not been translated for routine use in time-sensitive applications such as image-guided radiation therapy (IGRT). In this work, two variants of the fast iterative shrinkage thresholding algorithm (FISTA) are proposed and investigated for accelerated iterative image reconstruction in CBCT. Methods: Algorithm acceleration was achieved by replacing the original gradient-descent step in the FISTAs by a subproblem that is solved by use of the ordered subset simultaneous algebraic reconstruction technique (OS-SART). Due to the preconditioning matrix adopted in the OS-SART method, two new weighted proximal problems were introduced and corresponding fast gradient projection-type algorithms were developed for solving them. We also provided efficient numerical implementations of the proposed algorithms that exploit the massive data parallelism of multiple graphics processing units. Results: The improved rates of convergence of the proposed algorithms were quantified in computer-simulation studies and by use of clinical projection data corresponding to an IGRT study. The accelerated FISTAs were shown to possess dramatically improved convergence properties as compared to the standard FISTAs. For example, the number of iterations to achieve a specified reconstruction error could be reduced by an order of magnitude. Volumetric images reconstructed from clinical data were produced in under 4 min. Conclusions: The FISTA achieves a quadratic convergence rate and can therefore potentially reduce the number of iterations required to produce an image of a specified image quality as compared to first-order methods. We have proposed and investigated
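
    The acceleration discussed above builds on the generic FISTA scheme. For orientation, a minimal FISTA for the l1-regularized least-squares problem min_x 0.5||Ax − b||^2 + mu||x||_1 is sketched below; the CBCT-specific OS-SART subproblem and the GPU implementation of the paper are outside the scope of this sketch.

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of t*||x||_1 (elementwise soft-thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def fista(A, b, mu, n_iter=200):
    """FISTA for min_x 0.5*||A x - b||^2 + mu*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the data-fit gradient
    x = np.zeros(A.shape[1])
    y, t = x.copy(), 1.0
    for _ in range(n_iter):
        grad = A.T.dot(A.dot(y) - b)
        x_new = soft_threshold(y - grad / L, mu / L)
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)   # momentum / extrapolation step
        x, t = x_new, t_new
    return x
```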

  7. Iterative Object Localization Algorithm Using Visual Images with a Reference Coordinate

    Directory of Open Access Journals (Sweden)

    We-Duke Cho

    2008-09-01

    We present a simplified algorithm for localizing an object using multiple visual images that are obtained from widely used digital imaging devices. We use a parallel projection model which supports both zooming and panning of the imaging devices. Our proposed algorithm is based on a virtual viewable plane for creating a relationship between an object position and a reference coordinate. The reference point is a rough estimate, which may be obtained from a pre-estimation process. The algorithm minimizes the localization error through an iterative process with relatively low computational complexity. In addition, nonlinear distortion of the digital imaging devices is compensated during the iterative process. Finally, the performance is evaluated and analyzed for several scenarios in both indoor and outdoor environments.

  8. An Iterative Algorithm to Determine the Dynamic User Equilibrium in a Traffic Simulation Model

    Science.gov (United States)

    Gawron, C.

    An iterative algorithm to determine the dynamic user equilibrium with respect to link costs defined by a traffic simulation model is presented. Each driver's route choice is modeled by a discrete probability distribution which is used to select a route in the simulation. After each simulation run, the probability distribution is adapted to minimize the travel costs. Although the algorithm does not depend on the simulation model, a queuing model is used for performance reasons. The stability of the algorithm is analyzed for a simple example network. As an application example, a dynamic version of Braess's paradox is studied.
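
    The overall loop can be pictured as follows: simulate the network under the current route-choice distribution, observe the resulting route costs, and shift probability mass toward the cheaper routes before the next run. The toy congestion model and the damped logit-style update in the sketch below are illustrative stand-ins rather than Gawron's actual update rule.

```python
import numpy as np

def simulate_costs(probs, free_flow, capacity, demand=1000.0):
    """Toy link-cost model: travel time grows with the flow assigned to each route."""
    flow = demand * probs
    return free_flow * (1.0 + (flow / capacity) ** 2)

def dynamic_equilibrium(free_flow, capacity, n_iter=50, beta=0.5):
    """Iteratively adapt a driver's route-choice distribution toward equilibrium."""
    n_routes = len(free_flow)
    probs = np.full(n_routes, 1.0 / n_routes)
    for _ in range(n_iter):
        costs = simulate_costs(probs, free_flow, capacity)
        target = np.exp(-beta * costs)
        target /= target.sum()
        probs = 0.9 * probs + 0.1 * target    # damped move toward the cheaper routes
    return probs

print(dynamic_equilibrium(np.array([10.0, 12.0]), np.array([600.0, 800.0])))
```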

  9. New robust iterative minimum mean squared error-based interference alignment algorithm

    Directory of Open Access Journals (Sweden)

    Sara Teodoro

    2014-02-01

    Interference alignment (IA) is a promising technique for multiple-input multiple-output (MIMO) interference channel based systems, achieving the theoretical bound on degrees of freedom. However, these gains are reduced in the presence of imperfect channel state information (CSI), because of quantisation or channel estimation errors. In this Letter, the authors propose a new robust iterative IA minimum mean squared error-based algorithm, which includes these channel errors in the IA design. The results show that the proposed robust IA algorithm outperforms the known IA-MMSE algorithms for low-to-moderate variance of CSI errors.

  10. Iterative schemes for parallel Sn algorithms in a shared-memory computing environment

    International Nuclear Information System (INIS)

    Haghighat, A.; Hunter, M.A.; Mattis, R.E.

    1995-01-01

    Several two-dimensional spatial domain partitioning Sn transport theory algorithms are developed on the basis of different iterative schemes. These algorithms are incorporated into TWOTRAN-II and tested on the shared-memory CRAY Y-MP C90 computer. For a series of fixed-source r-z geometry homogeneous problems, it is demonstrated that the concurrent red-black algorithms may result in large parallel efficiencies (>60%) on C90. It is also demonstrated that for a realistic shielding problem, the use of the negative flux fixup causes high load imbalance, which results in a significant loss of parallel efficiency.

  11. Performance evaluation of simple linear iterative clustering algorithm on medical image processing.

    Science.gov (United States)

    Cong, Jinyu; Wei, Benzheng; Yin, Yilong; Xi, Xiaoming; Zheng, Yuanjie

    2014-01-01

    The Simple Linear Iterative Clustering (SLIC) algorithm is increasingly applied to many kinds of image processing because of its excellent perceptually meaningful characteristics. To better meet the needs of medical image processing and to provide a technical reference for applying SLIC to medical image segmentation, two indicators, boundary accuracy and superpixel uniformity, are introduced alongside other indicators to systematically analyze the performance of the SLIC algorithm in comparison with the Normalized cuts and Turbopixels algorithms. The extensive experimental results show that SLIC is faster and less sensitive to the image type and to the chosen number of superpixels than similar algorithms such as Turbopixels and Normalized cuts. It also performs well in terms of boundary recall, robustness to fuzzy boundaries, superpixel size settings and overall segmentation performance on medical images.
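
    For readers who want to reproduce this kind of comparison, a typical SLIC call via scikit-image (assuming the package is installed) is shown below; the sample image and parameter values are arbitrary examples, not the settings used in the study.

```python
from skimage import data, segmentation, color
import numpy as np

img = data.astronaut()                      # sample RGB image shipped with scikit-image
labels = segmentation.slic(img, n_segments=300, compactness=10.0)

print("number of superpixels:", len(np.unique(labels)))
# Average the colour inside each superpixel to visualize the segmentation.
mean_img = color.label2rgb(labels, img, kind='avg')
```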

  12. A methodology for finding the optimal iteration number of the SIRT algorithm for quantitative Electron Tomography

    Energy Technology Data Exchange (ETDEWEB)

    Okariz, Ana, E-mail: ana.okariz@ehu.es [eMERG, Fisika Aplikatua I Saila, Faculty of Engineering, University of the Basque Country, UPV/EHU, Rafael Moreno “Pitxitxi” Pasealekua 3, 48013 Bilbao (Spain); Guraya, Teresa [eMERG, Departamento de Ingeniería Minera y Metalúrgica y Ciencia de los Materiales, Faculty of Engineering, University of the Basque Country, UPV/EHU, Rafael Moreno “Pitxitxi” Pasealekua 3, 48013 Bilbao (Spain); Iturrondobeitia, Maider [eMERG, Departamento de Expresión Gráfica y Proyectos de Ingeniería, Faculty of Engineering, University of the Basque Country, UPV/EHU, Rafael Moreno “Pitxitxi” Pasealekua 3, 48013 Bilbao (Spain); Ibarretxe, Julen [eMERG, Fisika Aplikatua I Saila, Faculty of Engineering,University of the Basque Country, UPV/EHU, Rafael Moreno “Pitxitxi” Pasealekua 2, 48013 Bilbao (Spain)

    2017-02-15

    The SIRT (Simultaneous Iterative Reconstruction Technique) algorithm is commonly used in Electron Tomography to calculate the original volume of the sample from noisy images, but the results provided by this iterative procedure are strongly dependent on the specific implementation of the algorithm, as well as on the number of iterations employed for the reconstruction. In this work, a methodology for selecting the iteration number of the SIRT reconstruction that provides the most accurate segmentation is proposed. The methodology is based on the statistical analysis of the intensity profiles at the edge of the objects in the reconstructed volume. A phantom which resembles a carbon black aggregate has been created to validate the methodology and the SIRT implementations of two free software packages (TOMOJ and TOMO3D) have been used. - Highlights: • The non-uniformity of the resolution in electron tomography reconstructions has been demonstrated. • An overall resolution for the evaluation of the quality of electron tomography reconstructions has been defined. • Parameters for estimating an overall resolution across the reconstructed volume have been proposed. • The overall resolution of the reconstructions of a phantom has been estimated from the probability density functions. • It has been proven that reconstructions with the best overall resolutions have provided the most accurate segmentations.

  13. New perspectives in face correlation: discrimination enhancement in face recognition based on iterative algorithm

    Science.gov (United States)

    Wang, Q.; Alfalou, A.; Brosseau, C.

    2016-04-01

    Here, we report a brief review on the recent developments of correlation algorithms. Several implementation schemes and specific applications proposed in recent years are also given to illustrate powerful applications of these methods. Following a discussion and comparison of the implementation of these schemes, we believe that all-numerical implementation is the most practical choice for application of the correlation method because the advantages of optical processing cannot compensate for the technical and/or financial cost needed for an optical implementation platform. We also present a simple iterative algorithm to optimize the training images of composite correlation filters. By making use of three or four iterations, the peak-to-correlation energy (PCE) value of the correlation plane can be significantly enhanced. A simulation test using the Pointing Head Pose Image Database (PHPID) illustrates the effectiveness of this statement. Our method can be applied as an optimization means in many composite filters based on linear composition of training images.

  14. On fast iterative mapping algorithms for stripe based coarse-grained reconfigurable architectures

    Science.gov (United States)

    Mehta, Gayatri; Patel, Krunalkumar; Pollard, Nancy S.

    2015-01-01

    Reconfigurable devices have potential for great flexibility/efficiency, but mapping algorithms onto these architectures is a long-standing challenge. This paper addresses this challenge for stripe based coarse-grained reconfigurable architectures (CGRAs) by drawing on insights from graph drawing. We adapt fast, iterative algorithms from hierarchical graph drawing to the problem of mapping to stripe based architectures. We find that global sifting is 98 times as fast as simulated annealing and produces very compact designs with 17% less area on average, at a cost of 5% greater wire length. Interleaving iterations of Sugiyama and global sifting is 40 times as fast as simulated annealing and achieves somewhat more compact designs with 1.8% less area on average, at a cost of only 1% greater wire length. These solutions can enable fast design space exploration, rapid performance testing, and flexible programming of CGRAs "in the field."

  15. Phase-only stereoscopic hologram calculation based on Gerchberg–Saxton iterative algorithm

    International Nuclear Information System (INIS)

    Xia Xinyi; Xia Jun

    2016-01-01

    A phase-only computer-generated holography (CGH) calculation method for stereoscopic holography is proposed in this paper. The two-dimensional (2D) perspective projection views of the three-dimensional (3D) object are generated by computer graphics rendering techniques. Based on these views, a phase-only hologram is calculated by using the Gerchberg–Saxton (GS) iterative algorithm. Compared with the non-iterative algorithm used in conventional stereoscopic holography, the proposed method improves the holographic image quality, especially for a phase-only hologram encoded from a complex distribution. Both simulation and optical experiment results demonstrate that the proposed method gives higher quality reconstructions than the traditional method.
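
    The core GS loop alternates between the hologram plane, where a phase-only (unit-amplitude) constraint is imposed, and the image plane, where the target amplitude replaces the computed one. The sketch below models the propagation with a single FFT, a common simplification that ignores the stereoscopic rendering step described in the paper.

```python
import numpy as np

def gerchberg_saxton(target_amplitude, n_iter=50):
    """Compute a phase-only hologram whose far field approximates the target amplitude.

    Propagation between the hologram plane and the image plane is modelled by a
    single FFT, which is a common simplification."""
    phase = 2 * np.pi * np.random.rand(*target_amplitude.shape)
    for _ in range(n_iter):
        # Hologram plane: keep unit amplitude (phase-only constraint).
        hologram = np.exp(1j * phase)
        # Propagate to the image plane.
        image_field = np.fft.fft2(hologram)
        # Image plane: impose the target amplitude, keep the obtained phase.
        image_field = target_amplitude * np.exp(1j * np.angle(image_field))
        # Propagate back and read off the new hologram phase.
        phase = np.angle(np.fft.ifft2(image_field))
    return phase
```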

  16. High resolution reconstruction of PET images using the iterative OSEM algorithm

    International Nuclear Information System (INIS)

    Doll, J.; Bublitz, O.; Werling, A.; Haberkorn, U.; Semmler, W.; Adam, L.E.; Pennsylvania Univ., Philadelphia, PA; Brix, G.

    2004-01-01

    Aim: Improvement of the spatial resolution in positron emission tomography (PET) by incorporation of the image-forming characteristics of the scanner into the process of iterative image reconstruction. Methods: All measurements were performed at the whole-body PET system ECAT EXACT HR + in 3D mode. The acquired 3D sinograms were sorted into 2D sinograms by means of the Fourier rebinning (FORE) algorithm, which allows the usage of 2D algorithms for image reconstruction. The scanner characteristics were described by a spatially variant line-spread function (LSF), which was determined from activated copper-64 line sources. This information was used to model the physical degradation processes in PET measurements during the course of 2D image reconstruction with the iterative OSEM algorithm. To assess the performance of the high-resolution OSEM algorithm, phantom measurements performed at a cylinder phantom, the hotspot Jaszczak phantom, and the 3D Hoffmann brain phantom as well as different patient examinations were analyzed. Results: Scanner characteristics could be described by a Gaussian-shaped LSF with a full-width at half-maximum increasing from 4.8 mm at the center to 5.5 mm at a radial distance of 10.5 cm. Incorporation of the LSF into the iteration formula resulted in a markedly improved resolution of 3.0 and 3.5 mm, respectively. The evaluation of phantom and patient studies showed that the high-resolution OSEM algorithm not only led to a better contrast resolution in the reconstructed activity distributions but also to an improved accuracy in the quantification of activity concentrations in small structures, without leading to an amplification of image noise or the occurrence of image artifacts. Conclusion: The spatial and contrast resolution of PET scans can be markedly improved by the presented image restoration algorithm, which is of special interest for the examination of both patients with brain disorders and small animals.
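
    For orientation, the unmodified ML-EM update that OSEM accelerates is sketched below for a generic system matrix; in a resolution-modelling variant such as the one described above, the measured line-spread function would be folded into the system matrix A, leaving the update formula itself unchanged.

```python
import numpy as np

def mlem(A, y, n_iter=50):
    """Basic ML-EM iteration for emission tomography with system matrix A.

    Implements x_{k+1} = x_k / (A^T 1) * A^T ( y / (A x_k) ).
    """
    m, n = A.shape
    x = np.ones(n)
    sensitivity = A.T.dot(np.ones(m)) + 1e-12   # A^T 1, the sensitivity image
    for _ in range(n_iter):
        expected = A.dot(x) + 1e-12              # forward projection of the current estimate
        x *= A.T.dot(y / expected) / sensitivity # multiplicative EM update
    return x
```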

  17. Iterative Most-Likely Point Registration (IMLP): A Robust Algorithm for Computing Optimal Shape Alignment

    Science.gov (United States)

    Billings, Seth D.; Boctor, Emad M.; Taylor, Russell H.

    2015-01-01

    We present a probabilistic registration algorithm that robustly solves the problem of rigid-body alignment between two shapes with high accuracy, by aptly modeling measurement noise in each shape, whether isotropic or anisotropic. For point-cloud shapes, the probabilistic framework additionally enables modeling locally-linear surface regions in the vicinity of each point to further improve registration accuracy. The proposed Iterative Most-Likely Point (IMLP) algorithm is formed as a variant of the popular Iterative Closest Point (ICP) algorithm, which iterates between point-correspondence and point-registration steps. IMLP’s probabilistic framework is used to incorporate a generalized noise model into both the correspondence and the registration phases of the algorithm, hence its name as a most-likely point method rather than a closest-point method. To efficiently compute the most-likely correspondences, we devise a novel search strategy based on a principal direction (PD)-tree search. We also propose a new approach to solve the generalized total-least-squares (GTLS) sub-problem of the registration phase, wherein the point correspondences are registered under a generalized noise model. Our GTLS approach has improved accuracy, efficiency, and stability compared to prior methods presented for this problem and offers a straightforward implementation using standard least squares. We evaluate the performance of IMLP relative to a large number of prior algorithms including ICP, a robust variant on ICP, Generalized ICP (GICP), and Coherent Point Drift (CPD), as well as drawing close comparison with the prior anisotropic registration methods of GTLS-ICP and A-ICP. The performance of IMLP is shown to be superior with respect to these algorithms over a wide range of noise conditions, outliers, and misalignments using both mesh and point-cloud representations of various shapes. PMID:25748700

  18. Iterative metal artefact reduction (MAR) in postsurgical chest CT: comparison of three iMAR-algorithms.

    Science.gov (United States)

    Aissa, Joel; Boos, Johannes; Sawicki, Lino Morris; Heinzler, Niklas; Krzymyk, Karl; Sedlmair, Martin; Kröpil, Patric; Antoch, Gerald; Thomas, Christoph

    2017-11-01

    The purpose of this study was to evaluate the impact of three novel iterative metal artefact (iMAR) algorithms on image quality and artefact degree in chest CT of patients with a variety of thoracic metallic implants. 27 postsurgical patients with thoracic implants who underwent clinical chest CT between March and May 2015 in clinical routine were retrospectively included. Images were retrospectively reconstructed with standard weighted filtered back projection (WFBP) and with three iMAR algorithms (iMAR-Algo1 = Cardiac algorithm, iMAR-Algo2 = Pacemaker algorithm and iMAR-Algo3 = ThoracicCoils algorithm). The subjective and objective image quality was assessed. Averaged over all artefacts, artefact degree was significantly lower for iMAR-Algo1 (58.9 ± 48.5 HU), iMAR-Algo2 (52.7 ± 46.8 HU) and iMAR-Algo3 (51.9 ± 46.1 HU) compared with WFBP (91.6 ± 81.6 HU, p algorithms, respectively. iMAR-Algo2 and iMAR-Algo3 reconstructions decreased mild and moderate artefacts compared with WFBP and iMAR-Algo1 (p algorithms led to a significant reduction of metal artefacts and an increase in overall image quality compared with WFBP in chest CT of patients with metallic implants in both subjective and objective analysis. iMAR-Algo2 and iMAR-Algo3 were best for mild artefacts; iMAR-Algo1 was superior for severe artefacts. Advances in knowledge: Iterative MAR led to a significant artefact reduction and an increase in image quality compared with WFBP in CT after implantation of thoracic devices. Adjusting iMAR algorithms to patients' metallic implants can help to improve image quality in CT.

  19. Second-order p-iterative solution of the Lambert/Gauss problem. [algorithm for efficient orbit determination

    Science.gov (United States)

    Boltz, F. W.

    1984-01-01

    An algorithm is presented for efficient p-iterative solution of the Lambert/Gauss orbit-determination problem using second-order Newton iteration. The algorithm is based on a universal transformation of Kepler's time-of-flight equation and approximate inverse solutions of this equation for short-way and long-way flight paths. The approximate solutions provide both good starting values for iteration and simplified computation of the second-order term in the iteration formula. Numerical results are presented which indicate that in many cases of practical significance (except those having collinear position vectors) the algorithm produces at least eight significant digits of accuracy with just two or three steps of iteration.
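
    As a generic illustration of a second-order Newton-type iteration (not the paper's universal time-of-flight formulation), the sketch below applies Halley's method, one common second-order scheme, to the classical Kepler equation E − e·sin(E) = M.

```python
import math

def kepler_halley(M, e, tol=1e-12, max_iter=20):
    """Solve Kepler's equation E - e*sin(E) = M with Halley's method,
    a second-order Newton-type iteration."""
    E = M if e < 0.8 else math.pi            # crude starting value
    for _ in range(max_iter):
        f = E - e * math.sin(E) - M
        fp = 1.0 - e * math.cos(E)           # f'
        fpp = e * math.sin(E)                # f''
        dE = -2.0 * f * fp / (2.0 * fp * fp - f * fpp)
        E += dE
        if abs(dE) < tol:
            break
    return E

print(kepler_halley(M=1.0, e=0.3))
```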

  20. An iterative algorithm for solving the multidimensional neutron diffusion nodal method equations on parallel computers

    International Nuclear Information System (INIS)

    Kirk, B.L.; Azmy, Y.Y.

    1992-01-01

    In this paper the one-group, steady-state neutron diffusion equation in two-dimensional Cartesian geometry is solved using the nodal integral method. The discrete variable equations comprise loosely coupled sets of equations representing the nodal balance of neutrons, as well as neutron current continuity along rows or columns of computational cells. An iterative algorithm that is more suitable for solving large problems concurrently is derived based on the decomposition of the spatial domain and is accelerated using successive overrelaxation. This algorithm is very well suited for parallel computers, especially since the spatial domain decomposition occurs naturally, so that the number of iterations required for convergence does not depend on the number of processors participating in the calculation. Implementation of the authors' algorithm on the Intel iPSC/2 hypercube and Sequent Balance 8000 parallel computer is presented, and measured speedup and efficiency for test problems are reported. The results suggest that the efficiency of the hypercube quickly deteriorates when many processors are used, while the Sequent Balance retains very high efficiency for a comparable number of participating processors. This leads to the conjecture that message-passing parallel computers are not as well suited for this algorithm as shared-memory machines
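
    The red-black ordering mentioned above colours the grid cells like a checkerboard so that all cells of one colour can be updated independently, and hence concurrently. A plain serial sketch of red-black successive overrelaxation for a 2D Poisson problem, used here as a stand-in for the one-group diffusion equation, is given below.

```python
import numpy as np

def red_black_sor(source, omega=1.8, n_sweeps=500, h=1.0):
    """Red-black SOR for -(u_xx + u_yy) = source with u = 0 on the boundary."""
    ny, nx = source.shape
    u = np.zeros((ny, nx))
    for _ in range(n_sweeps):
        for colour in (0, 1):                      # red cells, then black cells
            for j in range(1, ny - 1):
                for i in range(1, nx - 1):
                    if (i + j) % 2 != colour:
                        continue
                    gauss_seidel = 0.25 * (u[j - 1, i] + u[j + 1, i] +
                                           u[j, i - 1] + u[j, i + 1] +
                                           h * h * source[j, i])
                    u[j, i] += omega * (gauss_seidel - u[j, i])
    return u
```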

  1. Computational Analysis of Distance Operators for the Iterative Closest Point Algorithm.

    Directory of Open Access Journals (Sweden)

    Higinio Mora

    The Iterative Closest Point (ICP) algorithm is currently one of the most popular methods for rigid registration, and it has become a standard in the Robotics and Computer Vision communities. Many applications take advantage of it to align 2D/3D surfaces due to its popularity and simplicity. Nevertheless, some of its phases present a high computational cost, rendering some of its applications impossible. In this work, an efficient approach for the matching phase of the Iterative Closest Point algorithm is proposed. This stage is the main bottleneck of the method, so any efficiency improvement has a great positive impact on the performance of the algorithm. The proposal consists in using low computational cost point-to-point distance metrics instead of the classic Euclidean one. The candidates analysed are the Chebyshev and Manhattan distance metrics due to their simpler formulation. The experiments carried out have validated the performance, robustness and quality of the proposal. Different experimental cases and configurations have been set up, including a heterogeneous set of 3D figures and several scenarios with partial data and random noise. The results prove that an average speed-up of 14% can be obtained while preserving the convergence properties of the algorithm and the quality of the final results.
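
    Using an off-the-shelf k-d tree, swapping the metric used in the matching phase amounts to changing the Minkowski order p of the nearest-neighbour query (p = 1 for Manhattan, p = 2 for Euclidean, p = ∞ for Chebyshev). The sketch below covers only the matching step, not a full ICP, and uses random point clouds purely for illustration.

```python
import numpy as np
from scipy.spatial import cKDTree

def match_points(source, target, p=2):
    """Matching phase of ICP: nearest neighbour in `target` for every source point.

    p = 1   -> Manhattan distance
    p = 2   -> Euclidean distance
    p = inf -> Chebyshev distance
    """
    tree = cKDTree(target)
    distances, indices = tree.query(source, k=1, p=p)
    return distances, target[indices]

# Example: the same matching step with three different metrics.
rng = np.random.default_rng(1)
src, dst = rng.random((100, 3)), rng.random((500, 3))
for metric in (1, 2, np.inf):
    d, _ = match_points(src, dst, p=metric)
    print(metric, d.mean())
```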

  2. Scheduling of Iterative Algorithms with Matrix Operations for Efficient FPGA Design—Implementation of Finite Interval Constant Modulus Algorithm

    Czech Academy of Sciences Publication Activity Database

    Šůcha, P.; Hanzálek, Z.; Heřmánek, Antonín; Schier, Jan

    2007-01-01

    Roč. 46, č. 1 (2007), s. 35-53 ISSN 0922-5773 R&D Projects: GA AV ČR(CZ) 1ET300750402; GA MŠk(CZ) 1M0567; GA MPO(CZ) FD-K3/082 Institutional research plan: CEZ:AV0Z10750506 Keywords : high-level synthesis * cyclic scheduling * iterative algorithms * imperfectly nested loops * integer linear programming * FPGA * VLSI design * blind equalization * implementation Subject RIV: BA - General Mathematics Impact factor: 0.449, year: 2007 http://www.springerlink.com/content/t217kg0822538014/fulltext.pdf

  3. Designing an Iterative Learning Control Algorithm Based on Process History using limited post process geometrical information

    DEFF Research Database (Denmark)

    Endelt, Benny Ørtoft; Volk, Wolfram

    2013-01-01

    Feedback control of sheet metal forming operations has been an active research field for the last two decades, and highly advanced control algorithms have been proposed, controlling both the total blank-holder force and in some cases also the distribution of the blank-holder force. However, the reaction speed may be insufficient compared to the production rate in an industrial application. We propose to design an iterative learning control (ILC) algorithm which can control and update the blank-holder force, as well as the distribution of the blank-holder force, based on limited geometric data from...
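
    The trial-to-trial learning idea behind ILC can be illustrated with the generic first-order update u_{k+1} = u_k + L·e_k applied to a toy discrete-time plant, as in the sketch below; the forming-process specifics (blank-holder force distribution, post-process geometry measurements) are not represented, and the plant model and learning gain are arbitrary assumptions.

```python
import numpy as np

def simulate_plant(u, a=0.8, b=0.5):
    """Toy first-order plant y[t+1] = a*y[t] + b*u[t] run over one trial."""
    y = np.zeros(len(u) + 1)
    for t in range(len(u)):
        y[t + 1] = a * y[t] + b * u[t]
    return y[1:]

def iterative_learning_control(reference, n_trials=30, learning_gain=0.9):
    """First-order ILC: u_{k+1} = u_k + L * e_k, with e_k the tracking error of trial k."""
    u = np.zeros_like(reference)
    for _ in range(n_trials):
        y = simulate_plant(u)
        error = reference - y
        u = u + learning_gain * error
    return u, error

ref = np.sin(np.linspace(0, 2 * np.pi, 50))
u_final, last_error = iterative_learning_control(ref)
print("max tracking error after learning:", np.abs(last_error).max())
```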

  4. Fast iterative censoring CFAR algorithm for ship detection from SAR images

    Science.gov (United States)

    Gu, Dandan; Yue, Hui; Zhang, Yuan; Gao, Pengcheng

    2017-11-01

    Ship detection is one of the essential techniques for ship recognition from synthetic aperture radar (SAR) images. This paper presents a fast iterative detection procedure to eliminate the influence of target returns on the estimation of local sea clutter distributions for constant false alarm rate (CFAR) detectors. A fast block detector is first employed to extract potential target sub-images; then, an iterative censoring CFAR algorithm is used to detect ship candidates from each target block adaptively and efficiently, where parallel detection is available and the statistical parameters of the G0 distribution, which fits local sea clutter well, can be quickly estimated based on an integral image operator. Experimental results on TerraSAR-X images demonstrate the effectiveness of the proposed technique.

  5. Design of 4D x-ray tomography experiments for reconstruction using regularized iterative algorithms

    Science.gov (United States)

    Mohan, K. Aditya

    2017-10-01

    4D X-ray computed tomography (4D-XCT) is widely used to perform non-destructive characterization of time varying physical processes in various materials. The conventional approach to improving temporal resolution in 4D-XCT involves the development of expensive and complex instrumentation that acquire data faster with reduced noise. It is customary to acquire data with many tomographic views at a high signal to noise ratio. Instead, temporal resolution can be improved using regularized iterative algorithms that are less sensitive to noise and limited views. These algorithms benefit from optimization of other parameters such as the view sampling strategy while improving temporal resolution by reducing the total number of views or the detector exposure time. This paper presents the design principles of 4D-XCT experiments when using regularized iterative algorithms derived using the framework of model-based reconstruction. A strategy for performing 4D-XCT experiments is presented that allows for improving the temporal resolution by progressively reducing the number of views or the detector exposure time. Theoretical analysis of the effect of the data acquisition parameters on the detector signal to noise ratio, spatial reconstruction resolution, and temporal reconstruction resolution is also presented in this paper.

  6. Convergent iterative closest-point algorithm to accommodate anisotropic and inhomogeneous localization error.

    Science.gov (United States)

    Maier-Hein, Lena; Franz, Alfred M; dos Santos, Thiago R; Schmidt, Mirko; Fangerau, Markus; Meinzer, Hans-Peter; Fitzpatrick, J Michael

    2012-08-01

    Since its introduction in the early 1990s, the Iterative Closest Point (ICP) algorithm has become one of the most well-known methods for geometric alignment of 3D models. Given two roughly aligned shapes represented by two point sets, the algorithm iteratively establishes point correspondences given the current alignment of the data and computes a rigid transformation accordingly. From a statistical point of view, however, it implicitly assumes that the points are observed with isotropic Gaussian noise. In this paper, we show that this assumption may lead to errors and generalize the ICP such that it can account for anisotropic and inhomogeneous localization errors. We 1) provide a formal description of the algorithm, 2) extend it to registration of partially overlapping surfaces, 3) prove its convergence, 4) derive the required covariance matrices for a set of selected applications, and 5) present means for optimizing the runtime. An evaluation on publicly available surface meshes as well as on a set of meshes extracted from medical imaging data shows a dramatic increase in accuracy compared to the original ICP, especially in the case of partial surface registration. As point-based surface registration is a central component in various applications, the potential impact of the proposed method is high.

  7. Automatic Detection and Quantification of WBCs and RBCs Using Iterative Structured Circle Detection Algorithm

    Directory of Open Access Journals (Sweden)

    Yazan M. Alomari

    2014-01-01

    Segmentation and counting of blood cells are considered an important step that helps to extract features to diagnose specific diseases like malaria or leukemia. The manual counting of white blood cells (WBCs) and red blood cells (RBCs) in microscopic images is an extremely tedious, time consuming, and inaccurate process. Automatic analysis will allow hematologist experts to perform faster and more accurately. The proposed method uses an iterative structured circle detection algorithm for the segmentation and counting of WBCs and RBCs. The separation of WBCs from RBCs was achieved by thresholding, and specific preprocessing steps were developed for each cell type. Counting was performed for each image using the proposed method based on modified circle detection, which automatically counted the cells. Several modifications were made to the basic (RCD) algorithm to solve the initialization problem, detect irregular circles (cells), select the optimal circle from the candidate circles, and determine the number of iterations in a fully dynamic way to improve detection and running time. The validation method used to determine segmentation accuracy was a quantitative analysis that included Precision, Recall, and F-measurement tests. The average accuracy of the proposed method was 95.3% for RBCs and 98.4% for WBCs.

  8. ISTA-Net: Iterative Shrinkage-Thresholding Algorithm Inspired Deep Network for Image Compressive Sensing

    KAUST Repository

    Zhang, Jian

    2017-06-24

    Traditional methods for image compressive sensing (CS) reconstruction solve a well-defined inverse problem that is based on a predefined CS model, which defines the underlying structure of the problem and is generally solved by employing convergent iterative solvers. These optimization-based CS methods face the challenge of choosing optimal transforms and tuning parameters in their solvers, while also suffering from high computational complexity in most cases. Recently, some deep network based CS algorithms have been proposed to improve CS reconstruction performance, while dramatically reducing time complexity as compared to optimization-based methods. Despite their impressive results, the proposed networks (either with fully-connected or repetitive convolutional layers) lack any structural diversity and they are trained as a black box, void of any insights from the CS domain. In this paper, we combine the merits of both types of CS methods: the structure insights of optimization-based method and the performance/speed of network-based ones. We propose a novel structured deep network, dubbed ISTA-Net, which is inspired by the Iterative Shrinkage-Thresholding Algorithm (ISTA) for optimizing a general $l_1$ norm CS reconstruction model. ISTA-Net essentially implements a truncated form of ISTA, where all ISTA-Net parameters are learned end-to-end to minimize a reconstruction error in training. Borrowing more insights from the optimization realm, we propose an accelerated version of ISTA-Net, dubbed FISTA-Net, which is inspired by the fast iterative shrinkage-thresholding algorithm (FISTA). Interestingly, this acceleration naturally leads to skip connections in the underlying network design. Extensive CS experiments demonstrate that the proposed ISTA-Net and FISTA-Net outperform existing optimization-based and network-based CS methods by large margins, while maintaining a fast runtime.

  9. Iterative local Chi2 alignment algorithm for the ATLAS Pixel detector

    CERN Document Server

    Göttfert, Tobias

    The existing local chi2 alignment approach for the ATLAS SCT detector was extended to the alignment of the ATLAS Pixel detector. This approach is linear, aligns modules separately, and uses distance of closest approach residuals and iterations. The derivation and underlying concepts of the approach are presented. To show the feasibility of the approach for Pixel modules, a simplified, stand-alone track simulation, together with the alignment algorithm, was developed with the ROOT analysis software package. The Pixel alignment software was integrated into Athena, the ATLAS software framework. First results and the achievable accuracy for this approach with a simulated dataset are presented.

  10. Improvement of image quality of holographic projection on tilted plane using iterative algorithm

    Science.gov (United States)

    Pang, Hui; Cao, Axiu; Wang, Jiazhou; Zhang, Man; Deng, Qiling

    2017-12-01

    Holographic image projection onto a tilted plane has important application prospects. In this paper, we propose a method to compute a phase-only hologram that can reconstruct a clear image on a tilted plane. By adding a constant phase to the target image on the inclined plane, the corresponding light field distribution on the plane parallel to the hologram plane is derived through a tilted diffraction calculation. The phase distribution of the hologram is then obtained by an iterative algorithm with amplitude and phase constraints. Simulation and optical experiments are performed to show the effectiveness of the proposed method.

  11. A Design Algorithm using External Perturbation to Improve Iterative Feedback Tuning Convergence

    DEFF Research Database (Denmark)

    Huusom, Jakob Kjøbsted; Hjalmarsson, Håkan; Poulsen, Niels Kjølstad

    2011-01-01

    Iterative Feedback Tuning constitutes an attractive control loop tuning method for processes in the absence of process insight. It is a purely data driven approach for optimization of the loop performance. The standard formulation ensures an unbiased estimate of the loop performance cost function...... information content by introducing an optimal perturbation signal in the tuning algorithm. The theoretical analysis is supported by a simulation example where the proposed method is compared to an existing method for acceleration of the convergence by use of optimal prefilters....

  12. Convergence of an Iterative Algorithm for Common Solutions for Zeros of Maximal Accretive Operator with Applications

    Directory of Open Access Journals (Sweden)

    Uamporn Witthayarat

    2012-01-01

    The aim of this paper is to introduce an iterative algorithm for finding a common solution of the sets $(A+M_2)^{-1}(0)$ and $(B+M_1)^{-1}(0)$, where each $M_i$ is a maximal accretive operator in a Banach space, and, by using the proposed algorithm, to establish some strong convergence theorems for common solutions of the two sets above in a uniformly convex and 2-uniformly smooth Banach space. The results obtained in this paper extend and improve the corresponding results of Qin et al. (2011), from Hilbert spaces to Banach spaces, and of Petrot et al. (2011). Moreover, we also apply our results to some applications for solving convex feasibility problems.

  13. ITERATION FREE FRACTAL COMPRESSION USING GENETIC ALGORITHM FOR STILL COLOUR IMAGES

    Directory of Open Access Journals (Sweden)

    A.R. Nadira Banu Kamal

    2014-02-01

    The storage requirements for images can be excessive if true color and a high perceived image quality are desired. An RGB image may be viewed as a stack of three gray-scale images that, when fed into the red, green and blue inputs of a color monitor, produce a color image on the screen. The large size of many images leads to long, costly transmission times. Hence, an iteration-free fractal algorithm is proposed in this research paper to design an efficient search of the domain pools for colour image compression using a Genetic Algorithm (GA). The proposed methodology reduces the coding time and the intensive computation tasks. Parameters such as image quality, compression ratio and coding time are analyzed. It is observed that the proposed method achieves excellent performance in image quality with a reduction in storage space.

  14. Iterated Local Search Algorithm with Strategic Oscillation for School Bus Routing Problem with Bus Stop Selection

    Directory of Open Access Journals (Sweden)

    Mohammad Saied Fallah Niasar

    2017-02-01

    The school bus routing problem (SBRP) represents a variant of the well-known vehicle routing problem. The main goal of this study is to pick up students allocated to some bus stops and generate routes, including the selected stops, in order to carry students to school. In this paper, we have proposed a simple but effective metaheuristic approach that employs two features: first, it utilizes large neighborhood structures for a deeper exploration of the search space; second, the proposed heuristic executes an efficient transition between the feasible and infeasible portions of the search space. Exploration of the infeasible area is controlled by a dynamic penalty function to convert infeasible solutions into feasible ones. Two metaheuristics, called N-ILS (a variant of Nearest Neighbourhood with Iterated Local Search) and I-ILS (a variant of Insertion with Iterated Local Search), are proposed to solve SBRP. Our experimental procedure is based on two data sets. The results show that N-ILS is able to obtain better solutions in shorter computing times. Additionally, N-ILS appears to be very competitive in comparison with the best existing metaheuristics suggested for SBRP.
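
    Both N-ILS and I-ILS follow the standard iterated local search skeleton: run a local search, perturb the incumbent, re-run the local search and decide whether to accept the result. A generic sketch of that skeleton on a toy travelling-salesman-style permutation problem is given below; the routing constraints, bus-stop selection and dynamic penalty function of the paper are omitted.

```python
import random

def tour_length(tour, dist):
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def two_opt(tour, dist):
    """Simple 2-opt local search: keep reversing segments while it helps."""
    improved = True
    while improved:
        improved = False
        for i in range(1, len(tour) - 1):
            for j in range(i + 1, len(tour)):
                cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]
                if tour_length(cand, dist) < tour_length(tour, dist):
                    tour, improved = cand, True
    return tour

def iterated_local_search(dist, n_iter=100):
    n = len(dist)
    best = two_opt(list(range(n)), dist)
    for _ in range(n_iter):
        cand = best[:]
        i, j = sorted(random.sample(range(n), 2))
        cand[i], cand[j] = cand[j], cand[i]                      # perturbation: swap two stops
        cand = two_opt(cand, dist)                               # local search
        if tour_length(cand, dist) < tour_length(best, dist):    # acceptance criterion
            best = cand
    return best

# Toy example: random symmetric distance matrix over 8 stops.
n = 8
pts = [(random.random(), random.random()) for _ in range(n)]
dist = [[((xa - xb) ** 2 + (ya - yb) ** 2) ** 0.5 for xb, yb in pts] for xa, ya in pts]
print(tour_length(iterated_local_search(dist), dist))
```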

  15. Evaluation of the OSC-TV iterative reconstruction algorithm for cone-beam optical CT

    Energy Technology Data Exchange (ETDEWEB)

    Matenine, Dmitri, E-mail: dmitri.matenine.1@ulaval.ca; Mascolo-Fortin, Julia, E-mail: julia.mascolo-fortin.1@ulaval.ca [Département de physique, de génie physique et d’optique, Université Laval, Québec, Québec G1V 0A6 (Canada); Goussard, Yves, E-mail: yves.goussard@polymtl.ca [Département de génie électrique/Institut de génie biomédical, École Polytechnique de Montréal, C.P. 6079, succ. Centre-ville, Montréal, Québec H3C 3A7 (Canada); Després, Philippe, E-mail: philippe.despres@phy.ulaval.ca [Département de physique, de génie physique et d’optique and Centre de recherche sur le cancer, Université Laval, Québec, Québec G1V 0A6, Canada and Département de radio-oncologie and Centre de recherche du CHU de Québec, Québec, Québec G1R 2J6 (Canada)

    2015-11-15

    Purpose: The present work evaluates an iterative reconstruction approach, namely, the ordered subsets convex (OSC) algorithm with regularization via total variation (TV) minimization in the field of cone-beam optical computed tomography (optical CT). One of the uses of optical CT is gel-based 3D dosimetry for radiation therapy, where it is employed to map dose distributions in radiosensitive gels. Model-based iterative reconstruction may improve optical CT image quality and contribute to a wider use of optical CT in clinical gel dosimetry. Methods: This algorithm was evaluated using experimental data acquired by a cone-beam optical CT system, as well as complementary numerical simulations. A fast GPU implementation of OSC-TV was used to achieve reconstruction times comparable to those of conventional filtered backprojection. Images obtained via OSC-TV were compared with the corresponding filtered backprojections. Spatial resolution and uniformity phantoms were scanned and respective reconstructions were subject to evaluation of the modulation transfer function, image uniformity, and accuracy. The artifacts due to refraction and total signal loss from opaque objects were also studied. Results: The cone-beam optical CT data reconstructions showed that OSC-TV outperforms filtered backprojection in terms of image quality, thanks to a model-based simulation of the photon attenuation process. It was shown to significantly improve the image spatial resolution and reduce image noise. The accuracy of the estimation of linear attenuation coefficients remained similar to that obtained via filtered backprojection. Certain image artifacts due to opaque objects were reduced. Nevertheless, the common artifact due to the gel container walls could not be eliminated. Conclusions: The use of iterative reconstruction improves cone-beam optical CT image quality in many ways. The comparisons between OSC-TV and filtered backprojection presented in this paper demonstrate that OSC-TV can

  18. Improved iterative image reconstruction algorithm for the exterior problem of computed tomography

    Energy Technology Data Exchange (ETDEWEB)

    Guo, Yumeng [Chongqing University, College of Mathematics and Statistics, Chongqing 401331 (China); Chongqing University, ICT Research Center, Key Laboratory of Optoelectronic Technology and System of the Education Ministry of China, Chongqing 400044 (China); Zeng, Li, E-mail: drlizeng@cqu.edu.cn [Chongqing University, College of Mathematics and Statistics, Chongqing 401331 (China); Chongqing University, ICT Research Center, Key Laboratory of Optoelectronic Technology and System of the Education Ministry of China, Chongqing 400044 (China)

    2017-01-11

    In industrial applications that are limited by the angle of a fan-beam and the length of a detector, the exterior problem of computed tomography (CT) uses only the projection data that correspond to the external annulus of the objects to reconstruct an image. Because the reconstructions are not affected by the projection data that correspond to the interior of the objects, the exterior problem is widely applied to detect cracks in the outer wall of large-sized objects, such as in-service pipelines. However, image reconstruction in the exterior problem is still a challenging problem due to truncated projection data and beam-hardening, both of which can lead to distortions and artifacts. Thus, developing an effective algorithm and adopting a scanning trajectory suited for the exterior problem may be valuable. In this study, an improved iterative algorithm that combines total variation minimization (TVM) with a region scalable fitting (RSF) model was developed for a unilateral off-centered scanning trajectory and can be utilized to inspect large-sized objects for defects. Experiments involving simulated phantoms and real projection data were conducted to validate the practicality of our algorithm. Furthermore, comparative experiments show that our algorithm outperforms others in suppressing the artifacts caused by truncated projection data and beam-hardening.
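
    The total variation minimization used for regularization in the algorithm above can be illustrated with a small, generic sketch (not the authors' implementation): a gradient-descent step on the smoothed isotropic TV of a 2D image, which is the building block typically alternated with data-fidelity updates in TVM-regularized CT reconstruction.

```python
import numpy as np

def tv_gradient(img, eps=1e-8):
    """Gradient of the smoothed isotropic total variation of a 2D image."""
    # Forward differences (zero difference at the far edge).
    dx = np.diff(img, axis=1, append=img[:, -1:])
    dy = np.diff(img, axis=0, append=img[-1:, :])
    mag = np.sqrt(dx**2 + dy**2 + eps)
    px, py = dx / mag, dy / mag
    # Negative divergence of the normalized gradient field.
    div = (np.diff(px, axis=1, prepend=px[:, :1])
           + np.diff(py, axis=0, prepend=py[:1, :]))
    return -div

def tv_smooth(img, step=0.1, n_iter=50):
    """Plain gradient descent on TV, the step alternated with data updates in TVM loops."""
    x = img.copy()
    for _ in range(n_iter):
        x -= step * tv_gradient(x)
    return x

# Toy usage: smooth a noisy piecewise-constant phantom.
rng = np.random.default_rng(0)
phantom = np.zeros((64, 64)); phantom[16:48, 16:48] = 1.0
noisy = phantom + 0.2 * rng.standard_normal(phantom.shape)
print(float(np.abs(tv_smooth(noisy) - phantom).mean()))
```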

  19. A Quadratically Convergent O(square root of nL)-Iteration Algorithm for Linear Programming

    National Research Council Canada - National Science Library

    Ye, Y; Gueler, O; Tapia, Richard A; Zhang, Y

    1991-01-01

    ...)-iteration complexity while exhibiting superlinear convergence of the duality gap to zero under the assumption that the iteration sequence converges, and quadratic convergence of the duality gap...

  20. Evaluation of global synchronization for iterative algebra algorithms on many-core

    KAUST Repository

    ul Hasan Khan, Ayaz

    2015-06-01

    © 2015 IEEE. Massively parallel computing is applied extensively in various scientific and engineering domains. With the growing interest in many-core architectures and due to the lack of explicit support for inter-block synchronization specifically in GPUs, synchronization becomes necessary to minimize inter-block communication time. In this paper, we have proposed two new inter-block synchronization techniques: 1) Relaxed Synchronization, and 2) Block-Query Synchronization. These schemes are used in implementing numerical iterative solvers, where computation/communication overlap is a commonly used optimization to enhance application performance. We have evaluated and analyzed the performance of the proposed synchronization techniques using a Jacobi iterative solver in comparison to state-of-the-art inter-block lock-free synchronization techniques. We have achieved about 1-8% performance improvement in terms of execution time over lock-free synchronization depending on the problem size and the number of thread blocks. We have also evaluated the proposed algorithm on GPU and MIC architectures and obtained about 8-26% performance improvement over the barrier synchronization available in the OpenMP programming environment depending on the problem size and number of cores used.
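
    For reference, a minimal serial Jacobi iteration is sketched below in Python (an illustration, not the paper's GPU code); on a many-core device, the per-row updates of each sweep are what get split across thread blocks, and the convergence check between sweeps is where the inter-block synchronization discussed above becomes the bottleneck.

```python
import numpy as np

def jacobi(A, b, tol=1e-10, max_iter=10_000):
    """Solve A x = b with the Jacobi iteration (A should be diagonally dominant)."""
    D = np.diag(A)
    R = A - np.diagflat(D)          # off-diagonal part
    x = np.zeros_like(b, dtype=float)
    for _ in range(max_iter):
        x_new = (b - R @ x) / D     # every component updated from the previous sweep
        if np.linalg.norm(x_new - x, ord=np.inf) < tol:  # this check is the global sync point
            return x_new
        x = x_new
    return x

# Toy usage on a diagonally dominant system.
A = np.array([[4.0, -1.0, 0.0], [-1.0, 4.0, -1.0], [0.0, -1.0, 4.0]])
b = np.array([1.0, 2.0, 3.0])
print(jacobi(A, b), np.linalg.solve(A, b))
```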

  1. Influence of adaptive statistical iterative reconstruction algorithm on image quality in coronary computed tomography angiography.

    Science.gov (United States)

    Precht, Helle; Thygesen, Jesper; Gerke, Oke; Egstrup, Kenneth; Waaler, Dag; Lambrechtsen, Jess

    2016-12-01

    Coronary computed tomography angiography (CCTA) requires high spatial and temporal resolution, increased low contrast resolution for the assessment of coronary artery stenosis, plaque detection, and/or non-coronary pathology. Therefore, new reconstruction algorithms, particularly iterative reconstruction (IR) techniques, have been developed in an attempt to improve image quality with no cost in radiation exposure. To evaluate whether adaptive statistical iterative reconstruction (ASIR) enhances perceived image quality in CCTA compared to filtered back projection (FBP). Thirty patients underwent CCTA due to suspected coronary artery disease. Images were reconstructed using FBP, 30% ASIR, and 60% ASIR. Ninety image sets were evaluated by five observers using the subjective visual grading analysis (VGA) and assessed by proportional odds modeling. Objective quality assessment (contrast, noise, and the contrast-to-noise ratio [CNR]) was analyzed with linear mixed effects modeling on log-transformed data. The need for ethical approval was waived by the local ethics committee as the study only involved anonymously collected clinical data. VGA showed significant improvements in sharpness by comparing FBP with ASIR, resulting in odds ratios of 1.54 for 30% ASIR and 1.89 for 60% ASIR ( P  = 0.004). The objective measures showed significant differences between FBP and 60% ASIR ( P  < 0.0001) for noise, with an estimated ratio of 0.82, and for CNR, with an estimated ratio of 1.26. ASIR improved the subjective image quality of parameter sharpness and, objectively, reduced noise and increased CNR.

  2. MPPT-Based Control Algorithm for PV System Using iteration-PSO under Irregular shadow Conditions

    Directory of Open Access Journals (Sweden)

    M. Abdulkadir

    2017-02-01

    Full Text Available Conventional maximum power point tracking (MPPT) techniques can hardly track the global maximum power point (GMPP) because the power-voltage characteristics of photovoltaics (PV) exhibit multiple local peaks under irregular shadow, so they easily fall into a local maximum power point. To tackle this deficiency, an efficient Iteration Particle Swarm Optimization (IPSO) has been developed to improve the solution quality and convergence speed of the traditional PSO, so that it can effectively track the GMPP under irregular shadow conditions. The proposed technique has advantages such as a simple structure, fast response, strong robustness, and convenient implementation. It is applied to MPPT control of a PV system under irregular shadow to solve the multi-peak optimization problem in partial shading. Recently, the dynamic MPPT performance under varying irradiance conditions has also received much attention from the PV community: since the European standard EN 50530, which defines the recommended varying irradiance profiles, was released, researchers have been required to improve dynamic MPPT performance. This paper therefore also evaluates the dynamic MPPT performance using the EN 50530 standard. The simulation results show that the iteration-PSO method can quickly track the global MPP and achieves a higher tracking speed and higher dynamic MPPT efficiency under EN 50530 than the conventional PSO.
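
    The global search at the core of an iteration-PSO MPPT scheme can be sketched with a plain particle swarm over a synthetic multi-peak power-voltage curve; the curve, swarm parameters and peak locations below are illustrative assumptions, not the paper's PV model.

```python
import numpy as np

rng = np.random.default_rng(1)

def pv_power(v):
    """Synthetic multi-peak P-V curve mimicking partial shading (illustrative only)."""
    return 40 * np.exp(-((v - 12) / 4) ** 2) + 55 * np.exp(-((v - 27) / 3) ** 2)

def pso_mppt(n_particles=10, n_iter=40, v_min=0.0, v_max=35.0, w=0.6, c1=1.5, c2=1.5):
    v = rng.uniform(v_min, v_max, n_particles)      # particle positions (operating voltages)
    vel = np.zeros(n_particles)
    pbest, pbest_val = v.copy(), pv_power(v)
    gbest = pbest[np.argmax(pbest_val)]
    for _ in range(n_iter):
        r1, r2 = rng.random(n_particles), rng.random(n_particles)
        vel = w * vel + c1 * r1 * (pbest - v) + c2 * r2 * (gbest - v)
        v = np.clip(v + vel, v_min, v_max)
        p = pv_power(v)
        improved = p > pbest_val
        pbest[improved], pbest_val[improved] = v[improved], p[improved]
        gbest = pbest[np.argmax(pbest_val)]
    return gbest, pv_power(gbest)

print(pso_mppt())   # should land near the higher (global) peak around 27 V
```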

  3. A simple iterative independent component analysis algorithm for vibration source signal identification of complex structures

    Directory of Open Access Journals (Sweden)

    Dong-Sup Lee

    2015-01-01

    Full Text Available Independent Component Analysis (ICA), one of the blind source separation methods, can be applied for extracting unknown source signals only from received signals. This is accomplished by finding statistical independence of signal mixtures and has been successfully applied to myriad fields such as medical science, image processing, and numerous others. Nevertheless, there are inherent problems that have been reported when using this technique: instability and invalid ordering of separated signals, particularly when using a conventional ICA technique in vibratory source signal identification of complex structures. In this study, a simple iterative algorithm of the conventional ICA has been proposed to mitigate these problems. The proposed method to extract more stable source signals having valid order includes an iterative and reordering process of extracted mixing matrix to reconstruct finally converged source signals, referring to the magnitudes of correlation coefficients between the intermediately separated signals and the signals measured on or nearby sources. In order to review the problems of the conventional ICA technique and to validate the proposed method, numerical analyses have been carried out for a virtual response model and a 30 m class submarine model. Moreover, in order to investigate applicability of the proposed method to a real problem of a complex structure, an experiment has been carried out for a scaled submarine mockup. The results show that the proposed method could resolve the inherent problems of a conventional ICA technique.

  4. An Iterative Algorithm for the Split Equality and Multiple-Sets Split Equality Problem

    Directory of Open Access Journals (Sweden)

    Luoyi Shi

    2014-01-01

    Full Text Available The multiple-sets split equality problem (MSSEP) requires finding a point x ∈ ∩_{i=1}^N C_i, y ∈ ∩_{j=1}^M Q_j such that Ax = By, where N and M are positive integers, {C_1, C_2, …, C_N} and {Q_1, Q_2, …, Q_M} are closed convex subsets of Hilbert spaces H_1, H_2, respectively, and A : H_1 → H_3, B : H_2 → H_3 are two bounded linear operators. When N = M = 1, the MSSEP is called the split equality problem (SEP). If B = I, then the MSSEP and SEP reduce to the well-known multiple-sets split feasibility problem (MSSFP) and split feasibility problem (SFP), respectively. One of the purposes of this paper is to introduce an iterative algorithm to solve the SEP and MSSEP in the framework of infinite-dimensional Hilbert spaces under some more mild conditions for the iterative coefficient.
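
    For the SEP itself (find x in C, y in Q with Ax = By), a standard simultaneous projected-gradient iteration alternates projections onto C and Q with a correction along A and B. The sketch below uses Euclidean balls as C and Q purely for illustration; it is a generic scheme of this type, not the specific algorithm of the paper.

```python
import numpy as np

def project_ball(z, center, radius):
    """Euclidean projection onto a ball, used here as a stand-in for the convex sets C and Q."""
    d = z - center
    n = np.linalg.norm(d)
    return z if n <= radius else center + radius * d / n

def split_equality(A, B, proj_C, proj_Q, gamma, n_iter=500):
    """Simultaneous projection iteration for: find x in C, y in Q with A x = B y."""
    x = np.zeros(A.shape[1])
    y = np.zeros(B.shape[1])
    for _ in range(n_iter):
        r = A @ x - B @ y                      # equality residual
        x = proj_C(x - gamma * A.T @ r)        # gradient step on the residual, then project onto C
        y = proj_Q(y + gamma * B.T @ r)        # gradient step on the residual, then project onto Q
    return x, y

# Toy usage with random operators and two balls as the constraint sets.
rng = np.random.default_rng(0)
A, B = rng.standard_normal((3, 4)), rng.standard_normal((3, 4))
proj_C = lambda z: project_ball(z, np.ones(4), 2.0)
proj_Q = lambda z: project_ball(z, -np.ones(4), 2.0)
gamma = 1.0 / (np.linalg.norm(A, 2) ** 2 + np.linalg.norm(B, 2) ** 2)
x, y = split_equality(A, B, proj_C, proj_Q, gamma)
print(np.linalg.norm(A @ x - B @ y))           # equality residual after iterating
```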

  5. Finding the magnetic size distribution of magnetic nanoparticles from magnetization measurements via the iterative Kaczmarz algorithm

    Science.gov (United States)

    Schmidt, Daniel; Eberbeck, Dietmar; Steinhoff, Uwe; Wiekhorst, Frank

    2017-06-01

    The characterization of the size distribution of magnetic nanoparticles is an important step for the evaluation of their suitability for many different applications like magnetic hyperthermia, drug targeting or Magnetic Particle Imaging. We present a new method based on the iterative Kaczmarz algorithm that enables the reconstruction of the size distribution from magnetization measurements without a priori knowledge of the distribution form. We show in simulations that the method is capable of very exact reconstructions of a given size distribution and, in that, is highly robust to noise contamination. Moreover, we applied the method on the well characterized FeraSpin™ series and obtained results that were in accordance with literature and boundary conditions based on their synthesis via separation of the original suspension FeraSpin R. It is therefore concluded that this method is a powerful and intuitive tool for reconstructing particle size distributions from magnetization measurements.
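
    The underlying Kaczmarz sweep is easy to state: cycle through the rows of the linear model M p ≈ m (moment matrix times distribution weights equals measured magnetization) and project the current estimate onto each row's hyperplane, here with a nonnegativity clip since distribution weights cannot be negative. The random system in the sketch is an illustrative stand-in, not the authors' magnetization model.

```python
import numpy as np

def kaczmarz_nonneg(M, m, n_sweeps=200):
    """Cyclic Kaczmarz iteration for M p ≈ m with a nonnegativity clip after each sweep."""
    p = np.zeros(M.shape[1])
    row_norms = np.einsum('ij,ij->i', M, M)
    for _ in range(n_sweeps):
        for i in range(M.shape[0]):
            if row_norms[i] > 0:
                p += (m[i] - M[i] @ p) / row_norms[i] * M[i]   # project onto row i's hyperplane
        p = np.clip(p, 0.0, None)                              # distribution weights are nonnegative
    return p

# Toy usage: recover a sparse nonnegative weight vector from noisy "magnetization" data.
rng = np.random.default_rng(2)
M = np.abs(rng.standard_normal((50, 20)))
p_true = np.zeros(20); p_true[[5, 12]] = [1.0, 0.5]
m = M @ p_true + 0.01 * rng.standard_normal(50)
print(np.round(kaczmarz_nonneg(M, m), 2))
```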

  6. Multi-excitation Raman difference spectroscopy based on modified multi-energy constrained iterative deconvolution algorithm

    Science.gov (United States)

    Zou, Wenlong; Cai, Zhijian; Zhou, Hongwu; Wu, Jianhong

    2013-12-01

    Raman spectroscopy is fast and nondestructive, and it is widely used in chemistry, biomedicine, food safety and other areas. However, Raman spectroscopy is often hampered by a strong fluorescence background, especially in food additive detection and biomedical research. In this paper, multi-excitation Raman difference spectroscopy (MERDS), which incorporates a series of excitation wavelengths separated by small wavelength shifts, was adopted as an efficient technique. A modified multi-energy constrained iterative deconvolution (MMECID) algorithm was proposed to reconstruct the Raman spectrum. Computer simulation and experiments both demonstrated that the Raman spectrum can be well reconstructed from a large fluorescence background. The more excitation sources used, the better the signal-to-noise ratio obtained. However, equipping the Raman spectrometer with many excitation sources increases the complexity of the experimental system. Thus, a trade-off should be made between the number of excitation frequencies and experimental complexity.

  7. An optimization iterative algorithm based on nonnegative constraint with application to Allan variance analysis technique

    Science.gov (United States)

    Lv, Hanfeng; Zhang, Liang; Wang, Dingjie; Wu, Jie

    2014-03-01

    It is well known that inertial integrated navigation systems can provide accurate navigation information. In these systems, inertial sensor random error often becomes the limiting factor to get a better performance. So it is imperative to have accurate characterization of the random error. Allan variance analysis technique has a good performance in analyzing inertial sensor random error, and it is always used to characterize various types of the random error terms. This paper proposes a new method named optimization iterative algorithm based on nonnegative constraint applied to Allan variance analysis technique to estimate parameters of the random error terms. The parameter estimates by this method are nonnegative and optimal, and the estimation process does not have matrix nearly singular issues. Testing with simulation data and the experimental data of a fiber optical gyro, the parameters estimated by the presented method are compared against other excellent methods with good agreement; moreover, the objective function has the minimum value.

  8. Efficient fractal-based mutation in evolutionary algorithms from iterated function systems

    Science.gov (United States)

    Salcedo-Sanz, S.; Aybar-Ruíz, A.; Camacho-Gómez, C.; Pereira, E.

    2018-03-01

    In this paper we present a new mutation procedure for Evolutionary Programming (EP) approaches, based on Iterated Function Systems (IFSs). The new mutation procedure proposed consists of considering a set of IFS which are able to generate fractal structures in a two-dimensional phase space, and using them to modify a current individual of the EP algorithm, instead of using random numbers from different probability density functions. We test this new proposal in a set of benchmark functions for continuous optimization problems. In this case, we compare the proposed mutation against classical Evolutionary Programming approaches, with mutations based on Gaussian, Cauchy and chaotic maps. We also include a discussion on the IFS-based mutation in a real application of Tuned Mass Damper (TMD) location and optimization for vibration cancellation in buildings. In both practical cases, the proposed EP with the IFS-based mutation obtained extremely competitive results compared to alternative classical mutation operators.
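
    The mutation idea can be pictured with the classic chaos game: an IFS of affine contractions generates points on a fractal attractor, and those recentred points replace the Gaussian or Cauchy perturbation applied to an individual. The Sierpinski-type IFS and scaling below are illustrative assumptions, not the particular systems used in the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

# Sierpinski-gasket IFS: three contractions toward the triangle's vertices.
VERTICES = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, np.sqrt(3) / 2]])

def ifs_points(n_points=200, burn_in=20):
    """Run the chaos game and return attractor points recentred on the triangle centroid."""
    z = np.array([0.3, 0.3])
    pts = []
    for k in range(n_points + burn_in):
        z = 0.5 * (z + VERTICES[rng.integers(3)])   # pick a map at random, contract toward its vertex
        if k >= burn_in:
            pts.append(z.copy())
    return np.array(pts) - VERTICES.mean(axis=0)

def ifs_mutate(individual, scale=0.1):
    """Perturb a 2D individual with one offset drawn from the fractal attractor."""
    offset = ifs_points(n_points=1, burn_in=50)[0]
    return individual + scale * offset

print(ifs_mutate(np.array([2.0, -1.0])))
```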

  9. MAPCUMBA: A fast iterative multi-grid map-making algorithm for CMB experiments

    Science.gov (United States)

    Doré, O.; Teyssier, R.; Bouchet, F. R.; Vibert, D.; Prunet, S.

    2001-07-01

    The data analysis of current Cosmic Microwave Background (CMB) experiments like BOOMERanG or MAXIMA poses severe challenges which already stretch the limits of current (super-) computer capabilities, if brute force methods are used. In this paper we present a practical solution for the optimal map making problem which can be used directly for next generation CMB experiments like ARCHEOPS and TopHat, and can probably be extended relatively easily to the full PLANCK case. This solution is based on an iterative multi-grid Jacobi algorithm which is both fast and memory sparing. Indeed, if there are N_tod data points along the one-dimensional timeline to analyse, the number of operations is O(N_tod ln N_tod) and the memory requirement is O(N_tod). Timing and accuracy issues have been analysed on simulated ARCHEOPS and TopHat data, and we discuss as well the issue of the joint evaluation of the signal and noise statistical properties.

  10. Iterative algorithms for the input and state recovery from the approximate inverse of strictly proper multivariable systems

    Science.gov (United States)

    Chen, Liwen; Xu, Qiang

    2018-02-01

    This paper proposes new iterative algorithms for the unknown input and state recovery from the system outputs using an approximate inverse of the strictly proper linear time-invariant (LTI) multivariable system. One unique advantage over previous system inverse algorithms is that output differentiation is not required. The approximate system inverse is stable due to the systematic optimal design of a dummy feedthrough D matrix in the state-space model via the feedback stabilization. The optimal design procedure avoids trial and error to identify such a D matrix, which saves a tremendous amount of effort. From the derived and proved convergence criteria, such an optimal D matrix also guarantees the convergence of the algorithms. Illustrative examples show significant improvement of the reference input signal tracking by the algorithms and optimal D design over non-iterative counterparts on controllable or stabilizable LTI systems, respectively. Case studies of two Boeing-767 aircraft aerodynamic models further demonstrate the capability of the proposed methods.

  11. Inertial measurement unit–based iterative pose compensation algorithm for low-cost modular manipulator

    Directory of Open Access Journals (Sweden)

    Yunhan Lin

    2016-01-01

    Full Text Available End-effector pose correction and compensation are necessary means to realize accurate motion control of a manipulator. In this article, we first established the kinematic model and error model of the modular manipulator (WUST-ARM), and then we discussed the measurement methods and precision of the inertial measurement unit sensor. The inertial measurement unit sensor is mounted on the end-effector of the modular manipulator to get the real-time pose of the end-effector. Finally, a new inertial measurement unit–based iterative pose compensation algorithm is proposed. By applying this algorithm in a pose compensation experiment on a modular manipulator composed of low-cost rotation joints, the results show that the inertial measurement unit can obtain a higher precision when in a static state; it accurately feeds back an error compensation angle to the control system after a brief delay when the end-effector moves to the target point, and after compensation, the errors of the roll, pitch, and yaw angles reach 0.05°, 0.01°, and 0.27°, respectively. This proves that this low-cost method provides a new solution to improve the end-effector pose of a low-cost modular manipulator.

  12. Fast parallel algorithm for three-dimensional distance-driven model in iterative computed tomography reconstruction

    International Nuclear Information System (INIS)

    Chen Jian-Lin; Li Lei; Wang Lin-Yuan; Cai Ai-Long; Xi Xiao-Qi; Zhang Han-Ming; Li Jian-Xin; Yan Bin

    2015-01-01

    The projection matrix model is used to describe the physical relationship between reconstructed object and projection. Such a model has a strong influence on projection and backprojection, two vital operations in iterative computed tomographic reconstruction. The distance-driven model (DDM) is a state-of-the-art technology that simulates forward and back projections. This model has a low computational complexity and a relatively high spatial resolution; however, it includes only a few methods in a parallel operation with a matched model scheme. This study introduces a fast and parallelizable algorithm to improve the traditional DDM for computing the parallel projection and backprojection operations. Our proposed model has been implemented on a GPU (graphic processing unit) platform and has achieved satisfactory computational efficiency with no approximation. The runtime for the projection and backprojection operations with our model is approximately 4.5 s and 10.5 s per loop, respectively, with an image size of 256×256×256 and 360 projections with a size of 512×512. We compare several general algorithms that have been proposed for maximizing GPU efficiency by using the unmatched projection/backprojection models in a parallel computation. The imaging resolution is not sacrificed and remains accurate during computed tomographic reconstruction. (paper)

  13. Application of iterative phase-retrieval algorithms to ARPES orbital tomography

    Science.gov (United States)

    Kliuiev, P.; Latychevskaia, T.; Osterwalder, J.; Hengsberger, M.; Castiglioni, L.

    2016-09-01

    Electronic wave functions of planar molecules can be reconstructed via inverse Fourier transform of angle-resolved photoelectron spectroscopy (ARPES) data, provided the phase of the electron wave in the detector plane is known. Since the recorded intensity is proportional to the absolute square of the Fourier transform of the initial state wave function, information about the phase distribution is lost in the measurement. It was shown that the phase can be retrieved in some cases by iterative algorithms using a priori information about the object such as its size and symmetry. We suggest a more generalized and robust approach for the reconstruction of molecular orbitals based on state-of-the-art phase-retrieval algorithms currently used in coherent diffraction imaging (CDI). We draw an analogy between the phase problem in molecular orbital imaging by ARPES and of that in optical CDI by performing an optical analogue experiment on micrometer-sized structures. We successfully reconstruct amplitude and phase of both the micrometer-sized objects and a molecular orbital from the optical and photoelectron far-field intensity distributions, respectively, without any prior information about the shape of the objects.
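
    A minimal member of the iterative phase-retrieval family referred to above is the error-reduction scheme: alternate between enforcing the measured Fourier magnitude and a real-space support/positivity constraint. The sketch below is that generic scheme (practical CDI reconstructions usually add HIO steps and careful oversampling), not the authors' algorithm.

```python
import numpy as np

def error_reduction(measured_mag, support, n_iter=200, seed=0):
    """Gerchberg-Saxton-style error reduction: recover an object from |FFT| and a support mask."""
    rng = np.random.default_rng(seed)
    obj = rng.random(measured_mag.shape) * support        # random start inside the support
    for _ in range(n_iter):
        F = np.fft.fft2(obj)
        F = measured_mag * np.exp(1j * np.angle(F))        # impose the measured Fourier magnitude
        obj = np.real(np.fft.ifft2(F))
        obj = np.where(support & (obj > 0), obj, 0.0)      # impose support and positivity
    return obj

# Toy usage: a small blob inside a known, loose support.
true = np.zeros((32, 32)); true[12:20, 10:22] = 1.0
support = np.zeros((32, 32), dtype=bool); support[8:24, 6:26] = True
rec = error_reduction(np.abs(np.fft.fft2(true)), support)
print(float(np.abs(rec - true).mean()))
```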

  14. Finding the magnetic size distribution of magnetic nanoparticles from magnetization measurements via the iterative Kaczmarz algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Schmidt, Daniel, E-mail: frank.wiekhorst@ptb.de; Eberbeck, Dietmar; Steinhoff, Uwe; Wiekhorst, Frank

    2017-06-01

    The characterization of the size distribution of magnetic nanoparticles is an important step for the evaluation of their suitability for many different applications like magnetic hyperthermia, drug targeting or Magnetic Particle Imaging. We present a new method based on the iterative Kaczmarz algorithm that enables the reconstruction of the size distribution from magnetization measurements without a priori knowledge of the distribution form. We show in simulations that the method is capable of very exact reconstructions of a given size distribution and, in that, is highly robust to noise contamination. Moreover, we applied the method on the well characterized FeraSpin™ series and obtained results that were in accordance with literature and boundary conditions based on their synthesis via separation of the original suspension FeraSpin R. It is therefore concluded that this method is a powerful and intuitive tool for reconstructing particle size distributions from magnetization measurements. - Highlights: • A new method for the size distribution fit of magnetic nanoparticles is proposed. • The employed Kaczmarz algorithm does not need a priori input or eigenvalue regularization. • The method is highly robust to noise contamination. • Size distributions are reconstructed from simulated and measured magnetization curves.

  15. The impact of CT radiation dose reduction and iterative reconstruction algorithms from four different vendors on coronary calcium scoring

    NARCIS (Netherlands)

    Willemink, M.J.; Takx, R.A.P.; Jong, P.A. de; Budde, R.P.; Bleys, R.L.; Das, M.; Wildberger, J.E.; Prokop, M.; Buls, N.; Mey, J. de; Schilham, A.M.; Leiner, T.

    2014-01-01

    To analyse the effects of radiation dose reduction and iterative reconstruction (IR) algorithms on coronary calcium scoring (CCS). Fifteen ex vivo human hearts were examined in an anthropomorphic chest phantom using computed tomography (CT) systems from four vendors and examined at four dose levels

  16. A Study on GPU-based Iterative ML-EM Reconstruction Algorithm for Emission Computed Tomographic Imaging Systems

    International Nuclear Information System (INIS)

    Ha, Woo Seok; Kim, Soo Mee; Park, Min Jae; Lee, Dong Soo; Lee, Jae Sung

    2009-01-01

    The maximum likelihood-expectation maximization (ML-EM) is the statistical reconstruction algorithm derived from a probabilistic model of the emission and detection processes. Although the ML-EM has many advantages in accuracy and utility, its use is limited due to the computational burden of iterative processing on a CPU (central processing unit). In this study, we developed a parallel computing technique on a GPU (graphic processing unit) for the ML-EM algorithm. Using a GeForce 9800 GTX+ graphics card and CUDA (compute unified device architecture), the projection and backprojection in the ML-EM algorithm were parallelized with NVIDIA's technology. The time delays of the computations for projection, for the errors between measured and estimated data, and for backprojection in an iteration were measured. Total time included the latency in data transmission between RAM and GPU memory. The total computation times of the CPU- and GPU-based ML-EM with 32 iterations were 3.83 and 0.26 sec, respectively. In this case, the computing speed was improved about 15-fold on the GPU. When the number of iterations increased to 1024, the CPU- and GPU-based computing took 18 min and 8 sec in total, respectively. The improvement was about 135-fold and was caused by delays in CPU-based computing after a certain number of iterations. On the other hand, the GPU-based computation showed very small variation in time delay per iteration due to the use of shared memory. The GPU-based parallel computation for ML-EM significantly improved the computing speed and stability. The developed GPU-based ML-EM algorithm could easily be modified for other imaging geometries
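
    The ML-EM update that such GPU implementations parallelize is compact enough to state directly; the sketch below (plain NumPy with a small random system matrix, illustrative only) shows the forward projection, measured-to-estimated ratio, and backprojection steps of each iteration.

```python
import numpy as np

def mlem(A, y, n_iter=50, eps=1e-12):
    """ML-EM for emission tomography: y ~ Poisson(A @ lam), all entries nonnegative."""
    lam = np.ones(A.shape[1])                  # uniform initial image
    sens = A.sum(axis=0) + eps                 # sensitivity image (backprojection of ones)
    for _ in range(n_iter):
        proj = A @ lam + eps                   # forward projection of the current estimate
        ratio = y / proj                       # measured / estimated counts
        lam *= (A.T @ ratio) / sens            # backproject the ratio and normalize
    return lam

# Toy usage with a random nonnegative system matrix.
rng = np.random.default_rng(4)
A = rng.random((120, 30))
lam_true = rng.random(30) * 10
y = rng.poisson(A @ lam_true)
print(np.round(mlem(A, y)[:5], 2), np.round(lam_true[:5], 2))
```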

  17. Influence of adaptive statistical iterative reconstruction algorithm on image quality in coronary computed tomography angiography

    Directory of Open Access Journals (Sweden)

    Helle Precht

    2016-12-01

    Full Text Available Background Coronary computed tomography angiography (CCTA) requires high spatial and temporal resolution, increased low contrast resolution for the assessment of coronary artery stenosis, plaque detection, and/or non-coronary pathology. Therefore, new reconstruction algorithms, particularly iterative reconstruction (IR) techniques, have been developed in an attempt to improve image quality with no cost in radiation exposure. Purpose To evaluate whether adaptive statistical iterative reconstruction (ASIR) enhances perceived image quality in CCTA compared to filtered back projection (FBP). Material and Methods Thirty patients underwent CCTA due to suspected coronary artery disease. Images were reconstructed using FBP, 30% ASIR, and 60% ASIR. Ninety image sets were evaluated by five observers using the subjective visual grading analysis (VGA) and assessed by proportional odds modeling. Objective quality assessment (contrast, noise, and the contrast-to-noise ratio [CNR]) was analyzed with linear mixed effects modeling on log-transformed data. The need for ethical approval was waived by the local ethics committee as the study only involved anonymously collected clinical data. Results VGA showed significant improvements in sharpness by comparing FBP with ASIR, resulting in odds ratios of 1.54 for 30% ASIR and 1.89 for 60% ASIR (P = 0.004). The objective measures showed significant differences between FBP and 60% ASIR (P < 0.0001) for noise, with an estimated ratio of 0.82, and for CNR, with an estimated ratio of 1.26. Conclusion ASIR improved the subjective image quality of parameter sharpness and, objectively, reduced noise and increased CNR.

  18. An Iterative Learning Algorithm to Map Oil Palm Plantations from Synthetic Aperture Radar and Crowdsourcing

    Science.gov (United States)

    Pinto, N.; Zhang, Z.; Perger, C.; Aguilar-Amuchastegui, N.; Almeyda Zambrano, A. M.; Broadbent, E. N.; Simard, M.; Banerjee, S.

    2017-12-01

    The oil palm Elaeis spp. grows exclusively in the tropics and provides 30% of the world's vegetable oil. While oil palm-derived biodiesel can reduce carbon emissions from fossil fuels, plantation establishment may be associated with peat fires and deforestation. The ability to monitor plantation establishment and their expansion over carbon-rich tropical forests is critical for quantifying the net impact of oil palm commodities on carbon fluxes. Our objective is to develop a robust methodology to map oil palm plantations in tropical biomes, based on Synthetic Aperture Radar (SAR) from Sentinel-1, ALOS/PALSAR2, and UAVSAR. The C- and L-band signal from these instruments are sensitive to vegetation parameters such as canopy volume, trunk shape, and trunk spatial arrangement, that are critical to differentiate crops from forests and native palms. Based on Bayesian statistics, the learning algorithm employed here adapts to growing knowledge as sites and training points are added. We will present an iterative approach wherein a model is initially built at the site with the most training points - in our case, Costa Rica. Model posteriors from Costa Rica, depicting polarimetric signatures of oil palm plantations, are then used as priors in a classification exercise taking place in South Kalimantan. Results are evaluated by local researchers using the LACO Wiki interface. All validation points, including misclassified sites, are used in an additional iteration to improve model results to >90% overall accuracy. We report on the impact of plantation age on polarimetric signatures, and we also compare model performance with and without L-band data.

  19. Improved blood velocity measurements with a hybrid image filtering and iterative Radon transform algorithm.

    Science.gov (United States)

    Chhatbar, Pratik Y; Kara, Prakash

    2013-01-01

    Neural activity leads to hemodynamic changes which can be detected by functional magnetic resonance imaging (fMRI). The determination of blood flow changes in individual vessels is an important aspect of understanding these hemodynamic signals. Blood flow can be calculated from the measurements of vessel diameter and blood velocity. When using line-scan imaging, the movement of blood in the vessel leads to streaks in space-time images, where streak angle is a function of the blood velocity. A variety of methods have been proposed to determine blood velocity from such space-time image sequences. Of these, the Radon transform is relatively easy to implement and has fast data processing. However, the precision of the velocity measurements is dependent on the number of Radon transforms performed, which creates a trade-off between the processing speed and measurement precision. In addition, factors like image contrast, imaging depth, image acquisition speed, and movement artifacts especially in large mammals, can potentially lead to data acquisition that results in erroneous velocity measurements. Here we show that pre-processing the data with a Sobel filter and iterative application of Radon transforms address these issues and provide more accurate blood velocity measurements. Improved signal quality of the image as a result of Sobel filtering increases the accuracy and the iterative Radon transform offers both increased precision and an order of magnitude faster implementation of velocity measurements. This algorithm does not use a priori knowledge of angle information and therefore is sensitive to sudden changes in blood flow. It can be applied on any set of space-time images with red blood cell (RBC) streaks, commonly acquired through line-scan imaging or reconstructed from full-frame, time-lapse images of the vasculature.
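
    The angle search behind such velocity estimates can be sketched as follows: Sobel-filter the space-time image, then find the rotation whose column-sum (Radon-like) projection has maximal variance. The synthetic streak image and the use of scipy's rotate are illustrative assumptions, not the published implementation.

```python
import numpy as np
from scipy import ndimage

def streak_angle(image, angles=np.arange(0.0, 180.0, 1.0)):
    """Return the rotation (degrees) that best aligns the streaks with the vertical axis."""
    filtered = ndimage.sobel(image, axis=0)                 # emphasize streak edges
    variances = []
    for a in angles:
        rot = ndimage.rotate(filtered, a, reshape=False, order=1)
        variances.append(np.var(rot.sum(axis=0)))           # Radon-like projection onto one axis
    return angles[int(np.argmax(variances))]

# Toy usage: a synthetic space-time image with tilted streaks.
y, x = np.mgrid[0:128, 0:128]
img = np.sin(2 * np.pi * (x * np.cos(np.deg2rad(30)) + y * np.sin(np.deg2rad(30))) / 12.0)
print(streak_angle(img))
```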

  20. An Iterative Closest Points Algorithm for Registration of 3D Laser Scanner Point Clouds with Geometric Features.

    Science.gov (United States)

    He, Ying; Liang, Bin; Yang, Jun; Li, Shunzhi; He, Jin

    2017-08-11

    The Iterative Closest Points (ICP) algorithm is the mainstream algorithm used in the process of accurate registration of 3D point cloud data. The algorithm requires a proper initial value and the approximate registration of two point clouds to prevent the algorithm from falling into local extremes, but in the actual point cloud matching process, it is difficult to ensure compliance with this requirement. In this paper, we proposed the ICP algorithm based on point cloud features (GF-ICP). This method uses the geometrical features of the point cloud to be registered, such as curvature, surface normal and point cloud density, to search for the correspondence relationships between two point clouds and introduces the geometric features into the error function to realize the accurate registration of two point clouds. The experimental results showed that the algorithm can improve the convergence speed and the interval of convergence without setting a proper initial value.
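
    For reference, a bare point-to-point ICP loop (nearest-neighbour correspondences plus an SVD rigid fit) is sketched below; the GF-ICP method described above replaces the plain nearest-neighbour matching with feature-aware correspondences and an extended error function, which are not reproduced here.

```python
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(P, Q):
    """Least-squares rotation R and translation t mapping P onto Q (Kabsch/SVD)."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:              # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cq - R @ cp

def icp(source, target, n_iter=30):
    """Basic point-to-point ICP registering `source` onto `target`."""
    tree = cKDTree(target)
    src = source.copy()
    for _ in range(n_iter):
        _, idx = tree.query(src)          # nearest-neighbour correspondences
        R, t = best_rigid_transform(src, target[idx])
        src = src @ R.T + t
    return src

# Toy usage: register a slightly rotated and translated copy of a random cloud.
rng = np.random.default_rng(5)
target = rng.random((200, 3))
angle = np.deg2rad(10)
Rz = np.array([[np.cos(angle), -np.sin(angle), 0], [np.sin(angle), np.cos(angle), 0], [0, 0, 1]])
source = target @ Rz.T + np.array([0.05, -0.02, 0.03])
aligned = icp(source, target)
print(float(np.abs(aligned - target).max()))
```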

  1. Sparse Adaptive Iteratively-Weighted Thresholding Algorithm (SAITA) for Lp-Regularization Using the Multiple Sub-Dictionary Representation.

    Science.gov (United States)

    Li, Yunyi; Zhang, Jie; Fan, Shangang; Yang, Jie; Xiong, Jian; Cheng, Xiefeng; Sari, Hikmet; Adachi, Fumiyuki; Gui, Guan

    2017-12-15

    Both L1/2 and L2/3 are two typical non-convex regularizations of Lp (0 < p < 1), which can be employed to obtain a sparser solution than the L1 regularization. Recently, the multiple-state sparse transformation strategy has been developed to exploit the sparsity in L1 regularization for sparse signal recovery, which combines the iterative reweighted algorithms. To further exploit the sparse structure of signal and image, this paper adopts multiple dictionary sparse transform strategies for the two typical cases p∈{1/2, 2/3} based on an iterative Lp thresholding algorithm and then proposes a sparse adaptive iterative-weighted Lp thresholding algorithm (SAITA). Moreover, a simple yet effective regularization parameter is proposed to weight each sub-dictionary-based Lp regularizer. Simulation results have shown that the proposed SAITA not only performs better than the corresponding L1 algorithms but can also obtain a better recovery performance and achieve faster convergence than the conventional single-dictionary sparse transform-based Lp case. Moreover, we conduct some applications on sparse image recovery and obtain good results in comparison with related work.
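
    The iterative thresholding backbone that these Lp schemes build on is ISTA: a gradient step on the data term followed by an element-wise shrinkage. The sketch below uses the ordinary L1 soft threshold for clarity; the Lp thresholds for p in {1/2, 2/3} and the sub-dictionary weighting of SAITA would replace that one operator and are not reproduced here.

```python
import numpy as np

def soft_threshold(z, tau):
    """Element-wise L1 shrinkage operator."""
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

def ista(A, y, lam=0.05, n_iter=500):
    """Iterative shrinkage-thresholding for min 0.5*||A x - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the least-squares gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)           # gradient step on the data-fidelity term
        x = soft_threshold(x - grad / L, lam / L)
    return x

# Toy usage: recover a sparse vector from compressive measurements.
rng = np.random.default_rng(6)
A = rng.standard_normal((40, 100)) / np.sqrt(40)
x_true = np.zeros(100); x_true[[7, 33, 71]] = [1.0, -0.8, 0.6]
y = A @ x_true + 0.01 * rng.standard_normal(40)
x_hat = ista(A, y)
print(np.nonzero(np.round(x_hat, 1))[0])   # indices of the recovered nonzeros
```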

  2. Error-source effects on the performance of direct and iterative algorithms on an optical matrix-vector processor

    Science.gov (United States)

    Perlee, Caroline J.; Casasent, David P.

    1990-09-01

    Error sources in an optical matrix-vector processor are analyzed in terms of their effect on the performance of the algorithms used to solve a set of nonlinear and linear algebraic equations. A direct and an iterative algorithm are used to solve a nonlinear time-dependent case-study from computational fluid dynamics. A simulator which emulates the data flow and number representation of the OLAP is used to study these error effects. The ability of each algorithm to tolerate or correct the error sources is quantified. These results are extended to the general case of solving nonlinear and linear algebraic equations on the optical system.

  3. A compressed sensing-based iterative algorithm for CT reconstruction and its possible application to phase contrast imaging

    Directory of Open Access Journals (Sweden)

    Li Xueli

    2011-08-01

    Full Text Available Abstract Background Computed Tomography (CT) is a technology that obtains the tomogram of the observed objects. In real-world applications, especially biomedical applications, lower radiation doses have been constantly pursued. To shorten scanning time and reduce radiation dose, one can decrease X-ray exposure time at each projection view or decrease the number of projections. Until quite recently, the traditional filtered back projection (FBP) method has been commonly exploited in CT image reconstruction. Applying the FBP method requires using a large amount of projection data. Especially when the exposure speed is limited by the mechanical characteristics of the imaging facilities, using the FBP method may prolong scanning time and accumulate a high radiation dose, consequently damaging the biological specimens. Methods In this paper, we present a compressed sensing-based (CS-based) iterative algorithm for CT reconstruction. The algorithm minimizes the l1-norm of the sparse image as the constraint factor for the iteration procedure. With this method, we can reconstruct images from substantially reduced projection data and reduce the impact of artifacts introduced into the CT reconstructed image by insufficient projection information. Results To validate and evaluate the performance of this CS-based iterative algorithm, we carried out quantitative evaluation studies in imaging of both a software Shepp-Logan phantom and a real polystyrene sample. The former is completely absorption based and the latter is imaged in phase contrast. The results show that the CS-based iterative algorithm can yield images with quality comparable to that obtained with existing FBP and traditional algebraic reconstruction technique (ART) algorithms. Discussion Compared with the common reconstruction from 180 projection images, this algorithm completes CT reconstruction from only 60 projection images, cuts the scan time, and maintains the acceptable quality of the

  4. Uncertainty Footprint: Visualization of Nonuniform Behavior of Iterative Algorithms Applied to 4D Cell Tracking.

    Science.gov (United States)

    Wan, Y; Hansen, C

    2017-06-01

    Research on microscopy data from developing biological samples usually requires tracking individual cells over time. When cells are three-dimensionally and densely packed in a time-dependent scan of volumes, tracking results can become unreliable and uncertain. Not only are cell segmentation results often inaccurate to start with, but there is also no simple method to evaluate the tracking outcome. Previous cell tracking methods have been validated against benchmark data from real scans or artificial data, whose ground truth results are established by manual work or simulation. However, the wide variety of real-world data makes an exhaustive validation impossible. Established cell tracking tools often fail on new data, whose issues are also difficult to diagnose with only manual examinations. Therefore, data-independent tracking evaluation methods are desired for an explosion of microscopy data with increasing scale and resolution. In this paper, we propose the uncertainty footprint, an uncertainty quantification and visualization technique that examines nonuniformity at local convergence for an iterative evaluation process on a spatial domain supported by partially overlapping bases. We demonstrate that the patterns revealed by the uncertainty footprint indicate data processing quality in two algorithms from a typical cell tracking workflow - cell identification and association. A detailed analysis of the patterns further allows us to diagnose issues and design methods for improvements. A 4D cell tracking workflow equipped with the uncertainty footprint is capable of self-diagnosis and correction for a higher accuracy than previous methods whose evaluation is limited by manual examinations.

  5. Sparse Adaptive Iteratively-Weighted Thresholding Algorithm (SAITA for L p -Regularization Using the Multiple Sub-Dictionary Representation

    Directory of Open Access Journals (Sweden)

    Yunyi Li

    2017-12-01

    Full Text Available Both L 1 / 2 and L 2 / 3 are two typical non-convex regularizations of L p ( 0 < p < 1 ), which can be employed to obtain a sparser solution than the L 1 regularization. Recently, the multiple-state sparse transformation strategy has been developed to exploit the sparsity in L 1 regularization for sparse signal recovery, which combines the iterative reweighted algorithms. To further exploit the sparse structure of signal and image, this paper adopts multiple dictionary sparse transform strategies for the two typical cases p ∈ { 1 / 2 ,   2 / 3 } based on an iterative L p thresholding algorithm and then proposes a sparse adaptive iterative-weighted L p thresholding algorithm (SAITA). Moreover, a simple yet effective regularization parameter is proposed to weight each sub-dictionary-based L p regularizer. Simulation results have shown that the proposed SAITA not only performs better than the corresponding L 1 algorithms but can also obtain a better recovery performance and achieve faster convergence than the conventional single-dictionary sparse transform-based L p case. Moreover, we conduct some applications on sparse image recovery and obtain good results in comparison with related work.

  6. Modeling Design Iteration in Product Design and Development and Its Solution by a Novel Artificial Bee Colony Algorithm

    Science.gov (United States)

    2014-01-01

    Due to fierce market competition, the ability to improve product quality and reduce development cost determines the core competitiveness of enterprises. However, design iteration generally causes increases in product cost and delays in development time as well, so identifying and modeling couplings among tasks in product design and development has become an important issue for enterprises to settle. In this paper, the shortcomings of the WTM model are discussed, and the tearing approach as well as the inner iteration method are used to complement the classic WTM model. In addition, the ABC algorithm is introduced to find the optimal decoupling schemes. In this paper, firstly, the tearing approach and the inner iteration method are analyzed for solving coupled sets. Secondly, a hybrid iteration model combining these two technologies is set up. Thirdly, a high-performance swarm intelligence algorithm, artificial bee colony, is adopted to realize problem-solving. Finally, an engineering design of a chemical processing system is given in order to verify its reasonability and effectiveness. PMID:25431584

  7. A Novel Compressed Sensing Method for Magnetic Resonance Imaging: Exponential Wavelet Iterative Shrinkage-Thresholding Algorithm with Random Shift

    Directory of Open Access Journals (Sweden)

    Yudong Zhang

    2016-01-01

    Full Text Available Aim. It can help improve the hospital throughput to accelerate magnetic resonance imaging (MRI) scanning. Patients will benefit from less waiting time. Task. In the last decade, various rapid MRI techniques on the basis of compressed sensing (CS) were proposed. However, both computation time and reconstruction quality of traditional CS-MRI did not meet the requirement of clinical use. Method. In this study, a novel method was proposed with the name of exponential wavelet iterative shrinkage-thresholding algorithm with random shift (abbreviated as EWISTARS). It is composed of three successful components: (i) exponential wavelet transform, (ii) iterative shrinkage-thresholding algorithm, and (iii) random shift. Results. Experimental results validated that, compared to state-of-the-art approaches, EWISTARS obtained the least mean absolute error, the least mean-squared error, and the highest peak signal-to-noise ratio. Conclusion. EWISTARS is superior to state-of-the-art approaches.

  8. A Novel Compressed Sensing Method for Magnetic Resonance Imaging: Exponential Wavelet Iterative Shrinkage-Thresholding Algorithm with Random Shift

    Science.gov (United States)

    Zhang, Yudong; Yang, Jiquan; Yang, Jianfei; Liu, Aijun; Sun, Ping

    2016-01-01

    Aim. It can help improve the hospital throughput to accelerate magnetic resonance imaging (MRI) scanning. Patients will benefit from less waiting time. Task. In the last decade, various rapid MRI techniques on the basis of compressed sensing (CS) were proposed. However, both computation time and reconstruction quality of traditional CS-MRI did not meet the requirement of clinical use. Method. In this study, a novel method was proposed with the name of exponential wavelet iterative shrinkage-thresholding algorithm with random shift (abbreviated as EWISTARS). It is composed of three successful components: (i) exponential wavelet transform, (ii) iterative shrinkage-thresholding algorithm, and (iii) random shift. Results. Experimental results validated that, compared to state-of-the-art approaches, EWISTARS obtained the least mean absolute error, the least mean-squared error, and the highest peak signal-to-noise ratio. Conclusion. EWISTARS is superior to state-of-the-art approaches. PMID:27066068

  9. Iterating skeletons

    DEFF Research Database (Denmark)

    Dieterle, Mischa; Horstmeyer, Thomas; Berthold, Jost

    2012-01-01

    Skeleton-based programming is an area of increasing relevance with upcoming highly parallel hardware, since it substantially facilitates parallel programming and separates concerns. When parallel algorithms expressed by skeletons involve iterations – applying the same algorithm repeatedly … a particular skeleton ad-hoc for repeated execution turns out to be considerably complicated, and raises general questions about introducing state into a stateless parallel computation. In addition, one would strongly prefer an approach which leaves the original skeleton intact, and only uses it as a building block inside a bigger structure. In this work, we present a general framework for skeleton iteration and discuss requirements and variations of iteration control and iteration body. Skeleton iteration is expressed by synchronising a parallel iteration body skeleton with a (likewise parallel) state …

  10. Dynamic Analysis of the High Speed Train and Slab Track Nonlinear Coupling System with the Cross Iteration Algorithm

    OpenAIRE

    Xiaoyan Lei; Shenhua Wu; Bin Zhang

    2016-01-01

    A model for dynamic analysis of the vehicle-track nonlinear coupling system is established by the finite element method. The whole system is divided into two subsystems: the vehicle subsystem and the track subsystem. Coupling of the two subsystems is achieved by equilibrium conditions for wheel-to-rail nonlinear contact forces and geometrical compatibility conditions. To solve the nonlinear dynamics equations for the vehicle-track coupling system, a cross iteration algorithm and a relaxation ...

  11. The viscosity iterative algorithms for the implicit midpoint rule of nonexpansive mappings in uniformly smooth Banach spaces.

    Science.gov (United States)

    Luo, Ping; Cai, Gang; Shehu, Yekini

    2017-01-01

    The aim of this paper is to introduce a viscosity iterative algorithm for the implicit midpoint rule of nonexpansive mappings in uniformly smooth spaces. Under some appropriate conditions on the parameters, we prove some strong convergence theorems. As applications, we apply our main results to solving fixed point problems of strict pseudocontractive mappings, variational inequality problems in Banach spaces and equilibrium problems in Hilbert spaces. Finally, we give some numerical examples for supporting our main results.

  12. Circuit model of the ITER-like antenna for JET and simulation of its control algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Durodié, Frédéric, E-mail: frederic.durodie@rma.ac.be; Křivská, Alena [LPP-ERM/KMS, TEC Partner, Brussels (Belgium); Dumortier, Pierre; Lerche, Ernesto [LPP-ERM/KMS, TEC Partner, Brussels (Belgium); JET, Culham Science Centre, Abingdon, OX14 3DB (United Kingdom); Helou, Walid [CEA, IRFM, F-13108 St-Paul-Lez-Durance (France); Collaboration: EUROfusion Consortium

    2015-12-10

    The ITER-like Antenna (ILA) for JET [1] is a 2 toroidal by 2 poloidal array of Resonant Double Loops (RDL) featuring in-vessel matching capacitors feeding RF current straps in a conjugate-T manner, a low impedance quarter-wave impedance transformer, a service stub allowing hydraulic actuator and water cooling services to reach the aforementioned capacitors and a 2nd stage phase-shifter-stub matching circuit allowing to correct/choose the conjugate-T working impedance. Toroidally adjacent RDLs are fed from a 3dB hybrid splitter. It has been operated at 33, 42 and 47 MHz on plasma (2008-2009), while its presently estimated frequency range is from 29 to 49 MHz. At the time of the design (2001-2004) as well as the experiments, the circuit models of the ILA were quite basic. The ILA front face and strap array Topica model was relatively crude and failed to correctly represent the poloidal central septum, Faraday Screen attachment as well as the segmented antenna central septum limiter. The ILA matching capacitors, T-junction, Vacuum Transmission Line (VTL) and Service Stubs were represented by lumped circuit elements and simple transmission line models. The assessment of the ILA results carried out to decide on the repair of the ILA identified that achieving routine full array operation requires a better understanding of the RF circuit, a feedback control algorithm for the 2nd stage matching as well as tighter calibrations of RF measurements. The paper presents the progress in modelling of the ILA comprising a more detailed Topica model of the front face for various plasma Scrape Off Layer profiles, a comprehensive HFSS model of the matching capacitors including internal bellows and electrode cylinders, 3D-EM models of the VTL including vacuum ceramic window, Service stub, a transmission line model of the 2nd stage matching circuit and main transmission lines including the 3dB hybrid splitters. A time evolving simulation using the improved circuit model allowed to design and

  13. Circuit model of the ITER-like antenna for JET and simulation of its control algorithms

    Science.gov (United States)

    Durodié, Frédéric; Dumortier, Pierre; Helou, Walid; Křivská, Alena; Lerche, Ernesto

    2015-12-01

    The ITER-like Antenna (ILA) for JET [1] is a 2 toroidal by 2 poloidal array of Resonant Double Loops (RDL) featuring in-vessel matching capacitors feeding RF current straps in a conjugate-T manner, a low impedance quarter-wave impedance transformer, a service stub allowing hydraulic actuator and water cooling services to reach the aforementioned capacitors and a 2nd stage phase-shifter-stub matching circuit allowing to correct/choose the conjugate-T working impedance. Toroidally adjacent RDLs are fed from a 3dB hybrid splitter. It has been operated at 33, 42 and 47 MHz on plasma (2008-2009), while its presently estimated frequency range is from 29 to 49 MHz. At the time of the design (2001-2004) as well as the experiments, the circuit models of the ILA were quite basic. The ILA front face and strap array Topica model was relatively crude and failed to correctly represent the poloidal central septum, Faraday Screen attachment as well as the segmented antenna central septum limiter. The ILA matching capacitors, T-junction, Vacuum Transmission Line (VTL) and Service Stubs were represented by lumped circuit elements and simple transmission line models. The assessment of the ILA results carried out to decide on the repair of the ILA identified that achieving routine full array operation requires a better understanding of the RF circuit, a feedback control algorithm for the 2nd stage matching as well as tighter calibrations of RF measurements. The paper presents the progress in modelling of the ILA comprising a more detailed Topica model of the front face for various plasma Scrape Off Layer profiles, a comprehensive HFSS model of the matching capacitors including internal bellows and electrode cylinders, 3D-EM models of the VTL including vacuum ceramic window, Service stub, a transmission line model of the 2nd stage matching circuit and main transmission lines including the 3dB hybrid splitters. A time evolving simulation using the improved circuit model allowed to design and

  14. Enhancing Video Games Policy Based on Least-Squares Continuous Action Policy Iteration: Case Study on StarCraft Brood War and Glest RTS Games and the 8 Queens Board Game

    Directory of Open Access Journals (Sweden)

    Shahenda Sarhan

    2016-01-01

    Full Text Available With the rapid advent of video games recently and the increasing numbers of players and gamers, only a tough game with high policy, actions, and tactics survives. How the game responds to opponent actions is the key issue of popular games. Many algorithms were proposed to solve this problem such as Least-Squares Policy Iteration (LSPI) and State-Action-Reward-State-Action (SARSA), but they mainly depend on discrete actions, while agents in such a setting have to learn from the consequences of their continuous actions, in order to maximize the total reward over time. So in this paper we proposed a new algorithm based on LSPI called Least-Squares Continuous Action Policy Iteration (LSCAPI). The LSCAPI was implemented and tested on three different games: one board game, the 8 Queens, and two real-time strategy (RTS) games, StarCraft Brood War and Glest. The LSCAPI evaluation proved superiority over LSPI in time, policy learning ability, and effectiveness.
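
    Since LSCAPI extends LSPI, the LSTD-Q weight solve at the heart of each policy-iteration step is worth seeing in miniature; the sketch below is a generic discrete-action LSPI on a toy chain MDP (the MDP, features and constants are illustrative assumptions; the continuous-action extension of the paper is not reproduced).

```python
import numpy as np

N_STATES, ACTIONS, GAMMA = 5, [0, 1], 0.9   # toy chain MDP: action 1 moves right, 0 moves left

def step(s, a):
    s2 = min(s + 1, N_STATES - 1) if a == 1 else max(s - 1, 0)
    return s2, 1.0 if s2 == N_STATES - 1 else 0.0   # reward only in the rightmost state

def phi(s, a):
    """One-hot state-action features."""
    f = np.zeros(N_STATES * len(ACTIONS))
    f[s * len(ACTIONS) + a] = 1.0
    return f

def lspi(samples, n_policy_iter=10):
    w = np.zeros(N_STATES * len(ACTIONS))
    for _ in range(n_policy_iter):                   # outer policy-iteration loop
        A_mat = np.zeros((w.size, w.size))
        b = np.zeros(w.size)
        for (s, a, r, s2) in samples:
            a2 = max(ACTIONS, key=lambda act: w @ phi(s2, act))   # greedy action of current policy
            f = phi(s, a)
            A_mat += np.outer(f, f - GAMMA * phi(s2, a2))         # LSTD-Q accumulation
            b += r * f
        w = np.linalg.solve(A_mat + 1e-6 * np.eye(w.size), b)     # least-squares weight solve
    return w

rng = np.random.default_rng(7)
samples = []
for _ in range(2000):                                 # random exploration samples
    s, a = rng.integers(N_STATES), rng.integers(2)
    s2, r = step(s, a)
    samples.append((s, a, r, s2))
w = lspi(samples)
print([max(ACTIONS, key=lambda a: w @ phi(s, a)) for s in range(N_STATES)])   # greedy action per state
```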

  15. Clinical evaluation of the iterative metal artifact reduction algorithm for CT simulation in radiotherapy

    Energy Technology Data Exchange (ETDEWEB)

    Axente, Marian; Von Eyben, Rie; Hristov, Dimitre, E-mail: dimitre.hristov@stanford.edu [Radiation Oncology, Stanford Hospital and Clinics, 875 Blake Wilbur Drive, Stanford, California 94305-5847 (United States); Paidi, Ajay; Bani-Hashemi, Ali [Computed Tomography and Radiation Oncology Department, Siemens Medical Solutions USA, 757A Arnold Drive, Martinez, California 94553 (United States); Zeng, Chuan [Radiation Oncology, University of Pennsylvania, 3400 Civic Center Boulevard, Philadelphia, Pennsylvania 19104 (United States); Krauss, Andreas [Imaging and Therapy Division, Siemens AG, Healthcare Sector, Siemensstr. 1, Forcheim 91301 (Germany)

    2015-03-15

    Purpose: To clinically evaluate an iterative metal artifact reduction (IMAR) algorithm prototype in the radiation oncology clinic setting by testing for accuracy in CT number retrieval, relative dosimetric changes in regions affected by artifacts, and improvements in anatomical and shape conspicuity of corrected images. Methods: A phantom with known material inserts was scanned in the presence/absence of metal with different configurations of placement and sizes. The relative change in CT numbers from the reference data (CT with no metal) was analyzed. The CT studies were also used for dosimetric tests where dose distributions from both photon and proton beams were calculated. Dose differences and gamma analysis were calculated to quantify the relative changes between doses calculated on the different CT studies. Data from eight patients (all different treatment sites) were also used to quantify the differences between dose distributions before and after correction with IMAR, with no reference standard. A ranking experiment was also conducted to analyze the relative confidence of physicians delineating anatomy in the near vicinity of the metal implants. Results: IMAR corrected images proved to accurately retrieve CT numbers in the phantom study, independent of metal insert configuration, size of the metal, and acquisition energy. For plastic water, the mean difference between corrected images and reference images was −1.3 HU across all scenarios (N = 37) with a 90% confidence interval of [−2.4, −0.2] HU. While deviations were relatively higher in images with more metal content, IMAR was able to effectively correct the CT numbers independent of the quantity of metal. Residual errors in the CT numbers as well as some induced by the correction algorithm were found in the IMAR corrected images. However, the dose distributions calculated on IMAR corrected images were closer to the reference data in phantom studies. Relative spatial difference in the dose

  16. Influence of radiation dose and iterative reconstruction algorithms for measurement accuracy and reproducibility of pulmonary nodule volumetry: A phantom study

    International Nuclear Information System (INIS)

    Kim, Hyungjin; Park, Chang Min; Song, Yong Sub; Lee, Sang Min; Goo, Jin Mo

    2014-01-01

    Purpose: To evaluate the influence of radiation dose settings and reconstruction algorithms on the measurement accuracy and reproducibility of semi-automated pulmonary nodule volumetry. Materials and methods: CT scans were performed on a chest phantom containing various nodules (10 and 12 mm; +100, −630 and −800 HU) at 120 kVp with tube current–time settings of 10, 20, 50, and 100 mAs. Each CT was reconstructed using filtered back projection (FBP), iDose 4 and iterative model reconstruction (IMR). Semi-automated volumetry was performed by two radiologists using commercial volumetry software for nodules at each CT dataset. Noise, contrast-to-noise ratio and signal-to-noise ratio of CT images were also obtained. The absolute percentage measurement errors and differences were then calculated for volume and mass. The influence of radiation dose and reconstruction algorithm on measurement accuracy, reproducibility and objective image quality metrics was analyzed using generalized estimating equations. Results: Measurement accuracy and reproducibility of nodule volume and mass were not significantly associated with CT radiation dose settings or reconstruction algorithms (p > 0.05). Objective image quality metrics of CT images were superior in IMR than in FBP or iDose 4 at all radiation dose settings (p < 0.05). Conclusion: Semi-automated nodule volumetry can be applied to low- or ultralow-dose chest CT with usage of a novel iterative reconstruction algorithm without losing measurement accuracy and reproducibility

  17. Intelligent Control of a Sensor-Actuator System via Kernelized Least-Squares Policy Iteration

    Directory of Open Access Journals (Sweden)

    Bo Liu

    2012-02-01

    Full Text Available In this paper a new framework, called Compressive Kernelized Reinforcement Learning (CKRL), for computing near-optimal policies in sequential decision making under uncertainty is proposed, which incorporates non-adaptive, data-independent random projections and nonparametric Kernelized Least-Squares Policy Iteration (KLSPI). Random projections are a fast, non-adaptive dimensionality reduction framework in which high-dimensional data are projected onto a random lower-dimensional subspace via spherically random rotation and coordinate sampling. KLSPI introduces the kernel trick into the LSPI framework for reinforcement learning, often achieving faster convergence and providing automatic feature selection via various kernel sparsification approaches. In this approach, policies are computed in a low-dimensional subspace generated by projecting the high-dimensional features onto a set of random basis vectors. We first show how random projections constitute an efficient sparsification technique and how our method often converges faster than regular LSPI, at lower computational cost. The theoretical foundation underlying this approach is a fast approximation of the Singular Value Decomposition (SVD). Finally, simulation results on benchmark MDP domains confirm gains both in computation time and in performance in large feature spaces.
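
    As a concrete illustration of the dimensionality-reduction step described above, the following is a minimal sketch of a Gaussian random projection that compresses a high-dimensional feature matrix before it is handed to a kernelized policy-iteration stage. The dimensions, scaling and seed are illustrative assumptions, not values from the paper.

```python
import numpy as np

def random_projection(X, d, seed=0):
    """Project the rows of X (n samples x D features) onto a random
    d-dimensional subspace; the 1/sqrt(d) scaling approximately preserves
    pairwise distances (Johnson-Lindenstrauss)."""
    rng = np.random.default_rng(seed)
    R = rng.standard_normal((X.shape[1], d)) / np.sqrt(d)
    return X @ R

# Example: compress 10,000-dimensional features to 50 dimensions before
# handing them to a (kernelized) least-squares policy-iteration stage.
X = np.random.default_rng(1).standard_normal((200, 10_000))
X_low = random_projection(X, d=50)
print(X_low.shape)  # (200, 50)
```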

  18. An iterative algorithm for computing aeroacoustic integrals with application to the analysis of free shear flow noise.

    Science.gov (United States)

    Margnat, Florent; Fortuné, Véronique

    2010-10-01

    An iterative algorithm is developed for the computation of aeroacoustic integrals in the time domain. It is specially designed for the generation of acoustic images, thus giving access to the wavefront pattern radiated by an unsteady flow when large size source fields are considered. It is based on an iterative selection of source-observer pairs involved in the radiation process at a given time-step. It is written as an advanced-time approach, allowing easy connection with flow simulation tools. Its efficiency is related to the fraction of an observer grid step that a sound-wave covers during one time step. Test computations were performed, showing the CPU-time to be 30 to 50 times smaller than with a classical non-iterative procedure. The algorithm is applied to compute the sound radiated by a spatially evolving mixing-layer flow: it is used to compute and visualize contributions to the acoustic field from the different terms obtained by a decomposition of the Lighthill source term.

  19. Active control of repetitive impulsive noise in a non-minimum phase system using an optimal iterative learning control algorithm

    Science.gov (United States)

    Zhou, Y. L.; Yin, Y. X.; Zhang, Q. Z.

    2013-09-01

    In this paper, active control of repetitive impulsive noise is studied. An optimal iterative learning control (ILC) algorithm is developed for an active noise control (ANC) system with a non-minimum phase secondary path. A non-causal transversal finite impulse response (FIR) filter is used as the ILC learning filter, and the impulse response coefficients of the FIR filter are designed according to an asymptotic stability and monotonic convergence criterion in the time domain. Computer simulations suggest that the proposed algorithm is effective for attenuating repetitive impulsive noise, and the proposed algorithm has then been implemented in an experimental ANC system. Experimental results show that the proposed scheme performs well for repetitive impulsive noise attenuation in a non-minimum phase ANC system.
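
    The repetition-to-repetition update that a learning filter of this kind performs can be written as u_{k+1}(n) = u_k(n) + sum_j L(j) e_k(n + j), where positive j makes the filter non-causal within a repetition. The sketch below is a generic illustration of that update; the filter taps, their design and the variable names are placeholders, not the coefficients designed in the paper.

```python
import numpy as np

def ilc_update(u_k, e_k, L_coeffs, center):
    """One iterative-learning-control update between repetitions.
    u_k, e_k : control input and error recorded over one repetition (length N).
    L_coeffs : taps of the FIR learning filter; `center` marks the zero-lag tap,
               so taps beyond it act non-causally (they look ahead in e_k)."""
    N = len(u_k)
    u_next = np.array(u_k, dtype=float, copy=True)
    for n in range(N):
        acc = 0.0
        for j, l in enumerate(L_coeffs):
            idx = n + (j - center)      # offset < 0: past error, > 0: future error
            if 0 <= idx < N:
                acc += l * e_k[idx]
        u_next[n] += acc
    return u_next

# Illustrative use: a short symmetric (non-causal) learning filter.
u = np.zeros(1000)
e = np.random.default_rng(0).standard_normal(1000)
u = ilc_update(u, e, L_coeffs=[0.1, 0.3, 0.1], center=1)
```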

  20. Policy-aware algorithms for proxy placement in the Internet

    Science.gov (United States)

    Kamath, Krishnanand M.; Bassali, Harpal S.; Hosamani, Rajendraprasad B.; Gao, Lixin

    2001-07-01

    The Internet has grown explosively over the past few years and has matured into an important commercial infrastructure. The explosive growth of traffic has contributed to the degradation of user-perceived response times in today's Internet. Caching at proxy servers has emerged as an effective way of reducing the overall latency. The effectiveness of a proxy server is primarily determined by its locality. This locality is affected by factors such as the Internet topology and routing policies. In this paper, we present heuristic algorithms for placing proxies in the Internet by considering both Internet topology and routing policies. In particular, we make use of the logical topology inferred from Autonomous System (AS) relationships to derive the path between a proxy and a client. We present heuristic algorithms for placing proxies and evaluate these algorithms on the Internet logical topology over three years. To the best of our knowledge, this is the first work on placing proxy servers in the Internet that considers logical topology.
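
    The abstract does not spell out the heuristics, but a common baseline for this kind of placement problem is a greedy selection that repeatedly adds the candidate site giving the largest reduction in the average client-to-nearest-proxy path length. The sketch below illustrates that generic greedy heuristic only; the distance matrix stands in for AS-level path lengths derived from relationship-inferred routing, and all names are illustrative rather than the paper's algorithms.

```python
import numpy as np

def greedy_proxy_placement(dist, candidate_sites, k):
    """Greedily choose k proxy sites so that the average distance from each
    client (a row of `dist`) to its nearest chosen proxy is minimized.
    dist : (n_clients x n_sites) matrix of path lengths."""
    chosen = []
    nearest = np.full(dist.shape[0], np.inf)     # best distance seen per client
    for _ in range(k):
        best_site, best_cost = None, np.inf
        for c in candidate_sites:
            if c in chosen:
                continue
            cost = np.minimum(nearest, dist[:, c]).mean()
            if cost < best_cost:
                best_site, best_cost = c, cost
        chosen.append(best_site)
        nearest = np.minimum(nearest, dist[:, best_site])
    return chosen

# Example with a random 100-client x 20-site distance matrix.
D = np.random.default_rng(0).integers(1, 10, size=(100, 20)).astype(float)
print(greedy_proxy_placement(D, candidate_sites=range(20), k=3))
```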

  1. Strategy iteration is strongly polynomial for 2-player turn-based stochastic games with a constant discount factor

    DEFF Research Database (Denmark)

    Hansen, Thomas Dueholm; Miltersen, Peter Bro; Zwick, Uri

    2011-01-01

    iterations. Second, and more importantly, we show that the same bound applies to the number of iterations performed by the strategy iteration (or strategy improvement) algorithm, a generalization of Howard's policy iteration algorithm used for solving 2-player turn-based stochastic games with discounted zero-sum rewards. This provides the first strongly polynomial algorithm for solving these games, resolving a long-standing open problem.
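
    For reference, the single-player special case of strategy iteration is Howard's policy iteration for a discounted MDP: evaluate the current policy exactly, switch every state to a greedy action, and stop when no switch improves the value. The sketch below is a minimal generic version of that loop (the 2-player game variant alternates analogous switch steps for the two players and is not reproduced here); the array shapes and names are illustrative.

```python
import numpy as np

def policy_iteration(P, R, gamma=0.9):
    """Howard's policy iteration for a finite discounted MDP.
    P[a, s, t] = probability of moving from s to t under action a;
    R[s, a]    = expected one-step reward."""
    A, S, _ = P.shape
    policy = np.zeros(S, dtype=int)
    while True:
        # Policy evaluation: solve (I - gamma * P_pi) v = r_pi exactly.
        P_pi = P[policy, np.arange(S), :]          # (S, S), row s follows policy[s]
        r_pi = R[np.arange(S), policy]             # (S,)
        v = np.linalg.solve(np.eye(S) - gamma * P_pi, r_pi)
        # Policy improvement: switch every state to a greedy action.
        Q = R + gamma * np.einsum('ast,t->as', P, v).T   # (S, A) action values
        new_policy = Q.argmax(axis=1)
        if np.array_equal(new_policy, policy):     # no profitable switch remains
            return policy, v
        policy = new_policy
```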

  2. Convergence properties of iterative algorithms for solving the nodal diffusion equations

    International Nuclear Information System (INIS)

    Azmy, Y.Y.; Kirk, B.L.

    1990-01-01

    We derive the five-point form of the nodal diffusion equations in two-dimensional Cartesian geometry and develop three iterative schemes to solve the discrete-variable equations: the unaccelerated, partial Successive Over-Relaxation (SOR), and full SOR methods. By decomposing the iteration error into its Fourier modes, we determine the spectral radius of each method for infinite-medium, uniform model problems, and for the unaccelerated and partial SOR methods for finite-medium, uniform model problems. Also, for the two variants of the SOR method we determine the optimal relaxation factor that results in the smallest number of iterations required for convergence. Our results indicate that the number of iterations for the unaccelerated and partial SOR methods is second order in the number of nodes per dimension, while for the full SOR this behavior is first order, resulting in much faster convergence for very large problems. We successfully verify the results of the spectral analysis against those of numerical experiments, and we show that for the full SOR method the linear dependence of the number of iterations on the number of nodes per dimension is relatively insensitive to the value of the relaxation parameter, and that it remains linear even for heterogeneous problems. 14 refs., 1 fig
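
    To make the iteration structure concrete, the sketch below shows a generic SOR sweep for a five-point stencil on a uniform 2-D grid, written here for a Poisson-like model problem rather than the paper's specific nodal diffusion equations; the relaxation factor omega plays the role of the parameter whose optimum the paper derives, and omega = 1 recovers Gauss-Seidel.

```python
import numpy as np

def sor_sweep(phi, source, h, omega):
    """One SOR sweep for the five-point stencil of -laplacian(phi) = source
    on a uniform grid of spacing h; boundary values of phi are held fixed.
    omega = 1 recovers Gauss-Seidel; 1 < omega < 2 over-relaxes."""
    ny, nx = phi.shape
    for j in range(1, ny - 1):
        for i in range(1, nx - 1):
            gs = 0.25 * (phi[j, i - 1] + phi[j, i + 1] +
                         phi[j - 1, i] + phi[j + 1, i] + h * h * source[j, i])
            phi[j, i] += omega * (gs - phi[j, i])
    return phi

def solve_sor(source, h, omega=1.8, tol=1e-8, max_sweeps=10_000):
    """Iterate SOR sweeps until successive iterates differ by less than tol."""
    phi = np.zeros(source.shape)
    for sweep in range(max_sweeps):
        old = phi.copy()
        phi = sor_sweep(phi, source, h, omega)
        if np.max(np.abs(phi - old)) < tol:
            break
    return phi, sweep + 1
```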

  3. A Method for Speeding Up Value Iteration in Partially Observable Markov Decision Processes

    OpenAIRE

    Zhang, Nevin Lianwen; Lee, Stephen S.; Zhang, Weihong

    2013-01-01

    We present a technique for speeding up the convergence of value iteration for partially observable Markov decision processes (POMDPs). The underlying idea is similar to that behind modified policy iteration for fully observable Markov decision processes (MDPs). The technique can be easily incorporated into any existing POMDP value iteration algorithm. Experiments have been conducted on several test problems with one POMDP value iteration algorithm called incremental pruning. We find that th...

  4. on the convergence of a new iterative algorithm of three infinite ...

    Indian Academy of Sciences (India)

    20

    feasibility problems for an infinite family of quasi-φ-asymptotically nonexpansive mappings and obtained some strong convergence theorems under suitable conditions in Banach space. In the same year, Bunyawat and Suantai [5] introduced an iterative method for finding a common fixed point of a countable family of ...

  5. Iterative algorithm for solving mixed quasi-variational-like inequalities with skew-symmetric terms in Banach spaces

    Directory of Open Access Journals (Sweden)

    Ansari Qamrul Hasan

    2006-01-01

    Full Text Available We develop an iterative algorithm for computing the approximate solutions of mixed quasi-variational-like inequality problems with skew-symmetric terms in the setting of reflexive Banach spaces. We use Fan-KKM lemma and concept of -cocoercivity of a composition mapping to prove the existence and convergence of approximate solutions to the exact solution of mixed quasi-variational-like inequalities with skew-symmetric terms. Furthermore, we derive the posteriori error estimates for approximate solutions under quite mild conditions.

  6. Finding the Optimal Parameters for Robotic Manipulator Applications of the Bounded Error Algorithm for Iterative Learning Control

    Directory of Open Access Journals (Sweden)

    Yovchev Kaloyan

    2017-12-01

    Full Text Available This paper continues previous research on the Bounded Error Algorithm (BEA) for Iterative Learning Control (ILC) and its application to the control of robotic manipulators. It focuses on investigating the influence of the parameters of the BEA on the convergence rate of the ILC process. This is performed first through a computer simulation, which suggests optimal values for the parameters. Afterwards, the estimated results are validated on a physical robotic manipulator arm. This is also one of the first reports of applying the BEA to robot control.

  7. Encryption and display of multiple-image information using computer-generated holography with modified GS iterative algorithm

    Science.gov (United States)

    Xiao, Dan; Li, Xiaowei; Liu, Su-Juan; Wang, Qiong-Hua

    2018-03-01

    In this paper, a new scheme for multiple-image encryption and display based on computer-generated holography (CGH) and maximum length cellular automata (MLCA) is presented. In this scheme, the computer-generated hologram, which carries the information of the three primitive images, is first generated by a modified Gerchberg-Saxton (GS) iterative algorithm using three different fractional orders in the fractional Fourier domain. The hologram is then encrypted using an MLCA mask. The ciphertext can be decrypted given the fractional orders and the rules of the MLCA. Numerical simulations and experimental display results verify the validity and feasibility of the proposed scheme.
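
    For orientation, the classical Gerchberg-Saxton loop that the modified algorithm builds on alternates between the hologram plane and the reconstruction plane, enforcing the phase-only constraint in one and the target amplitude in the other. The sketch below is that standard loop with an ordinary Fourier transform; the paper's modifications (three fractional orders in the fractional Fourier domain, plus MLCA encryption) are not reproduced, and all names are illustrative.

```python
import numpy as np

def gerchberg_saxton(target_amplitude, n_iter=100, seed=0):
    """Classical GS loop: find a phase-only hologram whose far field (here an
    ordinary 2-D FFT) reproduces the target amplitude."""
    rng = np.random.default_rng(seed)
    field = np.exp(1j * rng.uniform(0, 2 * np.pi, target_amplitude.shape))
    for _ in range(n_iter):
        far = np.fft.fft2(field)
        far = target_amplitude * np.exp(1j * np.angle(far))  # impose target amplitude
        field = np.fft.ifft2(far)
        field = np.exp(1j * np.angle(field))                 # keep phase only (hologram constraint)
    return np.angle(field)                                   # hologram phase pattern

# Example: a 64x64 target made of a bright square on a dark background.
target = np.zeros((64, 64)); target[24:40, 24:40] = 1.0
hologram_phase = gerchberg_saxton(target)
```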

  8. Characterization of adaptive statistical iterative reconstruction algorithm for dose reduction in CT: A pediatric oncology perspective

    Energy Technology Data Exchange (ETDEWEB)

    Brady, S. L.; Yee, B. S.; Kaufman, R. A. [Department of Radiological Sciences, St. Jude Children' s Research Hospital, Memphis, Tennessee 38105 (United States)

    2012-09-15

    Purpose: This study demonstrates a means of implementing an adaptive statistical iterative reconstruction (ASiR™) technique for dose reduction in computed tomography (CT) while maintaining similar noise levels in the reconstructed image. The effects of image quality and noise texture were assessed at all implementation levels of ASiR™. Empirically derived dose reduction limits were established for ASiR™ for imaging of the trunk for a pediatric oncology population ranging from 1 yr old through adolescence/adulthood. Methods: Image quality was assessed using metrics established by the American College of Radiology (ACR) CT accreditation program. Each image quality metric was tested using the ACR CT phantom with 0%-100% ASiR™ blended with filtered back projection (FBP) reconstructed images. Additionally, the noise power spectrum (NPS) was calculated for three common reconstruction filters of the trunk. The empirically derived limitations on ASiR™ implementation for dose reduction were assessed using (1, 5, 10) yr old and adolescent/adult anthropomorphic phantoms. To assess dose reduction limits, the phantoms were scanned in increments of increased noise index (decrementing mA using automatic tube current modulation) balanced with ASiR™ reconstruction to maintain noise equivalence of the 0% ASiR™ image. Results: The ASiR™ algorithm did not produce any unfavorable effects on image quality as assessed by ACR criteria. Conversely, low-contrast resolution was found to improve due to the reduction of noise in the reconstructed images. NPS calculations demonstrated that images with lower frequency noise had lower noise variance and coarser graininess at progressively higher percentages of ASiR™ reconstruction; and in spite of the similar magnitudes of noise, the image reconstructed with 50% or more ASiR™ presented a more

  9. Automatic Frequency Identification under Sample Loss in Sinusoidal Pulse Width Modulation Signals Using an Iterative Autocorrelation Algorithm

    Directory of Open Access Journals (Sweden)

    Alejandro Said

    2016-08-01

    Full Text Available In this work, we present a simple algorithm to automatically calculate the Fourier spectrum of a Sinusoidal Pulse Width Modulation (SPWM) signal. Modulated voltage signals of this kind are used in industry by speed drives to vary the speed of alternating current motors while maintaining a smooth torque. Nevertheless, the SPWM technique produces undesired harmonics, which yield stator heating and power losses. By monitoring these signals without human interaction, it is possible to identify the harmonic content of SPWM signals in a fast and continuous manner. The algorithm is based on the autocorrelation function, commonly used in radar and voice signal processing. Taking advantage of the symmetry properties of the autocorrelation, the algorithm is capable of estimating half of the period of the fundamental frequency, thus allowing one to estimate the number of samples necessary to produce an accurate Fourier spectrum. To deal with the loss of samples, i.e., the scan backlog, the algorithm iteratively acquires and trims the discrete sequence of samples until the required number of samples reaches a stable value. The simulation shows that the algorithm is not affected by either the magnitude of the switching pulses or the acquisition noise.
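
    The core idea above, reading the fundamental period off the autocorrelation and using it to size the analysis window, can be illustrated with a few lines of generic code. The sketch below estimates the fundamental period from the first full-period peak of the autocorrelation; the iterative trimming of the sample buffer described in the abstract is not reproduced, and the test signal and sampling rate are illustrative assumptions.

```python
import numpy as np

def fundamental_period(x, fs):
    """Estimate the fundamental period of x (sampled at fs) from the first
    full-period maximum of its autocorrelation after the zero-lag lobe."""
    x = x - x.mean()
    acf = np.correlate(x, x, mode='full')[len(x) - 1:]   # non-negative lags only
    lag = np.argmax(acf < 0)          # skip past the main (zero-lag) lobe
    lag += np.argmax(acf[lag:])       # first full-period peak of the decaying ACF
    return lag / fs

# Example with a 50 Hz square-wave-like signal sampled at 10 kHz.
fs, f0 = 10_000, 50
t = np.arange(0, 0.2, 1 / fs)
x = np.sign(np.sin(2 * np.pi * f0 * t))
print(fundamental_period(x, fs))      # ~0.02 s
```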

  10. Direct and iterative algorithms for the parallel solution of the one-dimensional macroscopic Navier-Stokes equations

    International Nuclear Information System (INIS)

    Doster, J.M.; Sills, E.D.

    1986-01-01

    Current efforts are under way to develop and evaluate numerical algorithms for the parallel solution of the large sparse matrix equations associated with the finite difference representation of the macroscopic Navier-Stokes equations. Previous work has shown that these equations can be cast into smaller coupled matrix equations suitable for solution utilizing multiple computer processors operating in parallel. The individual processors themselves may exhibit parallelism through the use of vector pipelines. This work has concentrated on the one-dimensional drift flux form of the Navier-Stokes equations. Direct and iterative algorithms that may be suitable for implementation on parallel computer architectures are evaluated in terms of accuracy and overall execution speed. This work has application to engineering and training simulations, on-line process control systems, and engineering workstations where increased computational speeds are required.

  11. A Class Of Iterative Thresholding Algorithms For Real-Time Image Segmentation

    Science.gov (United States)

    Hassan, M. H.

    1989-03-01

    Thresholding algorithms are developed for segmenting gray-level images under nonuniform illumination. The algorithms are based on learning models generated from recursive digital filters, which yield continuously varying threshold tracking functions. A real-time region growing algorithm, which locates the objects in the image while thresholding, is developed and implemented. The algorithms work in a raster-scan format, thus making them attractive for real-time image segmentation in situations requiring fast data throughput such as robot vision and character recognition.

  12. Swarm size and iteration number effects to the performance of PSO algorithm in RFID tag coverage optimization

    Science.gov (United States)

    Prathabrao, M.; Nawawi, Azli; Sidek, Noor Azizah

    2017-04-01

    A Radio Frequency Identification (RFID) system has multiple benefits which can improve the operational efficiency of an organization: the ability to record data systematically and quickly, reduced human and system errors, and automatic, efficient database updates. Often more than one reader is needed to install an RFID system, which makes the system more complex. As a result, an RFID network planning process is needed to ensure the RFID system works properly. The planning process is also an optimization and power-adjustment process, because the coordinates of each RFID reader must be determined. Therefore, nature-inspired algorithms are often used. In this study, the PSO algorithm is used because it has few parameters, fast simulation times, and is easy to use and very practical. However, PSO parameters must be adjusted correctly for robust and efficient usage of PSO; failure to do so may degrade performance and yield poorer optimization results. To ensure the efficiency of PSO, this study examines the effects of two parameters on the performance of the PSO algorithm in RFID tag coverage optimization: the swarm size and the iteration number. In addition, the study recommends the most suitable settings for both parameters, namely 200 for the number of iterations and 800 for the swarm size. The results of this study will enable PSO to operate more efficiently in order to optimize RFID network planning.
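
    The two parameters under study enter a standard global-best PSO loop as shown in the sketch below: `n_particles` is the swarm size and `n_iter` the iteration count (set here to the values recommended in the abstract). The objective function, inertia and acceleration coefficients are illustrative placeholders rather than the paper's RFID coverage model.

```python
import numpy as np

def pso(objective, dim, bounds, n_particles=800, n_iter=200,
        w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal global-best PSO; n_particles (swarm size) and n_iter (iteration
    number) are the two parameters whose effect the study investigates."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), np.apply_along_axis(objective, 1, x)
    g = pbest[pbest_val.argmin()].copy()                 # global best position
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        val = np.apply_along_axis(objective, 1, x)
        improved = val < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], val[improved]
        g = pbest[pbest_val.argmin()].copy()
    return g, pbest_val.min()

# Example: place two readers (4 coordinates) to minimize a toy objective.
best, cost = pso(lambda p: np.sum(p ** 2), dim=4, bounds=(-10, 10))
```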

  13. Evaluation of iterative algorithms for tomography image reconstruction: A study using a third generation industrial tomography system

    International Nuclear Information System (INIS)

    Velo, Alexandre F.; Carvalho, Diego V.; Alvarez, Alexandre G.; Hamada, Margarida M.; Mesquita, Carlos H.

    2017-01-01

    The greatest impact of tomography technology currently occurs in medicine. This success is due to the fact that the human body presents standardized dimensions with well-established composition. These conditions are not found in industrial objects. In industry, there is much interest in using tomography in order to examine the interior of (1) manufactured industrial objects or (2) machines and their means of production. In these cases, the purpose of the tomography is (a) to control the quality of the final product and (b) to optimize production, contributing to the pilot phase of projects and analyzing the quality of the means of production. This scanning system is a non-destructive, efficient and fast method for providing sectional images of industrial objects, and is able to show the dynamic processes and the dispersion of the material structures within these objects. In this context, it is important that the reconstructed image presents high spatial resolution with satisfactory temporal resolution, so the reconstruction algorithm has to meet these requirements. This work consists of the analysis of three different iterative algorithms: the Maximum Likelihood Estimation Method (MLEM), the Maximum Likelihood Transmitted Method (MLTR) and the Simultaneous Iterative Reconstruction Technique (SIRT). The analysis consists of measuring the contrast-to-noise ratio (CNR), the root mean square error (RMSE) and the Modulation Transfer Function (MTF) in order to determine which algorithm best fits the conditions for optimizing the system. The algorithms and the image quality analysis were implemented in Matlab® 2013b. (author)
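
    Of the three algorithms compared, MLEM has the most compact update rule: forward-project the current image, compare with the measured projections, and back-project a multiplicative correction onto the image. The sketch below is that generic update for a dense system matrix; it illustrates the algorithm family rather than the authors' Matlab implementation, and the matrix layout and iteration count are assumptions.

```python
import numpy as np

def mlem(A, y, n_iter=50, eps=1e-12):
    """Maximum-likelihood expectation-maximization reconstruction.
    A : (n_rays x n_pixels) system matrix, y : measured projection data."""
    x = np.ones(A.shape[1])            # flat initial image
    sens = A.sum(axis=0) + eps         # sensitivity image (column sums)
    for _ in range(n_iter):
        ratio = y / (A @ x + eps)      # measured / forward-projected
        x *= (A.T @ ratio) / sens      # multiplicative update keeps x non-negative
    return x
```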

  14. A Regularized Approach for Solving Magnetic Differential Equations and a Revised Iterative Equilibrium Algorithm

    International Nuclear Information System (INIS)

    Hudson, S.R.

    2010-01-01

    A method for approximately solving magnetic differential equations is described. The approach is to include a small diffusion term to the equation, which regularizes the linear operator to be inverted. The extra term allows a 'source-correction' term to be defined, which is generally required in order to satisfy the solvability conditions. The approach is described in the context of computing the pressure and parallel currents in the iterative approach for computing magnetohydrodynamic equilibria.

  15. Strong Convergence Iterative Algorithms for Equilibrium Problems and Fixed Point Problems in Banach Spaces

    Directory of Open Access Journals (Sweden)

    Shenghua Wang

    2013-01-01

    Full Text Available We first introduce the concept of Bregman asymptotically quasinonexpansive mappings and prove that the fixed point set of this kind of mappings is closed and convex. Then we construct an iterative scheme to find a common element of the set of solutions of an equilibrium problem and the set of common fixed points of a countable family of Bregman asymptotically quasinonexpansive mappings in reflexive Banach spaces and prove strong convergence theorems. Our results extend the recent ones of some others.

  16. Dynamic Analysis of the High Speed Train and Slab Track Nonlinear Coupling System with the Cross Iteration Algorithm

    Directory of Open Access Journals (Sweden)

    Xiaoyan Lei

    2016-01-01

    Full Text Available A model for dynamic analysis of the vehicle-track nonlinear coupling system is established by the finite element method. The whole system is divided into two subsystems: the vehicle subsystem and the track subsystem. Coupling of the two subsystems is achieved by equilibrium conditions for wheel-to-rail nonlinear contact forces and geometrical compatibility conditions. To solve the nonlinear dynamics equations for the vehicle-track coupling system, a cross iteration algorithm and a relaxation technique are presented. Examples of vibration analysis of the vehicle and slab track coupling system induced by China’s high speed train CRH3 are given. In the computation, the influences of linear and nonlinear wheel-to-rail contact models and different train speeds are considered. It is found that the cross iteration algorithm and the relaxation technique have the following advantages: simple programming; fast convergence; shorter computation time; and greater accuracy. The analyzed dynamic responses for the vehicle and the track with the wheel-to-rail linear contact model are greater than those with the wheel-to-rail nonlinear contact model, where the increasing range of the displacement and the acceleration is about 10%, and the increasing range of the wheel-to-rail contact force is less than 5%.

  17. Satellite lithium-ion battery remaining useful life estimation with an iterative updated RVM fused with the KF algorithm

    Directory of Open Access Journals (Sweden)

    Yuchen SONG

    2018-01-01

    Full Text Available Lithium-ion batteries have become the third-generation space batteries and are widely utilized in a series of spacecraft. Remaining Useful Life (RUL) estimation is essential to a spacecraft, as the battery is a critical part that determines the lifetime and reliability. The Relevance Vector Machine (RVM) is a data-driven algorithm used to estimate a battery's RUL due to its sparsity and uncertainty management capability. In particular, some regression cases indicate that the RVM obtains better short-term than long-term prediction performance. As a nonlinear kernel learning algorithm, the RVM fixes its coefficient matrix and relevance vectors once training is conducted; moreover, the RVM can easily be influenced by noise in the training data. Thus, this work proposes an iteratively updated approach to improve the long-term performance of a battery's RUL prediction. First, when a new estimate is output by the RVM, a Kalman filter is applied to optimize this estimate with a physical degradation model. Then, this optimized estimate is added to the training set as an on-line sample, the RVM model is re-trained, and the coefficient matrix and relevance vectors are dynamically adjusted to make the next iterative prediction. Experimental results with a commercial battery test data set and a satellite battery data set both indicate that the proposed method achieves a better performance for RUL estimation.

  18. Iterative circle-inserting algorithm CST3D-OC of truly orthogonal curvilinear grid for coastal or river modelling

    Science.gov (United States)

    Kim, H.; Lee, S.; Lee, J.; Lim, H.-S.

    2017-08-01

    A geometric method to generate orthogonal curvilinear grids is proposed here. Elliptic partial differential equations have frequently been solved to find orthogonal grid positions, but questions on orthogonality have remained so far. Algebraic methods have also been developed to improve orthogonality, but their applications have been limited to special situations. When two confronting boundary lines of the quadrilateral boundaries are straight and their positions are known, and we assume that some degree of freedom exists on the other two confronting boundary curves under the condition that each curve passes through a point, we can assign a set of latitudinal curves in the domain using polynomials. The curves are expected not to fold on their own. The grid positions along longitudinal curves are found by inserting circles between two neighbouring latitudinal curves one by one. If the two curves are straight, the new grid point above the grid point of interest can be found geometrically. This algorithm involves iterations because the curves are not straight lines. The present new algorithm was applied to a domain and produced almost perfect orthogonality and a similar aspect ratio compared to an existing partial differential equation approach. The algorithm can also represent nearly quadrant-shaped domains. The present algorithm seems useful for the generation of orthogonal curvilinear grids along coasts or rivers. Some example grids are demonstrated.

  19. An iterative reconstruction algorithm in cone beam geometry: simulation and application in X-ray microtomography

    International Nuclear Information System (INIS)

    Zolfaghari, A.

    1996-01-01

    An X-ray microtomograph has been built in our laboratory from a conventional scanning electron microscope. An algorithm based on algebraic reconstruction techniques (ART) has been developed to reconstruct the internal structure of the imaged object. This algorithm takes into account the diverging nature of the X-ray beam. In this paper we explain the reconstruction algorithm and we analyse the quality of the reconstructed objects in terms of signal-to-noise ratio (SNR), spatial resolution and cross-entropy. (orig.)

  20. Iterative image reconstruction algorithms in coronary CT angiography improve the detection of lipid-core plaque - a comparison with histology

    Energy Technology Data Exchange (ETDEWEB)

    Puchner, Stefan B. [Massachusetts General Hospital, Harvard Medical School, Cardiac MR PET CT Program, Department of Radiology, Boston, MA (United States); Medical University of Vienna, Department of Biomedical Imaging and Image-Guided Therapy, Vienna (Austria); Ferencik, Maros [Massachusetts General Hospital, Harvard Medical School, Cardiac MR PET CT Program, Department of Radiology, Boston, MA (United States); Harvard Medical School, Division of Cardiology, Massachusetts General Hospital, Boston, MA (United States); Maurovich-Horvat, Pal [Massachusetts General Hospital, Harvard Medical School, Cardiac MR PET CT Program, Department of Radiology, Boston, MA (United States); Semmelweis University, MTA-SE Lenduelet Cardiovascular Imaging Research Group, Heart and Vascular Center, Budapest (Hungary); Nakano, Masataka; Otsuka, Fumiyuki; Virmani, Renu [CV Path Institute Inc., Gaithersburg, MD (United States); Kauczor, Hans-Ulrich [University Hospital Heidelberg, Ruprecht-Karls-University of Heidelberg, Department of Diagnostic and Interventional Radiology, Heidelberg (Germany); Hoffmann, Udo [Massachusetts General Hospital, Harvard Medical School, Cardiac MR PET CT Program, Department of Radiology, Boston, MA (United States); Schlett, Christopher L. [Massachusetts General Hospital, Harvard Medical School, Cardiac MR PET CT Program, Department of Radiology, Boston, MA (United States); University Hospital Heidelberg, Ruprecht-Karls-University of Heidelberg, Department of Diagnostic and Interventional Radiology, Heidelberg (Germany)

    2015-01-15

    To evaluate whether iterative reconstruction algorithms improve the diagnostic accuracy of coronary CT angiography (CCTA) for detection of lipid-core plaque (LCP) compared to histology. CCTA and histological data were acquired from three ex vivo hearts. CCTA images were reconstructed using filtered back projection (FBP), adaptive-statistical (ASIR) and model-based (MBIR) iterative algorithms. Vessel cross-sections were co-registered between FBP/ASIR/MBIR and histology. Plaque area <60 HU was semiautomatically quantified in CCTA. LCP was defined by histology as fibroatheroma with a large lipid/necrotic core. Area under the curve (AUC) was derived from logistic regression analysis as a measure of diagnostic accuracy. Overall, 173 CCTA triplets (FBP/ASIR/MBIR) were co-registered with histology. LCP was present in 26 cross-sections. Average measured plaque area <60 HU was significantly larger in LCP compared to non-LCP cross-sections (mm²: 5.78 ± 2.29 vs. 3.39 ± 1.68 FBP; 5.92 ± 1.87 vs. 3.43 ± 1.62 ASIR; 6.40 ± 1.55 vs. 3.49 ± 1.50 MBIR; all p < 0.0001). AUC for detecting LCP was 0.803/0.850/0.903 for FBP/ASIR/MBIR and was significantly higher for MBIR compared to FBP (p = 0.01). MBIR increased sensitivity for detection of LCP by CCTA. Plaque area <60 HU in CCTA was associated with LCP in histology regardless of the reconstruction algorithm. However, MBIR demonstrated higher accuracy for detecting LCP, which may improve vulnerable plaque detection by CCTA. (orig.)

  1. [In-situ monitoring algorithm for the concentration of poisonous gas components using ultraviolet optical absorption spectroscopy based on a recursive iteration method].

    Science.gov (United States)

    Wang, Hui-feng; Jiang, Xu-qian

    2012-01-01

    The key challenge of in-situ monitoring of poisonous gas components is how to separate each gas's absorption signal from the mixed-gas absorption spectrum and compute its concentration accurately. Here we present a new recursive iteration algorithm based on the Lambert-Beer law. The algorithm exploits the characteristic absorption peaks of the various gases in the 190-290 nm continuous UV band and the additivity of the absorbances of overlapping components. First, assuming that no other gas absorbs within the characteristic absorption band of a given gas, we infer an initial concentration for that gas. We then switch to the characteristic band of another gas and subtract the photons absorbed by the first gas from the total number of absorbed photons measured there, which yields an initial concentration for the second gas. By analogy, we obtain initial concentrations for all the other poisonous components. Returning to the characteristic band of the first gas, we compute a new concentration from the difference between the total number of absorbed photons and the photons absorbed by the other gases. By analogy we obtain iterated concentrations for the other gases, and this process is repeated until the change in each gas concentration between adjacent iterations is smaller than a set threshold, finally yielding accurate concentrations for all the gases. Experiments show that the algorithm determines the concentrations of all the gases with an accuracy within 2%, while remaining fast enough to satisfy real-time requirements. In addition, it can be used to measure the concentrations of many kinds of gas at a time, is robust, and is suitable for practical use.
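
    Read as a fixed-point iteration on the Lambert-Beer system, the procedure described above amounts to repeatedly re-estimating each concentration after subtracting the other gases' contributions in its characteristic band. The sketch below is a toy version of that recursion for a dense matrix of absorption coefficients; the coefficients, path length and convergence threshold are illustrative assumptions, not values from the paper.

```python
import numpy as np

def iterative_concentrations(absorbance, coeff, path_len, n_iter=50, tol=1e-6):
    """Recursively estimate gas concentrations from mixed-gas absorbances.
    absorbance[i] : total absorbance measured in the characteristic band of gas i.
    coeff[i, j]   : absorption coefficient of gas j in band i (Lambert-Beer).
    Each pass subtracts the other gases' contributions before re-estimating."""
    n = len(absorbance)
    c = np.zeros(n)
    for _ in range(n_iter):
        c_old = c.copy()
        for i in range(n):
            others = sum(coeff[i, j] * path_len * c[j] for j in range(n) if j != i)
            c[i] = max(absorbance[i] - others, 0.0) / (coeff[i, i] * path_len)
        if np.max(np.abs(c - c_old)) < tol:   # adjacent-iteration change small enough
            break
    return c
```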

  2. Iterative-Transform Phase Diversity: An Object and Wavefront Recovery Algorithm

    Science.gov (United States)

    Smith, J. Scott

    2011-01-01

    Presented is a solution for recovering the wavefront and an extended object. It builds upon the VSM architecture and deconvolution algorithms. Simulations are shown for recovering the wavefront and extended object from noisy data.

  3. An Iterative Optimization Algorithm for Lens Distortion Correction Using Two-Parameter Models

    Directory of Open Access Journals (Sweden)

    Daniel Santana-Cedrés

    2016-12-01

    Full Text Available We present a method for the automatic estimation of two-parameter radial distortion models, considering polynomial as well as division models. The method first detects the longest distorted lines within the image by applying the Hough transform enriched with a radial distortion parameter. From these lines, the first distortion parameter is estimated, then we initialize the second distortion parameter to zero and the two-parameter model is embedded into an iterative nonlinear optimization process to improve the estimation. This optimization aims at reducing the distance from the edge points to the lines, adjusting two distortion parameters as well as the coordinates of the center of distortion. Furthermore, this allows detecting more points belonging to the distorted lines, so that the Hough transform is iteratively repeated to extract a better set of lines until no improvement is achieved. We present some experiments on real images with significant distortion to show the ability of the proposed approach to automatically correct this type of distortion as well as a comparison between the polynomial and division models.

  4. An iterative algorithm in computerized tomography applied to non-destructive testing

    International Nuclear Information System (INIS)

    Santos, C.A.C.

    1982-10-01

    In the present work, a mathematical model has been developed for two-dimensional image reconstruction in computerized tomography applied to non-destructive testing. The method used is the Algebraic Reconstruction Technique (ART) with additive corrections. The model consists of a discretized system formed by an NxN array of cells (pixels). The attenuation in the object of a collimated beam of gamma rays has been determined for various positions and angles of incidence (projections) in terms of the interaction of the beam with the intercepted pixels. The contribution of each pixel to beam attenuation was determined using the weight function wij. Simulated tests using standard objects, carried out with attenuation coefficients in the range 0.2 to 0.7 cm⁻¹, were made using cell arrays of up to 25x25. Experiments were made using a gamma radiation source (241Am), a table with translational and rotational movements and a gamma radiation detection system. Results indicate that the convergence obtained in the iterative calculations is a function of the distribution of attenuation coefficients in the pixels, of the number of angular projections and of the number of iterations. (author)
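
    For context, the additive-correction ART scheme works ray by ray: compare the measured projection with the current estimate's forward projection along that ray and spread the discrepancy back over the intercepted pixels in proportion to their weights wij. The sketch below is a generic Kaczmarz-style version of that update with the weights stored as rows of a matrix; the relaxation factor and the dense-matrix layout are illustrative simplifications, not the thesis's geometry-specific weight computation.

```python
import numpy as np

def art(W, p, n_sweeps=20, relax=1.0):
    """Additive ART (Kaczmarz): W[i, j] is the weight of pixel j for ray i
    (the w_ij of the text), p[i] the measured projection for ray i.
    One sweep applies the additive correction for every ray once."""
    x = np.zeros(W.shape[1])
    row_norm2 = (W ** 2).sum(axis=1)
    for _ in range(n_sweeps):
        for i in range(W.shape[0]):
            if row_norm2[i] == 0.0:
                continue
            residual = p[i] - W[i] @ x            # measured minus current estimate
            x += relax * residual / row_norm2[i] * W[i]
    return x
```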

  5. Fast parallel MR image reconstruction via B1-based, adaptive restart, iterative soft thresholding algorithms (BARISTA).

    Science.gov (United States)

    Muckley, Matthew J; Noll, Douglas C; Fessler, Jeffrey A

    2015-02-01

    Sparsity-promoting regularization is useful for combining compressed sensing assumptions with parallel MRI for reducing scan time while preserving image quality. Variable splitting algorithms are the current state-of-the-art algorithms for SENSE-type MR image reconstruction with sparsity-promoting regularization. These methods are very general and have been observed to work with almost any regularizer; however, the tuning of associated convergence parameters is a commonly-cited hindrance in their adoption. Conversely, majorize-minimize algorithms based on a single Lipschitz constant have been observed to be slow in shift-variant applications such as SENSE-type MR image reconstruction since the associated Lipschitz constants are loose bounds for the shift-variant behavior. This paper bridges the gap between the Lipschitz constant and the shift-variant aspects of SENSE-type MR imaging by introducing majorizing matrices in the range of the regularizer matrix. The proposed majorize-minimize methods (called BARISTA) converge faster than state-of-the-art variable splitting algorithms when combined with momentum acceleration and adaptive momentum restarting. Furthermore, the tuning parameters associated with the proposed methods are unitless convergence tolerances that are easier to choose than the constraint penalty parameters required by variable splitting algorithms.

  6. Performance evaluation of iterative reconstruction algorithms for achieving CT radiation dose reduction - a phantom study.

    Science.gov (United States)

    Dodge, Cristina T; Tamm, Eric P; Cody, Dianna D; Liu, Xinming; Jensen, Corey T; Wei, Wei; Kundra, Vikas; Rong, X John

    2016-03-08

    The purpose of this study was to characterize image quality and dose performance with GE CT iterative reconstruction techniques, adaptive statistical iterative reconstruction (ASiR) and model-based iterative reconstruction (MBIR), over a range of typical to low-dose intervals using the Catphan 600 and the anthropomorphic Kyoto Kagaku abdomen phantoms. The scope of the project was to quantitatively describe the advantages and limitations of these approaches. The Catphan 600 phantom, supplemented with a fat-equivalent oval ring, was scanned using a GE Discovery HD750 scanner at 120 kVp, 0.8 s rotation time, and pitch factors of 0.516, 0.984, and 1.375. The mA was selected for each pitch factor to achieve CTDIvol values of 24, 18, 12, 6, 3, 2, and 1 mGy. Images were reconstructed at 2.5 mm thickness with filtered back-projection (FBP); 20%, 40%, and 70% ASiR; and MBIR. The potential for dose reduction and low-contrast detectability were evaluated from noise and contrast-to-noise ratio (CNR) measurements in the CTP 404 module of the Catphan. Hounsfield units (HUs) of several materials were evaluated from the cylinder inserts in the CTP 404 module, and the modulation transfer function (MTF) was calculated from the air insert. The results were confirmed in the anthropomorphic Kyoto Kagaku abdomen phantom at 6, 3, 2, and 1 mGy. MBIR reduced noise levels five-fold and increased CNR by a factor of five compared to FBP below 6 mGy CTDIvol, resulting in a substantial improvement in image quality. Compared to ASiR and FBP, HU in images reconstructed with MBIR were consistently lower, and this discrepancy was reversed by higher pitch factors in some materials. MBIR improved the conspicuity of the high-contrast spatial resolution bar pattern, and MTF quantification confirmed the superior spatial resolution performance of MBIR versus FBP and ASiR at higher dose levels. While ASiR and FBP were relatively insensitive to changes in dose and pitch, the spatial resolution for MBIR

  7. Iterative metal artefact reduction in CT: can dedicated algorithms improve image quality after spinal instrumentation?

    Science.gov (United States)

    Aissa, J; Thomas, C; Sawicki, L M; Caspers, J; Kröpil, P; Antoch, G; Boos, J

    2017-05-01

    To investigate the value of dedicated computed tomography (CT) iterative metal artefact reduction (iMAR) algorithms in patients after spinal instrumentation. Post-surgical spinal CT images of 24 patients performed between March 2015 and July 2016 were retrospectively included. Images were reconstructed with standard weighted filtered back projection (WFBP) and with two dedicated iMAR algorithms (iMAR-Algo1, adjusted to spinal instrumentation, and iMAR-Algo2, adjusted to large metallic hip implants) using a medium smooth kernel (B30f) and a sharp kernel (B70f). Frequencies of density changes were quantified to assess objective image quality. Image quality was rated subjectively by evaluating the visibility of critical anatomical structures including the central canal, the spinal cord, neural foramina, and vertebral bone. Both iMAR algorithms significantly reduced artefacts from metal compared with WFBP. Both algorithms also led to an improvement in visualisation of soft-tissue structures (median iMAR-Algo1 = 3; interquartile range [IQR]: 1.5-3; iMAR-Algo2 = 4; IQR: 3.5-4) and bone structures (iMAR-Algo1 = 3; IQR: 3-4; iMAR-Algo2 = 4; IQR: 4-5) compared to WFBP (soft tissue: median 2; IQR: 0.5-2; bone structures: median 2; IQR: 1-3). Both iMAR algorithms reduced artefacts compared with WFBP; however, the iMAR algorithm with dedicated settings for large metallic implants was superior to the algorithm specifically adjusted to spinal implants. Copyright © 2016 The Royal College of Radiologists. Published by Elsevier Ltd. All rights reserved.

  8. An Iterative Algorithm for the Management of an Electric Car-Rental Service

    Directory of Open Access Journals (Sweden)

    J. Alberto Conejero

    2014-01-01

    Full Text Available The management of a car-rental service becomes more complex when one-way bookings between different depots are accepted. These bookings can increase operational costs due to the need for company staff to move vehicles from one depot to another in order to serve previously accepted bookings. We present an iterative model based on network flows for the acceptance of bookings by a car-rental service that permits one-way reservations. Our model also lets us recover the movement of the fleet of vehicles between the depots over time. In addition, it permits including restrictions on the number of cars managed at every single depot. These results can be of interest for an electric car-rental service that operates at different depots within a city or region.

  9. Iterated local search algorithm for solving the orienteering problem with soft time windows.

    Science.gov (United States)

    Aghezzaf, Brahim; Fahim, Hassan El

    2016-01-01

    In this paper we study the orienteering problem with time windows (OPTW) and the impact of relaxing the time windows on the profit collected by the vehicle. The way of relaxing time windows adopted in the orienteering problem with soft time windows (OPSTW) that we study in this research is a late service relaxation that allows linearly penalized late services to customers. We solve this problem heuristically by considering a hybrid iterated local search. The results of the computational study show that the proposed approach is able to achieve promising solutions on the OPTW test instances available in the literature, one new best solution is found. On the newly generated test instances of the OPSTW, the results show that the profit collected by the OPSTW is better than the profit collected by the OPTW.

  10. BAKTRAK: backtracking drifting objects using an iterative algorithm with a forward trajectory model

    Science.gov (United States)

    Breivik, Øyvind; Bekkvik, Tor Christian; Wettre, Cecilie; Ommundsen, Atle

    2012-02-01

    The task of determining the origin of a drifting object after it has been located is highly complex due to the uncertainties in drift properties and environmental forcing (wind, waves, and surface currents). Usually, the origin is inferred by running a trajectory model (stochastic or deterministic) in reverse. However, this approach has some severe drawbacks, most notably the fact that many drifting objects go through nonlinear state changes underway (e.g., evaporating oil or a capsizing lifeboat). This makes it difficult to naively construct a reverse-time trajectory model which realistically predicts the earliest possible time the object may have started drifting. We propose instead a different approach where the original (forward) trajectory model is kept unaltered while an iterative seeding and selection process allows us to retain only those particles that end up within a certain time-space radius of the observation. An iterative refinement process named BAKTRAK is employed where those trajectories that do not make it to the goal are rejected, and new trajectories are spawned from successful trajectories. This allows the model to be run in the forward direction to determine the point of origin of a drifting object. The method is demonstrated using the leeway stochastic trajectory model for drifting objects due to its relative simplicity and the practical importance of being able to identify the origin of drifting objects. However, the methodology is general and even more applicable to oil drift trajectories, drifting ships, and hazardous material that exhibit nonlinear state changes such as evaporation, chemical weathering, capsizing, or swamping. The backtracking method is tested against the drift trajectory of a life raft and is shown to predict closely the initial release position of the raft and its subsequent trajectory.
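
    The seed-and-select loop sketched below is a conceptual illustration of the approach described above: run the unaltered forward drift model from a cloud of candidate origins, keep only the trajectories that end within a chosen time-space radius of the observation, and spawn the next generation of seeds around the keepers. The drift model, radii and seed counts are placeholders supplied by the caller; the paper's leeway model and its stochastic forcing are not reproduced.

```python
import numpy as np

def backtrack(observation, forward_drift, t_span, n_seeds=500,
              n_rounds=5, radius=5.0, seed=0):
    """Iteratively find plausible origins of a drifting object.
    forward_drift(x0, t_span, rng) -> final 2-D position of one stochastic
    trajectory started at x0 (user-supplied forward model).
    Seeds whose trajectories end within `radius` of the observation are kept
    and re-seeded for the next round."""
    rng = np.random.default_rng(seed)
    candidates = observation + rng.normal(0, 50.0, size=(n_seeds, 2))  # broad first guess
    keep = candidates
    for _ in range(n_rounds):
        ends = np.array([forward_drift(x0, t_span, rng) for x0 in candidates])
        keep = candidates[np.linalg.norm(ends - observation, axis=1) < radius]
        if len(keep) == 0:            # nothing survived: widen the search instead
            candidates = observation + rng.normal(0, 100.0, size=(n_seeds, 2))
            continue
        # Spawn new seeds around the successful origins for the next round.
        parents = keep[rng.integers(0, len(keep), n_seeds)]
        candidates = parents + rng.normal(0, 5.0, size=(n_seeds, 2))
    return keep
```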

  11. Higher order explicit solutions for nonlinear dynamic model of column buckling using variational approach and variational iteration algorithm-II

    Energy Technology Data Exchange (ETDEWEB)

    Bagheri, Saman; Nikkar, Ali [University of Tabriz, Tabriz (Iran, Islamic Republic of)

    2014-11-15

    This paper deals with the determination of approximate solutions for a model of column buckling using two efficient and powerful methods called He's variational approach and variational iteration algorithm-II. These methods are used to find analytical approximate solution of nonlinear dynamic equation of a model for the column buckling. First and second order approximate solutions of the equation of the system are achieved. To validate the solutions, the analytical results have been compared with those resulted from Runge-Kutta 4th order method. A good agreement of the approximate frequencies and periodic solutions with the numerical results and the exact solution shows that the present methods can be easily extended to other nonlinear oscillation problems in engineering. The accuracy and convenience of the proposed methods are also revealed in comparisons with the other solution techniques.

  12. Influence of Extrinsic Information Scaling Coefficient on Double-Iterative Decoding Algorithm for Space-Time Turbo Codes with Large Number of Antennas

    Directory of Open Access Journals (Sweden)

    TRIFINA, L.

    2011-02-01

    Full Text Available This paper analyzes the extrinsic information scaling coefficient influence on double-iterative decoding algorithm for space-time turbo codes with large number of antennas. The max-log-APP algorithm is used, scaling both the extrinsic information in the turbo decoder and the one used at the input of the interference-canceling block. Scaling coefficients of 0.7 or 0.75 lead to a 0.5 dB coding gain compared to the no-scaling case, for one or more iterations to cancel the spatial interferences.

  13. Iteratively Reweighted Least Squares Algorithm for Sparse Principal Component Analysis with Application to Voting Records

    Directory of Open Access Journals (Sweden)

    Tomáš Masák

    2017-09-01

    Full Text Available Principal component analysis (PCA) is a popular dimensionality reduction and data visualization method. Sparse PCA (SPCA) is its extensively studied and NP-hard-to-solve modification. In the past decade, many different algorithms were proposed to perform SPCA. We build upon the work of Zou et al. (2006), who recast the SPCA problem into the regression framework and proposed to induce sparsity with the l1 penalty. Instead, we propose to drop the l1 penalty and promote sparsity by re-weighting the l2-norm. Our algorithm thus consists mainly of solving weighted ridge regression problems. We show that the algorithm basically attempts to find a solution to a penalized least squares problem with a non-convex penalty that resembles the l0-norm more closely. We also apply the algorithm to analyze the voting records of the Chamber of Deputies of the Parliament of the Czech Republic. We show not only why the SPCA is more appropriate to analyze this type of data, but we also discuss whether the variable selection property can be utilized as an additional piece of information, for example to create voting calculators automatically.
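
    The abstract's central step, replacing the l1 penalty with an iteratively re-weighted l2 penalty so that each pass is a weighted ridge regression, can be illustrated on a generic regression problem. The sketch below shows only that re-weighting loop; the alternating loadings/scores structure of the full SPCA formulation is not reproduced, and the weighting rule, epsilon and lambda are illustrative assumptions.

```python
import numpy as np

def reweighted_ridge(X, y, lam=1.0, n_iter=30, eps=1e-4):
    """Iteratively reweighted least squares with an l2 re-weighting that mimics
    a sparser (closer to l0) penalty: each pass solves a weighted ridge problem,
    penalizing small coefficients more heavily than large ones."""
    b = np.linalg.lstsq(X, y, rcond=None)[0]        # ordinary least-squares start
    for _ in range(n_iter):
        w = 1.0 / (b ** 2 + eps)                    # weights from the current coefficients
        b = np.linalg.solve(X.T @ X + lam * np.diag(w), X.T @ y)
    return b
```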

  14. Optimal reservoir operation policies using novel nested algorithms

    Science.gov (United States)

    Delipetrev, Blagoj; Jonoski, Andreja; Solomatine, Dimitri

    2015-04-01

    optimization algorithm into the state transition that lowers the starting problem dimension and alleviates the curse of dimensionality. The algorithms can solve multi-objective optimization problems without significantly increasing the complexity and the computational expense. The algorithms can handle dense and irregular variable discretization, and are coded in Java as prototype applications. The three algorithms were tested at the multipurpose reservoir Knezevo of the Zletovica hydro-system located in the Republic of Macedonia, with eight objectives, including urban water supply, agriculture, ensuring ecological flow, and generation of hydropower. Because the Zletovica hydro-system is relatively complex, the novel algorithms were pushed to their limits, demonstrating their capabilities and limitations. The nSDP and nRL derived/learned the optimal reservoir policy using 45 years (1951-1995) of historical data. The nSDP and nRL optimal reservoir policy was tested on 10 years (1995-2005) of historical data, and compared with nDP optimal reservoir operation in the same period. The nested algorithms and optimal reservoir operation results are analysed and explained.

  15. Multicriteria hierarchical iterative interactive algorithm for organizing operational modes of large heat supply systems

    Science.gov (United States)

    Korotkova, T. I.; Popova, V. I.

    2017-11-01

    The generalized mathematical model of decision-making for planning and selecting operating modes that provide the required heat loads in a large heat supply system is considered. The system is multilevel, decomposed into levels of main and distribution heating networks with intermediate control stages. The effectiveness, reliability and safety of such a complex system are evaluated with respect to several indicators at once, in particular pressure, flow and temperature. This global multicriteria optimization problem with constraints is decomposed into a number of local optimization problems and a coordination problem. A coordinated solution of the local problems provides a solution to the global multicriteria decision-making problem in the complex system. The choice of the optimal operating mode of a complex heat supply system is made on the basis of an iterative coordination process, which converges to the coordinated solution of the local optimization tasks. The interactive principle of multicriteria decision-making includes, in particular, periodic adjustments, if necessary, guaranteeing optimal safety, reliability and efficiency of the system as a whole during operation. The degree of accuracy of the solution, for example the degree of deviation of the internal air temperature from the required value, can also be changed interactively. This allows adjustment activities to be carried out in the best way and improves the quality of heat supply to consumers. At the same time, an energy-saving task is solved to determine the minimum required values of heads at sources and pumping stations.

  16. Multivariate systems of nonexpansive operator equations and iterative algorithms for solving them in uniformly convex and uniformly smooth Banach spaces with applications.

    Science.gov (United States)

    Xu, Yongchun; Guan, Jinyu; Tang, Yanxia; Su, Yongfu

    2018-01-01

    We prove some existence theorems for solutions of a certain system of multivariate nonexpansive operator equations and calculate the solutions by using the generalized Mann and Halpern iterative algorithms in uniformly convex and uniformly smooth Banach spaces. The results of this paper improve and extend the previously known ones in the literature.
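
    For readers unfamiliar with the two schemes named above, the sketch below shows the classical Mann and Halpern iterations for a nonexpansive map on R^n; the generalized multivariate Banach-space versions studied in the paper are not reproduced here, and the toy map is an assumption for illustration.

```python
import numpy as np

def mann_iteration(T, x0, alphas, n_iter=200):
    """Mann iteration x_{n+1} = (1 - a_n) x_n + a_n T(x_n)."""
    x = np.asarray(x0, dtype=float)
    for n in range(n_iter):
        a = alphas(n)
        x = (1 - a) * x + a * T(x)
    return x

def halpern_iteration(T, u, x0, alphas, n_iter=200):
    """Halpern iteration x_{n+1} = a_n u + (1 - a_n) T(x_n), anchored at u."""
    x = np.asarray(x0, dtype=float)
    u = np.asarray(u, dtype=float)
    for n in range(n_iter):
        a = alphas(n)
        x = a * u + (1 - a) * T(x)
    return x

# toy nonexpansive (in fact contractive) map: a rotation-plus-shrink in the plane
T = lambda x: 0.9 * np.array([[0.0, -1.0], [1.0, 0.0]]) @ x + np.array([1.0, 0.0])
print(mann_iteration(T, [5.0, 5.0], lambda n: 1.0 / (n + 2)))
print(halpern_iteration(T, [0.0, 0.0], [5.0, 5.0], lambda n: 1.0 / (n + 2)))
```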

  17. Using the Iterative Input variable Selection (IIS) algorithm to assess the relevance of ENSO teleconnections patterns on hydro-meteorological processes at the catchment scale

    Science.gov (United States)

    Beltrame, Ludovica; Carbonin, Daniele; Galelli, Stefano; Castelletti, Andrea

    2014-05-01

    Population growth, water scarcity and climate change are three major factors making the understanding of variations in water availability increasingly important. Therefore, reliable medium-to-long range forecasts of streamflows are essential to the development of water management policies. To this purpose, recent modelling efforts have been dedicated to seasonal and inter-annual streamflow forecasts based on the teleconnection between "at-site" hydro-meteorological processes and low frequency climate fluctuations, such as El Niño Southern Oscillation (ENSO). This work proposes a novel procedure for first detecting the impact of ENSO on hydro-meteorological processes at the catchment scale, and then assessing the potential of ENSO indicators for building medium-to-long range statistical streamflow prediction models. The core of this procedure is the Iterative Input variable Selection (IIS) algorithm, which is employed to find the most relevant forcings of streamflow variability and derive predictive models based on the selected inputs. The procedure is tested on the Columbia (USA) and Williams (Australia) Rivers, where ENSO influence has been well-documented, and then adopted on the unexplored Red River basin (Vietnam). Results show that IIS outcomes on the Columbia and Williams Rivers are consistent with the results of previous studies, and that ENSO indicators can be effectively used to enhance the capabilities of streamflow forecast models. The experiments on the Red River basin show that the ENSO influence is less pronounced, inducing little effect on the basin hydro-meteorological processes.

  18. Spline based iterative phase retrieval algorithm for X-ray differential phase contrast radiography.

    Science.gov (United States)

    Nilchian, Masih; Wang, Zhentian; Thuering, Thomas; Unser, Michael; Stampanoni, Marco

    2015-04-20

    Differential phase contrast imaging using a grating interferometer is a promising alternative to conventional X-ray radiographic methods. It provides the absorption, differential phase and scattering information of the underlying sample simultaneously. Phase retrieval from the differential phase signal is an essential problem for quantitative analysis in medical imaging. In this paper, we formalize the phase retrieval as a regularized inverse problem, and propose a novel discretization scheme for the derivative operator based on B-spline calculus. The inverse problem is then solved by a constrained regularized weighted-norm algorithm (CRWN) which exploits the properties of B-splines and ensures a fast implementation. The method is evaluated with a tomographic dataset and differential phase contrast mammography data. We demonstrate that the proposed method is able to produce phase images with enhanced soft-tissue contrast compared to the conventional absorption-based approach, which can potentially provide useful information for mammographic investigations.

  19. Analysis of an iterated local search algorithm for vertex cover in sparse random graphs

    DEFF Research Database (Denmark)

    Witt, Carsten

    2012-01-01

    Recently, various randomized search heuristics have been studied for the solution of the minimum vertex cover problem, in particular for sparse random instances according to the G(n,c/n) model, where c>0 is a constant. Methods from statistical physics suggest that the problem is easy if c… The analysis is based on the refined analysis of the Karp–Sipser algorithm by Aronson et al. (1998) [1]. Subsequently, theoretical supplements are given to experimental studies of search heuristics on random graphs. For c…, the search heuristic finds an optimal cover in polynomial time with a probability arbitrarily close to 1. This behavior relies on the absence of a giant component. As an additional insight into the randomized search, it is shown that the heuristic fails badly also on graphs consisting of a single tree component of maximum…
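
    A greatly simplified sketch of a randomized local search for vertex cover on G(n, c/n) is given below; it is a generic removal-based heuristic for illustration only, not the specific heuristic analysed in the paper, and the graph model and step budget are assumptions.

```python
import random

def random_graph(n, c, rng):
    """Erdos-Renyi G(n, c/n): each edge present independently with prob c/n."""
    p = c / n
    return [(u, v) for u in range(n) for v in range(u + 1, n) if rng.random() < p]

def local_search_vertex_cover(edges, n, steps=20000, rng=None):
    """Start from the trivial cover and repeatedly try removing a random vertex,
    keeping the removal whenever the result is still a cover."""
    rng = rng or random.Random(0)
    cover = set(range(n))                      # all vertices: always a cover

    def is_cover(c):
        return all(u in c or v in c for u, v in edges)

    for _ in range(steps):
        v = rng.randrange(n)
        if v in cover:
            cover.discard(v)                   # try removing a vertex
            if not is_cover(cover):
                cover.add(v)                   # undo if an edge became uncovered
    return cover

rng = random.Random(1)
edges = random_graph(200, 2.0, rng)
print("cover size:", len(local_search_vertex_cover(edges, 200, rng=rng)))
```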

  20. Assessing image quality and dose reduction of a new x-ray computed tomography iterative reconstruction algorithm using model observers

    International Nuclear Information System (INIS)

    Tseng, Hsin-Wu; Kupinski, Matthew A.; Fan, Jiahua; Sainath, Paavana; Hsieh, Jiang

    2014-01-01

    Purpose: A number of different techniques have been developed to reduce radiation dose in x-ray computed tomography (CT) imaging. In this paper, the authors will compare task-based measures of image quality of CT images reconstructed by two algorithms: conventional filtered back projection (FBP), and a new iterative reconstruction algorithm (IR). Methods: To assess image quality, the authors used the performance of a channelized Hotelling observer acting on reconstructed image slices. The selected channels are dense difference Gaussian channels (DDOG). A body phantom and a head phantom were imaged 50 times at different dose levels to obtain the data needed to assess image quality. The phantoms consisted of uniform backgrounds with low contrast signals embedded at various locations. The tasks the observer model performed included (1) detection of a signal of known location and shape, and (2) detection and localization of a signal of known shape. The employed DDOG channels are based on the response of the human visual system. Performance was assessed using the areas under ROC curves and areas under localization ROC curves. Results: For signal known exactly (SKE) and location unknown/signal shape known tasks with circular signals of different sizes and contrasts, the authors’ task-based measures showed that a FBP equivalent image quality can be achieved at lower dose levels using the IR algorithm. For the SKE case, the range of dose reduction is 50%–67% (head phantom) and 68%–82% (body phantom). For the study of location unknown/signal shape known, the dose reduction range can be reached at 67%–75% for head phantom and 67%–77% for body phantom case. These results suggest that the IR images at lower dose settings can reach the same image quality when compared to full dose conventional FBP images. Conclusions: The work presented provides an objective way to quantitatively assess the image quality of a newly introduced CT IR algorithm. The performance of the
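
    The channelized Hotelling observer mentioned in the Methods can be summarised in a few lines: channelize each image, estimate the channel covariance, form the Hotelling template, and score the two classes. The toy example below uses Gaussian channels and synthetic images as stand-ins for the DDOG channels and phantom data of the study.

```python
import numpy as np

def cho_auc(signal_imgs, noise_imgs, channels):
    """Channelized Hotelling observer: project images onto channels, build the
    Hotelling template from channelized statistics, and score both classes."""
    # channelize: each flattened image -> vector of channel responses
    vs = signal_imgs.reshape(len(signal_imgs), -1) @ channels   # (Ns, Nc)
    vn = noise_imgs.reshape(len(noise_imgs), -1) @ channels     # (Nn, Nc)
    S = 0.5 * (np.cov(vs, rowvar=False) + np.cov(vn, rowvar=False))
    w = np.linalg.solve(S, vs.mean(axis=0) - vn.mean(axis=0))   # Hotelling template
    ts, tn = vs @ w, vn @ w                                     # test statistics
    # AUC by pairwise comparison (Wilcoxon-Mann-Whitney estimate)
    return (ts[:, None] > tn[None, :]).mean()

# toy example: 32x32 images, Gaussian noise, faint disc signal, 4 toy channels
rng = np.random.default_rng(0)
yy, xx = np.mgrid[:32, :32]
disc = ((xx - 16) ** 2 + (yy - 16) ** 2 < 16).astype(float)
noise = rng.normal(size=(200, 32, 32))
signal = noise[:100] + 0.3 * disc
channels = np.stack([np.exp(-(((xx - 16) ** 2 + (yy - 16) ** 2) / (2 * s ** 2)))
                     for s in (2, 4, 8, 16)], axis=-1).reshape(-1, 4)
print("CHO AUC:", round(cho_auc(signal, noise[100:], channels), 3))
```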

  1. Assessing image quality and dose reduction of a new x-ray computed tomography iterative reconstruction algorithm using model observers.

    Science.gov (United States)

    Tseng, Hsin-Wu; Fan, Jiahua; Kupinski, Matthew A; Sainath, Paavana; Hsieh, Jiang

    2014-07-01

    A number of different techniques have been developed to reduce radiation dose in x-ray computed tomography (CT) imaging. In this paper, the authors will compare task-based measures of image quality of CT images reconstructed by two algorithms: conventional filtered back projection (FBP), and a new iterative reconstruction algorithm (IR). To assess image quality, the authors used the performance of a channelized Hotelling observer acting on reconstructed image slices. The selected channels are dense difference Gaussian channels (DDOG). A body phantom and a head phantom were imaged 50 times at different dose levels to obtain the data needed to assess image quality. The phantoms consisted of uniform backgrounds with low contrast signals embedded at various locations. The tasks the observer model performed included (1) detection of a signal of known location and shape, and (2) detection and localization of a signal of known shape. The employed DDOG channels are based on the response of the human visual system. Performance was assessed using the areas under ROC curves and areas under localization ROC curves. For signal known exactly (SKE) and location unknown/signal shape known tasks with circular signals of different sizes and contrasts, the authors' task-based measures showed that a FBP equivalent image quality can be achieved at lower dose levels using the IR algorithm. For the SKE case, the range of dose reduction is 50%-67% (head phantom) and 68%-82% (body phantom). For the study of location unknown/signal shape known, the dose reduction range can be reached at 67%-75% for head phantom and 67%-77% for body phantom case. These results suggest that the IR images at lower dose settings can reach the same image quality when compared to full dose conventional FBP images. The work presented provides an objective way to quantitatively assess the image quality of a newly introduced CT IR algorithm. The performance of the model observers using the IR images was always higher

  2. Characterization of a commercial hybrid iterative and model-based reconstruction algorithm in radiation oncology

    Energy Technology Data Exchange (ETDEWEB)

    Price, Ryan G. [Department of Radiation Oncology, Henry Ford Health Systems, Detroit, Michigan 48202 and Wayne State University School of Medicine, Detroit, Michigan 48201 (United States); Vance, Sean; Cattaneo, Richard; Elshaikh, Mohamed A.; Chetty, Indrin J.; Glide-Hurst, Carri K., E-mail: churst2@hfhs.org [Department of Radiation Oncology, Henry Ford Health Systems, Detroit, Michigan 48202 (United States); Schultz, Lonni [Department of Public Health Sciences, Henry Ford Health Systems, Detroit, Michigan 48202 (United States)

    2014-08-15

    Purpose: Iterative reconstruction (IR) reduces noise, thereby allowing dose reduction in computed tomography (CT) while maintaining comparable image quality to filtered back-projection (FBP). This study sought to characterize image quality metrics, delineation, dosimetric assessment, and other aspects necessary to integrate IR into treatment planning. Methods: CT images (Brilliance Big Bore v3.6, Philips Healthcare) were acquired of several phantoms using 120 kVp and 25–800 mAs. IR was applied at levels corresponding to noise reduction of 0.89–0.55 with respect to FBP. Noise power spectrum (NPS) analysis was used to characterize noise magnitude and texture. CT to electron density (CT-ED) curves were generated over all IR levels. Uniformity as well as spatial and low contrast resolution were quantified using a CATPHAN phantom. Task specific modulation transfer functions (MTF_task) were developed to characterize spatial frequency across objects of varied contrast. A prospective dose reduction study was conducted for 14 patients undergoing interfraction CT scans for high-dose rate brachytherapy. Three physicians performed image quality assessment using a six-point grading scale between the normal-dose FBP (reference), low-dose FBP, and low-dose IR scans for the following metrics: image noise, detectability of the vaginal cuff/bladder interface, spatial resolution, texture, segmentation confidence, and overall image quality. Contouring differences between FBP and IR were quantified for the bladder and rectum via overlap indices (OI) and Dice similarity coefficients (DSC). Line profile and region of interest analyses quantified noise and boundary changes. For two subjects, the impact of IR on external beam dose calculation was assessed via gamma analysis and changes in digitally reconstructed radiographs (DRRs) were quantified. Results: NPS showed large reduction in noise magnitude (50%), and a slight spatial frequency shift (∼0.1 mm⁻¹) with

  3. Increasing feasibility of the field-programmable gate array implementation of an iterative image registration using a kernel-warping algorithm

    Science.gov (United States)

    Nguyen, An Hung; Guillemette, Thomas; Lambert, Andrew J.; Pickering, Mark R.; Garratt, Matthew A.

    2017-09-01

    Image registration is a fundamental image processing technique. It is used to spatially align two or more images that have been captured at different times, from different sensors, or from different viewpoints. There have been many algorithms proposed for this task. The most common of these are the well-known Lucas-Kanade (LK) and Horn-Schunck approaches. However, the main limitation of these approaches is the computational complexity required to implement the large number of iterations necessary for successful alignment of the images. Previously, a multi-pass image interpolation algorithm (MP-I2A) was developed to considerably reduce the number of iterations required for successful registration compared with the LK algorithm. This paper develops a kernel-warping algorithm (KWA), a modified version of the MP-I2A, which requires fewer iterations to successfully register two images and less memory space for the field-programmable gate array (FPGA) implementation than the MP-I2A. These reductions increase the feasibility of implementing the proposed algorithm on FPGAs with very limited memory space and other hardware resources. A two-FPGA system rather than a single-FPGA system is successfully developed to implement the KWA, in order to compensate for the insufficient hardware resources of a single FPGA and to increase the parallel processing ability and scalability of the system.

  4. Influence of model based iterative reconstruction algorithm on image quality of multiplanar reformations in reduced dose chest CT

    International Nuclear Information System (INIS)

    Barras, Heloise; Dunet, Vincent; Hachulla, Anne-Lise; Grimm, Jochen; Beigelman-Aubry, Catherine

    2016-01-01

    Model-based iterative reconstruction (MBIR) reduces image noise and improves image quality (IQ), but its influence on post-processing tools including maximal intensity projection (MIP) and minimal intensity projection (mIP) remains unknown. To evaluate the influence of MBIR on the IQ of native slices and of mIP and MIP axial and coronal reformats from reduced-dose computed tomography (RD-CT) chest acquisitions. Raw data of 50 patients, who underwent a standard dose CT (SD-CT) and a follow-up RD-CT with a CT dose index (CTDI) of 2–3 mGy, were reconstructed by MBIR and FBP. Native slices, 4-mm-thick MIP, and 3-mm-thick mIP axial and coronal reformats were generated. The relative IQ, subjective IQ, image noise, and number of artifacts were determined in order to compare the different reconstructions of RD-CT with the reference SD-CT. The lowest noise was observed with MBIR. RD-CT reconstructed by MBIR exhibited the best relative and subjective IQ on coronal views regardless of the post-processing tool. MBIR generated the lowest rate of artefacts on coronal mIP/MIP reformats and the highest one on axial reformats, mainly represented by distortions and stair-step artifacts. The MBIR algorithm reduces image noise but generates more artifacts than FBP on axial mIP and MIP reformats of RD-CT. Conversely, it significantly improves IQ on coronal views, without increasing artifacts, regardless of the post-processing technique.

  5. Exploring the velocity distribution of debris flows: An iteration algorithm based approach for complex cross-sections

    Science.gov (United States)

    Han, Zheng; Chen, Guangqi; Li, Yange; Wang, Wei; Zhang, Hong

    2015-07-01

    The estimation of debris-flow velocity in a cross-section is of primary importance due to its correlation with impact force, run-up and superelevation. However, previous methods sometimes neglect the observed asymmetric velocity distribution, and consequently underestimate the debris-flow velocity. This paper presents a new approach for exploring the debris-flow velocity distribution in a cross-section. The presented approach uses an iteration algorithm based on the Riemann integral method to search for an approximate solution to the unknown flow surface. The established laws for the vertical velocity profile are compared and subsequently integrated to analyze the velocity distribution in the cross-section. The major benefit of the presented approach is that natural channels, typically with irregular beds and superelevations, can be taken into account, and the resulting approximation closely replicates the direct integral solution. The approach is programmed in the MATLAB environment, and the code is open to the public. A well-documented debris-flow event in Sichuan Province, China, is used to demonstrate the presented approach. Results show that the computed flow surface and mean velocity reproduce the investigated results well. Discussion regarding the model sensitivity and the source of errors concludes the paper.

  6. Iterative sure independence screening EM-Bayesian LASSO algorithm for multi-locus genome-wide association studies

    Science.gov (United States)

    Tamba, Cox Lwaka; Ni, Yuan-Li; Zhang, Yuan-Ming

    2017-01-01

    Genome-wide association study (GWAS) entails examining a large number of single nucleotide polymorphisms (SNPs) in a limited sample with hundreds of individuals, implying a variable selection problem in the high dimensional dataset. Although many single-locus GWAS approaches under polygenic background and population structure controls have been widely used, some significant loci fail to be detected. In this study, we used an iterative modified-sure independence screening (ISIS) approach to reduce the number of SNPs to a moderate size. Expectation-Maximization (EM)-Bayesian least absolute shrinkage and selection operator (BLASSO) was used to estimate all the selected SNP effects for true quantitative trait nucleotide (QTN) detection. This method is referred to as the ISIS EM-BLASSO algorithm. Monte Carlo simulation studies validated the new method, which has the highest empirical power in QTN detection and the highest accuracy in QTN effect estimation, and it is the fastest, as compared with efficient mixed-model association (EMMA), smoothly clipped absolute deviation (SCAD), fixed and random model circulating probability unification (FarmCPU), and multi-locus random-SNP-effect mixed linear model (mrMLM). To further demonstrate the new method, six flowering time traits in Arabidopsis thaliana were re-analyzed by four methods (New method, EMMA, FarmCPU, and mrMLM). As a result, the new method identified most previously reported genes. Therefore, the new method is a good alternative for multi-locus GWAS. PMID:28141824
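
    Only the screening stage of the approach lends itself to a compact illustration: rank SNPs by their absolute marginal correlation with the phenotype and keep the strongest ones before any multi-locus estimation. The snippet below is such a sketch on synthetic genotypes; the EM-BLASSO estimation stage and the iterative refinement are not shown.

```python
import numpy as np

def sure_independence_screening(X, y, keep):
    """Rank predictors (e.g. SNPs) by absolute marginal correlation with the
    phenotype y and keep the top `keep` columns; screening stage only."""
    Xc = (X - X.mean(axis=0)) / X.std(axis=0)
    yc = (y - y.mean()) / y.std()
    score = np.abs(Xc.T @ yc) / len(y)          # |marginal correlation|
    return np.argsort(score)[::-1][:keep]

# toy example: 500 individuals, 5000 SNPs, 3 true QTNs (all numbers are assumptions)
rng = np.random.default_rng(0)
X = rng.integers(0, 3, size=(500, 5000)).astype(float)   # genotypes coded 0/1/2
beta = np.zeros(5000)
beta[[10, 200, 3000]] = [1.0, -0.8, 0.6]
y = X @ beta + rng.normal(size=500)
selected = sure_independence_screening(X, y, keep=50)
print("true QTNs recovered:", sorted(set(selected) & {10, 200, 3000}))
```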

  7. A rotating and warping projector/backprojector for fan-beam and cone-beam iterative algorithm

    International Nuclear Information System (INIS)

    Zeng, G.L.; Hsieh, Y.L.; Gullberg, G.T.

    1994-01-01

    A rotating-and-warping projector/backprojector is proposed for iterative algorithms used to reconstruct fan-beam and cone-beam single photon emission computed tomography (SPECT) data. The development of a new projector/backprojector for implementing attenuation, geometric point response, and scatter models is motivated by the need to reduce the computation time yet preserve the fidelity of the corrected reconstruction. At each projection angle, the projector/backprojector first rotates the image volume so that the pixelized cube remains parallel to the detector, and then warps the image volume so that the fan-beam and cone-beam rays are converted into parallel rays. In the authors' implementation, these two steps are combined so that the interpolation of voxel values is performed only once. The projection operation is achieved by a simple weighted summation, and the backprojection operation is achieved by copying weighted projection array values to the image volume. An advantage of this projector/backprojector is that the system point response function can be deconvolved via the Fast Fourier Transform using the shift-invariant property of the point response when the voxel-to-detector distance is constant. The fan-beam and cone-beam rotating-and-warping projector/backprojector is applied to SPECT data showing improved resolution.
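
    The rotate-then-project idea can be illustrated with a parallel-beam toy version: rotate the image so the detector stays axis-aligned, then sum along one axis; backprojection smears each profile and rotates it back into place. The fan/cone-beam warping step and the attenuation, point-response and scatter models of the paper are omitted in this sketch.

```python
import numpy as np
from scipy.ndimage import rotate

def rotating_projector(image, angles_deg):
    """Parallel-beam sketch of a rotate-then-sum projector: rotate the image so
    the detector stays axis-aligned, then project by summing along one axis."""
    return np.stack([rotate(image, a, reshape=False, order=1).sum(axis=0)
                     for a in angles_deg])

def rotating_backprojector(sinogram, angles_deg, shape):
    """Adjoint-style operation: smear each projection and rotate it back."""
    recon = np.zeros(shape)
    for proj, a in zip(sinogram, angles_deg):
        smear = np.tile(proj, (shape[0], 1))
        recon += rotate(smear, -a, reshape=False, order=1)
    return recon

# toy usage with a small rectangular phantom
phantom = np.zeros((64, 64))
phantom[20:40, 25:35] = 1.0
angles = np.linspace(0.0, 180.0, 60, endpoint=False)
sino = rotating_projector(phantom, angles)
print(sino.shape, rotating_backprojector(sino, angles, phantom.shape).shape)
```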

  8. Change Detection of Phragmites Australis Distribution in the Detroit Wildlife Refuge Based on an Iterative Intersection Analysis Algorithm

    Directory of Open Access Journals (Sweden)

    Haixin Liu

    2016-03-01

    Full Text Available Satellite data have been widely used in the detection of vegetation area changes; however, the lack of historical training samples seriously limits detection accuracy. In this research, an iterative intersection analysis algorithm (IIAA) is proposed to solve this problem, and employed to improve the change detection accuracy of Phragmites area in the Detroit River International Wildlife Refuge between 2001 and 2010. Training samples for 2001, 2005, and 2010 were constructed based on NAIP, DOQQ high-resolution imagery and ground-truth data; for 2002–2004 and 2006–2009, because of the shortage of training samples, the IIAA was employed to supply additional training samples. This method included three steps: first, the NDVI image for each year (2002–2004, 2006–2009) was calculated with Landsat TM images; secondly, rough patches of the land-cover were acquired by density slicing using suitable thresholds; thirdly, a GIS overlay analysis method was used to acquire the Phragmites information in common throughout the ten years and to obtain training patches. In combination with training samples of other land cover types, supervised classifications were employed to detect the changes of Phragmites area. In the experiment, we analyzed the variation of Phragmites area from 2001 to 2010, and the result showed that its distribution area increased from 5156 acres to 6817 acres during this period, which illustrated that the invasion of Phragmites remains a serious problem for the protection of biodiversity.
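
    The three steps listed above (NDVI computation, density slicing, multi-year overlay) map directly onto a short script. The version below is a toy sketch with synthetic bands and assumed thresholds, not the published IIAA implementation.

```python
import numpy as np

def ndvi(nir, red, eps=1e-6):
    """Normalised difference vegetation index from NIR and red bands."""
    return (nir - red) / (nir + red + eps)

def iterative_intersection_training(ndvi_stack, low=0.5, high=0.9):
    """Rough vegetation patches per year by density slicing the NDVI,
    then their multi-year intersection as additional training pixels."""
    masks = [(img >= low) & (img <= high) for img in ndvi_stack]
    common = masks[0]
    for m in masks[1:]:              # GIS-style overlay: keep pixels stable in all years
        common = common & m
    return common

# toy usage: three years of synthetic 100x100 NIR/red bands (thresholds are assumptions)
rng = np.random.default_rng(0)
years = []
for _ in range(3):
    red = rng.uniform(0.05, 0.3, size=(100, 100))
    nir = rng.uniform(0.2, 0.9, size=(100, 100))
    nir[30:60, 30:60] = 0.85          # a persistent vegetated patch
    red[30:60, 30:60] = 0.1
    years.append(ndvi(nir, red))
print("training pixels:", int(iterative_intersection_training(years).sum()))
```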

  9. IHadoop: Asynchronous iterations for MapReduce

    KAUST Repository

    Elnikety, Eslam Mohamed Ibrahim

    2011-11-01

    MapReduce is a distributed programming framework designed to ease the development of scalable data-intensive applications for large clusters of commodity machines. Most machine learning and data mining applications involve iterative computations over large datasets, such as the Web hyperlink structures and social network graphs. Yet, the MapReduce model does not efficiently support this important class of applications. The architecture of MapReduce, most critically its dataflow techniques and task scheduling, is completely unaware of the nature of iterative applications; tasks are scheduled according to a policy that optimizes the execution for a single iteration which wastes bandwidth, I/O, and CPU cycles when compared with an optimal execution for a consecutive set of iterations. This work presents iHadoop, a modified MapReduce model, and an associated implementation, optimized for iterative computations. The iHadoop model schedules iterations asynchronously. It connects the output of one iteration to the next, allowing both to process their data concurrently. iHadoop's task scheduler exploits inter-iteration data locality by scheduling tasks that exhibit a producer/consumer relation on the same physical machine allowing a fast local data transfer. For those iterative applications that require satisfying certain criteria before termination, iHadoop runs the check concurrently during the execution of the subsequent iteration to further reduce the application's latency. This paper also describes our implementation of the iHadoop model, and evaluates its performance against Hadoop, the widely used open source implementation of MapReduce. Experiments using different data analysis applications over real-world and synthetic datasets show that iHadoop performs better than Hadoop for iterative algorithms, reducing execution time of iterative applications by 25% on average. Furthermore, integrating iHadoop with HaLoop, a variant Hadoop implementation that caches

  10. Generation of a statistical shape model with probabilistic point correspondences and the expectation maximization- iterative closest point algorithm

    International Nuclear Information System (INIS)

    Hufnagel, Heike; Pennec, Xavier; Ayache, Nicholas; Ehrhardt, Jan; Handels, Heinz

    2008-01-01

    Identification of point correspondences between shapes is required for statistical analysis of organ shape differences. Since manual identification of landmarks is not a feasible option in 3D, several methods were developed to automatically find one-to-one correspondences on shape surfaces. For unstructured point sets, however, one-to-one correspondences do not exist but correspondence probabilities can be determined. A method was developed to compute a statistical shape model based on shapes which are represented by unstructured point sets with arbitrary point numbers. A fundamental problem when computing statistical shape models is the determination of correspondences between the points of the shape observations of the training data set. In the absence of landmarks, exact correspondences can only be determined between continuous surfaces, not between unstructured point sets. To overcome this problem, we introduce correspondence probabilities instead of exact correspondences. The correspondence probabilities are found by aligning the observation shapes with the affine expectation maximization-iterative closest points (EM-ICP) registration algorithm. In a second step, the correspondence probabilities are used as input to compute a mean shape (represented once again by an unstructured point set). Both steps are unified in a single optimization criterion which depends on the two parameters 'registration transformation' and 'mean shape'. In a final step, a variability model which best represents the variability in the training data set is computed. Experiments on synthetic data sets and in vivo brain structure data sets (MRI) are then designed to evaluate the performance of our algorithm. The new method was applied to brain MRI data sets, and the estimated point correspondences were compared to a statistical shape model built on exact correspondences. Based on established measures of "generalization ability" and "specificity", the estimates were very satisfactory.

  11. The impact of CT radiation dose reduction and iterative reconstruction algorithms from four different vendors on coronary calcium scoring

    Energy Technology Data Exchange (ETDEWEB)

    Willemink, Martin J.; Takx, Richard A.P.; Jong, Pim A. de; Budde, Ricardo P.J.; Schilham, Arnold M.R.; Leiner, Tim [Utrecht University Medical Center, Department of Radiology, Utrecht (Netherlands); Bleys, Ronald L.A.W. [Utrecht University Medical Center, Department of Anatomy, Utrecht (Netherlands); Das, Marco; Wildberger, Joachim E. [Maastricht University Medical Center, Department of Radiology, Maastricht (Netherlands); Prokop, Mathias [Radboud University Nijmegen Medical Center, Department of Radiology, Nijmegen (Netherlands); Buls, Nico; Mey, Johan de [UZ Brussel, Department of Radiology, Brussels (Belgium)

    2014-09-15

    To analyse the effects of radiation dose reduction and iterative reconstruction (IR) algorithms on coronary calcium scoring (CCS). Fifteen ex vivo human hearts were examined in an anthropomorphic chest phantom using computed tomography (CT) systems from four vendors and examined at four dose levels using unenhanced prospectively ECG-triggered protocols. Tube voltage was 120 kV and tube current differed between protocols. CT data were reconstructed with filtered back projection (FBP) and reduced dose CT data with IR. CCS was quantified with Agatston scores, calcification mass and calcification volume. Differences were analysed with the Friedman test. Fourteen hearts showed coronary calcifications. Dose reduction with FBP did not significantly change Agatston scores, calcification volumes and calcification masses (P > 0.05). Maximum differences in Agatston scores were 76, 26, 51 and 161 units, in calcification volume 97, 27, 42 and 162 mm³, and in calcification mass 23, 23, 20 and 48 mg, respectively. IR resulted in a trend towards lower Agatston scores and calcification volumes with significant differences for one vendor (P < 0.05). Median relative differences between reference FBP and reduced dose IR for Agatston scores remained within 2.0-4.6 %, 1.0-5.3 %, 1.2-7.7 % and 2.6-4.5 %, for calcification volumes within 2.4-3.9 %, 1.0-5.6 %, 1.1-6.4 % and 3.7-4.7 %, for calcification masses within 1.9-4.1 %, 0.9-7.8 %, 2.9-4.7 % and 2.5-3.9 %, respectively. IR resulted in increased, decreased or similar calcification masses. CCS derived from standard FBP acquisitions was not affected by radiation dose reductions up to 80 %. IR resulted in a trend towards lower Agatston scores and calcification volumes. (orig.)

  12. Evaluation of hybrid SART + OS + TV iterative reconstruction algorithm for optical-CT gel dosimeter imaging

    Science.gov (United States)

    Du, Yi; Wang, Xiangang; Xiang, Xincheng; Wei, Zhouping

    2016-12-01

    Optical computed tomography (optical-CT) is a high-resolution, fast, and easily accessible readout modality for gel dosimeters. This paper evaluates a hybrid iterative image reconstruction algorithm for optical-CT gel dosimeter imaging, namely, the simultaneous algebraic reconstruction technique (SART) integrated with ordered subsets (OS) iteration and total variation (TV) minimization regularization. The mathematical theory and implementation workflow of the algorithm are detailed. Experiments on two different optical-CT scanners were performed for cross-platform validation. For algorithm evaluation, the iterative convergence is first shown, and peak-to-noise-ratio (PNR) and contrast-to-noise ratio (CNR) results are given with the cone-beam filtered backprojection (FDK) algorithm and the FDK results followed by median filtering (mFDK) as reference. The effect on spatial gradients and reconstruction artefacts is also investigated. The PNR curve illustrates that the results of SART + OS + TV finally converge to those of FDK but with less noise, which implies that the dose-OD calibration method for FDK is also applicable to the proposed algorithm. The CNR in selected regions-of-interest (ROIs) of SART + OS + TV results is almost double that of FDK and 50% higher than that of mFDK. The artefacts in SART + OS + TV results are still visible, but have been much suppressed with little spatial gradient loss. Based on the assessment, we can conclude that this hybrid SART + OS + TV algorithm outperforms both FDK and mFDK in denoising, preserving spatial dose gradients and reducing artefacts, and its effectiveness and efficiency are platform independent.
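
    A much-simplified version of the hybrid scheme can be written with an explicit system matrix: SART-style updates over ordered subsets of rays followed by a small gradient step on a smoothed total-variation term. The matrix A, the subset split and the step sizes below are toy assumptions, not the authors' implementation.

```python
import numpy as np

def tv_gradient(x2d, eps=1e-8):
    """Gradient of a smoothed isotropic total variation of a 2-D image."""
    dx = np.diff(x2d, axis=1, append=x2d[:, -1:])
    dy = np.diff(x2d, axis=0, append=x2d[-1:, :])
    mag = np.sqrt(dx ** 2 + dy ** 2 + eps)
    px, py = dx / mag, dy / mag
    div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
    return -div

def sart_os_tv(A, b, shape, n_subsets=4, n_iter=20, lam=0.02, relax=1.0):
    """Sketch: SART-style updates over ordered subsets of rays, followed by a
    small TV-minimisation step after each subset (assumes a dense matrix A)."""
    x = np.zeros(A.shape[1])
    row_sums = A.sum(axis=1) + 1e-12          # SART normalisation terms
    col_sums = A.sum(axis=0) + 1e-12
    subsets = np.array_split(np.arange(A.shape[0]), n_subsets)
    for _ in range(n_iter):
        for s in subsets:
            resid = (b[s] - A[s] @ x) / row_sums[s]
            x = x + relax * (A[s].T @ resid) / col_sums
            x = np.clip(x, 0, None)           # non-negativity
            x = x - lam * tv_gradient(x.reshape(shape)).ravel()
    return x.reshape(shape)

# toy problem: random rays through a small image (the matrix A is an assumption)
rng = np.random.default_rng(0)
shape = (16, 16)
A = rng.random((120, shape[0] * shape[1]))
truth = np.zeros(shape)
truth[4:12, 6:10] = 1.0
b = A @ truth.ravel()
print("mean abs error:", np.abs(sart_os_tv(A, b, shape) - truth).mean())
```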

  13. A Robust and Accurate Two-Step Auto-Labeling Conditional Iterative Closest Points (TACICP) Algorithm for Three-Dimensional Multi-Modal Carotid Image Registration.

    Directory of Open Access Journals (Sweden)

    Hengkai Guo

    Full Text Available Atherosclerosis is among the leading causes of death and disability. Combining information from multi-modal vascular images is an effective and efficient way to diagnose and monitor atherosclerosis, in which image registration is a key technique. In this paper a feature-based registration algorithm, the Two-step Auto-labeling Conditional Iterative Closest Points (TACICP) algorithm, is proposed to align three-dimensional carotid image datasets from ultrasound (US) and magnetic resonance (MR). Based on 2D segmented contours, a coarse-to-fine strategy is employed with two steps: a rigid initialization step and a non-rigid refinement step. The Conditional Iterative Closest Points (CICP) algorithm is used in the rigid initialization step to obtain a robust rigid transformation and label configurations. Then the labels and the CICP algorithm with a non-rigid thin-plate-spline (TPS) transformation model are introduced to handle non-rigid carotid deformation between different body positions. The results demonstrate that the proposed TACICP algorithm has achieved an average registration error of less than 0.2 mm with no failure case, which is superior to the state-of-the-art feature-based methods.
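
    For context, the sketch below shows plain rigid ICP with nearest-neighbour correspondences and a Kabsch/Procrustes update; the two-step TACICP with auto-labeling and thin-plate-spline refinement is considerably more involved and is not reproduced here. The point clouds are synthetic assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(P, Q):
    """Least-squares rotation R and translation t mapping P onto Q (Kabsch)."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0] * (P.shape[1] - 1) + [d]) @ U.T
    return R, cq - R @ cp

def icp(source, target, n_iter=30):
    """Classic ICP: alternate nearest-neighbour matching and rigid alignment."""
    src = source.copy()
    tree = cKDTree(target)
    for _ in range(n_iter):
        _, idx = tree.query(src)              # closest target point for each source point
        R, t = best_rigid_transform(src, target[idx])
        src = src @ R.T + t
    return src

# toy usage: recover a known rotation and shift of a 3-D point cloud
rng = np.random.default_rng(0)
target = rng.normal(size=(300, 3))
theta = 0.3
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
source = target @ Rz.T + 0.05
print("mean residual:", np.abs(icp(source, target) - target).mean())
```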

  14. Accelerated parabolic Radon domain 2D adaptive multiple subtraction with fast iterative shrinkage thresholding algorithm and its application in parabolic Radon domain hybrid demultiple method

    Science.gov (United States)

    Li, Zhong-xiao; Li, Zhen-chun

    2017-08-01

    Adaptive multiple subtraction is an important step for successfully conducting surface-related multiple elimination in marine seismic exploration. 2D adaptive multiple subtraction conducted in the parabolic Radon domain has been proposed to better separate primaries and multiples than 2D adaptive multiple subtraction conducted in the time-offset domain. Additionally, the parabolic Radon domain hybrid demultiple method combining parabolic Radon filtering and parabolic Radon domain 2D adaptive multiple subtraction can better remove multiples than the cascaded demultiple method using time-offset domain 2D adaptive multiple subtraction and the parabolic Radon transform method sequentially. To solve the matching filter in the optimization problem with L1 norm minimization constraint of primaries, traditional parabolic Radon domain 2D adaptive multiple subtraction uses the iterative reweighted least squares (IRLS) algorithm, which is computationally expensive for solving a weighted LS inversion in each iteration. In this paper we introduce the fast iterative shrinkage thresholding algorithm (FISTA) as a faster alternative to the IRLS algorithm for parabolic Radon domain 2D adaptive multiple subtraction. FISTA uses the shrinkage-thresholding operator to promote the sparsity of estimated primaries and solves the 2D matching filter with iterative steps. FISTA based parabolic Radon domain 2D adaptive multiple subtraction reduces the computation time effectively while achieving similar accuracy compared with IRLS based parabolic Radon domain 2D adaptive multiple subtraction. Additionally, the provided examples show that FISTA based parabolic Radon domain 2D adaptive multiple subtraction can better separate primaries and multiples than FISTA based time-offset domain 2D adaptive multiple subtraction. Furthermore, we introduce FISTA based parabolic Radon domain 2D adaptive multiple subtraction into the parabolic Radon domain hybrid demultiple method to improve its computation
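
    FISTA itself is compact enough to show in full for the generic problem min_x 0.5*||Ax - b||^2 + lam*||x||_1: a gradient step, a shrinkage-thresholding step and a momentum extrapolation. The toy data below stand in for the Radon-domain 2D matching-filter problem of the paper, which is not reproduced here.

```python
import numpy as np

def soft_threshold(x, tau):
    """Shrinkage-thresholding operator used in each FISTA step."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def fista(A, b, lam, n_iter=200):
    """FISTA for min_x 0.5*||A x - b||^2 + lam*||x||_1 (Beck and Teboulle, 2009)."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    y, t = x.copy(), 1.0
    for _ in range(n_iter):
        grad = A.T @ (A @ y - b)
        x_new = soft_threshold(y - grad / L, lam / L)
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)   # momentum extrapolation
        x, t = x_new, t_new
    return x

# toy usage: recover a sparse filter from noisy observations
rng = np.random.default_rng(0)
A = rng.normal(size=(100, 50))
x_true = np.zeros(50)
x_true[[3, 17, 40]] = [1.5, -2.0, 1.0]
b = A @ x_true + 0.1 * rng.normal(size=100)
print(np.round(fista(A, b, lam=1.0)[[3, 17, 40]], 2))
```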

  15. SU-F-P-45: Clinical Experience with Radiation Dose Reduction of CT Examinations Using Iterative Reconstruction Algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Weir, V [Baylor Scott and White Healthcare System, Dallas, TX (United States); Zhang, J [University of Kentucky, Lexington, KY (United States)

    2016-06-15

    Purpose: Iterative reconstruction (IR) algorithms have been adopted by medical centers in the past several years. IR has the potential to substantially reduce patient dose while maintaining or improving image quality. This study characterizes dose reductions in clinical settings for CT examinations using IR. Methods: We retrospectively analyzed dose information from patients who underwent abdomen/pelvis CT examinations with and without contrast media in multiple locations of our Healthcare system. A total of 743 patients scanned with ASIR on 64 slice GE lightspeed VCTs at three sites, and 30 patients scanned with SAFIRE on a Siemens 128 slice Definition Flash at one site, were retrieved. For comparison, patient data (n=291) from a GE scanner and patient data (n=61) from two Siemens scanners where filtered back-projection (FBP) was used were collected retrospectively. 30% and 10% ASIR, and SAFIRE Level 2, were used. CTDIvol, Dose-length-product (DLP), weight and height from all patients were recorded. Body mass index (BMI) was calculated accordingly. To convert CTDIvol to SSDE, AP and lateral dimensions at the mid-liver level were measured for each patient. Results: Compared with FBP, 30% ASIR reduced dose by 44.1% (SSDE: 12.19mGy vs. 21.83mGy), while 10% ASIR reduced dose by 20.6% (SSDE 17.32mGy vs. 21.83). Use of SAFIRE reduced dose by 61.4% (SSDE: 8.77mGy vs. 22.7mGy). The geometric mean for patients scanned with ASIR was larger than for patients scanned with FBP (geometric mean: 297.48 mm vs. 284.76 mm). The same trend was observed for the Siemens scanner where SAFIRE was used (geometric mean: 316 mm with SAFIRE vs. 239 mm with FBP). Patient size differences suggest that further dose reduction is possible. Conclusion: Our data confirmed that in clinical practice IR can significantly reduce dose to patients who undergo CT examinations, while meeting diagnostic requirements for image quality.

  16. ITER...ation

    International Nuclear Information System (INIS)

    Troyon, F.

    1997-01-01

    Recurrent attacks against ITER, the new generation of tokamak, are a mix of political and scientific arguments. This short article gives a historical review of the European fusion program. This program has made it possible to build and operate several installations with the aim of obtaining the experimental results necessary to move the program forward. ITER will bring together a fusion reactor core with technologies such as materials, superconductive coils, heating devices and instrumentation in order to validate and delimit the operating range. ITER will be a logical and decisive step towards the use of controlled fusion. (A.C.)

  17. Comparison of the effects of model-based iterative reconstruction and filtered back projection algorithms on software measurements in pulmonary subsolid nodules.

    Science.gov (United States)

    Cohen, Julien G; Kim, Hyungjin; Park, Su Bin; van Ginneken, Bram; Ferretti, Gilbert R; Lee, Chang Hyun; Goo, Jin Mo; Park, Chang Min

    2017-08-01

    To evaluate the differences between filtered back projection (FBP) and model-based iterative reconstruction (MBIR) algorithms on semi-automatic measurements in subsolid nodules (SSNs). Unenhanced CT scans of 73 SSNs obtained using the same protocol and reconstructed with both FBP and MBIR algorithms were evaluated by two radiologists. Diameter, mean attenuation, mass and volume of whole nodules and their solid components were measured. Intra- and interobserver variability and differences between FBP and MBIR were then evaluated using the Bland-Altman method and Wilcoxon tests. Longest diameter, volume and mass of nodules and those of their solid components were significantly higher using MBIR (p < 0.05), with mean differences of 1.1% (limits of agreement, -6.4 to 8.5%), 3.2% (-20.9 to 27.3%) and 2.9% (-16.9 to 22.7%) and 3.2% (-20.5 to 27%), 6.3% (-51.9 to 64.6%), 6.6% (-50.1 to 63.3%), respectively. The limits of agreement between FBP and MBIR were within the range of intra- and interobserver variability for both algorithms with respect to the diameter, volume and mass of nodules and their solid components. There were no significant differences in intra- or interobserver variability between FBP and MBIR (p > 0.05). Semi-automatic measurements of SSNs significantly differed between FBP and MBIR; however, the differences were within the range of measurement variability. • Intra- and interobserver reproducibility of measurements did not differ between FBP and MBIR. • Differences in SSNs' semi-automatic measurement induced by reconstruction algorithms were not clinically significant. • Semi-automatic measurement may be conducted regardless of reconstruction algorithm. • SSNs' semi-automated classification agreement (pure vs. part-solid) did not significantly differ between algorithms.

  18. Issues in developing parallel iterative algorithms for solving partial differential equations on a (transputer-based) distributed parallel computing system

    International Nuclear Information System (INIS)

    Rajagopalan, S.; Jethra, A.; Khare, A.N.; Ghodgaonkar, M.D.; Srivenkateshan, R.; Menon, S.V.G.

    1990-01-01

    Issues relating to implementing iterative procedures, for numerical solution of elliptic partial differential equations, on a distributed parallel computing system are discussed. Preliminary investigations show that a speed-up of about 3.85 is achievable on a four transputer pipeline network. (author). 2 figs., 3 appendixes, 7 refs.

  19. Adaptive iterative dose reduction algorithm in CT: Effect on image quality compared with filtered back projection in body phantoms of different sizes

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Milim; Lee, Jeong Min; Son, Hyo Shin; Han, Joon Koo; Choi, Byung Ihn [College of Medicine, Seoul National University, Seoul (Korea, Republic of); Yoon, Jeong Hee; Choi, Jin Woo [Dept. of Radiology, Seoul National University Hospital, Seoul (Korea, Republic of)

    2014-04-15

    To evaluate the impact of the adaptive iterative dose reduction (AIDR) three-dimensional (3D) algorithm in CT on noise reduction and image quality compared to the filtered back projection (FBP) algorithm, and to compare the effectiveness of AIDR 3D on noise reduction according to body habitus using phantoms of different sizes. Three different-sized phantoms with diameters of 24 cm, 30 cm, and 40 cm were built up using the American College of Radiology CT accreditation phantom and layers of pork belly fat. Each phantom was scanned eight times using different mAs. Images were reconstructed using FBP and three different strengths of AIDR 3D. The image noise, the contrast-to-noise ratio (CNR) and the signal-to-noise ratio (SNR) of the phantom were assessed. Two radiologists assessed the image quality of the 4 image sets in consensus. The effectiveness of AIDR 3D on noise reduction relative to FBP was also compared across the phantom sizes. Adaptive iterative dose reduction 3D significantly reduced the image noise compared with FBP and enhanced the SNR and CNR (p < 0.05) with improved image quality (p < 0.05). When a stronger reconstruction algorithm was used, a greater increase in SNR and CNR as well as greater noise reduction was achieved (p < 0.05). The noise reduction effect of AIDR 3D was significantly greater in the 40-cm phantom than in the 24-cm or 30-cm phantoms (p < 0.05). The AIDR 3D algorithm is effective in reducing image noise and improving image-quality parameters compared with the FBP algorithm, and its effectiveness may increase as the phantom size increases.

  20. Iterative algorithm based on a combination of vector similarity measure and B-spline functions for particle analysis in forward scattering

    Science.gov (United States)

    Wang, Tian'en; Shen, Jianqi; Lin, Chengjun

    2017-06-01

    The vector similarity measure (VSM) was recently introduced into the inverse problem for particle analysis based on forward light scattering and its modified version was proposed to adapt for multi-modal particle systems. It is found that the algorithm is stable and efficient but the extracted solutions are usually oscillatory, especially for widely distributed particle systems. In order to improve this situation, an iterative VSM method combined with cubic B-spline functions (B-VSM) is presented. Simulations and experiments show that, compared with the old versions, this modification is more robust and efficient.

  1. Proposing a new iterative learning control algorithm based on a non-linear least square formulation - Minimising draw-in errors

    Science.gov (United States)

    Endelt, B.

    2017-09-01

    Forming operations are subject to external disturbances and changing operating conditions, e.g. a new material batch or increasing tool temperature due to plastic work; material properties and lubrication are sensitive to tool temperature. It is generally accepted that forming operations are not stable over time, and it is not uncommon to adjust the process parameters during the first half hour of production, indicating that process instability develops gradually over time. Thus, an in-process feedback control scheme might not be necessary to stabilize the process, and an alternative approach is to apply an iterative learning algorithm which can learn from previously produced parts, i.e. a self-learning system that gradually reduces the error based on historical process information. What is proposed in the paper is a simple algorithm which can be applied to a wide range of sheet-metal forming processes. The input to the algorithm is the final flange edge geometry, and the basic idea is to reduce the least-square error between the current flange geometry and a reference geometry using a non-linear least square algorithm. The ILC scheme is applied to a square deep-drawing and the Numisheet’08 S-rail benchmark problem; the numerical tests show that the proposed control scheme is able to control and stabilise both processes.
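
    The learn-from-the-previous-part idea can be illustrated with the simplest (P-type) iterative learning control update u_{k+1} = u_k + L e_k on a toy linear process with a repeatable disturbance; the plant, gain and disturbance below are assumptions for illustration, not the non-linear least-squares formulation of the paper.

```python
import numpy as np

def run_process(u, rng):
    """Toy 'process': unknown linear map plus a repeatable disturbance and noise."""
    n = len(u)
    G = 0.8 * np.eye(n) + 0.1 * np.eye(n, k=-1)   # unknown plant (toy assumption)
    d = 0.5 * np.sin(np.linspace(0, np.pi, n))    # disturbance repeating every trial
    return G @ u + d + 0.01 * rng.normal(size=n)

def iterative_learning_control(reference, n_trials=30, gain=1.0, seed=0):
    """P-type ILC: after each produced part, update the input from the error
    measured on that part, u_{k+1} = u_k + gain * e_k, and repeat."""
    rng = np.random.default_rng(seed)
    u = np.zeros_like(reference)
    for _ in range(n_trials):
        y = run_process(u, rng)
        e = reference - y                 # e.g. draw-in error along the flange edge
        u = u + gain * e                  # learn from the previously produced part
    return u, np.linalg.norm(e)

ref = np.linspace(0.0, 1.0, 50)
u_final, err = iterative_learning_control(ref)
print("final error norm:", round(err, 4))
```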

  2. Submillisievert Computed Tomography of the Chest Using Model-Based Iterative Algorithm: Optimization of Tube Voltage With Regard to Patient Size.

    Science.gov (United States)

    Deák, Zsuzsanna; Maertz, Friedrich; Meurer, Felix; Notohamiprodjo, Susan; Mueck, Fabian; Geyer, Lucas L; Reiser, Maximilian F; Wirth, Stefan

    The aim of this study was to define the optimal tube potential for soft tissue and vessel visualization in dose-reduced chest CT protocols using a model-based iterative algorithm in average and overweight patients. Thirty-six patients receiving chest CT according to 3 protocols (120 kVp/noise index [NI], 60; 100 kVp/NI, 65; 80 kVp/NI, 70) were included in this prospective study, approved by the ethics committee. Patients' physical parameters and dose descriptors were recorded. Images were reconstructed with the model-based algorithm. Two radiologists evaluated image quality and lesion conspicuity; the protocols were intraindividually compared with the preceding control CT reconstructed with a statistical algorithm (120 kVp/NI, 20). Mean and standard deviation of attenuation of the muscle and fat tissues and signal-to-noise ratio of the aorta were measured. Diagnostic images (lesion conspicuity, 95%-100%) were acquired in average and overweight patients at 1.34, 1.02, and 1.08 mGy and at 3.41, 3.20, and 2.88 mGy at 120, 100, and 80 kVp, respectively. Data are given as CT dose index volume values. The model-based algorithm allows for submillisievert chest CT in average patients; the use of 100 kVp is recommended.

  3. A flexibility-based method via the iterated improved reduction system and the cuckoo optimization algorithm for damage quantification with limited sensors

    International Nuclear Information System (INIS)

    Zare Hosseinzadeh, Ali; Ghodrati Amiri, Gholamreza; Bagheri, Abdollah; Koo, Ki-Young

    2014-01-01

    In this paper, a novel and effective damage diagnosis algorithm is proposed to localize and quantify structural damage using incomplete modal data, considering the limited number of sensors attached to the structure. The damage detection problem is formulated as an optimization problem by computing static displacements in the reduced model of a structure subjected to a unique static load. The static responses are computed through the flexibility matrix of the damaged structure obtained based on the incomplete modal data of the structure. In the algorithm, an iterated improved reduction system method is applied to prepare an accurate reduced model of a structure. The optimization problem is solved via a new evolutionary optimization algorithm called the cuckoo optimization algorithm. The efficiency and robustness of the presented method are demonstrated through three numerical examples. Moreover, the efficiency of the method is verified by an experimental study of a five-story shear building structure on a shaking table considering only two sensors. The obtained damage identification results for the numerical and experimental studies show the suitable and stable performance of the proposed damage identification method for structures with limited sensors. (paper)

  4. Diagnostic value of fourth-generation iterative reconstruction algorithm with low-dose CT protocol in assessment of mesorectal fascia invasion in rectal cancer: comparison with magnetic resonance.

    Science.gov (United States)

    Ippolito, Davide; Drago, Silvia Girolama; Talei Franzesi, C R; Casiraghi, Alessandra; Sironi, Sandro

    2017-09-01

    The purpose of the article is to compare the diagnostic performance, radiation dose and image quality of low-dose CT with an iterative reconstruction algorithm (iDose4) and of standard-dose CT in the assessment of mesorectal fascia (MRF) invasion in rectal cancer patients. Ninety-one patients with biopsy-proven primary rectal adenocarcinoma underwent CT staging: 42 underwent the low-dose CT protocol and 49 the standard CT protocol. Low-dose contrast-enhanced MDCT scans were performed on a 256-slice scanner (ICT, Philips) using 120 kV, automated mAs modulation and the iDose4 iterative reconstruction algorithm. Standard-dose MDCT scans were performed on the same scanner with 120 kV and 200-300 mAs. All patients underwent a standard lower-abdomen MR study (on a 1.5T magnet), including multiplanar sequences, considered as the reference standard. Diagnostic accuracy of MRF assessment was determined on CT images for both CT protocols and compared with the MRI images. Dose-length product (DLP) and CT dose index (CTDI) calculated for both groups were compared and statistically analyzed. The low-dose protocol with iDose4 showed high diagnostic quality in the assessment of the MRF, with a significant reduction (23%; p = 0.0081) of radiation dose (DLP 2453.47) compared to the standard-dose examination (DLP 3194.32). The low-dose protocol combined with the iDose4 reconstruction algorithm offers high-quality images with a significant radiation dose reduction, useful in the evaluation of MRF involvement in rectal cancer patients.

  5. An automatic algorithm for blink-artifact suppression based on iterative template matching: application to single channel recording of cortical auditory evoked potentials

    Science.gov (United States)

    Valderrama, Joaquin T.; de la Torre, Angel; Van Dun, Bram

    2018-02-01

    Objective. Artifact reduction in electroencephalogram (EEG) signals is usually necessary to carry out data analysis appropriately. Despite the large number of denoising techniques available with a multichannel setup, there is a lack of efficient algorithms that remove (not only detect) blink-artifacts from a single channel EEG, which is of interest in many clinical and research applications. This paper describes and evaluates the iterative template matching and suppression (ITMS), a new method proposed for detecting and suppressing the artifact associated with the blink activity from a single channel EEG. Approach. The approach of ITMS consists of (a) an iterative process in which blink-events are detected and the blink-artifact waveform of the analyzed subject is estimated, (b) generation of a signal modeling the blink-artifact, and (c) suppression of this signal from the raw EEG. The performance of ITMS is compared with the multi-window summation of derivatives within a window (MSDW) technique using both synthesized and real EEG data. Main results. Results suggest that ITMS presents an adequate performance in detecting and suppressing blink-artifacts from a single channel EEG. When applied to the analysis of cortical auditory evoked potentials (CAEPs), ITMS provides a significant quality improvement in the resulting responses, i.e. in a cohort of 30 adults, the mean correlation coefficient improved from 0.37 to 0.65 when the blink-artifacts were detected and suppressed by ITMS. Significance. ITMS is an efficient solution to the problem of denoising blink-artifacts in single-channel EEG applications, both in clinical and research fields. The proposed ITMS algorithm is stable; automatic, since it does not require human intervention; low-invasive, because the EEG segments not contaminated by blink-artifacts remain unaltered; and easy to implement, as can be observed in the Matlab script implementing the algorithm provided as supporting material.
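
    The detect / re-estimate / subtract loop of steps (a)-(c) can be mimicked in a toy setting: correlate the signal with a blink template, refine the template from the detected epochs, then subtract a least-squares-scaled template at each event. The snippet below is such a toy sketch with synthetic data and assumed thresholds, not the published ITMS implementation.

```python
import numpy as np
from scipy.signal import find_peaks

def template_matching_suppression(eeg, fs, n_passes=3, width_s=0.4):
    """Toy ITMS-style loop: detect blink events, re-estimate the blink template
    from the detected epochs, then subtract a fitted template at each event."""
    w = int(width_s * fs)
    template = np.hanning(w)                       # crude initial blink shape
    clean = eeg.copy()
    for _ in range(n_passes):
        # detection: correlate with the current template and pick strong peaks
        corr = np.correlate(clean, template, mode="same")
        peaks, _ = find_peaks(corr, height=corr.std() * 3, distance=w)
        epochs = [clean[p - w // 2:p + w // 2] for p in peaks
                  if w // 2 <= p <= len(clean) - w // 2]
        if not epochs:
            break
        template = np.mean(epochs, axis=0)         # refine the subject-specific template
        # suppression: least-squares scale of the template at each event
        clean = eeg.copy()
        for p in peaks:
            if w // 2 <= p <= len(clean) - w // 2:
                seg = slice(p - w // 2, p + w // 2)
                a = clean[seg] @ template / (template @ template)
                clean[seg] = clean[seg] - a * template
    return clean

# toy usage: EEG-like noise with three inserted "blinks"
fs, rng = 250, np.random.default_rng(0)
eeg = rng.normal(scale=5.0, size=10 * fs)
blink = 80.0 * np.hanning(int(0.4 * fs))
for p in (2 * fs, 5 * fs, 8 * fs):
    eeg[p:p + len(blink)] += blink
print("std before/after:", round(eeg.std(), 1),
      round(template_matching_suppression(eeg, fs).std(), 1))
```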

  6. Fractional Fourier domain optical image hiding using phase retrieval algorithm based on iterative nonlinear double random phase encoding.

    Science.gov (United States)

    Wang, Xiaogang; Chen, Wen; Chen, Xudong

    2014-09-22

    We present a novel image hiding method based on phase retrieval algorithm under the framework of nonlinear double random phase encoding in fractional Fourier domain. Two phase-only masks (POMs) are efficiently determined by using the phase retrieval algorithm, in which two cascaded phase-truncated fractional Fourier transforms (FrFTs) are involved. No undesired information disclosure, post-processing of the POMs or digital inverse computation appears in our proposed method. In order to achieve the reduction in key transmission, a modified image hiding method based on the modified phase retrieval algorithm and logistic map is further proposed in this paper, in which the fractional orders and the parameters with respect to the logistic map are regarded as encryption keys. Numerical results have demonstrated the feasibility and effectiveness of the proposed algorithms.

  7. Lower Bounds for Howard's Algorithm for Finding Minimum Mean-Cost Cycles

    DEFF Research Database (Denmark)

    Hansen, Thomas Dueholm; Zwick, Uri

    2010-01-01

    Howard’s policy iteration algorithm is one of the most widely used algorithms for finding optimal policies for controlling Markov Decision Processes (MDPs). When applied to weighted directed graphs, which may be viewed as Deterministic MDPs (DMDPs), Howard’s algorithm can be used to find Minimum Mean-Cost Cycles...
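
    Since this record concerns Howard's algorithm, a standard discounted-MDP version of policy iteration (exact policy evaluation followed by greedy improvement) is sketched below; the mean-cost-cycle and average-reward settings analysed in the paper differ in the evaluation step. The toy transition and reward numbers are illustrative assumptions.

```python
import numpy as np

def policy_iteration(P, R, gamma=0.95):
    """Howard's policy iteration for a finite discounted MDP.

    P[a, s, s'] are transition probabilities, R[a, s] expected rewards."""
    n_actions, n_states, _ = P.shape
    policy = np.zeros(n_states, dtype=int)
    while True:
        # policy evaluation: solve (I - gamma * P_pi) v = r_pi exactly
        P_pi = P[policy, np.arange(n_states)]
        r_pi = R[policy, np.arange(n_states)]
        v = np.linalg.solve(np.eye(n_states) - gamma * P_pi, r_pi)
        # policy improvement: greedy with respect to v
        q = R + gamma * np.einsum("asx,x->as", P, v)
        new_policy = q.argmax(axis=0)
        if np.array_equal(new_policy, policy):
            return policy, v
        policy = new_policy

# toy 2-state, 2-action MDP (numbers are illustrative assumptions)
P = np.array([[[0.9, 0.1], [0.2, 0.8]],     # action 0
              [[0.1, 0.9], [0.8, 0.2]]])    # action 1
R = np.array([[1.0, 0.0],                   # action 0 rewards per state
              [0.0, 2.0]])                  # action 1 rewards per state
print(policy_iteration(P, R))
```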

  8. iHadoop: Asynchronous Iterations Support for MapReduce

    KAUST Repository

    Elnikety, Eslam

    2011-08-01

    MapReduce is a distributed programming framework designed to ease the development of scalable data-intensive applications for large clusters of commodity machines. Most machine learning and data mining applications involve iterative computations over large datasets, such as the Web hyperlink structures and social network graphs. Yet, the MapReduce model does not efficiently support this important class of applications. The architecture of MapReduce, most critically its dataflow techniques and task scheduling, is completely unaware of the nature of iterative applications; tasks are scheduled according to a policy that optimizes the execution for a single iteration which wastes bandwidth, I/O, and CPU cycles when compared with an optimal execution for a consecutive set of iterations. This work presents iHadoop, a modified MapReduce model, and an associated implementation, optimized for iterative computations. The iHadoop model schedules iterations asynchronously. It connects the output of one iteration to the next, allowing both to process their data concurrently. iHadoop's task scheduler exploits inter-iteration data locality by scheduling tasks that exhibit a producer/consumer relation on the same physical machine allowing a fast local data transfer. For those iterative applications that require satisfying certain criteria before termination, iHadoop runs the check concurrently during the execution of the subsequent iteration to further reduce the application's latency. This thesis also describes our implementation of the iHadoop model, and evaluates its performance against Hadoop, the widely used open source implementation of MapReduce. Experiments using different data analysis applications over real-world and synthetic datasets show that iHadoop performs better than Hadoop for iterative algorithms, reducing execution time of iterative applications by 25% on average. Furthermore, integrating iHadoop with HaLoop, a variant Hadoop implementation that caches

  9. Improvements of the Penalty Avoiding Rational Policy Making Algorithm and an Application to the Othello Game

    Science.gov (United States)

    Miyazaki, Kazuteru; Tsuboi, Sougo; Kobayashi, Shigenobu

    The purpose of reinforcement learning is, in general, to learn an optimal policy. However, in two-player games such as Othello, it is important to acquire a penalty-avoiding policy. In this paper, we focus on the formation of a penalty-avoiding policy based on the Penalty Avoiding Rational Policy Making algorithm [Miyazaki 01]. In applying it to large-scale problems, we are confronted with the curse of dimensionality. We introduce several ideas and heuristics to overcome the combinatorial explosion in large-scale problems. First, we propose an algorithm that saves memory by calculating state transitions. Second, we describe how to restrict exploration using two types of knowledge: a KIFU database and an evaluation function. We show that our learning player can always defeat the well-known Othello program KITTY.

  10. Comparison of the effects of model-based iterative reconstruction and filtered back projection algorithms on software measurements in pulmonary subsolid nodules

    Energy Technology Data Exchange (ETDEWEB)

    Cohen, Julien G. [Seoul National University College of Medicine, Department of Radiology, Seoul (Korea, Republic of); Seoul National University Medical Research Center, Institute of Radiation Medicine, Seoul (Korea, Republic of); Centre Hospitalier Universitaire de Grenoble, Clinique Universitaire de Radiologie et Imagerie Medicale (CURIM), Universite Grenoble Alpes, Grenoble Cedex 9 (France); Kim, Hyungjin; Park, Su Bin [Seoul National University College of Medicine, Department of Radiology, Seoul (Korea, Republic of); Seoul National University Medical Research Center, Institute of Radiation Medicine, Seoul (Korea, Republic of); Ginneken, Bram van [Radboud University Nijmegen Medical Center, Department of Radiology and Nuclear Medicine, Nijmegen (Netherlands); Ferretti, Gilbert R. [Centre Hospitalier Universitaire de Grenoble, Clinique Universitaire de Radiologie et Imagerie Medicale (CURIM), Universite Grenoble Alpes, Grenoble Cedex 9 (France); Institut A Bonniot, INSERM U 823, La Tronche (France); Lee, Chang Hyun [Seoul National University College of Medicine, Department of Radiology, Seoul (Korea, Republic of); Goo, Jin Mo; Park, Chang Min [Seoul National University College of Medicine, Department of Radiology, Seoul (Korea, Republic of); Seoul National University Medical Research Center, Institute of Radiation Medicine, Seoul (Korea, Republic of); Seoul National University College of Medicine, Cancer Research Institute, Seoul (Korea, Republic of)

    2017-08-15

    To evaluate the differences between filtered back projection (FBP) and model-based iterative reconstruction (MBIR) algorithms on semi-automatic measurements in subsolid nodules (SSNs). Unenhanced CT scans of 73 SSNs obtained using the same protocol and reconstructed with both FBP and MBIR algorithms were evaluated by two radiologists. Diameter, mean attenuation, mass and volume of whole nodules and their solid components were measured. Intra- and interobserver variability and differences between FBP and MBIR were then evaluated using the Bland-Altman method and Wilcoxon tests. Longest diameter, volume and mass of nodules and those of their solid components were significantly higher using MBIR (p < 0.05) with mean differences of 1.1% (limits of agreement, -6.4 to 8.5%), 3.2% (-20.9 to 27.3%) and 2.9% (-16.9 to 22.7%) and 3.2% (-20.5 to 27%), 6.3% (-51.9 to 64.6%), 6.6% (-50.1 to 63.3%), respectively. The limits of agreement between FBP and MBIR were within the range of intra- and interobserver variability for both algorithms with respect to the diameter, volume and mass of nodules and their solid components. There were no significant differences in intra- or interobserver variability between FBP and MBIR (p > 0.05). Semi-automatic measurements of SSNs significantly differed between FBP and MBIR; however, the differences were within the range of measurement variability. (orig.)
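
    As a side note, the Bland-Altman 95% limits of agreement used in the study above are simply the mean of the paired differences plus or minus 1.96 standard deviations of those differences; a minimal sketch with made-up paired measurements (not data from the study) is:

      import numpy as np

      def bland_altman_limits(x, y):
          """Bias and 95% limits of agreement between two paired measurement series."""
          diff = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
          bias = diff.mean()
          half_width = 1.96 * diff.std(ddof=1)
          return bias, bias - half_width, bias + half_width

      # Hypothetical volumes (mm^3) of the same nodules measured on MBIR and FBP images.
      mbir = np.array([105.0, 259.0, 92.0, 321.0, 148.0])
      fbp  = np.array([102.0, 251.0, 89.0, 310.0, 150.0])
      print(bland_altman_limits(mbir, fbp))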

  11. Diagnostic performance of reduced-dose CT with a hybrid iterative reconstruction algorithm for the detection of hypervascular liver lesions: a phantom study

    Energy Technology Data Exchange (ETDEWEB)

    Nakamoto, Atsushi; Tanaka, Yoshikazu; Juri, Hiroshi; Nakai, Go; Narumi, Yoshifumi [Osaka Medical College, Department of Radiology, Takatsuki, Osaka (Japan); Yoshikawa, Shushi [Osaka Medical College Hospital, Central Radiology Department, Takatsuki, Osaka (Japan)

    2017-07-15

    To investigate the diagnostic performance of reduced-dose CT with a hybrid iterative reconstruction (IR) algorithm for the detection of hypervascular liver lesions. Thirty liver phantoms with or without simulated hypervascular lesions were scanned with a 320-slice CT scanner with control-dose (40 mAs) and reduced-dose (30 and 20 mAs) settings. Control-dose images were reconstructed with filtered back projection (FBP), and reduced-dose images were reconstructed with FBP and a hybrid IR algorithm. Objective image noise and the lesion to liver contrast-to-noise ratio (CNR) were evaluated quantitatively. Images were interpreted independently by 2 blinded radiologists, and jackknife alternative free-response receiver-operating characteristic (JAFROC) analysis was performed. Hybrid IR images with reduced-dose settings (both 30 and 20 mAs) yielded significantly lower objective image noise and higher CNR than control-dose FBP images (P <.05). However, hybrid IR images with reduced-dose settings had lower JAFROC1 figure of merit than control-dose FBP images, although only the difference between 20 mAs images and control-dose FBP images was significant for both readers (P <.01). An aggressive reduction of the radiation dose would impair the detectability of hypervascular liver lesions, although objective image noise and CNR would be preserved by a hybrid IR algorithm. (orig.)

  12. EM Algorithm and Stochastic Control in Economics

    OpenAIRE

    Kou, Steven; Peng, Xianhua; Xu, Xingbo

    2016-01-01

    Generalising the idea of the classical EM algorithm that is widely used for computing maximum likelihood estimates, we propose an EM-Control (EM-C) algorithm for solving multi-period finite time horizon stochastic control problems. The new algorithm sequentially updates the control policies in each time period using Monte Carlo simulation in a forward-backward manner; in other words, the algorithm goes forward in simulation and backward in optimization in each iteration. Similar to the EM alg...

  13. Strategy Iteration Is Strongly Polynomial for 2-Player Turn-Based Stochastic Games with a Constant Discount Factor

    DEFF Research Database (Denmark)

    Hansen, Thomas Dueholm; Miltersen, Peter Bro; Zwick, Uri

    2013-01-01

    -based stochastic games with discounted zero-sum rewards. This provides the first strongly polynomial algorithm for solving these games, settling a long-standing open problem. Combined with other recent results, this provides a complete characterization of the complexity of the standard strategy iteration algorithm...... terminates after at most O((m/(1−γ)) log(n/(1−γ))) iterations. Second, and more importantly, we show that the same bound applies to the number of iterations performed by the strategy iteration (or strategy improvement) algorithm, a generalization of Howard’s policy iteration algorithm used for solving 2-player turn...... for 2-player turn-based stochastic games; it is strongly polynomial for a fixed discount factor, and exponential otherwise.

  14. An integrated DEA-COLS-SFA algorithm for optimization and policy making of electricity distribution units

    International Nuclear Information System (INIS)

    Azadeh, A.; Ghaderi, S.F.; Omrani, H.; Eivazy, H.

    2009-01-01

    This paper presents an integrated data envelopment analysis (DEA)-corrected ordinary least squares (COLS)-stochastic frontier analysis (SFA)-principal component analysis (PCA)-numerical taxonomy (NT) algorithm for performance assessment, optimization and policy making of electricity distribution units. Previous studies have generally used input-output DEA models for benchmarking and evaluation of electricity distribution units. However, this study proposes an integrated flexible approach to measure the rank and choose the best version of the DEA method for optimization and policy making purposes. It covers both static and dynamic aspects of the information environment due to the involvement of SFA, which is finally compared with the best DEA model through the Spearman correlation technique. The integrated approach would yield improved ranking and optimization of electricity distribution systems. To illustrate the usability and reliability of the proposed algorithm, 38 electricity distribution units in Iran have been considered, ranked and optimized by the proposed algorithm of this study.

  15. Asymmetric double-image encryption method by using iterative phase retrieval algorithm in fractional Fourier transform domain

    Science.gov (United States)

    Sui, Liansheng; Lu, Haiwei; Ning, Xiaojuan; Wang, Yinghui

    2014-02-01

    A double-image encryption scheme is proposed based on an asymmetric technique, in which the encryption and decryption processes are different and the encryption keys are not identical to the decryption ones. First, a phase-only function (POF) of each plain image is retrieved by using an iterative process and then encoded into an interim matrix. Two interim matrices are directly modulated into a complex image by using the convolution operation in the fractional Fourier transform (FrFT) domain. Second, the complex image is encrypted into the gray scale ciphertext with stationary white-noise distribution by using the FrFT. In the encryption process, three random phase functions are used as encryption keys to retrieve the POFs of plain images. Simultaneously, two decryption keys are generated in the encryption process, which make the optical implementation of the decryption process convenient and efficient. The proposed encryption scheme has high robustness to various attacks, such as brute-force attack, known plaintext attack, cipher-only attack, and specific attack. Numerical simulations demonstrate the validity and security of the proposed method.

  16. A hybrid approach based on logistic classification and iterative contrast enhancement algorithm for hyperintense multiple sclerosis lesion segmentation.

    Science.gov (United States)

    da Silva Senra Filho, Antonio Carlos

    2017-11-18

    Multiple sclerosis (MS) is a neurodegenerative disease of increasing importance in recent years, for which the T2-weighted fluid-attenuated inversion recovery (FLAIR) MRI technique has been adopted for hyperintense MS lesion assessment. Many automatic lesion segmentation approaches have been proposed in the literature to assist health professionals. In this study, a new hybrid lesion segmentation approach based on logistic classification (LC) and the iterative contrast enhancement (ICE) method is proposed (LC+ICE). T1 and FLAIR MRI images from 32 secondary progressive MS (SPMS) patients were used in the LC+ICE method, with manual segmentation serving as the ground truth lesion segmentation. The DICE, Sensitivity, Specificity, Area under the ROC curve (AUC), and Volume Similarity measures showed that the LC+ICE method is able to provide a precise and robust lesion segmentation estimate, which was compared with two recent FLAIR lesion segmentation approaches. In addition, the proposed method also showed stable segmentation across lesion loads, indicating wide applicability to different disease stages. The LC+ICE procedure is a suitable alternative to assist the manual FLAIR hyperintense MS lesion segmentation task.

  17. Determining Optimal Replacement Policy with an Availability Constraint via Genetic Algorithms

    Directory of Open Access Journals (Sweden)

    Shengliang Zong

    2017-01-01

    Full Text Available We develop a model and a genetic algorithm for determining an optimal replacement policy for power equipment subject to Poisson shocks. If the time interval of two consecutive shocks is less than a threshold value, the failed equipment can be repaired. We assume that the operating time after repair is stochastically nonincreasing and the repair time is exponentially distributed with a geometrically increasing mean. Our objective is to minimize the expected average cost under an availability requirement. Based on this average cost function, we propose a genetic algorithm to locate the optimal replacement policy N to minimize the average cost rate. The results show that the GA is effective and efficient in finding the optimal solutions. The availability of equipment has a significant effect on the optimal replacement policy. Many practical systems fit the model developed in this paper.
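
    As a generic illustration of the search machinery only (not the authors' cost or availability model), a bare-bones genetic algorithm over the integer replacement threshold N might look like the sketch below; the cost function and availability model are invented placeholders, and the availability requirement is handled as a simple penalty.

      import numpy as np

      rng = np.random.default_rng(0)

      def cost(N):
          """Placeholder long-run average cost with a penalty when availability is too low."""
          availability = 1.0 - 0.004 * N          # invented monotone availability model
          base = 50.0 / N + 2.0 * N               # invented repair-vs-replacement trade-off
          return base + (1e3 if availability < 0.90 else 0.0)

      def genetic_search(lo=1, hi=50, pop_size=20, generations=60, mut_rate=0.3):
          pop = rng.integers(lo, hi + 1, size=pop_size)
          for _ in range(generations):
              fitness = np.array([cost(n) for n in pop])
              parents = pop[np.argsort(fitness)[: pop_size // 2]]     # truncation selection
              children = []
              while len(children) < pop_size:
                  a, b = rng.choice(parents, size=2)
                  child = (a + b) // 2                                # integer "crossover"
                  if rng.random() < mut_rate:
                      child = int(np.clip(child + rng.integers(-3, 4), lo, hi))
                  children.append(child)
              pop = np.array(children)
          best = int(pop[np.argmin([cost(n) for n in pop])])
          return best, cost(best)

      print(genetic_search())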

  18. Real-time estimation of prostate tumor rotation and translation with a kV imaging system based on an iterative closest point algorithm.

    Science.gov (United States)

    Tehrani, Joubin Nasehi; O'Brien, Ricky T; Poulsen, Per Rugaard; Keall, Paul

    2013-12-07

    Previous studies have shown that during cancer radiotherapy a small translation or rotation of the tumor can lead to errors in dose delivery. Current best practice in radiotherapy accounts for tumor translations, but is unable to address rotation due to the lack of a reliable real-time estimate. We have developed a method based on the iterative closest point (ICP) algorithm that can compute rotation from kilovoltage x-ray images acquired during radiation treatment delivery. A total of 11 748 kilovoltage (kV) images acquired from ten patients (one fraction for each patient) were used to evaluate our tumor rotation algorithm. For each kV image, the three-dimensional coordinates of three fiducial markers inside the prostate were calculated. The three-dimensional coordinates were used as input to the ICP algorithm to calculate the real-time tumor rotation and translation around three axes. The results show that the root mean square error for real-time calculation of tumor displacement was improved from a mean of 0.97 mm with the stand-alone translation to a mean of 0.16 mm by adding real-time rotation and translation displacement with the ICP algorithm. The standard deviation (SD) of rotation for the ten patients was 2.3°, 0.89° and 0.72° for rotation around the right-left (RL), anterior-posterior (AP) and superior-inferior (SI) directions respectively. The correlation between all six degrees of freedom showed that the highest correlation belonged to the AP and SI translation with a correlation of 0.67. The second highest correlation in our study was between the rotation around RL and rotation around AP, with a correlation of -0.33. Our real-time algorithm for calculation of rotation also confirms previous studies that have shown the maximum SD belongs to AP translation and rotation around RL. ICP is a reliable and fast algorithm for estimating real-time tumor rotation which could create a pathway to investigational clinical treatment studies requiring real
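
    The geometric core of such an ICP-style estimate, recovering the rigid rotation and translation that best align two sets of 3D marker coordinates with known correspondence, is the standard SVD (Kabsch) solution sketched below; this is a generic illustration rather than the authors' implementation, and the marker coordinates in the usage example are invented.

      import numpy as np

      def rigid_transform_3d(src, dst):
          """Least-squares R (rotation) and t (translation) with R @ src[i] + t ~= dst[i].

          src, dst -- (N, 3) arrays of corresponding 3D points, e.g. fiducial markers.
          """
          src_mean, dst_mean = src.mean(axis=0), dst.mean(axis=0)
          H = (src - src_mean).T @ (dst - dst_mean)            # 3x3 cross-covariance
          U, _, Vt = np.linalg.svd(H)
          D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
          R = Vt.T @ D @ U.T                                   # proper rotation, det(R) = +1
          t = dst_mean - R @ src_mean
          return R, t

      # Hypothetical planned vs. observed marker positions (mm).
      planned  = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0], [0.0, 15.0, 5.0]])
      observed = np.array([[0.2, -0.1, 0.0], [10.1, 0.4, -0.1], [-0.3, 15.2, 5.1]])
      R, t = rigid_transform_3d(planned, observed)

    In a full ICP loop a correspondence step would precede this solve; with three labelled fiducials the correspondence is known, so a single solve per kV image suffices.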

  19. Iterative solution of multiple radiation and scattering problems in structural acoustics using the BL-QMR algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Malhotra, M. [Stanford Univ., CA (United States)

    1996-12-31

    Finite-element discretizations of time-harmonic acoustic wave problems in exterior domains result in large sparse systems of linear equations with complex symmetric coefficient matrices. In many situations, these matrix problems need to be solved repeatedly for different right-hand sides, but with the same coefficient matrix. For instance, multiple right-hand sides arise in radiation problems due to multiple load cases, and also in scattering problems when multiple angles of incidence of an incoming plane wave need to be considered. In this talk, we discuss the iterative solution of multiple linear systems arising in radiation and scattering problems in structural acoustics by means of a complex symmetric variant of the BL-QMR method. First, we summarize the governing partial differential equations for time-harmonic structural acoustics, the finite-element discretization of these equations, and the resulting complex symmetric matrix problem. Next, we sketch the special version of BL-QMR method that exploits complex symmetry, and we describe the preconditioners we have used in conjunction with BL-QMR. Finally, we report some typical results of our extensive numerical tests to illustrate the typical convergence behavior of BL-QMR method for multiple radiation and scattering problems in structural acoustics, to identify appropriate preconditioners for these problems, and to demonstrate the importance of deflation in block Krylov-subspace methods. Our numerical results show that the multiple systems arising in structural acoustics can be solved very efficiently with the preconditioned BL-QMR method. In fact, for multiple systems with up to 40 and more different right-hand sides we get consistent and significant speed-ups over solving the systems individually.

  20. A Combination of Modal Synthesis and Subspace Iteration for an Efficient Algorithm for Modal Analysis within a FE-Code

    Directory of Open Access Journals (Sweden)

    M.W. Zehn

    2003-01-01

    Full Text Available Various well-known modal synthesis methods exist in the literature, all based upon certain assumptions for the relation of generalised modal co-ordinates with internal modal co-ordinates. If employed in a dynamic FE substructure/superelement technique, the generalised modal co-ordinates are represented by the master degrees of freedom (DOF) of the master nodes of the substructure. To conduct FE modal analysis the modal synthesis method can be integrated to reduce the number of necessary master nodes or to ease the process of defining additional master points within the structure. The paper presents such a combined method, which can be integrated very efficiently and seamlessly into a special subspace eigenvalue problem solver with no need to alter the FE system matrices within the FE code. Accordingly, the merits of the new algorithm are its easy implementation into an FE code, the reduced effort to carry out modal synthesis, and its versatility in dealing with superelements. The paper presents examples to illustrate the proper working of the proposed algorithm.
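
    As background, plain subspace (orthogonal) iteration, the family of eigen-solvers the combined method above plugs into, can be written in a few lines; the dense random test matrix and block size here are purely illustrative, whereas a finite element code would work with factorised stiffness and mass matrices and the modal-synthesis reduction described in the paper.

      import numpy as np

      def subspace_iteration(A, k, iters=200, seed=0):
          """Approximate the k largest-magnitude eigenpairs of a symmetric matrix A."""
          rng = np.random.default_rng(seed)
          Q, _ = np.linalg.qr(rng.standard_normal((A.shape[0], k)))
          for _ in range(iters):
              Q, _ = np.linalg.qr(A @ Q)           # block power step + re-orthogonalisation
          T = Q.T @ A @ Q                           # Rayleigh-Ritz projection
          eigvals, S = np.linalg.eigh(T)
          return eigvals, Q @ S

      # Usage on a small random symmetric matrix.
      rng = np.random.default_rng(1)
      B = rng.standard_normal((50, 50)); A = (B + B.T) / 2
      vals, vecs = subspace_iteration(A, k=4)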

  1. Iterative optimization in inverse problems

    CERN Document Server

    Byrne, Charles L

    2014-01-01

    Iterative Optimization in Inverse Problems brings together a number of important iterative algorithms for medical imaging, optimization, and statistical estimation. It incorporates recent work that has not appeared in other books and draws on the author's considerable research in the field, including his recently developed class of SUMMA algorithms. Related to sequential unconstrained minimization methods, the SUMMA class includes a wide range of iterative algorithms well known to researchers in various areas, such as statistics and image processing. Organizing the topics from general to more

  2. A structural dynamic factor model for the effects of monetary policy estimated by the EM algorithm

    DEFF Research Database (Denmark)

    Bork, Lasse

    This paper applies the maximum likelihood based EM algorithm to a large-dimensional factor analysis of US monetary policy. Specifically, economy-wide effects of shocks to the US federal funds rate are estimated in a structural dynamic factor model in which 100+ US macroeconomic and financial time...... as opposed to the orthogonal factors resulting from the popular principal component approach to structural factor models. Correlated factors are economically more sensible and important for a richer monetary policy transmission mechanism. Secondly, I consider both static factor loadings as well as dynamic...

  3. Determining Optimal Replacement Policy with an Availability Constraint via Genetic Algorithms

    OpenAIRE

    Zong, Shengliang; Chai, Guorong; Su, Yana

    2017-01-01

    We develop a model and a genetic algorithm for determining an optimal replacement policy for power equipment subject to Poisson shocks. If the time interval of two consecutive shocks is less than a threshold value, the failed equipment can be repaired. We assume that the operating time after repair is stochastically nonincreasing and the repair time is exponentially distributed with a geometric increasing mean. Our objective is to minimize the expected average cost under an availability requi...

  4. Iterative algorithm to compute the maximal and stabilising solutions of a general class of discrete-time Riccati-type equations

    Science.gov (United States)

    Dragan, Vasile; Morozan, Toader; Stoica, Adrian-Mihail

    2010-04-01

    In this article an iterative method to compute the maximal solution and the stabilising solution, respectively, of a wide class of discrete-time nonlinear equations on the linear space of symmetric matrices is proposed. The class of discrete-time nonlinear equations under consideration contains, as special cases, different types of discrete-time Riccati equations involved in various control problems for discrete-time stochastic systems. This article may be viewed as an addendum to the work of Dragan and Morozan (Dragan, V. and Morozan, T. (2009), 'A Class of Discrete Time Generalized Riccati Equations', Journal of Difference Equations and Applications, first published on 11 December 2009 (iFirst), doi: 10.1080/10236190802389381) where necessary and sufficient conditions for the existence of the maximal solution and stabilising solution of this kind of discrete-time nonlinear equations are given. The aim of this article is to provide a procedure for numerical computation of the maximal solution and the stabilising solution, respectively, simpler than the method based on the Newton-Kantorovich algorithm.
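
    For the special case of the standard deterministic discrete-time algebraic Riccati equation, the kind of simple fixed-point sweep that the article generalises to stochastic Riccati-type equations looks as follows; the system matrices in the usage example are illustrative only, and the article's iteration handles a much wider class of equations.

      import numpy as np

      def dare_fixed_point(A, B, Q, R, iters=1000, tol=1e-12):
          """Iterate X <- A'XA - A'XB (R + B'XB)^{-1} B'XA + Q towards the stabilising solution."""
          X = Q.copy()
          for _ in range(iters):
              gain = np.linalg.solve(R + B.T @ X @ B, B.T @ X @ A)
              X_next = A.T @ X @ A - A.T @ X @ B @ gain + Q
              if np.max(np.abs(X_next - X)) < tol:
                  return X_next
              X = X_next
          return X

      # Usage example: double integrator with cheap control.
      A = np.array([[1.0, 1.0], [0.0, 1.0]])
      B = np.array([[0.0], [1.0]])
      Q = np.eye(2)
      R = np.array([[0.1]])
      X = dare_fixed_point(A, B, Q, R)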

  5. Localizing intracavitary brachytherapy applicators from cone-beam CT x-ray projections via a novel iterative forward projection matching algorithm

    International Nuclear Information System (INIS)

    Pokhrel, Damodar; Murphy, Martin J.; Todor, Dorin A.; Weiss, Elisabeth; Williamson, Jeffrey F.

    2011-01-01

    Purpose: To present a novel method for reconstructing the 3D pose (position and orientation) of radio-opaque applicators of known but arbitrary shape from a small set of 2D x-ray projections in support of intraoperative brachytherapy planning. Methods: The generalized iterative forward projection matching (gIFPM) algorithm finds the six degree-of-freedom pose of an arbitrary rigid object by minimizing the sum-of-squared-intensity differences (SSQD) between the computed and experimentally acquired autosegmented projections of the object. Starting with an initial estimate of the object's pose, gIFPM iteratively refines the pose parameters (3D position and three Euler angles) until the SSQD converges. The object, here specialized to a Fletcher-Weeks intracavitary brachytherapy (ICB) applicator, is represented by a fine mesh of discrete points derived from complex combinatorial geometric models of the actual applicators. Three pairs of computed and measured projection images with known imaging geometry are used. Projection images of an intrauterine tandem and colpostats were acquired from an ACUITY cone-beam CT digital simulator. An image postprocessing step was performed to create blurred binary applicator-only images. To quantify gIFPM accuracy, the reconstructed 3D pose of the applicator model was forward projected and overlaid with the measured images, and the nearest-neighbor applicator positional difference was calculated empirically for each image pair. Results: In the numerical simulations, the tandem and colpostats positions (x,y,z) and orientations (α,β,γ) were estimated with accuracies of 0.6 mm and 2 deg., respectively. For experimentally acquired images of actual applicators, the residual 2D registration error was less than 1.8 mm for each image pair, corresponding to about 1 mm positioning accuracy at isocenter, with a total computation time of less than 1.5 min on a 1 GHz processor. Conclusions: This work describes a novel, accurate, fast, and completely

  6. ITER safety

    International Nuclear Information System (INIS)

    Raeder, J.; Piet, S.; Buende, R.

    1991-01-01

    As part of the series of publications by the IAEA that summarize the results of the Conceptual Design Activities for the ITER project, this document describes the ITER safety analyses. It contains an assessment of normal operation effluents, accident scenarios, plasma chamber safety, tritium system safety, magnet system safety, external loss of coolant and coolant flow problems, and a waste management assessment, while it describes the implementation of the safety approach for ITER. The document ends with a list of major conclusions, a set of topical remarks on technical safety issues, and recommendations for the Engineering Design Activities, safety considerations for siting ITER, and recommendations with regard to the safety issues for the R and D for ITER. Refs, figs and tabs

  7. Algorithms

    Indian Academy of Sciences (India)

    have been found in Vedic Mathematics which are dated much before Euclid's algorithm. A programming language is used to describe an algorithm for execution on a computer. An algorithm expressed using a programming language is called a program. From activities 1-3, we can observe that: • Each activity is a command.

  8. Vascular diameter measurement in CT angiography: comparison of model-based iterative reconstruction and standard filtered back projection algorithms in vitro.

    Science.gov (United States)

    Suzuki, Shigeru; Machida, Haruhiko; Tanaka, Isao; Ueno, Eiko

    2013-03-01

    The purpose of this study was to evaluate the performance of model-based iterative reconstruction (MBIR) in measurement of the inner diameter of models of blood vessels and compare performance between MBIR and a standard filtered back projection (FBP) algorithm. Vascular models with wall thicknesses of 0.5, 1.0, and 1.5 mm were scanned with a 64-MDCT unit and densities of contrast material yielding 275, 396, and 542 HU. Images were reconstructed with MBIR and FBP, and the mean diameter of each model vessel was measured by software automation. Twenty separate measurements were repeated for each vessel, and variance among the repeated measures was analyzed for determination of measurement error. For all nine model vessels, CT attenuation profiles were compared along a line passing through the luminal center on axial images reconstructed with FBP and MBIR, and the 10-90% edge rise distances at the boundary between the vascular wall and the lumen were evaluated. For images reconstructed with FBP, measurement errors were smallest for models with 1.5-mm wall thickness, except those filled with 275-HU contrast material, and errors grew as the density of the contrast material decreased. Measurement errors with MBIR were comparable to or less than those with FBP. In CT attenuation profiles of images reconstructed with MBIR, the 10-90% edge rise distances at the boundary between the lumen and vascular wall were relatively short for each vascular model compared with those of the profile curves of FBP images. MBIR is better than standard FBP for reducing reconstruction blur and improving the accuracy of diameter measurement at CT angiography.

  9. Rokkasho: Japanese site for ITER

    International Nuclear Information System (INIS)

    Ohtake, S.; Yamaguchi, V.; Matsuda, S.; Kishimoto, H.

    2003-01-01

    The Atomic Energy Commission of Japan authorized ITER as the core machine of the Third Phase Basic Program of Fusion Energy Development. After a series of discussions in the Atomic Energy Commission and the Council of Science and Technology Policy, Japanese Government concluded formally with the Cabinet Agreement on 31 May 2002 that Japan should participate in the ITER Project and offer the Rokkasho-Mura site for construction of ITER to the Negotiations among Canada (CA), the European Union (EU), Japan (JA), and the Russian Federation (RF). The JA site proposal is now under the international assessment in the framework of the ITER Negotiations. (author)

  10. Algorithms

    Indian Academy of Sciences (India)

    algorithms such as synthetic (polynomial) division have been found in Vedic Mathematics which are dated much before Euclid's algorithm. A programming language is used to describe an algorithm for execution on a computer. (Figure 2 of the source shows the symbols used in a flowchart language to represent Assignment, Read and Print.)

  11. Algorithms

    Indian Academy of Sciences (India)

    In the previous articles, we have discussed various common data-structures such as arrays, lists, queues and trees and illustrated the widely used algorithm design paradigm referred to as 'divide-and-conquer'. Although there has been a large effort in realizing efficient algorithms, there are not many universally accepted ...

  12. Iterative Specialisation of Horn Clauses

    DEFF Research Database (Denmark)

    Nielsen, Christoffer Rosenkilde; Nielson, Flemming; Nielson, Hanne Riis

    2008-01-01

    We present a generic algorithm for solving Horn clauses through iterative specialisation. The algorithm is generic in the sense that it can be instantiated with any decidable fragment of Horn clauses, resulting in a solution scheme for general Horn clauses that guarantees soundness and termination...

  13. Computing Optimal Stationary Policies for Multi-objective Markov Decision Processes

    NARCIS (Netherlands)

    Wiering, M.A.; Jong, E.D. de

    2007-01-01

    This paper describes a novel algorithm called CON-MODP for computing Pareto optimal policies for deterministic multi-objective sequential decision problems. CON-MODP is a value-iteration-based multi-objective dynamic programming algorithm that only computes stationary policies. We observe

  14. Searching with iterated maps.

    Science.gov (United States)

    Elser, V; Rankenburg, I; Thibault, P

    2007-01-09

    In many problems that require extensive searching, the solution can be described as satisfying two competing constraints, where satisfying each independently does not pose a challenge. As an alternative to tree-based and stochastic searching, for these problems we propose using an iterated map built from the projections to the two constraint sets. Algorithms of this kind have been the method of choice in a large variety of signal-processing applications; we show here that the scope of these algorithms is surprisingly broad, with applications as diverse as protein folding and Sudoku.
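
    The simplest projection-based iterated map, alternating projections onto the two constraint sets, conveys the flavour of this approach, although the map proposed in the record combines the two projections in a more elaborate way to escape traps; the toy constraint sets below are purely illustrative.

      import numpy as np

      def alternating_projections(x0, proj_a, proj_b, iters=1000, tol=1e-10):
          """Iterate x <- P_A(P_B(x)) until the update stalls."""
          x = np.asarray(x0, dtype=float)
          for _ in range(iters):
              x_new = proj_a(proj_b(x))
              if np.linalg.norm(x_new - x) < tol:
                  break
              x = x_new
          return x

      # Toy example: find a point on the plane x+y+z = 1 with non-negative entries.
      proj_plane   = lambda x: x - (x.sum() - 1.0) / x.size    # projection onto the affine plane
      proj_orthant = lambda x: np.maximum(x, 0.0)              # projection onto the non-negative orthant
      print(alternating_projections(np.array([3.0, -2.0, 5.0]), proj_plane, proj_orthant))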

  15. Algorithms

    Indian Academy of Sciences (India)

    In the program shown in Figure 1, we have repeated the algorithm M times and we can make the following observations. Each block is essentially a different instance of "code"; that is, the objects differ by the value to which N is initialized before the execution of the "code" block. Thus, we can now avoid the repetition of the ...

  16. Algorithms

    Indian Academy of Sciences (India)

    algorithms built into the computer corresponding to the logic-circuit rules that are used to .... For the purpose of carrying out arithmetic or logical operations the memory is organized in terms .... In fixed point representation, one essentially uses integer arithmetic operators assuming the binary point to be at some point other ...

  17. Spectrally Compatible Iterative Water Filling

    Science.gov (United States)

    Verlinden, Jan; Bogaert, Etienne Vanden; Bostoen, Tom; Zanier, Francesca; Luise, Marco; Cendrillon, Raphael; Moonen, Marc

    2006-12-01

    Until now static spectrum management has ensured that DSL lines in the same cable are spectrally compatible under worst-case crosstalk conditions. Recently dynamic spectrum management (DSM) has been proposed aiming at an increased capacity utilization by adaptation of the transmit spectra of DSL lines to the actual crosstalk interference. In this paper, a new DSM method for downstream ADSL is derived from the well-known iterative water-filling (IWF) algorithm. The amount of boosting of this new DSM method is limited, such that it is spectrally compatible with ADSL. Hence it is referred to as spectrally compatible iterative water filling (SC-IWF). This paper focuses on the performance gains of SC-IWF. This method is an autonomous DSM method (DSM level 1) and it will be investigated together with two other DSM level-1 algorithms under various noise conditions, namely, the iterative water-filling algorithm and flat power back-off (flat PBO).
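
    To make the baseline concrete, a minimal sketch of plain iterative water-filling, in which each user repeatedly water-fills its power budget against the noise plus the current interference from the other users, is given below; the channel gains, noise levels and power budgets are invented, and the spectral-compatibility (boosting) constraint that defines SC-IWF is not modelled here.

      import numpy as np

      def waterfill(gain, noise, budget, bisect_steps=100):
          """Single-user water-filling of a power budget over parallel tones."""
          lo, hi = 0.0, budget + float(np.max(noise / gain)) + 1.0
          for _ in range(bisect_steps):
              mu = 0.5 * (lo + hi)                             # candidate water level
              p = np.maximum(mu - noise / gain, 0.0)
              lo, hi = (mu, hi) if p.sum() <= budget else (lo, mu)
          return np.maximum(lo - noise / gain, 0.0)

      def iterative_waterfilling(H, noise, budgets, sweeps=50):
          """Each user water-fills against noise plus interference from the others.

          H[i, j, k]  -- gain from user j into user i on tone k (H[i, i, k] is the direct channel).
          noise[i, k] -- background noise for user i on tone k; budgets[i] -- total power of user i.
          """
          n_users, _, n_tones = H.shape
          P = np.zeros((n_users, n_tones))
          for _ in range(sweeps):
              for i in range(n_users):
                  interference = sum(H[i, j] * P[j] for j in range(n_users) if j != i)
                  P[i] = waterfill(H[i, i], noise[i] + interference, budgets[i])
          return P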

  18. Spectrally Compatible Iterative Water Filling

    Directory of Open Access Journals (Sweden)

    Cendrillon Raphael

    2006-01-01

    Full Text Available Until now static spectrum management has ensured that DSL lines in the same cable are spectrally compatible under worst-case crosstalk conditions. Recently dynamic spectrum management (DSM) has been proposed aiming at an increased capacity utilization by adaptation of the transmit spectra of DSL lines to the actual crosstalk interference. In this paper, a new DSM method for downstream ADSL is derived from the well-known iterative water-filling (IWF) algorithm. The amount of boosting of this new DSM method is limited, such that it is spectrally compatible with ADSL. Hence it is referred to as spectrally compatible iterative water filling (SC-IWF). This paper focuses on the performance gains of SC-IWF. This method is an autonomous DSM method (DSM level 1) and it will be investigated together with two other DSM level-1 algorithms under various noise conditions, namely, the iterative water-filling algorithm and flat power back-off (flat PBO).

  19. Q-learning-based adjustable fixed-phase quantum Grover search algorithm

    International Nuclear Information System (INIS)

    Guo Ying; Shi Wensha; Wang Yijun; Hu, Jiankun

    2017-01-01

    We demonstrate that the rotation phase can be suitably chosen to increase the efficiency of the phase-based quantum search algorithm, leading to a dynamic balance between iterations and success probabilities of the fixed-phase quantum Grover search algorithm with Q-learning for a given number of solutions. In this search algorithm, the proposed Q-learning algorithm, which is in essence a model-free reinforcement learning strategy, is used to perform a matching algorithm based on the fraction of marked items λ and the rotation phase α. After establishing the policy function α = π(λ), we complete the fixed-phase Grover algorithm, where the phase parameter is selected via the learned policy. Simulation results show that the Q-learning-based Grover search algorithm (QLGA) requires fewer iterations and yields higher success probabilities. Compared with the conventional Grover algorithms, it avoids locally optimal situations, thereby enabling success probabilities to approach one. (author)

  20. ITER magnets

    International Nuclear Information System (INIS)

    Bottura, L.; Hasegawa, M.; Heim, J.

    1991-01-01

    As part of the summary of the Conceptual Design Activities (CDA) for the International Thermonuclear Experimental Reactor (ITER), this document describes the magnet systems for ITER, including the Toroidal Field (TF) and Poloidal Field (PF) Magnets, the Structural Support System and Cryostat, the Cryogenic System, the TF and PF Power and Protection Systems, and Coil Services and Diagnostics. After an Introduction and Summary, the document discusses the (i) Design Basis, including General Requirements, Design Criteria, Design Philosophy, and the Database (a.o., engineering data on key materials and components), and (ii) the Subsystem Design and Analysis, including Conductor Design, TF Coil and Structure Design, TF Structural Analysis, PF Coil and Structure Design, PF Structural Performance, Fatigue Assessment of Structures, AC Loss Performance, Thermohydraulic Performance, Stability, Cryogenic System, Power Supply Systems, and Coil Services. All magnets are superconducting (based on Nb3Sn), except the Active Control Coils inside the Vacuum Vessel. The fault analysis has been taken to a level consistent with the design definition, showing that the present design meets the requirement for passive safety or can be made to meet it with only minor modifications. A more detailed assessment in this regard is needed but must await further development of the design. In conclusion, the magnet design concepts presently proposed can be developed into an engineering design. Refs, figs and tabs

  1. Preconditioned iterations to calculate extreme eigenvalues

    Energy Technology Data Exchange (ETDEWEB)

    Brand, C.W.; Petrova, S. [Institut fuer Angewandte Mathematik, Leoben (Austria)

    1994-12-31

    Common iterative algorithms to calculate a few extreme eigenvalues of a large, sparse matrix are Lanczos methods or power iterations. They converge at a rate proportional to the separation of the extreme eigenvalues from the rest of the spectrum. Appropriate preconditioning improves the separation of the eigenvalues. Davidson's method and its generalizations exploit this fact. The authors examine a preconditioned iteration that resembles a truncated version of Davidson's method with a different preconditioning strategy.
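
    For reference, the unpreconditioned power iteration that the note above takes as its starting point fits in a few lines; Davidson-type methods accelerate exactly this loop by expanding a search subspace with preconditioned residuals. The random test matrix here is illustrative.

      import numpy as np

      def power_iteration(A, iters=1000, tol=1e-10, seed=0):
          """Dominant eigenvalue/eigenvector of A by repeated multiplication and normalisation."""
          rng = np.random.default_rng(seed)
          v = rng.standard_normal(A.shape[0])
          v /= np.linalg.norm(v)
          lam = 0.0
          for _ in range(iters):
              w = A @ v
              lam_new = v @ w                      # Rayleigh quotient estimate
              v = w / np.linalg.norm(w)
              if abs(lam_new - lam) < tol:
                  break
              lam = lam_new
          return lam, v

      # Usage: dominant eigenvalue of a random symmetric matrix.
      rng = np.random.default_rng(2)
      B = rng.standard_normal((100, 100)); A = (B + B.T) / 2
      print(power_iteration(A)[0])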

  2. A Modified Iterative Algorithm for Split Feasibility Problems of Right Bregman Strongly Quasi-Nonexpansive Mappings in Banach Spaces with Applications

    Directory of Open Access Journals (Sweden)

    Anantachai Padcharoen

    2016-11-01

    Full Text Available In this paper, we present a new iterative scheme for finding a common element of the solution set F of the split feasibility problem and the fixed point set F(T) of a right Bregman strongly quasi-nonexpansive mapping T in p-uniformly convex Banach spaces which are also uniformly smooth. We prove a strong convergence theorem for the sequences generated by our scheme under some appropriate conditions in real p-uniformly convex and uniformly smooth Banach spaces. Furthermore, we give some examples and applications to illustrate our main results in this paper. Our results extend and improve the recent ones of some others in the literature.

  3. Iterative methods for weighted least-squares

    Energy Technology Data Exchange (ETDEWEB)

    Bobrovnikova, E.Y.; Vavasis, S.A. [Cornell Univ., Ithaca, NY (United States)

    1996-12-31

    A weighted least-squares problem with a very ill-conditioned weight matrix arises in many applications. Because of round-off errors, the standard conjugate gradient method for solving this system does not give the correct answer even after n iterations. In this paper we propose an iterative algorithm based on a new type of reorthogonalization that converges to the solution.

  4. Iterative Adaptive Sampling For Accurate Direct Illumination

    National Research Council Canada - National Science Library

    Donikian, Michael

    2004-01-01

    This thesis introduces a new multipass algorithm, Iterative Adaptive Sampling, for efficiently computing the direct illumination in scenes with many lights, including area lights that cause realistic soft shadows...

  5. ITER council proceedings: 2001

    International Nuclear Information System (INIS)

    2001-01-01

    Continuing the ITER EDA, two further ITER Council Meetings were held since the publication of ITER EDA documentation series no. 20, namely the ITER Council Meeting on 27-28 February 2001 in Toronto, and the ITER Council Meeting on 18-19 July in Vienna. That meeting was the last one during the ITER EDA. This volume contains records of these Meetings, including: Records of decisions; List of attendees; ITER EDA status report; ITER EDA technical activities report; MAC report and advice; Final report of ITER EDA; and Press release

  6. Precise fixpoint computation through strategy iteration

    DEFF Research Database (Denmark)

    Gawlitza, Thomas; Seidl, Helmut

    2007-01-01

    We present a practical algorithm for computing least solutions of systems of equations over the integers with addition, multiplication with positive constants, maximum and minimum. The algorithm is based on strategy iteration. Its run-time (w.r.t. the uniform cost measure) is independent of the s...

  7. Full genome virus detection in fecal samples using sensitive nucleic acid preparation, deep sequencing, and a novel iterative sequence classification algorithm

    NARCIS (Netherlands)

    Cotten, Matthew; Oude Munnink, Bas; Canuti, Marta; Deijs, Martin; Watson, Simon J.; Kellam, Paul; van der Hoek, Lia

    2014-01-01

    We have developed a full genome virus detection process that combines sensitive nucleic acid preparation optimised for virus identification in fecal material with Illumina MiSeq sequencing and a novel post-sequencing virus identification algorithm. Enriched viral nucleic acid was converted to

  8. Iterative learning control an optimization paradigm

    CERN Document Server

    Owens, David H

    2016-01-01

    This book develops a coherent theoretical approach to algorithm design for iterative learning control based on the use of optimization concepts. Concentrating initially on linear, discrete-time systems, the author gives the reader access to theories based on either signal or parameter optimization. Although the two approaches are shown to be related in a formal mathematical sense, the text presents them separately because their relevant algorithm design issues are distinct and give rise to different performance capabilities. Together with algorithm design, the text demonstrates that there are new algorithms that are capable of incorporating input and output constraints, enable the algorithm to reconfigure systematically in order to meet the requirements of different reference signals and also to support new algorithms for local convergence of nonlinear iterative control. Simulation and application studies are used to illustrate algorithm properties and performance in systems like gantry robots and other elect...

  9. A State Feedback Controller Used to Solve an Ill-posed Linear System by a GL(n, R Iterative Algorithm

    Directory of Open Access Journals (Sweden)

    Chein-Shan Liu

    2013-11-01

    Full Text Available Starting from a quadratic invariant manifold in terms of the residual vector $\textbf{r}=\textbf{B}\textbf{x}-\textbf{b}$ for an $n$-dimensional ill-posed linear algebraic equations system $\textbf{B}\textbf{x}=\textbf{b}$, we derive an ODEs system for $\textbf{x}$ which is equipped with a state feedback controller to enforce the orbit of the state vector $\textbf{x}$ onto a specified manifold, whose residual-norm is exponentially decayed. To realize the above idea we develop a very powerful implicit scheme based on the novel $GL(n,\mathbb{R})$ Lie-group method to integrate the resultant differential algebraic equation (DAE). Through numerical tests of inverse problems we find that the present Lie-group DAE algorithm can significantly accelerate the convergence speed, and is robust enough against random noise.

  10. Diagnostic accuracy of 256-row multidetector CT coronary angiography with prospective ECG-gating combined with fourth-generation iterative reconstruction algorithm in the assessment of coronary artery bypass: evaluation of dose reduction and image quality.

    Science.gov (United States)

    Ippolito, Davide; Fior, Davide; Franzesi, Cammillo Talei; Riva, Luca; Casiraghi, Alessandra; Sironi, Sandro

    2017-12-01

    Effective radiation dose in coronary CT angiography (CTCA) for coronary artery bypass graft (CABG) evaluation is remarkably high because of long scan lengths. Prospective electrocardiographic gating with iterative reconstruction can reduce the effective radiation dose. To evaluate the diagnostic performance of a low-kV CT angiography protocol with a prospective ECG-gating technique and an iterative reconstruction (IR) algorithm in the follow-up of CABG patients compared with a standard retrospective protocol, seventy-four non-obese patients with known coronary disease treated with artery bypass grafting were prospectively enrolled. All the patients underwent 256-MDCT (Brilliance iCT, Philips) CTCA using a low-dose protocol (100 kV; 800 mAs; rotation time: 0.275 s) combined with prospective ECG-triggering acquisition and a fourth-generation IR technique (iDose4; Philips); the full length of each bypass graft was included in the evaluation. A control group of 42 similar patients was evaluated with a standard retrospective ECG-gated CTCA (100 kV; 800 mAs). On both CT examinations, ROIs were placed to calculate the standard deviation of pixel values and the intra-vessel density. Diagnostic quality was also evaluated using a 4-point quality scale. Despite the statistically significant reduction of radiation dose evaluated with DLP (study group mean DLP: 274 mGy cm; control group mean DLP: 1224 mGy cm; P value < ...), the development of high-speed MDCT scanners combined with modern IR allows an accurate evaluation of CABG with prospective ECG-gating protocols in a single breath hold, obtaining a significant reduction in radiation dose.

  11. ITER council proceedings: 1998

    International Nuclear Information System (INIS)

    1999-01-01

    This volume contains documents of the 13th and the 14th ITER council meeting as well as of the 1st extraordinary ITER council meeting. Documents of the ITER meetings held in Vienna and Yokohama during 1998 are also included. The contents include an outline of the ITER objectives, the ITER parameters and design overview as well as operating scenarios and plasma performance. Furthermore, design features, safety and environmental characteristics are given

  12. Parallel S/sub n/ iteration schemes

    International Nuclear Information System (INIS)

    Wienke, B.R.; Hiromoto, R.E.

    1986-01-01

    The iterative, multigroup, discrete ordinates (S/sub n/) technique for solving the linear transport equation enjoys widespread usage and appeal. Serial iteration schemes and numerical algorithms developed over the years provide a timely framework for parallel extension. On the Denelcor HEP, the authors investigate three parallel iteration schemes for solving the one-dimensional S/sub n/ transport equation. The multigroup representation and serial iteration methods are also reviewed. This analysis represents a first attempt to extend serial S/sub n/ algorithms to parallel environments and provides good baseline estimates on ease of parallel implementation, relative algorithm efficiency, comparative speedup, and some future directions. The authors examine ordered and chaotic versions of these strategies, with and without concurrent rebalance and diffusion acceleration. Two strategies efficiently support high degrees of parallelization and appear to be robust parallel iteration techniques. The third strategy is a weaker parallel algorithm. Chaotic iteration, difficult to simulate on serial machines, holds promise and converges faster than ordered versions of the schemes. Actual parallel speedup and efficiency are high and payoff appears substantial

  13. Computation of a near-optimal service policy for a single-server queue with homogeneous jobs

    DEFF Research Database (Denmark)

    Johansen, Søren Glud; Larsen, Christian

    2001-01-01

    We present an algorithm for computing a near-optimal service policy for a single-server queueing system when the service cost is a convex function of the service time. The policy has state-dependent service times, and it includes the options to remove jobs from the system and to let the server be off. The system's semi-Markov decision model has infinite action sets for the positive states. We design a new tailor-made policy-iteration algorithm for computing a policy for which the long-run average cost is at most a positive tolerance above the minimum average cost. For any positive tolerance, our algorithm computes the desired policy in a finite (and small) number of iterations. The number is five for the numerical example used in the paper to illustrate results obtained by the algorithm.

  14. ITER Council proceedings: 1993

    International Nuclear Information System (INIS)

    1994-01-01

    Records of the third ITER Council Meeting (IC-3), held on 21-22 April 1993, in Tokyo, Japan, and the fourth ITER Council Meeting (IC-4) held on 29 September - 1 October 1993 in San Diego, USA, are presented, giving essential information on the evolution of the ITER Engineering Design Activities (EDA), such as the text of the draft of Protocol 2 further elaborated in ''ITER EDA Agreement and Protocol 2'' (ITER EDA Documentation Series No. 5), recommendations on future work programmes: a description of technology R and D tasks; the establishment of a trust fund for the ITER EDA activities; arrangements for Visiting Home Team Personnel; the general framework for the involvement of other countries in the ITER EDA; conditions for the involvement of Canada in the Euratom Contribution to the ITER EDA; and other attachments as parts of the Records of Decision of the aforementioned ITER Council Meetings

  15. ITER council proceedings: 2000

    International Nuclear Information System (INIS)

    2001-01-01

    No ITER Council Meetings were held during 2000. However, two ITER EDA Meetings were held, one in Tokyo, January 19-20, and one in Moscow, June 29-30. The parties participating in these meetings were those that partake in the extended ITER EDA, namely the EU, the Russian Federation, and Japan. This document contains, among others, the records of these meetings, the list of attendees, the agenda, the ITER EDA Status Reports issued during these meetings, the TAC (Technical Advisory Committee) reports and recommendations, the MAC Reports and Advice (also for the July 1999 Meeting), the ITER-FEAT Outline Design Report, the TAC Reports and Recommendations (both meetings), Site Requirements and Site Design Assumptions, the Tentative Sequence of Technical Activities 2000-2001, the Report of the ITER SWG-P2 on Joint Implementation of ITER, and the EU/ITER Canada Proposal for New ITER Identification

  16. Instance-based Policy Learning by Real-coded Genetic Algorithms and Its Application to Control of Nonholonomic Systems

    Science.gov (United States)

    Miyamae, Atsushi; Sakuma, Jun; Ono, Isao; Kobayashi, Shigenobu

    The stabilization control of nonholonomic systems has been extensively studied because it is essential for nonholonomic robot control problems. The difficulty is that a theoretical derivation of the control policy is not necessarily guaranteed to be achievable. In this paper, we present a reinforcement learning (RL) method with an instance-based policy (IBP) representation, in which control policies for this class are optimized with respect to user-defined cost functions. Direct policy search (DPS) is one approach to RL; the policy is represented by parametric models and the model parameters are searched directly by optimization techniques, including genetic algorithms (GAs). In the IBP representation an instance consists of a state-action pair, and a policy consists of a set of instances. Several DPSs with IBP have been proposed previously. These methods sometimes fail to obtain optimal control policies when the state-action variables are continuous. In this paper, we present a real-coded GA for DPSs with IBP. Our method is specifically designed for continuous domains. Optimization of an IBP has three difficulties: high dimensionality, epistasis, and multi-modality. Our solution is designed to overcome these difficulties. The policy search with IBP representation appears to be a high-dimensional optimization; however, the instances which can improve the fitness are often limited to the active instances (instances used in the evaluation), and in fact the number of active instances is small. Therefore, we treat the search problem as a low-dimensional problem by restricting the search variables to active instances. It is commonly known that functions with epistasis can be efficiently optimized with crossovers which satisfy the inheritance of statistics. For an efficient search of IBPs, we propose an extended crossover-like mutation (extended XLM) which generates a new instance around an existing instance while satisfying the inheritance of statistics. For overcoming multi-modality, we

  17. Parallel island genetic algorithm applied to a nuclear power plant auxiliary feedwater system surveillance tests policy optimization

    International Nuclear Information System (INIS)

    Pereira, Claudio M.N.A.; Lapa, Celso M.F.

    2003-01-01

    In this work, we focus on the application of an Island Genetic Algorithm (IGA), a coarse-grained parallel genetic algorithm (PGA) model, to the optimization of a Nuclear Power Plant (NPP) Auxiliary Feedwater System (AFWS) surveillance test policy. Here, the main objective is to outline, by means of comparisons, the advantages of the IGA over the simple (non-parallel) genetic algorithm (GA), which has been applied successfully to this kind of problem. The goal of the optimization is to maximize the system's average availability for a given period of time, considering realistic features such as: i) aging effects on standby components during the tests; ii) failures revealed by the tests imply corrective maintenance, increasing outage times; iii) components have distinct test parameters (outage time, aging factors, etc.); and iv) tests are not necessarily periodic. In our experiments, which were run on a cluster of eight 1-GHz personal computers, we could clearly observe gains not only in computational time, which decreased linearly with the number of computers, but also in the optimization outcome

  18. Diagnostic Performance of an Advanced Modeled Iterative Reconstruction Algorithm for Low-Contrast Detectability with a Third-Generation Dual-Source Multidetector CT Scanner: Potential for Radiation Dose Reduction in a Multireader Study.

    Science.gov (United States)

    Solomon, Justin; Mileto, Achille; Ramirez-Giraldo, Juan Carlos; Samei, Ehsan

    2015-06-01

    To assess the effect of radiation dose reduction on low-contrast detectability by using an advanced modeled iterative reconstruction (ADMIRE; Siemens Healthcare, Forchheim, Germany) algorithm in a contrast-detail phantom with a third-generation dual-source multidetector computed tomography (CT) scanner. A proprietary phantom with a range of low-contrast cylindrical objects, representing five contrast levels (range, 5-20 HU) and three sizes (range, 2-6 mm), was fabricated with a three-dimensional printer and imaged with a third-generation dual-source CT scanner at various radiation dose index levels (range, 0.74-5.8 mGy). Image data sets were reconstructed by using different section thicknesses (range, 0.6-5.0 mm) and reconstruction algorithms (filtered back projection [FBP] and ADMIRE with a strength range of three to five). Eleven independent readers blinded to technique and reconstruction method assessed all data sets in two reading sessions by measuring detection accuracy with a two-alternative forced choice approach (first session) and by scoring the total number of visible object groups (second session). Dose reduction potentials based on both reading sessions were estimated. Results between FBP and ADMIRE were compared by using both paired t tests and analysis of variance tests at the 95% significance level. During the first session, detection accuracy increased with increasing contrast, size, and dose index (diagnostic accuracy range, 50%-87%; interobserver variability, ±7%). When compared with FBP, ADMIRE improved detection accuracy by 5.2% on average across the investigated variables (P < ...). Supplemental material is available for this article. RSNA, 2015

  19. Off-Policy Integral Reinforcement Learning Method to Solve Nonlinear Continuous-Time Multiplayer Nonzero-Sum Games.

    Science.gov (United States)

    Song, Ruizhuo; Lewis, Frank L; Wei, Qinglai

    2017-03-01

    This paper establishes an off-policy integral reinforcement learning (IRL) method to solve nonlinear continuous-time (CT) nonzero-sum (NZS) games with unknown system dynamics. The IRL algorithm is presented to obtain the iterative control, and off-policy learning is used to allow the dynamics to be completely unknown. Off-policy IRL is designed to perform policy evaluation and policy improvement in the policy iteration algorithm. Critic and action networks are used to obtain the performance index and control for each player. The gradient descent algorithm updates the critic and action weights simultaneously. The convergence analysis of the weights is given. The asymptotic stability of the closed-loop system and the existence of a Nash equilibrium are proved. The simulation study demonstrates the effectiveness of the developed method for nonlinear CT NZS games with unknown system dynamics.

  20. ITER council proceedings: 1995

    International Nuclear Information System (INIS)

    1996-01-01

    Records of the 8. ITER Council Meeting (IC-8), held on 26-27 July 1995, in San Diego, USA, and the 9. ITER Council Meeting (IC-9) held on 12-13 December 1995, in Garching, Germany, are presented, giving essential information on the evolution of the ITER Engineering Design Activities (EDA) and the ITER Interim Design Report Package and Relevant Documents. Figs, tabs

  1. ITER EDA technical activities

    International Nuclear Information System (INIS)

    Aymar, R.

    1998-01-01

    Six years of technical work under the ITER EDA Agreement have resulted in a design which constitutes a complete description of the ITER device and of its auxiliary systems and facilities. The ITER Council commented that the Final Design Report provides the first comprehensive design of a fusion reactor based on well established physics and technology

  2. ITER radio frequency systems

    International Nuclear Information System (INIS)

    Bosia, G.

    1998-01-01

    Neutral Beam Injection and RF heating are two of the methods for heating and current drive in ITER. The three ITER RF systems, which have been developed during the EDA, offer several complementary services and are able to fulfil ITER operational requirements

  3. ITER council proceedings: 1999

    International Nuclear Information System (INIS)

    1999-01-01

    In 1999 the ITER meeting in Cadarache (10-11 March 1999) and the Programme Directors Meeting in Grenoble (28-29 July 1999) took place. Both meetings were exclusively devoted to ITER engineering design activities and their agendas covered all issues important for the development of ITER. This volume presents the documents of these two important meetings

  4. ITER council proceedings: 1996

    International Nuclear Information System (INIS)

    1997-01-01

    Records of the 10. ITER Council Meeting (IC-10), held on 26-27 July 1996, in St. Petersburg, Russia, and the 11. ITER Council Meeting (IC-11) held on 17-18 December 1996, in Tokyo, Japan, are presented, giving essential information on the evolution of the ITER Engineering Design Activities (EDA) and the cost review and safety analysis. Figs, tabs

  5. Iterative Reconstruction Methods for Hybrid Inverse Problems in Impedance Tomography

    DEFF Research Database (Denmark)

    Hoffmann, Kristoffer; Knudsen, Kim

    2014-01-01

    For a general formulation of hybrid inverse problems in impedance tomography the Picard and Newton iterative schemes are adapted and four iterative reconstruction algorithms are developed. The general problem formulation includes several existing hybrid imaging modalities such as current density...... impedance imaging, magnetic resonance electrical impedance tomography, and ultrasound modulated electrical impedance tomography, and the unified approach to the reconstruction problem encompasses several algorithms suggested in the literature. The four proposed algorithms are implemented numerically in two...

  6. A novel iterative scheme and its application to differential equations.

    Science.gov (United States)

    Khan, Yasir; Naeem, F; Šmarda, Zdeněk

    2014-01-01

    The purpose of this paper is to employ an alternative approach to reconstruct the standard variational iteration algorithm II proposed by He, including the Lagrange multiplier, and to give a simpler formulation of the Adomian decomposition and modified Adomian decomposition methods in terms of the newly proposed variational iteration method-II (VIM). Through careful investigation of the earlier variational iteration algorithm and the Adomian decomposition method, we find unnecessary calculations of the Lagrange multiplier and repeated calculations in each iteration, respectively. Several examples are given to verify the reliability and efficiency of the method.
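
    As a concrete (and deliberately simple) illustration of a variational iteration correction functional, the sketch below applies one standard VIM recursion to u'(t) + u(t) = 0 with u(0) = 1, using the Lagrange multiplier lambda(s) = -1. The test equation and multiplier are chosen for illustration and are not taken from the paper.

    ```python
    import sympy as sp

    # Correction functional: u_{n+1}(t) = u_n(t) - Integral_0^t [u_n'(s) + u_n(s)] ds
    # Exact solution of the test problem is exp(-t).
    t, s = sp.symbols('t s')
    u = sp.Integer(1)                 # initial guess u_0(t) = u(0) = 1
    for n in range(4):
        us = u.subs(t, s)             # current iterate written in the dummy variable s
        residual = sp.diff(us, s) + us
        u = sp.expand(u - sp.integrate(residual, (s, 0, t)))
        print(f"u_{n + 1}(t) =", u)   # successive partial sums of the series of exp(-t)
    ```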

  7. ITER-FEAT safety

    International Nuclear Information System (INIS)

    Gordon, C.W.; Bartels, H.-W.; Honda, T.; Raeder, J.; Topilski, L.; Iseli, M.; Moshonas, K.; Taylor, N.; Gulden, W.; Kolbasov, B.; Inabe, T.; Tada, E.

    2001-01-01

    Safety has been an integral part of the design process for ITER since the Conceptual Design Activities of the project. The safety approach adopted in the ITER-FEAT design and the complementary assessments underway, to be documented in the Generic Site Safety Report (GSSR), are expected to help demonstrate the attractiveness of fusion and thereby set a good precedent for future fusion power reactors. The assessments address ITER's radiological hazards taking into account fusion's favourable safety characteristics. The expectation that ITER will need regulatory approval has influenced the entire safety design and assessment approach. This paper summarises the ITER-FEAT safety approach and assessments underway. (author)

  8. ITER council proceedings: 1997

    International Nuclear Information System (INIS)

    1997-01-01

    This volume of the ITER EDA Documentation Series presents records of the 12th ITER Council Meeting, IC-12, which took place on 23-24 July 1997 in Tampere, Finland. The Council received from the Parties (EU, Japan, Russia, US) positive responses on the Detailed Design Report. The Parties stated their willingness to fulfil their obligations in contributing to the ITER EDA. The summary discussions among the Parties led to the consensus that the ITER activities should proceed from July 1998 for an additional three years, with the general intent of enabling an efficient start of possible future ITER construction

  9. Comment on “Variational Iteration Method for Fractional Calculus Using He’s Polynomials”

    Directory of Open Access Journals (Sweden)

    Ji-Huan He

    2012-01-01

    boundary value problems. This note concludes that the method is a modified variational iteration method using He’s polynomials. A standard variational iteration algorithm for fractional differential equations is suggested.

  10. Iterative Algorithms for Ptychographic Phase Retrieval

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Chao; Qian, Jianliang; Schirotzek, Andre; Maia, Filipe; Marchesini, Stefano

    2011-05-03

    Ptychography promises diffraction limited resolution without the need for high resolution lenses. To achieve high resolution one has to solve the phase problem for many partially overlapping frames. Here we review some of the existing methods for solving ptychographic phase retrieval problem from a numerical analysis point of view, and propose alternative methods based on numerical optimization.

  11. A non-linear mapping algorithm shaping the control policy of a bidirectional brain machine interface.

    Science.gov (United States)

    Boi, Fabio; Semprini, Marianna; Vato, Alessandro

    2016-08-01

    Motor brain-machine interfaces (BMIs) transform neural activities recorded directly from the brain into motor commands to control the movements of an external object by establishing an interface between the central nervous system (CNS) and the device. Bidirectional BMIs are closed-loop systems that add a sensory channel to provide the brain with an artificial feedback signal produced by the interaction between the device and the external world. Taking inspiration from the functioning of the spinal cord in mammals, in our previous works we designed and developed a bidirectional BMI that uses the neural signals recorded from rats' motor cortex to control the movement of an external object. We implemented a decoding interface based on the approximation of a predefined force field with a central attractor point. Here we consider a non-linear transformation that allows us to design a decoder approximating force fields with arbitrary attractors. We describe the non-linear mapping algorithm and preliminary results of its use with behaving rats.

  12. Variational iteration method for Bratu-like equation arising in electrospinning.

    Science.gov (United States)

    He, Ji-Huan; Kong, Hai-Yan; Chen, Rou-Xi; Hu, Ming-sheng; Chen, Qiao-ling

    2014-05-25

    This paper points out that the so-called enhanced variational iteration method (Colantoni & Boubaker, 2014) for a nonlinear equation arising in electrospinning and the vibration-electrospinning process is the standard variational iteration method. An effective algorithm using the variational iteration algorithm-II is suggested for the Bratu-like equation arising in electrospinning. A suitable choice of initial guess results in a relatively accurate solution after only one or a few iterations. Copyright © 2014 Elsevier Ltd. All rights reserved.

  13. Distributed Video Coding: Iterative Improvements

    DEFF Research Database (Denmark)

    Luong, Huynh Van

    at the decoder side offering such benefits for these applications. Although there have been some advanced improvement techniques, improving the DVC coding efficiency is still challenging. The thesis addresses this challenge by proposing several iterative algorithms at different working levels, e.g. bitplane...... and noise modeling and also learn from the previous decoded Wyner-Ziv (WZ) frames, side information and noise learning (SING) is proposed. The SING scheme introduces an optical flow technique to compensate the weaknesses of the block based SI generation and also utilizes clustering of DCT blocks to capture...

  14. ITER test programme

    International Nuclear Information System (INIS)

    Abdou, M.; Baker, C.; Casini, G.

    1991-01-01

    ITER has been designed to operate in two phases. The first phase which lasts for 6 years, is devoted to machine checkout and physics testing. The second phase lasts for 8 years and is devoted primarily to technology testing. This report describes the technology test program development for ITER, the ancillary equipment outside the torus necessary to support the test modules, the international collaboration aspects of conducting the test program on ITER, the requirements on the machine major parameters and the R and D program required to develop the test modules for testing in ITER. 15 refs, figs and tabs

  15. New algorithms for the symmetric tridiagonal eigenvalue computation

    Energy Technology Data Exchange (ETDEWEB)

    Pan, V. (City Univ. of New York, Bronx, NY (United States); International Computer Sciences Institute, Berkeley, CA (United States))

    1994-12-31

    The author presents new algorithms that accelerate the bisection method for the symmetric eigenvalue problem. The algorithms rely on some new techniques, which include acceleration of Newton's iteration and can also be further applied to acceleration of some other iterative processes, in particular, of iterative algorithms for approximating polynomial zeros.
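
    For context, the following is a sketch of the plain (unaccelerated) bisection baseline that such algorithms speed up: eigenvalues of a symmetric tridiagonal matrix are bracketed by counting, via the Sturm sequence, how many eigenvalues lie below a trial shift. The function names and the small test matrix are hypothetical; the paper's Newton-type acceleration is not implemented here.

    ```python
    import numpy as np

    def sturm_count(d, e, x):
        """Number of eigenvalues of the symmetric tridiagonal matrix with
        diagonal d and off-diagonal e that are strictly less than x."""
        count, q = 0, 1.0
        for i in range(len(d)):
            q = d[i] - x - (e[i - 1] ** 2 / q if i > 0 else 0.0)
            if q == 0.0:
                q = 1e-300          # guard against division by zero
            if q < 0:
                count += 1
        return count

    def bisect_eigenvalue(d, e, k, tol=1e-12):
        """k-th smallest eigenvalue (0-based) by plain bisection on the Sturm count."""
        radius = np.max(np.abs(d)) + (2 * np.max(np.abs(e)) if len(e) else 0.0)
        lo, hi = -radius, radius    # Gershgorin-style bracket of the spectrum
        while hi - lo > tol:
            mid = 0.5 * (lo + hi)
            if sturm_count(d, e, mid) <= k:
                lo = mid
            else:
                hi = mid
        return 0.5 * (lo + hi)

    d = np.array([2.0, 2.0, 2.0])
    e = np.array([1.0, 1.0])
    print([bisect_eigenvalue(d, e, k) for k in range(3)])   # ~ 2-sqrt(2), 2, 2+sqrt(2)
    ```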

  16. Delay Analysis of Max-Weight Queue Algorithm for Time-Varying Wireless Ad hoc Networks—Control Theoretical Approach

    Science.gov (United States)

    Chen, Junting; Lau, Vincent K. N.

    2013-01-01

    The max weighted queue (MWQ) control policy is a widely used cross-layer control policy that achieves queue stability and reasonable delay performance. In most of the existing literature, it is assumed that the optimal MWQ policy can be obtained instantaneously at every time slot. However, this assumption may be unrealistic in time-varying wireless systems, especially when there is no closed-form MWQ solution and iterative algorithms have to be applied to obtain the optimal solution. This paper investigates the convergence behavior and the queue delay performance of the conventional MWQ iterations in which the channel state information (CSI) and queue state information (QSI) change on a timescale similar to that of the algorithm iterations. Our results are established by studying the stochastic stability of an equivalent virtual stochastic dynamic system (VSDS), and an extended Foster-Lyapunov criterion is applied for the stability analysis. We derive a closed-form delay bound for the wireless network in terms of the CSI fading rate and the sensitivity of the MWQ policy to the CSI and QSI. Based on the equivalent VSDS, we propose a novel MWQ iterative algorithm with compensation to improve the tracking performance. We demonstrate that, under some mild conditions, the proposed modified MWQ algorithm converges to the optimal MWQ control despite the time-varying CSI and QSI.
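
    To make the idealized "instantaneous MWQ" assumption concrete, here is a minimal sketch of max-weight scheduling on a handful of parallel queues: at each slot the queue with the largest backlog-times-rate weight is served. The arrival rates and channel-rate model are hypothetical toy choices; the paper's iterative solver and its compensation scheme are not modeled here.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_queues, n_slots = 3, 1000
    q = np.zeros(n_queues)

    for t in range(n_slots):
        arrivals = rng.poisson(0.3, n_queues)        # hypothetical arrival rates
        rates = rng.uniform(0.0, 1.5, n_queues)      # hypothetical time-varying channel rates (CSI)
        served = np.argmax(q * rates)                # MWQ rule: maximise backlog x service rate
        q += arrivals
        q[served] = max(0.0, q[served] - rates[served])

    print("final backlogs:", q)
    ```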

  17. ITER Plasma Control System Development

    Science.gov (United States)

    Snipes, Joseph; ITER PCS Design Team

    2015-11-01

    The development of the ITER Plasma Control System (PCS) continues with the preliminary design phase for first plasma and early plasma operation in H/He up to Ip = 15 MA in L-mode. The design is being developed through a contract between the ITER Organization and a consortium of plasma control experts from EU and US fusion laboratories, which is expected to be completed in time for a design review at the end of 2016. This design phase concentrates on breakdown, including early ECH power, and on magnetic control of the poloidal field null, plasma current, shape, and position. Basic kinetic control of the heating (ECH, ICH, NBI) and fueling systems is also included. Disruption prediction, mitigation, and maintaining stable operation are also included because of the high magnetic and kinetic stored energy already present for early plasma operation. Support functions for error field topology and equilibrium reconstruction are also required. All of the control functions must also be integrated into an architecture that will be capable of the required complexity of all ITER scenarios. A database is also being developed to collect and manage PCS functional requirements from operational scenarios that were defined in the Conceptual Design, with links to proposed event handling strategies and control algorithms for initial basic control functions. A brief status of the PCS development will be presented together with a proposed schedule for design phases up to DT operation.

  18. Impact of the ASIR iterative reconstruction algorithm on the CTDI of TCHMC studies; Impacto del algoritmo de reconstruccion iterativa ASIR en el CTDI de los estudios en TCHMC

    Energy Technology Data Exchange (ETDEWEB)

    Ambroa Rey, E. M.; Vazquez Vazquez, R.; Gimenez Insua, M.; Sanchez Garcia, M.; Otero Martinez, C.; Luna Vega, V.; Mosquera Sueiro, J.; Lobato Busto, R.; Pombar Camean, M.

    2013-07-01

    The objective of this work is to compare the doses for the 10 protocols most commonly used in our centre before and after the commissioning of the ASIR (Adaptive Statistical Iterative Reconstruction) software. (Author)

  19. Leapfrog variants of iterative methods for linear algebra equations

    Science.gov (United States)

    Saylor, Paul E.

    1988-01-01

    Two iterative methods are considered, Richardson's method and a general second order method. For both methods, a variant of the method is derived for which only even numbered iterates are computed. The variant is called a leapfrog method. Comparisons between the conventional form of the methods and the leapfrog form are made under the assumption that the number of unknowns is large. In the case of Richardson's method, it is possible to express the final iterate in terms of only the initial approximation, a variant of the iteration called the grand-leap method. In the case of the grand-leap variant, a set of parameters is required. An algorithm is presented to compute these parameters that is related to algorithms to compute the weights and abscissas for Gaussian quadrature. General algorithms to implement the leapfrog and grand-leap methods are presented. Algorithms for the important special case of the Chebyshev method are also given.
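
    As a point of reference for the variants described above, the following is a sketch of the plain Richardson iteration x_{k+1} = x_k + omega*(b - A x_k) on a small 1-D Laplacian system. The matrix, its size and the relaxation parameter are illustrative choices; the leapfrog variant (forming only even-numbered iterates) and the grand-leap variant (expressing the final iterate directly from the initial approximation) are only described in the abstract and not implemented here.

    ```python
    import numpy as np

    n = 20
    A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # 1-D Laplacian test matrix
    b = np.ones(n)
    omega = 0.45                    # needs 0 < omega < 2 / lambda_max(A), roughly 0.5 here
    x = np.zeros(n)

    for k in range(5000):
        r = b - A @ x               # residual
        if np.linalg.norm(r) < 1e-8:
            break
        x = x + omega * r           # Richardson update

    print("iterations:", k, "final residual:", np.linalg.norm(b - A @ x))
    ```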

  20. United States rejoins ITER

    International Nuclear Information System (INIS)

    Roberts, M.

    2003-01-01

    Under pressure from the United States Congress, the US Department of Energy had to withdraw from further American participation in the ITER Engineering Design Activities after the end of its commitment to the EDA in July 1998. In the years since that time, changes have taken place in both the ITER activity and the US fusion community's position on burning plasma physics. Reflecting the interest in the United States in pursuing burning plasma physics, the DOE's Office of Science commissioned three studies as part of its examination of the option of entering the Negotiations on the Agreement on the Establishment of the International Fusion Energy Organization for the Joint Implementation of the ITER Project. These were a National Academy review panel report supporting the burning plasma mission, a Fusion Energy Sciences Advisory Committee (FESAC) report confirming the role of ITER in achieving fusion power production, and the Lehman Review of the ITER project costing and project management processes (for the latter, see ITER CTA Newsletter, No. 15, December 2002). All three studies endorsed the US return to the ITER activities. This historic decision was announced by DOE Secretary Abraham during his remarks to employees of the Department's Princeton Plasma Physics Laboratory. The United States will be working with the other Participants in the ITER Negotiations on the Agreement and is preparing to participate in the ITA

  1. ITER at Cadarache; ITER a Cadarache

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2005-06-15

    This public information document presents the ITER project (International Thermonuclear Experimental Reactor), the definition of the fusion, the international cooperation and the advantages of the project. It presents also the site of Cadarache, an appropriate scientifical and economical environment. The last part of the documentation recalls the historical aspect of the project and the today mobilization of all partners. (A.L.B.)

  2. ITER council proceedings: 1992

    International Nuclear Information System (INIS)

    1994-01-01

    At the signing of the ITER EDA Agreement in July 1992, each of the Parties presented to the Director General the names of their designated members of the ITER Council. Upon receiving those names, the Director General stated that the ITER Engineering Design Activities were "ready to begin". The next step in this process was the convening of the first meeting of the ITER Council. The first meeting of the Council, held in Vienna, was opened by Director General Hans Blix. The second meeting was held in Moscow, the formal seat of the Council. This volume presents records of these first two Council meetings and, together with the previous volumes on the text of the Agreement and Protocol 1 and the preparations for their signing respectively, represents essential information on the evolution of the ITER EDA

  3. ITER CTA newsletter. No. 3

    International Nuclear Information System (INIS)

    2001-11-01

    This ITER CTA newsletter comprises reports by Dr. P. Barnard, ITER Canada Chairman and CEO, on the progress of the first formal ITER Negotiations and on the demonstration of details of Canada's bid at ITER workshops, and by Dr. V. Vlasenkov, Project Board Secretary, on the meeting of the ITER CTA Project Board

  4. Iterative Adaptive Dynamic Programming for Solving Unknown Nonlinear Zero-Sum Game Based on Online Data.

    Science.gov (United States)

    Zhu, Yuanheng; Zhao, Dongbin; Li, Xiangjun

    2017-03-01

    H∞ control is a powerful method to solve the disturbance attenuation problems that occur in some control systems. The design of such controllers relies on solving the zero-sum game (ZSG). But in practical applications, the exact dynamics is mostly unknown. Identification of dynamics also produces errors that are detrimental to the control performance. To overcome this problem, an iterative adaptive dynamic programming algorithm is proposed in this paper to solve the continuous-time, unknown nonlinear ZSG with only online data. A model-free approach to the Hamilton-Jacobi-Isaacs equation is developed based on the policy iteration method. Control and disturbance policies and value are approximated by neural networks (NNs) under the critic-actor-disturber structure. The NN weights are solved by the least-squares method. According to the theoretical analysis, our algorithm is equivalent to a Gauss-Newton method solving an optimization problem, and it converges uniformly to the optimal solution. The online data can also be used repeatedly, which is highly efficient. Simulation results demonstrate its feasibility to solve the unknown nonlinear ZSG. When compared with other algorithms, it saves a significant amount of online measurement time.

  5. Adaptable Iterative and Recursive Kalman Filter Schemes

    Science.gov (United States)

    Zanetti, Renato

    2014-01-01

    Nonlinear filters are often very computationally expensive and usually not suitable for real-time applications. Real-time navigation algorithms are typically based on linear estimators, such as the extended Kalman filter (EKF) and, to a much lesser extent, the unscented Kalman filter. The iterated Kalman filter (IKF) and the recursive update filter (RUF) are two algorithms that reduce the consequences of the linearization assumption of the EKF by performing N updates for each new measurement, where N, the number of recursions, is a tuning parameter. This paper introduces an adaptable RUF algorithm to calculate N on the fly; a similar technique can be used for the IKF as well.
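
    For readers unfamiliar with the iterated update idea, here is a generic iterated EKF measurement update, in which the measurement model is relinearized about the current iterate N times before the covariance is updated. The function and the range-measurement demo below it are hypothetical illustrations; they do not implement the paper's adaptable rule for choosing N.

    ```python
    import numpy as np

    def iterated_ekf_update(x, P, z, h, H_jac, R, n_iter=5):
        """Iterated (Gauss-Newton style) EKF measurement update.
        x, P: prior mean and covariance; z: measurement; h: measurement function;
        H_jac: Jacobian of h; R: measurement noise covariance."""
        xi = x.copy()
        for _ in range(n_iter):
            H = H_jac(xi)
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)
            # Classic IKF form: relinearise about the current iterate xi
            xi = x + K @ (z - h(xi) - H @ (x - xi))
        H = H_jac(xi)
        P_post = (np.eye(len(x)) - K @ H) @ P
        return xi, P_post

    # Tiny demo: estimate a 2-D position from one range measurement (hypothetical model)
    x0 = np.array([1.0, 1.0])
    P0 = 0.5 * np.eye(2)
    h = lambda x: np.array([np.sqrt(x[0] ** 2 + x[1] ** 2)])
    H_jac = lambda x: np.array([[x[0], x[1]]]) / np.sqrt(x[0] ** 2 + x[1] ** 2)
    R = np.array([[0.01]])
    x_post, P_post = iterated_ekf_update(x0, P0, np.array([1.6]), h, H_jac, R)
    print(x_post)
    ```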

  6. Model-based iterative learning control applied to an industrial robot with elasticity

    NARCIS (Netherlands)

    Hakvoort, Wouter; Aarts, Ronald G.K.M.; van Dijk, Johannes; Jonker, Jan B.; IEEE,

    2007-01-01

    In this paper model-based Iterative Learning Control (ILC) is applied to improve the tracking accuracy of an industrial robot with elasticity. The ILC algorithm iteratively updates the reference trajectory for the robot such that the predicted tracking error in the next iteration is minimised. The
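
    To illustrate the basic "update the input from the previous trial's error" mechanism that ILC relies on, the sketch below runs a minimal P-type ILC law on a hypothetical first-order discrete plant. The plant, reference and learning gain are invented for illustration; this is not the paper's model-based scheme for an elastic industrial robot.

    ```python
    import numpy as np

    # Hypothetical plant: x[k+1] = a*x[k] + b*u[k], y[k] = x[k]
    a, b, N, trials = 0.9, 0.5, 50, 20
    r = np.sin(np.linspace(0, 2 * np.pi, N))    # reference trajectory
    u = np.zeros(N)                             # input signal, refined trial after trial
    gain = 0.8                                  # learning gain (|1 - gain*b| < 1 for convergence)

    for trial in range(trials):
        x, y = 0.0, np.zeros(N)
        for k in range(N):                      # run one trial
            y[k] = x
            x = a * x + b * u[k]
        e = r - y                               # trial tracking error
        u[:-1] += gain * e[1:]                  # P-type ILC update (relative degree 1 shift)
        print(f"trial {trial:2d}  max|e| = {np.max(np.abs(e)):.4f}")
    ```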

  7. ITER days in Moscow

    International Nuclear Information System (INIS)

    Golubchikov, L.

    2001-01-01

    In connection with the successful completion of the engineering design of the International Thermonuclear Experimental Reactor (ITER) and the 50th anniversary of fusion research in the USSR, the Ministry of the Russian Federation for Atomic Energy (Minatom), with the participation of the Russian Academy of Sciences, organized the International Symposium 'ITER Days in Moscow' on 7-8 June 2001. About 250 people from more than 20 states took part in the meeting. The participants welcomed the R and D results of the ITER project and considered it a necessary step in establishing a basis for a fusion energy source. There were also scientific presentations on the following topics: ITER physics basis; effect of fusion research on general physics; fusion power reactors; US interests in burning plasma

  8. ITER definition phase

    International Nuclear Information System (INIS)

    1989-01-01

    The International Thermonuclear Experimental Reactor (ITER) is envisioned as a fusion device which would demonstrate the scientific and technological feasibility of fusion power. As a first step towards achieving this goal, the European Community, Japan, the Soviet Union, and the United States of America have entered into joint conceptual design activities under the auspices of the International Atomic Energy Agency. A brief summary of the Definition Phase of ITER activities is contained in this report. Included in this report are the background, objectives, organization, definition phase activities, and research and development plan of this endeavor in international scientific collaboration. A more extended technical summary is contained in the two-volume report, ''ITER Concept Definition,'' IAEA/ITER/DS/3. 2 figs, 2 tabs

  9. Power converters for ITER

    CERN Document Server

    Benfatto, I

    2006-01-01

    The International Thermonuclear Experimental Reactor (ITER) is a thermonuclear fusion experiment designed to provide long deuterium-tritium burning plasma operation. After a short description of ITER objectives, the main design parameters and the construction schedule, the paper describes the electrical characteristics of the French 400 kV grid at Cadarache: the European site proposed for ITER. Moreover, the paper describes the main requirements and features of the power converters designed for the ITER coil and additional heating power supplies, characterized by a total installed power of about 1.8 GVA, modular design with basic units up to 90 MVA continuous duty, dc currents up to 68 kA, and voltages from 1 kV to 1 MV dc.

  10. Off-policy integral reinforcement learning optimal tracking control for continuous-time chaotic systems

    International Nuclear Information System (INIS)

    Wei Qing-Lai; Song Rui-Zhuo; Xiao Wen-Dong; Sun Qiu-Ye

    2015-01-01

    This paper presents an off-policy integral reinforcement learning (IRL) algorithm to obtain the optimal tracking control of unknown chaotic systems. Off-policy IRL can learn the solution of the Hamilton–Jacobi–Bellman (HJB) equation from the system data generated by an arbitrary control. Moreover, off-policy IRL can be regarded as a direct learning method, which avoids the identification of the system dynamics. In this paper, the performance index function is first given based on the system tracking error and control error. For solving the HJB equation, an off-policy IRL algorithm is proposed. It is proven that the iterative control makes the tracking error system asymptotically stable and that the iterative performance index function is convergent. A simulation study demonstrates the effectiveness of the developed tracking control method. (paper)

  11. ITER EDA and technology

    International Nuclear Information System (INIS)

    Baker, C.C.

    2001-01-01

    The year 1998 was the culmination of the six-year Engineering Design Activities (EDA) of the International Thermonuclear Experimental Reactor (ITER) Project. The EDA's results in design and in validating technology R and D, plus the associated effort in voluntary physics research, represent a significant achievement and a major milestone in the history of magnetic fusion energy development. Consequently, the ITER EDA was a major theme at this Conference, contributing almost 40 papers

  12. ITER explorations started

    International Nuclear Information System (INIS)

    Golubchikov, L.

    2000-01-01

    Opening this first Explorers' Meeting, Minister Adamov welcomed the participants, thanked the ITER parties for their positive response to his invitation and expressed the desire of the Russian Federation to see ITER realized, stressing the importance of continued progress with the project as an outstanding example of international scientific co-operation. During the meeting, the exploration tasks were discussed and agreed upon, as well as the work plan and schedule

  13. Iterative solution of a nonlinear system arising in phase change problems

    International Nuclear Information System (INIS)

    Williams, M.A.

    1987-01-01

    We consider several iterative methods for solving the nonlinear system arising from an enthalpy formulation of a phase change problem. We present the formulation of the problem. Implicit discretization of the governing equations results in a mildly nonlinear system at each time step. We discuss solving this system using Jacobi, Gauss-Seidel, and SOR iterations and a new modified preconditioned conjugate gradient (MPCG) algorithm. The new MPCG algorithm and its properties are discussed in detail. Numerical results are presented comparing the performance of the SOR algorithm and the MPCG algorithm with one-step SSOR preconditioning. The MPCG algorithm exhibits a superlinear rate of convergence, whereas the SOR algorithm exhibits a linear rate of convergence. Thus, the MPCG algorithm requires fewer iterations to converge than the SOR algorithm. However, in most cases the SOR algorithm requires less total computation time than the MPCG algorithm. Hence, the SOR algorithm appears to be more appropriate for the class of problems considered. 27 refs., 11 figs
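
    For reference, here is a textbook SOR sweep applied to a simple linear, diagonally dominant test system (a 1-D Laplacian). The abstract's setting is a mildly nonlinear system solved at each time step, so this sketch only illustrates the basic relaxation idea, with an illustrative relaxation factor, and says nothing about the MPCG algorithm or the phase change formulation.

    ```python
    import numpy as np

    def sor(A, b, omega=1.5, tol=1e-10, max_iter=10_000):
        """Plain SOR iteration for Ax = b (assumes A is SPD or diagonally dominant)."""
        n = len(b)
        x = np.zeros(n)
        for it in range(max_iter):
            x_old = x.copy()
            for i in range(n):
                sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_old[i + 1:]
                x[i] = (1 - omega) * x_old[i] + omega * (b[i] - sigma) / A[i, i]
            if np.linalg.norm(x - x_old, np.inf) < tol:
                return x, it + 1
        return x, max_iter

    # 1-D Laplacian test problem
    n = 50
    A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    b = np.ones(n)
    x, iters = sor(A, b)
    print("converged in", iters, "iterations")
    ```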

  14. ITER Status and Plans

    Science.gov (United States)

    Greenfield, Charles M.

    2017-10-01

    The US Burning Plasma Organization is pleased to welcome Dr. Bernard Bigot, who will give an update on progress in the ITER Project. Dr. Bigot took over as Director General of the ITER Organization in early 2015, following a distinguished career that included serving as Chairman and CEO of the French Alternative Energies and Atomic Energy Commission and as High Commissioner for ITER in France. During his tenure at ITER the project has moved into high gear, with rapid progress evident on the construction site and preparation of a staged schedule and a research plan leading from where we are today all the way to full DT operation. In an unprecedented international effort, seven partners (China, the European Union, India, Japan, Korea, Russia and the United States) have pooled their financial and scientific resources to build the biggest fusion reactor in history. ITER will open the way to the next step: a demonstration fusion power plant. All DPP attendees are welcome to attend this ITER town meeting.

  15. ITER CTA newsletter. No. 2

    International Nuclear Information System (INIS)

    2001-10-01

    This ITER CTA newsletter contains results of the ITER toroidal field model coil project presented by ITER EU Home Team (Garching) and an article in commemoration of the late Dr. Charles Maisonnier, one of the former leaders of ITER who made significant contributions to its development

  16. Lifted system iterative learning control applied to an industrial robot

    NARCIS (Netherlands)

    Hakvoort, Wouter; Aarts, Ronald G.K.M.; van Dijk, Johannes; Jonker, Jan B.

    2008-01-01

    This paper proposes a model-based iterative learning control algorithm for time-varying systems with a high convergence speed. The convergence of components of the tracking error can be controlled individually with the algorithm. The convergence speed of each error component can be maximised unless

  17. ITER tokamak device

    International Nuclear Information System (INIS)

    Doggett, J.; Salpietro, E.; Shatalov, G.

    1991-01-01

    The results of the Conceptual Design Activities for the International Thermonuclear Experimental Reactor (ITER) are summarized. These activities, carried out between April 1988 and December 1990, produced a consistent set of technical characteristics and preliminary plans for co-ordinated research and development support of ITER; and a conceptual design, a description of design requirements and a preliminary construction schedule and cost estimate. After a description of the design basis, an overview is given of the tokamak device, its auxiliary systems, facility and maintenance. The interrelation and integration of the various subsystems that form the ITER tokamak concept are discussed. The 16 ITER equatorial port allocations, used for nuclear testing, diagnostics, fuelling, maintenance, and heating and current drive, are given, as well as a layout of the reactor building. Finally, brief descriptions are given of the major ITER sub-systems, i.e., (i) magnet systems (toroidal and poloidal field coils and cryogenic systems), (ii) containment structures (vacuum and cryostat vessels, machine gravity supports, attaching locks, passive loops and active coils), (iii) first wall, (iv) divertor plate (design and materials, performance and lifetime, among others), (v) blanket/shield system, (vi) maintenance equipment, (vii) current drive and heating, (viii) fuel cycle system, and (ix) diagnostics. 11 refs, figs and tabs

  18. Twelfth ITER negotiation meeting

    International Nuclear Information System (INIS)

    2006-01-01

    Delegations from China, European Union, Japan, the Republic of Korea, the Russian Federation and the United States of America gathered on Jeju Island, Korea, on 6 December 2005, to complete their negotiations on an Agreement on the joint implementation of the ITER international fusion energy project. At the start of the Meeting, the Delegations unanimously and enthusiastically welcomed India as a full Party to the ITER venture. A Delegation from India then joined the Meeting and participated fully in the discussions that followed. The seven ITER Delegations also welcomed to the Meeting the newly designated Nominee Director-General for the prospective ITER Organization, Ambassador Kaname Ikeda, who is to take up his duties as leader of the project. Based on the results of intensive working level meetings held throughout the previous week, the Delegations have succeeded in clearing the remaining key issues such as decision-making, intellectual property and management within the prospective ITER Organization and adjustments to the sharing of resources as a result of India's participation, including in particular cost sharing and in-kind contributions, leaving only a few legal points requiring resolution during the final lawyers' meeting to review the text for coherence and internal consistency

  19. Value Iteration Adaptive Dynamic Programming for Optimal Control of Discrete-Time Nonlinear Systems.

    Science.gov (United States)

    Wei, Qinglai; Liu, Derong; Lin, Hanquan

    2016-03-01

    In this paper, a value iteration adaptive dynamic programming (ADP) algorithm is developed to solve infinite horizon undiscounted optimal control problems for discrete-time nonlinear systems. The present value iteration ADP algorithm permits an arbitrary positive semi-definite function to initialize the algorithm. A novel convergence analysis is developed to guarantee that the iterative value function converges to the optimal performance index function. Initialized by different initial functions, it is proven that the iterative value function will be monotonically nonincreasing, monotonically nondecreasing, or nonmonotonic and will converge to the optimum. In this paper, for the first time, the admissibility properties of the iterative control laws are developed for value iteration algorithms. It is emphasized that new termination criteria are established to guarantee the effectiveness of the iterative control laws. Neural networks are used to approximate the iterative value function and compute the iterative control law, respectively, for facilitating the implementation of the iterative ADP algorithm. Finally, two simulation examples are given to illustrate the performance of the present method.
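
    To make the value-update structure concrete, the following is a sketch of generic tabular value iteration on a toy two-state, two-action MDP, started (as the abstract allows) from an arbitrary nonnegative initial value function. The MDP data are invented for illustration; the paper's scheme uses neural-network approximation for discrete-time nonlinear systems rather than a lookup table.

    ```python
    import numpy as np

    P = np.array([[[0.8, 0.2], [0.1, 0.9]],    # P[s, a, s'] transition probabilities
                  [[0.5, 0.5], [0.2, 0.8]]])
    R = np.array([[1.0, 0.0],                  # R[s, a] rewards
                  [0.0, 2.0]])
    gamma = 0.9
    V = np.zeros(2)                            # arbitrary (here zero) initial value function

    for i in range(200):
        Q = R + gamma * P @ V                  # Q[s, a] = R[s, a] + gamma * sum_s' P[s, a, s'] V[s']
        V_new = Q.max(axis=1)                  # Bellman optimality update
        if np.max(np.abs(V_new - V)) < 1e-10:
            break
        V = V_new

    print("V* ~", V, "greedy policy:", Q.argmax(axis=1))
    ```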

  20. Scenario-based fitted Q-iteration for adaptive control of water reservoir systems under uncertainty

    Science.gov (United States)

    Bertoni, Federica; Giuliani, Matteo; Castelletti, Andrea

    2017-04-01

    Over recent years, mathematical models have largely been used to support the planning and management of water resources systems. Yet the increasing uncertainty in their inputs, due to increased variability in hydrological regimes, is a major challenge to the optimal operation of these systems. Such uncertainty, boosted by the projected changing climate, violates the stationarity principle generally used for describing hydro-meteorological processes, which assumes time-persisting statistical characteristics of a given variable as inferred from historical data. As this principle is unlikely to be valid in the future, the probability density function used for modeling stochastic disturbances (e.g., inflows) becomes an additional uncertain parameter of the problem, which can be described in a deterministic, set-membership-based fashion. This study contributes a novel method for designing optimal, adaptive policies for controlling water reservoir systems under climate-related uncertainty. The proposed method, called scenario-based Fitted Q-Iteration (sFQI), extends the original Fitted Q-Iteration algorithm by enlarging the state space to include the space of the uncertain system's parameters (i.e., the uncertain climate scenarios). As a result, sFQI embeds the set-membership uncertainty of the future inflow scenarios in the action-value function and is able to approximate, with a single learning process, the optimal control policy associated with any scenario included in the uncertainty set. The method is demonstrated on a synthetic water system, consisting of a regulated lake operated to ensure reliable water supply to downstream users. Numerical results show that the sFQI algorithm successfully identifies adaptive solutions to operate the system under different inflow scenarios, which outperform the control policy designed under historical conditions. Moreover, the sFQI policy generalizes over inflow scenarios not directly experienced during the policy design
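
    For readers unfamiliar with the underlying algorithm, here is a sketch of plain Fitted Q-Iteration (not the sFQI extension) on a toy one-dimensional storage problem, using an extra-trees regressor as in the classic FQI literature. The dynamics, reward, action set and all parameters are hypothetical and chosen only to keep the example self-contained.

    ```python
    import numpy as np
    from sklearn.ensemble import ExtraTreesRegressor

    rng = np.random.default_rng(1)
    actions = np.array([0.0, 0.5, 1.0])                 # hypothetical release fractions
    n, gamma = 2000, 0.95

    # Batch of random transitions (s, a, r, s') from the toy system
    s = rng.uniform(0.0, 1.0, n)                        # storage level
    a = rng.choice(actions, n)
    inflow = rng.uniform(0.0, 0.3, n)
    s_next = np.clip(s - a * s + inflow, 0.0, 1.0)
    reward = -np.abs(a * s - 0.2)                       # penalise deviation from a target supply

    Q = None
    for it in range(15):
        if Q is None:
            target = reward                             # first iteration: one-step reward
        else:
            q_next = np.column_stack([
                Q.predict(np.column_stack([s_next, np.full(n, act)])) for act in actions])
            target = reward + gamma * q_next.max(axis=1)
        Q = ExtraTreesRegressor(n_estimators=20, random_state=0)
        Q.fit(np.column_stack([s, a]), target)          # regress the updated Q-targets
    ```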

  1. A novel multi-agent decentralized win or learn fast policy hill-climbing with eligibility trace algorithm for smart generation control of interconnected complex power grids

    International Nuclear Information System (INIS)

    Xi, Lei; Yu, Tao; Yang, Bo; Zhang, Xiaoshun

    2015-01-01

    Highlights: • Proposing a decentralized smart generation control scheme for automatic generation control coordination. • A novel multi-agent learning algorithm is developed to resolve stochastic control problems in power systems. • A variable learning rate is introduced based on the framework of stochastic games. • A simulation platform is developed to test the performance of different algorithms. - Abstract: This paper proposes a multi-agent smart generation control scheme for automatic generation control coordination in interconnected complex power systems. A novel multi-agent decentralized win or learn fast policy hill-climbing with eligibility trace algorithm is developed, which can effectively identify the optimal average policies via a variable learning rate under various operation conditions. Based on control performance standards, the proposed approach is implemented in a flexible multi-agent stochastic dynamic game-based smart generation control simulation platform. Based on the mixed strategy and average policy, it is highly adaptive in stochastic non-Markov environments and large time-delay systems, and can fulfil automatic generation control coordination in interconnected complex power systems in the presence of increasing penetration of decentralized renewable energy. Two case studies, on a two-area load-frequency control power system and on the China Southern Power Grid model, have been carried out. Simulation results verify that the multi-agent smart generation control scheme based on the proposed approach can obtain optimal average policies and thus improve the closed-loop system performance, and can achieve a fast convergence rate with significant robustness compared with other methods

  2. Iterative total-variation reconstruction versus weighted filtered-backprojection reconstruction with edge-preserving filtering

    International Nuclear Information System (INIS)

    Zeng, Gengsheng L; Li Ya; Zamyatin, Alex

    2013-01-01

    Iterative image reconstruction with the total-variation (TV) constraint has become an active research area in recent years, especially in x-ray CT and MRI. Based on Green's one-step-late algorithm, this paper develops a transmission noise weighted iterative algorithm with a TV prior. This paper compares the reconstructions from this iterative TV algorithm with reconstructions from our previously developed non-iterative reconstruction method that consists of a noise-weighted filtered backprojection (FBP) reconstruction algorithm and a nonlinear edge-preserving post filtering algorithm. This paper gives a mathematical proof that the noise-weighted FBP provides an optimal solution. The results from both methods are compared using clinical data and computer simulation data. The two methods give comparable image quality, while the non-iterative method has the advantage of requiring much shorter computation times. (paper)
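
    To give a feel for what a TV-constrained iteration does, the sketch below runs plain gradient descent on a smoothed 1-D total-variation denoising objective. The objective, smoothing parameter and step size are illustrative assumptions; this is neither the paper's one-step-late transmission-weighted algorithm nor its noise-weighted FBP alternative.

    ```python
    import numpy as np

    def tv_denoise(y, lam=0.1, step=0.2, n_iter=200, eps=1e-2):
        """Gradient descent for the smoothed 1-D TV objective
        0.5*||x - y||^2 + lam * sum_i sqrt((x[i+1]-x[i])^2 + eps)."""
        x = y.copy()
        for _ in range(n_iter):
            d = np.diff(x)
            w = d / np.sqrt(d * d + eps)        # derivative of the smoothed |d|
            grad_tv = np.zeros_like(x)
            grad_tv[:-1] -= w                   # contribution to x[i]
            grad_tv[1:] += w                    # contribution to x[i+1]
            x -= step * ((x - y) + lam * grad_tv)
        return x

    noisy = (np.sign(np.sin(np.linspace(0, 6, 200)))
             + 0.3 * np.random.default_rng(0).normal(size=200))
    print(np.round(tv_denoise(noisy)[:5], 3))
    ```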

  3. Earthly sun called ITER

    International Nuclear Information System (INIS)

    Pozdeyev, Mikhail

    2002-01-01

    Full text: Participating in the film are Academicians Velikhov and Glukhikh, Mr. Filatov, ITER Director from Russia, and Mr. Sannikov from the Kurchatov Institute. The film tells about the starting point of the project (Mr. Lavrentyev), the pioneers of the project (Academicians Tamm, Sakharov and Artsimovich) and about where the project stands now. Participating in ITER now are the US, Russia, Japan and the European Union. There are also two associate members: Kazakhstan and Canada. By now the engineering design phase has been completed. Computer animation used in the video shows how the first thermonuclear reactor, based on the famous Russian tokamak, works. (author)

  4. Iterated multidimensional wave conversion

    International Nuclear Information System (INIS)

    Brizard, A. J.; Tracy, E. R.; Johnston, D.; Kaufman, A. N.; Richardson, A. S.; Zobin, N.

    2011-01-01

    Mode conversion can occur repeatedly in a two-dimensional cavity (e.g., the poloidal cross section of an axisymmetric tokamak). We report on two novel concepts that allow for a complete and global visualization of the ray evolution under iterated conversions. First, iterated conversion is discussed in terms of ray-induced maps from the two-dimensional conversion surface to itself (which can be visualized in terms of three-dimensional rooms). Second, the two-dimensional conversion surface is shown to possess a symplectic structure derived from Dirac constraints associated with the two dispersion surfaces of the interacting waves.

  5. Physics fundamentals for ITER

    International Nuclear Information System (INIS)

    Rosenbluth, M.N.

    1999-01-01

    The design of an experimental thermonuclear reactor requires both cutting-edge technology and physics predictions precise enough to carry forward the design. The past few years of worldwide physics studies have seen great progress in understanding, innovation and integration. We will discuss this progress and the remaining issues in several key physics areas. (1) Transport and plasma confinement. A worldwide database has led to an 'empirical scaling law' for tokamaks which predicts adequate confinement for the ITER fusion mission, albeit with considerable but acceptable uncertainty. The ongoing revolution in computer capabilities has given rise to new gyrofluid and gyrokinetic simulations of microphysics which may be expected in the near future to attain predictive accuracy. Important databases on H-mode characteristics and helium retention have also been assembled. (2) Divertors, heat removal and fuelling. A novel concept for heat removal - the radiative, baffled, partially detached divertor - has been designed for ITER. Extensive two-dimensional (2D) calculations have been performed and agree qualitatively with recent experiments. Preliminary studies of the interaction of this configuration with core confinement are encouraging and the success of inside pellet launch provides an attractive alternative fuelling method. (3) Macrostability. The ITER mission can be accomplished well within ideal magnetohydrodynamic (MHD) stability limits, except for internal kink modes. Comparisons with JET, as well as a theoretical model including kinetic effects, predict such sawteeth will be benign in ITER. Alternative scenarios involving delayed current penetration or off-axis current drive may be employed if required. The recent discovery of neoclassical beta limits well below ideal MHD limits poses a threat to performance. Extrapolation to reactor scale is as yet unclear. In theory such modes are controllable by current drive profile control or feedback and experiments should

  6. FAST ITERATIVE KILOVOLTAGE CONE BEAM TOMOGRAPHY

    Directory of Open Access Journals (Sweden)

    S. A. Zolotarev

    2015-01-01

    Creating fast parallel iterative tomographic algorithms that run on graphics accelerators and simultaneously minimize both the residual and the total variation of the reconstructed image is an important and urgent task of great scientific and practical importance. Such algorithms can be used, for example, in radiation therapy, where a computed tomography scan of the patient is always performed beforehand in order to better identify the regions that will subsequently be exposed to radiation.

  7. ITER conceptual design

    International Nuclear Information System (INIS)

    Tomabechi, K.; Gilleland, J.R.; Sokolov, Yu.A.; Toschi, R.

    1991-01-01

    The Conceptual Design Activities of the International Thermonuclear Experimental Reactor (ITER) were carried out jointly by the European Community, Japan, the Soviet Union and the United States of America, under the auspices of the International Atomic Energy Agency. The European Community provided the site for joint work sessions at the Max-Planck-Institut fuer Plasmaphysik in Garching, Germany. The Conceptual Design Activities began in the spring of 1988 and ended in December 1990. The objectives of the activities were to develop the design of ITER, to perform a safety and environmental analysis, to define the site requirements as well as the future research and development needs, to estimate the cost and manpower, and to prepare a schedule for detailed engineering design, construction and operation. On the basis of the investigation and analysis performed, a concept of ITER was developed which incorporated maximum flexibility of the performance of the device and allowed a variety of operating scenarios to be adopted. The heart of the machine is a tokamak having a plasma major radius of 6 m, a plasma minor radius of 2.15 m, a nominal plasma current of 22 MA and a nominal fusion power of 1 GW. The conceptual design can meet the technical objectives of the ITER programme. Because of the success of the Conceptual Design Activities, the Parties are now considering the implementation of the next phase, called the Engineering Design Activities. (author). Refs, figs and tabs

  8. ITER power electrical networks

    International Nuclear Information System (INIS)

    Sejas Portela, S.

    2011-01-01

    The ITER project (International Thermonuclear Experimental Reactor) is an international research and development effort to design, build and operate an experimental facility to demonstrate the scientific and technological feasibility of obtaining useful energy from the physical phenomenon known as nuclear fusion.

  9. ITER-FEAT operation

    International Nuclear Information System (INIS)

    Shimomura, Y.; Huget, M.; Mizoguchi, T.; Murakami, Y.; Polevoi, A.; Shimada, M.; Aymar, R.; Chuyanov, V.; Matsumoto, H.

    2001-01-01

    ITER is planned to be the first fusion experimental reactor in the world operating for research in physics and engineering. The first 10 years' operation will be devoted primarily to physics issues at low neutron fluence and the following 10 years' operation to engineering testing at higher fluence. ITER can accommodate various plasma configurations and plasma operation modes such as inductive high Q modes, long pulse hybrid modes, non-inductive steady-state modes, with large ranges of plasma current, density, beta and fusion power, and with various heating and current drive methods. This flexibility will provide an advantage for coping with uncertainties in the physics database, in studying burning plasmas, in introducing advanced features and in optimizing the plasma performance for the different programme objectives. Remote sites will be able to participate in the ITER experiment. This concept will provide an advantage not only in operating ITER for 24 hours per day but also in involving the world-wide fusion communities and in promoting scientific competition among the Parties. (author)

  10. ITER conceptual design report

    International Nuclear Information System (INIS)

    1991-01-01

    Results of the International Thermonuclear Experimental Reactor (ITER) Conceptual Design Activity (CDA) are reported. This report covers the Terms of Reference for the project: defining the technical specifications, defining future research needs, defining site requirements, and carrying out a coordinated research effort coincident with the CDA. Refs, figs and tabs

  11. US ITER Management Plan

    International Nuclear Information System (INIS)

    1991-12-01

    This US ITER Management Plan is the plan for conducting the Engineering Design Activities within the US. The plan applies to all design, analyses, and associated physics and technology research and development (R ampersand D) required to support the program. The plan defines the management considerations associated with these activities. The plan also defines the management controls that the project participants will follow to establish, implement, monitor, and report these activities. The activities are to be conducted by the project in accordance with this plan. The plan will be updated to reflect the then-current management approach required to meet the project objectives. The plan will be reviewed at least annually for possible revision. Section 2 presents the ITER objectives, a brief description of the ITER concept as developed during the Conceptual Design Activities, and comments on the Engineering Design Activities. Section 3 discusses the planned international organization for the Engineering Design Activities, from which the tasks will flow to the US Home Team. Section 4 describes the US ITER management organization and responsibilities during the Engineering Design Activities. Section 5 describes the project management and control to be used to perform the assigned tasks during the Engineering Design Activities. Section 6 presents the references. Several appendices are provided that contain detailed information related to the front material

  12. Iterative List Decoding

    DEFF Research Database (Denmark)

    Justesen, Jørn; Høholdt, Tom; Hjaltason, Johan

    2005-01-01

    We analyze the relation between iterative decoding and the extended parity check matrix. By considering a modified version of bit flipping, which produces a list of decoded words, we derive several relations between decodable error patterns and the parameters of the code. By developing a tree...

  13. ITER at Cadarache

    International Nuclear Information System (INIS)

    2005-06-01

    This public information document presents the ITER project (International Thermonuclear Experimental Reactor), the definition of the fusion, the international cooperation and the advantages of the project. It presents also the site of Cadarache, an appropriate scientifical and economical environment. The last part of the documentation recalls the historical aspect of the project and the today mobilization of all partners. (A.L.B.)

  14. Iterative software kernels

    Energy Technology Data Exchange (ETDEWEB)

    Duff, I.

    1994-12-31

    This workshop focuses on kernels for iterative software packages. Specifically, the three speakers discuss various aspects of sparse BLAS kernels. Their topics are: 'Current status of user level sparse BLAS'; 'Current status of the sparse BLAS toolkit'; and 'Adding matrix-matrix and matrix-matrix-matrix multiply to the sparse BLAS toolkit'.

  15. Status of ITER

    International Nuclear Information System (INIS)

    Aymar, R.

    2002-01-01

    At the end of the Engineering Design Activities (EDA) in July 2001, all the essential elements became available to make a decision on construction of ITER. A sufficiently detailed and integrated engineering design now exists for a generic site, has been assessed for feasibility, and has been costed, and the essential physics and technology R and D has been carried out to underpin the design choices. Formal negotiations have now begun between the current participants (Canada, Euratom, Japan, and the Russian Federation) on a Joint Implementation Agreement for ITER, which also establishes the legal entity to run ITER. These negotiations are supported on technical aspects by Coordinated Technical Activities (CTA), which maintain the integrity of the project, for the good of all participants, and concentrate on preparing for procurement by industry of the longest lead items, and for formal application for a construction license with the host country. This paper highlights the main features of the ITER design. With cryogenically cooled magnets close to a neutron-generating plasma, the design of shielding with adequate access via port plugs for auxiliaries such as heating and diagnostics, and of remote replacement and refurbishing systems for in-vessel components, are particularly interesting nuclear technology challenges. Making a safety case for ITER to satisfy potential regulators and to demonstrate, as far as possible at this stage, the environmental attractiveness of fusion as an energy source is also important. The paper gives illustrative details on this work, and an update on the progress of technical preparations for construction, as well as the status of the above negotiations

  16. A Simple Insight into Iterative Belief Propagation's Success

    OpenAIRE

    Dechter, Rina; Mateescu, Robert

    2012-01-01

    In non-ergodic belief networks the posterior belief of many queries given evidence may become zero. The paper shows that when belief propagation is applied iteratively over arbitrary networks (the so-called iterative or loopy belief propagation (IBP)), it is identical to an arc-consistency algorithm relative to zero-belief queries (namely, assessing zero posterior probabilities). This implies that zero-belief conclusions derived by belief propagation converge and are sound. More importantly...

  17. An Iterative Brinkman penalization for particle vortex methods

    DEFF Research Database (Denmark)

    Walther, Jens Honore; Hejlesen, Mads Mølholm; Leonard, A.

    2013-01-01

    We present an iterative Brinkman penalization method for the enforcement of the no-slip boundary condition in vortex particle methods. This is achieved by implementing a penalization of the velocity field using iteration of the penalized vorticity. We show that using the conventional Brinkman...... condition. These are: the impulsively started flow past a cylinder, the impulsively started flow normal to a flat plate, and the uniformly accelerated flow normal to a flat plate. The iterative penalization algorithm is shown to give significantly improved results compared to the conventional penalization...

  18. Iterative Methods for MPC on Graphical Processing Units

    DEFF Research Database (Denmark)

    Gade-Nielsen, Nicolai Fog; Jørgensen, John Bagterp; Dammann, Bernd

    2012-01-01

    reevaluating existing algorithms with respect to this new architecture. This is of particular interest to large-scale constrained optimization problems with real-time requirements. The aim of this study is to investigate different methods for solving large-scale optimization problems with focus...... on their applicability for GPUs. We examine published techniques for iterative methods in interior point methods (IPMs) by applying them to simple test cases, such as a system of masses connected by springs. Iterative methods allow us to deal with the ill-conditioning occurring in the later iterations of the IPM as well...

  19. ITER EDA status

    International Nuclear Information System (INIS)

    Aymar, R.

    2001-01-01

    The Project has focused on drafting the Plant Description Document (PDD), which will be published as the Technical Basis for the ITER Final Design Report (FDR), and its related documentation in time for the ITER review process. The preparations have involved continued intensive detailed design work, analyses and assessments by the Home Teams and the Joint Central Team, who have co-operated closely and efficiently. The main technical document has been completed in time for circulation, as planned, to TAC members for their review at TAC-17 (19-22 February 2001). Some of the supporting documents, such as the Plant Design Specification (PDS), Design Requirements and Guidelines (DRG1 and DRG2), and the Plant Safety Requirement (PSR) are also available for reference in draft form. A summary paper of the PDD for the Council's information is available as a separate document. A new documentation structure for the Project has been established. This hierarchical structure for documentation facilitates the entire organization in a way that allows better change control and avoids duplications. The initiative was intended to make this documentation system valid for the construction and operation phases of ITER. As requested, the Director and the JCT have been assisting the Explorations to plan for future joint technical activities during the Negotiations, and to consider technical issues important for ITER construction and operation for their introduction in the draft of a future joint implementation agreement. As charged by the Explorers, the Director has held discussions with the Home Team Leaders in order to prepare for the staffing of the International Team and Participants Teams during the Negotiations (Co-ordinated Technical Activities, CTA) and also in view of informing all ITER staff about their future directions in a timely fashion. One important element of the work was the completion by the Parties' industries of costing studies of about 83 ''procurement packages

  20. Resource Usage Protocols for Iterators

    NARCIS (Netherlands)

    Huisman, Marieke; Haack, C.; Müller, P.; Hurlin, C.

    We discuss usage protocols for iterator objects that prevent concurrent modifications of the underlying collection while iterators are in progress. We formalize these protocols in Java-like object interfaces, enriched with separation logic contracts. We present examples of iterator clients and
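    The protocol described above is enforced statically in the paper through Java-like interfaces with separation logic contracts; as a rough dynamic analogue (not the authors' formalization), a collection can version-stamp itself so that its iterators fail fast when the underlying collection is modified mid-iteration. The class and method names below are illustrative only.

```python
class GuardedList:
    """A list wrapper whose iterators detect structural modification,
    mirroring the 'no modification while an iteration is in progress'
    protocol (enforced here dynamically rather than by static contracts)."""

    def __init__(self, items=()):
        self._items = list(items)
        self._version = 0          # bumped on every structural change

    def add(self, item):
        self._items.append(item)
        self._version += 1

    def __iter__(self):
        expected = self._version
        for item in self._items:
            if self._version != expected:
                raise RuntimeError("collection modified during iteration")
            yield item

xs = GuardedList([1, 2, 3])
for x in xs:
    print(x)
    # xs.add(4)   # uncommenting this violates the protocol and raises RuntimeError
```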

  1. ITER CTA newsletter. No. 4

    International Nuclear Information System (INIS)

    2001-12-01

    This ITER CTA Newsletter contains information about the organization of the ITER Co-ordinated Technical Activities (CTA) International Team as the follow-up of the ITER CTA project board meeting in Toronto on 7 November 2001. It also includes a summary on the start of the international tokamak physics activity by Dr. D. Campbell, Chair of the ITPA Co-ordinating Committee

  2. Status of the ITER EDA

    International Nuclear Information System (INIS)

    Aymar, R.

    2000-01-01

    This article summarizes progress made in the ITER Engineering Design Activities in the period between the ITER Meeting in Tokyo (January 2000) and June 2000. Topics: Termination of EDA, Joint Central Team and Support, Task Assignments, ITER Physics, Urgent and High Priority Physics Research Areas

  3. Iterative supervirtual refraction interferometry

    KAUST Repository

    Al-Hagan, Ola

    2014-05-02

    In refraction tomography, the low signal-to-noise ratio (S/N) can be a major obstacle in picking the first-break arrivals at the far-offset receivers. To increase the S/N, we evaluated iterative supervirtual refraction interferometry (ISVI), which is an extension of the supervirtual refraction interferometry method. In this method, supervirtual traces are computed and then iteratively reused to generate supervirtual traces with a higher S/N. Our empirical results with both synthetic and field data revealed that ISVI can significantly boost up the S/N of far-offset traces. The drawback is that using refraction events from more than one refractor can introduce unacceptable artifacts into the final traveltime versus offset curve. This problem can be avoided by careful windowing of refraction events.

  4. ITER technical basis

    International Nuclear Information System (INIS)

    2002-01-01

Following on from the Final Report of the EDA (DS/21) and the summary of the ITER Final Design Report (DS/22), the technical basis gives further details of the design of ITER. It is in two parts. The first, the Plant Design Specification, summarises the main constraints on the plant design and operation from the viewpoint of engineering and physics assumptions, compliance with safety regulations, and siting requirements and assumptions. The second, the Plant Description Document, describes the physics performance and engineering characteristics of the plant design, illustrates the potential operational consequences for the locality of a generic site, gives the construction, commissioning, exploitation and decommissioning schedule, and reports the estimated lifetime costing based on data from the industry of the EDA parties

  5. Iterative participatory design

    DEFF Research Database (Denmark)

    Simonsen, Jesper; Hertzum, Morten

    2010-01-01

    The theoretical background in this chapter is information systems development in an organizational context. This includes theories from participatory design, human-computer interaction, and ethnographically inspired studies of work practices. The concept of design is defined as an experimental...... iterative process of mutual learning by designers and domain experts (users), who aim to change the users’ work practices through the introduction of information systems. We provide an illustrative case example with an ethnographic study of clinicians experimenting with a new electronic patient record...... system, focussing on emergent and opportunity-based change enabled by appropriating the system into real work. The contribution to a general core of design research is a reconstruction of the iterative prototyping approach into a general model for sustained participatory design....

  6. Conformable variational iteration method

    Directory of Open Access Journals (Sweden)

    Omer Acan

    2017-02-01

Full Text Available In this study, we introduce the conformable variational iteration method based on the newly defined fractional derivative called the conformable fractional derivative. This new method is applied to two fractional-order ordinary differential equations. To illustrate the solutions obtained with this method, linear homogeneous and non-linear non-homogeneous fractional ordinary differential equations are selected. The obtained results are compared with the exact solutions, and their graphs are plotted to demonstrate the efficiency and accuracy of the method.
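    The correction-functional idea behind the variational iteration method can be sketched for the ordinary (integer-order) case; the conformable fractional ingredients of the paper are not reproduced here. The sketch below assumes the test problem u' + u = 0 with u(0) = 1 and Lagrange multiplier λ = -1, choices made purely for illustration.

```python
import sympy as sp

t, s = sp.symbols('t s')

def vim(u0, n_corrections=5):
    """Classical (integer-order) variational iteration method for u' + u = 0,
    u(0) = 1, with Lagrange multiplier lambda = -1; the conformable version
    replaces the derivative and integral by their conformable counterparts."""
    u = u0
    for _ in range(n_corrections):
        integrand = (sp.diff(u, t) + u).subs(t, s)
        u = sp.expand(u - sp.integrate(integrand, (s, 0, t)))   # correction functional
    return u

print(vim(sp.Integer(1)))   # partial sums of exp(-t): 1 - t + t**2/2 - ...
```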

  7. Neutron cameras for ITER

    International Nuclear Information System (INIS)

    Johnson, L.C.; Barnes, C.W.; Batistoni, P.

    1998-01-01

    Neutron cameras with horizontal and vertical views have been designed for ITER, based on systems used on JET and TFTR. The cameras consist of fan-shaped arrays of collimated flight tubes, with suitably chosen detectors situated outside the biological shield. The sight lines view the ITER plasma through slots in the shield blanket and penetrate the vacuum vessel, cryostat, and biological shield through stainless steel windows. This paper analyzes the expected performance of several neutron camera arrangements for ITER. In addition to the reference designs, the authors examine proposed compact cameras, in which neutron fluxes are inferred from 16 N decay gammas in dedicated flowing water loops, and conventional cameras with fewer sight lines and more limited fields of view than in the reference designs. It is shown that the spatial sampling provided by the reference designs is sufficient to satisfy target measurement requirements and that some reduction in field of view may be permissible. The accuracy of measurements with 16 N-based compact cameras is not yet established, and they fail to satisfy requirements for parameter range and time resolution by large margins

  8. ITER concept definition. V.2

    International Nuclear Information System (INIS)

    1989-01-01

    Volume II of the two volumes describing the concept definition of the International Thermonuclear Experimental Reactor deals with the ITER concept in technical depth, and covers all areas of design of the ITER tokamak. Included are an assessment of the current database for design, scoping studies, rationale for concepts selection, performance flexibility, the ITER concept, the operations and experimental/testing program, ITER parameters and design phase schedule, and research and development specific to ITER. This latter includes a definition of specific research and development tasks, a division of tasks among members, specific milestones, required results, and schedules. Figs and tabs

  9. ITER CTA newsletter. No. 10

    International Nuclear Information System (INIS)

    2002-07-01

    This ITER CTA newsletter issue comprises the ITER backgrounder, which was approved as an official document by the participants in the Negotiations on the ITER Implementation agreement at their fourth meeting, held in Cadarache from 4-6 June 2002, and information about two ITER meetings: one is the third meeting of the ITER parties' designated Safety Representatives, which took place in Cadarache, France from 6-7 June 2002, and the other is the second meeting of the International Tokamak Physics Activity (ITPA) topical group on diagnostics, which was held at General Atomics, San Diego, USA, from 4-8 March 2002

  10. ITER EDA newsletter. V. 7, no. 7

    International Nuclear Information System (INIS)

    1998-07-01

    This newsletter contains the articles: 'Extraordinary ITER council meeting', 'ITER EDA final safety meeting' and 'Summary report of the 3rd combined workshop of the ITER confinement and transport and ITER confinement database and modeling expert groups'

  11. ITER EDA newsletter. V. 10, special issue

    International Nuclear Information System (INIS)

    2001-07-01

    This ITER EDA Newsletter includes summaries of the reports of ITER EDA JCT Physics unit about ITER physics R and D during the Engineering Design Activities (EDA), ITER EDA JCT Naka JWC ITER technology R and D during the EDA, and Safety, Environment and Health group of ITER EDA JCT, Garching JWS on EDA activities related to safety

  12. GPU-Accelerated Asynchronous Error Correction for Mixed Precision Iterative Refinement

    Energy Technology Data Exchange (ETDEWEB)

Anzt, Hartwig [Karlsruhe Inst. of Technology (KIT) (Germany); Luszczek, Piotr [Univ. of Tennessee, Knoxville, TN (United States); Dongarra, Jack [Univ. of Tennessee, Knoxville, TN (United States); Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Univ. of Manchester (United Kingdom); Heuveline, Vincent [Karlsruhe Inst. of Technology (KIT) (Germany)

    2011-12-14

In hardware-aware high performance computing, block-asynchronous iteration and mixed precision iterative refinement are two techniques that are applied to leverage the computing power of SIMD accelerators like GPUs. Although they use a very different approach for this purpose, they share the basic idea of compensating the convergence behaviour of an inferior numerical algorithm by a more efficient usage of the provided computing power. In this paper, we want to analyze the potential of combining both techniques. Therefore, we implement a mixed precision iterative refinement algorithm using a block-asynchronous iteration as an error correction solver, and compare its performance with a pure implementation of a block-asynchronous iteration and an iterative refinement method using double precision for the error correction solver. For matrices from the University of Florida Matrix Collection, we report the convergence behaviour and provide the total solver runtime using different GPU architectures.
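    As a toy illustration of the iterative-refinement half of this idea (the paper's block-asynchronous inner solver and GPU kernels are not reproduced), one can solve in single precision and accumulate residuals and corrections in double precision:

```python
import numpy as np

def mixed_precision_refinement(A, b, max_iter=20, tol=1e-12):
    """Iterative refinement: inner solves in float32, residuals in float64.

    A CPU stand-in for the paper's GPU setting, where the cheap inner solver
    runs in low precision and the correction loop restores double accuracy.
    """
    A32 = A.astype(np.float32)
    x = np.linalg.solve(A32, b.astype(np.float32)).astype(np.float64)
    for _ in range(max_iter):
        r = b - A @ x                      # residual in double precision
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            break
        d = np.linalg.solve(A32, r.astype(np.float32)).astype(np.float64)
        x += d                             # correction step
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 100)) + 100 * np.eye(100)   # well-conditioned test matrix
b = rng.standard_normal(100)
x = mixed_precision_refinement(A, b)
print(np.linalg.norm(A @ x - b))
```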

  13. Sparse BLIP: BLind Iterative Parallel imaging reconstruction using compressed sensing.

    Science.gov (United States)

    She, Huajun; Chen, Rong-Rong; Liang, Dong; DiBella, Edward V R; Ying, Leslie

    2014-02-01

    To develop a sensitivity-based parallel imaging reconstruction method to reconstruct iteratively both the coil sensitivities and MR image simultaneously based on their prior information. Parallel magnetic resonance imaging reconstruction problem can be formulated as a multichannel sampling problem where solutions are sought analytically. However, the channel functions given by the coil sensitivities in parallel imaging are not known exactly and the estimation error usually leads to artifacts. In this study, we propose a new reconstruction algorithm, termed Sparse BLind Iterative Parallel, for blind iterative parallel imaging reconstruction using compressed sensing. The proposed algorithm reconstructs both the sensitivity functions and the image simultaneously from undersampled data. It enforces the sparseness constraint in the image as done in compressed sensing, but is different from compressed sensing in that the sensing matrix is unknown and additional constraint is enforced on the sensitivities as well. Both phantom and in vivo imaging experiments were carried out with retrospective undersampling to evaluate the performance of the proposed method. Experiments show improvement in Sparse BLind Iterative Parallel reconstruction when compared with Sparse SENSE, JSENSE, IRGN-TV, and L1-SPIRiT reconstructions with the same number of measurements. The proposed Sparse BLind Iterative Parallel algorithm reduces the reconstruction errors when compared to the state-of-the-art parallel imaging methods. Copyright © 2013 Wiley Periodicals, Inc.

  14. A new iterative speech enhancement scheme based on Kalman filtering

    DEFF Research Database (Denmark)

    Li, Chunjian; Andersen, Søren Vang

    2005-01-01

    A new iterative speech enhancement scheme that can be seen as an approximation to the Expectation-Maximization (EM) algorithm is proposed. The algorithm employs a Kalman filter that models the excitation source as a spectrally white process with a rapidly time-varying variance, which calls...... for a high temporal resolution estimation of this variance. A Local Variance Estimator based on a Prediction Error Kalman Filter is designed for this high temporal resolution variance estimation. To achieve fast convergence and avoid local maxima of the likelihood function, a Weighted Power Spectral...... Subtraction filter is introduced as an initialization procedure. Iterations are then made sequential inter-frame, exploiting the fact that the AR model changes slowly between neighboring frames. The proposed algorithm is computationally more efficient than a baseline EM algorithm due to its fast convergence...

  15. Image segmentation by iterative parallel region growing and splitting

    Science.gov (United States)

    Tilton, James C.

    1989-01-01

    The spatially constrained clustering (SCC) iterative parallel region-growing technique is applied to image analysis. The SCC algorithm is implemented on the massively parallel processor at NASA Goddard. Most previous region-growing approaches have the drawback that the segmentation produced depends on the order in which portions of the image are processed. The ideal solution to this problem (merging only the single most similar pair of spatially adjacent regions in the image in each iteration) becomes impractical except for very small images, even on a massively parallel computer. The SCC algorithm overcomes these problems by performing, in parallel, the best merge within each of a set of local, possibly overlapping, subimages. A region-splitting stage is also incorporated into the algorithm, but experiments show that region splitting generally does not improve segmentation results. The SCC algorithm has been tested on various imagery data, and test results for a Landsat TM image are summarized.

  16. Radiation dose reduction using 100-kVp and a sinogram-affirmed iterative reconstruction algorithm in adolescent head CT: Impact on grey-white matter contrast and image noise.

    Science.gov (United States)

    Nagayama, Yasunori; Nakaura, Takeshi; Tsuji, Akinori; Urata, Joji; Furusawa, Mitsuhiro; Yuki, Hideaki; Hirarta, Kenichiro; Kidoh, Masafumi; Oda, Seitaro; Utsunomiya, Daisuke; Yamashita, Yasuyuki

    2017-07-01

To retrospectively evaluate the image quality and radiation dose of 100-kVp scans with sinogram-affirmed iterative reconstruction (IR) for unenhanced head CT in adolescents. Sixty-nine patients aged 12-17 years underwent head CT under 120- (n = 34) or 100-kVp (n = 35) protocols. The 120-kVp images were reconstructed with filtered back-projection (FBP), 100-kVp images with FBP (100-kVp-F) and sinogram-affirmed IR (100-kVp-S). We compared the effective dose (ED), grey-white matter (GM-WM) contrast, image noise, and contrast-to-noise ratio (CNR) between protocols in supratentorial (ST) and posterior fossa (PS). We also assessed GM-WM contrast, image noise, sharpness, artifacts, and overall image quality on a four-point scale. ED was 46% lower with 100- than 120-kVp (p < 0.001). GM-WM contrast was higher, and image noise was lower, on 100-kVp-S than 120-kVp at ST (p < 0.001). CNR of 100-kVp-S was higher than that of 120-kVp (p < 0.001). GM-WM contrast of 100-kVp-S was subjectively rated as better than that of 120-kVp (p < 0.001). There were no significant differences in the other criteria between 100-kVp-S and 120-kVp (p = 0.072-0.966). The 100-kVp protocol with sinogram-affirmed IR facilitated dramatic radiation reduction and better GM-WM contrast without increasing image noise in adolescent head CT. • 100-kVp head CT provides 46% radiation dose reduction compared with 120-kVp. • 100-kVp scanning improves subjective and objective GM-WM contrast. • Sinogram-affirmed IR decreases head CT image noise, especially in the supratentorial region. • 100-kVp protocol with sinogram-affirmed IR is suited for adolescent head CT.

  17. Value of 100 kVp scan with sinogram-affirmed iterative reconstruction algorithm on a single-source CT system during whole-body CT for radiation and contrast medium dose reduction: an intra-individual feasibility study.

    Science.gov (United States)

    Nagayama, Y; Nakaura, T; Oda, S; Tsuji, A; Urata, J; Furusawa, M; Tanoue, S; Utsunomiya, D; Yamashita, Y

    2018-02-01

To perform an intra-individual investigation of the usefulness of a contrast medium (CM) and radiation dose-reduction protocol using single-source computed tomography (CT) combined with 100 kVp and sinogram-affirmed iterative reconstruction (SAFIRE) for whole-body CT (WBCT; chest-abdomen-pelvis CT) in oncology patients. Forty-three oncology patients who had undergone WBCT under both 120 and 100 kVp protocols at different time points (mean interscan interval: 98 days) were included retrospectively. The CM doses for the 120 and 100 kVp protocols were 600 and 480 mg iodine/kg, respectively; 120 kVp images were reconstructed with filtered back-projection (FBP), whereas 100 kVp images were reconstructed with FBP (100 kVp-F) and SAFIRE (100 kVp-S). The size-specific dose estimate (SSDE), iodine load and image quality of each protocol were compared. The SSDE and iodine load of the 100 kVp protocol were 34% and 21%, respectively, lower than those of the 120 kVp protocol (SSDE: 10.6±1.1 versus 16.1±1.8 mGy; iodine load: 24.8±4 versus 31.5±5.5 g iodine, p < 0.001), without compromising image quality. Copyright © 2017 The Royal College of Radiologists. Published by Elsevier Ltd. All rights reserved.

  18. Iterative reconstruction of a region of interest for transmission tomography.

    Science.gov (United States)

    Ziegler, Andy; Nielsen, Tim; Grass, Michael

    2008-04-01

    It was shown that images reconstructed for transmission tomography with iterative maximum likelihood (ML) algorithms exhibit a higher signal-to-noise ratio than images reconstructed with filtered back-projection type algorithms. However, a drawback of ML reconstruction in particular and iterative reconstruction in general is the requirement that the reconstructed field of view (FOV) has to cover the whole volume that contributes to the absorption. In the case of a high resolution reconstruction, this demands a huge number of voxels. This article shows how an iterative ML reconstruction can be limited to a region of interest (ROI) without losing the advantages of a ML reconstruction. Compared with a full FOV ML reconstruction, the reconstruction speed is mainly increased by reducing the number of voxels which are necessary for a ROI reconstruction. In addition, the speed of convergence is increased.

  19. An Iterated Tabu Search Approach for the Clique Partitioning Problem

    Directory of Open Access Journals (Sweden)

    Gintaras Palubeckis

    2014-01-01

    all cliques induced by the subsets is as small as possible. We develop an iterated tabu search (ITS algorithm for solving this problem. The proposed algorithm incorporates tabu search, local search, and solution perturbation procedures. We report computational results on CPP instances of size up to 2000 vertices. Performance comparisons of ITS against state-of-the-art methods from the literature demonstrate the competitiveness of our approach.

  20. Simulation-based algorithms for Markov decision processes

    CERN Document Server

    Chang, Hyeong Soo; Fu, Michael C; Marcus, Steven I

    2013-01-01

    Markov decision process (MDP) models are widely used for modeling sequential decision-making problems that arise in engineering, economics, computer science, and the social sciences.  Many real-world problems modeled by MDPs have huge state and/or action spaces, giving an opening to the curse of dimensionality and so making practical solution of the resulting models intractable.  In other cases, the system of interest is too complex to allow explicit specification of some of the MDP model parameters, but simulation samples are readily available (e.g., for random transitions and costs). For these settings, various sampling and population-based algorithms have been developed to overcome the difficulties of computing an optimal solution in terms of a policy and/or value function.  Specific approaches include adaptive sampling, evolutionary policy iteration, evolutionary random policy search, and model reference adaptive search. This substantially enlarged new edition reflects the latest developments in novel ...
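    For readers unfamiliar with the baseline that these simulation-based methods build on, a minimal tabular policy iteration sketch for a small, fully specified MDP might look as follows; the transition and reward arrays are hypothetical, and the book's adaptive-sampling and evolutionary variants are not shown.

```python
import numpy as np

def policy_iteration(P, R, gamma=0.95):
    """Tabular policy iteration for a finite MDP.

    P: transition probabilities, shape (A, S, S); R: rewards, shape (A, S).
    Returns a deterministic policy (one action per state) and its value function.
    """
    n_actions, n_states, _ = P.shape
    policy = np.zeros(n_states, dtype=int)
    while True:
        # Policy evaluation: solve (I - gamma * P_pi) v = r_pi exactly.
        P_pi = P[policy, np.arange(n_states), :]
        r_pi = R[policy, np.arange(n_states)]
        v = np.linalg.solve(np.eye(n_states) - gamma * P_pi, r_pi)
        # Policy improvement: act greedily with respect to v.
        q = R + gamma * np.einsum('asx,x->as', P, v)
        new_policy = np.argmax(q, axis=0)
        if np.array_equal(new_policy, policy):
            return policy, v
        policy = new_policy

# Tiny two-state, two-action example (hypothetical numbers).
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.1, 0.9], [0.8, 0.2]]])
R = np.array([[1.0, 0.0],
              [0.0, 2.0]])
print(policy_iteration(P, R))
```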

  1. Tritium module for ITER/Tiber system code

    International Nuclear Information System (INIS)

    Finn, P.A.; Willms, S.; Busigin, A.; Kalyanam, K.M.

    1988-01-01

    A tritium module was developed for the ITER/Tiber system code to provide information on capital costs, tritium inventory, power requirements and building volumes for these systems. In the tritium module, the main tritium subsystems/emdash/plasma processing, atmospheric cleanup, water cleanup, blanket processing/emdash/are each represented by simple scaleable algorithms. 6 refs., 2 tabs

  2. A novel iterative energy calibration method for composite germanium detectors

    International Nuclear Information System (INIS)

    Pattabiraman, N.S.; Chintalapudi, S.N.; Ghugre, S.S.

    2004-01-01

    An automatic method for energy calibration of the observed experimental spectrum has been developed. The method presented is based on an iterative algorithm and presents an efficient way to perform energy calibrations after establishing the weights of the calibration data. An application of this novel technique for data acquired using composite detectors in an in-beam γ-ray spectroscopy experiment is presented

  3. A novel iterative energy calibration method for composite germanium detectors

    Energy Technology Data Exchange (ETDEWEB)

    Pattabiraman, N.S.; Chintalapudi, S.N.; Ghugre, S.S. E-mail: ssg@alpha.iuc.res.in

    2004-07-01

    An automatic method for energy calibration of the observed experimental spectrum has been developed. The method presented is based on an iterative algorithm and presents an efficient way to perform energy calibrations after establishing the weights of the calibration data. An application of this novel technique for data acquired using composite detectors in an in-beam {gamma}-ray spectroscopy experiment is presented.

  4. Motion compensated iterative reconstruction for cardiac X-ray tomography

    NARCIS (Netherlands)

    A.A. Isola (Alfonso)

    2010-01-01

Within this Ph.D. project, three-dimensional reconstruction methods for moving objects (with a focus on the human heart) from cone-beam X-ray projections using iterative reconstruction algorithms were developed and evaluated. This project was carried out in collaboration with the Digital

  5. Radiation dose reduction using 100-kVp and a sinogram-affirmed iterative reconstruction algorithm in adolescent head CT: Impact on grey-white matter contrast and image noise

    Energy Technology Data Exchange (ETDEWEB)

    Nagayama, Yasunori [Kumamoto City Hospital, Department of Radiology, Kumamoto (Japan); Kumamoto University, Department of Diagnostic Radiology, Graduate School of Medical Sciences, Kumamoto (Japan); Nakaura, Takeshi; Yuki, Hideaki; Hirarta, Kenichiro; Kidoh, Masafumi; Oda, Seitaro; Utsunomiya, Daisuke; Yamashita, Yasuyuki [Kumamoto University, Department of Diagnostic Radiology, Graduate School of Medical Sciences, Kumamoto (Japan); Tsuji, Akinori; Urata, Joji; Furusawa, Mitsuhiro [Kumamoto City Hospital, Department of Radiology, Kumamoto (Japan)

    2017-07-15

    To retrospectively evaluate the image quality and radiation dose of 100-kVp scans with sinogram-affirmed iterative reconstruction (IR) for unenhanced head CT in adolescents. Sixty-nine patients aged 12-17 years underwent head CT under 120- (n = 34) or 100-kVp (n = 35) protocols. The 120-kVp images were reconstructed with filtered back-projection (FBP), 100-kVp images with FBP (100-kVp-F) and sinogram-affirmed IR (100-kVp-S). We compared the effective dose (ED), grey-white matter (GM-WM) contrast, image noise, and contrast-to-noise ratio (CNR) between protocols in supratentorial (ST) and posterior fossa (PS). We also assessed GM-WM contrast, image noise, sharpness, artifacts, and overall image quality on a four-point scale. ED was 46% lower with 100- than 120-kVp (p < 0.001). GM-WM contrast was higher, and image noise was lower, on 100-kVp-S than 120-kVp at ST (p < 0.001). CNR of 100-kVp-S was higher than of 120-kVp (p < 0.001). GM-WM contrast of 100-kVp-S was subjectively rated as better than of 120-kVp (p < 0.001). There were no significant differences in the other criteria between 100-kVp-S and 120-kVp (p = 0.072-0.966). The 100-kVp with sinogram-affirmed IR facilitated dramatic radiation reduction and better GM-WM contrast without increasing image noise in adolescent head CT. (orig.)

  6. Toward Generalization of Iterative Small Molecule Synthesis.

    Science.gov (United States)

    Lehmann, Jonathan W; Blair, Daniel J; Burke, Martin D

    2018-02-01

    Small molecules have extensive untapped potential to benefit society, but access to this potential is too often restricted by limitations inherent to the customized approach currently used to synthesize this class of chemical matter. In contrast, the "building block approach", i.e., generalized iterative assembly of interchangeable parts, has now proven to be a highly efficient and flexible way to construct things ranging all the way from skyscrapers to macromolecules to artificial intelligence algorithms. The structural redundancy found in many small molecules suggests that they possess a similar capacity for generalized building block-based construction. It is also encouraging that many customized iterative synthesis methods have been developed that improve access to specific classes of small molecules. There has also been substantial recent progress toward the iterative assembly of many different types of small molecules, including complex natural products, pharmaceuticals, biological probes, and materials, using common building blocks and coupling chemistry. Collectively, these advances suggest that a generalized building block approach for small molecule synthesis may be within reach.

  7. ITER CTA newsletter. No. 13, October 2002

    International Nuclear Information System (INIS)

    2002-11-01

    This ITER CTA newsletter issue comprises concise information about an ITER related meeting concerning the joint implementation of ITER - the fifth ITER Negotiations Meeting - which was held in Toronto, Canada, 19-20 September, 2002, and information about assessment of the possible ITER site in Clarington, Ontario, Canada, which was the subject of the first official stage of the Joint Assessment of Specific Sites (JASS) for the ITER Project. This assessment was completed just before the Fifth ITER Negotiations Meeting

  8. ITER cooling systems

    International Nuclear Information System (INIS)

    Natalizio, A.; Hollies, R.E.; Sochaski, R.O.; Stubley, P.H.

    1992-06-01

    The ITER reference system uses low-temperature water for heat removal and high-temperature helium for bake-out. As these systems share common equipment, bake-out cannot be performed until the cooling system is drained and dried, and the reactor cannot be started until the helium has been purged from the cooling system. This study examines the feasibility of using a single high-temperature fluid to perform both heat removal and bake-out. The high temperature required for bake-out would also be in the range for power production. The study examines cost, operational benefits, and impact on reactor safety of two options: a high-pressure water system, and a low-pressure organic system. It was concluded that the cost savings and operational benefits are significant; there are no significant adverse safety impacts from operating either the water system or the organic system; and the capital costs of both systems are comparable

  9. Iterated crowdsourcing dilemma game

    Science.gov (United States)

    Oishi, Koji; Cebrian, Manuel; Abeliuk, Andres; Masuda, Naoki

    2014-02-01

    The Internet has enabled the emergence of collective problem solving, also known as crowdsourcing, as a viable option for solving complex tasks. However, the openness of crowdsourcing presents a challenge because solutions obtained by it can be sabotaged, stolen, and manipulated at a low cost for the attacker. We extend a previously proposed crowdsourcing dilemma game to an iterated game to address this question. We enumerate pure evolutionarily stable strategies within the class of so-called reactive strategies, i.e., those depending on the last action of the opponent. Among the 4096 possible reactive strategies, we find 16 strategies each of which is stable in some parameter regions. Repeated encounters of the players can improve social welfare when the damage inflicted by an attack and the cost of attack are both small. Under the current framework, repeated interactions do not really ameliorate the crowdsourcing dilemma in a majority of the parameter space.

  10. Mao-Gilles Stabilization Algorithm

    OpenAIRE

    Jérôme Gilles

    2013-01-01

    Originally, the Mao-Gilles stabilization algorithm was designed to compensate the non-rigid deformations due to atmospheric turbulence. Given a sequence of frames affected by atmospheric turbulence, the algorithm uses a variational model combining optical flow and regularization to characterize the static observed scene. The optimization problem is solved by Bregman Iteration and the operator splitting method. The algorithm is simple, efficient, and can be easily generalized for different sce...

  11. Mao-Gilles Stabilization Algorithm

    Directory of Open Access Journals (Sweden)

    Jérôme Gilles

    2013-07-01

    Full Text Available Originally, the Mao-Gilles stabilization algorithm was designed to compensate the non-rigid deformations due to atmospheric turbulence. Given a sequence of frames affected by atmospheric turbulence, the algorithm uses a variational model combining optical flow and regularization to characterize the static observed scene. The optimization problem is solved by Bregman Iteration and the operator splitting method. The algorithm is simple, efficient, and can be easily generalized for different scenarios involving non-rigid deformations.

  12. ITER-FEAT - outline design report. Report by the ITER Director. ITER meeting, Tokyo, January 2000

    International Nuclear Information System (INIS)

    2001-01-01

    It is now possible to define the key elements of ITER-FEAT. This report provides the results, to date, of the joint work of the Special Working Group in the form of an Outline Design Report on the ITER-FEAT design which, subject to the views of ITER Council and of the Parties, will be the focus of further detailed design work and analysis in order to provide to the Parties a complete and fully integrated engineering design within the framework of the ITER EDA extension

  13. Reshaping skills policy in South Africa: structures, policies and ...

    African Journals Online (AJOL)

    Reshaping skills policy in South Africa: structures, policies and processes. ... New Agenda: South African Journal of Social and Economic Policy ... South African skills development policy since the promulgation of the Skills Development Act of 1998 has undergone a number of different iterations or attempts at accelerating ...

  14. ITER CTA newsletter. No. 7

    International Nuclear Information System (INIS)

    2002-04-01

    This issue of ITER CTA newsletter contains information about the meeting of the ITER CTA project board, which took place in Moscow, Russian Federation on 22 April 2002 on the occasion of the Third Negotiators Meeting (N3), and about the meeting 'EU divertor celebration day' organized on 16 January 2002 at Plansee AG, Reutte, Austria

  15. Pediatric 320-row cardiac computed tomography using electrocardiogram-gated model-based full iterative reconstruction

    Energy Technology Data Exchange (ETDEWEB)

    Shirota, Go; Maeda, Eriko; Namiki, Yoko; Bari, Razibul; Abe, Osamu [The University of Tokyo, Department of Radiology, Graduate School of Medicine, Tokyo (Japan); Ino, Kenji [The University of Tokyo Hospital, Imaging Center, Tokyo (Japan); Torigoe, Rumiko [Toshiba Medical Systems, Tokyo (Japan)

    2017-10-15

    Full iterative reconstruction algorithm is available, but its diagnostic quality in pediatric cardiac CT is unknown. To compare the imaging quality of two algorithms, full and hybrid iterative reconstruction, in pediatric cardiac CT. We included 49 children with congenital cardiac anomalies who underwent cardiac CT. We compared quality of images reconstructed using the two algorithms (full and hybrid iterative reconstruction) based on a 3-point scale for the delineation of the following anatomical structures: atrial septum, ventricular septum, right atrium, right ventricle, left atrium, left ventricle, main pulmonary artery, ascending aorta, aortic arch including the patent ductus arteriosus, descending aorta, right coronary artery and left main trunk. We evaluated beam-hardening artifacts from contrast-enhancement material using a 3-point scale, and we evaluated the overall image quality using a 5-point scale. We also compared image noise, signal-to-noise ratio and contrast-to-noise ratio between the algorithms. The overall image quality was significantly higher with full iterative reconstruction than with hybrid iterative reconstruction (3.67±0.79 vs. 3.31±0.89, P=0.0072). The evaluation scores for most of the gross structures were higher with full iterative reconstruction than with hybrid iterative reconstruction. There was no significant difference between full and hybrid iterative reconstruction for the presence of beam-hardening artifacts. Image noise was significantly lower in full iterative reconstruction, while signal-to-noise ratio and contrast-to-noise ratio were significantly higher in full iterative reconstruction. The diagnostic quality was superior in images with cardiac CT reconstructed with electrocardiogram-gated full iterative reconstruction. (orig.)

  16. Iterative reconstruction of volumetric particle distribution

    International Nuclear Information System (INIS)

    Wieneke, Bernhard

    2013-01-01

    For tracking the motion of illuminated particles in space and time several volumetric flow measurement techniques are available like 3D-particle tracking velocimetry (3D-PTV) recording images from typically three to four viewing directions. For higher seeding densities and the same experimental setup, tomographic PIV (Tomo-PIV) reconstructs voxel intensities using an iterative tomographic reconstruction algorithm (e.g. multiplicative algebraic reconstruction technique, MART) followed by cross-correlation of sub-volumes computing instantaneous 3D flow fields on a regular grid. A novel hybrid algorithm is proposed here that similar to MART iteratively reconstructs 3D-particle locations by comparing the recorded images with the projections calculated from the particle distribution in the volume. But like 3D-PTV, particles are represented by 3D-positions instead of voxel-based intensity blobs as in MART. Detailed knowledge of the optical transfer function and the particle image shape is mandatory, which may differ for different positions in the volume and for each camera. Using synthetic data it is shown that this method is capable of reconstructing densely seeded flows up to about 0.05 ppp with similar accuracy as Tomo-PIV. Finally the method is validated with experimental data. (paper)
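    The MART update mentioned above can be sketched on a tiny dense system; the hybrid particle-position reconstruction proposed in the paper, the optical transfer function modelling, and the camera calibration are all outside this illustration, and the system matrix below is made up for the example.

```python
import numpy as np

def mart(A, y, n_iter=50, mu=0.5):
    """Multiplicative algebraic reconstruction technique (MART): voxel
    intensities are corrected ray by ray with a multiplicative factor,
    so the reconstruction stays non-negative."""
    x = np.ones(A.shape[1])
    for _ in range(n_iter):
        for i in range(A.shape[0]):
            proj = A[i] @ x
            if proj > 0 and y[i] > 0:
                x *= (y[i] / proj) ** (mu * A[i])   # elementwise exponent per voxel weight
    return x

# Tiny hypothetical system: 4 line-of-sight pixels observing 4 voxels.
A = np.array([[1., 1., 0., 0.],
              [0., 0., 1., 1.],
              [1., 0., 1., 0.],
              [0., 1., 0., 1.]])
x_true = np.array([0.0, 2.0, 1.0, 0.0])
print(mart(A, A @ x_true))
```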

  17. Asymmetric iterative blind deconvolution of multiframe images

    Science.gov (United States)

    Biggs, David S. C.; Andrews, Mark

    1998-10-01

Imaging through a stochastically varying distorting medium, such as a turbulent atmosphere, requires multiple short-exposure frames to ensure maximum resolution of object features. Restoration methods are used to extract the common underlying object from the speckle images, and blind deconvolution techniques are required as typically there is little prior information available about either the image or the individual PSFs. A method is presented for multiframe restoration based on iterative blind deconvolution, which alternates between restoring the image and the PSF estimates. A maximum-likelihood approach is employed via the Richardson-Lucy (RL) method, which automatically ensures positivity and conservation of the total number of photons. The restoration is accelerated by applying a vector acceleration technique; the multiframe sequence is treated as a 3D volume of data and processed to produce a 3D stack of PSFs and a single 2D image of the object. The problem of convergence to an undesirable solution, such as a delta function, is addressed by weighting the number of image or PSF iterations according to how quickly each is converging; this leads to the asymmetrical nature of the algorithm. Noise artifacts are suppressed by using a dampened RL algorithm to prevent overfitting of the corrupted data. Results are presented for real single-frame and simulated multiframe speckle imaging.
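    The Richardson-Lucy update at the core of this scheme, shown here in its plain, single-frame, non-blind form (without the blind PSF alternation, vector acceleration, or damping described in the paper), is a short multiplicative iteration:

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(observed, psf, n_iter=50):
    """Plain (non-blind) Richardson-Lucy deconvolution. The multiplicative
    update preserves non-negativity and, for a normalized PSF, approximately
    conserves the total photon count."""
    estimate = np.full_like(observed, observed.mean())
    psf_mirror = psf[::-1, ::-1]
    for _ in range(n_iter):
        blurred = fftconvolve(estimate, psf, mode='same')
        ratio = observed / np.maximum(blurred, 1e-12)
        estimate *= fftconvolve(ratio, psf_mirror, mode='same')
    return estimate

# Synthetic test: blur a box image with a Gaussian PSF and restore it.
x = np.zeros((64, 64)); x[24:40, 24:40] = 1.0
g = np.exp(-((np.arange(15) - 7) ** 2) / 8.0)
psf = np.outer(g, g); psf /= psf.sum()
y = fftconvolve(x, psf, mode='same')
restored = richardson_lucy(y, psf)
```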

  18. Plasma control concepts for ITER

    International Nuclear Information System (INIS)

    Lister, J.B.; Nieswand, C.

    1997-01-01

    This overview paper skims over a wide range of issues related to the control of ITER plasmas. Although operation of the ITER project will require extensive developmental work to achieve the degree of control required, there is no indication that any of the identified problems will present overwhelming difficulties compared with the operation of present tokamaks. However, the precision of control required and the degree of automation of the final ITER plasma control system will present a challenge which is somewhat greater than for present tokamaks. In order to operate ITER optimally, integrated use of a large amount of diagnostic information will be necessary, evaluated and interpreted automatically. This will challenge both the diagnostics themselves and their supporting interpretation codes. The intervening years will provide us with the opportunity to implement and evaluate most of the new features required for ITER on existing tokamaks, with the exception of the control of an ignited plasma. (author) 7 figs., 7 refs

  19. ITER EDA Newsletter. V. 3, no. 8

    International Nuclear Information System (INIS)

    1994-08-01

    This ITER EDA (Engineering Design Activities) Newsletter issue reports on the sixth ITER council meeting; introduces the newly appointed ITER director and reports on his address to the ITER council. The vacuum tank for the ITER model coil testing, installed at JAERI, Naka, Japan is also briefly described

  20. ITER interim design report package documents

    International Nuclear Information System (INIS)

    1996-01-01

    This publication contains the Excerpt from the ITER Council (IC-8), the ITER Interim Design Report, Cost Review and Safety Analysis, ITER Site Requirements and ITER Site Design Assumptions and the Excerpt from the ITER Council (IC-9). 8 figs, 2 tabs

  1. ITER ITA newsletter. No. 8, September 2003

    International Nuclear Information System (INIS)

    2003-10-01

This issue of ITER ITA (ITER Transitional Arrangements) newsletter contains concise information about ITER related activities, including Robert Aymar's leaving ITER for CERN, ITER related issues at the IAEA General Conference, the status and prospects of thermonuclear power, and activity during the ITA on materials for vessel and in-vessel components

  2. Algorithm and Indicators of Evaluation of Agrarian Policy Efficiency in the Sphere of Food Security of the Region

    Directory of Open Access Journals (Sweden)

    Elena Nikolaevna Antamoshkina

    2015-12-01

Full Text Available The article substantiates the author's method for the analysis of regional food security and its approbation using the example of the Southern Federal District. The author's goal was to develop a comprehensive, universal method for analysing regional food security. To achieve this goal, the following steps were required: to develop a system of indicators to measure food security at the regional level; to define criteria for assessing regional food security; and to synthesize a model for the analysis and evaluation of food security in the region. The paper presents an algorithm for the phased application of the methodology for assessing regional food security. The recommended indicators and criteria for assessing regional food security are consistent with the parameters defined by the Doctrine of Food Security of Russia and take into account the requirements of the WTO. The proposed method was tested on data from the largest regions of the Southern Federal District, which allowed a comparison of the level of food security in the regions from a spatial and temporal perspective. The comparison was made on the basis of an integrated assessment of the level of food security in the regions. The theoretical significance lies in complementing research models and tools for the analysis of food security at the regional level by justifying indicators and criteria for assessing regional food security under the conditions of Russia's participation in the WTO. The practical significance lies in the development and testing of the proposed method based on an assessment of food security indicators in the Southern Federal District.

  3. NITSOL: A Newton iterative solver for nonlinear systems

    Energy Technology Data Exchange (ETDEWEB)

    Pernice, M. [Univ. of Utah, Salt Lake City, UT (United States); Walker, H.F. [Utah State Univ., Logan, UT (United States)

    1996-12-31

Newton iterative methods, also known as truncated Newton methods, are implementations of Newton's method in which the linear systems that characterize Newton steps are solved approximately using iterative linear algebra methods. Here, we outline a well-developed Newton iterative algorithm together with a Fortran implementation called NITSOL. The basic algorithm is an inexact Newton method globalized by backtracking, in which each initial trial step is determined by applying an iterative linear solver until an inexact Newton criterion is satisfied. In the implementation, the user can specify inexact Newton criteria in several ways and select an iterative linear solver from among several popular "transpose-free" Krylov subspace methods. Jacobian-vector products used by the Krylov solver can be either evaluated analytically with a user-supplied routine or approximated using finite differences of function values. A flexible interface permits a wide variety of preconditioning strategies and allows the user to define a preconditioner and optionally update it periodically. We give details of these and other features and demonstrate the performance of the implementation on a representative set of test problems.
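    A stripped-down Newton-Krylov iteration in the spirit of NITSOL might look like the sketch below; it uses finite-difference Jacobian-vector products and a loose inner GMRES solve, but omits NITSOL's backtracking globalization, forcing-term strategies, and preconditioning interface. The test function is hypothetical.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def inexact_newton(F, x0, tol=1e-10, max_iter=50):
    """Inexact Newton sketch: each Newton step is solved approximately by
    GMRES, with Jacobian-vector products approximated by finite differences."""
    x = x0.astype(float)
    for _ in range(max_iter):
        f = F(x)
        if np.linalg.norm(f) < tol:
            break
        eps = 1e-7 * (1.0 + np.linalg.norm(x))
        Jv = LinearOperator(
            (x.size, x.size),
            matvec=lambda v: (F(x + eps * v) - f) / eps)   # J(x) v by finite differences
        step, _ = gmres(Jv, -f, maxiter=20)                # loose inner solve stands in
        x = x + step                                       # for the forcing-term tolerance
    return x

# Example: a small nonlinear system F(x) = 0 (hypothetical test problem).
F = lambda x: np.array([x[0]**2 + x[1]**2 - 1.0, x[0] - x[1]])
print(inexact_newton(F, np.array([1.0, 0.0])))
```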

  4. Totally parallel multilevel algorithms

    Science.gov (United States)

    Frederickson, Paul O.

    1988-01-01

Four totally parallel algorithms for the solution of a sparse linear system have common characteristics which become quite apparent when they are implemented on a highly parallel hypercube such as the CM2. These four algorithms are Parallel Superconvergent Multigrid (PSMG) of Frederickson and McBryan, Robust Multigrid (RMG) of Hackbusch, the FFT-based Spectral Algorithm, and Parallel Cyclic Reduction. In fact, all four can be formulated as particular cases of the same totally parallel multilevel algorithm, which is referred to as TPMA. In certain cases the spectral radius of TPMA is zero, and it is recognized to be a direct algorithm. In many other cases the spectral radius, although not zero, is small enough that a single iteration per timestep keeps the local error within the required tolerance.

  5. Generalized phase retrieval algorithm based on information measures

    OpenAIRE

    Shioya, Hiroyuki; Gohara, Kazutoshi

    2006-01-01

    An iterative phase retrieval algorithm based on the maximum entropy method (MEM) is presented. Introducing a new generalized information measure, we derive a novel class of algorithms which includes the conventionally used error reduction algorithm and a MEM-type iterative algorithm which is presented for the first time. These different phase retrieval methods are unified on the basis of the framework of information measures used in information theory.
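    The conventional error-reduction algorithm that this work generalizes alternates between the measured Fourier modulus and an object-space support constraint; a minimal sketch (with a synthetic object and support chosen only for illustration, and without the information-measure generalization of the paper) is:

```python
import numpy as np

def error_reduction(fourier_magnitude, support, n_iter=200, seed=0):
    """Classical error-reduction phase retrieval: impose the measured Fourier
    modulus, then impose a real, non-negative support constraint in object space."""
    rng = np.random.default_rng(seed)
    g = rng.random(fourier_magnitude.shape) * support
    for _ in range(n_iter):
        G = np.fft.fft2(g)
        G = fourier_magnitude * np.exp(1j * np.angle(G))    # keep phase, replace modulus
        g = np.real(np.fft.ifft2(G))
        g = np.where((support > 0) & (g > 0), g, 0.0)       # support and positivity
    return g

# Synthetic test: recover a small object from its Fourier modulus.
obj = np.zeros((64, 64)); obj[20:30, 25:40] = 1.0
support = np.zeros_like(obj); support[16:34, 21:44] = 1.0
rec = error_reduction(np.abs(np.fft.fft2(obj)), support)
```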

  6. Algorithmic properties of the midpoint predictor-corrector time integrator.

    Energy Technology Data Exchange (ETDEWEB)

    Rider, William J.; Love, Edward; Scovazzi, Guglielmo

    2009-03-01

    Algorithmic properties of the midpoint predictor-corrector time integration algorithm are examined. In the case of a finite number of iterations, the errors in angular momentum conservation and incremental objectivity are controlled by the number of iterations performed. Exact angular momentum conservation and exact incremental objectivity are achieved in the limit of an infinite number of iterations. A complete stability and dispersion analysis of the linearized algorithm is detailed. The main observation is that stability depends critically on the number of iterations performed.
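    A minimal version of the kind of integrator being analyzed, assuming the implicit midpoint rule is solved by a fixed number of corrector (fixed-point) passes per step, is sketched below on a harmonic oscillator; the finite-element, angular-momentum and objectivity aspects of the paper are not represented.

```python
import numpy as np

def midpoint_pc(f, x0, dt, n_steps, n_corrections=3):
    """Implicit midpoint rule solved by a fixed number of predictor-corrector
    (fixed-point) iterations per step; accuracy of the implicit solve improves
    as the number of corrector passes grows."""
    x = np.asarray(x0, dtype=float)
    traj = [x.copy()]
    for n in range(n_steps):
        t_mid = (n + 0.5) * dt
        x_new = x + dt * f(t_mid, x)                         # predictor (explicit step)
        for _ in range(n_corrections):
            x_new = x + dt * f(t_mid, 0.5 * (x + x_new))     # corrector passes
        x = x_new
        traj.append(x.copy())
    return np.array(traj)

# Harmonic oscillator x'' = -x written as a first-order system.
f = lambda t, z: np.array([z[1], -z[0]])
traj = midpoint_pc(f, [1.0, 0.0], dt=0.1, n_steps=100)
print(traj[-1], 0.5 * (traj[-1] ** 2).sum())   # energy stays close to 0.5
```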

  7. Acceleration of iterative tomographic reconstruction using graphics processors

    International Nuclear Information System (INIS)

    Belzunce, M.A.; Osorio, A.; Verrastro, C.A.

    2009-01-01

Using iterative algorithms for image reconstruction in 3D positron emission tomography has been shown to produce images with better quality than analytical methods. However, these algorithms are computationally expensive. New graphics processing units (GPUs) provide high performance at low cost, as well as programming tools that make it possible to execute parallel algorithms easily in scientific applications. In this work, we try to accelerate image reconstruction algorithms in 3D PET by using a GPU. A parallel implementation of the ML-EM 3D algorithm was developed using the Siddon algorithm as projector and back-projector. Results show that accelerations of more than one order of magnitude can be achieved while keeping similar image quality. (author)
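    The ML-EM update being parallelized can be written in a few lines for a small dense system matrix; the Siddon ray-tracing projector and the GPU kernels of the paper are replaced here by an explicit, made-up matrix.

```python
import numpy as np

def mlem(A, y, n_iter=100):
    """ML-EM for emission tomography: A is the system (projection) matrix and
    y the measured counts; the multiplicative update keeps the image non-negative."""
    x = np.ones(A.shape[1])
    sens = A.T @ np.ones(A.shape[0])                    # sensitivity image
    for _ in range(n_iter):
        proj = A @ x                                    # forward projection
        ratio = y / np.maximum(proj, 1e-12)
        x *= (A.T @ ratio) / np.maximum(sens, 1e-12)    # back-project and normalize
    return x

# Tiny hypothetical system: 4 detector bins, 3 image voxels.
A = np.array([[1.0, 0.5, 0.0],
              [0.0, 1.0, 0.5],
              [0.5, 0.0, 1.0],
              [0.3, 0.3, 0.3]])
x_true = np.array([2.0, 1.0, 3.0])
print(mlem(A, A @ x_true))
```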

  8. Maximum Likelihood-Based Iterated Divided Difference Filter for Nonlinear Systems from Discrete Noisy Measurements

    Science.gov (United States)

    Wang, Changyuan; Zhang, Jing; Mu, Jing

    2012-01-01

    A new filter named the maximum likelihood-based iterated divided difference filter (MLIDDF) is developed to improve the low state estimation accuracy of nonlinear state estimation due to large initial estimation errors and nonlinearity of measurement equations. The MLIDDF algorithm is derivative-free and implemented only by calculating the functional evaluations. The MLIDDF algorithm involves the use of the iteration measurement update and the current measurement, and the iteration termination criterion based on maximum likelihood is introduced in the measurement update step, so the MLIDDF is guaranteed to produce a sequence estimate that moves up the maximum likelihood surface. In a simulation, its performance is compared against that of the unscented Kalman filter (UKF), divided difference filter (DDF), iterated unscented Kalman filter (IUKF) and iterated divided difference filter (IDDF) both using a traditional iteration strategy. Simulation results demonstrate that the accumulated mean-square root error for the MLIDDF algorithm in position is reduced by 63% compared to that of UKF and DDF algorithms, and by 7% compared to that of IUKF and IDDF algorithms. The new algorithm thus has better state estimation accuracy and a fast convergence rate. PMID:23012525

  9. Distributed 3-D iterative reconstruction for quantitative SPECT

    International Nuclear Information System (INIS)

    Ju, Z.W.; Frey, E.C.; Tsui, B.M.W.

    1995-01-01

The authors describe a distributed three-dimensional (3-D) iterative reconstruction library for quantitative single-photon emission computed tomography (SPECT). This library includes 3-D projector-backprojector pairs (PBPs) and distributed 3-D iterative reconstruction algorithms. The 3-D PBPs accurately and efficiently model various combinations of the image degrading factors, including attenuation, detector response and scatter response. These PBPs were validated by comparing projection data computed using the projectors with that from direct Monte Carlo (MC) simulations. The distributed 3-D iterative algorithms spread the projection-backprojection operations for all the projection angles over a heterogeneous network of single or multi-processor computers to reduce the reconstruction time. Based on a master/slave paradigm, these distributed algorithms provide dynamic load balancing and fault tolerance. The distributed algorithms were verified by comparing images reconstructed using both the distributed and non-distributed algorithms. Computation times for distributed 3-D reconstructions running on up to 4 identical processors were reduced by a factor of approximately 80-90% times the number of processors participating, compared to those for non-distributed 3-D reconstructions running on a single processor. When combined with faster affordable computers, this library provides an efficient means for implementing accurate reconstruction and compensation methods to improve quality and quantitative accuracy in SPECT images

  10. Coordinated Active Power Dispatch for a Microgrid via Distributed Lambda Iteration

    DEFF Research Database (Denmark)

    Hu, Jianqiang; Z. Q. Chen, Michael; Cao, Jinde

    2017-01-01

A novel distributed optimal dispatch algorithm is proposed for coordinating the operation of multiple micro units in a microgrid, which has incorporated the distributed consensus algorithm in multi-agent systems and the λ-iteration optimization algorithm in economic dispatch of power systems...... problem. On the other hand, the proposed optimization algorithm can either be used for off-line calculation or be utilized for on-line operation and has the ability to survive single-point failures and shows good robustness in the iteration process. Numerical studies in a seven bus microgrid demonstrate...
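    The centralized λ-iteration that underlies the distributed scheme can be sketched as a bisection on the common incremental cost; the consensus-based information exchange of the paper is not reproduced, and the generator cost coefficients below are hypothetical.

```python
import numpy as np

def lambda_iteration(a, b, pmin, pmax, demand, tol=1e-6):
    """Centralized lambda-iteration for economic dispatch with quadratic costs
    C_i(P) = a_i P^2 + b_i P: bisect on the incremental cost lambda until the
    clipped optimal outputs sum to the demand."""
    lo, hi = 0.0, 10.0 * (b.max() + 2 * a.max() * pmax.max())
    while hi - lo > tol:
        lam = 0.5 * (lo + hi)
        p = np.clip((lam - b) / (2 * a), pmin, pmax)   # each unit's output at this lambda
        if p.sum() < demand:
            lo = lam          # total generation too low -> raise the incremental cost
        else:
            hi = lam
    return lam, p

# Three hypothetical generating units.
a = np.array([0.008, 0.010, 0.012])
b = np.array([8.0, 9.0, 7.5])
pmin = np.array([10.0, 10.0, 10.0]); pmax = np.array([100.0, 80.0, 120.0])
print(lambda_iteration(a, b, pmin, pmax, demand=220.0))
```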

  11. PARALLEL ITERATIVE RECONSTRUCTION OF PHANTOM CATPHAN ON EXPERIMENTAL DATA

    Directory of Open Access Journals (Sweden)

    M. A. Mirzavand

    2016-01-01

Full Text Available The principles of fast parallel iterative algorithms based on the use of graphics accelerators and the OpenGL library are considered in the paper. The proposed approach provides simultaneous minimization of the residuals of the desired solution and of the total variation of the reconstructed three-dimensional image. The number of necessary input data, i.e. cone-beam X-ray projections, can be reduced several times, which allows a corresponding reduction of the radiation exposure to the patient while maintaining the necessary contrast and spatial resolution of the three-dimensional image. The heuristic iterative algorithm can be used as an alternative to the well-known three-dimensional Feldkamp algorithm.

  12. K-means Clustering: Lloyd's algorithm

    Indian Academy of Sciences (India)

K-means Clustering: Lloyd's algorithm. Refines clusters iteratively: cluster points using the Voronoi partitioning of the centers; the centroids of the clusters determine the new centers. Bad example: k = 3, n = 4.
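    A compact version of the Lloyd iteration summarized above (random initial centers, Voronoi assignment, centroid update) could read as follows; the data set and k are arbitrary illustrations.

```python
import numpy as np

def lloyd_kmeans(points, k, n_iter=100, seed=0):
    """Lloyd's algorithm: assign points to the nearest center (Voronoi
    partition), then move each center to the centroid of its cluster."""
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), k, replace=False)]
    for _ in range(n_iter):
        d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)                       # Voronoi assignment
        new_centers = np.array([points[labels == j].mean(axis=0)
                                if np.any(labels == j) else centers[j]
                                for j in range(k)])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return centers, labels

pts = np.vstack([np.random.default_rng(1).normal(loc, 0.2, (30, 2))
                 for loc in ([0, 0], [3, 3], [0, 3])])
print(lloyd_kmeans(pts, k=3)[0])
```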

  13. Inexact Bregman iteration with an application to Poisson data reconstruction

    Science.gov (United States)

    Benfenati, A.; Ruggiero, V.

    2013-06-01

    This work deals with the solution of image restoration problems by an iterative regularization method based on the Bregman iteration. Any iteration of this scheme requires the exact computation of the minimizer of a function. However, in some image reconstruction applications, it is either impossible or extremely expensive to obtain exact solutions of these subproblems. In this paper, we propose an inexact version of the iterative procedure, where the inexactness in the inner subproblem solution is controlled by a criterion that preserves the convergence of the Bregman iteration and its features in image restoration problems. In particular, the method allows us to obtain accurate reconstructions also when only an overestimation of the regularization parameter is known. The introduction of the inexactness in the iterative scheme allows us to address image reconstruction problems from data corrupted by Poisson noise, exploiting the recent advances about specialized algorithms for the numerical minimization of the generalized Kullback-Leibler divergence combined with a regularization term. The results of several numerical experiments enable us to evaluate the proposed scheme for image deblurring or denoising in the presence of Poisson noise.

  14. The ITER cryostat

    International Nuclear Information System (INIS)

    Bourque, R.F.; Wykes, M.E.P.

    1995-01-01

    The ITER cryostat is the vacuum chamber containing the tokamak reactor. Its functions are (1) to provide a high vacuum environment to limit thermal loads to the superconducting magnet system by gas conduction and convection; (2) to be part of the second radioactivity confinement boundary; and (3) provide passive removal of decay heat for beyond design basis accidents. A separate thermal shield along the inside wall limits thermal radiation to the coils. An external concrete shield provides radiological protection. The cryostat consists of a cylindrical section bolted to torispherical heads at top and bottom. The vessel is made up of two concentric walls connected by horizontal and vertical ribs. The space between the walls can be filled with helium gas at slightly above one atmosphere for thermal coupling of the two walls, to block inbound air microleaks, and for leak detection. The cryostat has many penetrations, some as large as four meters diameter, providing various types of access from the outside to the tokamak. These include heat transport system cooling pipes, cryogenic feeds, auxiliary heating, diagnostics, and blanket and divertor removal ports. Large bellows are used between the cryostat and the tokamak to accommodate differential thermal expansion

  15. ITER plasma facing components

    International Nuclear Information System (INIS)

    Kuroda, T.; Vieider, G.; Akiba, M.

    1991-01-01

    This document summarizes results of the Conceptual Design Activities (1988-1990) for the International Thermonuclear Experimental Reactor (ITER) project, namely those that pertain to the plasma facing components of the reactor vessel, of which the main components are the first wall and the divertor plates. After an introduction and an executive summary, the principal functions of the plasma-facing components are delineated, i.e., (i) define the low-impurity region within which the plasma is produced, (ii) absorb the electromagnetic radiation and charged-particle flux from the plasma, and (iii) protect the blanket/shield components from the plasma. A list of critical design issues for the divertor plates and the first wall is given, followed by discussions of the divertor plate design (including the issues of material selection, erosion lifetime, design concepts, thermal and mechanical analysis, operating limits and overall lifetime, tritium inventory, baking and conditioning, safety analysis, manufacture and testing, and advanced divertor concepts) and the first wall design (armor material and design, erosion lifetime, overall design concepts, thermal and mechanical analysis, lifetime and operating limits, tritium inventory, baking and conditioning, safety analysis, manufacture and testing, an alternative first wall design, and the limiters used instead of the divertor plates during start-up). Refs, figs and tabs

  16. ITER cooling system

    International Nuclear Information System (INIS)

    Kveton, O.K.

    1990-11-01

    The present specification of the ITER cooling system does not permit its operation with water above 150 C. However, the first wall needs to be heated to higher temperatures during conditioning at 250 C and bake-out at 350 C. In order to use the cooling water for these operations the cooling system would have to operate during conditioning at 37 Bar and during bake-out at 164 Bar. This is undesirable from the safety analysis point of view, and alternative heating methods are to be found. This review suggests that superheated steam or gas heating can be used for both baking and conditioning. The blanket design must consider the use of dual heat transfer media, allowing for change from one to another in both directions. Transfer from water to gas or steam is the most intricate and risky part of the entire heating process. Superheated steam conditioning appears unfavorable. The use of inert gas is recommended, although alternative heating fluids such as organic coolant should be investigated

  17. ITER EDA newsletter. V. 9, no. 2

    International Nuclear Information System (INIS)

    2000-02-01

    This ITER EDA Newsletter reports on the seventh ITER technical meeting on safety and environment and contains the executive summary of the eleventh ITER scrape-off layer and divertor physics expert group meeting. Individual abstracts have been prepared

  18. ITER EDA newsletter. V. 7, no. 6

    International Nuclear Information System (INIS)

    1998-06-01

    This newsletter contains the articles: 'ITER representation at the 11th Pacific Basin Nuclear Conference', 'Summary of discussion points and further deliberations in the special committee on the ITER project in the Atomic Energy Commission', and 'ITER radio frequency systems'

  19. ITER safety challenges and opportunities

    International Nuclear Information System (INIS)

    Piet, S.J.

    1992-01-01

    This paper reports on results of the Conceptual Design Activity (CDA) for the International Thermonuclear Experimental Reactor (ITER) that suggest challenges and opportunities. ITER is capable of meeting anticipated regulatory dose limits, but proof is difficult because of large radioactive inventories needing stringent radioactivity confinement. Much research and development (R&D) and design analysis is needed to establish that ITER meets regulatory requirements. There is a further opportunity to do more to prove more of fusion's potential safety and environmental advantages and maximize the amount of ITER technology on the path toward fusion power plants. To fulfill these tasks, three programmatic challenges and three technical challenges must be overcome. The first step is to fund a comprehensive safety and environmental ITER R&D plan. Second is to strengthen safety and environment work and personnel in the international team. Third is to establish an external consultant group to advise the ITER Joint Team on designing ITER to meet safety requirements for siting by any of the Parties. The first of three key technical challenges is plasma engineering - burn control, plasma shutdown, disruptions, tritium burn fraction, and steady state operation. The second is the divertor, including tritium inventory, activation hazards, chemical reactions, and coolant disturbances. The third technical challenge is optimization of design requirements considering safety risk, technical risk, and cost.

  20. Sparse electromagnetic imaging using nonlinear iterative shrinkage thresholding

    KAUST Repository

    Desmal, Abdulla

    2015-04-13

    A sparse nonlinear electromagnetic imaging scheme is proposed for reconstructing the dielectric contrast of investigation domains from measured fields. The proposed approach constructs the optimization problem by adding a sparsity constraint to the data misfit between the measured fields and the scattered fields, which are expressed as a nonlinear function of the contrast, and solves it using the nonlinear iterative shrinkage thresholding algorithm. The thresholding is applied to the result of every nonlinear Landweber iteration to enforce the sparsity constraint. Numerical results demonstrate the accuracy and efficiency of the proposed method in reconstructing sparse dielectric profiles.
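
    As a minimal illustration of thresholded Landweber iterations, the sketch below applies the same gradient-step-plus-soft-threshold structure to a toy linear forward operator standing in for the nonlinear scattering operator of the paper; the operator, step size and threshold value are illustrative assumptions only.

```python
import numpy as np

def soft_threshold(x, tau):
    """Soft-thresholding operator that enforces the sparsity constraint."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def thresholded_landweber(A, y, step, tau, n_iter=300):
    """Landweber (gradient) updates on the data misfit, each followed by soft
    thresholding. In the nonlinear setting of the paper, A.T would be replaced
    by the adjoint of the Frechet derivative at the current iterate."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        residual = y - A @ x              # data misfit
        x = x + step * (A.T @ residual)   # Landweber step
        x = soft_threshold(x, tau)        # sparsity-enforcing shrinkage
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((60, 200))                 # toy measurement operator
    x_true = np.zeros(200)
    x_true[[5, 50, 120]] = [1.0, -2.0, 1.5]            # sparse "contrast" profile
    y = A @ x_true + 0.01 * rng.standard_normal(60)
    step = 1.0 / np.linalg.norm(A, 2) ** 2             # keeps the Landweber step convergent
    x_rec = thresholded_landweber(A, y, step, tau=0.01)
    print("recovered support:", np.flatnonzero(np.abs(x_rec) > 0.1))
```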

  1. Normalization of the collage regions of iterated function systems

    Science.gov (United States)

    Zhang, Zhengbing; Zhang, Wei

    2012-11-01

    Fractal graphics, generated with iterated function systems (IFS), have been applied in broad areas. Since the collage regions of different IFS may be different, it is difficult to show the attractors of different iterated function systems in the same region on a computer screen using one program without modifying the display parameters. An algorithm is proposed in this paper to solve this problem. A set of transforms is repeatedly applied to modify the coefficients of the IFS so that the collage region of the resulting IFS changes toward the unit square. Experimental results demonstrate that the collage region of any IFS can be normalized to the unit square with the proposed method.

  2. ITER EDA Newsletter. V. 10, no. 7

    International Nuclear Information System (INIS)

    2001-07-01

    This ITER EDA Newsletter presents an overview of meetings held at IAEA Headquarters in Vienna during the week 16-20 July 2001 related to the successful completion of the ITER Engineering Design Activities (EDA). Among them were the final meeting of the ITER Council, the closing ceremony to commemorate the EDA completion, the final meeting of the ITER Management Advisory Committee, a briefing of issues related to ITER developments, and discussions on the possible joint implementation of ITER

  3. Remote maintenance development for ITER

    Energy Technology Data Exchange (ETDEWEB)

    Tada, Eisuke [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment; Shibanuma, Kiyoshi

    1998-04-01

    This paper describes the overall ITER remote maintenance design concept developed mainly for in-vessel components such as divertors and blankets, and outlines the ITER R and D program to develop remote handling equipment and radiation hard components. Reactor structures inside the ITER cryostat must be maintained remotely due to DT operation, making remote handling technology basic to reactor design. The overall maintenance scenario and design concepts have been developed, and maintenance design feasibility, including fabrication and testing of full-scale in-vessel remote maintenance handling equipment and tools, is being verified. (author)

  4. Iterative nonlinear unfolding code: TWOGO

    International Nuclear Information System (INIS)

    Hajnal, F.

    1981-03-01

    A new iterative unfolding code, TWOGO, was developed to analyze Bonner sphere neutron measurements. The code includes two different unfolding schemes which alternate on successive iterations. The iterative process can be terminated either when the ratio of the coefficients of variation of the measured and calculated responses is unity, or when the percentage difference between the measured and evaluated sphere responses is less than the average measurement error. The code was extensively tested with various known spectra and real multisphere neutron measurements which were performed inside the containments of pressurized water reactors

  5. Shading correction assisted iterative cone-beam CT reconstruction

    Science.gov (United States)

    Yang, Chunlin; Wu, Pengwei; Gong, Shutao; Wang, Jing; Lyu, Qihui; Tang, Xiangyang; Niu, Tianye

    2017-11-01

    Recent advances in total variation (TV) technology enable accurate CT image reconstruction from highly under-sampled and noisy projection data. The standard iterative reconstruction algorithms, which work well in conventional CT imaging, fail to perform as expected in cone beam CT (CBCT) applications, wherein the non-ideal physics issues, including scatter and beam hardening, are more severe. These physics issues result in large areas of shading artifacts and cause deterioration to the piecewise constant property assumed in reconstructed images. To overcome this obstacle, we incorporate a shading correction scheme into low-dose CBCT reconstruction and propose a clinically acceptable and stable three-dimensional iterative reconstruction method that is referred to as the shading correction assisted iterative reconstruction. In the proposed method, we modify the TV regularization term by adding a shading compensation image to the reconstructed image to compensate for the shading artifacts while leaving the data fidelity term intact. This compensation image is generated empirically, using image segmentation and low-pass filtering, and updated in the iterative process whenever necessary. When the compensation image is determined, the objective function is minimized using the fast iterative shrinkage-thresholding algorithm accelerated on a graphic processing unit. The proposed method is evaluated using CBCT projection data of the Catphan© 600 phantom and two pelvis patients. Compared with the iterative reconstruction without shading correction, the proposed method reduces the overall CT number error from around 200 HU to around 25 HU and increases the spatial uniformity by a factor of 20 percent, given the same number of sparsely sampled projections. A clinically acceptable and stable iterative reconstruction algorithm for CBCT is proposed in this paper. Differing from the existing algorithms, this algorithm incorporates a shading correction scheme into the low-dose CBCT reconstruction.

  6. Error bounds from extra precise iterative refinement

    Energy Technology Data Exchange (ETDEWEB)

    Demmel, James; Hida, Yozo; Kahan, William; Li, Xiaoye S.; Mukherjee, Soni; Riedy, E. Jason

    2005-02-07

    We present the design and testing of an algorithm for iterative refinement of the solution of linear equations, where the residual is computed with extra precision. This algorithm was originally proposed in the 1960s [6, 22] as a means to compute very accurate solutions to all but the most ill-conditioned linear systems of equations. However two obstacles have until now prevented its adoption in standard subroutine libraries like LAPACK: (1) There was no standard way to access the higher precision arithmetic needed to compute residuals, and (2) it was unclear how to compute a reliable error bound for the computed solution. The completion of the new BLAS Technical Forum Standard [5] has recently removed the first obstacle. To overcome the second obstacle, we show how a single application of iterative refinement can be used to compute an error bound in any norm at small cost, and use this to compute both an error bound in the usual infinity norm, and a componentwise relative error bound. We report extensive test results on over 6.2 million matrices of dimension 5, 10, 100, and 1000. As long as a normwise (resp. componentwise) condition number computed by the algorithm is less than 1/(max{10, √n}·ε_w), the computed normwise (resp. componentwise) error bound is at most 2·max{10, √n}·ε_w, and indeed bounds the true error. Here, n is the matrix dimension and ε_w is the single precision roundoff error. For worse conditioned problems, we get similarly small correct error bounds in over 89.4% of cases.
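
    A minimal sketch of the basic idea, assuming a single-precision LU factorization and a double-precision residual (the routine described in the abstract additionally returns the normwise and componentwise error bounds, which are omitted here):

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def refine(A32, b32, n_steps=3):
    """Solve A x = b in single precision, then improve x by iterative
    refinement with the residual accumulated in double precision."""
    lu, piv = lu_factor(A32)                       # factor once, in working precision
    x = lu_solve((lu, piv), b32)
    A64, b64 = A32.astype(np.float64), b32.astype(np.float64)
    for _ in range(n_steps):
        r = b64 - A64 @ x.astype(np.float64)       # extra-precise residual
        dx = lu_solve((lu, piv), r.astype(np.float32))
        x = (x.astype(np.float64) + dx).astype(np.float32)
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    A = rng.standard_normal((100, 100)).astype(np.float32)
    x_true = rng.standard_normal(100).astype(np.float32)
    b = (A @ x_true).astype(np.float32)
    x = refine(A, b)
    print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```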

  7. ITER containment structures

    International Nuclear Information System (INIS)

    Sadakov, S.; Fauser, F.; Nelson, B.

    1991-01-01

    This document describes the results and recommendations of the Containment Structures Design Unit (CSDU) on the containment structures for ITER, made in the context of the Conceptual Design Phase. The document describes the following subsystems: (1) the primary vacuum vessel (VV), (2) the attaching locks (AL) of the invessel components, (3) the plasma passive and active stabilizers, (4) the cryostat vessel, and (5) the machine gravity supports. Although for most components reference designs were selected, for some of these alternative design options were described, because unresolved problems necessitate further research and development. Conclusions and future needs are summarized for each of the above subsystems: (1) a reference VV design was selected, while most critical VV future needs are the feasibility studies of manufacturing, assembly, and the repair/disassembly/reassembly by remote handling. Alternative, thin-wall options appear attractive and should be studied further during the Engineering Design Activities; (2) no reference design solution was selected for the AL system, as AL design requirements are extremely difficult and internally contradictory, while there is no existing tokamak precedent, but instead, five different approaches will be further researched early in the Engineering Design Phase; (3) significant progress is reported on passive loops, for which the ''twin-loops'' concept is ready to be advanced into the Engineering Design Phase, and on active coils, where a new coil positioning prevents interference with the blanket removal paths, and the current joints are located in a secondary vacuum or in the atmosphere of the reactor hall, repairable by remote handling; (4) a full metallic welded cryostat design with increased toroidal resistance was chosen, but with a design based on concrete with a thin inner metallic liner as a back-up in case detailed nuclear shielding requirements would force the cryostat to act as biological shield; (5) out

  8. Cooperation between CERN and ITER

    CERN Multimedia

    2008-01-01

    CERN and the International Fusion Organisation ITER have just signed a first cooperation agreement. Kaname Ikeda, the Director-General of the International Fusion Energy Organisation (ITER) (on the right) and Robert Aymar, Director-General of CERN, signing the agreement. The Director-General of the International Fusion Energy Organization, Mr Kaname Ikeda, and CERN Director-General, Robert Aymar, signed a cooperation agreement at a meeting on the Meyrin site on Thursday 6 March. One of the main purposes of this agreement is for CERN to give ITER the benefit of its experience in the field of technology as well as in administrative domains such as finance, procurement, human resources and informatics through the provision of consultancy services. Currently in its start-up phase at its Cadarache site, 70 km from Marseilles (France), ITER will focus its research on the scientific and technical feasibility of using fusion energy as a fu...

  9. ITER Conceptual design: Interim report

    International Nuclear Information System (INIS)

    1990-01-01

    This interim report describes the results of the International Thermonuclear Experimental Reactor (ITER) Conceptual Design Activities after the first year of design following the selection of the ITER concept in the autumn of 1988. Using the concept definition as the basis for conceptual design, the Design Phase has been underway since October 1988, and will be completed at the end of 1990, at which time a final report will be issued. This interim report includes an executive summary of ITER activities, a description of the ITER device and facility, an operation and research program summary, and a description of the physics and engineering design bases. Included are preliminary cost estimates and schedule for completion of the project

  10. Low Complexity V-BLAST MIMO-OFDM Detector by Successive Iterations Reduction

    Directory of Open Access Journals (Sweden)

    AHMED, K.

    2015-02-01

    Full Text Available The V-BLAST detection method suffers from large computational complexity due to its successive detection of symbols. In this paper, we propose a modified V-BLAST algorithm to decrease the computational complexity by reducing the number of detection iterations required in MIMO communication systems. We begin by showing the existence of a maximum number of iterations, beyond which no significant improvement is obtained. We establish a criterion for the maximum number of effective iterations. We propose a modified algorithm that uses the measured SNR to dynamically set the number of iterations to achieve an acceptable bit-error rate. Then, we replace the feedback algorithm with an approximate linear function to reduce the complexity. Simulations show that a significant reduction in computational complexity is achieved compared to the ordinary V-BLAST, while maintaining a good BER performance.
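
    For reference, a plain zero-forcing V-BLAST detector with successive interference cancellation is sketched below; the max_layers argument only caps the number of detection iterations and is a simplified stand-in for the SNR-driven rule proposed in the paper, and the constellation, channel and noise level are illustrative assumptions.

```python
import numpy as np

def vblast_zf_sic(H, y, constellation, max_layers=None):
    """Zero-forcing V-BLAST: at each iteration detect the strongest remaining
    layer, slice it, and cancel its contribution. max_layers caps the number of
    detection iterations (any remaining layers are simply left undetected)."""
    y = y.copy()
    n_tx = H.shape[1]
    remaining = list(range(n_tx))
    x_hat = np.zeros(n_tx, dtype=complex)
    n_iter = n_tx if max_layers is None else min(max_layers, n_tx)
    for _ in range(n_iter):
        W = np.linalg.pinv(H[:, remaining])                 # ZF nulling matrix
        k = int(np.argmin(np.sum(np.abs(W) ** 2, axis=1)))  # layer with best post-detection SNR
        z = W[k] @ y                                         # nulled estimate of that layer
        s = constellation[np.argmin(np.abs(constellation - z))]  # slice to nearest symbol
        col = remaining[k]
        x_hat[col] = s
        y = y - H[:, col] * s                                # successive interference cancellation
        remaining.pop(k)
    return x_hat

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
    H = (rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))) / np.sqrt(2)
    x = qpsk[rng.integers(0, 4, size=4)]
    y = H @ x + 0.01 * (rng.standard_normal(4) + 1j * rng.standard_normal(4))
    print("all symbols recovered:", np.allclose(vblast_zf_sic(H, y, qpsk), x))
```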

  11. ITER leader to head CERN

    CERN Document Server

    Feder, Toni

    2003-01-01

    After successfully chairing an external review committee for CERN last year, Robert Aymar will leave ITER to become director general of the European particle physics laboratory from 2004. Before ITER he also successfully managed the startup of Tore Supra. He will attempt to ensure that the LHC begins operating in 2007 - two years late - and is paid for by 2010, and will also start the planning for life after the LHC (1 page)

  12. The ITER reduced cost design

    International Nuclear Information System (INIS)

    Aymar, R.

    2000-01-01

    Six years of joint work under the international thermonuclear experimental reactor (ITER) EDA agreement yielded a mature design for ITER which met the objectives set for it (ITER final design report (FDR)), together with a corpus of scientific and technological data, large/full scale models or prototypes of key components/systems and progress in understanding which both validated the specific design and are generally applicable to a next step, reactor-oriented tokamak on the road to the development of fusion as an energy source. In response to requests from the parties to explore the scope for addressing ITER's programmatic objective at reduced cost, the study of options for cost reduction has been the main feature of ITER work since summer 1998, using the advances in physics and technology databases, understandings, and tools arising out of the ITER collaboration to date. A joint concept improvement task force drawn from the joint central team and home teams has overseen and co-ordinated studies of the key issues in physics and technology which control the possibility of reducing the overall investment and simultaneously achieving the required objectives. The aim of this task force is to achieve common understandings of these issues and their consequences so as to inform and to influence the best cost-benefit choice, which will attract consensus between the ITER partners. A report to be submitted to the parties by the end of 1999 will present key elements of a specific design of minimum capital investment, with a target cost saving of about 50% of the cost of the ITER FDR design, and a restricted number of design variants. Outline conclusions from the work of the task force are presented in terms of physics, operations, and design of the main tokamak systems. Possible implications for the way forward are discussed

  13. ITER diagnostic system: Vacuum interface

    International Nuclear Information System (INIS)

    Patel, K.M.; Udintsev, V.S.; Hughes, S.; Walker, C.I.; Andrew, P.; Barnsley, R.; Bertalot, L.; Drevon, J.M.; Encheva, A.; Kashchuk, Y.; Maquet, Ph.; Pearce, R.; Taylor, N.; Vayakis, G.; Walsh, M.J.

    2013-01-01

    Diagnostics play an essential role for the successful operation of the ITER tokamak. They provide the means to observe, control and measure the plasma during the operation of the ITER tokamak. The components of the diagnostic system in the ITER tokamak will be installed in the vacuum vessel, in the cryostat, in the upper, equatorial and divertor ports, in the divertor cassettes and racks, as well as in various buildings. Diagnostic components that are placed in a high radiation environment are expected to operate for the life of ITER. There are approx. 45 diagnostic systems located on ITER. Some diagnostics incorporate direct or independently pumped extensions to maintain their necessary vacuum conditions. They require a base pressure of less than 10^-7 Pa, irrespective of plasma operation, and a leak rate of less than 10^-10 Pa·m^3·s^-1. In all cases it is essential to maintain the ITER closed fuel cycle. These directly coupled diagnostic systems are an integral part of the ITER vacuum containment and are therefore subject to the same design requirements for tritium and active gas confinement, for all normal and accidental conditions. All the diagnostics, whether or not pumped, incorporate penetrations of the vacuum boundary (i.e. window assembly, vacuum feedthrough etc.) and demountable joints. Monitored guard volumes are provided for all elements of the vacuum boundary that are judged to be vulnerable by virtue of their construction, material, load specification etc. Standard arrangements are made for their construction and for the monitoring, evacuating and leak testing of these volumes. Diagnostic systems are incorporated at more than 20 ports on ITER. This paper will describe typical and particular arrangements of pumped diagnostics and monitored guard volumes. The status of the diagnostic vacuum systems, which are at the start of their detailed design, will be outlined and the specific features of the vacuum systems in ports and extensions will be described

  14. Effect of Inspection Policies and Residual Value of Collected Used Products: A Mathematical Model and Genetic Algorithm for a Closed-Loop Green Manufacturing System

    Directory of Open Access Journals (Sweden)

    Byung Duk Song

    2017-09-01

    Full Text Available In the green manufacturing system that pursues the reuse of used products, the residual value of collected used products (CUP hugely affects a variety of managerial decisions to construct profitable and environmental remanufacturing plans. This paper deals with a closed-loop green manufacturing system for companies which perform both manufacturing with raw materials and remanufacturing with collected used products (CUP. The amount of CUP is assumed as a function of buy-back cost while the quality level of CUP, which means the residual value, follows a known distribution. In addition, the remanufacturing cost can differ according to the quality of the CUP. Moreover, nowadays companies are subject to existing environment-related laws such as Extended Producer Responsibility (EPR. Therefore, a company should collect more used products than its obligatory take-back quota or face fines from the government for not meeting its quota. Through the development of mathematical models, two kinds of inspection policies are examined to validate the efficiency of two different operation processes. To find a managerial solution, a genetic algorithm is proposed and tested with numerical examples.

  15. The ITER remote maintenance system

    International Nuclear Information System (INIS)

    Tesini, A.

    2007-01-01

    Full text of publication follows: ITER is a joint international research and development project that aims to demonstrate the scientific and technological feasibility of fusion power. As soon as the plasma operation begins using tritium, the replacement of the vacuum vessel internal components will need to be done with remote handling techniques. To accomplish these operations ITER has equipped itself with a Remote Maintenance System; this includes the Remote Handling equipment set and the Hot Cell facility. Both need to work in a cooperative way, with the aim of minimizing the machine shutdown periods and to maximize the machine availability. The ITER Remote Handling equipment set is required to be available, robust, reliable and retrievable. The machine components, to be remotely handle-able, are required to be designed simply so as to ease their maintenance. The baseline ITER Remote Handling equipment is described. The ITER Hot Cell Facility is required to provide a controlled and shielded area for the execution of repair operations (carried out using dedicated remote handling equipment) on those activated components which need to be returned to service, inside the vacuum vessel. The Hot Cell provides also the equipment and space for the processing and temporary storage of the operational and decommissioning rad-waste. A conceptual ITER Hot Cell Facility is described. (authors)

  16. ITER concept definition. V.1

    International Nuclear Information System (INIS)

    1989-01-01

    Under the auspices of the International Atomic Energy Agency (IAEA), an agreement among the four parties representing the world's major fusion programs resulted in a program for conceptual design of the next logical step in the fusion program, the International Thermonuclear Experimental Reactor (ITER). The definition phase, which ended in November, 1989, is summarized in two reports: a brief summary is contained in the ITER Definition Phase Report (IAEA/ITER/DS/2); the extended technical summary and technical details of ITER are contained in this two-volume report. The first volume of this report contains the Introduction and Summary, and the remainder will appear in Volume II. In the Conceptual Design Activities phase, ITER has been defined as being a tokamak device. The basic performance parameters of ITER are given in Volume I of this report. In addition, the rationale for selection of this concept, the performance flexibility, technical issues, operations, safety, reliability, cost, and research and development needed to proceed with the design are discussed. Figs and tabs

  17. The ITER remote maintenance system

    International Nuclear Information System (INIS)

    Tesini, A.; Palmer, J.

    2007-01-01

    ITER is a joint international research and development project that aims to demonstrate the scientific and technological feasibility of fusion power. As soon as the plasma operation begins using tritium, the replacement of the vacuum vessel internal components will need to be done with remote handling techniques. To accomplish these operations ITER has equipped itself with a Remote Maintenance System; this includes the Remote Handling equipment set and the Hot Cell facility. Both need to work in a cooperative way, with the aim of minimizing the machine shutdown periods and to maximize the machine availability. The ITER Remote Handling equipment set is required to be available, robust, reliable and retrievable. The machine components, to be remotely handle-able, are required to be designed simply so as to ease their maintenance. The baseline ITER Remote Handling equipment is described. The ITER Hot Cell Facility is required to provide a controlled and shielded area for the execution of repair operations (carried out using dedicated remote handling equipment) on those activated components which need to be returned to service, inside the vacuum vessel. The Hot Cell provides also the equipment and space for the processing and temporary storage of the operational and decommissioning radwaste. A conceptual ITER Hot Cell Facility is described. (orig.)

  18. Iterative interferometry-based method for picking microseismic events

    Science.gov (United States)

    Iqbal, Naveed; Al-Shuhail, Abdullatif A.; Kaka, SanLinn I.; Liu, Entao; Raj, Anupama Govinda; McClellan, James H.

    2017-05-01

    Continuous microseismic monitoring of hydraulic fracturing is commonly used in many engineering, environmental, mining, and petroleum applications. Microseismic signals recorded at the surface suffer from excessive noise that complicates first-break picking and subsequent data processing and analysis. This study presents a new first-break picking algorithm that employs concepts from seismic interferometry and time-frequency (TF) analysis. The algorithm first uses a TF plot to manually pick a reference first-break and then iterates the steps of cross-correlation, alignment, and stacking to enhance the signal-to-noise ratio of the relative first breaks. The reference first-break is subsequently used to calculate final first breaks from the relative ones. Testing on synthetic and real data sets at high levels of additive noise shows that the algorithm enhances the first-break picking considerably. Furthermore, results show that only two iterations are needed to converge to the true first breaks. Indeed, iterating more can have detrimental effects on the algorithm due to increasing correlation of random noise.
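
    A bare-bones version of the cross-correlate/align/stack loop on synthetic traces is sketched below (two iterations, as in the abstract); converting the resulting relative lags into absolute first breaks would require the manually picked reference described in the paper, which is not modelled here, and the synthetic wavelet and noise level are illustrative assumptions.

```python
import numpy as np

def iterative_relative_picks(traces, n_iter=2):
    """Iteratively cross-correlate each trace with the current stack, align the
    traces, and re-stack, so that the stack's signal-to-noise ratio improves and
    the relative first-break lags sharpen."""
    n_traces, n_samples = traces.shape
    lags = np.zeros(n_traces, dtype=int)
    for _ in range(n_iter):
        aligned = np.array([np.roll(tr, -lag) for tr, lag in zip(traces, lags)])
        stack = aligned.mean(axis=0)                        # noise-suppressed reference
        for i, tr in enumerate(traces):
            xc = np.correlate(tr, stack, mode="full")
            lags[i] = int(np.argmax(xc)) - (n_samples - 1)  # delay of trace relative to the stack
    return lags

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    wavelet = np.diff(np.exp(-np.linspace(-3, 3, 40) ** 2))  # simple synthetic pulse
    true_shifts = rng.integers(0, 10, size=8)
    traces = np.zeros((8, 300))
    for i, s in enumerate(true_shifts):
        traces[i, 100 + s:100 + s + wavelet.size] = wavelet
    traces += 0.05 * rng.standard_normal(traces.shape)
    lags = iterative_relative_picks(traces)
    print("true shifts:   ", true_shifts)
    print("relative picks:", lags)   # should track the true shifts up to a common offset
```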

  19. Adaptive ILC with an adaptive iterative learning gain

    International Nuclear Information System (INIS)

    Ashraf, S.; Muhammad, E.

    2008-01-01

    This paper describes the design of an adaptive ILC (Iterative Learning Controller) with an iterative learning gain. The basic idea behind ILC is that the information obtained from one trial can be used to improve the control input for the next trial. The proposed scheme extends this idea further and suggests that the information obtained from one trial could also be used to improve the control algorithm parameters (gain matrices). The scheme converges faster than conventional ILC. Convergence speed, and hence the number of iterations, has always been an issue with ILC. Because of its simple mathematical structure, the scheme can easily be implemented with less memory and simpler hardware, as opposed to other such adaptive schemes, which are computationally expensive. (author)
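
    The flavour of such a scheme can be conveyed in a few lines: a P-type ILC update whose scalar learning gain is itself adjusted from trial to trial. The plant model and the particular gain-adaptation rule below are illustrative assumptions, not the law derived in the paper.

```python
import numpy as np

def run_trial(u, a=0.9, b=0.5):
    """One trial of a simple first-order plant y[t+1] = a*y[t] + b*u[t]."""
    y = np.zeros(u.size)
    for t in range(u.size - 1):
        y[t + 1] = a * y[t] + b * u[t]
    return y

def adaptive_ilc(y_ref, n_trials=15, gain=0.5):
    """P-type ILC, u_{k+1}[t] = u_k[t] + gain_k * e_k[t+1], where the learning
    gain is shrunk when the error grows and expanded slightly otherwise."""
    u = np.zeros(y_ref.size)
    prev_err = np.inf
    for _ in range(n_trials):
        e = y_ref - run_trial(u)                               # tracking error of this trial
        err = np.linalg.norm(e)
        gain = 0.5 * gain if err > prev_err else 1.05 * gain   # adapt the learning gain between trials
        prev_err = err
        u = u + gain * np.append(e[1:], 0.0)                   # error shifted by one step (relative degree one)
    return err

if __name__ == "__main__":
    t = np.arange(50)
    y_ref = np.sin(2 * np.pi * t / 50)   # reference that starts at zero, like the plant
    print("tracking error norm after learning:", adaptive_ilc(y_ref))
```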

  20. ITER Central Solenoid Module Fabrication

    Energy Technology Data Exchange (ETDEWEB)

    Smith, John [General Atomics, San Diego, CA (United States)

    2016-09-23

    The fabrication of the modules for the ITER Central Solenoid (CS) has started in a dedicated production facility located in Poway, California, USA. The necessary tools have been designed, built, installed, and tested in the facility to enable the start of production. The current schedule has first module fabrication completed in 2017, followed by testing and subsequent shipment to ITER. The Central Solenoid is a key component of the ITER tokamak providing the inductive voltage to initiate and sustain the plasma current and to position and shape the plasma. The design of the CS has been a collaborative effort between the US ITER Project Office (US ITER), the international ITER Organization (IO) and General Atomics (GA). GA's responsibility includes completing the fabrication design, developing and qualifying the fabrication processes and tools, and then completing the fabrication of the seven 110 tonne CS modules. The modules will be shipped separately to the ITER site, and then stacked and aligned in the Assembly Hall prior to insertion in the core of the ITER tokamak. A dedicated facility in Poway, California, USA has been established by GA to complete the fabrication of the seven modules. Infrastructure improvements included thick reinforced concrete floors, a diesel generator for backup power, along with cranes for moving the tooling within the facility. The fabrication process for a single module requires approximately 22 months followed by five months of testing, which includes preliminary electrical testing followed by high current (48.5 kA) tests at 4.7 K. The production of the seven modules is completed in a parallel fashion through ten process stations. The process stations have been designed and built with most stations having completed testing and qualification for carrying out the required fabrication processes. The final qualification step for each process station is achieved by the successful production of a prototype coil. Fabrication of the first

  1. ITER EDA newsletter. V. 5, no. 9

    International Nuclear Information System (INIS)

    1996-09-01

    This issue of the Newsletter on the Engineering Design Activities (EDA) for the ITER project contains an overview of one of the seven large ITER Research and Development Projects identified by the ITER Director, namely the Vacuum Vessel Sector, as well as an account of computer animation created for ITER

  2. ITER EDA newsletter. V. 4, no. 9

    International Nuclear Information System (INIS)

    1995-09-01

    This issue of the ITER EDA (Engineering Design Activities) Newsletter contains reports on the first meeting of the ITER Test Blanket Working Group held 19-21 July 1995 at the ITER Garching Joint Work Site, and on the second workshop of the ITER Expert Group on Confinement and Transport

  3. ITER ITA newsletter. No. 22, May 2005

    International Nuclear Information System (INIS)

    2005-06-01

    This issue of the ITER ITA (ITER Transitional Arrangements) newsletter contains concise information about the Japanese Participant Team's recent activities in the ITER Transitional Arrangements (ITA) phase and about an ITER related meeting, the Fourth IAEA Technical Meeting (IAEA-TM) on Negative Ion Based Neutral Beam Injectors, which was held in Padova, Italy, from 9-11 May 2005

  4. ITER EDA newsletter. V. 10, no. 6

    International Nuclear Information System (INIS)

    2001-06-01

    This ITER EDA Newsletter issue includes information about the ITER Management Advisory Committee Meeting held in Vienna on 16 July 2001 and also a summary of the ninth ITER Technical Meeting on safety and environment held at the ITER Garching Joint Work site, 8 to 10 May, 2001

  5. ITER EDA Newsletter. V. 4, no. 5

    International Nuclear Information System (INIS)

    1995-05-01

    This issue of the ITER EDA (Engineering Design Activities) Newsletter contains comments on the ITER project by the Permanent Representative of the Russian Federation to the International Organizations in Vienna; a report on the ITER Magnet Technical Meeting held at the Joint Work Site at Naka, Japan, April 19-21, 1995; and a contribution entitled ''ITER spouses cross the cultures''

  6. ITER EDA newsletter. V. 8, no. 9

    International Nuclear Information System (INIS)

    1999-09-01

    This edition of the ITER EDA Newsletter contains a contribution by the ITER Director, R. Aymar, on the subject of developments in ITER Physics R and D, and a report on the completion of the ITER central solenoid model coil installation by H. Tsuji, Head of the Superconducting Magnet Laboratory at JAERI in Naka, Japan. Individual abstracts are prepared for each of the two articles

  7. ITER EDA newsletter. V. 7, no. 1

    International Nuclear Information System (INIS)

    1998-01-01

    This issue of the ITER Newsletter contains a summary report on the Thirteenth meeting of the ITER Management Advisory Committee (MAC), a report on ITER at the International Conference on Fusion Reactor Materials and a report of a Russian scientist working at ITER Garching JWS

  8. ITER ITA newsletter. No. 5, June 2003

    International Nuclear Information System (INIS)

    2003-08-01

    This issue of the ITER ITA (ITER Transitional Arrangements) newsletter contains concise information about ITER related activities, one item covering the retirement of Dr. Michel Huguet, Deputy Director of the ITER Central Team and Head of the Naka Joint Work Site, and another covering his 10.5 years of activities at this site

  9. ITER EDA Newsletter. V.3, no.3

    International Nuclear Information System (INIS)

    1994-03-01

    This ITER EDA Newsletter issue contains reports on (i) the completion of the ITER EDA Protocol 1, (ii) the signing of ITER EDA Protocol 2, (iii) a technical meeting on pumping and fuelling and (iv) a technical meeting on the ITER Tritium Plant

  10. ITER EDA newsletter. V. 8, no. 12

    International Nuclear Information System (INIS)

    1999-12-01

    This ITER EDA Newsletter reports about the ITER Management Advisory Committee Meeting in Naka, the ITER Technical Advisory Committee Meeting in Naka and the meeting of the ITER SWG-P2 in Vienna. A separate abstract is prepared for each meeting

  11. Diverse Power Iteration Embeddings and Its Applications

    Energy Technology Data Exchange (ETDEWEB)

    Huang H.; Yoo S.; Yu, D.; Qin, H.

    2014-12-14

    Spectral Embedding is one of the most effective dimension reduction algorithms in data mining. However, its computational complexity has to be mitigated in order to apply it to real-world large-scale data analysis. Much research has focused on developing approximate spectral embeddings which are more efficient but far less effective. This paper proposes Diverse Power Iteration Embeddings (DPIE), which not only retains the efficiency of power iteration methods but also produces a series of diverse and more effective embedding vectors. We test this novel method by applying it to various data mining applications (e.g. clustering, anomaly detection and feature selection) and evaluating their performance improvements. The experimental results show that our proposed DPIE is more effective than popular spectral approximation methods and obtains similar quality to classic spectral embedding derived from eigen-decompositions. Moreover, it is extremely fast on big data applications. For example, in terms of clustering results, DPIE achieves as much as 95% of the quality of classic spectral clustering on complex datasets while being 4000+ times faster in a limited-memory environment.
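
    The underlying power iteration idea is easy to sketch: run a few matrix-vector products with the row-normalized affinity matrix and use the early-stopped iterate as an embedding coordinate. Using several random starting vectors below is only an illustrative stand-in for the paper's construction of diverse embedding vectors.

```python
import numpy as np

def power_iteration_embedding(W, n_vectors=3, n_steps=30, seed=0):
    """Early-stopped power iteration on the row-stochastic affinity matrix;
    each random start contributes one embedding dimension."""
    rng = np.random.default_rng(seed)
    P = W / W.sum(axis=1, keepdims=True)      # row-normalized similarity matrix
    cols = []
    for _ in range(n_vectors):
        v = rng.random(W.shape[0])
        for _ in range(n_steps):
            v = P @ v
            v = v / np.abs(v).max()           # rescale to avoid numerical drift
        cols.append(v - v.mean())             # remove the trivial constant component
    return np.column_stack(cols)

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    X = np.vstack([rng.normal(0, 0.3, (20, 2)),     # two well-separated point clouds
                   rng.normal(3, 0.3, (20, 2))])
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / 0.5)                           # Gaussian-kernel affinities
    E = power_iteration_embedding(W)
    print(E.shape)                                  # (40, 3); ready for k-means or anomaly scoring
```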

  12. ITER safety challenges and opportunities

    International Nuclear Information System (INIS)

    Piet, S.J.

    1991-01-01

    Results of the Conceptual Design Activity (CDA) for the International Thermonuclear Experimental Reactor (ITER) suggest challenges and opportunities. ''ITER is capable of meeting anticipated regulatory dose limits,'' but proof is difficult because of large radioactive inventories needing stringent radioactivity confinement. We need much research and development (R&D) and design analysis to establish that ITER meets regulatory requirements. We have a further opportunity to do more to prove more of fusion's potential safety and environmental advantages and maximize the amount of ITER technology on the path toward fusion power plants. To fulfill these tasks, we need to overcome three programmatic challenges and three technical challenges. The first programmatic challenge is to fund a comprehensive safety and environmental ITER R&D plan. Second is to strengthen safety and environment work and personnel in the international team. Third is to establish an external consultant group to advise the ITER Joint Team on designing ITER to meet safety requirements for siting by any of the Parties. The first of the three key technical challenges is plasma engineering -- burn control, plasma shutdown, disruptions, tritium burn fraction, and steady state operation. The second is the divertor, including tritium inventory, activation hazards, chemical reactions, and coolant disturbances. The third technical challenge is optimization of design requirements considering safety risk, technical risk, and cost. Some design requirements are now too strict; some are too lax. Fuel cycle design requirements are presently too strict, mandating inappropriate T separation from H and D. Heat sink requirements are presently too lax; they should be strengthened to ensure that maximum loss of coolant accident temperatures drop

  13. Lensless microscope based on iterative in-line holographic reconstruction

    Science.gov (United States)

    Wu, Jigang

    2014-11-01

    We propose a lensless microscopic imaging technique based on an iterative algorithm with a known constraint for image reconstruction in digital in-line holography. In our method, we introduce a constraint on the sample plane, used as the known part in the iterative algorithm, in order to eliminate the twin-image effect of holography and thus achieve better microscopic imaging performance. We evaluated our method by numerical simulation, built a prototype in-line holographic imaging system, and demonstrated its capability in preliminary experiments. In our proposed setup, a carefully designed photomask used to hold the sample is illuminated by a coherent light source. The in-line hologram is then recorded by a CMOS sensor. In the reconstruction, the known information about the illumination beam and the photomask is used as constraints in the iteration process. The improvement in image quality due to the suppression of twin images can be clearly seen by comparing the images obtained by direct holographic reconstruction and by our iterative method.
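
    A toy version of such a constrained iteration is sketched below: the hologram amplitude is propagated back and forth with the angular spectrum method, the known (here assumed fully opaque) photomask region is enforced on the sample plane, and the measured amplitude is re-imposed on the sensor plane. All geometry, wavelength and mask values are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def angular_spectrum(field, distance, wavelength, dx):
    """Propagate a complex field over `distance` with the angular spectrum method."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    kz = 2 * np.pi * np.sqrt(((1.0 / wavelength) ** 2 - FX ** 2 - FY ** 2).astype(complex))
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * distance))

def reconstruct(holo_amp, known_opaque, z, wavelength, dx, n_iter=50):
    """Iterative twin-image suppression using the known photomask region."""
    field = holo_amp.astype(complex)                          # measured amplitude, zero initial phase
    for _ in range(n_iter):
        sample = angular_spectrum(field, -z, wavelength, dx)  # back-propagate to the sample plane
        sample[known_opaque] = 0.0                            # enforce the known constraint
        field = angular_spectrum(sample, +z, wavelength, dx)  # forward-propagate to the sensor
        field = holo_amp * np.exp(1j * np.angle(field))       # keep the measured amplitude
    return angular_spectrum(field, -z, wavelength, dx)

if __name__ == "__main__":
    n, dx, wl, z = 256, 2e-6, 0.5e-6, 1e-3
    yy, xx = np.mgrid[:n, :n]
    known_opaque = np.zeros((n, n), bool)
    known_opaque[:, :40] = True                                # opaque strip of the photomask
    sample = np.ones((n, n), complex)
    sample[(xx - 128) ** 2 + (yy - 128) ** 2 < 15 ** 2] = 0.2  # weakly transmitting object
    sample[known_opaque] = 0.0
    holo = np.abs(angular_spectrum(sample, z, wl, dx))         # simulated in-line hologram
    rec = reconstruct(holo, known_opaque, z, wl, dx)
    print("reconstructed amplitude range:", np.abs(rec).min(), np.abs(rec).max())
```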

  14. Iterated unscented Kalman filter for phase unwrapping of interferometric fringes.

    Science.gov (United States)

    Xie, Xianming

    2016-08-22

    A new phase unwrapping algorithm based on an iterated unscented Kalman filter is proposed to estimate the unambiguous unwrapped phase of interferometric fringes. The method combines an iterated unscented Kalman filter with a robust phase gradient estimator based on an amended matrix pencil model and an efficient quality-guided strategy based on heap sort. The iterated unscented Kalman filter, one of the most robust Bayesian methods in non-linear signal processing to date, is applied for the first time to perform noise suppression and phase unwrapping of interferometric fringes simultaneously, which simplifies, and can even remove, the pre-filtering procedure that normally precedes phase unwrapping. The robust phase gradient estimator is used to efficiently and accurately obtain phase gradient information from interferometric fringes, which is needed by the iterated unscented Kalman filtering phase unwrapping model. The efficient quality-guided strategy ensures that the proposed method quickly unwraps pixels along a path from the high-quality area to the low-quality area of wrapped phase images, which greatly improves the efficiency of phase unwrapping. Results obtained from synthetic and real data show that the proposed method obtains better solutions, with acceptable time consumption, than some of the most widely used algorithms.

  15. Iterative CT reconstruction via minimizing adaptively reweighted total variation.

    Science.gov (United States)

    Zhu, Lei; Niu, Tianye; Petrongolo, Michael

    2014-01-01

    Iterative reconstruction via total variation (TV) minimization has demonstrated great successes in accurate CT imaging from under-sampled projections. When projections are further reduced, over-smoothing artifacts appear in the current reconstruction especially around the structure boundaries. We propose a practical algorithm to improve TV-minimization based CT reconstruction on very few projection data. Based on the theory of compressed sensing, the L0 norm approach is more desirable for further reducing the projection views. To overcome the computational difficulty of the non-convex optimization of the L0 norm, we implement an adaptive weighting scheme to approximate the solution via a series of TV minimizations for practical use in CT reconstruction. The weights on TV are initialized as uniform and are automatically adjusted based on the gradient of the reconstructed image from the previous iteration. The iteration stops when a small difference between the weighted TV values is observed on two consecutive reconstructed images. We evaluate the proposed algorithm on both a digital phantom and a physical phantom. Using 20 equiangular projections, our method reduces reconstruction errors in the conventional TV minimization by a factor of more than 5, with improved spatial resolution. By adaptively reweighting TV in iterative CT reconstruction, we successfully further reduce the projection number for the same or better image quality.
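
    The reweighting idea can be illustrated on a much simpler 1-D denoising problem (the paper applies it inside an iterative CT reconstruction, which is not reproduced here): solve a sequence of smoothed weighted-TV problems by gradient descent and update the weights from the gradient of the previous solution. All parameter values are illustrative assumptions.

```python
import numpy as np

def weighted_tv_denoise(y, w, lam=0.2, step=0.05, n_steps=300, eps=1e-2):
    """Gradient descent on 0.5*||x - y||^2 + lam * sum_i w_i * sqrt((x_{i+1}-x_i)^2 + eps)."""
    x = y.copy()
    for _ in range(n_steps):
        d = np.diff(x)
        g = w * d / np.sqrt(d ** 2 + eps)   # derivative of the smoothed, weighted TV term
        tv_grad = np.zeros_like(x)
        tv_grad[:-1] -= g
        tv_grad[1:] += g
        x = x - step * ((x - y) + lam * tv_grad)
    return x

def reweighted_tv(y, n_outer=4, eps=1e-2):
    """Adaptively reweighted TV: weights start uniform and are recomputed from the
    gradient of the previous reconstruction, approximating an L0-like penalty."""
    w = np.ones(y.size - 1)
    x = y.copy()
    for _ in range(n_outer):
        x = weighted_tv_denoise(y, w)
        w = 1.0 / (np.abs(np.diff(x)) + eps)   # small gradients -> larger weights -> stronger smoothing
        w = w / w.mean()                       # keep the overall regularization strength comparable
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    truth = np.repeat([0.0, 1.0, 0.3, 0.8], 50)             # piecewise-constant ground truth
    noisy = truth + 0.1 * rng.standard_normal(truth.size)
    rec = reweighted_tv(noisy)
    print("RMSE noisy:        ", np.sqrt(np.mean((noisy - truth) ** 2)))
    print("RMSE reweighted TV:", np.sqrt(np.mean((rec - truth) ** 2)))
```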

  16. Impact of model-based iterative reconstruction on image quality of contrast-enhanced neck CT.

    Science.gov (United States)

    Gaddikeri, S; Andre, J B; Benjert, J; Hippe, D S; Anzai, Y

    2015-02-01

    Improved image quality is clinically desired for contrast-enhanced CT of the neck. We compared 30% adaptive statistical iterative reconstruction and model-based iterative reconstruction algorithms for the assessment of image quality of contrast-enhanced CT of the neck. Neck contrast-enhanced CT data from 64 consecutive patients were reconstructed retrospectively by using 30% adaptive statistical iterative reconstruction and model-based iterative reconstruction. Objective image quality was assessed by comparing SNR, contrast-to-noise ratio, and background noise at levels 1 (mandible) and 2 (superior mediastinum). Two independent blinded readers subjectively graded the image quality on a scale of 1-5 (grade 5 = excellent image quality without artifacts and grade 1 = nondiagnostic image quality with significant artifacts). The percentage of agreement and disagreement between the 2 readers was assessed. Compared with 30% adaptive statistical iterative reconstruction, model-based iterative reconstruction significantly improved the SNR and contrast-to-noise ratio at levels 1 and 2. Model-based iterative reconstruction also decreased background noise at level 1 (P = .016), though there was no difference at level 2 (P = .61). Model-based iterative reconstruction was scored higher than 30% adaptive statistical iterative reconstruction by both reviewers at the nasopharynx and for overall image quality. Model-based iterative reconstruction offers improved subjective and objective image quality as evidenced by a higher SNR and contrast-to-noise ratio and lower background noise within the same dataset for contrast-enhanced neck CT. Model-based iterative reconstruction has the potential to reduce the radiation dose while maintaining the image quality, with a minor downside being prominent artifacts related to thyroid shield use on model-based iterative reconstruction. © 2015 by American Journal of Neuroradiology.

  17. Optimal PMU placement using Iterated Local Search

    Energy Technology Data Exchange (ETDEWEB)

    Hurtgen, M.; Maun, J.-C. [Universite Libre de Bruxelles, Avenue F. Roosevelt 50, B-1050 Brussels (Belgium)

    2010-10-15

    An essential tool for power system monitoring is state estimation. Using PMUs can greatly improve the state estimation process. However, for state estimation, the PMUs should be placed appropriately in the network. The problem of optimal PMU placement for full observability is analysed in this paper. The objective of the paper is to minimise the size of the PMU configuration while allowing full observability of the network. The method proposed initially suggests a PMU distribution which makes the network observable. The Iterated Local Search (ILS) metaheuristic is then used to minimise the size of the PMU configuration needed to observe the network. The algorithm is tested on IEEE test networks with 14, 57 and 118 nodes and compared to the results obtained in previous publications. (author)
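
    A compact sketch of the ILS idea applied to PMU placement is given below, using only the simplest observability rule (a bus is observed if it hosts a PMU or neighbours one; zero-injection buses and the IEEE test systems used in the paper are not modelled). The perturbation and repair moves are illustrative choices rather than the paper's exact operators.

```python
import random

def observed(placement, adjacency):
    """Buses observed under the simple rule: a bus with a PMU, or adjacent to one."""
    obs = set()
    for b in placement:
        obs.add(b)
        obs.update(adjacency[b])
    return obs

def trim(placement, adjacency, n_buses):
    """Local search: greedily drop PMUs that are not needed for full observability."""
    placement = set(placement)
    for b in sorted(placement):
        if len(observed(placement - {b}, adjacency)) == n_buses:
            placement.discard(b)
    return placement

def iterated_local_search(adjacency, n_restarts=200, seed=0):
    rng = random.Random(seed)
    n_buses = len(adjacency)
    best = trim(set(adjacency), adjacency, n_buses)                # start from "a PMU on every bus"
    for _ in range(n_restarts):
        cand = set(best)
        for b in rng.sample(sorted(cand), k=min(2, len(cand))):    # perturbation: relocate two PMUs
            cand.discard(b)
            cand.add(rng.randrange(n_buses))
        while len(observed(cand, adjacency)) < n_buses:            # repair: re-establish observability
            cand.add(max(range(n_buses),
                         key=lambda b: len(observed(cand | {b}, adjacency))))
        cand = trim(cand, adjacency, n_buses)                      # local search on the perturbed solution
        if len(cand) < len(best):                                  # acceptance criterion
            best = cand
    return best

if __name__ == "__main__":
    # small illustrative network given as an adjacency list (not one of the IEEE test cases)
    adjacency = {0: [1, 4], 1: [0, 2, 4], 2: [1, 3], 3: [2, 4, 5], 4: [0, 1, 3], 5: [3, 6], 6: [5]}
    print("PMU buses:", sorted(iterated_local_search(adjacency)))
```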

  18. Robot Calibration Using Iteration and Differential Kinematics

    Science.gov (United States)

    Ye, S. H.; Wang, Y.; Ren, Y. J.; Li, D. K.

    2006-10-01

    In applications such as seam laser tracking welding robots and general measuring robot stations based on stereo vision, robot calibration is the most difficult step of the whole system calibration process. Many calibration methods have been put forward, but the exact location of the base frame has to be known no matter which method is employed. However, the accurate base frame location is hard to obtain. In order to obtain the position of the base coordinate frame, this paper presents a novel iterative algorithm which also yields the parameter deviations at the same time. The method employs differential kinematics to solve for the link parameter deviations and approaches the real values step by step. In the end, experimental validation is provided.

  19. New algorithms for parallel MRI

    International Nuclear Information System (INIS)

    Anzengruber, S; Ramlau, R; Bauer, F; Leitao, A

    2008-01-01

    Magnetic Resonance Imaging with parallel data acquisition requires algorithms for reconstructing the patient's image from a small number of measured lines of the Fourier domain (k-space). In contrast to well-known algorithms like SENSE and GRAPPA and their flavors, we consider the problem as a non-linear inverse problem. However, in order to avoid cost-intensive derivatives we use a Landweber-Kaczmarz iteration and, in order to improve the overall results, some additional sparsity constraints.
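
    A toy Landweber-Kaczmarz loop for this setting, cycling over per-coil operators of the form F_c(x) = mask * FFT(sens_c * x), is sketched below. Treating the coil sensitivities as known and the operators as linear is a simplification of the paper's nonlinear formulation, and the sparsity constraints mentioned in the abstract are omitted.

```python
import numpy as np

def landweber_kaczmarz(kspace, sens, mask, n_sweeps=50, step=1.0):
    """Cycle over the coils, applying one Landweber update per coil operator."""
    x = np.zeros(sens.shape[1:], dtype=complex)
    for _ in range(n_sweeps):
        for c in range(sens.shape[0]):
            residual = kspace[c] - mask * np.fft.fft2(sens[c] * x, norm="ortho")
            x = x + step * np.conj(sens[c]) * np.fft.ifft2(mask * residual, norm="ortho")
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    n = 64
    yy, xx = np.mgrid[:n, :n]
    image = (((xx - 32) ** 2 + (yy - 32) ** 2) < 15 ** 2).astype(float)    # toy phantom
    sens = np.stack([np.exp(-((xx - cx) ** 2 + (yy - cy) ** 2) / 2000.0)   # smooth coil sensitivity maps
                     for cx, cy in [(0, 0), (0, n), (n, 0), (n, n)]])
    mask = (rng.random((n, n)) < 0.4).astype(float)                        # random k-space undersampling
    mask[:, ::4] = 1.0                                                     # keep every 4th k-space column
    kspace = np.stack([mask * np.fft.fft2(sens[c] * image, norm="ortho") for c in range(4)])
    rec = landweber_kaczmarz(kspace, sens, mask)
    err = np.linalg.norm(np.abs(rec) - image) / np.linalg.norm(image)
    print("relative reconstruction error:", err)
```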

  20. Clearance potential of ITER vacuum vessel activated materials

    International Nuclear Information System (INIS)

    Cepraga, D.G.; Cambi, G.; Frisoni, M.

    2002-01-01

    To demonstrate fusion's environmental attractiveness over the entire life cycle, a waste analysis is mandatory. Clearance is recommended by the IAEA for releasing activated solid materials from regulatory control and for waste management policy. The paper focuses on the approach used to support waste analyses for the ITER Generic Site Safety Report. The Material Unconditional Clearance Index of all the materials/zones on the equatorial mid-plane of the ITER machine has been evaluated, based on IAEA-TECDOC-855. The Bonami-Nitawl-XSDNRPM sequence of the Scale-4.4a code system (using the Vitenea-J library) was first used for the radiation transport analyses. The Anita-2000 code package was then used for the activation calculation. The paper also presents, as an example, an application of the clearance index estimation for the ITER vacuum vessel materials. The results of Anita-2000 have been compared with those obtained using the Fispact-99 activation code. (author)

  1. An iterative learning controller for nonholonomic mobile robots

    International Nuclear Information System (INIS)

    Oriolo, G.; Panzieri, S.; Ulivi, G.

    1998-01-01

    The authors present an iterative learning controller that applies to nonholonomic mobile robots, as well as other systems that can be put in chained form. The learning algorithm exploits the fact that chained-form systems are linear under piecewise-constant inputs. The proposed control scheme requires the execution of a small number of experiments to drive the system to the desired state in finite time, with nice convergence and robustness properties with respect to modeling inaccuracies as well as disturbances. To avoid the necessity of exactly reinitializing the system at each iteration, the basic method is modified so as to obtain a cyclic controller, by which the system is cyclically steered through an arbitrary sequence of states. As a case study, a carlike mobile robot is considered. Both simulation and experimental results are reported to show the performance of the method

  2. Iterative Reconfigurable Tree Search Detection of MIMO Systems

    Directory of Open Access Journals (Sweden)

    Hanwen Luo

    2007-01-01

    Full Text Available This paper is concerned with reduced-complexity detection, referred to as iterative reconfigurable tree search (IRTS) detection, with application in iterative receivers for multiple-input multiple-output (MIMO) systems. Instead of the optimum maximum a posteriori probability detector, which performs a brute force search over all possible transmitted symbol vectors, the new scheme evaluates only the symbol vectors that contribute significantly to the soft output of the detector. The IRTS algorithm is facilitated by carrying out the search on a reconfigurable tree, constructed by computing the reliabilities of symbols based on the minimum mean-square error (MMSE) criterion and reordering the symbols according to their reliabilities. Results from computer simulations are presented, which demonstrate the good performance of the IRTS algorithm over a quasistatic Rayleigh channel even for relatively small list sizes.

  3. Iterative methods for simultaneous inclusion of polynomial zeros

    CERN Document Server

    Petković, Miodrag

    1989-01-01

    The simultaneous inclusion of polynomial complex zeros is a crucial problem in numerical analysis. Rapidly converging algorithms are presented in these notes, including convergence analysis in terms of circular regions, and in complex arithmetic. Parallel circular iterations, where the approximations to the zeros have the form of circular regions containing these zeros, are efficient because they also provide error estimates. There are at present no book publications on this topic and one of the aims of this book is to collect most of the algorithms produced in the last 15 years. To decrease the high computational cost of interval methods, several effective iterative processes for the simultaneous inclusion of polynomial zeros which combine the efficiency of ordinary floating-point arithmetic with the accuracy control that may be obtained by the interval methods, are set down, and their computational efficiency is described. The rate of these methods is of interest in designing a package for the simultaneous ...
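
    The point-iteration ancestor of these inclusion methods, the Weierstrass (Durand-Kerner) simultaneous iteration, is easy to sketch. Note that, unlike the circular-arithmetic methods treated in the book, the plain floating-point version below provides no inclusion regions and hence no built-in error bounds.

```python
import numpy as np

def durand_kerner(coeffs, n_iter=100, tol=1e-12):
    """Weierstrass/Durand-Kerner simultaneous iteration for all zeros of a
    polynomial given by its coefficients (highest degree first)."""
    coeffs = np.asarray(coeffs, dtype=complex)
    coeffs = coeffs / coeffs[0]                        # make the polynomial monic
    n = coeffs.size - 1
    z = (0.4 + 0.9j) ** np.arange(1, n + 1)            # standard distinct starting points
    for _ in range(n_iter):
        new_z = z.copy()
        for i in range(n):
            denom = np.prod(z[i] - np.delete(z, i))    # product over the other current approximations
            new_z[i] = z[i] - np.polyval(coeffs, z[i]) / denom
        if np.max(np.abs(new_z - z)) < tol:
            return new_z
        z = new_z
    return z

if __name__ == "__main__":
    # zeros of z^3 - 6z^2 + 11z - 6 are 1, 2 and 3
    print(np.sort_complex(durand_kerner([1, -6, 11, -6])))
```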

  4. Iterative procedures for wave propagation in the frequency domain

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Seongjai [Rice Univ., Houston, TX (United States); Symes, W.W.

    1996-12-31

    A parallelizable two-grid iterative algorithm incorporating a domain decomposition (DD) method is considered for solving the Helmholtz problem. Since a numerical method requires choosing at least 6 to 8 grid points per wavelength, the coarse-grid problem itself is not an easy task for high frequency applications. We solve the coarse-grid problem using a nonoverlapping DD method. To accelerate the convergence of the iteration, an artificial damping technique and relaxation parameters are introduced. Automatic strategies for finding efficient parameters are discussed. Numerical results are presented to show the effectiveness of the method. It is numerically verified that the rate of convergence of the algorithm depends on the wave number sub-linearly and does not deteriorate as the mesh size decreases.

  5. Algorithming the Algorithm

    DEFF Research Database (Denmark)

    Mahnke, Martina; Uprichard, Emma

    2014-01-01

    changes: it’s not the ocean, it’s the internet we’re talking about, and it’s not a TV show producer, but algorithms that constitute a sort of invisible wall. Building on this assumption, most research is trying to ‘tame the algorithmic tiger’. While this is a valuable and often inspiring approach, we...

  6. Fusion Power measurement at ITER

    International Nuclear Information System (INIS)

    Bertalot, L.; Barnsley, R.; Krasilnikov, V.; Stott, P.; Suarez, A.; Vayakis, G.; Walsh, M.

    2015-01-01

    Nuclear fusion research aims to provide energy for the future in a sustainable way and the ITER project scope is to demonstrate the feasibility of nuclear fusion energy. ITER is a nuclear experimental reactor based on a large scale fusion plasma (tokamak type) device generating Deuterium - Tritium (DT) fusion reactions with emission of 14 MeV neutrons producing up to 700 MW fusion power. The measurement of fusion power, i.e. total neutron emissivity, will play an important role for achieving ITER goals, in particular the fusion gain factor Q related to the reactor performance. Particular attention is given also to the development of the neutron calibration strategy whose main scope is to achieve the required accuracy of 10% for the measurement of fusion power. Neutron Flux Monitors located in diagnostic ports and inside the vacuum vessel will measure ITER total neutron emissivity, expected to range from 10^14 n/s in Deuterium - Deuterium (DD) plasmas up to almost 10^21 n/s in DT plasmas. The neutron detection systems, as well as all other ITER diagnostics, have to withstand high nuclear radiation and electromagnetic fields as well as ultrahigh vacuum and thermal loads. (authors)

  7. ITER safety and licensing update

    Energy Technology Data Exchange (ETDEWEB)

    Taylor, Neill, E-mail: neill.taylor@iter.org [ITER Organization, Route de Vinon sur Verdon, 13115 Saint Paul Lez Durance (France); Ciattaglia, Sergio; Cortes, Pierre; Iseli, Markus; Rosanvallon, Sandrine; Topilski, Leonid [ITER Organization, Route de Vinon sur Verdon, 13115 Saint Paul Lez Durance (France)

    2012-08-15

    Highlights: • The ITER preliminary safety report has been submitted to the French nuclear regulator. • Safety analyses have shown how the key safety functions will be achieved. • The design contains multiple provisions for the confinement of radioactive material. • Analyses have addressed external hazards (e.g. earthquake) and loss of all power. - Abstract: Safety files were submitted by the ITER Organization to the French nuclear safety authorities in March 2010 as a part of the licensing process. These included the preliminary safety report (RPrS) which presents the extensive safety analyses performed for ITER. The report has been the subject of examination by the authorities and their advisors, and discussions with them have been held on many topics. In the light of this process, this paper discusses some of the topics that remain prominent in the safety analysis of ITER. In particular, the provision of the two safety functions, confinement of radioactive material and limitation of exposure to radiation, is explained and some of the potential challenges to them are identified. Amongst these are the risks of fire and explosion, and external events such as earthquake and loss of all electric power. Provisions in the ITER design, together with the characteristics of fusion, ensure that a very good safety performance will be achieved.

  8. ITER ITA newsletter. No. 1, February 2003

    International Nuclear Information System (INIS)

    2003-04-01

    This first issue of the ITER ITA (ITER transitional Arrangements) newsletter contains concise information about ITER related meetings, including the eighth ITER Negotiations meeting, held on 18-19 February 2003 in St. Petersburg, Russia; the first meeting of the ITER Preparatory Committee, held on 17 February 2003 in St. Petersburg, Russia; and the third meeting of the ITPA (International Tokamak Physics Activity) Coordinating Committee, held on 24-25 October 2002 at the Max-Planck-Institut fuer Plasmaphysik, Garching

  9. ITER ITA newsletter. Special issue - December 2006

    International Nuclear Information System (INIS)

    2006-12-01

    This issue of the ITER ITA (ITER transitional arrangements) newsletter contains information about the signing of the ITER Agreement, which took place on 21 November 2006 in Paris, France. It was a great day for fusion research as Ministers from the seven ITER Parties, in the presence of President Jacques Chirac, the President of the European Commission Jose Barroso and some 400 invited guests, signed the Agreement setting up the ITER International Fusion Energy Organization. This issue contains the speeches, statements and remarks of the Presidents and Ministers

  10. ITER ITA newsletter. No. 11, December 2003

    International Nuclear Information System (INIS)

    2003-12-01

    This issue of the ITER ITA (ITER transitional Arrangements) newsletter contains concise information about ITER, including information from the editor about the ITER update, about progress in ITER magnet design and preparation of procurement packages, and about the 25th anniversary of the First Steering Committee Meeting of the International Tokamak Reactor (INTOR) Workshop, organized under the auspices of the IAEA, which took place at the IAEA Headquarters in Vienna

  11. ITER EDA newsletter. V. 5, no. 7

    International Nuclear Information System (INIS)

    1996-07-01

    This issue of the Newsletter on the Engineering Design Activities (EDA) for the ITER Tokamak project contains a report on the Tenth ITER Council Meeting, held July 24-25, 1996, in St. Petersburg, Russia; a description of the Status of the ITER EDA by the ITER Director, Dr. R. Aymar; and a report on the so-called Task Number One by the ITER Special Working Group (Basis for the Start of Explorations, presenting possible scenarios toward siting, licensing and host support)

  12. ITER ITA newsletter No. 33, August-September-October 2006

    International Nuclear Information System (INIS)

    2006-11-01

    This issue of the ITER ITA (ITER transitional arrangements) newsletter contains concise information about ITER related events such as the public debate on ITER in Provence and the fiftieth annual General Conference of the IAEA. Eight ITER related statements were made during the Conference

  13. Privacy preserving randomized gossip algorithms

    KAUST Repository

    Hanzely, Filip

    2017-06-23

    In this work we present three different randomized gossip algorithms for solving the average consensus problem while at the same time protecting the information about the initial private values stored at the nodes. We give iteration complexity bounds for all methods, and perform extensive numerical experiments.
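
    As a point of reference for the kind of update such methods build on, the sketch below shows plain (unprotected) randomized gossip averaging on an undirected graph given as an edge list; the three privacy-preserving variants of the paper are not reproduced here, and the graph and iteration count are illustrative assumptions.

        import random

        def randomized_gossip(values, edges, num_iters=10_000, seed=0):
            """Plain randomized gossip: repeatedly average across a randomly chosen edge."""
            rng = random.Random(seed)
            x = list(values)
            for _ in range(num_iters):
                i, j = rng.choice(edges)      # pick a random communicating pair
                avg = 0.5 * (x[i] + x[j])     # both endpoints move to their pairwise average
                x[i] = x[j] = avg
            return x

        # Example: a 4-node ring; every entry converges to the global mean 2.5.
        print(randomized_gossip([1.0, 2.0, 3.0, 4.0],
                                edges=[(0, 1), (1, 2), (2, 3), (3, 0)]))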

  14. Reinforcement learning produces dominant strategies for the Iterated Prisoner's Dilemma.

    Science.gov (United States)

    Harper, Marc; Knight, Vincent; Jones, Martin; Koutsovoulos, Georgios; Glynatsi, Nikoleta E; Campbell, Owen

    2017-01-01

    We present tournament results and several powerful strategies for the Iterated Prisoner's Dilemma created using reinforcement learning techniques (evolutionary and particle swarm algorithms). These strategies are trained to perform well against a corpus of over 170 distinct opponents, including many well-known and classic strategies. All the trained strategies win standard tournaments against the total collection of other opponents. The trained strategies and one particular human-designed strategy are also the top performers in noisy tournaments.
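
    For readers unfamiliar with the setting, the following minimal sketch scores a single iterated match under the standard payoff values (R=3, T=5, S=0, P=1); the strategies shown are classic hand-written ones, not the trained strategies from the paper.

        # Standard IPD payoffs: (my move, opponent move) -> (my score, opponent score)
        PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
                  ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

        def tit_for_tat(my_hist, opp_hist):
            return 'C' if not opp_hist else opp_hist[-1]

        def always_defect(my_hist, opp_hist):
            return 'D'

        def play_match(strategy_a, strategy_b, turns=200):
            """Score one iterated match between two strategy callables."""
            hist_a, hist_b, score_a, score_b = [], [], 0, 0
            for _ in range(turns):
                move_a = strategy_a(hist_a, hist_b)
                move_b = strategy_b(hist_b, hist_a)
                pa, pb = PAYOFF[(move_a, move_b)]
                score_a, score_b = score_a + pa, score_b + pb
                hist_a.append(move_a)
                hist_b.append(move_b)
            return score_a, score_b

        print(play_match(tit_for_tat, always_defect))   # (199, 204) over 200 turns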

  15. A novel block cryptosystem based on iterating a chaotic map

    International Nuclear Information System (INIS)

    Xiang Tao; Liao Xiaofeng; Tang Guoping; Chen Yong; Wong, Kwok-wo

    2006-01-01

    A block cryptographic scheme based on iterating a chaotic map is proposed. With random binary sequences generated from the real-valued chaotic map, the plaintext block is permuted by a key-dependent shift approach and then encrypted by the classical chaotic masking technique. Simulation results show that performance and security of the proposed cryptographic scheme are better than those of existing algorithms. Advantages and security of our scheme are also discussed in detail
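
    The scheme itself cannot be reconstructed from the abstract; as a toy illustration only of the two ingredients mentioned (a key-dependent shift of the plaintext block followed by chaotic masking), the sketch below drives both from an iterated logistic map. It is not the authors' cipher and is not cryptographically secure.

        def logistic_bytes(key, n, mu=3.99):
            """Derive n pseudo-random bytes by iterating the logistic map x <- mu*x*(1-x)."""
            x = key
            out = []
            for _ in range(n):
                x = mu * x * (1.0 - x)
                out.append(int(x * 256) % 256)
            return out

        def toy_encrypt_block(block, key=0.3141592):
            """Toy only: key-dependent cyclic shift (permutation) followed by chaotic masking."""
            stream = logistic_bytes(key, len(block) + 1)
            shift = stream[0] % len(block)
            permuted = block[shift:] + block[:shift]                     # key-dependent shift
            return bytes(b ^ s for b, s in zip(permuted, stream[1:]))    # XOR masking

        print(toy_encrypt_block(b"EXAMPLE PLAINTEXT").hex())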

  16. Establishment of ITER: Relevant documents

    International Nuclear Information System (INIS)

    1988-01-01

    At the Geneva Summit Meeting in November, 1985, a proposal was made by the Soviet Union to build a next-generation tokamak experiment on a collaborative basis involving the world's four major fusion blocks. In October, 1986, after consulting with Japan and the European Community, the United States responded with a proposal on how to implement such an activity. Ensuing diplomatic and technical discussions resulted in the establishment, under the auspices of the IAEA, of the International Thermonuclear Experimental Reactor Conceptual Design Activities. This tome represents a collection of all documents relating to the establishment of ITER, beginning with the initial meeting of the ITER Quadripartite Initiative Committee in Vienna on 15-16 March, 1987, through the meeting of the Provisional ITER Council, also in Vienna, on 8-9 February, 1988

  17. Remote maintenance development for ITER

    International Nuclear Information System (INIS)

    Tada, Eisuke; Shibanuma, Kiyoshi

    1997-01-01

    This paper both describes the overall design concept of the ITER remote maintenance system, which has been developed mainly for use with in-vessel components such as the divertor and blanket, and outlines the ITER R and D program, which has been established to develop remote handling equipment/tools and radiation hard components. In ITER, the reactor structures inside the cryostat have to be maintained remotely because of activation due to DT operation. Therefore, remote-handling technology is fundamental, and the reactor-structure design must be made consistent with remote maintainability. The overall maintenance scenario and design concepts of the required remote handling equipment/tools have been developed according to their maintenance classification. Technologies are also being developed to verify the feasibility of the maintenance design, and include fabrication and testing of full-scale remote-handling equipment/tools for in-vessel maintenance. (author)

  18. The ITER project technological challenges

    CERN Multimedia

    CERN. Geneva; Lister, Joseph; Marquina, Miguel A; Todesco, Ezio

    2005-01-01

    The first lecture reminds us of the ITER challenges, presents hard engineering problems, typically due to mechanical forces and thermal loads and identifies where the physics uncertainties play a significant role in the engineering requirements. The second lecture presents soft engineering problems of measuring the plasma parameters, feedback control of the plasma and handling the physics data flow and slow controls data flow from a large experiment like ITER. The last three lectures focus on superconductors for fusion. The third lecture reviews the design criteria and manufacturing methods for 6 milestone-conductors of large fusion devices (T-7, T-15, Tore Supra, LHD, W-7X, ITER). The evolution of the designer approach and the available technologies are critically discussed. The fourth lecture is devoted to the issue of performance prediction, from a superconducting wire to a large size conductor. The role of scaling laws, self-field, current distribution, voltage-current characteristic and transposition are...

  19. The danger of iteration methods

    International Nuclear Information System (INIS)

    Villain, J.; Semeria, B.

    1983-01-01

    When a Hamiltonian H depends on variables φ_i, the values of these variables which minimize H satisfy the equations ∂H/∂φ_i = 0. If this set of equations is solved by iteration, there is no guarantee that the solution is the one which minimizes H. In the case of a harmonic system with a random potential periodic with respect to the φ_i's, the fluctuations have been calculated by Efetov and Larkin by means of the iteration method. The result is wrong in the case of a strong disorder. Even in the weak disorder case, it is wrong for a one-dimensional system and for a finite system of 2 particles. It is argued that the results obtained by iteration are always wrong, and that between 2 and 4 dimensions, spin-pair correlation functions decay like powers of the distance, as found by Aharony and Pytte for another model

  20. Iterative-Transform Phase Retrieval Using Adaptive Diversity

    Science.gov (United States)

    Dean, Bruce H.

    2007-01-01

    A phase-diverse iterative-transform phase-retrieval algorithm enables high spatial-frequency, high-dynamic-range, image-based wavefront sensing. [The terms phase-diverse, phase retrieval, image-based, and wavefront sensing are defined in the first of the two immediately preceding articles, Broadband Phase Retrieval for Image-Based Wavefront Sensing (GSC-14899-1).] As described below, no prior phase-retrieval algorithm has offered both high dynamic range and the capability to recover high spatial-frequency components. Each of the previously developed image-based phase-retrieval techniques can be classified into one of two categories: iterative transform or parametric. Among the modifications of the original iterative-transform approach has been the introduction of a defocus diversity function (also defined in the cited companion article). Modifications of the original parametric approach have included minimizing alternative objective functions as well as implementing a variety of nonlinear optimization methods. The iterative-transform approach offers the advantage of the ability to recover low, middle, and high spatial frequencies, but has the disadvantage of a dynamic range limited to one wavelength or less. In contrast, parametric phase retrieval offers the advantage of high dynamic range, but is poorly suited for recovering higher spatial-frequency aberrations. The present phase-diverse iterative-transform phase-retrieval algorithm offers both the high-spatial-frequency capability of the iterative-transform approach and the high dynamic range of parametric phase-recovery techniques. In implementation, this is a focus-diverse iterative-transform phase-retrieval algorithm that incorporates an adaptive diversity function, which makes it possible to avoid phase unwrapping while preserving high-spatial-frequency recovery. The algorithm includes an inner and an outer loop (see figure). An initial estimate of phase is used to start the algorithm on the inner loop, wherein
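
    As background to the inner loop, the sketch below is a plain Gerchberg-Saxton style iterative-transform step that alternately enforces a known pupil amplitude and a measured focal-plane amplitude; the adaptive defocus-diversity outer loop described in the article is not reproduced, and the pupil, phase and grid below are illustrative assumptions.

        import numpy as np

        def iterative_transform_phase_retrieval(pupil_amp, focal_amp, num_iters=200, seed=0):
            """Gerchberg-Saxton style inner loop: enforce amplitude constraints in both planes."""
            rng = np.random.default_rng(seed)
            phase = rng.uniform(-np.pi, np.pi, pupil_amp.shape)   # initial phase estimate
            for _ in range(num_iters):
                field = pupil_amp * np.exp(1j * phase)            # pupil-plane amplitude constraint
                focal = np.fft.fft2(field)
                focal = focal_amp * np.exp(1j * np.angle(focal))  # focal-plane amplitude constraint
                phase = np.angle(np.fft.ifft2(focal))             # keep only the recovered phase
            return phase

        # Synthetic test: attempt to recover a smooth phase from its focal-plane amplitude.
        n = 64
        y, x = np.mgrid[-1:1:n*1j, -1:1:n*1j]
        pupil = (x**2 + y**2 <= 1.0).astype(float)
        true_phase = 1.5 * (x**2 - y**2)
        focal_amp = np.abs(np.fft.fft2(pupil * np.exp(1j * true_phase)))
        est = iterative_transform_phase_retrieval(pupil, focal_amp)
        # est approximates true_phase inside the pupil, up to the usual piston/conjugation ambiguities.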

  1. Robust iterative observer for source localization for Poisson equation

    KAUST Repository

    Majeed, Muhammad Usman

    2017-01-05

    The source localization problem for the Poisson equation with available noisy boundary data is well known to be highly sensitive to noise. The problem is ill-posed and fails to fulfill Hadamard's stability criterion for well-posedness. In this work, a robust iterative observer is first presented for the boundary estimation problem for the Laplace equation, and then this algorithm, along with the available noisy boundary data from the Poisson problem, is used to localize point sources inside a rectangular domain. The algorithm is inspired by Kalman filter design; however, one of the space variables is used as a time-like variable. Numerical implementation along with simulation results is detailed towards the end.

  2. Statistical Signal Processing Using a Class of Iterative Estimation Algorithms.

    Science.gov (United States)

    1987-09-01

    ... applications where the speech signal is required, the MMSE estimate of the speech signal using the ML estimate of the parameters will be suggested. ... It is easy to prove, using Stirling's formula for the factorial, that ... (6.7), where H = -Σ_i p_i log p_i is the entropy associated with the ...

  3. ITER ITA newsletter No. 31, June 2006

    International Nuclear Information System (INIS)

    2006-07-01

    This issue of the ITER ITA (ITER transitional Arrangements) newsletter contains concise information about the initialling of the ITER Agreement and its related instruments by the seven ITER parties, which took place in Brussels on 24 May 2006. The initialling constituted the final act of the ITER negotiations. It confirmed the Parties' common acceptance of the negotiated texts, ad referendum, and signalled their intentions to move forward towards the entry into force of the ITER Agreement as soon as possible. 'ITER - Uniting science today, global energy tomorrow' was the theme of a number of media events timed to accompany a remarkable day in the history of the ITER international venture, May 24th 2006, the initialling of the ITER international agreement

  4. Boosting iterative stochastic ensemble method for nonlinear calibration of subsurface flow models

    KAUST Repository

    Elsheikh, Ahmed H.

    2013-06-01

    A novel parameter estimation algorithm is proposed. The inverse problem is formulated as a sequential data integration problem in which Gaussian process regression (GPR) is used to integrate the prior knowledge (static data). The search space is further parameterized using a Karhunen-Loève expansion to build a set of basis functions that spans the search space. Optimal weights of the reduced basis functions are estimated by an iterative stochastic ensemble method (ISEM). ISEM employs directional derivatives within a Gauss-Newton iteration for efficient gradient estimation. The resulting update equation relies on the inverse of the output covariance matrix, which is rank deficient. In the proposed algorithm we use an iterative regularization based on the ℓ2 Boosting algorithm. ℓ2 Boosting iteratively fits the residual, and the amount of regularization is controlled by the number of iterations. A termination criterion based on the Akaike information criterion (AIC) is utilized. This regularization method is very attractive in terms of performance and simplicity of implementation. The proposed algorithm combining ISEM and ℓ2 Boosting is evaluated on several nonlinear subsurface flow parameter estimation problems. The efficiency of the proposed algorithm is demonstrated by the small size of the utilized ensembles and in terms of error convergence rates. © 2013 Elsevier B.V.
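
    A minimal sketch of the ℓ2 Boosting idea in isolation, applied to an ordinary linear regression problem: the least-squares fit to the current residual is added with shrinkage, and the number of iterations acts as the regularization parameter. The coupling to the ISEM Gauss-Newton ensemble update and the AIC stopping rule are not reproduced; the data below are synthetic.

        import numpy as np

        def l2_boosting(X, y, num_iters=50, shrinkage=0.1):
            """L2 Boosting: iteratively fit least squares to the residual, with shrinkage.

            The iteration count plays the role of the regularization parameter.
            """
            coef = np.zeros(X.shape[1])
            residual = y.copy()
            for _ in range(num_iters):
                step, *_ = np.linalg.lstsq(X, residual, rcond=None)  # fit the current residual
                coef += shrinkage * step                             # damped update
                residual = y - X @ coef                              # refresh the residual
            return coef

        rng = np.random.default_rng(1)
        X = rng.normal(size=(100, 5))
        y = X @ np.array([1.0, -2.0, 0.0, 0.5, 3.0]) + 0.1 * rng.normal(size=100)
        print(l2_boosting(X, y))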

  5. Parallel iterative procedures for approximate solutions of wave propagation by finite element and finite difference methods

    Energy Technology Data Exchange (ETDEWEB)

    Kim, S. [Purdue Univ., West Lafayette, IN (United States)

    1994-12-31

    Parallel iterative procedures based on domain decomposition techniques are defined and analyzed for the numerical solution of wave propagation by finite element and finite difference methods. For finite element methods, in a Lagrangian framework, an efficient way of choosing the algorithm parameter, as well as the convergence of the algorithm, is indicated. Some heuristic arguments for finding the algorithm parameter for finite difference schemes are addressed. Numerical results are presented to indicate the effectiveness of the methods.

  6. ITER Neutral Beam Injection System

    International Nuclear Information System (INIS)

    Ohara, Yoshihiro; Tanaka, Shigeru; Akiba, Masato

    1991-03-01

    A Japanese design proposal of the ITER Neutral Beam Injection System (NBS) which is consistent with the ITER common design requirements is described. The injection system is required to deliver a neutral deuterium beam of 75MW at 1.3MeV to the reactor plasma and utilized not only for plasma heating but also for current drive and current profile control. The injection system is composed of 9 modules, each of which is designed so as to inject a 1.3MeV, 10MW neutral beam. The most important point in the design is that the injection system is based on the utilization of a cesium-seeded volume negative ion source which can produce an intense negative ion beam with high current density at a low source operating pressure. The design value of the source is based on the experimental values achieved at JAERI. The utilization of the cesium-seeded volume source is essential to the design of an efficient and compact neutral beam injection system which satisfies the ITER common design requirements. The critical components to realize this design are the 1.3MeV, 17A electrostatic accelerator and the high voltage DC acceleration power supply, whose performances must be demonstrated prior to the construction of ITER NBI system. (author)

  7. Status of the ITER EDA

    International Nuclear Information System (INIS)

    Aymar, R.

    1999-01-01

    This article summarises progress made in the ITER Design Activities between October 1998 and February 1999. The three main focusses of the activity were on design work, on R and D work and on the physics basis. The consequences of diminishing financial funds and personnel are discussed and the state of the individual R and D projects is given briefly

  8. Informal meeting on ITER developments

    International Nuclear Information System (INIS)

    Canobbio, E.

    2000-01-01

    The International Fusion Research Council (IFRC), advisory body of the IAEA, organized an informal meeting on the general status and outlook for ITER, held October 9 at Sorrento, Italy, in conjunction with the 18th IAEA Fusion Energy Conference. This article describes the main events at the meeting

  9. COMPARISON OF HOLOGRAPHIC AND ITERATIVE METHODS FOR AMPLITUDE OBJECT RECONSTRUCTION

    Directory of Open Access Journals (Sweden)

    I. A. Shevkunov

    2015-01-01

    Full Text Available Experimental comparison of four methods for wavefront reconstruction is presented. We considered two iterative and two holographic methods with different mathematical models and recovery algorithms. The first two of these methods do not use a reference wave in the recording scheme, which reduces the requirements for stability of the installation. A major role in phase information reconstruction by such methods is played by a set of spatial intensity distributions, which are recorded as the recording matrix is moved along the optical axis. The obtained data are used consistently for wavefront reconstruction using an iterative procedure. In the course of this procedure, numerical propagation of the wavefront between the planes is performed. Thus, phase information of the wavefront is stored in every plane and the calculated amplitude distributions are replaced by the measured ones in these planes. In the first of the compared methods, a two-dimensional Fresnel transform and an iterative calculation in the object plane are used as the mathematical model. In the second approach, an angular spectrum method is used for numerical wavefront propagation, and the iterative calculation is carried out only between closely located planes of data registration. Two digital holography methods, based on the use of a reference wave in the recording scheme and differing from each other in the numerical reconstruction algorithm for the digital holograms, are compared with the first two methods. The comparison showed that the iterative method based on the 2D Fresnel transform gives results comparable with those of the common holographic method with Fourier filtering. The holographic method is shown to be the best among those considered for reconstructing the complex amplitude of the object as the object amplitude is reduced.

  10. ITER Council tour of Clarington site

    International Nuclear Information System (INIS)

    Dautovich, D.

    2001-01-01

    The ITER Council meeting was recently held in Toronto on 27 and 28 February. ITER Canada provided local arrangements for the Council meeting on behalf of Europe as the official host. Following the meeting, on 1 March, ITER Canada conducted a tour of the proposed ITER construction site at Clarington, and the ITER Council members attended a luncheon followed by a speech by Dr. Peter Barnard, Chairman and CEO of ITER Canada, at the Empire Club of Canada. The official invitation to participate in these events came from Dr. Peter Harrison, Deputy Minister of Natural Resources Canada. This report provides a brief summary of the events on 1 March

  11. ITER ITA Newsletter. No. 29, March 2006

    International Nuclear Information System (INIS)

    2006-05-01

    This issue of the ITER ITA (ITER transitional Arrangements) newsletter contains concise information about ITER related activities and meetings, namely that the ITER Director-General Nominee, Dr. Kaname Ikeda, took up his position as ITER Project Leader in Cadarache on 13 March, the consolidation of the information technology infrastructure for ITER, and the Thirty-Fifth Meeting of the Fusion Power Co-ordinating Committee (FPCC), which was held on 28 February-1 March 2006 at the headquarters of the International Energy Agency (IEA) in Paris

  12. ITER EDA Newsletter. V. 4, no. 7

    International Nuclear Information System (INIS)

    1995-07-01

    This ITER EDA (Engineering Design Activities) Newsletter issue contains reports on (i) the 8th meeting of the ITER Technical Advisory Committee (TAC-8) held on June 29 - July 7, 1995 at the ITER San Diego Work Site, (ii) the 8th meeting of the ITER Management Advisory Committee (MAC-8) held at the ITER San Diego Work Site on July 9-10, 1995, (iii) the 33rd meeting of the International Fusion Research Council (IFRC), held July 11, 1995 at the IAEA Headquarters in Vienna, Austria, and (iv) the ITER participation in the fifth topical meeting on Tritium Technology in Fission, Fusion and Isotopic Applications

  13. A linear iterative unfolding method

    International Nuclear Information System (INIS)

    László, András

    2012-01-01

    A frequently faced task in experimental physics is to measure the probability distribution of some quantity. Often this quantity to be measured is smeared by a non-ideal detector response or by some physical process. The procedure of removing this smearing effect from the measured distribution is called unfolding, and is a delicate problem in signal processing, due to the well-known numerical ill behavior of this task. Various methods were invented which, given some assumptions on the initial probability distribution, try to regularize the unfolding problem. Most of these methods definitely introduce bias into the estimate of the initial probability distribution. We propose a linear iterative method (motivated by the Neumann series / Landweber iteration known in functional analysis), which has the advantage that no assumptions on the initial probability distribution are needed, and the only regularization parameter is the stopping order of the iteration, which can be used to choose the best compromise between the introduced bias and the propagated statistical and systematic errors. The method is consistent: 'binwise' convergence to the initial probability distribution is proved in the absence of measurement errors under a quite general condition on the response function. This condition holds for practical applications such as convolutions, calorimeter response functions, momentum reconstruction response functions based on tracking in magnetic field etc. In the presence of measurement errors, explicit formulae for the propagation of the three important error terms are provided: bias error (distance from the unknown to-be-reconstructed initial distribution at a finite iteration order), statistical error, and systematic error. A trade-off between these three error terms can be used to define an optimal iteration stopping criterion, and the errors can be estimated there. We provide a numerical C library for the implementation of the method, which incorporates automatic
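
    A minimal sketch of the Landweber-type linear iteration that motivates the method, assuming a known response matrix R that smears the true binned distribution; the stopping order plays the role of the regularization parameter, as described above. The error-propagation formulae and the C library of the paper are not reproduced.

        import numpy as np

        def landweber_unfold(R, measured, num_iters=200, relaxation=None):
            """Landweber iteration x_{k+1} = x_k + tau * R^T (measured - R x_k)."""
            if relaxation is None:
                relaxation = 1.0 / np.linalg.norm(R, 2) ** 2   # tau below 2/sigma_max^2 ensures convergence
            x = np.zeros(R.shape[1])
            for _ in range(num_iters):
                x = x + relaxation * R.T @ (measured - R @ x)
            return x

        # Toy example: a 3-bin true spectrum smeared by a known response matrix.
        R = np.array([[0.8, 0.2, 0.0],
                      [0.2, 0.6, 0.2],
                      [0.0, 0.2, 0.8]])
        true = np.array([10.0, 5.0, 1.0])
        print(landweber_unfold(R, R @ true))   # approaches [10, 5, 1] in this noise-free case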

  14. A New Iterative Numerical Continuation Technique for Approximating the Solutions of Scalar Nonlinear Equations

    Directory of Open Access Journals (Sweden)

    Grégory Antoni

    2017-01-01

    Full Text Available The present study concerns the development of a new iterative method applied to a numerical continuation procedure for parameterized scalar nonlinear equations. Combining a modified Newton technique with a stationary-type numerical procedure, the proposed method is able to provide suitable approximate solutions associated with scalar nonlinear equations. The predictive capabilities of this new iterative algorithm are assessed and discussed on some specific examples.
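
    A minimal predictor-corrector sketch of the general idea, assuming the parameterized equation f(x, lam) = 0 is traced over a list of parameter values, with the previous solution reused as the predictor and a finite-difference Newton iteration as the corrector; the specific modified-Newton/stationary combination of the paper is not reproduced.

        def continuation_solve(f, x0, lambdas, tol=1e-10, max_newton=50, h=1e-7):
            """Trace the solution branch of f(x, lam) = 0 over a list of parameter values."""
            branch, x = [], x0
            for lam in lambdas:
                for _ in range(max_newton):                 # Newton correction at this parameter value
                    fx = f(x, lam)
                    if abs(fx) < tol:
                        break
                    dfdx = (f(x + h, lam) - fx) / h         # finite-difference derivative
                    x = x - fx / dfdx
                branch.append(x)                            # the accepted solution predicts the next step
            return branch

        # Example: x**3 + x - lam = 0 traced for lam from 0 to 3.
        lams = [i * 0.1 for i in range(31)]
        print(continuation_solve(lambda x, lam: x**3 + x - lam, 0.0, lams)[-1])   # root near 1.213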

  15. Migration of vectorized iterative solvers to distributed memory architectures

    Energy Technology Data Exchange (ETDEWEB)

    Pommerell, C. [AT& T Bell Labs., Murray Hill, NJ (United States); Ruehl, R. [CSCS-ETH, Manno (Switzerland)

    1994-12-31

    Both necessity and opportunity motivate the use of high-performance computers for iterative linear solvers. Necessity results from the size of the problems being solved; smaller problems are often better handled by direct methods. Opportunity arises from the formulation of the iterative methods in terms of simple linear algebra operations, even if this 'natural' parallelism is not easy to exploit in irregularly structured sparse matrices and with good preconditioners. As a result, high-performance implementations of iterative solvers have attracted a lot of interest in recent years. Most efforts are geared to vectorize or parallelize the dominating operation (structured or unstructured sparse matrix-vector multiplication), or to increase locality and parallelism by reformulating the algorithm (reducing global synchronization in inner products or local data exchange in preconditioners). Target architectures for iterative solvers currently include mostly vector supercomputers and architectures with one or few optimized (e.g., super-scalar and/or super-pipelined RISC) processors and hierarchical memory systems. More recently, parallel computers with physically distributed memory and a better price/performance ratio have been offered by vendors as a very interesting alternative to vector supercomputers. However, programming comfort on such distributed memory parallel processors (DMPPs) still lags behind. Here the authors are concerned with iterative solvers and their changing computing environment. In particular, they are considering migration from traditional vector supercomputers to DMPPs. Application requirements force one to use flexible and portable libraries. They want to extend the portability of iterative solvers rather than reimplementing everything for each new machine, or even for each new architecture.

  16. ITER EDA Newsletter. Vol. 1, No. 1

    International Nuclear Information System (INIS)

    1992-11-01

    After the ITER Engineering Design Activities (EDA) Agreement and Protocol 1 had been signed by the four ITER parties on July 21, 1992 and had entered into force, the ITER Council suggested at its first meeting (Vienna, September 10-11, 1992) that the publication of the ITER Newsletter be continued during the EDA with assistance of the International Atomic Energy Agency. This suggestion was supported by the Agency and subsequently the ITER office in Vienna assumed its responsibilities for planning and executing activities related to the publication of the Newsletter. The ITER EDA Newsletter is planned to be a monthly publication aimed at disseminating broad information and understanding, including the description of the personal and institutional involvements in the ITER project in addition to technical facts about it. The responsibility for the Newsletter rests with the ITER council. In this first issue the signing of the ITER EDA Activities and Protocol 1 is reported. The EDA organizational structure is described. This issue also reports on the first ITER EDA council meeting, the opening of the ITER EDA NAKA Co-Centre, the first meeting of the ITER Technical Advisory Committee, activities of special working groups, an ITER Technical Meeting, as well as ''News in Brief'' and ''Coming Events''

  17. Algorithms for worst-case tolerance optimization

    DEFF Research Database (Denmark)

    Schjær-Jacobsen, Hans; Madsen, Kaj

    1979-01-01

    New algorithms are presented for the solution of optimum tolerance assignment problems. The problems considered are defined mathematically as a worst-case problem (WCP), a fixed tolerance problem (FTP), and a variable tolerance problem (VTP). The basic optimization problem without tolerances is denoted the zero tolerance problem (ZTP). For solution of the WCP we suggest application of interval arithmetic and also alternative methods. For solution of the FTP an algorithm is suggested which is conceptually similar to algorithms previously developed by the authors for the ZTP. Finally, the VTP is solved by a double-iterative algorithm in which the inner iteration is performed by the FTP-algorithm. The application of the algorithm is demonstrated by means of relatively simple numerical examples. Basic properties, such as convergence properties, are displayed based on the examples.

  18. The use of value iteration to minimize the costs of shipping different ...

    African Journals Online (AJOL)

    We considered the use of value iteration to minimize the costs of shipping different goods from the factories to the markets. We also determined the policy that will yield the optimum costs in the operation. Seven different policies were considered in solving the problem. The company estimated 10% of the goods in transit to be ...
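
    The record gives no data, so the sketch below shows value iteration on a generic finite Markov decision process with a cost-minimization objective; the two-state transition and cost numbers are purely hypothetical stand-ins for a shipping problem.

        import numpy as np

        def value_iteration(P, C, gamma=0.9, tol=1e-8):
            """Finite MDP value iteration for a cost-minimization objective.

            P[a, s, s'] is the transition probability, C[s, a] the immediate cost.
            """
            V = np.zeros(C.shape[0])
            while True:
                Q = C + gamma * np.einsum('asx,x->sa', P, V)   # expected cost of each (state, action)
                V_new = Q.min(axis=1)
                if np.max(np.abs(V_new - V)) < tol:
                    return V_new, Q.argmin(axis=1)             # optimal values and greedy policy
                V = V_new

        # Hypothetical 2-state, 2-action example (e.g. ship via route 0 or route 1).
        P = np.array([[[0.9, 0.1], [0.2, 0.8]],     # transitions under action 0
                      [[0.5, 0.5], [0.4, 0.6]]])    # transitions under action 1
        C = np.array([[2.0, 1.5],                   # costs of actions 0 and 1 in state 0
                      [3.0, 2.5]])                  # costs of actions 0 and 1 in state 1
        V, policy = value_iteration(P, C)
        print(V, policy)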

  19. Meeting of the ITER Council

    International Nuclear Information System (INIS)

    Drew, M.

    2001-01-01

    Full text: A meeting of the ITER Council took place in Toronto, Canada, on 27-28 February 2001 (Canada participates in the ITER EDA as an associate of the EU Party). The delegations to the Council were led by Dr. U. Finzi, Principal Advisor in charge of Fusion R and D in the Directorate-General for Research of the European Commission, Mr. T. Sugawa, Deputy Director-General of the Research and Development Bureau of the Ministry of Education, Culture, Sport, Science and Technology of Japan, and Academician E. Velikhov, President of the RRC ''Kurchatov Institute''. The European delegation was joined by Canadian experts including a representative from the Canadian Department of Natural Resources. The Council heard presentations from Dr. H. Kishimoto on the successful completion of the Explorations concerning future joint implementation of ITER, and from Dr. J.-P. Rager on the ITER International Industry Liaison Meeting held in Toronto in November 2000. Having noted statements of Parties' status, in particular concerning the readiness to start negotiations and the progress toward site offers, the Council encouraged the Parties to pursue preparations toward future implementation of ITER along the general lines proposed in the Explorers' final report. The Council also noted the readiness of the RF and EU Parties to instruct specified current JCT members to remain at their places of assignment after the end of the EDA, in preparation for a transition to the Co-ordinated Technical Activities foreseen as support to ITER negotiations. The Council was pleased to hear that meetings with the Director of the ITER Parties' Designated Safety Representatives had started, and commended the progress toward achieving timely licensing processes with a good common understanding. The Council noted with appreciation the Director's view that no difficulties of principle in the licensing approach had been identified during the informal discussions with the regulatory representatives and

  20. Joint 2D-DOA and Frequency Estimation for L-Shaped Array Using Iterative Least Squares Method

    Directory of Open Access Journals (Sweden)

    Ling-yun Xu

    2012-01-01

    Full Text Available We introduce an iterative least squares method (ILS) for estimating the 2D-DOA and frequency based on an L-shaped array. The ILS iteratively finds the direction matrix and the delay matrix; the 2D-DOA and frequency can then be obtained by the least squares method. Without spectral peak searching and pairing, this algorithm works well and pairs the parameters automatically. Moreover, our algorithm has better performance than the conventional ESPRIT algorithm and the propagator method. The useful behavior of the proposed algorithm is verified by simulations.
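
    The array model is not specified in the abstract, so the sketch below shows only the generic alternating least squares pattern for a bilinear model X ≈ A S, fixing one factor and solving for the other in turn; the mapping from such factors to 2D-DOA and frequency for an L-shaped array is not reproduced, and the synthetic data are an assumption.

        import numpy as np

        def alternating_least_squares(X, rank, num_iters=100, seed=0):
            """Generic iterative LS for a bilinear model X ≈ A @ S: alternate LS solves for S and A."""
            rng = np.random.default_rng(seed)
            A = rng.normal(size=(X.shape[0], rank))
            for _ in range(num_iters):
                S, *_ = np.linalg.lstsq(A, X, rcond=None)         # solve for S with A fixed
                A_T, *_ = np.linalg.lstsq(S.T, X.T, rcond=None)   # solve for A with S fixed
                A = A_T.T
            return A, S

        # Rank-2 synthetic check: the product A @ S reproduces X (up to the usual factor ambiguity).
        rng = np.random.default_rng(1)
        X = rng.normal(size=(8, 2)) @ rng.normal(size=(2, 20))
        A, S = alternating_least_squares(X, rank=2)
        print(np.linalg.norm(X - A @ S))   # close to zero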

  1. ITER EDA newsletter. V. 2, no. 11

    International Nuclear Information System (INIS)

    1993-11-01

    This issue of the ITER EDA (Engineering Design Activities) Newsletter contains an ITER EDA Status Report, and a report on the Fourth International Fusion Neutronics Workshop at the University of California, Los Angeles Campus, October 20-21, 1993

  2. Iterative perceptual learning for social behavior synthesis

    NARCIS (Netherlands)

    de Kok, I.A.; Poppe, Ronald Walter; Heylen, Dirk K.J.

    We introduce Iterative Perceptual Learning (IPL), a novel approach to learn computational models for social behavior synthesis from corpora of human–human interactions. IPL combines perceptual evaluation with iterative model refinement. Human observers rate the appropriateness of synthesized

  3. Iterative Perceptual Learning for Social Behavior Synthesis

    NARCIS (Netherlands)

    de Kok, I.A.; Poppe, Ronald Walter; Heylen, Dirk K.J.

    We introduce Iterative Perceptual Learning (IPL), a novel approach for learning computational models for social behavior synthesis from corpora of human-human interactions. The IPL approach combines perceptual evaluation with iterative model refinement. Human observers rate the appropriateness of

  4. ITER EDA newsletter. V. 5, no. 8

    International Nuclear Information System (INIS)

    1996-08-01

    This issue of the Newsletter on the Engineering Design Activities (EDA) for the ITER Tokamak project contains a report on the divertor remote handling development (and of a summer party at the ITER Joint Work Site in Garching, Germany)

  5. ITER EDA newsletter. V. 7, special issue

    International Nuclear Information System (INIS)

    1998-07-01

    In conjunction with the ITER Council Meeting, a ceremony was held at the IAEA Headquarters in Vienna on 22 July 1998 to celebrate the achievements of the ITER Engineering Design Activities during the period 1992-1998

  6. ITER EDA newsletter. V. 6, no. 12

    International Nuclear Information System (INIS)

    1997-12-01

    This issue of the ITER Newsletter contains summary reports (i) on the Sixth ITER Technical Meeting on Safety and Environment and (ii) on JAERI's Annual Public Seminar on Fusion Research and Development

  7. ITER EDA newsletter. V. 8, no. 7

    International Nuclear Information System (INIS)

    1999-07-01

    This newsletter contains an article concerning the ITER divertor cassette project meeting in Bologna, Italy (May 26-28, 1999), and an emotional outburst, concerning the closure of the ITER site in San Diego, USA

  8. Preliminary RAMI analysis of DFLL TBS for ITER

    International Nuclear Information System (INIS)

    Wang, Dagui; Yuan, Run; Wang, Jiaqun; Wang, Fang; Wang, Jin

    2016-01-01

    Highlights: • We performed the functional analysis of the DFLL TBS. • We performed a failure mode analysis of the DFLL TBS. • We estimated the reliability and availability of the DFLL TBS. • The ITER RAMI approach was applied to the DFLL TBS for technical risk control in the design phase. - Abstract: ITER is the first fusion machine fully designed to prove the physics and technological basis for the next fusion power plants. Among the main technical objectives of ITER is to test and validate design concepts of tritium breeding blankets relevant to future fusion power plants. To achieve this goal, China has proposed the dual functional lithium-lead test blanket module (DFLL TBM) concept design. The DFLL TBM and its associated ancillary system are together called the DFLL TBS. The DFLL TBS will play a key role in the next fusion reactors. In order to ensure that the DFLL TBS is reliable and available, a risk control project for the DFLL TBS has been put on the schedule. As part of the ITER technical risk control policy, the RAMI (Reliability, Availability, Maintainability, Inspectability) approach was used to control the technical risk of ITER. In this paper, the RAMI approach was applied to the conceptual design of the DFLL TBS. A functional breakdown of the DFLL TBS was prepared, and the system was divided into 3 main functions and 72 basic functions. Based on this functional breakdown, reliability block diagrams were prepared to estimate the reliability and availability of each function under the stipulated operating conditions. The inherent availability of the DFLL TBS expected after implementation of mitigation actions was calculated to be 98.57% over 2 years based on the ITER reliability database. A Failure Modes, Effects and Criticality Analysis (FMECA) was performed with criticality charts highlighting the risk level of the different failure modes with regard to their probability of occurrence and their effects on availability.
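
    A hedged sketch of the availability arithmetic that a reliability block diagram encodes: component availability A = MTBF/(MTBF + MTTR), series blocks multiply availabilities, redundant (parallel) blocks multiply unavailabilities. The MTBF/MTTR numbers below are hypothetical and are not taken from the ITER reliability database.

        def availability(mtbf_hours, mttr_hours):
            """Inherent availability of a single component."""
            return mtbf_hours / (mtbf_hours + mttr_hours)

        def series(*blocks):
            """All blocks are needed: availabilities multiply."""
            a = 1.0
            for b in blocks:
                a *= b
            return a

        def parallel(*blocks):
            """Redundant blocks: unavailabilities multiply."""
            u = 1.0
            for b in blocks:
                u *= (1.0 - b)
            return 1.0 - u

        # Hypothetical numbers, for illustration of the bookkeeping only.
        pump = availability(8000, 24)
        loop = availability(12000, 72)
        instrumentation = availability(5000, 8)
        system = series(loop, instrumentation, parallel(pump, pump))   # redundant pumps
        print(round(system, 4))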

  9. Preliminary RAMI analysis of DFLL TBS for ITER

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Dagui [Key Laboratory of Neutronics and Radiation Safety, Institute of Nuclear Energy Safety Technology, Chinese Academy of Sciences, Hefei, Anhui, 230031 (China); University of Science and Technology of China, Hefei, Anhui, 230031 (China); Yuan, Run [Key Laboratory of Neutronics and Radiation Safety, Institute of Nuclear Energy Safety Technology, Chinese Academy of Sciences, Hefei, Anhui, 230031 (China); Wang, Jiaqun, E-mail: jiaqun.wang@fds.org.cn [Key Laboratory of Neutronics and Radiation Safety, Institute of Nuclear Energy Safety Technology, Chinese Academy of Sciences, Hefei, Anhui, 230031 (China); Wang, Fang; Wang, Jin [Key Laboratory of Neutronics and Radiation Safety, Institute of Nuclear Energy Safety Technology, Chinese Academy of Sciences, Hefei, Anhui, 230031 (China)

    2016-11-15

    Highlights: • We performed the functional analysis of the DFLL TBS. • We performed a failure mode analysis of the DFLL TBS. • We estimated the reliability and availability of the DFLL TBS. • The ITER RAMI approach was applied to the DFLL TBS for technical risk control in the design phase. - Abstract: ITER is the first fusion machine fully designed to prove the physics and technological basis for the next fusion power plants. Among the main technical objectives of ITER is to test and validate design concepts of tritium breeding blankets relevant to future fusion power plants. To achieve this goal, China has proposed the dual functional lithium-lead test blanket module (DFLL TBM) concept design. The DFLL TBM and its associated ancillary system are together called the DFLL TBS. The DFLL TBS will play a key role in the next fusion reactors. In order to ensure that the DFLL TBS is reliable and available, a risk control project for the DFLL TBS has been put on the schedule. As part of the ITER technical risk control policy, the RAMI (Reliability, Availability, Maintainability, Inspectability) approach was used to control the technical risk of ITER. In this paper, the RAMI approach was applied to the conceptual design of the DFLL TBS. A functional breakdown of the DFLL TBS was prepared, and the system was divided into 3 main functions and 72 basic functions. Based on this functional breakdown, reliability block diagrams were prepared to estimate the reliability and availability of each function under the stipulated operating conditions. The inherent availability of the DFLL TBS expected after implementation of mitigation actions was calculated to be 98.57% over 2 years based on the ITER reliability database. A Failure Modes, Effects and Criticality Analysis (FMECA) was performed with criticality charts highlighting the risk level of the different failure modes with regard to their probability of occurrence and their effects on availability.

  10. Modeling of ITER related vacuum gas pumping distribution systems

    Energy Technology Data Exchange (ETDEWEB)

    Misdanitis, Serafeim [University of Thessaly, Department of Mechanical Engineering, Pedion Areos, 38334 Volos (Greece); Association EURATOM - Hellenic Republic (Greece); Valougeorgis, Dimitris, E-mail: diva@mie.uth.gr [University of Thessaly, Department of Mechanical Engineering, Pedion Areos, 38334 Volos (Greece); Association EURATOM - Hellenic Republic (Greece)

    2013-10-15

    Highlights: • An algorithm to simulate vacuum gas flows through pipe networks consisting of long channels and channels of moderate length has been developed. • Analysis and results are based on kinetic theory as described by the BGK kinetic model equation. • The algorithm is capable of computing the mass flow rates (or the conductance) through the pipes and the pressure at the nodes of the network. • Since a kinetic approach is implemented, the algorithm is valid in the whole range of the Knudsen number. • The developed algorithm will be useful for simulating the vacuum distribution systems of ITER and future fusion reactors. -- Abstract: A novel algorithm recently developed to solve steady-state isothermal vacuum gas dynamics flows through pipe networks consisting of long tubes is extended to include, in addition to long channels, channels of moderate length 10 < L/D < 50. This is achieved by implementing the so-called end effect treatment/correction. Analysis and results are based on kinetic theory as described by the Boltzmann equation or associated reliable kinetic model equations. For a pipe network of known geometry the algorithm is capable of computing the mass flow rates (or the conductance) through the pipes as well as the pressure heads at the nodes of the network. The feasibility of the approach is demonstrated by simulating two ITER related vacuum distribution systems, one in the viscous regime and a second one in a wide range of Knudsen numbers. Since a kinetic approach is implemented, the algorithm is valid and the results are accurate in the whole range of the Knudsen number, while the involved computational effort remains small.

  11. Research on Open-Closed-Loop Iterative Learning Control with Variable Forgetting Factor of Mobile Robots

    Directory of Open Access Journals (Sweden)

    Hongbin Wang

    2016-01-01

    Full Text Available We propose an iterative learning control (ILC) algorithm that is developed using a variable forgetting factor to control a mobile robot. The proposed algorithm can be categorized as an open-closed-loop iterative learning control, which produces control instructions by using both previous and current data. Introducing a variable forgetting factor weakens the former control output and its variance in the control law while strengthening the robustness of the iterative learning control. Applied to a mobile robot, this effectively reduces position errors in trajectory tracking control. In this work, we show that the proposed algorithm guarantees convergence of the tracking error bound to a small neighborhood of the origin under the condition of state disturbances, output measurement noises, and fluctuation of system dynamics. By using simulation, we demonstrate that the controller is effective in realizing perfect tracking.
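
    A minimal sketch of an open-closed-loop ILC update with a (constant) forgetting factor on a toy scalar plant: the new trial's input blends the forgotten previous-trial input, the previous-trial error (open-loop part) and the current-trial error (closed-loop part). The plant, gains and reference below are illustrative assumptions, not the paper's mobile-robot model or its variable forgetting factor.

        import numpy as np

        def open_closed_loop_ilc(a=0.8, b=1.0, T=50, trials=30,
                                 gamma=0.6, L=0.4, beta=0.05):
            """Open-closed-loop ILC on the toy plant y(t+1) = a*y(t) + b*u(t).

            Update law: u_{k+1}(t) = (1-beta)*u_k(t) + gamma*e_k(t+1) + L*e_{k+1}(t),
            where beta is the forgetting factor on the previous trial's input.
            """
            y_ref = np.sin(np.linspace(0, 2 * np.pi, T + 1))   # reference trajectory
            u = np.zeros(T)                                    # previous trial's input
            e_prev = np.zeros(T + 1)                           # previous trial's error
            for _ in range(trials):
                y, e, u_new = np.zeros(T + 1), np.zeros(T + 1), np.zeros(T)
                for t in range(T):
                    e[t] = y_ref[t] - y[t]                                       # current-trial error
                    u_new[t] = (1 - beta) * u[t] + gamma * e_prev[t + 1] + L * e[t]
                    y[t + 1] = a * y[t] + b * u_new[t]
                e[T] = y_ref[T] - y[T]
                u, e_prev = u_new, e
            return np.max(np.abs(e_prev))

        print(open_closed_loop_ilc())   # small residual error that does not vanish, due to the forgetting term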

  12. ITER EDA newsletter. V. 5, no. 10

    International Nuclear Information System (INIS)

    1996-10-01

    This issue of the newsletter on the Engineering Design Activities (EDA) for the ITER Tokamak project contains a report on the Fifth ITER Technical Meeting on Safety, Environment, and Regulatory Approval, held September 29 - October 7, 1996 at the ITER San Diego Joint Work Site; and a report on the Fifth ITER Diagnostics Expert Group Workshop and Technical Meeting on Diagnostics held in Montreal, Canada, 12-13 October 1996

  13. ITER EDA newsletter. V. 9, no. 8

    International Nuclear Information System (INIS)

    2000-08-01

    This ITER EDA Newsletter reports on the ITER meeting on 29-30 June 2000 in Moscow, summarizes the status report on the ITER EDA by R. Aymar, the ITER Director, and gives overviews of the expert group workshop on transport and internal barrier physics, confinement database and modelling and edge and pedestal physics, and the IEA workshop on transport barriers at edge and core. Individual abstracts have been prepared

  14. ITER EDA newsletter. V. 5, no. 5

    International Nuclear Information System (INIS)

    1996-05-01

    This issue of the ITER Engineering Design Activities Newsletter contains a report on the Tenth Meeting of the ITER Management Advisory Committee, held at JAERI Headquarters, Tokyo, June 5-6, 1996; on the Fourth ITER Divertor Physics and Divertor Modelling and Database Expert Group Workshop, held at the San Diego ITER Joint Worksite, March 11-15, 1996; and on the Agenda for the 16th IAEA Fusion Energy Conference (7-11 October 1996)

  15. ITER EDA newsletter. V. 9, no. 11

    International Nuclear Information System (INIS)

    2000-11-01

    This issue of the ITER EDA Newsletter contains discussions of three meetings, i.e., (1) the Third ITER International Industry Liaison Meeting held in Toronto, Canada (November 7-9, 2000), (2) an informal meeting on ITER developments held in Sorrento, Italy (October 9, 2000), and (3) the Thirteenth Meeting of the ITER Physics Expert Group on Diagnostics held in Naka, Japan (September 21-22, 2000)

  16. Colorado Conference on iterative methods. Volume 1

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1994-12-31

    The conference provided a forum on many aspects of iterative methods. Volume I topics were: domain decomposition, nonlinear problems, integral equations and inverse problems, eigenvalue problems, and iterative software kernels. Volume II presents nonsymmetric solvers, parallel computation, theory of iterative methods, software and programming environments, ODE solvers, multigrid and multilevel methods, applications, robust iterative methods, preconditioners, Toeplitz and circulant solvers, and saddle point problems. Individual papers are indexed separately on the EDB.

  17. A transport synthetic acceleration method for transport iterations

    International Nuclear Information System (INIS)

    Ramone, G.L.; Adams, M.L.

    1997-01-01

    A family of transport synthetic acceleration (TSA) methods for iteratively solving within-group scattering problems is presented. A single iteration in these schemes consists of a transport sweep followed by a low-order calculation, which is itself a simplified transport problem. The method for isotropic-scattering problems in X-Y geometry is described. A Fourier analysis of a model problem for equations with no spatial discretization shows that a previously proposed TSA method is unstable in two dimensions but that the proposed modifications make it stable and rapidly convergent. The same procedure for discretized transport equations, using the step characteristic and two bilinear discontinuous methods, shows that discretization enhances TSA performance. A conjugate gradient algorithm for the low-order problem is described, a crude quadrature set for the low-order problem is proposed, and the number of low-order iterations per high-order sweep is limited to a relatively small value. These features lead to simple and efficient improvements to the method. TSA is tested on a series of problems, and a set of parameters is proposed for which the method behaves especially well. TSA achieves a substantial reduction in computational cost over source iteration, regardless of discretization parameters or material properties, and this reduction increases with the difficulty of the problem
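
    To make the sweep/low-order structure concrete, the sketch below applies the generic synthetic-acceleration recipe to an abstract linear fixed point phi = M phi + q: after each 'sweep', the error equation is solved approximately with a cheaper operator M_low. This is only the algebraic skeleton under assumed toy operators; the transport discretizations and the actual TSA low-order operator of the paper are not reproduced.

        import numpy as np

        def synthetic_accelerated_iteration(M, M_low, q, num_iters=60):
            """Accelerate phi <- M @ phi + q with a cheap low-order error correction.

            After the sweep, the error equation e = M e + M (phi_half - phi) is solved
            approximately with M replaced by the cheaper operator M_low.
            """
            n = len(q)
            phi = np.zeros(n)
            for _ in range(num_iters):
                phi_half = M @ phi + q                                    # high-order "sweep"
                correction = np.linalg.solve(np.eye(n) - M_low,
                                             M_low @ (phi_half - phi))    # low-order solve
                phi = phi_half + correction
            return phi

        # Toy fixed point: M has a large isotropic part that M_low captures.
        rng = np.random.default_rng(0)
        n = 6
        R = rng.random((n, n))
        R /= R.sum(axis=0)                       # column-stochastic, spectral radius <= 1
        M = 0.7 * np.eye(n) + 0.2 * R            # unaccelerated iteration converges slowly
        M_low = 0.7 * np.eye(n)                  # crude but effective low-order operator
        q = np.ones(n)
        exact = np.linalg.solve(np.eye(n) - M, q)
        print(np.linalg.norm(synthetic_accelerated_iteration(M, M_low, q) - exact))   # close to zero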

  18. Reducing dose calculation time for accurate iterative IMRT planning

    International Nuclear Information System (INIS)

    Siebers, Jeffrey V.; Lauterbach, Marc; Tong, Shidong; Wu Qiuwen; Mohan, Radhe

    2002-01-01

    A time-consuming component of IMRT optimization is the dose computation required in each iteration for the evaluation of the objective function. Accurate superposition/convolution (SC) and Monte Carlo (MC) dose calculations are currently considered too time-consuming for iterative IMRT dose calculation. Thus, fast, but less accurate algorithms such as pencil beam (PB) algorithms are typically used in most current IMRT systems. This paper describes two hybrid methods that utilize the speed of fast PB algorithms yet achieve the accuracy of optimizing based upon SC algorithms via the application of dose correction matrices. In one method, the ratio method, an infrequently computed voxel-by-voxel dose ratio matrix (R = D_SC/D_PB) is applied for each beam to the dose distributions calculated with the PB method during the optimization. That is, D_PB × R is used for the dose calculation during the optimization. The optimization proceeds until both the IMRT beam intensities and the dose correction ratio matrix converge. In the second method, the correction method, a periodically computed voxel-by-voxel correction matrix for each beam, defined to be the difference between the SC and PB dose computations, is used to correct PB dose distributions. To validate the methods, IMRT treatment plans developed with the hybrid methods are compared with those obtained when the SC algorithm is used for all optimization iterations and with those obtained when PB-based optimization is followed by SC-based optimization. In the 12 patient cases studied, no clinically significant differences exist in the final treatment plans developed with each of the dose computation methodologies. However, the number of time-consuming SC iterations is reduced from 6-32 for pure SC optimization to four or less for the ratio matrix method and five or less for the correction method. Because the PB algorithm is faster at computing dose, this reduces the inverse planning optimization time for our implementation
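
    A minimal sketch of the ratio method's bookkeeping only: a cheap dose engine is corrected voxel by voxel by an infrequently refreshed ratio R = D_SC/D_PB, and the inner optimization runs on the corrected fast dose. The linear 'dose engines', the gradient update and all numbers below are hypothetical stand-ins, not the planning system used in the paper.

        import numpy as np

        rng = np.random.default_rng(0)
        A_pb = rng.random((30, 5))                          # toy pencil-beam dose matrix (voxels x beams)
        A_sc = A_pb * rng.uniform(0.9, 1.1, A_pb.shape)     # 'accurate' engine differs slightly
        d_target = np.full(30, 10.0)

        dose_pb = lambda w: A_pb @ w
        dose_sc = lambda w: A_sc @ w

        def update_intensities(w, dose, step=0.02):
            """One projected gradient step on the quadratic objective ||dose - d_target||^2."""
            grad = A_pb.T @ (dose - d_target)               # gradient through the fast engine only
            return np.maximum(w - step * grad, 0.0)         # keep beam intensities non-negative

        def optimize_with_ratio_correction(w, num_outer=5, num_inner=50):
            """Ratio method skeleton: optimize on D_PB * R, refreshing R = D_SC / D_PB only rarely."""
            for _ in range(num_outer):
                d_pb = dose_pb(w)
                ratio = np.where(d_pb > 0, dose_sc(w) / d_pb, 1.0)   # voxel-by-voxel correction
                for _ in range(num_inner):
                    w = update_intensities(w, dose_pb(w) * ratio)
            return w

        w = optimize_with_ratio_correction(np.ones(5))
        print(np.abs(dose_sc(w) - d_target).mean())          # residual judged with the accurate engine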

  19. ITER EDA newsletter. V. 7, no. 12

    International Nuclear Information System (INIS)

    1998-12-01

    This edition of the ITER EDA Newsletter is dedicated to celebrate the achievements of the ITER activities at the San Diego Joint Work Site. Articles by E. Velikhov, A. Davies and R. Aymar mark the final days of American participation in the ITER program

  20. ITER ITA newsletter. No. 21, April 2005

    International Nuclear Information System (INIS)

    2005-05-01

    This issue of the ITER ITA (ITER transitional Arrangements) newsletter contains concise information about the Russian Federation Participant Team's activity in the area of preparation for ITER construction and information about the International Fusion Materials Irradiation Facility (IFMIF) project and prospects for its implementation

  1. ITER EDA newsletter. V. 2, no. 12

    International Nuclear Information System (INIS)

    1993-12-01

    This issue of the ITER EDA (Engineering Design Activities) Newsletter contains a report of the Second ITER Technical Committee Meeting on Safety, Environment, and Regulatory Approval, San Diego, USA, November 3-12, 1993, and a summary report on an ITER Magnet Technical Meeting, Naka, Japan, October 5-8, 1993

  2. ITER EDA newsletter. V. 7, no. 10

    International Nuclear Information System (INIS)

    1998-10-01

    This newsletter contains three articles, namely a report on an ITER meeting (October 20-21,1998) in Yokohama, Japan, a short note on the 17th IAEA Fusion Energy Conference (October 19-24, 1998) in Yokohama and a monograph by ITER Director R. Aymar on 'the Legacy of Artsimovitch and the lessons of ITER'

  3. ITER EDA newsletter. V. 8, no. 11

    International Nuclear Information System (INIS)

    1999-11-01

    This ITER EDA Newsletter contains summary reports on the eleventh meeting of the ITER diagnostic expert group in Cadarache, France, on the ITER JCT presentation at the international conference on fusion reactor materials in Colorado Springs, USA and on the seventh workshop on plasma edge theory in fusion devices in Tajimi, Japan. Individual abstracts are prepared for the three contributions

  4. ITER EDA newsletter. V. 6, no. 2

    International Nuclear Information System (INIS)

    1997-02-01

    This issue of the ITER EDA (Engineering Design Activities) Newsletter reports on the ITER divertor development project and its objectives; contains a report on the 16th IAEA Fusion Energy Conference (ITER and other Tokamak Issues) held in Montreal, Canada; 287 papers were selected by the Programme Committee for presentation and 178 posters were presented. 3 figs

  5. ITER EDA newsletter. V. 9, no. 9

    International Nuclear Information System (INIS)

    2000-09-01

    This ITER EDA Newsletter contains the following 5 contributions: CSMC and CSIC charging tests successfully completed; The ITER divertor cassette project meeting; Blanket R and D and design task meeting; IAEA technical committee meeting on fusion safety; ITER L-6 large project ''blanket remote handling and maintenance''

  6. Final ITER CTA project board meeting

    International Nuclear Information System (INIS)

    Vlasenkov, V.

    2003-01-01

    The final ITER CTA Project Board Meeting (PB) took place in Barcelona, Spain on 8 December 2002. The PB took note of the comments concerning the status of the International Team and the Participant Teams, including Dr. Aymar's report 'From ITER to a FUSION Power Reactor' and the assessment of the ITER project cost estimate

  7. ITER EDA newsletter. V. 10, no. 3

    International Nuclear Information System (INIS)

    2001-03-01

    This issue contains a report on the meeting of the ITER Council (M. Drew), a report on the ITER EDA status (Dr. R. Aymar), and a report on the ITER Council tour of the Clarington Site (Dr. D. Dautovich). Abstracts of the individual reports have been included in the database

  8. ITER EDA newsletter. V. 2, no. 9

    International Nuclear Information System (INIS)

    1993-09-01

    This ITER EDA (Engineering Design Activities) Newsletter issue contains a report on the third meeting of the ITER Technical Advisory Committee, a summary report for the ITER Magnetic Technical Meeting, a brief account of the International Workshop on Nuclear Data for Fusion Reactor Technology, and a description of approved arrangements for visiting home team personnel

  9. ITER EDA Newsletter. V. 2, no. 1

    International Nuclear Information System (INIS)

    1993-01-01

    This ITER EDA (Engineering Design Activities) Newsletter issue is dedicated to the description of the ITER EDA Home Teams (European Community, Japan, Russian Federation, USA), in particular their composition, tasks, responsibilities, national support and activities, aimed to design the ITER tokamak

  10. Foundations of statistical algorithms with references to R packages

    CERN Document Server

    Weihs, Claus; Ligges, Uwe

    2013-01-01

    A new and refreshingly different approach to presenting the foundations of statistical algorithms, Foundations of Statistical Algorithms: With References to R Packages reviews the historical development of basic algorithms to illuminate the evolution of today's more powerful statistical algorithms. It emphasizes recurring themes in all statistical algorithms, including computation, assessment and verification, iteration, intuition, randomness, repetition and parallelization, and scalability. Unique in scope, the book reviews the upcoming challenge of scaling many of the established techniques

  11. The international thermonuclear reactor (ITER)

    International Nuclear Information System (INIS)

    Fowler, T.K.; Henning, C.D.

    1987-01-01

    Four governmental groups, representing Europe, Japan, USSR and U.S. met in March 1987 to consider a new international design of a magnetic fusion device for the 1990's. An interim group was appointed. The author gives a brief synopsis of what might be thought of as a draft charter. The starting point is the objective of the ITER device, which is summarized as demonstrating both scientific and technical feasibility of fusion. The paper presents an update on the current thinking and technical aspects for the International Thermonuclear Experimental Reactor (ITER). This covers not only what is happening in the U.S. but also some reports of preliminary thinking of the last technical work that occurred in Vienna

  12. Clustered iterative stochastic ensemble method for multi-modal calibration of subsurface flow models

    KAUST Repository

    Elsheikh, Ahmed H.

    2013-05-01

    A novel multi-modal parameter estimation algorithm is introduced. Parameter estimation is an ill-posed inverse problem that might admit many different solutions. This is attributed to the limited amount of measured data used to constrain the inverse problem. The proposed multi-modal model calibration algorithm uses an iterative stochastic ensemble method (ISEM) for parameter estimation. ISEM employs an ensemble of directional derivatives within a Gauss-Newton iteration for nonlinear parameter estimation. ISEM is augmented with a clustering step based on the k-means algorithm to form sub-ensembles. These sub-ensembles are used to explore different parts of the search space. Clusters are updated at regular intervals of the algorithm to allow merging of close clusters approaching the same local minimum. Numerical testing demonstrates the potential of the proposed algorithm in dealing with multi-modal nonlinear parameter estimation for subsurface flow models. © 2013 Elsevier B.V.
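
    As a rough illustration of the clustered-ensemble idea described above (not the authors' ISEM implementation), the sketch below clusters an ensemble with k-means and applies a Gauss-Newton-like correction per sub-ensemble, using an ensemble-estimated sensitivity in place of analytical derivatives. The function names, the toy forward model, and all parameter choices are assumptions made for illustration.

        import numpy as np
        from sklearn.cluster import KMeans

        def clustered_ensemble_step(theta, forward, d_obs, n_clusters=3, step=0.5):
            # One illustrative update: cluster the ensemble, then nudge each
            # sub-ensemble toward the observations with a Gauss-Newton-like
            # step built from an ensemble-estimated sensitivity matrix.
            labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(theta)
            theta_new = theta.copy()
            for c in range(n_clusters):
                members = np.where(labels == c)[0]
                if len(members) < 2:
                    continue
                X = theta[members]                      # sub-ensemble parameters
                Y = np.array([forward(x) for x in X])   # simulated data
                dX = X - X.mean(axis=0)
                dY = Y - Y.mean(axis=0)
                # least-squares estimate of the local sensitivity of data to parameters
                S, *_ = np.linalg.lstsq(dX, dY, rcond=None)
                for i in members:
                    resid = d_obs - forward(theta[i])
                    # Gauss-Newton-like correction: solve S.T @ dtheta ~ resid
                    dtheta, *_ = np.linalg.lstsq(S.T, resid, rcond=None)
                    theta_new[i] = theta[i] + step * dtheta
            return theta_new

        # Toy multi-modal problem (hypothetical): data are squares of the parameters,
        # so +x and -x fit equally well and different clusters can settle on different modes.
        forward = lambda x: x ** 2
        d_obs = forward(np.array([1.0, 2.0]))
        ensemble = np.random.default_rng(0).uniform(-3, 3, size=(30, 2))
        for _ in range(10):
            ensemble = clustered_ensemble_step(ensemble, forward, d_obs)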

  13. Iterative image reconstruction and its role in cardiothoracic computed tomography.

    Science.gov (United States)

    Singh, Sarabjeet; Khawaja, Ranish Deedar Ali; Pourjabbar, Sarvenaz; Padole, Atul; Lira, Diego; Kalra, Mannudeep K

    2013-11-01

    Revolutionary developments in multidetector-row computed tomography (CT) scanner technology offer several advantages for imaging of cardiothoracic disorders. As a result, expanding applications of CT now account for >85 million CT examinations annually in the United States alone. Given the large number of CT examinations performed, concerns over an increase in the population-based risk of radiation-induced carcinogenesis have made CT radiation dose a top safety concern in health care. In response to this concern, several technologies have been developed to reduce dose through more efficient use of scan parameters and the use of "newer" image reconstruction techniques. Although iterative image reconstruction algorithms were first introduced in the 1970s, filtered back projection was chosen as the conventional image reconstruction technique because of its simplicity and faster reconstruction times. With subsequent advances in computational speed and power, iterative reconstruction techniques have reemerged and have shown the potential for radiation dose optimization without adversely influencing diagnostic image quality. In this article, we review the basic principles of different iterative reconstruction algorithms and their implementation for various clinical applications in cardiothoracic CT examinations for reducing radiation dose.
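
    The vendor algorithms discussed in records such as this are proprietary, but the basic iterative principle they build on can be shown with a classical SIRT-style loop: back-project the sinogram residual onto the current image estimate and repeat. The sketch below is a minimal illustration under assumed inputs (a small system matrix A and sinogram b), not any clinical reconstruction algorithm.

        import numpy as np

        def sirt(A, b, n_iter=50, relax=0.9):
            # SIRT-style iterative reconstruction: x <- x + relax * C * A^T (R * (b - A x)),
            # with per-ray (R) and per-pixel (C) normalisation weights.
            R = 1.0 / np.maximum(A.sum(axis=1), 1e-12)   # per-ray weights
            C = 1.0 / np.maximum(A.sum(axis=0), 1e-12)   # per-pixel weights
            x = np.zeros(A.shape[1])
            for _ in range(n_iter):
                residual = b - A @ x                     # mismatch in projection space
                x = x + relax * C * (A.T @ (R * residual))
                x = np.clip(x, 0, None)                  # attenuation is non-negative
            return x

        # Tiny synthetic example: a 2x2 image probed by its row and column sums.
        A = np.array([[1., 1., 0., 0.],
                      [0., 0., 1., 1.],
                      [1., 0., 1., 0.],
                      [0., 1., 0., 1.]])
        x_true = np.array([1.0, 0.5, 0.2, 0.8])
        x_rec = sirt(A, A @ x_true)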

  14. ITER-FEAT outline design report

    International Nuclear Information System (INIS)

    2001-01-01

    In July 1998 the ITER Parties were unable, for financial reasons, to proceed with construction of the ITER design proposed at that time, to meet the detailed technical objectives and target cost set in 1992. It was therefore decided to investigate options for the design of ITER with reduced technical objectives and with possibly decreased technical margins, whose target construction cost was one half that of the 1998 ITER design, while maintaining the overall programmatic objective. To identify designs that might meet the revised objectives, task forces involving the JCT and Home Teams met during 1998 and 1999 to analyse and compare a range of options for the design of such a device. This led at the end of 1999 to a single configuration for the ITER design with parameters considered to be the most credible consistent with technical limitations and the financial target, yet meeting fully the objectives with appropriate margins. This new design of ITER, called ''ITER-FEAT'', was submitted to the ITER Director to the ITER Parties as the ''ITER-FEAT Outline Design Report'' (ODR) in January 2000, at their meeting in Tokyo. The Parties subsequently conducted their domestic assessments of this report and fed the resulting comments back into the progressing design. The progress on the developing design was reported to the ITER Technical Advisory Committee (TAC) in June 2000 in the report ''Progress in Resolving Open Design Issues from the ODR'' alongside a report on Progress in Technology R and D for ITER. In addition, the progress in the ITER-FEAT Design and Validating R and D was reported to the ITER Parties. The ITER-FEAT design was subsequently approved by the governing body of ITER in Moscow in June 2000 as the basis for the preparation of the Final Design Report, recognising it as a single mature design for ITER consistent with its revised objectives. This volume contains the documents pertinent to the process described above. More detailed technical information

  15. Evaluating iterative reconstruction performance in computed tomography.

    Science.gov (United States)

    Chen, Baiyu; Ramirez Giraldo, Juan Carlos; Solomon, Justin; Samei, Ehsan

    2014-12-01

    Iterative reconstruction (IR) offers notable advantages in computed tomography (CT). However, its performance characterization is complicated by its potentially nonlinear behavior, impacting performance in terms of specific tasks. This study aimed to evaluate the performance of IR with both task-specific and task-generic strategies. The performance of IR in CT was mathematically assessed with an observer model that predicted the detection accuracy in terms of the detectability index (d'). d' was calculated based on the properties of the image noise and resolution, the observer, and the detection task. The characterizations of image noise and resolution were extended to accommodate the nonlinearity of IR. A library of tasks was mathematically modeled at a range of sizes (radius 1-4 mm), contrast levels (10-100 HU), and edge profiles (sharp and soft). Unique d' values were calculated for each task with respect to five radiation exposure levels (volume CT dose index, CTDIvol: 3.4-64.8 mGy) and four reconstruction algorithms (filtered backprojection reconstruction, FBP; iterative reconstruction in imaging space, IRIS; and sinogram affirmed iterative reconstruction with strengths of 3 and 5, SAFIRE3 and SAFIRE5; all provided by Siemens Healthcare, Forchheim, Germany). The d' values were translated into the areas under the receiver operating characteristic curve (AUC) to represent human observer performance. For each task and reconstruction algorithm, a threshold dose was derived as the minimum dose required to achieve a threshold AUC of 0.9. A task-specific dose reduction potential of IR was calculated as the difference between the threshold doses for IR and FBP. A task-generic comparison was further made between IR and FBP in terms of the percent of all tasks yielding an AUC higher than the threshold. IR required less dose than FBP to achieve the threshold AUC. In general, SAFIRE5 showed the most significant dose reduction potentials (11-54 mGy, 77%-84%), followed by
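
    The translation from the detectability index to observer performance, and from there to a threshold dose, can be made concrete under a standard assumption: for an observer with equal-variance Gaussian decision variables, AUC = Φ(d'/√2). The sketch below uses that relation together with purely hypothetical d'-versus-dose numbers (the record does not list its raw values) to interpolate a threshold dose at AUC = 0.9.

        import numpy as np
        from scipy.stats import norm

        def auc_from_dprime(d):
            # AUC of an observer with equal-variance Gaussian decision variables
            return norm.cdf(np.asarray(d, dtype=float) / np.sqrt(2.0))

        def threshold_dose(doses, dprimes, auc_threshold=0.9):
            # Minimum dose at which the AUC for this task reaches the threshold,
            # found by linear interpolation over the measured dose levels.
            doses = np.asarray(doses, dtype=float)
            aucs = auc_from_dprime(dprimes)
            order = np.argsort(doses)
            doses, aucs = doses[order], aucs[order]
            if aucs.max() < auc_threshold:
                return np.nan                            # task never reaches the threshold
            return float(np.interp(auc_threshold, aucs, doses))

        # Hypothetical d' values at illustrative dose levels spanning the CTDIvol range above.
        levels = [3.4, 8.0, 16.0, 32.0, 64.8]
        fbp_dose = threshold_dose(levels, [0.9, 1.4, 2.0, 2.6, 3.2])
        ir_dose = threshold_dose(levels, [1.3, 1.9, 2.5, 3.1, 3.7])
        dose_reduction = fbp_dose - ir_dose              # task-specific dose reduction potential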

  16. US ITER limiter module design

    International Nuclear Information System (INIS)

    Mattas, R.F.; Billone, M.; Hassanein, A.

    1996-08-01

    The recent U.S. effort on the ITER (International Thermonuclear Experimental Reactor) shield has been focused on the limiter module design. This is a multi-disciplinary effort that covers design layout, fabrication, thermal hydraulics, materials evaluation, thermo-mechanical response, and predicted response during off-normal events. The results of design analyses are presented. Conclusions and recommendations are also presented concerning the capability of the limiter modules to meet performance goals and to be fabricated within design specifications using existing technology

  17. Truncated States Obtained by Iteration

    International Nuclear Information System (INIS)

    Cardoso, W. B.; Almeida, N. G. de

    2008-01-01

    We introduce the concept of truncated states obtained via iterative processes (TSI) and study their statistical features, making an analogy with dynamical systems theory (DST). As a specific example, we have studied TSI for the doubling and the logistic functions, which are standard functions in studying chaos. TSI for both the doubling and logistic functions exhibit certain similar patterns when their statistical features are compared from the point of view of DST
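
    The doubling and logistic maps referred to here are easy to iterate directly; a minimal sketch (covering only the classical maps, not the truncated-state construction itself) is:

        import numpy as np

        def doubling(x):
            # doubling (Bernoulli) map on [0, 1): x -> 2x mod 1
            return (2.0 * x) % 1.0

        def logistic(x, r=4.0):
            # logistic map x -> r x (1 - x); r = 4 is the fully chaotic regime
            return r * x * (1.0 - x)

        def orbit(f, x0, n):
            # return the iterates x0, f(x0), f(f(x0)), ...
            xs = [x0]
            for _ in range(n):
                xs.append(f(xs[-1]))
            return np.array(xs)

        orbit_doubling = orbit(doubling, 0.1234, 50)
        orbit_logistic = orbit(logistic, 0.1234, 50)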

  18. Matlab modeling of ITER CODAC

    International Nuclear Information System (INIS)

    Pangione, L.; Lister, J.B.

    2008-01-01

    The ITER CODAC (COntrol, Data Access and Communication) conceptual design resulted from 2 years of activity. One result was a proposed functional partitioning of CODAC into different CODAC Systems, each of them partitioned into other CODAC Systems. Considering the large size of this project, simple use of human language assisted by figures would certainly be ineffective in creating an unambiguous description of all interactions and all relations between these Systems. Moreover, the underlying design is resident in the mind of the designers, who must consider all possible situations that could happen to each system. There is therefore a need to model the whole of CODAC with a clear and preferably graphical method, which allows the designers to verify the correctness and the consistency of their project. The aim of this paper is to describe the work started on ITER CODAC modeling using Matlab/Simulink. The main feature of this tool is the possibility of having a simple, graphical, intuitive representation of a complex system and ultimately to run a numerical simulation of it. Using Matlab/Simulink, each CODAC System was represented in a graphical and intuitive form with its relations and interactions through the definition of a small number of simple rules. In a Simulink diagram, each system was represented as a 'black box', both containing, and connected to, a number of other systems. In this way it is possible to move vertically between systems on different levels, to show the relation of membership, or horizontally to analyse the information exchange between systems at the same level. This process can be iterated, starting from a global diagram, in which only CODAC appears with the Plant Systems and the external sites, and going deeper down to the mathematical model of each CODAC system. The Matlab/Simulink features for simulating the whole top diagram encourage us to develop the idea of completing the functionalities of all systems in order to finally have a full
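
    The hierarchical 'black box' decomposition described above does not depend on Simulink specifically; as a conceptual stand-in (with hypothetical subsystem names and no claim about the real CODAC partitioning), it can be sketched as nested systems that both contain subsystems and link to peers:

        from dataclasses import dataclass, field

        @dataclass
        class System:
            # a 'black box' that contains other systems and connects to peer systems
            name: str
            subsystems: list = field(default_factory=list)
            links: list = field(default_factory=list)    # names of peer systems

            def tree(self, depth=0):
                # vertical view: walk down the containment hierarchy
                lines = ["  " * depth + self.name]
                for s in self.subsystems:
                    lines.extend(s.tree(depth + 1))
                return lines

        # Hypothetical top-level diagram: CODAC alongside the Plant Systems and external sites.
        codac = System("CODAC",
                       subsystems=[System("Supervision"), System("Data Access")],
                       links=["Plant Systems", "External Sites"])
        print("\n".join(codac.tree()))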

  19. ITER merges energies in Provence

    International Nuclear Information System (INIS)

    Barla, J.Ch.

    2009-01-01

    By 30 September 2009, the works around the Cadarache site, where the experimental nuclear fusion reactor ITER is to be built, had already generated about 366 million euros in contracts and supply agreements with French companies. The progress of the project should bring an additional 3000 to 4000 people to the area around the site, but the Provence region suffers from the lack of genuine forward planning of employment and skills. (J.S.)

  20. Development of a versatile algorithm for optimization of radiation therapy

    International Nuclear Information System (INIS)

    Gustafsson, Anders.

    1996-12-01

    A flexible iterative gradient algorithm for radiation therapy optimization has been developed. The algorithm is based on dose calculation using the pencil-beam description of external radiation beams in uniform and heterogeneous patients. The properties of the algorithm are described, including its ability to handle variable bounds and linear constraints, its efficiency in gradient calculation, its convergence properties, and its termination criteria. 116 refs
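
    The report itself is not reproduced here, but the kind of bounded iterative gradient scheme it describes can be illustrated with a generic projected-gradient loop (a sketch only: it handles simple box bounds, not general linear constraints, and the toy objective stands in for a pencil-beam dose objective).

        import numpy as np

        def projected_gradient(grad, x0, lower, upper, step=0.1, n_iter=200, tol=1e-6):
            # Gradient step on the objective, then projection back onto the
            # box [lower, upper] (the variable bounds); stop when updates are small.
            x = np.clip(np.asarray(x0, dtype=float), lower, upper)
            for _ in range(n_iter):
                x_new = np.clip(x - step * grad(x), lower, upper)
                if np.linalg.norm(x_new - x) < tol:      # termination criterion
                    break
                x = x_new
            return x

        # Toy quadratic objective ||w - target||^2 standing in for a dose-based objective.
        target = np.array([1.0, 2.0, 0.5])
        grad = lambda w: 2.0 * (w - target)
        weights = projected_gradient(grad, x0=np.zeros(3), lower=0.0, upper=1.5)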