DEFF Research Database (Denmark)
Hansen, Thomas Dueholm; Miltersen, Peter Bro; Zwick, Uri
2011-01-01
Ye showed recently that the simplex method with Dantzig pivoting rule, as well as Howard's policy iteration algorithm, solve discounted Markov decision processes (MDPs), with a constant discount factor, in strongly polynomial time. More precisely, Ye showed that both algorithms terminate after...... iterations. Second, and more importantly, we show that the same bound applies to the number of iterations performed by the strategy iteration (or strategy improvement) algorithm, a generalization of Howard's policy iteration algorithm used for solving 2-player turn-based stochastic games with discounted zero...
DEFF Research Database (Denmark)
Hansen, Thomas Dueholm; Miltersen, Peter Bro; Zwick, Uri
2013-01-01
Ye [2011] showed recently that the simplex method with Dantzig’s pivoting rule, as well as Howard’s policy iteration algorithm, solve discounted Markov decision processes (MDPs), with a constant discount factor, in strongly polynomial time. More precisely, Ye showed that both algorithms terminate...... terminates after at most O(m/(1−γ) log(n/(1−γ))) iterations. Second, and more importantly, we show that the same bound applies to the number of iterations performed by the strategy iteration (or strategy improvement) algorithm, a generalization of Howard’s policy iteration algorithm used for solving 2-player turn-based...... for 2-player turn-based stochastic games; it is strongly polynomial for a fixed discount factor, and exponential otherwise....
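The algorithm behind these bounds, Howard's policy iteration, alternates exact policy evaluation with greedy improvement until the policy is stable. A minimal textbook sketch (not the authors' code; the two-state MDP at the end is a made-up illustration):

```python
import numpy as np

def policy_iteration(P, r, gamma):
    """Howard's policy iteration for a discounted MDP.

    P[a] is the n x n transition matrix of action a, r[a] the length-n
    reward vector, gamma in (0, 1) the discount factor.
    """
    n_actions, n = len(P), len(r[0])
    policy = np.zeros(n, dtype=int)
    while True:
        # Policy evaluation: solve (I - gamma * P_pi) v = r_pi exactly.
        P_pi = np.array([P[policy[s]][s] for s in range(n)])
        r_pi = np.array([r[policy[s]][s] for s in range(n)])
        v = np.linalg.solve(np.eye(n) - gamma * P_pi, r_pi)
        # Greedy policy improvement over all actions.
        q = np.array([r[a] + gamma * P[a] @ v for a in range(n_actions)])
        new_policy = q.argmax(axis=0)
        if np.array_equal(new_policy, policy):
            return policy, v
        policy = new_policy

# Made-up two-state example: action 0 = stay put, action 1 = switch state.
P = [np.eye(2), np.array([[0.0, 1.0], [1.0, 0.0]])]
r = [np.array([0.0, 1.0]), np.array([0.5, 0.0])]
policy, v = policy_iteration(P, r, gamma=0.9)
```

Here the optimal policy switches out of state 0 (to reach the reward in state 1) and stays in state 1.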
Energy Technology Data Exchange (ETDEWEB)
Lorber, A.A.; Carey, G.F.; Bova, S.W.; Harle, C.H. [Univ. of Texas, Austin, TX (United States)]
1996-12-31
The connection between the solution of linear systems of equations by iterative methods and explicit time stepping techniques is used to accelerate to steady state the solution of ODE systems arising from discretized PDEs which may involve either physical or artificial transient terms. Specifically, a class of Runge-Kutta (RK) time integration schemes with extended stability domains has been used to develop recursion formulas which lead to accelerated iterative performance. The coefficients for the RK schemes are chosen based on the theory of Chebyshev iteration polynomials in conjunction with a local linear stability analysis. We refer to these schemes as Chebyshev Parameterized Runge Kutta (CPRK) methods. CPRK methods of one to four stages are derived as functions of the parameters which describe an ellipse ε that the stability domain of the methods is known to contain. Of particular interest are two-stage, first-order CPRK and four-stage, first-order methods. It is found that the former method can be identified with any two-stage RK method through the correct choice of parameters. The latter method is found to have a wide range of stability domains, with a maximum extension of 32 along the real axis. Recursion performance results are presented below for a model linear convection-diffusion problem as well as non-linear fluid flow problems discretized by both finite-difference and finite-element methods.
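The idea of choosing iteration parameters from Chebyshev polynomials is classical; for linear systems it takes the form of the Chebyshev iteration. A minimal sketch following the standard recursion (e.g. Saad's Algorithm 12.1), not the CPRK schemes themselves, and assuming eigenvalue bounds [lmin, lmax] are known:

```python
import numpy as np

def chebyshev_iteration(A, b, lmin, lmax, n_iter=50):
    """Chebyshev iteration for A x = b, with the spectrum of A assumed
    to lie in [lmin, lmax]. The recursion plays the role of the
    acceleration described above: an explicit scheme whose parameters
    come from Chebyshev polynomials."""
    theta = (lmax + lmin) / 2.0          # center of the spectrum
    delta = (lmax - lmin) / 2.0          # half-width of the spectrum
    sigma1 = theta / delta
    rho = 1.0 / sigma1
    x = np.zeros_like(b, dtype=float)
    r = b - A @ x
    d = r / theta                        # first correction
    for _ in range(n_iter):
        x = x + d
        r = r - A @ d
        rho_new = 1.0 / (2.0 * sigma1 - rho)
        d = rho_new * rho * d + (2.0 * rho_new / delta) * r
        rho = rho_new
    return x

# Diagonal test problem with spectrum {1, 2, 3}; exact solution is all ones.
A = np.diag([1.0, 2.0, 3.0])
b = np.array([1.0, 2.0, 3.0])
x = chebyshev_iteration(A, b, lmin=1.0, lmax=3.0, n_iter=60)
```

With exact spectral bounds the iterate matches the optimal Chebyshev error polynomial at every step.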
Energy Technology Data Exchange (ETDEWEB)
Myers, N.J. [Univ. of Durham (United Kingdom)]
1994-12-31
The author gives a hybrid method for the iterative solution of linear systems of equations Ax = b, where the matrix (A) is nonsingular, sparse and nonsymmetric. As in a method developed by Starke and Varga, the method begins with a number of steps of the Arnoldi method to produce some information on the location of the spectrum of A. The method then switches to an iterative method based on the Faber polynomials for an annular sector placed around these eigenvalue estimates. The Faber polynomials for an annular sector are used because, firstly, an annular sector can easily be placed around any eigenvalue estimates bounded away from zero, and secondly, the Faber polynomials are known analytically for an annular sector. Finally the author gives three numerical examples, two of which allow comparison with Starke and Varga's results. The third is an example of a matrix for which many iterative methods would fail, but this method converges.
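The first phase, a few Arnoldi steps to locate the spectrum, can be sketched as follows; the Ritz values of the small Hessenberg matrix serve as the eigenvalue estimates (generic Arnoldi, not the author's Faber-polynomial phase):

```python
import numpy as np

def arnoldi_ritz(A, v0, m):
    """m steps of the Arnoldi process; the eigenvalues of the small
    Hessenberg matrix (Ritz values) estimate the spectrum of A."""
    n = len(v0)
    Q = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    Q[:, 0] = v0 / np.linalg.norm(v0)
    for j in range(m):
        w = A @ Q[:, j]
        for i in range(j + 1):              # modified Gram-Schmidt
            H[i, j] = Q[:, i] @ w
            w = w - H[i, j] * Q[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:             # invariant subspace found
            m = j + 1
            break
        Q[:, j + 1] = w / H[j + 1, j]
    return np.linalg.eigvals(H[:m, :m])

# Upper-triangular test matrix with known eigenvalues {4, 3, 1}.
A = np.array([[4.0, 1.0, 0.0],
              [0.0, 3.0, 1.0],
              [0.0, 0.0, 1.0]])
ritz = np.sort(arnoldi_ritz(A, np.ones(3), 3).real)
```

With m equal to the matrix dimension the Ritz values reproduce the full spectrum; in practice m is much smaller and the Ritz values only bracket it.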
International Nuclear Information System (INIS)
Yahiaoui, S.-A.; Bentaiba, M.
2011-01-01
We present a method for obtaining the quasi-exact solutions of the Rabi Hamiltonian in the framework of the asymptotic iteration method (AIM). The energy eigenvalues, the eigenfunctions and the associated Bender-Dunne orthogonal polynomials are deduced. We show (i) that the orthogonal polynomials are generated from the upper limit (i.e., truncation limit) of the polynomial solutions deduced from AIM, and (ii) that they have nonpositive norm. (authors)
Variational Iteration Method for Fifth-Order Boundary Value Problems Using He's Polynomials
Directory of Open Access Journals (Sweden)
Muhammad Aslam Noor
2008-01-01
Full Text Available We apply the variational iteration method using He's polynomials (VIMHP) for solving fifth-order boundary value problems. The proposed method is an elegant combination of the variational iteration and the homotopy perturbation methods and is mainly due to Ghorbani (2007). The suggested algorithm is quite efficient and is practically well suited for use in these problems. The proposed iterative scheme finds the solution without any discretization, linearization, or restrictive assumptions. Several examples are given to verify the reliability and efficiency of the method. The fact that the proposed technique solves nonlinear problems without using Adomian's polynomials can be considered as a clear advantage of this algorithm over the decomposition method.
Chen, Sheng; Hong, Xia; Khalaf, Emad F; Alsaadi, Fuad E; Harris, Chris J
2017-12-01
Complex-valued (CV) B-spline neural network approach offers a highly effective means for identifying and inverting practical Hammerstein systems. Compared with its conventional CV polynomial-based counterpart, a CV B-spline neural network has superior performance in identifying and inverting CV Hammerstein systems, while imposing a similar complexity. This paper reviews the optimality of the CV B-spline neural network approach. Advantages of B-spline neural network approach as compared with the polynomial based modeling approach are extensively discussed, and the effectiveness of the CV neural network-based approach is demonstrated in a real-world application. More specifically, we evaluate the comparative performance of the CV B-spline and polynomial-based approaches for the nonlinear iterative frequency-domain decision feedback equalization (NIFDDFE) of single-carrier Hammerstein channels. Our results confirm the superior performance of the CV B-spline-based NIFDDFE over its CV polynomial-based counterpart.
New polynomial-based molecular descriptors with low degeneracy.
Directory of Open Access Journals (Sweden)
Matthias Dehmer
In this paper, we introduce a novel graph polynomial called the 'information polynomial' of a graph. This graph polynomial can be derived by using a probability distribution of the vertex set. By using the zeros of the obtained polynomial, we additionally define some novel spectral descriptors. To compare them with descriptors based on computing the ordinary characteristic polynomial of a graph, we perform a numerical study using real chemical databases. We find that the novel descriptors do have a high discrimination power.
Dynamics of a new family of iterative processes for quadratic polynomials
Gutiérrez, J. M.; Hernández, M. A.; Romero, N.
2010-03-01
In this work we show the presence of the well-known Catalan numbers in the study of the convergence and the dynamical behavior of a family of iterative methods for solving nonlinear equations. In fact, we introduce a family of methods, depending on a parameter m. These methods reach the order of convergence m+2 when they are applied to quadratic polynomials with different roots. Newton's and Chebyshev's methods appear as particular choices of the family for m=0 and m=1, respectively. We make both analytical and graphical studies of these methods, which give rise to rational functions defined in the extended complex plane. Firstly, we prove that the coefficients of the aforementioned family of iterative processes can be written in terms of the Catalan numbers. Secondly, we make an incursion into their dynamical behavior. In fact, we show that the rational maps related to these methods can be written in terms of the entries of the Catalan triangle. Next we analyze their general convergence, by including some computer plots showing the intricate structure of the Universal Julia sets associated with the methods.
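The two named members of the family, Newton's (m=0) and Chebyshev's (m=1) methods, are easy to iterate directly on a quadratic; a small illustration on p(z) = z^2 - 1 using the generic formulas, not the paper's parameterization:

```python
def newton_step(z, f, df):
    return z - f(z) / df(z)

def chebyshev_step(z, f, df, d2f):
    # Chebyshev's method adds a curvature correction to Newton's step.
    u = f(z) / df(z)
    return z - u * (1.0 + f(z) * d2f(z) / (2.0 * df(z) ** 2))

f, df, d2f = lambda z: z * z - 1.0, lambda z: 2.0 * z, lambda z: 2.0

z = 0.5 + 0.1j                 # Re z > 0: Newton converges to the root +1
for _ in range(30):
    z = newton_step(z, f, df)

w = 1.2                        # start near +1 for Chebyshev's method
for _ in range(30):
    w = chebyshev_step(w, f, df, d2f)
```

For Newton's method on z^2 - 1 the two basins of attraction are exactly the half-planes Re z > 0 and Re z < 0; the more intricate Julia sets mentioned in the abstract appear for other members of the family.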
vs. a polynomial chaos-based MCMC
Siripatana, Adil
2014-08-01
Bayesian Inference of Manning's n coefficient in a Storm Surge Model Framework: comparison between Kalman filter and polynomial based method. Adil Siripatana. Conventional coastal ocean models solve the shallow water equations, which describe the conservation of mass and momentum when the horizontal length scale is much greater than the vertical length scale. In this case vertical pressure gradients in the momentum equations are nearly hydrostatic. The outputs of coastal ocean models are thus sensitive to the bottom stress terms defined through the formulation of Manning's n coefficients. This thesis considers the Bayesian inference problem of the Manning's n coefficient in the context of storm surge based on the coastal ocean ADCIRC model. In the first part of the thesis, we apply an ensemble-based Kalman filter, the singular evolutive interpolated Kalman (SEIK) filter, to estimate both a constant Manning's n coefficient and a 2-D parameterized Manning's coefficient on one idealized domain and one more realistic domain using observation system simulation experiments (OSSEs). We study the sensitivity of the system to the ensemble size. We also assess the benefits of using an inflation factor on the filter performance. To study the limitation of the restrictive Gaussian assumption of the SEIK filter, we also implement in the second part of this thesis a Markov Chain Monte Carlo (MCMC) method based on a generalized Polynomial chaos (gPC) approach for the estimation of the 1-D and 2-D Manning's n coefficient. The gPC is used to build a surrogate model that imitates the ADCIRC model in order to make the computational cost of implementing the MCMC with the ADCIRC model reasonable. We evaluate the performance of the MCMC-gPC approach and study its robustness to different OSSE scenarios. We also compare its estimates with those resulting from SEIK in terms of parameter estimates and full distributions. We present a full analysis of the solution of these two methods, of the...
Zhu, Yuanheng; Zhao, Dongbin; Yang, Xiong; Zhang, Qichao
2018-02-01
Sum of squares (SOS) polynomials have provided a computationally tractable way to deal with inequality constraints appearing in many control problems. They can also act as approximators in the framework of adaptive dynamic programming. In this paper, an approximate solution to the optimal control of polynomial nonlinear systems is proposed. Under a given attenuation coefficient, the Hamilton-Jacobi-Isaacs equation is relaxed to an optimization problem with a set of inequalities. After applying the policy iteration technique and constraining the inequalities to SOS, the optimization problem is divided into a sequence of feasible semidefinite programming problems. With the converged solution, the attenuation coefficient is further minimized to a lower value. After iterations, approximate solutions to the smallest L2-gain and the associated optimal controller are obtained. Four examples are employed to verify the effectiveness of the proposed algorithm.
Polynomial fuzzy model-based approach for underactuated surface vessels
DEFF Research Database (Denmark)
Khooban, Mohammad Hassan; Vafamand, Navid; Dragicevic, Tomislav
2018-01-01
The main goal of this study is to introduce a new polynomial fuzzy model-based structure for a class of marine systems with non-linear and polynomial dynamics. The suggested technique relies on a polynomial Takagi–Sugeno (T–S) fuzzy modelling, a polynomial dynamic parallel distributed compensation...... surface vessel (USV). Additionally, in order to overcome the USV control challenges, including the USV un-modelled dynamics, complex nonlinear dynamics, external disturbances and parameter uncertainties, the polynomial fuzzy model representation is adopted. Moreover, the USV-based control structure...... and a sum-of-squares (SOS) decomposition. The new proposed approach is a generalisation of the standard T–S fuzzy models and linear matrix inequality which indicated its effectiveness in decreasing the tracking time and increasing the efficiency of the robust tracking control problem for an underactuated...
On permutation polynomials over finite fields: differences and iterations
DEFF Research Database (Denmark)
Anbar Meidl, Nurdagül; Odzak, Almasa; Patel, Vandita
2017-01-01
The Carlitz rank of a permutation polynomial f over a finite field Fq is a simple concept that was introduced in the last decade. Classifying permutations over Fq with respect to their Carlitz ranks has some advantages, for instance f with a given Carlitz rank can be approximated by a rational li...
Superiority of Bessel function over Zernike polynomial as base ...
Indian Academy of Sciences (India)
Abstract. Here we describe the superiority of the Bessel function as base function for radial expansion over the Zernike polynomial in the tomographic reconstruction technique. The causes for the superiority have been described in detail. The superiority has been shown both with simulated data for Kadomtsev's model for ...
Directory of Open Access Journals (Sweden)
Liyun Su
2010-01-01
... obtaining the point spread function (PSF) parameter, an iterative Wiener filter is adopted to complete the restoration. We experimentally illustrate its performance on simulated data and a real blurred image. Results show that the proposed PSF parameter estimation technique and the image restoration method are effective.
Polynomial factor models : non-iterative estimation via method-of-moments
Schuberth, Florian; Büchner, Rebecca; Schermelleh-Engel, Karin; Dijkstra, Theo K.
2017-01-01
We introduce a non-iterative method-of-moments estimator for non-linear latent variable (LV) models. Under the assumption of joint normality of all exogenous variables, we use the corrected moments of linear combinations of the observed indicators (proxies) to obtain consistent path coefficient and
SAMBA: Sparse Approximation of Moment-Based Arbitrary Polynomial Chaos
Energy Technology Data Exchange (ETDEWEB)
Ahlfeld, R., E-mail: r.ahlfeld14@imperial.ac.uk; Belkouchi, B.; Montomoli, F.
2016-09-01
A new arbitrary Polynomial Chaos (aPC) method is presented for moderately high-dimensional problems characterised by limited input data availability. The proposed methodology improves the algorithm of aPC and extends the method, that was previously only introduced as tensor product expansion, to moderately high-dimensional stochastic problems. The fundamental idea of aPC is to use the statistical moments of the input random variables to develop the polynomial chaos expansion. This approach provides the possibility to propagate continuous or discrete probability density functions and also histograms (data sets) as long as their moments exist, are finite and the determinant of the moment matrix is strictly positive. For cases with limited data availability, this approach avoids bias and fitting errors caused by wrong assumptions. In this work, an alternative way to calculate the aPC is suggested, which provides the optimal polynomials, Gaussian quadrature collocation points and weights from the moments using only a handful of matrix operations on the Hankel matrix of moments. It can therefore be implemented without requiring prior knowledge about statistical data analysis or a detailed understanding of the mathematics of polynomial chaos expansions. The extension to more input variables suggested in this work, is an anisotropic and adaptive version of Smolyak's algorithm that is solely based on the moments of the input probability distributions. It is referred to as SAMBA (PC), which is short for Sparse Approximation of Moment-Based Arbitrary Polynomial Chaos. It is illustrated that for moderately high-dimensional problems (up to 20 different input variables or histograms) SAMBA can significantly simplify the calculation of sparse Gaussian quadrature rules. SAMBA's efficiency for multivariate functions with regard to data availability is further demonstrated by analysing higher order convergence and accuracy for a set of nonlinear test functions with 2, 5 and 10...
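The "handful of matrix operations on the Hankel matrix of moments" can be illustrated in one dimension: Cholesky-factor the moment matrix, read off the three-term recurrence coefficients, and diagonalize the resulting Jacobi matrix (the classical Golub-Welsch route). A sketch using the moments of a uniform weight on [-1, 1] as a made-up test input:

```python
import numpy as np

def quadrature_from_moments(mu, n):
    """n-point Gaussian quadrature from raw moments mu[0..2n]:
    Cholesky of the Hankel moment matrix gives the three-term
    recurrence, whose Jacobi matrix yields nodes and weights."""
    H = np.array([[mu[i + j] for j in range(n + 1)] for i in range(n + 1)])
    R = np.linalg.cholesky(H).T          # upper triangular, H = R^T R
    alpha = np.zeros(n)
    beta = np.zeros(n)                   # beta[j] holds sqrt(b_j) for j >= 1
    for j in range(n):
        alpha[j] = R[j, j + 1] / R[j, j] - (R[j - 1, j] / R[j - 1, j - 1] if j else 0.0)
        if j:
            beta[j] = R[j, j] / R[j - 1, j - 1]
    J = np.diag(alpha) + np.diag(beta[1:], 1) + np.diag(beta[1:], -1)
    nodes, V = np.linalg.eigh(J)
    weights = mu[0] * V[0, :] ** 2       # first-row components squared
    return nodes, weights

# Uniform weight on [-1, 1]: mu_k = 2/(k+1) for even k, 0 for odd k.
mu = [2.0, 0.0, 2.0 / 3.0, 0.0, 2.0 / 5.0]
nodes, weights = quadrature_from_moments(mu, 2)
```

For this input the routine reproduces the two-point Gauss-Legendre rule: nodes at ±1/√3 with weight 1 each.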
Design of a polynomial ring based symmetric homomorphic encryption scheme
Directory of Open Access Journals (Sweden)
Smaranika Dasgupta
2016-09-01
Security of data, especially in clouds, has become immensely essential for present-day applications. Fully homomorphic encryption (FHE) is a great way to secure data which is used and manipulated by untrusted applications or systems. In this paper, we propose a symmetric FHE scheme based on polynomials over a ring of integers. This scheme is somewhat homomorphic due to accumulation of noise after a few operations, and is made fully homomorphic using a refresh procedure. After a certain amount of homomorphic computation, large ciphertexts are refreshed for proper decryption. The hardness of the scheme is based on the difficulty of factorizing large integers. Also, it requires polynomial addition, which is computationally cost effective. Experimental results are shown to support our claim.
Image Compression Based On Wavelet, Polynomial and Quadtree
Directory of Open Access Journals (Sweden)
Bushra A. SULTAN
2011-01-01
In this paper a simple and fast image compression scheme is proposed. It is based on using the wavelet transform to decompose the image signal and then using polynomial approximation to prune the smoothing component of the image band. The architecture of the proposed coding scheme is highly synthetic: the error produced by the polynomial approximation, together with the detail sub-band data, is coded using both quantization and Quadtree spatial coding. As a last stage of the encoding process, shift encoding is used as a simple and efficient entropy encoder to compress the outcomes of the previous stage. The test results indicate that the proposed system can produce a promising compression performance while preserving the image quality level.
Model-based multi-fringe interferometry using Zernike polynomials
Gu, Wei; Song, Weihong; Wu, Gaofeng; Quan, Haiyang; Wu, Yongqian; Zhao, Wenchuan
2018-06-01
In this paper, a general phase retrieval method is proposed, which is based on a single interferogram with a small number of fringes (either tilt or power). Zernike polynomials are used to characterize the phase to be measured; the phase distribution is reconstructed by a non-linear least squares method. Experiments show that the proposed method can obtain satisfactory results compared to the standard phase-shifting interferometry technique. Additionally, the retrace errors of the proposed method can be neglected because of the few fringes; it does not need any auxiliary phase-shifting facilities (low cost) and is easy to implement without the process of phase unwrapping.
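The Zernike-characterization step can be sketched in its simplest linear form: express the phase in a few low-order Zernike terms and fit the coefficients by least squares. This is only an illustration of the basis-fitting idea (the paper fits a nonlinear interferogram model, and the basis below is unnormalized and made up for the demo):

```python
import numpy as np

def zernike_basis(xx, yy):
    """A few low-order Zernike terms in Cartesian form on the unit disk."""
    r2 = xx ** 2 + yy ** 2
    return np.stack([np.ones_like(xx),     # piston
                     xx,                    # tilt x
                     yy,                    # tilt y
                     2.0 * r2 - 1.0],       # defocus (power)
                    axis=-1)

# Sample the unit disk and synthesize a phase from known coefficients.
x = np.linspace(-1.0, 1.0, 64)
xx, yy = np.meshgrid(x, x)
mask = xx ** 2 + yy ** 2 <= 1.0
true_c = np.array([0.2, 1.0, -0.5, 0.3])
B = zernike_basis(xx[mask], yy[mask])      # design matrix on disk samples
phase = B @ true_c

# Linear least squares recovers the Zernike coefficients.
fit_c, *_ = np.linalg.lstsq(B, phase, rcond=None)
```

In the noise-free case the fit recovers the synthetic coefficients exactly; with measured data the residual reflects higher-order aberrations left out of the basis.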
International Nuclear Information System (INIS)
Yuste, Santos Bravo; Abad, Enrique
2011-01-01
We present an iterative method to obtain approximations to Bessel functions of the first kind J_p(x) (p > -1) via the repeated application of an integral operator to an initial seed function f_0(x). The class of seed functions f_0(x) leading to sets of increasingly accurate approximations f_n(x) is considerably large and includes any polynomial. When the operator is applied once to a polynomial of degree s, it yields a polynomial of degree s + 2, and so the iteration of this operator generates sets of increasingly better polynomial approximations of increasing degree. We focus on the set of polynomial approximations generated from the seed function f_0(x) = 1. This set of polynomials is useful not only for the computation of J_p(x) but also from a physical point of view, as it describes the long-time decay modes of certain fractional diffusion and diffusion-wave problems.
Li, Xiaomiao; Lam, Hak Keung; Song, Ge; Liu, Fucai
2017-01-01
This paper deals with the stability and positivity analysis of polynomial-fuzzy-model-based (PFMB) control systems with time delay, which are formed by a polynomial fuzzy model and a polynomial fuzzy controller connected in a closed loop, under imperfect premise matching. To improve the design and realization flexibility, the polynomial fuzzy model and the polynomial fuzzy controller are allowed to have their own set of premise membership functions. A sum-of-squares (SOS)-based stability ana...
International Nuclear Information System (INIS)
Benasser Algehawi, Mohammed; Samsudin, Azman
2010-01-01
We present a method to extract key pairs needed for the Identity Based Encryption (IBE) scheme from extended Chebyshev polynomials over the finite field Z_p. Our proposed scheme relies on the hard problem and the bilinear property of the extended Chebyshev polynomial over Z_p. The proposed system is applicable, secure, and reliable.
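Chebyshev-polynomial cryptography rests on the semigroup property T_m(T_n(x)) = T_{mn}(x), which is a polynomial identity over the integers and therefore also holds mod p. A small demonstration with toy parameters (nothing below is a secure instantiation):

```python
def cheb_mod(n, x, p):
    """Chebyshev polynomial T_n(x) mod p via the recurrence
    T_{k+1} = 2x T_k - T_{k-1}, with T_0 = 1 and T_1 = x."""
    if n == 0:
        return 1 % p
    a, b = 1 % p, x % p
    for _ in range(n - 1):
        a, b = b, (2 * x * b - a) % p
    return b

# Semigroup/commutativity property: T_7(T_11(x)) = T_77(x) mod p.
p = 1009                                  # toy prime, far too small in practice
lhs = cheb_mod(7, cheb_mod(11, 5, p), p)
rhs = cheb_mod(77, 5, p)
```

The commutativity T_m(T_n(x)) = T_n(T_m(x)) that follows from this identity is what enables Diffie-Hellman-style key agreement with Chebyshev polynomials.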
Image based rendering of iterated function systems
Wijk, van J.J.; Saupe, D.
2004-01-01
A fast method to generate fractal imagery is presented. Iterated function systems (IFS) are based on repeatedly copying transformed images. We show that this can be directly translated into standard graphics operations: each image is generated by texture mapping and blending copies of the previous image.
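The "repeatedly copying transformed images" view can be sketched directly with array operations; a deterministic toy example producing the Sierpinski triangle (my own construction, not the paper's texture-mapping implementation):

```python
import numpy as np

def ifs_iterate(img, n_iter):
    """Deterministic IFS rendering: each pass blends three half-scale
    copies of the current image (here: the Sierpinski triangle maps)."""
    for _ in range(n_iter):
        h, w = img.shape
        small = img[::2, ::2]                   # half-scale copy
        sh, sw = small.shape
        new = np.zeros_like(img)
        new[h - sh:, :sw] |= small              # bottom-left copy
        new[h - sh:, w - sw:] |= small          # bottom-right copy
        new[:sh, (w - sw) // 2:(w - sw) // 2 + sw] |= small  # top-middle copy
        img = new
    return img

canvas = np.ones((256, 256), dtype=bool)        # any seed image works
fractal = ifs_iterate(canvas, 8)                # 3^8 = 6561 pixels survive
```

Because the three maps are contractions, the result is independent of the seed image once the copies shrink to pixel size, which is exactly why IFS rendering converges.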
Directory of Open Access Journals (Sweden)
S. Vukotic
2016-08-01
Digital polynomial-based interpolation filters implemented using the Farrow structure are used in Digital Signal Processing (DSP) to calculate the signal between its discrete samples. The two basic design parameters for these filters are the number of polynomial segments defining the finite length of the impulse response, and the order of the polynomials in each polynomial segment. The complexity of the implementation structure and the frequency-domain performance depend on these two parameters. This contribution presents estimation formulae for the length and polynomial order of polynomial-based filters for various types of requirements, including attenuation in the stopband, width of the transition band, deviation in the passband, and weighting in the passband/stopband.
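A Farrow-structure interpolator evaluates fixed subfilters once per sample and then a polynomial in the fractional delay mu. A cubic-Lagrange sketch (one standard choice of polynomial-based filter, not the designs estimated in the contribution):

```python
import math

def farrow_interp(x, t):
    """Cubic Lagrange interpolation of samples x at fractional index t,
    evaluated in Farrow form: fixed subfilter outputs c0..c3 followed
    by a Horner evaluation in the fractional part mu."""
    n = math.floor(t)
    mu = t - n
    xm1, x0, x1, x2 = x[n - 1], x[n], x[n + 1], x[n + 2]
    # Subfilter outputs (fixed combinations of the four input samples).
    c0 = x0
    c1 = x1 - xm1 / 3.0 - x0 / 2.0 - x2 / 6.0
    c2 = (xm1 + x1) / 2.0 - x0
    c3 = (x2 - xm1) / 6.0 + (x0 - x1) / 2.0
    # Polynomial in mu, evaluated by Horner's rule.
    return ((c3 * mu + c2) * mu + c1) * mu + c0

samples = [float(k ** 3) for k in range(6)]   # a cubic signal: exact case
y = farrow_interp(samples, 2.5)               # 2.5**3 = 15.625
```

The appeal of the structure is that only the final Horner stage depends on mu, so the same subfilter outputs serve any fractional delay within the segment.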
DEFF Research Database (Denmark)
Rakhshan, Mohsen; Vafamand, Navid; Khooban, Mohammad Hassan
2018-01-01
This paper introduces a polynomial fuzzy model (PFM)-based maximum power point tracking (MPPT) control approach to increase the performance and efficiency of the solar photovoltaic (PV) electricity generation. The proposed method relies on a polynomial fuzzy modeling, a polynomial parallel......, a direct maximum power (DMP)-based control structure is considered for MPPT. Using the PFM representation, the DMP-based control structure is formulated in terms of SOS conditions. Unlike the conventional approaches, the proposed approach does not require exploring the maximum power operational point...
Polynomial Vector Fields in One Complex Variable
DEFF Research Database (Denmark)
Branner, Bodil
In recent years Adrien Douady was interested in polynomial vector fields, both in relation to iteration theory and as a topic on their own. This talk is based on his work with Pierrette Sentenac, work of Xavier Buff and Tan Lei, and my own joint work with Kealey Dias.
Directory of Open Access Journals (Sweden)
Humin Lei
2017-01-01
An adaptive mesh iteration method based on Hermite pseudospectral approximation is described for trajectory optimization. The method uses the Legendre-Gauss-Lobatto points as interpolation points; the state equations are then approximated by Hermite interpolating polynomials. The method allows for changes in both the number of mesh points and the number of mesh intervals, and produces significantly smaller mesh sizes for a given accuracy tolerance. The derived relative error estimate is then used to trade the number of mesh points against the number of mesh intervals. The adaptive mesh iteration method is applied successfully to trajectory optimization of a Maneuverable Reentry Research Vehicle, and the simulation experiment results show that the adaptive mesh iteration method has many advantages.
A polynomial based model for cell fate prediction in human diseases.
Ma, Lichun; Zheng, Jie
2017-12-21
Cell fate regulation directly affects tissue homeostasis and human health. Research on cell fate decision sheds light on key regulators, facilitates understanding of the mechanisms, and suggests novel strategies to treat human diseases that are related to abnormal cell development. In this study, we proposed a polynomial based model to predict cell fate. This model was derived from the Taylor series. As a case study, gene expression data of pancreatic cells were adopted to test and verify the model. As numerous features (genes) are available, we employed two kinds of feature selection methods, i.e. correlation based and apoptosis pathway based. Then polynomials of different degrees were used to refine the cell fate prediction function. 10-fold cross-validation was carried out to evaluate the performance of our model. In addition, we analyzed the stability of the resultant cell fate prediction model by evaluating the ranges of the parameters, as well as assessing the variances of the predicted values at randomly selected points. Results show that, for both gene selection methods, the prediction accuracies of polynomials of different degrees differ little. Interestingly, the linear polynomial (degree 1 polynomial) is more stable than the others. When comparing the linear polynomials based on the two gene selection methods, although the accuracy of the one that uses correlation analysis outcomes is a little higher (86.62%), the one based on genes of the apoptosis pathway is much more stable. Considering both the prediction accuracy and the stability of polynomial models of different degrees, the linear model is the preferred choice for cell fate prediction with gene expression data of pancreatic cells. The presented cell fate prediction model can be extended to other cells, which may be important for basic research as well as clinical study of cell development related diseases.
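The degree-1 (linear) polynomial model with 10-fold cross-validation can be sketched on synthetic data; the data, gene count, coefficients and threshold below are all made up for illustration (the study used real pancreatic-cell expression data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for expression data: 200 cells x 5 genes, binary fate.
X = rng.normal(size=(200, 5))
w_true = np.array([1.5, -2.0, 0.5, 0.0, 1.0])
y = (X @ w_true + 0.3 * rng.normal(size=200) > 0).astype(float)

# Degree-1 polynomial fate predictor, evaluated by 10-fold cross-validation.
folds = np.array_split(rng.permutation(200), 10)
acc = []
for k in range(10):
    test = folds[k]
    train = np.concatenate([folds[j] for j in range(10) if j != k])
    A = np.c_[np.ones(len(train)), X[train]]        # intercept + linear terms
    w, *_ = np.linalg.lstsq(A, y[train], rcond=None)
    pred = np.c_[np.ones(len(test)), X[test]] @ w > 0.5
    acc.append(float((pred == y[test].astype(bool)).mean()))
mean_acc = float(np.mean(acc))
```

Higher-degree models would simply append product and power columns to the design matrix A; the abstract's stability observation corresponds to how much w varies across folds.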
Novel Threshold Changeable Secret Sharing Schemes Based on Polynomial Interpolation.
Yuan, Lifeng; Li, Mingchu; Guo, Cheng; Choo, Kim-Kwang Raymond; Ren, Yizhi
2016-01-01
After any distribution of secret sharing shadows in a threshold changeable secret sharing scheme, the threshold may need to be adjusted to deal with changes in the security policy and adversary structure. For example, when employees leave the organization, it is not realistic to expect departing employees to ensure the security of their secret shadows. Therefore, in 2012, Zhang et al. proposed (t → t', n) and ({t1, t2,⋯, tN}, n) threshold changeable secret sharing schemes. However, their schemes suffer from a number of limitations, such as a strict limit on the threshold values, a large storage space requirement for secret shadows, and significant computation for constructing and recovering polynomials. To address these limitations, we propose two improved dealer-free threshold changeable secret sharing schemes. In our schemes, we construct polynomials to update secret shadows, and use a two-variable one-way function to resist collusion attacks and secure the information stored by the combiner. We then demonstrate that our schemes can adjust the threshold safely.
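The underlying primitive, threshold secret sharing via polynomial interpolation (Shamir's scheme), can be sketched as follows; this is the basic scheme such constructions build on, not the proposed threshold-changeable variant, and the prime modulus is chosen only for illustration:

```python
import random

P = 2 ** 61 - 1          # prime modulus: all arithmetic is in GF(P)

def make_shares(secret, t, n):
    """Shamir (t, n) sharing: a random degree t-1 polynomial with
    constant term `secret`, evaluated at x = 1..n."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def recover(shares):
    """Lagrange interpolation at x = 0 recovers the constant term."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if j != i:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = make_shares(123456789, t=3, n=5)     # any 3 of the 5 shares suffice
```

Changing the threshold after distribution is exactly what this basic scheme cannot do, which is the gap the abstract addresses.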
Computing derivative-based global sensitivity measures using polynomial chaos expansions
International Nuclear Information System (INIS)
Sudret, B.; Mai, C.V.
2015-01-01
In the field of computer experiments, sensitivity analysis aims at quantifying the relative importance of each input parameter (or combinations thereof) of a computational model with respect to the model output uncertainty. Variance decomposition methods leading to the well-known Sobol' indices are recognized as accurate techniques, at a rather high computational cost though. The use of polynomial chaos expansions (PCE) to compute Sobol' indices has allowed this computational burden to be alleviated. However, when dealing with large dimensional input vectors, it is good practice to first use screening methods in order to discard unimportant variables. The derivative-based global sensitivity measures (DGSMs) have been developed recently in this respect. In this paper we show how polynomial chaos expansions may be used to compute DGSMs analytically as a mere post-processing. This requires the analytical derivation of derivatives of the orthonormal polynomials which enter PC expansions. Closed-form expressions for Hermite, Legendre and Laguerre polynomial expansions are given. The efficiency of the approach is illustrated on two well-known benchmark problems in sensitivity analysis. - Highlights: • Derivative-based global sensitivity measures (DGSM) have been developed for screening purposes. • Polynomial chaos expansions (PCE) are used as a surrogate model of the original computational model. • From a PC expansion the DGSM can be computed analytically. • The paper provides the derivatives of Hermite, Legendre and Laguerre polynomials for this purpose.
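For Hermite expansions the post-processing rests on He_n' = n·He_{n-1} together with the orthogonality E[He_m He_n] = n! δ_mn for a standard normal input. A one-dimensional sketch with numpy's probabilists' Hermite module (my own example function, not the paper's benchmarks):

```python
import numpy as np
from math import factorial
from numpy.polynomial import hermite_e as He

def dgsm_1d(coeffs):
    """Derivative-based measure E[(f')^2] for f = sum_n c_n He_n(x),
    x ~ N(0, 1), computed analytically from the PCE coefficients
    using He_n' = n He_{n-1} and E[He_m He_n] = n! delta_{mn}."""
    d = He.hermeder(coeffs)               # coefficients of f' in the He basis
    return sum(dk ** 2 * factorial(k) for k, dk in enumerate(d))

# Example: f(x) = x^2 + x = He_2 + He_1 + He_0, so f' = 2x + 1 and
# E[(f')^2] = E[4x^2 + 4x + 1] = 5.
value = dgsm_1d([1.0, 1.0, 1.0])
```

In several dimensions the same recipe applies per input, differentiating only the factor of the tensorized basis that depends on that input.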
A general U-block model-based design procedure for nonlinear polynomial control systems
Zhu, Q. M.; Zhao, D. Y.; Zhang, Jianhua
2016-10-01
The U-model concept (in terms of 'providing concise and applicable solutions for complex problems') and a corresponding basic U-control design algorithm originated in the first author's PhD thesis. The term U-model appeared (not rigorously defined) for the first time in the first author's other journal paper, which established a framework for using linear polynomial control system design approaches to design nonlinear polynomial control systems (in brief, linear polynomial approaches → nonlinear polynomial plants). This paper represents the next milestone: using linear state-space approaches to design nonlinear polynomial control systems (in brief, linear state-space approaches → nonlinear polynomial plants). The overall aim of the study is to establish a framework, defined as the U-block model, which provides a generic prototype for using linear state-space-based approaches to design control systems for smooth nonlinear plants/processes described by polynomial models. To analyse feasibility and effectiveness, the sliding mode control design approach is selected as an exemplary case study. Numerical simulation studies provide a user-friendly step-by-step procedure for readers/users interested in ad hoc applications. Formally, this is the first paper to present U-model-oriented control system design in a formal way and to study the associated properties and theorems. The previous publications, in the main, have been algorithm-based studies and simulation demonstrations. In some sense, this paper can be treated as a landmark for U-model-based research, moving from the intuitive/heuristic stage to rigorous/formal/comprehensive studies.
Iotti, Robert
2015-04-01
ITER is an international experimental facility being built by seven Parties to demonstrate the long term potential of fusion energy. The ITER Joint Implementation Agreement (JIA) defines the structure and governance model of such cooperation. There are a number of necessary conditions for such international projects to be successful: a complete design, strong systems engineering working with an agreed set of requirements, an experienced organization with systems and plans in place to manage the project, a cost estimate backed by industry, and someone in charge. Unfortunately for ITER many of these conditions were not present. The paper discusses the priorities in the JIA which led to setting up the project with a Central Integrating Organization (IO) in Cadarache, France as the ITER HQ, and seven Domestic Agencies (DAs) located in the countries of the Parties, responsible for delivering 90%+ of the project hardware as Contributions-in-Kind and also financial contributions to the IO, as "Contributions-in-Cash." Theoretically the Director General (DG) is responsible for everything. In practice the DG does not have the power to control the work of the DAs, and there is not an effective management structure enabling the IO and the DAs to arbitrate disputes, so the project is not really managed, but is a loose collaboration of competing interests. Any DA can effectively block a decision reached by the DG. Inefficiencies in completing design while setting up a competent organization from scratch contributed to the delays and cost increases during the initial few years. So did the fact that the original estimate was not developed from industry input. Unforeseen inflation and market demand on certain commodities/materials further exacerbated the cost increases. Since then, improvements are debatable. Does this mean that the governance model of ITER is a wrong model for international scientific cooperation? I do not believe so. Had the necessary conditions for success
Solving the Rational Polynomial Coefficients Based on L Curve
Zhou, G.; Li, X.; Yue, T.; Huang, W.; He, C.; Huang, Y.
2018-05-01
The rational polynomial coefficients (RPC) model is a generalized sensor model that can achieve high approximation accuracy, and it is widely used in the field of photogrammetry and remote sensing. The least squares method is usually used to determine the optimal parameter solution of the rational function model. However, when the distribution of control points is not uniform or the model is over-parameterized, the coefficient matrix of the normal equation becomes singular and the normal equation becomes ill-conditioned; the obtained solutions are then extremely unstable or even wrong. Tikhonov regularization can effectively solve such ill-conditioned equations. In this paper, we solve the ill-conditioned equations by the regularization method and determine the regularization parameter by the L-curve. The results of the experiments on aerial frame photos show that the first-order RPC with equal denominators has the highest accuracy. A high-order RPC model is not necessary when dealing with frame images, as the RPC model and the projective model are almost the same. The result shows that the first-order RPC model is basically consistent with the rigorous sensor model of photogrammetry. Orthorectification results of both the first-order RPC model and the Camera Model (ERDAS 9.2 platform) are similar to each other, and the maximum residuals of X and Y are 0.8174 feet and 0.9272 feet, respectively. This result shows that the RPC model can be used as a replacement sensor model in aerial photogrammetric processing.
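A hedged sketch of the numerical idea (Tikhonov regularization of an ill-conditioned normal equation, with a crude L-curve corner heuristic), on a small synthetic polynomial-fitting problem rather than an actual RPC system; all data, sizes, and the corner rule are illustrative, and NumPy is assumed to be available.

```python
# Tikhonov regularization with an L-curve parameter choice (illustrative).
import numpy as np

rng = np.random.default_rng(0)
# A Vandermonde design matrix gives an ill-conditioned A^T A
A = np.vander(np.linspace(0, 1, 20), 6, increasing=True)
x_true = np.array([1.0, -2.0, 3.0, 0.5, -1.0, 2.0])
b = A @ x_true + 1e-3 * rng.standard_normal(20)

lams = np.logspace(-8, 1, 40)
res_norm, sol_norm, sols = [], [], []
for lam in lams:
    # Tikhonov: minimize ||Ax - b||^2 + lam^2 ||x||^2
    x = np.linalg.solve(A.T @ A + lam**2 * np.eye(A.shape[1]), A.T @ b)
    sols.append(x)
    res_norm.append(np.linalg.norm(A @ x - b))
    sol_norm.append(np.linalg.norm(x))

# Crude corner rule: point of the normalised log-log L-curve nearest the
# origin (a simple stand-in for the maximum-curvature criterion).
r = np.log(res_norm); s = np.log(sol_norm)
r = (r - r.min()) / (r.max() - r.min())
s = (s - s.min()) / (s.max() - s.min())
corner = int(np.argmin(r**2 + s**2))
lam_star = lams[corner]
```

The residual norm grows and the solution norm shrinks as the regularization parameter increases; the corner balances the two, which is the role the L-curve plays in the paper.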
Directory of Open Access Journals (Sweden)
Arun Kaintura
2018-02-01
Advances in manufacturing process technology are key ensembles for the production of integrated circuits in the sub-micrometer region. It is of paramount importance to assess the effects of tolerances in the manufacturing process on the performance of modern integrated circuits. The polynomial chaos expansion has emerged as a suitable alternative to standard Monte Carlo-based methods that are accurate, but computationally cumbersome. This paper provides an overview of the most recent developments and challenges in the application of polynomial chaos-based techniques for uncertainty quantification in integrated circuits, with particular focus on high-dimensional problems.
International Nuclear Information System (INIS)
Feng Yi-Fu; Zhang Qing-Ling; Feng De-Zhi
2012-01-01
The global stability problem of Takagi-Sugeno (T-S) fuzzy Hopfield neural networks (FHNNs) with time delays is investigated. Novel LMI-based stability criteria are obtained by using Lyapunov functional theory to guarantee the asymptotic stability of the FHNNs with less conservatism. Firstly, using both Finsler's lemma and an improved homogeneous matrix polynomial technique, and applying an affine parameter-dependent Lyapunov-Krasovskii functional, we obtain the convergent LMI-based stability criteria. Algebraic properties of the fuzzy membership functions in the unit simplex are considered in the process of stability analysis via the homogeneous matrix polynomial technique. Secondly, to further reduce the conservatism, a new technique for introducing right-hand-side slack variables is also proposed in terms of LMIs, which is suitable for the homogeneous matrix polynomial setting. Finally, two illustrative examples are given to show the efficiency of the proposed approaches.
Mahmood, Zahid; Ning, Huansheng; Ghafoor, AtaUllah
2017-03-24
Wireless Sensor Networks (WSNs) consist of lightweight devices that measure sensitive data and are highly vulnerable to security attacks due to their constrained resources. In a similar manner, the internet-based lightweight devices used in the Internet of Things (IoT) face severe security and privacy issues because of the direct accessibility of devices connected to the internet. Complex and resource-intensive security schemes are infeasible and reduce the network lifetime. In this regard, we have explored polynomial distribution-based key establishment schemes and identified the issue that the resulting polynomial values are either storage-intensive or infeasible to compute when large values are multiplied. This becomes more costly when these polynomials are regenerated dynamically after each node join or leave operation and whenever the key is refreshed. To reduce the computation, we have proposed an Efficient Key Management (EKM) scheme for multiparty communication-based scenarios. The proposed session key management protocol is established by applying a symmetric polynomial for group members, with the group head acting as the responsible node. The polynomial generation method uses security credentials and a secure hash function. The symmetric cryptographic parameters are efficient in computation, communication, and storage. The security justification of the proposed scheme has been completed using Rubin logic, which guarantees that the protocol strongly attains mutual authentication and the session key agreement property among the participating entities. Simulation scenarios are performed using NS 2.35 to validate the results for storage, communication, latency, energy, and polynomial calculation costs during the authentication, session key generation, node migration, secure joining, and leaving phases. EKM is efficient regarding storage, computation, and communication overhead and can protect WSN-based IoT infrastructure.
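A hedged sketch of the symmetric-polynomial primitive behind polynomial distribution-based key establishment (a Blundo-style bivariate polynomial, not the paper's EKM protocol): a dealer holds a symmetric polynomial f(x, y), each node u receives the univariate share g_u(y) = f(u, y), and two nodes derive the same pairwise key because f(u, v) = f(v, u). The prime, degree, and node identifiers are illustrative.

```python
# Symmetric bivariate polynomial key predistribution (Blundo-style sketch).
import random

P = 2**31 - 1          # public prime (illustrative)
T = 3                  # degree: collusion of up to T nodes is tolerated

random.seed(7)
# symmetric coefficient matrix a[i][j] == a[j][i] defines f(x, y)
a = [[0] * (T + 1) for _ in range(T + 1)]
for i in range(T + 1):
    for j in range(i, T + 1):
        a[i][j] = a[j][i] = random.randrange(P)

def f(x, y):
    return sum(a[i][j] * pow(x, i, P) * pow(y, j, P)
               for i in range(T + 1) for j in range(T + 1)) % P

def share(node_id):
    """The dealer gives node u the coefficients of g_u(y) = f(u, y)."""
    return [sum(a[i][j] * pow(node_id, i, P) for i in range(T + 1)) % P
            for j in range(T + 1)]

def pairwise_key(my_share, other_id):
    """Node u evaluates g_u(v) to get the key shared with node v."""
    return sum(c * pow(other_id, j, P) for j, c in enumerate(my_share)) % P
```

Each node stores only T + 1 field elements, which is the storage/computation trade-off the abstract refers to.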
Sparse grid-based polynomial chaos expansion for aerodynamics of an airfoil with uncertainties
Directory of Open Access Journals (Sweden)
Xiaojing WU
2018-05-01
Uncertainties can generate fluctuations in aerodynamic characteristics. Uncertainty Quantification (UQ) is applied to compute their impact on the aerodynamic characteristics. In addition, the contribution of each uncertainty to the aerodynamic characteristics should be computed by uncertainty sensitivity analysis. Non-Intrusive Polynomial Chaos (NIPC) has been successfully applied to uncertainty quantification and uncertainty sensitivity analysis. However, the non-intrusive polynomial chaos method becomes inefficient as the number of random variables adopted to describe uncertainties increases. This deficiency becomes significant in stochastic aerodynamic analysis considering geometric uncertainty, because the description of geometric uncertainty generally needs many parameters. To address this deficiency, a Sparse Grid-based Polynomial Chaos (SGPC) expansion is used to perform uncertainty quantification and sensitivity analysis for stochastic aerodynamic analysis considering geometric and operational uncertainties. It is proved that the method is more efficient than non-intrusive polynomial chaos and the Monte Carlo Simulation (MCS) method for stochastic aerodynamic analysis. By uncertainty quantification, it can be learnt that the flow characteristics of shock wave and boundary layer separation are sensitive to the geometric uncertainty in the transonic region. The uncertainty sensitivity analysis reveals the individual and coupled effects among the uncertainty parameters. Keywords: Non-intrusive polynomial chaos, Sparse grid, Stochastic aerodynamic analysis, Uncertainty sensitivity analysis, Uncertainty quantification
Adaptive method for multi-dimensional integration and selection of a base of chaos polynomials
International Nuclear Information System (INIS)
Crestaux, T.
2011-01-01
This research thesis addresses the propagation of uncertainty in numerical simulations and its treatment within a probabilistic framework by a functional approach based on random variable functions. The author reports the use of the spectral method to represent random variables by polynomial chaos expansions. More precisely, the author uses the method of non-intrusive projection, which uses the orthogonality of the chaos polynomials to compute the expansion coefficients by approximation of scalar products. The approach is applied to a cavity and to waste storage.
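The non-intrusive projection step can be sketched in a few lines: the PC coefficients are scalar products E[g(X)·He_k(X)]/‖He_k‖², approximated here by Gauss-Hermite quadrature for a standard normal input. This is a generic one-dimensional illustration (NumPy assumed available), not the thesis's adaptive multi-dimensional method.

```python
# Non-intrusive projection: PC coefficients of Y = g(X), X ~ N(0, 1),
# via Gauss-Hermite quadrature (probabilists' Hermite basis He_k).
import numpy as np
from numpy.polynomial import hermite_e as He
from math import factorial, sqrt, pi

def pc_coefficients(g, order, n_quad=20):
    x, w = He.hermegauss(n_quad)     # nodes/weights for weight exp(-x^2/2)
    w = w / sqrt(2 * pi)             # normalise to the N(0,1) density
    gx = g(x)
    # c_k = E[g(X) He_k(X)] / k!   (He_k has squared norm k! under N(0,1))
    return [np.sum(w * gx * He.hermeval(x, [0] * k + [1])) / factorial(k)
            for k in range(order + 1)]
```

For g(x) = x², the exact expansion is x² = He_0(x) + He_2(x), which the quadrature recovers to machine precision.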
M-Polynomial and Degree-Based Topological Indices of Polyhex Nanotubes
Directory of Open Access Journals (Sweden)
Mobeen Munir
2016-12-01
The discovery of new nanomaterials adds new dimensions to industry, electronics, and pharmaceutical and biological therapeutics. In this article, we first find closed forms of M-polynomials of polyhex nanotubes. We also compute closed forms of various degree-based topological indices of these tubes. These indices are numerical quantities that often depict quantitative structure-activity/property/toxicity relationships and correlate certain physico-chemical properties, such as boiling point, stability, and strain energy, of the respective nanomaterials. To conclude, we plot surfaces associated to the M-polynomials and characterize some facts about these tubes.
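A hedged pure-Python sketch of what an M-polynomial is: M(G; x, y) = Σ m_ij x^i y^j, where m_ij counts edges whose endpoints have degrees i and j, and degree-based indices such as the first Zagreb index are read off the same edge partition. A small path graph stands in for the nanotube graphs of the paper.

```python
# M-polynomial edge partition and a degree-based index derived from it.
from collections import Counter

def m_polynomial_terms(edges):
    """Map (i, j) with i <= j to m_ij, the count of edges whose endpoint
    degrees are i and j; these are the coefficients of M(G; x, y)."""
    deg = Counter(v for e in edges for v in e)
    return Counter(tuple(sorted((deg[u], deg[v]))) for u, v in edges)

def first_zagreb(edges):
    """M1(G) = sum over edges of (deg u + deg v); obtainable from
    M(G; x, y) as (D_x + D_y) M evaluated at x = y = 1."""
    return sum((i + j) * m
               for (i, j), m in m_polynomial_terms(edges).items())
```

For the path on four vertices the edge partition is m_12 = 2, m_22 = 1, so M(G; x, y) = 2xy² + x²y² and M1 = 10.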
A comparison of high-order polynomial and wave-based methods for Helmholtz problems
Lieu, Alice; Gabard, Gwénaël; Bériot, Hadrien
2016-09-01
The application of computational modelling to wave propagation problems is hindered by the dispersion error introduced by the discretisation. Two common strategies to address this issue are to use high-order polynomial shape functions (e.g. hp-FEM), or to use physics-based, or Trefftz, methods where the shape functions are local solutions of the problem (typically plane waves). Both strategies have been actively developed over the past decades and both have demonstrated their benefits compared to conventional finite-element methods, but they have yet to be compared. In this paper a high-order polynomial method (p-FEM with Lobatto polynomials) and the wave-based discontinuous Galerkin method are compared for two-dimensional Helmholtz problems. A number of different benchmark problems are used to perform a detailed and systematic assessment of the relative merits of these two methods in terms of interpolation properties, performance and conditioning. It is generally assumed that a wave-based method naturally provides better accuracy compared to polynomial methods since the plane waves or Bessel functions used in these methods are exact solutions of the Helmholtz equation. Results indicate that this expectation does not necessarily translate into a clear benefit, and that the differences in performance, accuracy and conditioning are more nuanced than generally assumed. The high-order polynomial method can in fact deliver comparable, and in some cases superior, performance compared to the wave-based DGM. In addition to benchmarking the intrinsic computational performance of these methods, a number of practical issues associated with realistic applications are also discussed.
Influence of surface error on electromagnetic performance of reflectors based on Zernike polynomials
Li, Tuanjie; Shi, Jiachen; Tang, Yaqiong
2018-04-01
This paper investigates the influence of surface error distribution on the electromagnetic performance of antennas. The normalized Zernike polynomials are used to describe a smooth and continuous deformation surface. Based on geometrical optics and a piecewise linear fitting method, the electrical performance of a reflector described by the Zernike polynomials is derived to reveal the relationship between surface error distribution and electromagnetic performance. Then a relation database between surface figure and electrical performance is built for ideal and deformed surfaces to enable rapid calculation of far-field electrical performance. A simulation analysis of the influence of Zernike polynomials on the electrical properties of an axisymmetric reflector, with an axial-mode helical antenna as feed, is conducted to verify the correctness of the proposed method. Finally, the influence rules of surface error distribution on electromagnetic performance are summarized. The simulation results show that some terms of the Zernike polynomials may decrease the amplitude of the main lobe of the antenna pattern, and some may reduce the pointing accuracy. This work provides a new approach for adjusting the reflector's shape in the manufacturing process.
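For readers unfamiliar with the basis, a minimal sketch of the radial part R_n^m of the Zernike polynomials, using the standard explicit factorial formula (unnormalized; the paper uses the normalized polynomials, which differ only by a scale factor).

```python
# Radial Zernike polynomial R_n^m via the explicit factorial formula.
from math import factorial

def zernike_radial(n, m, rho):
    """R_n^m(rho) for 0 <= m <= n; zero when n - m is odd."""
    if (n - m) % 2:
        return 0.0
    return sum((-1) ** k * factorial(n - k)
               / (factorial(k)
                  * factorial((n + m) // 2 - k)
                  * factorial((n - m) // 2 - k))
               * rho ** (n - 2 * k)
               for k in range((n - m) // 2 + 1))
```

For example R_2^0(ρ) = 2ρ² − 1 (defocus) and R_n^n(ρ) = ρ^n; every radial polynomial equals 1 at ρ = 1.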
Discrimination Power of Polynomial-Based Descriptors for Graphs by Using Functional Matrices.
Dehmer, Matthias; Emmert-Streib, Frank; Shi, Yongtang; Stefu, Monica; Tripathi, Shailesh
2015-01-01
In this paper, we study the discrimination power of graph measures that are based on graph-theoretical matrices. The paper generalizes the work of [M. Dehmer, M. Moosbrugger, Y. Shi, Encoding structural information uniquely with polynomial-based descriptors by employing the Randić matrix, Applied Mathematics and Computation, 268 (2015), 164-168]. We demonstrate that by using the new functional matrix approach, exhaustively generated graphs can be discriminated more uniquely than shown in the mentioned previous work.
A Fast lattice-based polynomial digital signature system for m-commerce
Wei, Xinzhou; Leung, Lin; Anshel, Michael
2003-01-01
Privacy and data integrity are not guaranteed in current wireless communications due to the security hole inside the Wireless Application Protocol (WAP) version 1.2 gateway. One of the remedies is to provide end-to-end security in m-commerce by applying application-level security on top of current WAP 1.2. Traditional security technologies like RSA and ECC applied on an enterprise's server are not practical for wireless devices, because wireless devices have relatively weak computation power and limited memory compared with servers. In this paper, we developed a lattice-based polynomial digital signature system based on NTRU's Polynomial Authentication and Signature Scheme (PASS), which enables the feasibility of applying high-level security on both the server and wireless device sides.
Freud, Géza
1971-01-01
Orthogonal Polynomials contains an up-to-date survey of the general theory of orthogonal polynomials. It deals with the problem of polynomials and reveals that the sequence of these polynomials forms an orthogonal system with respect to a non-negative m-distribution defined on the real numerical axis. Comprised of five chapters, the book begins with the fundamental properties of orthogonal polynomials. After discussing the moment problem, it then explains the quadrature procedure, the convergence theory, and G. Szegő's theory. This book is useful for those who intend to use it as a reference.
ISAR Imaging of Maneuvering Targets Based on the Modified Discrete Polynomial-Phase Transform
Directory of Open Access Journals (Sweden)
Yong Wang
2015-09-01
Inverse synthetic aperture radar (ISAR) imaging of a maneuvering target is a challenging task in the field of radar signal processing. The azimuth echo can be characterized as a multi-component polynomial phase signal (PPS) after translational compensation, and high-quality ISAR images can be obtained by estimating its parameters combined with the Range-Instantaneous-Doppler (RID) technique. In this paper, a novel parameter estimation algorithm for the multi-component PPS of order three (cubic phase signal, CPS) based on the modified discrete polynomial-phase transform (MDPT) is proposed, and the corresponding new ISAR imaging algorithm is presented. This algorithm is efficient and accurate in generating a focused ISAR image, and results on real data demonstrate its effectiveness.
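A hedged sketch of the plain (unmodified) DPT idea for a single noise-free cubic phase signal: the third-order transform DP3[s](n; τ) = s(n)·s*(n−τ)²·s(n−2τ) has a phase linear in n with slope 6·a3·τ², so the cubic coefficient falls out of a frequency estimate. All signal parameters are illustrative; the paper's MDPT and multi-component handling are not reproduced.

```python
# Cubic phase coefficient estimation via the third-order DPT (sketch).
import cmath

N, tau = 256, 64
a3 = 1.2e-6                      # true cubic coefficient (rad / sample^3)
s = [cmath.exp(1j * (0.3 + 0.01 * n + 2e-4 * n * n + a3 * n**3))
     for n in range(N)]

# DP3[s](n; tau): lower-order phase terms cancel to constants, leaving a
# complex sinusoid with angular frequency 6 * a3 * tau^2
dp3 = [s[n] * s[n - tau].conjugate() ** 2 * s[n - 2 * tau]
       for n in range(2 * tau, N)]

# average phase increment of consecutive samples -> the slope
acc = sum(b * a.conjugate() for a, b in zip(dp3, dp3[1:]))
a3_hat = cmath.phase(acc) / (6 * tau**2)
```

The delay τ trades estimation accuracy against the unambiguous range of 6·a3·τ², which is one of the issues modified DPT variants address.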
Narimani, Mohammand; Lam, H K; Dilmaghani, R; Wolfe, Charles
2011-06-01
Relaxed linear-matrix-inequality-based stability conditions for fuzzy-model-based control systems with imperfect premise matching are proposed. First, the derivative of the Lyapunov function, containing the product terms of the fuzzy model and fuzzy controller membership functions, is derived. Then, in the partitioned operating domain of the membership functions, the relations between the state variables and the mentioned product terms are represented by approximated polynomials in each subregion. Next, the stability conditions containing the information of all subsystems and the approximated polynomials are derived. In addition, the concept of the S-procedure is utilized to release the conservativeness caused by considering the whole operating region for approximated polynomials. It is shown that the well-known stability conditions can be special cases of the proposed stability conditions. Simulation examples are given to illustrate the validity of the proposed approach.
Sraj, Ihab
2016-08-26
The authors present a polynomial chaos (PC)-based Bayesian inference method for quantifying the uncertainties of the K-profile parameterization (KPP) within the MIT general circulation model (MITgcm) of the tropical Pacific. The inference of the uncertain parameters is based on a Markov chain Monte Carlo (MCMC) scheme that utilizes a newly formulated test statistic taking into account the different components representing the structures of turbulent mixing on both daily and seasonal time scales in addition to the data quality, and filters for the effects of parameter perturbations over those as a result of changes in the wind. To avoid the prohibitive computational cost of integrating the MITgcm model at each MCMC iteration, a surrogate model for the test statistic using the PC method is built. Because of the noise in the model predictions, a basis-pursuit-denoising (BPDN) compressed sensing approach is employed to determine the PC coefficients of a representative surrogate model. The PC surrogate is then used to evaluate the test statistic in the MCMC step for sampling the posterior of the uncertain parameters. Results of the posteriors indicate good agreement with the default values for two parameters of the KPP model, namely the critical bulk and gradient Richardson numbers; while the posteriors of the remaining parameters were barely informative. © 2016 American Meteorological Society.
Mahalanobis Distance Based Iterative Closest Point
DEFF Research Database (Denmark)
Hansen, Mads Fogtmann; Blas, Morten Rufus; Larsen, Rasmus
2007-01-01
the notion of a mahalanobis distance map upon a point set with associated covariance matrices which in addition to providing correlation weighted distance implicitly provides a method for assigning correspondence during alignment. This distance map provides an easy formulation of the ICP problem that permits...... a fast optimization. Initially, the covariance matrices are set to the identity matrix, and all shapes are aligned to a randomly selected shape (equivalent to standard ICP). From this point the algorithm iterates between the steps: (a) obtain mean shape and new estimates of the covariance matrices from...... the aligned shapes, (b) align shapes to the mean shape. Three different methods for estimating the mean shape with associated covariance matrices are explored in the paper. The proposed methods are validated experimentally on two separate datasets (IMM face dataset and femur-bones). The superiority of ICP...
Zernike polynomial based Rayleigh-Ritz model of a piezoelectric unimorph deformable mirror
CSIR Research Space (South Africa)
Long, CS
2012-04-01
, are routinely and conveniently described using Zernike polynomials. A Rayleigh-Ritz structural model, which uses Zernike polynomials directly to describe the displacements, is proposed in this paper. The proposed formulation produces a numerically inexpensive...
A new subspace based approach to iterative learning control
Nijsse, G.; Verhaegen, M.; Doelman, N.J.
2001-01-01
This paper presents an iterative learning control (ILC) procedure based on an inverse model of the plant under control. Our first contribution is that we formulate the inversion procedure as a Kalman smoothing problem: based on a compact state space model of a possibly non-minimum phase system,
Bayesian inference of earthquake parameters from buoy data using a polynomial chaos-based surrogate
Giraldi, Loic
2017-04-07
This work addresses the estimation of the parameters of an earthquake model by the consequent tsunami, with an application to the Chile 2010 event. We are particularly interested in the Bayesian inference of the location, the orientation, and the slip of an Okada-based model of the earthquake ocean floor displacement. The tsunami numerical model is based on the GeoClaw software while the observational data is provided by a single DART® buoy. We propose in this paper a methodology based on polynomial chaos expansion to construct a surrogate model of the wave height at the buoy location. A correlated noise model is first proposed in order to represent the discrepancy between the computational model and the data. This step is necessary, as a classical independent Gaussian noise is shown to be unsuitable for modeling the error, and to prevent convergence of the Markov Chain Monte Carlo sampler. Second, the polynomial chaos model is subsequently improved to handle the variability of the arrival time of the wave, using a preconditioned non-intrusive spectral method. Finally, the construction of a reduced model dedicated to Bayesian inference is proposed. Numerical results are presented and discussed.
Spline-based high-accuracy piecewise-polynomial phase-to-sinusoid amplitude converters.
Petrinović, Davor; Brezović, Marko
2011-04-01
We propose a method for direct digital frequency synthesis (DDS) using a cubic spline piecewise-polynomial model for a phase-to-sinusoid amplitude converter (PSAC). This method offers maximum smoothness of the output signal. Closed-form expressions for the cubic polynomial coefficients are derived in the spectral domain and the performance analysis of the model is given in the time and frequency domains. We derive the closed-form performance bounds of such DDS using conventional metrics: rms and maximum absolute errors (MAE) and maximum spurious free dynamic range (SFDR) measured in the discrete time domain. The main advantages of the proposed PSAC are its simplicity, analytical tractability, and inherent numerical stability for high table resolutions. Detailed guidelines for a fixed-point implementation are given, based on the algebraic analysis of all quantization effects. The results are verified on 81 PSAC configurations with the output resolutions from 5 to 41 bits by using a bit-exact simulation. The VHDL implementation of a high-accuracy DDS based on the proposed PSAC with 28-bit input phase word and 32-bit output value achieves SFDR of its digital output signal between 180 and 207 dB, with a signal-to-noise ratio of 192 dB. Its implementation requires only one 18 kB block RAM and three 18-bit embedded multipliers in a typical field-programmable gate array (FPGA) device. © 2011 IEEE
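A hedged sketch of the piecewise-polynomial PSAC idea: the sine period is split into segments, each approximated by a cubic polynomial matching value and derivative at the knots (a cubic Hermite construction, spline-like but not the paper's spectrally optimised spline coefficients; segment count and precision are illustrative, and the table is computed on the fly rather than in fixed point).

```python
# Piecewise-cubic phase-to-sinusoid amplitude converter (sketch).
import math

SEGS = 64                       # table segments per 2*pi period
H = 2 * math.pi / SEGS

def psac(phase):
    """Approximate sin(phase) for phase in [0, 2*pi)."""
    k = int(phase // H)
    t = (phase - k * H) / H              # local coordinate in [0, 1)
    y0, y1 = math.sin(k * H), math.sin((k + 1) * H)
    d0, d1 = H * math.cos(k * H), H * math.cos((k + 1) * H)
    # cubic Hermite basis functions
    h00 = (1 + 2 * t) * (1 - t) ** 2
    h10 = t * (1 - t) ** 2
    h01 = t * t * (3 - 2 * t)
    h11 = t * t * (t - 1)
    return h00 * y0 + h10 * d0 + h01 * y1 + h11 * d1

err = max(abs(psac(2 * math.pi * i / 4096) - math.sin(2 * math.pi * i / 4096))
          for i in range(4096))
```

With 64 segments the maximum error is of order h⁴/384 ≈ 2e-7, which illustrates why small cubic tables can reach the high spurious-free dynamic ranges reported in the paper.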
Lam, Hak-Keung
2016-01-01
This book presents recent research on the stability analysis of polynomial-fuzzy-model-based control systems where the concept of partially/imperfectly matched premises and membership-function-dependent analysis are considered. The membership-function-dependent analysis offers a new research direction for fuzzy-model-based control systems by taking into account the characteristics and information of the membership functions in the stability analysis. The book presents, on a research level, the most recent and advanced research results, promotes the research of polynomial-fuzzy-model-based control systems, and provides theoretical support and points a research direction to postgraduate students and fellow researchers. Each chapter provides numerical examples to verify the analysis results, demonstrate the effectiveness of the proposed polynomial fuzzy control schemes, and explain the design procedure. The book is comprehensively written, enclosing detailed derivation steps and mathematical derivations also for read...
Complex Polynomial Vector Fields
DEFF Research Database (Denmark)
The two branches of dynamical systems, continuous and discrete, correspond to the study of differential equations (vector fields) and iteration of mappings respectively. In holomorphic dynamics, the systems studied are restricted to those described by holomorphic (complex analytic) functions...... or meromorphic (allowing poles as singularities) functions. There already exists a well-developed theory for iterative holomorphic dynamical systems, and successful relations found between iteration theory and flows of vector fields have been one of the main motivations for the recent interest in holomorphic...... vector fields. Since the class of complex polynomial vector fields in the plane is natural to consider, it is remarkable that its study has only begun very recently. There are numerous fundamental questions that are still open, both in the general classification of these vector fields, the decomposition...
Noda, Y; Goshima, S; Nagata, S; Miyoshi, T; Kawada, H; Kawai, N; Tanahashi, Y; Matsuo, M
2018-06-01
To compare right adrenal vein (RAV) visualisation and the degree of contrast enhancement on adrenal venous phase images reconstructed using adaptive statistical iterative reconstruction (ASiR) and model-based iterative reconstruction (MBIR) techniques. This prospective study was approved by the institutional review board, and written informed consent was waived. Fifty-seven consecutive patients who underwent adrenal venous phase imaging were enrolled. The same raw data were reconstructed using ASiR 40% and MBIR. An expert and a beginner independently reviewed the computed tomography (CT) images. RAV visualisation rates, background noise, and CT attenuation of the RAV, right adrenal gland, inferior vena cava (IVC), hepatic vein, and bilateral renal veins were compared between the two reconstruction techniques. RAV visualisation rates were higher with MBIR than with ASiR (95% versus 88%, p=0.13 for the expert and 93% versus 75%, p=0.002 for the beginner, respectively). RAV visualisation confidence ratings were significantly greater, and background noise significantly lower, with MBIR than with ASiR; CT attenuation of the RAV and right adrenal gland was significantly greater with MBIR than with ASiR (p=0.0013 and 0.02). Reconstruction of adrenal venous phase images using MBIR significantly reduces background noise, leading to an improvement in RAV visualisation compared with ASiR. Copyright © 2018 The Royal College of Radiologists. Published by Elsevier Ltd. All rights reserved.
Sum-of-squares based observer design for polynomial systems with a known fixed time delay
Czech Academy of Sciences Publication Activity Database
Rehák, Branislav
2015-01-01
Roč. 51, č. 5 (2015), s. 858-873 ISSN 0023-5954 R&D Projects: GA ČR GA13-02149S Institutional support: RVO:67985556 Keywords : sum-of-squares polynomial * observer * polynomial system Subject RIV: BC - Control Systems Theory Impact factor: 0.628, year: 2015 http://www.kybernetika.cz/content/2015/5/856
Beheshti, Alireza
2018-03-01
The contribution addresses the finite element analysis of bending of plates given the Kirchhoff-Love model. To analyze the static deformation of plates with different loadings and geometries, the principle of virtual work is used to extract the weak form. Following deriving the strain field, stresses and resultants may be obtained. For constructing four-node quadrilateral plate elements, the Hermite polynomials defined with respect to the variables in the parent space are applied explicitly. Based on the approximated field of displacement, the stiffness matrix and the load vector in the finite element method are obtained. To demonstrate the performance of the subparametric 4-node plate elements, some known, classical examples in structural mechanics are solved and there are comparisons with the analytical solutions available in the literature.
Krishnamoorthi, R; Anna Poorani, G
2016-01-01
Iris normalization is an important stage in any iris biometric system, as it tends to reduce the consequences of iris distortion. To compensate for the variation in the size of the iris owing to pupil stretching or enlargement during the acquisition process and to the camera-to-eyeball distance, two normalization schemes have been proposed in this work. In the first method, the iris region of interest is normalized by converting the iris into a variable-size rectangular model in order to avoid undersampling near the limbus border. In the second method, the iris region of interest is normalized by converting the iris region into a fixed-size rectangular model in order to avoid dimensional discrepancies between eye images. The performance of the proposed normalization methods is evaluated with orthogonal-polynomials-based iris recognition in terms of FAR, FRR, GAR, CRR, and EER.
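The two rectangular models can be sketched as a Daugman-style rubber-sheet unwrapping. This is a generic illustration with simplifying assumptions (circular boundaries, nearest-neighbour sampling), not the authors' exact scheme, and all names are hypothetical:

```python
import numpy as np

def unwrap_iris(image, center, r_pupil, r_iris, n_theta=256, n_r=None):
    """Daugman-style rubber-sheet unwrapping of the iris annulus.

    n_r=None: the radial sample count follows the annulus width, giving
    the variable-size rectangular model; passing a fixed n_r gives the
    fixed-size model.  Circular boundaries and nearest-neighbour
    sampling are simplifying assumptions of this sketch.
    """
    if n_r is None:
        n_r = max(1, int(round(r_iris - r_pupil)))   # variable-size model
    thetas = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
    radii = np.linspace(r_pupil, r_iris, n_r)
    cx, cy = center
    xs = (cx + radii[:, None] * np.cos(thetas[None, :])).round().astype(int)
    ys = (cy + radii[:, None] * np.sin(thetas[None, :])).round().astype(int)
    xs = np.clip(xs, 0, image.shape[1] - 1)
    ys = np.clip(ys, 0, image.shape[0] - 1)
    return image[ys, xs]                             # shape (n_r, n_theta)
```

With the variable-size model the output height tracks the annulus width of each eye; with a fixed n_r every eye maps to the same grid, avoiding dimensional discrepancies between images.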
Golden, Ryan; Cho, Ilwoo
2015-01-01
In this paper, we study structure theorems of algebras of symmetric functions. Based on a certain relation on the elementary symmetric polynomials generating such algebras, we consider perturbations in the algebras. In particular, we understand generators of the algebras as perturbations. From such perturbations, we define injective maps on generators, which induce algebra-monomorphisms (or embeddings) on the algebras. They provide inductive structure theorems on algebras of symmetric polynomials. As...
Directory of Open Access Journals (Sweden)
Liyun Su
2012-01-01
Full Text Available We introduce the extension of local polynomial fitting to the linear heteroscedastic regression model. First, local polynomial fitting is applied to estimate the heteroscedastic function, and then the coefficients of the regression model are obtained by the generalized least squares method. One noteworthy feature of our approach is that we avoid testing for heteroscedasticity by improving the traditional two-stage method. Owing to the nonparametric technique of local polynomial estimation, we do not need to know the form of the heteroscedastic function. Therefore, we can improve the estimation precision when the heteroscedastic function is unknown. Furthermore, we focus on the comparison of parameters and reach an optimal fitting. Besides, we verify the asymptotic normality of the parameters based on numerical simulations. Finally, the approach is applied to a case in economics, and it indicates that our method is surely effective in finite-sample situations.
Complex Polynomial Vector Fields
DEFF Research Database (Denmark)
Dias, Kealey
vector fields. Since the class of complex polynomial vector fields in the plane is natural to consider, it is remarkable that its study has only begun very recently. There are numerous fundamental questions that are still open, both in the general classification of these vector fields, the decomposition...... of parameter spaces into structurally stable domains, and a description of the bifurcations. For this reason, the talk will focus on these questions for complex polynomial vector fields.......The two branches of dynamical systems, continuous and discrete, correspond to the study of differential equations (vector fields) and iteration of mappings respectively. In holomorphic dynamics, the systems studied are restricted to those described by holomorphic (complex analytic) functions...
A novel block cryptosystem based on iterating a chaotic map
International Nuclear Information System (INIS)
Xiang Tao; Liao Xiaofeng; Tang Guoping; Chen Yong; Wong, Kwok-wo
2006-01-01
A block cryptographic scheme based on iterating a chaotic map is proposed. With random binary sequences generated from the real-valued chaotic map, the plaintext block is permuted by a key-dependent shift approach and then encrypted by the classical chaotic masking technique. Simulation results show that the performance and security of the proposed cryptographic scheme are better than those of existing algorithms. Advantages and security of our scheme are also discussed in detail.
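The scheme's two ingredients, a key-dependent shift permutation followed by chaotic masking, can be illustrated with a logistic-map keystream. This is a toy sketch assuming a logistic map and threshold binarisation, which the cited scheme does not necessarily use, and it is not secure cryptography:

```python
def logistic_bits(x0, r=3.99, n=64, skip=100):
    """Binary keystream from iterating the real-valued logistic map x <- r*x*(1-x).

    Threshold binarisation at 0.5 is an assumption of this sketch.
    """
    x = x0
    for _ in range(skip):              # discard the transient
        x = r * x * (1.0 - x)
    bits = []
    for _ in range(n):
        x = r * x * (1.0 - x)
        bits.append(1 if x >= 0.5 else 0)
    return bits

def encrypt_block(block, x0):
    """Key-dependent cyclic shift of the plaintext bits, then XOR masking."""
    ks = logistic_bits(x0, n=len(block))
    shift = sum(ks) % len(block)       # shift derived from the keystream
    permuted = block[shift:] + block[:shift]
    return [b ^ k for b, k in zip(permuted, ks)]

def decrypt_block(cipher, x0):
    """Invert the masking, then invert the cyclic shift."""
    ks = logistic_bits(x0, n=len(cipher))
    shift = sum(ks) % len(cipher)
    permuted = [c ^ k for c, k in zip(cipher, ks)]
    return permuted[-shift:] + permuted[:-shift] if shift else permuted
```

Round-tripping a block recovers the plaintext; the security claims in the abstract rest on the full scheme, not on this toy.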
Pipeline Processing with an Iterative, Context-Based Detection Model
2016-01-22
wave precursor artifacts. Distortion definitely is reduced with the addition of more channels to the processed data stream (comparing trace 3 to...limitations of fully automatic hypothesis evaluation with a test case of two events in Central Asia – a deep Hindu Kush earthquake and a shallow earthquake in...AFRL-RV-PS-TR-2016-0080 PIPELINE PROCESSING WITH AN ITERATIVE, CONTEXT-BASED DETECTION MODEL T. Kværna, et al
Fast Multi-Symbol Based Iterative Detectors for UWB Communications
Directory of Open Access Journals (Sweden)
Lottici Vincenzo
2010-01-01
Full Text Available Ultra-wideband (UWB impulse radios have shown great potential in wireless local area networks for localization, coexistence with other services, and low probability of interception and detection. However, low transmission power and high multipath effect make the detection of UWB signals challenging. Recently, multi-symbol based detection has caught attention for UWB communications because it provides good performance and does not require explicit channel estimation. Most of the existing multi-symbol based methods incur a higher computational cost than can be afforded in the envisioned UWB systems. In this paper, we propose an iterative multi-symbol based method that has low complexity and provides near optimal performance. Our method uses only one initial symbol to start and applies a decision directed approach to iteratively update a filter template and information symbols. Simulations show that our method converges in only a few iterations (less than 5, and that when the number of symbols increases, the performance of our method approaches that of the ideal Rake receiver.
Variable aperture-based ptychographical iterative engine method.
Sun, Aihui; Kong, Yan; Meng, Xin; He, Xiaoliang; Du, Ruijun; Jiang, Zhilong; Liu, Fei; Xue, Liang; Wang, Shouyu; Liu, Cheng
2018-02-01
A variable aperture-based ptychographical iterative engine (vaPIE) is demonstrated both numerically and experimentally to reconstruct the sample phase and amplitude rapidly. By adjusting the size of a tiny aperture under the illumination of a parallel light beam to change the illumination on the sample step by step and recording the corresponding diffraction patterns sequentially, both the sample phase and amplitude can be faithfully reconstructed with a modified ptychographical iterative engine (PIE) algorithm. Since many fewer diffraction patterns are required than in common PIE and the shape, the size, and the position of the aperture need not to be known exactly, this proposed vaPIE method remarkably reduces the data acquisition time and makes PIE less dependent on the mechanical accuracy of the translation stage; therefore, the proposed technique can be potentially applied for various scientific researches. (2018) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).
Reliability-based trajectory optimization using nonintrusive polynomial chaos for Mars entry mission
Huang, Yuechen; Li, Haiyang
2018-06-01
This paper presents a reliability-based sequential optimization (RBSO) method to solve the trajectory optimization problem with parametric uncertainties in entry dynamics for a Mars entry mission. First, the deterministic entry trajectory optimization model is reviewed, and then the reliability-based optimization model is formulated. In addition, a modified sequential optimization method, in which the nonintrusive polynomial chaos expansion (PCE) method and the most probable point (MPP) searching method are employed, is proposed to solve the reliability-based optimization problem efficiently. The nonintrusive PCE method enables the transformation between the stochastic optimization (SO) and the deterministic optimization (DO) and the efficient approximation of the trajectory solution. The MPP method, which assesses the reliability of constraint satisfaction only up to the necessary level, is employed to further improve the computational efficiency. The cycle comprising SO, reliability assessment, and constraint update is repeated in the RBSO until the reliability requirements on constraint satisfaction are met. Finally, the RBSO is compared with the traditional DO and the traditional sequential optimization based on Monte Carlo (MC) simulation in a specific Mars entry mission to demonstrate the effectiveness and the efficiency of the proposed method.
Nested polynomial trends for the improvement of Gaussian process-based predictors
Perrin, G.; Soize, C.; Marque-Pucheu, S.; Garnier, J.
2017-10-01
The role of simulation keeps increasing for the sensitivity analysis and uncertainty quantification of complex systems. Such numerical procedures are generally based on the processing of a huge number of code evaluations. When the computational cost associated with one particular evaluation of the code is high, such direct approaches based on the computer code only are not affordable. Surrogate models therefore have to be introduced to interpolate the information given by a fixed set of code evaluations to the whole input space. When confronted with deterministic mappings, Gaussian process regression (GPR), or kriging, presents a good compromise between complexity, efficiency, and error control. Such a method considers the quantity of interest of the system as a particular realization of a Gaussian stochastic process, whose mean and covariance functions have to be identified from the available code evaluations. In this context, this work proposes an innovative parametrization of this mean function, which is based on the composition of two polynomials. This approach is particularly relevant for the approximation of strongly nonlinear quantities of interest from very little information. After presenting the theoretical basis of this method, this work compares its efficiency to alternative approaches on a series of examples.
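The nested-trend idea can be sketched as GP regression with an explicit mean given by the composition of two polynomials. The kernel, hyperparameters, and the particular polynomials below are illustrative assumptions, and the paper's identification procedure for the trend is not reproduced:

```python
import numpy as np

def rbf_kernel(a, b, ell=0.2, s2=1.0):
    """Squared-exponential covariance between 1-D input arrays a and b."""
    d = a[:, None] - b[None, :]
    return s2 * np.exp(-0.5 * (d / ell) ** 2)

def gpr_predict(x_train, y_train, x_test, mean_fn, noise=1e-6):
    """GP (kriging) posterior mean with an explicit, possibly nested, trend."""
    K = rbf_kernel(x_train, x_train) + noise * np.eye(x_train.size)
    alpha = np.linalg.solve(K, y_train - mean_fn(x_train))
    return mean_fn(x_test) + rbf_kernel(x_test, x_train) @ alpha

# Nested polynomial trend: an outer polynomial composed with an inner one.
inner = np.polynomial.Polynomial([0.0, 2.0, 1.0])    # p(x) = 2x + x^2
outer = np.polynomial.Polynomial([1.0, 0.5, -0.2])   # q(u) = 1 + 0.5u - 0.2u^2

def nested_mean(x):
    return outer(inner(x))
```

The GP then only has to model the residual between the code output and this composed trend, which is the mechanism by which a well-chosen nested trend helps when very few code evaluations are available.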
Non-linear triangle-based polynomial expansion nodal method for hexagonal core analysis
International Nuclear Information System (INIS)
Cho, Jin Young; Cho, Byung Oh; Joo, Han Gyu; Zee, Sung Qunn; Park, Sang Yong
2000-09-01
This report describes the implementation of the triangle-based polynomial expansion nodal (TPEN) method in the MASTER code in conjunction with the coarse mesh finite difference (CMFD) framework for hexagonal core design and analysis. The TPEN method is a variation of the higher order polynomial expansion nodal (HOPEN) method that solves the multi-group neutron diffusion equation in hexagonal-z geometry. In contrast with the HOPEN method, only two-dimensional intranodal expansion is considered in the TPEN method for a triangular domain. The axial dependence of the intranodal flux is incorporated separately and determined by the nodal expansion method (NEM) for a hexagonal node. For consistency with the node geometry of the MASTER code, which is based on the hexagon, the TPEN solver is coded to solve one hexagonal node, composed of 6 triangular nodes, directly with a Gauss elimination scheme. To solve the CMFD linear system efficiently, the stabilized bi-conjugate gradient (BiCG) algorithm and the Wielandt eigenvalue shift method are adopted. For the construction of an efficient preconditioner for the BiCG algorithm, the incomplete LU (ILU) factorization scheme, which has been widely used in two-dimensional problems, is used. To apply the ILU factorization scheme to the three-dimensional problem, a symmetric Gauss-Seidel factorization scheme is used. In order to examine the accuracy of the TPEN solution, several eigenvalue benchmark problems and two transient problems, i.e., realistic VVER1000 and VVER440 rod ejection benchmark problems, were solved and compared with their respective references. The results of the eigenvalue benchmark problems indicate that the non-linear TPEN method is very accurate, showing less than 15 pcm of eigenvalue error and 1% of maximum power error, and fast enough to solve the three-dimensional VVER-440 problem within 5 seconds on a 733 MHz PENTIUM-III. In the case of the transient problems, the non-linear TPEN method also shows good results within a few minute of
Polynomial Phase Estimation Based on Adaptive Short-Time Fourier Transform.
Jing, Fulong; Zhang, Chunjie; Si, Weijian; Wang, Yu; Jiao, Shuhong
2018-02-13
Polynomial phase signals (PPSs) have numerous applications in many fields including radar, sonar, geophysics, and radio communication systems. Therefore, estimation of PPS coefficients is very important. In this paper, a novel approach for PPS parameters estimation based on adaptive short-time Fourier transform (ASTFT), called the PPS-ASTFT estimator, is proposed. Using the PPS-ASTFT estimator, both one-dimensional and multi-dimensional searches and error propagation problems, which widely exist in PPSs field, are avoided. In the proposed algorithm, the instantaneous frequency (IF) is estimated by S-transform (ST), which can preserve information on signal phase and provide a variable resolution similar to the wavelet transform (WT). The width of the ASTFT analysis window is equal to the local stationary length, which is measured by the instantaneous frequency gradient (IFG). The IFG is calculated by the principal component analysis (PCA), which is robust to the noise. Moreover, to improve estimation accuracy, a refinement strategy is presented to estimate signal parameters. Since the PPS-ASTFT avoids parameter search, the proposed algorithm can be computed in a reasonable amount of time. The estimation performance, computational cost, and implementation of the PPS-ASTFT are also analyzed. The conducted numerical simulations support our theoretical results and demonstrate an excellent statistical performance of the proposed algorithm.
Waqas, Abi; Melati, Daniele; Manfredi, Paolo; Grassi, Flavia; Melloni, Andrea
2018-02-01
The Building Block (BB) approach has recently emerged in photonics as a suitable strategy for the analysis and design of complex circuits. Each BB can be foundry related and contains a mathematical macro-model of its functionality. As is well known, statistical variations in fabrication processes can have a strong effect on functionality and ultimately affect the yield. In order to predict the statistical behavior of the circuit, proper analysis of the effects of uncertainties is crucial. This paper presents a method to build a novel class of Stochastic Process Design Kits for the analysis of photonic circuits. The proposed design kits directly store the information on the stochastic behavior of each building block in the form of a generalized-polynomial-chaos-based augmented macro-model obtained by properly exploiting stochastic collocation and Galerkin methods. Using this approach, we demonstrate that the augmented macro-models of the BBs can be calculated once, stored in a BB (foundry dependent) library, and then used for the analysis of any desired circuit. The main advantage of this approach, shown here for the first time in photonics, is that the stochastic moments of an arbitrary photonic circuit can be evaluated by a single simulation only, without the need for repeated simulations. The accuracy and the significant speed-up with respect to classical Monte Carlo analysis are verified by means of a classical photonic circuit example with multiple uncertain variables.
Iterative Decoding for an Optical CDMA based Laser communication System
International Nuclear Information System (INIS)
Kim, Jin Young; Kim, Eun Cheol; Cha, Jae Sang
2008-01-01
An optical CDMA (code division multiple access) based Laser communication system has attracted much attention since it requires minimal optical Laser signal processing and is virtually delay free, while from the theoretical point of view, its performance depends on the auto- and cross-correlation properties of the employed sequences. Various kinds of channel coding schemes for optical CDMA based Laser communication systems have been proposed and analyzed to compensate for nonideal channel and receiver conditions in impaired photon channels. In this paper, we propose and analyze iterative decoding of optical CDMA based Laser communication signals for both shot noise limited and thermal noise limited systems. It is assumed that the optical channel is an intensity modulated (IM) channel and that a direct detection scheme is employed to detect the received optical signal. The performance is evaluated in terms of bit error probability and throughput. It is demonstrated that the BER and throughput performance is substantially improved with interleaver length for a fixed code rate and with the alphabet size of PPM (pulse position modulation). Also, the BER and throughput performance is significantly enhanced with the number of iterations of the decoding process. The results in this paper can be applied to optical CDMA based Laser communication networks with multiple access applications.
Iterative Decoding for an Optical CDMA based Laser communication System
Energy Technology Data Exchange (ETDEWEB)
Kim, Jin Young; Kim, Eun Cheol [Kwangwoon Univ., Seoul (Korea, Republic of); Cha, Jae Sang [Seoul National Univ. of Technology, Seoul (Korea, Republic of)
2008-11-15
An optical CDMA (code division multiple access) based Laser communication system has attracted much attention since it requires minimal optical Laser signal processing and is virtually delay free, while from the theoretical point of view, its performance depends on the auto- and cross-correlation properties of the employed sequences. Various kinds of channel coding schemes for optical CDMA based Laser communication systems have been proposed and analyzed to compensate for nonideal channel and receiver conditions in impaired photon channels. In this paper, we propose and analyze iterative decoding of optical CDMA based Laser communication signals for both shot noise limited and thermal noise limited systems. It is assumed that the optical channel is an intensity modulated (IM) channel and that a direct detection scheme is employed to detect the received optical signal. The performance is evaluated in terms of bit error probability and throughput. It is demonstrated that the BER and throughput performance is substantially improved with interleaver length for a fixed code rate and with the alphabet size of PPM (pulse position modulation). Also, the BER and throughput performance is significantly enhanced with the number of iterations of the decoding process. The results in this paper can be applied to optical CDMA based Laser communication networks with multiple access applications.
ITER Fast Plant System Controller prototype based on PXIe platform
International Nuclear Information System (INIS)
Ruiz, M.; Vega, J.; Castro, R.; Sanz, D.; López, J.M.; Arcas, G. de; Barrera, E.; Nieto, J.; Gonçalves, B.; Sousa, J.; Carvalho, B.; Utzel, N.; Makijarvi, P.
2012-01-01
Highlights: ► Implementation of the Fast Plant System Controller (FPSC) for ITER CODAC. ► Efficient data acquisition and data movement using EPICS. ► Performance of PCIe technologies in the implementation of the FPSC. - Abstract: The ITER Fast Plant System Controller (FPSC) is based on embedded technologies. The FPSC will be devoted to both data acquisition tasks (sampling rates higher than 1 kHz) and control purposes (feedback loop actuators). Some of the essential requirements of these systems are: (a) data acquisition and data preprocessing; (b) interfacing with different networks and high speed links (Plant Operation Network, timing network based on IEEE 1588, synchronous data transfer and streaming/archiving networks); and (c) system setup and operation using EPICS (Experimental Physics and Industrial Control System) process variables. CIEMAT and UPM have implemented a prototype of the FPSC using a PXIe (PCI eXtensions for Instrumentation) form factor in an R&D project developed in two phases. The paper presents the main features of the two prototypes developed, named alpha and beta. The former was implemented using LabVIEW development tools as it was focused on modeling the FPSC software modules, using the graphical features of LabVIEW applications, and measuring the basic performance of the system. The alpha version prototype implements data acquisition with time-stamping, EPICS monitoring using waveform process variables (PVs), and archiving. The beta version prototype is a complete IOC implemented using EPICS with different software functional blocks. These functional blocks are integrated and managed using an ASYN driver solution and provide the basic functionalities required by the ITER FPSC, such as data acquisition, data archiving, data pre-processing (using both CPU and GPU), and streaming.
A neutron spectrum unfolding code based on iterative procedures
International Nuclear Information System (INIS)
Ortiz R, J. M.; Vega C, H. R.
2012-10-01
In this work, version 3.0 of the neutron spectrum unfolding code called Neutron Spectrometry and Dosimetry from Universidad Autonoma de Zacatecas (NSDUAZ) is presented. This code was designed with a graphical interface under the LabVIEW programming environment and is based on the iterative SPUNIT algorithm, using as input data only the count rates obtained with 7 Bonner spheres based on a 6LiI(Eu) neutron detector. The main features of the code are: it is intuitive and friendly to the user, and it has a programming routine which automatically selects the initial guess spectrum from a set of neutron spectra compiled by the International Atomic Energy Agency. Besides the neutron spectrum, this code calculates the total flux, the mean energy, H(10), h(10), 15 dosimetric quantities for radiation protection purposes, and 7 survey meter responses, in four energy grids, based on the International Atomic Energy Agency compilation. The code generates a full report in html format with all relevant information. In this work, the neutron spectrum of a 241AmBe neutron source in air, located 150 cm from the detector, is unfolded. (Author)
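The kind of fixed-point iteration such unfolding codes perform can be sketched with a generic multiplicative (MLEM-style) update; the exact SPUNIT update may differ, and the response matrix and spectra here are synthetic:

```python
import numpy as np

def unfold(counts, response, guess, n_iter=200):
    """Generic multiplicative iterative unfolding (MLEM-style update).

    counts:   measured sphere count rates, shape (n_det,)
    response: detector response matrix, shape (n_det, n_bins)
    guess:    initial spectrum (e.g. taken from a compiled library)

    Each iteration rescales every flux bin by the ratio of measured to
    predicted counts, back-projected through the response matrix.
    """
    phi = np.asarray(guess, dtype=float).copy()
    norm = response.sum(axis=0)
    for _ in range(n_iter):
        predicted = response @ phi
        ratio = counts / np.maximum(predicted, 1e-30)
        phi *= (response.T @ ratio) / np.maximum(norm, 1e-30)
    return phi
```

Because the problem is underdetermined (a handful of spheres, many energy bins), the quality of the initial guess spectrum matters, which is why the code's automatic selection from the IAEA compilation is highlighted in the abstract.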
Linear precoding based on polynomial expansion: reducing complexity in massive MIMO
Mueller, Axel; Kammoun, Abla; Björnson, Emil; Debbah, Mérouane
2016-01-01
By deriving new random matrix results, we obtain a deterministic expression for the asymptotic signal-to-interference-and-noise ratio (SINR) achieved by truncated polynomial expansion (TPE) precoding in massive MIMO systems. Furthermore, we provide a closed-form expression for the polynomial coefficients that maximize this SINR. To maintain a fixed per-user rate loss as compared to regularized zero-forcing (RZF), the polynomial degree does not need to scale with the system, but it should be increased with the quality of the channel knowledge and the signal-to-noise ratio.
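The idea of replacing the RZF matrix inverse by a low-degree matrix polynomial can be sketched as follows; the simple Neumann-series coefficients below stand in for the SINR-optimal coefficients derived in the paper:

```python
import numpy as np

def rzf_precoder(H, alpha):
    """Regularized zero-forcing: (H^H H + alpha I)^-1 H^H."""
    G = H.conj().T @ H
    return np.linalg.solve(G + alpha * np.eye(G.shape[0]), H.conj().T)

def tpe_precoder(H, alpha, J):
    """Degree-(J-1) polynomial approximation of the RZF inverse.

    Uses a plain Neumann series for (H^H H + alpha I)^-1 as a stand-in
    for the paper's SINR-optimal coefficients; only matrix products are
    needed, no explicit inversion.
    """
    G = H.conj().T @ H + alpha * np.eye(H.shape[1])
    nu = np.linalg.norm(G, 2)              # scaling that makes the series converge
    M = np.eye(G.shape[0]) - G / nu
    acc = np.zeros_like(G)
    term = np.eye(G.shape[0], dtype=G.dtype)
    for _ in range(J):                     # accumulate sum_{l=0}^{J-1} M^l
        acc = acc + term
        term = term @ M
    return (acc / nu) @ H.conj().T
```

Increasing the polynomial degree J trades extra matrix products for a closer approximation of the RZF precoder, which mirrors the complexity/performance trade-off the abstract describes.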
A Synoptic of Software Implementation for Shift Registers Based on 16th Degree Primitive Polynomials
Directory of Open Access Journals (Sweden)
Mirella Amelia Mioc
2016-08-01
Full Text Available Almost all major applications in the specific fields of communication use a well-known device called the Linear Feedback Shift Register (LFSR). Usually an LFSR functions over a Galois field GF(2^n), meaning that all operations are performed with arithmetic modulo an irreducible, and especially primitive, polynomial of degree n. Storing data in Galois fields allows effective and manageable manipulation, mainly in computer cryptographic applications. The analysis of the functioning of 16th degree primitive polynomials shows that almost all the obtained results follow the same time distribution.
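A 16th-degree primitive polynomial in action can be sketched with a Galois-configuration LFSR. The taps 0xB400 encode the primitive polynomial x^16 + x^14 + x^13 + x^11 + 1 (a standard textbook example, not necessarily one of the polynomials analyzed in the paper), giving the maximal period 2^16 - 1:

```python
def lfsr16_step(state, taps=0xB400):
    """One step of a 16-bit Galois-configuration LFSR.

    taps=0xB400 encodes the primitive polynomial
    x^16 + x^14 + x^13 + x^11 + 1, so the nonzero states form a single
    cycle of length 2**16 - 1 = 65535.
    """
    lsb = state & 1
    state >>= 1
    if lsb:
        state ^= taps
    return state

def lfsr16_period(seed=0xACE1):
    """Count steps until the register returns to its seed state."""
    state = lfsr16_step(seed)
    steps = 1
    while state != seed:
        state = lfsr16_step(state)
        steps += 1
    return steps
```

Because the polynomial is primitive, every nonzero seed lies on the same maximal-length cycle, which is exactly the property that makes such registers useful as pseudo-random sequence generators.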
Papadopoulos, Anthony
2009-01-01
The first-degree power-law polynomial function is frequently used to describe activity metabolism for steady swimming animals. This function has been used in hydrodynamics-based metabolic studies to evaluate important parameters of energetic costs, such as the standard metabolic rate and the drag power indices. In theory, however, the power-law polynomial function of any degree greater than one can be used to describe activity metabolism for steady swimming animals. In fact, activity metabolism has been described by the conventional exponential function and the cubic polynomial function, although only the power-law polynomial function models drag power since it conforms to hydrodynamic laws. Consequently, the first-degree power-law polynomial function yields incorrect parameter values of energetic costs if activity metabolism is governed by the power-law polynomial function of any degree greater than one. This issue is important in bioenergetics because correct comparisons of energetic costs among different steady swimming animals cannot be made unless the degree of the power-law polynomial function derives from activity metabolism. In other words, a hydrodynamics-based functional form of activity metabolism is a power-law polynomial function of any degree greater than or equal to one. Therefore, the degree of the power-law polynomial function should be treated as a parameter, not as a constant. This new treatment not only conforms to hydrodynamic laws, but also ensures correct comparisons of energetic costs among different steady swimming animals. Furthermore, the exponential power-law function, which is a new hydrodynamics-based functional form of activity metabolism, is a special case of the power-law polynomial function. Hence, the link between the hydrodynamics of steady swimming and the exponential-based metabolic model is defined.
Directory of Open Access Journals (Sweden)
Yu-Bo Jiao
2015-01-01
Full Text Available The paper presents an effective approach for damage identification of bridges based on Chebyshev polynomial fitting and fuzzy logic systems without requiring baseline model data. The modal curvature of the damaged bridge can be obtained through central difference approximation of the displacement modal shape. From the modal curvature of the damaged structure, Chebyshev polynomial fitting is applied to acquire the curvature of the undamaged one without baseline parameters. Modal curvature difference can then be derived and used for damage localization. Subsequently, the normalized modal curvature difference is treated as the input variable of fuzzy logic systems for damage condition assessment. Numerical simulation on a simply supported bridge was carried out to demonstrate the feasibility of the proposed method.
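The baseline-free step can be sketched with NumPy's Chebyshev utilities: a low-order Chebyshev fit of the damaged-structure curvature serves as the smooth "undamaged" reference, and the residual localizes the damage. The degree and function names are illustrative assumptions:

```python
import numpy as np

def curvature_damage_index(positions, damaged_curvature, degree=5):
    """Chebyshev-fit baseline for damage localization without baseline data.

    A low-order Chebyshev fit of the damaged-structure modal curvature
    stands in for the smooth undamaged curvature; the residual (the
    modal curvature difference) peaks near the damage.
    """
    coef = np.polynomial.chebyshev.chebfit(positions, damaged_curvature, degree)
    baseline = np.polynomial.chebyshev.chebval(positions, coef)
    return damaged_curvature - baseline
```

In the paper's pipeline the normalized residual would then be fed to the fuzzy logic system for condition assessment.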
Siripatana, Adil; Mayo, Talea; Sraj, Ihab; Knio, Omar; Dawson, Clint; Le Maitre, Olivier; Hoteit, Ibrahim
2017-01-01
an ensemble Kalman-based data assimilation method for parameter estimation of a coastal ocean model against an MCMC polynomial chaos (PC)-based scheme. We focus on quantifying the uncertainties of a coastal ocean ADvanced CIRCulation (ADCIRC) model
de Klerk, Etienne; Laurent, Monique
We consider the problem of minimizing a continuous function f over a compact set K. We compare the hierarchy of upper bounds proposed by Lasserre in [SIAM J. Optim. 21(3) (2011), pp. 864-885] to bounds that may be obtained from simulated annealing. We show that, when f is a polynomial and K a convex
de Klerk, Etienne; Laurent, Monique; Sun, Zhao
We consider the problem of minimizing a continuous function f over a compact set K. We analyze a hierarchy of upper bounds proposed by Lasserre (SIAM J Optim 21(3):864–885, 2011), obtained by searching for an optimal probability density function h on K which is a sum of squares of polynomials, so
Testing reachability and stabilizability of systems over polynomial rings using Gröbner bases
Habets, L.C.G.J.M.
1993-01-01
Conditions for the reachability and stabilizability of systems over polynomial rings are well-known in the literature. For a system $\Sigma = (A,B)$ they can be expressed as right-invertibility conditions on the matrix $(zI - A \mid B)$. Therefore there is quite a strong algebraic relationship
Bayesian inference of earthquake parameters from buoy data using a polynomial chaos-based surrogate
Giraldi, Loic; Le Maître, Olivier P.; Mandli, Kyle T.; Dawson, Clint N.; Hoteit, Ibrahim; Knio, Omar
2017-01-01
on polynomial chaos expansion to construct a surrogate model of the wave height at the buoy location. A correlated noise model is first proposed in order to represent the discrepancy between the computational model and the data. This step is necessary, as a
A reachability test for systems over polynomial rings using Gröbner bases
Habets, L.C.G.J.M.
1992-01-01
Conditions for the reachability of a system over a polynomial ring are well known in the literature. However, the verification of these conditions remained a difficult problem in general. Application of the Gröbner Basis method from constructive commutative algebra makes it possible to carry out
Directory of Open Access Journals (Sweden)
A.K. Parida
2016-09-01
Full Text Available In this paper a Chebyshev polynomial functions based locally recurrent neuro-fuzzy information system is presented for the prediction and analysis of financial and electrical energy market data. The normally used TSK-type feedforward fuzzy neural network is unable to take full advantage of the linear fuzzy rule base in accurate input–output mapping, and hence the consequent part of the rule base is made nonlinear using polynomial or arithmetic basis functions. Further, the Chebyshev polynomial functions provide an expanded nonlinear transformation of the input space, thereby increasing its dimension for capturing the nonlinearities and chaotic variations in financial or energy market data streams. Also, the locally recurrent neuro-fuzzy information system (LRNFIS) includes feedback loops both at the firing strength layer and the output layer to allow signal flow in both forward and backward directions, thereby making the LRNFIS mimic a dynamic system that provides fast convergence and accuracy in predicting time series fluctuations. Instead of using the forward and backward least mean square (FBLMS) learning algorithm, an improved Firefly-Harmony search (IFFHS) learning algorithm is used to estimate the parameters of the consequent part and the feedback loop parameters for better stability and convergence. Several real-world financial and energy market time series databases are used for performance validation of the proposed LRNFIS model.
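The Chebyshev input expansion used by such models can be sketched via the three-term recurrence; this is a generic feature-expansion sketch, not the full LRNFIS model:

```python
import numpy as np

def chebyshev_expand(x, order):
    """Chebyshev feature expansion T_0(x)..T_order(x) for x scaled to [-1, 1].

    Uses the recurrence T_n(x) = 2x T_{n-1}(x) - T_{n-2}(x) to lift the
    input into a higher-dimensional nonlinear feature space.
    """
    x = np.asarray(x, dtype=float)
    feats = [np.ones_like(x), x]
    for _ in range(2, order + 1):
        feats.append(2.0 * x * feats[-1] - feats[-2])
    return np.stack(feats[:order + 1], axis=-1)
```

The expanded features would then feed the consequent part of the fuzzy rules, giving the rule base its nonlinear input–output mapping.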
CAD-Based Shielding Analysis for ITER Port Diagnostics
Directory of Open Access Journals (Sweden)
Serikov Arkady
2017-01-01
Full Text Available Radiation shielding analysis conducted in support of design development of the contemporary diagnostic systems integrated inside the ITER ports relies on the use of CAD models. This paper presents the CAD-based MCNP Monte Carlo radiation transport and activation analyses for the Diagnostic Upper and Equatorial Port Plugs (UPP #3 and EPP #8, #17). The creation of the complicated 3D MCNP models of the diagnostic systems was substantially accelerated by application of the CAD-to-MCNP converter programs MCAM and McCad. High-performance computing resources of the Helios supercomputer made it possible to speed up the MCNP parallel transport calculations with the MPI/OpenMP interface. The shielding solutions found could be universal, reducing port R&D costs. The shield block behind the Tritium and Deposit Monitor (TDM) optical box was added to study its influence on the Shut-Down Dose Rate (SDDR) in the Port Interspace (PI) of EPP#17. The influence of neutron streaming along the Lost Alpha Monitor (LAM) on the neutron energy spectra calculated in the Tangential Neutron Spectrometer (TNS) of EPP#8 was also examined. For the UPP#3 with Charge eXchange Recombination Spectroscopy (CXRS-core), an excessive neutron streaming along the CXRS shutter was found, which should be prevented in a further design iteration.
CAD-Based Shielding Analysis for ITER Port Diagnostics
Serikov, Arkady; Fischer, Ulrich; Anthoine, David; Bertalot, Luciano; De Bock, Maartin; O'Connor, Richard; Juarez, Rafael; Krasilnikov, Vitaly
2017-09-01
Radiation shielding analysis conducted in support of design development of the contemporary diagnostic systems integrated inside the ITER ports relies on the use of CAD models. This paper presents the CAD-based MCNP Monte Carlo radiation transport and activation analyses for the Diagnostic Upper and Equatorial Port Plugs (UPP #3 and EPP #8, #17). The creation of the complicated 3D MCNP models of the diagnostic systems was substantially accelerated by application of the CAD-to-MCNP converter programs MCAM and McCad. High-performance computing resources of the Helios supercomputer made it possible to speed up the MCNP parallel transport calculations with the MPI/OpenMP interface. The shielding solutions found could be universal, reducing port R&D costs. The shield block behind the Tritium and Deposit Monitor (TDM) optical box was added to study its influence on the Shut-Down Dose Rate (SDDR) in the Port Interspace (PI) of EPP#17. The influence of neutron streaming along the Lost Alpha Monitor (LAM) on the neutron energy spectra calculated in the Tangential Neutron Spectrometer (TNS) of EPP#8 was also examined. For the UPP#3 with Charge eXchange Recombination Spectroscopy (CXRS-core), an excessive neutron streaming along the CXRS shutter was found, which should be prevented in a further design iteration.
An Automated Baseline Correction Method Based on Iterative Morphological Operations.
Chen, Yunliang; Dai, Liankui
2018-05-01
Raman spectra usually suffer from baseline drift caused by fluorescence or other reasons. Therefore, baseline correction is a necessary and crucial step that must be performed before subsequent processing and analysis of Raman spectra. An automated baseline correction method based on iterative morphological operations is proposed in this work. The method can adaptively determine the structuring element first and then gradually remove the spectral peaks during iteration to get an estimated baseline. Experiments on simulated data and real-world Raman data show that the proposed method is accurate, fast, and flexible for handling different kinds of baselines in various practical situations. The comparison of the proposed method with some state-of-the-art baseline correction methods demonstrates its advantages over the existing methods in terms of accuracy, adaptability, and flexibility. Although only Raman spectra are investigated in this paper, the proposed method can hopefully also be applied to the baseline correction of other analytical instrument signals, such as IR spectra and chromatograms.
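The iterated-opening idea can be sketched with plain NumPy: a flat structuring element (rolling minimum followed by rolling maximum) is applied repeatedly, and the baseline estimate can only move downward. The window width, iteration count, and synthetic spectrum below are illustrative assumptions, not the paper's adaptive parameter selection.

```python
import numpy as np

def rolling(op, y, w):
    # Apply op (np.min or np.max) over a sliding window of half-width w.
    n = len(y)
    out = np.empty(n)
    for i in range(n):
        lo, hi = max(0, i - w), min(n, i + w + 1)
        out[i] = op(y[lo:hi])
    return out

def morphological_baseline(y, width=15, n_iter=10):
    """Estimate a baseline by iterated morphological opening
    (erosion followed by dilation with a flat structuring element)."""
    baseline = y.astype(float)
    for _ in range(n_iter):
        opened = rolling(np.max, rolling(np.min, baseline, width), width)
        # Peaks are clipped away; the estimate can only move downwards.
        baseline = np.minimum(baseline, opened)
    return baseline

# Synthetic Raman-like spectrum: slow baseline drift plus two narrow peaks.
x = np.linspace(0, 1, 400)
drift = 2.0 + 0.5 * x
peaks = 5.0 * np.exp(-0.5 * ((x - 0.3) / 0.01) ** 2) \
      + 3.0 * np.exp(-0.5 * ((x - 0.7) / 0.01) ** 2)
spectrum = drift + peaks
corrected = spectrum - morphological_baseline(spectrum)
```

Because the structuring element is wider than the peaks but narrow relative to the drift, the opening flattens the peaks while tracking the slow baseline.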
Retinal biometrics based on Iterative Closest Point algorithm.
Hatanaka, Yuji; Tajima, Mikiya; Kawasaki, Ryo; Saito, Koko; Ogohara, Kazunori; Muramatsu, Chisako; Sunayama, Wataru; Fujita, Hiroshi
2017-07-01
The pattern of blood vessels in the eye is unique to each person and rarely changes over time. Therefore, it is well known that retinal blood vessels are useful for biometrics. This paper describes a biometrics method using the Jaccard similarity coefficient (JSC) based on blood vessel regions in retinal image pairs. The retinal image pairs were first roughly matched by the centers of their optic discs. Then, the image pairs were aligned using the Iterative Closest Point algorithm based on detailed blood vessel skeletons. For registration, a perspective transform was applied to the retinal images. Finally, the pairs were classified as either correct or incorrect using the JSC of the blood vessel regions in the image pairs. The proposed method was applied to temporal retinal images, which were obtained in 2009 (695 images) and 2013 (87 images). The 87 images acquired in 2013 were all from persons already examined in 2009. The accuracy of the proposed method reached 100%.
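The final classification step rests on the Jaccard similarity coefficient of two binary vessel masks, which is easy to sketch (the toy masks below are hypothetical; the paper computes it on registered vessel regions):

```python
import numpy as np

def jaccard(a, b):
    """Jaccard similarity coefficient of two binary masks:
    |intersection| / |union|."""
    a = a.astype(bool)
    b = b.astype(bool)
    union = np.logical_or(a, b).sum()
    if union == 0:
        return 1.0  # two empty masks are identical by convention
    return np.logical_and(a, b).sum() / union

# Toy "vessel" masks: an identical pair vs. a misaligned pair.
m1 = np.zeros((8, 8), dtype=bool)
m1[2:6, 3] = True
m2 = np.roll(m1, 1, axis=1)      # same vessel, shifted by one pixel

print(jaccard(m1, m1))  # 1.0 -> perfectly registered, same person
print(jaccard(m1, m2))  # 0.0 here: no overlap after the 1-pixel shift
```

The shifted pair illustrates why the ICP alignment step matters: even a one-pixel misregistration of thin vessels collapses the coefficient.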
A new numerical treatment based on Lucas polynomials for 1D and 2D sinh-Gordon equation
Oruç, Ömer
2018-04-01
In this paper, a new mixed method based on Lucas and Fibonacci polynomials is developed for the numerical solution of 1D and 2D sinh-Gordon equations. Firstly, the time variable is discretized by central finite differences, and then the unknown function and its derivatives are expanded into Lucas series. With the help of these series expansions and Fibonacci polynomials, differentiation matrices are derived. With this approach, finding the solution of the sinh-Gordon equation is transformed into solving an algebraic system of equations. The Lucas series coefficients are acquired by solving this system of algebraic equations. Then, by plugging these coefficients into the Lucas series expansion, numerical solutions can be obtained consecutively. The main objective of this paper is to demonstrate that the Lucas polynomial based method is convenient for 1D and 2D nonlinear problems. By calculating the L2 and L∞ error norms of some 1D and 2D test problems, the efficiency and performance of the proposed method are assessed. The accurate results acquired confirm the applicability of the method.
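Lucas and Fibonacci polynomials both satisfy a simple three-term recurrence, which a sketch can make concrete (the evaluation points are arbitrary; this is not the paper's full collocation scheme):

```python
def lucas(n, x):
    """Lucas polynomials: L0 = 2, L1 = x, Ln = x*L(n-1) + L(n-2)."""
    a, b = 2, x
    if n == 0:
        return a
    for _ in range(n - 1):
        a, b = b, x * b + a
    return b

def fibonacci(n, x):
    """Fibonacci polynomials: F0 = 0, F1 = 1, Fn = x*F(n-1) + F(n-2)."""
    a, b = 0, 1
    if n == 0:
        return a
    for _ in range(n - 1):
        a, b = b, x * b + a
    return b

# L3(x) = x^3 + 3x and F4(x) = x^3 + 2x, checked at x = 2:
print(lucas(3, 2.0))      # 2^3 + 3*2 = 14.0
print(fibonacci(4, 2.0))  # 2^3 + 2*2 = 12.0
```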
Directory of Open Access Journals (Sweden)
Chi Yaodan
2017-08-01
Full Text Available Crosstalk in wiring harnesses has been studied extensively for its importance in the naval ship electromagnetic compatibility field. An effective and high-efficiency method is proposed in this paper for analyzing the statistical characteristics of crosstalk in wiring harnesses with random variation of position, based on Polynomial Chaos Expansion (PCE). A typical 14-cable wiring harness was simulated as the object of research. The distance among the interfering cable, the affected cable and GND is synthesized and analyzed in both the frequency domain and the time domain. The distribution-parameter model of the naval ship wiring harness was established by utilizing Legendre orthogonal polynomials as basis functions, along with a prediction model of the statistical characteristics. Detailed mean values, mean square errors, probability density functions and reasonable varying ranges of crosstalk in naval ship wiring harnesses are described in both the time domain and the frequency domain. Numerical experiments prove that the method proposed in this paper not only has good consistency with the Monte Carlo (MC) method, but also has better time-efficiency; the Polynomial Chaos Expansion method can therefore be applied in the naval ship EMC research field to provide theoretical support for guaranteeing safety.
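The core PCE step, projecting a response onto Legendre basis polynomials and reading statistics off the coefficients, can be sketched for a one-dimensional toy model (the function f below is an illustrative stand-in for the crosstalk response, not the harness model):

```python
import numpy as np
from numpy.polynomial import legendre as L

# Model response as a function of one Uniform(-1, 1) input parameter.
f = lambda x: x**2

# Project f onto Legendre polynomials P_0..P_4 with Gauss-Legendre
# quadrature: c_k = (2k+1)/2 * integral of f(x) P_k(x) over [-1, 1].
nodes, weights = L.leggauss(8)
coeffs = []
for k in range(5):
    pk = L.legval(nodes, [0] * k + [1])          # P_k at the nodes
    coeffs.append((2 * k + 1) / 2 * np.sum(weights * f(nodes) * pk))
coeffs = np.array(coeffs)

# For a uniform input, E[P_k] = 0 for k > 0 and E[P_k^2] = 1/(2k+1),
# so mean and variance follow directly from the expansion coefficients.
mean = coeffs[0]
var = np.sum(coeffs[1:] ** 2 / (2 * np.arange(1, 5) + 1))
print(mean, var)  # 1/3 and 4/45 for f(x) = x^2
```

This is the time saving PCE exploits: once the coefficients are known, statistics come for free, with no Monte Carlo sampling loop.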
All-Pole Recursive Digital Filters Design Based on Ultraspherical Polynomials
N. Stojanovic; N. Stamenkovic; V. Stojanovic
2014-01-01
A simple method for approximation of all-pole recursive digital filters, directly in digital domain, is described. Transfer function of these filters, referred to as Ultraspherical filters, is controlled by order of the Ultraspherical polynomial, nu. Parameter nu, restricted to be a nonnegative real number (nu ≥ 0), controls ripple peaks in the passband of the magnitude response and enables a trade-off between the passband loss and the group delay response of the resulting filter. Chebyshev f...
Linear precoding based on polynomial expansion: reducing complexity in massive MIMO
Mueller, Axel
2016-02-29
Massive multiple-input multiple-output (MIMO) techniques have the potential to bring tremendous improvements in spectral efficiency to future communication systems. Counterintuitively, the practical issues of having uncertain channel knowledge, high propagation losses, and implementing optimal non-linear precoding are solved more or less automatically by enlarging system dimensions. However, the computational precoding complexity grows with the system dimensions. For example, the close-to-optimal and relatively “antenna-efficient” regularized zero-forcing (RZF) precoding is very complicated to implement in practice, since it requires fast inversions of large matrices in every coherence period. Motivated by the high performance of RZF, we propose to replace the matrix inversion and multiplication by a truncated polynomial expansion (TPE), thereby obtaining the new TPE precoding scheme which is more suitable for real-time hardware implementation and significantly reduces the delay to the first transmitted symbol. The degree of the matrix polynomial can be adapted to the available hardware resources and enables smooth transition between simple maximum ratio transmission and more advanced RZF. By deriving new random matrix results, we obtain a deterministic expression for the asymptotic signal-to-interference-and-noise ratio (SINR) achieved by TPE precoding in massive MIMO systems. Furthermore, we provide a closed-form expression for the polynomial coefficients that maximizes this SINR. To maintain a fixed per-user rate loss as compared to RZF, the polynomial degree does not need to scale with the system, but it should be increased with the quality of the channel knowledge and the signal-to-noise ratio.
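The idea of replacing a matrix inverse by a truncated matrix polynomial can be sketched with a plain Neumann-series variant (the matrix sizes, regularization, and scaling below are illustrative assumptions; the paper optimizes the polynomial coefficients rather than using the plain series):

```python
import numpy as np

def tpe_apply(A, b, degree, alpha):
    """Approximate A^{-1} b by a truncated polynomial (Neumann) expansion,
    A^{-1} ~ alpha * sum_{k=0}^{degree} (I - alpha*A)^k,
    which needs only matrix-vector products -- no explicit inversion."""
    x = np.zeros_like(b)
    term = b.copy()
    for _ in range(degree + 1):
        x += alpha * term
        term = term - alpha * (A @ term)   # multiply by (I - alpha*A)
    return x

# Regularized Gram matrix, loosely mimicking an RZF-style precoder matrix.
rng = np.random.default_rng(0)
H = rng.standard_normal((8, 4))
A = H.T @ H + np.eye(4)                    # SPD, so the expansion converges
b = rng.standard_normal(4)
alpha = 1.0 / np.linalg.norm(A, 2)         # crude scaling for convergence

exact = np.linalg.solve(A, b)
errs = [np.linalg.norm(tpe_apply(A, b, d, alpha) - exact) for d in (2, 8, 32)]
print(errs)                                # error shrinks as the degree grows
```

The degree knob mirrors the trade-off described above: a low degree behaves like simple matched filtering, while a higher degree approaches the regularized solution.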
Cryptanalysis of an Iterated Halving-based hash function: CRUSH
DEFF Research Database (Denmark)
Bagheri, Nasour; Henricksen, Matt; Knudsen, Lars Ramkilde
2009-01-01
Iterated Halving has been suggested as a replacement to the Merkle–Damgård (MD) construction in 2004 anticipating the attacks on the MDx family of hash functions. The CRUSH hash function provides a specific instantiation of the block cipher for Iterated Halving. The authors identify structural pr...
A Gradient Based Iterative Solutions for Sylvester Tensor Equations
Directory of Open Access Journals (Sweden)
Zhen Chen
2013-01-01
proposed by Ding and Chen, 2005, and by using tensor arithmetic concepts, an iterative algorithm and its modification are established to solve the Sylvester tensor equation. Convergence analysis indicates that the iterative solutions always converge to the exact solution for arbitrary initial value. Finally, some examples are provided to show that the proposed algorithms are effective.
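A minimal matrix (rather than tensor) version of such a gradient iteration for A X + X B = C can be sketched as follows; the step size and test matrices are illustrative assumptions, and the tensor algorithm in the paper generalizes this idea:

```python
import numpy as np

def sylvester_gradient(A, B, C, mu, n_iter=5000):
    """Gradient-based iteration (in the spirit of Ding & Chen, 2005) for the
    matrix Sylvester equation A X + X B = C.  Each step moves X along the
    negative gradient of the squared residual ||C - A X - X B||_F^2."""
    X = np.zeros_like(C)
    for _ in range(n_iter):
        R = C - A @ X - X @ B          # current residual
        X = X + mu * (A.T @ R + R @ B.T)
    return X

# Build a problem with a known solution, then recover it from zero.
A = np.array([[3.0, 1.0], [0.0, 2.0]])
B = np.array([[1.0, 0.0], [1.0, 1.0]])
X_true = np.array([[1.0, 2.0], [3.0, 4.0]])
C = A @ X_true + X_true @ B

X = sylvester_gradient(A, B, C, mu=0.02)
print(np.linalg.norm(X - X_true))      # converges to the exact solution
```

As the abstract notes for the tensor case, the iteration converges for an arbitrary initial value provided the step size is small enough (here mu is well under the stability bound for these matrices).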
Notohamiprodjo, S; Deak, Z; Meurer, F; Maertz, F; Mueck, F G; Geyer, L L; Wirth, S
2015-01-01
The purpose of this study was to compare cranial CT (CCT) image quality (IQ) of the MBIR algorithm with standard iterative reconstruction (ASiR). In this institutional review board (IRB)-approved study, raw data sets of 100 unenhanced CCT examinations (120 kV, 50-260 mAs, 20 mm collimation, 0.984 pitch) were reconstructed with both ASiR and MBIR. Signal-to-noise (SNR) and contrast-to-noise (CNR) ratios were calculated from attenuation values measured in the caudate nucleus, frontal white matter, anterior ventricle horn, fourth ventricle, and pons. Two radiologists, who were blinded to the reconstruction algorithms, evaluated anonymized multiplanar reformations of 2.5 mm with respect to the depiction of different parenchymal structures and the impact of artefacts on IQ with a five-point scale (0: unacceptable, 1: less than average, 2: average, 3: above average, 4: excellent). MBIR decreased artefacts more effectively than ASiR and was rated with significantly higher image-quality scores. As CCT is an examination that is frequently required, the use of MBIR may allow for a substantial reduction of the radiation exposure caused by medical diagnostics. • Model-based iterative reconstruction (MBIR) effectively decreased artefacts in cranial CT. • MBIR-reconstructed images were rated with significantly higher scores for image quality. • Model-based iterative reconstruction may allow reduced-dose diagnostic examination protocols.
International Nuclear Information System (INIS)
Beauwens, B.; Arkuszewski, J.; Boryszewicz, M.
1981-01-01
Results obtained in the field of linear iterative methods within the Coordinated Research Program on Transport Theory and Advanced Reactor Calculations are summarized. The general convergence theory of linear iterative methods is essentially based on the properties of nonnegative operators on ordered normed spaces. The following aspects of this theory have been improved: new comparison theorems for regular splittings, generalization of the notions of M- and H-matrices, new interpretations of classical convergence theorems for positive-definite operators. The estimation of asymptotic convergence rates was developed with two purposes: the analysis of model problems and the optimization of relaxation parameters. In the framework of factorization iterative methods, model problem analysis is needed to investigate whether the increased computational complexity of higher-order methods does not offset their increased asymptotic convergence rates, as well as to appreciate the effect of standard relaxation techniques (polynomial relaxation). On the other hand, the optimal use of factorization iterative methods requires the development of adequate relaxation techniques and their optimization. The relative performances of a few possibilities have been explored for model problems. Presently, the best results have been obtained with optimal diagonal-Chebyshev relaxation.
3D dictionary learning based iterative cone beam CT reconstruction
Directory of Open Access Journals (Sweden)
Ti Bai
2014-03-01
Full Text Available Purpose: This work is to develop a 3D dictionary learning based cone beam CT (CBCT) reconstruction algorithm on graphics processing units (GPU) to improve the quality of sparse-view CBCT reconstruction with high efficiency. Methods: A 3D dictionary containing 256 small volumes (atoms) of 3 × 3 × 3 was trained from a large number of blocks extracted from a high quality volume image. On this basis, we utilized a Cholesky decomposition based orthogonal matching pursuit algorithm to find the sparse representation of each block. To accelerate the time-consuming sparse coding in the 3D case, we implemented the sparse coding in a parallel fashion by taking advantage of the tremendous computational power of the GPU. A conjugate gradient least squares algorithm was adopted to minimize the data fidelity term. Evaluations were performed based on a head-neck patient case. FDK reconstruction with the full dataset of 364 projections was used as the reference. We compared the proposed 3D dictionary learning based method with tight frame (TF) by performing reconstructions on a subset of 121 projections. Results: Compared to TF based CBCT reconstruction, which shows good overall performance, our experiments indicated that 3D dictionary learning based CBCT reconstruction is able to recover finer structures, remove more streaking artifacts and also induce fewer blocky artifacts. Conclusion: The 3D dictionary learning based CBCT reconstruction algorithm is able to sense the structural information while suppressing the noise, and hence to achieve high quality reconstruction in the sparse-view case. The GPU realization of the whole algorithm offers a significant efficiency enhancement, making this algorithm more feasible for potential clinical application. Cite this article as: Bai T, Yan H, Shi F, Jia X, Lou Y, Xu Q, Jiang S, Mou X. 3D dictionary learning based iterative cone beam CT reconstruction. Int J Cancer Ther Oncol 2014; 2(2):020240. DOI: 10
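The sparse-coding step, orthogonal matching pursuit, can be sketched in a plain form (the dictionary, signal, and sparsity level below are illustrative; the paper uses a Cholesky-based GPU variant on learned 3 × 3 × 3 atoms):

```python
import numpy as np

def omp(D, y, n_nonzero):
    """Orthogonal matching pursuit: greedily select the dictionary atom most
    correlated with the residual, then re-fit all selected atoms by least
    squares (plain lstsq here instead of a Cholesky update)."""
    support = []
    residual = y.copy()
    x = np.zeros(D.shape[1])
    for _ in range(n_nonzero):
        k = int(np.argmax(np.abs(D.T @ residual)))   # best-matching atom
        if k not in support:
            support.append(k)
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x[support] = coef
    return x

# Dictionary with unit-norm columns; the signal is a 2-sparse combination.
rng = np.random.default_rng(1)
D = rng.standard_normal((64, 128))
D /= np.linalg.norm(D, axis=0)
x_true = np.zeros(128)
x_true[3], x_true[17] = 1.5, -2.0
y = D @ x_true

x_hat = omp(D, y, n_nonzero=2)
print(np.linalg.norm(y - D @ x_hat))   # small when recovery succeeds
```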
An assessment of the base blanket for ITER
International Nuclear Information System (INIS)
Raffray, A.R.; Abdou, M.A.; Ying, A.
1991-01-01
Ideally, the ITER base blanket would provide the necessary tritium for the reactor to be self-sufficient during operation, while having minimal impact on the overall reactor cost, reliability and safety. A solid breeder blanket was developed during the CDA phase in an attempt to achieve these objectives. The reference solid breeder base blanket configuration at the end of the CDA phase has many attractive features, such as a tritium breeding ratio (TBR) of 0.8--0.9 and a reasonably low tritium inventory. However, some concerns regarding the risk, cost and benefit of the base blanket have been raised. These include uncertainties associated with the solid breeder thermal control and the potentially high cost of the amount of Be used to achieve a high TBR and to provide the necessary thermal barrier between the high temperature solid breeder and the low temperature coolant. This work addresses these concerns. The basis for the selection of a breeding blanket is first discussed in light of the incremental risk, cost and benefits relative to a non-breeding blanket. Key issues associated with the CDA breeding blanket configurations are then analyzed. Finally, alternative schemes that could enhance the attractiveness and flexibility of a breeding blanket are explored.
Fundamental Frequency Estimation using Polynomial Rooting of a Subspace-Based Method
DEFF Research Database (Denmark)
Jensen, Jesper Rindom; Christensen, Mads Græsbøll; Jensen, Søren Holdt
2010-01-01
improvements compared to HMUSIC. First, by using the proposed method we can obtain an estimate of the fundamental frequency without doing a grid search as in HMUSIC. This is because the fundamental frequency is estimated as the argument of the root lying closest to the unit circle. Second, we obtain...... a higher spectral resolution compared to HMUSIC, which is a property of polynomial rooting methods. Our simulation results show that the proposed method is applicable to real-life signals, and that we in most cases obtain a higher spectral resolution than HMUSIC....
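The rooting step can be illustrated directly: build a polynomial whose dominant roots sit near the unit circle at the sought angle, and read the frequency off the root closest to the circle (the pole placement below is a toy stand-in for the subspace-derived polynomial of the paper):

```python
import numpy as np

# A signal pole pair at radius 0.99 and angle 0.3 rad, plus poles deeper
# inside the unit circle.  The frequency estimate is the argument of the
# root closest to the unit circle -- no grid search over frequencies.
omega0 = 0.3
roots = [0.99 * np.exp(1j * omega0), 0.99 * np.exp(-1j * omega0),
         0.5 * np.exp(1j * 1.2), 0.5 * np.exp(-1j * 1.2), 0.3]
poly = np.poly(roots)              # polynomial with these roots
found = np.roots(poly)             # the "rooting" step of the estimator
closest = found[np.argmin(np.abs(np.abs(found) - 1.0))]
omega_hat = abs(np.angle(closest))
print(omega_hat)                   # ~ 0.3
```

Because the estimate is an eigen/root computation rather than a sampled search, its resolution is not limited by any grid spacing, which is the property the abstract refers to.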
Analysis of fractional non-linear diffusion behaviors based on Adomian polynomials
Directory of Open Access Journals (Sweden)
Wu Guo-Cheng
2017-01-01
Full Text Available A time-fractional non-linear diffusion equation of two orders is considered to investigate strong non-linearity through porous media. An equivalent integral equation is established and Adomian polynomials are adopted to linearize the non-linear terms. With the Taylor expansion of fractional order, recurrence formulae are proposed and novel numerical solutions are obtained to depict the diffusion behaviors more accurately. The results show that the method is suitable for numerical simulation of multi-order fractional diffusion equations.
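For a polynomial nonlinearity such as N(u) = u², the Adomian polynomials reduce to a Cauchy-product formula, which is easy to sketch (this illustrates the linearization device only, not the full fractional scheme of the paper):

```python
def adomian_quadratic(u):
    """Adomian polynomials A_n for the quadratic nonlinearity N(u) = u^2.
    For a polynomial nonlinearity they reduce to a Cauchy product:
    A_n = sum_{k=0}^{n} u_k * u_{n-k}."""
    return [sum(u[k] * u[n - k] for k in range(n + 1)) for n in range(len(u))]

# With u0 = 1, u1 = 2, u2 = 3:
#   A0 = 1,  A1 = 2*1*2 = 4,  A2 = 2*1*3 + 2^2 = 10
print(adomian_quadratic([1, 2, 3]))  # [1, 4, 10]
```

Each A_n depends only on u_0..u_n, which is what lets the recurrence generate solution components one order at a time.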
Model-based normalization for iterative 3D PET image
International Nuclear Information System (INIS)
Bai, B.; Li, Q.; Asma, E.; Leahy, R.M.; Holdsworth, C.H.; Chatziioannou, A.; Tai, Y.C.
2002-01-01
We describe a method for normalization in 3D PET for use with maximum a posteriori (MAP) or other iterative model-based image reconstruction methods. This approach is an extension of previous factored normalization methods in which we include separate factors for detector sensitivity, geometric response, block effects and deadtime. Since our MAP reconstruction approach already models some of the geometric factors in the forward projection, the normalization factors must be modified to account only for effects not already included in the model. We describe a maximum likelihood approach to joint estimation of the count-rate independent normalization factors, which we apply to data from a uniform cylindrical source. We then compute block-wise and block-profile deadtime correction factors using singles and coincidence data, respectively, from a multiframe cylindrical source. We have applied this method for reconstruction of data from the Concorde microPET P4 scanner. Quantitative evaluation of this method using well-counter measurements of activity in a multicompartment phantom compares favourably with normalization based directly on cylindrical source measurements. (author)
An Iterative Load Disaggregation Approach Based on Appliance Consumption Pattern
Directory of Open Access Journals (Sweden)
Huijuan Wang
2018-04-01
Full Text Available Non-intrusive load monitoring (NILM), monitoring single-appliance consumption levels by decomposing the aggregated energy consumption, is a novel and economic technology that is beneficial to energy utilities and the development of energy demand management strategies. Hardware costs of high-frequency sampling and the computational complexity of the algorithms have hampered large-scale application of NILM. However, low-frequency sampling data show poor performance in event detection when multiple appliances are turned on simultaneously. In this paper, we contribute an iterative load disaggregation approach that is based on appliance consumption patterns (ILDACP). Our approach combines the Fuzzy C-means clustering algorithm, which provides an initial appliance operating status, and sub-sequence searching Dynamic Time Warping, which retrieves single-appliance energy consumption based on the typical power consumption pattern. Results show that the proposed approach effectively and accurately disaggregates power consumption, and is suitable for situations where different appliances are operated simultaneously. Also, the approach has lower computational complexity than the Hidden Markov Model method and is easy to implement in the household without installing special equipment.
Robust Adaptive LCMV Beamformer Based On An Iterative Suboptimal Solution
Directory of Open Access Journals (Sweden)
Xiansheng Guo
2015-06-01
Full Text Available The main drawback of the closed-form solution of the linearly constrained minimum variance (CF-LCMV) beamformer is the dilemma of acquiring a long observation time for stable covariance matrix estimates versus a short observation time to track the dynamic behavior of targets, leading to poor performance at low signal-to-noise ratio (SNR), low jammer-to-noise ratios (JNRs) and small numbers of snapshots. Additionally, CF-LCMV suffers from a heavy computational burden, which mainly comes from the two matrix inverse operations needed to compute the optimal weight vector. In this paper, we derive a low-complexity Robust Adaptive LCMV beamformer based on an Iterative Suboptimal solution (RAIS-LCMV) using the conjugate gradient (CG) optimization method. The merit of our proposed method is threefold. Firstly, the RAIS-LCMV beamformer reduces the complexity of CF-LCMV remarkably. Secondly, the RAIS-LCMV beamformer can adjust its output adaptively based on the measurements, with comparable convergence speed. Finally, the RAIS-LCMV algorithm has robust performance against low SNR, low JNRs, and small numbers of snapshots. Simulation results demonstrate the superiority of our proposed algorithms.
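The CG engine at the heart of such an iterative beamformer is the standard conjugate-gradient iteration for a symmetric positive definite system; a bare sketch (without the LCMV constraints or adaptivity of the paper) looks like:

```python
import numpy as np

def cg(A, b, n_iter=50, tol=1e-10):
    """Plain conjugate-gradient iteration for A x = b with A symmetric
    positive definite.  Only matrix-vector products are needed, which is
    what removes the explicit matrix inversions of a closed-form solver."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    for _ in range(n_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# SPD "covariance-like" matrix; CG reaches the solution without inverting it.
M = np.array([[4.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]])
b = np.array([1.0, 2.0, 3.0])
x = cg(M, b)
print(np.linalg.norm(M @ x - b))   # essentially zero after <= 3 iterations
```

In exact arithmetic CG terminates in at most n steps for an n × n system, which is why truncating the iteration early yields a cheap suboptimal, yet adaptive, weight update.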
Goodenberger, Martin H; Wagner-Bartak, Nicolaus A; Gupta, Shiva; Liu, Xinming; Yap, Ramon Q; Sun, Jia; Tamm, Eric P; Jensen, Corey T
The purpose of this study was to compare abdominopelvic computed tomography images reconstructed with adaptive statistical iterative reconstruction-V (ASIR-V) with model-based iterative reconstruction (Veo 3.0), ASIR, and filtered back projection (FBP). Abdominopelvic computed tomography scans for 36 patients (26 males and 10 females) were reconstructed using FBP, ASIR (80%), Veo 3.0, and ASIR-V (30%, 60%, 90%). Mean ± SD patient age was 32 ± 10 years with mean ± SD body mass index of 26.9 ± 4.4 kg/m². Images were reviewed by 2 independent readers in a blinded, randomized fashion. Hounsfield unit, noise, and contrast-to-noise ratio (CNR) values were calculated for each reconstruction algorithm for further comparison. Phantom evaluation of low-contrast detectability (LCD) and high-contrast resolution was performed. Adaptive statistical iterative reconstruction-V 30%, ASIR-V 60%, and ASIR 80% were generally superior qualitatively compared with ASIR-V 90%, Veo 3.0, and FBP, with respective CNR values of 5.54 ± 2.39, 8.78 ± 3.15, and 3.49 ± 1.77. Veo 3.0 and ASIR 80% had the best and worst spatial resolution, respectively. Adaptive statistical iterative reconstruction-V 30% and ASIR-V 60% provided the best combination of qualitative and quantitative performance. Adaptive statistical iterative reconstruction 80% was qualitatively equivalent but demonstrated inferior spatial resolution and LCD.
Fink, Wolfgang; Micol, Daniel
2006-01-01
We describe a computer eye model that allows for aspheric surfaces and a three-dimensional computer-based ray-tracing technique to simulate optical properties of the human eye and visual perception under various eye defects. Eye surfaces, such as the cornea, eye lens, and retina, are modeled or approximated by a set of Zernike polynomials that are fitted to input data for the respective surfaces. A ray-tracing procedure propagates light rays using Snell’s law of refraction from an input objec...
Comparison Between Polynomial, Euler Beta-Function and Expo-Rational B-Spline Bases
Kristoffersen, Arnt R.; Dechevsky, Lubomir T.; Lakså, Arne; Bang, Børre
2011-12-01
Euler Beta-function B-splines (BFBS) are the practically most important instance of generalized expo-rational B-splines (GERBS) which are not true expo-rational B-splines (ERBS). BFBS do not enjoy the full range of the superproperties of ERBS but, while ERBS are special functions computable by very rapidly converging yet approximate numerical quadrature algorithms, BFBS are explicitly computable piecewise polynomials (for integer multiplicities), similar to classical Schoenberg B-splines. In the present communication we define, compute and visualize for the first time all possible BFBS of degree up to 3 which provide Hermite interpolation in three consecutive knots of multiplicity up to 3, i.e., the function is interpolated together with its derivatives of order up to 2. We compare the BFBS obtained for different degrees and multiplicities among themselves and versus the classical Schoenberg polynomial B-splines and the true ERBS for the considered knots. The results of the graphical comparison are discussed from an analytical point of view. For the numerical computation and visualization of the new B-splines we have used Maple 12.
Fabrication and correction of freeform surface based on Zernike polynomials by slow tool servo
Cheng, Yuan-Chieh; Hsu, Ming-Ying; Peng, Wei-Jei; Hsu, Wei-Yao
2017-10-01
Recently, freeform surfaces have been widely used in optical systems because they offer additional degrees of freedom to improve optical imaging performance. Freeform optics fabrication integrates freeform optical design, precision freeform manufacturing, freeform metrology and a freeform compensation method that corrects the form deviation of the surface arising from the production process, and thereby provides more flexibility and better performance. This paper focuses on the fabrication and correction of the freeform surface. In this study, multi-axis ultra-precision manufacturing is used to upgrade the quality of the freeform surface. The machine is equipped with a positioning C-axis and has the CXZ machining function, which is also called the slow tool servo (STS) function. The Zernike-polynomial-based compensation method was successfully verified; it corrects the form deviation of the freeform surface. Finally, the freeform surface is measured experimentally by an Ultrahigh Accurate 3D Profilometer (UA3P), and the form error is compensated with Zernike polynomial fitting to improve the form accuracy of the freeform surface.
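The compensation idea, fitting the measured form deviation with Zernike terms by least squares and subtracting the fit, can be sketched on a unit disk with a few low-order terms (the sample points and coefficients below are synthetic assumptions, not measured UA3P data):

```python
import numpy as np

# Fit a "measured" freeform deviation map with a few low-order Zernike
# terms (piston, x/y tilt, defocus) by linear least squares; the fitted
# part is what a compensation machining pass would remove.
rng = np.random.default_rng(2)
r = np.sqrt(rng.uniform(0, 1, 500))          # uniform samples on unit disk
t = rng.uniform(0, 2 * np.pi, 500)

basis = np.column_stack([
    np.ones_like(r),        # piston
    r * np.cos(t),          # tilt x
    r * np.sin(t),          # tilt y
    2 * r**2 - 1,           # defocus
])
c_true = np.array([0.1, -0.3, 0.2, 0.5])
surface = basis @ c_true                      # synthetic deviation map

c_fit, *_ = np.linalg.lstsq(basis, surface, rcond=None)
residual = surface - basis @ c_fit
print(c_fit)                                  # recovers c_true
print(np.linalg.norm(residual))               # ~0: deviation fully explained
```

In practice the residual after subtracting the fitted Zernike terms is the form error left for the next correction iteration.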
Parallelization of the model-based iterative reconstruction algorithm DIRA
International Nuclear Information System (INIS)
Oertenberg, A.; Sandborg, M.; Alm Carlsson, G.; Malusek, A.; Magnusson, M.
2016-01-01
New paradigms for parallel programming have been devised to simplify software development on multi-core processors and many-core graphical processing units (GPU). Despite their obvious benefits, the parallelization of existing computer programs is not an easy task. In this work, the use of the Open Multiprocessing (OpenMP) and Open Computing Language (OpenCL) frameworks is considered for the parallelization of the model-based iterative reconstruction algorithm DIRA with the aim to significantly shorten the code's execution time. Selected routines were parallelized using OpenMP and OpenCL libraries; some routines were converted from MATLAB to C and optimised. Parallelization of the code with the OpenMP was easy and resulted in an overall speedup of 15 on a 16-core computer. Parallelization with OpenCL was more difficult owing to differences between the central processing unit and GPU architectures. The resulting speedup was substantially lower than the theoretical peak performance of the GPU; the cause was explained. (authors)
Prospective iterative trial of proteasome inhibitor-based desensitization.
Woodle, E S; Shields, A R; Ejaz, N S; Sadaka, B; Girnita, A; Walsh, R C; Alloway, R R; Brailey, P; Cardi, M A; Abu Jawdeh, B G; Roy-Chaudhury, P; Govil, A; Mogilishetty, G
2015-01-01
A prospective iterative trial of proteasome inhibitor (PI)-based therapy for reducing HLA antibody (Ab) levels was conducted in five phases differing in bortezomib dosing density and plasmapheresis timing. Phases included 1 or 2 bortezomib cycles (1.3 mg/m(2) × 6-8 doses), one rituximab dose and plasmapheresis. HLA Abs were measured by solid phase and flow cytometry (FCM) assays. Immunodominant Ab (iAb) was defined as highest HLA Ab level. Forty-four patients received 52 desensitization courses (7 patients enrolled in multiple phases): Phase 1 (n = 20), Phase 2 (n = 12), Phase 3 (n = 10), Phase 4 (n = 5), Phase 5 (n = 5). iAb reductions were observed in 38 of 44 (86%) patients and persisted up to 10 months. In Phase 1, a 51.5% iAb reduction was observed at 28 days with bortezomib alone. iAb reductions increased with higher bortezomib dosing densities and included class I, II, and public antigens (HLA DRβ3, HLA DRβ4 and HLA DRβ5). FCM median channel shifts decreased in 11/11 (100%) patients by a mean of 103 ± 54 mean channel shifts (log scale). Nineteen out of 44 patients (43.2%) were transplanted with low acute rejection rates (18.8%) and de novo DSA formation (12.5%). In conclusion, PI-based desensitization consistently and durably reduces HLA Ab levels providing an alternative to intravenous immune globulin-based desensitization. © Copyright 2014 The American Society of Transplantation and the American Society of Transplant Surgeons.
New concurrent iterative methods with monotonic convergence
Energy Technology Data Exchange (ETDEWEB)
Yao, Qingchuan [Michigan State Univ., East Lansing, MI (United States)
1996-12-31
This paper proposes new concurrent iterative methods that do not use any derivatives for finding all zeros of polynomials simultaneously. The new methods converge monotonically for both simple and multiple real zeros of polynomials and are quadratically convergent. The corresponding accelerated concurrent iterative methods are obtained as well. The new methods are good candidates for application in solving symmetric eigenproblems.
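A classical point of comparison is the Durand-Kerner (Weierstrass) iteration, which is likewise derivative-free and refines all zeros simultaneously, though it is not the monotone method proposed in the paper; a minimal sketch:

```python
from functools import reduce

def durand_kerner(coeffs, n_iter=100):
    """Durand-Kerner (Weierstrass) iteration: refine approximations to ALL
    zeros of a monic polynomial simultaneously, using only polynomial
    evaluations (no derivatives).  coeffs are highest-degree first."""
    n = len(coeffs) - 1
    p = lambda x: sum(c * x ** (n - k) for k, c in enumerate(coeffs))
    z = [(0.4 + 0.9j) ** k for k in range(1, n + 1)]  # distinct seeds
    for _ in range(n_iter):
        z = [zi - p(zi) / reduce(lambda a, b: a * b,
                                 (zi - zj for j, zj in enumerate(z) if j != i), 1)
             for i, zi in enumerate(z)]
    return z

# x^3 - 6x^2 + 11x - 6 = (x - 1)(x - 2)(x - 3)
roots = durand_kerner([1, -6, 11, -6])
print(sorted(round(r.real, 6) for r in roots))  # [1.0, 2.0, 3.0]
```

Each update divides the residual p(z_i) by the product of distances to the other current approximations, so all zeros are corrected concurrently in one sweep.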
All-Pole Recursive Digital Filters Design Based on Ultraspherical Polynomials
Directory of Open Access Journals (Sweden)
N. Stojanovic
2014-09-01
Full Text Available A simple method for the approximation of all-pole recursive digital filters, directly in the digital domain, is described. The transfer function of these filters, referred to as Ultraspherical filters, is controlled by the order of the Ultraspherical polynomial, nu. The parameter nu, restricted to be a nonnegative real number (nu ≥ 0), controls the ripple peaks in the passband of the magnitude response and enables a trade-off between the passband loss and the group delay response of the resulting filter. Chebyshev filters of the first and of the second kind, and also Legendre and Butterworth filters, are shown to be special cases of these all-pole recursive digital filters. Closed-form equations for the computation of the filter coefficients are provided. The design technique is illustrated with examples.
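The Ultraspherical (Gegenbauer) polynomial behind these filters satisfies a standard three-term recurrence, sketched below; the nu = 1 check against the Chebyshev polynomial of the second kind is one of the special cases mentioned in the abstract:

```python
def ultraspherical(n, nu, x):
    """Evaluate the Ultraspherical (Gegenbauer) polynomial C_n^nu(x) via
    the three-term recurrence:
      C_0 = 1,  C_1 = 2*nu*x,
      n*C_n = 2*x*(n + nu - 1)*C_{n-1} - (n + 2*nu - 2)*C_{n-2}."""
    if n == 0:
        return 1.0
    c_prev, c = 1.0, 2.0 * nu * x
    for k in range(2, n + 1):
        c_prev, c = c, (2.0 * x * (k + nu - 1.0) * c
                        - (k + 2.0 * nu - 2.0) * c_prev) / k
    return c

# For nu = 1 the Ultraspherical polynomials reduce to Chebyshev polynomials
# of the second kind: C_2^1(x) = U_2(x) = 4x^2 - 1, C_3^1(x) = 8x^3 - 4x.
print(ultraspherical(2, 1.0, 0.5))   # 4*0.25 - 1 = 0.0
print(ultraspherical(3, 1.0, 0.5))   # 8*0.125 - 2 = -1.0
```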
Improving head and neck CTA with hybrid and model-based iterative reconstruction techniques
Niesten, J. M.; van der Schaaf, I. C.; Vos, P. C.; Willemink, MJ; Velthuis, B. K.
2015-01-01
AIM: To compare image quality of head and neck computed tomography angiography (CTA) reconstructed with filtered back projection (FBP), hybrid iterative reconstruction (HIR) and model-based iterative reconstruction (MIR) algorithms. MATERIALS AND METHODS: The raw data of 34 studies were
Directory of Open Access Journals (Sweden)
Juan Carlos Figueroa García
2011-12-01
The presented approach uses an iterative algorithm that finds stable solutions to problems with fuzzy parameters on both sides of an FLP problem. The algorithm is based on the soft-constraints method proposed by Zimmermann, combined with an iterative procedure that obtains a single optimal solution.
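As a toy illustration of Zimmermann's max-min (soft-constraints) idea, in which one maximizes the smallest membership degree over a fuzzy goal and a fuzzy constraint, the following sketch uses illustrative numbers; the grid search stands in for an LP solver, and all coefficients are made up:

```python
def membership(value, zero_at, full_at):
    """Linear membership: 0 at `zero_at`, 1 at `full_at`, clipped to [0, 1].
    Works for both increasing (goal) and decreasing (constraint) preferences."""
    t = (value - zero_at) / (full_at - zero_at)
    return max(0.0, min(1.0, t))

def zimmermann_max_min():
    # Toy problem (illustrative numbers): choose x to maximize profit 2x,
    # fuzzy goal: profit should reach about 10 (fully missed below 6),
    # fuzzy constraint: resource use 3x should stay below 12 (hard cap 15).
    best_x, best_lam = 0.0, -1.0
    steps = 5000
    for i in range(steps + 1):
        x = 5.0 * i / steps
        lam = min(membership(2 * x, 6.0, 10.0),    # goal satisfaction
                  membership(3 * x, 15.0, 12.0))   # constraint satisfaction
        if lam > best_lam:
            best_x, best_lam = x, lam
    return best_x, best_lam
```

The compromise lies where the two membership degrees meet; for the numbers above that is x = 13/3 with λ = 2/3, which the grid search recovers.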
Oxidation of carbon based first wall materials of ITER
International Nuclear Information System (INIS)
Moormann, R.R.M.; Hinssen, H.K.; Wu, C.H.
2001-01-01
The safety relevance of oxidation reactions of carbon materials in fusion reactors is discussed. Because the tritium codeposited in ITER will probably exceed tolerable limits, countermeasures have to be developed: in this paper, ozone is tested as an oxidising agent for the removal of codeposited layers on thick a-C:D flakes from TEXTOR. In preceding experiments, the advantageous features of using ozonised air instead of ozonised oxygen, reported in the literature for reactions with graphite, were not found for nuclear-grade graphite. At 185 °C (458 K), ozone (0.8-3.4 vol.% in oxygen) is able to gasify the carbon content of these flakes at initial rates comparable to the initial rates in oxygen (21 kPa) for the same material at temperatures more than 200 K higher. The layer reduction rate in ozone drops rapidly with increasing burn-off, from about 0.9-2.0 μm/h to 0.20-0.25 μm/h, but in oxygen it drops to zero at all temperatures ≤ 450 °C (723 K) before the carbon is completely gasified. Altogether, ozone seems to be a promising oxidising agent for the removal of codeposited layers, but further studies are necessary on the dependence of the rate on temperature and ozone concentration, as well as on other kinds of codeposited layers. Furthermore, the optimum reaction temperature, considering the limited thermal stability of ozone, must be determined, and the general reaction mechanism has to be studied. Besides these examinations of codeposited layers, a short overview of the status of our oxidation studies on different types of fusion-relevant C-based materials is given; open problems in this field are outlined. (author)
Wang, Y. P.; Lu, Z. P.; Sun, D. S.; Wang, N.
2016-01-01
In order to better express the characteristics of satellite clock bias (SCB) and improve SCB prediction precision, this paper proposes a new SCB prediction model that takes the physical characteristics of the space-borne atomic clock, the cyclic variation, and the random part of SCB into consideration. First, the new model employs a quadratic polynomial model with periodic terms to fit and extract the trend and cyclic terms of SCB; then, based on the characteristics of the fitting residuals, a time-series ARIMA (Auto-Regressive Integrated Moving Average) model is used to model the residuals; eventually, the results from the two models are combined to obtain the final SCB prediction values. Finally, this paper uses precise SCB data from the IGS (International GNSS Service) to conduct prediction tests, and the results show that the proposed model is effective and has better prediction performance than the quadratic polynomial model, the grey model, and the ARIMA model. In addition, the new method also overcomes the insufficiency of the ARIMA model in model recognition and order determination.
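The first stage, fitting a quadratic trend plus periodic terms by least squares and handing the residuals to a second-stage model, can be sketched as follows (the period and the coefficients used in the example are illustrative; the paper's ARIMA residual stage is only indicated in the docstring):

```python
import numpy as np

def fit_trend_and_periodic(t, y, period):
    """Least-squares fit of y(t) ~ a0 + a1*t + a2*t^2
    + b*sin(2*pi*t/period) + c*cos(2*pi*t/period).

    Returns the coefficient vector and the fitting residuals; a
    second-stage model (e.g. ARIMA in the paper) would then describe
    the residual series.
    """
    w = 2.0 * np.pi / period
    A = np.column_stack([np.ones_like(t), t, t**2,
                         np.sin(w * t), np.cos(w * t)])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    residuals = y - A @ coef
    return coef, residuals
```

On synthetic data generated from a known quadratic-plus-sinusoid, the least-squares stage recovers the generating coefficients exactly (up to rounding), leaving near-zero residuals for the second stage.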
Directory of Open Access Journals (Sweden)
Tianjin Huang
2017-08-01
We present in this paper a polynomial fitting method, applicable to segments of footprints measured by the Geoscience Laser Altimeter System (GLAS), to estimate glacier thickness change. Our modification makes the method applicable to complex topography, such as a large mountain glacier. After a full analysis of the planar fitting method to characterize the errors of estimates due to complex topography, we developed an improved fitting method that adjusts a binary (bivariate) polynomial surface to the local topography. The improved method and the planar fitting method were tested on the accumulation areas of the Naimona'nyi and Yanong glaciers on along-track facets with lengths of 1000 m, 1500 m, 2000 m, and 2500 m, respectively. The results show that the improved method gives more reliable estimates of changes in elevation than planar fitting. The improved method was also tested on the Guliya glacier, with a large and relatively flat area, and on the Chasku Muba glacier, with very complex topography. The results at these test sites demonstrate that the improved method can estimate glacier thickness change on glaciers with a large area and complex topography. Additionally, the improved method, based on GLAS data and the Shuttle Radar Topography Mission Digital Elevation Model (SRTM-DEM), can estimate glacier thickness change from 2000 to 2008/2009, since it takes the 2000 SRTM-DEM as a reference; this is a longer period than the 2004 to 2008/2009 span available when using the GLAS data alone with the planar fitting method.
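The core of the improved method, adjusting a bivariate quadratic surface to the local topography by least squares instead of a plane, can be sketched as follows (the quadratic form and the sample layout are assumptions for illustration, not the paper's exact parameterization):

```python
import numpy as np

def fit_quadratic_surface(x, y, z):
    """Least-squares fit of a bivariate quadratic surface
    z ~ c0 + c1*x + c2*y + c3*x^2 + c4*x*y + c5*y^2
    to scattered elevation samples. The curved surface can follow
    local relief that a planar fit would over-smooth."""
    A = np.column_stack([np.ones_like(x), x, y, x**2, x * y, y**2])
    coef, *_ = np.linalg.lstsq(A, z, rcond=None)
    return coef

def eval_quadratic_surface(coef, x, y):
    """Evaluate the fitted surface at point(s) (x, y)."""
    return (coef[0] + coef[1] * x + coef[2] * y
            + coef[3] * x**2 + coef[4] * x * y + coef[5] * y**2)
```

Elevation differences between repeat footprints and the fitted local surface then serve as the thickness-change estimates.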
DEFF Research Database (Denmark)
Dieterle, Mischa; Horstmeyer, Thomas; Berthold, Jost
2012-01-01
Skeleton-based programming is an area of increasing relevance with upcoming highly parallel hardware, since it substantially facilitates parallel programming and separates concerns. When parallel algorithms expressed by skeletons involve iterations – applying the same algorithm repeatedly … a particular skeleton ad-hoc for repeated execution turns out to be considerably complicated, and raises general questions about introducing state into a stateless parallel computation. In addition, one would strongly prefer an approach which leaves the original skeleton intact, and only uses it as a building block inside a bigger structure. In this work, we present a general framework for skeleton iteration and discuss requirements and variations of iteration control and iteration body. Skeleton iteration is expressed by synchronising a parallel iteration body skeleton with a (likewise parallel) state …
Energy Technology Data Exchange (ETDEWEB)
Shirota, Go; Maeda, Eriko; Namiki, Yoko; Bari, Razibul; Abe, Osamu [The University of Tokyo, Department of Radiology, Graduate School of Medicine, Tokyo (Japan); Ino, Kenji [The University of Tokyo Hospital, Imaging Center, Tokyo (Japan); Torigoe, Rumiko [Toshiba Medical Systems, Tokyo (Japan)
2017-10-15
A full iterative reconstruction algorithm is available, but its diagnostic quality in pediatric cardiac CT is unknown. The aim was to compare the image quality of two algorithms, full and hybrid iterative reconstruction, in pediatric cardiac CT. We included 49 children with congenital cardiac anomalies who underwent cardiac CT. We compared the quality of images reconstructed using the two algorithms (full and hybrid iterative reconstruction) based on a 3-point scale for the delineation of the following anatomical structures: atrial septum, ventricular septum, right atrium, right ventricle, left atrium, left ventricle, main pulmonary artery, ascending aorta, aortic arch including the patent ductus arteriosus, descending aorta, right coronary artery and left main trunk. We evaluated beam-hardening artifacts from contrast-enhancement material using a 3-point scale, and overall image quality using a 5-point scale. We also compared image noise, signal-to-noise ratio and contrast-to-noise ratio between the algorithms. The overall image quality was significantly higher with full iterative reconstruction than with hybrid iterative reconstruction (3.67±0.79 vs. 3.31±0.89, P=0.0072). The evaluation scores for most of the gross structures were higher with full iterative reconstruction than with hybrid iterative reconstruction. There was no significant difference between full and hybrid iterative reconstruction for the presence of beam-hardening artifacts. Image noise was significantly lower, while signal-to-noise ratio and contrast-to-noise ratio were significantly higher, with full iterative reconstruction. The diagnostic quality was superior in cardiac CT images reconstructed with electrocardiogram-gated full iterative reconstruction. (orig.)
Kruglyakov, Mikhail; Kuvshinov, Alexey
2018-05-01
3-D interpretation of electromagnetic (EM) data of different origin and scale is becoming common practice worldwide. However, 3-D EM numerical simulation (modeling)—a key part of any 3-D EM data analysis—with realistic levels of complexity, accuracy and spatial detail remains computationally challenging. We present a novel, efficient 3-D numerical solver based on a volume integral equation (IE) method. The efficiency is achieved by using a high-order polynomial (HOP) basis instead of the zero-order (piecewise-constant) basis invoked in routinely used IE-based solvers. We demonstrate that use of the HOP basis substantially decreases the number of unknowns (preserving the same accuracy), with a corresponding speed-up and memory savings.
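A one-dimensional analogy of the basis argument (not the authors' 3-D integral-equation solver): for the same number of unknowns, a few cells carrying cubic polynomials approximate a smooth field far better than many piecewise-constant cells.

```python
import numpy as np

# 1-D analogy: spend 8 unknowns either on 8 piecewise-constant cells
# or on 2 cells x 4 cubic coefficients, and compare the L2 error.
f = lambda x: np.sin(np.pi * x)          # a smooth "field" on [0, 1]
xs = np.linspace(0.0, 1.0, 4001)         # dense evaluation grid

def l2_error(n_cells, degree):
    """Discrete L2 error of a per-cell polynomial least-squares fit of f."""
    err2 = 0.0
    for k in range(n_cells):
        a, b = k / n_cells, (k + 1) / n_cells
        m = (xs >= a) & (xs <= b)
        coef = np.polyfit(xs[m], f(xs[m]), degree)   # per-cell LSQ fit
        resid = f(xs[m]) - np.polyval(coef, xs[m])
        err2 += (resid ** 2).mean() * (b - a)
    return err2 ** 0.5

err_const = l2_error(8, 0)   # 8 unknowns: 8 constant cells
err_cubic = l2_error(2, 3)   # 8 unknowns: 2 cells x 4 cubic coefficients
```

With equal unknown counts, the cubic (higher-order) basis is more than an order of magnitude more accurate here, which is the mechanism behind the claimed reduction in unknowns at fixed accuracy.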
A New Six-Parameter Model Based on Chebyshev Polynomials for Solar Cells
Directory of Open Access Journals (Sweden)
Shu-xian Lun
2015-01-01
This paper presents a new current-voltage (I-V) model for solar cells. It has been proved that the series resistance of a solar cell is related to temperature. However, the existing five-parameter model ignores the temperature dependence of series resistance and therefore only accurately predicts the performance of monocrystalline silicon solar cells. This paper thus uses Chebyshev polynomials to describe the relationship between series resistance and temperature, introducing a new parameter, the temperature coefficient of series resistance, into the single-diode model. A new six-parameter model for solar cells is then established. This new model improves the accuracy of the traditional single-diode model and reflects the temperature dependence of series resistance. To validate the accuracy of the six-parameter model, five kinds of silicon solar cells with different technology types, including monocrystalline silicon, polycrystalline silicon, thin-film silicon, and triple-junction amorphous silicon, are tested at different irradiance and temperature conditions. Experimental results show that the proposed six-parameter model is an I-V model with moderate computational complexity and high precision.
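A sketch of the idea: the single-diode equation with a series resistance given by a short Chebyshev series in normalized temperature. All parameter values below are illustrative, not the paper's fitted six parameters, and the simple fixed-point solver is adequate only near short circuit (a bracketing solver would be more robust near open circuit):

```python
import math

def chebyshev_series(coeffs, t):
    """Evaluate sum_k c_k * T_k(t) for t in [-1, 1] via the T_k recurrence."""
    t0, t1, total = 1.0, t, coeffs[0]
    if len(coeffs) > 1:
        total += coeffs[1] * t
    for c in coeffs[2:]:
        t0, t1 = t1, 2.0 * t * t1 - t0
        total += c * t1
    return total

def diode_current(v, temp_c, *, iph=5.0, i0=1e-9, n=1.3, rsh=100.0,
                  rs_cheb=(0.02, 0.005), t_range=(0.0, 80.0)):
    """Single-diode model I = Iph - I0*(exp((V+I*Rs)/(n*Vt)) - 1) - (V+I*Rs)/Rsh,
    with Rs(T) given by a short Chebyshev series (illustrative coefficients).
    Solved by fixed-point iteration, suitable near the short-circuit region."""
    k_b, q = 1.380649e-23, 1.602176634e-19
    vt = k_b * (temp_c + 273.15) / q
    # Map temperature into [-1, 1] for the Chebyshev argument.
    t_hat = 2.0 * (temp_c - t_range[0]) / (t_range[1] - t_range[0]) - 1.0
    rs = chebyshev_series(rs_cheb, t_hat)
    i = iph
    for _ in range(200):
        i_new = iph - i0 * (math.exp((v + i * rs) / (n * vt)) - 1.0) - (v + i * rs) / rsh
        if abs(i_new - i) < 1e-12:
            break
        i = i_new
    return i
```

Because Rs enters through `chebyshev_series`, refitting the temperature behaviour of the cell only changes the `rs_cheb` coefficients, which is the role the sixth parameter plays in the model above.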
Spatial Block Codes Based on Unitary Transformations Derived from Orthonormal Polynomial Sets
Directory of Open Access Journals (Sweden)
Mandyam Giridhar D
2002-01-01
Recent work in the development of diversity transformations for wireless systems has produced a theoretical framework for space-time block codes. Such codes are beneficial in that they may be easily concatenated with interleaved trellis codes and yet still be decoded separately. In this paper, a theoretical framework is provided for the generation of spatial block codes of arbitrary dimensionality through the use of orthonormal polynomial sets. While these codes cannot maximize theoretical diversity performance for a given dimensionality, they still provide performance improvements over the single-antenna case. In particular, their application to closed-loop transmit diversity systems is proposed, as the bandwidth necessary for feedback using these types of codes is fixed regardless of the number of antennas used. Simulation data is provided demonstrating the performance of these codes under this implementation, compared not only to the single-antenna case but also to the two-antenna code derived from the Radon-Hurwitz construction.
International Nuclear Information System (INIS)
Yasaka, Koichiro; Katsura, Masaki; Akahane, Masaaki; Sato, Jiro; Matsuda, Izuru; Ohtomo, Kuni
2016-01-01
Iterative reconstruction methods have attracted attention for reducing radiation doses in computed tomography (CT). To investigate the detectability of pancreatic calcification using dose-reduced CT reconstructed with model-based iterative reconstruction (MBIR) and adaptive statistical iterative reconstruction (ASIR). This prospective study, approved by the Institutional Review Board, included 85 patients (57 men, 28 women; mean age, 69.9 years; mean body weight, 61.2 kg). Unenhanced CT was performed three times with different radiation doses (reference-dose CT [RDCT], low-dose CT [LDCT], ultralow-dose CT [ULDCT]). From RDCT, LDCT, and ULDCT, images were reconstructed with filtered back projection (R-FBP, used for establishing the reference standard), ASIR (L-ASIR), and MBIR and ASIR (UL-MBIR and UL-ASIR), respectively. A lesion (pancreatic calcification) detection test was performed by two blinded radiologists with a five-point certainty-level scale. Dose-length products of RDCT, LDCT, and ULDCT were 410, 97, and 36 mGy-cm, respectively. Nine patients had pancreatic calcification. The sensitivity for detecting pancreatic calcification with UL-MBIR was high (0.67-0.89) compared to L-ASIR or UL-ASIR (0.11-0.44), and a significant difference was seen between UL-MBIR and UL-ASIR for one reader (P = 0.014). The area under the receiver-operating characteristic curve for UL-MBIR (0.818-0.860) was comparable to that for L-ASIR (0.696-0.844). The specificity was lower with UL-MBIR (0.79-0.92) than with L-ASIR or UL-ASIR (0.96-0.99), and a significant difference was seen for one reader (P < 0.01). In UL-MBIR, pancreatic calcification can be detected with high sensitivity; however, attention should be paid to the slightly lower specificity.
Virtual fringe projection system with nonparallel illumination based on iteration
International Nuclear Information System (INIS)
Zhou, Duo; Wang, Zhangying; Gao, Nan; Zhang, Zonghua; Jiang, Xiangqian
2017-01-01
Fringe projection profilometry has been widely applied in many fields. To set up an ideal measuring system, a virtual fringe projection technique has been studied to assist in the design of hardware configurations. However, existing virtual fringe projection systems use parallel illumination and have a fixed optical framework. This paper presents a virtual fringe projection system with nonparallel illumination. Using an iterative method to calculate the intersection points between rays and reference planes or object surfaces, the proposed system can simulate projected fringe patterns and captured images. A new explicit calibration method is presented to validate the precision of the system. Simulated results indicate that the proposed iterative method outperforms previous systems. Our virtual system can be applied to error analysis and algorithm optimization, and can help operators find ideal system parameter settings for actual measurements. (paper)
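With nonparallel (diverging) rays, the ray-surface intersection generally has no closed form, which is where the iterative step comes in. A minimal sketch for a height-field surface follows; the fixed-point update rule is a generic scheme assumed for illustration, not taken from the paper:

```python
import math

def intersect_ray_surface(origin, direction, height, tol=1e-10, max_iter=100):
    """Iteratively find t such that origin + t*direction lies on z = height(x, y).

    Starting from the intersection with the horizontal plane through the
    surface value under the ray origin, each step re-evaluates the surface
    under the current (x, y) footprint; for mildly sloped surfaces this
    fixed-point iteration converges quickly.
    """
    ox, oy, oz = origin
    dx, dy, dz = direction
    t = (height(ox, oy) - oz) / dz        # plane-based initial guess
    for _ in range(max_iter):
        x, y = ox + t * dx, oy + t * dy
        t_new = (height(x, y) - oz) / dz  # re-intersect under updated footprint
        if abs(t_new - t) < tol:
            return t_new
        t = t_new
    return t
```

Running this per projector pixel yields the simulated fringe position on the object, from which virtual camera images can be rendered.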
Irreducible multivariate polynomials obtained from polynomials in ...
Indian Academy of Sciences (India)
Directory of Open Access Journals (Sweden)
Jianping Liu
2016-01-01
An operational matrix technique is proposed to solve variable-order fractional differential-integral equations based on the second kind of Chebyshev polynomials. The differentiation and integration operational matrices are derived from the second-kind Chebyshev polynomials. Using the two types of operational matrices, the original equation is transformed into the arithmetic product of several dependent matrices, which can be viewed as an algebraic system after adopting the collocation points. The numerical solution of the original equation is then obtained by solving this algebraic system. Finally, several examples show that the numerical algorithm is computationally efficient.
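The collocation step, turning a functional identity into a finite algebraic system by enforcing it at the roots of a second-kind Chebyshev polynomial, can be sketched as follows (expanding a known function rather than solving a fractional equation, for brevity):

```python
import numpy as np

def chebyshev_u(n, x):
    """Second-kind Chebyshev polynomial U_n(x) by the three-term recurrence."""
    u0, u1 = 1.0, 2.0 * x
    if n == 0:
        return u0
    for _ in range(n - 1):
        u0, u1 = u1, 2.0 * x * u1 - u0
    return u1

def collocation_coefficients(f, n_basis):
    """Expand f on U_0..U_{n-1}: enforcing the expansion at n collocation
    points turns the functional identity into an n x n algebraic system."""
    # Collocation points: roots of U_n, x_k = cos(k*pi/(n+1)), k = 1..n.
    pts = np.array([np.cos((k + 1) * np.pi / (n_basis + 1))
                    for k in range(n_basis)])
    A = np.array([[chebyshev_u(j, x) for j in range(n_basis)] for x in pts])
    b = np.array([f(x) for x in pts])
    return np.linalg.solve(A, b)
```

For example, expanding f(x) = x² on three basis functions returns the coefficients (1/4, 0, 1/4), matching the identity x² = (U₀ + U₂)/4; in the paper the same mechanism is applied with operational matrices replacing exact differentiation and integration.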
Diouf, C.; Younes, M.; Noaja, A.; Azou, S.; Telescu, M.; Morel, P.; Tanguy, N.
2017-11-01
The linearization performance of various digital baseband pre-distortion schemes is evaluated in this paper for a coherent optical OFDM (CO-OFDM) transmitter employing a semiconductor optical amplifier (SOA). In particular, the benefits of using a parallel two-box (PTB) behavioral model, combining a static nonlinear function with a memory polynomial (MP) model, are investigated for mitigating the system nonlinearities and compared to the memoryless and MP models. Moreover, the robustness of the predistorters under different operating conditions and system uncertainties is assessed based on a precise SOA physical model. The PTB scheme proves to be the most effective linearization technique for the considered setup, with an excellent performance-complexity tradeoff over a wide range of conditions.
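A memory polynomial model and its least-squares identification, one of the two boxes of the PTB predistorter, can be sketched as follows (the model orders and coefficients are illustrative; practical predistorters typically keep odd-order terms only):

```python
import numpy as np

def memory_polynomial_matrix(x, K, Q):
    """Regression matrix with columns x[n-q] * |x[n-q]|**(k-1),
    k = 1..K, q = 0..Q-1 (all orders kept here for simplicity)."""
    N = len(x)
    cols = []
    for q in range(Q):
        xq = np.concatenate([np.zeros(q, dtype=complex), x[:N - q]])
        for k in range(1, K + 1):
            cols.append(xq * np.abs(xq) ** (k - 1))
    return np.column_stack(cols)

# Identify MP coefficients from input/output data by least squares.
rng = np.random.default_rng(0)
x = (rng.standard_normal(256) + 1j * rng.standard_normal(256)) / np.sqrt(2)
true_a = np.array([1.0 + 0.1j, -0.05, 0.02j, 0.01], dtype=complex)  # K=2, Q=2
A = memory_polynomial_matrix(x, K=2, Q=2)
y = A @ true_a                 # synthetic "amplifier" output, noiseless
a_hat, *_ = np.linalg.lstsq(A, y, rcond=None)
```

Since the model is linear in its coefficients, identification reduces to a linear least-squares problem; in a PTB arrangement this MP branch runs in parallel with a static nonlinear function and their outputs are combined.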
Xu, Peng; Tian, Yin; Lei, Xu; Hu, Xiao; Yao, Dezhong
2008-12-01
How to localize neural electric activities within the brain effectively and precisely from scalp electroencephalogram (EEG) recordings is a critical issue in clinical neurology and cognitive neuroscience. In this paper, based on the charge source model and the iterative re-weighting strategy, we propose a new maximum-neighbor-weight based iterative sparse source imaging method, termed CMOSS (Charge source model based Maximum neighbOr weight Sparse Solution). Unlike the weight used in the focal underdetermined system solver (FOCUSS), where the weight for each point in the discrete solution space is updated independently across iterations, the newly designed weight for each point in each iteration is determined by the source solution of the last iteration at both the point and its neighbors. Using such a weight, the next iteration has a better chance of rectifying the local source-location bias present in the previous solution. Simulation studies comparing CMOSS with FOCUSS and LORETA for various source configurations were conducted on a realistic three-shell head model, and the results confirmed the validity of CMOSS for sparse EEG source localization. Finally, CMOSS was applied to localize sources elicited in a visual stimuli experiment, and the result was consistent with the source areas involved in visual processing reported in previous studies.
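The reweighting idea can be sketched as a FOCUSS-style iteration whose weights take the maximum of the previous solution's magnitude over each point's neighborhood (a simplified stand-in for the CMOSS weight; the forward matrix below is random rather than a head model):

```python
import numpy as np

def neighbor_weighted_focuss(A, b, neighbors, n_iter=20, eps=1e-9):
    """FOCUSS-style reweighted minimum-norm iterations.

    Each step solves a weighted least-squares problem x = W (A W)^+ b.
    The weight of each point is the maximum of |x| over the point and its
    neighbors from the previous iteration, so a neighbor of a strong source
    is not prematurely suppressed (simplified stand-in for the CMOSS rule).
    """
    n = A.shape[1]
    x = np.linalg.pinv(A) @ b          # minimum-norm starting solution
    for _ in range(n_iter):
        w = np.array([max(abs(x[j]) for j in [i] + neighbors[i])
                      for i in range(n)])
        W = np.diag(w + eps)           # eps keeps W invertible
        x = W @ np.linalg.pinv(A @ W) @ b
    return x
```

Each iterate stays consistent with the measurements while the reweighting progressively concentrates energy onto a sparse support.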
A generalized polynomial chaos based ensemble Kalman filter with high accuracy
International Nuclear Information System (INIS)
Li Jia; Xiu Dongbin
2009-01-01
As one of the most adopted sequential data assimilation methods in many areas, especially those involving complex nonlinear dynamics, the ensemble Kalman filter (EnKF) has been under extensive investigation regarding its properties and efficiency. Compared to other variants of the Kalman filter (KF), EnKF is straightforward to implement, as it employs random ensembles to represent solution states. This, however, introduces sampling errors that affect the accuracy of EnKF in a negative manner. Though sampling errors can be easily reduced by using a large number of samples, in practice this is undesirable as each ensemble member is a solution of the system of state equations and can be time consuming to compute for large-scale problems. In this paper we present an efficient EnKF implementation via generalized polynomial chaos (gPC) expansion. The key ingredients of the proposed approach involve (1) solving the system of stochastic state equations via the gPC methodology to gain efficiency; and (2) sampling the gPC approximation of the stochastic solution with an arbitrarily large number of samples, at virtually no additional computational cost, to drastically reduce the sampling errors. The resulting algorithm thus achieves a high accuracy at reduced computational cost, compared to the classical implementations of EnKF. Numerical examples are provided to verify the convergence property and accuracy improvement of the new algorithm. We also prove that for linear systems with Gaussian noise, the first-order gPC Kalman filter method is equivalent to the exact Kalman filter.
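The key ingredient, drawing an arbitrarily large ensemble from a gPC (Hermite) expansion at the cost of polynomial evaluations rather than PDE solves, followed by a standard scalar EnKF-style analysis step, can be sketched as follows (the expansion coefficients and observation values are illustrative):

```python
import random

def sample_gpc_hermite(coeffs, n_samples, rng):
    """Sample u(xi) = c0 + c1*He1(xi) + c2*He2(xi) with xi ~ N(0, 1),
    using probabilists' Hermite polynomials He1(x) = x, He2(x) = x^2 - 1.
    Each sample costs one polynomial evaluation, not one state-equation solve."""
    c0, c1, c2 = coeffs
    out = []
    for _ in range(n_samples):
        xi = rng.gauss(0.0, 1.0)
        out.append(c0 + c1 * xi + c2 * (xi * xi - 1.0))
    return out

rng = random.Random(42)
ensemble = sample_gpc_hermite((2.0, 0.5, 0.1), 50000, rng)

# Scalar EnKF-style analysis step from the sampled ensemble statistics.
mean_f = sum(ensemble) / len(ensemble)
var_f = sum((u - mean_f) ** 2 for u in ensemble) / (len(ensemble) - 1)
y_obs, r_obs = 2.6, 0.25        # observation and its error variance (illustrative)
gain = var_f / (var_f + r_obs)  # Kalman gain from the ensemble covariance
mean_a = mean_f + gain * (y_obs - mean_f)
```

Because the ensemble is essentially free to enlarge, the sampling error that plagues small classical EnKF ensembles can be driven down at negligible cost, which is the efficiency argument made above.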
Liu, Wanli
2017-03-08
The time delay calibration between Light Detection and Ranging (LiDAR) and Inertial Measurement Units (IMUs) is an essential prerequisite for their combined use. However, the correspondences between LiDAR and IMU measurements are usually unknown and thus cannot be used directly for time delay calibration. To solve this problem, this paper presents a fusion method based on the iterative closest point (ICP) algorithm and an iterated sigma point Kalman filter (ISPKF), combining the advantages of both. The ICP algorithm can precisely determine the unknown transformation between the LiDAR and IMU, and the ISPKF algorithm can optimally estimate the time delay calibration parameters. First, the coordinate transformation from the LiDAR frame to the IMU frame is realized. Second, the measurement model and time delay error model of the LiDAR and IMU are established. Third, the methodology of the ICP and ISPKF procedure is presented for LiDAR-IMU time delay calibration. Experimental results validate the proposed method and demonstrate that the time delay error can be accurately calibrated.
Yasaka, Koichiro; Katsura, Masaki; Akahane, Masaaki; Sato, Jiro; Matsuda, Izuru; Ohtomo, Kuni
2013-12-01
To evaluate dose reduction and image quality of abdominopelvic computed tomography (CT) reconstructed with model-based iterative reconstruction (MBIR) compared to adaptive statistical iterative reconstruction (ASIR). In this prospective study, 85 patients underwent referential-, low-, and ultralow-dose unenhanced abdominopelvic CT. Images were reconstructed with ASIR for low-dose (L-ASIR) and ultralow-dose CT (UL-ASIR), and with MBIR for ultralow-dose CT (UL-MBIR). Image noise was measured in the abdominal aorta and iliopsoas muscle. Subjective image analyses and a lesion detection study (adrenal nodules) were conducted by two blinded radiologists. A reference standard was established by a consensus panel of two different radiologists using referential-dose CT reconstructed with filtered back projection. Compared to low-dose CT, there was a 63% decrease in dose-length product with ultralow-dose CT. UL-MBIR had significantly lower image noise than L-ASIR and UL-ASIR. There was no significant difference between UL-MBIR and L-ASIR in diagnostic acceptability (p>0.65) or diagnostic performance for adrenal nodules (p>0.87). MBIR significantly improves image noise and streak artifacts compared to ASIR, and can achieve radiation dose reduction without severely compromising image quality.
Branched polynomial covering maps
DEFF Research Database (Denmark)
Hansen, Vagn Lundsgaard
1999-01-01
A Weierstrass polynomial with multiple roots in certain points leads to a branched covering map. With this as the guiding example, we formally define and study the notion of a branched polynomial covering map. We shall prove that many finite covering maps are polynomial outside a discrete branch set. Particular studies are made of branched polynomial covering maps arising from Riemann surfaces and from knots in the 3-sphere.
Bai , Shi; Bouvier , Cyril; Kruppa , Alexander; Zimmermann , Paul
2016-01-01
The general number field sieve (GNFS) is the most efficient algorithm known for factoring large integers. It consists of several stages, the first one being polynomial selection. The quality of the selected polynomials can be modelled in terms of size and root properties. We propose a new kind of polynomials for GNFS: with a new degree of freedom, we further improve the size property. We demonstrate the efficiency of our algorithm by exhibiting a better polynomial tha...
Claure, Yuri Navarro; Matsubara, Edson Takashi; Padovani, Carlos; Prati, Ronaldo Cristiano
2018-03-01
Traditional methods for estimating timing parameters in hydrological science require a rigorous study of the relations of flow resistance, slope, flow regime, watershed size, water velocity, and other local variables. These studies are mostly based on empirical observations, where the timing parameter is estimated using empirically derived formulas, and their application to other locations is not always direct: the locations where the equations are used should have characteristics comparable to those from which the equations were derived. To overcome this barrier, in this work we developed a data-driven approach to estimate timing parameters such as travel time. Our proposal estimates timing parameters using the historical data of the location itself, without adapting or reusing empirical formulas from other locations. The proposal uses only one variable measured at two different locations on the same river (for instance, two river-level measurements, one upstream and the other downstream). The recorded data from each location generate two time series. Our method aligns these two time series using derivative dynamic time warping (DDTW) and perceptually important points (PIP). From the resulting timing data, a polynomial function generalizes the data, inducing a polynomial water travel time estimator called PolyWaTT. To evaluate the potential of our proposal, we applied PolyWaTT to three different watersheds: a floodplain ecosystem located in the part of Brazil known as the Pantanal, the world's largest tropical wetland area; and the Missouri River and the Pearl River, in the United States of America. We compared our proposal with empirical formulas and a data-driven state-of-the-art method. The experimental results demonstrate that PolyWaTT showed a lower mean absolute error than all other methods tested in this study, and for longer distances the mean absolute error achieved by PolyWaTT is three times smaller than empirical
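The alignment stage can be illustrated with classical dynamic time warping (a simplified stand-in for the derivative DTW plus perceptually-important-points procedure used by the authors):

```python
def dtw_cost(a, b):
    """Classical dynamic time warping with squared local cost.

    Returns the accumulated cost of the best monotone alignment between
    sequences a and b; a time-shifted copy of a signal aligns at near-zero
    cost, whereas rigid sample-by-sample comparison penalizes the shift.
    """
    n, m = len(a), len(b)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = (a[i - 1] - b[j - 1]) ** 2
            D[i][j] = d + min(D[i - 1][j],      # stretch b
                              D[i][j - 1],      # stretch a
                              D[i - 1][j - 1])  # advance both
    return D[n][m]
```

Aligning an upstream level series with its delayed downstream counterpart this way yields the matched time pairs; PolyWaTT then fits a polynomial to those timing data to obtain the travel-time estimator.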
International Nuclear Information System (INIS)
Ren Xiaoan; Wu Wenquan; Xanthis, Leonidas S.
2011-01-01
Highlights: • New approach for stochastic computations based on polynomial chaos. • Development of a dynamically adaptive wavelet multiscale solver using space refinement. • Accurate capture of steep gradients and multiscale features in stochastic problems. • All scales of each random mode are captured on independent grids. • Numerical examples demonstrate the need for different space resolutions per mode. Abstract: In stochastic computations, or uncertainty quantification methods, the spectral approach based on the polynomial chaos expansion in random space leads to a coupled system of deterministic equations for the coefficients of the expansion. The size of this system increases drastically when the number of independent random variables and/or the order of the polynomial chaos expansion increases. This is invariably the case for large-scale simulations and/or problems involving steep gradients and other multiscale features; such features are variously reflected in each solution component or random/uncertainty mode, requiring the development of adaptive methods for their accurate resolution. In this paper we propose a new approach for treating such problems based on a dynamically adaptive wavelet methodology involving space refinement in physical space that allows all scales of each solution component to be refined independently of the rest. We exemplify this using the convection-diffusion model with random input data and present three numerical examples demonstrating the salient features of the proposed method. Thus we establish a new, elegant and flexible approach for stochastic problems with steep gradients and multiscale features based on polynomial chaos expansions.
Branched polynomial covering maps
DEFF Research Database (Denmark)
Hansen, Vagn Lundsgaard
2002-01-01
A Weierstrass polynomial with multiple roots in certain points leads to a branched covering map. With this as the guiding example, we formally define and study the notion of a branched polynomial covering map. We shall prove that many finite covering maps are polynomial outside a discrete branch set. Particular studies are made of branched polynomial covering maps arising from Riemann surfaces and from knots in the 3-sphere. (C) 2001 Elsevier Science B.V. All rights reserved.
Mechanical design of the ITER ion cyclotron heating launcher based on in-vessel tuning system
Energy Technology Data Exchange (ETDEWEB)
Vulliez, K. [Association Euratom-CEA, CEA/DSM/DRFC, CEA Cadarache, F-13108 St Paul Lez Durance (France)], E-mail: karl.vulliez@cea.fr; Bosia, G. [Dipartimento di Fisica Generale, Universita di Torino (Italy); Agarici, G.; Beaumont, B.; Argouarch, A.; Mollard, P. [Association Euratom-CEA, CEA/DSM/DRFC, CEA Cadarache, F-13108 St Paul Lez Durance (France); Testoni, P. [Electrical and Electronics Engineering Department, University of Cagliari (Italy); Maggiora, R.; Milanesio, D. [Dipartimento di Elettronica Politecnico di Torino (Italy)
2007-10-15
Since the release of the ITER ICRH system reference design report [ITER Final Design Report: DDD 5.1 - Ion Cyclotron and Current Drive System, July 2001], further design studies have been conducted. While the basis of the reference design [Final Report on EFDA contract 04/1129, ITER ICRF antenna and Matching system design (Internal matching), April 2005] has been kept unchanged, several significant modifications have been proposed to improve efficiency and reliability. The main changes are an increase of the poloidal order of the array and substantial modifications of the matching system concept. Technical aspects insufficiently covered in previous studies have now also been worked out in detail, such as the integration in a mid-plane port satisfying the constraints of the ITER environment.
Fitting method of pseudo-polynomial for solving nonlinear parametric adjustment
Institute of Scientific and Technical Information of China (English)
陶华学; 宫秀军; 郭金运
2001-01-01
The optimal condition of the least-squares adjustment and its geometrical characteristics are proposed. The relation between the transformed surface and least squares is then discussed. On this basis, a non-iterative method, called the fitting method of pseudo-polynomial, is derived in detail. The final least-squares solution can be determined with sufficient accuracy in a single step, rather than by moving the initial point iteratively. The accuracy of the solution relies wholly on the order of the Taylor series. An example verifies the correctness and validity of the method.
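For contrast with the single-step method described above, a conventional iterative adjustment linearizes the model around a current estimate and repeats. The toy Gauss-Newton loop below (an illustrative one-parameter exponential model, not the paper's pseudo-polynomial construction) shows the iterative baseline that the non-iterative fitting method is designed to avoid:

```python
import numpy as np

# Toy nonlinear adjustment: fit y = exp(a*x) to noise-free data, true a = 0.5.
# A conventional solver linearizes around the current estimate and iterates
# (Gauss-Newton); the pseudo-polynomial fitting of the record instead reaches
# the least-squares solution in a single step.
x = np.linspace(0.0, 2.0, 21)
y = np.exp(0.5 * x)

a = 0.0                                 # poor initial value
for _ in range(15):
    r = y - np.exp(a * x)               # residuals at the current estimate
    J = x * np.exp(a * x)               # Jacobian: d/da exp(a*x)
    a += (J @ r) / (J @ J)              # one-parameter normal-equation step

# the iteration converges to the true parameter a = 0.5
```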
International Nuclear Information System (INIS)
Choo, Ji Yung; Goo, Jin Mo; Park, Chang Min; Park, Sang Joon; Lee, Chang Hyun; Shim, Mi-Suk
2014-01-01
To evaluate filtered back projection (FBP) and two iterative reconstruction (IR) algorithms and their effects on the quantitative analysis of lung parenchyma and airway measurements on computed tomography (CT) images. Low-dose chest CT scans obtained in 281 adult patients were reconstructed using three algorithms: FBP, adaptive statistical IR (ASIR) and model-based IR (MBIR). Measurements of each dataset were compared: total lung volume, emphysema index (EI), airway measurements of the lumen and wall area, and average wall thickness. The accuracy of the airway measurements of each algorithm was also evaluated using an airway phantom. The EI using a threshold of -950 HU was significantly different among the three algorithms, decreasing in the order of FBP (2.30 %), ASIR (1.49 %) and MBIR (1.20 %) (P < 0.01). Wall thickness was also significantly different among the three algorithms, with FBP (2.09 mm) demonstrating thicker walls than ASIR (2.00 mm) and MBIR (1.88 mm) (P < 0.01). Airway phantom analysis revealed that MBIR gave the most accurate airway measurements. The three algorithms presented different EIs and wall thicknesses, decreasing in the order of FBP, ASIR and MBIR. Thus, care should be taken in selecting the appropriate IR algorithm for quantitative analysis of the lung. (orig.)
Energy Technology Data Exchange (ETDEWEB)
Choo, Ji Yung [Seoul National University Medical Research Center, Department of Radiology, Seoul National University College of Medicine, and Institute of Radiation Medicine, Seoul (Korea, Republic of); Korea University Ansan Hospital, Ansan-si, Department of Radiology, Gyeonggi-do (Korea, Republic of); Goo, Jin Mo; Park, Chang Min; Park, Sang Joon [Seoul National University Medical Research Center, Department of Radiology, Seoul National University College of Medicine, and Institute of Radiation Medicine, Seoul (Korea, Republic of); Seoul National University, Cancer Research Institute, Seoul (Korea, Republic of); Lee, Chang Hyun; Shim, Mi-Suk [Seoul National University Medical Research Center, Department of Radiology, Seoul National University College of Medicine, and Institute of Radiation Medicine, Seoul (Korea, Republic of)
2014-04-15
To evaluate filtered back projection (FBP) and two iterative reconstruction (IR) algorithms and their effects on the quantitative analysis of lung parenchyma and airway measurements on computed tomography (CT) images. Low-dose chest CT scans obtained in 281 adult patients were reconstructed using three algorithms: FBP, adaptive statistical IR (ASIR) and model-based IR (MBIR). Measurements of each dataset were compared: total lung volume, emphysema index (EI), airway measurements of the lumen and wall area, and average wall thickness. The accuracy of the airway measurements of each algorithm was also evaluated using an airway phantom. The EI using a threshold of -950 HU was significantly different among the three algorithms, decreasing in the order of FBP (2.30 %), ASIR (1.49 %) and MBIR (1.20 %) (P < 0.01). Wall thickness was also significantly different among the three algorithms, with FBP (2.09 mm) demonstrating thicker walls than ASIR (2.00 mm) and MBIR (1.88 mm) (P < 0.01). Airway phantom analysis revealed that MBIR gave the most accurate airway measurements. The three algorithms presented different EIs and wall thicknesses, decreasing in the order of FBP, ASIR and MBIR. Thus, care should be taken in selecting the appropriate IR algorithm for quantitative analysis of the lung. (orig.)
Simulation-based design process for the verification of ITER remote handling systems
International Nuclear Information System (INIS)
Sibois, Romain; Määttä, Timo; Siuko, Mikko; Mattila, Jouni
2014-01-01
Highlights: •Verification and validation process for ITER remote handling systems. •Simulation-based design process for early verification of ITER RH systems. •Design process centralized around a simulation lifecycle management system. •Verification and validation roadmap for the digital modelling phase. -- Abstract: The work behind this paper takes place in EFDA's European Goal Oriented Training programme on Remote Handling (RH), “GOT-RH”. The programme aims to train engineers for activities supporting the ITER project and the long-term fusion programme. One of its projects focuses on the verification and validation (V and V) of ITER RH system requirements using digital mock-ups (DMU). The purpose of this project is to study and develop an efficient approach to using DMUs in the V and V process of ITER RH system design within a System Engineering (SE) framework. For complex engineering systems such as the ITER facilities, manufacturing a full-scale prototype entails a substantial rise in cost. In the V and V process for ITER RH equipment, physical tests are required to ensure that the system complies with the required operation. It is therefore essential to verify the developed system virtually before starting the prototype manufacturing phase. This paper gives an overview of current trends in using digital mock-ups within product design processes. It suggests a simulation-based design process centralized around a simulation lifecycle management system. The purpose of this paper is to describe possible improvements in the formalization of the ITER RH design and V and V processes, in order to increase their cost efficiency and reliability.
Cosmographic analysis with Chebyshev polynomials
Capozziello, Salvatore; D'Agostino, Rocco; Luongo, Orlando
2018-05-01
The limits of standard cosmography are here revised, addressing the problem of error propagation during statistical analyses. To do so, we propose the use of Chebyshev polynomials to parametrize cosmic distances. In particular, we demonstrate that building up rational Chebyshev polynomials significantly reduces error propagation with respect to standard Taylor series. This technique provides unbiased estimations of the cosmographic parameters and performs significantly better than previous numerical approximations. To demonstrate this, we compare rational Chebyshev polynomials with Padé series. In addition, we theoretically evaluate the convergence radius of the (1,1) Chebyshev rational polynomial and compare it with the convergence radii of Taylor and Padé approximations. We thus focus on regions in which the convergence of Chebyshev rational functions is better than that of standard approaches. With this recipe, as high-redshift data are employed, rational Chebyshev polynomials remain highly stable and enable one to derive highly accurate analytical approximations of Hubble's rate in terms of the cosmographic series. Finally, we check our theoretical predictions by setting bounds on cosmographic parameters through Monte Carlo integration techniques based on the Metropolis-Hastings algorithm. We apply our technique to high-redshift cosmic data, using the Joint Light-curve Analysis supernovae sample and the most recent versions of Hubble parameter and baryon acoustic oscillation measurements. We find that cosmography with Taylor series fails to be predictive with the aforementioned data sets, whereas it turns out to be much more stable using the Chebyshev approach.
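The stability gain of near-minimax Chebyshev fits over Taylor expansions, which underlies the cosmographic argument above, can be sketched on a toy function (1/(1+x) standing in for a distance-redshift relation; the function and degree are illustrative assumptions, not the paper's H(z)):

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Compare a degree-5 Chebyshev fit with the degree-5 Taylor polynomial of
# f(x) = 1/(1 + x) on [0, 1]: the Taylor series around x = 0 degrades badly
# toward the far end of the interval, the Chebyshev fit does not.
f = lambda x: 1.0 / (1.0 + x)
x = np.linspace(0.0, 1.0, 1001)

cheb = C.Chebyshev.fit(x, f(x), deg=5, domain=[0.0, 1.0])
taylor = sum((-1) ** k * x ** k for k in range(6))   # 1 - x + x^2 - ...

err_cheb = np.max(np.abs(cheb(x) - f(x)))
err_taylor = np.max(np.abs(taylor - f(x)))
# the near-minimax Chebyshev fit is orders of magnitude more accurate
# at the edge of the interval, mirroring the high-redshift behaviour
```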
Parallel multigrid smoothing: polynomial versus Gauss-Seidel
International Nuclear Information System (INIS)
Adams, Mark; Brezina, Marian; Hu, Jonathan; Tuminaro, Ray
2003-01-01
Gauss-Seidel is often the smoother of choice within multigrid applications. In the context of unstructured meshes, however, maintaining good parallel efficiency is difficult with multiplicative iterative methods such as Gauss-Seidel. This leads us to consider alternative smoothers. We discuss the computational advantages of polynomial smoothers within parallel multigrid algorithms for positive definite symmetric systems. Two particular polynomials are considered: Chebyshev and a multilevel specific polynomial. The advantages of polynomial smoothing over traditional smoothers such as Gauss-Seidel are illustrated on several applications: Poisson's equation, thin-body elasticity, and eddy current approximations to Maxwell's equations. While parallelizing the Gauss-Seidel method typically involves a compromise between a scalable convergence rate and maintaining high flop rates, polynomial smoothers achieve parallel scalable multigrid convergence rates without sacrificing flop rates. We show that, although parallel computers are the main motivation, polynomial smoothers are often surprisingly competitive with Gauss-Seidel smoothers on serial machines.
Parallel multigrid smoothing: polynomial versus Gauss-Seidel
Adams, Mark; Brezina, Marian; Hu, Jonathan; Tuminaro, Ray
2003-07-01
Gauss-Seidel is often the smoother of choice within multigrid applications. In the context of unstructured meshes, however, maintaining good parallel efficiency is difficult with multiplicative iterative methods such as Gauss-Seidel. This leads us to consider alternative smoothers. We discuss the computational advantages of polynomial smoothers within parallel multigrid algorithms for positive definite symmetric systems. Two particular polynomials are considered: Chebyshev and a multilevel specific polynomial. The advantages of polynomial smoothing over traditional smoothers such as Gauss-Seidel are illustrated on several applications: Poisson's equation, thin-body elasticity, and eddy current approximations to Maxwell's equations. While parallelizing the Gauss-Seidel method typically involves a compromise between a scalable convergence rate and maintaining high flop rates, polynomial smoothers achieve parallel scalable multigrid convergence rates without sacrificing flop rates. We show that, although parallel computers are the main motivation, polynomial smoothers are often surprisingly competitive with Gauss-Seidel smoothers on serial machines.
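The smoothing property that makes Chebyshev polynomials attractive here can be sketched on the 1D Poisson operator, whose eigenmodes are known in closed form. The parameters below (degree 5, target interval [λmax/4, λmax]) are common illustrative choices, not the paper's multilevel-specific polynomial:

```python
import numpy as np

# Chebyshev polynomial smoother on the 1D Poisson matrix: damp the
# high-frequency part of the spectrum, which is the smoother's job in
# multigrid (the coarse grid handles the low frequencies).
n = 63
h = 1.0 / (n + 1)
A = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2

k = np.arange(1, n + 1)
lam = (2.0 / h**2) * (1.0 - np.cos(k * np.pi * h))         # known eigenvalues
V = np.sqrt(2.0 * h) * np.sin(np.pi * h * np.outer(k, k))  # orthonormal sine modes

a, b = lam[-1] / 4.0, lam[-1]          # target interval: upper 3/4 of spectrum
theta, delta = 0.5 * (b + a), 0.5 * (b - a)

e = V @ np.ones(n)                     # error with unit weight in every mode
r = -A @ e                             # residual of A e = 0
sigma = theta / delta
rho = 1.0 / sigma
d = r / theta
for _ in range(5):                     # degree-5 Chebyshev smoothing sweep
    e = e + d
    r = r - A @ d
    rho_next = 1.0 / (2.0 * sigma - rho)
    d = rho_next * rho * d + (2.0 * rho_next / delta) * r
    rho = rho_next

modes = V.T @ e                        # surviving weight per eigenmode
```

In one sweep, the modes with λ ≥ λmax/4 are damped by roughly 1/T₅(5/3) ≈ 1/120, while the low-frequency modes are merely kept bounded; no inner products or sequential sweeps are needed, which is the parallel advantage over Gauss-Seidel.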
Recognition of Arabic Sign Language Alphabet Using Polynomial Classifiers
Directory of Open Access Journals (Sweden)
M. Al-Rousan
2005-08-01
Full Text Available Building an accurate automatic sign language recognition system is of great importance in facilitating efficient communication with deaf people. In this paper, we propose the use of polynomial classifiers as a classification engine for the recognition of Arabic sign language (ArSL alphabet. Polynomial classifiers have several advantages over other classifiers in that they do not require iterative training, and that they are highly computationally scalable with the number of classes. Based on polynomial classifiers, we have built an ArSL system and measured its performance using real ArSL data collected from deaf people. We show that the proposed system provides superior recognition results when compared with previously published results using ANFIS-based classification on the same dataset and feature extraction methodology. The comparison is shown in terms of the number of misclassified test patterns. The reduction in the rate of misclassified patterns was very significant. In particular, we have achieved a 36% reduction of misclassifications on the training data and 57% on the test data.
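The key claim that polynomial classifiers train without iteration can be illustrated directly: expanding inputs into monomial features reduces training to one linear least-squares solve per class. The data below are synthetic stand-ins for ArSL feature vectors (all names and numbers are assumptions, not the paper's dataset):

```python
import numpy as np

rng = np.random.default_rng(0)

def poly_features(X, degree=2):
    """Monomial expansion of 2-D inputs up to the given degree (illustrative)."""
    x1, x2 = X[:, 0], X[:, 1]
    cols = [np.ones(len(X))]
    for d in range(1, degree + 1):
        for i in range(d + 1):
            cols.append(x1 ** (d - i) * x2 ** i)
    return np.column_stack(cols)

# three well-separated Gaussian "sign classes" in a toy 2-D feature space
centers = np.array([[0.0, 0.0], [3.0, 0.0], [0.0, 3.0]])
X = np.vstack([c + 0.5 * rng.standard_normal((50, 2)) for c in centers])
y = np.repeat([0, 1, 2], 50)

# non-iterative training: a single least-squares solve, one-vs-rest
Phi = poly_features(X)
T = np.eye(3)[y]                          # one-hot targets
W = np.linalg.lstsq(Phi, T, rcond=None)[0]

pred = np.argmax(poly_features(X) @ W, axis=1)
acc = np.mean(pred == y)
```

Adding classes only appends columns to the target matrix, which is the computational scalability in the number of classes that the abstract highlights.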
Symmetric functions and orthogonal polynomials
Macdonald, I G
1997-01-01
One of the most classical areas of algebra, the theory of symmetric functions and orthogonal polynomials has long been known to be connected to combinatorics, representation theory, and other branches of mathematics. Written by perhaps the most famous author on the topic, this volume explains some of the current developments regarding these connections. It is based on lectures presented by the author at Rutgers University. Specifically, he gives recent results on orthogonal polynomials associated with affine Hecke algebras, surveying the proofs of certain famous combinatorial conjectures.
Gao, Lili; Zhou, Zai-Fa; Huang, Qing-An
2017-11-08
A microstructure beam is one of the fundamental elements in MEMS devices like cantilever sensors, RF/optical switches, varactors, resonators, etc. It is still difficult to precisely predict the performance of MEMS beams with currently available simulators due to the inevitable process deviations. Feasible numerical methods are required and can be used to improve the yield and profits of MEMS devices. In this work, process deviations are considered to be stochastic variables, and a newly-developed numerical method, i.e., generalized polynomial chaos (GPC), is applied to the simulation of the MEMS beam. A doubly-clamped polybeam has been utilized to verify the accuracy of GPC against our Monte Carlo (MC) approaches. Performance predictions have been made for the residual stress by obtaining its distributions in GaAs Monolithic Microwave Integrated Circuit (MMIC)-based MEMS beams. The results show that the errors of the GPC approximations are within 1% of the MC simulations. Appropriate choices of fourth-order GPC expansions with orthogonal terms have also succeeded in reducing the MC simulation labor. The mean value of the residual stress concluded from experimental tests differs by about 1.1% from that of the fourth-order GPC method. The probability that the fourth-order GPC approximation attains the mean test value of the residual stress is around 54.3%, and the corresponding yield exceeds 90% within twofold standard deviations around the mean.
Directory of Open Access Journals (Sweden)
Lili Gao
2017-11-01
Full Text Available A microstructure beam is one of the fundamental elements in MEMS devices like cantilever sensors, RF/optical switches, varactors, resonators, etc. It is still difficult to precisely predict the performance of MEMS beams with currently available simulators due to the inevitable process deviations. Feasible numerical methods are required and can be used to improve the yield and profits of MEMS devices. In this work, process deviations are considered to be stochastic variables, and a newly-developed numerical method, i.e., generalized polynomial chaos (GPC), is applied to the simulation of the MEMS beam. A doubly-clamped polybeam has been utilized to verify the accuracy of GPC against our Monte Carlo (MC) approaches. Performance predictions have been made for the residual stress by obtaining its distributions in GaAs Monolithic Microwave Integrated Circuit (MMIC)-based MEMS beams. The results show that the errors of the GPC approximations are within 1% of the MC simulations. Appropriate choices of fourth-order GPC expansions with orthogonal terms have also succeeded in reducing the MC simulation labor. The mean value of the residual stress concluded from experimental tests differs by about 1.1% from that of the fourth-order GPC method. The probability that the fourth-order GPC approximation attains the mean test value of the residual stress is around 54.3%, and the corresponding yield exceeds 90% within twofold standard deviations around the mean.
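A minimal non-intrusive version of the GPC-versus-MC comparison reported above can be sketched as follows; the stress model, its parameters, and the sample counts are invented for illustration and are not the paper's MEMS beam simulation:

```python
import math
import numpy as np
from numpy.polynomial import hermite_e as He

# Non-intrusive fourth-order generalized polynomial chaos (GPC) for a toy
# residual-stress response s(xi) = 100*exp(0.1*xi) with xi ~ N(0,1);
# moments from the GPC coefficients are checked against plain Monte Carlo,
# mirroring the accuracy comparison in the abstract.
model = lambda xi: 100.0 * np.exp(0.1 * xi)
P = 4

nodes, weights = He.hermegauss(20)            # Gauss-Hermite_e quadrature
weights = weights / np.sqrt(2.0 * np.pi)      # normalize to the N(0,1) density

# spectral projection: c_k = E[s(xi) * He_k(xi)] / k!
c = np.array([np.sum(weights * model(nodes) * He.hermeval(nodes, np.eye(P + 1)[k]))
              / math.factorial(k) for k in range(P + 1)])
mean_gpc = c[0]
var_gpc = sum(c[k] ** 2 * math.factorial(k) for k in range(1, P + 1))

rng = np.random.default_rng(1)                # brute-force Monte Carlo reference
samples = model(rng.standard_normal(200_000))
mean_mc, var_mc = samples.mean(), samples.var()
```

The five deterministic model evaluations per quadrature node replace hundreds of thousands of MC samples, which is the "reduced simulation labor" claimed above.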
Directory of Open Access Journals (Sweden)
Jean Pierre Astruc
2007-01-01
Full Text Available This paper investigates the mathematical framework of multiresolution analysis based on irregularly spaced knot sequences. Our presentation is based on the construction of nested nonuniform spline multiresolution spaces. From these spaces, we present the construction of orthonormal scaling and wavelet basis functions on bounded intervals. For any arbitrary degree of the spline function, we provide an explicit generalization allowing the construction of the scaling and wavelet bases on nontraditional sequences. We show that the orthogonal decomposition is implemented using filter banks whose coefficients depend on the location of the knots on the sequence. Examples of orthonormal spline scaling and wavelet bases are provided. This approach can be used to interpolate irregularly sampled signals in an efficient way, by keeping the multiresolution approach.
Aslam, Muhammad; Hu, Xiaopeng; Wang, Fan
2017-12-13
Smart reconfiguration of a dynamic networking environment is offered by the central control of Software-Defined Networking (SDN). Centralized SDN-based management architectures are capable of retrieving global topology intelligence and decoupling the forwarding plane from the control plane. Routing protocols developed for conventional Wireless Sensor Networks (WSNs) utilize limited iterative reconfiguration methods to optimize environmental reporting. However, the challenging networking scenarios of WSNs involve a performance overhead due to constant periodic iterative reconfigurations. In this paper, we propose the SDN-based Application-aware Centralized adaptive Flow Iterative Reconfiguring (SACFIR) routing protocol with the centralized SDN iterative solver controller to maintain the load-balancing between flow reconfigurations and flow allocation cost. The proposed SACFIR's routing protocol offers a unique iterative path-selection algorithm, which initially computes suitable clustering based on residual resources at the control layer and then implements application-aware threshold-based multi-hop report transmissions on the forwarding plane. The operation of the SACFIR algorithm is centrally supervised by the SDN controller residing at the Base Station (BS). This paper extends SACFIR to SDN-based Application-aware Main-value Centralized adaptive Flow Iterative Reconfiguring (SAMCFIR) to establish both proactive and reactive reporting. The SAMCFIR transmission phase enables sensor nodes to trigger direct transmissions for main-value reports, while in the case of SACFIR, all reports follow computed routes. Our SDN-enabled proposed models adjust the reconfiguration period according to the traffic burden on sensor nodes, which results in heterogeneity awareness, load-balancing and application-specific reconfigurations of WSNs. Extensive experimental simulation-based results show that SACFIR and SAMCFIR yield the maximum scalability, network lifetime and stability
Directory of Open Access Journals (Sweden)
Muhammad Aslam
2017-12-01
Full Text Available Smart reconfiguration of a dynamic networking environment is offered by the central control of Software-Defined Networking (SDN). Centralized SDN-based management architectures are capable of retrieving global topology intelligence and decoupling the forwarding plane from the control plane. Routing protocols developed for conventional Wireless Sensor Networks (WSNs) utilize limited iterative reconfiguration methods to optimize environmental reporting. However, the challenging networking scenarios of WSNs involve a performance overhead due to constant periodic iterative reconfigurations. In this paper, we propose the SDN-based Application-aware Centralized adaptive Flow Iterative Reconfiguring (SACFIR) routing protocol with the centralized SDN iterative solver controller to maintain the load-balancing between flow reconfigurations and flow allocation cost. The proposed SACFIR's routing protocol offers a unique iterative path-selection algorithm, which initially computes suitable clustering based on residual resources at the control layer and then implements application-aware threshold-based multi-hop report transmissions on the forwarding plane. The operation of the SACFIR algorithm is centrally supervised by the SDN controller residing at the Base Station (BS). This paper extends SACFIR to SDN-based Application-aware Main-value Centralized adaptive Flow Iterative Reconfiguring (SAMCFIR) to establish both proactive and reactive reporting. The SAMCFIR transmission phase enables sensor nodes to trigger direct transmissions for main-value reports, while in the case of SACFIR, all reports follow computed routes. Our SDN-enabled proposed models adjust the reconfiguration period according to the traffic burden on sensor nodes, which results in heterogeneity awareness, load-balancing and application-specific reconfigurations of WSNs. Extensive experimental simulation-based results show that SACFIR and SAMCFIR yield the maximum scalability, network lifetime
Large degree asymptotics of generalized Bessel polynomials
J.L. López; N.M. Temme (Nico)
2011-01-01
Asymptotic expansions are given for large values of $n$ of the generalized Bessel polynomials $Y_n^\mu(z)$. The analysis is based on integrals that follow from the generating functions of the polynomials. A new simple expansion is given that is valid outside a compact neighborhood of the
Technique for image interpolation using polynomial transforms
Escalante Ramírez, B.; Martens, J.B.; Haskell, G.G.; Hang, H.M.
1993-01-01
We present a new technique for image interpolation based on polynomial transforms. This is an image representation model that analyzes an image by locally expanding it into a weighted sum of orthogonal polynomials. In the discrete case, the image segment within every window of analysis is
Application of polynomial preconditioners to conservation laws
Geurts, Bernardus J.; van Buuren, R.; Lu, H.
2000-01-01
Polynomial preconditioners which are suitable in implicit time-stepping methods for conservation laws are reviewed and analyzed. The preconditioners considered are either based on a truncation of a Neumann series or on Chebyshev polynomials for the inverse of the system-matrix. The latter class of
Ichikawa, Yasutaka; Kitagawa, Kakuya; Nagasawa, Naoki; Murashima, Shuichi; Sakuma, Hajime
2013-08-09
The recently developed model-based iterative reconstruction (MBIR) enables significant reduction of image noise and artifacts, compared with adaptive statistical iterative reconstruction (ASIR) and filtered back projection (FBP). The purpose of this study was to evaluate lesion detectability of low-dose chest computed tomography (CT) with MBIR in comparison with ASIR and FBP. Chest CT was acquired with 64-slice CT (Discovery CT750HD) under standard-dose (5.7 ± 2.3 mSv) and low-dose (1.6 ± 0.8 mSv) conditions in 55 patients (aged 72 ± 7 years) who were suspected of lung disease on chest radiograms. Low-dose CT images were reconstructed with MBIR, ASIR 50% and FBP, and standard-dose CT images were reconstructed with FBP, using a reconstructed slice thickness of 0.625 mm. Two observers evaluated the image quality of abnormal lung and mediastinal structures on a 5-point scale (score 5 = excellent, score 1 = non-diagnostic). The objective image noise was also measured as the standard deviation of CT intensity in the descending aorta. The image quality score of enlarged mediastinal lymph nodes on low-dose MBIR CT (4.7 ± 0.5) was significantly improved in comparison with low-dose FBP and ASIR CT (3.0 ± 0.5, p = 0.004; 4.0 ± 0.5, p = 0.02, respectively), and was nearly identical to the score of the standard-dose FBP image (4.8 ± 0.4, p = 0.66). Concerning decreased lung attenuation (bulla, emphysema, or cyst), the image quality score on low-dose MBIR CT (4.9 ± 0.2) was slightly better compared to low-dose FBP and ASIR CT (4.5 ± 0.6, p = 0.01; 4.6 ± 0.5, p = 0.01, respectively). There were no significant differences in image quality scores of visualization of consolidation or mass, ground-glass attenuation, or reticular opacity among low- and standard-dose CT series. Image noise with low-dose MBIR CT (11.6 ± 1.0 Hounsfield units (HU)) was significantly lower than with low-dose ASIR (21.1 ± 2.6 HU, p standard-dose FBP CT (16.6 ± 2.3 HU, p 70%, MBIR can provide
International Nuclear Information System (INIS)
Shimomura, Y.; Aymar, R.; Chuyanov, V.; Huguet, M.; Parker, R.R.
2001-01-01
This report summarizes six years of technical work carried out by the ITER Joint Central Team and Home Teams under the terms of the Agreement on the ITER Engineering Design Activities. The major products are as follows: a complete and detailed engineering design with supporting assessments, industrially based cost estimates and schedule, a non-site-specific comprehensive safety and environmental assessment, and technology R and D to validate and qualify the design, including proof of technologies and industrial manufacture and testing of full-size or scalable models of key components. The ITER design is at an advanced stage of maturity and contains sufficient technical information for a construction decision. The operation of ITER will demonstrate the availability of a new energy source, fusion. (author)
International Nuclear Information System (INIS)
Shimomura, Y.; Aymar, R.; Chuyanov, V.; Huguet, M.; Parker, R.
1999-01-01
This report summarizes six years of technical work carried out by the ITER Joint Central Team and Home Teams under the terms of the Agreement on the ITER Engineering Design Activities. The major products are as follows: a complete and detailed engineering design with supporting assessments, industrially based cost estimates and schedule, a non-site-specific comprehensive safety and environmental assessment, and technology R and D to validate and qualify the design, including proof of technologies and industrial manufacture and testing of full-size or scalable models of key components. The ITER design is at an advanced stage of maturity and contains sufficient technical information for a construction decision. The operation of ITER will demonstrate the availability of a new energy source, fusion. (author)
Minimal residual method stronger than polynomial preconditioning
Energy Technology Data Exchange (ETDEWEB)
Faber, V.; Joubert, W.; Knill, E. [Los Alamos National Lab., NM (United States)] [and others
1994-12-31
Two popular methods for solving symmetric and nonsymmetric systems of equations are the minimal residual method, implemented by algorithms such as GMRES, and polynomial preconditioning methods. In this study results are given on the convergence rates of these methods for various classes of matrices. It is shown that for some matrices, such as normal matrices, the convergence rates for GMRES and for the optimal polynomial preconditioning are the same, and for other matrices such as the upper triangular Toeplitz matrices, it is at least assured that if one method converges then the other must converge. On the other hand, it is shown that matrices exist for which restarted GMRES always converges but any polynomial preconditioning of corresponding degree makes no progress toward the solution for some initial error. The implications of these results for these and other iterative methods are discussed.
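A truncated Neumann series, one of the standard polynomial-preconditioner classes discussed in work of this kind, can be demonstrated on a small symmetric system; the matrix, series degree and iteration counts below are illustrative assumptions:

```python
import numpy as np

# Neumann-series polynomial preconditioning for A = I - N with ||N|| < 1:
# A^{-1} = I + N + N^2 + ...; truncating after m terms gives a polynomial
# approximation of A^{-1} that can be applied with matrix-vector products only.
n = 100
N = np.diag(0.25 * np.ones(n - 1), 1) + np.diag(0.25 * np.ones(n - 1), -1)
A = np.eye(n) - N                      # symmetric, spectrum in (0.5, 1.5)
b = np.ones(n)

def neumann_apply(r, m=3):
    """z ~ A^{-1} r via the degree-m truncated series (I + N + ... + N^m) r."""
    z, term = r.copy(), r.copy()
    for _ in range(m):
        term = N @ term
        z = z + term
    return z

def richardson(preconditioned, iters=20):
    x = np.zeros(n)
    for _ in range(iters):
        r = b - A @ x
        x = x + (neumann_apply(r) if preconditioned else r)
    return np.linalg.norm(b - A @ x)

res_plain, res_poly = richardson(False), richardson(True)
# per sweep, the error is contracted by N without preconditioning
# but by N**4 with the degree-3 polynomial preconditioner
```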
Learning-based identification and iterative learning control of direct-drive robots
Bukkems, B.H.M.; Kostic, D.; Jager, de A.G.; Steinbuch, M.
2005-01-01
A combination of model-based and Iterative Learning Control is proposed as a method to achieve high-quality motion control of direct-drive robots in repetitive motion tasks. We include both model-based and learning components in the total control law, as their individual properties influence the
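The learning component referred to above can be sketched with the classic P-type iterative learning control update on a toy scalar plant; this is an illustrative assumption for exposition, not the paper's combined model-based-plus-learning law or its direct-drive robot dynamics:

```python
import numpy as np

# P-type iterative learning control for a repetitive finite-horizon task:
# each trial replays the task and feeds the previous trial's tracking
# error back into the feedforward input, with no plant model required.
T = 50
a_sys, b_sys = 0.9, 0.5                      # plant: x+ = a*x + b*u, y = x
ref = np.sin(np.linspace(0.0, np.pi, T))     # desired output over the task

def run_trial(u):
    x, y = 0.0, np.zeros(T)
    for t in range(T):
        x = a_sys * x + b_sys * u[t]
        y[t] = x
    return y

u = np.zeros(T)
gamma = 0.3    # learning gain, chosen small enough to avoid transient growth
for _ in range(100):                         # 100 repetitions of the task
    e = ref - run_trial(u)
    u = u + gamma * e                        # P-type ILC update

final_err = np.max(np.abs(ref - run_trial(u)))
# the tracking error shrinks from trial to trial across repetitions
```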
Understandings of the Concept of Iteration in Design-Based Research
DEFF Research Database (Denmark)
Gundersen, Peter Bukovica
2017-01-01
The paper is the first in a series of papers addressing design in design-based research. The series looks into the question of how this research approach is connected to design. What happens when educational researchers adopt designerly ways of working? This paper provides an overview of design-based research and from there discusses one key characteristic, namely iterations, which are fundamental to educational design research, in relation to how designers operate and why. The paper concludes that in general iteration is not a particularly well-described aspect in the reporting of DBR projects. Half ... and usually after long periods of testing design solutions in practice.
Lam, H K
2012-02-01
This paper investigates the stability of sampled-data output-feedback (SDOF) polynomial-fuzzy-model-based control systems. Representing the nonlinear plant using a polynomial fuzzy model, an SDOF fuzzy controller is proposed to perform the control process using the system output information. As only the system output is available for feedback compensation, it is more challenging for the controller design and system analysis compared to the full-state-feedback case. Furthermore, because of the sampling activity, the control signal is kept constant by the zero-order hold during the sampling period, which complicates the system dynamics and makes the stability analysis more difficult. In this paper, two cases of SDOF fuzzy controllers, which either share the same number of fuzzy rules or not, are considered. The system stability is investigated based on the Lyapunov stability theory using the sum-of-squares (SOS) approach. SOS-based stability conditions are obtained to guarantee the system stability and synthesize the SDOF fuzzy controller. Simulation examples are given to demonstrate the merits of the proposed SDOF fuzzy control approach.
Directory of Open Access Journals (Sweden)
Ayşe Betül Koç
2014-01-01
Full Text Available A pseudospectral method based on the Fibonacci operational matrix is proposed to solve generalized pantograph equations with linear functional arguments. By using this method, approximate solutions of the problems are easily obtained in the form of truncated Fibonacci series. Some illustrative examples are given to verify the efficiency and effectiveness of the proposed method. The numerical results are then compared with those of other methods.
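The Fibonacci polynomials behind the operational matrix satisfy the recurrence F_{n+1}(x) = x·F_n(x) + F_{n-1}(x). The sketch below builds the basis and uses plain least-squares collocation on the test equation y' = y, y(0) = 1; this is a simplification chosen for illustration (an ordinary ODE rather than a pantograph equation, and direct collocation rather than the paper's operational-matrix formulation):

```python
import numpy as np
from numpy.polynomial import polynomial as pol

# Fibonacci polynomials as coefficient arrays in increasing powers of x:
# F_1 = 1, F_2 = x, F_{n+1}(x) = x*F_n(x) + F_{n-1}(x); F_n(1) = Fibonacci numbers.
def fibonacci_polys(N):
    polys = [np.array([1.0]), np.array([0.0, 1.0])]
    while len(polys) < N:
        polys.append(pol.polyadd(pol.polymulx(polys[-1]), polys[-2]))
    return polys[:N]

N = 8
basis = fibonacci_polys(N)

xs = np.linspace(0.0, 1.0, 20)                        # collocation points
B = np.column_stack([pol.polyval(xs, p) for p in basis])
dB = np.column_stack([pol.polyval(xs, pol.polyder(p)) for p in basis])

# stack the collocated residual y' - y = 0 with the initial condition y(0) = 1
lhs = np.vstack([dB - B, [pol.polyval(0.0, p) for p in basis]])
rhs = np.append(np.zeros(len(xs)), 1.0)
coef = np.linalg.lstsq(lhs, rhs, rcond=None)[0]

approx = B @ coef                                     # truncated Fibonacci series
err = np.max(np.abs(approx - np.exp(xs)))             # compare with exact exp(x)
```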
Conceptual design and related R and D on ITER mechanical based primary pumping system
International Nuclear Information System (INIS)
Tanzawa, Sadamitsu; Hiroki, Seiji; Abe, Tetsuya; Shimizu, Katsusuke; Inoue, Masahiko; Watanabe, Mitsunori; Iguchi, Masashi; Sugimoto, Tomoko; Inohara, Takashi; Nakamura, Jun-ichi
2008-12-01
The primary vacuum pumping system of the International Thermonuclear Experimental Reactor (ITER) exhausts the helium (He) ash resulting from the DT burn together with excess DT fueling gas, and also performs a variety of functions such as pump-down, leak testing and wall conditioning. A mechanically based vacuum pumping system has the merits of continuous pumping, a much lower tritium inventory, a lower operational cost and easy maintenance compared with a cryopump system, although its demerits, an indispensable magnetic shield and insufficient performance for hydrogen (H2) pumping, are well recognized. To overcome the demerits, we newly fabricated and tested a helical grooved pump (HGP) unit suitable for H2 pumping at the ITER divertor pressure of 0.1-10 Pa. Through this R and D, we established extensive design and manufacturing databases for large HGP units for light-gas pumping. Based on these databases, we conceptually designed the ITER vacuum pumping system, mainly comprising the HGP, with an optimal pump unit layout and a magnetic shield. We also conceptually designed the reduced cost (RC)-ITER pumping system, in which a compound molecular pump combining turbine-bladed rotors and helical grooved ones was mainly used. The proposed ITER mechanically based primary pumping system has eventually become a back-up solution, as a cryopump-based system was formally selected for ITER construction. Mechanical pumps are increasingly used in many areas with well-sophisticated performance, so we believe that subsequent prototype fusion reactors will select a mechanically based pumping system, primarily owing to its high operational reliability and cost merit. (author)
Weierstrass polynomials for links
DEFF Research Database (Denmark)
Hansen, Vagn Lundsgaard
1997-01-01
There is a natural way of identifying links in3-space with polynomial covering spaces over thecircle. Thereby any link in 3-space can be definedby a Weierstrass polynomial over the circle. Theequivalence relation for covering spaces over thecircle is, however, completely different from...
Directory of Open Access Journals (Sweden)
Jatin Chatrath
2018-03-01
Full Text Available Reconfigurable and multi-standard RF front-ends for wireless communication and sensor networks have gained importance as building blocks for the Internet of Things. Simpler and highly efficient transmitter architectures, which can transmit better-quality signals with reduced impairments, are an important step in this direction. In this regard, the mixer-less transmitter architecture, namely the three-way amplitude modulator-based transmitter, avoids the use of imperfect mixers and frequency up-converters, and their resulting distortions, leading to improved signal quality. In this work, an augmented memory polynomial-based model for the behavioral modeling of such a mixer-less transmitter architecture is proposed. Extensive simulations and measurements have been carried out in order to validate the accuracy of the proposed modeling strategy. The performance of the proposed model is evaluated using the normalized mean square error (NMSE) for long-term evolution (LTE) signals. The NMSE for an LTE signal of 1.4 MHz bandwidth with 100,000 samples is recorded as −36.41 dB for digital combining and −36.9 dB for analog combining. Similarly, for a 5 MHz signal the proposed model achieves −31.93 dB and −32.08 dB NMSE using digital and analog combining, respectively. For further validation of the proposed model, the amplitude-to-amplitude (AM-AM) and amplitude-to-phase (AM-PM) characteristics and the spectral response of the modeled and measured data are plotted, reasonably meeting the desired modeling criteria.
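A minimal memory polynomial behavioral model of the kind augmented in this paper can be sketched as follows. The input data, nonlinearity order and memory depth are synthetic placeholders; the NMSE is computed in dB as in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=2000) + 1j * rng.normal(size=2000)   # synthetic baseband input
# Synthetic "measured" output: mildly nonlinear response with memory (placeholder)
y = x + 0.05 * x * np.abs(x) ** 2 + 0.02 * np.roll(x, 1)

K, M = 3, 2  # nonlinearity order and memory depth (illustrative choices)

def mp_basis(x, K, M):
    """Memory polynomial regressors: x[n-m] * |x[n-m]|^(k-1)."""
    cols = []
    for m in range(M + 1):
        xm = np.roll(x, m)
        for k in range(1, K + 1):
            cols.append(xm * np.abs(xm) ** (k - 1))
    return np.column_stack(cols)

# Least-squares fit of the model coefficients, then NMSE of the fit in dB
Phi = mp_basis(x, K, M)
coeffs, *_ = np.linalg.lstsq(Phi, y, rcond=None)
y_hat = Phi @ coeffs

nmse_db = 10 * np.log10(np.sum(np.abs(y - y_hat) ** 2) / np.sum(np.abs(y) ** 2))
print(f"NMSE: {nmse_db:.2f} dB")
```

Because the synthetic output here lies exactly in the span of the regressors, the NMSE is near machine precision; on measured transmitter data the achievable NMSE reflects model mismatch, as in the figures reported by the paper.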
Nonnegativity of uncertain polynomials
Directory of Open Access Journals (Sweden)
Šiljak, Dragoslav D.
1998-01-01
Full Text Available The purpose of this paper is to derive tests for robust nonnegativity of scalar and matrix polynomials, which are algebraic, recursive, and can be completed in a finite number of steps. Polytopic families of polynomials are considered with various characterizations of parameter uncertainty including affine, multilinear, and polynomic structures. The zero exclusion condition for polynomial positivity is also proposed for general parameter dependencies. By reformulating the robust stability problem of complex polynomials as positivity of real polynomials, we obtain new sufficient conditions for robust stability involving multilinear structures, which can be tested using only real arithmetic. The obtained results are applied to robust matrix factorization, strict positive realness, and absolute stability of multivariable systems involving parameter-dependent transfer function matrices.
Iterative channel decoding of FEC-based multiple-description codes.
Chang, Seok-Ho; Cosman, Pamela C; Milstein, Laurence B
2012-03-01
Multiple description coding has been receiving attention as a robust transmission framework for multimedia services. This paper studies the iterative decoding of FEC-based multiple description codes. The proposed decoding algorithms take advantage of the error detection capability of Reed-Solomon (RS) erasure codes. The information of correctly decoded RS codewords is exploited to enhance the error correction capability of the Viterbi algorithm at the next iteration of decoding. In the proposed algorithm, an intradescription interleaver is synergistically combined with the iterative decoder. The interleaver does not affect the performance of noniterative decoding but greatly enhances the performance when the system is iteratively decoded. We also address the optimal allocation of RS parity symbols for unequal error protection. For the optimal allocation in iterative decoding, we derive mathematical equations from which the probability distributions of description erasures can be generated in a simple way. The performance of the algorithm is evaluated over an orthogonal frequency-division multiplexing system. The results show that the performance of the multiple description codes is significantly enhanced.
A novel EMD selecting thresholding method based on multiple iteration for denoising LIDAR signal
Li, Meng; Jiang, Li-hui; Xiong, Xing-long
2015-06-01
The empirical mode decomposition (EMD) approach is believed to be potentially useful for processing nonlinear and non-stationary LIDAR signals. To shed further light on its performance, we propose an EMD selecting thresholding method based on multiple iterations, which is essentially a development of EMD interval thresholding (EMD-IT): it randomly alters the samples of the noisy parts of all corrupted intrinsic mode functions to improve the effect of each iteration. Simulations on both synthetic signals and real-world LIDAR signals support this method.
Status of the Negative Ion Based Heating and Diagnostic Neutral Beams for ITER
Schunke, B.; Bora, D.; Hemsworth, R.; Tanga, A.
2009-03-01
The current baseline of ITER foresees 2 Heating Neutral Beam (HNB's) systems based on negative ion technology, each accelerating to 1 MeV 40 A of D- and capable of delivering 16.5 MW of D0 to the ITER plasma, with a 3rd HNB injector foreseen as an upgrade option [1]. In addition a dedicated Diagnostic Neutral Beam (DNB) accelerating 60 A of H- to 100 keV will inject ≈15 A equivalent of H0 for charge exchange recombination spectroscopy and other diagnostics. Recently the RF driven negative ion source developed by IPP Garching has replaced the filamented ion source as the reference ITER design. The RF source developed at IPP, which is approximately a quarter scale of the source needed for ITER, is expected to have reduced caesium consumption compared to the filamented arc driven ion source. The RF driven source has demonstrated adequate accelerated D- and H- current densities as well as long-pulse operation [2, 3]. It is foreseen that the HNB's and the DNB will use the same negative ion source. Experiments with a half ITER-size ion source are on-going at IPP and the operation of a full-scale ion source will be demonstrated, at full power and pulse length, in the dedicated Ion Source Test Bed (ISTF), which will be part of the Neutral Beam Test Facility (NBTF), in Padua, Italy. This facility will carry out the necessary R&D for the HNB's for ITER and demonstrate operation of the full-scale HNB beamline. An overview of the current status of the neutral beam (NB) systems and the chosen configuration will be given and the ongoing integration effort into the ITER plant will be highlighted. It will be demonstrated how installation and maintenance logistics have influenced the design, notably the top access scheme facilitating access for maintenance and installation. The impact of the ITER Design Review and recent design change requests (DCRs) will be briefly discussed, including start-up and commissioning issues. The low current hydrogen phase now envisaged for start
Status of the Negative Ion Based Heating and Diagnostic Neutral Beams for ITER
International Nuclear Information System (INIS)
Schunke, B.; Bora, D.; Hemsworth, R.; Tanga, A.
2009-01-01
The current baseline of ITER foresees 2 Heating Neutral Beam (HNB's) systems based on negative ion technology, each accelerating to 1 MeV 40 A of D− and capable of delivering 16.5 MW of D0 to the ITER plasma, with a 3rd HNB injector foreseen as an upgrade option. In addition a dedicated Diagnostic Neutral Beam (DNB) accelerating 60 A of H− to 100 keV will inject ≈15 A equivalent of H0 for charge exchange recombination spectroscopy and other diagnostics. Recently the RF driven negative ion source developed by IPP Garching has replaced the filamented ion source as the reference ITER design. The RF source developed at IPP, which is approximately a quarter scale of the source needed for ITER, is expected to have reduced caesium consumption compared to the filamented arc driven ion source. The RF driven source has demonstrated adequate accelerated D− and H− current densities as well as long-pulse operation. It is foreseen that the HNB's and the DNB will use the same negative ion source. Experiments with a half ITER-size ion source are on-going at IPP and the operation of a full-scale ion source will be demonstrated, at full power and pulse length, in the dedicated Ion Source Test Bed (ISTF), which will be part of the Neutral Beam Test Facility (NBTF), in Padua, Italy. This facility will carry out the necessary R and D for the HNB's for ITER and demonstrate operation of the full-scale HNB beamline. An overview of the current status of the neutral beam (NB) systems and the chosen configuration will be given and the ongoing integration effort into the ITER plant will be highlighted. It will be demonstrated how installation and maintenance logistics have influenced the design, notably the top access scheme facilitating access for maintenance and installation. The impact of the ITER Design Review and recent design change requests (DCRs) will be briefly discussed, including start-up and commissioning issues. The low current hydrogen phase now envisaged for start
Arabic text classification using Polynomial Networks
Directory of Open Access Journals (Sweden)
Mayy M. Al-Tahrawi
2015-10-01
Full Text Available In this paper, an Arabic statistical learning-based text classification system has been developed using Polynomial Neural Networks. Polynomial Networks have recently been applied to English text classification, but they have never been used for Arabic text classification. In this research, we investigate the performance of Polynomial Networks in classifying Arabic texts. Experiments are conducted on a widely used Arabic dataset in text classification: the Al-Jazeera News dataset. We chose this dataset to enable direct comparisons of the performance of the Polynomial Networks classifier with other well-known classifiers on this dataset in the literature of Arabic text classification. The experimental results show that the Polynomial Networks classifier is competitive with the state-of-the-art algorithms in the field of Arabic text classification.
Cryptanalysis and improvement on a block cryptosystem based on iteration a chaotic map
International Nuclear Information System (INIS)
Wang Yong; Liao Xiaofeng; Xiang Tao; Wong, Kwok-Wo; Yang Degang
2007-01-01
Recently, a novel block encryption system has been proposed as an improved version of the chaotic cryptographic method based on iterating a chaotic map. In this Letter, a flaw of this cryptosystem is pointed out and a chosen plaintext attack is presented. Furthermore, a remedial improvement is suggested, which avoids the flaw while keeping all the merits of the original cryptosystem
Multi-objective mixture-based iterated density estimation evolutionary algorithms
Thierens, D.; Bosman, P.A.N.
2001-01-01
We propose an algorithm for multi-objective optimization using a mixture-based iterated density estimation evolutionary algorithm (MIDEA). The MIDEA algorithm is a probabilistic model-building evolutionary algorithm that constructs at each generation a mixture of factorized probability distributions.
On the estimation of the degree of regression polynomial
International Nuclear Information System (INIS)
Toeroek, Cs.
1997-01-01
The mathematical functions most commonly used to model curvature in plots are polynomials. Generally, the higher the degree of the polynomial, the more complex is the trend that its graph can represent. We propose a new statistical-graphical approach based on the discrete projective transformation (DPT) to estimating the degree of polynomial that adequately describes the trend in the plot
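The paper's DPT-based approach is not reproduced here; a common baseline for the same task — choosing a polynomial degree that adequately describes the trend in a plot — is holdout validation, sketched below on synthetic data with a known cubic trend.

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(-1, 1, 200)
y = 1.0 - 2.0 * x + 0.5 * x ** 3 + rng.normal(scale=0.05, size=x.size)  # true degree 3

# Split into fit/validation halves (alternating points)
x_fit, y_fit = x[::2], y[::2]
x_val, y_val = x[1::2], y[1::2]

def val_error(deg):
    # Fit on one half, score mean squared error on the other half
    c = np.polyfit(x_fit, y_fit, deg)
    return np.mean((np.polyval(c, x_val) - y_val) ** 2)

errors = {d: val_error(d) for d in range(1, 9)}
best = min(errors, key=errors.get)
print("validation MSE by degree:", errors)
print("selected degree:", best)
```

Degrees below the true one carry irreducible bias, so their validation error stays well above the noise floor, while degrees at or above it differ only by mild overfitting noise.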
Stability analysis of polynomial fuzzy models via polynomial fuzzy Lyapunov functions
Bernal Reza, Miguel Ángel; Sala, Antonio; Jaadari, Abdelhafidh; Guerra, Thierry-Marie
2011-01-01
In this paper, the stability of continuous-time polynomial fuzzy models by means of a polynomial generalization of fuzzy Lyapunov functions is studied. Fuzzy Lyapunov functions have been fruitfully used in the literature for local analysis of Takagi-Sugeno models, a particular class of the polynomial fuzzy ones. Based on a recent Taylor-series approach which allows a polynomial fuzzy model to exactly represent a nonlinear model in a compact set of the state space, it is shown that a refinement...
Iterative volume morphing and learning for mobile tumor based on 4DCT.
Mao, Songan; Wu, Huanmei; Sandison, George; Fang, Shiaofen
2017-02-21
During image-guided cancer radiation treatment, three-dimensional (3D) tumor volumetric information is important for treatment success. However, it is typically not feasible to image a patient's 3D tumor continuously in real time during treatment due to concern over excessive patient radiation dose. We present a new iterative morphing algorithm to predict the real-time 3D tumor volume based on time-resolved computed tomography (4DCT) acquired before treatment. An offline iterative learning process has been designed to derive a target volumetric deformation function from one breathing phase to another. Real-time volumetric prediction is performed to derive the target 3D volume during treatment delivery. The proposed iterative deformable approach for tumor volume morphing and prediction based on 4DCT is innovative because it makes three major contributions: (1) a novel approach to landmark selection on 3D tumor surfaces using a minimum bounding box; (2) an iterative morphing algorithm to generate the 3D tumor volume using mapped landmarks; and (3) an online tumor volume prediction strategy based on previously trained deformation functions utilizing 4DCT. The experimental performance showed that the maximum morphing deviations are 0.27% and 1.25% for original patient data and artificially generated data, which is promising. This newly developed algorithm and implementation will have important applications for treatment planning, dose calculation and treatment validation in cancer radiation treatment.
Polynomial Heisenberg algebras
International Nuclear Information System (INIS)
Carballo, Juan M; C, David J Fernandez; Negro, Javier; Nieto, Luis M
2004-01-01
Polynomial deformations of the Heisenberg algebra are studied in detail. Some of their natural realizations are given by the higher order susy partners (and not only by those of first order, as is already known) of the harmonic oscillator for even-order polynomials. Here, it is shown that the susy partners of the radial oscillator play a similar role when the order of the polynomial is odd. Moreover, it will be proved that the general systems ruled by such kinds of algebras, in the quadratic and cubic cases, involve Painleve transcendents of types IV and V, respectively
Generalizations of orthogonal polynomials
Bultheel, A.; Cuyt, A.; van Assche, W.; van Barel, M.; Verdonk, B.
2005-07-01
We give a survey of recent generalizations of orthogonal polynomials. That includes multidimensional (matrix and vector orthogonal polynomials) and multivariate versions, multipole (orthogonal rational functions) variants, and extensions of the orthogonality conditions (multiple orthogonality). Most of these generalizations are inspired by the applications in which they are applied. We also give a glimpse of these applications, which are usually generalizations of applications where classical orthogonal polynomials also play a fundamental role: moment problems, numerical quadrature, rational approximation, linear algebra, recurrence relations, and random matrices.
A new iterative speech enhancement scheme based on Kalman filtering
DEFF Research Database (Denmark)
Li, Chunjian; Andersen, Søren Vang
2005-01-01
for a high temporal resolution estimation of this variance. A Local Variance Estimator based on a Prediction Error Kalman Filter is designed for this high temporal resolution variance estimation. To achieve fast convergence and avoid local maxima of the likelihood function, a Weighted Power Spectral....... Performance comparison shows significant improvement over the baseline EM algorithm in terms of three objective measures. Listening test indicates an improvement in subjective quality due to a significant reduction of musical noise compared to the baseline EM algorithm....
Fast Template-based Shape Analysis using Diffeomorphic Iterative Centroid
Cury , Claire; Glaunès , Joan Alexis; Chupin , Marie; Colliot , Olivier
2014-01-01
International audience; A common approach for the analysis of anatomical variability relies on the estimation of a representative template of the population, followed by the study of this population based on the parameters of the deformations going from the template to the population. The Large Deformation Diffeomorphic Metric Mapping framework is widely used for shape analysis of anatomical structures, but computing a template with such framework is computationally expensive. In this paper w...
Energy Technology Data Exchange (ETDEWEB)
Cristescu, Ion, E-mail: ion.cristescu@kit.edu
2016-11-01
Highlights: • An enhanced configuration of the ITER WDS has been developed. • The proposed configuration allows minimization of hazards through the reduction of tritium inventory. • The load on the tritium recovery system (ITER ISS) is minimized, with benefits for mitigation of explosion hazards. - Abstract: Tritiated water is generated in the ITER systems by various sources and may contain deuterium and tritium at various concentrations. The reference process for the ITER Water Detritiation System is based on the Combined Electrolysis Catalytic Exchange (CECE) configuration. During long-term operation of the CECE process, the accumulation of deuterium in the electrolysis unit, and consequently along the Liquid Phase Catalytic Exchange (LPCE) column, is unavoidable, with consequences for the overall detritiation factor of the system. Besides the deuterium issue in the process, the large amount of tritiated water with tritium activity up to 500 Ci/kg in the electrolysis cells is a concern from the safety aspect of the plant. The enhanced configuration of a system for processing tritiated water allows mitigation of the effects of deuterium accumulation and also reduction of the tritium inventory within the electrolysis system. In addition, the benefits concerning the interface between the water detritiation system and the cryogenic-distillation-based tritium recovery system are presented.
Superiority of legendre polynomials to Chebyshev polynomial in ...
African Journals Online (AJOL)
In this paper, we prove the superiority of Legendre polynomials to Chebyshev polynomials in solving first-order ordinary differential equations with rational coefficients. We generated shifted Chebyshev, Legendre and canonical polynomials, which deal with solving differential equations by first choosing Chebyshev ...
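For readers unfamiliar with the two bases being compared, numpy's polynomial module evaluates Chebyshev and Legendre polynomials directly; the shifted versions on [0, 1] arise from the affine map x → 2x − 1. This sketch only illustrates the bases themselves, not the paper's ODE solution scheme.

```python
import numpy as np
from numpy.polynomial import chebyshev as C, legendre as L

# Degree-2 members of each family: T2(x) = 2x^2 - 1, P2(x) = (3x^2 - 1)/2
t2 = C.chebval(0.5, [0, 0, 1])
p2 = L.legval(0.5, [0, 0, 1])
print(t2, p2)  # -0.5 and -0.125

# Shifted versions on [0, 1] via the affine map x -> 2x - 1
xs = np.linspace(0.0, 1.0, 101)
T2s = C.chebval(2 * xs - 1, [0, 0, 1])
P2s = L.legval(2 * xs - 1, [0, 0, 1])
# Both families equal 1 at the right endpoint of the shifted interval
print(T2s[-1], P2s[-1])  # 1.0 1.0
```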
Extended biorthogonal matrix polynomials
Directory of Open Access Journals (Sweden)
Ayman Shehata
2017-01-01
Full Text Available The pair of biorthogonal matrix polynomials for commutative matrices was first introduced by Varma and Tasdelen in [22]. The main aim of this paper is to extend the properties of this pair of biorthogonal matrix polynomials; certain generating matrix functions, finite series, matrix recurrence relations, several important properties of matrix differential recurrence relations, biorthogonality relations and the matrix differential equation for the pair of biorthogonal matrix polynomials J_n^(A,B)(x, k) and K_n^(A,B)(x, k) are discussed. For the matrix polynomials J_n^(A,B)(x, k), various families of bilinear and bilateral generating matrix functions are constructed in the sequel.
Development of an evidence-based review with recommendations using an online iterative process.
Rudmik, Luke; Smith, Timothy L
2011-01-01
The practice of modern medicine is governed by evidence-based principles. Due to the plethora of medical literature, clinicians often rely on systematic reviews and clinical guidelines to summarize the evidence and provide best practices. Implementation of an evidence-based clinical approach can minimize variation in health care delivery and optimize the quality of patient care. This article reports a method for developing an "Evidence-based Review with Recommendations" using an online iterative process. The manuscript describes the following steps involved in this process: Clinical topic selection, Evidence-based review assignment, Literature review and initial manuscript preparation, Iterative review process with author selection, and Manuscript finalization. The goal of this article is to improve efficiency and increase the production of evidence-based reviews while maintaining the high quality and transparency associated with the rigorous methodology utilized for clinical guideline development. With the rise of evidence-based medicine, most medical and surgical specialties have an abundance of clinical topics which would benefit from a formal evidence-based review. Although clinical guideline development is an important methodology, the associated challenges limit development to only the absolute highest-priority clinical topics. As outlined in this article, the online iterative approach to the development of an Evidence-based Review with Recommendations may improve productivity without compromising the quality associated with formal guideline development methodology. Copyright © 2011 American Rhinologic Society-American Academy of Otolaryngic Allergy, LLC.
Qian, Ying-Jing; Yang, Xiao-Dong; Zhai, Guan-Qiao; Zhang, Wei
2017-08-01
Inspired by the nonlinear modes concept in vibrational dynamics, the vertical periodic orbits around the triangular libration points are revisited for the Circular Restricted Three-body Problem. The ζ-component motion is treated as the dominant motion, and the ξ- and η-component motions are treated as the slave motions. The slave motions are related to the dominant motion through approximate nonlinear polynomial expansions with respect to the ζ-position and ζ-velocity during one of the periodic orbital motions. By employing the relations among the three directions, the three-dimensional system can be reduced to a one-dimensional problem. The approximate three-dimensional vertical periodic solution can then be obtained analytically by solving the dominant motion on the ζ-direction only. To demonstrate the effectiveness of the proposed method, an accuracy study was carried out to validate the polynomial expansion (PE) method. As one of the applications, the invariant nonlinear relations in polynomial expansion form are used as constraints to obtain numerical solutions by differential correction. The nonlinear relations among the directions provide an alternative point of view to explore the overall dynamics of periodic orbits around libration points with general rules.
Karlita, Tita; Yuniarno, Eko Mulyanto; Purnama, I. Ketut Eddy; Purnomo, Mauridhi Hery
2017-06-01
Analyzing ultrasound (US) images to obtain the shapes and structures of particular anatomical regions is an interesting field of study, since US imaging is a non-invasive method of capturing the internal structures of the human body. However, bone segmentation in US images is still challenging because the images are strongly influenced by speckle noise and have poor quality. This paper proposes a combination of local phase symmetry and quadratic polynomial fitting to extract the bone outer contour (BOC) from two-dimensional (2D) B-mode US images as an initial step towards three-dimensional (3D) bone surface reconstruction. Using local phase symmetry, the bone is initially extracted from the US images. The BOC is then extracted by scanning one pixel on the bone boundary in each column of the US image using a first-phase-features searching method. Quadratic polynomial fitting is utilized to refine and estimate the pixel locations that fail to be detected during the extraction process. A hole-filling method is then applied, utilizing the polynomial coefficients to fill the gaps with new pixels. The proposed method is able to estimate the new pixel positions and ensures smoothness and continuity of the contour path. Evaluations are done using cow and goat bones by comparing the resulting BOCs with contours produced by manual segmentation and by Canny edge detection. The evaluation shows that our proposed method produces excellent results, with an average MSE before and after hole filling of 0.65.
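The quadratic-polynomial hole-filling step can be sketched in a few lines: fit a degree-2 polynomial to the successfully detected contour pixels and evaluate it at the columns where detection failed. The contour below is a synthetic placeholder, not US data.

```python
import numpy as np

# Detected bone contour: one row index per image column; NaN marks detection failures
cols = np.arange(20, dtype=float)
rows = 0.1 * (cols - 10) ** 2 + 5.0        # synthetic parabolic contour
rows_detected = rows.copy()
rows_detected[[4, 5, 13]] = np.nan          # simulated holes

# Fit a quadratic to the successfully detected pixels only
mask = ~np.isnan(rows_detected)
coeffs = np.polyfit(cols[mask], rows_detected[mask], deg=2)

# Fill the holes with the fitted polynomial's estimate
filled = rows_detected.copy()
filled[~mask] = np.polyval(coeffs, cols[~mask])
print(filled[[4, 5, 13]])
```

Because the fit is global over the scanned columns, the filled pixels inherit the smoothness of the fitted curve, which is the continuity property the abstract emphasizes.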
Chun, Tae Yoon; Lee, Jae Young; Park, Jin Bae; Choi, Yoon Ho
2018-06-01
In this paper, we propose two multirate generalised policy iteration (GPI) algorithms applied to discrete-time linear quadratic regulation problems. The proposed algorithms are extensions of the existing GPI algorithm that consists of the approximate policy evaluation and policy improvement steps. The two proposed schemes, named heuristic dynamic programming (HDP) and dual HDP (DHP), based on multirate GPI, use multi-step estimation (M-step Bellman equation) at the approximate policy evaluation step for estimating the value function and its gradient called costate, respectively. Then, we show that these two methods with the same update horizon can be considered equivalent in the iteration domain. Furthermore, monotonically increasing and decreasing convergences, so called value iteration (VI)-mode and policy iteration (PI)-mode convergences, are proved to hold for the proposed multirate GPIs. Further, general convergence properties in terms of eigenvalues are also studied. The data-driven online implementation methods for the proposed HDP and DHP are demonstrated and finally, we present the results of numerical simulations performed to verify the effectiveness of the proposed methods.
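For context, the classical single-rate, model-based policy iteration that the GPI framework generalizes can be sketched for a discrete-time LQR problem; the system matrices and initial stabilizing gain below are illustrative, and the data-driven multirate machinery of the paper is not reproduced.

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov, solve_discrete_are

# Discrete-time LQR: x[k+1] = A x[k] + B u[k], cost sum of x'Qx + u'Ru
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
Q = np.eye(2)
R = np.array([[1.0]])

K = np.array([[1.0, 1.0]])   # initial stabilizing gain, u = -K x
for _ in range(50):
    Acl = A - B @ K
    # Policy evaluation: P = Acl' P Acl + Q + K'RK  (discrete Lyapunov equation)
    P = solve_discrete_lyapunov(Acl.T, Q + K.T @ R @ K)
    # Policy improvement: K <- (R + B'PB)^-1 B'PA
    K_new = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    if np.allclose(K_new, K, atol=1e-12):
        K = K_new
        break
    K = K_new

print("converged gain:", K)
```

Policy iteration is Newton's method on the Riccati equation, which is the monotone PI-mode convergence the paper contrasts with the VI-mode.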
Majeed, Muhammad Usman
2017-07-19
Steady-state elliptic partial differential equations (PDEs) are frequently used to model a diverse range of physical phenomena. The source and boundary data estimation problems for such PDE systems are of prime interest in various engineering disciplines including biomedical engineering, mechanics of materials and earth sciences. Almost all existing solution strategies for such problems can be broadly classified as optimization-based techniques, which are computationally heavy especially when the problems are formulated on higher dimensional space domains. However, in this dissertation, feedback based state estimation algorithms, known as state observers, are developed to solve such steady-state problems using one of the space variables as time-like. In this regard, first, an iterative observer algorithm is developed that sweeps over regular-shaped domains and solves boundary estimation problems for steady-state Laplace equation. It is well-known that source and boundary estimation problems for the elliptic PDEs are highly sensitive to noise in the data. For this, an optimal iterative observer algorithm, which is a robust counterpart of the iterative observer, is presented to tackle the ill-posedness due to noise. The iterative observer algorithm and the optimal iterative algorithm are then used to solve source localization and estimation problems for Poisson equation for noise-free and noisy data cases respectively. Next, a divide and conquer approach is developed for three-dimensional domains with two congruent parallel surfaces to solve the boundary and the source data estimation problems for the steady-state Laplace and Poisson kind of systems respectively. Theoretical results are shown using a functional analysis framework, and consistent numerical simulation results are presented for several test cases using finite difference discretization schemes.
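The observer algorithms themselves are not reproduced here; for reference, the underlying steady-state Laplace problem on a square with Dirichlet data can be solved by a plain Jacobi sweep, sketched below with an illustrative grid size and boundary condition.

```python
import numpy as np

n = 30
u = np.zeros((n, n))
u[0, :] = 1.0            # Dirichlet data: top edge held at 1, other edges at 0

for _ in range(5000):    # Jacobi iteration on the interior points
    u_new = u.copy()
    u_new[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1]
                                + u[1:-1, :-2] + u[1:-1, 2:])
    if np.max(np.abs(u_new - u)) < 1e-8:
        u = u_new
        break
    u = u_new

print("value near the centre:", u[n // 2, n // 2])
```

By symmetry the continuous solution equals 1/4 at the centre of the square, which the discrete solution approaches as the grid is refined.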
International Nuclear Information System (INIS)
Troyon, F.
1997-01-01
Recurrent attacks against ITER, the new generation of tokamak are a mix of political and scientific arguments. This short article draws a historical review of the European fusion program. This program has allowed to build and manage several installations in the aim of getting experimental results necessary to lead the program forwards. ITER will bring together a fusion reactor core with technologies such as materials, superconductive coils, heating devices and instrumentation in order to validate and delimit the operating range. ITER will be a logical and decisive step towards the use of controlled fusion. (A.C.)
The development of argon arc brazing with Cu-based filler for ITER thermal anchor attachment
International Nuclear Information System (INIS)
Sun Zhenchao; Li Pengyuan; Pan Chuanjie; Hou Binglin; Han Shilei; Pei Yinyin; Long Weimin
2012-01-01
The thermal anchor is a key component of the ITER magnet supports, maintaining the low temperature required for normal operation of the superconducting coils. During the advanced research on ITER thermal anchor attachment, dozens of brazing fillers and several kinds of brazing techniques were developed and investigated. The test results show that Cu-based alloys have preferable mechanical properties at both room temperature and liquid nitrogen temperature (77 K) owing to their high brazing temperature, and good weldability to 316LN. The brazing temperature of the Cu-based filler is over 1000 ℃, but the heat input is relatively low because of the shallow heating depth of argon arc brazing. Lower heat input is good for the control of brazing deformation. No cleaning is needed after brazing, because argon arc brazing uses no brazing flux. Arc brazing with Cu-based filler was therefore chosen as the principal method for the attachment of the thermal anchor. (authors)
Chromatic polynomials for simplicial complexes
DEFF Research Database (Denmark)
Møller, Jesper Michael; Nord, Gesche
2016-01-01
In this note we consider s-chromatic polynomials for finite simplicial complexes. When s=1, the 1-chromatic polynomial is just the usual graph chromatic polynomial of the 1-skeleton. In general, the s-chromatic polynomial depends on the s-skeleton and its value at r...
An adaptive Gaussian process-based iterative ensemble smoother for data assimilation
Ju, Lei; Zhang, Jiangjiang; Meng, Long; Wu, Laosheng; Zeng, Lingzao
2018-05-01
Accurate characterization of subsurface hydraulic conductivity is vital for modeling subsurface flow and transport. The iterative ensemble smoother (IES) has been proposed to estimate heterogeneous parameter fields. As a Monte Carlo-based method, IES requires a relatively large ensemble size to guarantee its performance. To improve computational efficiency, we propose an adaptive Gaussian process (GP)-based iterative ensemble smoother (GPIES) in this study. At each iteration, the GP surrogate is adaptively refined by adding a few new base points chosen from the updated parameter realizations. The sensitivity information between model parameters and measurements is then calculated from a large number of realizations generated by the GP surrogate at virtually no computational cost. Since the original model evaluations are required only for the base points, whose number is much smaller than the ensemble size, the computational cost is significantly reduced. The applicability of GPIES in estimating heterogeneous conductivity is evaluated on saturated and unsaturated flow problems. Without sacrificing estimation accuracy, GPIES achieves roughly an order-of-magnitude speed-up compared with the standard IES. Although subsurface flow problems are considered in this study, the proposed method can be applied equally to other hydrological models.
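The ensemble-smoother update that GPIES accelerates can be sketched on a linear-Gaussian toy problem. Everything below (the forward map `G`, the ensemble size, the noise level) is illustrative rather than taken from the paper; in GPIES the repeated `forward_model` calls would be replaced by a cheap GP surrogate trained on a few base points.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward_model(m):
    # toy "simulator": a fixed linear map from parameters to observations
    G = np.array([[1.0, 0.5], [0.2, 1.0]])
    return G @ m

n_ens, n_par, n_obs = 200, 2, 2
m_true = np.array([1.0, -0.5])
obs_err = 0.05
d_obs = forward_model(m_true) + rng.normal(0.0, obs_err, n_obs)

M = rng.normal(0.0, 1.0, (n_par, n_ens))        # prior parameter ensemble
for _ in range(4):                               # smoother iterations
    D = np.column_stack([forward_model(M[:, j]) for j in range(n_ens)])
    dM = M - M.mean(axis=1, keepdims=True)
    dD = D - D.mean(axis=1, keepdims=True)
    C_md = dM @ dD.T / (n_ens - 1)               # parameter/observation cross-covariance
    C_dd = dD @ dD.T / (n_ens - 1)
    K = C_md @ np.linalg.inv(C_dd + obs_err**2 * np.eye(n_obs))
    D_pert = d_obs[:, None] + rng.normal(0.0, obs_err, (n_obs, n_ens))
    M = M + K @ (D_pert - D)                     # Kalman-type ensemble update

m_est = M.mean(axis=1)
```

With an expensive simulator, the `n_ens` forward runs per iteration dominate the cost, which is exactly what motivates replacing them with a surrogate evaluated at many realizations for free.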
Tsai, Jason Sheng-Hong; Du, Yan-Yi; Huang, Pei-Hsiang; Guo, Shu-Mei; Shieh, Leang-San; Chen, Yuhua
2011-07-01
In this paper, a digital redesign methodology for the iterative learning-based decentralized adaptive tracker is proposed to improve the dynamic performance of sampled-data linear large-scale control systems consisting of N interconnected multi-input multi-output subsystems, so that the system output will follow any trajectory, even one not initially representable by the analytic reference model. To overcome interference among subsystems and simplify the controller design, the proposed model reference decentralized adaptive control scheme first constructs a decoupled, well-designed reference model. Then, according to this model, the paper develops a digital decentralized adaptive tracker based on optimal analog control and a prediction-based digital redesign technique for the sampled-data large-scale coupled system. To enhance the tracking performance of the digital tracker at specified sampling instants, we apply iterative learning control (ILC) to train the control input via continual learning. As a result, the proposed iterative learning-based decentralized adaptive tracker not only has a robust closed-loop decoupled property but also possesses good tracking performance in both the transient and the steady state. In addition, evolutionary programming is applied to search for a good learning gain to speed up the learning process of ILC. Copyright © 2011 ISA. Published by Elsevier Ltd. All rights reserved.
Alignment Condition-Based Robust Adaptive Iterative Learning Control of Uncertain Robot System
Directory of Open Access Journals (Sweden)
Guofeng Tong
2014-04-01
This paper proposes an adaptive iterative learning control strategy integrated with saturation-based robust control for an uncertain robot system in the presence of modelling uncertainties, unknown parameters, and external disturbance under an alignment condition. An important merit is that it achieves adaptive switching of the gain matrix both in conventional PD-type feedforward control and in robust adaptive control in the iteration domain simultaneously. The convergence analysis of the proposed control law is based on Lyapunov's direct method under the alignment initial condition. Simulation results demonstrate the faster learning rate and better robust performance of the proposed algorithm compared with other existing robust controllers. An experiment on a three-DOF robot manipulator shows its practical effectiveness.
Wang, An; Cao, Yang; Shi, Quan
2018-01-01
In this paper, we demonstrate a complete version of the convergence theory of the modulus-based matrix splitting iteration methods for solving a class of implicit complementarity problems proposed by Hong and Li (Numer. Linear Algebra Appl. 23:629-641, 2016). New convergence conditions are presented when the system matrix is a positive-definite matrix and an [Formula: see text]-matrix, respectively.
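For context, a minimal modulus-based iteration of the kind this convergence theory covers can be sketched for the standard linear complementarity problem LCP(q, A): find z >= 0 with w = Az + q >= 0 and zᵀw = 0. The sketch uses the simplest choices (Ω = I, γ = 1, exact solves); the small test matrix is illustrative, and the implicit complementarity problem of Hong and Li adds structure not shown here.

```python
import numpy as np

def modulus_iteration(A, q, tol=1e-10, max_iter=500):
    """Solve LCP(q, A) via the modulus equation (I + A) x = (I - A)|x| - q.

    At the fixed point, z = |x| + x >= 0 and w = |x| - x = A z + q >= 0
    are complementary.
    """
    n = len(q)
    x = np.zeros(n)
    Minv = np.linalg.inv(np.eye(n) + A)       # exact solve of the splitting system
    for _ in range(max_iter):
        x_new = Minv @ ((np.eye(n) - A) @ np.abs(x) - q)
        if np.linalg.norm(x_new - x) < tol:
            break
        x = x_new
    return np.abs(x) + x                      # the complementarity solution z

A = np.array([[2.0, 1.0], [1.0, 2.0]])        # positive-definite test matrix
q = np.array([-1.0, 1.0])
z = modulus_iteration(A, q)                   # expected solution: z = (0.5, 0)
w = A @ z + q
```

For this symmetric positive-definite `A`, the iteration map is a contraction (factor 0.5 here), illustrating the kind of convergence condition the paper establishes for positive-definite and H-matrices.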
Re-design of ITER Glow Discharge Cleaning system based on a fixed electrode concept
International Nuclear Information System (INIS)
Yang, Y.; Maruyama, S.; Kiss, G.; O’Connor, M.; Zhang, Y.; Pitts, R.A.; Shimada, M.; Fang, T.; Wang, Y.; Wang, M.; Pan, Y.; Li, B.; Li, L.
2014-01-01
Highlights: • This paper summarizes the approved new design of the ITER GDC. • It is based on a fixed electrode design instead of the previous movable concept. • Estimates were made of the glow current density. • R and D topics on initiation, steady state and heat load are presented. • Other relevant considerations are listed exhaustively. -- Abstract: A new design of the ITER Glow Discharge Cleaning (GDC) system based on a fixed electrode concept replaces the previous design, which was based on a movable electrode integrated with the ITER In-Vessel Viewing System. Recently, the conceptual design of the GDC system was reviewed successfully with respect to functions, safety, operation and maintenance. The proposed design was checked against the requirements and found to be feasible. This paper gives an overall description of the requirements from the physics and operation viewpoints and introduces the design at the conceptual level. The main R and D activities are listed and summarized. Further detailed studies are to be performed in the next design stage.
Energy Technology Data Exchange (ETDEWEB)
Notohamiprodjo, S.; Deak, Z.; Meurer, F.; Maertz, F.; Mueck, F.G.; Geyer, L.L.; Wirth, S. [Ludwig-Maximilians University Hospital of Munich, Institute for Clinical Radiology, Munich (Germany)
2015-01-15
The purpose of this study was to compare cranial CT (CCT) image quality (IQ) of the MBIR algorithm with standard iterative reconstruction (ASiR). In this institutional review board (IRB)-approved study, raw data sets of 100 unenhanced CCT examinations (120 kV, 50-260 mAs, 20 mm collimation, 0.984 pitch) were reconstructed with both ASiR and MBIR. Signal-to-noise (SNR) and contrast-to-noise (CNR) were calculated from attenuation values measured in caudate nucleus, frontal white matter, anterior ventricle horn, fourth ventricle, and pons. Two radiologists, who were blinded to the reconstruction algorithms, evaluated anonymized multiplanar reformations of 2.5 mm with respect to depiction of different parenchymal structures and impact of artefacts on IQ with a five-point scale (0: unacceptable, 1: less than average, 2: average, 3: above average, 4: excellent). MBIR decreased artefacts more effectively than ASiR (p < 0.01). The median depiction score for MBIR was 3, whereas the median value for ASiR was 2 (p < 0.01). SNR and CNR were significantly higher in MBIR than ASiR (p < 0.01). MBIR showed significant improvement of IQ parameters compared to ASiR. As CCT is an examination that is frequently required, the use of MBIR may allow for substantial reduction of radiation exposure caused by medical diagnostics. (orig.)
International Nuclear Information System (INIS)
Notohamiprodjo, S.; Deak, Z.; Meurer, F.; Maertz, F.; Mueck, F.G.; Geyer, L.L.; Wirth, S.
2015-01-01
The purpose of this study was to compare cranial CT (CCT) image quality (IQ) of the MBIR algorithm with standard iterative reconstruction (ASiR). In this institutional review board (IRB)-approved study, raw data sets of 100 unenhanced CCT examinations (120 kV, 50-260 mAs, 20 mm collimation, 0.984 pitch) were reconstructed with both ASiR and MBIR. Signal-to-noise (SNR) and contrast-to-noise (CNR) were calculated from attenuation values measured in caudate nucleus, frontal white matter, anterior ventricle horn, fourth ventricle, and pons. Two radiologists, who were blinded to the reconstruction algorithms, evaluated anonymized multiplanar reformations of 2.5 mm with respect to depiction of different parenchymal structures and impact of artefacts on IQ with a five-point scale (0: unacceptable, 1: less than average, 2: average, 3: above average, 4: excellent). MBIR decreased artefacts more effectively than ASiR (p < 0.01). The median depiction score for MBIR was 3, whereas the median value for ASiR was 2 (p < 0.01). SNR and CNR were significantly higher in MBIR than ASiR (p < 0.01). MBIR showed significant improvement of IQ parameters compared to ASiR. As CCT is an examination that is frequently required, the use of MBIR may allow for substantial reduction of radiation exposure caused by medical diagnostics. (orig.)
A quasi-static polynomial nodal method for nuclear reactor analysis
International Nuclear Information System (INIS)
Gehin, J.C.
1992-09-01
Modern nodal methods are currently available which can accurately and efficiently solve the static and transient neutron diffusion equations. Most of the methods, however, are limited to two energy groups for practical application. The objective of this research is the development of a static and transient, multidimensional nodal method which allows more than two energy groups and uses a non-linear iterative method for efficient solution of the nodal equations. For both the static and transient methods, finite-difference equations which are corrected by the use of discontinuity factors are derived. The discontinuity factors are computed from a polynomial nodal method using a non-linear iteration technique. The polynomial nodal method is based upon a quartic approximation and utilizes a quadratic transverse-leakage approximation. The solution of the time-dependent equations is performed by the use of a quasi-static method in which the node-averaged fluxes are factored into shape and amplitude functions. The application of the quasi-static polynomial method to several benchmark problems demonstrates that the accuracy is consistent with that of other nodal methods. The use of the quasi-static method is shown to substantially reduce the computation time over the traditional fully-implicit time-integration method. Problems involving thermal-hydraulic feedback are accurately, and efficiently, solved by performing several reactivity/thermal-hydraulic updates per shape calculation
A quasi-static polynomial nodal method for nuclear reactor analysis
Energy Technology Data Exchange (ETDEWEB)
Gehin, Jess C. [Massachusetts Inst. of Tech., Cambridge, MA (United States)
1992-09-01
Modern nodal methods are currently available which can accurately and efficiently solve the static and transient neutron diffusion equations. Most of the methods, however, are limited to two energy groups for practical application. The objective of this research is the development of a static and transient, multidimensional nodal method which allows more than two energy groups and uses a non-linear iterative method for efficient solution of the nodal equations. For both the static and transient methods, finite-difference equations which are corrected by the use of discontinuity factors are derived. The discontinuity factors are computed from a polynomial nodal method using a non-linear iteration technique. The polynomial nodal method is based upon a quartic approximation and utilizes a quadratic transverse-leakage approximation. The solution of the time-dependent equations is performed by the use of a quasi-static method in which the node-averaged fluxes are factored into shape and amplitude functions. The application of the quasi-static polynomial method to several benchmark problems demonstrates that the accuracy is consistent with that of other nodal methods. The use of the quasi-static method is shown to substantially reduce the computation time over the traditional fully-implicit time-integration method. Problems involving thermal-hydraulic feedback are accurately, and efficiently, solved by performing several reactivity/thermal-hydraulic updates per shape calculation.
Global Monte Carlo Simulation with High Order Polynomial Expansions
International Nuclear Information System (INIS)
William R. Martin; James Paul Holloway; Kaushik Banerjee; Jesse Cheatham; Jeremy Conlin
2007-01-01
The functional expansion technique (FET) was recently developed for Monte Carlo simulation. The basic idea of the FET is to expand a Monte Carlo tally in terms of a high order expansion, the coefficients of which can be estimated via the usual random walk process in a conventional Monte Carlo code. If the expansion basis is chosen carefully, the lowest order coefficient is simply the conventional histogram tally, corresponding to a flat mode. This research project studied the applicability of using the FET to estimate the fission source, from which fission sites can be sampled for the next generation. The idea is that individual fission sites contribute to expansion modes that may span the geometry being considered, possibly increasing the communication across a loosely coupled system and thereby improving convergence over the conventional fission bank approach used in most production Monte Carlo codes. The project examined a number of basis functions, including global Legendre polynomials as well as 'local' piecewise polynomials such as finite element hat functions and higher order versions. The global FET showed an improvement in convergence over the conventional fission bank approach. The local FET methods showed some advantages versus global polynomials in handling geometries with discontinuous material properties. The conventional finite element hat functions had the disadvantage that the expansion coefficients could not be estimated directly but had to be obtained by solving a linear system whose matrix elements were estimated. An alternative fission matrix-based response matrix algorithm was formulated. Studies were made of two alternative applications of the FET, one based on the kernel density estimator and one based on Arnoldi's method of minimized iterations. Preliminary results for both methods indicate improvements in fission source convergence. These developments indicate that the FET has promise for speeding up Monte Carlo fission source convergence
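The FET tally idea can be illustrated in one dimension: each sampled point scores the Legendre polynomials evaluated at its position, and the sample means give the expansion coefficients of the underlying density. The toy density f(x) = (1 + x)/2 on [-1, 1] and the sample size below are illustrative choices, not from the project.

```python
import numpy as np
from numpy.polynomial.legendre import legval

rng = np.random.default_rng(1)

# Sample from the toy density f(x) = (1 + x)/2 on [-1, 1] via its inverse CDF.
u = rng.random(200_000)
x = 2.0 * np.sqrt(u) - 1.0

# FET tally: coefficient a_n = (2n + 1)/2 * E[P_n(X)], estimated as a sample
# mean -- exactly the kind of quantity a random walk can score event by event.
order = 4
coeffs = np.empty(order + 1)
for n in range(order + 1):
    e_n = np.zeros(n + 1)
    e_n[n] = 1.0                     # coefficient vector selecting P_n in legval
    coeffs[n] = (2 * n + 1) / 2 * legval(x, e_n).mean()

# Reconstruct the density from the truncated expansion and compare to f.
grid = np.linspace(-1.0, 1.0, 5)
f_hat = legval(grid, coeffs)
f_true = (1.0 + grid) / 2.0
```

Here the exact expansion is a0 = a1 = 0.5 with all higher coefficients zero, so the reconstruction error is pure Monte Carlo noise; replacing the histogram fission-source tally with such moments is what lets individual sites inform modes spanning the whole geometry.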
Two polynomial representations of experimental design
Notari, Roberto; Riccomagno, Eva; Rogantin, Maria-Piera
2007-01-01
In the context of algebraic statistics an experimental design is described by a set of polynomials called the design ideal. This, in turn, is generated by finite sets of polynomials. Two types of generating sets are mostly used in the literature: Groebner bases and indicator functions. We briefly describe them both, how they are used in the analysis and planning of a design and how to switch between them. Examples include fractions of full factorial designs and designs for mixture experiments.
Zeng, Fa; Tan, Qiaofeng; Yan, Yingbai; Jin, Guofan
2007-10-01
The study of phase retrieval is meaningful owing to its wide applications in many domains, such as adaptive optics, laser beam quality diagnostics, and precise measurement of optical surfaces. Here a hybrid iterative phase retrieval algorithm is proposed, based on fusing the intensity information in three defocused planes. First, the conjugate gradient algorithm is applied to obtain a coarse solution of the phase distribution in the input plane; then the iterative angular spectrum method is applied for a better retrieval result. The algorithm remains applicable even when the exact shape and size of the aperture in the input plane are unknown. Moreover, it always exhibits good convergence, i.e., the retrieved results are insensitive to the chosen positions of the three defocused planes and to the initial guess of the complex amplitude in the input plane, as verified by both simulations and experiments.
Backtracking-Based Iterative Regularization Method for Image Compressive Sensing Recovery
Directory of Open Access Journals (Sweden)
Lingjun Liu
2017-01-01
This paper presents a variant of the iterative shrinkage-thresholding (IST) algorithm, called backtracking-based adaptive IST (BAIST), for image compressive sensing (CS) reconstruction. With increasing iterations, IST usually yields an over-smoothed solution and converges prematurely. To add back more detail, the BAIST method backtracks to the previous noisy image using L2-norm minimization, i.e., minimizing the Euclidean distance between the current solution and the previous one. Through this modification, BAIST achieves superior performance while maintaining the low complexity of IST-type methods. BAIST also adopts a nonlocal regularization with an adaptive regularizer to automatically detect the sparsity level of an image. Experimental results show that the algorithm outperforms the original IST method and several excellent CS techniques.
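The plain IST iteration that BAIST builds on, a gradient step on the data-fidelity term followed by soft-thresholding, can be sketched for generic sparse recovery. The sensing matrix, sparsity level, and λ below are illustrative, and none of BAIST's backtracking or nonlocal-regularization machinery is included.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 -- the shrinkage step of IST."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

rng = np.random.default_rng(0)
m, n = 200, 50
A = rng.normal(size=(m, n))                   # sensing matrix
x_true = np.zeros(n)
x_true[:5] = 1.0                              # sparse signal to recover
y = A @ x_true                                # noiseless measurements

lam = 0.01                                    # l1 penalty weight
L = np.linalg.norm(A, 2) ** 2                 # Lipschitz constant of the gradient
x = np.zeros(n)
for _ in range(300):                          # IST: gradient step, then shrinkage
    x = soft_threshold(x + A.T @ (y - A @ x) / L, lam / L)
```

Each iteration decreases the objective 0.5||y - Ax||² + λ||x||₁; the smoothing/prematurity that BAIST addresses shows up when λ is large or the iterations are pushed far on noisy images.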
Wang, Q.; Alfalou, A.; Brosseau, C.
2016-04-01
Here, we report a brief review of recent developments in correlation algorithms. Several implementation schemes and specific applications proposed in recent years are also given to illustrate powerful applications of these methods. Following a discussion and comparison of the implementation of these schemes, we believe that all-numerical implementation is the most practical choice for applying the correlation method, because the advantages of optical processing cannot compensate for the technical and/or financial cost of an optical implementation platform. We also present a simple iterative algorithm to optimize the training images of composite correlation filters. With three or four iterations, the peak-to-correlation energy (PCE) value of the correlation plane can be significantly enhanced. A simulation test using the Pointing Head Pose Image Database (PHPID) illustrates the effectiveness of this statement. Our method can be applied as an optimization means in many composite filters based on linear composition of training images.
Iterated non-linear model predictive control based on tubes and contractive constraints.
Murillo, M; Sánchez, G; Giovanini, L
2016-05-01
This paper presents a predictive control algorithm for non-linear systems based on successive linearizations of the non-linear dynamics around a given trajectory. A linear time-varying model is obtained, and the non-convex constrained optimization problem is transformed into a sequence of locally convex ones. The robustness of the proposed algorithm is addressed by adding a convex contractive constraint. To account for linearization errors and obtain more accurate results, an inner iteration loop is added to the algorithm. A simple methodology to obtain an outer bounding tube for state trajectories is also presented. The convergence of the iterative process and the stability of the closed-loop system are analyzed. Simulation results show the effectiveness of the proposed algorithm in controlling a quadcopter-type unmanned aerial vehicle. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
Radiation dose reduction in medical x-ray CT via Fourier-based iterative reconstruction.
Fahimian, Benjamin P; Zhao, Yunzhe; Huang, Zhifeng; Fung, Russell; Mao, Yu; Zhu, Chun; Khatonabadi, Maryam; DeMarco, John J; Osher, Stanley J; McNitt-Gray, Michael F; Miao, Jianwei
2013-03-01
A Fourier-based iterative reconstruction technique, termed Equally Sloped Tomography (EST), is developed in conjunction with advanced mathematical regularization to investigate radiation dose reduction in x-ray CT. The method is experimentally implemented on fan-beam CT and evaluated as a function of imaging dose on a series of image quality phantoms and anonymous pediatric patient data sets. Numerical simulation experiments are also performed to explore the extension of EST to helical cone-beam geometry. EST is a Fourier based iterative algorithm, which iterates back and forth between real and Fourier space utilizing the algebraically exact pseudopolar fast Fourier transform (PPFFT). In each iteration, physical constraints and mathematical regularization are applied in real space, while the measured data are enforced in Fourier space. The algorithm is automatically terminated when a proposed termination criterion is met. Experimentally, fan-beam projections were acquired by the Siemens z-flying focal spot technology, and subsequently interleaved and rebinned to a pseudopolar grid. Image quality phantoms were scanned at systematically varied mAs settings, reconstructed by EST and conventional reconstruction methods such as filtered back projection (FBP), and quantified using metrics including resolution, signal-to-noise ratios (SNRs), and contrast-to-noise ratios (CNRs). Pediatric data sets were reconstructed at their original acquisition settings and additionally simulated to lower dose settings for comparison and evaluation of the potential for radiation dose reduction. Numerical experiments were conducted to quantify EST and other iterative methods in terms of image quality and computation time. The extension of EST to helical cone-beam CT was implemented by using the advanced single-slice rebinning (ASSR) method. Based on the phantom and pediatric patient fan-beam CT data, it is demonstrated that EST reconstructions with the lowest scanner flux setting of 39 m
Radiation dose reduction in medical x-ray CT via Fourier-based iterative reconstruction
International Nuclear Information System (INIS)
Fahimian, Benjamin P.; Zhao Yunzhe; Huang Zhifeng; Fung, Russell; Zhu Chun; Miao Jianwei; Mao Yu; Khatonabadi, Maryam; DeMarco, John J.; McNitt-Gray, Michael F.; Osher, Stanley J.
2013-01-01
Purpose: A Fourier-based iterative reconstruction technique, termed Equally Sloped Tomography (EST), is developed in conjunction with advanced mathematical regularization to investigate radiation dose reduction in x-ray CT. The method is experimentally implemented on fan-beam CT and evaluated as a function of imaging dose on a series of image quality phantoms and anonymous pediatric patient data sets. Numerical simulation experiments are also performed to explore the extension of EST to helical cone-beam geometry. Methods: EST is a Fourier based iterative algorithm, which iterates back and forth between real and Fourier space utilizing the algebraically exact pseudopolar fast Fourier transform (PPFFT). In each iteration, physical constraints and mathematical regularization are applied in real space, while the measured data are enforced in Fourier space. The algorithm is automatically terminated when a proposed termination criterion is met. Experimentally, fan-beam projections were acquired by the Siemens z-flying focal spot technology, and subsequently interleaved and rebinned to a pseudopolar grid. Image quality phantoms were scanned at systematically varied mAs settings, reconstructed by EST and conventional reconstruction methods such as filtered back projection (FBP), and quantified using metrics including resolution, signal-to-noise ratios (SNRs), and contrast-to-noise ratios (CNRs). Pediatric data sets were reconstructed at their original acquisition settings and additionally simulated to lower dose settings for comparison and evaluation of the potential for radiation dose reduction. Numerical experiments were conducted to quantify EST and other iterative methods in terms of image quality and computation time. The extension of EST to helical cone-beam CT was implemented by using the advanced single-slice rebinning (ASSR) method. Results: Based on the phantom and pediatric patient fan-beam CT data, it is demonstrated that EST reconstructions with the lowest
Phase-only stereoscopic hologram calculation based on Gerchberg–Saxton iterative algorithm
International Nuclear Information System (INIS)
Xia Xinyi; Xia Jun
2016-01-01
A phase-only computer-generated holography (CGH) calculation method for stereoscopic holography is proposed in this paper. The two-dimensional (2D) perspective projection views of the three-dimensional (3D) object are generated by computer graphics rendering techniques. Based on these views, a phase-only hologram is calculated using the Gerchberg–Saxton (GS) iterative algorithm. Compared with the non-iterative algorithm in conventional stereoscopic holography, the proposed method improves the holographic image quality, especially for the phase-only hologram encoded from the complex distribution. Both simulation and optical experiment results demonstrate that the proposed method gives higher-quality reconstruction than the traditional method. (special topic)
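A minimal Gerchberg–Saxton loop of the kind used here alternates between the image plane, where the target amplitude is imposed, and the hologram plane, where the amplitude is forced to unity (the phase-only constraint). The FFT propagation model and random target below are illustrative stand-ins for the rendered perspective views.

```python
import numpy as np

rng = np.random.default_rng(0)
target = rng.random((64, 64))                       # desired image-plane amplitude
target /= np.linalg.norm(target)

# random initial phase-only hologram (unit amplitude everywhere)
holo = np.exp(2j * np.pi * rng.random((64, 64)))

def image_error(h):
    """Normalized amplitude mismatch between the reconstruction and the target."""
    rec = np.abs(np.fft.fft2(h))
    rec = rec / np.linalg.norm(rec)
    return np.linalg.norm(rec - target)

err0 = image_error(holo)
for _ in range(50):                                 # Gerchberg-Saxton iterations
    field = np.fft.fft2(holo)
    field = target * np.exp(1j * np.angle(field))   # impose target amplitude
    back = np.fft.ifft2(field)
    holo = np.exp(1j * np.angle(back))              # phase-only hologram constraint
err1 = image_error(holo)
```

Each pass keeps the phase from the propagated field and swaps in the constrained amplitude, which is why the reconstruction error shrinks from `err0` to `err1`.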
Energy Technology Data Exchange (ETDEWEB)
Kaasalainen, Touko; Lampinen, Anniina [University of Helsinki and Helsinki University Hospital, HUS Medical Imaging Center, Radiology, POB 340, Helsinki (Finland); University of Helsinki, Department of Physics, Helsinki (Finland); Palmu, Kirsi [University of Helsinki and Helsinki University Hospital, HUS Medical Imaging Center, Radiology, POB 340, Helsinki (Finland); School of Science, Aalto University, Department of Biomedical Engineering and Computational Science, Helsinki (Finland); Reijonen, Vappu; Kortesniemi, Mika [University of Helsinki and Helsinki University Hospital, HUS Medical Imaging Center, Radiology, POB 340, Helsinki (Finland); Leikola, Junnu [University of Helsinki and Helsinki University Hospital, Department of Plastic Surgery, Helsinki (Finland); Kivisaari, Riku [University of Helsinki and Helsinki University Hospital, Department of Neurosurgery, Helsinki (Finland)
2015-09-15
Medical professionals need to exercise particular caution when developing CT scanning protocols for children who require multiple CT studies, such as those with craniosynostosis. To evaluate the utility of ultra-low-dose CT protocols with model-based iterative reconstruction techniques for craniosynostosis imaging. We scanned two pediatric anthropomorphic phantoms with a 64-slice CT scanner using different low-dose protocols for craniosynostosis. We measured organ doses in the head region with metal-oxide-semiconductor field-effect transistor (MOSFET) dosimeters. Numerical simulations served to estimate organ and effective doses. We objectively and subjectively evaluated the quality of images produced by adaptive statistical iterative reconstruction (ASiR) 30%, ASiR 50% and Veo (all by GE Healthcare, Waukesha, WI). Image noise and contrast were determined for different tissues. Mean organ dose with the newborn phantom was decreased up to 83% compared to the routine protocol when using ultra-low-dose scanning settings. Similarly, for the 5-year phantom the greatest radiation dose reduction was 88%. The numerical simulations supported the findings with MOSFET measurements. The image quality remained adequate with Veo reconstruction, even at the lowest dose level. Craniosynostosis CT with model-based iterative reconstruction could be performed with a 20-μSv effective dose, corresponding to the radiation exposure of plain skull radiography, without compromising required image quality. (orig.)
Computed tomography depiction of small pediatric vessels with model-based iterative reconstruction
Energy Technology Data Exchange (ETDEWEB)
Koc, Gonca; Courtier, Jesse L.; Phelps, Andrew; Marcovici, Peter A.; MacKenzie, John D. [UCSF Benioff Children' s Hospital, Department of Radiology and Biomedical Imaging, San Francisco, CA (United States)
2014-07-15
Computed tomography (CT) is extremely important in characterizing blood vessel anatomy and vascular lesions in children. Recent advances in CT reconstruction technology hold promise for improved image quality and also reductions in radiation dose. This report evaluates potential improvements in image quality for the depiction of small pediatric vessels with model-based iterative reconstruction (Veo), a technique developed to improve image quality and reduce noise. To evaluate Veo as an improved method compared to adaptive statistical iterative reconstruction (ASIR) for the depiction of small vessels on pediatric CT. Seventeen patients (mean age: 3.4 years, range: 2 days to 10.0 years; 6 girls, 11 boys) underwent contrast-enhanced CT examinations of the chest and abdomen in this HIPAA-compliant and institutional review board approved study. Raw data were reconstructed into separate image datasets using the Veo and ASIR algorithms (GE Medical Systems, Milwaukee, WI). Four blinded radiologists subjectively evaluated image quality. The pulmonary, hepatic, splenic and renal arteries were evaluated for the length and number of branches depicted. Datasets were compared with parametric and non-parametric statistical tests. Readers stated a preference for Veo over ASIR images when subjectively evaluating image-quality criteria for vessel definition, image noise and resolution of small anatomical structures. The mean image noise in the aorta and fat was significantly less for Veo vs. ASIR reconstructed images. Quantitative measurements of mean vessel lengths and number of branching vessels delineated differed significantly between Veo and ASIR images. Veo consistently showed more of the vessel anatomy: longer vessel length and more branching vessels. When compared to the more established adaptive statistical iterative reconstruction algorithm, model-based
Computed tomography depiction of small pediatric vessels with model-based iterative reconstruction
International Nuclear Information System (INIS)
Koc, Gonca; Courtier, Jesse L.; Phelps, Andrew; Marcovici, Peter A.; MacKenzie, John D.
2014-01-01
Computed tomography (CT) is extremely important in characterizing blood vessel anatomy and vascular lesions in children. Recent advances in CT reconstruction technology hold promise for improved image quality and also reductions in radiation dose. This report evaluates potential improvements in image quality for the depiction of small pediatric vessels with model-based iterative reconstruction (Veo), a technique developed to improve image quality and reduce noise. To evaluate Veo as an improved method compared to adaptive statistical iterative reconstruction (ASIR) for the depiction of small vessels on pediatric CT. Seventeen patients (mean age: 3.4 years, range: 2 days to 10.0 years; 6 girls, 11 boys) underwent contrast-enhanced CT examinations of the chest and abdomen in this HIPAA-compliant and institutional review board approved study. Raw data were reconstructed into separate image datasets using the Veo and ASIR algorithms (GE Medical Systems, Milwaukee, WI). Four blinded radiologists subjectively evaluated image quality. The pulmonary, hepatic, splenic and renal arteries were evaluated for the length and number of branches depicted. Datasets were compared with parametric and non-parametric statistical tests. Readers stated a preference for Veo over ASIR images when subjectively evaluating image-quality criteria for vessel definition, image noise and resolution of small anatomical structures. The mean image noise in the aorta and fat was significantly less for Veo vs. ASIR reconstructed images. Quantitative measurements of mean vessel lengths and number of branching vessels delineated differed significantly between Veo and ASIR images. Veo consistently showed more of the vessel anatomy: longer vessel length and more branching vessels. When compared to the more established adaptive statistical iterative reconstruction algorithm, model-based
Shan, Peng; Peng, Silong; Zhao, Yuhui; Tang, Liang
2016-03-01
An analysis of binary mixtures of hydroxyl compounds by Attenuated Total Reflection Fourier transform infrared spectroscopy (ATR FT-IR) and classical least squares (CLS) yields large model error due to the presence of unmodeled components such as H-bonded components. To accommodate these spectral variations, polynomial-based least squares (LSP) and polynomial-based total least squares (TLSP) are proposed to capture the nonlinear absorbance-concentration relationship. LSP assumes that only absorbance noise exists, while TLSP takes both absorbance noise and concentration noise into consideration. In addition, based on different solving strategies, two optimization algorithms (the limited-memory Broyden-Fletcher-Goldfarb-Shanno (LBFGS) algorithm and the Levenberg-Marquardt (LM) algorithm) are combined with TLSP, forming two TLSP versions (termed TLSP-LBFGS and TLSP-LM). The optimum order of each nonlinear model is determined by cross-validation. The four models are compared and analyzed from two aspects: absorbance prediction and concentration prediction. The results for water-ethanol and ethanol-ethyl lactate solutions show that LSP, TLSP-LBFGS, and TLSP-LM obtain, for both absorbance and concentration prediction, a smaller root mean square error of prediction than CLS. They can also greatly enhance the accuracy of the estimated pure component spectra. However, for concentration prediction, the Wilcoxon signed rank test shows no statistically significant difference between each nonlinear model and CLS. © The Author(s) 2016.
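For reference, the CLS baseline that the polynomial models extend can be sketched in a few lines: mixture spectra are modeled linearly as A = C K, calibrated by least squares, and inverted for prediction. The Gaussian band shapes, concentrations and noise level below are synthetic stand-ins, not the paper's ATR FT-IR data.

```python
import numpy as np

# Classical least squares (CLS): each mixture spectrum is a = c1*k1 + c2*k2,
# i.e. A = C @ K, where the rows of K are pure-component spectra.
rng = np.random.default_rng(0)
wavenumbers = np.linspace(0.0, 1.0, 200)
k1 = np.exp(-((wavenumbers - 0.3) / 0.05) ** 2)   # synthetic pure spectrum 1
k2 = np.exp(-((wavenumbers - 0.7) / 0.08) ** 2)   # synthetic pure spectrum 2
K = np.vstack([k1, k2])                            # (2, n_wavenumbers)

C_true = rng.uniform(0.0, 1.0, size=(30, 2))       # calibration concentrations
A = C_true @ K + 1e-4 * rng.standard_normal((30, 200))  # measured absorbances

# Calibration step: estimate pure spectra from known concentrations.
K_hat = np.linalg.lstsq(C_true, A, rcond=None)[0]

# Prediction step: estimate concentrations of a new mixture spectrum.
a_new = 0.2 * k1 + 0.8 * k2
c_hat = np.linalg.lstsq(K_hat.T, a_new, rcond=None)[0]
```

The paper's point is precisely that this linear model breaks down when H-bonded species distort the absorbance-concentration relationship, which motivates the polynomial LSP/TLSP extensions.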
Colouring and knot polynomials
International Nuclear Information System (INIS)
Welsh, D.J.A.
1991-01-01
These lectures attempt to explain a connection between the recent advances in knot theory, via the Jones and related knot polynomials, and classical problems in combinatorics and statistical mechanics. The difficulty of some of these problems is analysed in the context of their computational complexity. In particular we discuss colourings and group-valued flows in graphs; knots and the Jones and Kauffman polynomials; the Ising, Potts and percolation problems of statistical physics; and the computational complexity of the above problems. (author). 20 refs, 9 figs
Additive and polynomial representations
Krantz, David H; Suppes, Patrick
1971-01-01
Additive and Polynomial Representations deals with major representation theorems in which the qualitative structure is reflected as some polynomial function of one or more numerical functions defined on the basic entities. Examples are additive expressions of a single measure (such as the probability of disjoint events being the sum of their probabilities), and additive expressions of two measures (such as the logarithm of momentum being the sum of log mass and log velocity terms). The book describes the three basic procedures of fundamental measurement as the mathematical pivot, as the utiliz
International Nuclear Information System (INIS)
Aymar, R.
1998-01-01
Six years of technical work under the ITER EDA Agreement have resulted in a design which constitutes a complete description of the ITER device and of its auxiliary systems and facilities. The ITER Council commented that the Final Design Report provides the first comprehensive design of a fusion reactor based on well-established physics and technology.
Objective task-based assessment of low-contrast detectability in iterative reconstruction
International Nuclear Information System (INIS)
Racine, Damien; Ott, Julien G.; Ba, Alexandre; Ryckx, Nick; Bochud, Francois O.; Verdun, Francis R.
2016-01-01
Evaluating image quality by means of receiver operating characteristic studies is time consuming and difficult to implement. This work assesses a new iterative algorithm using a channelised Hotelling observer (CHO). For this purpose, an anthropomorphic abdomen phantom with spheres of various sizes and contrasts was scanned at 3 volume computed tomography dose index (CTDIvol) levels on a GE Revolution CT. Images were reconstructed using the iterative reconstruction method adaptive statistical iterative reconstruction-V (ASIR-V) at ASIR-V 0, 50 and 70 % and assessed by applying a CHO with dense difference-of-Gaussian channels and internal noise. The CHO and human observers (HO) were compared based on a four-alternative forced-choice experiment, using the percentage correct as a figure of merit. The results showed agreement between the CHO and HO. Moreover, an improvement in low-contrast detection was observed when switching from ASIR-V 0 to 50 %. The results underpin the finding that ASIR-V allows dose reduction. (authors)
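For readers unfamiliar with the observer model above, a channelised Hotelling observer can be sketched in a few lines: each image is reduced to a handful of channel outputs, and the Hotelling template is formed in channel space. The channel shapes, signal profile and data below are synthetic illustrations, not the study's phantom data.

```python
import numpy as np

def cho_dprime(amplitude, n_img=600, seed=1):
    """Detectability index of a toy channelised Hotelling observer (1-D 'images')."""
    rng = np.random.default_rng(seed)
    n_pix = 64
    x = np.arange(n_pix)
    # four Gaussian channels of doubling width (a crude difference-of-Gaussian-like set)
    U = np.stack([np.exp(-((x - 32) ** 2) / (2.0 * (2.0 * 2 ** j) ** 2))
                  for j in range(4)], axis=1)
    signal = amplitude * np.exp(-((x - 32) ** 2) / 8.0)   # low-contrast blob
    noise = rng.standard_normal((n_img, n_pix))
    v_absent = noise @ U                    # channel outputs, signal absent
    v_present = (noise + signal) @ U        # channel outputs, signal present
    dv = v_present.mean(0) - v_absent.mean(0)
    S = 0.5 * (np.cov(v_absent.T) + np.cov(v_present.T))
    w = np.linalg.solve(S, dv)              # Hotelling template in channel space
    return float(np.sqrt(dv @ w))           # detectability index d'
```

Higher contrast yields higher detectability, which is the quantity the study converts to percentage correct in the four-alternative forced-choice comparison.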
Comparison of different iterative schemes for ISPH based on Rankine source solution
Directory of Open Access Journals (Sweden)
Xing Zheng
2017-07-01
The Smoothed Particle Hydrodynamics (SPH) method has good adaptability for the simulation of free surface flow problems. There are two forms of SPH: weakly compressible SPH and incompressible SPH (ISPH). Compared with the former, ISPH performs better in many cases. ISPH based on a Rankine source solution (ISPH_R) can perform better than traditional ISPH, as it can use a larger stepping length by avoiding the second-order derivative in the pressure Poisson equation. However, the ISPH_R method needs to solve a sparse linear system for the pressure Poisson equation, which is one of the most expensive parts of each time step. Iterative methods are normally used for solving the Poisson equation with large particle numbers. However, many iterative methods are available, and the question of which one to use remains open. In this paper, three iterative methods suitable and typical for large unsymmetric sparse matrix solutions, CGS, Bi-CGstab and GMRES, are compared. According to numerical tests on different cases (still water, dam breaking, violent tank sloshing and solitary wave slamming), the GMRES method is more efficient than CGS and Bi-CGstab for the ISPH method.
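The three solvers compared in the paper are available in SciPy; a minimal benchmark on a small nonsymmetric system (a 1-D convection-diffusion stand-in for the pressure Poisson matrix, with illustrative size and coefficients) looks like this:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import bicgstab, cgs, gmres

# 1-D convection-diffusion -u'' + peclet*u' by central differences: a small
# nonsymmetric sparse system standing in for the ISPH pressure Poisson matrix.
n = 64
h = 1.0 / (n + 1)
peclet = 10.0
main = (2.0 / h**2) * np.ones(n)
lower = (-1.0 / h**2 - peclet / (2 * h)) * np.ones(n - 1)
upper = (-1.0 / h**2 + peclet / (2 * h)) * np.ones(n - 1)
A = diags([lower, main, upper], [-1, 0, 1], format="csr")
b = np.ones(n)

residuals = {}
for name, solver in [("cgs", cgs), ("bicgstab", bicgstab), ("gmres", gmres)]:
    x, info = solver(A, b)                  # info == 0 signals convergence
    residuals[name] = np.linalg.norm(A @ x - b)
```

On real ISPH matrices the relative efficiency depends on conditioning and preconditioning, which is exactly what the paper's case studies measure.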
Dynamic analysis of ITER tokamak. Based on results of vibration test using scaled model
International Nuclear Information System (INIS)
Takeda, Nobukazu; Kakudate, Satoshi; Nakahira, Masataka
2005-01-01
Vibration experiments on the support structures with flexible plates for the ITER major components, such as the toroidal field coil (TF coil) and vacuum vessel (VV), were performed using small-sized flexible plates, aiming to obtain basic mechanical characteristics such as the dependence of the stiffness on the loading angle. The experimental results were compared with analytical ones in order to establish an adequate analytical model for the ITER support structure with flexible plates. As a result, the bolt connection of the flexible plates to the base plate strongly affected the stiffness of the flexible plates. After studies of modeling the bolt connections, it was found that analytical results modeling the bolts with finite stiffness only in the axial direction, and infinite stiffness in the other directions, agree well with the experimental ones. Based on this, numerical analysis of the actual support structures of the ITER VV and TF coil was performed. The support structure composed of flexible plates and connection bolts was modeled as a spring composed of only two spring elements, simulating the in-plane and out-of-plane stiffness of the support structure with flexible plates including the effect of the connection bolts. The stiffness of both spring models for the VV and TF coil agrees well with that of shell models simulating the actual structures (flexible plates and connection bolts) based on the experimental results. The spring model with only two values of stiffness therefore makes it possible to simplify the complicated support structure with flexible plates for the dynamic analysis of the VV and TF coil. Using the proposed spring model, dynamic analyses of the VV and TF coil for ITER were performed to assess their integrity under the design earthquake. As a result, the maximum relative displacement of 8.6 mm between the VV and TF coil is much less than 100 mm, so that the integrity of the VV and TF coil of the
on the performance of Autoregressive Moving Average Polynomial
African Journals Online (AJOL)
Timothy Ademakinwa
estimated using least squares and Newton-Raphson iterative methods. To determine the order of the ... r is the degree of the polynomial while j is the number of lags of the ..... use a real time series dataset, monthly rainfall and temperature series ...
DEFF Research Database (Denmark)
Precht, Helle; Kitslaar, Pieter H.; Broersen, Alexander
2017-01-01
Purpose: Investigate the influence of adaptive statistical iterative reconstruction (ASIR) and the model-based IR (Veo) reconstruction algorithm in coronary computed tomography angiography (CCTA) images on quantitative measurements in coronary arteries for plaque volumes and intensities. Methods...
International Nuclear Information System (INIS)
Chen Benfu; Guo Xianchun; Zou Zili
2009-01-01
It is useful to identify observations with errors among the large number of observations during adjustment, to decrease the influence of the errors and to improve the quality of the final surveying result. Based on the practical conditions of the nuclear power plant's plane control network, a simple way to calculate the threshold value used to pre-weight each datum before the adjustment calculation is given; it shows advantages in the efficiency of data snooping and in the quality of the final calculation compared with traditional methods such as robust estimation, which processes data with dynamic weights based on each observation's correction after each iteration. (authors)
The Research of Multiple Attenuation Based on Feedback Iteration and Independent Component Analysis
Xu, X.; Tong, S.; Wang, L.
2017-12-01
Multiple suppression is a difficult problem in seismic data processing. The traditional technology for multiple attenuation is based on the principle of minimum output energy of the seismic signal; this criterion relies on second-order statistics and cannot achieve multiple attenuation when the primaries and multiples are non-orthogonal. In order to solve this problem, we combine a feedback iteration method based on the wave equation with an improved independent component analysis (ICA) based on higher-order statistics to suppress the multiples. We first use the iterative feedback method to predict the free-surface multiples of each order. Then, so that the predicted multiples match the real multiples in amplitude and phase, we design an expanded pseudo-multichannel matching filtering method to get a more accurate matching result. Finally, we apply an improved fast ICA algorithm, based on the maximum non-Gaussianity criterion of the output signal, to the matched multiples and obtain a better separation of the primaries and the multiples. The advantage of our method is that no a priori information is needed for the prediction of the multiples, and a better separation result can be obtained. The method has been applied to several synthetic datasets generated by the finite-difference modeling technique and to the Sigsbee2B model multiple data; the primaries and multiples are non-orthogonal in these models. The experiments show that after three to four iterations we obtain good multiple predictions. Using our matching method and fast ICA adaptive multiple subtraction, we can not only effectively preserve the primary energy in the seismic records, but also effectively suppress the free-surface multiples, especially the multiples related to the middle and deep areas.
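The matching-subtraction step can be illustrated with a single-channel least-squares matching filter, a simplified stand-in for the expanded pseudo-multichannel matching described above; the traces below are synthetic:

```python
import numpy as np

# Least-squares matching filter: shape a predicted multiple m so it best
# matches the recorded trace d in amplitude and phase, then subtract.
def matching_subtract(d, m, nf=11):
    # Convolution (shift) matrix built from the predicted multiple.
    M = np.column_stack([np.roll(m, k) for k in range(-(nf // 2), nf // 2 + 1)])
    f, *_ = np.linalg.lstsq(M, d, rcond=None)   # filter minimising |d - M f|^2
    return d - M @ f                             # estimate of the primaries

rng = np.random.default_rng(2)
t = np.arange(400)
primary = np.sin(2 * np.pi * t / 60) * np.exp(-((t - 100) / 40.0) ** 2)
true_multiple = -0.6 * np.roll(primary, 120)
d = primary + true_multiple                      # recorded trace
m_pred = -0.5 * np.roll(primary, 121)            # imperfect prediction: shifted, misscaled
p_hat = matching_subtract(d, m_pred)
```

The filter absorbs the residual time shift and scale error of the prediction; when primaries and multiples overlap strongly (the non-orthogonal case), this second-order matching is exactly where ICA-based separation takes over.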
Modeling of the lithium based neutralizer for ITER neutral beam injector
Energy Technology Data Exchange (ETDEWEB)
Dure, F., E-mail: franck.dure@u-psud.fr [LPGP, Laboratoire de Physique des Gaz et Plasmas, CNRS-Universite Paris Sud, Orsay (France); Lifschitz, A.; Bretagne, J.; Maynard, G. [LPGP, Laboratoire de Physique des Gaz et Plasmas, CNRS-Universite Paris Sud, Orsay (France); Simonin, A. [IRFM, Institut de Recherche sur la Fusion Magnetique, CEA Cadarache, 13108 Saint-Paul lez Durance (France); Minea, T. [LPGP, Laboratoire de Physique des Gaz et Plasmas, CNRS-Universite Paris Sud, Orsay (France)
2012-04-04
Highlights: • We compare different lithium based neutralizer configurations to the deuterium one. • We study characteristics of the secondary plasma and the propagation of the 1 MeV beam. • Using lithium increases the neutralization efficiency while keeping correct beam focusing. • Using lithium also reduces the backstreaming effect in the direction of the ion source. - Abstract: To achieve the thermonuclear temperatures necessary to produce fusion reactions in the ITER tokamak, additional heating systems are required. One of the main methods to heat the plasma ions in ITER will be the injection of energetic neutrals (NBI). In the neutral beam injector, negative ions (D⁻) are electrostatically accelerated to 1 MeV, and then stripped of their extra electron via collisions with a target gas in a structure known as the neutralizer. In the current ITER specification, the target gas is deuterium. It has recently been proposed to use lithium vapor instead of deuterium as the target gas in the neutralizer. This would reduce the gas load in the NBI vessel and improve the neutralization efficiency. A Particle-in-Cell Monte Carlo code has been developed to study the transport of the beams and the plasma formation in the neutralizer. A comparison between Li and D₂ based neutralizers made with this code is presented here, as well as a parametric study on the geometry of the Li based neutralizer. Results demonstrate the feasibility of a Li based neutralizer, and its advantages with respect to the deuterium based one.
Comment on “Variational Iteration Method for Fractional Calculus Using He’s Polynomials”
Directory of Open Access Journals (Sweden)
Ji-Huan He
2012-01-01
boundary value problems. This note concludes that the method is a modified variational iteration method using He’s polynomials. A standard variational iteration algorithm for fractional differential equations is suggested.
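For context, one common form of the correction functional behind the algorithms discussed in this note can be written out; the operator splitting and multiplier below are generic placeholders for illustration, not taken from the comment itself. For a fractional equation \(D_t^\alpha u + R[u] + N[u] = g(t)\), the variational iteration algorithm reads

\[
u_{n+1}(t) = u_n(t) + \int_0^t \lambda(s)\,\bigl(D_s^\alpha u_n(s) + R[u_n(s)] + N[\tilde{u}_n(s)] - g(s)\bigr)\,ds,
\]

where \(\lambda\) is a Lagrange multiplier identified by variational theory and \(\tilde{u}_n\) denotes a restricted variation. He's polynomials \(H_n\) enter when the nonlinear term is expanded as \(N[u] = \sum_{n \ge 0} H_n(u_0, \dots, u_n)\), which is the link between the commented method and the standard algorithm.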
Energy Technology Data Exchange (ETDEWEB)
Kuya, Keita; Shinohara, Yuki; Fujii, Shinya; Ogawa, Toshihide [Tottori University, Division of Radiology, Department of Pathophysiological Therapeutic Science, Faculty of Medicine, Yonago (Japan); Sakamoto, Makoto; Watanabe, Takashi [Tottori University, Division of Neurosurgery, Department of Brain and Neurosciences, Faculty of Medicine, Yonago (Japan); Iwata, Naoki; Kishimoto, Junichi [Tottori University, Division of Clinical Radiology Faculty of Medicine, Yonago (Japan); Kaminou, Toshio [Osaka Minami Medical Center, Department of Radiology, Osaka (Japan)
2014-11-15
Follow-up CT angiography (CTA) is routinely performed for post-procedure management after carotid artery stenting (CAS). However, the stent lumen tends to be underestimated because of stent artifacts on CTA reconstructed with the filtered back projection (FBP) technique. We assessed the utility of new iterative reconstruction techniques, such as adaptive statistical iterative reconstruction (ASIR) and model-based iterative reconstruction (MBIR), for CTA after CAS in comparison with FBP. In a phantom study, we evaluated the differences among the three reconstruction techniques with regard to the relationship between the stent luminal diameter and the degree of underestimation of stent luminal diameter. In a clinical study, 34 patients who underwent follow-up CTA after CAS were included. We compared the stent luminal diameters among FBP, ASIR, and MBIR, and performed visual assessment of low attenuation area (LAA) in the stent lumen using a three-point scale. In the phantom study, stent luminal diameter was increasingly underestimated as luminal diameter became smaller in all CTA images. Stent luminal diameter was larger with MBIR than with the other reconstruction techniques. Similarly, in the clinical study, stent luminal diameter was larger with MBIR than with the other reconstruction techniques. LAA detectability scores of MBIR were greater than or equal to those of FBP and ASIR in all cases. MBIR improved the accuracy of assessment of stent luminal diameter and LAA detectability in the stent lumen when compared with FBP and ASIR. We conclude that MBIR is a useful reconstruction technique for CTA after CAS. (orig.)
International Nuclear Information System (INIS)
Stacey, W.M.; Hertel, N.E.; Hoffman, E.A.
1994-01-01
The potential for providing energy with minimal environmental impact is a powerful motivation for the development of fusion and is the long-term objective of most fusion programs. However, the societal acceptability of magnetic fusion may well be decided in the near term, when decisions are taken on the construction of a DEMO to follow ITER (if not when the construction decision is taken on ITER). Component wastes were calculated for DEMOs based on each database by first calculating the reactor sizes needed to satisfy the physics, stress and radiation attenuation requirements, and then calculating component replacement rates based on radiation damage and erosion limits. Then, radioactive inventories were calculated and compared to a number of international criteria for 'near-surface' burial. None of the components in either type of design would meet the Japanese LLW criterion within 10 years of shutdown, although the advanced (V/Li) blanket would do so soon afterwards. The vanadium first wall, divertor and blanket would satisfy the IAEA LLW criterion (<2 mSv/h contact dose) within about 10 years after shutdown, but none of the stainless steel or copper components would. All the components in the advanced database designs except the stainless steel vacuum vessel and shield readily satisfy the US extended 10CFR61 intruder dose criterion, but none of the components in the 'ITER database' designs do so. It seems unlikely that a stainless steel first wall or a copper divertor plate could satisfy the US (class C) criterion for near-surface burial, much less the more stringent international criteria. On the other hand, the first wall, divertor and blanket of the V/Li system would still satisfy the intruder dose concentration limits even if the dose criterion were reduced by two orders of magnitude
On the Laurent polynomial rings
International Nuclear Information System (INIS)
Stefanescu, D.
1985-02-01
We describe some properties of the Laurent polynomial rings in a finite number of indeterminates over a commutative unitary ring. We study some subrings of the Laurent polynomial rings. We finally obtain two cancellation properties. (author)
Computing the Alexander Polynomial Numerically
DEFF Research Database (Denmark)
Hansen, Mikael Sonne
2006-01-01
Explains how to construct the Alexander Matrix and how this can be used to compute the Alexander polynomial numerically.
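A minimal numerical illustration in the same spirit, using the closely related Seifert-matrix determinant formula Δ(t) = det(V − tVᵀ) rather than the paper's Alexander-matrix construction; the trefoil data are standard, the code is a sketch:

```python
import numpy as np

# Alexander polynomial from a Seifert matrix: Delta(t) = det(V - t*V^T).
# Polynomials are coefficient arrays in ascending order [c0, c1, c2, ...].
# For the trefoil a standard Seifert matrix gives Delta(t) = 1 - t + t^2.
def alexander_from_seifert_2x2(V):
    # entry (i, j) of V - t*V^T as a degree-1 polynomial [constant, t-coefficient]
    M = [[np.array([float(V[i][j]), -float(V[j][i])]) for j in range(2)]
         for i in range(2)]
    # 2x2 determinant with polynomial multiplication via convolution
    return np.convolve(M[0][0], M[1][1]) - np.convolve(M[0][1], M[1][0])

trefoil_V = [[-1, 1], [0, -1]]        # standard Seifert matrix of the trefoil
delta = alexander_from_seifert_2x2(trefoil_V)
```

Representing polynomials as coefficient arrays and multiplying by convolution keeps the whole computation numerical, which is the spirit of the paper's approach.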
International Nuclear Information System (INIS)
Liu, Yang; Chen, Zhenyu; Yang, Zhile; Li, Kang; Tan, Jiubin
2016-01-01
The accuracy of surface measurement determines the manufacturing quality of membrane mirrors. Thus, an efficient and accurate measuring method is critical in membrane mirror fabrication. This paper formulates the measurement issue as a surface reconstruction problem and employs two-stage trained Zernike polynomials as an inline measuring tool to solve the optical surface measurement problem in the membrane mirror manufacturing process. First, all terms of the Zernike polynomial are generated and projected to a non-circular region as the candidate model pool. The training data are calculated according to the measured values of the distance sensors and the geometrical relationship between the ideal surface and the installed sensors. Then the terms are selected by successively minimizing the cost function. To avoid ill-conditioned matrix inversion in the least squares method, the coefficient of each model term is obtained by modified elitist teaching-learning-based optimization. Subsequently, the measurement precision is further improved by a second stage of model refinement. Finally, every point on the membrane surface can be measured according to this model, providing the subtler feedback information needed for precise control of membrane mirror fabrication. Experimental results confirm that the proposed method is effective in a membrane mirror manufacturing system driven by negative pressure, and the measurement accuracy can reach 15 µm. (paper)
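The core fitting step, stripped of the term selection and TLBO refinement described above, is an ordinary least-squares fit of low-order Zernike terms to sampled heights; everything below (the term subset, sampling and noise level) is an illustrative assumption:

```python
import numpy as np

# Fit a surface sampled on the unit disk with a few low-order Zernike terms.
rng = np.random.default_rng(3)
n = 400
r = np.sqrt(rng.uniform(0.0, 1.0, n))      # uniform sampling over the disk
theta = rng.uniform(0.0, 2.0 * np.pi, n)

def zernike_terms(r, theta):
    # piston, x/y tilt, defocus, 0/45-degree astigmatism (low-order subset)
    return np.column_stack([
        np.ones_like(r),
        r * np.cos(theta),
        r * np.sin(theta),
        2.0 * r**2 - 1.0,
        r**2 * np.cos(2.0 * theta),
        r**2 * np.sin(2.0 * theta),
    ])

Z = zernike_terms(r, theta)
c_true = np.array([0.1, 0.5, -0.3, 0.8, 0.0, 0.2])
heights = Z @ c_true + 1e-4 * rng.standard_normal(n)  # simulated sensor readings
c_hat, *_ = np.linalg.lstsq(Z, heights, rcond=None)   # least-squares coefficients
```

The paper replaces this plain `lstsq` step with term selection plus a teaching-learning-based optimizer precisely because the design matrix can become ill-conditioned on non-circular apertures.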
Stochastic Estimation via Polynomial Chaos
2015-10-01
AFRL-RW-EG-TR-2015-108: Stochastic Estimation via Polynomial Chaos. Douglas V. Nance, Air Force Research Laboratory (period covered: 20-04-2015 to 07-08-2015). This expository report discusses fundamental aspects of the polynomial chaos method for representing the properties of second order stochastic
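The core idea can be illustrated with a one-variable Hermite chaos expansion; the target Y = exp(ξ) and the truncation order below are illustrative choices, not taken from the report:

```python
import math
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval

# Expand Y = exp(xi), xi ~ N(0,1), in probabilists' Hermite polynomials He_k.
# The exact coefficients are c_k = e^{1/2}/k!; here they are recovered by
# Gauss-Hermite quadrature, and the mean/variance follow from orthogonality.
order = 8
nodes, w = hermegauss(40)            # nodes/weights for weight exp(-x^2/2)
w = w / np.sqrt(2.0 * np.pi)         # normalise to the standard Gaussian density

def he_k(k, x):
    c = np.zeros(k + 1)
    c[k] = 1.0
    return hermeval(x, c)            # evaluate He_k at x

coeffs = np.array([
    np.sum(w * np.exp(nodes) * he_k(k, nodes)) / math.factorial(k)
    for k in range(order + 1)
])
mean_pc = coeffs[0]                                    # equals E[Y] = e^{1/2}
var_pc = sum(coeffs[k] ** 2 * math.factorial(k)        # Var[Y] = sum c_k^2 k!
             for k in range(1, order + 1))
```

The second moment formula uses E[He_j He_k] = k! δ_jk, which is the orthogonality property that makes polynomial chaos expansions convenient for second order statistics.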
International Nuclear Information System (INIS)
Jin Zhao; Zhang Han-Ming; Yan Bin; Li Lei; Wang Lin-Yuan; Cai Ai-Long
2016-01-01
Sparse-view x-ray computed tomography (CT) imaging is an interesting topic in the CT field and can efficiently decrease radiation dose. Compared with spatial-domain reconstruction, a Fourier-based algorithm has advantages in reconstruction speed and memory usage. A novel Fourier-based iterative reconstruction technique that utilizes the non-uniform fast Fourier transform (NUFFT) is presented in this work, along with advanced total variation (TV) regularization, for fan-beam sparse-view CT. The proposed selective matrix contributes to improved reconstruction quality. The new method employs the NUFFT and its adjoint to iterate back and forth between the Fourier and image spaces. The performance of the proposed algorithm is demonstrated through a series of digital simulations and experimental phantom studies. Results of the proposed algorithm are compared with those of existing TV-regularized techniques based on the compressed sensing method, as well as with the basic algebraic reconstruction technique. Compared with the existing TV-regularized techniques, the proposed Fourier-based technique significantly improves the convergence rate and reduces memory allocation. (paper)
A novel chaotic block cryptosystem based on iterating map with output-feedback
International Nuclear Information System (INIS)
Yang Degang; Liao Xiaofeng; Wang Yong; Yang Huaqian; Wei Pengcheng
2009-01-01
A novel method for encryption based on an iterated map with output feedback is presented in this paper. The output feedback, instead of simply mixing the chaotic signal of the proposed chaotic cryptosystem with the cipher-text, relates the keystream to the previous cipher-text, which is obtained from the plaintext and the key. Simulated experiments substantiate that our method gives the cipher-text more confusion and diffusion, and that the proposed method is practical whether efficiency, cipher-text length or security is concerned.
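A toy sketch of the output-feedback idea (illustrative parameters, not the paper's scheme, and not cryptographically secure): the chaotic keystream state is perturbed by the previous cipher byte, so identical plaintext blocks encrypt differently, and the decryptor, who also sees the ciphertext, can regenerate the same keystream.

```python
# Toy chaotic stream cipher with output feedback on a logistic map.
def keystream_step(x, feedback):
    x = 3.99 * x * (1.0 - x)              # logistic map iteration
    x = (x + feedback / 256.0) % 1.0      # feed the previous cipher byte back in
    if x == 0.0 or x == 1.0:              # guard against degenerate fixed points
        x = 0.5
    return x

def encrypt(plaintext, key=0.3141592):
    x, prev_c, out = key, 0, []
    for p in plaintext:
        x = keystream_step(x, prev_c)
        c = p ^ int(x * 255)              # XOR with the keystream byte
        out.append(c)
        prev_c = c                        # output feedback
    return bytes(out)

def decrypt(ciphertext, key=0.3141592):
    x, prev_c, out = key, 0, []
    for c in ciphertext:
        x = keystream_step(x, prev_c)     # same keystream: feedback is the ciphertext
        out.append(c ^ int(x * 255))
        prev_c = c
    return bytes(out)

msg = b"iterating map with output-feedback"
ct = encrypt(msg)
```

Because the feedback value is a cipher byte rather than an internal secret, decryption needs only the key and the received ciphertext, which is the structural point the abstract makes.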
Directory of Open Access Journals (Sweden)
Lun Zhai
2014-01-01
A parametric learning based robust iterative learning control (ILC) scheme is applied to time-varying delay multiple-input multiple-output (MIMO) linear systems. The convergence conditions are derived by using H∞ and linear matrix inequality (LMI) approaches, and the convergence speed is analyzed as well. A practical identification strategy is applied to optimize the learning laws and to improve the robustness and performance of the control system. Numerical simulations are illustrated to validate the above concepts.
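As background, the basic ILC idea, stripped of the time-varying delay, MIMO structure and H∞/LMI analysis of the paper, can be sketched with a P-type update on a toy FIR plant (all values illustrative):

```python
import numpy as np

# P-type iterative learning control: u_{k+1}(t) = u_k(t) + L * e_k(t+1),
# repeated over trials on a fixed finite horizon. The plant is a known
# stable FIR system with one step of delay; convergence needs |1 - L*g[1]| < 1.
g = np.array([0.0, 0.6, 0.3, 0.1])          # plant impulse response
N = 50
ref = np.sin(np.linspace(0.0, 2.0 * np.pi, N))

def tracking_error(u):
    y = np.convolve(u, g)[:N]               # plant output over the horizon
    return ref - y

u = np.zeros(N)
L = 1.0                                      # learning gain
errors = []
for trial in range(20):
    e = tracking_error(u)
    errors.append(np.linalg.norm(e))
    u = u + L * np.roll(e, -1)               # shift aligns e(t+1) with u(t)
```

From trial to trial the tracking error contracts geometrically; the paper's contribution is establishing such convergence when the plant has time-varying delays and the learning law itself is identified online.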
Hydrogen embrittlement considerations in niobium-base alloys for application in the ITER divertor
International Nuclear Information System (INIS)
Peterson, D.T.; Hull, A.B.; Loomis, B.A.
1991-01-01
The ITER divertor will be subjected to hydrogen from aqueous corrosion by the coolant and from transfer from the plasma. Global hydrogen concentrations are one factor in assessing hydrogen embrittlement, but local concentrations, affected by source fluxes and thermotransport in thermal gradients, are more important considerations. Global hydrogen concentrations in some corrosion-tested alloys are presented and interpreted. The degradation of the mechanical properties of Nb-base alloys due to hydrogen is a complex function of temperature, hydrogen concentration, stress and alloy composition. The known tendencies for embrittlement and hydride formation in Nb alloys are reviewed
Numerical simulation and comparison of nonlinear self-focusing based on iteration and ray tracing
Li, Xiaotong; Chen, Hao; Wang, Weiwei; Ruan, Wangchao; Zhang, Luwei; Cen, Zhaofeng
2017-05-01
Self-focusing is observed in nonlinear materials owing to the interaction between laser and matter as the laser beam propagates. Numerical simulation strategies such as the beam propagation method (BPM), based on the nonlinear Schrödinger equation, and ray tracing, based on Fermat's principle, have been applied to simulate the self-focusing process. In this paper we present an iterative nonlinear ray tracing method in which the nonlinear material is cut into many slices, just as in the existing approaches, but instead of the paraxial approximation and split-step Fourier transform, a large number of sampled real rays are traced step by step through the system, with the refractive index and laser intensity updated by iteration. In this process a smoothing treatment is employed to generate a laser density distribution at each slice, decreasing the error caused by under-sampling. The characteristic of this method is that the nonlinear refractive indices of the points on the current slice are calculated by iteration, which solves the problem of unknown material parameters caused by the causal relationship between laser intensity and nonlinear refractive index. Compared with the beam propagation method, this algorithm is more suitable for engineering applications, with lower time complexity, and it can numerically simulate the self-focusing process in systems containing both linear and nonlinear optical media. If the sampled rays are traced with their complex amplitudes and light paths or phases, it will be possible to simulate the superposition effects of different beams. At the end of the paper, the advantages and disadvantages of this algorithm are discussed.
Polynomial optimization : Error analysis and applications
Sun, Zhao
2015-01-01
Polynomial optimization is the problem of minimizing a polynomial function subject to polynomial inequality constraints. In this thesis we investigate several hierarchies of relaxations for polynomial optimization problems. Our main interest lies in understanding their performance, in particular how
Directory of Open Access Journals (Sweden)
Pengfei Sun
Pose estimation aims at measuring the position and orientation of a calibrated camera using known image features. The pinhole model is the dominant camera model in this field. However, the imaging precision of this model is not accurate enough for advanced pose estimation algorithms. In this paper, a new camera model, called the incident ray tracking model, is introduced. More importantly, an advanced pose estimation algorithm based on the perspective ray in the new camera model is proposed. The perspective ray, determined by two positioning points, is an abstract mathematical equivalent of the incident ray. In the proposed pose estimation algorithm, called perspective-ray-based scaled orthographic projection with iteration (PRSOI), an approximate ray-based projection is calculated by a linear system and refined by iteration. Experiments on the PRSOI have been conducted, and the results demonstrate that it is of high accuracy in six degrees of freedom (DOF) motion and that it outperforms three other state-of-the-art algorithms in terms of accuracy in the comparison experiment.
Improved multivariate polynomial factoring algorithm
International Nuclear Information System (INIS)
Wang, P.S.
1978-01-01
A new algorithm for factoring multivariate polynomials over the integers, based on an algorithm by Wang and Rothschild, is described. The new algorithm has improved strategies for dealing with the known problems of the original algorithm, namely the leading coefficient problem, the bad-zero problem and the occurrence of extraneous factors. It has an algorithm for correctly predetermining the leading coefficients of the factors. A new and efficient p-adic algorithm named EEZ is described. Basically it is a linearly convergent variable-by-variable parallel construction. The improved algorithm is generally faster and requires less storage than the original algorithm. Machine examples with comparative timings are included
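For comparison with modern implementations, SymPy's `factor` performs multivariate factoring over the integers (its internals reportedly include a Wang-style EEZ lifting routine); the polynomial below is an illustrative example, not one from the paper:

```python
import sympy as sp

# Multivariate factorization over Z with SymPy. We expand a known product
# and check that factor() recovers a nontrivial factorization of it.
x, y, z = sp.symbols("x y z")
p = sp.expand((x + 2 * y - 1) * (x**2 - y * z + 3))
f = sp.factor(p)            # factor back into irreducible factors over Z
```

The leading-coefficient and extraneous-factor issues the abstract describes are exactly the failure modes that p-adic (Hensel-type) lifting strategies such as EEZ were designed to control.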
Efficient fractal-based mutation in evolutionary algorithms from iterated function systems
Salcedo-Sanz, S.; Aybar-Ruíz, A.; Camacho-Gómez, C.; Pereira, E.
2018-03-01
In this paper we present a new mutation procedure for Evolutionary Programming (EP) approaches, based on Iterated Function Systems (IFSs). The proposed mutation procedure considers a set of IFSs able to generate fractal structures in a two-dimensional phase space, and uses them to modify a current individual of the EP algorithm, instead of using random numbers drawn from different probability density functions. We test this new proposal on a set of benchmark functions for continuous optimization problems. In this case, we compare the proposed mutation against classical Evolutionary Programming approaches with mutations based on Gaussian, Cauchy and chaotic maps. We also include a discussion of the IFS-based mutation in a real application of Tuned Mass Damper (TMD) location and optimization for vibration cancellation in buildings. In both practical cases, the proposed EP with the IFS-based mutation obtained extremely competitive results compared to alternative classical mutation operators.
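The mechanism can be sketched in a few lines: iterate a random sequence of affine contractions (a Sierpinski-triangle IFS here) and use the resulting fractal-distributed 2-D point, recentred, as the mutation offset. The specific IFS, scaling and (1+1)-EP loop are illustrative choices, not the paper's setup.

```python
import random

# Sierpinski-triangle IFS: three affine contractions of the unit square.
IFS = [
    lambda p: (0.5 * p[0], 0.5 * p[1]),
    lambda p: (0.5 * p[0] + 0.5, 0.5 * p[1]),
    lambda p: (0.5 * p[0] + 0.25, 0.5 * p[1] + 0.5),
]

def ifs_offset(n_iter=20, scale=0.5):
    """Chaos-game point on the attractor, recentred, used as a mutation offset."""
    p = (random.random(), random.random())
    for _ in range(n_iter):
        p = random.choice(IFS)(p)
    # recentre around the attractor's approximate centroid
    return scale * (p[0] - 0.5), scale * (p[1] - 0.375)

def sphere(ind):
    return ind[0] ** 2 + ind[1] ** 2

# Minimal (1+1)-EP with the IFS-based mutation on the sphere function.
random.seed(42)
start = (2.0, -1.5)
best = start
for _ in range(500):
    dx, dy = ifs_offset()
    cand = (best[0] + dx, best[1] + dy)
    if sphere(cand) <= sphere(best):
        best = cand
```

The offsets are drawn from a fractal support rather than a Gaussian or Cauchy density, which is the distinguishing feature the paper benchmarks.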
Roots of the Chromatic Polynomial
DEFF Research Database (Denmark)
Perrett, Thomas
The chromatic polynomial of a graph G is a univariate polynomial whose evaluation at any positive integer q enumerates the proper q-colourings of G. It was introduced in connection with the famous four colour theorem but has recently found other applications in the field of statistical physics...... extend Thomassen’s technique to the Tutte polynomial and as a consequence, deduce a density result for roots of the Tutte polynomial. This partially answers a conjecture of Jackson and Sokal. Finally, we refocus our attention on the chromatic polynomial and investigate the density of chromatic roots...
Liu, Xiao; Shi, Jun; Zhou, Shichong; Lu, Minhua
2014-01-01
Dimensionality reduction is an important step in ultrasound-image-based computer-aided diagnosis (CAD) for breast cancer. A recently proposed l2,1-regularized correntropy algorithm for robust feature selection (CRFS) has achieved good performance on noise-corrupted data, and therefore has the potential to reduce the dimensionality of ultrasound image features. However, in clinical practice the collection of labeled instances is usually expensive and time-consuming, while it is relatively easy to acquire unlabeled or undetermined instances; semi-supervised learning is therefore well suited to clinical CAD. Iterated Laplacian regularization (Iter-LR) is a new regularization method which has been shown to outperform traditional graph Laplacian regularization in semi-supervised classification and ranking. In this study, to improve the classification accuracy of texture-feature-based breast ultrasound CAD, we propose an Iter-LR-based semi-supervised CRFS (Iter-LR-CRFS) algorithm and apply it to reduce the feature dimensions of ultrasound images for breast CAD. We compared Iter-LR-CRFS with LR-CRFS, the original supervised CRFS, and principal component analysis. The experimental results indicate that the proposed Iter-LR-CRFS significantly outperforms all the other algorithms.
Polynomials in algebraic analysis
Multarzyński, Piotr
2012-01-01
The concept of polynomials in the sense of algebraic analysis, for a single right invertible linear operator, was introduced and studied originally by D. Przeworska-Rolewicz [DPR]. One of the elegant results corresponding to that notion is a purely algebraic version of the Taylor formula, a generalization of its usual counterpart, well known for functions of one variable. In quantum calculus there are some specific discrete derivations analyzed, which are right invertible linear ...
International Nuclear Information System (INIS)
Jang, Kwang Eun; Lee, Jongha; Sung, Younghun; Lee, SeongDeok
2013-01-01
Purpose: X-ray photons generated from a typical x-ray source for clinical applications exhibit a broad range of wavelengths, and the interactions between individual particles and biological substances depend on particles' energy levels. Most existing reconstruction methods for transmission tomography, however, neglect this polychromatic nature of measurements and rely on the monochromatic approximation. In this study, we developed a new family of iterative methods that incorporates the exact polychromatic model into tomographic image recovery, which improves the accuracy and quality of reconstruction. Methods: The generalized information-theoretic discrepancy (GID) was employed as a new metric for quantifying the distance between the measured and synthetic data. By using special features of the GID, the objective function for polychromatic reconstruction, which contains a double integral over the wavelength and the trajectory of incident x-rays, was simplified to a paraboloidal form without using the monochromatic approximation. More specifically, the original GID was replaced with a surrogate function with two auxiliary, energy-dependent variables. Subsequently, the alternating minimization technique was applied to solve the double minimization problem. Based on the optimization transfer principle, the objective function was further simplified to the paraboloidal equation, which leads to a closed-form update formula. Numerical experiments on the beam-hardening correction and material-selective reconstruction were conducted to compare and assess the performance of conventional methods and the proposed algorithms. Results: The authors found that the GID determines the distance between its two arguments in a flexible manner. In this study, three groups of GIDs with distinct data representations were considered. The authors demonstrated that one type of GIDs that comprises “raw” data can be viewed as an extension of existing statistical reconstructions; under a
Zhang, B.; Sang, Jun; Alam, Mohammad S.
2013-03-01
An image hiding method based on the cascaded iterative Fourier transform and a public-key encryption algorithm was proposed. Firstly, the original secret image was encrypted into two phase-only masks M1 and M2 via the cascaded iterative Fourier transform (CIFT) algorithm. Then, the public-key encryption algorithm RSA was adopted to encrypt M2 into M2'. Finally, a host image was enlarged by extending each pixel into 2×2 pixels, and each element in M1 and M2' was multiplied by a superimposition coefficient and added to or subtracted from two different elements in the 2×2 pixels of the enlarged host image. To recover the secret image from the stego-image, the two masks were extracted from the stego-image without the original host image. By applying a public-key encryption algorithm, key distribution was facilitated; moreover, compared with the image hiding method based on optical interference, the proposed method can achieve higher robustness by exploiting the characteristics of the CIFT algorithm. Computer simulations show that this method has good robustness against image processing.
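The enlargement-and-superimposition step can be illustrated with a small numerical sketch. The real-valued stand-ins for the phase masks and the particular choice of which 2×2 positions carry M1 and M2' are assumptions for illustration; the CIFT and RSA stages are omitted:

```python
import numpy as np

rng = np.random.default_rng(0)
alpha = 0.1                              # superimposition coefficient
host = rng.random((4, 4))                # host image
M1 = rng.random((4, 4))                  # stand-in for phase mask M1
M2p = rng.random((4, 4))                 # stand-in for RSA-encrypted M2'

# Enlarge the host: each pixel becomes a 2x2 block of identical values.
stego = np.kron(host, np.ones((2, 2)))
stego[0::2, 0::2] += alpha * M1          # add M1 at the top-left corners
stego[1::2, 1::2] -= alpha * M2p         # subtract M2' at the bottom-right

# Blind extraction: the untouched positions of each 2x2 block still hold
# the original host value, so the masks come out by differencing alone,
# without the original host image.
M1_rec = (stego[0::2, 0::2] - stego[0::2, 1::2]) / alpha
M2p_rec = (stego[0::2, 1::2] - stego[1::2, 1::2]) / alpha
assert np.allclose(M1_rec, M1) and np.allclose(M2p_rec, M2p)
```

The differencing works because each 2×2 block derives from a single host pixel, which is exactly what makes host-free extraction possible in this kind of scheme.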
Low-Bit Rate Feedback Strategies for Iterative IA-Precoded MIMO-OFDM-Based Systems
Teodoro, Sara; Silva, Adão; Dinis, Rui; Gameiro, Atílio
2014-01-01
Interference alignment (IA) is a promising technique that allows high capacity gains in interference channels, but it requires knowledge of the channel state information (CSI) for all the system links. We design low-complexity, low-bit-rate feedback strategies in which a quantized version of some CSI parameters is fed back from the user terminal (UT) to the base station (BS), which shares it with the other BSs through a limited-capacity backhaul network. This information is then used by the BSs to perform the overall IA design. With the proposed strategies, only part of the CSI needs to be sent, and it can even be sent only once for a set of data blocks transmitted over time-varying channels. These strategies are applied to iterative MMSE-based IA techniques for the downlink of broadband wireless OFDM systems with limited feedback. A new robust iterative IA technique, in which channel quantization errors are taken into account in the IA design, is also proposed and evaluated. With our proposed strategies, a small number of quantization bits suffices to transmit and share the CSI compared with the techniques used in previous works, while allowing performance close to that obtained with perfect channel knowledge. PMID:24678274
General Reducibility and Solvability of Polynomial Equations ...
African Journals Online (AJOL)
General Reducibility and Solvability of Polynomial Equations. ... Unlike quadratic, cubic, and quartic polynomials, the general quintic and higher degree polynomials cannot be solved algebraically in terms of finite number of additions, ... Galois Theory, Solving Polynomial Systems, Polynomial factorization, Polynomial Ring ...
Bayer Demosaicking with Polynomial Interpolation.
Wu, Jiaji; Anisetti, Marco; Wu, Wei; Damiani, Ernesto; Jeon, Gwanggil
2016-08-30
Demosaicking is a digital image process that reconstructs full-color images from the incomplete color samples output by an image sensor. It is an unavoidable step for many devices incorporating a camera sensor (e.g. mobile phones, tablets, etc.). In this paper, we introduce a new demosaicking algorithm based on polynomial interpolation-based demosaicking (PID). Our method makes three contributions: calculation of error predictors, edge classification based on color differences, and a refinement stage using a weighted-sum strategy. Our new predictors are generated on the basis of polynomial interpolation, and can be used as a sound alternative to predictors obtained by bilinear or Laplacian interpolation. We show how our predictors can be combined according to the proposed edge classifier. After populating the three color channels, a refinement stage is applied to enhance image quality and reduce demosaicking artifacts. Our experimental results show that the proposed method substantially improves over existing demosaicking methods in terms of objective performance (CPSNR, S-CIELAB ΔE, and FSIM) and visual performance.
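As a point of reference for what PID's predictors improve upon, a plain bilinear estimate of the green channel from an RGGB Bayer mosaic looks like the following baseline sketch (not the paper's method; the mask layout assumes an RGGB pattern):

```python
import numpy as np

def interp_green(raw):
    """Bilinear estimate of the green channel from an RGGB Bayer mosaic.

    At every red or blue site, the four direct neighbours are green, so a
    simple 4-neighbour average fills in the missing green values.
    """
    h, w = raw.shape
    green = raw.copy()
    mask = np.zeros((h, w), dtype=bool)
    mask[0::2, 1::2] = True          # green sites on R-G rows
    mask[1::2, 0::2] = True          # green sites on G-B rows
    # Edge-pad, then average the up/down/left/right neighbours.
    p = np.pad(raw, 1, mode="edge")
    avg = (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:]) / 4
    green[~mask] = avg[~mask]
    return green
```

On a linear intensity ramp this baseline is exact away from the borders, which is the behaviour PID's higher-order predictors generalize near edges.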
Comparison of ITER performance predicted by semi-empirical and theory-based transport models
International Nuclear Information System (INIS)
Mukhovatov, V.; Shimomura, Y.; Polevoi, A.
2003-01-01
The values of Q = (fusion power)/(auxiliary heating power) predicted for ITER by three different methods, i.e., a transport model based on empirical confinement scaling, a dimensionless scaling technique, and theory-based transport models, are compared. The energy confinement time given by the ITERH-98(y,2) scaling for an inductive scenario with a plasma current of 15 MA and a plasma density 15% below the Greenwald value is 3.6 s, with one technical standard deviation of ±14%. These data translate into a Q interval of [7-13] at an auxiliary heating power P_aux = 40 MW, and [7-28] at the minimum heating power satisfying a good-confinement ELMy H-mode. Predictions of dimensionless scalings and theory-based transport models such as Weiland, MMM and IFS/PPPL overlap with the empirical scaling predictions within the margins of uncertainty. (author)
Global sensitivity analysis by polynomial dimensional decomposition
Energy Technology Data Exchange (ETDEWEB)
Rahman, Sharif, E-mail: rahman@engineering.uiowa.ed [College of Engineering, The University of Iowa, Iowa City, IA 52242 (United States)
2011-07-15
This paper presents a polynomial dimensional decomposition (PDD) method for global sensitivity analysis of stochastic systems subject to independent random input following arbitrary probability distributions. The method involves Fourier-polynomial expansions of lower-variate component functions of a stochastic response by measure-consistent orthonormal polynomial bases, analytical formulae for calculating the global sensitivity indices in terms of the expansion coefficients, and dimension-reduction integration for estimating the expansion coefficients. Due to identical dimensional structures of PDD and analysis-of-variance decomposition, the proposed method facilitates simple and direct calculation of the global sensitivity indices. Numerical results of the global sensitivity indices computed for smooth systems reveal significantly higher convergence rates of the PDD approximation than those from existing methods, including polynomial chaos expansion, random balance design, state-dependent parameter, improved Sobol's method, and sampling-based methods. However, for non-smooth functions, the convergence properties of the PDD solution deteriorate to a great extent, warranting further improvements. The computational complexity of the PDD method is polynomial, as opposed to exponential, thereby alleviating the curse of dimensionality to some extent.
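The coefficient-to-index relationship that makes this calculation "simple and direct" can be sketched in a few lines. For a toy response written in an orthonormal polynomial basis (hypothetical coefficients, for illustration only), each variance-based sensitivity index is just a ratio of squared expansion coefficients:

```python
# Toy response in an orthonormal polynomial basis:
#   y = c1*psi_1(x1) + c2*psi_2(x2) + c12*psi_1(x1)*psi_1(x2)
# Orthonormality makes Var(y) the sum of squared coefficients, so the
# ANOVA/Sobol' indices are read directly off the expansion.
c1, c2, c12 = 2.0, 1.0, 0.5
var_total = c1**2 + c2**2 + c12**2

S1 = c1**2 / var_total     # first-order index of x1
S2 = c2**2 / var_total     # first-order index of x2
S12 = c12**2 / var_total   # pure interaction index

assert abs(S1 + S2 + S12 - 1.0) < 1e-12   # indices partition the variance
```

This mirrors the identical dimensional structure of the PDD and the analysis-of-variance decomposition noted in the abstract: no extra sampling is needed once the expansion coefficients are known.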
Laboratory-based validation of the baseline sensors of the ITER diagnostic residual gas analyzer
International Nuclear Information System (INIS)
Klepper, C.C.; Biewer, T.M.; Marcus, C.; Graves, V.B.; Andrew, P.; Hughes, S.; Gardner, W.L.
2017-01-01
The divertor-specific ITER Diagnostic Residual Gas Analyzer (DRGA) will provide essential information relating to DT fusion plasma performance. This includes pulse-resolving measurements of the fuel isotopic mix reaching the pumping ducts, as well as the concentration of the helium generated as the ash of the fusion reaction. In the present baseline design, the cluster of sensors attached to this diagnostic's differentially pumped analysis chamber assembly includes a radiation compatible version of a commercial quadrupole mass spectrometer, as well as an optical gas analyzer using a plasma-based light excitation source. This paper reports on a laboratory study intended to validate the performance of this sensor cluster, with emphasis on the detection limit of the isotopic measurement. This validation study was carried out in a laboratory set-up that closely prototyped the analysis chamber assembly configuration of the baseline design. This includes an ITER-specific placement of the optical gas measurement downstream from the first turbine of the chamber's turbo-molecular pump to provide sufficient light emission while preserving the gas dynamics conditions that allow for ~1 s response time from the sensor cluster [1].
Laboratory-based validation of the baseline sensors of the ITER diagnostic residual gas analyzer
Klepper, C. C.; Biewer, T. M.; Marcus, C.; Andrew, P.; Gardner, W. L.; Graves, V. B.; Hughes, S.
2017-10-01
The divertor-specific ITER Diagnostic Residual Gas Analyzer (DRGA) will provide essential information relating to DT fusion plasma performance. This includes pulse-resolving measurements of the fuel isotopic mix reaching the pumping ducts, as well as the concentration of the helium generated as the ash of the fusion reaction. In the present baseline design, the cluster of sensors attached to this diagnostic's differentially pumped analysis chamber assembly includes a radiation compatible version of a commercial quadrupole mass spectrometer, as well as an optical gas analyzer using a plasma-based light excitation source. This paper reports on a laboratory study intended to validate the performance of this sensor cluster, with emphasis on the detection limit of the isotopic measurement. This validation study was carried out in a laboratory set-up that closely prototyped the analysis chamber assembly configuration of the baseline design. This includes an ITER-specific placement of the optical gas measurement downstream from the first turbine of the chamber's turbo-molecular pump to provide sufficient light emission while preserving the gas dynamics conditions that allow for ~1 s response time from the sensor cluster [1].
International Nuclear Information System (INIS)
Scheffel, Hans; Stolzmann, Paul; Schlett, Christopher L.; Engel, Leif-Christopher; Major, Gyöngi Petra; Károlyi, Mihály; Do, Synho; Maurovich-Horvat, Pál; Hoffmann, Udo
2012-01-01
Objectives: To compare the image quality of coronary artery plaque visualization at CT angiography with images reconstructed with filtered back projection (FBP), adaptive statistical iterative reconstruction (ASIR), and model-based iterative reconstruction (MBIR) techniques. Methods: The coronary arteries of three ex vivo human hearts were imaged by CT and reconstructed with FBP, ASIR, and MBIR. Coronary cross-sectional images were co-registered between the different reconstruction techniques and assessed for qualitative and quantitative image quality parameters. Readers were blinded to the reconstruction algorithm. Results: A total of 375 triplets of coronary cross-sectional images were co-registered. Using MBIR, 26% of the images were rated as having excellent overall image quality, significantly better than ASIR and FBP (4% and 13%, respectively; all p < 0.001). Qualitative assessment of image noise demonstrated a noise reduction using ASIR compared to FBP (p < 0.01) and a further noise reduction using MBIR (p < 0.001). The contrast-to-noise ratio (CNR) using MBIR was better than with ASIR and FBP (44 ± 19, 29 ± 15, and 26 ± 9, respectively; all p < 0.001). Conclusions: Using MBIR improved image quality, reduced image noise, and increased CNR compared with the other available reconstruction techniques. This may further improve the visualization of coronary artery plaque and allow radiation reduction.
Model-based iterative learning control of Parkinsonian state in thalamic relay neuron
Liu, Chen; Wang, Jiang; Li, Huiyan; Xue, Zhiqin; Deng, Bin; Wei, Xile
2014-09-01
Although the beneficial effects of chronic deep brain stimulation on Parkinson's disease motor symptoms are now largely confirmed, the underlying mechanisms of deep brain stimulation remain unclear and under debate, and the selection of stimulation parameters is therefore challenging. Additionally, due to the complexity of the neural system, together with omnipresent noise, an accurate model of the thalamic relay neuron is unknown. Thus, iterative learning control of the thalamic relay neuron's Parkinsonian state based on various variables is presented. Combining iterative learning control with a typical proportional-integral control algorithm, a novel and efficient control strategy is proposed which does not require any particular knowledge of the detailed physiological characteristics of the cortico-basal ganglia-thalamocortical loop and can automatically adjust the stimulation parameters. Simulation results demonstrate the feasibility of the proposed control strategy to restore the fidelity of thalamic relay in the Parkinsonian condition. Furthermore, by changing an important parameter, the maximum ionic conductance density of the low-threshold calcium current, the key advantage of the proposed method, namely its independence from an accurate model, is further verified.
A framelet-based iterative maximum-likelihood reconstruction algorithm for spectral CT
Wang, Yingmei; Wang, Ge; Mao, Shuwei; Cong, Wenxiang; Ji, Zhilong; Cai, Jian-Feng; Ye, Yangbo
2016-11-01
Standard computed tomography (CT) cannot reproduce spectral information of an object. Hardware solutions include dual-energy CT, which scans the object twice at different x-ray energy levels, and energy-discriminative detectors, which can separate lower and higher energy levels from a single x-ray scan. In this paper, we propose a software solution and give an iterative algorithm that reconstructs an image with spectral information from just one scan with a standard energy-integrating detector. The spectral information obtained can be used to produce color CT images, spectral curves of the attenuation coefficient μ(r, E) at points inside the object, and photoelectric images, which are all valuable imaging tools in cancer diagnosis. Our software solution requires no hardware changes to a CT machine. With the Shepp-Logan phantom, we found that although the photoelectric and Compton components were not perfectly reconstructed, their composite effect was very accurately reconstructed compared to the ground truth and the dual-energy CT counterpart. This means that our proposed method has an intrinsic benefit in beam-hardening correction and metal artifact reduction. The algorithm is based on a nonlinear polychromatic acquisition model for x-ray CT. The key technique is a sparse representation of iterations in a framelet system. Convergence of the algorithm is studied. This is believed to be the first application of framelet imaging tools to a nonlinear inverse problem.
Jin, Chengying; Li, Dahai; Kewei, E.; Li, Mengyang; Chen, Pengyu; Wang, Ruiyang; Xiong, Zhao
2018-06-01
In phase measuring deflectometry, two orthogonal sinusoidal fringe patterns are separately projected on the test surface and the distorted fringes reflected by the surface are recorded, each with a sequential phase shift. Then the two components of the local surface gradients are obtained by triangulation. It usually involves some complicated and time-consuming procedures (fringe projection in the orthogonal directions). In addition, the digital light devices (e.g. LCD screen and CCD camera) are not error free. There are quantization errors for each pixel of both LCD and CCD. Therefore, to avoid the complex process and improve the reliability of the phase distribution, a phase extraction algorithm with five-frame crossed fringes is presented in this paper. It is based on a least-squares iterative process. Using the proposed algorithm, phase distributions and phase shift amounts in two orthogonal directions can be simultaneously and successfully determined through an iterative procedure. Both a numerical simulation and a preliminary experiment are conducted to verify the validity and performance of this algorithm. Experimental results obtained by our method are shown, and comparisons between our experimental results and those obtained by the traditional 16-step phase-shifting algorithm and between our experimental results and those measured by the Fizeau interferometer are made.
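The core least-squares step can be sketched for the standard single-direction case with known phase shifts (a classical building block; the paper's five-frame crossed-fringe model and its simultaneous estimation of the shift amounts go beyond this sketch):

```python
import numpy as np

def lsq_phase(frames, deltas):
    """Least-squares phase extraction from phase-shifted fringe frames.

    Each frame is modelled per pixel as
        I_k = a0 + a1*cos(delta_k) + a2*sin(delta_k),
    with a1 = B*cos(phi) and a2 = -B*sin(phi), so phi = atan2(-a2, a1).
    """
    deltas = np.asarray(deltas, dtype=float)
    M = np.column_stack([np.ones_like(deltas), np.cos(deltas), np.sin(deltas)])
    I = np.stack([f.ravel() for f in frames])        # shape (K, Npix)
    coeffs, *_ = np.linalg.lstsq(M, I, rcond=None)   # shape (3, Npix)
    a1, a2 = coeffs[1], coeffs[2]
    return np.arctan2(-a2, a1).reshape(frames[0].shape)

# Synthetic check: five frames with pi/2 steps recover the phase map.
phi = np.linspace(-3.0, 3.0, 12).reshape(3, 4)
deltas = np.arange(5) * np.pi / 2
frames = [1.0 + 0.8 * np.cos(phi + d) for d in deltas]
assert np.allclose(lsq_phase(frames, deltas), phi)
```

The five-frame crossed-fringe algorithm iterates between solving for per-pixel phase (as here) and refining the unknown shift amounts, which is why a least-squares formulation is the natural backbone.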
International Nuclear Information System (INIS)
Liu, Y B; Su, Y M; Ju, L; Huang, S L
2012-01-01
A new numerical method was developed for predicting the steady hydrodynamic performance of a propeller-rudder-bulb system. In the calculation, the rudder and bulb were treated as a whole, and the potential-based surface panel method was applied both to the propeller and to the rudder-bulb system. The interaction between the propeller and the rudder-bulb was taken into account by velocity potential iteration, in which the influence of propeller rotation was considered through the average influence coefficient. In the influence coefficient computation, the singular value should be found and removed. Numerical results showed that the presented method is effective for predicting the steady hydrodynamic performance of propeller-rudder and propeller-rudder-bulb systems. Compared with the induced-velocity iterative method, the presented method saves programming and computation time. By varying the dimensions, the principal parameter affecting the energy-saving effect, the bulb size, was studied; the results show that the bulb on the rudder has an optimal size at the design advance coefficient.
A protection system for the JET ITER-like wall based on imaging diagnostics
Energy Technology Data Exchange (ETDEWEB)
Arnoux, G.; Balboa, I.; Balshaw, N.; Beldishevski, M.; Cramp, S.; Felton, R.; Goodyear, A.; Horton, A.; Kinna, D.; McCullen, P.; Obrejan, K.; Patel, K.; Lomas, P. J.; Rimini, F.; Stamp, M.; Stephen, A.; Thomas, P. D.; Williams, J.; Wilson, J.; Zastrow, K.-D. [Euratom/CCFE Fusion Association, Culham Science Centre, Abingdon, Oxon OX14 3DB (United Kingdom); and others
2012-10-15
The new JET ITER-like wall (made of beryllium and tungsten) is more fragile than the former carbon-fiber-composite wall and requires active protection to prevent excessive heat loads on the plasma-facing components (PFC). Analog CCD cameras operating at near-infrared wavelengths are used to measure the surface temperature of the PFCs. Region-of-interest (ROI) analysis is performed in real time, and the maximum temperature measured in each ROI is sent to the vessel thermal map. The ITER-like wall protection system started operating in October 2011 and has already successfully led to a safe landing of the plasma when hot spots were observed on the Be main chamber PFCs. Divertor protection is more of a challenge due to dust deposits that often generate false hot spots. In this contribution we describe the camera, data capture, and real-time processing systems. We discuss the calibration strategy for the temperature measurements, with cross-validation against thermal IR cameras and bi-color pyrometers. Most importantly, we demonstrate that a protection system based on CCD cameras can work, and show examples of hot spot detections that stop the plasma pulse. The limits of such a design and the associated constraints on operations are also presented.
Thermo-hydraulic and structural analysis for finger-based concept of ITER blanket first wall
International Nuclear Information System (INIS)
Kim, Byoung-Yoon; Ahn, Hee-Jae
2011-01-01
The blanket first wall is one of the main plasma-facing components in the ITER tokamak. A finger-type first wall was proposed in the current design progress by the ITER Organization. In this concept, each first wall module is composed of a beam and twenty fingers. The main function of the first wall is to efficiently remove the high heat flux loading from the fusion plasma during operation. Therefore, the thermal and structural performance of the proposed finger-based first wall design should be investigated. Various case studies were performed for a unit finger model considering different loading conditions. The finite element model covered half of a module, using symmetric boundary conditions to reduce the computational effort. A thermo-hydraulic analysis was performed to obtain the pressure drop and temperature profiles, and a structural analysis was then carried out using the maximum temperature distribution obtained from the thermo-hydraulic analysis. Finally, a transient thermo-hydraulic analysis was performed for the generic first wall module to obtain the temperature evolution history under cyclic heat flux loading with nuclear heating. The thermo-mechanical analysis was then performed at the time step when the maximum temperature gradient occurred. A stress analysis was also performed for a component with a finger and a beam to check the residual stress of the component after thermal-shrinkage assembly.
Zhu, Yuanheng; Zhao, Dongbin; Li, Xiangjun
2017-03-01
H∞ control is a powerful method for solving the disturbance attenuation problems that occur in some control systems. The design of such controllers relies on solving a zero-sum game (ZSG). In practical applications, however, the exact dynamics are mostly unknown, and identifying the dynamics introduces errors that degrade control performance. To overcome this problem, an iterative adaptive dynamic programming algorithm is proposed in this paper to solve the continuous-time, unknown nonlinear ZSG using only online data. A model-free approach to the Hamilton-Jacobi-Isaacs equation is developed based on the policy iteration method. Control and disturbance policies and the value function are approximated by neural networks (NNs) under the critic-actor-disturber structure, and the NN weights are solved by the least-squares method. According to the theoretical analysis, our algorithm is equivalent to a Gauss-Newton method solving an optimization problem, and it converges uniformly to the optimal solution. The online data can also be reused, which is highly efficient. Simulation results demonstrate its feasibility for solving the unknown nonlinear ZSG. Compared with other algorithms, it saves a significant amount of online measurement time.
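The policy iteration backbone of such approaches is easiest to see in the exact, finite case. Here is a minimal sketch of Howard-style policy iteration on a toy two-state discounted MDP (hypothetical numbers; the paper's critic-actor-disturber NN approximation replaces these exact solves, and the game adds a disturbance player):

```python
import numpy as np

# Toy 2-state, 2-action discounted MDP (illustrative numbers).
P = np.array([[[0.9, 0.1], [0.2, 0.8]],   # P[s, a, s'] transition probabilities
              [[0.5, 0.5], [0.1, 0.9]]])
R = np.array([[1.0, 0.0],                  # R[s, a] immediate rewards
              [0.0, 2.0]])
gamma = 0.9

policy = np.zeros(2, dtype=int)
while True:
    # Policy evaluation: solve (I - gamma*P_pi) v = r_pi exactly.
    P_pi = P[np.arange(2), policy]
    r_pi = R[np.arange(2), policy]
    v = np.linalg.solve(np.eye(2) - gamma * P_pi, r_pi)
    # Policy improvement: greedy one-step lookahead on the Q-values.
    q = R + gamma * P @ v
    new_policy = q.argmax(axis=1)
    if np.array_equal(new_policy, policy):
        break                              # fixed point: Bellman-optimal
    policy = new_policy
```

At the fixed point the greedy Q-values satisfy the Bellman optimality equation, which is the property the model-free NN iteration approximates from online data.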
Laboratory-based validation of the baseline sensors of the ITER diagnostic residual gas analyzer
Energy Technology Data Exchange (ETDEWEB)
Biewer, Theodore M. [ORNL; Marcus, Chris [ORNL; Klepper, C Christopher [ORNL; Andrew, Philip [ITER Organization, Cadarache, France; Gardner, W. L. [United States ITER Project Office; Graves, Van B. [ORNL; Hughes, Shaun [ITER Organization, Saint Paul Lez Durance, France
2017-10-01
The divertor-specific ITER Diagnostic Residual Gas Analyzer (DRGA) will provide essential information relating to DT fusion plasma performance. This includes pulse-resolving measurements of the fuel isotopic mix reaching the pumping ducts, as well as the concentration of the helium generated as the ash of the fusion reaction. In the present baseline design, the cluster of sensors attached to this diagnostic's differentially pumped analysis chamber assembly includes a radiation compatible version of a commercial quadrupole mass spectrometer, as well as an optical gas analyzer using a plasma-based light excitation source. This paper reports on a laboratory study intended to validate the performance of this sensor cluster, with emphasis on the detection limit of the isotopic measurement. This validation study was carried out in a laboratory set-up that closely prototyped the analysis chamber assembly configuration of the baseline design. This includes an ITER-specific placement of the optical gas measurement downstream from the first turbine of the chamber's turbo-molecular pump to provide sufficient light emission while preserving the gas dynamics conditions that allow for ~1 s response time from the sensor cluster [1].
Energy Technology Data Exchange (ETDEWEB)
Lee, Eun Chae; Kim, Yeo Koon; Chun, Eun Ju; Choi, Sang IL [Dept. of Radiology, Seoul National University Bundang Hospital, Seongnam (Korea, Republic of)
2016-05-15
To assess the performance of the model-based iterative reconstruction (MBIR) technique for the evaluation of coronary artery stents on coronary CT angiography (CCTA). Twenty-two patients with coronary stent implantation who underwent CCTA were retrospectively enrolled for comparison of image quality between filtered back projection (FBP), adaptive statistical iterative reconstruction (ASIR), and MBIR. In each data set, image noise was measured as the standard deviation of the measured attenuation units within circular regions of interest in the ascending aorta (AA) and left main coronary artery (LM). To objectively assess the noise and blooming artifacts in coronary stents, we additionally measured the standard deviation of the measured attenuation and the intra-luminal stent diameters of 35 stents in total with dedicated software. All image noise values measured in the AA (all p < 0.001), LM (p < 0.001, p = 0.001) and coronary stents (all p < 0.001) were significantly lower with MBIR than with FBP or ASIR. Intraluminal stent diameter was significantly larger with MBIR than with ASIR or FBP (p < 0.001, p = 0.001). MBIR can reduce image noise and blooming artifacts from the stent, leading to better in-stent assessment in patients with coronary artery stents.
Iterative observer based method for source localization problem for Poisson equation in 3D
Majeed, Muhammad Usman
2017-07-10
A state-observer-based method is developed to solve the point source localization problem for the Poisson equation in a 3D rectangular prism with available boundary data. The technique requires a weighted sum of the solutions of multiple boundary-data estimation problems for the Laplace equation over the 3D domain. The solution of each of these boundary estimation problems involves writing the mathematical problem in a state-space-like representation, using one of the space variables as time-like. First, a system observability result for the 3D boundary estimation problem is recalled in an infinite-dimensional setting. Then, based on the observability result, the boundary estimation problem is decomposed into a set of independent 2D sub-problems, which are solved using an iterative observer. Theoretical results are provided. The method is implemented numerically using finite-difference discretization schemes. Numerical illustrations along with simulation results are provided.
NetCDF based data archiving system applied to ITER Fast Plant System Control prototype
International Nuclear Information System (INIS)
Castro, R.; Vega, J.; Ruiz, M.; De Arcas, G.; Barrera, E.; López, J.M.; Sanz, D.; Gonçalves, B.; Santos, B.; Utzel, N.; Makijarvi, P.
2012-01-01
Highlights: ► Implementation of a data archiving solution for a Fast Plant System Controller (FPSC) for ITER CODAC. ► Data archiving solution based on the scientific NetCDF-4 file format and Lustre storage clustering. ► EPICS-based control solution. ► Test results and detailed analysis of the use of NetCDF-4 and clustering technologies for fast-acquisition data archiving. - Abstract: EURATOM/CIEMAT and the Technical University of Madrid (UPM) have been involved in the development of an FPSC (Fast Plant System Control) prototype for ITER, based on PXIe (PCI eXtensions for Instrumentation). One of the main focuses of this project has been data acquisition and all the related issues, including scientific data archiving. Additionally, a new data archiving solution has been developed to demonstrate the obtainable performance and possible bottlenecks of scientific data archiving in Fast Plant System Control. The presented system implements a fault-tolerant architecture over a GEthernet network where FPSC data are reliably archived remotely, while remaining accessible for redistribution within the duration of a pulse. The storage service is supported by a clustering solution to guarantee scalability, so that FPSC management and configuration may be simplified and a unique view of all archived data provided. All the involved components have been integrated under EPICS (Experimental Physics and Industrial Control System), implementing in each case the necessary extensions, state machines and configuration process variables. The prototyped solution is based on the NetCDF-4 (Network Common Data Format) file format in order to incorporate important features, such as support for scientific data models, management of huge files, platform-independent encoding, and single-writer/multiple-readers concurrency. In this contribution, a complete description of the above-mentioned solution is presented, together with the most relevant results of the tests performed, while focusing in the
International Nuclear Information System (INIS)
Kim, Jin Hyeok; Choo, Ki Seok; Moon, Tae Yong; Lee, Jun Woo; Jeon, Ung Bae; Kim, Tae Un; Hwang, Jae Yeon; Yun, Myeong-Ja; Jeong, Dong Wook; Lim, Soo Jin
2016-01-01
To evaluate the subjective and objective qualities of computed tomography (CT) venography images at 80 kVp using model-based iterative reconstruction (MBIR) and to compare these with those of filtered back projection (FBP) and adaptive statistical iterative reconstruction (ASIR) using the same CT data sets. Forty-four patients (mean age: 56.1 ± 18.1) who underwent 80 kVp CT venography (CTV) for the evaluation of deep vein thrombosis (DVT) during 4 months were enrolled in this retrospective study. The same raw data were reconstructed using FBP, ASIR, and MBIR. Objective and subjective image analyses were performed at the inferior vena cava (IVC), femoral vein, and popliteal vein. The mean contrast-to-noise ratio (CNR) of MBIR was significantly greater than those of FBP and ASIR, and images reconstructed using MBIR had significantly lower objective image noise (p <.001). Subjective image quality and confidence of detecting DVT for the MBIR group were significantly greater than those of FBP and ASIR (p <.005), and MBIR had the lowest score for subjective image noise (p <.001). CTV at 80 kVp with MBIR was superior to FBP and ASIR regarding subjective and objective image qualities. (orig.)
Polynomial approximation on polytopes
Totik, Vilmos
2014-01-01
Polynomial approximation on convex polytopes in \\mathbf{R}^d is considered in uniform and L^p-norms. For an appropriate modulus of smoothness matching direct and converse estimates are proven. In the L^p-case so called strong direct and converse results are also verified. The equivalence of the moduli of smoothness with an appropriate K-functional follows as a consequence. The results solve a problem that was left open since the mid 1980s when some of the present findings were established for special, so-called simple polytopes.
International Nuclear Information System (INIS)
Milks, Matthew M; Guise, Hubert de
2005-01-01
The construction of su(2) intelligent states is simplified using a polynomial representation of su(2). The cornerstone of the new construction is the diagonalization of a 2 x 2 matrix. The method is sufficiently simple to be easily extended to su(3), where one is required to diagonalize a single 3 x 3 matrix. For two perfectly general su(3) operators, this diagonalization is technically possible but the procedure loses much of its simplicity owing to the algebraic form of the roots of a cubic equation. Simplified expressions can be obtained by specializing the choice of su(3) operators. This simpler construction will be discussed in detail
Energy Technology Data Exchange (ETDEWEB)
Harder, Annemarie M. den, E-mail: a.m.denharder@umcutrecht.nl [Department of Radiology, University Medical Center Utrecht, Utrecht (Netherlands); Wolterink, Jelmer M. [Image Sciences Institute, University Medical Center Utrecht, Utrecht (Netherlands); Willemink, Martin J.; Schilham, Arnold M.R.; Jong, Pim A. de [Department of Radiology, University Medical Center Utrecht, Utrecht (Netherlands); Budde, Ricardo P.J. [Department of Radiology, Erasmus Medical Center, Rotterdam (Netherlands); Nathoe, Hendrik M. [Department of Cardiology, University Medical Center Utrecht, Utrecht (Netherlands); Išgum, Ivana [Image Sciences Institute, University Medical Center Utrecht, Utrecht (Netherlands); Leiner, Tim [Department of Radiology, University Medical Center Utrecht, Utrecht (Netherlands)
2016-11-15
Highlights: • Iterative reconstruction (IR) allows for low dose coronary calcium scoring (CCS). • Radiation dose can be safely reduced to 0.4 mSv with hybrid and model-based IR. • FBP is not feasible at these dose levels due to excessive noise. - Abstract: Purpose: To determine the effect of model-based iterative reconstruction (IR) on coronary calcium quantification using different submillisievert CT acquisition protocols. Methods: Twenty-eight patients received a clinically indicated non contrast-enhanced cardiac CT. After the routine dose acquisition, low-dose acquisitions were performed with 60%, 40% and 20% of the routine dose mAs. Images were reconstructed with filtered back projection (FBP), hybrid IR (HIR) and model-based IR (MIR) and Agatston scores, calcium volumes and calcium mass scores were determined. Results: Effective dose was 0.9, 0.5, 0.4 and 0.2 mSv, respectively. At 0.5 and 0.4 mSv, differences in Agatston scores with both HIR and MIR compared to FBP at routine dose were small (−0.1 to −2.9%), while at 0.2 mSv, differences in Agatston scores of −12.6 to −14.6% occurred. Reclassification of risk category at reduced dose levels was more frequent with MIR (21–25%) than with HIR (18%). Conclusions: Radiation dose for coronary calcium scoring can be safely reduced to 0.4 mSv using both HIR and MIR, while FBP is not feasible at these dose levels due to excessive noise. Further dose reduction can lead to an underestimation in Agatston score and subsequent reclassification to lower risk categories. Mass scores were unaffected by dose reductions.
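The Agatston scores compared above combine the calcified area of each lesion with a weight derived from its peak attenuation. As a rough illustration only (not a vendor implementation; the function names, the 130 HU threshold handling and the per-lesion input format are assumptions based on the standard Agatston definition), a per-lesion score might be sketched as:

```python
import numpy as np

def agatston_weight(peak_hu):
    """Density weight from a lesion's peak attenuation (standard Agatston bins)."""
    if peak_hu < 130:
        return 0
    if peak_hu < 200:
        return 1
    if peak_hu < 300:
        return 2
    if peak_hu < 400:
        return 3
    return 4

def agatston_score(lesion_pixels_hu, pixel_area_mm2):
    """Score one lesion: calcified area (mm^2, pixels >= 130 HU) times the weight."""
    px = np.asarray(lesion_pixels_hu, dtype=float)
    calcified = px[px >= 130]
    if calcified.size == 0:
        return 0.0
    area = calcified.size * pixel_area_mm2
    return area * agatston_weight(calcified.max())
```

The total score is then the sum over all lesions on all slices; reclassification, as reported above, happens when dose-related noise or blur shifts this total across a risk-category boundary.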
Using a web-based, iterative education model to enhance clinical clerkships.
Alexander, Erik K; Bloom, Nurit; Falchuk, Kenneth H; Parker, Michael
2006-10-01
Although most clinical clerkship curricula are designed to provide all students consistent exposure to defined course objectives, it is clear that individual students are diverse in their backgrounds and baseline knowledge. Ideally, the learning process should be individualized towards the strengths and weaknesses of each student, but, until recently, this has proved prohibitively time-consuming. The authors describe a program to develop and evaluate an iterative, Web-based educational model assessing medical students' knowledge deficits and allowing targeted teaching shortly after their identification. Beginning in 2002, a new educational model was created, validated, and applied in a prospective fashion to medical students during an internal medicine clerkship at Harvard Medical School. Using a Web-based platform, five validated questions were delivered weekly and a specific knowledge deficiency identified. Teaching targeted to the deficiency was provided to an intervention cohort of five to seven students in each clerkship, though not to controls (the remaining 7-10 students). Effectiveness of this model was assessed by performance on the following week's posttest question. Specific deficiencies were readily identified weekly using this model. Throughout the year, however, deficiencies varied unpredictably. Teaching targeted to deficiencies resulted in significantly better performance on follow-up questioning compared to the performance of those who did not receive this intervention. This model was easily applied in an additive fashion to the current curriculum, and student acceptance was high. The authors conclude that a Web-based, iterative assessment model can effectively target specific curricular needs unique to each group; focus teaching in a rapid, formative, and highly efficient manner; and may improve the efficiency of traditional clerkship teaching.
Generalized Pseudospectral Method and Zeros of Orthogonal Polynomials
Directory of Open Access Journals (Sweden)
Oksana Bihun
2018-01-01
Full Text Available Via a generalization of the pseudospectral method for numerical solution of differential equations, a family of nonlinear algebraic identities satisfied by the zeros of a wide class of orthogonal polynomials is derived. The generalization is based on a modification of pseudospectral matrix representations of linear differential operators proposed in the paper, which allows these representations to depend on two, rather than one, sets of interpolation nodes. The identities hold for every polynomial family {p_ν(x)}_{ν=0}^{∞} orthogonal with respect to a measure supported on the real line that satisfies some standard assumptions, as long as the polynomials in the family satisfy differential equations A p_ν(x) = q_ν(x) p_ν(x), where A is a linear differential operator and each q_ν(x) is a polynomial of degree at most n_0 ∈ N; n_0 does not depend on ν. The proposed identities generalize known identities for classical and Krall orthogonal polynomials to the case of the nonclassical orthogonal polynomials that belong to the class described above. The generalized pseudospectral representations of the differential operator A for the case of the Sonin-Markov orthogonal polynomials, also known as generalized Hermite polynomials, are presented. The general result is illustrated by new algebraic relations satisfied by the zeros of the Sonin-Markov polynomials.
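The standard (single-node-set) pseudospectral matrix representation that the paper generalizes maps samples of a function at interpolation nodes to the derivative of its interpolating polynomial at those same nodes. A hedged numerical sketch via the Vandermonde matrix (the helper name `diff_matrix` is an assumption, and explicit Vandermonde inversion is only sensible for small node counts):

```python
import numpy as np

def diff_matrix(nodes):
    """Pseudospectral differentiation matrix on the given interpolation nodes.

    D maps samples f(x_j) to the derivative of the degree n-1 interpolating
    polynomial at the nodes: D = V' V^{-1}, with V the Vandermonde matrix
    V[i, j] = x_i^j and V' its column-wise derivative.
    """
    x = np.asarray(nodes, dtype=float)
    n = x.size
    powers = np.arange(n)
    V = x[:, None] ** powers                                # x_i^j
    dV = powers * x[:, None] ** np.maximum(powers - 1, 0)   # j * x_i^(j-1)
    return dV @ np.linalg.inv(V)
```

On n nodes the matrix is exact for polynomials of degree below n, which is the property the paper's zero identities exploit.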
Directory of Open Access Journals (Sweden)
Xiaohong Jiao
2014-07-01
Full Text Available This paper demonstrates an energy management method using traffic information for commuter hybrid electric vehicles. A control strategy based on stochastic dynamic programming (SDP) is developed, which minimizes on average the equivalent fuel consumption, while satisfying the battery charge-sustaining constraints and the overall vehicle power demand for drivability. First, according to the sample information of the traffic speed profiles, the regular route is divided into several segments and the statistical characteristics of the different segments are constructed from gathered data on the averaged vehicle speeds. Then, the energy management problem is formulated as a stochastic nonlinear and constrained optimal control problem, and a modified policy iteration algorithm is utilized to generate a time-invariant state-dependent power split strategy. Finally, simulation results over some driving cycles are presented to demonstrate the effectiveness of the proposed energy management strategy.
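The modified policy iteration used above builds on Howard's policy iteration for discounted MDPs (the algorithm whose strongly polynomial bound is discussed in the strategy iteration records above). A minimal sketch of the exact, unmodified algorithm; the array shapes for transitions and rewards are assumptions, not the paper's formulation:

```python
import numpy as np

def policy_iteration(P, R, gamma=0.9):
    """Howard's policy iteration for a finite discounted MDP.

    P: (A, S, S) transition probabilities, R: (S, A) rewards.
    Alternates exact policy evaluation (solve (I - gamma * P_pi) v = r_pi)
    with greedy one-step improvement until the policy is stable.
    """
    n_actions, n_states, _ = P.shape
    policy = np.zeros(n_states, dtype=int)
    while True:
        # evaluation: value of the current policy by direct linear solve
        P_pi = P[policy, np.arange(n_states)]          # (S, S), row s = P[policy[s], s]
        r_pi = R[np.arange(n_states), policy]          # (S,)
        v = np.linalg.solve(np.eye(n_states) - gamma * P_pi, r_pi)
        # improvement: greedy one-step lookahead over all actions
        q = R.T + gamma * (P @ v)                      # (A, S)
        new_policy = q.argmax(axis=0)
        if np.array_equal(new_policy, policy):
            return policy, v
        policy = new_policy
```

A *modified* policy iteration, as in the paper, would replace the exact linear solve by a fixed number of value-iteration sweeps.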
Optimization of a bolometer detector for ITER based on Pt absorber on SiN membrane
Energy Technology Data Exchange (ETDEWEB)
Meister, H.; Eich, T.; Endstrasser, N.; Giannone, L.; Kannamueller, M.; Kling, A.; Koll, J.; Trautmann, T. [Max-Planck-Institut fuer Plasmaphysik, EURATOM Association, Boltzmannstr. 2, D-85748 Garching (Germany); Detemple, P.; Schmitt, S. [Institut fuer Mikrotechnik Mainz GmbH, Carl-Zeiss-Str. 18-20, D-55129 Mainz (Germany); Collaboration: ASDEX Upgrade Team
2010-10-15
Any plasma diagnostic in ITER must be able to operate at temperatures in excess of 200 deg. C and neutron loads corresponding to 0.1 dpa over its lifetime. To achieve this aim for the bolometer diagnostic, a miniaturized metal resistor bolometer detector based on Pt absorbers galvanically deposited on SiN membranes is being developed. The first two generations of detectors featured up to 4.5 μm thick absorbers. Results from laboratory tests are presented characterizing the dependence of their calibration constants on thermal loads up to 450 deg. C. Several detectors have been tested in ASDEX Upgrade, providing reliable data but also pointing out the need for further optimization. A laser trimming procedure has been implemented to reduce the mismatch in meander resistances to below 1% for one detector, and with it the thermal drifts resulting from this mismatch.
Optimization of a bolometer detector for ITER based on Pt absorber on SiN membrane
Meister, H.; Eich, T.; Endstrasser, N.; Giannone, L.; Kannamüller, M.; Kling, A.; Koll, J.; Trautmann, T.; ASDEX Upgrade Team; Detemple, P.; Schmitt, S.
2010-10-01
Any plasma diagnostic in ITER must be able to operate at temperatures in excess of 200 °C and neutron loads corresponding to 0.1 dpa over its lifetime. To achieve this aim for the bolometer diagnostic, a miniaturized metal resistor bolometer detector based on Pt absorbers galvanically deposited on SiN membranes is being developed. The first two generations of detectors featured up to 4.5 μm thick absorbers. Results from laboratory tests are presented characterizing the dependence of their calibration constants on thermal loads up to 450 °C. Several detectors have been tested in ASDEX Upgrade, providing reliable data but also pointing out the need for further optimization. A laser trimming procedure has been implemented to reduce the mismatch in meander resistances to below 1% for one detector, and with it the thermal drifts resulting from this mismatch.
Sun, Aihui; Tian, Xiaolin; Kong, Yan; Jiang, Zhilong; Liu, Fei; Xue, Liang; Wang, Shouyu; Liu, Cheng
2018-01-01
As a lensfree imaging technique, the ptychographic iterative engine (PIE) method can provide both quantitative sample amplitude and phase distributions while avoiding aberration. However, it requires field-of-view (FoV) scanning, which often relies on mechanical translation; this not only slows down the measurement speed, but also introduces mechanical errors that decrease both the resolution and the accuracy of the retrieved information. In order to achieve highly accurate quantitative imaging at fast speed, a digital micromirror device (DMD) is adopted in PIE for large-FoV scanning, controlled by on/off state coding of the DMD. Measurements were implemented using biological samples as well as a USAF resolution target, proving high resolution in quantitative imaging using the proposed system. Considering its fast and accurate imaging capability, it is believed that the DMD-based PIE technique provides a potential solution for medical observation and measurements.
DEFF Research Database (Denmark)
Van Daele, Timothy; Gernaey, Krist V.; Ringborg, Rolf Hoffmeyer
2017-01-01
The aim of model calibration is to estimate unique parameter values from available experimental data, here applied to a biocatalytic process. The traditional approach of first gathering data followed by performing a model calibration is inefficient, since the information gathered during...... experimentation is not actively used to optimise the experimental design. By applying an iterative robust model-based optimal experimental design, the limited amount of data collected is used to design additional informative experiments. The algorithm is used here to calibrate the initial reaction rate of an ω......-transaminase catalysed reaction in a more accurate way. The parameter confidence region estimated from the Fisher Information Matrix is compared with the likelihood confidence region, which is a more accurate, but also a computationally more expensive method. As a result, an important deviation between both approaches...
Energy Technology Data Exchange (ETDEWEB)
Ha, Woo Seok; Kim, Soo Mee; Park, Min Jae; Lee, Dong Soo; Lee, Jae Sung [Seoul National University, Seoul (Korea, Republic of)
2009-10-15
The maximum likelihood-expectation maximization (ML-EM) is a statistical reconstruction algorithm derived from a probabilistic model of the emission and detection processes. Although the ML-EM has many advantages in accuracy and utility, its use is limited by the computational burden of iterative processing on a CPU (central processing unit). In this study, we developed a parallel computing technique on a GPU (graphics processing unit) for the ML-EM algorithm. Using a GeForce 9800 GTX+ graphics card and CUDA (compute unified device architecture), the projection and backprojection in the ML-EM algorithm were parallelized using NVIDIA's technology. The time delays for computing the projection, the errors between measured and estimated data, and the backprojection in one iteration were measured. Total time included the latency of data transmission between RAM and GPU memory. The total computation times of the CPU- and GPU-based ML-EM with 32 iterations were 3.83 and 0.26 sec, respectively. In this case, the computing speed was improved about 15-fold on the GPU. When the number of iterations was increased to 1024, the CPU- and GPU-based computing took 18 min and 8 sec in total, respectively. The improvement was about 135-fold and was caused by delays in CPU-based computing after a certain number of iterations. On the other hand, the GPU-based computation showed very little variation in time delay per iteration due to the use of shared memory. The GPU-based parallel computation significantly improved the computing speed and stability of ML-EM. The developed GPU-based ML-EM algorithm could easily be modified for other imaging geometries.
International Nuclear Information System (INIS)
Ha, Woo Seok; Kim, Soo Mee; Park, Min Jae; Lee, Dong Soo; Lee, Jae Sung
2009-01-01
The maximum likelihood-expectation maximization (ML-EM) is a statistical reconstruction algorithm derived from a probabilistic model of the emission and detection processes. Although the ML-EM has many advantages in accuracy and utility, its use is limited by the computational burden of iterative processing on a CPU (central processing unit). In this study, we developed a parallel computing technique on a GPU (graphics processing unit) for the ML-EM algorithm. Using a GeForce 9800 GTX+ graphics card and CUDA (compute unified device architecture), the projection and backprojection in the ML-EM algorithm were parallelized using NVIDIA's technology. The time delays for computing the projection, the errors between measured and estimated data, and the backprojection in one iteration were measured. Total time included the latency of data transmission between RAM and GPU memory. The total computation times of the CPU- and GPU-based ML-EM with 32 iterations were 3.83 and 0.26 sec, respectively. In this case, the computing speed was improved about 15-fold on the GPU. When the number of iterations was increased to 1024, the CPU- and GPU-based computing took 18 min and 8 sec in total, respectively. The improvement was about 135-fold and was caused by delays in CPU-based computing after a certain number of iterations. On the other hand, the GPU-based computation showed very little variation in time delay per iteration due to the use of shared memory. The GPU-based parallel computation significantly improved the computing speed and stability of ML-EM. The developed GPU-based ML-EM algorithm could easily be modified for other imaging geometries.
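The ML-EM update parallelized in the work above consists, per iteration, of one forward projection, an element-wise ratio against the measured data, and one backprojection with sensitivity normalisation. A minimal CPU-side numpy sketch under simplifying assumptions (a dense system matrix `A` for clarity; a real PET/SPECT system matrix would be sparse or matrix-free, and this is where the GPU parallelism pays off):

```python
import numpy as np

def mlem(A, y, n_iter=32, eps=1e-12):
    """ML-EM reconstruction for an emission model y ~ Poisson(A x).

    Per iteration: forward projection A x, element-wise ratio with the
    measured data y, backprojection A.T, and division by the sensitivity
    image A.T 1. The iterates stay non-negative by construction.
    """
    sens = A.T @ np.ones(A.shape[0])        # sensitivity image A.T 1
    x = np.ones(A.shape[1])                 # flat, positive initial estimate
    for _ in range(n_iter):
        proj = A @ x                        # forward projection
        ratio = y / np.maximum(proj, eps)   # measured / estimated
        x = x / np.maximum(sens, eps) * (A.T @ ratio)
    return x
```

Each iteration is two matrix-vector products plus element-wise work, which matches the projection/backprojection timing breakdown reported above.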
Scenario-based fitted Q-iteration for adaptive control of water reservoir systems under uncertainty
Bertoni, Federica; Giuliani, Matteo; Castelletti, Andrea
2017-04-01
Over recent years, mathematical models have largely been used to support planning and management of water resources systems. Yet, the increasing uncertainties in their inputs - due to increased variability in the hydrological regimes - are a major challenge to the optimal operation of these systems. Such uncertainty, boosted by projected changing climate, violates the stationarity principle generally used for describing hydro-meteorological processes, which assumes time persisting statistical characteristics of a given variable as inferred by historical data. As this principle is unlikely to be valid in the future, the probability density function used for modeling stochastic disturbances (e.g., inflows) becomes an additional uncertain parameter of the problem, which can be described in a deterministic and set-membership based fashion. This study contributes a novel method for designing optimal, adaptive policies for controlling water reservoir systems under climate-related uncertainty. The proposed method, called scenario-based Fitted Q-Iteration (sFQI), extends the original Fitted Q-Iteration algorithm by enlarging the state space to include the space of the uncertain system's parameters (i.e., the uncertain climate scenarios). As a result, sFQI embeds the set-membership uncertainty of the future inflow scenarios in the action-value function and is able to approximate, with a single learning process, the optimal control policy associated with any scenario included in the uncertainty set. The method is demonstrated on a synthetic water system, consisting of a regulated lake operated for ensuring reliable water supply to downstream users. Numerical results show that the sFQI algorithm successfully identifies adaptive solutions to operate the system under different inflow scenarios, which outperform the control policy designed under historical conditions. Moreover, the sFQI policy generalizes over inflow scenarios not directly experienced during the policy design
Physics research needs for ITER
International Nuclear Information System (INIS)
Sauthoff, N.R.
1995-01-01
Design of ITER entails the application of physics design tools that have been validated against the world-wide data base of fusion research. In many cases, these tools do not yet exist and must be developed as part of the ITER physics program. ITER's considerable increases in power and size demand significant extrapolations from the current data base; in several cases, new physical effects are projected to dominate the behavior of the ITER plasma. This paper focuses on those design tools and data that have been identified by the ITER team and are not yet available; these needs serve as the basis for the ITER Physics Research Needs, which have been developed jointly by the ITER Physics Expert Groups and the ITER design team. Development of the tools and the supporting data base is an on-going activity that constitutes a significant opportunity for contributions to the ITER program by fusion research programs world-wide
Exponential time paradigms through the polynomial time lens
Drucker, A.; Nederlof, J.; Santhanam, R.; Sankowski, P.; Zaroliagis, C.
2016-01-01
We propose a general approach to modelling algorithmic paradigms for the exact solution of NP-hard problems. Our approach is based on polynomial time reductions to succinct versions of problems solvable in polynomial time. We use this viewpoint to explore and compare the power of paradigms such as
International Nuclear Information System (INIS)
Barry, J.M.; Pollard, J.P.
1986-11-01
A FORTRAN subroutine MLTGRD is provided to efficiently solve the large systems of linear equations arising from a five-point finite difference discretisation of some elliptic partial differential equations. MLTGRD is a multigrid algorithm which provides multiplicative correction to iterative solution estimates from successively reduced systems of linear equations. It uses the method of implicit non-stationary iteration for all grid levels.
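MLTGRD itself is a FORTRAN routine using multiplicative corrections over many grid levels. As a hedged illustration of the coarse-grid correction idea only (a two-level, additive-correction cycle for the 1-D Poisson problem, which is not MLTGRD's scheme), the structure can be sketched as:

```python
import numpy as np

def poisson_matrix(n, h):
    """Standard 1-D finite-difference Laplacian with Dirichlet boundaries."""
    return (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2

def two_grid(f, n_cycles=50, omega=2/3):
    """Two-grid cycles for -u'' = f on (0,1); f holds the n interior values (n odd).

    Each cycle: weighted-Jacobi pre-smoothing, full-weighting restriction of
    the residual, an exact solve on the half-resolution grid, linear
    interpolation of the correction, and one post-smoothing sweep.
    """
    n = f.size
    h = 1.0 / (n + 1)
    A = poisson_matrix(n, h)
    nc = (n - 1) // 2
    Ac = poisson_matrix(nc, 1.0 / (nc + 1))
    u = np.zeros(n)
    for _ in range(n_cycles):
        for _ in range(2):                                    # pre-smoothing
            u = u + omega * (h**2 / 2) * (f - A @ u)
        r = f - A @ u
        rc = 0.25 * (r[0:-2:2] + 2 * r[1:-1:2] + r[2::2])     # full weighting
        ec = np.linalg.solve(Ac, rc)                          # exact coarse solve
        pad = np.concatenate(([0.0], ec, [0.0]))
        e = np.zeros(n)
        e[1::2] = ec                                          # inject at coarse points
        e[0::2] = 0.5 * (pad[:-1] + pad[1:])                  # interpolate in between
        u = u + e
        u = u + omega * (h**2 / 2) * (f - A @ u)              # post-smoothing
    return u
```

A full multigrid code recurses on the coarse solve instead of solving exactly, and MLTGRD folds the coarse-grid information in as a multiplicative rather than additive correction.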
Influence of iterative image reconstruction on CT-based calcium score measurements
van Osch, Jochen A. C.; Mouden, Mohamed; van Dalen, Jorn A.; Timmer, Jorik R.; Reiffers, Stoffer; Knollema, Siert; Greuter, Marcel J. W.; Ottervanger, Jan Paul; Jager, Piet L.
Iterative reconstruction techniques for coronary CT angiography have been introduced as an alternative to traditional filtered back projection (FBP) to reduce image noise, allowing improved image quality and a potential for dose reduction. However, the impact of iterative reconstruction on the
International Nuclear Information System (INIS)
Tang Jie; Nett, Brian E; Chen Guanghong
2009-01-01
Of all available reconstruction methods, statistical iterative reconstruction algorithms appear particularly promising since they enable accurate physical noise modeling. The newly developed compressive sampling/compressed sensing (CS) algorithm has shown the potential to accurately reconstruct images from highly undersampled data. The CS algorithm can be implemented in the statistical reconstruction framework as well. In this study, we compared the performance of two standard statistical reconstruction algorithms (penalized weighted least squares and q-GGMRF) to the CS algorithm. In assessing the image quality using these iterative reconstructions, it is critical to utilize realistic background anatomy as the reconstruction results are object dependent. A cadaver head was scanned on a Varian Trilogy system at different dose levels. Several figures of merit including the relative root mean square error and a quality factor which accounts for the noise performance and the spatial resolution were introduced to objectively evaluate reconstruction performance. A comparison is presented between the three algorithms for a constant undersampling factor comparing different algorithms at several dose levels. To facilitate this comparison, the original CS method was formulated in the framework of the statistical image reconstruction algorithms. Important conclusions of the measurements from our studies are that (1) for realistic neuro-anatomy, over 100 projections are required to avoid streak artifacts in the reconstructed images even with CS reconstruction, (2) regardless of the algorithm employed, it is beneficial to distribute the total dose to more views as long as each view remains quantum noise limited and (3) the total variation-based CS method is not appropriate for very low dose levels because while it can mitigate streaking artifacts, the images exhibit patchy behavior, which is potentially harmful for medical diagnosis.
Hand-Eye LRF-Based Iterative Plane Detection Method for Autonomous Robotic Welding
Directory of Open Access Journals (Sweden)
Sungmin Lee
2015-12-01
Full Text Available This paper proposes a hand-eye LRF-based (laser range finder) welding plane-detection method for autonomous robotic welding in the field of shipbuilding. The hand-eye LRF system consists of a 6 DOF manipulator and an LRF attached to the wrist of the manipulator. The welding plane is detected by the LRF with only the wrist's rotation, to minimize the mechanical error caused by the manipulator's motion. A position on the plane is determined as the average position of the detected points on the plane, and a normal vector to the plane is determined by applying PCA (principal component analysis) to the detected points. In this case, the accuracy of the detected plane is analysed by simulations with respect to the wrist's angle interval and the plane angle. As a result of the analysis, an iterative plane-detection method with the manipulator's alignment motion is proposed to improve the performance of plane detection. For verifying the feasibility and effectiveness of the proposed plane-detection method, experiments are carried out with a prototype of the hand-eye LRF-based system, which consists of a 1 DOF wrist joint, an LRF system and a rotatable plane. In addition, the experimental results of the PCA-based plane-detection method are compared with those of two representative plane-detection methods, based on RANSAC (RANdom SAmple Consensus) and the 3D Hough transform, in terms of both accuracy and computation time.
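The PCA step described above reduces to an eigen-decomposition of the covariance of the detected points: the plane position is their centroid, and the normal is the direction of least scatter (smallest eigenvalue). A minimal sketch; the function name is an assumption:

```python
import numpy as np

def fit_plane_pca(points):
    """Fit a plane to 3-D points: centroid plus PCA normal.

    The normal is the eigenvector of the covariance matrix belonging to the
    smallest eigenvalue, i.e. the direction in which the points scatter least.
    """
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    cov = np.cov((pts - centroid).T)          # 3x3 covariance
    eigvals, eigvecs = np.linalg.eigh(cov)    # eigh: eigenvalues ascending
    normal = eigvecs[:, 0]                    # smallest-eigenvalue eigenvector
    return centroid, normal
```

Unlike RANSAC, this least-squares fit uses all points, so it is fast but sensitive to outliers, which is consistent with the accuracy/computation-time comparison reported above.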
International Nuclear Information System (INIS)
Pask, J.E.; Klein, B.M.; Fong, C.Y.; Sterne, P.A.
1999-01-01
We present an approach to solid-state electronic-structure calculations based on the finite-element method. In this method, the basis functions are strictly local, piecewise polynomials. Because the basis is composed of polynomials, the method is completely general and its convergence can be controlled systematically. Because the basis functions are strictly local in real space, the method allows for variable resolution in real space; produces sparse, structured matrices, enabling the effective use of iterative solution methods; and is well suited to parallel implementation. The method thus combines the significant advantages of both real-space-grid and basis-oriented approaches and so promises to be particularly well suited for large, accurate ab initio calculations. We develop the theory of our approach in detail, discuss advantages and disadvantages, and report initial results, including electronic band structures and details of the convergence of the method. copyright 1999 The American Physical Society
Freund, Roland
1988-01-01
Conjugate gradient type methods are considered for the solution of large linear systems Ax = b with complex coefficient matrices of the type A = T + i(sigma)I, where T is Hermitian and sigma a real scalar. Three different conjugate gradient type approaches, with iterates defined by a minimal residual property, a Galerkin type condition, and a Euclidean error minimization, respectively, are investigated. In particular, numerically stable implementations based on the ideas behind Paige and Saunders' SYMMLQ and MINRES for real symmetric matrices are proposed. Error bounds for all three methods are derived. It is shown how the special shift structure of A can be preserved by using polynomial preconditioning. Results on the optimal choice of the polynomial preconditioner are given. Also, some numerical experiments for matrices arising from finite difference approximations to the complex Helmholtz equation are reported.
Polynomial methods in combinatorics
Guth, Larry
2016-01-01
This book explains some recent applications of the theory of polynomials and algebraic geometry to combinatorics and other areas of mathematics. One of the first results in this story is a short elegant solution of the Kakeya problem for finite fields, which was considered a deep and difficult problem in combinatorial geometry. The author also discusses in detail various problems in incidence geometry associated to Paul Erdős's famous distinct distances problem in the plane from the 1940s. The proof techniques are also connected to error-correcting codes, Fourier analysis, number theory, and differential geometry. Although the mathematics discussed in the book is deep and far-reaching, it should be accessible to first- and second-year graduate students and advanced undergraduates. The book contains approximately 100 exercises that further the reader's understanding of the main themes of the book. Some of the greatest advances in geometric combinatorics and harmonic analysis in recent years have been accompl...
Polynomial representations of GLn
Green, James A; Erdmann, Karin
2007-01-01
The first half of this book contains the text of the first edition of LNM volume 830, Polynomial Representations of GLn. This classic account of matrix representations, the Schur algebra, the modular representations of GLn, and connections with symmetric groups, has been the basis of much research in representation theory. The second half is an Appendix, and can be read independently of the first. It is an account of the Littelmann path model for the case gln. In this case, Littelmann's 'paths' become 'words', and so the Appendix works with the combinatorics on words. This leads to the representation theory of the 'Littelmann algebra', which is a close analogue of the Schur algebra. The treatment is self-contained; in particular, complete proofs are given of classical theorems of Schensted and Knuth.
Polynomial representations of GLN
Green, James A
1980-01-01
The first half of this book contains the text of the first edition of LNM volume 830, Polynomial Representations of GLn. This classic account of matrix representations, the Schur algebra, the modular representations of GLn, and connections with symmetric groups, has been the basis of much research in representation theory. The second half is an Appendix, and can be read independently of the first. It is an account of the Littelmann path model for the case gln. In this case, Littelmann's 'paths' become 'words', and so the Appendix works with the combinatorics on words. This leads to the representation theory of the 'Littelmann algebra', which is a close analogue of the Schur algebra. The treatment is self-contained; in particular, complete proofs are given of classical theorems of Schensted and Knuth.
Directory of Open Access Journals (Sweden)
Jihang Sun
Full Text Available OBJECTIVE: To evaluate noise reduction and image quality improvement in low-radiation dose chest CT images in children using adaptive statistical iterative reconstruction (ASIR) and a full model-based iterative reconstruction (MBIR) algorithm. METHODS: Forty-five children (age ranging from 28 days to 6 years, median of 1.8 years) who received low-dose chest CT scans were included. An age-dependent noise index (NI) was used for acquisition. Images were retrospectively reconstructed using three methods: MBIR; a blend of 60% ASIR and 40% conventional filtered back-projection (FBP); and FBP. The subjective quality of the images was independently evaluated by two radiologists. Objective noise in the left ventricle (LV), muscle, fat, descending aorta and lung field was measured at the layer with the largest cross-sectional area of the LV, with the region of interest about one fourth to half of the area of the descending aorta. The optimized signal-to-noise ratio (SNR) was calculated. RESULTS: In terms of subjective quality, MBIR images were significantly better than ASIR and FBP in image noise and visibility of tiny structures, but blurred edges were observed. In terms of objective noise, MBIR and ASIR reconstruction decreased the image noise by 55.2% and 31.8%, respectively, for the LV compared with FBP. Similarly, MBIR and ASIR reconstruction increased the SNR by 124.0% and 46.2%, respectively, compared with FBP. CONCLUSION: Compared with FBP and ASIR, overall image quality and noise reduction were significantly improved by MBIR. MBIR could reconstruct eligible chest CT images in children with a lower radiation dose.
International Nuclear Information System (INIS)
Katsura, Masaki; Matsuda, Izuru; Akahane, Masaaki; Sato, Jiro; Akai, Hiroyuki; Yasaka, Koichiro; Kunimatsu, Akira; Ohtomo, Kuni
2012-01-01
To prospectively evaluate dose reduction and image quality characteristics of chest CT reconstructed with model-based iterative reconstruction (MBIR) compared with adaptive statistical iterative reconstruction (ASIR). One hundred patients underwent reference-dose and low-dose unenhanced chest CT with 64-row multidetector CT. Images were reconstructed with 50 % ASIR-filtered back projection blending (ASIR50) for reference-dose CT, and with ASIR50 and MBIR for low-dose CT. Two radiologists assessed the images in a blinded manner for subjective image noise, artefacts and diagnostic acceptability. Objective image noise was measured in the lung parenchyma. Data were analysed using the sign test and pair-wise Student's t-test. Compared with reference-dose CT, there was a 79.0 % decrease in dose-length product with low-dose CT. Low-dose MBIR images had significantly lower objective image noise (16.93 ± 3.00) than low-dose ASIR (49.24 ± 9.11, P < 0.01) and reference-dose ASIR images (24.93 ± 4.65, P < 0.01). Low-dose MBIR images were all diagnostically acceptable. Unique features of low-dose MBIR images included motion artefacts and pixellated blotchy appearances, which did not adversely affect diagnostic acceptability. Diagnostically acceptable chest CT images acquired with nearly 80 % less radiation can be obtained using MBIR. MBIR shows greater potential than ASIR for providing diagnostically acceptable low-dose CT images without severely compromising image quality. (orig.)
Energy Technology Data Exchange (ETDEWEB)
Katsura, Masaki; Matsuda, Izuru; Akahane, Masaaki; Sato, Jiro; Akai, Hiroyuki; Yasaka, Koichiro; Kunimatsu, Akira; Ohtomo, Kuni [University of Tokyo, Department of Radiology, Graduate School of Medicine, Bunkyo-ku, Tokyo (Japan)
2012-08-15
To prospectively evaluate dose reduction and image quality characteristics of chest CT reconstructed with model-based iterative reconstruction (MBIR) compared with adaptive statistical iterative reconstruction (ASIR). One hundred patients underwent reference-dose and low-dose unenhanced chest CT with 64-row multidetector CT. Images were reconstructed with 50 % ASIR-filtered back projection blending (ASIR50) for reference-dose CT, and with ASIR50 and MBIR for low-dose CT. Two radiologists assessed the images in a blinded manner for subjective image noise, artefacts and diagnostic acceptability. Objective image noise was measured in the lung parenchyma. Data were analysed using the sign test and pair-wise Student's t-test. Compared with reference-dose CT, there was a 79.0 % decrease in dose-length product with low-dose CT. Low-dose MBIR images had significantly lower objective image noise (16.93 ± 3.00) than low-dose ASIR (49.24 ± 9.11, P < 0.01) and reference-dose ASIR images (24.93 ± 4.65, P < 0.01). Low-dose MBIR images were all diagnostically acceptable. Unique features of low-dose MBIR images included motion artefacts and pixellated blotchy appearances, which did not adversely affect diagnostic acceptability. Diagnostically acceptable chest CT images acquired with nearly 80 % less radiation can be obtained using MBIR. MBIR shows greater potential than ASIR for providing diagnostically acceptable low-dose CT images without severely compromising image quality. (orig.)
Efficient computation of Laguerre polynomials
A. Gil (Amparo); J. Segura (Javier); N.M. Temme (Nico)
2017-01-01
An efficient algorithm and a Fortran 90 module (LaguerrePol) for computing Laguerre polynomials Ln(α)(z) are presented. The standard three-term recurrence relation satisfied by the polynomials and different types of asymptotic expansions, valid for n large and α small, are used.
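Concretely, the three-term recurrence can be implemented in a few lines. The sketch below is in Python rather than the module's Fortran 90, with an illustrative function name; it evaluates L_n^(α)(z) by forward recursion from L_0 = 1 and L_1 = 1 + α − z:

```python
def laguerre(n, alpha, z):
    """Generalized Laguerre polynomial L_n^(alpha)(z) via the standard
    three-term recurrence:
    (k+1) L_{k+1} = (2k+1+alpha-z) L_k - (k+alpha) L_{k-1}."""
    if n == 0:
        return 1.0
    prev, cur = 1.0, 1.0 + alpha - z
    for k in range(1, n):
        prev, cur = cur, ((2*k + 1 + alpha - z) * cur - (k + alpha) * prev) / (k + 1)
    return cur
```

Forward recursion alone is what this sketch shows; as the abstract notes, the actual module switches to asymptotic expansions where those are more accurate or efficient.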
Optimization over polynomials : Selected topics
Laurent, M.; Jang, Sun Young; Kim, Young Rock; Lee, Dae-Woong; Yie, Ikkwon
2014-01-01
Minimizing a polynomial function over a region defined by polynomial inequalities models broad classes of hard problems from combinatorics, geometry and optimization. New algorithmic approaches have emerged recently for computing the global minimum, by combining tools from real algebra (sums of
Polynomials formalism of quantum numbers
International Nuclear Information System (INIS)
Kazakov, K.V.
2005-01-01
Theoretical aspects of the recently suggested perturbation formalism based on the method of quantum number polynomials are considered in the context of the general anharmonicity problem. Using a diatomic molecule as an example, it is demonstrated how the theory can be extrapolated to the case of vibrational-rotational interactions. As a result, an exact expression for the first coefficient of the Herman-Wallis factor is derived. In addition, the basic notions of the formalism are phenomenologically generalized and expanded to the problem of spin interaction. The concept of magneto-optical anharmonicity is introduced. As a consequence, an exact analogy is drawn with the well-known electro-optical theory of molecules, and a nonlinear dependence of the magnetic dipole moment of the system on the spin and wave variables is established.
Fatigue assessment of the ITER TF coil case based on JJ1 fatigue tests
International Nuclear Information System (INIS)
Hamada, K.; Nakajima, H.; Takano, K.; Kudo, Y.; Tsutsumi, F.; Okuno, K.; Jong, C.
2005-01-01
The material of the TF coil case in ITER is required to withstand cyclic electromagnetic forces applied for up to 3 × 10⁴ cycles at 4.2 K. A cryogenic stainless steel, JJ1, is used in the high-stress regions of the TF coil case. The fatigue characteristics (S-N curves) of JJ1 base metal and welded joints at 4.2 K have been measured. The fatigue strengths of the base metal and the welded joint at 3 × 10⁴ cycles are 1032 and 848 MPa, respectively. The design S-N curve is derived from the measured data taking into account a safety factor of 20 on cycles-to-failure and 2 on fatigue strength; it indicates that the equivalent alternating stress of the case should be kept below 516 MPa for the base metal and 424 MPa for the welded joint at 3 × 10⁴ cycles. It is demonstrated that the TF coil case has enough margin for cyclic operation. It is also shown that the welded joint should be located in low-cyclic-stress regions because residual stress affects the fatigue life.
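The design allowables quoted above follow directly from the stated factor of 2 on fatigue strength (1032/2 = 516 MPa, 848/2 = 424 MPa); the factor of 20 is applied on cycles along the measured S-N curve and is not shown here. A trivial sketch of the stress-factor step (the function name is illustrative):

```python
def design_allowable(measured_strength_mpa, stress_factor=2.0):
    """Allowable alternating stress from the factor-of-2-on-stress rule
    quoted in the abstract. The companion factor of 20 is applied on
    cycles-to-failure along the S-N curve, not on stress."""
    return measured_strength_mpa / stress_factor

print(design_allowable(1032.0))  # base metal   -> 516.0 MPa
print(design_allowable(848.0))   # welded joint -> 424.0 MPa
```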
LDPC-based iterative joint source-channel decoding for JPEG2000.
Pu, Lingling; Wu, Zhenyu; Bilgin, Ali; Marcellin, Michael W; Vasic, Bane
2007-02-01
A framework is proposed for iterative joint source-channel decoding of JPEG2000 codestreams. At the encoder, JPEG2000 is used to perform source coding with certain error-resilience (ER) modes, and LDPC codes are used to perform channel coding. During decoding, the source decoder uses the ER modes to identify corrupt sections of the codestream and provides this information to the channel decoder. Decoding is carried out jointly in an iterative fashion. Experimental results indicate that the proposed method requires fewer iterations and improves overall system performance.
Energy Technology Data Exchange (ETDEWEB)
Saha, Krishnendu [Ohio Medical Physics Consulting, Dublin, Ohio 43017 (United States); Straus, Kenneth J.; Glick, Stephen J. [Department of Radiology, University of Massachusetts Medical School, Worcester, Massachusetts 01655 (United States); Chen, Yu. [Department of Radiation Oncology, Columbia University, New York, New York 10032 (United States)
2014-08-28
To maximize sensitivity, it is desirable that ring positron emission tomography (PET) systems dedicated to imaging the breast have a small bore. Unfortunately, due to parallax error, this causes substantial degradation in spatial resolution for objects near the periphery of the breast. In this work, a framework for computing and incorporating an accurate system matrix into iterative reconstruction is presented in an effort to reduce spatial resolution degradation towards the periphery of the breast. The GATE Monte Carlo simulation software was utilized to accurately model the system matrix for a breast PET system. To increase the count statistics in the system matrix computation and to reduce the system element storage space, only a subset of matrix elements was calculated, and the remaining elements were estimated by using the geometric symmetry of the cylindrical scanner. To implement this strategy, polar voxel basis functions were used to represent the object, resulting in a block-circulant system matrix. Simulation studies using a breast PET scanner model with ring geometry demonstrated improved contrast at a 45% reduced noise level and a 1.5- to 3-fold improvement in resolution performance when compared with MLEM reconstruction using a simple line-integral model. The GATE-based system matrix reconstruction technique promises to improve resolution and noise performance and reduce image distortion at the FOV periphery compared with line-integral-based system matrix reconstruction.
Paul, Sabyasachi; Sarkar, P K
2013-04-01
Use of wavelet transformation in stationary signal processing has been demonstrated for denoising the measured spectra and characterisation of radionuclides in the in vivo monitoring analysis, where difficulties arise due to the very low activity level to be estimated in biological systems. The large statistical fluctuations often make the identification of characteristic gammas from radionuclides highly uncertain, particularly when interferences from progenies are also present. A new wavelet-based noise filtering methodology has been developed for better detection of gamma peaks in noisy data. This sequential, iterative filtering method uses the wavelet multi-resolution approach for noise rejection and an inverse transform after soft 'thresholding' over the generated coefficients. Analyses of in vivo monitoring data of ²³⁵U and ²³⁸U were carried out using this method without disturbing the peak position and amplitude while achieving a 3-fold improvement in the signal-to-noise ratio, compared with the original measured spectrum. When compared with other data-filtering techniques, the wavelet-based method shows the best results.
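A minimal sketch of the soft-thresholding idea, using a single-level Haar transform as a stand-in for the multi-resolution analysis described above (the paper's wavelet family, decomposition depth, and threshold rule are not specified here, so those choices are assumptions):

```python
import numpy as np

def soft_threshold(c, t):
    """Shrink coefficients toward zero: sign(c) * max(|c| - t, 0)."""
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

def haar_denoise(signal, threshold):
    """One-level Haar wavelet denoising of an even-length signal:
    transform, soft-threshold the detail band, inverse transform."""
    x = np.asarray(signal, dtype=float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2)   # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # detail coefficients (noise-dominated)
    d = soft_threshold(d, threshold)        # suppress small, noise-like details
    out = np.empty_like(x)
    out[0::2] = (a + d) / np.sqrt(2)        # inverse Haar transform
    out[1::2] = (a - d) / np.sqrt(2)
    return out
```

With threshold 0 the transform pair is exact, which is why this scheme preserves peak position and amplitude when the threshold is chosen below the peak coefficients.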
A dimension decomposition approach based on iterative observer design for an elliptic Cauchy problem
Majeed, Muhammad Usman; Laleg-Kirati, Taous-Meriem
2015-01-01
A state-observer-inspired iterative algorithm is presented to solve a boundary estimation problem for the Laplace equation, using one of the space variables as a time-like variable. Three-dimensional domain with two congruent parallel surfaces
Higher order branching of periodic orbits from polynomial isochrones
Directory of Open Access Journals (Sweden)
B. Toni
1999-09-01
Full Text Available We discuss the higher order local bifurcations of limit cycles from polynomial isochrones (linearizable centers when the linearizing transformation is explicitly known and yields a polynomial perturbation one-form. Using a method based on the relative cohomology decomposition of polynomial one-forms complemented with a step reduction process, we give an explicit formula for the overall upper bound of branch points of limit cycles in an arbitrary $n$ degree polynomial perturbation of the linear isochrone, and provide an algorithmic procedure to compute the upper bound at successive orders. We derive a complete analysis of the nonlinear cubic Hamiltonian isochrone and show that at most nine branch points of limit cycles can bifurcate in a cubic polynomial perturbation. Moreover, perturbations with exactly two, three, four, six, and nine local families of limit cycles may be constructed.
Primitive polynomials selection method for pseudo-random number generator
Anikin, I. V.; Alnajjar, Kh
2018-01-01
In this paper we suggest a method for selecting primitive polynomials of a special type. Such polynomials can be efficiently used as characteristic polynomials for linear feedback shift registers in pseudo-random number generators. The proposed method consists of two basic steps: finding minimum-cost irreducible polynomials of the desired degree and applying primitivity tests to obtain the primitive ones. Finally, two primitive polynomials found by the proposed method were used in the fuzzy-logic-based pseudo-random number generator (FRNG) suggested earlier by the authors. The sequences generated by the new version of the FRNG have low correlation magnitude, high linear complexity and lower power consumption, and are more balanced with better statistical properties.
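To illustrate why primitivity matters: a linear feedback shift register whose characteristic polynomial is primitive attains the maximal period 2^n − 1 from any nonzero seed. A small Python sketch with a degree-4 register (the tap choice below yields a primitive degree-4 characteristic polynomial under this shift convention; these are not the authors' polynomials):

```python
def lfsr_period(seed=0b0001, nbits=4):
    """Count the state period of a small Fibonacci LFSR. With a primitive
    characteristic polynomial the period is maximal: 2**nbits - 1."""
    def step(s):
        fb = ((s >> (nbits - 1)) ^ s) & 1        # XOR of the two tapped bits
        return ((s << 1) | fb) & ((1 << nbits) - 1)
    s, count = step(seed), 1
    while s != seed:
        s, count = step(s), count + 1
    return count

print(lfsr_period())  # -> 15, i.e. 2**4 - 1
```

A non-primitive (merely irreducible, or reducible) polynomial would split the nonzero states into shorter cycles, which is exactly what the paper's primitivity tests rule out.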
Liu, Chuang; Lam, H. K.
2015-01-01
In this paper, we propose a polynomial fuzzy observer controller for nonlinear systems, where the design is achieved through the stability analysis of polynomial-fuzzy-model-based (PFMB) observer-control system. The polynomial fuzzy observer estimates the system states using estimated premise variables. The estimated states are then employed by the polynomial fuzzy controller for the feedback control of nonlinear systems represented by the polynomial fuzzy model. The system stability of the P...
Wong-Loya, J. A.; Santoyo, E.; Andaverde, J. A.; Quiroz-Ruiz, A.
2015-12-01
A Web-Based Computer System (RPM-WEBBSYS) has been developed for applying the Rational Polynomial Method (RPM) to estimate static formation temperatures (SFT) of geothermal and petroleum wells. The system is also capable of reproducing the full thermal recovery processes that occur during well completion. RPM-WEBBSYS has been programmed using advances in information technology to compute SFT more efficiently. RPM-WEBBSYS can be easily and rapidly executed from any computing device (e.g., personal computers and portable devices such as tablets or smartphones) with Internet access and a web browser. The computer system was validated using bottomhole temperature (BHT) measurements logged in a synthetic heat transfer experiment, where a good match between predicted and true SFT was achieved. RPM-WEBBSYS was finally applied to BHT logs collected during well drilling and shut-in operations, where the typical problems of under- and over-estimation of the SFT (exhibited by most existing analytical methods) were effectively corrected.
On generalized Fibonacci and Lucas polynomials
Energy Technology Data Exchange (ETDEWEB)
Nalli, Ayse [Department of Mathematics, Faculty of Sciences, Selcuk University, 42075 Campus-Konya (Turkey)], E-mail: aysenalli@yahoo.com; Haukkanen, Pentti [Department of Mathematics, Statistics and Philosophy, 33014 University of Tampere (Finland)], E-mail: mapehau@uta.fi
2009-12-15
Let h(x) be a polynomial with real coefficients. We introduce h(x)-Fibonacci polynomials that generalize both Catalan's Fibonacci polynomials and Byrd's Fibonacci polynomials and also the k-Fibonacci numbers, and we provide properties for these h(x)-Fibonacci polynomials. We also introduce h(x)-Lucas polynomials that generalize the Lucas polynomials and present properties of these polynomials. In the last section we introduce the matrix Q{sub h}(x) that generalizes the Q-matrix whose powers generate the Fibonacci numbers.
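For illustration, the h(x)-Fibonacci polynomials can be evaluated from the recurrence F_{h,0}(x) = 0, F_{h,1}(x) = 1, F_{h,n+1}(x) = h(x)·F_{h,n}(x) + F_{h,n−1}(x), which is the form such generalizations usually take (assumed here rather than quoted from the paper). Taking h(x) = x recovers the classical Fibonacci polynomials and h(x) = 1 the Fibonacci numbers:

```python
def h_fibonacci(n, x, h):
    """Evaluate the h(x)-Fibonacci polynomial F_{h,n} at the point x,
    given h as a callable, via the assumed recurrence
    F_{h,n+1}(x) = h(x) * F_{h,n}(x) + F_{h,n-1}(x)."""
    prev, cur = 0, 1        # F_{h,0} = 0, F_{h,1} = 1
    for _ in range(n):
        prev, cur = cur, h(x) * cur + prev
    return prev
```

For example, with h(x) = x the fourth polynomial is x³ + 2x, so at x = 2 one gets 12; with h(x) = 1 the values are the Fibonacci numbers.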
Tuning iteration space slicing based tiled multi-core code implementing Nussinov's RNA folding.
Palkowski, Marek; Bielecki, Wlodzimierz
2018-01-15
RNA folding is an ongoing compute-intensive task in bioinformatics. Parallelization and improving code locality for this kind of algorithm is one of the most relevant areas in computational biology. Fortunately, RNA secondary structure approaches, such as Nussinov's recurrence, involve mathematical operations over affine control loops whose iteration space can be represented by the polyhedral model. This allows us to apply powerful polyhedral compilation techniques, based on the transitive closure of dependence graphs, to generate parallel tiled code implementing Nussinov's RNA folding. Such techniques fall within the iteration space slicing framework: the transitive dependences are applied to the statement instances of interest to produce valid tiles. The main problem in generating parallel tiled code is choosing a proper tile size and tile dimension, which impact the degree of parallelism and code locality. To choose the best tile size and tile dimension, we first construct parallel parametric tiled code (the parameters are variables defining tile size). For this purpose, we generate two non-parametric tiled codes with different fixed tile sizes but the same code structure, and then derive a general affine model that describes all integer factors appearing in the expressions of those codes. Using this model and the known integer factors present in those expressions (they define the left-hand side of the model), we find the unknown integers in the model for each integer factor occurring at the same position in the fixed tiled code, and replace the expressions containing integer factors in that code with expressions containing parameters. We then use this parallel parametric tiled code to implement the well-known tile size selection (TSS) technique, which allows us to discover, in a given search space, the best tile size and tile dimension maximizing target code performance. For a given search space, the presented approach allows us to choose the best tile size and tile dimension in
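For reference, the Nussinov recurrence that the tiled code implements is a small dynamic program over a triangular, affine iteration space. A plain, untiled, sequential Python version (the minimum hairpin-loop size is an illustrative parameter):

```python
def nussinov(seq, min_loop=1):
    """Nussinov base-pair maximization: N[i][j] holds the maximum number
    of complementary pairs in seq[i..j]. The affine loop nest over
    (span, i, k) is what makes polyhedral tiling applicable."""
    pairs = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G")}
    n = len(seq)
    N = [[0] * n for _ in range(n)]
    for span in range(min_loop + 1, n):          # increasing subsequence length
        for i in range(n - span):
            j = i + span
            # pair ends i and j, or split at k (the split also covers
            # the "leave i or j unpaired" cases)
            best = N[i + 1][j - 1] + (1 if (seq[i], seq[j]) in pairs else 0)
            best = max(best, max(N[i][k] + N[k + 1][j] for k in range(i, j)))
            N[i][j] = best
    return N[0][n - 1]
```

The O(n³) triple loop here is the kernel whose dependences the transitive-closure approach analyzes to produce valid tiles.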
International Nuclear Information System (INIS)
Raeder, J.; Piet, S.; Buende, R.
1991-01-01
As part of the series of publications by the IAEA that summarize the results of the Conceptual Design Activities for the ITER project, this document describes the ITER safety analyses. It contains an assessment of normal operation effluents, accident scenarios, plasma chamber safety, tritium system safety, magnet system safety, external loss-of-coolant and loss-of-coolant-flow problems, and a waste management assessment, and it describes the implementation of the safety approach for ITER. The document ends with a list of major conclusions, a set of topical remarks on technical safety issues, recommendations for the Engineering Design Activities, safety considerations for siting ITER, and recommendations with regard to the safety issues of the R and D for ITER. Refs, figs and tabs
Liu, Hui; Li, Yingzi; Zhang, Yingxu; Chen, Yifu; Song, Zihang; Wang, Zhenyu; Zhang, Suoxin; Qian, Jianqiang
2018-01-01
Proportional-integral-derivative (PID) parameters play a vital role in the imaging process of an atomic force microscope (AFM). Traditional parameter tuning methods require a lot of manpower, and it is difficult to set PID parameters in unattended working environments. In this manuscript, an intelligent tuning method of PID parameters based on iterative learning control is proposed to self-adjust the PID parameters of the AFM according to the sample topography. The method gathers information about the PID controller's output signals and the tracking error by repeated line scanning until convergence, before normal scanning, in order to learn the topography; this information is then used to calculate the proper PID parameters. Subsequently, the appropriate PID parameters are obtained by a fitting method and applied to the normal scanning process. The feasibility of the method is demonstrated by a convergence analysis. Simulations and experimental results indicate that the proposed method can intelligently tune the PID parameters of the AFM for imaging different topographies and thus achieve good tracking performance. Copyright © 2017 Elsevier Ltd. All rights reserved.
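The trial-to-trial update at the heart of iterative learning control can be sketched on a toy plant. The first-order dynamics, learning gain, and trial count below are illustrative assumptions, not the AFM model or the paper's tuning law:

```python
def run_trial(u):
    """One scan of a toy first-order plant y[t] = 0.5*y[t-1] + u[t]."""
    y, out = 0.0, []
    for ut in u:
        y = 0.5 * y + ut
        out.append(y)
    return out

def ilc(reference, iterations=30, gain=0.5):
    """P-type ILC: repeat the same trajectory, each time correcting the
    whole input profile with the previous trial's tracking error,
    u_{k+1}(t) = u_k(t) + gain * e_k(t). Returns the final max error."""
    u = [0.0] * len(reference)
    for _ in range(iterations):
        y = run_trial(u)
        u = [ut + gain * (r - yt) for ut, r, yt in zip(u, reference, y)]
    y = run_trial(u)
    return max(abs(r - yt) for r, yt in zip(reference, y))
```

The error contracts from trial to trial (here the iteration matrix has spectral radius 0.5), mirroring the "repeated line scanning until convergence" step described above.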
Development of Acoustic Model-Based Iterative Reconstruction Technique for Thick-Concrete Imaging
Energy Technology Data Exchange (ETDEWEB)
Almansouri, Hani [Purdue University; Clayton, Dwight A [ORNL; Kisner, Roger A [ORNL; Polsky, Yarom [ORNL; Bouman, Charlie [Purdue University; Santos-Villalobos, Hector J [ORNL
2015-01-01
Ultrasound signals have been used extensively for non-destructive evaluation (NDE). However, typical reconstruction techniques, such as the synthetic aperture focusing technique (SAFT), are limited to quasi-homogenous thin media. New ultrasonic systems and reconstruction algorithms are needed for one-sided NDE of non-homogenous thick objects. An application example space is imaging of reinforced concrete structures for commercial nuclear power plants (NPPs). These structures provide important foundation, support, shielding, and containment functions. Identification and management of aging and degradation of concrete structures is fundamental to the proposed long-term operation of NPPs. Another example is geothermal and oil/gas production wells. These multi-layered structures are composed of steel, cement, and several types of soil and rocks. Ultrasound systems with greater penetration range and image quality will allow for better monitoring of the well's health and prediction of high-pressure hydraulic fracturing of the rock. These application challenges need to be addressed with an integrated imaging approach, where the application, hardware, and reconstruction software are highly integrated and optimized. Therefore, we are developing an ultrasonic system with Model-Based Iterative Reconstruction (MBIR) as the image reconstruction backbone. As the first implementation of MBIR for ultrasonic signals, this paper documents the algorithm and shows reconstruction results for synthetically generated data.
Development of Acoustic Model-Based Iterative Reconstruction Technique for Thick-Concrete Imaging
Energy Technology Data Exchange (ETDEWEB)
Almansouri, Hani [Purdue University; Clayton, Dwight A [ORNL; Kisner, Roger A [ORNL; Polsky, Yarom [ORNL; Bouman, Charlie [Purdue University; Santos-Villalobos, Hector J [ORNL
2016-01-01
Ultrasound signals have been used extensively for non-destructive evaluation (NDE). However, typical reconstruction techniques, such as the synthetic aperture focusing technique (SAFT), are limited to quasi-homogenous thin media. New ultrasonic systems and reconstruction algorithms are needed for one-sided NDE of non-homogenous thick objects. An application example space is imaging of reinforced concrete structures for commercial nuclear power plants (NPPs). These structures provide important foundation, support, shielding, and containment functions. Identification and management of aging and degradation of concrete structures is fundamental to the proposed long-term operation of NPPs. Another example is geothermal and oil/gas production wells. These multi-layered structures are composed of steel, cement, and several types of soil and rocks. Ultrasound systems with greater penetration range and image quality will allow for better monitoring of the well's health and prediction of high-pressure hydraulic fracturing of the rock. These application challenges need to be addressed with an integrated imaging approach, where the application, hardware, and reconstruction software are highly integrated and optimized. Therefore, we are developing an ultrasonic system with Model-Based Iterative Reconstruction (MBIR) as the image reconstruction backbone. As the first implementation of MBIR for ultrasonic signals, this paper documents the algorithm and shows reconstruction results for synthetically generated data.
Development of acoustic model-based iterative reconstruction technique for thick-concrete imaging
Almansouri, Hani; Clayton, Dwight; Kisner, Roger; Polsky, Yarom; Bouman, Charles; Santos-Villalobos, Hector
2016-02-01
Ultrasound signals have been used extensively for non-destructive evaluation (NDE). However, typical reconstruction techniques, such as the synthetic aperture focusing technique (SAFT), are limited to quasi-homogenous thin media. New ultrasonic systems and reconstruction algorithms are needed for one-sided NDE of non-homogenous thick objects. An application example space is imaging of reinforced concrete structures for commercial nuclear power plants (NPPs). These structures provide important foundation, support, shielding, and containment functions. Identification and management of aging and degradation of concrete structures is fundamental to the proposed long-term operation of NPPs. Another example is geothermal and oil/gas production wells. These multi-layered structures are composed of steel, cement, and several types of soil and rocks. Ultrasound systems with greater penetration range and image quality will allow for better monitoring of the well's health and prediction of high-pressure hydraulic fracturing of the rock. These application challenges need to be addressed with an integrated imaging approach, where the application, hardware, and reconstruction software are highly integrated and optimized. Therefore, we are developing an ultrasonic system with Model-Based Iterative Reconstruction (MBIR) as the image reconstruction backbone. As the first implementation of MBIR for ultrasonic signals, this paper documents the algorithm and shows reconstruction results for synthetically generated data.
Fuzzy based method for project planning of the infrastructure design for the diagnostic in ITER
International Nuclear Information System (INIS)
Piros, Attila; Veres, Gábor
2013-01-01
Long-term design projects need special preparation before execution starts. This preparation usually includes drawing the network diagram for the whole procedure. This diagram includes the time estimates of the individual subtasks and gives information about the predicted dates of the milestones. The critical path calculated in this network characterizes the duration of a specific design project very well. Several methods are available to support this step of preparation. This paper describes a new method to map the structure of the design process, clarify the milestones and predict their dates. The method is based on the PERT (Project Evaluation and Review Technique) network but, as a novelty, it applies fuzzy logic to determine the corresponding times in this graph. The application of fuzzy logic makes the handling of different kinds of design uncertainty feasible. Many kinds of design uncertainty exist, from a possible electric blackout to the illness of an engineer. In many cases these uncertainties are related to human errors and described with linguistic expressions. Fuzzy logic makes it possible to transform these ambiguous expressions into numeric values for further mathematical evaluation. The method is introduced in the planning of the design project of the infrastructure for the diagnostic systems of ITER. The method not only helps the project in the planning phase, but will also be a powerful tool in the mathematical modeling and monitoring of project execution.
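As an illustration of the fuzzy timing idea, activity times along a path can be represented as triangular fuzzy numbers, added componentwise, and defuzzified by centroid. The paper does not specify its membership functions, so this triangular representation and the task durations below are assumptions:

```python
def add_tri(a, b):
    """Sum of two triangular fuzzy numbers given as (min, mode, max)."""
    return tuple(x + y for x, y in zip(a, b))

def centroid(t):
    """Defuzzify a triangular fuzzy number by its centroid."""
    return sum(t) / 3.0

# Fuzzy durations of a chain of subtasks (hypothetical values, in days).
tasks = [(2, 3, 5), (4, 6, 9), (1, 2, 3)]
total = (0, 0, 0)
for t in tasks:
    total = add_tri(total, t)
print(total, centroid(total))  # (7, 11, 17), centroid ~ 11.67 days
```

Linguistic estimates ("about 3 days, at worst 5") map naturally onto such triples, which is the transformation into numeric values the abstract refers to.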
Xu, Jiayuan; Yu, Chengtao; Bo, Bin; Xue, Yu; Xu, Changfu; Chaminda, P. R. Dushantha; Hu, Chengbo; Peng, Kai
2018-03-01
The automatic recognition of the high-voltage isolation switch state by remote video monitoring is an effective means of ensuring the safety of personnel and equipment. Existing methods mainly include two approaches: improving monitoring accuracy and adopting target detection technology through equipment transformation. Such methods are often tied to specific scenarios, with limited application scope and high cost. To solve this problem, a high-voltage isolation switch state recognition method based on background difference and iterative search is proposed in this paper. The initial position of the switch is detected in real time by the background difference method. When the switch starts to open or close, a target tracking algorithm is used to track the motion trajectory of the switch. The opening and closing state of the switch is determined according to the angle variation between the switch tracking point and the center line. The effectiveness of the method is verified by experiments on video frames of different switching states. Compared with traditional methods, this method is more robust and effective.
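The background-difference step can be sketched as an absolute difference against a reference frame followed by a threshold (the threshold value and the toy frames below are illustrative, not from the paper):

```python
import numpy as np

def background_difference(frame, background, threshold=25):
    """Foreground mask by absolute background differencing, a standard
    first step for detecting a moving object such as a switch arm."""
    diff = np.abs(frame.astype(int) - background.astype(int))
    return diff > threshold

bg = np.zeros((4, 4), dtype=np.uint8)   # static background
fr = bg.copy()
fr[1:3, 1:3] = 200                      # bright region where the arm moved
mask = background_difference(fr, bg)
print(int(mask.sum()))                  # -> 4 foreground pixels
```

In the described method, the centroid of such a mask would seed the tracking algorithm, whose trajectory angle then decides the open/closed state.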
Fuzzy based method for project planning of the infrastructure design for the diagnostic in ITER
Energy Technology Data Exchange (ETDEWEB)
Piros, Attila, E-mail: attila.piros@gt3.bme.hu [Department of Machine and Product Design, Budapest University of Technology and Economics, Budapest (Hungary); Veres, Gábor [Department of Plasma Physics, Wigner Research Centre for Physics, Hungarian Academy of Sciences, Budapest (Hungary)
2013-10-15
Long-term design projects need special preparation before execution starts. This preparation usually includes drawing the network diagram for the whole procedure. This diagram includes the time estimates of the individual subtasks and gives information about the predicted dates of the milestones. The critical path calculated in this network characterizes the duration of a specific design project very well. Several methods are available to support this step of preparation. This paper describes a new method to map the structure of the design process, clarify the milestones and predict their dates. The method is based on the PERT (Project Evaluation and Review Technique) network but, as a novelty, it applies fuzzy logic to determine the corresponding times in this graph. The application of fuzzy logic makes the handling of different kinds of design uncertainty feasible. Many kinds of design uncertainty exist, from a possible electric blackout to the illness of an engineer. In many cases these uncertainties are related to human errors and described with linguistic expressions. Fuzzy logic makes it possible to transform these ambiguous expressions into numeric values for further mathematical evaluation. The method is introduced in the planning of the design project of the infrastructure for the diagnostic systems of ITER. The method not only helps the project in the planning phase, but will also be a powerful tool in the mathematical modeling and monitoring of project execution.
Budianto; Lun, Daniel P K
2015-12-01
Conventional fringe projection profilometry methods often have difficulty in reconstructing the 3D model of objects when the fringe images have the so-called highlight regions due to strong illumination from nearby light sources. Within a highlight region, the fringe pattern is often overwhelmed by the strong reflected light. Thus, the 3D information of the object, which is originally embedded in the fringe pattern, can no longer be retrieved. In this paper, a novel inpainting algorithm is proposed to restore the fringe images in the presence of highlights. The proposed method first detects the highlight regions based on a Gaussian mixture model. Then, a geometric sketch of the missing fringes is made and used as the initial guess of an iterative regularization procedure for regenerating the missing fringes. The simulation and experimental results show that the proposed algorithm can accurately reconstruct the 3D model of objects even when their fringe images have large highlight regions. It significantly outperforms the traditional approaches in both quantitative and qualitative evaluations.
Iterative student-based testing of automated information-handling exercises
Directory of Open Access Journals (Sweden)
C. K. Ramaiah
1995-12-01
Full Text Available Much laboratory teaching of information-handling involves students in evaluating information provided either online or via a computer package. A lecturer can help students carry out these tasks in a variety of ways. In particular, it is customary to provide students with hand-outs, and there is good evidence that such hand-outs are a valuable resource, especially for lower-ability students (see, for example, Saloman, 1979). In many of these exercises, students are passive receivers of information, in the sense that they assess the information but do not change it. However, it is sometimes possible to use student feedback to change the original input. In this case, the users' mental models of the system can be employed to modify the user-interface set up by the original designer (see Moran, 1981). A number of experiments have been carried out in the Department of Information and Library Studies at Loughborough University to examine how computer interfaces and instruction sheets used in teaching can be improved by student feedback. The present paper discusses examples of this work to help suggest both the factors to be taken into account and the sorts of changes involved. Our approach has been based on the concept of 'iterative usability testing', the value of which has recently been emphasized by Shneiderman (1993).
Directory of Open Access Journals (Sweden)
Yunhan Lin
2016-01-01
Full Text Available End-effector pose correction and compensation are necessary means of realizing accurate motion control of a manipulator. In this article, we first establish the kinematic model and error model of the modular manipulator (WUST-ARM), and then discuss the measurement methods and precision of the inertial measurement unit (IMU) sensor. The IMU sensor is mounted on the end-effector of the modular manipulator to obtain the real-time pose of the end-effector. Finally, a new IMU-based iterative pose compensation algorithm is proposed. Applying this algorithm in pose compensation experiments on a modular manipulator composed of low-cost rotation joints shows that the IMU achieves high precision in the static state; after a brief delay, when the end-effector moves to the target point, it feeds an accurate error compensation angle back to the control system, and after compensation the precision errors of the roll, pitch, and yaw angles reach 0.05°, 0.01°, and 0.27°, respectively. This proves that this low-cost method provides a new solution for improving the end-effector pose of low-cost modular manipulators.
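The measure-wait-compensate loop described above can be sketched as a simple feedback iteration. All numbers and names here are hypothetical (including the assumed 90% execution factor of a commanded correction); the paper's actual compensation law and IMU noise figures are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)

def imu_iterative_compensation(target, pose, tol=0.05, noise_deg=0.01,
                               max_iter=20):
    """Iteratively correct roll/pitch/yaw (degrees) using noisy IMU feedback.

    Each cycle reads the residual orientation error through the IMU
    (with static-state measurement noise) and commands a correction;
    the loop stops once the measured error is inside the tolerance."""
    pose = np.asarray(pose, dtype=float).copy()
    for _ in range(max_iter):
        measured_error = np.asarray(target) - pose + rng.normal(0, noise_deg, 3)
        if np.all(np.abs(measured_error) < tol):
            break
        # assume the joints execute ~90% of the commanded correction
        # (a hypothetical backlash/compliance factor, not from the paper)
        pose += 0.9 * measured_error
    return pose
```

With the assumed noise level the loop converges in a handful of cycles; the achievable accuracy is bounded by the static IMU noise, mirroring the paper's observation that static measurements set the precision floor.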
Siripatana, Adil
2017-06-08
Bayesian estimation/inversion is commonly used to quantify and reduce modeling uncertainties in coastal ocean models, especially in the framework of parameter estimation. Based on Bayes' rule, the posterior probability distribution function (pdf) of the estimated quantities is obtained conditioned on available data. It can be computed either directly, using a Markov chain Monte Carlo (MCMC) approach, or by sequentially processing the data following a data assimilation approach, which is heavily exploited in large dimensional state estimation problems. The advantage of data assimilation schemes over MCMC-type methods arises from the ability to algorithmically accommodate a large number of uncertain quantities without a significant increase in the computational requirements. However, only approximate estimates are generally obtained by this approach due to the restricted Gaussian prior and noise assumptions that are generally imposed in these methods. This contribution aims at evaluating the effectiveness of utilizing an ensemble Kalman-based data assimilation method for parameter estimation of a coastal ocean model against an MCMC polynomial chaos (PC)-based scheme. We focus on quantifying the uncertainties of a coastal ocean ADvanced CIRCulation (ADCIRC) model with respect to the Manning’s n coefficients. Based on a realistic framework of observation system simulation experiments (OSSEs), we apply an ensemble Kalman filter and the MCMC method employing a surrogate of ADCIRC constructed by a non-intrusive PC expansion for evaluating the likelihood, and test both approaches under identical scenarios. We study the sensitivity of the estimated posteriors with respect to the parameters of the inference methods, including ensemble size, inflation factor, and PC order. A full analysis of both methods, in the context of coastal ocean models, suggests that an ensemble Kalman filter with appropriate ensemble size and well-tuned inflation provides reliable mean estimates and
Siripatana, Adil; Mayo, Talea; Sraj, Ihab; Knio, Omar; Dawson, Clint; Le Maitre, Olivier; Hoteit, Ibrahim
2017-08-01
Bayesian estimation/inversion is commonly used to quantify and reduce modeling uncertainties in coastal ocean models, especially in the framework of parameter estimation. Based on Bayes' rule, the posterior probability distribution function (pdf) of the estimated quantities is obtained conditioned on available data. It can be computed either directly, using a Markov chain Monte Carlo (MCMC) approach, or by sequentially processing the data following a data assimilation approach, which is heavily exploited in large dimensional state estimation problems. The advantage of data assimilation schemes over MCMC-type methods arises from the ability to algorithmically accommodate a large number of uncertain quantities without a significant increase in the computational requirements. However, only approximate estimates are generally obtained by this approach due to the restricted Gaussian prior and noise assumptions that are generally imposed in these methods. This contribution aims at evaluating the effectiveness of utilizing an ensemble Kalman-based data assimilation method for parameter estimation of a coastal ocean model against an MCMC polynomial chaos (PC)-based scheme. We focus on quantifying the uncertainties of a coastal ocean ADvanced CIRCulation (ADCIRC) model with respect to the Manning's n coefficients. Based on a realistic framework of observation system simulation experiments (OSSEs), we apply an ensemble Kalman filter and the MCMC method employing a surrogate of ADCIRC constructed by a non-intrusive PC expansion for evaluating the likelihood, and test both approaches under identical scenarios. We study the sensitivity of the estimated posteriors with respect to the parameters of the inference methods, including ensemble size, inflation factor, and PC order. A full analysis of both methods, in the context of coastal ocean models, suggests that an ensemble Kalman filter with appropriate ensemble size and well-tuned inflation provides reliable mean estimates and
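The ensemble Kalman parameter-estimation scheme described above can be sketched for a single Manning-type coefficient, with a hypothetical linear surrogate standing in for the ADCIRC forward model. The function names, the surrogate, and all numbers below are assumptions for illustration, not the paper's configuration; only the structure (inflation, cross-covariances, stochastic observation perturbations) follows the standard stochastic EnKF.

```python
import numpy as np

rng = np.random.default_rng(0)

def enkf_update(params, preds, obs, obs_var, inflation=1.05):
    """One stochastic ensemble Kalman analysis step for a scalar parameter.

    params : (Ne,) ensemble of the uncertain parameter
    preds  : (Ne, m) forward-model output of each member
    obs    : (m,) observed data, with observation variance obs_var
    """
    # multiplicative inflation of the parameter ensemble about its mean
    params = params.mean() + inflation * (params - params.mean())
    pa = params - params.mean()
    da = preds - preds.mean(axis=0)
    ne = len(params)
    cov_py = pa @ da / (ne - 1)                       # param/obs cross-covariance
    cov_yy = da.T @ da / (ne - 1) + obs_var * np.eye(len(obs))
    gain = np.linalg.solve(cov_yy, cov_py)            # Kalman gain, shape (m,)
    perturbed = obs + rng.normal(0, np.sqrt(obs_var), (ne, len(obs)))
    return params + (perturbed - preds) @ gain

def forward(n):
    # hypothetical linear surrogate: "water levels" responding to friction n
    return np.array([2.0, -1.0, 0.5]) * n
```

A toy assimilation loop draws a prior ensemble of n, runs `forward` for each member, and calls `enkf_update` over a few cycles; with the linear surrogate the ensemble mean settles near the true coefficient, illustrating why ensemble size and inflation are the tuning knobs studied in the paper.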
Parallel Construction of Irreducible Polynomials
DEFF Research Database (Denmark)
Frandsen, Gudmund Skovbjerg
Let arithmetic pseudo-NC^k denote the problems that can be solved by log space uniform arithmetic circuits over the finite prime field GF(p) of depth O(log^k (n + p)) and size polynomial in (n + p). We show that the problem of constructing an irreducible polynomial of specified degree over GF(p) ...... of polynomials is in arithmetic NC^3. Our algorithm works over any field and compared to other known algorithms it does not assume the ability to take p'th roots when the field has characteristic p....
Orthogonal polynomials in transport theories
International Nuclear Information System (INIS)
Dehesa, J.S.
1981-01-01
The asymptotic (k → ∞) behaviour of the zeros of the polynomials g_k^{(m)}(ν) encountered in the treatment of direct and inverse problems of scattering in neutron transport as well as radiative transfer theories is investigated in terms of the amplitudes w̄_k of the k-th Legendre polynomials needed in the expansion of the scattering function. The parameters w̄_k describe the anisotropy of scattering of the medium considered. In particular, it is shown that the asymptotic density of zeros of the polynomials g_k^{(m)}(ν) is an inverted semicircle for the anisotropic non-multiplying scattering medium
Yan, Yumeng; Wen, Zeyu; Zhang, Di; Huang, Sheng-You
2018-05-18
RNA-RNA interactions play fundamental roles in gene and cell regulation. Therefore, accurate prediction of RNA-RNA interactions is critical to determine their complex structures and understand the molecular mechanism of the interactions. Here, we have developed a physics-based double-iterative strategy to determine the effective potentials for RNA-RNA interactions based on a training set of 97 diverse RNA-RNA complexes. The double-iterative strategy circumvented the reference state problem in knowledge-based scoring functions by updating the potentials through iteration and also overcame the decoy-dependent limitation in previous iterative methods by constructing the decoys iteratively. The derived scoring function, which is referred to as DITScoreRR, was evaluated on an RNA-RNA docking benchmark of 60 test cases and compared with three other scoring functions. For bound docking, DITScoreRR obtained excellent success rates of 90% and 98.3% in binding mode prediction when the top 1 and 10 predictions were considered, compared to 63.3% and 71.7% for van der Waals interactions, 45.0% and 65.0% for ITScorePP, and 11.7% and 26.7% for ZDOCK 2.1, respectively. For unbound docking, DITScoreRR achieved good success rates of 53.3% and 71.7% in binding mode prediction when the top 1 and 10 predictions were considered, compared to 13.3% and 28.3% for van der Waals interactions, 11.7% and 26.7% for ITScorePP, and 3.3% and 6.7% for ZDOCK 2.1, respectively. DITScoreRR also performed significantly better in ranking decoys and obtained significantly higher score-RMSD correlations than the other three scoring functions. DITScoreRR will be of great value for the prediction and design of RNA structures and RNA-RNA complexes.
Xie, Q.; Lu, S.; Costola, D.; Hensen, J.L.M.
2014-01-01
In performance-based fire protection design of buildings, much attention is paid to design parameters by fire engineers or experts. However, due to the time-consuming evacuation models, it is computationally prohibitive to adopt the conventional Monte Carlo simulation (MCS) to examine the effect of
Directory of Open Access Journals (Sweden)
Jiangjun Ruan
2017-04-01
Full Text Available The resistivity of oil impregnated paper decreases during its aging process. This paper takes paper resistivity as an assessment index to evaluate the insulation condition of oil impregnated paper in power transformers. The feasibility of this method is discussed in two aspects: reliability and sensitivity. Iterative inversion of the paper resistivity was combined with finite element simulation. Both the bisection method and Newton's method were used as iterative methods. After analysis and comparison, Newton's method was selected as the preferred option for the paper resistivity iteration because of its faster convergence. In order to account for the spatial distribution characteristic of paper aging and improve the calculation accuracy, the resistivity calculation is expanded to a multivariate iteration based on Newton's method. This paper presents an exploratory research on condition assessment of oil impregnated paper insulation, and provides some reference for the secure and economic operation of power transformers.
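The two iteration schemes compared above can be sketched against a stand-in forward model. In the paper the forward evaluation is a finite element simulation; here a hypothetical monotone map from resistivity to a measured current takes its place, so the code only illustrates why Newton's method (with a finite-difference derivative, since the forward model is a black box) converges faster than bisection.

```python
def forward_current(rho):
    """Stand-in for the finite element simulation: a monotone map from
    paper resistivity (ohm*m) to a measured leakage current (A).
    The real method drives an FEM model here; this toy is I = V_k / rho."""
    return 1.0e3 / rho

def bisect_rho(target, lo, hi, tol=1e-12):
    """Bisection on f(rho) = forward_current(rho) - target."""
    f = lambda r: forward_current(r) - target
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
        if (hi - lo) < tol * hi:
            break
    return 0.5 * (lo + hi)

def newton_rho(target, rho0, tol=1e-12):
    """Newton iteration with a finite-difference derivative, since the
    forward model is treated as a black box."""
    rho = rho0
    for _ in range(100):
        resid = forward_current(rho) - target
        h = 1e-6 * rho
        slope = (forward_current(rho + h) - forward_current(rho - h)) / (2 * h)
        step = resid / slope
        rho -= step
        if abs(step) < tol * abs(rho):
            break
    return rho
```

Both solvers recover the resistivity that reproduces a given measurement; Newton typically needs an order of magnitude fewer forward evaluations, which is what matters when each evaluation is an FEM run.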
Erickson, Jonathan C; Putney, Joy; Hilbert, Douglas; Paskaranandavadivel, Niranchan; Cheng, Leo K; O'Grady, Greg; Angeli, Timothy R
2016-11-01
The aim of this study was to develop, validate, and apply a fully automated method for reducing large temporally synchronous artifacts present in electrical recordings made from the gastrointestinal (GI) serosa, which are problematic for properly assessing slow wave dynamics. Such artifacts routinely arise in experimental and clinical settings from motion, switching behavior of medical instruments, or electrode array manipulation. A novel iterative Covariance-Based Reduction of Artifacts (COBRA) algorithm sequentially reduced artifact waveforms using an updating across-channel median as a noise template, scaled and subtracted from each channel based on their covariance. Application of COBRA substantially increased the signal-to-artifact ratio (12.8 ± 2.5 dB), while minimally attenuating the energy of the underlying source signal by 7.9% on average ( -11.1 ± 3.9 dB). COBRA was shown to be highly effective for aiding recovery and accurate marking of slow wave events (sensitivity = 0.90 ± 0.04; positive-predictive value = 0.74 ± 0.08) from large segments of in vivo porcine GI electrical mapping data that would otherwise be lost due to a broad range of contaminating artifact waveforms. Strongly reducing artifacts with COBRA ultimately allowed for rapid production of accurate isochronal activation maps detailing the dynamics of slow wave propagation in the porcine intestine. Such mapping studies can help characterize differences between normal and dysrhythmic events, which have been associated with GI abnormalities, such as intestinal ischemia and gastroparesis. The COBRA method may be generally applicable for removing temporally synchronous artifacts in other biosignal processing domains.
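The core of the algorithm described above can be sketched in a few lines: take the across-channel median as a noise template, fit its scale to each channel by least squares (equivalently, via the channel-template covariance), subtract, and iterate. This is a minimal sketch of the idea, not the validated COBRA implementation; function names and the iteration count are assumptions.

```python
import numpy as np

def cobra_pass(signals):
    """One artifact-reduction pass in the spirit of COBRA: use the
    across-channel median as a noise template, then scale and subtract
    it from each channel via a least-squares (covariance) fit."""
    template = np.median(signals, axis=0)
    template = template - template.mean()
    cleaned = np.empty_like(signals, dtype=float)
    for i, ch in enumerate(signals):
        scale = np.dot(ch - ch.mean(), template) / (np.dot(template, template) + 1e-12)
        cleaned[i] = ch - scale * template
    return cleaned

def cobra(signals, n_iter=3):
    """Apply the pass iteratively, re-estimating the template each time."""
    out = np.asarray(signals, dtype=float)
    for _ in range(n_iter):
        out = cobra_pass(out)
    return out
```

Because the artifact is temporally synchronous across channels while the slow-wave sources are not, the median template is dominated by the artifact, so subtraction removes it while only mildly attenuating the per-channel source content, consistent with the trade-off reported above.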
Directory of Open Access Journals (Sweden)
Hongfeng Tao
2018-01-01
Full Text Available For a class of single-input single-output (SISO) dual-rate sampling processes with disturbances and output delay, this paper presents a robust fault-tolerant iterative learning control algorithm based on output information. First, the dual-rate sampling process with output delay is transformed into a discrete state-space model with a slow sampling rate and without time delay by using lifting technology; then an output-information-based fault-tolerant iterative learning control scheme is designed, and the control process is turned into an equivalent two-dimensional (2D) repetitive process. Moreover, based on repetitive process stability theory, sufficient conditions for the stability of the system and the design method of the robust controller are given in terms of the linear matrix inequality (LMI) technique. Finally, flow control simulations of two flow tanks in series demonstrate the feasibility and effectiveness of the proposed method.
Performance and capacity analysis of Poisson photon-counting based Iter-PIC OCDMA systems.
Li, Lingbin; Zhou, Xiaolin; Zhang, Rong; Zhang, Dingchen; Hanzo, Lajos
2013-11-04
In this paper, an iterative parallel interference cancellation (Iter-PIC) technique is developed for optical code-division multiple-access (OCDMA) systems relying on shot-noise limited Poisson photon-counting reception. The novel semi-analytical tool of extrinsic information transfer (EXIT) charts is used for analysing both the bit error rate (BER) performance and the channel capacity of these systems, and the results are verified by Monte Carlo simulations. The proposed Iter-PIC OCDMA system is capable of achieving two orders of magnitude of BER improvement and a capacity improvement of 0.1 nats over conventional chip-level OCDMA systems at a coding rate of 1/10.
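The Poisson photon-counting channel underlying such systems can be illustrated with a Monte Carlo BER estimate for a single on-off keyed chip with maximum-likelihood threshold detection. This is a sketch of the channel model only, not the Iter-PIC receiver; the rate values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def photon_count_ber(lam_signal, lam_dark, n_bits=200_000):
    """Monte Carlo BER of maximum-likelihood threshold detection for
    on-off keying over a Poisson photon-counting channel."""
    bits = rng.integers(0, 2, n_bits)
    counts = rng.poisson(lam_dark + lam_signal * bits)
    lam0, lam1 = lam_dark, lam_dark + lam_signal
    # ML threshold: decide "1" when k*ln(lam1/lam0) > lam1 - lam0
    thresh = lam_signal / np.log(lam1 / lam0)
    return float(np.mean((counts > thresh).astype(int) != bits))
```

Increasing the mean signal photon count sharply lowers the BER, which is the shot-noise-limited behaviour that interference cancellation tries to preserve when multiple users share the channel.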
Julia Sets of Orthogonal Polynomials
DEFF Research Database (Denmark)
Christiansen, Jacob Stordal; Henriksen, Christian; Petersen, Henrik Laurberg
2018-01-01
For a probability measure with compact and non-polar support in the complex plane we relate dynamical properties of the associated sequence of orthogonal polynomials {P_n} to properties of the support. More precisely we relate the Julia set of P_n to the outer boundary of the support, the filled Julia...... set to the polynomial convex hull K of the support, and the Green's function associated with P_n to the Green's function for the complement of K....
An introduction to orthogonal polynomials
Chihara, Theodore S
1978-01-01
Assuming no further prerequisites than a first undergraduate course in real analysis, this concise introduction covers general elementary theory related to orthogonal polynomials. It includes necessary background material of the type not usually found in the standard mathematics curriculum. Suitable for advanced undergraduate and graduate courses, it is also appropriate for independent study. Topics include the representation theorem and distribution functions, continued fractions and chain sequences, the recurrence formula and properties of orthogonal polynomials, special functions, and some
Scattering theory and orthogonal polynomials
International Nuclear Information System (INIS)
Geronimo, J.S.
1977-01-01
The application of the techniques of scattering theory to the study of polynomials orthogonal on the unit circle and on a finite segment of the real line is considered. The starting point is the recurrence relations satisfied by the polynomials instead of the orthogonality condition. A set of two two-term recurrence relations for polynomials orthogonal on the real line is presented and used. These recurrence relations play roles analogous to those satisfied by polynomials orthogonal on the unit circle. With these recurrence formulas a Wronskian theorem is proved and the Christoffel-Darboux formula is derived. In scattering theory a fundamental role is played by the Jost function. An analog of this function is defined, and its analytic properties and the locations of its zeros are investigated. The role of the analog Jost function in various properties of these orthogonal polynomials is investigated. The techniques of inverse scattering theory are also used. The discrete analogues of the Gelfand-Levitan and Marchenko equations are derived and solved. These techniques are used to calculate asymptotic formulas for the orthogonal polynomials. Finally Szego's theorem on Toeplitz and Hankel determinants is proved using the recurrence formulas and some properties of the Jost function. The techniques of inverse scattering theory are used to calculate the correction terms
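The recurrence-relation starting point used above is easy to make concrete: any family of orthogonal polynomials can be evaluated directly from its three-term recurrence. As an illustrative instance (not the specific two-term relations of the abstract), the Legendre recurrence (k+1) P_{k+1} = (2k+1) x P_k − k P_{k−1} gives:

```python
def legendre(n, x):
    """Evaluate the Legendre polynomial P_n(x) from the three-term
    recurrence (k+1) P_{k+1} = (2k+1) x P_k - k P_{k-1}."""
    p_prev, p = 1.0, x
    if n == 0:
        return p_prev
    for k in range(1, n):
        p_prev, p = p, ((2 * k + 1) * x * p - k * p_prev) / (k + 1)
    return p
```

The orthogonality that the recurrence encodes can be checked numerically, e.g. the integral of P_2 P_3 over [−1, 1] vanishes while the integral of P_2² is 2/5.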
Force prediction in cold rolling mills by polynomial methods
Directory of Open Access Journals (Sweden)
Nicu ROMAN
2007-12-01
Full Text Available A method for steel and aluminium strip thickness control is presented, including a new technique for predictive rolling force estimation using a statistical model based on polynomial techniques.
Hard exudates segmentation based on learned initial seeds and iterative graph cut.
Kusakunniran, Worapan; Wu, Qiang; Ritthipravat, Panrasee; Zhang, Jian
2018-05-01
(Background and Objective): The occurrence of hard exudates is one of the early signs of diabetic retinopathy, which is one of the leading causes of blindness. Many patients with diabetic retinopathy lose their vision because of late detection of the disease. Thus, this paper proposes a novel method for automatic segmentation of hard exudates in retinal images. (Methods): Existing methods are based on either supervised or unsupervised learning techniques. In addition, the learned segmentation models may often cause missed detections and/or false detections of hard exudates, due to the lack of rich characteristics, the intra-variations, and the similarity with other components in the retinal image. Thus, in this paper, supervised learning based on the multilayer perceptron (MLP) is used only to identify initial seeds with high confidence of being hard exudates. The segmentation is then finalized by unsupervised learning based on the iterative graph cut (GC) using clusters of initial seeds. Also, in order to reduce color intra-variations of hard exudates across different retinal images, color transfer (CT) is applied to normalize their color information in the pre-processing step. (Results): The experiments and comparisons with the other existing methods are based on two well-known datasets, e_ophtha EX and DIARETDB1. The proposed method outperforms the other existing methods in the literature, with a pixel-level sensitivity of 0.891 for the DIARETDB1 dataset and 0.564 for the e_ophtha EX dataset. Cross-dataset validation, where the training process is performed on one dataset and the testing process is performed on another dataset, is also evaluated in this paper, in order to illustrate the robustness of the proposed method. (Conclusions): This newly proposed method integrates supervised learning and unsupervised learning based techniques. It achieves improved performance when compared with the
Energy Technology Data Exchange (ETDEWEB)
Eck, Brendan L.; Fahmi, Rachid; Miao, Jun [Department of Biomedical Engineering, Case Western Reserve University, Cleveland, Ohio 44106 (United States); Brown, Kevin M.; Zabic, Stanislav; Raihani, Nilgoun [Philips Healthcare, Cleveland, Ohio 44143 (United States); Wilson, David L., E-mail: dlw@case.edu [Department of Biomedical Engineering, Case Western Reserve University, Cleveland, Ohio 44106 and Department of Radiology, Case Western Reserve University, Cleveland, Ohio 44106 (United States)
2015-10-15
Purpose: Aims in this study are to (1) develop a computational model observer which reliably tracks the detectability of human observers in low dose computed tomography (CT) images reconstructed with knowledge-based iterative reconstruction (IMR™, Philips Healthcare) and filtered back projection (FBP) across a range of independent variables, (2) use the model to evaluate detectability trends across reconstructions and make predictions of human observer detectability, and (3) perform human observer studies based on model predictions to demonstrate applications of the model in CT imaging. Methods: Detectability (d′) was evaluated in phantom studies across a range of conditions. Images were generated using a numerical CT simulator. Trained observers performed 4-alternative forced choice (4-AFC) experiments across dose (1.3, 2.7, 4.0 mGy), pin size (4, 6, 8 mm), contrast (0.3%, 0.5%, 1.0%), and reconstruction (FBP, IMR), at fixed display window. A five-channel Laguerre–Gauss channelized Hotelling observer (CHO) was developed with internal noise added to the decision variable and/or to channel outputs, creating six different internal noise models. Semianalytic internal noise computation was tested against Monte Carlo and used to accelerate internal noise parameter optimization. Model parameters were estimated from all experiments at once using maximum likelihood on the probability correct, P_C. Akaike information criterion (AIC) was used to compare models of different orders. The best model was selected according to AIC and used to predict detectability in blended FBP-IMR images, analyze trends in IMR detectability improvements, and predict dose savings with IMR. Predicted dose savings were compared against 4-AFC study results using physical CT phantom images. Results: Detection in IMR was greater than FBP in all tested conditions. The CHO with internal noise proportional to channel output standard deviations, Model-k4, showed the best trade-off between fit
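The channelized Hotelling observer at the heart of the study can be sketched in a simplified form: Laguerre-Gauss channel templates reduce each image to a few channel outputs, and d′ is computed from the channel-space mean difference and pooled covariance. This sketch omits the internal-noise models (including Model-k4) and uses white-noise backgrounds; the channel width, image size and signal shape below are assumptions.

```python
import numpy as np
from numpy.polynomial.laguerre import lagval

def lg_channels(size, a, n_channels=5):
    """Laguerre-Gauss channel templates C_n(r) = exp(-pi r^2/a^2) L_n(2 pi r^2/a^2)."""
    y, x = np.mgrid[:size, :size] - (size - 1) / 2.0
    r2 = x**2 + y**2
    chans = []
    for n in range(n_channels):
        coeffs = [0.0] * n + [1.0]                 # selects the Laguerre poly L_n
        u = np.exp(-np.pi * r2 / a**2) * lagval(2 * np.pi * r2 / a**2, coeffs)
        chans.append(u.ravel() / np.linalg.norm(u))
    return np.array(chans)                          # (n_channels, size*size)

def cho_dprime(signal_imgs, noise_imgs, channels):
    """Detectability index d' of a channelized Hotelling observer."""
    vs = signal_imgs.reshape(len(signal_imgs), -1) @ channels.T
    vn = noise_imgs.reshape(len(noise_imgs), -1) @ channels.T
    dv = vs.mean(axis=0) - vn.mean(axis=0)
    cov = 0.5 * (np.cov(vs.T) + np.cov(vn.T))       # pooled channel covariance
    return float(np.sqrt(dv @ np.linalg.solve(cov, dv)))

def gaussian_blob(size, amp, sigma=3.0):
    """Hypothetical low-contrast signal standing in for the phantom pins."""
    y, x = np.mgrid[:size, :size] - (size - 1) / 2.0
    return amp * np.exp(-(x**2 + y**2) / (2 * sigma**2))
```

On synthetic data, doubling the signal contrast roughly doubles d′, the kind of trend the full model tracks across dose, pin size, contrast and reconstruction.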
International Nuclear Information System (INIS)
Shen, Junlin; Du, Xiangying; Guo, Daode; Cao, Lizhen; Gao, Yan; Bai, Mei; Li, Pengyu; Liu, Jiabin; Li, Kuncheng
2013-01-01
Purpose: To investigate the potential of a noise-based tube current reduction method with iterative reconstruction to reduce radiation exposure while achieving consistent image quality in coronary CT angiography (CCTA). Materials and methods: 294 patients underwent CCTA on a 64-detector row CT equipped with iterative reconstruction. 102 patients with fixed tube current were assigned to Group 1, which was used to establish noise-based tube current modulation formulas, where tube current was modulated by the noise of the test bolus image. 192 patients with noise-based tube current were randomly assigned to Group 2 and Group 3. Filtered back projection was applied for Group 2 and iterative reconstruction for Group 3. Qualitative image quality was assessed with a 5-point score. Image noise, signal intensity, volume CT dose index, and dose-length product were measured. Results: The noise-based tube current modulation formulas were established through regression analysis using image noise measurements in Group 1. Image noise was precisely maintained at the target value of 35.00 HU with small interquartile ranges for Group 2 (34.17–35.08 HU) and Group 3 (34.34–35.03 HU), while it ranged from 28.41 to 36.49 HU for Group 1. All images in the three groups were acceptable for diagnosis. Relative reductions of 14% and 41% in effective dose were observed for Group 2 and Group 3, respectively, compared with Group 1. Conclusion: Adequate image quality could be maintained at a desired and consistent noise level with an overall 14% dose reduction using the noise-based tube current reduction method. The use of iterative reconstruction further achieved an approximately 40% reduction in effective dose
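The idea of modulating tube current from test-bolus noise can be sketched with the common first-order CT model that image noise scales as 1/sqrt(tube current). The study's actual formulas came from regression on patient data, so the rule below is only indicative; the clamp limits are hypothetical scanner bounds.

```python
def modulated_current(test_mA, test_noise_hu, target_noise_hu=35.0,
                      min_mA=100.0, max_mA=800.0):
    """Scale the tube current so image noise lands on the target value,
    assuming noise ~ 1/sqrt(tube current) -- a first-order model, not
    the regression formulas fitted in the study."""
    mA = test_mA * (test_noise_hu / target_noise_hu) ** 2
    return min(max(mA, min_mA), max_mA)
```

A test-bolus image twice as noisy as the 35 HU target thus asks for four times the current (clamped to the tube limit), while a quieter one allows the current, and hence the dose, to drop.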
Cicone, A; Liu, J; Zhou, H
2016-04-13
Chemicals released in the air can be extremely dangerous for human beings and the environment. Hyperspectral images can be used to identify chemical plumes, however the task can be extremely challenging. Assuming we know a priori that some chemical plume, with a known frequency spectrum, has been photographed using a hyperspectral sensor, we can use standard techniques such as the so-called matched filter or adaptive cosine estimator, plus a properly chosen threshold value, to identify the position of the chemical plume. However, due to noise and inadequate sensing, the accurate identification of chemical pixels is not easy even in this apparently simple situation. In this paper, we present a post-processing tool that, in a completely adaptive and data-driven fashion, allows us to improve the performance of any classification methods in identifying the boundaries of a plume. This is done using the multidimensional iterative filtering (MIF) algorithm (Cicone et al. 2014 (http://arxiv.org/abs/1411.6051); Cicone & Zhou 2015 (http://arxiv.org/abs/1507.07173)), which is a non-stationary signal decomposition method like the pioneering empirical mode decomposition method (Huang et al. 1998 Proc. R. Soc. Lond. A 454, 903. (doi:10.1098/rspa.1998.0193)). Moreover, based on the MIF technique, we propose also a pre-processing method that allows us to decorrelate and mean-centre a hyperspectral dataset. The cosine similarity measure, which often fails in practice, appears to become a successful and outperforming classifier when equipped with such a pre-processing method. We show some examples of the proposed methods when applied to real-life problems. © 2016 The Author(s).
Directory of Open Access Journals (Sweden)
Juan Carlos Davila
2017-06-01
Full Text Available The design of multiple human activity recognition applications in areas such as healthcare, sports and safety relies on wearable sensor technologies. However, when making decisions based on the data acquired by such sensors in practical situations, several factors related to sensor data alignment, data losses, and noise, among other experimental constraints, deteriorate data quality and model accuracy. To tackle these issues, this paper presents a data-driven iterative learning framework to classify human locomotion activities such as walk, stand, lie, and sit, extracted from the Opportunity dataset. Data acquired by twelve 3-axial acceleration sensors and seven inertial measurement units are initially de-noised using a two-stage consecutive filtering approach combining a band-pass Finite Impulse Response (FIR) and a wavelet filter. A series of statistical parameters are extracted from the kinematical features, including the principal components and singular value decomposition of roll, pitch, yaw and the norm of the axial components. The novel interactive learning procedure is then applied in order to minimize the number of samples required to classify human locomotion activities. Only those samples that are most distant from the centroids of data clusters, according to a measure presented in the paper, are selected as candidates for the training dataset. The newly built dataset is then used to train an SVM multi-class classifier. The latter will produce the lowest prediction error. The proposed learning framework ensures a high level of robustness to variations in the quality of input data, while only using a much lower number of training samples and therefore a much shorter training time, which is an important consideration given the large size of the dataset.
Davila, Juan Carlos; Cretu, Ana-Maria; Zaremba, Marek
2017-06-07
The design of multiple human activity recognition applications in areas such as healthcare, sports and safety relies on wearable sensor technologies. However, when making decisions based on the data acquired by such sensors in practical situations, several factors related to sensor data alignment, data losses, and noise, among other experimental constraints, deteriorate data quality and model accuracy. To tackle these issues, this paper presents a data-driven iterative learning framework to classify human locomotion activities such as walk, stand, lie, and sit, extracted from the Opportunity dataset. Data acquired by twelve 3-axial acceleration sensors and seven inertial measurement units are initially de-noised using a two-stage consecutive filtering approach combining a band-pass Finite Impulse Response (FIR) and a wavelet filter. A series of statistical parameters are extracted from the kinematical features, including the principal components and singular value decomposition of roll, pitch, yaw and the norm of the axial components. The novel interactive learning procedure is then applied in order to minimize the number of samples required to classify human locomotion activities. Only those samples that are most distant from the centroids of data clusters, according to a measure presented in the paper, are selected as candidates for the training dataset. The newly built dataset is then used to train an SVM multi-class classifier. The latter will produce the lowest prediction error. The proposed learning framework ensures a high level of robustness to variations in the quality of input data, while only using a much lower number of training samples and therefore a much shorter training time, which is an important consideration given the large size of the dataset.
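The centroid-distance selection step described above can be sketched simply: per class, compute the centroid, rank samples by distance from it, and keep only the most distant ones as training candidates. This is a simplified stand-in for the paper's selection measure; the function name and the keep fraction are assumptions.

```python
import numpy as np

def select_informative(X, y, keep_frac=0.2):
    """Per class, keep the samples farthest from the class centroid --
    a simplified stand-in for the paper's distance-based selection."""
    keep = []
    for label in np.unique(y):
        idx = np.where(y == label)[0]
        centroid = X[idx].mean(axis=0)
        dist = np.linalg.norm(X[idx] - centroid, axis=1)
        k = max(1, int(keep_frac * len(idx)))
        keep.extend(idx[np.argsort(dist)[-k:]])     # the k most distant samples
    return np.sort(np.array(keep))
```

The reduced index set would then feed the SVM training stage; the intuition is that boundary-adjacent samples carry more information for the classifier than ones near the cluster centers.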
Fuchs, Alexander; Pengel, Steffen; Bergmeier, Jan; Kahrs, Lüder A.; Ortmaier, Tobias
2015-07-01
Laser surgery is an established clinical procedure in dental applications, soft tissue ablation, and ophthalmology. The presented experimental set-up for closed-loop control of laser bone ablation addresses a feedback system and enables safe ablation towards anatomical structures that usually would have high risk of damage. This study is based on combined working volumes of optical coherence tomography (OCT) and Er:YAG cutting laser. High level of automation in fast image data processing and tissue treatment enables reproducible results and shortens the time in the operating room. For registration of the two coordinate systems a cross-like incision is ablated with the Er:YAG laser and segmented with OCT in three distances. The resulting Er:YAG coordinate system is reconstructed. A parameter list defines multiple sets of laser parameters including discrete and specific ablation rates as ablation model. The control algorithm uses this model to plan corrective laser paths for each set of laser parameters and dynamically adapts the distance of the laser focus. With this iterative control cycle consisting of image processing, path planning, ablation, and moistening of tissue the target geometry and desired depth are approximated until no further corrective laser paths can be set. The achieved depth stays within the tolerances of the parameter set with the smallest ablation rate. Specimen trials with fresh porcine bone have been conducted to prove the functionality of the developed concept. Flat bottom surfaces and sharp edges of the outline without visual signs of thermal damage verify the feasibility of automated, OCT controlled laser bone ablation with minimal process time.
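The pass-planning part of the iterative control cycle above can be sketched as a greedy coarse-to-fine scheme over the parameter sets. In the real system each pass is followed by an OCT depth measurement; this toy assumes ideal depth tracking, and the rate values are hypothetical, not the Er:YAG parameter list from the study.

```python
def plan_ablation(target_depth_um, rates_um=(50.0, 20.0, 5.0, 1.0)):
    """Greedy coarse-to-fine planning: apply the largest ablation rate
    while it still fits in the remaining depth, then refine with smaller
    rates; stop when the residual is below the finest rate (the achieved
    depth then stays within the tolerance of the smallest parameter set)."""
    passes, depth = [], 0.0
    for rate in sorted(rates_um, reverse=True):
        while target_depth_um - depth >= rate:
            passes.append(rate)
            depth += rate
    return passes, depth
```

The residual after planning is always smaller than the finest available ablation rate, mirroring the paper's statement that the achieved depth stays within the tolerances of the smallest-rate parameter set.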
Bannai-Ito polynomials and dressing chains
Derevyagin, Maxim; Tsujimoto, Satoshi; Vinet, Luc; Zhedanov, Alexei
2012-01-01
Schur-Delsarte-Genin (SDG) maps and Bannai-Ito polynomials are studied. SDG maps are related to dressing chains determined by quadratic algebras. The Bannai-Ito polynomials and their kernel polynomials -- the complementary Bannai-Ito polynomials -- are shown to arise in the framework of the SDG maps.
Birth-death processes and associated polynomials
van Doorn, Erik A.
2003-01-01
We consider birth-death processes on the nonnegative integers and the corresponding sequences of orthogonal polynomials called birth-death polynomials. The sequence of associated polynomials linked with a sequence of birth-death polynomials and its orthogonalizing measure can be used in the analysis
Energy Technology Data Exchange (ETDEWEB)
Prajapati, Rajnikant, E-mail: rajnikant@iter-india.org [ITER-India, Institute For Plasma Research, A-29, GIDC Electronics Estate, Sector-25, Gandhinagar 382016 (India); Bhardwaj, Anil K.; Gupta, Girish; Joshi, Vaibhav; Patel, Mitul; Bhavsar, Jagrut; More, Vipul; Jindal, Mukesh; Bhattacharya, Avik; Jogi, Gaurav; Palaliya, Amit; Jha, Saroj; Pandey, Manish [ITER-India, Institute For Plasma Research, A-29, GIDC Electronics Estate, Sector-25, Gandhinagar 382016 (India); Jadhav, Pandurang; Desai, Hemal [Larsen & Toubro Limited, Heavy Engineering, Hazira Manufacturing Complex, Gujarat (India)
2016-11-01
Highlights: • ITER Cryostat base section sandwich structure bottom plate to rib weld joint is qualified through mock-up. • Established welding sequence was successfully implemented on all six sectors of cryostat base section. • Layer-by-layer liquid penetrant examination has been carried out for these weld joints and found satisfactory. - Abstract: The cryostat is a large stainless steel vacuum vessel providing the vacuum environment to the ITER machine components. The cryostat is ∼30 m in diameter and ∼30 m in height, with thickness varying from 25 mm to 180 mm. The sandwich structure of the cryostat base section withstands vacuum loading and limits deformation under service conditions. The sandwich structure consists of top and bottom plates internally strengthened with radial and circular ribs. In the current work, the sandwich structure bottom plate to rib weld joint has been designed as a full penetration joint per the ITER Vacuum Handbook requirement, considering nondestructive examinations and welding feasibility. Since this joint was outside the scope of ASME Section VIII Div. 2, it was decided to validate it through a mock-up of the bottom plate to rib joint. A welding sequence was established to control distortion. Tensile testing, macro-structural examination and layer-by-layer liquid penetrant examination were carried out for validation of this weld joint. The possibility of using the ultrasonic examination method was also investigated. The test results from the welded joint mock-up were found to conform to all code and specification requirements. The same sequence was implemented in the first sector (0–60°) of the base section sandwich structure.
Discrete-Time Filter Synthesis using Product of Gegenbauer Polynomials
N. Stojanovic; N. Stamenkovic; I. Krstic
2016-01-01
A new approximation for the design of continuous-time and discrete-time low-pass filters, presented in this paper and based on the product of Gegenbauer polynomials, allows more flexible adjustment of the passband and stopband responses. The design is achieved taking a prescribed specification into account, leading to a better trade-off between the magnitude and group delay responses. Many well-known continuous-time and discrete-time transitional filters based on the classical polynomial approx...
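As an illustrative aside (not the paper's filter design itself), the Gegenbauer polynomials C_n^(a)(x) that the approximation multiplies together can be evaluated with the standard three-term recurrence: C_0 = 1, C_1 = 2ax, and n C_n = 2x(n + a - 1) C_{n-1} - (n + 2a - 2) C_{n-2}.

```python
# Evaluate the Gegenbauer polynomial C_n^{(a)}(x) by its recurrence.

def gegenbauer(n, a, x):
    c_prev, c = 1.0, 2.0 * a * x          # C_0 and C_1
    if n == 0:
        return c_prev
    for k in range(2, n + 1):
        c_prev, c = c, (2.0 * x * (k + a - 1) * c - (k + 2 * a - 2) * c_prev) / k
    return c

# For a = 1 these reduce to Chebyshev polynomials of the second kind,
# e.g. C_2^{(1)}(x) = 4 x^2 - 1, which vanishes at x = 0.5.
```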
International Nuclear Information System (INIS)
Ryu, Young Jin; Choi, Young Hun; Cheon, Jung-Eun; Kim, Woo Sun; Kim, In-One; Ha, Seongmin
2016-01-01
CT of pediatric phantoms can provide useful guidance to the optimization of knowledge-based iterative reconstruction CT. To compare radiation dose and image quality of CT images obtained at different radiation doses reconstructed with knowledge-based iterative reconstruction, hybrid iterative reconstruction and filtered back-projection. We scanned a 5-year anthropomorphic phantom at seven levels of radiation. We then reconstructed CT data with knowledge-based iterative reconstruction (iterative model reconstruction [IMR] levels 1, 2 and 3; Philips Healthcare, Andover, MA), hybrid iterative reconstruction (iDose(4), levels 3 and 7; Philips Healthcare, Andover, MA) and filtered back-projection. The noise, signal-to-noise ratio and contrast-to-noise ratio were calculated. We evaluated low-contrast resolutions and detectability by low-contrast targets and subjective and objective spatial resolutions by the line pairs and wire. With radiation at 100 peak kVp and 100 mAs (3.64 mSv), the relative doses ranged from 5% (0.19 mSv) to 150% (5.46 mSv). Lower noise and higher signal-to-noise, contrast-to-noise and objective spatial resolution were generally achieved in ascending order of filtered back-projection, iDose(4) levels 3 and 7, and IMR levels 1, 2 and 3, at all radiation dose levels. Compared with filtered back-projection at 100% dose, similar noise levels were obtained on IMR level 2 images at 24% dose and iDose(4) level 3 images at 50% dose, respectively. Regarding low-contrast resolution, low-contrast detectability and objective spatial resolution, IMR level 2 images at 24% dose showed comparable image quality with filtered back-projection at 100% dose. Subjective spatial resolution was not greatly affected by reconstruction algorithm. Reduced-dose IMR obtained at 0.92 mSv (24%) showed similar image quality to routine-dose filtered back-projection obtained at 3.64 mSv (100%), and half-dose iDose(4) obtained at 1.81 mSv. (orig.)
Ryu, Young Jin; Choi, Young Hun; Cheon, Jung-Eun; Ha, Seongmin; Kim, Woo Sun; Kim, In-One
2016-03-01
CT of pediatric phantoms can provide useful guidance to the optimization of knowledge-based iterative reconstruction CT. To compare radiation dose and image quality of CT images obtained at different radiation doses reconstructed with knowledge-based iterative reconstruction, hybrid iterative reconstruction and filtered back-projection. We scanned a 5-year anthropomorphic phantom at seven levels of radiation. We then reconstructed CT data with knowledge-based iterative reconstruction (iterative model reconstruction [IMR] levels 1, 2 and 3; Philips Healthcare, Andover, MA), hybrid iterative reconstruction (iDose(4), levels 3 and 7; Philips Healthcare, Andover, MA) and filtered back-projection. The noise, signal-to-noise ratio and contrast-to-noise ratio were calculated. We evaluated low-contrast resolutions and detectability by low-contrast targets and subjective and objective spatial resolutions by the line pairs and wire. With radiation at 100 peak kVp and 100 mAs (3.64 mSv), the relative doses ranged from 5% (0.19 mSv) to 150% (5.46 mSv). Lower noise and higher signal-to-noise, contrast-to-noise and objective spatial resolution were generally achieved in ascending order of filtered back-projection, iDose(4) levels 3 and 7, and IMR levels 1, 2 and 3, at all radiation dose levels. Compared with filtered back-projection at 100% dose, similar noise levels were obtained on IMR level 2 images at 24% dose and iDose(4) level 3 images at 50% dose, respectively. Regarding low-contrast resolution, low-contrast detectability and objective spatial resolution, IMR level 2 images at 24% dose showed comparable image quality with filtered back-projection at 100% dose. Subjective spatial resolution was not greatly affected by reconstruction algorithm. Reduced-dose IMR obtained at 0.92 mSv (24%) showed similar image quality to routine-dose filtered back-projection obtained at 3.64 mSv (100%), and half-dose iDose(4) obtained at 1.81 mSv.
On Multiple Polynomials of Capelli Type
Directory of Open Access Journals (Sweden)
S.Y. Antonov
2016-03-01
This paper deals with the class of Capelli polynomials in the free associative algebra F{Z} (where F is an arbitrary field and Z is a countable set), generalizing the construction of multiple Capelli polynomials. The fundamental properties of the introduced Capelli polynomials are provided. In particular, a decomposition of the Capelli polynomials by means of polynomials of the same type is shown. Furthermore, some relations between their T-ideals are revealed. A connection between double Capelli polynomials and Capelli quasi-polynomials is established.
Constructing general partial differential equations using polynomial and neural networks.
Zjavka, Ladislav; Pedrycz, Witold
2016-01-01
Sum fraction terms can approximate multi-variable functions on the basis of discrete observations, replacing a partial differential equation definition with polynomial elementary data relation descriptions. Artificial neural networks commonly transform the weighted sum of inputs to describe overall similarity relationships of trained and new testing input patterns. Differential polynomial neural networks form a new class of neural networks, which construct and solve an unknown general partial differential equation of a function of interest with selected substitution relative terms using non-linear multi-variable composite polynomials. The layers of the network generate simple and composite relative substitution terms whose convergent series combinations can describe partial dependent derivative changes of the input variables. This regression is based on trained generalized partial derivative data relations, decomposed into a multi-layer polynomial network structure. The sigmoidal function, commonly used as a nonlinear activation of artificial neurons, may transform some polynomial terms together with the parameters, with the aim of improving the ability of the polynomial derivative term series to approximate complicated periodic functions, as simple low-order polynomials cannot fully reproduce complete cycles. The similarity analysis facilitates substitutions for differential equations or can form dimensional units from data samples to describe real-world problems. Copyright © 2015 Elsevier Ltd. All rights reserved.
Harmonic sums and polylogarithms generated by cyclotomic polynomials
Energy Technology Data Exchange (ETDEWEB)
Ablinger, Jakob; Schneider, Carsten [Johannes Kepler Univ., Linz (Austria). Research Inst. for Symbolic Computation; Bluemlein, Johannes [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany)
2011-05-15
The computation of Feynman integrals in massive higher order perturbative calculations in renormalizable Quantum Field Theories requires extensions of multiply nested harmonic sums, which can be generated as real representations by Mellin transforms of Poincaré iterated integrals including denominators of higher cyclotomic polynomials. We derive the cyclotomic harmonic polylogarithms and harmonic sums and study their algebraic and structural relations. The analytic continuation of cyclotomic harmonic sums to complex values of N is performed using analytic representations. We also consider special values of the cyclotomic harmonic polylogarithms at argument x=1, resp., of the cyclotomic harmonic sums at N → ∞, which are related to colored multiple zeta values, deriving various relations among them based on the stuffle and shuffle algebras and three multiple-argument relations. We also consider infinite generalized nested harmonic sums at roots of unity, which are related to the infinite cyclotomic harmonic sums. Basis representations are derived for sums of weight w=1,2 up to cyclotomy l=20. (orig.)
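For orientation, the ordinary nested harmonic sums that the cyclotomic sums above generalize can be computed directly from their definition. The sketch below uses the standard sign convention in which a negative index contributes an alternating sign; it illustrates only the classical objects, not the cyclotomic extension itself.

```python
# Nested harmonic sums S_{a1,a2,...}(N): the outermost index runs to N and
# each deeper index runs to the current outer summation variable.

def harmonic_sum(indices, N):
    a = indices[0]
    sign = -1 if a < 0 else 1
    total = 0.0
    for i in range(1, N + 1):
        term = (sign ** i) / i ** abs(a)
        if len(indices) > 1:
            term *= harmonic_sum(indices[1:], i)   # nested inner sum up to i
        total += term
    return total

# S_1(N) is the ordinary harmonic number; S_2(N) tends to zeta(2) as N grows.
```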
Energy Technology Data Exchange (ETDEWEB)
Ortiz-Rodriguez, J. M.; Reyes Alfaro, A.; Reyes Haro, A.; Solis Sanches, L. O.; Miranda, R. Castaneda; Cervantes Viramontes, J. M. [Universidad Autonoma de Zacatecas, Unidad Academica de Ingenieria Electrica. Av. Ramon Lopez Velarde 801. Col. Centro Zacatecas, Zac (Mexico); Vega-Carrillo, H. R. [Universidad Autonoma de Zacatecas, Unidad Academica de Ingenieria Electrica. Av. Ramon Lopez Velarde 801. Col. Centro Zacatecas, Zac., Mexico. and Unidad Academica de Estudios Nucleares. C. Cip (Mexico)
2013-07-03
In this work the performance of two neutron spectrum unfolding codes based on iterative procedures and artificial neural networks is evaluated. The first code, based on traditional iterative procedures and called Neutron Spectrometry and Dosimetry from the Universidad Autonoma de Zacatecas (NSDUAZ), uses the SPUNIT iterative algorithm and was designed to unfold the neutron spectrum and calculate 15 dosimetric quantities and 7 IAEA survey meter responses. The main feature of this code is the automated selection of the initial guess spectrum through a compendium of neutron spectra compiled by the IAEA. The second code, known as Neutron Spectrometry and Dosimetry with Artificial Neural Networks (NSDann), is designed using neural network technology. The artificial intelligence approach of neural networks does not solve mathematical equations: using the knowledge stored in the synaptic weights of a properly trained neural network, the code is capable of unfolding the neutron spectrum and simultaneously calculating 15 dosimetric quantities, requiring as input data only the count rates measured with a Bonner sphere system. The similarities of the NSDUAZ and NSDann codes are: they follow the same easy and intuitive user philosophy and were designed with a graphical interface in the LabVIEW programming environment. Both codes unfold the neutron spectrum expressed in 60 energy bins, calculate 15 dosimetric quantities and generate a full report in HTML format. The differences between these codes are: the NSDUAZ code was designed using classical iterative approaches and needs an initial guess spectrum to initiate the iterative procedure; in NSDUAZ, a programming routine was designed to calculate 7 IAEA instrument survey meter responses using fluence-to-dose conversion coefficients; the NSDann code uses artificial neural networks to solve the ill-conditioned equation system of the neutron spectrometry problem through the synaptic weights of a properly trained neural network. Contrary to iterative procedures, in
Ortiz-Rodríguez, J. M.; Reyes Alfaro, A.; Reyes Haro, A.; Solís Sánches, L. O.; Miranda, R. Castañeda; Cervantes Viramontes, J. M.; Vega-Carrillo, H. R.
2013-07-01
In this work the performance of two neutron spectrum unfolding codes based on iterative procedures and artificial neural networks is evaluated. The first code, based on traditional iterative procedures and called Neutron Spectrometry and Dosimetry from the Universidad Autonoma de Zacatecas (NSDUAZ), uses the SPUNIT iterative algorithm and was designed to unfold the neutron spectrum and calculate 15 dosimetric quantities and 7 IAEA survey meter responses. The main feature of this code is the automated selection of the initial guess spectrum through a compendium of neutron spectra compiled by the IAEA. The second code, known as Neutron Spectrometry and Dosimetry with Artificial Neural Networks (NSDann), is designed using neural network technology. The artificial intelligence approach of neural networks does not solve mathematical equations: using the knowledge stored in the synaptic weights of a properly trained neural network, the code is capable of unfolding the neutron spectrum and simultaneously calculating 15 dosimetric quantities, requiring as input data only the count rates measured with a Bonner sphere system. The similarities of the NSDUAZ and NSDann codes are: they follow the same easy and intuitive user philosophy and were designed with a graphical interface in the LabVIEW programming environment. Both codes unfold the neutron spectrum expressed in 60 energy bins, calculate 15 dosimetric quantities and generate a full report in HTML format. The differences between these codes are: the NSDUAZ code was designed using classical iterative approaches and needs an initial guess spectrum to initiate the iterative procedure; in NSDUAZ, a programming routine was designed to calculate 7 IAEA instrument survey meter responses using fluence-to-dose conversion coefficients; the NSDann code uses artificial neural networks to solve the ill-conditioned equation system of the neutron spectrometry problem through the synaptic weights of a properly trained neural network. Contrary to iterative procedures, in neural
International Nuclear Information System (INIS)
Ortiz-Rodríguez, J. M.; Reyes Alfaro, A.; Reyes Haro, A.; Solís Sánches, L. O.; Miranda, R. Castañeda; Cervantes Viramontes, J. M.; Vega-Carrillo, H. R.
2013-01-01
In this work the performance of two neutron spectrum unfolding codes based on iterative procedures and artificial neural networks is evaluated. The first code, based on traditional iterative procedures and called Neutron Spectrometry and Dosimetry from the Universidad Autonoma de Zacatecas (NSDUAZ), uses the SPUNIT iterative algorithm and was designed to unfold the neutron spectrum and calculate 15 dosimetric quantities and 7 IAEA survey meter responses. The main feature of this code is the automated selection of the initial guess spectrum through a compendium of neutron spectra compiled by the IAEA. The second code, known as Neutron Spectrometry and Dosimetry with Artificial Neural Networks (NSDann), is designed using neural network technology. The artificial intelligence approach of neural networks does not solve mathematical equations: using the knowledge stored in the synaptic weights of a properly trained neural network, the code is capable of unfolding the neutron spectrum and simultaneously calculating 15 dosimetric quantities, requiring as input data only the count rates measured with a Bonner sphere system. The similarities of the NSDUAZ and NSDann codes are: they follow the same easy and intuitive user philosophy and were designed with a graphical interface in the LabVIEW programming environment. Both codes unfold the neutron spectrum expressed in 60 energy bins, calculate 15 dosimetric quantities and generate a full report in HTML format. The differences between these codes are: the NSDUAZ code was designed using classical iterative approaches and needs an initial guess spectrum to initiate the iterative procedure; in NSDUAZ, a programming routine was designed to calculate 7 IAEA instrument survey meter responses using fluence-to-dose conversion coefficients; the NSDann code uses artificial neural networks to solve the ill-conditioned equation system of the neutron spectrometry problem through the synaptic weights of a properly trained neural network. Contrary to iterative procedures, in neural
Symmetric integrable-polynomial factorization for symplectic one-turn-map tracking
International Nuclear Information System (INIS)
Shi, Jicong
1993-01-01
It was found that any homogeneous polynomial can be written as a sum of integrable polynomials of the same degree, whose Lie transformations can be evaluated exactly. By utilizing symplectic integrators, an integrable-polynomial factorization is developed to convert a symplectic map in the form of a Dragt-Finn factorization into a product of Lie transformations associated with integrable polynomials. A small number of factorization bases of integrable polynomials enables one to use high-order symplectic integrators so that the high-order spurious terms can be greatly suppressed. A symplectic map can thus be evaluated with the desired accuracy.
Invariants, Attractors and Bifurcation in Two Dimensional Maps with Polynomial Interaction
Hacinliyan, Avadis Simon; Aybar, Orhan Ozgur; Aybar, Ilknur Kusbeyzi
This work will present an extended discrete-time analysis of maps and their generalizations, including iteration, in order to better understand the resulting enrichment of the bifurcation properties. The standard concepts of stability analysis and bifurcation theory for maps will be used. Both iterated maps and flows are used as models for chaotic behavior. It is well known that when flows are converted to maps by discretization, the equilibrium points remain the same but a richer bifurcation scheme is observed. For example, the logistic equation has very simple behavior as a differential equation, but as a map it exhibits fold and period-doubling bifurcations. A way to gain information about the global structure of the state space of a dynamical system is to investigate the invariant manifolds of saddle equilibrium points. Studying the intersections of the stable and unstable manifolds is essential for understanding the structure of a dynamical system. It is known that the Lotka-Volterra map, and systems that can be reduced to it or its generalizations in special cases involving local and polynomial interactions, admit invariant manifolds. Bifurcation analysis of this map and its higher iterates can be done to understand the global structure of the system and the artifacts of the discretization by comparing with the corresponding results from the differential equation on which they are based.
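The flow-versus-map contrast above is easy to see numerically with the logistic map x_{n+1} = r x_n (1 - x_n); the parameter values below are just illustrative choices on either side of the first period-doubling bifurcation at r = 3.

```python
# Iterate the logistic map past its transient and collect the attractor:
# r = 2.8 gives a single attracting fixed point, r = 3.2 a period-2 cycle,
# whereas the corresponding differential equation only has fixed points.

def logistic_orbit(r, x0=0.2, burn=500, keep=8):
    x = x0
    for _ in range(burn):                 # discard the transient
        x = r * x * (1.0 - x)
    orbit = []
    for _ in range(keep):
        x = r * x * (1.0 - x)
        orbit.append(round(x, 6))        # round so cycle points deduplicate
    return sorted(set(orbit))

fixed = logistic_orbit(2.8)              # one point, near 1 - 1/r
cycle = logistic_orbit(3.2)              # two points: period doubling
```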
Gao, Wei; Liu, Yalong; Xu, Bo
2014-12-19
A new algorithm called Huber-based iterated divided difference filtering (HIDDF) is derived and applied to the cooperative localization of autonomous underwater vehicles (AUVs) supported by a single surface leader. The position states are estimated using acoustic range measurements relative to the leader, in which some disadvantages such as weak observability, large initial error and measurements contaminated with outliers are inherent. By integrating the merits of both iterated divided difference filtering (IDDF) and Huber's M-estimation methodology, the new filtering method can not only achieve more accurate estimation and faster convergence in contrast to standard divided difference filtering (DDF) under conditions of weak observability and large initial error, but also exhibit robustness with respect to outlier measurements, for which the standard IDDF would exhibit severe degradation in estimation accuracy. The correctness as well as the validity of the algorithm is demonstrated through experimental results.
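The Huber M-estimation idea the filter borrows can be shown on a toy problem. This is not the HIDDF measurement update itself, just the weighting scheme: residuals below a threshold keep full quadratic weight, larger ones are down-weighted linearly, which caps the influence of outlier range measurements.

```python
# Huber weight plus an iteratively reweighted robust mean as a toy stand-in
# for a robust measurement update. delta = 1.345 is the usual tuning constant.

def huber_weight(residual, delta=1.345):
    r = abs(residual)
    return 1.0 if r <= delta else delta / r

def robust_mean(xs, iters=20, delta=1.345):
    m = sum(xs) / len(xs)                 # start from the ordinary mean
    for _ in range(iters):
        w = [huber_weight(x - m, delta) for x in xs]
        m = sum(wi * xi for wi, xi in zip(w, xs)) / sum(w)
    return m

data = [1.0, 1.1, 0.9, 1.05, 50.0]       # last entry is a gross outlier
```

The plain mean of `data` is pulled above 10 by the outlier, while the Huber-weighted estimate stays near the inlier cluster.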
Quantum Hurwitz numbers and Macdonald polynomials
Harnad, J.
2016-11-01
Parametric families in the center Z(C[Sn]) of the group algebra of the symmetric group are obtained by identifying the indeterminates in the generating function for Macdonald polynomials as commuting Jucys-Murphy elements. Their eigenvalues provide coefficients in the double Schur function expansion of 2D Toda τ-functions of hypergeometric type. Expressing these in the basis of products of power sum symmetric functions, the coefficients may be interpreted geometrically as parametric families of quantum Hurwitz numbers, enumerating weighted branched coverings of the Riemann sphere. Combinatorially, they give quantum weighted sums over paths in the Cayley graph of Sn generated by transpositions. Dual pairs of bases for the algebra of symmetric functions with respect to the scalar product in which the Macdonald polynomials are orthogonal provide both the geometrical and combinatorial significance of these quantum weighted enumerative invariants.
International Nuclear Information System (INIS)
Messiaen, A.; Vervier, M.; Dumortier, P.; Grine, D.; Lamalle, P.U.; Durodie, F.; Koch, R.; Louche, F.; Weynants, R.
2009-01-01
The reference design for the ICRF antenna of ITER is constituted by a tight array of 24 straps grouped in eight triplets. The matching network must be load resilient for operation in ELMy discharges and must have antenna spectrum control for heating or current drive operation. The load resilience is based on the use of either hybrid couplers or conjugate-T circuits. However, the mutual coupling between the triplets at the low expected loading strongly counteracts the load resilience and the spectrum control. Using a mock-up of the ITER antenna array with adjustable water load matching solutions are designed. These solutions are derived from transmission line modelling based on the measured scattering matrix and are finally tested. We show that the array current spectrum can be controlled by the anti-node voltage distribution and that suitable decoupler circuits can not only neutralize the adverse mutual coupling effects but also monitor this anti-node voltage distribution. A matching solution using four 3 dB hybrids and the antenna current spectrum feedback control by the decouplers provides outstanding performance if each pair of poloidal triplets undergoes a same load variation. Finally, it is verified by modelling that this matching scenario has the same antenna spectrum and load resilience performances as the antenna array loaded by plasma as described by the TOPICA simulation. This is true for any phasing and frequency in the ITER frequency band. The conjugate-T solution is presently considered as a back-up option.
Directory of Open Access Journals (Sweden)
Wan Xiaohua
2012-06-01
Background: Three-dimensional (3D) reconstruction in electron tomography (ET) has emerged as a leading technique to elucidate the molecular structures of complex biological specimens. Blob-based iterative methods are advantageous reconstruction methods for 3D reconstruction in ET, but demand huge computational costs. Multiple graphics processing units (multi-GPUs) offer an affordable platform to meet these demands. However, a synchronous communication scheme between multi-GPUs leads to idle GPU time, and the weighted matrix involved in iterative methods cannot be loaded into GPUs, especially for large images, due to the limited available memory of GPUs. Results: In this paper we propose a multilevel parallel strategy combined with an asynchronous communication scheme and a blob-ELLR data structure to efficiently perform blob-based iterative reconstructions on multi-GPUs. The asynchronous communication scheme is used to minimize idle GPU time by overlapping communications with computations. The blob-ELLR data structure needs only about 1/16 of the storage space of the ELLPACK-R (ELLR) data structure and yields significant acceleration. Conclusions: Experimental results indicate that the multilevel parallel scheme combined with the asynchronous communication scheme and the blob-ELLR data structure allows efficient implementations of 3D reconstruction in ET on multi-GPUs.
Chromatic polynomials of random graphs
International Nuclear Information System (INIS)
Van Bussel, Frank; Fliegner, Denny; Timme, Marc; Ehrlich, Christoph; Stolzenberg, Sebastian
2010-01-01
Chromatic polynomials and related graph invariants are central objects in both graph theory and statistical physics. Computational difficulties, however, have so far restricted studies of such polynomials to graphs that were either very small, very sparse or highly structured. Recent algorithmic advances (Timme et al 2009 New J. Phys. 11 023001) now make it possible to compute chromatic polynomials for moderately sized graphs of arbitrary structure and number of edges. Here we present chromatic polynomials of ensembles of random graphs with up to 30 vertices, over the entire range of edge density. We specifically focus on the locations of the zeros of the polynomial in the complex plane. The results indicate that the chromatic zeros of random graphs have a very consistent layout. In particular, the crossing point, the point at which the chromatic zeros with non-zero imaginary part approach the real axis, scales linearly with the average degree over most of the density range. While the scaling laws obtained are purely empirical, if they continue to hold in general there are significant implications: the crossing points of chromatic zeros in the thermodynamic limit separate systems with zero ground state entropy from systems with positive ground state entropy, the latter an exception to the third law of thermodynamics.
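For tiny graphs, the chromatic polynomial in the abstract above can be evaluated directly with the classical deletion-contraction recursion P(G, k) = P(G - e, k) - P(G / e, k); this exponential-time sketch is exactly what the cited algorithmic advances improve upon for ~30-vertex graphs.

```python
# Deletion-contraction evaluation of the chromatic polynomial P(G, k).
# Exponential time: suitable only for very small graphs.

def chromatic(vertices, edges, k):
    if not edges:
        return k ** len(vertices)         # edgeless graph: k^n colorings
    (u, v), rest = edges[0], edges[1:]
    deleted = chromatic(vertices, rest, k)        # delete the edge
    # contract the edge: merge vertex v into u, dropping loops and duplicates
    merged = [w for w in vertices if w != v]
    contracted = []
    for (a, b) in rest:
        a = u if a == v else a
        b = u if b == v else b
        if a != b and (a, b) not in contracted and (b, a) not in contracted:
            contracted.append((a, b))
    return deleted - chromatic(merged, contracted, k)

# Triangle: P(K3, k) = k (k - 1) (k - 2), so 3 colors give 6 proper colorings.
```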
Thermal stress analysis of gravity support system for ITER based on ANSYS
International Nuclear Information System (INIS)
Liang Shangming; Yan Xijiang; Huang Yufeng; Wang Xianzhou; Hou Binglin; Li Pengyuan; Jian Guangde; Liu Dequan; Zhou Caipin
2009-01-01
A method for building the finite element model of the gravity support system for the International Thermonuclear Experimental Reactor (ITER) was proposed according to the characteristics of the gravity support system with cyclic symmetry. A mesh dividing method with high precision and an acceptable computational scale was used, and a three-dimensional finite element model of a toroidal 20-degree sector of the gravity support system was built using ANSYS. Meanwhile, steady-state thermal analysis and thermal-structural coupling analysis of the gravity support system were performed. The thermal stress distributions and the maximal thermal stress values of all parts of the gravity support system were obtained, and the stress intensity of the parts of the gravity support system was analyzed. The results of the thermal stress analysis lay a solid foundation for the design and improvement of the gravity support system for ITER. (authors)
The structure analysis of ITER cryostat based on the finite element method
International Nuclear Information System (INIS)
Liang Chao; Ye, M.Y.; Yao, D.M.; Cao, Lei; Zhou, Z.B.; Xu, Teijun; Wang Jian
2013-01-01
In the ITER project the cryostat is one of the most important components. The cryostat transfers all the loads deriving from the tokamak basic machine, and from the cryostat itself, to the floor of the tokamak pit (during normal and off-normal operational regimes, and under specified accident conditions). This paper studies the dynamic structural strength of the ITER cryostat during tokamak operation. First the paper introduces the types of loads and the importance of each load type to the study. Then it presents the method of building the model, the principles of model simplification, the boundary conditions and the way loads are applied to the cryostat. Finally, the analysis results and the strength issues of the cryostat are discussed, and recommendations based on the analysis results are given.
A Theoretical Framework for Soft-Information-Based Synchronization in Iterative (Turbo) Receivers
Directory of Open Access Journals (Sweden)
Lottici Vincenzo
2005-01-01
This contribution considers turbo synchronization, that is to say, the use of soft data information to estimate parameters like the carrier phase, frequency, or timing offsets of a modulated signal within an iterative data demodulator. In turbo synchronization, the receiver exploits the soft decisions computed at each turbo decoding iteration to provide a reliable estimate of some signal parameters. The aim of our paper is to show that such a "turbo-estimation" approach can be regarded as a special case of the expectation-maximization (EM) algorithm. This leads to a general theoretical framework for turbo synchronization that allows one to derive parameter estimation procedures for the carrier phase and frequency offset, as well as for the timing offset and signal amplitude. The proposed mathematical framework is illustrated by simulation results reported for the particular case of carrier phase and frequency offset estimation of a turbo-coded 16-QAM signal.
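The alternation at the heart of turbo synchronization can be illustrated on a drastically simplified toy (not the paper's EM derivation): BPSK over a noiseless channel, alternating between hard symbol decisions and a carrier-phase re-estimate given those decisions.

```python
import cmath

# Iteratively refine a carrier-phase estimate from decision feedback:
# decide symbols under the current phase, then re-fit the phase to them.

def estimate_phase(received, iters=10):
    phi = 0.0                                      # initial phase guess
    for _ in range(iters):
        rotated = [y * cmath.exp(-1j * phi) for y in received]
        decisions = [1.0 if z.real >= 0 else -1.0 for z in rotated]
        phi = cmath.phase(sum(y * d for y, d in zip(received, decisions)))
    return phi

true_phi = 0.4
symbols = [1, -1, 1, 1, -1, -1, 1, -1]
received = [s * cmath.exp(1j * true_phi) for s in symbols]
# estimate_phase(received) recovers ~0.4 rad on this noiseless example
```

In a real turbo receiver the hard decisions are replaced by the decoder's soft symbol posteriors, which is what makes the scheme an instance of EM.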
Single image super-resolution based on approximated Heaviside functions and iterative refinement
Wang, Xin-Yu; Huang, Ting-Zhu; Deng, Liang-Jian
2018-01-01
One method of solving the single-image super-resolution problem is to use Heaviside functions. This has been done previously by making a binary classification of image components as “smooth” and “non-smooth”, describing these with approximated Heaviside functions (AHFs), and iteration including l1 regularization. We now introduce a new method in which the binary classification of image components is extended to different degrees of smoothness and non-smoothness, these components being represented by various classes of AHFs. Taking into account the sparsity of the non-smooth components, their coefficients are l1 regularized. In addition, to pick up more image details, the new method uses an iterative refinement for the residuals between the original low-resolution input and the downsampled resulting image. Experimental results showed that the new method is superior to the original AHF method and to four other published methods. PMID:29329298
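The approximated Heaviside functions (AHFs) above are smooth surrogates for the step function; a common arctan form, used here as an illustrative assumption rather than the paper's exact definition, is H_eps(x) = 1/2 + atan(x / eps) / pi, where eps controls the degree of smoothness of the component class.

```python
import math

# Smooth approximation to the Heaviside step: smaller eps -> sharper step,
# i.e. a "less smooth" image-component class in the terminology above.

def ahf(x, eps):
    return 0.5 + math.atan(x / eps) / math.pi

# At x = 0.1 the sharp AHF is already close to 1, while the smooth one
# is still near 1/2.
sharp = ahf(0.1, 0.001)
smooth = ahf(0.1, 1.0)
```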
International Nuclear Information System (INIS)
Gordon, C.W.
2005-01-01
ITER was fortunate to have four countries interested in ITER siting to the point where licensing discussions were initiated. This experience uncovered the challenges of licensing a first of a kind, fusion machine under different licensing regimes and helped prepare the way for the site specific licensing process. These initial steps in licensing ITER have allowed for refining the safety case and provide confidence that the design and safety approach will be licensable. With site-specific licensing underway, the necessary regulatory submissions have been defined and are well on the way to being completed. Of course, there is still work to be done and details to be sorted out. However, the informal international discussions to bring both the proponent and regulatory authority up to a common level of understanding have laid the foundation for a licensing process that should proceed smoothly. This paper provides observations from the perspective of the International Team. (author)
Iterative h-minima-based marker-controlled watershed for cell nucleus segmentation.
Koyuncu, Can Fahrettin; Akhan, Ece; Ersahin, Tulin; Cetin-Atalay, Rengul; Gunduz-Demir, Cigdem
2016-04-01
Automated microscopy imaging systems facilitate high-throughput screening in molecular cell biology research. The first step of these systems is cell nucleus segmentation, which has a great impact on the success of the overall system. The marker-controlled watershed is a technique commonly used in previous studies for nucleus segmentation. These studies define their markers by finding regional minima on the intensity/gradient and/or distance transform maps. They typically use the h-minima transform beforehand to suppress noise on these maps. The selection of the h value is critical; unnecessarily small values do not sufficiently suppress the noise, resulting in false and oversegmented markers, whereas unnecessarily large ones suppress too many pixels, causing missing and undersegmented markers. Because cell nuclei show different characteristics within an image, the same h value may not define correct markers for all the nuclei. To address this issue, in this work, we propose a new watershed algorithm that iteratively identifies its markers, considering a set of different h values. In each iteration, the proposed algorithm defines a set of candidates using a particular h value and selects the markers from those candidates provided that they fulfill the size requirement. Working with widefield fluorescence microscopy images, our experiments reveal that the use of multiple h values in our iterative algorithm leads to better segmentation results than its counterparts. © 2016 International Society for Advancement of Cytometry.
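The core iteration can be illustrated in one dimension with plain NumPy/SciPy: an h-minima transform computed by grayscale reconstruction-by-erosion, followed by a sweep over h values that keeps candidate minima satisfying a size requirement. The signal, h values, and size bounds are toy values, not those of the paper.

```python
import numpy as np
from scipy import ndimage

def h_minima_markers(f, h):
    """Regional minima of the h-minima transform of a 1-D signal f.
    The transform is grayscale reconstruction-by-erosion of f + h over f,
    computed by iterating a 3-point erosion clipped from below by f."""
    r = f + h
    while True:
        eroded = np.minimum(np.minimum(np.roll(r, 1), np.roll(r, -1)), r)
        eroded[0], eroded[-1] = min(r[0], r[1]), min(r[-2], r[-1])
        nxt = np.maximum(eroded, f)
        if np.array_equal(nxt, r):
            break
        r = nxt
    pad = np.pad(r, 1, constant_values=np.inf)
    is_min = (r <= pad[:-2]) & (r <= pad[2:])      # regional-minimum plateaus
    return ndimage.label(is_min)

# Sweep candidate h values; keep candidates that meet the size requirement.
f = np.array([5, 3, 1, 1, 4, 2, 2, 5], float)      # two basins, depths 3 and 2
markers, claimed = [], np.zeros(len(f), bool)
for h in (3.0, 1.0):                               # from large h to small h
    labels, n = h_minima_markers(f, h)
    for k in range(1, n + 1):
        idx = np.flatnonzero(labels == k)
        if 2 <= len(idx) <= 3 and not claimed[idx].any():
            markers.append(idx)
            claimed[idx] = True
```

With h = 3 the two basins merge into one oversized plateau (rejected by the size filter); with h = 1 both basins yield correctly sized candidates, illustrating why a single h cannot serve all minima.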
An iterative fast sweeping based eikonal solver for tilted orthorhombic media
Waheed, Umair bin; Yarman, Can Evren; Flagg, Garret
2014-01-01
Computing first-arrival traveltimes of quasi-P waves in the presence of anisotropy is important for high-end near-surface modeling, microseismic-source localization, and fractured-reservoir characterization, and requires solving an anisotropic eikonal equation. Anisotropy deviating from elliptical anisotropy introduces higher-order nonlinearity into the eikonal equation, which makes solving the eikonal equation a challenge. We address this challenge by iteratively solving a sequence of simpler tilted elliptically anisotropic eikonal equations. At each iteration, the source function is updated to capture the effects of the higher order nonlinear terms. We use Aitken extrapolation to speed up the convergence rate of the iterative algorithm. The result is an algorithm for first-arrival traveltime computations in tilted anisotropic media. We demonstrate our method on tilted transversely isotropic media and tilted orthorhombic media. Our numerical tests demonstrate that the proposed method can match the first arrivals obtained by wavefield extrapolation, even for strong anisotropy and complex structures. Therefore, for the cases where one- or two-point ray tracing fails, our method may be a potential substitute for computing traveltimes. Our approach can be extended to anisotropic media with lower symmetries, such as monoclinic or even triclinic media.
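Aitken extrapolation, the convergence accelerator mentioned above, can be shown on any linearly convergent sequence; the fixed-point iteration below is a generic stand-in for the sequence of elliptic-eikonal solutions.

```python
import numpy as np

def aitken(seq):
    """Aitken's delta-squared acceleration: s_n - (Δs_n)^2 / (Δ²s_n)."""
    s = np.asarray(seq, float)
    d1 = s[1:-1] - s[:-2]                 # forward differences Δs_n
    d2 = s[2:] - 2.0 * s[1:-1] + s[:-2]   # second differences Δ²s_n
    return s[:-2] - d1 ** 2 / d2

# Fixed-point iteration x <- cos(x), converging linearly to ~0.7390851
xs = [1.0]
for _ in range(15):
    xs.append(np.cos(xs[-1]))
root = 0.7390851332151607
err_plain = abs(xs[-1] - root)
err_aitken = abs(aitken(xs)[-1] - root)
```

The accelerated value built from the last three iterates is far closer to the limit than the last iterate itself, which is exactly the effect exploited to cut the number of eikonal sweeps.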
An iterative, fast-sweeping-based eikonal solver for 3D tilted anisotropic media
Waheed, Umair bin; Yarman, Can Evren; Flagg, Garret
2015-01-01
Computation of first-arrival traveltimes for quasi-P waves in the presence of anisotropy is important for high-end near-surface modeling, microseismic-source localization, and fractured-reservoir characterization - and it requires solving an anisotropic eikonal equation. Anisotropy deviating from elliptical anisotropy introduces higher order nonlinearity into the eikonal equation, which makes solving the eikonal equation a challenge. We addressed this challenge by iteratively solving a sequence of simpler tilted elliptically anisotropic eikonal equations. At each iteration, the source function was updated to capture the effects of the higher order nonlinear terms. We used Aitken's extrapolation to speed up the convergence rate of the iterative algorithm. The result is an algorithm for computing first-arrival traveltimes in tilted anisotropic media. We evaluated the applicability and usefulness of our method on tilted transversely isotropic media and tilted orthorhombic media. Our numerical tests determined that the proposed method matches the first arrivals obtained by wavefield extrapolation, even for strongly anisotropic and highly complex subsurface structures. Thus, for the cases where two-point ray tracing fails, our method can be a potential substitute for computing traveltimes. The approach presented here can be easily extended to compute first-arrival traveltimes for anisotropic media with lower symmetries, such as monoclinic or even triclinic media.
An iterative, fast-sweeping-based eikonal solver for 3D tilted anisotropic media
Waheed, Umair bin
2015-03-30
Computation of first-arrival traveltimes for quasi-P waves in the presence of anisotropy is important for high-end near-surface modeling, microseismic-source localization, and fractured-reservoir characterization - and it requires solving an anisotropic eikonal equation. Anisotropy deviating from elliptical anisotropy introduces higher order nonlinearity into the eikonal equation, which makes solving the eikonal equation a challenge. We addressed this challenge by iteratively solving a sequence of simpler tilted elliptically anisotropic eikonal equations. At each iteration, the source function was updated to capture the effects of the higher order nonlinear terms. We used Aitken's extrapolation to speed up the convergence rate of the iterative algorithm. The result is an algorithm for computing first-arrival traveltimes in tilted anisotropic media. We evaluated the applicability and usefulness of our method on tilted transversely isotropic media and tilted orthorhombic media. Our numerical tests determined that the proposed method matches the first arrivals obtained by wavefield extrapolation, even for strongly anisotropic and highly complex subsurface structures. Thus, for the cases where two-point ray tracing fails, our method can be a potential substitute for computing traveltimes. The approach presented here can be easily extended to compute first-arrival traveltimes for anisotropic media with lower symmetries, such as monoclinic or even triclinic media.
An iterative fast sweeping based eikonal solver for tilted orthorhombic media
Waheed, Umair bin
2014-08-01
Computing first-arrival traveltimes of quasi-P waves in the presence of anisotropy is important for high-end near-surface modeling, microseismic-source localization, and fractured-reservoir characterization, and requires solving an anisotropic eikonal equation. Anisotropy deviating from elliptical anisotropy introduces higher-order nonlinearity into the eikonal equation, which makes solving the eikonal equation a challenge. We address this challenge by iteratively solving a sequence of simpler tilted elliptically anisotropic eikonal equations. At each iteration, the source function is updated to capture the effects of the higher order nonlinear terms. We use Aitken extrapolation to speed up the convergence rate of the iterative algorithm. The result is an algorithm for first-arrival traveltime computations in tilted anisotropic media. We demonstrate our method on tilted transversely isotropic media and tilted orthorhombic media. Our numerical tests demonstrate that the proposed method can match the first arrivals obtained by wavefield extrapolation, even for strong anisotropy and complex structures. Therefore, for the cases where one- or two-point ray tracing fails, our method may be a potential substitute for computing traveltimes. Our approach can be extended to anisotropic media with lower symmetries, such as monoclinic or even triclinic media.
Energy Technology Data Exchange (ETDEWEB)
Kim, Chankyu; Kim, Yewon [Department of Nuclear and Quantum Engineering, KAIST, Daejeon 305-701 (Korea, Republic of); Moon, Myungkook [Neutron Instrumentation Division, KAERI, Daejeon 305-353 (Korea, Republic of); Cho, Gyuseong, E-mail: gscho@kaist.ac.kr [Department of Nuclear and Quantum Engineering, KAIST, Daejeon 305-701 (Korea, Republic of)
2015-09-21
Plastic scintillators have been used for gamma-ray detection in the fields of dosimetry and homeland security because of desirable characteristics such as a fast decay time, a low production cost, availability in large sizes, and tissue equivalence. Gaussian energy broadening (GEB) in MCNP simulation is an effective treatment for tallies to calculate the broadened response function of a detector so that it resembles measured spectra. The full width at half maximum (FWHM) of a photopeak has generally been used to compute the input parameters required for the GEB treatment. However, it is hard to find a photopeak in gamma spectra measured with plastic scintillators, so the input parameters for the GEB must be computed in another way. In this study, an iterative method for GEB-treated MCNP simulation to calculate the response function of a plastic scintillator is suggested. Instead of the photopeak, the Compton maximum and Compton edge were used to estimate the energy broadening in the measured spectra and to determine the GEB parameters. In a demonstration with a CsI(Tl) scintillator, the proposed iterative simulation produced gamma spectra similar to those of the existing method using photopeaks. The proposed method was then applied to a polystyrene scintillator, and the simulation results were in agreement with the measured spectra after only a few iterations.
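The GEB treatment itself can be sketched directly: each spectral line is smeared by a Gaussian whose width follows FWHM(E) = a + b·sqrt(E + c·E²). The a, b, c values below are illustrative placeholders, not parameters of any real scintillator.

```python
import numpy as np

def geb_broaden(e_grid, counts, a=0.0, b=0.05, c=0.0):
    """Apply Gaussian energy broadening, FWHM(E) = a + b*sqrt(E + c*E^2)."""
    out = np.zeros_like(e_grid)
    for ei, ni in zip(e_grid, counts):
        if ni == 0.0:
            continue                     # skip empty bins (avoids sigma = 0)
        fwhm = a + b * np.sqrt(ei + c * ei ** 2)
        sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
        g = np.exp(-0.5 * ((e_grid - ei) / sigma) ** 2)
        out += ni * g / g.sum()          # redistribute counts, preserve area
    return out

e_grid = np.linspace(0.0, 2.0, 2001)                 # MeV, 1 keV bins
line = np.zeros_like(e_grid)
line[np.argmin(np.abs(e_grid - 0.662))] = 1.0        # single line at 662 keV
spec = geb_broaden(e_grid, line)
```

Fitting b to a measured Compton-edge width, as the paper proposes, then amounts to iterating over b until the simulated and measured edge shapes agree.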
Polynomial weights and code constructions
DEFF Research Database (Denmark)
Massey, J; Costello, D; Justesen, Jørn
1973-01-01
For any nonzero element c of a general finite field GF(q), it is shown that the polynomials (x - c)^i, i = 0, 1, 2, ..., have the "weight-retaining" property that any linear combination of these polynomials with coefficients in GF(q) has Hamming weight at least as great as that of the minimum-degree polynomial included. This fundamental property is then used as the key to a variety of code constructions including 1) a simplified derivation of the binary Reed-Muller codes and, for any prime p greater than 2, a new extensive class of p-ary "Reed-Muller codes," 2) a new class of "repeated-root" cyclic codes ... of long-constraint-length binary convolutional codes derived from 2^r-ary Reed-Solomon codes, and 6) a new class of q-ary "repeated-root" constacyclic codes with an algebraic decoding algorithm.
Orthogonal Polynomials and Special Functions
Assche, Walter
2003-01-01
The set of lectures from the Summer School held in Leuven in 2002 provide an up-to-date account of recent developments in orthogonal polynomials and special functions, in particular for algorithms for computer algebra packages, 3nj-symbols in representation theory of Lie groups, enumeration, multivariable special functions and Dunkl operators, asymptotics via the Riemann-Hilbert method, exponential asymptotics and the Stokes phenomenon. The volume aims at graduate students and post-docs working in the field of orthogonal polynomials and special functions, and in related fields interacting with orthogonal polynomials, such as combinatorics, computer algebra, asymptotics, representation theory, harmonic analysis, differential equations, physics. The lectures are self-contained requiring only a basic knowledge of analysis and algebra, and each includes many exercises.
Study of a Biparametric Family of Iterative Methods
Directory of Open Access Journals (Sweden)
B. Campos
2014-01-01
The dynamics of a biparametric family for solving nonlinear equations is studied on quadratic polynomials. This biparametric family includes the c-iterative methods and the well-known Chebyshev-Halley family. We find analytical expressions for the fixed and critical points by solving degree-six polynomials. We use the free critical points to obtain the parameter planes and, by observing them, we specify some values of (α, c) with clearly stable and unstable behaviors.
Energy Technology Data Exchange (ETDEWEB)
Li, Ke; Tang, Jie [Department of Medical Physics, University of Wisconsin-Madison, 1111 Highland Avenue, Madison, Wisconsin 53705 (United States); Chen, Guang-Hong, E-mail: gchen7@wisc.edu [Department of Medical Physics, University of Wisconsin-Madison, 1111 Highland Avenue, Madison, Wisconsin 53705 and Department of Radiology, University of Wisconsin-Madison, 600 Highland Avenue, Madison, Wisconsin 53792 (United States)
2014-04-15
Purpose: To reduce radiation dose in CT imaging, the statistical model based iterative reconstruction (MBIR) method has been introduced for clinical use. Based on the principle of MBIR and its nonlinear nature, the noise performance of MBIR is expected to be different from that of the well-understood filtered backprojection (FBP) reconstruction method. The purpose of this work is to experimentally assess the unique noise characteristics of MBIR using a state-of-the-art clinical CT system. Methods: Three physical phantoms, including a water cylinder and two pediatric head phantoms, were scanned in axial scanning mode using a 64-slice CT scanner (Discovery CT750 HD, GE Healthcare, Waukesha, WI) at seven different mAs levels (5, 12.5, 25, 50, 100, 200, 300). At each mAs level, each phantom was repeatedly scanned 50 times to generate an image ensemble for noise analysis. Both the FBP method with a standard kernel and the MBIR method (Veo®, GE Healthcare, Waukesha, WI) were used for CT image reconstruction. Three-dimensional (3D) noise power spectrum (NPS), two-dimensional (2D) NPS, and zero-dimensional NPS (noise variance) were assessed both globally and locally. Noise magnitude, noise spatial correlation, noise spatial uniformity and their dose dependence were examined for the two reconstruction methods. Results: (1) At each dose level and at each frequency, the magnitude of the NPS of MBIR was smaller than that of FBP. (2) While the shape of the NPS of FBP was dose-independent, the shape of the NPS of MBIR was strongly dose-dependent; lower dose led to a “redder” NPS with a lower mean frequency value. (3) The noise standard deviation (σ) of MBIR and dose were found to be related through a power law of σ ∝ (dose)^−β with exponent β ≈ 0.25, which violates the classical σ ∝ (dose)^−0.5 power law of FBP. (4) With MBIR, noise reduction was most prominent for thin image slices. (5) MBIR led to better noise spatial
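The power-law exponent β can be estimated by linear regression in log-log space; the σ values below are synthetic stand-ins for measured noise, constructed to follow the two scalings discussed above.

```python
import numpy as np

# mAs levels from the protocol above; sigma values are synthetic examples
# following sigma ∝ dose^(-beta) with the FBP and MBIR-like exponents.
dose = np.array([5.0, 12.5, 25.0, 50.0, 100.0, 200.0, 300.0])
sigma_fbp = 40.0 * dose ** -0.5      # classical FBP scaling, beta = 0.5
sigma_mbir = 12.0 * dose ** -0.25    # MBIR-like scaling, beta = 0.25

def fit_beta(dose, sigma):
    """Slope of log(sigma) vs log(dose) gives -beta."""
    slope, _ = np.polyfit(np.log(dose), np.log(sigma), 1)
    return -slope
```

Applied to real repeated-scan noise measurements, the same regression distinguishes the two reconstruction methods by their fitted exponents.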
Lin, Qingyang; Andrew, Matthew; Thompson, William; Blunt, Martin J.; Bijeljic, Branko
2018-05-01
Non-invasive laboratory-based X-ray microtomography has been widely applied in many industrial and research disciplines. However, the main barrier to the use of laboratory systems compared to a synchrotron beamline is its much longer image acquisition time (hours per scan compared to seconds to minutes at a synchrotron), which results in limited application for dynamic in situ processes. Therefore, the majority of existing laboratory X-ray microtomography is limited to static imaging; relatively fast imaging (tens of minutes per scan) can only be achieved by sacrificing imaging quality, e.g. reducing exposure time or number of projections. To alleviate this barrier, we introduce an optimized implementation of a well-known iterative reconstruction algorithm that allows users to reconstruct tomographic images with reasonable image quality, but requires lower X-ray signal counts and fewer projections than conventional methods. Quantitative analysis and comparison between the iterative and the conventional filtered back-projection reconstruction algorithm was performed using a sandstone rock sample with and without liquid phases in the pore space. Overall, by implementing the iterative reconstruction algorithm, the required image acquisition time for samples such as this, with sparse object structure, can be reduced by a factor of up to 4 without measurable loss of sharpness or signal to noise ratio.
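The abstract does not name the iterative algorithm that was optimized; the loop below is a generic SIRT-type sketch, x ← x + C Aᵀ R (b − A x), on a small random system standing in for the projection geometry.

```python
import numpy as np

# Toy system: A plays the role of the projection (system) matrix, b the
# measured projections. All values are illustrative, not from the paper.
rng = np.random.default_rng(1)
A = rng.random((30, 10))
x_true = rng.random(10)
b = A @ x_true                        # noise-free toy "sinogram"

r = 1.0 / A.sum(axis=1)               # inverse row sums  (R diagonal)
c = 1.0 / A.sum(axis=0)               # inverse column sums (C diagonal)
x = np.zeros(10)
for _ in range(20000):
    x = x + c * (A.T @ (r * (b - A @ x)))   # SIRT update
```

With the row/column-sum normalization the iteration is contractive for nonnegative A, so the reconstruction converges to the true solution of the consistent system; in practice the iteration count trades off against image quality, which is the lever the paper exploits for sparse samples.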
Polynomial fuzzy observer designs: a sum-of-squares approach.
Tanaka, Kazuo; Ohtake, Hiroshi; Seo, Toshiaki; Tanaka, Motoyasu; Wang, Hua O
2012-10-01
This paper presents a sum-of-squares (SOS) approach to polynomial fuzzy observer designs for three classes of polynomial fuzzy systems. The proposed SOS-based framework provides a number of innovations and improvements over the existing linear matrix inequality (LMI)-based approaches to Takagi-Sugeno (T-S) fuzzy controller and observer designs. First, we briefly summarize previous results with respect to a polynomial fuzzy system that is a more general representation of the well-known T-S fuzzy system. Next, we propose polynomial fuzzy observers to estimate states in three classes of polynomial fuzzy systems and derive SOS conditions to design polynomial fuzzy controllers and observers. A remarkable feature of the SOS design conditions for the first two classes (Classes I and II) is that they realize the so-called separation principle, i.e., the polynomial fuzzy controller and observer for each class can be separately designed while still guaranteeing the stability of the overall control system and convergence of the state-estimation error (via the observer) to zero. Although the separation principle does not hold for the last class (Class III), we propose an algorithm to design a polynomial fuzzy controller and observer that satisfy the stability of the overall control system and drive the state-estimation error (via the observer) to zero. All the design conditions in the proposed approach can be represented in terms of SOS and are symbolically and numerically solved via the recently developed SOSTOOLS and a semidefinite-program solver, respectively. To illustrate the validity and applicability of the proposed approach, three design examples are provided. The examples demonstrate the advantages of the SOS-based approaches over the existing LMI approaches to T-S fuzzy observer designs.
Elsheikh, Ahmed H.
2014-02-01
An efficient Bayesian calibration method based on the nested sampling (NS) algorithm and non-intrusive polynomial chaos method is presented. Nested sampling is a Bayesian sampling algorithm that builds a discrete representation of the posterior distributions by iteratively re-focusing a set of samples to high likelihood regions. NS allows representing the posterior probability density function (PDF) with a smaller number of samples and reduces the curse of dimensionality effects. The main difficulty of the NS algorithm is in the constrained sampling step which is commonly performed using a random walk Markov Chain Monte-Carlo (MCMC) algorithm. In this work, we perform a two-stage sampling using a polynomial chaos response surface to filter out rejected samples in the Markov Chain Monte-Carlo method. The combined use of nested sampling and the two-stage MCMC based on approximate response surfaces provides significant computational gains in terms of the number of simulation runs. The proposed algorithm is applied for calibration and model selection of subsurface flow models. © 2013.
A comparison of companion matrix methods to find roots of a trigonometric polynomial
Boyd, John P.
2013-08-01
A trigonometric polynomial is a truncated Fourier series of the form f_N(t) ≡ Σ_{j=0}^{N} a_j cos(jt) + Σ_{j=1}^{N} b_j sin(jt). It has been previously shown by the author that zeros of such a polynomial can be computed as the eigenvalues of a companion matrix with elements which are complex-valued combinations of the Fourier coefficients, the "CCM" method. However, previous work provided no examples, so one goal of this new work is to experimentally test the CCM method. A second goal is to introduce a new alternative, the elimination/Chebyshev algorithm, and experimentally compare it with the CCM scheme. The elimination/Chebyshev matrix (ECM) algorithm yields a companion matrix with real-valued elements, albeit at the price of usefulness only for real roots. The new elimination scheme first converts the trigonometric rootfinding problem to a pair of polynomial equations in the variables (c, s) where c ≡ cos(t) and s ≡ sin(t). The elimination method next reduces the system to a single univariate polynomial P(c). We show that this same polynomial is the resultant of the system and is also a generator of the Groebner basis with lexicographic ordering for the system. Both methods give very high numerical accuracy for real-valued roots, typically at least 11 decimal places in Matlab/IEEE 754 16-digit floating point arithmetic. The CCM algorithm is typically one or two decimal places more accurate, though these differences disappear if the roots are "Newton-polished" by a single Newton's iteration. The complex-valued matrix is accurate for complex-valued roots, too, though accuracy decreases with the magnitude of the imaginary part of the root. The cost of both methods scales as O(N^3) floating point operations. In spite of intimate connections of the elimination/Chebyshev scheme to two well-established technologies for solving systems of equations, resultants and Groebner bases, and the advantages of using only real-valued arithmetic to obtain a companion matrix with real-valued elements
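A minimal sketch of the companion-matrix idea: with z = exp(it), the trigonometric polynomial becomes a Laurent polynomial in z, and multiplying by z^N gives an ordinary degree-2N polynomial whose unit-circle roots yield the real roots t = arg(z). Here `numpy.roots` (itself a companion-matrix eigenvalue method) does the eigenvalue work; this is an illustration of the reduction, not the paper's exact CCM construction.

```python
import numpy as np

def trig_roots_real(a, b):
    """Real roots of f(t) = sum_j a[j] cos(jt) + sum_j b[j-1] sin(jt).

    a = [a0, ..., aN], b = [b1, ..., bN]. Builds the Laurent coefficients
    h_k of z^(-N)..z^N, multiplies by z^N, and keeps unit-circle roots."""
    N = len(a) - 1
    h = np.zeros(2 * N + 1, complex)
    h[N] = a[0]
    for j in range(1, N + 1):
        h[N + j] = (a[j] - 1j * b[j - 1]) / 2.0
        h[N - j] = (a[j] + 1j * b[j - 1]) / 2.0
    z = np.roots(h[::-1])                 # highest-degree coefficient first
    z = z[np.abs(np.abs(z) - 1.0) < 1e-8] # keep roots on the unit circle
    return np.sort(np.angle(z))

roots = trig_roots_real([0.0, 1.0], [0.0])    # f(t) = cos(t)
```

For f(t) = cos(t) the associated polynomial is (1 + z²)/2, whose roots ±i map back to t = ±π/2.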
Numerical Simulation of Polynomial-Speed Convergence Phenomenon
Li, Yao; Xu, Hui
2017-11-01
We provide a hybrid method that captures the polynomial speed of convergence and polynomial speed of mixing for Markov processes. The hybrid method that we introduce is based on the coupling technique and renewal theory. We propose to replace some estimates in classical results about the ergodicity of Markov processes by numerical simulations when the corresponding analytical proof is difficult. After that, all remaining conclusions can be derived from rigorous analysis. We then apply our results to seek numerical justification for the ergodicity of two 1D microscopic heat conduction models. The mixing rates of these two models are expected to be polynomial but are very difficult to prove. In both examples, our numerical results match the expected polynomial mixing rate well.
Energy Technology Data Exchange (ETDEWEB)
Velikhov, E.P. [Kurchatov Institute of Atomic Energy, Moscow (Russian Federation)
2002-10-01
ITER is the unique and most straightforward way to study burning plasma science in the near future. ITER has a firm physics ground based on results from the world's tokamaks in terms of confinement, stability, heating, current drive, divertor performance, and energetic particle confinement, to the extent required in ITER. The flexibility of ITER will allow the exploration of a broad operation space of fusion power, beta, pulse length and Q values in various operational scenarios. The success of the engineering R and D programs has demonstrated that each party has sufficient capability to produce all the necessary equipment in agreement with the specifications of ITER. The knowledge and technologies acquired in the ITER project allow us to demonstrate the scientific and technical feasibility of a fusion reactor. It can be concluded that ITER must be constructed in the near future. (author)
International Nuclear Information System (INIS)
Velikhov, E.P.
2002-01-01
ITER is the unique and most straightforward way to study burning plasma science in the near future. ITER has a firm physics ground based on results from the world's tokamaks in terms of confinement, stability, heating, current drive, divertor performance, and energetic particle confinement, to the extent required in ITER. The flexibility of ITER will allow the exploration of a broad operation space of fusion power, beta, pulse length and Q values in various operational scenarios. The success of the engineering R and D programs has demonstrated that each party has sufficient capability to produce all the necessary equipment in agreement with the specifications of ITER. The knowledge and technologies acquired in the ITER project allow us to demonstrate the scientific and technical feasibility of a fusion reactor. It can be concluded that ITER must be constructed in the near future. (author)
ITER ITA newsletter. No. 22, May 2005
International Nuclear Information System (INIS)
2005-06-01
This issue of the ITER ITA (ITER Transitional Arrangements) newsletter contains concise information about the Japanese Participant Team's recent activities in the ITER Transitional Arrangements (ITA) phase and about an ITER-related meeting, the Fourth IAEA Technical Meeting (IAEA-TM) on Negative Ion Based Neutral Beam Injectors, which was held in Padova, Italy, from 9 to 11 May 2005.
Accuracy Improvement for Light-Emitting-Diode-Based Colorimeter by Iterative Algorithm
Yang, Pao-Keng
2011-09-01
We present a simple algorithm, combining an interpolating method with an iterative calculation, to enhance the resolution of spectral reflectance by removing from it the spectral broadening caused by the finite bandwidth of the light-emitting diode (LED). The proposed algorithm can be used to improve the accuracy of a reflective colorimeter using multicolor LEDs as probing light sources and is also applicable when the probing LEDs have different bandwidths in different spectral ranges, a case to which the powerful deconvolution method cannot be applied.
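The paper's algorithm combines interpolation with iteration; the sketch below illustrates only the iterative part, using a generic van Cittert-style loop that removes a known broadening kernel (a Gaussian stand-in for the LED bandwidth) from a measured spectrum.

```python
import numpy as np

def van_cittert(measured, kernel, n_iter=200):
    """Iteratively sharpen: est <- est + (measured - kernel * est)."""
    est = measured.copy()
    for _ in range(n_iter):
        reblurred = np.convolve(est, kernel, mode="same")
        est = est + (measured - reblurred)   # correct by the residual
    return est

x = np.linspace(-1.0, 1.0, 201)
truth = np.exp(-0.5 * (x / 0.05) ** 2)        # narrow spectral feature
k = np.exp(-0.5 * (np.arange(-4, 5) / 2.0) ** 2)
kernel = k / k.sum()                          # LED broadening (assumed known)
measured = np.convolve(truth, kernel, mode="same")
recovered = van_cittert(measured, kernel)
```

Each pass re-blurs the current estimate and adds back the residual, so spectral detail suppressed by the LED bandwidth is progressively restored; convergence requires the kernel's frequency response to stay positive, which a Gaussian satisfies.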
Determination of accelerated factors in gradient descent iterations based on Taylor's series
Directory of Open Access Journals (Sweden)
Petrović Milena
2017-01-01
In this paper the efficiency of accelerated gradient descent methods with respect to the way the acceleration factor is determined is considered. Based on previous research, we assert that using the Taylor series of the posed gradient descent iteration to calculate the acceleration parameter gives better final results than some other choices. We give a comparative analysis of the efficiency of several methods with different approaches to obtaining the acceleration parameter. Based on the results of numerical experiments, we draw a conclusion about the most effective way of defining the acceleration parameter in accelerated gradient descent schemes.
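A generic sketch of the idea (not the authors' exact scheme): estimate the acceleration parameter gamma as a curvature term from the second-order Taylor expansion f(x - a·g) ≈ f(x) - a·||g||² + (gamma/2)·a²·||g||², solved for gamma after each trial step, and use 1/gamma as the next step size.

```python
import numpy as np

def accelerated_gd(f, grad, x0, n_iter=50):
    """Gradient descent with a Taylor-series curvature estimate as the
    acceleration parameter; step length is 1/gamma."""
    x = np.asarray(x0, float)
    gamma = 1.0
    for _ in range(n_iter):
        g = grad(x)
        gg = g @ g
        if gg < 1e-16:
            break
        alpha = 1.0 / gamma
        x_new = x - alpha * g
        # solve the quadratic Taylor model for the curvature gamma
        gamma = max(2.0 * (f(x_new) - f(x) + alpha * gg) / (alpha ** 2 * gg),
                    1e-8)
        x = x_new
    return x

D = np.array([1.0, 10.0])                 # ill-conditioned quadratic test
f = lambda x: 0.5 * np.sum(D * x * x)
grad = lambda x: D * x
x_min = accelerated_gd(f, grad, [1.0, 1.0])
```

On a quadratic, the recovered gamma equals the Rayleigh quotient of the previous gradient direction, so the scheme behaves like a delayed-curvature (Barzilai-Borwein-type) step and converges quickly despite the conditioning.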
Liu, Zhengjun; Chen, Hang; Blondel, Walter; Shen, Zhenmin; Liu, Shutian
2018-06-01
A novel image encryption method is proposed using the expanded fractional Fourier transform, which is implemented with a pair of lenses whose centers are separated in the cross-section transverse to the optical axis of the system. The encryption system is modeled with Fresnel diffraction and phase modulation to calculate the information transmission. An iterative process with the transform unit is utilized to hide the secret image. The structural parameters of the battery of lenses can serve as additional keys. The performance of the encryption method is analyzed theoretically and numerically. The results show that the security of the algorithm is markedly enhanced by the added keys.
STABILITY SYSTEMS VIA HURWITZ POLYNOMIALS
Directory of Open Access Journals (Sweden)
BALTAZAR AGUIRRE HERNÁNDEZ
2017-01-01
To analyze the stability of a linear system of differential equations ẋ = Ax, we can study the location of the roots of the characteristic polynomial p_A(t) associated with the matrix A. We present various criteria - algebraic and geometric - that help us determine where the roots are located without calculating them directly.
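In computational practice the criterion reads: ẋ = Ax is asymptotically stable iff every root of p_A(t) = det(tI - A) has negative real part, i.e. p_A is a Hurwitz polynomial. The sketch below simply locates the roots numerically, which is the brute-force check that the algebraic and geometric criteria of the paper avoid.

```python
import numpy as np

def is_hurwitz(A):
    """True iff all roots of the characteristic polynomial of A lie in the
    open left half plane (numerical check, not an algebraic criterion)."""
    p = np.poly(A)                       # coefficients of p_A(t) = det(tI - A)
    return bool(np.all(np.roots(p).real < 0.0))

A_stable = np.array([[0.0, 1.0], [-2.0, -3.0]])    # p_A(t) = t^2 + 3t + 2
A_unstable = np.array([[0.0, 1.0], [2.0, 1.0]])    # p_A(t) = t^2 - t - 2
```

The stable example has roots -1 and -2; the unstable one has a root at +2, so the corresponding system diverges.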
On Modular Counting with Polynomials
DEFF Research Database (Denmark)
Hansen, Kristoffer Arnsfelt
2006-01-01
For any integers m and l, where m has r sufficiently large (depending on l) factors that are powers of r distinct primes, we give a construction of a (symmetric) polynomial over Z_m of degree O(\sqrt n) that is a generalized representation (commonly also called weak representation) of the MODl f...
Global Polynomial Kernel Hazard Estimation
DEFF Research Database (Denmark)
Hiabu, Munir; Miranda, Maria Dolores Martínez; Nielsen, Jens Perch
2015-01-01
This paper introduces a new bias reducing method for kernel hazard estimation. The method is called global polynomial adjustment (GPA). It is a global correction which is applicable to any kernel hazard estimator. The estimator works well from a theoretical point of view as it asymptotically redu...
Congruences concerning Legendre polynomials III
Sun, Zhi-Hong
2010-01-01
Let $p>3$ be a prime, and let $R_p$ be the set of rational numbers whose denominator is coprime to $p$. Let $\\{P_n(x)\\}$ be the Legendre polynomials. In this paper we mainly show that for $m,n,t\\in R_p$ with $m\
Two polynomial division inequalities in
Directory of Open Access Journals (Sweden)
Goetgheluck P
1998-01-01
This paper is a first attempt to give numerical values for constants and , in classical estimates and where is an algebraic polynomial of degree at most and denotes the -metric on . The basic tools are Markov and Bernstein inequalities.
Dirichlet polynomials, majorization, and trumping
International Nuclear Information System (INIS)
Pereira, Rajesh; Plosker, Sarah
2013-01-01
Majorization and trumping are two partial orders which have proved useful in quantum information theory. We show some relations between these two partial orders and generalized Dirichlet polynomials, Mellin transforms, and completely monotone functions. These relations are used to prove a succinct generalization of Turgut’s characterization of trumping. (paper)
3D automatic anatomy segmentation based on iterative graph-cut-ASM.
Chen, Xinjian; Bagci, Ulas
2011-08-01
This paper studies the feasibility of developing an automatic anatomy segmentation (AAS) system in clinical radiology and demonstrates its operation on clinical 3D images. The AAS system the authors are developing consists of two main parts: object recognition and object delineation. As for recognition, a hierarchical 3D scale-based multiobject method is used for the multiobject recognition task, which incorporates intensity-weighted ball-scale (b-scale) information into the active shape model (ASM). For object delineation, an iterative graph-cut-ASM (IGCASM) algorithm is proposed, which effectively combines the rich statistical shape information embodied in ASM with the globally optimal delineation capability of the GC method. The presented IGCASM algorithm is a 3D generalization of the 2D GC-ASM method that they proposed previously in Chen et al. [Proc. SPIE, 7259, 72590C1-72590C-8 (2009)]. The proposed methods are tested on two datasets comprising images obtained from 20 patients (10 male and 10 female) in clinical abdominal CT scans and 11 foot magnetic resonance imaging (MRI) scans. The test covers segmentation of four organs (liver, left and right kidneys, and spleen) and five foot bones (calcaneus, tibia, cuboid, talus, and navicular). The recognition and delineation accuracies were evaluated separately. The recognition accuracy was evaluated in terms of translation, rotation, and scale (size) error. The delineation accuracy was evaluated in terms of true and false positive volume fractions (TPVF, FPVF). The efficiency of the delineation method was also evaluated on an Intel Pentium IV PC with a 3.4 GHz CPU. The recognition accuracies in terms of translation, rotation, and scale error are about 8 mm, 10 degrees, and 0.03 over all organs, and about 3.5709 mm, 0.35 degrees, and 0.025 over all foot bones, respectively. The accuracy of delineation over all organs for all subjects as expressed in TPVF and FPVF is 93.01% and 0.22%, and all foot bones for
3D automatic anatomy segmentation based on iterative graph-cut-ASM
International Nuclear Information System (INIS)
Chen, Xinjian; Bagci, Ulas
2011-01-01
Purpose: This paper studies the feasibility of developing an automatic anatomy segmentation (AAS) system in clinical radiology and demonstrates its operation on clinical 3D images. Methods: The AAS system the authors are developing consists of two main parts: object recognition and object delineation. As for recognition, a hierarchical 3D scale-based multiobject method is used for the multiobject recognition task, which incorporates intensity-weighted ball-scale (b-scale) information into the active shape model (ASM). For object delineation, an iterative graph-cut-ASM (IGCASM) algorithm is proposed, which effectively combines the rich statistical shape information embodied in ASM with the globally optimal delineation capability of the GC method. The presented IGCASM algorithm is a 3D generalization of the 2D GC-ASM method that they proposed previously in Chen et al. [Proc. SPIE, 7259, 72590C1-72590C-8 (2009)]. The proposed methods are tested on two datasets comprising images obtained from 20 patients (10 male and 10 female) in clinical abdominal CT scans and 11 foot magnetic resonance imaging (MRI) scans. The test covers segmentation of four organs (liver, left and right kidneys, and spleen) and five foot bones (calcaneus, tibia, cuboid, talus, and navicular). The recognition and delineation accuracies were evaluated separately. The recognition accuracy was evaluated in terms of translation, rotation, and scale (size) error. The delineation accuracy was evaluated in terms of true and false positive volume fractions (TPVF, FPVF). The efficiency of the delineation method was also evaluated on an Intel Pentium IV PC with a 3.4 GHz CPU. Results: The recognition accuracies in terms of translation, rotation, and scale error are about 8 mm, 10 deg., and 0.03 over all organs, and about 3.5709 mm, 0.35 deg., and 0.025 over all foot bones, respectively. The accuracy of delineation over all organs for all subjects as expressed in TPVF and FPVF is 93.01% and 0.22%, and
Quadratic Polynomial Regression using Serial Observation Processing:Implementation within DART
Hodyss, D.; Anderson, J. L.; Collins, N.; Campbell, W. F.; Reinecke, P. A.
2017-12-01
Many Ensemble-Based Kalman filtering (EBKF) algorithms process the observations serially. Serial observation processing views the data assimilation process as an iterative sequence of scalar update equations. What is useful about this data assimilation algorithm is that it has very low memory requirements and does not need complex methods to perform the typical high-dimensional inverse calculation of many other algorithms. Recently, the push has been towards the prediction, and therefore the assimilation of observations, for regions and phenomena for which high resolution is required and/or highly nonlinear physical processes are operating. For these situations, a basic hypothesis is that the use of the EBKF is sub-optimal and performance gains could be achieved by accounting for aspects of the non-Gaussianity. To this end, we develop here a new component of the Data Assimilation Research Testbed (DART) to allow a wide variety of users to test this hypothesis. This new version of DART allows one to run several variants of the EBKF as well as several variants of the quadratic polynomial filter using the same forecast model and observations. Differences between the results of the two systems will then highlight the degree of non-Gaussianity in the system being examined. We will illustrate in this work the differences between the performance of linear versus quadratic polynomial regression in a hierarchy of models from Lorenz-63 to a simple general circulation model.
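The serial scalar update the abstract describes can be sketched as a perturbed-observation serial ensemble Kalman filter loop. This is a minimal illustration under my own simplifying assumptions (identity-component observation operator, uniform observation error variance); DART's actual filters (e.g. the EAKF) differ in detail, and all names here are illustrative, not DART's API.

```python
import random

def serial_enkf_update(ens, obs, obs_var, obs_idx, rng):
    """Assimilate scalar observations one at a time (serial processing).
    ens: list of ensemble members (each a list of state values), updated in place.
    obs[k] observes state component obs_idx[k] with error variance obs_var."""
    n, dim = len(ens), len(ens[0])
    for y, j in zip(obs, obs_idx):
        hx = [m[j] for m in ens]                          # predicted observations
        hbar = sum(hx) / n
        var_h = sum((v - hbar) ** 2 for v in hx) / (n - 1)
        denom = var_h + obs_var
        # perturbed-observation innovations, one per member
        innov = [y + rng.gauss(0.0, obs_var ** 0.5) - v for v in hx]
        means = [sum(m[k] for m in ens) / n for k in range(dim)]
        for k in range(dim):                              # regress each state variable
            cov = sum((ens[i][k] - means[k]) * (hx[i] - hbar)
                      for i in range(n)) / (n - 1)
            gain = cov / denom                            # scalar Kalman gain
            for i in range(n):
                ens[i][k] += gain * innov[i]
    return ens
```

Each scalar observation is absorbed via a one-dimensional gain computed from ensemble covariances, so no high-dimensional matrix inverse is ever formed; the quadratic polynomial filter discussed in the abstract would replace the linear regression step with a quadratic one.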
Zhou, Yatong; Han, Chunying; Chi, Yue
2018-06-01
In a simultaneous-source survey, no restriction is imposed on the shot scheduling of nearby sources, so a large gain in acquisition efficiency can be obtained, but at the cost of recorded seismic data contaminated by strong blending interference. In this paper, we propose a multi-dip seislet frame based sparse inversion algorithm to iteratively separate simultaneous sources. We overcome two inherent drawbacks of the traditional seislet transform. For the multi-dip problem, we propose to apply a multi-dip seislet frame thresholding strategy instead of the traditional seislet transform for deblending simultaneous-source data that contain multiple dips, e.g., multiple reflections. The multi-dip seislet frame strategy solves the conflicting-dip problem that degrades the performance of the traditional seislet transform. For the noise issue, we propose to use a robust dip estimation algorithm that is based on velocity-slope transformation. Instead of calculating the local slope directly using the plane-wave destruction (PWD) based method, we first apply NMO-based velocity analysis and obtain NMO velocities for multi-dip components that correspond to multiples of different orders; a fairly accurate slope estimation can then be obtained using the velocity-slope conversion equation. An iterative deblending framework is given and validated through a comprehensive analysis of both numerical synthetic and field data examples.
MPPT-Based Control Algorithm for PV System Using iteration-PSO under Irregular shadow Conditions
Directory of Open Access Journals (Sweden)
M. Abdulkadir
2017-02-01
Full Text Available The conventional maximum power point tracking (MPPT) techniques can hardly track the global maximum power point (GMPP) because the power-voltage characteristic of photovoltaic (PV) arrays exhibits multiple local peaks under irregular shadow, so they easily fall into a local maximum power point. To tackle this deficiency, an efficient Iteration Particle Swarm Optimization (IPSO) has been developed that improves the solution quality and convergence speed of the traditional PSO, so that it can effectively track the GMPP under irregular shadow conditions. The proposed technique has such advantages as a simple structure, fast response, strong robustness, and convenient implementation. It is applied to MPPT control of a PV system under irregular shadow to solve the multi-peak optimization problem in partial shading. Recently, the dynamic MPPT performance under varying irradiance conditions has also received much attention from the PV community, and since the release of the European standard EN 50530, which defines recommended varying irradiance profiles, researchers have been expected to improve dynamic MPPT performance accordingly. This paper therefore also evaluates the dynamic MPPT performance using the EN 50530 standard. The simulation results show that the iterative-PSO method can quickly track the global MPP and achieves a higher tracking speed and dynamic MPPT efficiency under EN 50530 than the conventional PSO.
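The multi-peak tracking problem can be illustrated with a plain global-best PSO searching a toy two-peak P-V curve. This is a simplified stand-in, not the paper's IPSO variant: the curve, peak positions, and all parameters are invented for illustration.

```python
import math, random

def pv_power(v):
    """Toy two-peak P-V curve under partial shading (illustrative only):
    a local peak near 15 V and the global peak near 30 V."""
    return (60.0 * math.exp(-(v - 15.0) ** 2 / 20.0)
            + 100.0 * math.exp(-(v - 30.0) ** 2 / 30.0))

def pso_mppt(f, vmin, vmax, n=20, iters=60, w=0.7, c1=1.5, c2=1.5, seed=1):
    """Plain global-best PSO over the voltage axis."""
    rng = random.Random(seed)
    pos = [rng.uniform(vmin, vmax) for _ in range(n)]
    vel = [0.0] * n
    pbest, pbest_f = pos[:], [f(p) for p in pos]
    g = max(range(n), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g], pbest_f[g]
    for _ in range(iters):
        for i in range(n):
            vel[i] = (w * vel[i]
                      + c1 * rng.random() * (pbest[i] - pos[i])
                      + c2 * rng.random() * (gbest - pos[i]))
            pos[i] = min(max(pos[i] + vel[i], vmin), vmax)   # clamp to the V range
            fi = f(pos[i])
            if fi > pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i], fi
                if fi > gbest_f:
                    gbest, gbest_f = pos[i], fi
    return gbest, gbest_f
```

A hill-climbing MPPT started near 15 V would stall on the local peak; the swarm's global-best attraction is what lets PSO-style trackers escape it.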
A guidance law for UAV autonomous aerial refueling based on the iterative computation method
Directory of Open Access Journals (Sweden)
Luo Delin
2014-08-01
Full Text Available The rendezvous and formation problem is a significant part of the unmanned aerial vehicle (UAV) autonomous aerial refueling (AAR) technique. It can be divided into two major phases: the long-range guidance phase and the formation phase. In this paper, an iterative computation guidance law (ICGL) is proposed to compute a series of state variables to obtain the solution of a control variable for a UAV conducting rendezvous with a tanker in AAR. The proposed method makes the control variable converge to zero as the tanker and the UAV receiver eventually come into formation flight. For the long-range guidance phase, the ICGL divides it into two sub-phases: the correction sub-phase and the guidance sub-phase. The two sub-phases share the same iterative process. As for the formation phase, a velocity coordinate system is created by which control accelerations are designed to make the speed of the UAV consistent with that of the tanker. The simulation results demonstrate that the proposed ICGL is effective and robust against wind disturbance.
Neural spike sorting using iterative ICA and a deflation-based approach.
Tiganj, Z; Mboup, M
2012-12-01
We propose a spike sorting method for multi-channel recordings. When applied to neural recordings, the performance of the independent component analysis (ICA) algorithm is known to be limited, since the number of recording sites is much lower than the number of neurons. The proposed method uses an iterative application of ICA and a deflation technique in two nested loops. In each iteration of the external loop, the spiking activity of one neuron is singled out and then deflated from the recordings. The internal loop implements a sequence of ICA and sorting for removing the noise and all the spikes that are not fired by the targeted neuron. A final step is then appended to the two nested loops in order to separate simultaneously fired spikes. We solve this problem by taking all possible pairs of the sorted neurons and applying ICA only on the segments of the signal during which at least one of the neurons in a given pair was active. We validate the performance of the proposed method not only on simulated recordings, but also on a specific type of real recordings: simultaneous extracellular-intracellular. We quantify the sorting results on the extracellular recordings for the spikes that come from the neurons recorded intracellularly. The results suggest that the proposed solution significantly improves the performance of ICA in spike sorting.
Azarnavid, Babak; Parand, Kourosh; Abbasbandy, Saeid
2018-06-01
This article discusses an iterative reproducing kernel method with respect to its effectiveness and capability of solving a fourth-order boundary value problem with nonlinear boundary conditions modeling beams on elastic foundations. Since there is no method of obtaining a reproducing kernel that satisfies nonlinear boundary conditions, the standard reproducing kernel methods cannot be used directly to solve boundary value problems with nonlinear boundary conditions, as there is no knowledge about the existence and uniqueness of the solution. The aim of this paper is, therefore, to construct an iterative method by combining the reproducing kernel Hilbert space method with a shooting-like technique to solve the mentioned problems. Error estimation for reproducing kernel Hilbert space methods for nonlinear boundary value problems has yet to be discussed in the literature. In this paper, we present error estimation for the reproducing kernel method for nonlinear boundary value problems, probably for the first time. Some numerical results are given to demonstrate the applicability of the method.
The modified Gauss diagonalization of polynomial matrices
International Nuclear Information System (INIS)
Saeed, K.
1982-10-01
The Gauss algorithm for diagonalization of constant matrices is modified for application to polynomial matrices. Due to this modification the diagonal elements become pure polynomials rather than rational functions. (author)
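The idea that elimination on a polynomial matrix can be arranged so that entries stay polynomial, rather than becoming rational functions, can be sketched with a cross-multiplication (fraction-free, Bareiss-style) elimination. This is one standard way to achieve that property, offered as an illustration; it is not necessarily the exact modification of the cited paper.

```python
def p_mul(a, b):
    """Multiply polynomials given as coefficient lists (lowest degree first)."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def p_sub(a, b):
    """Subtract coefficient lists, padding to a common length."""
    n = max(len(a), len(b))
    a = a + [0] * (n - len(a))
    b = b + [0] * (n - len(b))
    return [x - y for x, y in zip(a, b)]

def fraction_free_triangularize(M):
    """Eliminate below the diagonal by cross-multiplication,
    row_j <- pivot * row_j - target * row_i, so every entry remains a
    polynomial and no rational functions ever appear."""
    n = len(M)
    M = [row[:] for row in M]
    for i in range(n):
        for j in range(i + 1, n):
            piv, tgt = M[i][i], M[j][i]
            M[j] = [p_sub(p_mul(piv, M[j][k]), p_mul(tgt, M[i][k]))
                    for k in range(n)]
    return M
```

Ordinary Gaussian elimination would instead divide row j by the pivot polynomial, producing rational-function entries; multiplying through by the pivot avoids the division entirely.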
Sheffer and Non-Sheffer Polynomial Families
Directory of Open Access Journals (Sweden)
G. Dattoli
2012-01-01
Full Text Available By using the integral transform method, we introduce some non-Sheffer polynomial sets. Furthermore, we show how to compute the connection coefficients for particular expressions of Appell polynomials.
The finite Fourier transform of classical polynomials
Dixit, Atul; Jiu, Lin; Moll, Victor H.; Vignat, Christophe
2014-01-01
The finite Fourier transform of a family of orthogonal polynomials $A_{n}(x)$ is the usual transform of the polynomials extended by $0$ outside their natural domain. Explicit expressions are given for the Legendre, Jacobi, Gegenbauer and Chebyshev families.
A Summation Formula for Macdonald Polynomials
de Gier, Jan; Wheeler, Michael
2016-03-01
We derive an explicit sum formula for symmetric Macdonald polynomials. Our expression contains multiple sums over the symmetric group and uses the action of Hecke generators on the ring of polynomials. In the special cases {t = 1} and {q = 0}, we recover known expressions for the monomial symmetric and Hall-Littlewood polynomials, respectively. Other specializations of our formula give new expressions for the Jack and q-Whittaker polynomials.
A New Generalisation of Macdonald Polynomials
Garbali, Alexandr; de Gier, Jan; Wheeler, Michael
2017-06-01
We introduce a new family of symmetric multivariate polynomials, whose coefficients are meromorphic functions of two parameters ( q, t) and polynomial in a further two parameters ( u, v). We evaluate these polynomials explicitly as a matrix product. At u = v = 0 they reduce to Macdonald polynomials, while at q = 0, u = v = s they recover a family of inhomogeneous symmetric functions originally introduced by Borodin.
Associated polynomials and birth-death processes
van Doorn, Erik A.
2001-01-01
We consider sequences of orthogonal polynomials with positive zeros, and pursue the question of how (partial) knowledge of the orthogonalizing measure for the {\it associated polynomials} can lead to information about the orthogonalizing measure for the original polynomials, with a view to
Stability of Mixed-Strategy-Based Iterative Logit Quantal Response Dynamics in Game Theory
Zhuang, Qian; Di, Zengru; Wu, Jinshan
2014-01-01
Using the Logit quantal response form as the response function in each step, the original definition of static quantal response equilibrium (QRE) is extended into an iterative evolution process. QREs remain the fixed points of the dynamic process. However, depending on whether such fixed points are the long-term solutions of the dynamic process, they can be classified into stable (SQREs) and unstable (USQREs) equilibria. This extension resembles the extension from static Nash equilibria (NEs) to evolutionarily stable solutions in the framework of evolutionary game theory. The relation between SQREs and other solution concepts of games, including NEs and QREs, is discussed. Using experimental data from other published papers, we perform a preliminary comparison between SQREs, NEs, QREs and the observed behavioral outcomes of those experiments. For certain games, we determine that SQREs have better predictive power than QREs and NEs. PMID:25157502
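The iterative logit dynamics described above can be sketched for a 2x2 game: each step, every player best-responds through a logit choice rule to the opponent's current mixed strategy, and a fixed point of the iteration is a QRE. This is a minimal synchronous-update sketch with invented function names; the payoffs below are matching pennies, chosen for illustration.

```python
import math

def logit_response(payoff, q, lam):
    """Logit probability of action 0 for a 2-action player whose opponent
    mixes (q, 1-q); payoff[own][opp] is the player's own payoff."""
    u0 = payoff[0][0] * q + payoff[0][1] * (1 - q)
    u1 = payoff[1][0] * q + payoff[1][1] * (1 - q)
    e0, e1 = math.exp(lam * u0), math.exp(lam * u1)
    return e0 / (e0 + e1)

def iterate_qre(A, B, lam, p=0.9, q=0.1, steps=200):
    """Synchronous iterated logit dynamics for a 2x2 game.
    A: row player's payoffs, B: column player's payoffs."""
    for _ in range(steps):
        p, q = logit_response(A, q, lam), logit_response(B, p, lam)
    return p, q
```

For matching pennies the QRE is (1/2, 1/2) at every rationality level lam; linearizing the map shows the iteration spirals into it when lam < 1 (an SQRE in the abstract's terminology) and away from it when lam > 1, which is exactly the stable/unstable distinction the paper draws.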
A dimension decomposition approach based on iterative observer design for an elliptic Cauchy problem
Majeed, Muhammad Usman
2015-07-13
A state-observer-inspired iterative algorithm is presented to solve the boundary estimation problem for the Laplace equation, using one of the space variables as a time-like variable. A three-dimensional domain with two congruent parallel surfaces is considered. The problem is set up in Cartesian coordinates, and the Laplace equation is re-written as a first-order state equation with state operator matrix A; measurements are provided on the Cauchy data surface with measurement operator C. Conditions for the existence of a strongly continuous semigroup generated by A are studied. Observability conditions for the pair (C, A) are provided in an infinite-dimensional setting. In this setting, the special observability result obtained allows the three-dimensional problem to be decomposed into a set of independent two-dimensional sub-problems over rectangular cross-sections. Numerical simulation results are provided.
PC-based process distribution to solve iterative Monte Carlo simulations in physical dosimetry
International Nuclear Information System (INIS)
Leal, A.; Sanchez-Doblado, F.; Perucha, M.; Rincon, M.; Carrasco, E.; Bernal, C.
2001-01-01
A distribution model to simulate physical dosimetry measurements with Monte Carlo (MC) techniques has been developed. This approach is suited to simulations in which there are continuous changes of the measurement conditions (and hence of the input parameters), such as a TPR curve or the estimation of the resolution limit of an optimal densitometer in the case of small field profiles. As a comparison, a high resolution scan for narrow beams with no iterative process is presented. The model has been installed on a network of PCs without any resident software. The only requirements for these PCs have been a small, temporary Linux partition on their hard disks and a network connection to our server PC. (orig.)
Cha, Jongsub; Park, Kyungho; Kang, Joonhyuk; Park, Hyuncheol
In this letter, we propose two computationally efficient precoding algorithms that achieve near-ML performance for the multiuser MIMO downlink. The proposed algorithms perform tree expansion after lattice reduction. A first full expansion is carried out by selecting the first-level node with the minimum metric, yielding a reference metric. To find an optimal sequence, the algorithms then iteratively visit each node and terminate the expansion by comparing node metrics with the calculated reference metric. By doing this, they significantly reduce the number of undesirable node visits. Monte-Carlo simulations show that both proposed algorithms yield near-ML performance with a considerable reduction in complexity compared with conventional schemes such as sphere encoding.
A noise power spectrum study of a new model-based iterative reconstruction system: Veo 3.0.
Li, Guang; Liu, Xinming; Dodge, Cristina T; Jensen, Corey T; Rong, X John
2016-09-08
The purpose of this study was to evaluate the performance of the third generation of the model-based iterative reconstruction (MBIR) system, Veo 3.0, based on noise power spectrum (NPS) analysis with various clinical presets over a wide range of clinically applicable dose levels. A CatPhan 600 surrounded by an oval, fat-equivalent ring to mimic patient size/shape was scanned 10 times at each of six dose levels on a GE HD 750 scanner. NPS analysis was performed on images reconstructed with various Veo 3.0 preset combinations for comparison with images reconstructed using Veo 2.0, filtered back projection (FBP) and adaptive statistical iterative reconstruction (ASiR). The new Target Thickness setting resulted in higher noise in thicker axial images. The new Texture Enhancement function achieved more isotropic noise behavior with fewer image artifacts. Veo 3.0 provides additional reconstruction options designed to allow the user a choice of balance between spatial resolution and image noise, relative to Veo 2.0. Veo 3.0 provides more user-selectable options and in general improved isotropic noise behavior in comparison to Veo 2.0. The overall noise reduction performance of both versions of MBIR was improved in comparison to FBP and ASiR, especially at low dose levels. © 2016 The Authors.
Conditional Density Approximations with Mixtures of Polynomials
DEFF Research Database (Denmark)
Varando, Gherardo; López-Cruz, Pedro L.; Nielsen, Thomas Dyhre
2015-01-01
Mixtures of polynomials (MoPs) are a non-parametric density estimation technique especially designed for hybrid Bayesian networks with continuous and discrete variables. Algorithms to learn one- and multi-dimensional (marginal) MoPs from data have recently been proposed. In this paper we introduce...... two methods for learning MoP approximations of conditional densities from data. Both approaches are based on learning MoP approximations of the joint density and the marginal density of the conditioning variables, but they differ as to how the MoP approximation of the quotient of the two densities...
BSDEs with polynomial growth generators
Directory of Open Access Journals (Sweden)
Philippe Briand
2000-01-01
Full Text Available In this paper, we give existence and uniqueness results for backward stochastic differential equations when the generator has polynomial growth in the state variable. We deal with the case of a fixed terminal time, as well as the case of a random terminal time. The need for this type of extension of the classical existence and uniqueness results comes from the desire to provide a probabilistic representation of the solutions of semilinear partial differential equations in the spirit of a nonlinear Feynman-Kac formula. Indeed, in many applications of interest, the nonlinearity is polynomial, e.g., the Allen-Cahn equation or the standard nonlinear heat and Schrödinger equations.
Quantum entanglement via nilpotent polynomials
International Nuclear Information System (INIS)
Mandilara, Aikaterini; Akulin, Vladimir M.; Smilga, Andrei V.; Viola, Lorenza
2006-01-01
We propose a general method for introducing extensive characteristics of quantum entanglement. The method relies on polynomials of nilpotent raising operators that create entangled states acting on a reference vacuum state. By introducing the notion of tanglemeter, the logarithm of the state vector represented in a special canonical form and expressed via polynomials of nilpotent variables, we show how this description provides a simple criterion for entanglement as well as a universal method for constructing the invariants characterizing entanglement. We compare the existing measures and classes of entanglement with those emerging from our approach. We derive the equation of motion for the tanglemeter and, in representative examples of up to four-qubit systems, show how the known classes appear in a natural way within our framework. We extend our approach to qutrits and higher-dimensional systems, and make contact with the recently introduced idea of generalized entanglement. Possible future developments and applications of the method are discussed
Special polynomials associated with some hierarchies
International Nuclear Information System (INIS)
Kudryashov, Nikolai A.
2008-01-01
Special polynomials associated with rational solutions of a hierarchy of equations of Painleve type are introduced. The hierarchy arises by similarity reduction from the Fordy-Gibbons hierarchy of partial differential equations. Some relations for these special polynomials are given. Differential-difference hierarchies for finding special polynomials are presented. These formulae allow us to obtain special polynomials associated with the hierarchy studied. It is shown that rational solutions of members of the Schwarz-Sawada-Kotera, the Schwarz-Kaup-Kupershmidt, the Fordy-Gibbons, the Sawada-Kotera and the Kaup-Kupershmidt hierarchies can be expressed through special polynomials of the hierarchy studied
Space complexity in polynomial calculus
Czech Academy of Sciences Publication Activity Database
Filmus, Y.; Lauria, M.; Nordström, J.; Ron-Zewi, N.; Thapen, Neil
2015-01-01
Roč. 44, č. 4 (2015), s. 1119-1153 ISSN 0097-5397 R&D Projects: GA AV ČR IAA100190902; GA ČR GBP202/12/G061 Institutional support: RVO:67985840 Keywords : proof complexity * polynomial calculus * lower bounds Subject RIV: BA - General Mathematics Impact factor: 0.841, year: 2015 http://epubs.siam.org/doi/10.1137/120895950
Codimensions of generalized polynomial identities
International Nuclear Information System (INIS)
Gordienko, Aleksei S
2010-01-01
It is proved that for every finite-dimensional associative algebra A over a field of characteristic zero there are numbers C ∈ Q+ and t ∈ Z+ such that gc_n(A) ∼ C n^t d^n as n → ∞, where d = PIexp(A) ∈ Z+. Thus, Amitsur's and Regev's conjectures hold for the codimensions gc_n(A) of the generalized polynomial identities. Bibliography: 6 titles.
Variational iteration method for one dimensional nonlinear thermoelasticity
International Nuclear Information System (INIS)
Sweilam, N.H.; Khader, M.M.
2007-01-01
This paper applies the variational iteration method to solve the Cauchy problem arising in one-dimensional nonlinear thermoelasticity. The advantage of this method is that it avoids the difficulty of calculating Adomian's polynomials, as required in the Adomian decomposition method. The numerical results of this method are compared with the exact solution of an artificial model to show its efficiency. The approximate solutions show that the variational iteration method is a powerful mathematical tool for solving nonlinear problems.
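The variational iteration method can be illustrated on a much simpler model problem than the thermoelastic system of the abstract. For u' + u^2 = 0 with u(0) = 1 (my own toy example, exact solution 1/(1+t)), the correction functional with Lagrange multiplier -1 is u_{n+1}(t) = u_n(t) - int_0^t (u_n'(s) + u_n(s)^2) ds, which the sketch below evaluates on a uniform grid.

```python
def vim_step(u, h):
    """One variational-iteration correction for u' + u^2 = 0, u(0) = 1,
    with Lagrange multiplier lambda = -1:
        u_{n+1}(t) = u_n(t) - int_0^t (u_n'(s) + u_n(s)^2) ds,
    discretized on a uniform grid with spacing h."""
    n = len(u)
    du = []
    for i in range(n):                      # centered differences, one-sided at ends
        lo, hi = max(i - 1, 0), min(i + 1, n - 1)
        du.append((u[hi] - u[lo]) / (h * (hi - lo)))
    r = [du[i] + u[i] ** 2 for i in range(n)]   # residual of the ODE
    out, acc = [u[0]], 0.0
    for i in range(1, n):                   # trapezoid rule for the integral
        acc += 0.5 * h * (r[i - 1] + r[i])
        out.append(u[i] - acc)
    return out
```

Starting from the constant initial guess u_0 = u(0), each sweep subtracts the integrated residual, so the initial condition is preserved exactly and successive iterates reproduce the series 1 - t + t^2 - ... of the exact solution; no Adomian polynomials are ever formed.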
Worst-case Analysis of Strategy Iteration and the Simplex Method
DEFF Research Database (Denmark)
Hansen, Thomas Dueholm
In this dissertation we study strategy iteration (also known as policy iteration) algorithms for solving Markov decision processes (MDPs) and two-player turn-based stochastic games (2TBSGs). MDPs provide a mathematical model for sequential decision making under uncertainty. They are widely used...... to model stochastic optimization problems in various areas ranging from operations research, machine learning, artificial intelligence, economics and game theory. The class of two-player turn-based stochastic games is a natural generalization of Markov decision processes that is obtained by introducing...... in the size of the problem (the bounds have subexponential form). Utilizing a tight connection between MDPs and linear programming, it is shown that the same bounds apply to the corresponding pivoting rules for the simplex method for solving linear programs. Prior to this result no super-polynomial lower...
Stable piecewise polynomial vector fields
Directory of Open Access Journals (Sweden)
Claudio Pessoa
2012-09-01
Full Text Available Let $N=\{y>0\}$ and $S=\{y<0\}$ be the semi-planes of $\mathbb{R}^2$ having as common boundary the line $D=\{y=0\}$. Let $X$ and $Y$ be polynomial vector fields defined in $N$ and $S$, respectively, leading to a discontinuous piecewise polynomial vector field $Z=(X,Y)$. This work pursues the stability and the transition analysis of solutions of $Z$ between $N$ and $S$, started by Filippov (1988) and Kozlova (1984) and reformulated by Sotomayor-Teixeira (1995) in terms of the regularization method. This method consists in analyzing a one-parameter family of continuous vector fields $Z_{\epsilon}$, defined by averaging $X$ and $Y$. This family approaches $Z$ when the parameter goes to zero. The results of Sotomayor-Teixeira and Sotomayor-Machado (2002), providing conditions on $(X,Y)$ for the regularized vector fields to be structurally stable on planar compact connected regions, are extended to discontinuous piecewise polynomial vector fields on $\mathbb{R}^2$. Pertinent genericity results for vector fields satisfying the above stability conditions are also extended to the present case. A procedure for the study of discontinuous piecewise vector fields at infinity through a compactification is proposed here.
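The averaging construction behind $Z_\epsilon$ can be sketched directly: blend $X$ and $Y$ through a monotone transition function that saturates at $\pm 1$ outside a strip of width $2\epsilon$ around the switching line. The particular polynomial ramp below is my own choice of transition function, used only for illustration.

```python
def phi(s):
    """Monotone transition: -1 for s <= -1, +1 for s >= 1, C^1 ramp between."""
    if s <= -1.0:
        return -1.0
    if s >= 1.0:
        return 1.0
    return s * (3.0 - s * s) / 2.0      # phi(+-1) = +-1, phi'(+-1) = 0

def regularize(X, Y, eps):
    """Sotomayor-Teixeira style averaging of two planar vector fields:
    Z_eps = (1 + phi(y/eps))/2 * X + (1 - phi(y/eps))/2 * Y."""
    def Z(x, y):
        t = phi(y / eps)
        fx, fy = X(x, y), Y(x, y)
        return tuple(0.5 * (1 + t) * a + 0.5 * (1 - t) * b
                     for a, b in zip(fx, fy))
    return Z
```

Outside the strip $|y| \le \epsilon$ the regularized field coincides with $X$ or $Y$, so the discontinuous field is recovered pointwise as $\epsilon \to 0$, while inside the strip the two fields are averaged continuously.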
Hamed Kharrati; Sohrab Khanmohammadi; Witold Pedrycz; Ghasem Alizadeh
2012-01-01
This study presents an improved model and controller for nonlinear plants using polynomial fuzzy model-based (FMB) systems. To minimize mismatch between the polynomial fuzzy model and nonlinear plant, the suitable parameters of membership functions are determined in a systematic way. Defining an appropriate fitness function and utilizing Taylor series expansion, a genetic algorithm (GA) is used to form the shape of membership functions in polynomial forms, which are afterwards used in fuzzy m...
Energy Technology Data Exchange (ETDEWEB)
Kim, Kyungsang; Ye, Jong Chul, E-mail: jong.ye@kaist.ac.kr [Bio Imaging and Signal Processing Laboratory, Department of Bio and Brain Engineering, KAIST 291, Daehak-ro, Yuseong-gu, Daejeon 34141 (Korea, Republic of); Lee, Taewon; Cho, Seungryong [Medical Imaging and Radiotherapeutics Laboratory, Department of Nuclear and Quantum Engineering, KAIST 291, Daehak-ro, Yuseong-gu, Daejeon 34141 (Korea, Republic of); Seong, Younghun; Lee, Jongha; Jang, Kwang Eun [Samsung Advanced Institute of Technology, Samsung Electronics, 130, Samsung-ro, Yeongtong-gu, Suwon-si, Gyeonggi-do, 443-803 (Korea, Republic of); Choi, Jaegu; Choi, Young Wook [Korea Electrotechnology Research Institute (KERI), 111, Hanggaul-ro, Sangnok-gu, Ansan-si, Gyeonggi-do, 426-170 (Korea, Republic of); Kim, Hak Hee; Shin, Hee Jung; Cha, Joo Hee [Department of Radiology and Research Institute of Radiology, Asan Medical Center, University of Ulsan College of Medicine, 88 Olympic-ro, 43-gil, Songpa-gu, Seoul, 138-736 (Korea, Republic of)
2015-09-15
Purpose: In digital breast tomosynthesis (DBT), scatter correction is highly desirable, as it improves image quality at low doses. Because the DBT detector panel is typically stationary during the source rotation, antiscatter grids are not generally compatible with DBT; thus, a software-based scatter correction is required. This work proposes a fully iterative scatter correction method that uses a novel fast Monte Carlo simulation (MCS) with a tissue-composition ratio estimation technique for DBT imaging. Methods: To apply MCS to scatter estimation, the material composition in each voxel should be known. To overcome the lack of prior accurate knowledge of tissue composition for DBT, a tissue-composition ratio is estimated based on the observation that the breast tissues are principally composed of adipose and glandular tissues. Using this approximation, the composition ratio can be estimated from the reconstructed attenuation coefficients, and the scatter distribution can then be estimated by MCS using the composition ratio. The scatter estimation and image reconstruction procedures can be performed iteratively until an acceptable accuracy is achieved. For practical use, (i) the authors have implemented a fast MCS using a graphics processing unit (GPU), (ii) the MCS is simplified to transport only x-rays in the energy range of 10–50 keV, modeling Rayleigh and Compton scattering and the photoelectric effect using the tissue-composition ratio of adipose and glandular tissues, and (iii) downsampling is used because the scatter distribution varies rather smoothly. Results: The authors have demonstrated that the proposed method can accurately estimate the scatter distribution, and that the contrast-to-noise ratio of the final reconstructed image is significantly improved. The authors validated the performance of the MCS by changing the tissue thickness, composition ratio, and x-ray energy. The authors confirmed that the tissue-composition ratio estimation was quite
International Nuclear Information System (INIS)
Oda, Seitaro; Weissman, Gaby; Weigold, W. Guy; Vembar, Mani
2015-01-01
The purpose of this study was to investigate the effects of knowledge-based iterative model reconstruction (IMR) on image quality in cardiac CT performed for the planning of redo cardiac surgery by comparing IMR images with images reconstructed with filtered back-projection (FBP) and hybrid iterative reconstruction (HIR). We studied 31 patients (23 men, 8 women; mean age 65.1 ± 16.5 years) referred for redo cardiac surgery who underwent cardiac CT. Paired image sets were created using three types of reconstruction: FBP, HIR, and IMR. Quantitative parameters including CT attenuation, image noise, and contrast-to-noise ratio (CNR) of each cardiovascular structure were calculated. The visual image quality - graininess, streak artefact, margin sharpness of each cardiovascular structure, and overall image quality - was scored on a five-point scale. The mean image noise of FBP, HIR, and IMR images was 58.3 ± 26.7, 36.0 ± 12.5, and 14.2 ± 5.5 HU, respectively; there were significant differences in all comparison combinations among the three methods. The CNR of IMR images was better than that of FBP and HIR images in all evaluated structures. The visual scores were significantly higher for IMR than for the other images in all evaluated parameters. IMR can provide significantly improved qualitative and quantitative image quality in cardiac CT for the planning of reoperative cardiac surgery. (orig.)
Directory of Open Access Journals (Sweden)
Bin Yan
2015-01-01
Full Text Available Sparse-view imaging is a promising scanning method which can reduce the radiation dose in X-ray computed tomography (CT). The reconstruction algorithm for a sparse-view imaging system is of significant importance. Spatial-domain iterative algorithms for CT image reconstruction have low operational efficiency and high computational requirements. A novel Fourier-based iterative reconstruction technique that utilizes the nonuniform fast Fourier transform is presented in this study, along with advanced total variation (TV) regularization, for sparse-view CT. Combined with the alternating direction method, the proposed approach shows excellent efficiency and a rapid convergence property. Numerical simulations and real data experiments are performed on a parallel-beam CT. Experimental results validate that the proposed method has higher computational efficiency and better reconstruction quality than conventional algorithms, such as the simultaneous algebraic reconstruction technique using the TV method and the alternating direction total variation minimization approach, within the same time duration. The proposed method appears to have extensive applications in X-ray CT imaging.
Inelastic scattering with Chebyshev polynomials and preconditioned conjugate gradient minimization.
Temel, Burcin; Mills, Greg; Metiu, Horia
2008-03-27
We describe and test an implementation, using a basis set of Chebyshev polynomials, of a variational method for solving scattering problems in quantum mechanics. This minimum error method (MEM) determines the wave function Psi by minimizing the least-squares error in the function (H Psi - E Psi), where E is the desired scattering energy. We compare the MEM to an alternative, the Kohn variational principle (KVP), by solving the Secrest-Johnson model of two-dimensional inelastic scattering, which has been studied previously using the KVP and for which other numerical solutions are available. We use a conjugate gradient (CG) method to minimize the error, and by preconditioning the CG search, we are able to greatly reduce the number of iterations necessary; the method is thus faster and more stable than a matrix inversion, as is required in the KVP. Also, we avoid errors due to scattering off of the boundaries, which presents substantial problems for other methods, by matching the wave function in the interaction region to the correct asymptotic states at the specified energy; the use of Chebyshev polynomials allows this boundary condition to be implemented accurately. The use of Chebyshev polynomials allows for a rapid and accurate evaluation of the kinetic energy. This basis set is as efficient as plane waves but does not impose an artificial periodicity on the system. There are problems in surface science and molecular electronics which cannot be solved if periodicity is imposed, and the Chebyshev basis set is a good alternative in such situations.
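The preconditioned-CG least-squares idea can be sketched on a small model problem. The matrices below are invented stand-ins, not the Secrest-Johnson Hamiltonian: we minimize ||(H - E I)ψ - b||² by Jacobi-preconditioned conjugate gradient on the normal equations.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 60
H = np.diag(np.linspace(1.0, 10.0, n)) + 0.01 * rng.standard_normal((n, n))
H = 0.5 * (H + H.T)            # symmetric model "Hamiltonian" (assumption)
E = 0.5                        # model scattering energy, off the spectrum
b = rng.standard_normal(n)     # stand-in for the boundary-matching inhomogeneity

A = H - E * np.eye(n)
M = A.T @ A                    # normal-equation matrix (symmetric positive definite)
rhs = A.T @ b
precond = 1.0 / np.diag(M)     # Jacobi preconditioner

def pcg(M, rhs, precond, tol=1e-10, maxit=500):
    """Preconditioned conjugate gradient for M x = rhs, M SPD."""
    x = np.zeros_like(rhs)
    r = rhs - M @ x
    z = precond * r
    p = z.copy()
    it = 0
    while np.linalg.norm(r) > tol and it < maxit:
        Mp = M @ p
        alpha = (r @ z) / (p @ Mp)
        x += alpha * p
        r_new = r - alpha * Mp
        z_new = precond * r_new
        beta = (r_new @ z_new) / (r @ z)
        p = z_new + beta * p
        r, z = r_new, z_new
        it += 1
    return x, it

psi, iters = pcg(M, rhs, precond)
residual = np.linalg.norm(A @ psi - b)
```

The point of the preconditioner is exactly the one made in the abstract: it reduces the iteration count of the CG search, avoiding the explicit matrix inversion the KVP requires.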
Synchronization of generalized Henon map using polynomial controller
International Nuclear Information System (INIS)
Lam, H.K.
2010-01-01
This Letter presents the chaos synchronization of two discrete-time generalized Henon maps, namely the drive and response systems. A polynomial controller is proposed to drive the system states of the response system to follow those of the drive system. The stability of the error system formed by the drive and response systems and the synthesis of the polynomial controller are investigated using the sum-of-squares (SOS) technique. Based on Lyapunov stability theory, stability conditions in terms of SOS are derived to guarantee the system stability and facilitate the controller synthesis. By satisfying the SOS-based stability conditions, chaotic synchronization is achieved. The solution of the SOS-based stability conditions can be found numerically using the third-party Matlab toolbox SOSTOOLS. A simulation example is given to illustrate the merits of the proposed polynomial control approach.
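The drive-response structure can be sketched with the classical 2-D Henon map and a hand-picked polynomial cancellation controller (not the SOS-synthesized controller of the Letter, and the gains and initial conditions are assumptions):

```python
# Drive system: classical Henon map. The response receives a polynomial
# control input chosen to cancel the quadratic-term mismatch exactly,
# so the synchronization error obeys a linear contracting recursion.
a, b = 1.4, 0.3

def henon(x, y):
    return 1.0 - a * x * x + y, b * x

x, y = 0.1, 0.0            # drive state
xh, yh = -0.5, 0.4         # response state, different initial condition
for _ in range(60):
    u = a * (xh * xh - x * x)            # polynomial controller
    x, y = henon(x, y)
    xh, yh = 1.0 - a * xh * xh + yh + u, b * xh

err = abs(x - xh) + abs(y - yh)
```

With this cancellation the error dynamics are e⁺ = [[0, 1], [b, 0]] e, whose spectral radius √b ≈ 0.55 < 1, so the response state converges to the drive state geometrically.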
International Nuclear Information System (INIS)
Pozdeyev, Mikhail
2002-01-01
Full text: Participating in the film are Academicians Velikhov and Glukhikh, Mr. Filatof, ITER Director from Russia, and Mr. Sannikov from the Kurchatov Institute. The film tells about the starting point of the project (Mr. Lavrentyev), the pioneers of the project (Academicians Tamme, Sakharov, Artsimovich) and about the current status of the project. Participating in ITER now are the US, Russia, Japan and the European Union. There are also two associated members, Kazakhstan and Canada. By now the engineering design phase has been finished. Computer animation used in the video gives an idea of how the first thermonuclear reactor based on the famous Russian tokamak works. (author)
Closed-form estimates of the domain of attraction for nonlinear systems via fuzzy-polynomial models.
Pitarch, José Luis; Sala, Antonio; Ariño, Carlos Vicente
2014-04-01
In this paper, the domain of attraction of the origin of a nonlinear system is estimated in closed form via level sets with polynomial boundaries, iteratively computed. In particular, the domain of attraction is expanded from a previous estimate, such as a classical Lyapunov level set. With the use of fuzzy-polynomial models, the domain of attraction analysis can be carried out via sum of squares optimization and an iterative algorithm. The result is a function that bounds the domain of attraction, free from the usual restriction of being positive and decrescent in all the interior of its level sets.
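The level-set idea can be illustrated on a scalar example by plain grid search rather than the paper's SOS machinery. For the assumed system ẋ = -x + x³ with V(x) = x², the true domain of attraction of the origin is |x| < 1, so the largest admissible level c should approach 1.

```python
import numpy as np

# Estimate the largest c such that the level set {V <= c} lies inside the
# domain of attraction: require Vdot < 0 for all x != 0 with V(x) <= c.
f = lambda x: -x + x**3            # unstable equilibria at x = +-1
V = lambda x: x**2
Vdot = lambda x: 2.0 * x * f(x)    # = -2 x^2 (1 - x^2)

xs = np.linspace(-2.0, 2.0, 4001)
xs = xs[np.abs(xs) > 1e-9]         # exclude the origin

best_c = 0.0
for c in np.linspace(0.01, 4.0, 400):
    inside = xs[V(xs) <= c]
    if np.all(Vdot(inside) < 0.0):
        best_c = c
```

The SOS approach in the paper replaces this exhaustive check with a certificate that is verified by convex optimization, and then iteratively enlarges the set beyond the Lyapunov level set.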
International Nuclear Information System (INIS)
Paul, Sabyasachi; Sarkar, P.K.
2012-05-01
The characterization of radionuclides in in-vivo monitoring analysis using gamma spectrometry poses difficulty due to the very low activity levels in biological systems. The large statistical fluctuations often make identification of characteristic gammas from radionuclides highly uncertain, particularly when interferences from progenies are also present. A new wavelet-based noise filtering methodology has been developed for better detection of gamma peaks while analyzing noisy spectrometric data. This sequential, iterative filtering method uses the wavelet multi-resolution approach for noise rejection, followed by the inverse transform after soft thresholding of the generated coefficients. Analyses of in-vivo monitoring data of ²³⁵U and ²³⁸U have been carried out using this method without disturbing the peak position and amplitude, while achieving a threefold improvement in the signal-to-noise ratio compared to the original measured spectrum. When compared with other data filtering techniques, the wavelet-based method shows better results. (author)
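The decompose / soft-threshold / reconstruct loop can be sketched with a plain Haar transform. This uses numpy only; the wavelet family, threshold and the synthetic "gamma peak" below are assumptions, and the paper's multi-resolution scheme may differ in both.

```python
import numpy as np

def haar_fwd(s):
    """One Haar analysis step: returns (approximation, detail)."""
    s = np.asarray(s, dtype=float)
    return (s[0::2] + s[1::2]) / np.sqrt(2), (s[0::2] - s[1::2]) / np.sqrt(2)

def haar_inv(a, d):
    out = np.empty(2 * len(a))
    out[0::2] = (a + d) / np.sqrt(2)
    out[1::2] = (a - d) / np.sqrt(2)
    return out

def denoise(signal, levels=4, thresh=0.2):
    a, details = signal, []
    for _ in range(levels):
        a, d = haar_fwd(a)
        # soft thresholding of the detail coefficients
        details.append(np.sign(d) * np.maximum(np.abs(d) - thresh, 0.0))
    for d in reversed(details):
        a = haar_inv(a, d)
    return a

rng = np.random.default_rng(2)
x = np.linspace(0, 1, 256)
peak = np.exp(-0.5 * ((x - 0.5) / 0.02) ** 2)   # toy "gamma peak" (assumption)
noisy = peak + 0.1 * rng.standard_normal(x.size)
clean = denoise(noisy, levels=4, thresh=0.2)
```

Soft thresholding suppresses the small noise-dominated detail coefficients while the few large coefficients carrying the peak survive, which is why the peak position and amplitude are largely preserved.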
Bioprocess iterative batch-to-batch optimization based on hybrid parametric/nonparametric models.
Teixeira, Ana P; Clemente, João J; Cunha, António E; Carrondo, Manuel J T; Oliveira, Rui
2006-01-01
This paper presents a novel method for iterative batch-to-batch dynamic optimization of bioprocesses. The relationship between process performance and control inputs is established by means of hybrid grey-box models combining parametric and nonparametric structures. The bioreactor dynamics are defined by material balance equations, whereas the cell population subsystem is represented by an adjustable mixture of nonparametric and parametric models. Thus optimizations are possible without detailed mechanistic knowledge concerning the biological system. A clustering technique is used to supervise the reliability of the nonparametric subsystem during the optimization. Whenever the nonparametric outputs are unreliable, the objective function is penalized. The technique was evaluated with three simulation case studies. The overall results suggest that the convergence to the optimal process performance may be achieved after a small number of batches. The model unreliability risk constraint along with sampling scheduling are crucial to minimize the experimental effort required to attain a given process performance. In general terms, it may be concluded that the proposed method broadens the application of the hybrid parametric/nonparametric modeling technique to "newer" processes with higher potential for optimization.
Solving the interval type-2 fuzzy polynomial equation using the ranking method
Rahman, Nurhakimah Ab.; Abdullah, Lazim
2014-07-01
Polynomial equations with trapezoidal and triangular fuzzy numbers have attracted some interest among researchers in mathematics, engineering and the social sciences. Several methods have been developed to solve such equations. In this study we introduce the interval type-2 fuzzy polynomial equation and solve it using the ranking method of fuzzy numbers. The ranking method concept was first proposed to find real roots of fuzzy polynomial equations. Here, the ranking method is applied to find real roots of the interval type-2 fuzzy polynomial equation. We transform the interval type-2 fuzzy polynomial equation to a system of crisp interval type-2 fuzzy polynomial equations. This transformation is performed using the ranking method of fuzzy numbers based on three parameters, namely value, ambiguity and fuzziness. Finally, we illustrate our approach by a numerical example.
Multivariate Local Polynomial Regression with Application to Shenzhen Component Index
Directory of Open Access Journals (Sweden)
Liyun Su
2011-01-01
Full Text Available This study attempts to characterize and predict stock index series in the Shenzhen stock market using the concepts of multivariate local polynomial regression. Based on the nonlinearity and chaos of the stock index time series, multivariate local polynomial prediction methods and a univariate local polynomial prediction method, all of which use the concept of phase space reconstruction according to Takens' Theorem, are considered. To fit the stock index series, the single series is converted into a bivariate series. To evaluate the results, the multivariate predictor for bivariate time series based on the multivariate local polynomial model is compared with the univariate predictor on the same Shenzhen stock index data. The numerical results obtained for the Shenzhen component index show that the prediction mean squared error of the multivariate predictor is much smaller than that of the univariate one and much better than the three existing methods. Even if only the last half of the training data are used in the multivariate predictor, the prediction mean squared error is smaller than that of the univariate predictor. The multivariate local polynomial prediction model is thus a useful tool for stock market price prediction.
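A one-step-ahead local linear (degree-1 local polynomial) predictor with a delay embedding can be sketched as follows. This is a toy univariate version with a Gaussian kernel and a logistic-map series standing in for stock data; the paper's bivariate construction and bandwidth selection are more elaborate.

```python
import numpy as np

def local_linear_predict(series, m=3, h=0.1):
    """Predict the next value from delay vectors of length m using
    kernel-weighted linear least squares (local polynomial, degree 1)."""
    X = np.array([series[i:i + m] for i in range(len(series) - m)])
    y = series[m:]
    query = series[-m:]
    w = np.exp(-np.sum((X - query) ** 2, axis=1) / (2 * h * h))  # Gaussian kernel
    Xa = np.hstack([np.ones((len(X), 1)), X])                    # intercept + slopes
    W = np.diag(w)
    # small ridge term guards against a singular weighted normal matrix
    beta = np.linalg.solve(Xa.T @ W @ Xa + 1e-8 * np.eye(m + 1), Xa.T @ W @ y)
    return beta[0] + beta[1:] @ query

# toy chaotic series from the logistic map (assumption, not market data)
s = [0.4]
for _ in range(400):
    s.append(3.9 * s[-1] * (1.0 - s[-1]))
s = np.array(s)
pred = local_linear_predict(s[:-1], m=3, h=0.1)
true_next = s[-1]
```

The kernel weights restrict the regression to reconstructed states near the current one, which is what makes a linear fit adequate for a nonlinear (chaotic) map.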
International Nuclear Information System (INIS)
Kirov, A S; Schmidtlein, C R; Piao, J Z
2008-01-01
Correcting positron emission tomography (PET) images for the partial volume effect (PVE) due to the limited resolution of PET has been a long-standing challenge. Various approaches, including incorporation of the system response function in the reconstruction, have been previously tested. We present a post-reconstruction PVE correction based on iterative deconvolution using a 3D maximum likelihood expectation-maximization (MLEM) algorithm. To achieve convergence we used a one step late (OSL) regularization procedure based on the assumption of local monotonic behavior of the PET signal, following Alenius et al. This technique was further modified to selectively control variance depending on the local topology of the PET image. No prior 'anatomic' information is needed in this approach; an estimate of the noise properties of the image is used instead. The procedure was tested for symmetric and isotropic deconvolution functions with Gaussian shape and full width at half-maximum (FWHM) ranging from 6.31 mm to infinity. The method was applied to simulated and experimental scans of the NEMA NU 2 image quality phantom with the GE Discovery LS PET/CT scanner. The phantom contained uniform activity spheres with diameters ranging from 1 cm to 3.7 cm within a uniform background. The optimal sphere activity to variance ratio was obtained when the deconvolution function was replaced by a step function a few voxels wide. In this case, the deconvolution method converged in ∼3-5 iterations for most points on both the simulated and experimental images. For the 1 cm diameter sphere, the contrast recovery improved from 12% to 36% in the simulated and from 21% to 55% in the experimental data. Recovery coefficients between 80% and 120% were obtained for all larger spheres, except for the 13 mm diameter sphere in the simulated scan (68%). No increase in variance was observed except for a few voxels neighboring strong activity gradients and inside the largest spheres. Testing the method for
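A minimal MLEM-type (Richardson-Lucy) deconvolution loop can be sketched in 1-D. This omits the OSL regularization and variance control described above, and the PSF width and "sphere in background" profile are invented for illustration.

```python
import numpy as np

def gauss_kernel(fwhm, n=21):
    sigma = fwhm / 2.355
    x = np.arange(n) - n // 2
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def richardson_lucy(blurred, psf, iters=100):
    """Multiplicative MLEM deconvolution update (Richardson-Lucy)."""
    est = np.full_like(blurred, blurred.mean())   # flat initial estimate
    psf_m = psf[::-1]                             # mirrored PSF
    for _ in range(iters):
        conv = np.convolve(est, psf, mode="same")
        ratio = blurred / np.maximum(conv, 1e-12)
        est = est * np.convolve(ratio, psf_m, mode="same")
    return est

# toy "hot sphere in uniform background" profile
truth = np.full(200, 1.0)
truth[97:103] = 4.0
psf = gauss_kernel(fwhm=6.0)
blurred = np.convolve(truth, psf, mode="same")
recovered = richardson_lucy(blurred, psf, iters=100)
```

The multiplicative update keeps the estimate non-negative and progressively restores the contrast that the PSF smeared out, which is the mechanism behind the contrast-recovery gains reported above.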
Energy Technology Data Exchange (ETDEWEB)
Jia, Qianjun, E-mail: jiaqianjun@126.com [Southern Medical University, Guangzhou, Guangdong (China); Department of Radiology, Guangdong General Hospital, Guangdong Academy of Medical Sciences, Guangzhou, Guangdong (China); Department of Catheterization Lab, Guangdong Cardiovascular Institute, Guangdong Provincial Key Laboratory of South China Structural Heart Disease, Guangdong General Hospital, Guangdong Academy of Medical Sciences, Guangzhou, Guangdong (China); Zhuang, Jian, E-mail: zhuangjian5413@tom.com [Department of Cardiac Surgery, Guangdong Cardiovascular Institute, Guangdong Provincial Key Laboratory of South China Structural Heart Disease, Guangdong General Hospital, Guangdong Academy of Medical Sciences, Guangzhou, Guangdong (China); Jiang, Jun, E-mail: 81711587@qq.com [Department of Radiology, Shenzhen Second People’s Hospital, Shenzhen, Guangdong (China); Li, Jiahua, E-mail: 970872804@qq.com [Department of Catheterization Lab, Guangdong Cardiovascular Institute, Guangdong Provincial Key Laboratory of South China Structural Heart Disease, Guangdong General Hospital, Guangdong Academy of Medical Sciences, Guangzhou, Guangdong (China); Huang, Meiping, E-mail: huangmeiping_vip@163.com [Department of Catheterization Lab, Guangdong Cardiovascular Institute, Guangdong Provincial Key Laboratory of South China Structural Heart Disease, Guangdong General Hospital, Guangdong Academy of Medical Sciences, Guangzhou, Guangdong (China); Southern Medical University, Guangzhou, Guangdong (China); Liang, Changhong, E-mail: cjr.lchh@vip.163.com [Department of Radiology, Guangdong General Hospital, Guangdong Academy of Medical Sciences, Guangzhou, Guangdong (China); Southern Medical University, Guangzhou, Guangdong (China)
2017-01-15
Purpose: To compare the image quality, rate of coronary artery visualization and diagnostic accuracy of 256-slice multi-detector computed tomography angiography (CTA) with prospective electrocardiographic (ECG) triggering at a tube voltage of 80 kVp between 3 reconstruction algorithms (filtered back projection (FBP), hybrid iterative reconstruction (iDose{sup 4}) and iterative model reconstruction (IMR)) in infants with congenital heart disease (CHD). Methods: Fifty-one infants with CHD who underwent cardiac CTA in our institution between December 2014 and March 2015 were included. The effective radiation doses were calculated. Imaging data were reconstructed using the FBP, iDose{sup 4} and IMR algorithms. Parameters of objective image quality (noise, signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR)); subjective image quality (overall image quality, image noise and margin sharpness); coronary artery visibility; and diagnostic accuracy for the three algorithms were measured and compared. Results: The mean effective radiation dose was 0.61 ± 0.32 mSv. Compared to FBP and iDose{sup 4}, IMR yielded significantly lower noise (P < 0.01), higher SNR and CNR values (P < 0.01), and a greater subjective image quality score (P < 0.01). The total number of coronary segments visualized was significantly higher for both iDose{sup 4} and IMR than for FBP (P = 0.002 and P = 0.025, respectively), but there was no significant difference in this parameter between iDose{sup 4} and IMR (P = 0.397). There was no significant difference in the diagnostic accuracy between the FBP, iDose{sup 4} and IMR algorithms (χ{sup 2} = 0.343, P = 0.842). Conclusions: For infants with CHD undergoing cardiac CTA, the IMR reconstruction algorithm provided significantly increased objective and subjective image quality compared with the FBP and iDose{sup 4} algorithms. However, IMR did not improve the diagnostic accuracy or coronary artery visualization compared with iDose{sup 4}.
Discrete-Time Filter Synthesis using Product of Gegenbauer Polynomials
Directory of Open Access Journals (Sweden)
N. Stojanovic
2016-09-01
Full Text Available A new approximation for designing continuous-time and discrete-time low-pass filters, presented in this paper and based on the product of Gegenbauer polynomials, provides the ability to adjust the passband and stopband responses more flexibly. The design is achieved taking a prescribed specification into account, leading to a better trade-off between the magnitude and group delay responses. Many well-known continuous-time and discrete-time transitional filters based on the classical polynomial approximations (Chebyshev, Legendre, Butterworth) are shown to be special cases of the proposed approximation method.
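The Gegenbauer polynomials C_n^λ that such approximations are built from satisfy a three-term recurrence that is easy to evaluate numerically. This shows evaluation only; the filter-design step from the paper is not reproduced.

```python
def gegenbauer(n, lam, x):
    """Evaluate C_n^lambda(x) via the standard three-term recurrence:
    n*C_n = 2*(n + lam - 1)*x*C_{n-1} - (n + 2*lam - 2)*C_{n-2}."""
    if n == 0:
        return 1.0
    c_prev, c = 1.0, 2.0 * lam * x
    for k in range(2, n + 1):
        c_prev, c = c, (2.0 * (k + lam - 1.0) * x * c
                        - (k + 2.0 * lam - 2.0) * c_prev) / k
    return c

# lam = 1 recovers the Chebyshev polynomials of the second kind U_n,
# lam = 1/2 the Legendre polynomials: the classical special cases above.
```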
M-Polynomial and Related Topological Indices of Nanostar Dendrimers
Directory of Open Access Journals (Sweden)
Mobeen Munir
2016-09-01
Full Text Available Dendrimers are highly branched organic macromolecules with successive layers of branch units surrounding a central core. The M-polynomial of nanotubes has been widely investigated, as it produces many degree-based topological indices. These indices are invariants of the topology of the graphs associated with the molecular structure of nanomaterials, used to correlate certain physicochemical properties such as boiling point, stability and strain energy of chemical compounds. In this paper, we first determine the M-polynomials of some nanostar dendrimers and then recover many degree-based topological indices from them.
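The M-polynomial is M(G; x, y) = Σ_{i≤j} m_ij x^i y^j, where m_ij counts the edges whose endpoint degrees are i and j; degree-based indices follow by differentiating and evaluating at x = y = 1. A sketch on a small invented graph (a 4-cycle with one pendant vertex, not one of the paper's dendrimers):

```python
from collections import Counter

# edges of a toy graph: 4-cycle (0,1,2,3) plus pendant vertex 4 attached to 0
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 4)]

deg = Counter()
for u, v in edges:
    deg[u] += 1
    deg[v] += 1

# m[(i, j)] = number of edges whose endpoint degrees are {i, j}, i <= j
m = Counter(tuple(sorted((deg[u], deg[v]))) for u, v in edges)

def M(x, y):
    """Evaluate the M-polynomial M(G; x, y) = sum_ij m_ij x^i y^j."""
    return sum(cnt * x**i * y**j for (i, j), cnt in m.items())

# first Zagreb index: M1 = sum over edges of (d_u + d_v); it equals
# (D_x + D_y) M(G; x, y) evaluated at x = y = 1
M1 = sum(deg[u] + deg[v] for u, v in edges)
```

Note that M(1, 1) is simply the edge count, a quick sanity check on the degree bookkeeping.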
Szász-Durrmeyer operators involving Boas-Buck polynomials of blending type.
Sidharth, Manjari; Agrawal, P N; Araci, Serkan
2017-01-01
The present paper introduces the Szász-Durrmeyer type operators based on Boas-Buck type polynomials, which include the Brenke type polynomials, Sheffer polynomials and Appell polynomials considered by Sucu et al. (Abstr. Appl. Anal. 2012:680340, 2012). We establish the moments of the operator and a Voronovskaja type asymptotic theorem, and then proceed to study the convergence of the operators with the help of a Lipschitz type space and a weighted modulus of continuity. Next, we obtain a direct approximation theorem with the aid of the unified Ditzian-Totik modulus of smoothness. Furthermore, we study the approximation of functions whose derivatives are locally of bounded variation.
Szász-Durrmeyer operators involving Boas-Buck polynomials of blending type
Directory of Open Access Journals (Sweden)
Manjari Sidharth
2017-05-01
Full Text Available Abstract The present paper introduces the Szász-Durrmeyer type operators based on Boas-Buck type polynomials, which include the Brenke type polynomials, Sheffer polynomials and Appell polynomials considered by Sucu et al. (Abstr. Appl. Anal. 2012:680340, 2012). We establish the moments of the operator and a Voronovskaja type asymptotic theorem, and then proceed to study the convergence of the operators with the help of a Lipschitz type space and a weighted modulus of continuity. Next, we obtain a direct approximation theorem with the aid of the unified Ditzian-Totik modulus of smoothness. Furthermore, we study the approximation of functions whose derivatives are locally of bounded variation.
H∞ Control of Polynomial Fuzzy Systems: A Sum of Squares Approach
Directory of Open Access Journals (Sweden)
Bomo W. Sanjaya
2014-07-01
Full Text Available This paper proposes the control design of a nonlinear polynomial fuzzy system with an H∞ performance objective using a sum of squares (SOS) approach. The fuzzy model and controller are represented by a polynomial fuzzy model and controller. The design condition is obtained by using polynomial Lyapunov functions that not only guarantee stability but also satisfy the H∞ performance objective. The design condition is represented in terms of an SOS that can be numerically solved via SOSTOOLS. A simulation study is presented to show the effectiveness of the SOS-based H∞ control design for nonlinear polynomial fuzzy systems.
Fast computation of the roots of polynomials over the ring of power series
DEFF Research Database (Denmark)
Neiger, Vincent; Rosenkilde, Johan; Schost, Éric
2017-01-01
We give an algorithm for computing all roots of polynomials over a univariate power series ring over an exact field K. More precisely, given a precision d and a polynomial Q whose coefficients are power series in x, the algorithm computes a representation of all power series f(x) such that Q(f(x)) = 0 mod x^d. The algorithm works unconditionally, in particular also with multiple roots, where Newton iteration fails. Our main motivation comes from coding theory, where instances of this problem arise and multiple roots must be handled. The cost bound for our algorithm matches the worst-case input...
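For a simple (non-multiple) root, plain Newton iteration over truncated power series already works; the paper's contribution is the multiple-root case, where the sketch below would fail. As an assumed example, we lift the root f(x) of Q(y) = y² - (1 + x) with f(0) = 1, i.e. the series of √(1+x), to precision d over floating-point coefficients.

```python
# power series represented as coefficient lists truncated to length d
def mul(a, b, d):
    out = [0.0] * d
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b[: d - i]):
                out[i + j] += ai * bj
    return out

def inv(a, d):
    """Series inverse by Newton iteration g <- g*(2 - a*g), doubling precision."""
    g = [1.0 / a[0]]
    prec = 1
    while prec < d:
        prec = min(2 * prec, d)
        ag = mul(a[:prec], g + [0.0] * (prec - len(g)), prec)
        two_minus = [2.0 - ag[0]] + [-c for c in ag[1:]]
        g = mul(g + [0.0] * (prec - len(g)), two_minus, prec)
    return g

d = 8
s = [1.0, 1.0] + [0.0] * (d - 2)      # s = 1 + x
f = [1.0] + [0.0] * (d - 1)           # root of Q mod x: f(0) = 1
for _ in range(5):                    # Newton: f <- f - (f^2 - s) / (2 f)
    num = [c1 - c2 for c1, c2 in zip(mul(f, f, d), s)]
    step = mul(num, inv([2.0 * c for c in f], d), d)
    f = [c1 - c2 for c1, c2 in zip(f, step)]
# f now holds the series of sqrt(1+x): 1 + x/2 - x^2/8 + x^3/16 - ...
```

Each Newton step doubles the attained x-adic precision, which is the quadratic-convergence behavior that breaks down precisely when Q' vanishes at the root.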
Solutions of interval type-2 fuzzy polynomials using a new ranking method
Rahman, Nurhakimah Ab.; Abdullah, Lazim; Ghani, Ahmad Termimi Ab.; Ahmad, Noor'Ani
2015-10-01
A few years ago, a ranking method was introduced for fuzzy polynomial equations. The concept of the ranking method is to find the actual roots of fuzzy polynomials (if they exist). Fuzzy polynomials are transformed to a system of crisp polynomials using a ranking method based on three parameters, namely Value, Ambiguity and Fuzziness. However, it was found that solutions based on these three parameters are quite inefficient at producing answers. Therefore, in this study a new ranking method has been developed with the aim of overcoming this inherent weakness. The new ranking method, which has four parameters, is then applied to the interval type-2 fuzzy polynomials, covering the interval type-2 fuzzy polynomial equation, dual fuzzy polynomial equations and systems of fuzzy polynomials. The efficiency of the new ranking method is then examined numerically on triangular and trapezoidal fuzzy numbers. Finally, the approximate solutions produced in the numerical examples indicate that the new ranking method successfully produces actual roots for the interval type-2 fuzzy polynomials.
Kazemi, Mahdi; Arefi, Mohammad Mehdi
2017-03-01
In this paper, an online identification algorithm is presented for nonlinear systems in the presence of output colored noise. The proposed method is based on the extended recursive least squares (ERLS) algorithm, where the identified system is in polynomial Wiener form. To this end, an unknown intermediate signal is estimated by using an inner iterative algorithm. The iterative recursive algorithm adaptively modifies the parameter vector of the presented Wiener model when the system parameters vary. In addition, to increase the robustness of the proposed method against variations, a robust RLS algorithm is applied to the model. Simulation results are provided to show the effectiveness of the proposed approach. The results confirm that the proposed method has a fast convergence rate with robust characteristics, which increases the efficiency of the proposed model and identification approach. For instance, the FIT criterion reaches 92% in the CSTR process, where about 400 data points are used. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
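The recursive least squares core of such schemes, shown here in its standard linear-regression form with a forgetting factor, can be sketched as follows. The Wiener-model extension with the estimated intermediate signal is omitted, and the toy FIR system and noise level are assumptions.

```python
import numpy as np

def rls_identify(phi_seq, y_seq, n_params, lam=0.98):
    """Standard RLS with forgetting factor lam.
    phi_seq[k] is the regressor vector at step k, y_seq[k] the output."""
    theta = np.zeros(n_params)
    P = 1e4 * np.eye(n_params)               # large initial covariance
    for phi, y in zip(phi_seq, y_seq):
        Pphi = P @ phi
        k = Pphi / (lam + phi @ Pphi)        # gain vector
        theta = theta + k * (y - phi @ theta)
        P = (P - np.outer(k, Pphi)) / lam    # covariance update
    return theta

# identify y_k = 0.7*u_k - 0.3*u_{k-1} from noisy data (toy FIR system)
rng = np.random.default_rng(3)
u = rng.standard_normal(500)
y = 0.7 * u[1:] - 0.3 * u[:-1] + 0.01 * rng.standard_normal(499)
phis = np.column_stack([u[1:], u[:-1]])
theta = rls_identify(phis, y, n_params=2)
```

The forgetting factor λ < 1 discounts old data exponentially, which is what lets the recursion track parameters that vary over time.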
Jiang, Rui; McKanna, James; Calabrese, Samantha; Seif El-Nasr, Magy
2017-08-01
Herein we describe a methodology for developing a game-based intervention to raise awareness of Chlamydia and other sexually transmitted infections among youth in Boston's underserved communities. We engaged in three design-based experiments. These utilized mixed methods, including playtesting and assessment methods, to examine the overall effectiveness of the game. In this case, effectiveness is defined as (1) engaging the target group, (2) increasing knowledge about Chlamydia, and (3) changing attitudes toward Chlamydia testing. These three experiments were performed using participants from different communities and with slightly different versions of the game, as we iterated through the design/feedback process. Overall, participants who played the game showed a significant increase in participants' knowledge of Chlamydia compared with those in the control group (P = 0.0002). The version of the game, including elements specifically targeting systemic thinking, showed significant improvement in participants' intent to get tested compared with the version of the game without such elements (Stage 2: P > 0.05; Stage 3: P = 0.0045). Furthermore, during both Stage 2 and Stage 3, participants showed high levels of enjoyment, mood, and participation and moderate levels of game engagement and social engagement. During Stage 3, however, participants' game engagement (P = 0.0003), social engagement (P = 0.0003), and participation (P = 0.0003) were significantly higher compared with those of Stage 2. Thus, we believe that motivation improvements from Stage 2 to 3 were also effective. Finally, participants' overall learning effectiveness was correlated with their prepositive affect (r = 0.52) and their postproblem hierarchy (r = -0.54). The game improved considerably from its initial conception through three stages of iterative design and feedback. Our assessment methods for each stage targeted and integrated learning, health, and engagement
Algebraic polynomials with random coefficients
Directory of Open Access Journals (Sweden)
K. Farahmand
2002-01-01
Full Text Available This paper provides an asymptotic value for the mathematical expected number of points of inflection of a random polynomial of the form $a_0(\omega)+a_1(\omega)\binom{n}{1}^{1/2}x+a_2(\omega)\binom{n}{2}^{1/2}x^2+\cdots+a_n(\omega)\binom{n}{n}^{1/2}x^n$ when n is large. The coefficients $\{a_j(\omega)\}_{j=0}^{n}$, $\omega\in\Omega$, are assumed to be a sequence of independent normally distributed random variables with means zero and variance one, each defined on a fixed probability space $(A,\Omega,\Pr)$. A special case of dependent coefficients is also studied.
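The expected number of inflection points can be probed by Monte Carlo as a numerical illustration (not the paper's asymptotic analysis): sample the Gaussian coefficients, form p''(x), and count its real roots. Counting real roots of p'' slightly overcounts inflection points when a root has even multiplicity, but for random coefficients the roots are simple almost surely.

```python
import numpy as np
from math import comb, sqrt

def count_inflections(n, trials=200, seed=0):
    rng = np.random.default_rng(seed)
    total = 0
    for _ in range(trials):
        a = rng.standard_normal(n + 1)
        # coefficients of sum_j a_j * C(n,j)^(1/2) * x^j, highest degree first
        coeffs = [a[j] * sqrt(comb(n, j)) for j in range(n, -1, -1)]
        p2 = np.polyder(np.poly1d(coeffs), 2)     # second derivative
        roots = p2.roots
        total += int(np.sum(np.abs(roots.imag) < 1e-6))
    return total / trials

avg = count_inflections(12)   # average number of real roots of p'' at n = 12
```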
Fourier series and orthogonal polynomials
Jackson, Dunham
2004-01-01
This text for undergraduate and graduate students illustrates the fundamental simplicity of the properties of orthogonal functions and their developments in related series. Starting with a definition and explanation of the elements of Fourier series, the text follows with examinations of Legendre polynomials and Bessel functions. Boundary value problems consider Fourier series in conjunction with Laplace's equation in an infinite strip and in a rectangle, with a vibrating string, in three dimensions, in a sphere, and in other circumstances. An overview of Pearson frequency functions is followe
Killings, duality and characteristic polynomials
Álvarez, Enrique; Borlaf, Javier; León, José H.
1998-03-01
In this paper the complete geometrical setting of (lowest order) abelian T-duality is explored with the help of some new geometrical tools (the reduced formalism). In particular, all invariant polynomials (the integrands of the characteristic classes) can be explicitly computed for the dual model in terms of quantities pertaining to the original one and with the help of the canonical connection whose intrinsic characterization is given. Using our formalism the physically, and T-duality invariant, relevant result that top forms are zero when there is an isometry without fixed points is easily proved. © 1998
Orthogonal polynomials and random matrices
Deift, Percy
2000-01-01
This volume expands on a set of lectures held at the Courant Institute on Riemann-Hilbert problems, orthogonal polynomials, and random matrix theory. The goal of the course was to prove universality for a variety of statistical quantities arising in the theory of random matrix models. The central question was the following: Why do very general ensembles of random n × n matrices exhibit universal behavior as n → ∞? The main ingredient in the proof is the steepest descent method for oscillatory Riemann-Hilbert problems.
Introduction to Real Orthogonal Polynomials
1992-06-01
uses Green's functions. As motivation, consider the Dirichlet problem for the unit circle in the plane, which involves finding a harmonic function u(r, θ) on the disk. [A q-hypergeometric orthogonality relation for polynomials p_n(q^x; a, b; q) appears here but is too garbled to recover.] This provides motivation and justification for continued study of the intrinsic structure of orthogonal polynomials.
Energy Technology Data Exchange (ETDEWEB)
Shim, Hee-Jin [ITER Korea, National Fusion Research Institute, 169-148 Gwahak-Ro, Yuseong-Gu, Daejeon (Korea, Republic of); Ha, Min-Su, E-mail: msha12@nfri.re.kr [ITER Korea, National Fusion Research Institute, 169-148 Gwahak-Ro, Yuseong-Gu, Daejeon (Korea, Republic of); Kim, Sa-Woong; Jung, Hun-Chea [ITER Korea, National Fusion Research Institute, 169-148 Gwahak-Ro, Yuseong-Gu, Daejeon (Korea, Republic of); Kim, Duck-Hoi [ITER Organization, Route de Vinon sur Verdon - CS 90046, 13067 Sant Paul Lez Durance (France)
2016-11-01
Highlights: • The procedure of structural integrity and fatigue assessment is described. • Case studies were performed according to both the SDC-IC and ASME Sec. III codes. • The conservatism of the ASME code is demonstrated. • The study covers only the specifically comparable case of the fatigue usage factor. - Abstract: The ITER blanket Shield Block is a bulk structure that absorbs radiation and provides thermal shielding to the vacuum vessel and external vessel components; the most significant load for the Shield Block is therefore the thermal load. In a previous study, the thermo-mechanical analysis was performed under inductive operation as the representative loading condition, and fatigue evaluations were conducted to assure the structural integrity of the Shield Block according to the Structural Design Criteria for In-vessel Components (SDC-IC) provided by the ITER Organization (IO), which is based on the RCC-MR code. The ASME code (especially B&PV Sec. III) is widely applied to the design of nuclear components and is generally considered more conservative than other specific codes. From the viewpoint of fatigue assessment, the ASME code is very conservative compared with the SDC-IC in terms of the applied K{sub e} factor, the design fatigue curve and other factors, so an accurate comparison of fatigue assessments is needed to measure this conservatism. The purpose of this study is to compare the fatigue usage of the Shield Block, evaluated under the specified operating conditions according to both the SDC-IC and the ASME code, and to discuss the conservatism of the results.
International Nuclear Information System (INIS)
Shim, Hee-Jin; Ha, Min-Su; Kim, Sa-Woong; Jung, Hun-Chea; Kim, Duck-Hoi
2016-01-01
Highlights: • The procedure of structural integrity and fatigue assessment is described. • Case studies were performed according to both the SDC-IC and ASME Sec. III codes. • The conservatism of the ASME code is demonstrated. • The study covers only the specifically comparable case of the fatigue usage factor. - Abstract: The ITER blanket Shield Block is a bulk structure that absorbs radiation and provides thermal shielding to the vacuum vessel and external vessel components; the most significant load for the Shield Block is therefore the thermal load. In a previous study, the thermo-mechanical analysis was performed under inductive operation as the representative loading condition, and fatigue evaluations were conducted to assure the structural integrity of the Shield Block according to the Structural Design Criteria for In-vessel Components (SDC-IC) provided by the ITER Organization (IO), which is based on the RCC-MR code. The ASME code (especially B&PV Sec. III) is widely applied to the design of nuclear components and is generally considered more conservative than other specific codes. From the viewpoint of fatigue assessment, the ASME code is very conservative compared with the SDC-IC in terms of the applied K_e factor, the design fatigue curve and other factors, so an accurate comparison of fatigue assessments is needed to measure this conservatism. The purpose of this study is to compare the fatigue usage of the Shield Block, evaluated under the specified operating conditions according to both the SDC-IC and the ASME code, and to discuss the conservatism of the results.
Directory of Open Access Journals (Sweden)
Yariv I
2016-10-01
Full Text Available Inbar Yariv,1 Menashe Haddad,2,3 Hamootal Duadi,1 Menachem Motiei,1 Dror Fixler1 1Faculty of Engineering and the Institute of Nanotechnology and Advanced Materials, Bar Ilan University, Ramat Gan, Israel; 2Sackler School of Medicine, Tel-Aviv University, Tel-Aviv, Israel; 3Mayanei Hayeshua Medical Center, Benei Brak, Israel Abstract: Physiological substances pose a challenge for researchers since their optical properties change constantly according to their physiological state. Those substances can be examined noninvasively by different optical methods with high sensitivity. Our research applies a novel noninvasive nanophotonics technique, i.e., iterative multi-plane optical property extraction (IMOPE) based on reflectance measurements, to tissue viability examination and to the detection of gold nanorods (GNRs) and blood flow. The IMOPE model combines an experimental setup designed for recording light intensity images with the multi-plane iterative Gerchberg-Saxton algorithm for reconstructing the reemitted light phase and calculating its standard deviation (STD). Changes in tissue composition affect its optical properties, which results in changes in the light phase that can be measured by its STD. We have demonstrated this new concept of correlating the light phase STD with the optical properties of a substance using transmission measurements only. This paper presents, for the first time, reflectance-based IMOPE tissue viability examination, producing a decrease in the computed STD for older tissues, as well as investigating their organic material absorption capability. Finally, differentiation of the femoral vein from adjacent tissues using GNRs and the detection of their presence within the blood circulation and tissues are also presented, with high sensitivity (better than computed tomography) to low quantities of GNRs (<3 mg). Keywords: Gerchberg-Saxton, optical properties, gold nanorods, blood vessel, tissue viability
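The Gerchberg-Saxton core of IMOPE, reduced to its textbook two-plane form, can be sketched as follows. The multi-plane free-space propagation used in IMOPE is replaced here by a single FFT between an object plane and a Fourier plane, and the synthetic amplitudes and phase are invented for illustration.

```python
import numpy as np

def gerchberg_saxton(amp_obj, amp_four, iters=100, seed=0):
    """Recover a phase consistent with measured amplitudes in two planes."""
    rng = np.random.default_rng(seed)
    field = amp_obj * np.exp(1j * rng.uniform(0, 2 * np.pi, amp_obj.shape))
    for _ in range(iters):
        F = np.fft.fft2(field)
        F = amp_four * np.exp(1j * np.angle(F))          # impose Fourier amplitude
        field = np.fft.ifft2(F)
        field = amp_obj * np.exp(1j * np.angle(field))   # impose object amplitude
    return np.angle(field)

# synthetic test object: Gaussian amplitude with a quadratic phase
n = 64
yy, xx = np.mgrid[0:n, 0:n] - n // 2
amp = np.exp(-(xx**2 + yy**2) / (2 * 15.0**2))
true_phase = 0.002 * (xx**2 + yy**2)
obj = amp * np.exp(1j * true_phase)
amp_four = np.abs(np.fft.fft2(obj))

phase = gerchberg_saxton(amp, amp_four)
phase_std = np.std(phase)     # the STD statistic IMOPE computes from the phase
```

The iteration alternately enforces the two measured amplitude constraints while keeping the evolving phase, and IMOPE's STD statistic is then read off the reconstructed phase map.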
Real-root property of the spectral polynomial of the Treibich-Verdier potential and related problems
Chen, Zhijie; Kuo, Ting-Jung; Lin, Chang-Shou; Takemura, Kouichi
2018-04-01
We study the spectral polynomial of the Treibich-Verdier potential. This spectral polynomial, which generalizes the classical Lamé polynomial, plays a fundamental role in both the finite-gap theory and the ODE theory of Heun's equation. In this paper, we prove that all the roots of this spectral polynomial are real and distinct under some assumptions. The proof uses the classical concept of Sturm sequences and isomonodromic theory. We also prove an analogous result for a polynomial associated with a generalized Lamé equation, where we apply a new approach based on the viewpoint of monodromy data.
Simulation of aspheric tolerance with polynomial fitting
Li, Jing; Cen, Zhaofeng; Li, Xiaotong
2018-01-01
Machining errors change the shape of an aspheric lens, altering the optical transfer function and thus the image quality. At present, there is no universally recognized tolerance criterion for aspheric surfaces. To study the influence of aspheric tolerances on the optical transfer function, polynomial-fitted tolerances are allocated to the aspheric surface and imaging simulation is carried out in optical design software. The analysis is based on a set of aspheric imaging systems. An error is generated within a given peak-to-valley (PV) range, expressed as a Zernike polynomial, and added to the aspheric surface as a tolerance term. Through optical software analysis, the MTF of the optical system is obtained and used as the main evaluation index. Whether the effect of the added error on the system MTF meets the requirements is evaluated at the current PV value; the PV value is then changed and the procedure repeated until the maximum allowable PV value is obtained. In accordance with actual machining processes, errors of various shapes are considered, such as M-type, W-type, and random errors. The method provides a useful reference for practical freeform surface machining.
Positive trigonometric polynomials and signal processing applications
Dumitrescu, Bogdan
2017-01-01
This revised edition is made up of two parts: theory and applications. Though many of the fundamental results are still valid and used, new and revised material is woven throughout the text. As with the original book, the theory of sum-of-squares trigonometric polynomials is presented in a unified way, based on the concept of the Gram matrix (extended to Gram pairs or Gram sets). The programming environment has also evolved, and the book's examples have been updated accordingly. The applications section is organized as a collection of related problems that systematically use the theoretical results. All the problems are brought to a semidefinite programming form, ready to be solved with freely available algorithms, such as those from the libraries SeDuMi, CVX and Pos3Poly. A new chapter discusses applications in super-resolution theory, where the Bounded Real Lemma for trigonometric polynomials is an important tool. This revision is written to be more appealing and easier to use for new readers. Features updated information on LMI...
ITER physics design guidelines: 1989
International Nuclear Information System (INIS)
Uckan, N.A.
1990-01-01
The physics basis for ITER has been developed from an assessment of the results of the last twenty-five years of tokamak research and from detailed analysis of important physics issues specifically for the ITER design. This assessment has been carried out with the direct participation of members of the experimental teams of each of the major tokamaks in the world fusion program, through participation in ITER workshops, contributions to the ITER Physics R and D Program, and direct contacts between the ITER team and the cognizant experimentalists. Extrapolations beyond the present database, where needed, are made in the most cautious way consistent with the engineering constraints and performance goals of ITER. In cases where a working assumption insufficiently supported by the present database had to be introduced, this is explicitly stated. While a strong emphasis has been placed on the physics credibility of the design, the guidelines also take into account that ITER should be designed to be able to take advantage of potential improvements in tokamak physics that may occur before and during the operation of ITER. (author). 33 refs
ITER council proceedings: 2001
International Nuclear Information System (INIS)
2001-01-01
Continuing the ITER EDA, two further ITER Council Meetings were held since the publication of ITER EDA documentation series no. 20, namely the ITER Council Meeting on 27-28 February 2001 in Toronto, and the ITER Council Meeting on 18-19 July 2001 in Vienna, which was the last Meeting during the ITER EDA. This volume contains records of these Meetings, including: Records of decisions; List of attendees; ITER EDA status report; ITER EDA technical activities report; MAC report and advice; Final report of ITER EDA; and Press release
A Formally Verified Conflict Detection Algorithm for Polynomial Trajectories
Narkawicz, Anthony; Munoz, Cesar
2015-01-01
In air traffic management, conflict detection algorithms are used to determine whether or not aircraft are predicted to lose horizontal and vertical separation minima within a time interval assuming a trajectory model. In the case of linear trajectories, conflict detection algorithms have been proposed that are both sound, i.e., they detect all conflicts, and complete, i.e., they do not present false alarms. In general, for arbitrary nonlinear trajectory models, it is possible to define detection algorithms that are either sound or complete, but not both. This paper considers the case of nonlinear aircraft trajectory models based on polynomial functions. In particular, it proposes a conflict detection algorithm that precisely determines whether, given a lookahead time, two aircraft flying polynomial trajectories are in conflict. That is, it has been formally verified that, assuming that the aircraft trajectories are modeled as polynomial functions, the proposed algorithm is both sound and complete.
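Soundness and completeness are attainable for polynomial trajectories because the minimum of a polynomial distance function over a closed interval occurs either at an endpoint or at a real root of its derivative, and all such roots can be enumerated. A small illustrative checker along these lines (horizontal case only, numpy-based; this is a sketch of the idea, not the formally verified development of the paper):

```python
import numpy as np
from numpy.polynomial import Polynomial as P

def in_conflict(dx, dy, D, T):
    """Conflict test for polynomial relative motion (dx(t), dy(t)) on [0, T]:
    the minimum of dx^2 + dy^2 is attained at an endpoint or at a real
    critical point, so checking those candidates is sound and complete."""
    d2 = dx * dx + dy * dy
    candidates = [0.0, T]
    for r in d2.deriv().roots():
        if abs(r.imag) < 1e-9 and 0.0 <= r.real <= T:
            candidates.append(r.real)
    return min(d2(t) for t in candidates) < D * D

head_on = in_conflict(P([10.0, -2.0]), P([0.0]), 1.0, 10.0)  # closes to 0 at t = 5
offset = in_conflict(P([10.0]), P([5.0]), 1.0, 10.0)         # never closer than ~11.2
```

Floating-point root-finding introduces tolerances that the paper's verified algorithm avoids; the sketch only conveys the candidate-point argument.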
A companion matrix for 2-D polynomials
International Nuclear Information System (INIS)
Boudellioua, M.S.
1995-08-01
In this paper, a matrix form analogous to the companion matrix, which is often encountered in the theory of one-dimensional (1-D) linear systems, is suggested for a class of polynomials in two indeterminates with real coefficients, here referred to as two-dimensional (2-D) polynomials. These polynomials arise in the context of 2-D linear systems theory. Necessary and sufficient conditions are also presented under which a matrix is equivalent to this companion form. (author). 6 refs
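For orientation, the classical 1-D companion matrix that this construction generalizes pairs a monic polynomial with a matrix whose eigenvalues are exactly the polynomial's roots. A quick numerical sketch (the function name and coefficient convention are mine, not the paper's 2-D construction):

```python
import numpy as np

def companion(c):
    """Companion matrix of the monic polynomial x^n + c[n-1] x^(n-1) + ... + c[0]."""
    n = len(c)
    C = np.zeros((n, n))
    C[1:, :-1] = np.eye(n - 1)   # subdiagonal of ones
    C[:, -1] = -np.asarray(c)    # last column holds the negated coefficients
    return C

# eigenvalues of the companion matrix of x^2 - 3x + 2 are its roots, 1 and 2
eigs = np.sort(np.linalg.eigvals(companion([2.0, -3.0])).real)
```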
On polynomial solutions of the Heun equation
International Nuclear Information System (INIS)
Gurappa, N; Panigrahi, Prasanta K
2004-01-01
By making use of a recently developed method to solve linear differential equations of arbitrary order, we find a wide class of polynomial solutions to the Heun equation. We construct the series solution to the Heun equation before identifying the polynomial solutions. The Heun equation extended by the addition of a term -σ/x is also amenable to polynomial solutions. (letter to the editor)
A new Arnoldi approach for polynomial eigenproblems
Energy Technology Data Exchange (ETDEWEB)
Raeven, F.A.
1996-12-31
In this paper we introduce a new generalization of the method of Arnoldi for matrix polynomials. The new approach is compared with the approach of rewriting the polynomial problem into a linear eigenproblem and applying the standard method of Arnoldi to the linearised problem. The algorithm that can be applied directly to the polynomial eigenproblem turns out to be more efficient, both in storage and in computation.
Weierstrass method for quaternionic polynomial root-finding
Falcão, M. Irene; Miranda, Fernando; Severino, Ricardo; Soares, M. Joana
2018-01-01
Quaternions, introduced by Hamilton in 1843 as a generalization of complex numbers, have found, in more recent years, a wealth of applications in a number of different areas, which has motivated the design of efficient methods for numerically approximating the zeros of quaternionic polynomials. In fact, one can find in the literature recent contributions to this subject based on the use of complex techniques, but numerical methods relying on quaternion arithmetic remain scarce. In this paper we propose a Weierstrass-like method for finding simultaneously all the zeros of unilateral quaternionic polynomials. The convergence analysis and several numerical examples illustrating the performance of the method are also presented.
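In the complex setting, the Weierstrass (Durand-Kerner) iteration that such methods generalize updates all root approximations simultaneously. A compact complex-coefficient sketch (not the quaternionic algorithm itself, which requires quaternion arithmetic and ordering of factors):

```python
import numpy as np

def weierstrass(coeffs, iters=60):
    """Simultaneously approximate all roots of a monic polynomial.
    coeffs: full coefficient list, highest degree first, coeffs[0] == 1."""
    p = np.poly1d(coeffs)
    n = len(coeffs) - 1
    z = (0.4 + 0.9j) ** np.arange(n)               # standard non-symmetric start
    for _ in range(iters):
        for i in range(n):
            d = np.prod([z[i] - z[j] for j in range(n) if j != i])
            z[i] -= p(z[i]) / d                    # Weierstrass correction
    return z

roots = weierstrass([1, 0, 0, -1])                 # x^3 - 1: the cube roots of unity
```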
Energy Technology Data Exchange (ETDEWEB)
Millon, Domitille; Coche, Emmanuel E. [Universite Catholique de Louvain, Department of Radiology and Medical Imaging, Cliniques Universitaires Saint Luc, Brussels (Belgium); Vlassenbroek, Alain [Philips Healthcare, Brussels (Belgium); Maanen, Aline G. van; Cambier, Samantha E. [Universite Catholique de Louvain, Statistics Unit, King Albert II Cancer Institute, Brussels (Belgium)
2017-03-15
To compare image quality [low contrast (LC) detectability, noise, contrast-to-noise ratio (CNR) and spatial resolution (SR)] of MDCT images reconstructed with an iterative reconstruction (IR) algorithm and a filtered back projection (FBP) algorithm. The experimental study was performed on a 256-slice MDCT. LC detectability, noise, CNR and SR were measured on a Catphan phantom scanned with decreasing doses (48.8 down to 0.7 mGy) and parameters typical of a chest CT examination. Images were reconstructed with FBP and a model-based IR algorithm. Additionally, human chest cadavers were scanned and reconstructed using the same technical parameters. Images were analyzed to illustrate the phantom results. LC detectability and noise were statistically significantly different between the techniques, favouring the model-based IR algorithm (p < 0.0001). At low doses, the noise in FBP images only enabled SR measurements of high contrast objects. The superior CNR of the model-based IR algorithm enabled lower dose measurements, which showed that SR was dose and contrast dependent. Cadaver images reconstructed with model-based IR illustrated that the visibility and delineation of anatomical structure edges could deteriorate at low doses. Model-based IR improved LC detectability and enabled dose reduction. At low dose, SR became dose and contrast dependent. (orig.)
International Nuclear Information System (INIS)
Huguet, M.
2003-01-01
The ITER magnets are long-lead time items and the preparation of their construction is the subject of a major and coordinated effort of the ITER International Team and Participant Teams. The results of the ITER model coil programme constitute the basis and the main source of data for the preparation of the technical specifications for the procurement of the ITER magnets. A review of the salient results of the ITER model coil programme is given and the significance of these results for the preparation of full size industrial production is explained. The model coil programme has confirmed the validity of the design and the manufacturer's ability to produce the coils with the required quality level. The programme has also allowed the optimisation of the conductor design and the identification of further development which would lead to cost reductions of the toroidal field coil case. (author)
Dillman, Jonathan R.; Goodsitt, Mitchell M.; Christodoulou, Emmanuel G.; Keshavarzi, Nahid; Strouse, Peter J.
2014-01-01
Purpose To retrospectively compare image quality and radiation dose between a reduced-dose computed tomographic (CT) protocol that uses model-based iterative reconstruction (MBIR) and a standard-dose CT protocol that uses 30% adaptive statistical iterative reconstruction (ASIR) with filtered back projection. Materials and Methods Institutional review board approval was obtained. Clinical CT images of the chest, abdomen, and pelvis obtained with a reduced-dose protocol were identified. Images were reconstructed with two algorithms: MBIR and 100% ASIR. All subjects had undergone standard-dose CT within the prior year, and the images were reconstructed with 30% ASIR. Reduced- and standard-dose images were evaluated objectively and subjectively. Reduced-dose images were evaluated for lesion detectability. Spatial resolution was assessed in a phantom. Radiation dose was estimated by using volumetric CT dose index (CTDIvol) and calculated size-specific dose estimates (SSDE). A combination of descriptive statistics, analysis of variance, and t tests was used for statistical analysis. Results In the 25 patients who underwent the reduced-dose protocol, mean decrease in CTDIvol was 46% (range, 19%–65%) and mean decrease in SSDE was 44% (range, 19%–64%). Reduced-dose MBIR images had less noise (P > .004). Spatial resolution was superior for reduced-dose MBIR images. Reduced-dose MBIR images were equivalent to standard-dose images for lungs and soft tissues (P > .05) but were inferior for bones (P = .004). Reduced-dose 100% ASIR images were inferior for soft tissues (P ASIR. Conclusion CT performed with a reduced-dose protocol and MBIR is feasible in the pediatric population, and it maintains diagnostic quality. © RSNA, 2013 Online supplemental material is available for this article. PMID:24091359
Directory of Open Access Journals (Sweden)
Alessandro Danielis
2015-01-01
Full Text Available The processing of intensity data from terrestrial laser scanners has attracted considerable attention in recent years. Accurately calibrated intensity could add value to laser scanning campaigns, for example, in producing faithful 3D colour models of real targets and in making automatic classification tools easier and more reliable. In the cultural heritage field, the purely geometric information provided by the vast majority of currently available scanners is not enough for most applications, where accurate colorimetric data is indeed needed. This paper presents a remote calibration method for self-registered RGB colour data provided by a 3D tristimulus laser scanner prototype. Such distinguishing colour information opens new scenarios and problems for remote colorimetry. Using piecewise cubic Hermite polynomials, a quadratic model with nonpolynomial terms for reducing inaccuracies occurring in remote colour measurement is implemented. Colorimetric data recorded by the prototype on certified diffusive targets is processed to generate a remote Lambertian model used for assessing the accuracy of the proposed algorithm. Results concerning laser scanner digitizations of artworks are reported to confirm the effectiveness of the method.
Fermionic formula for double Kostka polynomials
Liu, Shiyuan
2016-01-01
The $X=M$ conjecture asserts that the $1D$ sum and the fermionic formula coincide up to some constant power. In the case of type $A,$ both the $1D$ sum and the fermionic formula are closely related to Kostka polynomials. Double Kostka polynomials $K_{\\Bla,\\Bmu}(t),$ indexed by two double partitions $\\Bla,\\Bmu,$ are polynomials in $t$ introduced as a generalization of Kostka polynomials. In the present paper, we consider $K_{\\Bla,\\Bmu}(t)$ in the special case where $\\Bmu=(-,\\mu'').$ We formula...
Polynomial sequences generated by infinite Hessenberg matrices
Directory of Open Access Journals (Sweden)
Verde-Star Luis
2017-01-01
Full Text Available We show that an infinite lower Hessenberg matrix generates polynomial sequences that correspond to the rows of infinite lower triangular invertible matrices. Orthogonal polynomial sequences are obtained when the Hessenberg matrix is tridiagonal. We study properties of the polynomial sequences and their corresponding matrices which are related to recurrence relations, companion matrices, matrix similarity, construction algorithms, and generating functions. When the Hessenberg matrix is also Toeplitz, the polynomial sequences turn out to be of interpolatory type and we obtain additional results. For example, we show that every nonderogatory finite square matrix is similar to a unique Toeplitz-Hessenberg matrix.
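In the tridiagonal (Jacobi-matrix) case mentioned above, the generated sequence obeys a three-term recurrence, and the characteristic polynomials of the leading principal minors are monic orthogonal polynomials. A small numpy check against the monic Chebyshev polynomials (the matrix entries are the standard Chebyshev Jacobi-matrix values, chosen here for illustration, not taken from the paper):

```python
import numpy as np
from numpy.polynomial import Polynomial as P

def charpolys(diag, off):
    """Characteristic polynomials of the leading principal minors of a
    symmetric tridiagonal matrix, built via the three-term recurrence."""
    x = P([0, 1])
    ps = [P([1]), x - diag[0]]
    for k in range(1, len(diag)):
        ps.append((x - diag[k]) * ps[-1] - off[k - 1] ** 2 * ps[-2])
    return ps

# Jacobi matrix of the Chebyshev T polynomials: zero diagonal,
# off-diagonal entries 1/sqrt(2), 1/2, 1/2, ...
ps = charpolys([0.0, 0.0, 0.0], [1 / np.sqrt(2), 0.5])
# ps[3] should equal the monic T_3, namely x^3 - (3/4) x
```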
Energy Technology Data Exchange (ETDEWEB)
Price, Ryan G. [Department of Radiation Oncology, Henry Ford Health Systems, Detroit, Michigan 48202 and Wayne State University School of Medicine, Detroit, Michigan 48201 (United States); Vance, Sean; Cattaneo, Richard; Elshaikh, Mohamed A.; Chetty, Indrin J.; Glide-Hurst, Carri K., E-mail: churst2@hfhs.org [Department of Radiation Oncology, Henry Ford Health Systems, Detroit, Michigan 48202 (United States); Schultz, Lonni [Department of Public Health Sciences, Henry Ford Health Systems, Detroit, Michigan 48202 (United States)
2014-08-15
Purpose: Iterative reconstruction (IR) reduces noise, thereby allowing dose reduction in computed tomography (CT) while maintaining comparable image quality to filtered back-projection (FBP). This study sought to characterize image quality metrics, delineation, dosimetric assessment, and other aspects necessary to integrate IR into treatment planning. Methods: CT images (Brilliance Big Bore v3.6, Philips Healthcare) were acquired of several phantoms using 120 kVp and 25–800 mAs. IR was applied at levels corresponding to noise reduction of 0.89–0.55 with respect to FBP. Noise power spectrum (NPS) analysis was used to characterize noise magnitude and texture. CT to electron density (CT-ED) curves were generated over all IR levels. Uniformity as well as spatial and low contrast resolution were quantified using a CATPHAN phantom. Task specific modulation transfer functions (MTF_task) were developed to characterize spatial frequency across objects of varied contrast. A prospective dose reduction study was conducted for 14 patients undergoing interfraction CT scans for high-dose rate brachytherapy. Three physicians performed image quality assessment using a six-point grading scale between the normal-dose FBP (reference), low-dose FBP, and low-dose IR scans for the following metrics: image noise, detectability of the vaginal cuff/bladder interface, spatial resolution, texture, segmentation confidence, and overall image quality. Contouring differences between FBP and IR were quantified for the bladder and rectum via overlap indices (OI) and Dice similarity coefficients (DSC). Line profile and region of interest analyses quantified noise and boundary changes. For two subjects, the impact of IR on external beam dose calculation was assessed via gamma analysis and changes in digitally reconstructed radiographs (DRRs) were quantified. Results: NPS showed large reduction in noise magnitude (50%), and a slight spatial frequency shift (∼0.1 mm^−1) with
Global sensitivity analysis using sparse grid interpolation and polynomial chaos
International Nuclear Information System (INIS)
Buzzard, Gregery T.
2012-01-01
Sparse grid interpolation is widely used to provide good approximations to smooth functions in high dimensions based on relatively few function evaluations. By using an efficient conversion from the interpolating polynomial provided by evaluations on a sparse grid to a representation in terms of orthogonal polynomials (gPC representation), we show how to use these relatively few function evaluations to estimate several types of sensitivity coefficients and to provide estimates on local minima and maxima. First, we provide a good estimate of the variance-based sensitivity coefficients of Sobol' (1990) [1] and then use the gradient of the gPC representation to give good approximations to the derivative-based sensitivity coefficients described by Kucherenko and Sobol' (2009) [2]. Finally, we use the package HOM4PS-2.0 given in Lee et al. (2008) [3] to determine the critical points of the interpolating polynomial and use these to determine the local minima and maxima of this polynomial. - Highlights: ► Efficient estimation of variance-based sensitivity coefficients. ► Efficient estimation of derivative-based sensitivity coefficients. ► Use of homotopy methods for approximation of local maxima and minima.
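The variance-based coefficients mentioned above fall out directly once a function is expressed in an orthonormal gPC basis: the total variance is the sum of squared non-constant coefficients, and a first-order Sobol' index keeps only the coefficients involving that variable alone. A toy two-variable Legendre example with uniform inputs on [-1, 1] (the test function and quadrature sizes are mine, chosen so the indices have closed forms; this sketches the gPC step, not the sparse-grid conversion of the paper):

```python
import numpy as np
from numpy.polynomial.legendre import leggauss, legval

def phi(k, y):
    """k-th Legendre polynomial, orthonormal for the uniform density on [-1, 1]."""
    c = np.zeros(k + 1)
    c[k] = 1.0
    return np.sqrt(2 * k + 1) * legval(y, c)

f = lambda y1, y2: y1 + y2 ** 2   # toy model with exact Sobol' indices
deg = 4
nodes, w = leggauss(8)            # exact for the polynomial products below

C = np.zeros((deg + 1, deg + 1))  # gPC coefficients c_ij = E[f phi_i(y1) phi_j(y2)]
for i in range(deg + 1):
    for j in range(deg + 1):
        C[i, j] = sum(
            wa * wb * f(a, b) * phi(i, a) * phi(j, b)
            for a, wa in zip(nodes, w) for b, wb in zip(nodes, w)
        ) / 4.0                   # each uniform density contributes a factor 1/2

total_var = (C ** 2).sum() - C[0, 0] ** 2
S1 = (C[1:, 0] ** 2).sum() / total_var   # first-order Sobol' index of y1
```

For this f, Var(y1) = 1/3 and Var(y2^2) = 4/45, so S1 = 15/19.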
Nobile, Fabio
2015-01-01
the parameter-to-solution map u(y) from random noise-free or noisy observations in random points by discrete least squares on polynomial spaces. The noise-free case is relevant whenever the technique is used to construct metamodels, based on polynomial
The computation of bond percolation critical polynomials by the deletion–contraction algorithm
International Nuclear Information System (INIS)
Scullard, Christian R
2012-01-01
Although every exactly known bond percolation critical threshold is the root in [0,1] of a lattice-dependent polynomial, it has recently been shown that the notion of a critical polynomial can be extended to any periodic lattice. The polynomial is computed on a finite subgraph, called the base, of an infinite lattice. For any problem with exactly known solution, the prediction of the bond threshold is always correct for any base containing an arbitrary number of unit cells. For unsolved problems, the polynomial is referred to as the generalized critical polynomial and provides an approximation that becomes more accurate with increasing number of bonds in the base, appearing to approach the exact answer. The polynomials are computed using the deletion–contraction algorithm, which quickly becomes intractable by hand for more than about 18 bonds. Here, I present generalized critical polynomials calculated with a computer program for bases of up to 36 bonds for all the unsolved Archimedean lattices, except the kagome lattice, which was considered in an earlier work. The polynomial estimates are generally within 10^−5–10^−7 of the numerical values, but the prediction for the (4,8^2) lattice, though not exact, is not ruled out by simulations. (paper)
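The deletion–contraction recurrence itself is simple to state: for a non-loop edge e, the invariant of G is obtained by combining the invariants of G − e (edge deleted) and G / e (edge contracted), and the two-way branching is what makes hand computation intractable beyond a few dozen edges. A minimal illustration on the chromatic polynomial, the textbook instance of the recurrence (not the critical-polynomial computation of the paper):

```python
def chrom(n, edges, k):
    """Chromatic polynomial value P(G, k) by deletion-contraction.
    n: number of vertices, edges: list of (u, v) pairs, k: number of colours."""
    if not edges:
        return k ** n                  # empty graph: every colouring is proper
    (u, v), rest = edges[0], edges[1:]
    if u == v:
        return 0                       # a loop admits no proper colouring
    # contract v into u; edges between them become loops and are kept
    merged = [(u if a == v else a, u if b == v else b) for a, b in rest]
    return chrom(n, rest, k) - chrom(n - 1, merged, k)

# triangle K3: P(K3, k) = k (k - 1) (k - 2)
```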
Directory of Open Access Journals (Sweden)
Hamed Kharrati
2012-01-01
Full Text Available This study presents an improved model and controller for nonlinear plants using polynomial fuzzy model-based (FMB) systems. To minimize the mismatch between the polynomial fuzzy model and the nonlinear plant, suitable membership function parameters are determined in a systematic way. Defining an appropriate fitness function and utilizing Taylor series expansion, a genetic algorithm (GA) is used to form the shape of the membership functions in polynomial forms, which are afterwards used in fuzzy modeling. To validate the model, a controller based on the proposed polynomial fuzzy systems is designed and then applied to both the original nonlinear plant and the fuzzy model for comparison. Additionally, stability analysis for the proposed polynomial FMB control system is investigated employing Lyapunov theory and a sum of squares (SOS) approach. Moreover, the form of the membership functions is considered in the stability analysis. The SOS-based stability conditions are attained using SOSTOOLS. Simulation results are also given to demonstrate the effectiveness of the proposed method.
Tsai, Shun Hung; Chen, Yu-An; Chen, Yu-Wen; Lo, Ji-Chang; Lam, Hak-Keung
2017-01-01
A novel stabilization problem for T-S polynomial fuzzy system with time-delay is investigated in this paper. Firstly, a polynomial fuzzy controller for T-S polynomial fuzzy system with time-delay is proposed. In addition, based on polynomial Lyapunov-Krasovskii function and the developed polynomial slack variable matrices, a novel stabilization condition for T-S polynomial fuzzy system with time-delay is presented in terms of sum-of-square (SOS) form. Lastly, nonlinear system with time-delay ...
A Polynomial Optimization Approach to Constant Rebalanced Portfolio Selection
Takano, Y.; Sotirov, R.
2010-01-01
We address the multi-period portfolio optimization problem with the constant rebalancing strategy. This problem is formulated as a polynomial optimization problem (POP) by using a mean-variance criterion. In order to solve the POPs of high degree, we develop a cutting-plane algorithm based on
Learning Mixtures of Polynomials of Conditional Densities from Data
DEFF Research Database (Denmark)
L. López-Cruz, Pedro; Nielsen, Thomas Dyhre; Bielza, Concha
2013-01-01
Mixtures of polynomials (MoPs) are a non-parametric density estimation technique for hybrid Bayesian networks with continuous and discrete variables. We propose two methods for learning MoP approximations of conditional densities from data. Both approaches are based on learning MoP approximatio...
QCD analysis of structure functions in terms of Jacobi polynomials
International Nuclear Information System (INIS)
Krivokhizhin, V.G.; Kurlovich, S.P.; Savin, I.A.; Sidorov, A.V.; Skachkov, N.B.; Sanadze, V.V.
1987-01-01
A new method of QCD analysis of singlet and nonsinglet structure functions, based on their expansion in orthogonal Jacobi polynomials, is proposed. The accuracy of the method is studied and its application is demonstrated using the structure function F_2(x, Q^2) obtained by the EMC Collaboration from measurements with an iron target. (orig.)
Polynomial constitutive model for shape memory and pseudo elasticity
International Nuclear Information System (INIS)
Savi, M.A.; Kouzak, Z.
1995-01-01
This paper reports a one-dimensional phenomenological constitutive model for shape memory and pseudoelasticity using a polynomial expression for the free energy, based on the classical Devonshire theory. The study identifies the main characteristics of the classical theory and introduces a simple modification to obtain better results. (author). 9 refs., 6 figs
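A common concrete instance of such a Devonshire-type polynomial free energy is the Falk model, Ψ(ε, T) = a(T − T_M)ε²/2 − bε⁴/4 + cε⁶/6, whose stress σ = ∂Ψ/∂ε is non-monotone below the transition temperature T_M (shape memory) and monotone well above it. A sketch with illustrative constants (my values, not fitted parameters from the paper):

```python
import numpy as np

def equilibria(T, a=1.0, b=3.0, c=1.0, T_M=1.0):
    """Real strains with zero stress for the Falk polynomial free energy,
    sigma(e) = a (T - T_M) e - b e^3 + c e^5."""
    r = np.roots([c, 0.0, -b, 0.0, a * (T - T_M), 0.0])
    return np.sort(r[np.abs(r.imag) < 1e-9].real)

low = equilibria(0.5)   # below T_M: two martensite wells plus the origin
high = equilibria(5.0)  # well above T_M: only the trivial equilibrium
```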
Tsallis p, q-deformed Touchard polynomials and Stirling numbers
Herscovici, O.; Mansour, T.
2017-01-01
In this paper, we develop and investigate a new two-parametrized deformation of the Touchard polynomials, based on the definition of the NEXT q-exponential function of Tsallis. We obtain new generalizations of the Stirling numbers of the second kind and of the binomial coefficients and represent two new statistics for the set partitions.
Szegö Kernels and Asymptotic Expansions for Legendre Polynomials
Directory of Open Access Journals (Sweden)
Roberto Paoletti
2017-01-01
Full Text Available We present a geometric approach to the asymptotics of the Legendre polynomials P_{k,n+1}, based on the Szegö kernel of the Fermat quadric hypersurface, leading to complete asymptotic expansions holding on expanding subintervals of [-1,1].
A polynomial optimization approach to constant rebalanced portfolio selection
Takano, Y.; Sotirov, R.
2012-01-01
We address the multi-period portfolio optimization problem with the constant rebalancing strategy. This problem is formulated as a polynomial optimization problem (POP) by using a mean-variance criterion. In order to solve the POPs of high degree, we develop a cutting-plane algorithm based on
Zhang, Shu; Taft, Cyrus W; Bentsman, Joseph; Hussey, Aaron; Petrus, Bryan
2012-09-01
Tuning a complex multi-loop PID based control system requires considerable experience. In today's power industry the number of available qualified tuners is dwindling and there is a great need for better tuning tools to maintain and improve the performance of complex multivariable processes. Multi-loop PID tuning is the procedure for the online tuning of a cluster of PID controllers operating in a closed loop with a multivariable process. This paper presents the first application of the simultaneous tuning technique to the multi-input-multi-output (MIMO) PID based nonlinear controller in the power plant control context, with the closed-loop system consisting of a MIMO nonlinear boiler/turbine model and a nonlinear cluster of six PID-type controllers. Although simplified, the dynamics and cross-coupling of the process and the PID cluster are similar to those used in a real power plant. The particular technique selected, iterative feedback tuning (IFT), utilizes the linearized version of the PID cluster for signal conditioning, but the data collection and tuning is carried out on the full nonlinear closed-loop system. Based on the figure of merit for the control system performance, the IFT is shown to deliver performance favorably comparable to that attained through the empirical tuning carried out by an experienced control engineer. Copyright © 2012 ISA. Published by Elsevier Ltd. All rights reserved.
Majeed, Muhammad Usman
2017-01-01
the problems are formulated on higher dimensional space domains. However, in this dissertation, feedback based state estimation algorithms, known as state observers, are developed to solve such steady-state problems using one of the space variables as time
Iterative observer based method for source localization problem for Poisson equation in 3D
Majeed, Muhammad Usman; Laleg-Kirati, Taous-Meriem
2017-01-01
A state-observer based method is developed to solve point source localization problem for Poisson equation in a 3D rectangular prism with available boundary data. The technique requires a weighted sum of solutions of multiple boundary data
Polynomial solutions of nonlinear integral equations
International Nuclear Information System (INIS)
Dominici, Diego
2009-01-01
We analyze the polynomial solutions of a nonlinear integral equation, generalizing the work of Bender and Ben-Naim (2007 J. Phys. A: Math. Theor. 40 F9, 2008 J. Nonlinear Math. Phys. 15 (Suppl. 3) 73). We show that, in some cases, an orthogonal solution exists and we give its general form in terms of kernel polynomials
Sibling curves of quadratic polynomials | Wiggins | Quaestiones ...
African Journals Online (AJOL)
Sibling curves were demonstrated in [1, 2] as a novel way to visualize the zeroes of real valued functions. In [3] it was shown that a polynomial of degree n has n sibling curves. This paper focuses on the algebraic and geometric properties of the sibling curves of real and complex quadratic polynomials. Key words: Quadratic ...
Topological string partition functions as polynomials
International Nuclear Information System (INIS)
Yamaguchi, Satoshi; Yau Shingtung
2004-01-01
We investigate the structure of the higher genus topological string amplitudes on the quintic hypersurface. It is shown that the partition functions for genus higher than one can be expressed as polynomials in five generators. We also compute the explicit polynomial forms of the partition functions for genus 2, 3, and 4. Moreover, some coefficients are written down for all genus. (author)
Polynomial solutions of nonlinear integral equations
Energy Technology Data Exchange (ETDEWEB)
Dominici, Diego [Department of Mathematics, State University of New York at New Paltz, 1 Hawk Dr. Suite 9, New Paltz, NY 12561-2443 (United States)], E-mail: dominicd@newpaltz.edu
2009-05-22
We analyze the polynomial solutions of a nonlinear integral equation, generalizing the work of Bender and Ben-Naim (2007 J. Phys. A: Math. Theor. 40 F9, 2008 J. Nonlinear Math. Phys. 15 (Suppl. 3) 73). We show that, in some cases, an orthogonal solution exists and we give its general form in terms of kernel polynomials.
A generalization of the Bernoulli polynomials
Directory of Open Access Journals (Sweden)
Pierpaolo Natalini
2003-01-01
Full Text Available A generalization of the Bernoulli polynomials and, consequently, of the Bernoulli numbers, is defined starting from suitable generating functions. Furthermore, the differential equations of these new classes of polynomials are derived by means of the factorization method introduced by Infeld and Hull (1951).
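The classical objects being generalized can be pinned down by the recurrence B_0(x) = 1, B_n'(x) = n B_{n−1}(x), with ∫₀¹ B_n(x) dx = 0 for n ≥ 1, equivalent to the generating function t e^{xt}/(e^t − 1). A small exact-arithmetic sketch for the classical case (stdlib only; not the generalized polynomials of the paper):

```python
from fractions import Fraction

def bernoulli_polys(N):
    """Coefficient lists (ascending powers) of B_0 .. B_N, built from
    B_n' = n B_{n-1} plus the zero-mean condition on [0, 1]."""
    polys = [[Fraction(1)]]
    for n in range(1, N + 1):
        prev = polys[-1]
        # antiderivative of n * B_{n-1}; constant term fixed afterwards
        p = [Fraction(0)] + [Fraction(n) * c / (i + 1) for i, c in enumerate(prev)]
        p[0] = -sum(c / (i + 1) for i, c in enumerate(p))  # force integral 0
        polys.append(p)
    return polys

B = bernoulli_polys(4)
# B_2(x) = x^2 - x + 1/6; the Bernoulli numbers are the constant terms B_n(0)
```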
The Bessel polynomials and their differential operators
International Nuclear Information System (INIS)
Onyango Otieno, V.P.
1987-10-01
Differential operators associated with the ordinary and the generalized Bessel polynomials are defined. In each case the commutator bracket is constructed, showing that the differential operators associated with the Bessel polynomials and their generalized form do not commute. Some applications of these operators to linear differential equations are also discussed. (author). 4 refs
Exceptional polynomials and SUSY quantum mechanics
Indian Academy of Sciences (India)
Abstract. We show that quantum mechanical problems which admit classical Laguerre/Jacobi polynomials as solutions of the Schrödinger equation (SE) will also admit exceptional Laguerre/Jacobi polynomials as solutions having the same eigenvalues but with the ground state missing after a modification of the ...
Connections between the matching and chromatic polynomials
Directory of Open Access Journals (Sweden)
E. J. Farrell
1992-01-01
Full Text Available The main results established are (i) a connection between the matching and chromatic polynomials and (ii) a formula for the matching polynomial of a general complement of a subgraph of a graph. Some deductions on matching and chromatic equivalence and uniqueness are made.
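For a graph on n vertices, the matching polynomial is M(G, x) = Σ_k (−1)^k m_k x^{n−2k}, where m_k counts the k-edge matchings. A brute-force sketch of the m_k counts, adequate for the small graphs that appear in equivalence examples (the function name and representation are mine):

```python
from itertools import combinations

def matching_counts(edges):
    """m_k = number of k-matchings (vertex-disjoint edge subsets) of the graph."""
    counts = [1]  # m_0: the empty matching
    for k in range(1, len(edges) + 1):
        c = sum(
            1 for sub in combinations(edges, k)
            if len({v for e in sub for v in e}) == 2 * k  # all endpoints distinct
        )
        if c == 0:
            break
        counts.append(c)
    return counts

# triangle K3: one empty matching, three single edges, no 2-matching,
# so M(K3, x) = x^3 - 3x
```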
Laguerre polynomials by a harmonic oscillator
Baykal, Melek; Baykal, Ahmet
2014-09-01
The study of an isotropic harmonic oscillator, using the factorization method given in Ohanian's textbook on quantum mechanics, is refined and some collateral extensions of the method related to the ladder operators and the associated Laguerre polynomials are presented. In particular, some analytical properties of the associated Laguerre polynomials are derived using the ladder operators.
Laguerre polynomials by a harmonic oscillator
International Nuclear Information System (INIS)
Baykal, Melek; Baykal, Ahmet
2014-01-01
The study of an isotropic harmonic oscillator, using the factorization method given in Ohanian's textbook on quantum mechanics, is refined and some collateral extensions of the method related to the ladder operators and the associated Laguerre polynomials are presented. In particular, some analytical properties of the associated Laguerre polynomials are derived using the ladder operators. (paper)