Sample records for unmixed bivariate polynomial

  1. On bivariate geometric distribution

    K. Jayakumar


    Characterizations of bivariate geometric distributions using univariate and bivariate geometric compounding are obtained. Autoregressive models with marginals following a bivariate geometric distribution are developed. Various bivariate geometric distributions analogous to important bivariate exponential distributions, such as Marshall-Olkin's, Downton's and Hawkes' bivariate exponentials, are presented.
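    The Marshall-Olkin analogue mentioned above can be illustrated with a common-shock construction; the sketch below (parameters chosen arbitrarily, not taken from the paper) simulates such a bivariate geometric pair and checks one marginal, using the standard fact that the minimum of independent geometrics is again geometric.

```python
import random

def geometric(p, rng):
    """Trials until first success (support 1, 2, ...)."""
    k = 1
    while rng.random() > p:
        k += 1
    return k

def mo_bivariate_geometric(p1, p2, p3, rng):
    """Marshall-Olkin-style pair: a common shock Z3 couples the components."""
    z1, z2, z3 = geometric(p1, rng), geometric(p2, rng), geometric(p3, rng)
    return min(z1, z3), min(z2, z3)

rng = random.Random(0)
sample = [mo_bivariate_geometric(0.2, 0.3, 0.1, rng) for _ in range(50_000)]
xs = [x for x, _ in sample]
# min(Geom(p1), Geom(p3)) is Geom(1 - (1-p1)(1-p3)) = Geom(0.28) here,
# so the sample mean of X should be close to 1/0.28.
mean_x = sum(xs) / len(xs)
```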

  2. Orthogonal polynomials

    Freud, Géza


    Orthogonal Polynomials contains an up-to-date survey of the general theory of orthogonal polynomials. It deals with the problem of polynomials and reveals that the sequence of these polynomials forms an orthogonal system with respect to a non-negative m-distribution defined on the real numerical axis. Comprised of five chapters, the book begins with the fundamental properties of orthogonal polynomials. After discussing the moment problem, it then explains the quadrature procedure, the convergence theory, and G. Szegő's theory. This book is useful for those who intend to use it as a reference.

  3. Sediment unmixing using detrital geochronology

    Sharman, Glenn R.; Johnstone, Samuel A.


    Sediment mixing within sediment routing systems can exert a strong influence on the preservation of provenance signals that yield insight into the effect of environmental forcing (e.g., tectonism, climate) on the Earth's surface. Here, we discuss two approaches to unmixing detrital geochronologic data in an effort to characterize complex changes in the sedimentary record. First, we summarize 'top-down' mixing, which has been successfully employed in the past to characterize the different fractions of prescribed source distributions ('parents') that characterize a derived sample or set of samples ('daughters'). Second, we propose the use of 'bottom-up' methods, previously used primarily for grain size distributions, to model parent distributions and the abundances of these parents within a set of daughters. We demonstrate the utility of both top-down and bottom-up approaches to unmixing detrital geochronologic data within a well-constrained sediment routing system in central California. Use of a variety of goodness-of-fit metrics in top-down modeling reveals the importance of considering the range of allowable mixtures rather than any single best-fit mixture calculation. Bottom-up modeling of 12 daughter samples from beaches and submarine canyons yields modeled parent distributions that are remarkably similar to those expected from the geologic context of the sediment-routing system. In general, mixture modeling has the potential to supplement more widely applied approaches in comparing detrital geochronologic data by casting differences between samples as differing proportions of geologically meaningful end-member provenance categories.
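    'Top-down' mixing as described above, finding the fraction of prescribed parent distributions that best reproduces a daughter sample, can be sketched with synthetic binned age distributions (all numbers below are illustrative, not from the paper):

```python
import numpy as np

# Two hypothetical "parent" detrital age distributions, binned as probability vectors.
rng = np.random.default_rng(1)
bins = np.linspace(0, 300, 31)   # age bins in Ma (illustrative)
parent_a = np.histogram(rng.normal(100, 15, 5000), bins=bins)[0].astype(float)
parent_b = np.histogram(rng.normal(220, 20, 5000), bins=bins)[0].astype(float)
parent_a /= parent_a.sum()
parent_b /= parent_b.sum()

# A "daughter" sample constructed to be a 70/30 mixture of the parents.
daughter = 0.7 * parent_a + 0.3 * parent_b

# Top-down unmixing: scan the mixing fraction w and score each candidate mixture.
ws = np.linspace(0, 1, 101)
misfits = np.array([np.sum((w * parent_a + (1 - w) * parent_b - daughter) ** 2)
                    for w in ws])
best_w = ws[np.argmin(misfits)]
# Range of "allowable" mixtures: every w whose misfit is within tolerance of the best.
allowable = ws[misfits <= misfits.min() + 1e-6]
```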

  4. Ordinal bivariate inequality

    Sonne-Schmidt, Christoffer Scavenius; Tarp, Finn; Østerdal, Lars Peter Raahave

    This paper introduces a concept of inequality comparisons with ordinal bivariate categorical data. In our model, one population is more unequal than another when they have common arithmetic median outcomes and the first can be obtained from the second by correlation-increasing switches and/or median-preserving spreads. For the canonical 2 × 2 case (with two binary indicators), we derive a simple operational procedure for checking ordinal inequality relations in practice. As an illustration, we apply the model to childhood deprivation in Mozambique…

  5. Ordinal Bivariate Inequality

    Sonne-Schmidt, Christoffer Scavenius; Tarp, Finn; Østerdal, Lars Peter Raahave


    This paper introduces a concept of inequality comparisons with ordinal bivariate categorical data. In our model, one population is more unequal than another when they have common arithmetic median outcomes and the first can be obtained from the second by correlation-increasing switches and/or median-preserving spreads. For the canonical 2 × 2 case (with two binary indicators), we derive a simple operational procedure for checking ordinal inequality relations in practice. As an illustration, we apply the model to childhood deprivation in Mozambique…

  6. Distributed Unmixing of Hyperspectral Data with Sparsity Constraint

    Khoshsokhan, S.; Rajabi, R.; Zayyani, H.


    Spectral unmixing (SU) is a data processing problem in hyperspectral remote sensing. The significant challenge in the SU problem is how to identify endmembers and their weights accurately. For estimation of the signature and fractional abundance matrices in a blind problem, nonnegative matrix factorization (NMF) and its extensions are widely used. One constraint added to NMF is a sparsity constraint regularized by the L1/2 norm. In this paper, a new algorithm based on distributed optimization is used for spectral unmixing. In the proposed algorithm, a network of single-node clusters is employed, with each pixel of the hyperspectral image treated as a node in this network. The distributed unmixing with sparsity constraint is optimized with a diffusion LMS strategy, and the update equations for the fractional abundance and signature matrices are then obtained. Simulation results based on defined performance metrics illustrate the advantage of the proposed algorithm in spectral unmixing of hyperspectral data compared with other methods. The results show that the AAD and SAD of the proposed approach improve by about 6 and 27 percent, respectively, over distributed unmixing without the sparsity constraint at SNR = 25 dB.
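    A minimal sketch of L1/2-regularized NMF unmixing of the kind described above, using the standard multiplicative updates with the sparsity term folded into the abundance update's denominator. The regularization weight and the synthetic data are arbitrary; this is only the centralized NMF baseline the paper builds on, not the authors' diffusion-LMS algorithm.

```python
import numpy as np

def l12_nmf_unmix(X, n_end, lam=0.1, n_iter=200, seed=0):
    """L1/2-sparsity-constrained NMF unmixing (multiplicative updates).

    X: (bands, pixels) nonnegative data; returns W (endmembers), S (abundances).
    """
    rng = np.random.default_rng(seed)
    bands, pixels = X.shape
    W = rng.random((bands, n_end)) + 1e-3
    S = rng.random((n_end, pixels)) + 1e-3
    eps = 1e-9
    for _ in range(n_iter):
        W *= (X @ S.T) / (W @ S @ S.T + eps)
        # The L1/2 penalty contributes (lam/2) * S^(-1/2) to the denominator.
        S *= (W.T @ X) / (W.T @ W @ S + 0.5 * lam * S ** -0.5 + eps)
    return W, S

# Tiny synthetic check: 4 bands, 2 endmembers, 50 pixels, exact mixtures.
rng = np.random.default_rng(1)
W_true = rng.random((4, 2))
S_true = rng.dirichlet([1.0, 1.0], size=50).T   # columns sum to one
X = W_true @ S_true
W_est, S_est = l12_nmf_unmix(X, 2, lam=0.01)
recon_err = np.linalg.norm(X - W_est @ S_est) / np.linalg.norm(X)
```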

  7. Hybrid Spectral Unmixing: Using Artificial Neural Networks for Linear/Non-Linear Switching

    Asmau M. Ahmed


    Spectral unmixing is a key process in identifying the spectral signatures of materials and quantifying their spatial distribution over an image. The linear model is expected to provide acceptable results when two assumptions are satisfied: (1) the mixing process occurs at the macroscopic level, and (2) photons interact with a single material before reaching the sensor. However, these assumptions do not always hold, and more complex nonlinear models are required. This study proposes a new hybrid method for switching between linear and nonlinear spectral unmixing of hyperspectral data based on artificial neural networks. The neural network was trained with parameters computed within a window around the pixel under consideration. These parameters represent the diversity of the neighboring pixels and are based on the spectral angular distance, covariance and a nonlinearity parameter. The endmembers were extracted using Vertex Component Analysis, while the abundances were estimated using the method identified by the neural network (Vertex Component Analysis, Fully Constrained Least Squares, Polynomial Post-Nonlinear Mixing Model or Generalized Bilinear Model). Results show that the hybrid method performs better than each of the individual techniques, with high overall accuracy, while the abundance estimation error is significantly lower than that obtained using the individual methods. Experiments on both synthetic datasets and real hyperspectral images demonstrate that the proposed hybrid switch method is efficient for spectral unmixing of hyperspectral images as compared to the individual algorithms.

  8. Spectral unmixing using the concept of pure variables

    Kucheryavskiy, Sergey V.


    This comprehensive book presents an interdisciplinary approach to demonstrate how and why data analysis, signal processing, and chemometrics are essential to resolving the spectral unmixing problem.

  9. Hyperspectral Unmixing with Robust Collaborative Sparse Regression

    Chang Li


    Recently, sparse unmixing (SU) of hyperspectral data has received particular attention for analyzing remote sensing images. However, most SU methods are based on the commonly adopted linear mixing model (LMM), which ignores possible nonlinear effects (i.e., nonlinearity). In this paper, we propose a new method named robust collaborative sparse regression (RCSR), based on a robust LMM (rLMM), for hyperspectral unmixing. The rLMM takes the nonlinearity into consideration, treating it as an outlier term with an underlying sparse property. The RCSR simultaneously exploits the collaborative sparse property of the abundances and the sparsely distributed additive property of the outliers, which can be formulated as a robust joint sparse regression problem. The inexact augmented Lagrangian method (IALM) is used to optimize the proposed RCSR. Qualitative and quantitative experiments on synthetic datasets and real hyperspectral images demonstrate that the proposed RCSR is efficient for solving the hyperspectral SU problem compared with four other state-of-the-art algorithms.

  10. [Orthogonal Vector Projection Algorithm for Spectral Unmixing].

    Song, Mei-ping; Xu, Xing-wei; Chang, Chein-I; An, Ju-bai; Yao, Li


    Spectral unmixing is an important part of hyperspectral technology and is essential for material quantity analysis in hyperspectral imagery. Most linear unmixing algorithms require computations of matrix multiplication and matrix inversion or matrix determinants. These are difficult to program and especially hard to realize on hardware. At the same time, the computational cost of these algorithms increases significantly as the number of endmembers grows. Here, based on the traditional Orthogonal Subspace Projection algorithm, a new method called Orthogonal Vector Projection is proposed using the orthogonality principle. It simplifies the process by avoiding matrix multiplication and inversion. It first computes the final orthogonal vector for each endmember spectrum via the Gram-Schmidt process. These orthogonal vectors are then used as projection vectors for the pixel signature. The unconstrained abundance can be obtained directly by projecting the signature onto the projection vectors and computing the ratio of the projected vector length to the orthogonal vector length. Compared with the Orthogonal Subspace Projection and Least Squares Error algorithms, this method does not need matrix inversion, which is computationally costly and hard to implement on hardware. It completes the orthogonalization process by repeated vector operations, which is easy to apply to both parallel computation and hardware. The soundness of the algorithm is proved by its relationship with the Orthogonal Subspace Projection and Least Squares Error algorithms, and its computational complexity, the lowest of the three, is compared with those of the other two algorithms. Finally, experimental results on synthetic and real images are provided, giving further evidence of the effectiveness of the method.
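    The projection ratio described above can be sketched directly: for each endmember, Gram-Schmidt it against the others, then divide the pixel's projection onto the resulting orthogonal vector by the endmember's own projection. A small NumPy sketch on synthetic, noise-free two-endmember data (not the paper's implementation):

```python
import numpy as np

def ovp_abundances(pixel, endmembers):
    """For each endmember: Gram-Schmidt it against all the others, then read
    the abundance off a ratio of projections onto the orthogonal vector."""
    n = len(endmembers)
    a = np.zeros(n)
    for i in range(n):
        # Orthonormal basis of the span of the *other* endmembers.
        basis = []
        for j in range(n):
            if j == i:
                continue
            w = endmembers[j] - sum((endmembers[j] @ b) * b for b in basis)
            basis.append(w / np.linalg.norm(w))
        # u: component of endmember i orthogonal to every other endmember.
        u = endmembers[i] - sum((endmembers[i] @ b) * b for b in basis)
        # Since u is orthogonal to e_j (j != i), projecting the pixel onto u
        # isolates the i-th abundance: (pixel.u)/(e_i.u).
        a[i] = (pixel @ u) / (endmembers[i] @ u)
    return a

E = [np.array([1.0, 0.2, 0.1, 0.0]), np.array([0.1, 1.0, 0.3, 0.2])]
pixel = 0.6 * E[0] + 0.4 * E[1]
abund = ovp_abundances(pixel, E)
```

For a noise-free mixture the recovered abundances are exact, which is the same identity that underlies Orthogonal Subspace Projection.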

  11. On the Use of FOSS4G in Land Cover Fraction Estimation with Unmixing Algorithms

    Kumar, U.; Milesi, C.; Raja, K.; Ganguly, S.; Wang, W.; Zhang, G.; Nemani, R. R.


    The popularity and usage of FOSS4G (FOSS for Geoinformatics) have increased drastically in the last two decades, with growing benefits that facilitate spatial data analysis, image processing, graphics and map production, spatial modeling and visualization. The objective of this paper is to use FOSS4G to implement and perform a quantitative analysis of three different unmixing algorithms: Constrained Least Squares (CLS), Unconstrained Least Squares, and Orthogonal Subspace Projection to estimate land cover (LC) fractions from remote sensing data. The LC fractions obtained by unmixing mixed pixels represent mixtures of more than one class per pixel, yielding more accurate LC abundance estimates. The algorithms were implemented in C++ with the OpenCV package and Boost C++ libraries in the NASA Earth Exchange at the NASA Advanced Supercomputing Facility. GRASS GIS was used for visualization of results, and statistical analysis was carried out in R in a Linux system environment. A set of global endmembers for substrate, vegetation and dark objects was used to unmix the data with the three algorithms, and the results were compared with Singular Value Decomposition unmixed outputs available in the ENVI image processing software. First, computer-simulated data of different signal-to-noise ratios were used to evaluate the algorithms. The second set of experiments was carried out in an agricultural setting with a spectrally diverse collection of 11 Landsat-5 scenes (acquired in 2008) near Fresno, California, where the ground data were collected on the specific dates when the satellite passed over the site. Finally, in the third set of experiments, a pair of coincident clear-sky Landsat and WorldView-2 scenes over an urbanized area of San Francisco were used to assess the algorithms. Validation of the results used descriptive statistics, correlation coefficient (cc), RMSE, boxplot and bivariate distribution function…

  12. Bivariate value-at-risk

    Giuseppe Arbia


    In this paper we extend the concept of value-at-risk (VaR) to bivariate return distributions in order to obtain measures of the market risk of an asset that take into account additional features linked to downside risk exposure. We first present a general definition of risk as the probability of an adverse event over a random distribution, and we then introduce a measure of market risk (β-VaR) that admits the traditional β of an asset in portfolio management as a special case when asset returns are normally distributed. Empirical evidence is provided using Italian stock market data.
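    Risk as "the probability of an adverse event", with classical VaR as the threshold, can be illustrated by Monte Carlo under an assumed bivariate normal return model. All parameters below are illustrative; the paper's β-VaR measure is more specific than this generic sketch.

```python
import numpy as np

# Two-asset portfolio with bivariate normal returns (illustrative parameters).
rng = np.random.default_rng(0)
mu = np.array([0.05, 0.03])
cov = np.array([[0.04, 0.01],
                [0.01, 0.09]])
returns = rng.multivariate_normal(mu, cov, size=200_000)
weights = np.array([0.5, 0.5])
port = returns @ weights

# Classical VaR at level alpha, and risk read as the probability of the
# adverse event "portfolio return falls below -VaR".
alpha = 0.05
var_alpha = -np.quantile(port, alpha)
risk_prob = np.mean(port < -var_alpha)
```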

  13. Spectral Unmixing Analysis of Time Series Landsat 8 Images

    Zhuo, R.; Xu, L.; Peng, J.; Chen, Y.


    Temporal analysis of Landsat 8 images opens up new opportunities in the unmixing procedure. Although spectral analysis of time series Landsat imagery has its own advantages, it has rarely been studied. Nevertheless, using temporal information can improve unmixing performance compared to independent image analyses. Moreover, different land cover types may demonstrate different temporal patterns, which can aid their discrimination. Therefore, this letter presents time series K-P-Means, a new solution to the problem of unmixing time series Landsat imagery. The proposed approach obtains "purified" pixels in order to achieve optimal unmixing performance. Vertex component analysis (VCA) is used to extract endmembers for initialization. First, nonnegative least squares (NNLS) is used to estimate abundance maps using these endmembers. Then, each endmember estimate is updated as the mean value of its "purified" pixels, i.e., the residual of the mixed pixel after excluding the contributions of all nondominant endmembers. Assembling the two main steps (abundance estimation and endmember update) into an iterative optimization framework gives the complete algorithm. Experiments using both simulated and real Landsat 8 images show that the proposed "joint unmixing" approach provides more accurate endmember and abundance estimates than the "separate unmixing" approach.
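    The two alternating steps (NNLS abundances, then endmember updates from "purified" residuals) can be sketched as below. The tiny projected-gradient NNLS stands in for a library solver, and the abundance-weighted least-squares endmember update is a simplification of the K-P-Means mean-of-purified-pixels step; data and sizes are synthetic.

```python
import numpy as np

def nnls_pg(E, x, n_iter=500):
    """Tiny projected-gradient NNLS (a stand-in for a library NNLS solver)."""
    a = np.zeros(E.shape[1])
    step = 1.0 / np.linalg.norm(E.T @ E, 2)
    for _ in range(n_iter):
        a = np.maximum(0.0, a - step * (E.T @ (E @ a - x)))
    return a

def kp_means_unmix(X, E0, n_outer=20):
    """Alternate (1) NNLS abundance estimation and (2) endmember updates from
    'purified' pixels: the residual left after removing the contribution of
    every *other* endmember."""
    E = E0.copy()
    for _ in range(n_outer):
        A = np.stack([nnls_pg(E, x) for x in X.T], axis=1)  # (n_end, n_pixels)
        for k in range(E.shape[1]):
            others = [j for j in range(E.shape[1]) if j != k]
            purified = X - E[:, others] @ A[others, :]      # endmember k's share
            wk = A[k, :]
            if wk @ wk > 0:
                E[:, k] = np.maximum(0.0, (purified @ wk) / (wk @ wk))
    # Refresh the abundances for the final endmembers.
    A = np.stack([nnls_pg(E, x) for x in X.T], axis=1)
    return E, A

rng = np.random.default_rng(0)
E_true = rng.random((6, 2)) + 0.1
A_true = rng.dirichlet([1.0, 1.0], size=40).T    # columns sum to one
X = E_true @ A_true
E_est, A_est = kp_means_unmix(X, E_true + 0.05 * rng.standard_normal((6, 2)))
recon_err = np.linalg.norm(X - E_est @ A_est) / np.linalg.norm(X)
```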

  14. Irreducible multivariate polynomials obtained from polynomials in ...

    Theorem A. If we write an irreducible polynomial f ∈ K[X] as a sum of polynomials a0, …, an … This shows that deg ai = (n − i) deg f2 for each i = 0, …, n …

  15. Branched polynomial covering maps

    Hansen, Vagn Lundsgaard


    A Weierstrass polynomial with multiple roots in certain points leads to a branched covering map. With this as the guiding example, we formally define and study the notion of a branched polynomial covering map. We shall prove that many finite covering maps are polynomial outside a discrete branch set. Particular studies are made of branched polynomial covering maps arising from Riemann surfaces and from knots in the 3-sphere.

  16. Better polynomials for GNFS

    Bai, Shi; Bouvier, Cyril; Kruppa, Alexander; Zimmermann, Paul


    The general number field sieve (GNFS) is the most efficient algorithm known for factoring large integers. It consists of several stages, the first one being polynomial selection. The quality of the selected polynomials can be modelled in terms of size and root properties. We propose a new kind of polynomial for GNFS: with a new degree of freedom, we further improve the size property. We demonstrate the efficiency of our algorithm by exhibiting a better polynomial than…

  17. Unmixing hyperspectral images using Markov random fields

    Eches, Olivier; Dobigeon, Nicolas; Tourneret, Jean-Yves


    This paper proposes a new spectral unmixing strategy based on the normal compositional model that exploits the spatial correlations between the image pixels. The pure materials (referred to as endmembers) contained in the image are assumed to be available (they can be obtained by using an appropriate endmember extraction algorithm), while the corresponding fractions (referred to as abundances) are estimated by the proposed algorithm. Due to physical constraints, the abundances have to satisfy positivity and sum-to-one constraints. The image is divided into homogeneous distinct regions having the same statistical properties for the abundance coefficients. The spatial dependencies within each class are modeled thanks to Potts-Markov random fields. Within a Bayesian framework, prior distributions for the abundances and the associated hyperparameters are introduced. A reparametrization of the abundance coefficients is proposed to handle the physical constraints (positivity and sum-to-one) inherent to hyperspectral imagery. The parameters (abundances), hyperparameters (abundance mean and variance for each class) and the classification map indicating the classes of all pixels in the image are inferred from the resulting joint posterior distribution. To overcome the complexity of the joint posterior distribution, Markov chain Monte Carlo methods are used to generate samples asymptotically distributed according to the joint posterior of interest. Simulations conducted on synthetic and real data are presented to illustrate the performance of the proposed algorithm.

  18. Branched polynomial covering maps

    Hansen, Vagn Lundsgaard


    A Weierstrass polynomial with multiple roots in certain points leads to a branched covering map. With this as the guiding example, we formally define and study the notion of a branched polynomial covering map. We shall prove that many finite covering maps are polynomial outside a discrete branch set. Particular studies are made of branched polynomial covering maps arising from Riemann surfaces and from knots in the 3-sphere. (C) 2001 Elsevier Science B.V. All rights reserved.

  19. A Novel Measurement Matrix Optimization Approach for Hyperspectral Unmixing

    Su Xu


    Each pixel in the hyperspectral unmixing process is modeled as a linear combination of endmembers, i.e., of a number of pure spectral signatures that are known in advance. However, the limitations of Gaussian random variables, in computational complexity and sparsity, affect efficiency and accuracy. This paper proposes a novel approach to the optimization of the measurement matrix in compressive sensing (CS) theory for hyperspectral unmixing. First, a new Toeplitz-structured chaotic measurement matrix (TSCMM) is formed from pseudo-random chaotic elements, which can be implemented in simple hardware; second, rank-revealing QR factorization with eigenvalue decomposition is presented to speed up measurement; finally, an orthogonal gradient descent method for measurement matrix optimization is used to achieve optimal incoherence. Experimental results demonstrate that the proposed approach leads to better CS reconstruction performance with low extra computational cost in hyperspectral unmixing.
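    A Toeplitz-structured matrix driven by a chaotic sequence, in the spirit of the TSCMM idea above, can be sketched with the logistic map; the map, seed, burn-in and rescaling below are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

def logistic_sequence(n, x0=0.3, mu=4.0, skip=100):
    """Chaotic driver: iterate the logistic map x <- mu*x*(1-x)."""
    x = x0
    for _ in range(skip):               # discard the transient
        x = mu * x * (1.0 - x)
    out = np.empty(n)
    for i in range(n):
        x = mu * x * (1.0 - x)
        out[i] = x
    return out

def toeplitz_chaotic_matrix(m, n, x0=0.3):
    """One chaotic sequence of length m+n-1 fills all the diagonals, so entry
    (i, j) depends only on i - j: the defining Toeplitz property."""
    seq = 2.0 * logistic_sequence(m + n - 1, x0=x0) - 1.0   # rescale to (-1, 1)
    Phi = np.empty((m, n))
    for i in range(m):
        for j in range(n):
            Phi[i, j] = seq[i - j + n - 1]
    return Phi

Phi = toeplitz_chaotic_matrix(8, 16)
```

Because the whole matrix is generated from one short sequence and a seed, it is cheap to store and reproduce, which is the hardware-friendliness argument made in the abstract.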

  20. Automated Endmember Selection for Nonlinear Unmixing of Lunar Spectra

    Felder, M. P.; Grumpe, A.; Wöhler, C.; Mall, U.


    An important aspect of the analysis of remotely sensed lunar reflectance spectra is their decomposition into intimately mixed constituents. While some methods rely on unmixing of the observed reflectance spectra [1] or on the identification of minerals by extracting the depths and positions of mineral-specific absorption troughs [2, 3], these approaches do not allow for an automated selection of the (a priori unknown) endmembers from a large set of possible constituents. In this study, a non-linear spectral unmixing approach combined with an automated endmember selection scheme is proposed. This method is applied to reflectance spectra of the SIR-2 point spectrometer [4] carried by the Chandrayaan-1 spacecraft.

  1. Weierstrass polynomials for links

    Hansen, Vagn Lundsgaard


    There is a natural way of identifying links in 3-space with polynomial covering spaces over the circle. Thereby any link in 3-space can be defined by a Weierstrass polynomial over the circle. The equivalence relation for covering spaces over the circle is, however, completely different from…

  2. Nonnegativity of uncertain polynomials

    Šiljak Dragoslav D.


    The purpose of this paper is to derive tests for robust nonnegativity of scalar and matrix polynomials, which are algebraic, recursive, and can be completed in a finite number of steps. Polytopic families of polynomials are considered with various characterizations of parameter uncertainty, including affine, multilinear, and polynomic structures. The zero exclusion condition for polynomial positivity is also proposed for general parameter dependencies. By reformulating the robust stability problem of complex polynomials as positivity of real polynomials, we obtain new sufficient conditions for robust stability involving multilinear structures, which can be tested using only real arithmetic. The obtained results are applied to robust matrix factorization, strict positive realness, and absolute stability of multivariable systems involving parameter-dependent transfer function matrices.

  3. Multivariate Local Polynomial Regression with Application to Shenzhen Component Index

    Liyun Su


    This study attempts to characterize and predict stock index series in the Shenzhen stock market using multivariate local polynomial regression. Based on the nonlinearity and chaos of the stock index time series, multivariate local polynomial prediction methods and a univariate local polynomial prediction method, all of which use phase space reconstruction according to Takens' theorem, are considered. To fit the stock index series, the single series is transformed into a bivariate series. To evaluate the results, the multivariate predictor for bivariate time series based on the multivariate local polynomial model is compared with the univariate predictor on the same Shenzhen stock index data. The numerical results for the Shenzhen component index show that the prediction mean squared error of the multivariate predictor is much smaller than that of the univariate one and better than that of the three existing methods. Even if only the last half of the training data is used, the multivariate predictor's mean squared error remains smaller than the univariate predictor's. The multivariate local polynomial prediction model for non-single time series is a useful tool for stock market price prediction.
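    Local polynomial (here degree-1) prediction on a phase-space reconstruction, as used above, can be sketched for a univariate series. The embedding dimension, lag, neighbour count and the noiseless sine test series are all illustrative choices, not the paper's settings.

```python
import numpy as np

def embed(series, dim, lag):
    """Phase-space reconstruction (Takens): rows are delay vectors."""
    n = len(series) - (dim - 1) * lag
    return np.stack([series[j * lag : j * lag + n] for j in range(dim)], axis=1)

def local_linear_predict(series, dim=3, lag=1, k=20):
    """One-step-ahead local polynomial (degree-1) prediction: fit a linear
    model on the k nearest delay vectors, evaluate it at the latest state."""
    Z = embed(series, dim, lag)
    X, y = Z[:-1], series[(dim - 1) * lag + 1:]   # next value for each state
    query = Z[-1]
    idx = np.argsort(np.linalg.norm(X - query, axis=1))[:k]
    A = np.hstack([np.ones((k, 1)), X[idx]])      # intercept + linear terms
    coef, *_ = np.linalg.lstsq(A, y[idx], rcond=None)
    return coef[0] + query @ coef[1:]

# A noiseless sine obeys an exact linear recurrence, so the local linear
# model should predict the held-out last sample almost exactly.
t = np.arange(500)
series = np.sin(0.07 * t)
pred = local_linear_predict(series[:-1])
err = abs(pred - series[-1])
```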

  4. Polynomial Heisenberg algebras

    Carballo, Juan M; C, David J Fernandez; Negro, Javier; Nieto, Luis M


    Polynomial deformations of the Heisenberg algebra are studied in detail. Some of their natural realizations are given by the higher-order SUSY partners (and not only by those of first order, as was already known) of the harmonic oscillator for even-order polynomials. Here, it is shown that the SUSY partners of the radial oscillator play a similar role when the order of the polynomial is odd. Moreover, it is proved that the general systems ruled by such kinds of algebras, in the quadratic and cubic cases, involve Painlevé transcendents of types IV and V, respectively.

  5. Generalizations of orthogonal polynomials

    Bultheel, A.; Cuyt, A.; van Assche, W.; van Barel, M.; Verdonk, B.


    We give a survey of recent generalizations of orthogonal polynomials. That includes multidimensional (matrix and vector orthogonal polynomials) and multivariate versions, multipole (orthogonal rational functions) variants, and extensions of the orthogonality conditions (multiple orthogonality). Most of these generalizations are inspired by the applications in which they are applied. We also give a glimpse of these applications, which are usually generalizations of applications where classical orthogonal polynomials also play a fundamental role: moment problems, numerical quadrature, rational approximation, linear algebra, recurrence relations, and random matrices.

  6. Superiority of Legendre polynomials to Chebyshev polynomials in ...

    In this paper, we prove the superiority of Legendre polynomials to Chebyshev polynomials in solving first-order ordinary differential equations with rational coefficients. We generate shifted Chebyshev, Legendre and canonical polynomials, which deal with solving the differential equation by first choosing Chebyshev …

  7. Extended biorthogonal matrix polynomials

    Ayman Shehata


    The pair of biorthogonal matrix polynomials for commutative matrices was first introduced by Varma and Tasdelen in [22]. The main aim of this paper is to extend the properties of the pair of biorthogonal matrix polynomials of Varma and Tasdelen: certain generating matrix functions, finite series, some matrix recurrence relations, several important properties of matrix differential recurrence relations, biorthogonality relations and the matrix differential equation for the pair of biorthogonal matrix polynomials J_n^(A,B)(x, k) and K_n^(A,B)(x, k) are discussed. For the matrix polynomials J_n^(A,B)(x, k), various families of bilinear and bilateral generating matrix functions are constructed in the sequel.

  8. On Symmetric Polynomials

    Golden, Ryan; Cho, Ilwoo


    In this paper, we study structure theorems for algebras of symmetric functions. Based on a certain relation on the elementary symmetric polynomials generating such algebras, we consider perturbations in the algebras. In particular, we understand generators of the algebras as perturbations. From such perturbations, we define injective maps on generators, which induce algebra monomorphisms (or embeddings) of the algebras. These provide inductive structure theorems on algebras of symmetric polynomials. As…

  9. Spectral unmixing of hyperspectral data to map bauxite deposits

    Shanmugam, Sanjeevi; Abhishekh, P. V.


    This paper presents a study of the potential of remote sensing in bauxite exploration in the Kolli hills of Tamil Nadu state, southern India. An ASTER image (acquired in the VNIR and SWIR regions) has been used in conjunction with the SRTM DEM in this study. A new approach of spectral unmixing of the ASTER image data delineated areas rich in alumina. Various geological and geomorphological parameters that control bauxite formation were also derived from the ASTER image. All this information, when integrated, showed that there are 16 cappings (including the existing mines) that satisfy most of the conditions favouring bauxitization in the Kolli hills. The study concludes that spectral unmixing of hyperspectral satellite data in the VNIR and SWIR regions may be combined with terrain parameters to obtain accurate information about bauxite deposits, including their quality.

  10. Chromatic polynomials for simplicial complexes

    Møller, Jesper Michael; Nord, Gesche


    In this note we consider s-chromatic polynomials for finite simplicial complexes. When s = 1, the 1-chromatic polynomial is just the usual graph chromatic polynomial of the 1-skeleton. In general, the s-chromatic polynomial depends on the s-skeleton and its value at r…
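    For the s = 1 case mentioned above, the ordinary graph chromatic polynomial, values can be computed by deletion-contraction; a small sketch, practical only for tiny graphs:

```python
def chromatic_value(edges, n_vertices, r):
    """Evaluate the chromatic polynomial of a graph at r colours by
    deletion-contraction: P(G) = P(G - e) - P(G / e)."""
    def rec(edge_set, n):
        if not edge_set:
            return r ** n                  # no edges: r free colours per vertex
        e = next(iter(edge_set))
        u, v = tuple(e)
        deleted = edge_set - {e}
        # Contract e: relabel v as u; drop loops, merge parallel edges.
        contracted = frozenset(
            f for f in (frozenset(u if w == v else w for w in g) for g in deleted)
            if len(f) == 2
        )
        return rec(deleted, n) - rec(contracted, n - 1)
    return rec(frozenset(frozenset(e) for e in edges), n_vertices)

# Triangle: P(r) = r(r-1)(r-2), so at r = 3 there are 6 proper colourings.
triangle = [(0, 1), (1, 2), (0, 2)]
value = chromatic_value(triangle, 3, 3)
```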

  11. The Bivariate (Complex) Fibonacci and Lucas Polynomials: An Historical Investigation with the Maple's Help

    Alves, Francisco Regis Vieira; Catarino, Paula Maria Machado Cruz


    The current research around the Fibonacci and Lucas sequences evidences the scientific vigor of both mathematical models, which continue to inspire and provide numerous specializations and generalizations, especially since the sixties. One of the current lines of research and investigation around the generalized sequence of Lucas involves its…

  12. Colouring and knot polynomials

    Welsh, D.J.A.


    These lectures will attempt to explain a connection between recent advances in knot theory using the Jones and related knot polynomials and classical problems in combinatorics and statistical mechanics. The difficulty of some of these problems will be analysed in the context of their computational complexity. In particular we shall discuss colourings and group-valued flows in graphs, knots and the Jones and Kauffman polynomials, the Ising, Potts and percolation problems of statistical physics, and the computational complexity of the above problems. (author). 20 refs, 9 figs

  13. Additive and polynomial representations

    Krantz, David H; Suppes, Patrick


    Additive and Polynomial Representations deals with major representation theorems in which the qualitative structure is reflected as some polynomial function of one or more numerical functions defined on the basic entities. Examples are additive expressions of a single measure (such as the probability of disjoint events being the sum of their probabilities), and additive expressions of two measures (such as the logarithm of momentum being the sum of log mass and log velocity terms). The book describes the three basic procedures of fundamental measurement as the mathematical pivot, as the utiliz…

  14. Terahertz spectral unmixing based method for identifying gastric cancer

    Cao, Yuqi; Huang, Pingjie; Li, Xian; Ge, Weiting; Hou, Dibo; Zhang, Guangxin


    At present, many researchers are exploring biological tissue inspection using terahertz time-domain spectroscopy (THz-TDS) techniques. In this study, based on a modified hard modeling factor analysis method, terahertz spectral unmixing was applied to investigate the relationships between the absorption spectra in THz-TDS and certain biomarkers of gastric cancer in order to systematically identify gastric cancer. A probability distribution and box plot were used to extract the distinctive peaks that indicate carcinogenesis, and the corresponding weight distributions were used to discriminate the tissue types. The results of this work indicate that terahertz techniques have the potential to detect different levels of cancer, including benign tumors and polyps.

  15. On the Laurent polynomial rings

    Stefanescu, D.


    We describe some properties of the Laurent polynomial rings in a finite number of indeterminates over a commutative unitary ring. We study some subrings of the Laurent polynomial rings. We finally obtain two cancellation properties. (author)

  16. Computing the Alexander Polynomial Numerically

    Hansen, Mikael Sonne


    Explains how to construct the Alexander matrix and how it can be used to compute the Alexander polynomial numerically.

  17. Bivariate copula in fitting rainfall data

    Yee, Kong Ching; Suhaila, Jamaludin; Yusof, Fadhilah; Mean, Foo Hui


    The use of copulas to determine the joint distribution between two variables is widespread in various areas. The joint distribution of rainfall characteristics obtained using a copula model is more suitable than standard bivariate modelling, since copulas are believed to overcome some of its limitations. Six copula models are applied to obtain the most suitable bivariate distribution between two rain gauge stations. The copula models are Ali-Mikhail-Haq (AMH), Clayton, Frank, Galambos, Gumbel-Hougaard (GH) and Plackett. The rainfall data used in the study are from rain gauge stations located in the southern part of Peninsular Malaysia, covering the period from 1980 to 2011. The goodness-of-fit test in this study is based on the Akaike information criterion (AIC).
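As a concrete illustration of the model-selection step, the sketch below fits one copula family by maximum likelihood and reports its AIC. The Clayton density, the crude grid search, and the pseudo-observations are all illustrative assumptions, not the study's data or code:

```python
import math

def clayton_density(u, v, theta):
    # Clayton copula density c(u, v; theta) for theta > 0
    return ((1 + theta) * (u * v) ** (-theta - 1)
            * (u ** -theta + v ** -theta - 1) ** (-2 - 1 / theta))

def fit_clayton_aic(pairs):
    # crude 1-D grid search over (0, 5] for the MLE of theta,
    # then AIC = 2k - 2 logL with k = 1 parameter
    best_theta, best_ll = None, -float("inf")
    for i in range(1, 501):
        theta = i * 0.01
        ll = sum(math.log(clayton_density(u, v, theta)) for u, v in pairs)
        if ll > best_ll:
            best_theta, best_ll = theta, ll
    return best_theta, 2 * 1 - 2 * best_ll

# made-up pseudo-observations (ranks scaled into (0, 1)), not real rainfall data
pairs = [(0.1, 0.15), (0.3, 0.25), (0.5, 0.6), (0.7, 0.65), (0.9, 0.85)]
theta_hat, aic = fit_clayton_aic(pairs)
```

In practice each of the six candidate families would be fitted this way and the family with the smallest AIC retained.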

  18. Reliability for some bivariate beta distributions

    Nadarajah Saralees


    Full Text Available In the area of stress-strength models there has been a large amount of work as regards estimation of the reliability R = Pr(X < Y) when X and Y follow a bivariate distribution with dependence between X and Y. In particular, we derive explicit expressions for R when the joint distribution is bivariate beta. The calculations involve the use of special functions.

  19. Reliability for some bivariate gamma distributions

    Nadarajah Saralees


    Full Text Available In the area of stress-strength models, there has been a large amount of work as regards estimation of the reliability R = Pr(X < Y) when X and Y follow a bivariate distribution with dependence between X and Y. In particular, we derive explicit expressions for R when the joint distribution is bivariate gamma. The calculations involve the use of special functions.

  20. Covariate analysis of bivariate survival data

    Bennett, L.E.


    The methods developed are used to analyze the effects of covariates on bivariate survival data when censoring and ties are present. The proposed method provides models for bivariate survival data that include differential covariate effects and censored observations. The proposed models are based on an extension of the univariate Buckley-James estimators which replace censored data points by their expected values, conditional on the censoring time and the covariates. For the bivariate situation, it is necessary to determine the expectation of the failure times for one component conditional on the failure or censoring time of the other component. Two different methods have been developed to estimate these expectations. In the semiparametric approach these expectations are determined from a modification of Burke's estimate of the bivariate empirical survival function. In the parametric approach censored data points are also replaced by their conditional expected values where the expected values are determined from a specified parametric distribution. The model estimation will be based on the revised data set, comprised of uncensored components and expected values for the censored components. The variance-covariance matrix for the estimated covariate parameters has also been derived for both the semiparametric and parametric methods. Data from the Demographic and Health Survey was analyzed by these methods. The two outcome variables are post-partum amenorrhea and breastfeeding; education and parity were used as the covariates. Both the covariate parameter estimates and the variance-covariance estimates for the semiparametric and parametric models will be compared. In addition, a multivariate test statistic was used in the semiparametric model to examine contrasts. The significance of the statistic was determined from a bootstrap distribution of the test statistic.

  1. Minimum volume simplicial enclosure for spectral unmixing of remotely sensed hyperspectral data

    Hendrix, E.M.T.; García, I.; Plaza, J.; Plaza, A.


    Spectral unmixing is an important task for remotely sensed hyperspectral data exploitation. Linear spectral unmixing relies on two main steps: 1) identification of pure spectral constituents (endmembers), and 2) estimation of endmember abundances in mixed pixels. One of the main problems concerning the

  2. Stochastic Estimation via Polynomial Chaos


    AFRL-RW-EG-TR-2015-108, Stochastic Estimation via Polynomial Chaos, Douglas V. Nance, Air Force Research... Report period: 20-04-2015 to 07-08-2015. This expository report discusses fundamental aspects of the polynomial chaos method for representing the properties of second order stochastic

  3. Polynomial optimization : Error analysis and applications

    Sun, Zhao


    Polynomial optimization is the problem of minimizing a polynomial function subject to polynomial inequality constraints. In this thesis we investigate several hierarchies of relaxations for polynomial optimization problems. Our main interest lies in understanding their performance, in particular how

  4. A novel highly parallel algorithm for linearly unmixing hyperspectral images

    Guerra, Raúl; López, Sebastián.; Callico, Gustavo M.; López, Jose F.; Sarmiento, Roberto


    Endmember extraction and abundance calculation represent critical steps within the process of linearly unmixing a given hyperspectral image, for two main reasons. The first is the need to compute a set of accurate endmembers in order to obtain confident abundance maps. The second is the huge number of operations involved in these time-consuming processes. This work proposes an algorithm that estimates the endmembers of a hyperspectral image under analysis and their abundances at the same time. The main advantages of this algorithm are its high degree of parallelization and the mathematical simplicity of the operations implemented. The algorithm estimates the endmembers as virtual pixels. In particular, it performs gradient descent to iteratively refine the endmembers and the abundances, reducing the mean square error according to the linear unmixing model. Some mathematical restrictions must be added so the method converges to a unique and realistic solution; given the nature of the algorithm, these restrictions can be easily implemented. The results obtained with synthetic images demonstrate the good behavior of the proposed algorithm. Moreover, the results obtained with the well-known Cuprite dataset also corroborate the benefits of our proposal.
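The abundance-refinement step described above can be sketched as projected gradient descent under the linear mixing model. This toy version (fixed endmembers, one pixel, hypothetical values) illustrates only the idea, not the authors' parallel algorithm:

```python
def unmix_abundances(E, x, iters=2000, lr=0.05):
    """Estimate abundances a for a pixel x under the linear mixing model
    x ~ E @ a, enforcing non-negativity (clipping) and sum-to-one
    (renormalisation) after each gradient step."""
    p = len(E[0])                      # number of endmembers
    a = [1.0 / p] * p                  # start from a uniform mixture
    for _ in range(iters):
        # residual r = E a - x and gradient g = E^T r of 0.5*||E a - x||^2
        r = [sum(E[i][j] * a[j] for j in range(p)) - x[i] for i in range(len(x))]
        g = [sum(E[i][j] * r[i] for i in range(len(x))) for j in range(p)]
        a = [max(a[j] - lr * g[j], 0.0) for j in range(p)]   # descend, clip
        s = sum(a) or 1.0
        a = [aj / s for aj in a]       # renormalise to sum to one
    return a

# two endmembers over three bands; the true mixture is 70/30
E = [[0.9, 0.1], [0.5, 0.4], [0.2, 0.8]]
x = [0.7 * E[i][0] + 0.3 * E[i][1] for i in range(3)]
a = unmix_abundances(E, x)
```

The clipping and renormalisation are one simple way to realise the "mathematical restrictions" the abstract mentions; the real algorithm also updates the endmembers themselves.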

  5. Manifold regularization for sparse unmixing of hyperspectral images.

    Liu, Junmin; Zhang, Chunxia; Zhang, Jiangshe; Li, Huirong; Gao, Yuelin


    Recently, sparse unmixing has been successfully applied to spectral mixture analysis of remotely sensed hyperspectral images. Based on the assumption that the observed image signatures can be expressed as linear combinations of a number of pure spectral signatures known in advance, unmixing each mixed pixel in the scene amounts to finding an optimal subset of signatures in a very large spectral library, which is cast into the framework of sparse regression. However, traditional sparse regression models, such as collaborative sparse regression, ignore the intrinsic geometric structure of the hyperspectral data. In this paper, we propose a novel model, called manifold regularized collaborative sparse regression, by introducing a manifold regularization into the collaborative sparse regression model. The manifold regularization utilizes a graph Laplacian to incorporate the locally geometrical structure of the hyperspectral data. An algorithm based on the alternating direction method of multipliers has been developed for the manifold regularized collaborative sparse regression model. Experimental results on both simulated and real hyperspectral data sets have demonstrated the effectiveness of our proposed model.
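A minimal sketch of the graph-Laplacian smoothness term that such a manifold regularizer penalizes (toy weights and abundances, not the authors' construction):

```python
def graph_laplacian(W):
    # L = D - W for a symmetric non-negative weight matrix W (lists of lists)
    n = len(W)
    return [[(sum(W[i]) if i == j else 0.0) - W[i][j] for j in range(n)]
            for i in range(n)]

def smoothness(A, L):
    # sum_k a_k^T L a_k = (1/2) * sum_{i,j} w_ij * ||a_i - a_j||^2,
    # where row i of A is the abundance vector of pixel i
    n, p = len(A), len(A[0])
    return sum(A[i][k] * L[i][j] * A[j][k]
               for i in range(n) for j in range(n) for k in range(p))

# two pixels with opposite abundances, joined by an edge of weight 1:
# neighbouring pixels that disagree are penalized
W = [[0.0, 1.0], [1.0, 0.0]]
A = [[1.0, 0.0], [0.0, 1.0]]
L = graph_laplacian(W)
penalty = smoothness(A, L)
```

Adding this penalty to the collaborative sparse regression objective pushes abundance vectors of spectrally similar (graph-connected) pixels toward each other.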

  6. Complex Polynomial Vector Fields

    Dias, Kealey

    The two branches of dynamical systems, continuous and discrete, correspond to the study of differential equations (vector fields) and iteration of mappings respectively. In holomorphic dynamics, the systems studied are restricted to those described by holomorphic (complex analytic) functions. Since the class of complex polynomial vector fields in the plane is natural to consider, it is remarkable that its study has only begun very recently. There are numerous fundamental questions that are still open, both in the general classification of these vector fields, the decomposition of parameter spaces into structurally stable domains, and a description of the bifurcations. For this reason, the talk will focus on these questions for complex polynomial vector fields.

  7. Roots of the Chromatic Polynomial

    Perrett, Thomas

    The chromatic polynomial of a graph G is a univariate polynomial whose evaluation at any positive integer q enumerates the proper q-colourings of G. It was introduced in connection with the famous four colour theorem but has recently found other applications in the field of statistical physics. ... We extend Thomassen's technique to the Tutte polynomial and, as a consequence, deduce a density result for roots of the Tutte polynomial. This partially answers a conjecture of Jackson and Sokal. Finally, we refocus our attention on the chromatic polynomial and investigate the density of chromatic roots.

  8. Polynomials in algebraic analysis

    Multarzyński, Piotr


    The concept of polynomials in the sense of algebraic analysis, for a single right invertible linear operator, was introduced and studied originally by D. Przeworska-Rolewicz \\cite{DPR}. One of the elegant results corresponding with that notion is a purely algebraic version of the Taylor formula, being a generalization of its usual counterpart, well known for functions of one variable. In quantum calculus there are some specific discrete derivations analyzed, which are right invertible linear ...

  9. General Reducibility and Solvability of Polynomial Equations ...

    General Reducibility and Solvability of Polynomial Equations. ... Unlike quadratic, cubic, and quartic polynomials, the general quintic and higher degree polynomials cannot be solved algebraically in terms of finite number of additions, ... Galois Theory, Solving Polynomial Systems, Polynomial factorization, Polynomial Ring ...

  10. Spectral density regression for bivariate extremes

    Castro Camilo, Daniela


    We introduce a density regression model for the spectral density of a bivariate extreme value distribution, which allows us to assess how extremal dependence can change over a covariate. Inference is performed through a double kernel estimator, which can be seen as an extension of the Nadaraya–Watson estimator where the usual scalar responses are replaced by mean-constrained densities on the unit interval. Numerical experiments illustrate the resilience of the methods in a variety of contexts of practical interest. An extreme temperature dataset is used to illustrate our methods. © 2016 Springer-Verlag Berlin Heidelberg

  11. Bivariate Kumaraswamy Models via Modified FGM Copulas: Properties and Applications

    Indranil Ghosh


    Full Text Available A copula is a useful tool for constructing bivariate and/or multivariate distributions. In this article, we consider a new modified class of FGM (Farlie–Gumbel–Morgenstern) bivariate copulas for constructing several different bivariate Kumaraswamy-type copulas and discuss their structural properties, including dependence structures. It is established that construction of bivariate distributions by this method allows for greater flexibility in the values of Spearman's correlation coefficient ρ and Kendall's τ.

  12. Polynomial approximation on polytopes

    Totik, Vilmos


    Polynomial approximation on convex polytopes in \\mathbf{R}^d is considered in uniform and L^p-norms. For an appropriate modulus of smoothness matching direct and converse estimates are proven. In the L^p-case so called strong direct and converse results are also verified. The equivalence of the moduli of smoothness with an appropriate K-functional follows as a consequence. The results solve a problem that was left open since the mid 1980s when some of the present findings were established for special, so-called simple polytopes.

  13. Polynomial intelligent states

    Milks, Matthew M; Guise, Hubert de


    The construction of su(2) intelligent states is simplified using a polynomial representation of su(2). The cornerstone of the new construction is the diagonalization of a 2 x 2 matrix. The method is sufficiently simple to be easily extended to su(3), where one is required to diagonalize a single 3 x 3 matrix. For two perfectly general su(3) operators, this diagonalization is technically possible but the procedure loses much of its simplicity owing to the algebraic form of the roots of a cubic equation. Simplified expressions can be obtained by specializing the choice of su(3) operators. This simpler construction will be discussed in detail

  14. Bivariate Rayleigh Distribution and its Properties

    Ahmad Saeed Akhter


    Full Text Available Rayleigh (1880) observed that sea waves follow no simple law because of the complexities of the sea, but it has been found that the probability distributions of wave heights, wave lengths, wave-induced pitch, and the heave motions of ships follow the Rayleigh distribution. At present, several different quantities are in use for describing the state of the sea; for example, the mean height of the waves, the root mean square height, the height of the "significant waves" (the mean height of the highest one-third of all the waves), the maximum height over a given interval of time, and so on. At present, the shipbuilding industry knows less than any other construction industry about the service conditions under which it must operate. Only small efforts have been made to establish the stresses and motions and to incorporate the results of such studies into design. This is due to the complexity of the problem caused by the extensive variability of the sea and the corresponding response of ships. Nevertheless, it is possible to predict service conditions for ships in an orderly and relatively simple manner. Rayleigh (1880) derived the distribution from the amplitude of sound resulting from many independent sources. This distribution is also connected with one or two dimensions and is sometimes referred to as the "random walk" frequency distribution. The Rayleigh distribution can be derived from the bivariate normal distribution when the variates are independent and random with equal variances. We construct a bivariate Rayleigh distribution with Rayleigh marginal distribution functions and discuss its fundamental properties.
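The derivation mentioned above, that the radial amplitude of a circular bivariate normal is Rayleigh distributed, is easy to check by simulation; the value of sigma and the sample size below are arbitrary choices:

```python
import math
import random

random.seed(42)
sigma = 2.0
n = 200_000

# draw independent N(0, sigma^2) components (X, Y) of a bivariate normal
# and form R = sqrt(X^2 + Y^2), which should be Rayleigh(sigma)
rs = [math.hypot(random.gauss(0, sigma), random.gauss(0, sigma))
      for _ in range(n)]

# the Rayleigh mean is sigma * sqrt(pi / 2), about 2.507 for sigma = 2
mean_r = sum(rs) / n
```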

  15. Methodology to unmix spectrally similar minerals using high order derivative spectra

    Debba, Pravesh


    Full Text Available The presentation introduces unmixing by analogy with a chocolate cake recipe, tabulating ingredients and quantities (unsweetened chocolate 120 g, unsweetened cocoa powder 28 g, boiling water 240 ml, flour 315 g, baking powder 2 tsp, baking soda 1 tsp, salt 1/4 tsp, unsalted butter 226 g, white sugar 400 g, 3 large eggs, pure vanilla extract 2 tsp, milk 240 ml): the baked cake is the mixture, and unmixing seeks the ingredients and their proportions.

  16. Quadratic Blind Linear Unmixing: A Graphical User Interface for Tissue Characterization

    Gutierrez-Navarro, O.; Campos-Delgado, D.U.; Arce-Santana, E. R.; Jo, Javier A.


    Spectral unmixing is the process of breaking down data from a sample into its basic components and their abundances. Previous work has focused on blind unmixing of multi-spectral fluorescence lifetime imaging microscopy (m-FLIM) datasets under a linear mixture model and quadratic approximations. This method provides a fast linear decomposition and can work without a limitation on the maximum number of components or endmembers. Hence this work presents an interactive software tool which implements this method.

  17. Complex Polynomial Vector Fields

    The two branches of dynamical systems, continuous and discrete, correspond to the study of differential equations (vector fields) and iteration of mappings respectively. In holomorphic dynamics, the systems studied are restricted to those described by holomorphic (complex analytic) or meromorphic (allowing poles as singularities) functions. There already exists a well-developed theory for iterative holomorphic dynamical systems, and successful relations found between iteration theory and flows of vector fields have been one of the main motivations for the recent interest in holomorphic vector fields. Since the class of complex polynomial vector fields in the plane is natural to consider, it is remarkable that its study has only begun very recently. There are numerous fundamental questions that are still open, both in the general classification of these vector fields, the decomposition...

  18. Polynomial methods in combinatorics

    Guth, Larry


    This book explains some recent applications of the theory of polynomials and algebraic geometry to combinatorics and other areas of mathematics. One of the first results in this story is a short elegant solution of the Kakeya problem for finite fields, which was considered a deep and difficult problem in combinatorial geometry. The author also discusses in detail various problems in incidence geometry associated to Paul Erdős's famous distinct distances problem in the plane from the 1940s. The proof techniques are also connected to error-correcting codes, Fourier analysis, number theory, and differential geometry. Although the mathematics discussed in the book is deep and far-reaching, it should be accessible to first- and second-year graduate students and advanced undergraduates. The book contains approximately 100 exercises that further the reader's understanding of the main themes of the book. Some of the greatest advances in geometric combinatorics and harmonic analysis in recent years have been accompl...

  19. Polynomial representations of GLn

    Green, James A; Erdmann, Karin


    The first half of this book contains the text of the first edition of LNM volume 830, Polynomial Representations of GLn. This classic account of matrix representations, the Schur algebra, the modular representations of GLn, and connections with symmetric groups, has been the basis of much research in representation theory. The second half is an Appendix, and can be read independently of the first. It is an account of the Littelmann path model for the case gln. In this case, Littelmann's 'paths' become 'words', and so the Appendix works with the combinatorics on words. This leads to the representation theory of the 'Littelmann algebra', which is a close analogue of the Schur algebra. The treatment is self-contained; in particular complete proofs are given of classical theorems of Schensted and Knuth.

  20. Polynomial representations of GLN

    Green, James A


    The first half of this book contains the text of the first edition of LNM volume 830, Polynomial Representations of GLn. This classic account of matrix representations, the Schur algebra, the modular representations of GLn, and connections with symmetric groups, has been the basis of much research in representation theory. The second half is an Appendix, and can be read independently of the first. It is an account of the Littelmann path model for the case gln. In this case, Littelmann's 'paths' become 'words', and so the Appendix works with the combinatorics on words. This leads to the representation theory of the 'Littelmann algebra', which is a close analogue of the Schur algebra. The treatment is self-contained; in particular complete proofs are given of classical theorems of Schensted and Knuth.

  1. Efficient computation of Laguerre polynomials

    A. Gil (Amparo); J. Segura (Javier); N.M. Temme (Nico)


    An efficient algorithm and a Fortran 90 module (LaguerrePol) for computing Laguerre polynomials L_n^(α)(z) are presented. The standard three-term recurrence relation satisfied by the polynomials and different types of asymptotic expansions, valid for n large and α small, are used
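The three-term recurrence referred to above is the standard one for generalized Laguerre polynomials; a minimal Python sketch (not the Fortran module's code) is:

```python
def laguerre(n, alpha, x):
    """Generalized Laguerre polynomial L_n^(alpha)(x) via the standard
    three-term recurrence:
        (k+1) L_{k+1} = (2k + 1 + alpha - x) L_k - (k + alpha) L_{k-1},
    starting from L_0 = 1 and L_1 = 1 + alpha - x."""
    if n == 0:
        return 1.0
    prev, cur = 1.0, 1.0 + alpha - x          # L_0 and L_1
    for k in range(1, n):
        prev, cur = cur, ((2 * k + 1 + alpha - x) * cur
                          - (k + alpha) * prev) / (k + 1)
    return cur
```

Forward recurrence like this is accurate for moderate n; the paper's point is precisely that large-n and small-alpha regimes call for asymptotic expansions instead.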

  2. Optimization over polynomials : Selected topics

    Laurent, M.; Jang, Sun Young; Kim, Young Rock; Lee, Dae-Woong; Yie, Ikkwon


    Minimizing a polynomial function over a region defined by polynomial inequalities models broad classes of hard problems from combinatorics, geometry and optimization. New algorithmic approaches have emerged recently for computing the global minimum, by combining tools from real algebra (sums of

  3. An Affine Invariant Bivariate Version of the Sign Test.


    Key words: affine invariance, bivariate quantile, bivariate symmetry, model, generalized median, influence function, permutation test, normal efficiency. ... We calculate a bivariate version of the influence function, and the resulting form is bounded, as is the case for the univariate sign test, and shows the ... terms of a bivariate analogue of Hampel's (1974) influence function. The latter, though usually defined as a von Mises derivative of certain

  4. Unmixing of spectral components affecting AVIRIS imagery of Tampa Bay

    Carder, Kendall L.; Lee, Z. P.; Chen, Robert F.; Davis, Curtiss O.


    According to Kirk's as well as Morel and Gentili's Monte Carlo simulations, the popular simple expression R ≈ 0.33 bb/a, relating subsurface irradiance reflectance (R) to the ratio of the backscattering coefficient (bb) to the absorption coefficient (a), is not valid for bb/a > 0.25. This means that it may no longer be valid for values of remote-sensing reflectance (the above-surface ratio of water-leaving radiance to downwelling irradiance) where Rrs > 0.01. Since no simple Rrs expression had been developed for very turbid waters, we developed one based in part on Monte Carlo simulations and empirical adjustments to an Rrs model, and applied it to rather turbid coastal waters near Tampa Bay to evaluate its utility for unmixing the optical components affecting the water-leaving radiance. With the high spectral (10 nm) and spatial (20 m²) resolution of Airborne Visible-InfraRed Imaging Spectrometer (AVIRIS) data, the water depth and bottom type were deduced using the model for shallow waters. This research demonstrates the necessity of further research to improve interpretations of scenes with highly variable turbid waters, and it emphasizes the utility of high spectral-resolution data such as AVIRIS for better understanding complicated coastal environments such as the west Florida shelf.
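The simple reflectance relation and its stated validity limit can be captured in a few lines; this is a sketch using only the values quoted in the abstract:

```python
def irradiance_reflectance(bb, a):
    """Subsurface irradiance reflectance from the simple relation
    R ~ 0.33 * bb / a. Per the Monte Carlo results cited in the abstract,
    the relation breaks down for bb/a > 0.25, so that regime is flagged
    rather than evaluated."""
    ratio = bb / a
    if ratio > 0.25:
        raise ValueError("bb/a = %.3f exceeds 0.25; simple relation invalid"
                         % ratio)
    return 0.33 * ratio
```

In turbid coastal water the guard triggers, which is exactly why the paper develops a replacement expression.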

  5. On generalized Fibonacci and Lucas polynomials

    Nalli, Ayse [Department of Mathematics, Faculty of Sciences, Selcuk University, 42075 Campus-Konya (Turkey)], E-mail:; Haukkanen, Pentti [Department of Mathematics, Statistics and Philosophy, 33014 University of Tampere (Finland)], E-mail:


    Let h(x) be a polynomial with real coefficients. We introduce h(x)-Fibonacci polynomials that generalize both Catalan's Fibonacci polynomials and Byrd's Fibonacci polynomials, as well as the k-Fibonacci numbers, and we provide properties for these h(x)-Fibonacci polynomials. We also introduce h(x)-Lucas polynomials that generalize the Lucas polynomials and present properties of these polynomials. In the last section we introduce the matrix Q_h(x) that generalizes the Q-matrix whose powers generate the Fibonacci numbers.
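The defining recurrence F_{h,n+1}(x) = h(x) F_{h,n}(x) + F_{h,n-1}(x) can be sketched with plain coefficient lists; the initial conditions F_{h,0} = 0, F_{h,1} = 1 are the usual Fibonacci-polynomial convention, assumed here:

```python
def poly_mul(p, q):
    # multiply coefficient lists (lowest degree first)
    out = [0] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            out[i + j] += pi * qj
    return out

def poly_add(p, q):
    n = max(len(p), len(q))
    p, q = p + [0] * (n - len(p)), q + [0] * (n - len(q))
    return [a + b for a, b in zip(p, q)]

def h_fibonacci(h, n):
    # F_0 = 0, F_1 = 1, F_{k+1} = h * F_k + F_{k-1}; h is a coefficient list
    prev, cur = [0], [1]
    for _ in range(n):
        prev, cur = cur, poly_add(poly_mul(h, cur), prev)
    return prev

# h(x) = x recovers the classical Fibonacci polynomials:
# F_5(x) = x^4 + 3x^2 + 1, so F_5(1) = 5, the 5th Fibonacci number
fib5 = h_fibonacci([0, 1], 5)
```

With h(x) = k, a constant, the same recurrence produces the k-Fibonacci numbers mentioned in the abstract.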

  6. Parallel Construction of Irreducible Polynomials

    Frandsen, Gudmund Skovbjerg

    Let arithmetic pseudo-NC^k denote the problems that can be solved by log space uniform arithmetic circuits over the finite prime field GF(p) of depth O(log^k (n + p)) and size polynomial in (n + p). We show that the problem of constructing an irreducible polynomial of specified degree over GF(p) ... of polynomials is in arithmetic NC^3. Our algorithm works over any field and, compared to other known algorithms, it does not assume the ability to take p'th roots when the field has characteristic p.

  7. Orthogonal polynomials in transport theories

    Dehesa, J.S.


    The asymptotic (k → ∞) behaviour of zeros of the polynomials g_k^{(m)}(ν) encountered in the treatment of direct and inverse problems of scattering in neutron transport as well as radiative transfer theories is investigated in terms of the amplitude w̄_k of the kth Legendre polynomial needed in the expansion of the scattering function. The parameters w̄_k describe the anisotropy of scattering of the medium considered. In particular, it is shown that the asymptotic density of zeros of the polynomials g_k^{(m)}(ν) is an inverted semicircle for an anisotropic non-multiplying scattering medium

  8. The relative performance of bivariate causality tests in small samples

    Bult, J..R.; Leeflang, P.S.H.; Wittink, D.R.


    Causality tests have been applied to establish directional effects and to reduce the set of potential predictors. For the latter type of application only bivariate tests can be used. In this study we compare bivariate causality tests. Although the problem addressed is general and could benefit

  9. Stress-strength reliability for general bivariate distributions

    Alaa H. Abdel-Hamid


    Full Text Available An expression for the stress-strength reliability R = P(X1 < X2) is obtained when X1 and X2 follow a general bivariate distribution. Such distributions include the bivariate compound Weibull, bivariate compound Gompertz and bivariate compound Pareto, among others. In the parametric case, the maximum likelihood estimates of the parameters and of the reliability function R are obtained. In the non-parametric case, point and interval estimates of R are developed using Govindarajulu's asymptotic distribution-free method when X1 and X2 are dependent. An example is given when the population distribution is bivariate compound Weibull. Simulation is performed, based on different sample sizes, to study the performance of the estimates.
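For orientation, the quantity being estimated has a naive empirical counterpart from paired observations; the sketch below is not Govindarajulu's distribution-free estimator, and the data are made up:

```python
def empirical_reliability(x1, x2):
    """Naive empirical estimate of R = P(X1 < X2) from paired
    (possibly dependent) observations: the fraction of pairs
    where stress falls below strength."""
    n = len(x1)
    return sum(1 for a, b in zip(x1, x2) if a < b) / n

stress   = [2.1, 3.0, 1.4, 2.2, 0.9]
strength = [2.5, 2.8, 1.9, 3.1, 1.5]
r_hat = empirical_reliability(stress, strength)   # 4 of 5 pairs have X1 < X2
```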

  10. Julia Sets of Orthogonal Polynomials

    Christiansen, Jacob Stordal; Henriksen, Christian; Petersen, Henrik Laurberg


    For a probability measure with compact and non-polar support in the complex plane we relate dynamical properties of the associated sequence of orthogonal polynomials {P_n} to properties of the support. More precisely, we relate the Julia set of P_n to the outer boundary of the support, the filled Julia set to the polynomial convex hull K of the support, and the Green's function associated with P_n to the Green's function for the complement of K.

  11. An introduction to orthogonal polynomials

    Chihara, Theodore S


    Assuming no further prerequisites than a first undergraduate course in real analysis, this concise introduction covers general elementary theory related to orthogonal polynomials. It includes necessary background material of the type not usually found in the standard mathematics curriculum. Suitable for advanced undergraduate and graduate courses, it is also appropriate for independent study. Topics include the representation theorem and distribution functions, continued fractions and chain sequences, the recurrence formula and properties of orthogonal polynomials, special functions, and some

  12. Scattering theory and orthogonal polynomials

    Geronimo, J.S.


    The application of the techniques of scattering theory to the study of polynomials orthogonal on the unit circle and on a finite segment of the real line is considered. The starting point is the recurrence relations satisfied by the polynomials instead of the orthogonality condition. A set of two two-term recurrence relations for polynomials orthogonal on the real line is presented and used. These recurrence relations play roles analogous to those satisfied by polynomials orthogonal on the unit circle. With these recurrence formulas a Wronskian theorem is proved and the Christoffel-Darboux formula is derived. In scattering theory a fundamental role is played by the Jost function. An analogue of this function is defined, its analytic properties examined, and the locations of its zeros investigated. The role of the analogue Jost function in various properties of these orthogonal polynomials is investigated. The techniques of inverse scattering theory are also used. The discrete analogues of the Gelfand-Levitan and Marchenko equations are derived and solved. These techniques are used to calculate asymptotic formulas for the orthogonal polynomials. Finally Szegő's theorem on Toeplitz and Hankel determinants is proved using the recurrence formulas and some properties of the Jost function. The techniques of inverse scattering theory are used to calculate the correction terms

  13. GEMAS: Unmixing magnetic properties of European agricultural soil

    Fabian, Karl; Reimann, Clemens; Kuzina, Dilyara; Kosareva, Lina; Fattakhova, Leysan; Nurgaliev, Danis


    High resolution magnetic measurements provide new methods for worldwide characterization and monitoring of agricultural soil, which is essential for quantifying geologic and human impact on the critical-zone environment and the consequences of climatic change, for planning economic and ecological land use, and for forensic applications. Hysteresis measurements of all Ap samples from the GEMAS survey yield a comprehensive overview of mineral magnetic properties in European agricultural soil on a continental scale. Low- (460 Hz) and high-frequency (4600 Hz) magnetic susceptibility k was measured using a Bartington MS2B sensor. Hysteresis properties were determined by a J-coercivity spectrometer, built at the paleomagnetic laboratory of Kazan University, providing for each sample a modified hysteresis loop, a backfield curve, an acquisition curve of isothermal remanent magnetization (IRM), and a viscous IRM decay spectrum. Each measurement set is obtained in a single run from zero field up to 1.5 T and back to -1.5 T. The resulting data are used to create the first continental-scale maps of magnetic soil parameters. Because the GEMAS geochemical atlas contains a comprehensive set of geochemical data for the same soil samples, the new data can be used to map magnetic parameters in relation to chemical and geological parameters. The data set also provides a unique opportunity to analyze the magnetic mineral fraction of the soil samples by unmixing their IRM acquisition curves. The endmember coefficients are interpreted by linear inversion for other magnetic, physical and chemical properties, which results in an unprecedented and detailed view of the mineral magnetic composition of European agricultural soils.

  14. Catalytic Unmixed Combustion of Coal with Zero Pollution

    George Rizeq; Parag Kulkarni; Raul Subia; Wei Wei


    GE Global Research is developing an innovative energy-based technology for coal combustion with high efficiency and near-zero pollution. This Unmixed Combustion of coal (UMC-Coal) technology simultaneously converts coal, steam and air into two separate streams: high-pressure CO{sub 2}-rich gas for sequestration, and high-temperature, high-pressure vitiated air for producing electricity in gas turbine expanders. The UMC process utilizes an oxygen transfer material (OTM) and eliminates the need for an air separation unit (ASU) and a CO{sub 2} separation unit, as compared to conventional gasification-based processes. This is the final report for the two-year DOE-funded program (DE-FC26-03NT41842) on this technology, which ended on September 30, 2005. The UMC technology development program encompassed lab- and pilot-scale studies to demonstrate the UMC concept. The chemical feasibility of the individual UMC steps was established via lab-scale testing. A pilot plant, designed in a related DOE-funded program (DE-FC26-00FT40974), was reconstructed and operated to demonstrate the chemistry of the UMC process in a pilot-scale system. The risks associated with this promising technology, including cost, the lifetime and durability of the OTM, and the impact of contaminants on turbine performance, are currently being addressed in detail in a related ongoing DOE-funded program (DE-FC26-00FT40974, Phase II). Results obtained to date suggest that this technology has the potential to economically meet future efficiency and environmental performance goals.

  15. Bannai-Ito polynomials and dressing chains

    Derevyagin, Maxim; Tsujimoto, Satoshi; Vinet, Luc; Zhedanov, Alexei


    Schur-Delsarte-Genin (SDG) maps and Bannai-Ito polynomials are studied. SDG maps are related to dressing chains determined by quadratic algebras. The Bannai-Ito polynomials and their kernel polynomials -- the complementary Bannai-Ito polynomials -- are shown to arise in the framework of the SDG maps.

  16. Birth-death processes and associated polynomials

    van Doorn, Erik A.


    We consider birth-death processes on the nonnegative integers and the corresponding sequences of orthogonal polynomials called birth-death polynomials. The sequence of associated polynomials linked with a sequence of birth-death polynomials and its orthogonalizing measure can be used in the analysis
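Birth-death polynomials are commonly defined by a three-term recurrence in the birth rates λ_n and death rates μ_n. A small sketch under that assumption (the specific rates below are arbitrary illustrations):

```python
def birth_death_polys(lam, mu, x, n_max):
    """Evaluate Q_0, ..., Q_{n_max} at x via the standard recurrence
    lam_n * Q_{n+1} = (lam_n + mu_n - x) * Q_n - mu_n * Q_{n-1},
    with Q_{-1} = 0, Q_0 = 1 (and conventionally mu_0 = 0)."""
    q_prev, q = 0.0, 1.0
    values = [q]
    for n in range(n_max):
        q_next = ((lam[n] + mu[n] - x) * q - mu[n] * q_prev) / lam[n]
        q_prev, q = q, q_next
        values.append(q)
    return values
```

With mu_0 = 0, an easy sanity check is that Q_n(0) = 1 for all n, which follows by induction from the recurrence.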

  17. Non-Abelian integrable hierarchies: matrix biorthogonal polynomials and perturbations

    Ariznabarreta, Gerardo; García-Ardila, Juan C.; Mañas, Manuel; Marcellán, Francisco


    In this paper, Geronimus–Uvarov perturbations for matrix orthogonal polynomials on the real line are studied and then applied to the analysis of non-Abelian integrable hierarchies. The orthogonality is understood in full generality, i.e. in terms of a nondegenerate continuous sesquilinear form, determined by a quasidefinite matrix of bivariate generalized functions with a well-defined support. We derive Christoffel-type formulas that give the perturbed matrix biorthogonal polynomials and their norms in terms of the original ones. The keystone for this finding is the Gauss–Borel factorization of the Gram matrix. Geronimus–Uvarov transformations are considered in the context of the 2D non-Abelian Toda lattice and noncommutative KP hierarchies. The interplay between transformations and integrable flows is discussed. Miwa shifts, τ-ratio matrix functions and Sato formulas are given. Bilinear identities, involving Geronimus–Uvarov transformations, are found, first for the Baker functions, then for the biorthogonal polynomials and their second-kind functions, and finally for the τ-ratio matrix functions.


    Chastine Fatichah


    Full Text Available The Bivariate Marginal Distribution Algorithm (BMDA) is an extension of the Estimation of Distribution Algorithm. This heuristic introduces a new approach to recombination that generates new individuals without the crossover and mutation operators of a genetic algorithm: it exploits pairwise dependencies between variables, discovered during the optimization process, to construct new individuals. In this research, the performance of a genetic algorithm with one-point crossover is compared with that of BMDA on the Onemax problem, the De Jong F2 function, and the Traveling Salesman Problem. The experimental results show that the performance of both algorithms depends on their respective parameters and on the population size used. For small Onemax instances, the genetic algorithm performs better, requiring fewer iterations and less time to reach the optimum, whereas BMDA yields better optimization results on large Onemax instances. On the De Jong F2 function, the genetic algorithm outperforms BMDA in both number of iterations and time. On the Traveling Salesman Problem, BMDA achieves better optimization results than the genetic algorithm.

  19. On Multiple Polynomials of Capelli Type

    S.Y. Antonov


    Full Text Available This paper deals with the class of Capelli polynomials in the free associative algebra F{Z} (where F is an arbitrary field and Z is a countable set), generalizing the construction of multiple Capelli polynomials. The fundamental properties of the introduced Capelli polynomials are provided. In particular, decomposition of the Capelli polynomials by means of the same type of polynomials is shown. Furthermore, some relations between their T-ideals are revealed. A connection between double Capelli polynomials and Capelli quasi-polynomials is established.

  20. Bivariate discrete beta Kernel graduation of mortality data.

    Mazza, Angelo; Punzo, Antonio


    Various parametric/nonparametric techniques have been proposed in the literature to graduate mortality data as a function of age. Nonparametric approaches, as for example kernel smoothing regression, are often preferred because they do not assume any particular mortality law. Among the existing kernel smoothing approaches, the recently proposed (univariate) discrete beta kernel smoother has been shown to provide some benefits. Bivariate graduation, over age and calendar years or durations, is common practice in demography and actuarial sciences. In this paper, we generalize the discrete beta kernel smoother to the bivariate case, and we introduce an adaptive bandwidth variant that may provide additional benefits when data on exposures to the risk of death are available; furthermore, we outline a cross-validation procedure for bandwidth selection. Using simulation studies, we compare the bivariate approach proposed here with its corresponding univariate formulation and with two popular nonparametric bivariate graduation techniques, based on Epanechnikov kernels and on P-splines. To make simulations realistic, a bivariate dataset, based on probabilities of dying recorded for the US males, is used. Simulations have confirmed the gain in performance of the new bivariate approach with respect to both the univariate and the bivariate competitors.
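As an illustration of bivariate kernel graduation, a Nadaraya-Watson-style smoother over an age x year grid with product Epanechnikov kernels (one of the competitors mentioned above, not the paper's discrete beta kernel) might look like:

```python
def epanechnikov(u):
    """Epanechnikov kernel, supported on [-1, 1]."""
    return 0.75 * (1.0 - u * u) if abs(u) <= 1.0 else 0.0

def graduate(points, age, year, h_age, h_year):
    """Kernel-weighted average of raw mortality rates at (age, year).
    points: list of (age_i, year_i, rate_i) observations."""
    num = den = 0.0
    for a_i, y_i, r_i in points:
        w = epanechnikov((age - a_i) / h_age) * epanechnikov((year - y_i) / h_year)
        num += w * r_i
        den += w
    return num / den if den > 0 else float("nan")
```

A basic sanity check is that graduating constant raw rates returns the constant.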

  1. ℓ0 -based sparse hyperspectral unmixing using spectral information and a multi-objectives formulation

    Xu, Xia; Shi, Zhenwei; Pan, Bin


    Sparse unmixing aims at recovering pure materials from hyperspectral images and estimating their abundance fractions. Sparse unmixing is actually an ℓ0 problem, which is NP-hard, so a relaxation is often used. In this paper, we attempt to deal with the ℓ0 problem directly via a multi-objective method, which is a non-convex approach. The characteristics of hyperspectral images are integrated into the proposed method, which leads to a new spectral-information and multi-objective based sparse unmixing method (SMoSU). In order to solve the ℓ0 norm optimization problem, the spectral library is encoded in a binary vector, and a bit-wise flipping strategy is used to generate new individuals in the evolution process. However, a multi-objective method usually produces a number of non-dominated solutions, while sparse unmixing requires a single solution. How to make the final decision for sparse unmixing is challenging. To handle this problem, we integrate the spectral characteristics of hyperspectral images into SMoSU. By considering the spectral correlation in hyperspectral data, we improve the Tchebycheff decomposition function in SMoSU via a new regularization term. This regularization term enforces individual divergence in the evolution process of SMoSU. In this way, the diversity and convergence of the population are further balanced, which is beneficial to the concentration of individuals. In the experiments, three synthetic datasets and one real-world dataset are used to analyse the effectiveness of SMoSU, and several state-of-the-art sparse unmixing algorithms are compared.
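Two of the ingredients named above, a binary support vector over the spectral library with bit-wise flipping, and the pair of objectives (reconstruction error and ℓ0 cardinality), can be sketched as follows. This is an illustrative simplification, not the SMoSU algorithm itself:

```python
import numpy as np

def objectives(y, A, s):
    """Objectives for a binary support vector s over library A:
    (reconstruction error, number of active endmembers)."""
    idx = np.flatnonzero(s)
    if idx.size == 0:
        return float(np.linalg.norm(y)), 0
    x, *_ = np.linalg.lstsq(A[:, idx], y, rcond=None)
    return float(np.linalg.norm(y - A[:, idx] @ x)), int(idx.size)

def bit_flip(s, rng):
    """Generate a neighbour individual by flipping one random bit."""
    t = s.copy()
    t[rng.integers(len(s))] ^= 1
    return t
```

An evolutionary search would repeatedly apply `bit_flip` and keep the set of supports not dominated in both objectives.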

  2. Approximation of bivariate copulas by patched bivariate Fréchet copulas

    Zheng, Yanting; Yang, Jingping; Huang, Jianhua Z.


    Bivariate Fréchet (BF) copulas characterize dependence as a mixture of three simple structures: comonotonicity, independence and countermonotonicity. They are easily interpretable but have limitations when used as approximations to general dependence structures. To improve the approximation property of the BF copulas and keep the advantage of easy interpretation, we develop a new copula approximation scheme by using BF copulas locally and patching the local pieces together. Error bounds and a probabilistic interpretation of this approximation scheme are developed. The new approximation scheme is compared with several existing copula approximations, including shuffle of min, checkmin, checkerboard and Bernstein approximations and exhibits better performance, especially in characterizing the local dependence. The utility of the new approximation scheme in insurance and finance is illustrated in the computation of the rainbow option prices and stop-loss premiums. © 2010 Elsevier B.V.
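The three-component mixture underlying a BF copula is simple to write down. A sketch (the requirement that the weights a, b, c are non-negative and sum to one is part of the model):

```python
def bf_copula(u, v, a, b, c):
    """Bivariate Frechet copula: a mixture of the comonotonic (min),
    independence (product) and countermonotonic (max(u+v-1, 0)) copulas."""
    assert min(a, b, c) >= 0 and abs(a + b + c - 1.0) < 1e-12
    return a * min(u, v) + b * u * v + c * max(u + v - 1.0, 0.0)
```

The patched construction of the paper applies such mixtures locally, on a rectangular partition of the unit square, with weights varying by cell.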


  4. Chromatic polynomials of random graphs

    Van Bussel, Frank; Fliegner, Denny; Timme, Marc; Ehrlich, Christoph; Stolzenberg, Sebastian


    Chromatic polynomials and related graph invariants are central objects in both graph theory and statistical physics. Computational difficulties, however, have so far restricted studies of such polynomials to graphs that were either very small, very sparse or highly structured. Recent algorithmic advances (Timme et al 2009 New J. Phys. 11 023001) now make it possible to compute chromatic polynomials for moderately sized graphs of arbitrary structure and number of edges. Here we present chromatic polynomials of ensembles of random graphs with up to 30 vertices, over the entire range of edge density. We specifically focus on the locations of the zeros of the polynomial in the complex plane. The results indicate that the chromatic zeros of random graphs have a very consistent layout. In particular, the crossing point, the point at which the chromatic zeros with non-zero imaginary part approach the real axis, scales linearly with the average degree over most of the density range. While the scaling laws obtained are purely empirical, if they continue to hold in general there are significant implications: the crossing points of chromatic zeros in the thermodynamic limit separate systems with zero ground state entropy from systems with positive ground state entropy, the latter an exception to the third law of thermodynamics.
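For very small graphs, the chromatic polynomial can be computed directly by deletion-contraction, P(G) = P(G - e) - P(G / e); this exponential-time recursion is exactly the computational bottleneck that restricts naive approaches far below the 30-vertex graphs treated above. A sketch:

```python
def poly_sub(p, q):
    """Subtract coefficient lists (ascending powers of k)."""
    n = max(len(p), len(q))
    p = p + [0] * (n - len(p))
    q = q + [0] * (n - len(q))
    return [a - b for a, b in zip(p, q)]

def chromatic(n, edges):
    """Chromatic polynomial of a simple graph on n vertices, as ascending
    coefficients in k, via deletion-contraction."""
    edges = {frozenset(e) for e in edges}
    if not edges:
        return [0] * n + [1]            # empty graph: k^n
    e = next(iter(edges))
    u, v = tuple(e)
    deleted = edges - {e}
    contracted = set()
    for f in deleted:
        g = frozenset(u if w == v else w for w in f)
        if len(g) == 2:                 # merging v into u; set() merges parallels
            contracted.add(g)
    return poly_sub(chromatic(n, deleted), chromatic(n - 1, contracted))
```

For the triangle this yields k(k-1)(k-2) = k^3 - 3k^2 + 2k, i.e. 6 proper 3-colourings.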

  5. Cosmographic analysis with Chebyshev polynomials

    Capozziello, Salvatore; D'Agostino, Rocco; Luongo, Orlando


    The limits of standard cosmography are here revised addressing the problem of error propagation during statistical analyses. To do so, we propose the use of Chebyshev polynomials to parametrize cosmic distances. In particular, we demonstrate that building up rational Chebyshev polynomials significantly reduces error propagation with respect to standard Taylor series. This technique provides unbiased estimations of the cosmographic parameters and performs significantly better than previous numerical approximations. To demonstrate this, we compare rational Chebyshev polynomials with Padé series. In addition, we theoretically evaluate the convergence radius of the (1,1) Chebyshev rational polynomial and we compare it with the convergence radii of Taylor and Padé approximations. We thus focus on regions in which convergence of Chebyshev rational functions is better than standard approaches. With this recipe, as high-redshift data are employed, rational Chebyshev polynomials remain highly stable and enable one to derive highly accurate analytical approximations of Hubble's rate in terms of the cosmographic series. Finally, we check our theoretical predictions by setting bounds on cosmographic parameters through Monte Carlo integration techniques, based on the Metropolis-Hastings algorithm. We apply our technique to high-redshift cosmic data, using the Joint Light-curve Analysis supernovae sample and the most recent versions of Hubble parameter and baryon acoustic oscillation measurements. We find that cosmography with Taylor series fails to be predictive with the aforementioned data sets, while it turns out to be much more stable using the Chebyshev approach.
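The underlying numerical point, that expansions in a Chebyshev basis behave far better on an interval than monomial Taylor-style expansions, is easy to check with NumPy. This sketch fits e^x on [-1, 1]; it is only an illustration of the basis property, not the paper's (1,1) rational cosmographic construction:

```python
import numpy as np

# Least-squares fit of a degree-8 Chebyshev series to f(x) = exp(x) on [-1, 1]
x = np.linspace(-1.0, 1.0, 201)
coefs = np.polynomial.chebyshev.chebfit(x, np.exp(x), 8)
approx = np.polynomial.chebyshev.chebval(x, coefs)
max_err = np.max(np.abs(np.exp(x) - approx))
# The uniform error is tiny already at modest degree.
```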

  6. Unsupervised Bayesian linear unmixing of gene expression microarrays.

    Bazot, Cécile; Dobigeon, Nicolas; Tourneret, Jean-Yves; Zaas, Aimee K; Ginsburg, Geoffrey S; Hero, Alfred O


    This paper introduces a new constrained model and the corresponding algorithm, called unsupervised Bayesian linear unmixing (uBLU), to identify biological signatures from high dimensional assays like gene expression microarrays. The basis for uBLU is a Bayesian model for the data samples, which are represented as an additive mixture of random positive gene signatures, called factors, with random positive mixing coefficients, called factor scores, that specify the relative contribution of each signature to a specific sample. The particularity of the proposed method is that uBLU constrains the factor loadings to be non-negative and the factor scores to be probability distributions over the factors. Furthermore, it also provides estimates of the number of factors. A Gibbs sampling strategy is adopted here to generate random samples according to the posterior distribution of the factors, factor scores, and number of factors. These samples are then used to estimate all the unknown parameters. Firstly, the proposed uBLU method is applied to several simulated datasets with known ground truth and compared with previous factor decomposition methods, such as principal component analysis (PCA), non-negative matrix factorization (NMF), Bayesian factor regression modeling (BFRM), and the gradient-based algorithm for general matrix factorization (GB-GMF). Secondly, we illustrate the application of uBLU on a real time-evolving gene expression dataset from a recent viral challenge study in which individuals have been inoculated with influenza A/H3N2/Wisconsin. We show that the uBLU method significantly outperforms the other methods on the simulated and real data sets considered here. The results obtained on synthetic and real data illustrate the accuracy of the proposed uBLU method when compared to other factor decomposition methods from the literature (PCA, NMF, BFRM, and GB-GMF). The uBLU method identifies an inflammatory component closely associated with clinical symptom scores.
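As a rough illustration of the constraint structure above (non-negative factor loadings, factor scores forming a probability distribution over factors), here is a simple multiplicative-update NMF with score renormalization. This deterministic stand-in is not the Bayesian Gibbs sampler of uBLU; the renormalization step is a crude projection onto the simplex constraint:

```python
import numpy as np

def constrained_nmf(V, r, n_iter=200, seed=0):
    """Factor V (genes x samples) as W @ H with W >= 0, H >= 0 and each
    column of H (the factor scores of one sample) summing to 1."""
    rng = np.random.default_rng(seed)
    g, s = V.shape
    W = rng.random((g, r)) + 0.1
    H = rng.random((r, s)) + 0.1
    eps = 1e-12
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # standard Lee-Seung updates
        W *= (V @ H.T) / (W @ H @ H.T + eps)
        H /= H.sum(axis=0, keepdims=True)      # scores as distributions
    return W, H
```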

  7. Bivariational calculations for radiation transfer in an inhomogeneous participating media

    El Wakil, S.A.; Machali, H.M.; Haggag, M.H.; Attia, M.T.


    Equations for radiation transfer are obtained for dispersive media with space dependent albedo. Bivariational bound principle is used to calculate the reflection and transmission coefficients for such media. Numerical results are given and compared. (author)

  8. Comparison between two bivariate Poisson distributions through the ...

    These two models are specified by their probability mass functions. ... To remedy this problem, Berkhout and Plug proposed a bivariate Poisson distribution allowing the correlation to be negative, zero, or positive.

  9. Polynomial weights and code constructions

    Massey, J; Costello, D; Justesen, Jørn


    For any nonzero element c of a general finite field GF(q), it is shown that the polynomials (x - c)^i, i = 0, 1, 2, ..., have the "weight-retaining" property that any linear combination of these polynomials with coefficients in GF(q) has Hamming weight at least as great as that of the minimum-degree polynomial included. This fundamental property is then used as the key to a variety of code constructions, including 1) a simplified derivation of the binary Reed-Muller codes and, for any prime p greater than 2, a new extensive class of p-ary "Reed-Muller codes," 2) a new class of "repeated-root" cyclic codes, ... of long constraint length binary convolutional codes derived from 2^r-ary Reed-Solomon codes, and 6) a new class of q-ary "repeated-root" constacyclic codes with an algebraic decoding algorithm.

  10. Orthogonal Polynomials and Special Functions

    Assche, Walter


    The set of lectures from the Summer School held in Leuven in 2002 provide an up-to-date account of recent developments in orthogonal polynomials and special functions, in particular for algorithms for computer algebra packages, 3nj-symbols in representation theory of Lie groups, enumeration, multivariable special functions and Dunkl operators, asymptotics via the Riemann-Hilbert method, exponential asymptotics and the Stokes phenomenon. The volume aims at graduate students and post-docs working in the field of orthogonal polynomials and special functions, and in related fields interacting with orthogonal polynomials, such as combinatorics, computer algebra, asymptotics, representation theory, harmonic analysis, differential equations, physics. The lectures are self-contained requiring only a basic knowledge of analysis and algebra, and each includes many exercises.

  11. Classification of Knee Joint Vibration Signals Using Bivariate Feature Distribution Estimation and Maximal Posterior Probability Decision Criterion

    Fang Zheng


    Full Text Available Analysis of knee joint vibration or vibroarthrographic (VAG) signals using signal processing and machine learning algorithms possesses high potential for the noninvasive detection of articular cartilage degeneration, which may reduce unnecessary exploratory surgery. Feature representation of knee joint VAG signals helps characterize the pathological condition of degenerative articular cartilages in the knee. This paper used the kernel-based probability density estimation method to model the distributions of the VAG signals recorded from healthy subjects and patients with knee joint disorders. The estimated densities of the VAG signals showed explicit distributions of the normal and abnormal signal groups, along with the corresponding contours in the bivariate feature space. The signal classifications were performed by using the Fisher's linear discriminant analysis, support vector machine with polynomial kernels, and the maximal posterior probability decision criterion. The maximal posterior probability decision criterion was able to provide a total classification accuracy of 86.67% and an area under the receiver operating characteristic curve (Az) of 0.9096, which were superior to the results obtained by either the Fisher's linear discriminant analysis (accuracy: 81.33%, Az: 0.8564) or the support vector machine with polynomial kernels (accuracy: 81.33%, Az: 0.8533). Such results demonstrated the merits of the bivariate feature distribution estimation and the superiority of the maximal posterior probability decision criterion for analysis of knee joint VAG signals.
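A stripped-down version of the decision rule above, class-conditional kernel density estimates in a bivariate feature space followed by picking the class that maximizes prior times likelihood, can be sketched as follows. The Gaussian product kernel and fixed bandwidth are illustrative assumptions, not the paper's exact estimator:

```python
import math

def kde2(point, samples, h):
    """Gaussian product-kernel density estimate at a 2-D feature point."""
    x, y = point
    total = 0.0
    for sx, sy in samples:
        total += math.exp(-((x - sx) ** 2 + (y - sy) ** 2) / (2 * h * h))
    return total / (len(samples) * 2 * math.pi * h * h)

def map_classify(point, class_samples, priors, h=0.5):
    """Maximal posterior probability decision over labelled sample sets."""
    return max(class_samples,
               key=lambda c: priors[c] * kde2(point, class_samples[c], h))
```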

  12. Symmetric functions and orthogonal polynomials

    Macdonald, I G


    One of the most classical areas of algebra, the theory of symmetric functions and orthogonal polynomials has long been known to be connected to combinatorics, representation theory, and other branches of mathematics. Written by perhaps the most famous author on the topic, this volume explains some of the current developments regarding these connections. It is based on lectures presented by the author at Rutgers University. Specifically, he gives recent results on orthogonal polynomials associated with affine Hecke algebras, surveying the proofs of certain famous combinatorial conjectures.


    Ali Jafari


    Full Text Available A new hyperspectral unmixing algorithm which considers endmember variability is presented. In the proposed algorithm, the endmembers are represented by correlated random vectors using the stochastic mixing model. Currently, there is no published theory for selecting the appropriate distribution for endmembers. The proposed algorithm first uses a linear transformation called material signature orthonormal mapping (MSOM), which transforms the endmembers into correlated Gaussian random vectors. The MSOM transformation reduces computational requirements by reducing the dimension and improves discrimination of endmembers by orthonormalizing the endmember mean vectors. In the original spectral space, the automated endmember bundles (AEB) method extracts a set of spectra (endmember set) for each material. The mean vector and covariance matrix of each endmember are estimated directly from the endmember sets in the MSOM space. Second, a new maximum likelihood method, called NCM_ML, is proposed, which estimates abundances in the MSOM space using the normal compositional model (NCM). The proposed algorithm is evaluated and compared with other state-of-the-art unmixing algorithms using simulated and real hyperspectral images. Experimental results demonstrate that the proposed unmixing algorithm can unmix pixels composed of similar endmembers in hyperspectral images in the presence of spectral variability more accurately than previous methods.

  14. Bayesian Nonnegative Matrix Factorization with Volume Prior for Unmixing of Hyperspectral Images

    Arngren, Morten; Schmidt, Mikkel Nørgaard; Larsen, Jan


    based unmixing algorithms are based on sparsity regularization encouraging pure spectral endmembers, but this is not optimal for certain applications, such as foods, where abundances are not sparse. The pixels will theoretically lie on a simplex and hence the endmembers can be estimated as the vertices...

  15. A comparison of STARFM and an unmixing-based algorithm for Landsat and MODIS data fusion

    Gevaert, C.M.; Garcia-Haro, F.J.


    The focus of the current study is to compare data fusion methods applied to sensors with medium- and high-spatial resolutions. Two documented methods are applied, the spatial and temporal adaptive reflectance fusion model (STARFM) and an unmixing-based method which proposes a Bayesian formulation to
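The core of an unmixing-based fusion step can be sketched as a least-squares inversion: each coarse (e.g. MODIS) pixel value is modelled as the fraction-weighted sum of per-class reflectances known from the high-resolution (e.g. Landsat-derived) classification, and the class reflectances are solved from many coarse pixels jointly. This linear sketch omits the Bayesian formulation the paper's method adds on top:

```python
import numpy as np

def unmix_class_reflectance(fractions, coarse_values):
    """Solve F r ≈ y for per-class reflectances r, where row i of F holds
    the land-cover class fractions inside coarse pixel i (rows sum to 1)
    and y[i] is the observed coarse-pixel reflectance."""
    r, *_ = np.linalg.lstsq(fractions, coarse_values, rcond=None)
    return r
```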

  16. Encoding Strategy Changes and Spacing Effects in the Free Recall of Unmixed Lists

    Delaney, P.F.; Knowles, M.E.


    Memory for repeated items often improves when repetitions are separated by other items, a phenomenon called the spacing effect. In two experiments, we explored the complex interaction between study strategies, serial position, and spacing effects. When people studied several unmixed lists, they initially used mainly rote rehearsal, but some people…




    Full Text Available To analyze the stability of a linear system of differential equations ẋ = Ax we can study the location of the roots of the characteristic polynomial pA(t) associated with the matrix A. We present various criteria - algebraic and geometric - that help us to determine where the roots are located without calculating them directly.
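One classical algebraic criterion of the kind discussed, deciding that all roots lie in the left half-plane without computing them, is the Routh-Hurwitz test. For a cubic characteristic polynomial it reduces to a pair of sign conditions (this sketch covers the cubic case only):

```python
def hurwitz_stable_cubic(a3, a2, a1, a0):
    """True iff all roots of a3*t^3 + a2*t^2 + a1*t + a0 (with a3 > 0)
    have negative real part, i.e. x' = Ax is asymptotically stable."""
    assert a3 > 0
    return a2 > 0 and a1 > 0 and a0 > 0 and a2 * a1 > a3 * a0

# (t+1)^3 = t^3 + 3t^2 + 3t + 1: stable.
# t^3 + t^2 + t + 5: all coefficients positive, yet a2*a1 < a3*a0,
# so a pair of roots lies in the right half-plane.
```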

  18. On Modular Counting with Polynomials

    Hansen, Kristoffer Arnsfelt


    For any integers m and l, where m has r sufficiently large (depending on l) factors that are powers of r distinct primes, we give a construction of a (symmetric) polynomial over Z_m of degree O(\sqrt n) that is a generalized representation (commonly also called weak representation) of the MODl f...

  19. Global Polynomial Kernel Hazard Estimation

    Hiabu, Munir; Miranda, Maria Dolores Martínez; Nielsen, Jens Perch


    This paper introduces a new bias reducing method for kernel hazard estimation. The method is called global polynomial adjustment (GPA). It is a global correction which is applicable to any kernel hazard estimator. The estimator works well from a theoretical point of view as it asymptotically redu...

  20. Congruences concerning Legendre polynomials III

    Sun, Zhi-Hong


    Let $p>3$ be a prime, and let $R_p$ be the set of rational numbers whose denominator is coprime to $p$. Let $\\{P_n(x)\\}$ be the Legendre polynomials. In this paper we mainly show that for $m,n,t\\in R_p$ with $m\

  1. Two polynomial division inequalities in

    Goetgheluck P


    Full Text Available This paper is a first attempt to give numerical values for constants and , in classical estimates and where is an algebraic polynomial of degree at most and denotes the -metric on . The basic tools are Markov and Bernstein inequalities.

  2. Dirichlet polynomials, majorization, and trumping

    Pereira, Rajesh; Plosker, Sarah


    Majorization and trumping are two partial orders which have proved useful in quantum information theory. We show some relations between these two partial orders and generalized Dirichlet polynomials, Mellin transforms, and completely monotone functions. These relations are used to prove a succinct generalization of Turgut’s characterization of trumping. (paper)

  3. The modified Gauss diagonalization of polynomial matrices

    Saeed, K.


    The Gauss algorithm for diagonalization of constant matrices is modified for application to polynomial matrices. Due to this modification the diagonal elements become pure polynomials rather than rational functions. (author)

  4. Sheffer and Non-Sheffer Polynomial Families

    G. Dattoli


    Full Text Available By using the integral transform method, we introduce some non-Sheffer polynomial sets. Furthermore, we show how to compute the connection coefficients for particular expressions of Appell polynomials.

  5. The finite Fourier transform of classical polynomials

    Dixit, Atul; Jiu, Lin; Moll, Victor H.; Vignat, Christophe


    The finite Fourier transform of a family of orthogonal polynomials $A_{n}(x)$ is the usual transform of the polynomials extended by $0$ outside their natural domain. Explicit expressions are given for the Legendre, Jacobi, Gegenbauer and Chebyshev families.

  6. A Summation Formula for Macdonald Polynomials

    de Gier, Jan; Wheeler, Michael


    We derive an explicit sum formula for symmetric Macdonald polynomials. Our expression contains multiple sums over the symmetric group and uses the action of Hecke generators on the ring of polynomials. In the special cases {t = 1} and {q = 0}, we recover known expressions for the monomial symmetric and Hall-Littlewood polynomials, respectively. Other specializations of our formula give new expressions for the Jack and q-Whittaker polynomials.

  7. A New Generalisation of Macdonald Polynomials

    Garbali, Alexandr; de Gier, Jan; Wheeler, Michael


    We introduce a new family of symmetric multivariate polynomials, whose coefficients are meromorphic functions of two parameters ( q, t) and polynomial in a further two parameters ( u, v). We evaluate these polynomials explicitly as a matrix product. At u = v = 0 they reduce to Macdonald polynomials, while at q = 0, u = v = s they recover a family of inhomogeneous symmetric functions originally introduced by Borodin.

  8. Associated polynomials and birth-death processes

    van Doorn, Erik A.


    We consider sequences of orthogonal polynomials with positive zeros, and pursue the question of how (partial) knowledge of the orthogonalizing measure for the {\\it associated polynomials} can lead to information about the orthogonalizing measure for the original polynomials, with a view to

  9. BSDEs with polynomial growth generators

    Philippe Briand


    Full Text Available In this paper, we give existence and uniqueness results for backward stochastic differential equations when the generator has polynomial growth in the state variable. We deal with the case of a fixed terminal time, as well as the case of a random terminal time. The need for this type of extension of the classical existence and uniqueness results comes from the desire to provide a probabilistic representation of the solutions of semilinear partial differential equations in the spirit of a nonlinear Feynman-Kac formula. Indeed, in many applications of interest, the nonlinearity is polynomial, e.g., the Allen-Cahn equation or the standard nonlinear heat and Schrödinger equations.

  10. Quantum entanglement via nilpotent polynomials

    Mandilara, Aikaterini; Akulin, Vladimir M.; Smilga, Andrei V.; Viola, Lorenza


    We propose a general method for introducing extensive characteristics of quantum entanglement. The method relies on polynomials of nilpotent raising operators that create entangled states acting on a reference vacuum state. By introducing the notion of tanglemeter, the logarithm of the state vector represented in a special canonical form and expressed via polynomials of nilpotent variables, we show how this description provides a simple criterion for entanglement as well as a universal method for constructing the invariants characterizing entanglement. We compare the existing measures and classes of entanglement with those emerging from our approach. We derive the equation of motion for the tanglemeter and, in representative examples of up to four-qubit systems, show how the known classes appear in a natural way within our framework. We extend our approach to qutrits and higher-dimensional systems, and make contact with the recently introduced idea of generalized entanglement. Possible future developments and applications of the method are discussed

  11. Special polynomials associated with some hierarchies

    Kudryashov, Nikolai A.


    Special polynomials associated with rational solutions of a hierarchy of equations of Painleve type are introduced. The hierarchy arises by similarity reduction from the Fordy-Gibbons hierarchy of partial differential equations. Some relations for these special polynomials are given. Differential-difference hierarchies for finding special polynomials are presented. These formulae allow us to obtain special polynomials associated with the hierarchy studied. It is shown that rational solutions of members of the Schwarz-Sawada-Kotera, the Schwarz-Kaup-Kupershmidt, the Fordy-Gibbons, the Sawada-Kotera and the Kaup-Kupershmidt hierarchies can be expressed through special polynomials of the hierarchy studied

  12. Space complexity in polynomial calculus

    Filmus, Y.; Lauria, M.; Nordström, J.; Ron-Zewi, N.; Thapen, Neil


    Roč. 44, č. 4 (2015), s. 1119-1153 ISSN 0097-5397 R&D Projects: GA AV ČR IAA100190902; GA ČR GBP202/12/G061 Institutional support: RVO:67985840 Keywords : proof complexity * polynomial calculus * lower bounds Subject RIV: BA - General Mathematics Impact factor: 0.841, year: 2015

  13. Codimensions of generalized polynomial identities

    Gordienko, Aleksei S


    It is proved that for every finite-dimensional associative algebra A over a field of characteristic zero there are numbers C ∈ Q+ and t ∈ Z+ such that gc_n(A) ~ C n^t d^n as n → ∞, where d = PIexp(A) ∈ Z+. Thus, Amitsur's and Regev's conjectures hold for the codimensions gc_n(A) of the generalized polynomial identities. Bibliography: 6 titles.

  14. Stable piecewise polynomial vector fields

    Claudio Pessoa


    Let $N=\{y>0\}$ and $S=\{y<0\}$ be the semi-planes of $\mathbb{R}^2$ having as common boundary the line $D=\{y=0\}$. Let $X$ and $Y$ be polynomial vector fields defined in $N$ and $S$, respectively, leading to a discontinuous piecewise polynomial vector field $Z=(X,Y)$. This work pursues the stability and the transition analysis of solutions of $Z$ between $N$ and $S$, started by Filippov (1988) and Kozlova (1984) and reformulated by Sotomayor-Teixeira (1995) in terms of the regularization method. This method consists in analyzing a one-parameter family of continuous vector fields $Z_{\epsilon}$, defined by averaging $X$ and $Y$. This family approaches $Z$ when the parameter goes to zero. The results of Sotomayor-Teixeira and Sotomayor-Machado (2002) providing conditions on $(X,Y)$ for the regularized vector fields to be structurally stable on planar compact connected regions are extended to discontinuous piecewise polynomial vector fields on $\mathbb{R}^2$. Pertinent genericity results for vector fields satisfying the above stability conditions are also extended to the present case. A procedure for the study of discontinuous piecewise vector fields at infinity through a compactification is proposed here.

  15. Spectral Mixture Analysis: Linear and Semi-parametric Full and Iterated Partial Unmixing in Multi- and Hyperspectral Image Data

    Nielsen, Allan Aasbjerg


    …and non-negative least squares (NNLS), and the partial unmixing methods orthogonal subspace projection (OSP), constrained energy minimization (CEM) and an eigenvalue formulation alternative are dealt with. The solution to the eigenvalue formulation alternative proves to be identical to the CEM solution. The matrix inversion involved in CEM can be avoided by working on (a subset of) orthogonally transformed data such as signal maximum autocorrelation factors (MAFs) or signal minimum noise fractions (MNFs). This will also cause the partial unmixing result to be independent of the noise isolated in the MAFs/MNFs not included in the analysis. CEM and the eigenvalue formulation alternative enable us to perform partial unmixing when we know one desired end-member spectrum only and not the full set of end-member spectra. This is an advantage over full unmixing and OSP. The eigenvalue formulation of CEM inspires us…
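    CEM has a closed form worth recording: with sample correlation matrix R and known target spectrum d, the filter minimizing output energy wᵀRw subject to wᵀd = 1 is w = R⁻¹d / (dᵀR⁻¹d). A minimal numpy sketch on synthetic two-end-member data (all spectra, sizes and noise levels below are invented for illustration):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic scene: 200 pixels, 5 bands, mixtures of a target and a background.
    target = np.array([1.0, 0.8, 0.2, 0.1, 0.05])    # known end-member spectrum d
    background = np.array([0.1, 0.3, 0.9, 1.0, 0.7])
    abund = rng.uniform(0, 1, 200)                    # true target abundances
    X = np.outer(abund, target) + np.outer(1 - abund, background)
    X += 0.01 * rng.standard_normal(X.shape)          # sensor noise

    # CEM filter: minimize w' R w subject to w' d = 1.
    R = X.T @ X / X.shape[0]                          # sample correlation matrix
    Rinv_d = np.linalg.solve(R, target)               # avoids forming R^{-1} explicitly
    w = Rinv_d / (target @ Rinv_d)

    scores = X @ w                                    # per-pixel target estimate
    ```

    Note that `np.linalg.solve` merely sidesteps forming an explicit inverse; the MAF/MNF route in the abstract avoids the inversion altogether by working in a transformed basis.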

  16. Optimizing an objective function under a bivariate probability model

    X. Brusset; N.M. Temme (Nico)


    The motivation of this paper is to obtain an analytical closed form of a quadratic objective function arising from a stochastic decision process with bivariate exponential probability distribution functions that may be dependent. This method is applicable when results need to be

  17. GIS-Based bivariate statistical techniques for groundwater potential ...


    This study shows the potency of two GIS-based data-driven bivariate techniques, namely … In view of these weaknesses, there is a strong requirement for reassessment of … West Bengal (India) using remote sensing, geographical information system and multi-…

  18. Assessing the copula selection for bivariate frequency analysis ...


    Copulas are applied to overcome the restriction of traditional bivariate frequency … frequency analysis methods cannot describe the random variable properties that … In order to overcome the limitation of multivariate distributions, a copula is a … The Mann-Kendall (M-K) test is a non-parametric statistical test which is used …

  19. A New Measure Of Bivariate Asymmetry And Its Evaluation

    Ferreira, Flavio Henn; Kolev, Nikolai Valtchev


    In this paper we propose a new measure of bivariate asymmetry, based on conditional correlation coefficients. A decomposition of the Pearson correlation coefficient in terms of its conditional versions is studied and an example of application of the proposed measure is given.

  20. Building Bivariate Tables: The compareGroups Package for R

    Isaac Subirana


    The R package compareGroups provides functions meant to facilitate the construction of bivariate tables (descriptives of several variables for comparison between groups) and generates reports in several formats (LaTeX, HTML or plain-text CSV). Moreover, bivariate tables can be viewed directly on the R console in a nice format. A graphical user interface (GUI) has been implemented to build the bivariate tables more easily for those users who are not familiar with the R software. Some new functions and methods have been incorporated in the newest version of the compareGroups package (version 1.x) to deal with time-to-event variables, stratifying tables, merging several tables, and revising the statistical methods used. The GUI interface also has been improved, making it much easier and more intuitive to set the inputs for building the bivariate tables. The first version (version 0.x) and this version were presented at the 2010 useR! conference (Sanz, Subirana, and Vila 2010) and the 2011 useR! conference (Sanz, Subirana, and Vila 2011), respectively. Package compareGroups is available from the Comprehensive R Archive Network at

  1. About some properties of bivariate splines with shape parameters

    Caliò, F.; Marchetti, E.


    The paper presents and proves geometrical properties of a particular bivariate spline function, built and algorithmically implemented in previous papers. The properties typical of this family of splines impact the field of computer graphics, in particular that of reverse engineering.

  2. Hyperspectral clustering and unmixing for studying the ecology of state formation and complex societies

    Kwong, Justin D.; Messinger, David W.; Middleton, William D.


    This project is an application of hyperspectral classification and unmixing in support of an ongoing archaeological study. The study region is the Oaxaca Valley located in the state of Oaxaca, Mexico on the southern coast. This was the birthplace of the Zapotec civilization, which grew into a complex state-level society. Hyperion imagery is being collected over a 30,000 km² area. Classification maps of regions of interest are generated using K-means clustering and a novel algorithm called Gradient Flow. Gradient Flow departs from conventional stochastic or deterministic approaches, using graph theory to cluster spectral data. Spectral unmixing is conducted using the RIT-developed algorithm Max-D to automatically find end-members. Stepwise unmixing is performed to better model the data using the end-members found by Max-D. Data are efficiently shared between imaging scientists and archaeologists using Google Earth to stream images over the internet rather than downloading them. The overall goal of the project is to provide archaeologists with useful information maps without having to interpret the raw data.
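    Gradient Flow is not publicly specified here, but the K-means half of the workflow can be sketched with plain Lloyd iterations; the two 4-band "class spectra" and the noise level are invented for illustration:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Toy "pixels": two spectral classes in 4 bands (e.g. vegetation vs. bare soil).
    classes = np.array([[0.9, 0.7, 0.2, 0.1],
                        [0.2, 0.3, 0.8, 0.9]])
    labels_true = rng.integers(0, 2, 300)
    X = classes[labels_true] + 0.05 * rng.standard_normal((300, 4))

    # Lloyd's algorithm: assign each spectrum to the nearest centroid, then update.
    centroids = X[rng.choice(len(X), size=2, replace=False)]
    for _ in range(20):
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        centroids = np.array([X[labels == k].mean(axis=0) if np.any(labels == k)
                              else centroids[k] for k in range(2)])
    ```

    On well-separated spectra this recovers the two classes up to a label permutation; real Hyperion scenes need far more care (atmospheric correction, band selection, choice of k).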

  3. [Source apportionment of soil heavy metals in Jiapigou goldmine based on the UNMIX model].

    Ai, Jian-chao; Wang, Ning; Yang, Jing


    The paper determines the concentrations of 16 metal elements in soil samples collected in the Jiapigou goldmine on the upper Songhua River. The UNMIX model, recommended by the US EPA for source apportionment, was applied in this study; Cd, Hg, Pb and Ag concentration contour maps were generated using the Kriging interpolation method to verify the results. The main conclusions of this study are: (1) the concentrations of Cd, Hg, Pb and Ag exceeded the Jilin Province soil background values and were obviously enriched in the soil samples; (2) the UNMIX model resolved four pollution sources: source 1 represents human activities of transportation, ore mining and garbage, and contributes 39.1%; source 2 represents the contribution of the weathering of rocks and biological effects, and contributes 13.87%; source 3 is a comprehensive source of soil parent material and chemical fertilizer, and contributes 23.93%; source 4 represents iron ore mining and transportation sources, and contributes 22.89%; (3) the UNMIX model results are in accordance with the survey of local land-use types, human activities and the Cd, Hg and Pb content distributions.

  4. Quadratic blind linear unmixing: A graphical user interface for tissue characterization.

    Gutierrez-Navarro, O; Campos-Delgado, D U; Arce-Santana, E R; Jo, Javier A


    Spectral unmixing is the process of breaking down data from a sample into its basic components and their abundances. Previous work has focused on blind unmixing of multi-spectral fluorescence lifetime imaging microscopy (m-FLIM) datasets under a linear mixture model and quadratic approximations. This method provides a fast linear decomposition and can work without a limitation on the maximum number of components or end-members. Hence this work presents interactive software which implements our blind end-member and abundance extraction (BEAE) and quadratic blind linear unmixing (QBLU) algorithms in Matlab. The options and capabilities of the proposed software are described in detail. When the number of components is known, the software can estimate the constitutive end-members and their abundances. When no prior knowledge is available, the software can provide a completely blind solution to estimate the number of components, the end-members and their abundances. The characterization of three case studies validates the performance of the new software: ex-vivo human coronary arteries, human breast cancer cell samples, and in-vivo hamster oral mucosa. The software is freely available on a webpage hosted by one of the developing institutions, and gives the user a quick, easy-to-use and efficient tool for multi/hyper-spectral data decomposition. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
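    This is not the BEAE/QBLU algorithm itself, but the linear mixture model it builds on can be exercised with scipy's non-negative least squares; the 5-band end-member matrix and abundances below are synthetic:

    ```python
    import numpy as np
    from scipy.optimize import nnls

    # Columns of E are pure-component spectra (5 bands x 3 end-members).
    E = np.array([[1.0, 0.2, 0.1],
                  [0.8, 0.4, 0.1],
                  [0.3, 0.9, 0.2],
                  [0.1, 0.7, 0.8],
                  [0.0, 0.2, 1.0]])
    a_true = np.array([0.5, 0.3, 0.2])   # true abundances (non-negative, sum to 1)
    y = E @ a_true                        # noise-free mixed spectrum

    a_hat, resid = nnls(E, y)             # non-negativity-constrained inversion
    a_hat = a_hat / a_hat.sum()           # optional sum-to-one renormalization
    ```

    With noise-free data and a full-column-rank end-member matrix, the recovered abundances match exactly; the blind case, where E itself is unknown, is what BEAE/QBLU address.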

  5. Algebraic polynomials with random coefficients

    K. Farahmand


    This paper provides an asymptotic value for the mathematical expected number of points of inflection of a random polynomial of the form $a_0(\omega)+a_1(\omega)\binom{n}{1}^{1/2}x+a_2(\omega)\binom{n}{2}^{1/2}x^2+\dots+a_n(\omega)\binom{n}{n}^{1/2}x^n$ when $n$ is large. The coefficients $\{a_j(\omega)\}_{j=0}^{n}$, $\omega\in\Omega$, are assumed to be a sequence of independent normally distributed random variables with mean zero and variance one, each defined on a fixed probability space $(A,\Omega,\Pr)$. A special case of dependent coefficients is also studied.

  6. Improved multivariate polynomial factoring algorithm

    Wang, P.S.


    A new algorithm for factoring multivariate polynomials over the integers based on an algorithm by Wang and Rothschild is described. The new algorithm has improved strategies for dealing with the known problems of the original algorithm, namely, the leading coefficient problem, the bad-zero problem and the occurrence of extraneous factors. It has an algorithm for correctly predetermining leading coefficients of the factors. A new and efficient p-adic algorithm named EEZ is described. Basically it is a linearly convergent variable-by-variable parallel construction. The improved algorithm is generally faster and requires less storage than the original algorithm. Machine examples with comparative timing are included.
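    The EEZ construction itself is beyond an abstract, but the task it solves, factoring a multivariate polynomial over the integers, can be exercised with sympy on an arbitrary example (note the non-trivial leading coefficient in x, the kind of thing the improved algorithm predetermines):

    ```python
    from sympy import symbols, factor, expand

    x, y, z = symbols('x y z')

    # Build a product with a non-trivial leading coefficient, expand it,
    # and let factor() rediscover the irreducible factors over the integers.
    p = expand((3*x**2*y + z) * (x*y - 2*z**2) * (x + y + z))
    f = factor(p)
    ```

    Verifying `expand(f) == p` confirms the factorization; sympy's factoring internals are, of course, not Wang's algorithm verbatim.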

  7. Fourier series and orthogonal polynomials

    Jackson, Dunham


    This text for undergraduate and graduate students illustrates the fundamental simplicity of the properties of orthogonal functions and their developments in related series. Starting with a definition and explanation of the elements of Fourier series, the text follows with examinations of Legendre polynomials and Bessel functions. Boundary value problems consider Fourier series in conjunction with Laplace's equation in an infinite strip and in a rectangle, with a vibrating string, in three dimensions, in a sphere, and in other circumstances. An overview of Pearson frequency functions is followed…

  8. Killings, duality and characteristic polynomials

    Álvarez, Enrique; Borlaf, Javier; León, José H.


    In this paper the complete geometrical setting of (lowest-order) abelian T-duality is explored with the help of some new geometrical tools (the reduced formalism). In particular, all invariant polynomials (the integrands of the characteristic classes) can be explicitly computed for the dual model in terms of quantities pertaining to the original one, with the help of the canonical connection, whose intrinsic characterization is given. Using our formalism, the physically relevant, T-duality invariant result that top forms vanish when there is an isometry without fixed points is easily proved. © 1998

  9. Orthogonal polynomials and random matrices

    Deift, Percy


    This volume expands on a set of lectures held at the Courant Institute on Riemann-Hilbert problems, orthogonal polynomials, and random matrix theory. The goal of the course was to prove universality for a variety of statistical quantities arising in the theory of random matrix models. The central question was the following: Why do very general ensembles of random n × n matrices exhibit universal behavior as n → ∞? The main ingredient in the proof is the steepest descent method for oscillatory Riemann-Hilbert problems.

  10. Introduction to Real Orthogonal Polynomials


    uses Green's functions. As motivation, consider the Dirichlet problem for the unit circle in the plane, which involves finding a harmonic function u(r, …). [The remainder of the scanned abstract, q-series orthogonality relations for q-Jacobi-type polynomials, is garbled beyond recovery.] … motivation and justification for continued study of the intrinsic structure of orthogonal polynomials.

  11. An uncertainty inclusive un-mixing model to identify tracer non-conservativeness

    Sherriff, Sophie; Rowan, John; Franks, Stewart; Fenton, Owen; Jordan, Phil; hUallacháin, Daire Ó.


    Sediment fingerprinting is being increasingly recognised as an essential tool for catchment soil and water management. Selected physico-chemical properties (tracers) of soils and river sediments are used in a statistically-based 'un-mixing' model to apportion sediment delivered to the catchment outlet (target) to its upstream sediment sources. Development of uncertainty-inclusive approaches, taking into account uncertainties in the sampling, measurement and statistical un-mixing, is improving the robustness of results. However, methodological challenges remain, including issues of particle size and organic matter selectivity and non-conservative behaviour of tracers, relating to biogeochemical transformations along the transport pathway. This study builds on our earlier uncertainty-inclusive approach (FR2000) to detect and assess the impact of tracer non-conservativeness using synthetic data before applying these lessons to new field data from Ireland. Un-mixing was conducted on 'pristine' and 'corrupted' synthetic datasets containing three to fifty tracers (in the corrupted dataset one target tracer value was manually corrupted to replicate non-conservative behaviour). Additionally, a smaller corrupted dataset was un-mixed using a permutation version of the algorithm. Field data was collected in an 11 km² river catchment in Ireland. Source samples were collected from topsoils, subsoils, channel banks, open field drains, damaged road verges and farm tracks. Target samples were collected using time-integrated suspended sediment samplers at the catchment outlet at 6-12 week intervals from July 2012 to June 2013. Samples were dried (…) affected whereas uncertainty was only marginally impacted by the corrupted tracer. Improvement of uncertainty resulted from increasing the number of tracers in both the perfect and corrupted datasets. FR2000 was capable of detecting non-conservative tracer behaviour within the range of mean source values, therefore, it provided a more…

  12. A companion matrix for 2-D polynomials

    Boudellioua, M.S.


    In this paper, a matrix form analogous to the companion matrix which is often encountered in the theory of one dimensional (1-D) linear systems is suggested for a class of polynomials in two indeterminates and real coefficients, here referred to as two dimensional (2-D) polynomials. These polynomials arise in the context of 2-D linear systems theory. Necessary and sufficient conditions are also presented under which a matrix is equivalent to this companion form. (author). 6 refs
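    For the 1-D case the abstract takes as its model, the companion matrix of a monic polynomial has ones on the subdiagonal and the negated coefficients in its last column, so its characteristic polynomial is the polynomial itself. A numpy check with (x−1)(x−2)(x−3) = x³ − 6x² + 11x − 6:

    ```python
    import numpy as np

    coeffs = [-6.0, 11.0, -6.0]      # a0, a1, a2 of x^3 + a2*x^2 + a1*x + a0
    n = len(coeffs)

    C = np.zeros((n, n))
    C[1:, :-1] = np.eye(n - 1)       # ones on the subdiagonal
    C[:, -1] = [-c for c in coeffs]  # last column: -a0, -a1, -a2

    roots = np.linalg.eigvals(C)     # eigenvalues = polynomial roots 1, 2, 3
    ```

    The 2-D analogue in the paper plays the same role for polynomials in two indeterminates.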

  13. On polynomial solutions of the Heun equation

    Gurappa, N; Panigrahi, Prasanta K


    By making use of a recently developed method to solve linear differential equations of arbitrary order, we find a wide class of polynomial solutions to the Heun equation. We construct the series solution to the Heun equation before identifying the polynomial solutions. The Heun equation extended by the addition of a term, -σ/x, is also amenable to polynomial solutions. (letter to the editor)

  14. A new Arnoldi approach for polynomial eigenproblems

    Raeven, F.A.


    In this paper we introduce a new generalization of the method of Arnoldi for matrix polynomials. The new approach is compared with the approach of rewriting the polynomial problem into a linear eigenproblem and applying the standard method of Arnoldi to the linearised problem. The algorithm that can be applied directly to the polynomial eigenproblem turns out to be more efficient, both in storage and in computation.
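    For a quadratic matrix polynomial P(λ) = λ²M + λC + K, the rewriting the abstract refers to is the companion linearization, which doubles the problem size (the storage cost the direct Arnoldi variant avoids). A dense-solver sketch, with `eig` standing in for Arnoldi and arbitrary random matrices:

    ```python
    import numpy as np
    from scipy.linalg import eig

    rng = np.random.default_rng(2)
    n = 4
    M = np.eye(n)                     # P(lam) = lam^2 M + lam C + K
    C = rng.standard_normal((n, n))
    K = rng.standard_normal((n, n))

    # Companion linearization A z = lam B z with z = [x; lam*x]:
    # row 1 gives y = lam*x, row 2 gives -K x - C y = lam M y.
    A = np.block([[np.zeros((n, n)), np.eye(n)],
                  [-K, -C]])
    B = np.block([[np.eye(n), np.zeros((n, n))],
                  [np.zeros((n, n)), M]])

    lam, V = eig(A, B)
    x = V[:n, 0]                      # eigenvector of the original problem
    res = np.linalg.norm((lam[0]**2 * M + lam[0] * C + K) @ x)
    ```

    Substituting y = λx into the second block row recovers λ²Mx + λCx + Kx = 0, so each generalized eigenpair of (A, B) yields an eigenpair of P.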

  15. Bayer Demosaicking with Polynomial Interpolation.

    Wu, Jiaji; Anisetti, Marco; Wu, Wei; Damiani, Ernesto; Jeon, Gwanggil


    Demosaicking is a digital image process to reconstruct full-color digital images from the incomplete color samples output by an image sensor. It is an unavoidable process for many devices incorporating a camera sensor (e.g. mobile phones, tablets, etc.). In this paper, we introduce polynomial interpolation-based demosaicking (PID), a new demosaicking algorithm. Our method makes three contributions: calculation of error predictors, edge classification based on color differences, and a refinement stage using a weighted-sum strategy. Our new predictors are generated on the basis of polynomial interpolation, and can be used as a sound alternative to other predictors obtained by bilinear or Laplacian interpolation. In this paper we show how our predictors can be combined according to the proposed edge classifier. After populating the three color channels, a refinement stage is applied to enhance the image quality and reduce demosaicking artifacts. Our experimental results show that the proposed method substantially improves over existing demosaicking methods in terms of objective performance (CPSNR, S-CIELAB E, and FSIM), and visual performance.
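    The PID predictors themselves are more elaborate, but the core idea, that a polynomial fit beats a two-tap average on smooth image content, shows up already on a toy 1-D scanline (the cubic intensity profile is invented):

    ```python
    import numpy as np

    t = np.arange(8, dtype=float)
    signal = 0.5*t**3 - 2*t**2 + 3*t + 10      # smooth "scanline" intensities

    # Predict the missing sample at t = 3 from its neighbours.
    known_t = np.array([1.0, 2.0, 4.0, 5.0])
    known_v = signal[[1, 2, 4, 5]]

    bilinear = 0.5 * (signal[2] + signal[4])    # two-tap average
    coeffs = np.polyfit(known_t, known_v, deg=3)
    poly = np.polyval(coeffs, 3.0)              # cubic interpolating predictor
    ```

    The cubic fit through four neighbours reproduces the cubic profile exactly, while the average leaves a visible error; 2-D Bayer predictors exploit the same effect across color channels.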

  16. Fermionic formula for double Kostka polynomials

    Liu, Shiyuan


    The $X=M$ conjecture asserts that the $1D$ sum and the fermionic formula coincide up to some constant power. In the case of type $A$, both the $1D$ sum and the fermionic formula are closely related to Kostka polynomials. Double Kostka polynomials $K_{\Bla,\Bmu}(t)$, indexed by two double partitions $\Bla,\Bmu$, are polynomials in $t$ introduced as a generalization of Kostka polynomials. In the present paper, we consider $K_{\Bla,\Bmu}(t)$ in the special case where $\Bmu=(-,\mu'')$. We formulate…

  17. Polynomial sequences generated by infinite Hessenberg matrices

    Verde-Star Luis


    We show that an infinite lower Hessenberg matrix generates polynomial sequences that correspond to the rows of infinite lower triangular invertible matrices. Orthogonal polynomial sequences are obtained when the Hessenberg matrix is tridiagonal. We study properties of the polynomial sequences and their corresponding matrices which are related to recurrence relations, companion matrices, matrix similarity, construction algorithms, and generating functions. When the Hessenberg matrix is also Toeplitz, the polynomial sequences turn out to be of interpolatory type and we obtain additional results. For example, we show that every nonderogatory finite square matrix is similar to a unique Toeplitz-Hessenberg matrix.
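    The tridiagonal case can be made concrete: a tridiagonal Hessenberg matrix encodes a three-term recurrence, and running that recurrence generates an orthogonal polynomial sequence. Using the monic Legendre coefficients b_k = (k−1)²/(4(k−1)²−1) with zero diagonal:

    ```python
    import numpy as np
    from numpy.polynomial import Polynomial as P

    x = P([0, 1])
    p = [P([1]), x]                    # p_0 = 1, p_1 = x

    # Three-term recurrence p_k = x*p_{k-1} - b_k*p_{k-2} (monic Legendre).
    for k in range(2, 5):
        b = (k - 1)**2 / (4*(k - 1)**2 - 1)
        p.append(x * p[-1] - b * p[-2])
    ```

    This reproduces the monic Legendre polynomials x² − 1/3 and x³ − 3x/5; the off-diagonal entries of the corresponding Jacobi (tridiagonal Hessenberg) matrix carry exactly these recurrence coefficients.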

  18. Univariate and Bivariate Empirical Mode Decomposition for Postural Stability Analysis

    Jacques Duchêne


    The aim of this paper was to compare empirical mode decomposition (EMD) and two new extended methods of EMD, named complex empirical mode decomposition (complex-EMD) and bivariate empirical mode decomposition (bivariate-EMD). All methods were used to analyze stabilogram center-of-pressure (COP) time series. The two new methods are suitable for application to complex time series to extract complex intrinsic mode functions (IMFs) before the Hilbert transform is subsequently applied to the IMFs. The trace of the analytic IMF in the complex plane has a circular form, with each IMF having its own rotation frequency. The area of the circle and the average rotation frequency of the IMFs represent efficient indicators of the postural stability status of subjects. Experimental results show the effectiveness of these indicators in identifying differences in standing posture between groups.

  19. Bivariate extreme value with application to PM10 concentration analysis

    Amin, Nor Azrita Mohd; Adam, Mohd Bakri; Ibrahim, Noor Akma; Aris, Ahmad Zaharin


    This study focuses on a bivariate extreme of renormalized componentwise maxima with the generalized extreme value distribution as a marginal function. The limiting joint distributions of several parametric models are presented. Maximum likelihood estimation is employed for parameter estimation and the best model is selected based on the Akaike Information Criterion. The weekly and monthly componentwise maxima series are extracted from the original observations of daily maxima PM10 data for two air quality monitoring stations located in Pasir Gudang and Johor Bahru. Ten years of data, from 2001 to 2010, are considered for both stations. The asymmetric negative logistic model is found to be the best-fit bivariate extreme model for both the weekly and monthly componentwise maxima series. However, the dependence parameters show that the variables of the weekly maxima series are more dependent on each other than those of the monthly maxima.
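    The margin-fitting step described above (GEV by maximum likelihood, scored with AIC) can be sketched with scipy; the "daily PM10" values are simulated, not the Pasir Gudang or Johor Bahru records:

    ```python
    import numpy as np
    from scipy.stats import genextreme

    rng = np.random.default_rng(3)

    # Simulated daily "PM10" readings grouped into weekly blocks of 7 days.
    daily = rng.exponential(scale=20.0, size=(520, 7))
    weekly_max = daily.max(axis=1)           # componentwise (block) maxima

    # GEV margin fitted by maximum likelihood; AIC = 2k - 2*loglik with k = 3.
    shape, loc, scale = genextreme.fit(weekly_max)
    loglik = genextreme.logpdf(weekly_max, shape, loc, scale).sum()
    aic = 2 * 3 - 2 * loglik
    ```

    The bivariate step would then couple two such fitted margins through a dependence model (e.g. the asymmetric negative logistic), comparing candidates by AIC in the same way.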

  20. Probability distributions with truncated, log and bivariate extensions

    Thomopoulos, Nick T


    This volume presents a concise and practical overview of statistical methods and tables not readily available in other publications. It begins with a review of the commonly used continuous and discrete probability distributions. Several useful distributions that are not so common and less understood are described with examples and applications in full detail: discrete normal, left-partial, right-partial, left-truncated normal, right-truncated normal, lognormal, bivariate normal, and bivariate lognormal. Table values are provided with examples that enable researchers to easily apply the distributions to real applications and sample data. The left- and right-truncated normal distributions offer a wide variety of shapes in contrast to the symmetrically shaped normal distribution, and a newly developed spread ratio enables analysts to determine which of the three distributions best fits a particular set of sample data. The book will be highly useful to anyone who does statistical and probability analysis. This in...

  1. Chain Plot: A Tool for Exploiting Bivariate Temporal Structures

    Taylor, CC; Zempeni, A


    In this paper we present a graphical tool useful for visualizing the cyclic behaviour of bivariate time series. We investigate its properties and link it to the asymmetry of the two variables concerned. We also suggest adding approximate confidence bounds to the points on the plot and investigate the effect of lagging to the chain plot. We conclude our paper by some standard Fourier analysis, relating and comparing this to the chain plot.

  2. Spectrum-based estimators of the bivariate Hurst exponent

    Krištoufek, Ladislav


    Roč. 90, č. 6 (2014), art. 062802 ISSN 1539-3755 R&D Projects: GA ČR(CZ) GP14-11402P Institutional support: RVO:67985556 Keywords: bivariate Hurst exponent * power-law cross-correlations * estimation Subject RIV: AH - Economics Impact factor: 2.288, year: 2014

  3. Generalized neurofuzzy network modeling algorithms using Bézier-Bernstein polynomial functions and additive decomposition.

    Hong, X; Harris, C J


    This paper introduces a new neurofuzzy model construction algorithm for nonlinear dynamic systems based upon basis functions that are Bézier-Bernstein polynomial functions. The approach is generalized in that it copes with n-dimensional inputs by utilising an additive decomposition construction to overcome the curse of dimensionality associated with high n. This new construction algorithm also introduces univariate Bézier-Bernstein polynomial functions for the completeness of the generalized procedure. Like B-spline expansion based neurofuzzy systems, Bézier-Bernstein polynomial function based neurofuzzy networks hold desirable properties such as non-negativity of the basis functions, unity of support, and interpretability of the basis functions as fuzzy membership functions, with the additional advantages of structural parsimony and a Delaunay input-space partition, essentially overcoming the curse of dimensionality associated with conventional fuzzy and RBF networks. The new modeling network is based on an additive decomposition approach together with two separate basis-function formation approaches for the univariate and bivariate Bézier-Bernstein polynomial functions used in model construction. The overall network weights are then learnt using conventional least-squares methods. Numerical examples are included to demonstrate the effectiveness of this new data-based modeling approach.
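    The basis-function properties the abstract relies on, non-negativity and partition of unity, are straightforward to check for the Bernstein basis (a generic sketch, not the paper's network construction):

    ```python
    import numpy as np
    from math import comb

    def bernstein_basis(n, u):
        """Degree-n Bernstein basis: B_{i,n}(u) = C(n,i) * u^i * (1-u)^(n-i)."""
        u = np.asarray(u, dtype=float)
        return np.array([comb(n, i) * u**i * (1 - u)**(n - i)
                         for i in range(n + 1)])

    u = np.linspace(0.0, 1.0, 11)
    B = bernstein_basis(3, u)      # rows: the four cubic basis functions
    ```

    Non-negativity plus the columns summing to one at every u is what licenses reading the basis functions as fuzzy membership functions.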

  4. Polynomials formalism of quantum numbers

    Kazakov, K.V.


    Theoretical aspects of the recently suggested perturbation formalism based on the method of quantum number polynomials are considered in the context of the general anharmonicity problem. Using a diatomic molecule by way of example, it is demonstrated how the theory can be extrapolated to the case of vibrational-rotational interactions. As a result, an exact expression for the first coefficient of the Herman-Wallis factor is derived. In addition, the basic notions of the formalism are phenomenologically generalized and expanded to the problem of spin interaction. The concept of magneto-optical anharmonicity is introduced. As a consequence, an exact analogy is drawn with the well-known electro-optical theory of molecules, and a nonlinear dependence of the magnetic dipole moment of the system on the spin and wave variables is established.

  5. Polynomial solutions of nonlinear integral equations

    Dominici, Diego


    We analyze the polynomial solutions of a nonlinear integral equation, generalizing the work of Bender and Ben-Naim (2007 J. Phys. A: Math. Theor. 40 F9, 2008 J. Nonlinear Math. Phys. 15 (Suppl. 3) 73). We show that, in some cases, an orthogonal solution exists and we give its general form in terms of kernel polynomials

  6. Sibling curves of quadratic polynomials | Wiggins | Quaestiones ...

    Sibling curves were demonstrated in [1, 2] as a novel way to visualize the zeroes of real-valued functions. In [3] it was shown that a polynomial of degree n has n sibling curves. This paper focuses on the algebraic and geometric properties of the sibling curves of real and complex quadratic polynomials. Key words: Quadratic ...

  7. Topological string partition functions as polynomials

    Yamaguchi, Satoshi; Yau Shingtung


    We investigate the structure of the higher-genus topological string amplitudes on the quintic hypersurface. It is shown that the partition functions for genus higher than one can be expressed as polynomials in five generators. We also compute the explicit polynomial forms of the partition functions for genus 2, 3, and 4. Moreover, some coefficients are written down for all genera. (author)

  9. A generalization of the Bernoulli polynomials

    Pierpaolo Natalini


    A generalization of the Bernoulli polynomials and, consequently, of the Bernoulli numbers, is defined starting from suitable generating functions. Furthermore, the differential equations of these new classes of polynomials are derived by means of the factorization method introduced by Infeld and Hull (1951).
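    For the classical case being generalized, the generating-function definition t·e^{xt}/(e^t − 1) = Σ B_n(x) tⁿ/n! can be checked symbolically against sympy's built-in Bernoulli polynomials:

    ```python
    from sympy import symbols, series, exp, bernoulli, factorial, simplify

    t, x = symbols('t x')

    # Generating function of the classical Bernoulli polynomials.
    g = t * exp(x * t) / (exp(t) - 1)
    expansion = series(g, t, 0, 5).removeO()

    # The coefficient of t^n must equal B_n(x)/n!.
    for n in range(5):
        assert simplify(expansion.coeff(t, n) - bernoulli(n, x) / factorial(n)) == 0
    ```

    Generalizations of the kind in the abstract replace g by a modified generating function and rerun exactly this kind of expansion.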

  10. The Bessel polynomials and their differential operators

    Onyango Otieno, V.P.


    Differential operators associated with the ordinary and the generalized Bessel polynomials are defined. In each case the commutator bracket is constructed and shows that the differential operators associated with the Bessel polynomials and their generalized form are not commutative. Some applications of these operators to linear differential equations are also discussed. (author). 4 refs

  11. Large degree asymptotics of generalized Bessel polynomials

    J.L. López; N.M. Temme (Nico)


    Asymptotic expansions are given for large values of $n$ of the generalized Bessel polynomials $Y_n^\mu(z)$. The analysis is based on integrals that follow from the generating functions of the polynomials. A new simple expansion is given that is valid outside a compact neighborhood of the

  12. Exceptional polynomials and SUSY quantum mechanics

    We show that quantum mechanical problems which admit classical Laguerre/Jacobi polynomials as solutions of the Schrödinger equation (SE) will also admit exceptional Laguerre/Jacobi polynomials as solutions having the same eigenvalues, but with the ground state missing after a modification of the ...

  13. Connections between the matching and chromatic polynomials

    E. J. Farrell


    The main results established are (i) a connection between the matching and chromatic polynomials and (ii) a formula for the matching polynomial of a general complement of a subgraph of a graph. Some deductions on matching and chromatic equivalence and uniqueness are made.

  14. Laguerre polynomials by a harmonic oscillator

    Baykal, Melek; Baykal, Ahmet


    The study of an isotropic harmonic oscillator, using the factorization method given in Ohanian's textbook on quantum mechanics, is refined and some collateral extensions of the method related to the ladder operators and the associated Laguerre polynomials are presented. In particular, some analytical properties of the associated Laguerre polynomials are derived using the ladder operators.

  16. On Generalisation of Polynomials in Complex Plane

    Maslina Darus


    Full Text Available The generalised Bell and Laguerre polynomials of fractional order in the complex z-plane are defined and some of their properties are studied. Moreover, we prove that these polynomials are univalent solutions of second-order differential equations. Also, Laguerre-type versions of some special functions are introduced.

  17. Dual exponential polynomials and linear differential equations

    Wen, Zhi-Tao; Gundersen, Gary G.; Heittokangas, Janne


    We study linear differential equations with exponential polynomial coefficients, where exactly one coefficient is of order greater than all the others. The main result shows that a nontrivial exponential polynomial solution of such an equation has a certain dual relationship with the maximum order coefficient. Several examples illustrate our results and exhibit possibilities that can occur.

  18. Technique for image interpolation using polynomial transforms

    Escalante Ramírez, B.; Martens, J.B.; Haskell, G.G.; Hang, H.M.


    We present a new technique for image interpolation based on polynomial transforms. This is an image representation model that analyzes an image by locally expanding it into a weighted sum of orthogonal polynomials. In the discrete case, the image segment within every window of analysis is

  19. Factoring polynomials over arbitrary finite fields

    Lange, T.; Winterhof, A.


    We analyse an extension of Shoup's (Inform. Process. Lett. 33 (1990) 261–267) deterministic algorithm for factoring polynomials over finite prime fields to arbitrary finite fields. In particular, we prove the existence of a deterministic algorithm which completely factors all monic polynomials of
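
    One algebraic fact underlying such factoring algorithms is that x^p - x is the product of (x - a) over all a in GF(p), so gcd(f, x^p - x) collects exactly the linear factors of f. A hedged sketch with naive coefficient-list arithmetic (illustrative only; this is not Shoup's algorithm):

```python
# Polynomials over GF(p) as coefficient lists, lowest degree first.

def trim(f, p):
    f = [c % p for c in f]
    while len(f) > 1 and f[-1] == 0:
        f.pop()
    return f

def polymod(f, g, p):
    """Remainder of f modulo g over GF(p)."""
    f, g = trim(f, p), trim(g, p)
    inv = pow(g[-1], p - 2, p)            # inverse of g's leading coefficient
    while len(f) >= len(g) and any(f):
        coef = f[-1] * inv % p
        shift = len(f) - len(g)
        for i, gc in enumerate(g):
            f[shift + i] = (f[shift + i] - coef * gc) % p
        f = trim(f, p)
    return f

def polygcd(f, g, p):
    """Monic gcd of f and g over GF(p) by the Euclidean algorithm."""
    while any(g):
        f, g = g, polymod(f, g, p)
    f = trim(f, p)
    inv = pow(f[-1], p - 2, p)
    return [c * inv % p for c in f]

p = 5
f = [1, 0, 1]                                   # x^2 + 1 = (x + 2)(x + 3) over GF(5)
xp_minus_x = [0, p - 1] + [0] * (p - 2) + [1]   # x^p - x
assert polygcd(f, xp_minus_x, p) == [1, 0, 1]   # both roots of f lie in GF(5)
```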

  20. Application of polynomial preconditioners to conservation laws

    Geurts, Bernardus J.; van Buuren, R.; Lu, H.


    Polynomial preconditioners which are suitable in implicit time-stepping methods for conservation laws are reviewed and analyzed. The preconditioners considered are either based on a truncation of a Neumann series or on Chebyshev polynomials for the inverse of the system-matrix. The latter class of

  1. On the number of polynomial solutions of Bernoulli and Abel polynomial differential equations

    Cima, A.; Gasull, A.; Mañosas, F.


    In this paper we determine the maximum number of polynomial solutions of Bernoulli differential equations and of some integrable polynomial Abel differential equations. As far as we know, the tools used to prove our results have not been utilized before for studying this type of question. We show that the problems addressed can be reduced to knowing the number of polynomial solutions of a related polynomial equation of arbitrary degree. We then approach these equations either by applying several tools developed to study extended Fermat problems for polynomial equations, or by reducing the question to the computation of the genus of certain associated planar algebraic curves.

  2. Matrix product formula for Macdonald polynomials

    Cantini, Luigi; de Gier, Jan; Wheeler, Michael


    We derive a matrix product formula for symmetric Macdonald polynomials. Our results are obtained by constructing polynomial solutions of deformed Knizhnik-Zamolodchikov equations, which arise by considering representations of the Zamolodchikov-Faddeev and Yang-Baxter algebras in terms of t-deformed bosonic operators. These solutions are generalized probabilities for particle configurations of the multi-species asymmetric exclusion process, and form a basis of the ring of polynomials in n variables whose elements are indexed by compositions. For weakly increasing compositions (anti-dominant weights), these basis elements coincide with non-symmetric Macdonald polynomials. Our formulas imply a natural combinatorial interpretation in terms of solvable lattice models. They also imply that normalizations of stationary states of multi-species exclusion processes are obtained as Macdonald polynomials at q = 1.

  4. Arabic text classification using Polynomial Networks

    Mayy M. Al-Tahrawi


    Full Text Available In this paper, an Arabic statistical learning-based text classification system has been developed using Polynomial Neural Networks. Polynomial Networks have recently been applied to English text classification, but they were never used for Arabic text classification. In this research, we investigate the performance of Polynomial Networks in classifying Arabic texts. Experiments are conducted on a widely used Arabic dataset in text classification: the Al-Jazeera News dataset. We chose this dataset to enable direct comparisons of the performance of the Polynomial Networks classifier versus other well-known classifiers on this dataset in the literature of Arabic text classification. Results of the experiments show that the Polynomial Networks classifier is competitive with the state-of-the-art algorithms in the field of Arabic text classification.
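
    The core idea behind polynomial (network) classifiers can be sketched as expanding input features with low-degree monomials, so that a linear model on the expanded features realizes polynomial decision boundaries. A minimal illustration (not the paper's actual architecture):

```python
# Degree-2 polynomial feature expansion; feature values are invented.
from itertools import combinations_with_replacement

def poly2_features(x):
    """[1] + [x_i] + [x_i * x_j for i <= j] for a feature vector x."""
    feats = [1.0] + [float(v) for v in x]
    feats += [x[i] * x[j]
              for i, j in combinations_with_replacement(range(len(x)), 2)]
    return feats

# A 3-dimensional vector expands to 1 + 3 + 6 = 10 features:
assert len(poly2_features([0.2, 0.5, 0.1])) == 10
```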

  5. Generation, Validation, and Application of Abundance Map Reference Data for Spectral Unmixing

    Williams, McKay D.

    Reference data ("ground truth") maps traditionally have been used to assess the accuracy of imaging spectrometer classification algorithms. However, these reference data can be prohibitively expensive to produce, often do not include the sub-pixel abundance estimates necessary to assess spectral unmixing algorithms, and lack published validation reports. Our research proposes methodologies to efficiently generate, validate, and apply abundance map reference data (AMRD) to airborne remote sensing scenes. We generated scene-wide AMRD for three different remote sensing scenes using our remotely sensed reference data (RSRD) technique, which spatially aggregates unmixing results from fine scale imagery (e.g., 1-m Ground Sample Distance (GSD)) to co-located coarse scale imagery (e.g., 10-m GSD or larger). We validated the accuracy of this methodology by estimating AMRD in 51 randomly selected 10 m x 10 m plots, using seven independent methods and observers, including field surveys by two observers, imagery analysis by two observers, and RSRD using three algorithms. Results indicated statistically significant differences between all versions of AMRD, suggesting that all forms of reference data need to be validated. Given these significant differences between the independent versions of AMRD, we proposed that the mean of all (MOA) versions of reference data for each plot and class was most likely to represent true abundances. We then compared each version of AMRD to MOA. Best case accuracy was achieved by a version of imagery analysis, which had a mean coverage area error of 2.0%, with a standard deviation of 5.6%. One of the RSRD algorithms was nearly as accurate, achieving a mean error of 3.0%, with a standard deviation of 6.3%, showing the potential of RSRD-based AMRD generation. 
Application of validated AMRD to specific coarse scale imagery involved three main parts: 1) spatial alignment of coarse and fine scale imagery, 2) aggregation of fine scale abundances to produce
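
    The RSRD aggregation step described above, turning fine-scale classifications into coarse-pixel abundances, can be sketched as a simple fraction count. The 4x4 label block and class names below are invented for illustration:

```python
# Sketch: the abundance of each class in a coarse pixel is the fraction of
# co-located fine-scale classified pixels.
from collections import Counter

def coarse_abundances(fine_labels):
    """Per-class fractions among the fine pixels under one coarse pixel."""
    flat = [lab for row in fine_labels for lab in row]
    counts = Counter(flat)
    total = len(flat)
    return {cls: n / total for cls, n in counts.items()}

block = [["veg", "veg", "soil", "soil"],
         ["veg", "veg", "soil", "water"],
         ["veg", "soil", "soil", "water"],
         ["veg", "veg", "soil", "water"]]
ab = coarse_abundances(block)
assert abs(sum(ab.values()) - 1.0) < 1e-12   # abundances sum to one
assert ab["veg"] == 7 / 16
```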

  6. Application of hierarchical Bayesian unmixing models in river sediment source apportionment

    Blake, Will; Smith, Hugh; Navas, Ana; Bodé, Samuel; Goddard, Rupert; Zou Kuzyk, Zou; Lennard, Amy; Lobb, David; Owens, Phil; Palazon, Leticia; Petticrew, Ellen; Gaspar, Leticia; Stock, Brian; Boeckx, Pacsal; Semmens, Brice


    Fingerprinting and unmixing concepts are used widely across environmental disciplines for forensic evaluation of pollutant sources. In aquatic and marine systems, this includes tracking the source of organic and inorganic pollutants in water and linking problem sediment to soil erosion and land use sources. It is, however, the particular complexity of ecological systems that has driven creation of the most sophisticated mixing models, primarily to (i) evaluate diet composition in complex ecological food webs, (ii) inform population structure and (iii) explore animal movement. In the context of the new hierarchical Bayesian unmixing model MixSIAR, developed to characterise intra-population niche variation in ecological systems, we evaluate the linkage between ecological 'prey' and 'consumer' concepts and river basin sediment 'source' and sediment 'mixtures' to exemplify the value of ecological modelling tools to river basin science. Recent studies have outlined advantages presented by Bayesian unmixing approaches in handling complex source and mixture datasets while dealing appropriately with uncertainty in parameter probability distributions. MixSIAR is unique in that it allows individual fixed and random effects associated with mixture hierarchy, i.e. factors that might exert an influence on model outcome for mixture groups, to be explored within the source-receptor framework. This offers new and powerful ways of interpreting river basin apportionment data. In this contribution, key components of the model are evaluated in the context of common experimental designs for sediment fingerprinting studies, namely simple, nested and distributed catchment sampling programmes. Illustrative examples using geochemical and compound-specific stable isotope datasets are presented and used to discuss best practice, with specific attention to (1) the tracer selection process, (2) incorporation of fixed effects relating to sample timeframe and sediment type in the modelling

  7. Arctic lead detection using a waveform unmixing algorithm from CryoSat-2 data

    Lee, S.; Im, J.


    Arctic areas consist of ice floes, leads, and polynyas. While leads and polynyas account for only a small part of the Arctic Ocean, they play a key role in exchanging heat flux, moisture, and momentum between the atmosphere and ocean in wintertime because of their huge temperature difference. In this study, a linear waveform unmixing approach was proposed to detect lead fraction. CryoSat-2 waveforms for pure leads, sea ice, and ocean were used as end-members, based on visual interpretation of MODIS images coincident with CryoSat-2 data. The unmixing model produced lead, sea ice, and ocean abundances, and a threshold (> 0.7) was applied to make a binary classification between lead and sea ice. The unmixing model produced better results than the existing models in the literature, which are based on simple thresholding approaches. The results were also comparable with our previous research using machine learning based models (i.e., decision trees and random forest). A monthly lead fraction was calculated by dividing the number of detected leads by the total number of measurements. The lead fraction around the Beaufort Sea and Fram Strait was high due to the anti-cyclonic rotation of the Beaufort Gyre and the outflow of sea ice to the Atlantic. The lead fraction maps produced in this study matched well with monthly lead fraction maps in the literature. The areas with thin sea ice identified in our previous research correspond to the high lead fraction areas in the present study. Furthermore, sea ice roughness from the ASCAT scatterometer was compared to a lead fraction map to examine the relationship between surface roughness and lead distribution.
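
    The unmixing step above can be sketched as an ordinary least-squares fit of the observed waveform to the three end-member waveforms, followed by the 0.7 threshold on the lead abundance. The waveform numbers below are invented; real CryoSat-2 processing involves additional normalization and constraints:

```python
# Least-squares unmixing of a waveform into lead / sea-ice / ocean
# abundances via the normal equations, then the > 0.7 lead threshold.

def solve(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in reversed(range(n)):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def unmix(waveform, endmembers):
    """Least-squares abundances for waveform ~ sum_i a_i * endmembers[i]."""
    dot = lambda u, v: sum(x * y for x, y in zip(u, v))
    G = [[dot(ei, ej) for ej in endmembers] for ei in endmembers]
    c = [dot(ei, waveform) for ei in endmembers]
    return solve(G, c)

lead    = [0.0, 0.1, 1.0, 0.1, 0.0]   # sharp specular peak (invented)
sea_ice = [0.1, 0.4, 0.6, 0.5, 0.3]   # diffuse return (invented)
ocean   = [0.2, 0.3, 0.4, 0.4, 0.4]   # broad return (invented)

obs = [0.8 * l + 0.2 * s for l, s in zip(lead, sea_ice)]   # 80% lead, 20% ice
a = unmix(obs, [lead, sea_ice, ocean])
is_lead = a[0] > 0.7                  # binary lead / sea-ice decision
```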

  8. Imaging the distribution of photoswitchable probes with temporally-unmixed multispectral optoacoustic tomography

    Deán-Ben, X. Luís.; Stiel, Andre C.; Jiang, Yuanyuan; Ntziachristos, Vasilis; Westmeyer, Gil G.; Razansky, Daniel


    Synthetic and genetically encoded chromo- and fluorophores have become indispensable tools for biomedical research, enabling a myriad of applications in imaging modalities based on biomedical optics. The versatility offered by the optoacoustic (photoacoustic) contrast mechanism makes it possible to detect signals from any substance absorbing light, and hence these probes can be used as optoacoustic contrast agents. While contrast versatility generally represents an advantage of optoacoustics, the strong background signal generated by light absorption in endogenous chromophores hampers the optoacoustic capacity to detect a photo-absorbing agent of interest. Increasing the optoacoustic sensitivity then depends on the capability to differentiate specific features of such an agent. For example, multispectral optoacoustic tomography (MSOT) exploits illumination of the tissue at multiple optical wavelengths to spectrally resolve (unmix) the contributions of different chromophores. Herein, we present an alternative approach to enhance the sensitivity and specificity in the detection of optoacoustic contrast agents. This is achieved with photoswitchable probes that change optical absorption upon illumination at specific optical wavelengths. Thereby, temporally unmixed MSOT (tuMSOT) is based on photoswitching the compounds according to defined schedules to elicit specific time-varying optoacoustic signals, and then using temporal unmixing algorithms to locate the contrast agent based on its particular temporal profile. The photoswitching kinetics are further affected by light intensity, so that tuMSOT can be employed to estimate the light fluence distribution in a biological sample. The performance of the method is demonstrated herein with the reversibly switchable fluorescent protein Dronpa and its fast-switching, fatigue-resistant variant Dronpa-M159T.

  9. Comparing performance of standard and iterative linear unmixing methods for hyperspectral signatures

    Gault, Travis R.; Jansen, Melissa E.; DeCoster, Mallory E.; Jansing, E. David; Rodriguez, Benjamin M.


    Linear unmixing is a method of decomposing a mixed signature to determine the component materials that are present in a sensor's field of view, along with the abundances at which they occur. Linear unmixing assumes that energy from the materials in the field of view is mixed in a linear fashion across the spectrum of interest. Traditional unmixing methods can take advantage of adjacent pixels in the decomposition algorithm, but this is not the case for point sensors. This paper explores several iterative and non-iterative methods for linear unmixing, and examines their effectiveness at identifying the individual signatures that make up simulated single-pixel mixed signatures, along with their corresponding abundances. The major hurdle addressed in the proposed method is that no neighboring pixel information is available for the spectral signature of interest. Testing is performed using two collections of spectral signatures from the Johns Hopkins University Applied Physics Laboratory's Signatures Database software (SigDB): a hand-selected small dataset of 25 distinct signatures, and a larger dataset of approximately 1600 pure visible/near-infrared/short-wave-infrared (VIS/NIR/SWIR) spectra. Simulated spectra are created with three- and four-material mixtures randomly drawn from a dataset originating from SigDB, where the abundance of one material is swept in 10% increments from 10% to 90%, with the abundances of the other materials equally divided amongst the remainder. For the smaller dataset of 25 signatures, all combinations of three or four materials are used to create simulated spectra, from which the accuracy of the materials returned, as well as the correctness of the abundances, is compared to the inputs. The experiment is expanded to include the signatures from the larger dataset of almost 1600 signatures, evaluated using a Monte Carlo scheme with 5000 draws of three or four materials to create the simulated mixed signatures. 
The spectral similarity of the inputs to the
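
    One simple iterative scheme in the family such comparisons cover is projected gradient descent with a nonnegativity clip. This is an assumed example for illustration, not a reproduction of the paper's specific algorithms; spectra and the 0.3/0.7 mixture are invented:

```python
# Projected gradient descent for min ||sum_i a_i E_i - w||^2 with a_i >= 0.

def nnls_pg(E, w, steps=5000, lr=0.01):
    """Nonnegative least-squares abundances by projected gradient descent."""
    n, m = len(E), len(w)
    a = [1.0 / n] * n                     # start from a uniform mixture
    for _ in range(steps):
        resid = [sum(E[i][k] * a[i] for i in range(n)) - w[k] for k in range(m)]
        grad = [2 * sum(E[i][k] * resid[k] for k in range(m)) for i in range(n)]
        a = [max(0.0, a[i] - lr * grad[i]) for i in range(n)]   # clip to a >= 0
    return a

E = [[1, 0, 0, 1],            # candidate spectrum 1 (invented)
     [0, 1, 1, 0],            # candidate spectrum 2 (invented)
     [1, 1, 0, 0]]            # candidate spectrum 3 (not in the mixture)
w = [0.3, 0.7, 0.7, 0.3]      # exact 0.3/0.7 mixture of spectra 1 and 2
a = nnls_pg(E, w)             # a approaches [0.3, 0.7, 0.0]
```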

  10. Computational approach to Thornley's problem by bivariate operational calculus

    Bazhlekova, E.; Dimovski, I.


    Thornley's problem is an initial-boundary value problem with a nonlocal boundary condition for a linear one-dimensional reaction-diffusion equation, used as a mathematical model of spiral phyllotaxis in botany. Applying a bivariate operational calculus, we find an explicit representation of the solution, containing two convolution products of special solutions and the arbitrary initial and boundary functions. We use a non-classical convolution with respect to the space variable, extending in this way the classical Duhamel principle. The special solutions involved are represented in the form of fast convergent series. Numerical examples are considered to show the application of the present technique and to analyze the character of the solution.

  11. On the performance of Autoregressive Moving Average Polynomial

    Timothy Ademakinwa

    Distributed Lag (PDL) model, Autoregressive Polynomial Distributed Lag ... Moving Average Polynomial Distributed Lag (ARMAPDL) model. ... Global Journal of Mathematics and Statistics, Vol. 1. ... Business and Economic Research Center.

  12. Bivariate generalized Pareto distribution for extreme atmospheric particulate matter

    Amin, Nor Azrita Mohd; Adam, Mohd Bakri; Ibrahim, Noor Akma; Aris, Ahmad Zaharin


    The high particulate matter (PM10) level is a prominent issue, causing various impacts on human health and seriously affecting the economy. The asymptotic theory of extreme values is applied to analyze the relation between extreme PM10 data from two nearby air quality monitoring stations. The series of daily maxima of PM10 for the Johor Bahru and Pasir Gudang stations are considered for the years 2001 to 2010. The 85% and 95% marginal quantiles are applied to determine the threshold values and hence construct the series of exceedances over the chosen threshold. The logistic, asymmetric logistic, negative logistic and asymmetric negative logistic models are considered as the dependence function for the joint distribution of a bivariate observation. Maximum likelihood estimation is employed for parameter estimation. The best fitted model is chosen based on the Akaike Information Criterion and the quantile plots. It is found that the asymmetric logistic model gives the best fit for the bivariate extreme PM10 data and shows weak dependence between the two stations.

  13. Neck curve polynomials in neck rupture model

    Kurniadi, Rizal; Perkasa, Yudha S.; Waris, Abdul


    The Neck Rupture Model is a model that explains the scission process, in which the liquid drop has its smallest radius at a certain position. In the older approach the rupture position is determined randomly, hence the name Random Neck Rupture Model (RNRM). The neck curve polynomials have been employed in the Neck Rupture Model to calculate the fission yield of the neutron-induced fission reaction of 280 X 90, varying the order of the polynomials as well as the temperature. The neck curve polynomial approximation shows important effects on the shape of the fission yield curve.

  14. Rotational Spectral Unmixing of Exoplanets: Degeneracies between Surface Colors and Geography

    Fujii, Yuka [NASA Goddard Institute for Space Studies, New York, NY 10025 (United States)]; Lustig-Yaeger, Jacob [Astronomy Department, University of Washington, Box 951580, Seattle, WA 98195 (United States)]; Cowan, Nicolas B. [Department of Earth and Planetary Sciences, McGill University, Montreal, Quebec, H3A 0E8 (Canada)]


    Unmixing the disk-integrated spectra of exoplanets provides hints about heterogeneous surfaces that we cannot directly resolve in the foreseeable future. It is particularly important for terrestrial planets with diverse surface compositions like Earth. Although previous work on unmixing the spectra of Earth from disk-integrated multi-band light curves appeared successful, we point out a mathematical degeneracy between the surface colors and their spatial distributions. Nevertheless, useful constraints on the spectral shape of individual surface types may be obtained from the premise that albedo is everywhere between 0 and 1. We demonstrate the degeneracy and the possible constraints using both mock data based on a toy model of Earth, as well as real observations of Earth. Despite the severe degeneracy, we are still able to recover an approximate albedo spectrum for an ocean. In general, we find that surfaces are easier to identify when they cover a large fraction of the planet and when their spectra approach zero or unity in certain bands.
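
    The degeneracy has a compact linear-algebra statement: if the multi-band light curves are A = G S for geography weights G and surface spectra S, then (G M) and (M^{-1} S) yield the same A for any invertible M. A toy numeric check (all numbers invented):

```python
# A = G S cannot separate colors from geography: remixing by any invertible
# M leaves the observable A unchanged.

def matmul(X, Y):
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)]
            for row in X]

G = [[0.7, 0.3],
     [0.2, 0.8]]                 # area fractions of two surface types
S = [[0.1, 0.5, 0.9],
     [0.6, 0.4, 0.2]]            # two albedo spectra in three bands

M    = [[1.0, 0.4], [0.0, 1.0]]  # an invertible remixing of the surfaces
Minv = [[1.0, -0.4], [0.0, 1.0]]

A1 = matmul(G, S)
A2 = matmul(matmul(G, M), matmul(Minv, S))
assert all(abs(x - y) < 1e-12 for r1, r2 in zip(A1, A2) for x, y in zip(r1, r2))
# Note: M^{-1} S here has a negative entry, an unphysical albedo; this is
# the kind of 0..1 constraint the paper uses to limit the degeneracy.
```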

  16. UNMIX Methods Applied to Characterize Sources of Volatile Organic Compounds in Toronto, Ontario

    Eugeniusz Porada


    Full Text Available UNMIX, a receptor modeling routine from the U.S. Environmental Protection Agency (EPA), was used to model volatile organic compound (VOC) receptors at four urban sites in Toronto, Ontario. VOC ambient concentration data acquired in 2000–2009 for 175 VOC species at four air quality monitoring stations were analyzed. UNMIX, by performing multiple modeling attempts upon varying VOC menus, while rejecting the results that were not reliable, allowed sources to be discriminated by their most consistent chemical characteristics. The method assessed occurrences of VOCs in sources typical of the urban environment (traffic, evaporative emissions of fuels, banks of fugitive inert gases), in industrial point sources (plastic-, polymer-, and metalworking manufactures), and in secondary sources (releases from water, sediments, and contaminated urban soil). The robust modeling used here produces chemical profiles of putative VOC sources that, if combined with known environmental fates of VOCs, can be used to assign physical sources’ shares of VOC emissions into the atmosphere. This in turn provides a means of assessing the impact of environmental policies on one hand, and industrial activities on the other, on VOC air pollution.

  17. Direct comparison of Fe-Cr unmixing characterization by atom probe tomography and small angle scattering

    Couturier, Laurent [Univ. Grenoble Alpes, SIMAP, F-38000 Grenoble (France); CNRS, SIMAP, F-38000 Grenoble (France); Department of Materials Engineering, The University of British Columbia, Vancouver, BC V6T 1Z4 (Canada)]; De Geuser, Frédéric; Deschamps, Alexis [Univ. Grenoble Alpes, SIMAP, F-38000 Grenoble (France); CNRS, SIMAP, F-38000 Grenoble (France)]


    The fine microstructure obtained by unmixing of a solid solution, either by classical precipitation or by spinodal decomposition, is often characterized by small angle scattering or atom probe tomography. This article shows that a common data analysis framework can be used to analyze data obtained from these two techniques. An example of the application of this common analysis is given for the characterization of the unmixing of the Fe-Cr matrix of a 15-5 PH stainless steel during long-term ageing at 350 °C and 400 °C. A direct comparison of the Cr composition fluctuation amplitudes and characteristic lengths obtained with both techniques is made, showing quantitative agreement for the fluctuation amplitudes. The origin of the remaining discrepancy in the characteristic lengths is discussed. - Highlights: •Common analysis framework for atom probe tomography and small angle scattering •Comparison of the same microstructural characteristics obtained using both techniques •Good correlation of Cr composition fluctuation amplitudes from both techniques •Good correlation of Cr composition fluctuation amplitudes with the classic V parameter.

  18. (LMRG): Microscope Resolution, Objective Quality, Spectral Accuracy and Spectral Un-mixing

    Bayles, Carol J.; Cole, Richard W.; Eason, Brady; Girard, Anne-Marie; Jinadasa, Tushare; Martin, Karen; McNamara, George; Opansky, Cynthia; Schulz, Katherine; Thibault, Marc; Brown, Claire M.


    The second study by the LMRG focuses on measuring confocal laser scanning microscope (CLSM) resolution, objective lens quality, spectral imaging accuracy and spectral un-mixing. Affordable test samples for each aspect of the study were designed, prepared and sent to 116 labs from 23 countries across the globe. Detailed protocols were designed for the three tests and customized for most of the major confocal instruments being used by the study participants. One protocol developed for measuring resolution and objective quality was recently published in Nature Protocols (Cole, R. W., T. Jinadasa, et al. (2011). Nature Protocols 6(12): 1929–1941). The first study involved 3D imaging of sub-resolution fluorescent microspheres to determine the microscope point spread function. Results of the resolution studies as well as point spread function quality (i.e. objective lens quality) from 140 different objective lenses will be presented. The second study of spectral accuracy looked at the reflection of the laser excitation lines into the spectral detection in order to determine the accuracy of these systems to report back the accurate laser emission wavelengths. Results will be presented from 42 different spectral confocal systems. Finally, samples with double orange beads (orange core and orange coating) were imaged spectrally and the imaging software was used to un-mix fluorescence signals from the two orange dyes. Results from 26 different confocal systems will be summarized. Time will be left to discuss possibilities for the next LMRG study.

  19. Improving Land Use/Land Cover Classification by Integrating Pixel Unmixing and Decision Tree Methods

    Chao Yang


    Full Text Available Decision tree classification is one of the most efficient methods for obtaining land use/land cover (LULC) information from remotely sensed imagery. However, traditional decision tree classification methods cannot effectively eliminate the influence of mixed pixels. This study aimed to integrate pixel unmixing and decision trees to improve LULC classification by removing mixed-pixel influence. The abundance and minimum noise fraction (MNF) results obtained from mixed-pixel decomposition were added to the decision tree multi-features, using a three-dimensional (3D) terrain model created by fusing the image with a digital elevation model (DEM), to select training samples (ROIs) and improve ROI separability. A Landsat-8 OLI image of the Yunlong Reservoir Basin in Kunming was used to test the proposed method. Study results showed that the Kappa coefficient and the overall accuracy of the integrated pixel unmixing and decision tree method increased by 0.093 and 10%, respectively, compared with the original decision tree method. The proposed method could effectively eliminate the influence of mixed pixels and improve the accuracy of complex LULC classifications.

  20. Spatial unmixing for environmental impact monitoring of mining using UAS and WV-2

    Delalieux, S.; Livens, S.; Goossens, M.; Reusen, I.; Tote, C.


    The three principal activities of the mineral resources mining industry - mining, mineral processing and metallurgical extraction - all produce waste. The environmental impact of these activities depends on many factors, in particular, the type of mining and the size of the operation. The effects of the mining (extraction) stage tend to be mainly local, associated with surface disturbance, the production of large amounts of solid waste material, and the spread of chemically reactive particulate matter to the atmosphere and hydrosphere. Many studies have shown the potential of remote sensing for environmental impact monitoring, e.g., [1]. However, its applicability has been limited due to the inherent spatial-spectral and temporal trade-off of most sensors. More recently, miniaturization of sensors makes it possible to capture color images from unmanned aerial systems (UAS) with a very high spatial resolution. In addition, the UAS can be deployed in a very flexible manner, allowing high temporal resolution imaging. More detailed spectral information is available from multispectral images, albeit at lower spatial resolution. Combining both types of images using image fusion can help to overcome the spatial-spectral trade-off and provide a new tool for more detailed monitoring of environmental impacts. Within the framework of the ImpactMin project, funded by the Framework Programme 7 of the European Commission, the objective of this study is to implement and apply the spatial unmixing algorithm, as proposed by [2], on images of the 'Vihovici Coal Mine' area, located in the Mostar Valley, Bosnia and Herzegovina. A WorldView2 (WV2) satellite image will be employed, which provides 8-band multispectral data at a spatial resolution of 2m. High spatial resolution images, obtained by a SmartPlanes UAS, will provide RGB data with 0.05m spatial resolution. 
The spatial unmixing technique is based on the idea that a linear mixing model can be used to perform the downscaling of

  1. Comparison of Model Reliabilities from Single-Step and Bivariate Blending Methods

    Taskinen, Matti; Mäntysaari, Esa; Lidauer, Martin


    Model-based reliabilities in genetic evaluation are compared between three methods: animal model BLUP, single-step BLUP, and bivariate blending after genomic BLUP. The original bivariate blending is revised in this work to better account for animal models. The study data is extracted from ... be calculated. Model reliabilities by the single-step and bivariate blending methods were higher than by the animal model, due to genomic information. Compared to the single-step method, the bivariate blending method reliability estimates were, in general, lower. Computationally, the bivariate blending method was, on the other hand, lighter than the single-step method.

  2. Multilevel weighted least squares polynomial approximation

    Haji-Ali, Abdul-Lateef; Nobile, Fabio; Tempone, Raul; Wolfers, Sö ren


    ... obtaining polynomial approximations with a single level method can become prohibitively expensive, as it requires a sufficiently large number of samples, each computed with a sufficiently small discretization error. As a solution to this problem, we propose

  3. Polynomials in finite geometries and combinatorics

    Blokhuis, A.; Walker, K.


    It is illustrated how elementary properties of polynomials can be used to attack extremal problems in finite and Euclidean geometry, and in combinatorics. Also a new result, related to the problem of neighbourly cylinders, is presented.

  4. Polynomial analysis of ambulatory blood pressure measurements

    Zwinderman, A. H.; Cleophas, T. A.; Cleophas, T. J.; van der Wall, E. E.


    In normotensive subjects blood pressures follow a circadian rhythm. A circadian rhythm in hypertensive patients is less well established, and may be clinically important, particularly with rigorous treatments of daytime blood pressures. Polynomial analysis of ambulatory blood pressure monitoring

  5. Handbook on semidefinite, conic and polynomial optimization

    Anjos, Miguel F


    This book offers the reader a snapshot of the state-of-the-art in the growing and mutually enriching areas of semidefinite optimization, conic optimization and polynomial optimization. It covers theory, algorithms, software and applications.

  6. Transversals of Complex Polynomial Vector Fields

    Dias, Kealey

    Vector fields in the complex plane are defined by assigning the vector determined by the value P(z) to each point z in the complex plane, where P is a polynomial of one complex variable. We consider special families of so-called rotated vector fields that are determined by a polynomial multiplied by rotational constants. Transversals are a certain class of curves for such a family of vector fields that represent the bifurcation states for this family of vector fields. More specifically, transversals are curves that coincide with a homoclinic separatrix for some rotation of the vector field. Given a concrete polynomial, it seems to take quite a bit of work to prove that it is generic, i.e. structurally stable. This has been done for a special class of degree d polynomial vector fields having simple equilibrium points at the d roots of unity, d odd. In proving that such vector fields are generic...

  7. Generalized Catalan numbers, sequences and polynomials

    KOÇ, Cemal; GÜLOĞLU, İsmail; ESİN, Songül


    In this paper we present an algebraic interpretation for generalized Catalan numbers. We describe them as dimensions of certain subspaces of multilinear polynomials. This description is of utmost importance in the investigation of annihilators in exterior algebras.

  8. Schur Stability Regions for Complex Quadratic Polynomials

    Cheng, Sui Sun; Huang, Shao Yuan


    Given a quadratic polynomial with complex coefficients, necessary and sufficient conditions are found in terms of the coefficients such that all its roots have absolute values less than 1. (Contains 3 figures.)

  9. A Bivariate return period for levee failure monitoring

    Isola, M.; Caporali, E.


    Levee breaches are strongly linked with the interaction processes among water, soil and structure, so many factors affect breach development. One of the main factors is the hydraulic load, characterized by intensity and duration, i.e. by the flood event hydrograph. Levee design is generally based on the magnitude of the hydraulic load, without considering fatigue failure due to load duration. Moreover, in many cases levee breaches occur during floods of lower magnitude than the design flood. In order to improve flood risk management strategies, we build here a procedure based on a multivariate statistical analysis of flood peak and volume, together with an analysis of past levee failure events. In particular, in order to define the probability of occurrence of the hydraulic load on a levee, a bivariate copula model is used to obtain the joint distribution of flood peak and volume. Flood peak expresses the load magnitude, while volume expresses the stress over time. We consider the annual flood peak and the corresponding volume, given by the hydrograph area between the beginning and the end of the event. The beginning of the event is identified as an abrupt rise of the discharge by more than 20%. The end is identified as the point from which the receding limb is characterized by baseflow, using a nonlinear reservoir algorithm as the baseflow separation technique. On this basis, with the aim of defining warning thresholds, we consider the past levee failure events and their bivariate return period (BTr), compared with the estimate from a traditional univariate model. The discharge data of 30 hydrometric stations on the Arno River in Tuscany, Italy, for the period 1995-2016 are analysed. A database of levee failure events, recording for each event the location as well as the failure mode, is also created. The events were registered in the period 2000-2014 by EEA
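The bivariate return period described above can be written down explicitly once a copula is chosen. A minimal sketch of the "AND" joint return period with a Gumbel-Hougaard copula; the copula family, its parameter, and the probabilities below are illustrative assumptions, not values fitted to the Arno data:

```python
import numpy as np

def gumbel_copula(u, v, theta):
    """Gumbel-Hougaard copula C(u, v); theta >= 1, theta = 1 is independence."""
    return np.exp(-(((-np.log(u))**theta + (-np.log(v))**theta)**(1.0 / theta)))

def bivariate_return_period_and(u, v, theta, mu=1.0):
    """'AND' joint return period: mean recurrence (mu = mean interarrival
    time in years) of events whose peak AND volume both exceed the
    thresholds with marginal non-exceedance probabilities u and v."""
    p_exceed_both = 1.0 - u - v + gumbel_copula(u, v, theta)
    return mu / p_exceed_both

# A flood whose peak and volume each have a 10-year univariate return
# period (u = v = 0.9), with moderate peak-volume dependence.
T = bivariate_return_period_and(0.9, 0.9, theta=2.0)
print(round(T, 1))  # larger than 10 years: both thresholds must be exceeded
```

The "AND" return period always exceeds either univariate return period, which is why a univariate model alone can misstate the rarity of the loading that a failed levee actually experienced.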

  10. Unadjusted Bivariate Two-Group Comparisons: When Simpler is Better.

    Vetter, Thomas R; Mascha, Edward J


    Hypothesis testing involves posing both a null hypothesis and an alternative hypothesis. This basic statistical tutorial discusses the appropriate use, including their so-called assumptions, of the common unadjusted bivariate tests for hypothesis testing and thus comparing study sample data for a difference or association. The appropriate choice of a statistical test is predicated on the type of data being analyzed and compared. The unpaired or independent samples t test is used to test the null hypothesis that the 2 population means are equal; rejecting it implies accepting the alternative hypothesis that the 2 population means are not equal. The unpaired t test is intended for comparing independent continuous (interval or ratio) data from 2 study groups. A common mistake is to apply several unpaired t tests when comparing data from 3 or more study groups. In this situation, an analysis of variance with post hoc (posttest) intragroup comparisons should instead be applied. Another common mistake is to apply a series of unpaired t tests when comparing sequentially collected data from 2 study groups. In this situation, a repeated-measures analysis of variance, with tests for group-by-time interaction, and post hoc comparisons, as appropriate, should instead be applied in analyzing data from sequential collection points. The paired t test is used to assess the difference in the means of 2 study groups when the sample observations have been obtained in pairs, often before and after an intervention in each study subject. The Pearson chi-square test is widely used to test the null hypothesis that 2 unpaired categorical variables, each with 2 or more nominal levels (values), are independent of each other. When the null hypothesis is rejected, one concludes that there is a probable association between the 2 unpaired categorical variables. When comparing 2 groups on an ordinal or nonnormally distributed continuous outcome variable, the 2-sample t test is usually not appropriate. The nonparametric Wilcoxon-Mann-Whitney rank sum test should instead be applied.
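The tests named above map directly onto standard library routines. A brief sketch with synthetic data; the group sizes, means, and the 2 x 2 table are made up for illustration:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Unpaired (independent-samples) t test: two independent study groups.
group_a = rng.normal(120, 10, size=40)   # e.g. systolic BP, group A
group_b = rng.normal(126, 10, size=40)   # group B
t_ind, p_ind = stats.ttest_ind(group_a, group_b)

# Paired t test: before/after measurements on the same subjects.
before = rng.normal(130, 8, size=25)
after = before - rng.normal(5, 3, size=25)   # intervention lowers BP
t_rel, p_rel = stats.ttest_rel(before, after)

# Pearson chi-square test of independence for a 2 x 2 table of counts.
table = np.array([[30, 10],
                  [18, 22]])
chi2, p_chi, dof, expected = stats.chi2_contingency(table)

print(p_ind, p_rel, p_chi, dof)
```

For 3 or more groups one would switch to `stats.f_oneway` plus post hoc comparisons rather than repeating `ttest_ind`, exactly the mistake the tutorial warns against.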

  11. About the solvability of matrix polynomial equations

    Netzer, Tim; Thom, Andreas


    We study self-adjoint matrix polynomial equations in a single variable and prove existence of self-adjoint solutions under some assumptions on the leading form. Our main result is that any self-adjoint matrix polynomial equation of odd degree with non-degenerate leading form can be solved in self-adjoint matrices. We also study equations of even degree and equations in many variables.

  12. Two polynomial representations of experimental design

    Notari, Roberto; Riccomagno, Eva; Rogantin, Maria-Piera


    In the context of algebraic statistics an experimental design is described by a set of polynomials called the design ideal. This, in turn, is generated by finite sets of polynomials. Two types of generating sets are mostly used in the literature: Groebner bases and indicator functions. We briefly describe them both, how they are used in the analysis and planning of a design and how to switch between them. Examples include fractions of full factorial designs and designs for mixture experiments.

  13. Rotation of 2D orthogonal polynomials

    Yang, B.; Flusser, Jan; Kautský, J.


    Roč. 102, č. 1 (2018), s. 44-49 ISSN 0167-8655 R&D Projects: GA ČR GA15-16928S Institutional support: RVO:67985556 Keywords : Rotation invariants * Orthogonal polynomials * Recurrent relation * Hermite-like polynomials * Hermite moments Subject RIV: JD - Computer Applications, Robotics Impact factor: 1.995, year: 2016

  14. Stability analysis of polynomial fuzzy models via polynomial fuzzy Lyapunov functions

    Bernal Reza, Miguel Ángel; Sala, Antonio; JAADARI, ABDELHAFIDH; Guerra, Thierry-Marie


    In this paper, the stability of continuous-time polynomial fuzzy models by means of a polynomial generalization of fuzzy Lyapunov functions is studied. Fuzzy Lyapunov functions have been fruitfully used in the literature for local analysis of Takagi-Sugeno models, a particular class of the polynomial fuzzy ones. Based on a recent Taylor-series approach which allows a polynomial fuzzy model to exactly represent a nonlinear model in a compact set of the state space, it is shown that a refinemen...

  15. Vertex models, TASEP and Grothendieck polynomials

    Motegi, Kohei; Sakai, Kazumitsu


    We examine the wavefunctions and their scalar products of a one-parameter family of integrable five-vertex models. At a special point of the parameter, the model investigated is related to an irreversible interacting stochastic particle system—the so-called totally asymmetric simple exclusion process (TASEP). By combining the quantum inverse scattering method with a matrix product representation of the wavefunctions, the on-/off-shell wavefunctions of the five-vertex models are represented as a certain determinant form. Up to some normalization factors, we find that the wavefunctions are given by Grothendieck polynomials, which are a one-parameter deformation of Schur polynomials. Introducing a dual version of the Grothendieck polynomials, and utilizing the determinant representation for the scalar products of the wavefunctions, we derive a generalized Cauchy identity satisfied by the Grothendieck polynomials and their duals. Several representation theoretical formulae for the Grothendieck polynomials are also presented. As a byproduct, the relaxation dynamics such as Green functions for the periodic TASEP are found to be described in terms of the Grothendieck polynomials. (paper)

  16. Many-body orthogonal polynomial systems

    Witte, N.S.


    The fundamental methods employed in the moment problem (orthogonal polynomial systems, the Lanczos algorithm, continued fraction analysis and Padé approximants) have been combined with a cumulant approach and applied to the extensive many-body problem in physics. This has yielded many new exact results for many-body systems in the thermodynamic limit, for the ground state energy, for excited state gaps, and for arbitrary ground state averages, and these are of a nonperturbative nature. These results flow from a confluence property of the three-term recurrence coefficients arising here and define a general class of many-body orthogonal polynomials. The theorems constitute an analytical solution to the Lanczos algorithm in that they are expressed in terms of the three-term recurrence coefficients α and β. These results can also be applied approximately to non-solvable models in the form of an expansion in a descending series of the system size. The zeroth order of this expansion is just the manifestation of the central limit theorem, in which a Gaussian measure and Hermite polynomials arise. The first order represents the first non-trivial order, in which classical distribution functions like the binomial distribution arise and the associated class of orthogonal polynomials are the Meixner polynomials. Among examples of systems which have infinite order in the expansion are q-orthogonal polynomials where q depends on the system size in a particular way. (author)

  17. SNPMClust: Bivariate Gaussian Genotype Clustering and Calling for Illumina Microarrays

    Stephen W. Erickson


    Full Text Available SNPMClust is an R package for genotype clustering and calling with Illumina microarrays. It was originally developed for studies using the GoldenGate custom genotyping platform but can be used with other Illumina platforms, including Infinium BeadChip. The algorithm first rescales the fluorescent signal intensity data, adds empirically derived pseudo-data to minor allele genotype clusters, then uses the package mclust for bivariate Gaussian model fitting. We compared the accuracy and sensitivity of SNPMClust to that of GenCall, Illumina's proprietary algorithm, on a data set of 94 whole-genome amplified buccal (cheek swab) DNA samples. These samples were genotyped on a custom panel which included 1064 SNPs for which the true genotype was known with high confidence. SNPMClust produced uniformly lower false call rates over a wide range of overall call rates.
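The calling step, assigning each rescaled intensity point to the genotype cluster with the highest posterior and returning a no-call when no cluster is sufficiently likely, can be sketched as follows. The cluster centres, shared covariance, and posterior cutoff are hypothetical; SNPMClust itself fits the bivariate Gaussians with mclust's EM routines:

```python
import numpy as np
from scipy.stats import multivariate_normal

# Hypothetical genotype cluster centres in a rescaled two-dimensional
# intensity space; the real package estimates these from the data.
centres = {"AA": [0.1, 1.0], "AB": [0.5, 1.0], "BB": [0.9, 1.0]}
cov = np.diag([0.01, 0.04])  # shared covariance, for the sketch only

def call_genotype(point, min_posterior=0.95):
    """Assign a genotype by bivariate Gaussian posterior (equal priors);
    return 'NC' (no call) below the confidence cutoff."""
    dens = {g: multivariate_normal.pdf(point, mean=m, cov=cov)
            for g, m in centres.items()}
    best = max(dens, key=dens.get)
    posterior = dens[best] / sum(dens.values())
    return best if posterior >= min_posterior else "NC"

print(call_genotype([0.12, 1.02]))  # prints AA: clearly inside that cluster
print(call_genotype([0.30, 1.00]))  # prints NC: ambiguous between AA and AB
```

The no-call threshold is what trades overall call rate against false call rate, the operating curve on which SNPMClust was compared to GenCall.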

  18. Efficient estimation of semiparametric copula models for bivariate survival data

    Cheng, Guang


    A semiparametric copula model for bivariate survival data is characterized by a parametric copula model of dependence and nonparametric models of two marginal survival functions. Efficient estimation for the semiparametric copula model has been recently studied for the complete data case. When the survival data are censored, semiparametric efficient estimation has only been considered for some specific copula models such as the Gaussian copulas. In this paper, we obtain the semiparametric efficiency bound and efficient estimation for general semiparametric copula models for possibly censored data. We construct an approximate maximum likelihood estimator by approximating the log baseline hazard functions with spline functions. We show that our estimates of the copula dependence parameter and the survival functions are asymptotically normal and efficient. Simple consistent covariance estimators are also provided. Numerical results are used to illustrate the finite sample performance of the proposed estimators. © 2013 Elsevier Inc.

  19. Selection effects in the bivariate brightness distribution for spiral galaxies

    Phillipps, S.; Disney, M.


    The joint distribution of total luminosity and characteristic surface brightness (the bivariate brightness distribution) is investigated for a complete sample of spiral galaxies in the Virgo cluster. The influence of selection and physical limits of various kinds on the apparent distribution are detailed. While the distribution of surface brightness for bright galaxies may be genuinely fairly narrow, faint galaxies exist right across the (quite small) range of accessible surface brightnesses so no statement can be made about the true extent of the distribution. The lack of high surface brightness bright galaxies in the Virgo sample relative to an overall RC2 sample (mostly field galaxies) supports the contention that the star-formation rate is reduced in the inner region of the cluster for environmental reasons. (author)

  20. Target Transformation Constrained Sparse Unmixing (ttcsu) Algorithm for Retrieving Hydrous Minerals on Mars: Application to Southwest Melas Chasma

    Lin, H.; Zhang, X.; Wu, X.; Tarnas, J. D.; Mustard, J. F.


    Quantitative analysis of hydrated minerals from hyperspectral remote sensing data is fundamental for understanding Martian geologic processes. Because of the difficulty of selecting endmembers from hyperspectral images, a sparse unmixing algorithm has been proposed for application to CRISM data on Mars. However, this becomes challenging when the endmember library grows dramatically. Here, we propose a new methodology termed Target Transformation Constrained Sparse Unmixing (TTCSU) to accurately detect hydrous minerals on Mars. A new version of the target transformation technique proposed in our recent work was used to obtain potential detections from CRISM data. Sparse unmixing constrained with these detections as prior information was applied to CRISM single-scattering albedo images, which were calculated using a Hapke radiative transfer model. This methodology increases the success rate of the automatic endmember selection of sparse unmixing and yields more accurate abundances. CRISM images of Southwest Melas Chasma, an area that has been well analyzed previously, were used to validate our methodology in this study. The sulfate jarosite was detected in Southwest Melas Chasma; the distribution is consistent with previous work and the abundance is comparable. More validations will be done in our future work.

  1. Relations between Möbius and coboundary polynomials

    Jurrius, R.P.M.J.


    It is known that, in general, the coboundary polynomial and the Möbius polynomial of a matroid do not determine each other. Less is known about more specific cases. In this paper, we will investigate if it is possible that the Möbius polynomial of a matroid, together with the Möbius polynomial of

  2. Special polynomials associated with rational solutions of some hierarchies

    Kudryashov, Nikolai A.


    New special polynomials associated with rational solutions of the Painleve hierarchies are introduced. The Hirota relations for these special polynomials are found. Differential-difference hierarchies for finding the special polynomials are presented. These formulae allow us to search for special polynomials associated with the hierarchies. It is shown that rational solutions of the Caudrey-Dodd-Gibbon hierarchy, the Kaup-Kupershmidt hierarchy and the modified versions of these hierarchies can be obtained using the new special polynomials.

  3. On the Connection Coefficients of the Chebyshev-Boubaker Polynomials

    Paul Barry


    Full Text Available The Chebyshev-Boubaker polynomials are the orthogonal polynomials whose coefficient arrays are defined by ordinary Riordan arrays. Examples include the Chebyshev polynomials of the second kind and the Boubaker polynomials. We study the connection coefficients of this class of orthogonal polynomials, indicating how Riordan array techniques can lead to closed-form expressions for these connection coefficients as well as recurrence relations that define them.

  4. New polynomial-based molecular descriptors with low degeneracy.

    Matthias Dehmer

    Full Text Available In this paper, we introduce a novel graph polynomial called the 'information polynomial' of a graph. This graph polynomial can be derived by using a probability distribution of the vertex set. By using the zeros of the obtained polynomial, we additionally define some novel spectral descriptors. To compare them with descriptors based on computing the ordinary characteristic polynomial of a graph, we perform a numerical study using real chemical databases. We find that the novel descriptors do have a high discrimination power.

  5. Unmixing demonstration with a twist: A photochromic Taylor-Couette device

    Fonda, Enrico; Sreenivasan, Katepalli R.


    10.1119/1.4996901.1 This article describes an updated version of the famous Taylor-Couette flow reversibility demonstration. The viscous fluid confined between two concentric cylinders is forced to move by the rotating inner cylinder and visualized through the transparent outer cylinder. After a few rotations, a colored blob of fluid appears well mixed. Yet, after reversing the motion for the same number of turns, the blob reappears in the original location as if the fluid has just been unmixed. The use of household supplies makes the device inexpensive and easy to build without specific technical skills. The device can be used for demonstrations in fluid dynamics courses and outreach activities to discuss the concepts of viscosity, creeping flows, the absence of inertia, and time-reversibility.

  6. A new class of generalized polynomials associated with Hermite and Bernoulli polynomials

    M. A. Pathan


    Full Text Available In this paper, we introduce a new class of generalized polynomials associated with the modified Milne-Thomson's polynomials Φ_n^{(α)}(x,ν) of degree n and order α introduced by Dere and Simsek. The concepts of Bernoulli numbers B_n, Bernoulli polynomials B_n(x), generalized Bernoulli numbers B_n(a,b), generalized Bernoulli polynomials B_n(x;a,b,c) of Luo et al., Hermite-Bernoulli polynomials {_HB}_n(x,y) of Dattoli et al. and {_HB}_n^{(α)}(x,y) of Pathan are generalized to {_HB}_n^{(α)}(x,y,a,b,c), which is called the generalized polynomial depending on three positive real parameters. Numerous properties of these polynomials and some relationships between B_n, B_n(x), B_n(a,b), B_n(x;a,b,c) and {_HB}_n^{(α)}(x,y;a,b,c) are established. Some implicit summation formulae and general symmetry identities are derived by using different analytical means and applying generating functions. These results extend some known summations and identities of generalized Bernoulli numbers and polynomials.

  7. Best polynomial degree reduction on q-lattices with applications to q-orthogonal polynomials

    Ait-Haddou, Rachid; Goldman, Ron


    We show that a weighted least squares approximation of q-Bézier coefficients provides the best polynomial degree reduction in the q-L2-norm. We also provide a finite analogue of this result with respect to finite q-lattices and we present applications of these results to q-orthogonal polynomials. © 2015 Elsevier Inc. All rights reserved.

  8. Certain non-linear differential polynomials sharing a non zero polynomial

    Majumder Sujoy


    functions sharing a nonzero polynomial and obtain two results which improve and generalize the results due to L. Liu [Uniqueness of meromorphic functions and differential polynomials, Comput. Math. Appl., 56 (2008), 3236-3245] and P. Sahoo [Uniqueness and weighted value sharing of meromorphic functions, Appl. Math. E-Notes, 11 (2011), 23-32].


    G. Palubinskas


    Full Text Available Model-based analysis, i.e. the explicit definition/listing of all models/assumptions used in the derivation of a pan-sharpening method, allows us to understand the rationale or properties of existing methods and points the way to a proper usage, or to the proposal/selection, of new methods that 'better' satisfy the needs of a particular application. Most existing pan-sharpening methods are based mainly on two models/assumptions: spectral consistency for high resolution multispectral data (the physical relationship between multispectral and panchromatic data at a high resolution scale) and spatial consistency for multispectral data (the so-called first property of Wald's protocol, or the relationship between multispectral data at different resolution scales). Two methods, one based on a linear unmixing model and another based on spatial unmixing, are described/proposed/modified which respect the models assumed and thus can produce correct, physically justified fusion results. The property 'better' mentioned earlier should be measurable quantitatively, e.g. by means of so-called quality measures. The difficulty of the quality assessment task in multi-resolution image fusion or pan-sharpening is that a reference image is missing. Existing measures or so-called protocols are still not satisfactory because quite often the rationale or assumptions used are not valid or not fulfilled. From a model-based view it follows naturally that a quality assessment measure can be defined as a combination of error model residuals using common or general models assumed in all fusion methods. Thus in this paper a comparison of the two earlier proposed/modified pan-sharpening methods is performed. Preliminary experiments based on visual analysis are carried out in the urban area of Munich city on optical remote sensing multispectral data and panchromatic imagery of the WorldView-2 satellite sensor.

  11. Discrete-time state estimation for stochastic polynomial systems over polynomial observations

    Hernandez-Gonzalez, M.; Basin, M.; Stepanov, O.


    This paper presents a solution to the mean-square state estimation problem for stochastic nonlinear polynomial systems over polynomial observations confused with additive white Gaussian noises. The solution is given in two steps: (a) computing the time-update equations and (b) computing the measurement-update equations for the state estimate and error covariance matrix. A closed form of this filter is obtained by expressing conditional expectations of polynomial terms as functions of the state estimate and error covariance. As a particular case, the mean-square filtering equations are derived for a third-degree polynomial system with second-degree polynomial measurements. Numerical simulations show effectiveness of the proposed filter compared to the extended Kalman filter.
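As a point of reference for the comparison mentioned at the end of the abstract, here is a sketch of the extended Kalman filter baseline for a scalar third-degree polynomial system with a second-degree polynomial measurement. The coefficients and noise levels are illustrative, and this is the generic EKF, not the authors' closed-form mean-square filter:

```python
import numpy as np

# Illustrative scalar system (coefficients are not from the paper):
#   x_{k+1} = 0.9 x_k - 0.01 x_k^3 + w_k,  w_k ~ N(0, Q)
#   z_k     = x_k + 0.1 x_k^2 + v_k,       v_k ~ N(0, R)
Q, R = 0.01, 0.04
f = lambda x: 0.9 * x - 0.01 * x**3     # state transition
h = lambda x: x + 0.1 * x**2            # measurement function
df = lambda x: 0.9 - 0.03 * x**2        # derivatives used by the EKF
dh = lambda x: 1.0 + 0.2 * x

def ekf_step(x_est, P, z):
    # (a) time update: propagate the estimate and linearized covariance
    x_pred = f(x_est)
    P_pred = df(x_est)**2 * P + Q
    # (b) measurement update: correct with the new observation
    H = dh(x_pred)
    K = P_pred * H / (H**2 * P_pred + R)
    return x_pred + K * (z - h(x_pred)), (1.0 - K * H) * P_pred

rng = np.random.default_rng(1)
x, x_est, P = 1.0, 0.0, 1.0
for _ in range(50):
    x = f(x) + rng.normal(0.0, np.sqrt(Q))   # simulate the plant
    z = h(x) + rng.normal(0.0, np.sqrt(R))   # noisy observation
    x_est, P = ekf_step(x_est, P, z)
print(abs(x - x_est), P)
```

The paper's filter replaces the linearization above with exact conditional expectations of the polynomial terms, which is where its accuracy advantage over the EKF comes from.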

  12. Stabilisation of discrete-time polynomial fuzzy systems via a polynomial lyapunov approach

    Nasiri, Alireza; Nguang, Sing Kiong; Swain, Akshya; Almakhles, Dhafer


    This paper deals with the problem of designing a controller for a class of discrete-time nonlinear systems represented by a discrete-time polynomial fuzzy model. Most existing control design methods for discrete-time polynomial fuzzy systems cannot guarantee that the Lyapunov function is a radially unbounded polynomial function, hence global stability cannot be assured. The proposed control design in this paper guarantees a radially unbounded polynomial Lyapunov function, which ensures global stability. In the proposed design, a state feedback structure is considered and the non-convexity problem is solved by incorporating an integrator into the controller. Sufficient conditions for stability are derived in terms of polynomial matrix inequalities which are solved via SOSTOOLS in MATLAB. A numerical example is presented to illustrate the effectiveness of the proposed controller.

  13. Vortices and polynomials: non-uniqueness of the Adler–Moser polynomials for the Tkachenko equation

    Demina, Maria V; Kudryashov, Nikolai A


    Stationary and translating relative equilibria of point vortices in the plane are studied. It is shown that stationary equilibria of any system containing point vortices with arbitrary choice of circulations can be described with the help of the Tkachenko equation. It is also obtained that translating relative equilibria of point vortices with arbitrary circulations can be constructed using a generalization of the Tkachenko equation. Roots of any pair of polynomials solving the Tkachenko equation and the generalized Tkachenko equation are proved to give positions of point vortices in stationary and translating relative equilibria accordingly. These results are valid even if the polynomials in a pair have multiple or common roots. It is obtained that the Adler–Moser polynomial provides non-unique polynomial solutions of the Tkachenko equation. It is shown that the generalized Tkachenko equation possesses polynomial solutions with degrees that are not triangular numbers. (paper)

  14. Global sensitivity analysis by polynomial dimensional decomposition

    Rahman, Sharif, E-mail: rahman@engineering.uiowa.ed [College of Engineering, The University of Iowa, Iowa City, IA 52242 (United States)


    This paper presents a polynomial dimensional decomposition (PDD) method for global sensitivity analysis of stochastic systems subject to independent random input following arbitrary probability distributions. The method involves Fourier-polynomial expansions of lower-variate component functions of a stochastic response by measure-consistent orthonormal polynomial bases, analytical formulae for calculating the global sensitivity indices in terms of the expansion coefficients, and dimension-reduction integration for estimating the expansion coefficients. Due to identical dimensional structures of PDD and analysis-of-variance decomposition, the proposed method facilitates simple and direct calculation of the global sensitivity indices. Numerical results of the global sensitivity indices computed for smooth systems reveal significantly higher convergence rates of the PDD approximation than those from existing methods, including polynomial chaos expansion, random balance design, state-dependent parameter, improved Sobol's method, and sampling-based methods. However, for non-smooth functions, the convergence properties of the PDD solution deteriorate to a great extent, warranting further improvements. The computational complexity of the PDD method is polynomial, as opposed to exponential, thereby alleviating the curse of dimensionality to some extent.

  15. Remarks on determinants and the classical polynomials

    Henning, J.J.; Kranold, H.U.; Louw, D.F.B.


    As motivation for this formal analysis the problem of Landau damping of Bernstein modes is discussed. It is shown that in the case of a weak but finite constant external magnetic field, the analytical structure of the dispersion relations is of such a nature that longitudinal waves propagating orthogonal to the external magnetic field are also damped, contrary to normal belief. In the treatment of the linearized Vlasov equation it is found convenient to generate certain polynomials by the problem at hand and to explicitly write down expressions for these polynomials. In the course of this study methods are used that relate to elementary but fairly unknown functional relationships between power sums and coefficients of polynomials. These relationships, also called Waring functions, are derived. They are then used in other applications to give explicit expressions for the generalized Laguerre polynomials in terms of determinant functions. The properties of polynomials generated by a wide class of generating functions are investigated. These relationships are also used to obtain explicit forms for the cumulants of a distribution in terms of its moments. It is pointed out that cumulants (or moments, for that matter) do not determine a distribution function

  16. Multilevel weighted least squares polynomial approximation

    Haji-Ali, Abdul-Lateef


    Weighted least squares polynomial approximation uses random samples to determine projections of functions onto spaces of polynomials. It has been shown that, using an optimal distribution of sample locations, the number of samples required to achieve quasi-optimal approximation in a given polynomial subspace scales, up to a logarithmic factor, linearly in the dimension of this space. However, in many applications, the computation of samples includes a numerical discretization error. Thus, obtaining polynomial approximations with a single level method can become prohibitively expensive, as it requires a sufficiently large number of samples, each computed with a sufficiently small discretization error. As a solution to this problem, we propose a multilevel method that utilizes samples computed with different accuracies and is able to match the accuracy of single-level approximations with reduced computational cost. We derive complexity bounds under certain assumptions about polynomial approximability and sample work. Furthermore, we propose an adaptive algorithm for situations where such assumptions cannot be verified a priori. Finally, we provide an efficient algorithm for the sampling from optimal distributions and an analysis of computationally favorable alternative distributions. Numerical experiments underscore the practical applicability of our method.
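The single-level building block, a weighted least-squares projection of samples onto a polynomial space, can be sketched as follows (uniform weights and a monomial basis for simplicity; the optimal sampling distributions discussed above would choose the sample locations and weights differently):

```python
import numpy as np

def weighted_ls_poly(x, y, w, degree):
    """Weighted least-squares projection of samples (x, y) with weights w
    onto polynomials of degree <= degree (monomial basis for simplicity)."""
    V = np.vander(x, degree + 1, increasing=True)   # columns 1, x, x^2, ...
    sw = np.sqrt(w)
    coeffs, *_ = np.linalg.lstsq(V * sw[:, None], y * sw, rcond=None)
    return coeffs

# Recover f(x) = 1 + 2x + 3x^2 exactly from noiseless random samples.
rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, size=50)
y = 1 + 2 * x + 3 * x**2
w = np.ones_like(x)   # uniform weights for this sketch
c = weighted_ls_poly(x, y, w, degree=2)
print(np.round(c, 6))  # [1. 2. 3.]
```

The multilevel method of the abstract would call a projection like this at several discretization accuracies and combine the results, rather than paying for every sample at the finest accuracy.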

  17. Bivariate Rainfall and Runoff Analysis Using Shannon Entropy Theory

    Rahimi, A.; Zhang, L.


    Rainfall-runoff analysis is a key component of many hydrological and hydraulic designs in which the dependence between rainfall and runoff needs to be studied. It is known that convenient bivariate distributions are often unable to model rainfall-runoff variables, because they either constrain the range of dependence or fix the form of the marginal distributions. Thus, this paper presents an approach to derive an entropy-based joint rainfall-runoff distribution using Shannon entropy theory. The derived distribution can model the full range of dependence and allows different specified marginals. The modeling and estimation proceed as follows: (i) univariate analysis of the marginal distributions, in two steps, (a) using nonparametric statistics to detect modes and the underlying probability density, and (b) fitting appropriate parametric probability density functions; (ii) definition of the constraints based on the univariate analysis and the dependence structure; (iii) derivation and validation of the entropy-based joint distribution. To validate the method, rainfall-runoff data were collected from the small agricultural experimental watersheds located in a semi-arid region near Riesel (Waco), Texas, maintained by the USDA. The results of the univariate analysis show that the rainfall variables follow the gamma distribution, whereas the runoff variables have a mixed structure and follow the mixed-gamma distribution. With this information, the entropy-based joint distribution is derived using the first moments, the first moments of the logarithm-transformed rainfall and runoff, and the covariance between rainfall and runoff. The results indicate: (1) the derived joint distribution successfully preserves the dependence between rainfall and runoff, and (2) the K-S goodness-of-fit tests confirm that the re-derived marginal distributions reveal the underlying univariate probability densities, which further
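    As a toy version of the marginal-fitting step, a method-of-moments gamma fit on synthetic rainfall data (parameters invented for illustration; the paper's entropy-based estimation with moment constraints is more involved):

```python
import numpy as np

# Synthetic "rainfall" from a gamma distribution with assumed shape 2
# and scale 3, then method-of-moments estimates of both parameters.
rng = np.random.default_rng(1)
rain = rng.gamma(shape=2.0, scale=3.0, size=50_000)
mean, var = rain.mean(), rain.var()
shape_hat = mean**2 / var           # k     = E[X]^2 / Var[X]
scale_hat = var / mean              # theta = Var[X] / E[X]
```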

  18. Bivariate Genomic Footprinting Detects Changes in Transcription Factor Activity

    Songjoon Baek


    Full Text Available In response to activating signals, transcription factors (TFs) bind DNA and regulate gene expression. TF binding can be measured by protection of the bound sequence from DNase digestion (i.e., a footprint). Here, we report that 80% of TF binding motifs do not show a measurable footprint, partly because of a variable cleavage pattern within the motif sequence. To more faithfully portray the effect of TFs on chromatin, we developed an algorithm that captures two TF-dependent effects on chromatin accessibility: footprinting and motif-flanking accessibility. The algorithm, termed bivariate genomic footprinting (BaGFoot), efficiently detects TF activity. BaGFoot is robust to different accessibility assays (DNase-seq, ATAC-seq), all examined peak-calling programs, and a variety of cut bias correction approaches. BaGFoot reliably predicts TF binding and provides valuable information regarding the TFs affecting chromatin accessibility in various biological systems and following various biological events, including in cases where an absolute footprint cannot be determined.

  19. Preparation and bivariate analysis of suspensions of human chromosomes

    van den Engh, G.J.; Trask, B.J.; Gray, J.W.; Langlois, R.G.; Yu, L.C.


    Chromosomes were isolated from a variety of human cell types using a HEPES-buffered hypotonic solution (pH 8.0) containing KCl, MgSO4, dithioerythritol, and RNase. The chromosomes isolated by this procedure could be stained with a variety of fluorescent stains including propidium iodide, chromomycin A3, and Hoechst 33258. Addition of sodium citrate to the stained chromosomes was found to improve the total fluorescence resolution. High-quality bivariate Hoechst vs. chromomycin fluorescence distributions were obtained for chromosomes isolated from a human fibroblast cell strain, a human colon carcinoma cell line, and human peripheral blood lymphocyte cultures. Good flow karyotypes were also obtained from primary amniotic cell cultures. The Hoechst vs. chromomycin flow karyotypes of a given cell line, made at different times and at dye concentrations varying over fourfold ranges, show little variation in the relative peak positions of the chromosomes. The size of the DNA in chromosomes isolated using this procedure ranges from 20 to 50 kilobases. The described isolation procedure is simple, it yields high-quality flow karyotypes, and it can be used to prepare chromosomes from clinical samples. 22 references, 7 figures, 1 table.

  20. Epileptic seizure prediction based on a bivariate spectral power methodology.

    Bandarabadi, Mojtaba; Teixeira, Cesar A; Direito, Bruno; Dourado, Antonio


    The spectral power of five frequently considered frequency bands (Alpha, Beta, Gamma, Theta and Delta) for 6 EEG channels is computed, and all possible pairwise combinations among the resulting 30-feature set are used to create a 435-dimensional feature space. Two new feature selection methods are introduced to choose the best candidate features and to reduce the dimensionality of this feature space. The selected features are then fed to Support Vector Machines (SVMs) that classify the cerebral state into preictal and non-preictal classes. The outputs of the SVM are regularized using a method that accounts for the classification dynamics of the preictal class, also known as the "Firing Power" method. The results obtained using our feature selection approaches are compared with those obtained using the minimum Redundancy Maximum Relevance (mRMR) feature selection method. The results in a group of 12 patients from the EPILEPSIAE database, containing 46 seizures and 787 hours of multichannel recording for out-of-sample data, indicate the efficiency of the bivariate approach as well as of the two new feature selection methods. The best results presented a sensitivity of 76.09% (35 of 46 seizures predicted) and a false prediction rate of 0.15 h(-1).
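    The "Firing Power" regularization can be sketched as a sliding-window average of the classifier's binary outputs, with an alarm raised only above a threshold (window length and threshold below are illustrative, not the paper's settings):

```python
import numpy as np

# "Firing Power": fraction of preictal (1) classifications in a sliding
# window; an alarm fires only when that fraction crosses a threshold,
# suppressing isolated spurious classifications.
def firing_power(labels, window, threshold=0.5):
    fp = np.convolve(labels, np.ones(window) / window, mode="valid")
    return fp >= threshold

labels = np.array([0, 1, 0, 1, 1, 1, 1, 0, 1, 1])   # toy SVM outputs
alarms = firing_power(labels, window=4, threshold=0.75)
```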

  1. Bivariate Cointegration Analysis of Energy-Economy Interactions in Iran

    Ismail Oladimeji Soile


    Full Text Available Fixing the prices of energy products below their opportunity cost for welfare and redistribution purposes is common among governments of many oil-producing developing countries. This has often resulted in huge energy consumption in developing countries, and the question that emerges is whether this increased energy consumption results in higher economic activity. Available statistics show that Iran’s economic growth shrank for the first time in two decades from 2011, amidst the introduction of pricing reforms in 2010 and 2014, suggesting a relationship between energy use and economic growth. Accordingly, the study examined the causality and the likelihood of a long-term relationship between energy and economic growth in Iran. Unlike previous studies, which have focused on the effects and effectiveness of the reform, this paper investigates the rationale for the reform. The study applied a bivariate cointegration time-series econometric approach. The results reveal a one-way causality running from economic growth to energy, with no feedback, and evidence of a long-run connection. The implication is that an energy conservation policy is not inimical to economic growth. This evidence lends further support to the ongoing subsidy reforms in Iran as a measure to check excessive and inefficient use of energy.

  2. A bivariate optimal replacement policy for a multistate repairable system

    Zhang Yuanlin; Yam, Richard C.M.; Zuo, Ming J.


    In this paper, a deteriorating simple repairable system with k+1 states, including k failure states and one working state, is studied. It is assumed that the system after repair is not 'as good as new' and the deterioration of the system is stochastic. We consider a bivariate replacement policy, denoted by (T,N), in which the system is replaced when its working age has reached T or the number of failures it has experienced has reached N, whichever occurs first. The objective is to determine the optimal replacement policy (T,N)* such that the long-run expected profit per unit time is maximized. The explicit expression of the long-run expected profit per unit time is derived and the corresponding optimal replacement policy can be determined analytically or numerically. We prove that the optimal policy (T,N)* is better than the optimal policy N* for a multistate simple repairable system. We also show that a general monotone process model for a multistate simple repairable system is equivalent to a geometric process model for a two-state simple repairable system in the sense that they have the same structure for the long-run expected profit (or cost) per unit time and the same optimal policy. Finally, a numerical example is given to illustrate the theoretical results
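    A renewal-reward Monte-Carlo sketch of the (T, N) policy under a geometric-process deterioration model; the rate, the deterioration ratio, and the cost figures are assumptions for illustration (the paper maximizes profit analytically, and repair times are ignored here for brevity):

```python
import random

# Simulate replacement cycles: each cycle ends when the working age
# reaches T or the N-th failure occurs, whichever comes first.
# Successive working periods are Exp(rate * a**failures), so a >= 1
# makes the system deteriorate stochastically.
def long_run_cost_rate(T, N, rate=1.0, a=1.2, c_repair=1.0,
                       c_replace=10.0, cycles=20_000, seed=0):
    rng = random.Random(seed)
    total_cost = total_time = 0.0
    for _ in range(cycles):
        t, failures = 0.0, 0
        while True:
            x = rng.expovariate(rate * a**failures)   # next working period
            if t + x >= T:                            # age limit reached first
                t = T
                break
            t += x
            failures += 1
            if failures >= N:                         # failure limit reached
                break
        total_cost += c_repair * failures + c_replace
        total_time += t
    return total_cost / total_time
```

    Comparing policies then reduces to evaluating this rate over a grid of (T, N); replacing far too early (tiny T) wastes replacement cost, which the simulation reflects.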

  3. Polynomial chaos functions and stochastic differential equations

    Williams, M.M.R.


    The Karhunen-Loeve procedure and the associated polynomial chaos expansion have been employed to solve a simple first order stochastic differential equation which is typical of transport problems. Because the equation has an analytical solution, it provides a useful test of the efficacy of polynomial chaos. We find that the convergence is very rapid in some cases but that the increased complexity associated with many random variables can lead to very long computational times. The work is illustrated by exact and approximate solutions for the mean, variance and the probability distribution itself. The usefulness of a white noise approximation is also assessed. Extensive numerical results are given which highlight the weaknesses and strengths of polynomial chaos. The general conclusion is that the method is promising but requires further detailed study by application to a practical problem in transport theory
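    A minimal non-intrusive polynomial chaos sketch for a toy problem, y = exp(-K t) with Gaussian K, rather than the paper's transport equation; the chaos coefficients are obtained by quadrature projection, and the recovered mean and variance are checked against the exact lognormal values:

```python
import math
import numpy as np
from numpy.polynomial import hermite_e as He

# Expand y = exp(-K t), K ~ N(mu, sigma^2), in probabilists' Hermite
# polynomials He_n of the standard normal X (K = mu + sigma * X).
mu, sigma, t, order = 1.0, 0.2, 1.0, 6
nodes, weights = He.hermegauss(40)        # Gauss quadrature, weight exp(-x^2/2)
weights /= weights.sum()                  # normalize to the N(0,1) measure
y = np.exp(-(mu + sigma * nodes) * t)
c = [float(weights @ (y * He.hermeval(nodes, [0]*n + [1]))) / math.factorial(n)
     for n in range(order + 1)]           # c_n = E[y He_n] / n!
mean_pc = c[0]
var_pc = sum(math.factorial(n) * c[n]**2 for n in range(1, order + 1))
mean_exact = math.exp(-mu*t + 0.5*(sigma*t)**2)
var_exact = math.exp(-2*mu*t + (sigma*t)**2) * (math.exp((sigma*t)**2) - 1)
```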

  4. Minimal residual method stronger than polynomial preconditioning

    Faber, V.; Joubert, W.; Knill, E. [Los Alamos National Lab., NM (United States)] [and others]


    Two popular methods for solving symmetric and nonsymmetric systems of equations are the minimal residual method, implemented by algorithms such as GMRES, and polynomial preconditioning methods. In this study results are given on the convergence rates of these methods for various classes of matrices. It is shown that for some matrices, such as normal matrices, the convergence rates for GMRES and for the optimal polynomial preconditioning are the same, and for other matrices such as the upper triangular Toeplitz matrices, it is at least assured that if one method converges then the other must converge. On the other hand, it is shown that matrices exist for which restarted GMRES always converges but any polynomial preconditioning of corresponding degree makes no progress toward the solution for some initial error. The implications of these results for these and other iterative methods are discussed.

  5. Fast beampattern evaluation by polynomial rooting

    Häcker, P.; Uhlich, S.; Yang, B.


    Current automotive radar systems measure the distance, the relative velocity and the direction of objects in their environment. This information enables the car to support the driver. The direction estimation capabilities of a sensor array depend on its beampattern. To find the array configuration leading to the best angle estimation by a global optimization algorithm, a huge number of beampatterns has to be calculated to detect their maxima. In this paper, a novel algorithm is proposed to find all maxima of an array's beampattern quickly and reliably, leading to accelerated array optimizations. The algorithm works for arrays having the sensors on a uniformly spaced grid. We use a general version of the gcd (greatest common divisor) function in order to write the problem as a polynomial. We differentiate and root the polynomial to get the extrema of the beampattern. In addition, we show a method to reduce the computational burden even further by decreasing the order of the polynomial.
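    For real weights on a uniform grid, the beampattern is a cosine polynomial, which the substitution x = cos(psi) turns into an ordinary polynomial whose interior extrema follow from polynomial rooting. A sketch with assumed uniform weights (a simplified stand-in for the paper's gcd-based construction):

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# B(psi) = |sum w_n exp(j n psi)|^2 = r_0 + 2*sum_m r_m cos(m psi),
# with r_m the weight autocorrelation.  Since cos(m psi) = T_m(cos psi),
# B is a polynomial in x = cos(psi); root its derivative for extrema.
w = np.array([1.0, 1.0, 1.0, 1.0])               # uniform weights (assumed)
r = np.correlate(w, w, mode="full")[len(w)-1:]   # r_0 .. r_{N-1}
cheb = np.zeros(len(r))
cheb[0] = r[0]
cheb[1:] = 2.0 * r[1:]                           # Chebyshev coefficients of B
dB = C.chebder(cheb)                             # derivative, Chebyshev basis
roots = C.chebroots(dB)
real_roots = roots[np.abs(roots.imag) < 1e-9].real
candidates = real_roots[np.abs(real_roots) <= 1.0]   # interior extrema of B
```

    For this 4-element array the interior critical points land at x = 0 (a null) and x = -2/3 (the sidelobe peak); the main lobe sits at the boundary x = 1.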

  6. Twisted Polynomials and Forgery Attacks on GCM

    Abdelraheem, Mohamed Ahmed A. M. A.; Beelen, Peter; Bogdanov, Andrey


    Polynomial hashing as an instantiation of universal hashing is a widely employed method for the construction of MACs and authenticated encryption (AE) schemes, the ubiquitous GCM being a prominent example. It is also used in recent AE proposals within the CAESAR competition which aim at providing...... in an improved key recovery algorithm. As cryptanalytic applications of our twisted polynomials, we develop the first universal forgery attacks on GCM in the weak-key model that do not require nonce reuse. Moreover, we present universal weak-key forgeries for the nonce-misuse resistant AE scheme POET, which...
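    The underlying primitive can be sketched as Horner evaluation of the message polynomial at the secret key; a prime field stands in for GCM's GF(2^128) here, and all values are illustrative:

```python
# Polynomial MAC sketch: hash = m_1*h^n + m_2*h^(n-1) + ... + m_n*h (mod P),
# evaluated by Horner's rule.  GCM's GHASH has the same shape, but over
# the binary field GF(2^128) rather than a prime field.
P = 2**127 - 1                      # a Mersenne prime (illustrative)

def poly_hash(blocks, h):
    acc = 0
    for b in blocks:
        acc = (acc + b) * h % P
    return acc

h = 123456789                       # the secret hash key
tag = poly_hash([11, 22, 33], h)
```

    Forgeries in this setting amount to finding a second message whose difference polynomial has h as a root, which is what the twisted-polynomial machinery of the paper exploits.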

  7. Polynomial Vector Fields in One Complex Variable

    Branner, Bodil

    In recent years Adrien Douady was interested in polynomial vector fields, both in relation to iteration theory and as a topic on their own. This talk is based on his work with Pierrette Sentenac, work of Xavier Buff and Tan Lei, and my own joint work with Kealey Dias.

  8. The chromatic polynomial and list colorings

    Thomassen, Carsten


    We prove that, if a graph has a list of k available colors at every vertex, then the number of list-colorings is at least the chromatic polynomial evaluated at k when k is sufficiently large compared to the number of vertices of the graph.
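    The chromatic polynomial in the statement can be computed for small graphs by deletion-contraction; a minimal sketch on a triangle:

```python
# Deletion-contraction: P(G, k) = P(G - e, k) - P(G / e, k).
# When every vertex carries the same list of k colors, the number of
# list-colorings is exactly P(G, k), the quantity the theorem bounds.
def chromatic(vertices, edges, k):
    edges = {frozenset(e) for e in edges}        # simple graph assumed
    if not edges:
        return k ** len(vertices)
    e = next(iter(edges))
    u, v = tuple(e)
    deleted = edges - {e}
    contracted = {frozenset(u if w == v else w for w in f) for f in deleted}
    contracted = {f for f in contracted if len(f) == 2}   # drop collapsed edges
    return (chromatic(vertices, deleted, k)
            - chromatic([w for w in vertices if w != v], contracted, k))

triangle = [(0, 1), (1, 2), (0, 2)]              # P(K3, k) = k(k-1)(k-2)
```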

  9. Complex centers of polynomial differential equations

    Mohamad Ali M. Alwash


    Full Text Available We present some results on the existence and nonexistence of centers for polynomial first order ordinary differential equations with complex coefficients. In particular, we show that binomial differential equations without linear terms do not have complex centers. Classes of polynomial differential equations, with more than two terms, are presented that do not have complex centers. We also study the relation between complex centers and the Pugh problem. An algorithm is described to solve the Pugh problem for equations without complex centers. The method of proof involves phase plane analysis of the polar equations and a local study of periodic solutions.

  10. Differential recurrence formulae for orthogonal polynomials

    Anton L. W. von Bachhaus


    Full Text Available Part I - By combining a general 2nd-order linear homogeneous ordinary differential equation with the three-term recurrence relation possessed by all orthogonal polynomials, it is shown that sequences of orthogonal polynomials which satisfy a differential equation of the above-mentioned type necessarily have a differentiation formula of the type g_n(x)Y'_n(x) = f_n(x)Y_n(x) + Y_{n-1}(x). Part II - A recurrence formula of the form r_n(x)Y'_n(x) + s_n(x)Y'_{n+1}(x) + t_n(x)Y'_{n-1}(x) = 0 is derived using the result of Part I.
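    The classical Legendre polynomials furnish a concrete instance of such a differentiation formula, with g_n(x) = (1 - x^2)/n and f_n(x) = -x, i.e. (1 - x^2) P'_n(x) = n (P_{n-1}(x) - x P_n(x)). A numerical check:

```python
import numpy as np
from numpy.polynomial import legendre as L

# Verify (1 - x^2) P_n'(x) = n * (P_{n-1}(x) - x * P_n(x)) at sample points.
n = 5
x = np.linspace(-0.9, 0.9, 7)
Pn = L.legval(x, [0]*n + [1])              # P_n
Pn_1 = L.legval(x, [0]*(n-1) + [1])        # P_{n-1}
dPn = L.legval(x, L.legder([0]*n + [1]))   # P_n'
lhs = (1.0 - x**2) * dPn
rhs = n * (Pn_1 - x * Pn)
```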

  11. Polynomial regression analysis and significance test of the regression function

    Gao Zhengming; Zhao Juan; He Shengping


    In order to analyze the decay heating power of a certain radioactive isotope per kilogram with the polynomial regression method, the paper first demonstrates the broad applicability of polynomial functions and derives their parameters with the ordinary least squares estimate. A significance test for the polynomial regression function is then derived, exploiting the similarity between the polynomial regression model and the multivariable linear regression model. Finally, polynomial regression analysis and a significance test of the polynomial function are carried out for the decay heating power of the isotope per kilogram, in accordance with the authors' real work. (authors)
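    A compact sketch of polynomial regression with an overall F test of the regression function, on synthetic decay-power-style data (the coefficients and noise level are assumptions, not the paper's data):

```python
import numpy as np

# Fit a quadratic by ordinary least squares, then form the overall
# F statistic  F = (SS_reg / p) / (SS_res / (n - p - 1)).
rng = np.random.default_rng(2)
t = np.linspace(0.0, 10.0, 40)
y = 5.0 - 0.8*t + 0.03*t**2 + rng.normal(0.0, 0.1, t.size)   # synthetic data
deg = 2
X = np.vander(t, deg + 1)                      # columns t^2, t, 1
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
yhat = X @ beta
ss_reg = ((yhat - y.mean())**2).sum()
ss_res = ((y - yhat)**2).sum()
F = (ss_reg / deg) / (ss_res / (t.size - deg - 1))
```

    A large F (compared with the F(deg, n-deg-1) critical value) rejects the hypothesis that the regression function is constant.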

  12. Asymptotics of bivariate generating functions with algebraic singularities

    Greenwood, Torin

    Flajolet and Odlyzko (1990) derived asymptotic formulae for the coefficients of a class of univariate generating functions with algebraic singularities. Gao and Richmond (1992) and Hwang (1996, 1998) extended these results to classes of multivariate generating functions, in both cases by reducing to the univariate case. Pemantle and Wilson (2013) outlined new multivariate analytic techniques and used them to analyze the coefficients of rational generating functions. After overviewing these methods, we use them to find asymptotic formulae for the coefficients of a broad class of bivariate generating functions with algebraic singularities. Beginning with the Cauchy integral formula, we explicitly deform the contour of integration so that it hugs a set of critical points. The asymptotic contribution to the integral comes from analyzing the integrand near these points, leading to explicit asymptotic formulae. Next, we use this formula to analyze an example from current research. In the following chapter, we apply multivariate analytic techniques to quantum walks. Bressler and Pemantle (2007) found a (d + 1)-dimensional rational generating function whose coefficients described the amplitude of a particle at a position in the integer lattice after n steps. Here, the minimal critical points form a curve on the (d + 1)-dimensional unit torus. We find asymptotic formulae for the amplitude of a particle in a given position, normalized by the number of steps n, as n approaches infinity. Each critical point contributes to the asymptotics for a specific normalized position. Using Groebner bases in Maple, we compute the explicit locations of peak amplitudes. In a scaling window of size proportional to the square root of n near the peaks, each amplitude is asymptotic to an Airy function.

  13. Multiplex protein pattern unmixing using a non-linear variable-weighted support vector machine as optimized by a particle swarm optimization algorithm.

    Yang, Qin; Zou, Hong-Yan; Zhang, Yan; Tang, Li-Juan; Shen, Guo-Li; Jiang, Jian-Hui; Yu, Ru-Qin


    Most proteins localize to more than one organelle in a cell. Unmixing the localization patterns of proteins is critical for understanding protein functions and other vital cellular processes. Herein, a non-linear machine learning technique is proposed for the first time for protein pattern unmixing. The variable-weighted support vector machine (VW-SVM) is a demonstrated robust modeling technique with flexible and rational variable selection. Optimized by a global stochastic optimization technique, the particle swarm optimization (PSO) algorithm, VW-SVM becomes an adaptive, parameter-free method for automated unmixing of protein subcellular patterns. Results obtained by pattern unmixing of a set of fluorescence microscope images of cells indicate that VW-SVM, as optimized by PSO, is able to extract useful pattern features by optimally rescaling each variable for non-linear SVM modeling, consequently leading to improved performance in multiplex protein pattern unmixing compared with conventional SVM and other existing pattern unmixing methods. Copyright © 2015 Elsevier B.V. All rights reserved.

  14. Multi-tissue partial volume quantification in multi-contrast MRI using an optimised spectral unmixing approach.

    Collewet, Guylaine; Moussaoui, Saïd; Deligny, Cécile; Lucas, Tiphaine; Idier, Jérôme


    Multi-tissue partial volume estimation in MRI images is investigated from a viewpoint related to spectral unmixing as used in hyperspectral imaging. The contribution of this paper is twofold. It first proposes a theoretical analysis of the statistical optimality conditions of the proportion estimation problem, which in the context of multi-contrast MRI data acquisition allows the imaging sequence parameters to be set appropriately. Second, an efficient proportion quantification algorithm is proposed, based on the minimisation of a penalised least-squares criterion incorporating a regularity constraint on the spatial distribution of the proportions. The resulting developments are discussed using empirical simulations. The practical usefulness of the spectral unmixing approach for partial volume quantification in MRI is illustrated through an application to food analysis on the proving of a Danish pastry. Copyright © 2018 Elsevier Inc. All rights reserved.
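    The linear-mixture core of the approach can be sketched in a few lines; the tissue signatures below are invented, and the paper's nonnegativity and spatial-regularity constraints are omitted in this least-squares sketch:

```python
import numpy as np

# Each voxel's multi-contrast signal is modeled as E @ p, where the
# columns of E are pure-tissue signatures and p the tissue proportions.
E = np.array([[1.0, 0.2],
              [0.3, 1.0],
              [0.5, 0.6]])          # hypothetical: 3 contrasts x 2 tissues
p_true = np.array([0.7, 0.3])
voxel = E @ p_true                  # noiseless synthetic measurement
p_hat, *_ = np.linalg.lstsq(E, voxel, rcond=None)
p_hat /= p_hat.sum()                # enforce proportions summing to one
```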

  15. The impact of initialization procedures on unsupervised unmixing of hyperspectral imagery using the constrained positive matrix factorization

    Masalmah, Yahya M.; Vélez-Reyes, Miguel


    The authors proposed in previous papers the use of the constrained Positive Matrix Factorization (cPMF) to perform unsupervised unmixing of hyperspectral imagery. Two iterative algorithms were proposed to compute the cPMF, based on the Gauss-Seidel and penalty approaches to solving optimization problems. Results presented in previous papers have shown the potential of the proposed method to perform unsupervised unmixing in HYPERION and AVIRIS imagery. The performance of iterative methods is highly dependent on the initialization scheme. A good initialization scheme can improve convergence speed, determine whether a global minimum is found, and determine whether spectra with physical relevance are retrieved as endmembers. In this paper, different initializations using random selection, longest-norm pixels, and standard endmember selection routines are studied and compared using simulated and real data.
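    The initialization sensitivity at issue can be illustrated with a generic positive matrix factorization via multiplicative updates (not the constrained cPMF itself), run from several random seeds:

```python
import numpy as np

# Minimal PMF sketch: alternate multiplicative updates that keep W, H
# positive while decreasing ||X - W H||_F.  Different seeds give
# different starting points and can reach different local minima.
def pmf_error(X, k, seed, iters=300):
    rng = np.random.default_rng(seed)
    W = rng.random((X.shape[0], k)) + 0.1
    H = rng.random((k, X.shape[1])) + 0.1
    for _ in range(iters):
        H *= (W.T @ X) / (W.T @ W @ H + 1e-12)
        W *= (X @ H.T) / (W @ H @ H.T + 1e-12)
    return float(np.linalg.norm(X - W @ H))

X = np.random.default_rng(0).random((20, 30))        # synthetic "image"
errors = [pmf_error(X, 3, seed) for seed in range(3)]
```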

  16. Top local cohomology and the catenaricity of the unmixed part of the support of a finitely generated module

    Nguyen Tu Cuong; Nguyen Thi Dung; Le Thanh Nhan


    Let (R, m) be a Noetherian local ring and M a finitely generated R-module with dim M = d. This paper is concerned with the following property of the top local cohomology module H = H_m^d(M): Ann_R(0 :_H p) = p for all prime ideals p containing Ann_R(H). It is shown that this property is equivalent to the catenaricity of the unmixed part Supp(M/U_M(0)) of the support of M, where U_M(0) is the largest submodule of M of dimension less than d. Some characterizations of this property in terms of systems of parameters, and relations between the unmixed parts of Supp M and Supp M̂ (with M̂ the m-adic completion of M), are given. A connection to the so-called co-localization is discussed. (author)

  17. Nonclassical Orthogonal Polynomials and Corresponding Quadratures

    Fukuda, H; Alt, E O; Matveenko, A V


    We construct nonclassical orthogonal polynomials and calculate the abscissas and weights of the corresponding Gaussian quadrature for an arbitrary weight and interval. The program is written in Mathematica and works when the moment integrals are given analytically. The result is a FORTRAN subroutine ready to utilize the quadrature.
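    The moment-based construction can be sketched with the classical Golub-Welsch procedure: Cholesky-factor the Hankel moment matrix, read off the three-term recurrence, and take the eigendecomposition of the Jacobi matrix. Shown here for w(x) = 1 on [0, 1] (moments m_k = 1/(k+1)); any analytically known moments work the same way:

```python
import numpy as np

n = 3                                                      # number of nodes
m = np.array([1.0/(k+1) for k in range(2*n + 1)])          # moments of w=1 on [0,1]
M = np.array([[m[i+j] for j in range(n+1)] for i in range(n+1)])  # Hankel matrix
R = np.linalg.cholesky(M).T                                # M = R^T R, R upper
alpha = np.empty(n)
beta = np.empty(n-1)
for j in range(n):                                         # recurrence coefficients
    alpha[j] = R[j, j+1]/R[j, j] - (R[j-1, j]/R[j-1, j-1] if j > 0 else 0.0)
for j in range(1, n):
    beta[j-1] = R[j, j]/R[j-1, j-1]
J = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)  # Jacobi matrix
nodes, vecs = np.linalg.eigh(J)                            # abscissas = eigenvalues
weights = m[0] * vecs[0, :]**2                             # Golub-Welsch weights
```

    For these moments the result is the 3-point Gauss-Legendre rule shifted to [0, 1], exact for polynomials up to degree 5.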

  18. Intrinsic Diophantine approximation on general polynomial surfaces

    Tiljeset, Morten Hein


    We study the Hausdorff measure and dimension of the set of intrinsically simultaneously -approximable points on a curve, surface, etc, given as a graph of integer polynomials. We obtain complete answers to these questions for algebraically “nice” manifolds. This generalizes earlier work done...

  19. Quantum Hilbert matrices and orthogonal polynomials

    Andersen, Jørgen Ellegaard; Berg, Christian


    Using the notion of quantum integers associated with a complex number q≠0 , we define the quantum Hilbert matrix and various extensions. They are Hankel matrices corresponding to certain little q -Jacobi polynomials when |q|<1 , and for the special value they are closely related to Hankel matrice...

  20. Algebraic polynomial system solving and applications

    Bleylevens, I.W.M.


    The problem of computing the solutions of a system of multivariate polynomial equations can be approached by the Stetter-Möller matrix method which casts the problem into a large eigenvalue problem. This Stetter-Möller matrix method forms the starting point for the development of computational

  1. Information-theoretic lengths of Jacobi polynomials

    Guerrero, A; Dehesa, J S [Departamento de Fisica Atomica, Molecular y Nuclear, Universidad de Granada, Granada (Spain); Sanchez-Moreno, P, E-mail: agmartinez@ugr.e, E-mail: pablos@ugr.e, E-mail: dehesa@ugr.e [Instituto ' Carlos I' de Fisica Teorica y Computacional, Universidad de Granada, Granada (Spain)


    The information-theoretic lengths of the Jacobi polynomials P_n^(alpha,beta)(x), which are information-theoretic measures (Rényi, Shannon and Fisher) of their associated Rakhmanov probability density, are investigated. They quantify the spreading of the polynomials along the orthogonality interval [-1, 1] in a complementary but different way from the root-mean-square or standard deviation because, contrary to that measure, they do not refer to any specific point of the interval. Explicit expressions for the Fisher length are given. The Rényi lengths are found by the use of the combinatorial multivariable Bell polynomials in terms of the polynomial degree n and the parameters (alpha, beta). The Shannon length, which cannot be calculated exactly because of its logarithmic functional form, is bounded from below by using sharp upper bounds for general densities on [-1, 1] given in terms of various expectation values; moreover, its asymptotics is also pointed out. Finally, several computational issues relating to these three quantities are carefully analyzed.

  2. Indecomposability of polynomials via Jacobian matrix

    Cheze, G.; Najib, S.


    Uni-multivariate decomposition of polynomials is a special case of absolute factorization. Recently, thanks to Ruppert's matrix, some effective results about absolute factorization have been improved. Here we show that with a Jacobian matrix we can get sharper bounds for the special case of uni-multivariate decomposition. (author)

  3. On selfadjoint functors satisfying polynomial relations

    Agerholm, Troels; Mazorchuk, Volodomyr


    We study selfadjoint functors acting on categories of finite-dimensional modules over finite-dimensional algebras, with an emphasis on functors satisfying some polynomial relations. Selfadjoint functors satisfying several easy relations, in particular, idempotents and square roots of a sum...

  4. Polynomial Variables and the Jacobian Problem

    algebra and algebraic geometry, and ... algebraically, to making the change of variables (X, Y) ↦ (X + p, Y ... aX + bY + p and cX + dY + q are linear polynomials in X, Y. ... [5] T T Moh, On the Jacobian conjecture and the configuration of roots.

  5. Function approximation with polynomial regression splines

    Urbanski, P.


    Principles of the polynomial regression splines as well as algorithms and programs for their computation are presented. The programs prepared using software package MATLAB are generally intended for approximation of the X-ray spectra and can be applied in the multivariate calibration of radiometric gauges. (author)

  6. Polynomial stabilization of some dissipative hyperbolic systems

    Ammari, K.; Feireisl, Eduard; Nicaise, S.


    Roč. 34, č. 11 (2014), s. 4371-4388 ISSN 1078-0947 R&D Projects: GA ČR GA201/09/0917 Institutional support: RVO:67985840 Keywords : exponential stability * polynomial stability * observability inequality Subject RIV: BA - General Mathematics Impact factor: 0.826, year: 2014

  7. Polynomial Asymptotes of the Second Kind

    Dobbs, David E.


    This note uses the analytic notion of asymptotic functions to study when a function is asymptotic to a polynomial function. Along with associated existence and uniqueness results, this kind of asymptotic behaviour is related to the type of asymptote that was recently defined in a more geometric way. Applications are given to rational functions and…

  8. Characteristic polynomials of linear polyacenes and their ...

    Coefficients of characteristic polynomials (CP) of linear polyacenes (LP) have been shown to be obtainable from Pascal's triangle by using a graph factorisation and squaring technique. Strong subspectrality existing among the members of the linear polyacene series has been shown from the derivation of the CP's. Thus it ...

  9. Coherent states for polynomial su(2) algebra

    Sadiq, Muhammad; Inomata, Akira


    A class of generalized coherent states is constructed for a polynomial su(2) algebra in a group-free manner. As a special case, the coherent states for the cubic su(2) algebra are discussed. The states so constructed reduce to the usual SU(2) coherent states in the linear limit

  10. Bernoulli Polynomials, Fourier Series and Zeta Numbers

    Scheufens, Ernst E


    Fourier series for Bernoulli polynomials are used to obtain information about values of the Riemann zeta function for integer arguments greater than one. If the argument is even we recover the well-known exact values; if the argument is odd we find integral representations and rapidly convergent...
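    The even-argument values in question follow from zeta(2k) = (-1)^(k+1) B_{2k} (2*pi)^(2k) / (2*(2k)!); a quick numerical check against a direct partial sum:

```python
import math

# Exact even-argument zeta values from Bernoulli numbers.
B = {2: 1.0/6.0, 4: -1.0/30.0}        # Bernoulli numbers B_2, B_4

def zeta_even(two_k):
    k = two_k // 2
    return ((-1)**(k + 1) * B[two_k] * (2*math.pi)**two_k
            / (2 * math.factorial(two_k)))

# Slowly convergent partial sum of zeta(2) for comparison.
direct = sum(1.0/n**2 for n in range(1, 100_000))
```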

  11. Euler Polynomials, Fourier Series and Zeta Numbers

    Scheufens, Ernst E


    Fourier series for Euler polynomials are used to obtain information about values of the Riemann zeta function for integer arguments greater than one. If the argument is even we recover the well-known exact values; if the argument is odd we find integral representations and rapidly convergent series....

  12. Automatic Control Systems Modeling by Volterra Polynomials

    S. V. Solodusha


    Full Text Available The existence of solutions of polynomial Volterra integral equations of the first kind of the second degree is considered. An algorithm for the numerical solution of one class of nonlinear Volterra systems of the first kind is developed. Numerical results for test examples are presented.

  13. Spectral properties of birth-death polynomials

    van Doorn, Erik A.


    We consider sequences of polynomials that are defined by a three-terms recurrence relation and orthogonal with respect to a positive measure on the nonnegative axis. By a famous result of Karlin and McGregor such sequences are instrumental in the analysis of birth-death processes. Inspired by

  15. Optimization of Cubic Polynomial Functions without Calculus

    Taylor, Ronald D., Jr.; Hansen, Ryan


    In algebra and precalculus courses, students are often asked to find extreme values of polynomial functions in the context of solving an applied problem; but without the notion of derivative, something is lost. Either the functions are reduced to quadratics, since students know the formula for the vertex of a parabola, or solutions are…

  16. transformation of independent variables in polynomial regression ...


    preferable when possible to work with a simple functional form in transformed variables rather than with a more complicated form in the original variables. In this paper, it is shown that linear transformations applied to independent variables in polynomial regression models affect the t ratio and hence the statistical ...

  17. Inequalities for a Polynomial and its Derivative

    V K Jain. Proceedings – Mathematical Sciences, Volume 110, Issue 2, May 2000, pp 137- ...

  18. Integral Inequalities for Self-Reciprocal Polynomials

    Horst Alzer. Proceedings – Mathematical Sciences, Volume 120, Issue 2, April 2010 ...

  19. Discrimination of Sedimentary Lithologies Through Unmixing of EO-1 Hyperion Data: Melville Island, Canadian High Arctic

    Leverington, D. W.


    The use of remote-sensing techniques in the discrimination of rock and soil classes in northern regions can help support a diverse range of activities including environmental characterization, mineral exploration, and the study of Quaternary paleoenvironments. Images of low spectral resolution can commonly be used in the mapping of lithological classes possessing distinct spectral characteristics, but hyperspectral databases offer greater potential for discrimination of materials distinguished by more subtle reflectance properties. Orbiting sensors offer an especially flexible and cost-effective means for acquisition of data to workers unable to conduct airborne surveys. In an effort to better constrain the utility of hyperspectral datasets in northern research, this study undertook to investigate the effectiveness of EO-1 Hyperion data in the discrimination and mapping of surface classes at a study area on Melville Island, Nunavut. Bedrock units in the immediate study area consist of late-Paleozoic clastic and carbonate sequences of the Sverdrup Basin. Weathered and frost-shattered felsenmeer, predominantly taking the form of boulder- to pebble-sized clasts that have accumulated in place and that mantle parent bedrock units, is the most common surface material in the study area. Hyperion data were converted from at-sensor radiance to reflectance, and were then linearly unmixed on the basis of end-member spectra measured from field samples. Hyperion unmixing results effectively portray the general fractional cover of six end members, although the fraction images of several materials contain background values that in some areas overestimate surface exposure. The best separated end members include the snow, green vegetation, and red-weathering sandstone classes, whereas the classes most negatively affected by elevated fraction values include the mudstone, limestone, and 'other' sandstone classes. Local overestimates of fractional cover are likely related to the
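
    The linear unmixing step described above can be sketched in a few lines (an illustration of the general technique, not the study's processing chain): each pixel spectrum is modelled as a linear mixture of end-member spectra, y = E f, and the fractional abundances f are recovered by least squares. Operational unmixing additionally enforces nonnegativity and sum-to-one constraints on f, which are omitted here.

    ```python
    import numpy as np

    # Toy linear spectral unmixing: columns of E are end-member spectra,
    # f_true holds the fractional abundances of a synthetic mixed pixel.
    bands, n_em = 50, 3
    rng = np.random.default_rng(1)
    E = rng.random((bands, n_em))        # end-member spectra (one per column)
    f_true = np.array([0.6, 0.3, 0.1])   # true fractional cover
    y = E @ f_true                       # noiseless mixed-pixel spectrum

    # Unconstrained least-squares inversion recovers the fractions exactly
    # in this noiseless, full-rank setting.
    f_est, *_ = np.linalg.lstsq(E, y, rcond=None)
    print(np.round(f_est, 3))  # [0.6 0.3 0.1]
    ```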

  20. Density of Real Zeros of the Tutte Polynomial

    Ok, Seongmin; Perrett, Thomas


    The Tutte polynomial of a graph is a two-variable polynomial whose zeros and evaluations encode many interesting properties of the graph. In this article we investigate the real zeros of the Tutte polynomials of graphs, and show that they form a dense subset of certain regions of the plane. This is the first density result for the real zeros of the Tutte polynomial in a region of positive volume. Our result almost confirms a conjecture of Jackson and Sokal except for one region which is related to an open problem on flow polynomials.
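
    The evaluations mentioned above can be computed for small graphs via the standard subgraph (rank–nullity) expansion; the following sketch (mine, for orientation only) evaluates T_G(x, y) and checks two classical facts: T(1, 1) counts spanning trees and T(2, 2) = 2^|E|.

    ```python
    from itertools import combinations

    def tutte(vertices, edges, x, y):
        """Evaluate the Tutte polynomial T_G(x, y) via the subgraph expansion
        T = sum over A subset of E of (x-1)^(r(E)-r(A)) * (y-1)^(|A|-r(A)),
        with rank r(A) = |V| - (number of components of (V, A))."""
        def ncomp(sub):
            # count connected components with a small union-find
            parent = {v: v for v in vertices}
            def find(v):
                while parent[v] != v:
                    parent[v] = parent[parent[v]]
                    v = parent[v]
                return v
            for u, w in sub:
                ru, rw = find(u), find(w)
                if ru != rw:
                    parent[ru] = rw
            return len({find(v) for v in vertices})

        n = len(vertices)
        r_full = n - ncomp(edges)
        total = 0
        for k in range(len(edges) + 1):
            for sub in combinations(edges, k):
                r = n - ncomp(list(sub))
                total += (x - 1) ** (r_full - r) * (y - 1) ** (k - r)
        return total

    # Triangle K3: T(1,1) = number of spanning trees = 3; T(2,2) = 2**3 = 8.
    tri_v = [0, 1, 2]
    tri_e = [(0, 1), (1, 2), (0, 2)]
    print(tutte(tri_v, tri_e, 1, 1))  # 3
    print(tutte(tri_v, tri_e, 2, 2))  # 8
    ```

    The expansion is exponential in |E|, so it is only practical for very small graphs; it avoids the bridge/loop case analysis of deletion–contraction.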

  2. Some Polynomials Associated with the r-Whitney Numbers


    Abstract. In the present article we study three families of polynomials associated with ... [29, 39] for their relations with the Bernoulli and generalized Bernoulli polynomials and ... generating functions in a similar way as in the classical cases.

  3. On an Inequality Concerning the Polar Derivative of a Polynomial

    Abstract. In this paper, we present a correct proof of an Lp-inequality concerning the polar derivative of a polynomial with restricted zeros. We also extend Zygmund's inequality to the polar derivative of a polynomial.

  4. 2-variable Laguerre matrix polynomials and Lie-algebraic techniques

    Khan, Subuhi; Hassan, Nader Ali Makboul


    The authors introduce 2-variable forms of Laguerre and modified Laguerre matrix polynomials and derive their special properties. Further, the representations of the special linear Lie algebra sl(2) and the harmonic oscillator Lie algebra G(0,1) are used to derive certain results involving these polynomials. Furthermore, the generating relations for the ordinary as well as matrix polynomials related to these matrix polynomials are derived as applications.

  5. Algebraic limit cycles in polynomial systems of differential equations

    Llibre, Jaume; Zhao Yulin


    Using elementary tools we construct cubic polynomial systems of differential equations with algebraic limit cycles of degrees 4, 5 and 6. We also construct a cubic polynomial system of differential equations having an algebraic homoclinic loop of degree 3. Moreover, we show that there are polynomial systems of differential equations of arbitrary degree that have algebraic limit cycles of degree 3, as well as give an example of a cubic polynomial system of differential equations with two algebraic limit cycles of degree 4

  6. The generalized Yablonskii-Vorob'ev polynomials and their properties

    Kudryashov, Nikolai A.; Demina, Maria V.


    Rational solutions of the generalized second Painleve hierarchy are classified. Representation of the rational solutions in terms of special polynomials, the generalized Yablonskii-Vorob'ev polynomials, is introduced. Differential-difference relations satisfied by the polynomials are found. Hierarchies of differential equations related to the generalized second Painleve hierarchy are derived. One of these hierarchies is a sequence of differential equations satisfied by the generalized Yablonskii-Vorob'ev polynomials

  7. Polynomial selection in number field sieve for integer factorization

    Gireesh Pandey


    Full Text Available The general number field sieve (GNFS is the fastest algorithm for factoring large composite integers that are the product of two primes. Polynomial selection is an important step of GNFS. The asymptotic runtime depends on the choice of good polynomial pairs. In this paper, we present a polynomial selection algorithm modelled on size and root properties. The correlations between polynomial coefficients and the number of relations have been explored with experimental findings.
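
    As background for this entry, the classical starting point for GNFS polynomial selection is the base-m construction, which the paper's size/root optimizations refine. A minimal sketch (the refinements themselves are not reproduced here): pick m ≈ N^(1/d) and read the polynomial coefficients off the base-m digits of N, so that f(m) = N by construction.

    ```python
    def base_m_polynomial(N, d):
        """Base-m polynomial selection sketch: return (coeffs, m) with
        f(x) = sum(coeffs[i] * x**i) and f(m) == N, from base-m digits of N."""
        m = round(N ** (1.0 / d))
        coeffs, r = [], N
        for _ in range(d + 1):
            r, digit = divmod(r, m)
            coeffs.append(digit)        # coefficient of x**len(coeffs)-1
        assert r == 0, "N too large for this degree"
        return coeffs, m

    # Demo integer (in practice N is the composite to be factored).
    N = 10**15 + 37
    coeffs, m = base_m_polynomial(N, 3)
    print(coeffs, m)                                      # [37, 0, 0, 1] 100000
    print(sum(c * m**i for i, c in enumerate(coeffs)) == N)  # True
    ```

    Real selectors then score many such candidates by coefficient size and root properties, which is the correlation the abstract studies.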

  8. Contributions to fuzzy polynomial techniques for stability analysis and control

    Pitarch Pérez, José Luis


    The present thesis employs fuzzy-polynomial control techniques in order to improve the stability analysis and control of nonlinear systems. It first reviews the most widespread techniques in the field of Takagi-Sugeno fuzzy systems, as well as the most relevant results on polynomial and fuzzy polynomial systems. The basic framework uses fuzzy polynomial models obtained via Taylor series and sum-of-squares techniques (semidefinite programming) in order to obtain stability guarantees...

  9. Interlacing of zeros of quasi-orthogonal Meixner polynomials | Driver ...

    ... interlacing of zeros of quasi-orthogonal Meixner polynomials Mn(x; β; c) with the zeros of their nearest orthogonal counterparts Ml(x; β + k; c), l, n ∈ ℕ, k ∈ {1, 2}, is also discussed. Mathematics Subject Classification (2010): 33C45, 42C05. Key words: Discrete orthogonal polynomials, quasi-orthogonal polynomials, Meixner

  10. Strong result for real zeros of random algebraic polynomials

    T. Uno


    Full Text Available An estimate is given for the lower bound of real zeros of random algebraic polynomials whose coefficients are non-identically distributed dependent Gaussian random variables. Moreover, our estimated measure of the exceptional set, which is independent of the degree of the polynomials, tends to zero as the degree of the polynomial tends to infinity.
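
    The expected number of real zeros discussed here can be probed numerically. A Monte Carlo sketch (illustrative only; the paper treats non-identically distributed, dependent coefficients, whereas this uses the classical i.i.d. case, where the Kac asymptotic (2/π) log n applies):

    ```python
    import numpy as np

    # Average number of real zeros of degree-n polynomials with i.i.d.
    # standard normal coefficients; for n = 50 the Kac asymptotic
    # (2/pi)*log(n) + 0.63 predicts roughly 3 real zeros.
    rng = np.random.default_rng(42)
    n, trials = 50, 300
    counts = []
    for _ in range(trials):
        coeffs = rng.standard_normal(n + 1)
        roots = np.roots(coeffs)
        # real eigenvalues of the real companion matrix come back with
        # exactly zero imaginary part
        counts.append(int(np.sum(roots.imag == 0)))
    mean_real = float(np.mean(counts))
    print(mean_real)
    ```

    The striking point, echoed by the abstract, is how few real zeros a high-degree random polynomial has: the count grows only logarithmically in the degree.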

  11. On the Lorentz degree of a product of polynomials

    Ait-Haddou, Rachid


    In this note, we negatively answer two questions of T. Erdélyi (1991, 2010) on possible lower bounds on the Lorentz degree of product of two polynomials. We show that the correctness of one question for degree two polynomials is a direct consequence of a result of Barnard et al. (1991) on polynomials with nonnegative coefficients.

  12. A Determinant Expression for the Generalized Bessel Polynomials

    Sheng-liang Yang


    Full Text Available Using the exponential Riordan arrays, we show that a variation of the generalized Bessel polynomial sequence is of Sheffer type, and we obtain a determinant formula for the generalized Bessel polynomials. As a result, the Bessel polynomial is represented as a determinant whose entries involve Catalan numbers.
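
    For orientation, the ordinary Bessel polynomials the abstract generalizes can be generated from the standard three-term recurrence y_n(x) = (2n−1) x y_{n−1}(x) + y_{n−2}(x), with y_0 = 1 and y_1 = 1 + x (this is the textbook recurrence, not the paper's Riordan-array construction):

    ```python
    def bessel_coeffs(n):
        """Coefficient list [c0, c1, ..., cn] of the Bessel polynomial y_n(x),
        built from y_n = (2n - 1) * x * y_{n-1} + y_{n-2}."""
        prev, cur = [1], [1, 1]               # y_0 and y_1
        if n == 0:
            return prev
        for k in range(2, n + 1):
            shifted = [0] + [(2 * k - 1) * c for c in cur]   # (2k-1)*x*y_{k-1}
            padded = prev + [0] * (len(shifted) - len(prev)) # align y_{k-2}
            prev, cur = cur, [a + b for a, b in zip(shifted, padded)]
        return cur

    print(bessel_coeffs(2))  # [1, 3, 3]       i.e. 1 + 3x + 3x^2
    print(bessel_coeffs(3))  # [1, 6, 15, 15]  i.e. 1 + 6x + 15x^2 + 15x^3
    ```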

  13. On the estimation of the degree of regression polynomial

    Toeroek, Cs.


    The mathematical functions most commonly used to model curvature in plots are polynomials. Generally, the higher the degree of the polynomial, the more complex is the trend that its graph can represent. We propose a new statistical-graphical approach based on the discrete projective transformation (DPT) to estimating the degree of polynomial that adequately describes the trend in the plot
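
    A crude alternative to the DPT approach of this entry, shown only to make the degree-estimation problem concrete: fit polynomials of increasing degree and choose the first degree at which the residual sum of squares collapses. (The threshold rule below is my own illustrative choice, not the paper's method.)

    ```python
    import numpy as np

    # Noiseless cubic trend; the residual collapses once the degree reaches 3.
    x = np.linspace(-1.0, 1.0, 40)
    y = 2.0 - x + 0.5 * x**3

    def rss(deg):
        c = np.polyfit(x, y, deg)
        return float(np.sum((np.polyval(c, x) - y) ** 2))

    r0 = rss(0)   # residual of the constant fit, used as a scale
    degree = next(d for d in range(8) if rss(d) < 1e-10 * r0)
    print(degree)  # 3
    ```

    With noisy data the collapse is no longer sharp, which is precisely why principled criteria (or the paper's statistical-graphical approach) are needed.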

  14. Zeros and uniqueness of Q-difference polynomials of meromorphic ...

    Meromorphic functions; Nevanlinna theory; logarithmic order; uniqueness problem; difference-differential polynomial. Abstract. In this paper, we investigate the value distribution of q-difference polynomials of meromorphic functions of finite logarithmic order, and study the zero distribution of difference-differential polynomials ...

  15. Uniqueness and zeros of q-shift difference polynomials

    In this paper, we consider the zero distributions of q-shift difference polynomials of meromorphic functions with zero order, and obtain two theorems that extend the classical Hayman results on the zeros of differential polynomials to q-shift difference polynomials. We also investigate the uniqueness problem of q-shift ...

  16. Polynomially Riesz elements | Živković-Zlatanović | Quaestiones ...

    A Banach algebra element a ∈ A is said to be "polynomially Riesz", relative to the homomorphism T : A → B, if there exists a nonzero complex polynomial p(z) such that the image Tp(a) ∈ B is quasinilpotent. Keywords: Homomorphism of Banach algebras, polynomially Riesz element, Fredholm spectrum, Browder element, ...

  17. Multivariable biorthogonal continuous-discrete Wilson and Racah polynomials

    Tratnik, M.V.


    Several families of multivariable, biorthogonal, partly continuous and partly discrete, Wilson polynomials are presented. These yield limit cases that are purely continuous in some of the variables and purely discrete in the others, or purely discrete in all the variables. The latter are referred to as the multivariable biorthogonal Racah polynomials. Interesting further limit cases include the multivariable biorthogonal Hahn and dual Hahn polynomials

  18. Commutators with idempotent values on multilinear polynomials in ...

    Multilinear polynomial; derivations; generalized polynomial identity; prime ring; right ideal. Abstract. Let R be a prime ring of characteristic different from 2, C its extended centroid, d a nonzero derivation of R , f ( x 1 , … , x n ) a multilinear polynomial over C , ϱ a nonzero right ideal of R and m > 1 a fixed integer such that.

  19. Approximating Exponential and Logarithmic Functions Using Polynomial Interpolation

    Gordon, Sheldon P.; Yang, Yajun


    This article takes a closer look at the problem of approximating the exponential and logarithmic functions using polynomials. Either as an alternative to or a precursor to Taylor polynomial approximations at the precalculus level, interpolating polynomials are considered. A measure of error is given and the behaviour of the error function is…
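
    The comparison this article discusses can be reproduced in miniature (the setup below is mine, not the article's): a degree-3 interpolating polynomial for exp on [0, 1], built on four equispaced nodes, beats the degree-3 Taylor polynomial at 0 in maximum error over the interval.

    ```python
    import numpy as np

    x = np.linspace(0.0, 1.0, 401)                 # evaluation grid
    nodes = np.linspace(0.0, 1.0, 4)               # 4 nodes -> degree 3
    interp = np.polyfit(nodes, np.exp(nodes), 3)   # interpolating polynomial
    taylor = [1/6, 1/2, 1.0, 1.0]                  # x^3/6 + x^2/2 + x + 1

    err_interp = np.max(np.abs(np.polyval(interp, x) - np.exp(x)))
    err_taylor = np.max(np.abs(np.polyval(taylor, x) - np.exp(x)))
    print(err_interp < err_taylor)  # True
    ```

    The Taylor polynomial is accurate only near its expansion point, while the interpolant spreads its accuracy across the nodes, which is the pedagogical point the article develops.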

  20. Degenerate r-Stirling Numbers and r-Bell Polynomials

    Kim, T.; Yao, Y.; Kim, D. S.; Jang, G.-W.


    The purpose of this paper is to exploit umbral calculus in order to derive some properties, recurrence relations, and identities related to the degenerate r-Stirling numbers of the second kind and the degenerate r-Bell polynomials. Especially, we will express the degenerate r-Bell polynomials as linear combinations of many well-known families of special polynomials.
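
    The classical (non-degenerate, r = 0) special case of the objects in this entry is easy to compute and useful for orientation: Stirling numbers of the second kind via the usual recurrence S(n, k) = k·S(n−1, k) + S(n−1, k−1), and the Bell polynomial B_n(x) = Σ_k S(n, k) x^k.

    ```python
    from functools import lru_cache

    @lru_cache(maxsize=None)
    def stirling2(n, k):
        """Stirling number of the second kind S(n, k)."""
        if n == k:
            return 1
        if k == 0 or k > n:
            return 0
        return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

    def bell_poly(n, x):
        """Bell polynomial B_n(x) = sum_k S(n, k) * x**k."""
        return sum(stirling2(n, k) * x**k for k in range(n + 1))

    print(stirling2(4, 2))  # 7
    print(bell_poly(3, 1))  # 5, the third Bell number
    ```

    The degenerate r-versions studied in the paper reduce to these when the degeneracy parameter tends to zero and r = 0.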

  1. An Improved Unmixing-Based Fusion Method: Potential Application to Remote Monitoring of Inland Waters

    Yulong Guo


    Full Text Available Although remote sensing technology has been widely used to monitor inland water bodies, the lack of suitable data with high spatial and spectral resolution has severely obstructed its practical development. The objective of this study is to improve the unmixing-based fusion (UBF) method to produce fused images that maintain both the spectral and the spatial information of the original images. Images from Environmental Satellite 1 (HJ1) and the Medium Resolution Imaging Spectrometer (MERIS) were used in this study to validate the method. An improved UBF (IUBF) algorithm is established by selecting a proper HJ1-CCD image band for each MERIS band and then applying an unsupervised classification method in each sliding window. In terms of visual appearance, radiance, and spectrum, the results show that the improved method effectively yields images with the spatial resolution of the HJ1-CCD image and the spectral resolution of the MERIS image. When validated using two datasets, the ERGAS index (Relative Dimensionless Global Error) indicates that IUBF is more robust than UBF. Finally, the fused data were applied to evaluate the chlorophyll-a concentrations (Cchla) in Taihu Lake. The result shows that the Cchla map obtained by IUBF fusion captures more detailed information than that of MERIS.

  2. Estimating the formation age distribution of continental crust by unmixing zircon ages

    Korenaga, Jun


    Continental crust provides first-order control on Earth's surface environment, enabling the presence of stable dry landmasses surrounded by deep oceans. The evolution of continental crust is important for atmospheric evolution, because continental crust is an essential component of the deep carbon cycle and is likely to have played a critical role in the oxygenation of the atmosphere. Geochemical information stored in the mineral zircon, known for its resilience to diagenesis and metamorphism, has been central to ongoing debates on the genesis and evolution of continental crust. However, correction for crustal reworking, which is the most critical step when estimating original formation ages, has been incorrectly formulated, undermining the significance of previous estimates. Here I suggest a simple yet promising approach for reworking correction using the global compilation of zircon data. The present-day distribution of crustal formation age estimated by the new "unmixing" method serves as the lower bound on true crustal growth, and large deviations from growth models based on mantle depletion imply an important role for crustal recycling through Earth's history.

  3. Spectral unmixing of urban land cover using a generic library approach

    Degerickx, Jeroen; Lordache, Marian-Daniel; Okujeni, Akpona; Hermy, Martin; van der Linden, Sebastian; Somers, Ben


    Remote sensing based land cover classification in urban areas generally requires the use of subpixel classification algorithms to take into account the high spatial heterogeneity. These spectral unmixing techniques often rely on spectral libraries, i.e. collections of pure material spectra (endmembers, EM), which ideally cover the large EM variability typically present in urban scenes. Despite the advent of several (semi-)automated EM detection algorithms, the collection of such image-specific libraries remains a tedious and time-consuming task. As an alternative, we suggest the use of a generic urban EM library, containing material spectra under varying conditions, acquired from different locations and sensors. This approach requires an efficient EM selection technique, capable of selecting only those spectra relevant for a specific image. In this paper, we evaluate and compare the potential of different existing library pruning algorithms (Iterative Endmember Selection and MUSIC) using simulated hyperspectral (APEX) data of the Brussels metropolitan area. In addition, we develop a new hybrid EM selection method which is shown to be highly efficient in dealing with both image-specific and generic libraries, subsequently yielding more robust land cover classification results compared to existing methods. Future research will include further optimization of the proposed algorithm and additional tests on both simulated and real hyperspectral data.
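
    The library-pruning idea can be illustrated with a toy greedy selector (a simplification in the spirit of Iterative Endmember Selection; the paper's IES, MUSIC and hybrid methods are considerably more elaborate): repeatedly add the library spectrum that most reduces the least-squares reconstruction residual of the image pixels, until the pixels are explained.

    ```python
    import numpy as np

    # Synthetic setup: a 6-spectrum library, of which only columns 1 and 4
    # actually occur in the simulated 20-pixel scene.
    rng = np.random.default_rng(7)
    bands, lib_size = 30, 6
    library = rng.random((bands, lib_size))
    truth = [1, 4]
    fracs = rng.dirichlet(np.ones(len(truth)), size=20)
    pixels = fracs @ library[:, truth].T          # (20, bands)

    def residual(selected):
        """Total squared reconstruction error of all pixels using the
        selected library columns (unconstrained least squares)."""
        if not selected:
            return float(np.sum(pixels ** 2))
        E = library[:, selected]
        coef, *_ = np.linalg.lstsq(E, pixels.T, rcond=None)
        return float(np.sum((E @ coef - pixels.T) ** 2))

    selected, tol = [], 1e-10 * residual([])
    while residual(selected) > tol and len(selected) < lib_size:
        best = min((j for j in range(lib_size) if j not in selected),
                   key=lambda j: residual(selected + [j]))
        selected.append(best)
    print(sorted(selected))
    ```

    Because the pixels lie exactly in the span of the two true endmembers, the residual only vanishes once both are selected; real scenes add noise, shade and EM variability, which is what the paper's hybrid method addresses.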

  4. Large level crossings of a random polynomial

    Kambiz Farahmand


    Full Text Available We know the expected number of times that a polynomial of degree n with independent random real coefficients asymptotically crosses the level K, when K is any real value such that K²/n → 0 as n → ∞. The present paper shows that, when K is allowed to be large, this expected number of crossings reduces to only one. The coefficients of the polynomial are assumed to be normally distributed. It is shown that it is sufficient to let K ≥ exp(nf), where f is any function of n such that f → ∞ as n → ∞.

  5. Sparse DOA estimation with polynomial rooting

    Xenaki, Angeliki; Gerstoft, Peter; Fernandez Grande, Efren


    Direction-of-arrival (DOA) estimation involves the localization of a few sources from a limited number of observations on an array of sensors. Thus, DOA estimation can be formulated as a sparse signal reconstruction problem and solved efficiently with compressive sensing (CS) to achieve high-resolution imaging. Utilizing the dual optimal variables of the CS optimization problem, it is shown with Monte Carlo simulations that the DOAs are accurately reconstructed through polynomial rooting (Root-CS). Polynomial rooting is known to improve the resolution in several other DOA estimation methods...

  6. On factorization of generalized Macdonald polynomials

    Kononov, Ya.; Morozov, A.


    A remarkable feature of Schur functions - the common eigenfunctions of cut-and-join operators from W∞ - is that they factorize at the peculiar two-parametric topological locus in the space of time variables, which is known as the hook formula for quantum dimensions of representations of Uq(SLN) and which plays a big role in various applications. This factorization survives at the level of Macdonald polynomials. We look for its further generalization to generalized Macdonald polynomials (GMPs), associated in the same way with the toroidal Ding-Iohara-Miki algebras, which play the central role in modern studies in Seiberg-Witten-Nekrasov theory. In the simplest case of the first-coproduct eigenfunctions, where GMPs depend on just two sets of time variables, we discover a weak factorization - on a one- (rather than four-) parametric slice of the topological locus - which is already a very non-trivial property, calling for proof and better understanding. (orig.)

  7. Quantum Hurwitz numbers and Macdonald polynomials

    Harnad, J.


    Parametric families in the center Z(C[Sn]) of the group algebra of the symmetric group are obtained by identifying the indeterminates in the generating function for Macdonald polynomials as commuting Jucys-Murphy elements. Their eigenvalues provide coefficients in the double Schur function expansion of 2D Toda τ-functions of hypergeometric type. Expressing these in the basis of products of power sum symmetric functions, the coefficients may be interpreted geometrically as parametric families of quantum Hurwitz numbers, enumerating weighted branched coverings of the Riemann sphere. Combinatorially, they give quantum weighted sums over paths in the Cayley graph of Sn generated by transpositions. Dual pairs of bases for the algebra of symmetric functions with respect to the scalar product in which the Macdonald polynomials are orthogonal provide both the geometrical and combinatorial significance of these quantum weighted enumerative invariants.

  8. Polynomial chaos representation of databases on manifolds

    Soize, C. (Université Paris-Est, Laboratoire Modélisation et Simulation Multi-Echelle, MSME UMR 8208 CNRS, 5 bd Descartes, 77454 Marne-La-Vallée Cedex 2, France); Ghanem, R. (University of Southern California, 210 KAP Hall, Los Angeles, CA 90089, United States)


    Characterizing the polynomial chaos expansion (PCE) of a vector-valued random variable with probability distribution concentrated on a manifold is a relevant problem in data-driven settings. The probability distribution of such random vectors is multimodal in general, leading to potentially very slow convergence of the PCE. In this paper, we build on a recent development for estimating and sampling from probabilities concentrated on a diffusion manifold. The proposed methodology constructs a PCE of the random vector together with an associated generator that samples from the target probability distribution which is estimated from data concentrated in the neighborhood of the manifold. The method is robust and remains efficient for high dimension and large datasets. The resulting polynomial chaos construction on manifolds permits the adaptation of many uncertainty quantification and statistical tools to emerging questions motivated by data-driven queries.
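
    A one-dimensional illustration of the polynomial chaos expansion underlying this entry (the paper's manifold-concentrated, high-dimensional setting is far more general): expand g(X) = X², with X ~ N(0, 1), in probabilists' Hermite polynomials He_k, using c_k = E[g(X) He_k(X)] / k! computed by Gauss–Hermite quadrature. Exactly, X² = He_0 + He_2, so the coefficients should be (1, 0, 1, 0).

    ```python
    import numpy as np
    from math import factorial
    from numpy.polynomial.hermite_e import hermegauss, hermeval

    g = lambda t: t**2
    nodes, weights = hermegauss(8)        # quadrature for weight exp(-t^2/2)
    norm = np.sqrt(2.0 * np.pi)           # sum of the weights

    coeffs = []
    for k in range(4):
        he_k = hermeval(nodes, [0] * k + [1])     # He_k evaluated at the nodes
        ck = np.sum(weights * g(nodes) * he_k) / norm / factorial(k)
        coeffs.append(float(ck))
    print(np.round(coeffs, 6))
    ```

    For multimodal targets concentrated near a manifold, as in the paper, such a direct expansion converges very slowly, which is what motivates the diffusion-manifold construction.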

  9. Polynomial structures in one-loop amplitudes

    Britto, Ruth; Feng Bo; Yang Gang


    A general one-loop scattering amplitude may be expanded in terms of master integrals. The coefficients of the master integrals can be obtained from tree-level input in a two-step process. First, use known formulas to write the coefficients of (4-2ε)-dimensional master integrals; these formulas depend on an additional variable, u, which encodes the dimensional shift. Second, convert the u-dependent coefficients of (4-2ε)-dimensional master integrals to explicit coefficients of dimensionally shifted master integrals. This procedure requires the initial formulas for coefficients to have polynomial dependence on u. Here, we give a proof of this property in the case of massless propagators. The proof is constructive. Thus, as a byproduct, we produce different algebraic expressions for the scalar integral coefficients, in which the polynomial property is apparent. In these formulas, the box and pentagon contributions are separated explicitly.

  10. Link polynomial, crossing multiplier and surgery formula

    Deguchi, Tetsuo; Yamada, Yasuhiko.


    Relations between link polynomials constructed from exactly solvable lattice models and topological field theory are reviewed. It is found that the surgery formula for a three-sphere S³ with Wilson lines corresponds to the Markov trace constructed from the exactly solvable models. This indicates that knot theory intimately relates various important subjects such as exactly solvable models, conformal field theories and topological quantum field theories. (author)

  11. Completeness of the ring of polynomials

    Thorup, Anders


    Consider the polynomial ring R := k[X1,…,Xn] in n ≥ 2 variables over an uncountable field k. We prove that R is complete in its adic topology, that is, the translation-invariant topology in which the non-zero ideals form a fundamental system of neighborhoods of 0. In addition we pro...

  12. Moments, positive polynomials and their applications

    Lasserre, Jean Bernard


    Many important applications in global optimization, algebra, probability and statistics, applied mathematics, control theory, financial mathematics, inverse problems, etc. can be modeled as a particular instance of the Generalized Moment Problem (GMP). This book introduces a new general methodology to solve the GMP when its data are polynomials and basic semi-algebraic sets. This methodology combines semidefinite programming with recent results from real algebraic geometry to provide a hierarchy of semidefinite relaxations converging to the desired optimal value. Applied on appropriate cones,

  13. Polynomials and identities on real Banach spaces

    Hájek, Petr Pavel; Kraus, M.


    Vol. 385, No. 2 (2012), pp. 1015-1026. ISSN 0022-247X. R&D Projects: GA ČR(CZ) GAP201/11/0345. Institutional research plan: CEZ:AV0Z10190503. Keywords: Polynomials on Banach spaces. Subject RIV: BA - General Mathematics. Impact factor: 1.050, year: 2012

  14. Eye aberration analysis with Zernike polynomials

    Molebny, Vasyl V.; Chyzh, Igor H.; Sokurenko, Vyacheslav M.; Pallikaris, Ioannis G.; Naoumidis, Leonidas P.


    New horizons for accurate photorefractive sight correction, afforded by novel flying-spot technologies, require adequate measurement of the photorefractive properties of an eye. Proposed techniques of eye refraction mapping present measurement results for a finite number of points of the eye aperture, which must then be approximated by a 3D surface. A technique of wavefront approximation with Zernike polynomials is described, using optimization of the number of polynomial coefficients. The criterion of optimization is the closest proximity of the resulting continuous surface to the values calculated for the given discrete points. The methodology includes statistical evaluation of the minimal root mean square deviation (RMSD) of transverse aberrations; in particular, consecutively varying the maximal coefficient indices of the Zernike polynomials, recalculating the coefficients, and computing the value of RMSD. Optimization finishes at the minimal value of RMSD. Formulas are given for computing ametropia and the size of the spot of light on the retina caused by spherical aberration, coma, and astigmatism. Results are illustrated by experimental data that could be of interest for other applications where detailed evaluation of eye parameters is needed.

  15. A generalized right truncated bivariate Poisson regression model with applications to health data.

    Islam, M Ataharul; Chowdhury, Rafiqul I


    A generalized right truncated bivariate Poisson regression model is proposed in this paper. Estimation and tests for goodness of fit and over- or underdispersion are illustrated for both untruncated and right truncated bivariate Poisson regression models using a marginal-conditional approach. Estimation and test procedures are illustrated for bivariate Poisson regression models with applications to Health and Retirement Study data on the number of health conditions and the number of health care services utilized. The proposed test statistics are easy to compute, and it is evident from the results that the models fit the data very well. A comparison between the right truncated and untruncated bivariate Poisson regression models using the test for nonnested models clearly shows that the truncated model performs significantly better than the untruncated model.
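
    The untruncated bivariate Poisson model that this paper extends is commonly built by trivariate reduction, which a short simulation makes concrete (this sketch and its parameter names are illustrative, not the paper's notation): with independent Poisson variables Y0, Y1, Y2, set X1 = Y1 + Y0 and X2 = Y2 + Y0, giving Poisson margins with means l1 + l0 and l2 + l0 and covariance l0, hence built-in positive correlation.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    l0, l1, l2, n = 1.0, 2.0, 1.0, 200_000   # shared and idiosyncratic rates

    y0 = rng.poisson(l0, n)                  # common shock
    x1 = rng.poisson(l1, n) + y0             # mean l1 + l0 = 3
    x2 = rng.poisson(l2, n) + y0             # mean l2 + l0 = 2

    print(x1.mean(), x2.mean())              # close to 3.0 and 2.0
    print(np.cov(x1, x2)[0, 1])              # close to l0 = 1.0
    ```

    Right truncation, as studied in the paper, then restricts the observable range of (X1, X2), which is what complicates estimation and motivates the marginal-conditional approach.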

  16. On the matched pairs sign test using bivariate ranked set sampling ...

    ... (BVRSS) is introduced and investigated. We show that this test is asymptotically more efficient than its counterpart sign test based on a bivariate simple random sample (BVSRS). The asymptotic null distribution and the efficiency of the test are derived.

  17. Validation of Spectral Unmixing Results from Informed Non-Negative Matrix Factorization (INMF) of Hyperspectral Imagery

    Wright, L.; Coddington, O.; Pilewskie, P.


    Hyperspectral instruments are a growing class of Earth observing sensors designed to improve remote sensing capabilities beyond discrete multi-band sensors by providing tens to hundreds of continuous spectral channels. Improved spectral resolution, range and radiometric accuracy allow the collection of large amounts of spectral data, facilitating thorough characterization of both atmospheric and surface properties. We describe the development of an Informed Non-Negative Matrix Factorization (INMF) spectral unmixing method to exploit this spectral information and separate atmospheric and surface signals based on their physical sources. INMF offers marked benefits over other commonly employed techniques including non-negativity, which avoids physically impossible results; and adaptability, which tailors the method to hyperspectral source separation. The INMF algorithm is adapted to separate contributions from physically distinct sources using constraints on spectral and spatial variability, and library spectra to improve the initial guess. Using this INMF algorithm we decompose hyperspectral imagery from the NASA Hyperspectral Imager for the Coastal Ocean (HICO), with a focus on separating surface and atmospheric signal contributions. HICO's coastal ocean focus provides a dataset with a wide range of atmospheric and surface conditions. These include atmospheres with varying aerosol optical thicknesses and cloud cover. HICO images also provide a range of surface conditions including deep ocean regions, with only minor contributions from the ocean surfaces; and more complex shallow coastal regions with contributions from the seafloor or suspended sediments. We provide extensive comparison of INMF decomposition results against independent measurements of physical properties. These include comparison against traditional model-based retrievals of water-leaving, aerosol, and molecular scattering radiances and other satellite products, such as aerosol optical thickness from

  18. Unmixing-Based Denoising as a Pre-Processing Step for Coral Reef Analysis

    Cerra, D.; Traganos, D.; Gege, P.; Reinartz, P.


    Coral reefs, among the world's most biodiverse and productive submerged habitats, have faced several mass bleaching events due to climate change during the past 35 years. In the course of this century, global warming and ocean acidification are expected to cause corals to become increasingly rare on reef systems. This will result in a sharp decrease in the biodiversity of reef communities and carbonate reef structures. Coral reefs may be mapped, characterized and monitored through remote sensing. Hyperspectral images in particular excel at coral monitoring: their very rich spectral information yields strong discriminative power to characterize a target of interest and to separate healthy corals from bleached ones. Being submerged habitats, coral reef systems are difficult to analyse in airborne or satellite images, as the relevant information is conveyed in bands in the blue range, which exhibit a lower signal-to-noise ratio (SNR) than other spectral ranges; furthermore, water absorbs most of the incident solar radiation, further decreasing the SNR. Derivative features, which are important in coral analysis, are strongly affected by the noise present in the relevant spectral bands, justifying the need for new denoising techniques able to preserve local spatial and spectral features. In this paper, Unmixing-based Denoising (UBD) is used to enable analysis of a hyperspectral image acquired over a coral reef system in the Red Sea based on derivative features. UBD reconstructs the dataset pixelwise with reduced noise effects by forcing each spectrum to be a linear combination of other reference spectra, exploiting the high dimensionality of hyperspectral datasets. Results show clear enhancements with respect to traditional denoising methods based on spatial and spectral smoothing, facilitating the coral detection task.

  19. A FPGA implementation for linearly unmixing a hyperspectral image using OpenCL

    Guerra, Raúl; López, Sebastián.; Sarmiento, Roberto


    Hyperspectral imaging systems provide images in which single pixels carry information from across the electromagnetic spectrum of the scene under analysis. These systems divide the spectrum into many contiguous channels, which may even lie outside the visible part of the spectrum. The main advantage of hyperspectral imaging technology is that certain objects leave unique fingerprints in the electromagnetic spectrum, known as spectral signatures, which make it possible to distinguish between materials that may look the same in a traditional RGB image. Accordingly, the most important hyperspectral imaging applications involve distinguishing or identifying materials in a particular scene. In hyperspectral imaging applications under real-time constraints, the huge amount of information provided by the hyperspectral sensors has to be rapidly processed and analysed. For this purpose, parallel hardware devices, such as Field Programmable Gate Arrays (FPGAs), are typically used. However, developing hardware applications typically requires expertise in the specific targeted device, as well as in the tools and methodologies which can be used to implement the desired algorithms on that device. In this scenario, the Open Computing Language (OpenCL) emerges as a very interesting solution in which a single high-level synthesis design language can be used to efficiently develop applications for multiple, different hardware devices. In this work, the Fast Algorithm for Linearly Unmixing Hyperspectral Images (FUN) has been implemented on a Bitware Stratix V Altera FPGA using OpenCL. The obtained results demonstrate the suitability of OpenCL as a viable design methodology for quickly creating efficient FPGA designs for real-time hyperspectral imaging applications.

  20. Simultaneous measurement of quantum yield ratio and absorption ratio between acceptor and donor by linearly unmixing excitation-emission spectra.

    Zhang, C; Lin, F; DU, M; Qu, W; Mai, Z; Qu, J; Chen, T


    The quantum yield ratio (Q_A/Q_D) and the absorption ratio (K_A/K_D) at every excitation wavelength used between acceptor and donor are indispensable for quantitative fluorescence resonance energy transfer (FRET) measurement based on linearly unmixing excitation-emission spectra (ExEm-spFRET). We here describe an approach to simultaneously measure the Q_A/Q_D and K_A/K_D values by linearly unmixing the excitation-emission spectra of at least two different donor-acceptor tandem constructs with unknown FRET efficiency. To measure the Q_A/Q_D and K_A/K_D values of Venus (V) to Cerulean (C), we used a wide-field fluorescence microscope to image living HepG2 cells separately expressing each of four different C-V tandem constructs at different emission wavelengths with 435 nm and 470 nm excitation, respectively, to obtain the corresponding excitation-emission spectrum (S_DA). Every S_DA was linearly unmixed into the contributions (weights) of three excitation-emission spectra: donor (W_D), acceptor (W_A) and donor-acceptor sensitisation (W_S). A plot of W_S/W_D versus W_A/W_D for the four C-V plasmids from at least 40 cells indicated a linear relationship with an absolute intercept (Q_A/Q_D) of 1.865 and a reciprocal slope (K_A/K_D) of 0.273, which was validated by quantitative FRET measurements adopting Q_A/Q_D = 1.865 and K_A/K_D = 0.273 for the C32V, C5V, CVC and VCV constructs respectively in living HepG2 cells. © 2018 The Authors Journal of Microscopy © 2018 Royal Microscopical Society.
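
    The linear-fit step at the heart of this measurement can be sketched directly: the slope and intercept of W_S/W_D versus W_A/W_D yield K_A/K_D and Q_A/Q_D. The data below are fabricated to match the reported ratios and are purely illustrative:

```python
import numpy as np

# hypothetical per-construct weight ratios; in the paper these come from
# unmixing the excitation-emission spectra of C-V tandem constructs
wa_wd = np.array([0.5, 1.0, 1.5, 2.0])            # W_A/W_D
ws_wd = (1.0 / 0.273) * wa_wd + (-1.865)          # W_S/W_D (noise-free, fabricated)

slope, intercept = np.polyfit(wa_wd, ws_wd, 1)
Q_ratio = abs(intercept)      # Q_A/Q_D is the absolute intercept
K_ratio = 1.0 / slope         # K_A/K_D is the reciprocal of the slope
assert abs(Q_ratio - 1.865) < 1e-9 and abs(K_ratio - 0.273) < 1e-9
```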

  1. A Polynomial Estimate of Railway Line Delay

    Cerreto, Fabrizio; Harrod, Steven; Nielsen, Otto Anker


    Railway service may be measured by the aggregate delay over a time horizon or due to an event. Timetables for railway service may dampen aggregate delay by the addition of extra process time, either supplement time or buffer time. The evaluation of these variables has previously been performed...... by numerical analysis with simulation. This paper proposes an analytical estimate of aggregate delay with a polynomial form. The function returns the aggregate delay of a railway line resulting from an initial, primary, delay. Analysis of the function demonstrates that there should be a balance between the two...

  2. Conditional Density Approximations with Mixtures of Polynomials

    Varando, Gherardo; López-Cruz, Pedro L.; Nielsen, Thomas Dyhre


    Mixtures of polynomials (MoPs) are a non-parametric density estimation technique especially designed for hybrid Bayesian networks with continuous and discrete variables. Algorithms to learn one- and multi-dimensional (marginal) MoPs from data have recently been proposed. In this paper we introduce...... two methods for learning MoP approximations of conditional densities from data. Both approaches are based on learning MoP approximations of the joint density and the marginal density of the conditioning variables, but they differ as to how the MoP approximation of the quotient of the two densities...

  3. Parallel multigrid smoothing: polynomial versus Gauss-Seidel

    Adams, Mark; Brezina, Marian; Hu, Jonathan; Tuminaro, Ray


    Gauss-Seidel is often the smoother of choice within multigrid applications. In the context of unstructured meshes, however, maintaining good parallel efficiency is difficult with multiplicative iterative methods such as Gauss-Seidel. This leads us to consider alternative smoothers. We discuss the computational advantages of polynomial smoothers within parallel multigrid algorithms for positive definite symmetric systems. Two particular polynomials are considered: Chebyshev and a multilevel specific polynomial. The advantages of polynomial smoothing over traditional smoothers such as Gauss-Seidel are illustrated on several applications: Poisson's equation, thin-body elasticity, and eddy current approximations to Maxwell's equations. While parallelizing the Gauss-Seidel method typically involves a compromise between a scalable convergence rate and maintaining high flop rates, polynomial smoothers achieve parallel scalable multigrid convergence rates without sacrificing flop rates. We show that, although parallel computers are the main motivation, polynomial smoothers are often surprisingly competitive with Gauss-Seidel smoothers on serial machines.
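
    A minimal sketch of a Chebyshev smoother of the kind compared above: the three-term recurrence needs only matrix-vector products, which is why it parallelizes so much better than Gauss-Seidel. This follows the textbook Chebyshev iteration and is not the paper's multilevel-specific polynomial:

```python
import numpy as np

def chebyshev_smooth(A, b, x, lmin, lmax, iters):
    """Chebyshev iteration for SPD A with spectrum inside [lmin, lmax].
    Every step is a matrix-vector product plus vector updates; there are
    no sequential triangular solves as in Gauss-Seidel."""
    theta = 0.5 * (lmax + lmin)          # center of the eigenvalue interval
    delta = 0.5 * (lmax - lmin)          # half-width of the interval
    sigma = theta / delta
    rho = 1.0 / sigma
    r = b - A @ x
    d = r / theta
    for _ in range(iters):
        x = x + d
        r = r - A @ d
        rho_next = 1.0 / (2.0 * sigma - rho)
        d = rho_next * rho * d + (2.0 * rho_next / delta) * r
        rho = rho_next
    return x

# usage: 1-D Poisson matrix, whose eigenvalue bounds are known in closed form
n = 16
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
lmin = 2 - 2 * np.cos(np.pi / (n + 1))
lmax = 2 + 2 * np.cos(np.pi / (n + 1))
x = chebyshev_smooth(A, b, np.zeros(n), lmin, lmax, iters=50)
assert np.linalg.norm(b - A @ x) < 1e-2 * np.linalg.norm(b)
```

    Given eigenvalue bounds, each sweep applies a fixed polynomial in A to the residual, so the method is trivially parallel.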

  5. Polynomial solutions of the Monge-Ampère equation

    Aminov, Yu A [B. Verkin Institute for Low Temperature Physics and Engineering, National Academy of Sciences of Ukraine, Khar'kov (Ukraine)]


    The question of the existence of polynomial solutions to the Monge-Ampère equation z_{xx}z_{yy} − z_{xy}^2 = f(x,y) is considered in the case when f(x,y) is a polynomial. It is proved that if f is a polynomial of the second degree, which is positive for all values of its arguments and has a positive squared part, then no polynomial solution exists. On the other hand, a solution which is not polynomial but is analytic in the whole of the x, y-plane is produced. Necessary and sufficient conditions for the existence of polynomial solutions of degree up to 4 are found and methods for the construction of such solutions are indicated. An approximation theorem is proved. Bibliography: 10 titles.
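
    For contrast with the nonexistence result for second-degree right-hand sides, the simplest constant right-hand side does admit a polynomial solution (a standard textbook illustration, not taken from the paper):

```latex
z = \tfrac{1}{2}\left(x^{2} + y^{2}\right)
\quad\Longrightarrow\quad
z_{xx}\,z_{yy} - z_{xy}^{2} = 1 \cdot 1 - 0^{2} = 1 ,
```

    so a polynomial solution exists for f ≡ 1; the obstruction proved in the paper is specific to degree-2 polynomials f that are positive with a positive squared part.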

  6. Linear operator pencils on Lie algebras and Laurent biorthogonal polynomials

    Gruenbaum, F A; Vinet, Luc; Zhedanov, Alexei


    We study operator pencils on generators of the Lie algebra sl(2) and the oscillator algebra. These pencils are linear in a spectral parameter λ. The corresponding generalized eigenvalue problem gives rise to some sets of orthogonal polynomials and Laurent biorthogonal polynomials (LBP) expressed in terms of the Gauss 2F1 and the degenerate 1F1 hypergeometric functions. For special choices of the parameters of the pencils, we identify the resulting polynomials with the Hendriksen-van Rossum LBP, which are widely believed to be the biorthogonal analogues of the classical orthogonal polynomials. This places these examples under the umbrella of the generalized bispectral problem which is considered here. Other (non-bispectral) cases give rise to some 'nonclassical' orthogonal polynomials, including Tricomi-Carlitz and random-walk polynomials. An application to solutions of the relativistic Toda chain is considered.

  7. Least squares orthogonal polynomial approximation in several independent variables

    Caprari, R.S.


    This paper begins with an exposition of a systematic technique for generating orthonormal polynomials in two independent variables by application of the Gram-Schmidt orthogonalization procedure of linear algebra. It is then demonstrated how a linear least squares approximation for experimental data or an arbitrary function can be generated from these polynomials. The least squares coefficients are computed without recourse to matrix arithmetic, which ensures both numerical stability and simplicity of implementation as a self-contained numerical algorithm. The Gram-Schmidt procedure is then utilised to generate a complete set of orthogonal polynomials of fourth degree. A theory for the transformation of the polynomial representation from an arbitrary basis into the familiar sum of products form is presented, together with a specific implementation for fourth degree polynomials. Finally, the computational integrity of this algorithm is verified by reconstructing arbitrary fourth degree polynomials from their values at randomly chosen points in their domain. 13 refs., 1 tab
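
    The procedure described in the abstract can be sketched numerically: evaluate the bivariate monomials at the sample points, orthonormalize them by Gram-Schmidt under the discrete inner product, and obtain the least-squares coefficients as plain dot products, with no matrix arithmetic. The code below is an illustrative reconstruction under these assumptions, not the paper's algorithm:

```python
import numpy as np

def orthonormal_basis(x, y, degree):
    """Gram-Schmidt orthonormalization of the monomials x**i * y**(d-i),
    d = 0..degree, under the discrete inner product <f,g> = sum_k f_k g_k."""
    cols = [(x ** i) * (y ** (d - i))
            for d in range(degree + 1) for i in range(d + 1)]
    Q = []
    for v in cols:
        w = np.asarray(v, dtype=float)
        for q in Q:
            w = w - (q @ w) * q          # remove components along earlier vectors
        Q.append(w / np.linalg.norm(w))
    return np.array(Q)                   # row i = values of i-th orthonormal polynomial

rng = np.random.default_rng(1)
x, y = rng.random(200), rng.random(200)
z = 1 + 2 * x - 3 * y + 0.5 * x * y      # a degree-2 target function
Q = orthonormal_basis(x, y, 2)
coeffs = Q @ z                           # least-squares coefficients: dot products only
z_hat = coeffs @ Q
assert np.abs(z_hat - z).max() < 1e-8    # degree-2 target is reproduced exactly
```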

  8. Need for higher order polynomial basis for polynomial nodal methods employed in LWR calculations

    Taiwo, T.A.; Palmiotti, G.


    The paper evaluates the accuracy and efficiency of sixth order polynomial solutions and the use of one radial node per core assembly for pressurized water reactor (PWR) core power distributions and reactivities. The computer code VARIANT was modified to calculate sixth order polynomial solutions for a hot zero power benchmark problem in which a control assembly along a core axis is assumed to be out of the core. Results are presented for the VARIANT, DIF3D-NODAL, and DIF3D-finite difference codes. The VARIANT results indicate that second order expansion of the within-node source and linear representation of the node surface currents are adequate for this problem. The results also demonstrate the improvement in the VARIANT solution when the order of the polynomial expansion of the within-node flux is increased from fourth to sixth order. There is a substantial saving in computational time for using one radial node per assembly with the sixth order expansion compared to using four or more nodes per assembly and fourth order polynomial solutions. 11 refs., 1 tab

  9. Note on Generating Orthogonal Polynomials and Their Application in Solving Complicated Polynomial Regression Tasks

    Knížek, J.; Tichý, Petr; Beránek, L.; Šindelář, Jan; Vojtěšek, B.; Bouchal, P.; Nenutil, R.; Dedík, O.


    Vol. 7, No. 10 (2010), pp. 48-60, ISSN 0974-5718 Grant - others: GA MZd(CZ) NS9812; GA ČR(CZ) GAP304/10/0868 Institutional research plan: CEZ:AV0Z10300504; CEZ:AV0Z10750506 Keywords: polynomial regression * orthogonalization * numerical methods * markers * biomarkers Subject RIV: BA - General Mathematics

  10. Multiple Meixner polynomials and non-Hermitian oscillator Hamiltonians

    Ndayiragije, François; Van Assche, Walter


    Multiple Meixner polynomials are polynomials in one variable which satisfy orthogonality relations with respect to $r>1$ different negative binomial distributions (Pascal distributions). There are two kinds of multiple Meixner polynomials, depending on the selection of the parameters in the negative binomial distribution. We recall their definition and some formulas and give generating functions and explicit expressions for the coefficients in the nearest neighbor recurrence relation. Followi...

  11. On Roots of Polynomials and Algebraically Closed Fields

    Schwarzweller Christoph


    Full Text Available In this article we further extend the algebraic theory of polynomial rings in Mizar [1, 2, 3]. We deal with roots and multiple roots of polynomials and show that neither the real numbers nor finite domains are algebraically closed [5, 7]. We also prove the identity theorem for polynomials and that the number of multiple roots is bounded by the polynomial's degree [4, 6].

  12. Open Problems Related to the Hurwitz Stability of Polynomials Segments

    Baltazar Aguirre-Hernández


    Full Text Available In the framework of robust stability analysis of linear systems, the development of techniques and methods that help to obtain necessary and sufficient conditions for the stability of convex combinations of polynomials is paramount. In this paper, given that the set of Hurwitz polynomials is not convex, a brief overview of some results and open problems concerning the stability of convex combinations of Hurwitz polynomials is provided.
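
    As a minimal numerical companion (ours, not from the paper): a real polynomial is Hurwitz exactly when all of its roots lie in the open left half-plane, which is easy to test for a single polynomial; certifying an entire convex combination (a segment) of such polynomials is the hard problem surveyed here:

```python
import numpy as np

def is_hurwitz(coeffs):
    """True iff every root of the polynomial (coefficients given with the
    highest degree first) has strictly negative real part."""
    return bool(np.all(np.roots(coeffs).real < 0))

assert is_hurwitz([1, 3, 2])        # s^2 + 3s + 2 = (s + 1)(s + 2)
assert not is_hurwitz([1, 0, -1])   # s^2 - 1 has the root s = +1
```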

  13. General quantum polynomials: irreducible modules and Morita equivalence

    Artamonov, V A


    In this paper we continue the investigation of the structure of finitely generated modules over rings of general quantum (Laurent) polynomials. We obtain a description of the lattice of submodules of periodic finitely generated modules and describe the irreducible modules. We investigate the problem of Morita equivalence of rings of general quantum polynomials, consider properties of division rings of fractions, and solve Zariski's problem for quantum polynomials

  14. Applications of polynomial optimization in financial risk investment

    Zeng, Meilan; Fu, Hongwei


    Recently, polynomial optimization has found important applications in optimization, financial economics, tensor eigenvalue problems and other areas. This paper studies the applications of polynomial optimization in financial risk investment. We consider the standard mean-variance risk measurement model and the mean-variance risk measurement model with transaction costs. We use Lasserre's hierarchy of semidefinite programming (SDP) relaxations to solve specific cases. The results show that polynomial optimization is effective for some financial optimization problems.
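
    The Lasserre SDP machinery is beyond a short snippet, but the standard mean-variance model that the paper takes as its starting point can be sketched directly. The equality-constrained formulation and KKT solve below are our illustration, with made-up data, not the authors' formulation:

```python
import numpy as np

def mean_variance_weights(Sigma, mu, target_return):
    """Minimum-variance, fully invested portfolio (weights sum to 1)
    hitting a target return, via the KKT system of the equality-
    constrained quadratic program  min w' Sigma w."""
    n = len(mu)
    A = np.vstack([mu, np.ones(n)])                    # constraint matrix
    K = np.block([[Sigma, A.T], [A, np.zeros((2, 2))]])
    rhs = np.concatenate([np.zeros(n), [target_return, 1.0]])
    sol = np.linalg.solve(K, rhs)                      # [weights; multipliers]
    return sol[:n]

# made-up two-asset example
Sigma = np.array([[0.04, 0.01], [0.01, 0.09]])         # covariance of returns
mu = np.array([0.08, 0.12])                            # expected returns
w = mean_variance_weights(Sigma, mu, 0.10)
assert abs(w @ mu - 0.10) < 1e-9 and abs(w.sum() - 1.0) < 1e-9
```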

  15. Root and Critical Point Behaviors of Certain Sums of Polynomials


    There is an extensive literature concerning roots of sums of polynomials. Many papers and books ([5], [6], [7]) have been written about these polynomials. Perhaps the most immediate question about sums of polynomials, A + B = C, is “given bounds for the roots of A and B, what bounds can be given for the roots of C?” By Fell [3], if ...

  16. Simulation of aspheric tolerance with polynomial fitting

    Li, Jing; Cen, Zhaofeng; Li, Xiaotong


    Changes in the shape of an aspheric lens caused by machining errors alter the optical transfer function, which degrades image quality. At present, there is no universally recognized tolerance criterion for aspheric surfaces. To study the influence of aspheric tolerances on the optical transfer function, tolerances obtained by polynomial fitting are allocated to the aspheric surface, and imaging is simulated with optical design software. The analysis is based on a set of aspheric imaging systems. An error is generated within the range of a given PV value and expressed in the form of a Zernike polynomial, which is added to the aspheric surface as a tolerance term. Through optical software analysis, the MTF of the optical system is obtained and used as the main evaluation index to judge whether the added error meets the requirements at the current PV value. The PV value is then changed and the operation repeated until the maximum acceptable PV value is obtained. In line with the actual machining process, errors of various shapes are considered, such as M-type, W-type and random errors. The method provides a useful reference for practical freeform surface machining technology.

  17. Quadratic polynomial interpolation on triangular domain

    Li, Ying; Zhang, Congcong; Yu, Qian


    In the simulation of natural terrain, the continuity of sample points is not always consistent, and traditional interpolation methods often cannot faithfully reflect the shape information carried by the data points. Therefore, a new method for constructing a polynomial interpolation surface on a triangular domain is proposed. First, the spatial scattered data points are projected onto a plane and triangulated. Second, a C1-continuous piecewise quadratic polynomial patch is constructed at each vertex, with all patches required to stay as close as possible to the linear interpolant. Finally, the unknown quantities are obtained by minimizing the objective functions, with special treatment of the boundary points. The resulting surfaces preserve as many properties of the data points as possible while satisfying given accuracy and continuity requirements, without being overly convex. The new method is simple to compute, has good local properties, and is applicable to shape fitting of mines, exploratory wells and the like. Experimental results for the new surface are given.

  18. On factorization of generalized Macdonald polynomials

    Kononov, Ya. [Landau Institute for Theoretical Physics, Chernogolovka (Russian Federation); HSE, Math Department, Moscow (Russian Federation); Morozov, A. [ITEP, Moscow (Russian Federation); Institute for Information Transmission Problems, Moscow (Russian Federation); National Research Nuclear University MEPhI, Moscow (Russian Federation)


    A remarkable feature of Schur functions - the common eigenfunctions of cut-and-join operators from W_∞ - is that they factorize at the peculiar two-parametric topological locus in the space of time variables, which is known as the hook formula for quantum dimensions of representations of U_q(SL_N) and which plays a big role in various applications. This factorization survives at the level of Macdonald polynomials. We look for its further generalization to generalized Macdonald polynomials (GMPs), associated in the same way with the toroidal Ding-Iohara-Miki algebras, which play the central role in modern studies in Seiberg-Witten-Nekrasov theory. In the simplest case of the first-coproduct eigenfunctions, where GMP depend on just two sets of time variables, we discover a weak factorization - on a one- (rather than four-) parametric slice of the topological locus, which is already a very non-trivial property, calling for proof and better understanding. (orig.)

  19. Positive trigonometric polynomials and signal processing applications

    Dumitrescu, Bogdan


    This revised edition is made up of two parts: theory and applications. Though many of the fundamental results are still valid and used, new and revised material is woven throughout the text. As with the original book, the theory of sum-of-squares trigonometric polynomials is presented unitarily based on the concept of Gram matrix (extended to Gram pair or Gram set). The programming environment has also evolved, and the book's examples are changed accordingly. The applications section is organized as a collection of related problems that use the theoretical results systematically. All the problems are brought to a semi-definite programming form, ready to be solved with freely available algorithms, like those from the libraries SeDuMi, CVX and Pos3Poly. A new chapter discusses applications in super-resolution theory, where the Bounded Real Lemma for trigonometric polynomials is an important tool. This revision is written to be more appealing and easier to use for new readers. Features updated information on LMI...

  1. From sequences to polynomials and back, via operator orderings

    Amdeberhan, Tewodros, E-mail:; Dixit, Atul, E-mail:; Moll, Victor H., E-mail: [Department of Mathematics, Tulane University, New Orleans, Louisiana 70118 (United States); De Angelis, Valerio, E-mail: [Department of Mathematics, Xavier University of Louisiana, New Orleans, Louisiana 70125 (United States); Vignat, Christophe, E-mail: [Department of Mathematics, Tulane University, New Orleans, Louisiana 70118, USA and L.S.S. Supelec, Universite d' Orsay (France)


    Bender and Dunne [“Polynomials and operator orderings,” J. Math. Phys. 29, 1727–1731 (1988)] showed that linear combinations of words q^k p^n q^(n−k), where p and q are subject to the relation qp − pq = ı, may be expressed as a polynomial in the symbol z = (1/2)(qp + pq). Relations between such polynomials and linear combinations of the transformed coefficients are explored. In particular, examples yielding orthogonal polynomials are provided.
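
    A minimal worked instance of this correspondence (our illustration of the setup, not a result from the paper): for the two length-two words, the commutation relation gives

```latex
qp - pq = \imath, \qquad z = \tfrac{1}{2}(qp + pq)
\quad\Longrightarrow\quad
qp = z + \tfrac{\imath}{2}, \qquad pq = z - \tfrac{\imath}{2},
```

    so any linear combination α·qp + β·pq equals (α + β)z + (α − β)ı/2, a first-degree polynomial in the symbol z.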

  2. On Multiple Interpolation Functions of the q-Genocchi Polynomials

    Jin Jeong-Hee


    Full Text Available Abstract Recently, many mathematicians have studied various kinds of q-analogues of Genocchi numbers and polynomials. In the work (New approach to q-Euler, Genocchi numbers and their interpolation functions, "Advanced Studies in Contemporary Mathematics, vol. 18, no. 2, pp. 105–112, 2009.", Kim defined new generating functions of q-Genocchi and q-Euler polynomials and their interpolation functions. In this paper, we give another definition of the multiple Hurwitz-type q-zeta function. This function interpolates q-Genocchi polynomials at negative integers. Finally, we also give some identities related to these polynomials.

  3. Generalized Pseudospectral Method and Zeros of Orthogonal Polynomials

    Oksana Bihun


    Full Text Available Via a generalization of the pseudospectral method for numerical solution of differential equations, a family of nonlinear algebraic identities satisfied by the zeros of a wide class of orthogonal polynomials is derived. The generalization is based on a modification of pseudospectral matrix representations of linear differential operators proposed in the paper, which allows these representations to depend on two, rather than one, sets of interpolation nodes. The identities hold for every polynomial family {p_ν(x)}_{ν=0}^∞ orthogonal with respect to a measure supported on the real line that satisfies some standard assumptions, as long as the polynomials in the family satisfy differential equations A p_ν(x) = q_ν(x) p_ν(x), where A is a linear differential operator and each q_ν(x) is a polynomial of degree at most n_0 ∈ N; n_0 does not depend on ν. The proposed identities generalize known identities for classical and Krall orthogonal polynomials to the case of the nonclassical orthogonal polynomials that belong to the class described above. The generalized pseudospectral representations of the differential operator A for the case of the Sonin-Markov orthogonal polynomials, also known as generalized Hermite polynomials, are presented. The general result is illustrated by new algebraic relations satisfied by the zeros of the Sonin-Markov polynomials.

  4. Improving Automated Endmember Identification for Linear Unmixing of HyspIRI Spectral Data.

    Gader, P.


    The size of data sets produced by imaging spectrometers is increasing rapidly, and there is already a processing bottleneck. Part of the reason for this bottleneck is the need for expert input using interactive software tools. This process can be very time consuming and laborious but is currently crucial to ensuring the quality of the analysis. Automated algorithms can mitigate this problem. Although it is unlikely that processing systems can become completely automated, there is an urgent need to increase the level of automation. Spectral unmixing is a key component of processing HyspIRI data. Algorithms such as MESMA have been demonstrated to achieve results but require careful, expert construction of endmember libraries. Unfortunately, many endmembers found by automated endmember-finding algorithms are deemed unsuitable by experts because they are not physically reasonable. At the same time, endmembers that are not physically reasonable can achieve very low errors between the linear mixing model with those endmembers and the original data; therefore, this error is not a reasonable way to resolve the problem of "non-physical" endmembers. There are many potential approaches for resolving these issues, including the use of Bayesian priors, but very little attention has been given to this problem. The study reported on here considers a modification of the Sparsity Promoting Iterated Constrained Endmember (SPICE) algorithm. SPICE finds endmembers and abundances and estimates the number of endmembers. The SPICE algorithm seeks to minimize a quadratic objective function with respect to endmembers E and fractions P. The modified SPICE algorithm, which we refer to as SPICED, is obtained by adding the term D to the objective function. The term D pressures the algorithm to minimize the sum of the squared differences between each endmember and a weighted sum of the data. By appropriately modifying this term, the endmembers are pushed towards a subset of the data with the potential for

  5. Automatic endmember selection and nonlinear spectral unmixing of Lunar analog minerals

    Rommel, Daniela; Grumpe, Arne; Felder, Marian Patrik; Wöhler, Christian; Mall, Urs; Kronz, Andreas


    While the interpretation of spectral reflectance data has been widely applied to detect the presence of minerals, determining and quantifying the abundances of minerals contained by planetary surfaces is still an open problem. With this paper we address one of the two main questions arising from the spectral unmixing problem. While the mathematical mixture model has been extensively researched, considerably less work has been devoted to the selection of endmembers from a possibly huge database or catalog of potential endmembers. To solve the endmember selection problem we define a new spectral similarity measure that is not purely based on the reconstruction error, i.e. the squared difference between the modeled and the measured reflectance spectrum. To select reasonable endmembers, we extend the similarity measure by adding information extracted from the spectral absorption bands. This allows for a better separation of spectrally similar minerals. Evaluating all possible subsets of a possibly very large catalog that contain at least one endmember leads to an exponential increase in computational complexity, rendering catalogs of 20-30 endmembers impractical. To overcome this computational limitation, we propose the use of a genetic algorithm that, while initially starting with random subsets, forms new subsets by combining the best subsets and, to some extent, performs a local search around the best subsets by randomly adding a few endmembers. A Monte-Carlo simulation based on synthetic mixtures and a catalog size varying from three to eight endmembers demonstrates that the genetic algorithm is expected to require fewer combinations to be evaluated than an exhaustive search if the catalog comprises 10 or more endmembers. Since the genetic algorithm evaluates some combinations multiple times, we propose a simple modification and store previously evaluated endmember combinations. The resulting algorithm is shown to never require more function evaluations than a

  6. Non-invasive monitoring of cytokine-based regenerative treatment of cartilage by hyperspectral unmixing (Conference Presentation)

    Mahbub, Saabah B.; Succer, Peter; Gosnell, Martin E.; Anwaer, Ayad G.; Herbert, Benjamin; Vesey, Graham; Goldys, Ewa M.


    Extracting biochemical information from tissue autofluorescence is a promising approach to non-invasively monitor disease treatments at a cellular level, without using any external biomarkers. Our recently developed unsupervised hyperspectral unmixing by Dependent Component Analysis (DECA) provides robust and detailed metabolic information with proper account of intrinsic cellular heterogeneity. Moreover, this method is compatible with established methods of fluorescent biomarker labelling. Recently, adipose-derived stem cell (ADSC)-based therapies have been introduced for treating different diseases in animals and humans. ADSCs have shown promise in regenerative treatments for osteoarthritis and other bone and joint disorders. One of the mechanisms of their action is their anti-inflammatory effect within osteoarthritic joints, which aids the regeneration of cartilage. These therapeutic effects are known to be driven by the secretion of different cytokines by the ADSCs. We have been using hyperspectral unmixing techniques to study in vitro the effects of ADSC-derived cytokine-rich secretions on the cartilage chip in both human and bovine samples. Studying the metabolic effects of different cytokine treatments on different cartilage layers makes it possible to compare the merits of those treatments for repairing cartilage.

  7. Two new bivariate zero-inflated generalized Poisson distributions with a flexible correlation structure

    Chi Zhang


    Full Text Available To model correlated bivariate count data with extra zero observations, this paper proposes two new bivariate zero-inflated generalized Poisson (ZIGP) distributions by incorporating a multiplicative factor (or dependency parameter) λ, named Type I and Type II bivariate ZIGP distributions, respectively. The proposed distributions possess a flexible correlation structure and can be used to fit either positively or negatively correlated and either over- or under-dispersed count data, in contrast to existing models that can only fit positively correlated count data with over-dispersion. The two marginal distributions of the Type I bivariate ZIGP share a common zero-inflation parameter, while the two marginal distributions of the Type II bivariate ZIGP have their own zero-inflation parameters, resulting in a much wider range of applications. The important distributional properties are explored, and some useful statistical inference methods, including maximum likelihood estimation of parameters, standard error estimation, bootstrap confidence intervals and related hypothesis tests, are developed for the two distributions. A real data set is thoroughly analyzed using the proposed distributions and statistical methods. Several simulation studies are conducted to evaluate the performance of the proposed methods.

  8. Bivariable analysis of ventricular late potentials in high resolution ECG records

    Orosco, L; Laciar, E


    In this study, a bivariable analysis for detecting ventricular late potentials in high-resolution electrocardiographic records is proposed. The standard time-domain analysis and the application of a time-frequency technique to high-resolution ECG records are briefly described, together with their corresponding results. In the proposed technique, the time-domain parameter QRSD and the most significant time-frequency index, EN QRS, are used as variables, and a bivariable index combining these parameters is defined. The proposed technique allows evaluating the risk of ventricular tachycardia in post-myocardial infarction patients. The results show that the bivariable index discriminates between the population of patients with ventricular tachycardia and the subjects of the control group. It was also found that the bivariable technique performs well as a diagnostic test. It is concluded that, as a diagnostic test, the bivariable technique is superior to the time-domain method and the time-frequency technique evaluated individually.

  9. Relations between zeros of special polynomials associated with the Painleve equations

    Kudryashov, Nikolai A.; Demina, Maria V.


    A method for finding relations between the roots of polynomials is presented. Our approach allows us to obtain a number of relations between the zeros of the classical polynomials as well as the roots of special polynomials associated with rational solutions of the Painleve equations. We apply the method to obtain relations for the zeros of several polynomials: the Hermite polynomials, the Laguerre polynomials, the Yablonskii-Vorob'ev polynomials, the generalized Okamoto polynomials, and the generalized Hermite polynomials. All the relations found can be considered as analogues of the generalized Stieltjes relations.

  10. Current advances on polynomial resultant formulations

    Sulaiman, Surajo; Aris, Nor'aini; Ahmad, Shamsatun Nahar


    The availability of computer algebra systems (CAS) has led to the resurrection of the resultant method for eliminating one or more variables from a system of polynomials. The resultant matrix method has advantages over the Groebner basis and Ritt-Wu methods, whose complexity and storage requirements are high. This paper focuses on current resultant matrix formulations and investigates their ability to produce optimal resultant matrices. A determinantal formula that gives the exact resultant, or a formulation that minimizes the presence of extraneous factors, is often sought when the conditions for its existence can be determined. We present some applications of elimination theory via resultant formulations, with examples illustrating each of the presented settings.

  11. Differential operators associated with Hermite polynomials

    Onyango Otieno, V.P.


    This paper considers boundary value problems for the Hermite differential equation -(e^{-x²} y′(x))′ + e^{-x²} y(x) = λ e^{-x²} y(x), x ∈ (-∞, ∞), in both the so-called right-definite and left-definite cases, based partly on a classical approach due to E.C. Titchmarsh. We then link the Titchmarsh approach with operator-theoretic results in the spaces L²_w(-∞, ∞) and H²_{p,q}(-∞, ∞). The results in the left-definite case provide an indirect proof of the completeness of the Hermite polynomials in L²_w(-∞, ∞). (author). 17 refs

  12. Modeling animal-vehicle collisions using diagonal inflated bivariate Poisson regression.

    Lao, Yunteng; Wu, Yao-Jan; Corey, Jonathan; Wang, Yinhai


    Two types of animal-vehicle collision (AVC) data are commonly adopted for AVC-related risk analysis research: reported AVC data and carcass removal data. One issue with these two data sets is that previous studies found significant discrepancies between them. In order to model these two types of data together and provide a better understanding of highway AVCs, this study adopts a diagonal inflated bivariate Poisson regression method, an inflated version of the bivariate Poisson regression model, to fit the reported AVC and carcass removal data sets collected in Washington State during 2002-2006. The diagonal inflated bivariate Poisson model can not only model paired data with correlation, but also handle under- or over-dispersed data sets. Compared with three other types of models, double Poisson, bivariate Poisson, and zero-inflated double Poisson, the diagonal inflated bivariate Poisson model demonstrates its capability of fitting two data sets with remarkable overlapping portions resulting from the same stochastic process. Therefore, the diagonal inflated bivariate Poisson model provides researchers with a new approach to investigating AVCs from a different perspective involving the three distribution parameters (λ(1), λ(2) and λ(3)). The modeling results show the impacts of traffic elements, geometric design and geographic characteristics on the occurrences of both reported AVCs and carcass removals. It is found that increases in some associated factors, such as speed limit, annual average daily traffic, and shoulder width, will increase the numbers of reported AVCs and carcass removals. Conversely, the presence of some geometric factors, such as rolling and mountainous terrain, will decrease the number of reported AVCs. Published by Elsevier Ltd.
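
    The three parameters λ(1), λ(2) and λ(3) mentioned above are commonly realized through the trivariate-reduction construction of the bivariate Poisson distribution, in which λ(3) carries the (positive) covariance between the two counts. A minimal simulation sketch of that construction (my own illustration, not the fitted regression model from the paper):

```python
import numpy as np

def bivariate_poisson(lam1, lam2, lam3, size, rng):
    """Simulate (X1, X2) = (Y1 + Y3, Y2 + Y3) with independent Poisson Y's.
    Then E[X1] = lam1 + lam3, E[X2] = lam2 + lam3, Cov(X1, X2) = lam3."""
    y1 = rng.poisson(lam1, size)
    y2 = rng.poisson(lam2, size)
    y3 = rng.poisson(lam3, size)
    return y1 + y3, y2 + y3

rng = np.random.default_rng(0)
x1, x2 = bivariate_poisson(2.0, 1.5, 1.0, 100_000, rng)
# Sample means approach lam1 + lam3 = 3.0 and lam2 + lam3 = 2.5;
# the sample covariance approaches lam3 = 1.0.
```

    The "diagonal inflated" variant of the paper additionally mixes in extra probability mass on the diagonal x1 = x2, which this sketch omits.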

  13. Connection coefficients between Boas-Buck polynomial sets

    Cheikh, Y. Ben; Chaggara, H.


    In this paper, a general method to express explicitly connection coefficients between two Boas-Buck polynomial sets is presented. As application, we consider some generalized hypergeometric polynomials, from which we derive some well-known results including duplication and inversion formulas.

  14. Mathematical Use Of Polynomials Of Different End Periods Of ...

    This paper focused on how polynomials of different end periods of random numbers can be used in the encryption and decryption of a message. Eight steps were used in generating information on how polynomials of different end periods of random numbers in the application of encryption and decryption of a ...

  15. On the Lorentz degree of a product of polynomials

    Ait-Haddou, Rachid


    In this note, we negatively answer two questions of T. Erdélyi (1991, 2010) on possible lower bounds for the Lorentz degree of the product of two polynomials. We show that the correctness of one question for degree-two polynomials is a direct consequence

  16. Exponential time paradigms through the polynomial time lens

    Drucker, A.; Nederlof, J.; Santhanam, R.; Sankowski, P.; Zaroliagis, C.


    We propose a general approach to modelling algorithmic paradigms for the exact solution of NP-hard problems. Our approach is based on polynomial time reductions to succinct versions of problems solvable in polynomial time. We use this viewpoint to explore and compare the power of paradigms such as

  17. On polynomial selection for the general number field sieve

    Kleinjung, Thorsten


    The general number field sieve (GNFS) is the asymptotically fastest algorithm for factoring large integers. Its runtime depends on a good choice of a polynomial pair. In this article we present an improvement of the polynomial selection method of Montgomery and Murphy which has been used in recent GNFS records.

  18. A Combinatorial Proof of a Result on Generalized Lucas Polynomials

    Laugier Alexandre


    Full Text Available We give a combinatorial proof of an elementary property of generalized Lucas polynomials, inspired by [1]. These polynomials in s and t are defined by the recurrence relation 〈n〉 = s〈n-1〉 + t〈n-2〉 for n ≥ 2, with initial values 〈0〉 = 2 and 〈1〉 = s.
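
    The recurrence above is easy to evaluate numerically; a small sketch (my own illustration) that reduces to the classical Lucas numbers at s = t = 1:

```python
def gen_lucas(n, s, t):
    """Evaluate the generalized Lucas polynomial <n> at (s, t)
    via <n> = s*<n-1> + t*<n-2>, with <0> = 2 and <1> = s."""
    a, b = 2, s          # <0>, <1>
    if n == 0:
        return a
    for _ in range(n - 1):
        a, b = b, s * b + t * a
    return b

# s = t = 1 gives the classical Lucas numbers 2, 1, 3, 4, 7, 11, 18, ...
print([gen_lucas(n, 1, 1) for n in range(7)])  # → [2, 1, 3, 4, 7, 11, 18]
```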

  19. Animating Nested Taylor Polynomials to Approximate a Function

    Mazzone, Eric F.; Piper, Bruce R.


    The way that Taylor polynomials approximate functions can be demonstrated by moving the center point while keeping the degree fixed. These animations are particularly nice when the Taylor polynomials do not intersect and form a nested family. We prove a result that shows when this nesting occurs. The animations can be shown in class or…
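
    A tiny numeric illustration of the moving-center idea: a degree-3 Taylor polynomial of exp, with the approximation at x = 2 improving as the center slides toward 2 (the function and values here are my own example, not from the article):

```python
import math

def taylor_exp(x, center, degree):
    """Degree-`degree` Taylor polynomial of exp about `center`, evaluated at x:
    e^c * sum_{k=0}^{degree} (x - c)^k / k!"""
    return math.exp(center) * sum((x - center) ** k / math.factorial(k)
                                  for k in range(degree + 1))

# Moving the center toward x = 2 improves the approximation of exp(2) ≈ 7.389:
for c in (0.0, 1.0, 2.0):
    print(c, taylor_exp(2.0, c, 3))
```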

  20. Some Results on the Independence Polynomial of Unicyclic Graphs

    Oboudi Mohammad Reza


    Full Text Available Let G be a simple graph on n vertices. An independent set in a graph is a set of pairwise non-adjacent vertices. The independence polynomial of G is the polynomial I(G,x) = ∑_{k=0}^{n} s(G,k) x^k, where s(G,k) is the number of independent sets of G with k vertices.
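
    A brute-force sketch of this definition for small graphs (my own illustration; the paper's unicyclic results are analytic). The 4-cycle C4 is the simplest unicyclic graph, with I(C4, x) = 1 + 4x + 2x²:

```python
from itertools import combinations

def independence_polynomial(n, edges):
    """Coefficients s(G,k) of I(G,x) = sum_k s(G,k) x^k for a graph on
    vertices 0..n-1, by enumerating all vertex subsets (fine for small n)."""
    edge_set = {frozenset(e) for e in edges}
    coeffs = [0] * (n + 1)
    for k in range(n + 1):
        for subset in combinations(range(n), k):
            # A subset is independent iff no pair of its vertices is an edge.
            if all(frozenset(p) not in edge_set
                   for p in combinations(subset, 2)):
                coeffs[k] += 1
    return coeffs

# Cycle C4 with edges 0-1, 1-2, 2-3, 3-0:
print(independence_polynomial(4, [(0, 1), (1, 2), (2, 3), (3, 0)]))  # → [1, 4, 2, 0, 0]
```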

  1. Generalized Freud's equation and level densities with polynomial potential

    Generalized Freud's equation and level densities with polynomial potential. Akshat Boobna, Saugata Ghosh. Research Articles, Pramana – Journal of Physics, Volume 81, Issue 2. Keywords: orthogonal polynomial; Freud's equation; Dyson–Mehta method; methods of resolvents; level density.

  2. Causal networks clarify productivity-richness interrelations, bivariate plots do not

    Grace, James B.; Adler, Peter B.; Harpole, W. Stanley; Borer, Elizabeth T.; Seabloom, Eric W.


    Perhaps no other pair of variables in ecology has generated as much discussion as species richness and ecosystem productivity, as illustrated by the reactions by Pierce (2013) and others to Adler et al.'s (2011) report that empirical patterns are weak and inconsistent. Adler et al. (2011) argued we need to move beyond a focus on simplistic bivariate relationships and test mechanistic, multivariate causal hypotheses. We feel the continuing debate over productivity–richness relationships (PRRs) provides a focused context for illustrating the fundamental difficulties of using bivariate relationships to gain scientific understanding.

  3. Higher order branching of periodic orbits from polynomial isochrones

    B. Toni


    Full Text Available We discuss the higher order local bifurcations of limit cycles from polynomial isochrones (linearizable centers) when the linearizing transformation is explicitly known and yields a polynomial perturbation one-form. Using a method based on the relative cohomology decomposition of polynomial one-forms, complemented with a step reduction process, we give an explicit formula for the overall upper bound on branch points of limit cycles in an arbitrary degree-n polynomial perturbation of the linear isochrone, and provide an algorithmic procedure to compute the upper bound at successive orders. We derive a complete analysis of the nonlinear cubic Hamiltonian isochrone and show that at most nine branch points of limit cycles can bifurcate in a cubic polynomial perturbation. Moreover, perturbations with exactly two, three, four, six, or nine local families of limit cycles may be constructed.

  4. Describing Quadratic Cremer Point Polynomials by Parabolic Perturbations

    Sørensen, Dan Erik Krarup


    We describe two infinite-order parabolic perturbation procedures yielding quadratic polynomials having a Cremer fixed point. The main idea is to obtain the polynomial as the limit of repeated parabolic perturbations. The basic tool at each step is to control the behaviour of certain external rays. Polynomials of the Cremer type correspond to parameters at the boundary of a hyperbolic component of the Mandelbrot set. In this paper we concentrate on the main cardioid component. We investigate the differences between two-sided (i.e. alternating) and one-sided parabolic perturbations. In the two-sided case, we prove the existence of polynomials having an explicitly given external ray accumulating both at the Cremer point and at its non-periodic preimage. We think of the Julia set as containing a "topologist's double comb". In the one-sided case we prove a weaker result: the existence of polynomials having an explicitly given...

  5. q-analogue of the Krawtchouk and Meixner orthogonal polynomials

    Campigotto, C.; Smirnov, Yu.F.; Enikeev, S.G.


    The comparative analysis of Krawtchouk polynomials on a uniform grid with Wigner D-functions for the SU(2) group is presented. As a result, the partnership between corresponding properties of the polynomials and D-functions is established, giving a group-theoretical interpretation of the properties of the Krawtchouk polynomials. In order to extend such an analysis to the quantum groups SU_q(2) and SU_q(1,1), q-analogues of the Krawtchouk and Meixner polynomials of a discrete variable are studied. The total set of characteristics of these polynomials is calculated, including the orthogonality condition, normalization factor, recurrence relation, the explicit analytic expression, the Rodrigues formula, the difference derivative formula and various particular cases and values. (R.P.) 22 refs.; 2 tabs

  6. Primitive polynomials selection method for pseudo-random number generator

    Anikin, I. V.; Alnajjar, Kh


    In this paper we suggest a method for selecting primitive polynomials of a special type. Such polynomials can be used efficiently as characteristic polynomials of linear feedback shift registers in pseudo-random number generators. The proposed method consists of two basic steps: finding minimum-cost irreducible polynomials of the desired degree and applying primitivity tests to obtain the primitive ones. Finally, two primitive polynomials found by the proposed method were used in a pseudo-random number generator based on fuzzy logic (FRNG), which had been suggested earlier by the authors. The sequences generated by the new version of FRNG have low correlation magnitude, high linear complexity and lower power consumption, and are more balanced and have better statistical properties.
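
    As a sketch of why primitivity matters here: successive multiplications by x modulo a primitive polynomial over GF(2) visit every nonzero state of the register before repeating, which is exactly the maximal-period property a characteristic polynomial must provide. A minimal illustration with the well-known primitive polynomial x⁴ + x + 1 (my own example, not the authors' selection method):

```python
def lfsr_states(poly_mask, degree, steps):
    """Successive powers of x modulo a degree-`degree` polynomial over GF(2).
    `poly_mask` encodes the polynomial's lower coefficients; e.g. x^4 + x + 1
    -> degree 4, mask 0b0011. For a primitive polynomial the states cycle
    through all 2^degree - 1 nonzero field elements (maximal period)."""
    state = 1                      # the element x^0 = 1
    top = 1 << degree
    out = []
    for _ in range(steps):
        state <<= 1                # multiply by x
        if state & top:            # reduce modulo the polynomial
            state ^= top | poly_mask
        out.append(state)
    return out

states = lfsr_states(0b0011, 4, 15)   # x^4 + x + 1 is primitive over GF(2)
print(len(set(states)))                # → 15 distinct states: maximal period
```

    An irreducible but non-primitive polynomial would produce a shorter cycle, which is why the second step of the method (the primitivity test) is needed.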

  7. Orthogonal polynomials derived from the tridiagonal representation approach

    Alhaidari, A. D.


    The tridiagonal representation approach is an algebraic method for solving second-order differential wave equations. Using this approach in the solution of quantum mechanical problems, we encounter two new classes of orthogonal polynomials whose properties give the structure and dynamics of the corresponding physical system. For a certain range of parameters, one of these polynomials has a mix of continuous and discrete spectra, making it suitable for describing physical systems with both scattering and bound states. In this work, we define these polynomials by their recursion relations and highlight some of their properties using numerical means. Due to the prime significance of these polynomials in physics, we hope that our short exposé will encourage experts in the field of orthogonal polynomials to study them and derive their properties (weight functions, generating functions, asymptotics, orthogonality relations, zeros, etc.) analytically.

  8. Multiple Meixner polynomials and non-Hermitian oscillator Hamiltonians

    Ndayiragije, F; Van Assche, W


    Multiple Meixner polynomials are polynomials in one variable which satisfy orthogonality relations with respect to r > 1 different negative binomial distributions (Pascal distributions). There are two kinds of multiple Meixner polynomials, depending on the selection of the parameters in the negative binomial distribution. We recall their definition and some formulas and give generating functions and explicit expressions for the coefficients in the nearest neighbor recurrence relation. Following a recent construction of Miki, Tsujimoto, Vinet and Zhedanov (for multiple Meixner polynomials of the first kind), we construct r > 1 non-Hermitian oscillator Hamiltonians in r dimensions which are simultaneously diagonalizable and for which the common eigenstates are expressed in terms of multiple Meixner polynomials of the second kind. (paper)

  9. Polynomial fuzzy model-based approach for underactuated surface vessels

    Khooban, Mohammad Hassan; Vafamand, Navid; Dragicevic, Tomislav


    The main goal of this study is to introduce a new polynomial fuzzy model-based structure for a class of marine systems with non-linear and polynomial dynamics. The suggested technique relies on a polynomial Takagi–Sugeno (T–S) fuzzy modelling, a polynomial dynamic parallel distributed compensation... surface vessel (USV). Additionally, in order to overcome the USV control challenges, including the USV un-modelled dynamics, complex nonlinear dynamics, external disturbances and parameter uncertainties, the polynomial fuzzy model representation is adopted. Moreover, the USV-based control structure... and a sum-of-squares (SOS) decomposition. The new proposed approach is a generalisation of the standard T–S fuzzy models and linear matrix inequality approach, which indicates its effectiveness in decreasing the tracking time and increasing the efficiency of the robust tracking control problem for an underactuated...

  10. A note on some identities of derangement polynomials.

    Kim, Taekyun; Kim, Dae San; Jang, Gwan-Woo; Kwon, Jongkyum


    The problem of counting derangements was initiated by Pierre Rémond de Montmort in 1708 (see Carlitz in Fibonacci Q. 16(3):255-258, 1978; Clarke and Sved in Math. Mag. 66(5):299-303, 1993; Kim, Kim and Kwon in Adv. Stud. Contemp. Math. (Kyungshang) 28(1):1-11, 2018). A derangement is a permutation that has no fixed points, and the derangement number [Formula: see text] is the number of fixed-point-free permutations on an n-element set. In this paper, we study the derangement polynomials and investigate some interesting properties related to the derangement numbers. We also study two generalizations of derangement polynomials, namely higher-order and r-derangement polynomials, and show some relations between them. In addition, we express several special polynomials in terms of the higher-order derangement polynomials by using umbral calculus.
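
    The derangement numbers themselves satisfy the classical recurrence D_n = (n-1)(D_{n-1} + D_{n-2}) with D_0 = 1 and D_1 = 0. A small sketch cross-checked against brute-force counting (illustrative only, not the paper's umbral-calculus machinery):

```python
from itertools import permutations

def derangement_numbers(n_max):
    """Derangement numbers via D_n = (n-1) * (D_{n-1} + D_{n-2}),
    with D_0 = 1 and D_1 = 0."""
    d = [1, 0]
    for n in range(2, n_max + 1):
        d.append((n - 1) * (d[-1] + d[-2]))
    return d[:n_max + 1]

def count_derangements(n):
    """Brute-force count of fixed-point-free permutations on n elements."""
    return sum(all(p[i] != i for i in range(n))
               for p in permutations(range(n)))

print(derangement_numbers(6))  # → [1, 0, 1, 2, 9, 44, 265]
```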

  11. Towards H2-rich gas production from unmixed steam reforming of methane: Thermodynamic modeling

    Lima da Silva, Aline; Müller, Iduvirges Lourdes


    In this work, the Gibbs energy minimization method is applied to investigate the unmixed steam reforming (USR) of methane to generate hydrogen for fuel cell application. The USR process is an advanced reforming technology that relies on the use of separate air and fuel/steam feeds to create a cyclic process. Under air flow (first half of the cycle), a bed of Ni-based material is oxidized, providing the heat necessary for the steam reforming that occurs subsequently during fuel/steam feed stage (second half of the cycle). In the presence of CaO sorbent, high purity hydrogen can be produced in a single reactor. In the first part of this work, it is demonstrated that thermodynamic predictions are consistent with experimental results from USR isothermal tests under fuel/steam feed. From this, it is also verified that the reacted NiO to CH4 (NiOreacted/CH4) molar ratio is a very important parameter that affects the product gas composition and decreases with time. At the end of fuel/steam flow, the reforming reaction is the most important chemical mechanism, with H2 production reaching ∼75 mol%. On the other hand, at the beginning of fuel/steam feed stage, NiO reduction reactions dominate the equilibrium system, resulting in high CO2 selectivity, negative steam conversion and low concentrations of H2. In the second part of this paper, the effect of NiOreacted/CH4 molar ratio on the product gas composition and enthalpy change during fuel flow is investigated at different temperatures for inlet H2O/CH4 molar ratios in the range of 1.2-4, considering the USR process operated with and without CaO sorbent. During fuel/steam feed stage, the energy demand increases as time passes, because endothermic reforming reaction becomes increasingly important as this stage nears its end. Thus, the duration of the second half of the cycle is limited by the conditions under which auto-thermal operation can be achieved. In absence of CaO, H2 at concentrations of approximately 73 mol% can

  12. A dynamic bivariate Poisson model for analysing and forecasting match results in the English Premier League

    Koopman, S.J.; Lit, R.


    Summary: We develop a statistical model for the analysis and forecasting of football match results which assumes a bivariate Poisson distribution with intensity coefficients that change stochastically over time. The dynamic model is a novelty in the statistical time series analysis of match results

  13. A comparison of bivariate and univariate QTL mapping in livestock populations

    Sorensen Daniel


    Full Text Available Abstract This study presents a multivariate, variance component-based QTL mapping model implemented via restricted maximum likelihood (REML. The method was applied to investigate bivariate and univariate QTL mapping analyses, using simulated data. Specifically, we report results on the statistical power to detect a QTL and on the precision of parameter estimates using univariate and bivariate approaches. The model and methodology were also applied to study the effectiveness of partitioning the overall genetic correlation between two traits into a component due to many genes of small effect, and one due to the QTL. It is shown that when the QTL has a pleiotropic effect on two traits, a bivariate analysis leads to a higher statistical power of detecting the QTL and to a more precise estimate of the QTL's map position, in particular in the case when the QTL has a small effect on the trait. The increase in power is most marked in cases where the contributions of the QTL and of the polygenic components to the genetic correlation have opposite signs. The bivariate REML analysis can successfully partition the two components contributing to the genetic correlation between traits.

  14. Analysis of Blood Transfusion Data Using Bivariate Zero-Inflated Poisson Model: A Bayesian Approach.

    Mohammadi, Tayeb; Kheiri, Soleiman; Sedehi, Morteza


    Recognizing the factors affecting the number of blood donations and blood deferrals has a major impact on blood transfusion. There is a positive correlation between the variables "number of blood donations" and "number of blood deferrals": as the number of returns for donation increases, so does the number of blood deferrals. On the other hand, because many donors never return to donate, there is an excess zero frequency for both of the above-mentioned variables. In this study, in order to account for the correlation and to explain the excess zero frequency, the bivariate zero-inflated Poisson regression model was used for joint modeling of the number of blood donations and the number of blood deferrals. The data were analyzed using the Bayesian approach, applying noninformative priors in the presence and absence of covariates. Estimation of the parameters of the model, that is, the correlation, the zero-inflation parameter, and the regression coefficients, was done through MCMC simulation. Eventually, the double Poisson model, the bivariate Poisson model, and the bivariate zero-inflated Poisson model were fitted to the data and compared using the deviance information criterion (DIC). The results showed that the bivariate zero-inflated Poisson regression model fitted the data better than the other models.

  15. Semi-automated detection of aberrant chromosomes in bivariate flow karyotypes

    Boschman, G. A.; Manders, E. M.; Rens, W.; Slater, R.; Aten, J. A.


    A method is described that is designed to compare, in a standardized procedure, bivariate flow karyotypes of Hoechst 33258 (HO)/Chromomycin A3 (CA) stained human chromosomes from cells with aberrations with a reference flow karyotype of normal chromosomes. In addition to uniform normalization of

  16. Carbon and oxygen isotopic ratio bi-variate distribution for marble artifacts quarry assignment

    Pentia, M.


    A statistical description of the ¹³C/¹²C and ¹⁸O/¹⁶O isotopic ratios in the ancient marble quarries by a Gaussian bi-variate probability distribution has been given, and a new method for obtaining the confidence-level quarry assignment for marble artifacts is presented. (author) 8 figs., 3 tabs., 4 refs
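
    A sketch of such a confidence-level assignment: compute the squared Mahalanobis distance of an artifact's isotope pair from a quarry's bivariate Gaussian, and use the fact that for two degrees of freedom the chi-square CDF has the closed form 1 - exp(-d²/2). The quarry statistics below are hypothetical, purely for illustration:

```python
import numpy as np

def quarry_confidence(sample, mean, cov):
    """Squared Mahalanobis distance of a (d13C, d18O) pair from a quarry's
    bivariate Gaussian, and the confidence level of the ellipse through it.
    For 2 degrees of freedom the chi-square CDF is 1 - exp(-d2/2)."""
    diff = np.asarray(sample) - np.asarray(mean)
    d2 = diff @ np.linalg.inv(cov) @ diff
    return d2, 1.0 - np.exp(-d2 / 2.0)

# Hypothetical quarry statistics (illustrative numbers only):
mean = [2.0, -5.0]                     # mean d13C, d18O (per mil)
cov = [[0.04, 0.01], [0.01, 0.09]]     # covariance of the two ratios
d2, level = quarry_confidence([2.1, -4.8], mean, cov)
# A small `level` means the artifact lies well inside the quarry's ellipse,
# i.e. the assignment to this quarry is plausible.
```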

  17. Technical note: Towards a continuous classification of climate using bivariate colour mapping

    Teuling, A.J.


    Climate is often defined in terms of discrete classes. Here I use bivariate colour mapping to show that the global distribution of Köppen-Geiger climate classes can largely be reproduced by combining the simple means of two key states of the climate system (i.e., air temperature and relative

  18. Applied Statistics: From Bivariate through Multivariate Techniques [with CD-ROM

    Warner, Rebecca M.


    This book provides a clear introduction to widely used topics in bivariate and multivariate statistics, including multiple regression, discriminant analysis, MANOVA, factor analysis, and binary logistic regression. The approach is applied and does not require formal mathematics; equations are accompanied by verbal explanations. Students are asked…

  19. Parameter estimation and statistical test of geographically weighted bivariate Poisson inverse Gaussian regression models

    Amalia, Junita; Purhadi, Otok, Bambang Widjanarko


    The Poisson distribution is a discrete distribution for count data with one parameter that defines both the mean and the variance. Poisson regression assumes that the mean and variance are the same (equidispersion). Nonetheless, some count data do not satisfy this assumption because the variance exceeds the mean (over-dispersion). Ignoring over-dispersion leads to underestimated standard errors and, consequently, to incorrect decisions in statistical tests. Paired count data are correlated and follow a bivariate Poisson distribution, but in the presence of over-dispersion a simple bivariate Poisson regression is not sufficient. The Bivariate Poisson Inverse Gaussian Regression (BPIGR) model is a mixed Poisson regression for modeling paired count data with over-dispersion. The BPIGR model produces a global model for all locations. On the other hand, each location has different geographic, social, cultural and economic conditions, so Geographically Weighted Regression (GWR) is needed. The weighting function for each location in GWR generates a different local model. The Geographically Weighted Bivariate Poisson Inverse Gaussian Regression (GWBPIGR) model is used to handle over-dispersion and to generate local models. Parameter estimates of the GWBPIGR model are obtained by the Maximum Likelihood Estimation (MLE) method, while hypothesis testing of the GWBPIGR model is carried out by the Maximum Likelihood Ratio Test (MLRT) method.

  20. A simple approximation to the bivariate normal distribution with large correlation coefficient

    Albers, Willem/Wim; Kallenberg, W.C.M.


    The bivariate normal distribution function is approximated with emphasis on situations where the correlation coefficient is large. The high accuracy of the approximation is illustrated by numerical examples. Moreover, exact upper and lower bounds are presented as well as asymptotic results on the

  1. vs. a polynomial chaos-based MCMC

    Siripatana, Adil


    Bayesian Inference of Manning's n Coefficient in a Storm Surge Model Framework: Comparison between a Kalman Filter and a Polynomial Chaos-Based Method. Conventional coastal ocean models solve the shallow water equations, which describe the conservation of mass and momentum when the horizontal length scale is much greater than the vertical length scale; in this case vertical pressure gradients in the momentum equations are nearly hydrostatic. The outputs of coastal ocean models are thus sensitive to the bottom stress terms defined through the formulation of Manning's n coefficients. This thesis considers the Bayesian inference problem of the Manning's n coefficient in the context of storm surge, based on the coastal ocean ADCIRC model. In the first part of the thesis, we apply an ensemble-based Kalman filter, the singular evolutive interpolated Kalman (SEIK) filter, to estimate both a constant Manning's n coefficient and a 2-D parameterized Manning's coefficient on one idealized domain and one more realistic domain, using observing system simulation experiments (OSSEs). We study the sensitivity of the system to the ensemble size and assess the benefits of using an inflation factor on the filter performance. To study the limitations of the Gaussian assumption underlying the SEIK filter, we also implement in the second part of this thesis a Markov chain Monte Carlo (MCMC) method based on a generalized polynomial chaos (gPC) approach for the estimation of the 1-D and 2-D Manning's n coefficient. The gPC is used to build a surrogate model that imitates the ADCIRC model in order to make the computational cost of implementing the MCMC with the ADCIRC model reasonable. We evaluate the performance of the MCMC-gPC approach, study its robustness to different OSSE scenarios, and compare its estimates with those resulting from SEIK in terms of parameter estimates and full distributions. We present a full analysis of the solution of these two methods, of the

  2. Spectral Unmixing Modeling of the Aristarchus Pyroclastic Deposit: Assessing the Eruptive History of Glass-Rich Regional Lunar Pyroclastic Deposits

    Jawin, E. R.; Head, J. W., III; Cannon, K.


    The Aristarchus pyroclastic deposit in central Oceanus Procellarum is understood to have formed in a gas-rich explosive volcanic eruption, and has been observed to contain abundant volcanic glass. However, the interpreted color (and therefore composition) of the glass has been debated. In addition, previous analyses of the pyroclastic deposit were performed using lower resolution data than are currently available. In this work, a nonlinear spectral unmixing model was applied to Moon Mineralogy Mapper (M3) data of the Aristarchus plateau to investigate the detailed mineralogic and crystalline nature of the Aristarchus pyroclastic deposit, using spectra of laboratory endmembers including a suite of volcanic glasses returned from the Apollo 15 and 17 missions (green, orange, and black beads) as well as synthetic lunar glasses (orange, green, red, and yellow). Preliminary results of the M3 unmixing model suggest that spectra of the pyroclastic deposit can be modeled by a mixture composed predominantly of a featureless endmember approximating space weathering and a smaller component of glass. The modeled spectra were most accurate with a synthetic orange glass endmember, relative to the other glasses analyzed in this work. The results confirm that there is a detectable component of glass in the Aristarchus pyroclastic deposit which may be similar to the high-Ti orange glass seen in other regional pyroclastic deposits, with only minimal contributions of other crystalline minerals. The presence of volcanic glass in the pyroclastic deposit, with the low abundance of crystalline material, would support the model that the Aristarchus pyroclastic deposit formed in a long-duration, Hawaiian-style fire fountain eruption. The lack of a significant detection of devitrified black beads in the spectral modeling results (such beads were observed at the Apollo 17 landing site in the Taurus-Littrow pyroclastic deposit) suggests the optical density of the eruptive plume remained low throughout the

  3. Increasing the Accuracy of Mapping Urban Forest Carbon Density by Combining Spatial Modeling and Spectral Unmixing Analysis

    Hua Sun


    Full Text Available Accurately mapping urban vegetation carbon density is challenging because of complex landscapes and mixed pixels. In this study, a novel methodology was proposed that combines a linear spectral unmixing analysis (LSUA) with a linear stepwise regression (LSR), a logistic model-based stepwise regression (LMSR) and k-Nearest Neighbors (kNN), to map the forest carbon density of Shenzhen City, China, using Landsat 8 imagery and sample plot data collected in 2014. The independent variables that contributed to statistically significantly improving the fit of a model to the data and reducing the sum of squared errors were first selected from a total of 284 spectral variables derived from the image bands. The vegetation fraction from LSUA was then added as an independent variable. The results obtained using cross-validation showed that: (1) compared to the methods without the vegetation information, adding the vegetation fraction increased the accuracy of mapping carbon density by 1%–9.3%; (2) as the observed values increased, the LSR and kNN residuals showed overestimates and underestimates for the smaller and larger observations, respectively, while LMSR improved the systematic over- and underestimations; (3) LSR resulted in illogically negative and unreasonably large estimates, while kNN produced the greatest root mean square error (RMSE). The results indicate that combining the spatial modeling method LMSR and the spectral unmixing analysis LSUA, coupled with Landsat imagery, is most promising for increasing the accuracy of urban forest carbon density maps. In addition, this method has considerable potential for accurate, rapid and nondestructive prediction of urban and peri-urban forest carbon stocks with an acceptable level of error and low cost.
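
    A minimal sketch of the linear spectral unmixing step, assuming known endmember spectra and softly enforcing the sum-to-one constraint on fractions via a heavily weighted augmented row (a common trick; full LSUA additionally constrains the fractions to be nonnegative). The spectra below are synthetic, purely for illustration:

```python
import numpy as np

def unmix(pixel, endmembers, weight=1e3):
    """Linear spectral unmixing: solve pixel ≈ endmembers @ fractions in the
    least-squares sense. The appended row `weight * ones` together with the
    target value `weight` softly enforces sum(fractions) = 1."""
    n_bands, n_end = endmembers.shape
    A = np.vstack([endmembers, weight * np.ones(n_end)])
    b = np.concatenate([pixel, [weight]])
    fractions, *_ = np.linalg.lstsq(A, b, rcond=None)
    return fractions

# Two synthetic endmember spectra over 4 bands and a 30/70 mixture:
E = np.array([[0.1, 0.4, 0.6, 0.5],
              [0.8, 0.7, 0.2, 0.1]]).T     # shape (bands, endmembers)
pixel = E @ np.array([0.3, 0.7])
f = unmix(pixel, E)
# f ≈ [0.3, 0.7]; f[0] is the vegetation fraction if endmember 0 is vegetation.
```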

  4. Meta-analysis of studies with bivariate binary outcomes: a marginal beta-binomial model approach.

    Chen, Yong; Hong, Chuan; Ning, Yang; Su, Xiao


    When conducting a meta-analysis of studies with bivariate binary outcomes, challenges arise when the within-study correlation and between-study heterogeneity should be taken into account. In this paper, we propose a marginal beta-binomial model for the meta-analysis of studies with binary outcomes. This model is based on the composite likelihood approach and has several attractive features compared with the existing models such as the bivariate generalized linear mixed model (Chu and Cole, 2006) and the Sarmanov beta-binomial model (Chen et al., 2012). The advantages of the proposed marginal model include modeling the probabilities in the original scale, not requiring any transformation of probabilities or any link function, having a closed-form likelihood function, and placing no constraints on the correlation parameter. More importantly, because the marginal beta-binomial model is based only on the marginal distributions, it does not suffer from potential misspecification of the joint distribution of bivariate study-specific probabilities. Such misspecification is difficult to detect and can lead to biased inference using current methods. We compare the performance of the marginal beta-binomial model with the bivariate generalized linear mixed model and the Sarmanov beta-binomial model by simulation studies. Interestingly, the results show that the marginal beta-binomial model performs better than the Sarmanov beta-binomial model, whether or not the true model is Sarmanov beta-binomial, and the marginal beta-binomial model is more robust than the bivariate generalized linear mixed model under model misspecifications. Two meta-analyses of diagnostic accuracy studies and a meta-analysis of case-control studies are conducted for illustration. Copyright © 2015 John Wiley & Sons, Ltd.

  5. A bivariate model for analyzing recurrent multi-type automobile failures

    Sunethra, A. A.; Sooriyarachchi, M. R.


    The failure mechanism in an automobile can be defined as a system of multi-type recurrent failures, where failures can occur due to various failure modes and are repetitive, such that more than one failure can occur from each failure mode. In analysing such automobile failures, both the time and the type of the failure serve as response variables. However, these two response variables are highly correlated with each other, since the timing of failures is associated with the mode of failure. When there is more than one correlated response variable, fitting a multivariate model is preferable to fitting separate univariate models. Therefore, a bivariate model of time and type of failure becomes appealing for such automobile failure data. When there are multiple failure observations for a single automobile, such data cannot be treated as independent, because failure instances of a single automobile are correlated with each other, while failures among different automobiles can be treated as independent. Therefore, this study proposes a bivariate model consisting of time and type of failure as responses, adjusted for correlated data. The proposed model was formulated following the approaches of shared parameter models and random effects models, for joining the responses and for representing the correlated data respectively. The proposed model is applied to a sample of automobile failures with three types of failure modes and up to five failure recurrences. The parametric distributions suitable for the two responses of time to failure and type of failure were the Weibull distribution and the multinomial distribution respectively. The proposed bivariate model was programmed in the SAS procedure Proc NLMIXED by user programming of appropriate likelihood functions. The performance of the bivariate model was compared with separate univariate models fitted for the two responses, and it was identified that better performance is secured by

  6. Topological quantum information, virtual Jones polynomials and Khovanov homology

    Kauffman, Louis H


    In this paper, we give a quantum statistical interpretation of the bracket polynomial state sum 〈K〉, the Jones polynomial V_K(t) and virtual knot theory versions of the Jones polynomial, including the arrow polynomial. We use these quantum mechanical interpretations to give new quantum algorithms for these Jones polynomials. In those cases where the Khovanov homology is defined, the Hilbert space C(K) of our model is isomorphic with the chain complex for Khovanov homology with coefficients in the complex numbers. There is a natural unitary transformation U:C(K) → C(K) such that 〈K〉 = Trace(U), where 〈K〉 denotes the evaluation of the state sum model for the corresponding polynomial. We show that for the Khovanov boundary operator ∂:C(K) → C(K), we have the relationship ∂U + U∂ = 0. Consequently, the operator U acts on the Khovanov homology, and we obtain a direct relationship between the Khovanov homology and this quantum algorithm for the Jones polynomial. (paper)

  7. Constructing general partial differential equations using polynomial and neural networks.

    Zjavka, Ladislav; Pedrycz, Witold


    Sum fraction terms can approximate multi-variable functions on the basis of discrete observations, replacing a partial differential equation definition with polynomial elementary data relation descriptions. Artificial neural networks commonly transform the weighted sum of inputs to describe overall similarity relationships of trained and new testing input patterns. Differential polynomial neural networks form a new class of neural networks, which construct and solve an unknown general partial differential equation of a function of interest with selected substitution relative terms using non-linear multi-variable composite polynomials. The layers of the network generate simple and composite relative substitution terms whose convergent series combinations can describe partial dependent derivative changes of the input variables. This regression is based on trained generalized partial derivative data relations, decomposed into a multi-layer polynomial network structure. The sigmoidal function, commonly used as a nonlinear activation of artificial neurons, may transform some polynomial items together with the parameters with the aim of improving the ability of the polynomial derivative term series to approximate complicated periodic functions, since simple low-order polynomials cannot fully reproduce complete cycles. The similarity analysis facilitates substitutions for differential equations or can form dimensional units from data samples to describe real-world problems. Copyright © 2015 Elsevier Ltd. All rights reserved.

  8. Dynamics of polynomial Chaplygin gas warm inflation

    Jawad, Abdul [COMSATS Institute of Information Technology, Department of Mathematics, Lahore (Pakistan); Chaudhary, Shahid [Sharif College of Engineering and Technology, Department of Mathematics, Lahore (Pakistan); Videla, Nelson [Pontificia Universidad Catolica de Valparaiso, Instituto de Fisica, Valparaiso (Chile)


    In the present work, we study the consequences of a recently proposed polynomial inflationary potential in the context of the generalized, modified, and generalized cosmic Chaplygin gas models. In addition, we consider dissipative effects by coupling the inflaton field to radiation, i.e., the inflationary dynamics is studied in the warm inflation scenario. We take into account a general parametrization of the dissipative coefficient Γ for describing the decay of the inflaton field into radiation. By studying the background and perturbative dynamics in the weak and strong dissipative regimes of warm inflation separately for the positive and negative quadratic and quartic potentials, we obtain expressions for the most relevant inflationary observables such as the scalar power spectrum, the scalar spectral index, and the tensor-to-scalar ratio. We construct the trajectories in the n_s-r plane for several expressions of the dissipative coefficient and compare with the two-dimensional marginalized contours for (n_s, r) from the latest Planck data. We find that our results are in agreement with WMAP9 and Planck 2015 data. (orig.)

  9. Global sensitivity analysis using polynomial chaos expansions

    Sudret, Bruno


    Global sensitivity analysis (SA) aims at quantifying the respective effects of input random variables (or combinations thereof) onto the variance of the response of a physical or mathematical model. Among the abundant literature on sensitivity measures, the Sobol' indices have received much attention since they provide accurate information for most models. The paper introduces generalized polynomial chaos expansions (PCE) to build surrogate models that allow one to compute the Sobol' indices analytically as a post-processing of the PCE coefficients. Thus the computational cost of the sensitivity indices practically reduces to that of estimating the PCE coefficients. An original non intrusive regression-based approach is proposed, together with an experimental design of minimal size. Various application examples illustrate the approach, both from the field of global SA (i.e. well-known benchmark problems) and from the field of stochastic mechanics. The proposed method gives accurate results for various examples that involve up to eight input random variables, at a computational cost which is 2-3 orders of magnitude smaller than the traditional Monte Carlo-based evaluation of the Sobol' indices
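The key post-processing step can be illustrated directly: for a PCE in an orthonormal basis, the model variance is the sum of the squared non-constant coefficients, and each Sobol' index is the share contributed by the terms involving exactly the corresponding subset of variables. A toy two-variable example with invented coefficients:

```python
# Toy PCE in an orthonormal basis over two inputs; keys are multi-indices
# (degree in x1, degree in x2). The coefficient values are invented.
coeffs = {
    (0, 0): 1.0,   # mean term, excluded from the variance
    (1, 0): 0.8,   # terms in x1 only
    (2, 0): 0.3,
    (0, 1): 0.5,   # term in x2 only
    (1, 1): 0.2,   # interaction term
}

# For an orthonormal basis, the variance is the sum of squared
# non-constant coefficients.
variance = sum(c**2 for a, c in coeffs.items() if a != (0, 0))

def sobol_index(subset):
    # Partial variance of terms whose nonzero degrees sit exactly on `subset`.
    s = sum(c**2 for a, c in coeffs.items()
            if a != (0, 0)
            and {k for k, d in enumerate(a) if d > 0} == subset)
    return s / variance

S1, S2, S12 = sobol_index({0}), sobol_index({1}), sobol_index({0, 1})
print(round(S1, 4), round(S2, 4), round(S12, 4))
```

This is why the cost of the sensitivity analysis reduces to that of estimating the PCE coefficients: once they are known, the indices are exact sums of squares, with no further model runs or Monte Carlo sampling.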

  10. Global sensitivity analysis using polynomial chaos expansions

    Sudret, Bruno [Electricite de France, R and D Division, Site des Renardieres, F 77818 Moret-sur-Loing Cedex (France)], E-mail:


    Global sensitivity analysis (SA) aims at quantifying the respective effects of input random variables (or combinations thereof) onto the variance of the response of a physical or mathematical model. Among the abundant literature on sensitivity measures, the Sobol' indices have received much attention since they provide accurate information for most models. The paper introduces generalized polynomial chaos expansions (PCE) to build surrogate models that allow one to compute the Sobol' indices analytically as a post-processing of the PCE coefficients. Thus the computational cost of the sensitivity indices practically reduces to that of estimating the PCE coefficients. An original non intrusive regression-based approach is proposed, together with an experimental design of minimal size. Various application examples illustrate the approach, both from the field of global SA (i.e. well-known benchmark problems) and from the field of stochastic mechanics. The proposed method gives accurate results for various examples that involve up to eight input random variables, at a computational cost which is 2-3 orders of magnitude smaller than the traditional Monte Carlo-based evaluation of the Sobol' indices.

  11. Polynomial Chaos Surrogates for Bayesian Inference

    Le Maitre, Olivier


    Bayesian inference is a popular probabilistic method for solving inverse problems, such as the identification of a field parameter in a PDE model. The inference relies on Bayes' rule to update the prior density of the sought field from observations and derive its posterior distribution. In most cases the posterior distribution has no explicit form and has to be sampled, for instance using a Markov-chain Monte Carlo method. In practice the prior field parameter is decomposed and truncated (e.g. by means of a Karhunen-Loève decomposition) to recast the inference problem into the inference of a finite number of coordinates. Although proved effective in many situations, Bayesian inference as sketched above faces several difficulties requiring improvements. First, sampling the posterior can be an extremely costly task, as it requires multiple resolutions of the PDE model for different values of the field parameter. Second, when the observations are not very informative, the inferred parameter field can depend strongly on its prior, which can be somewhat arbitrary. These issues have motivated the introduction of reduced models or surrogates for the (approximate) determination of the parametrized PDE solution and of hyperparameters in the description of the prior field. Our contribution focuses on recent developments in these two directions: the acceleration of posterior sampling by means of Polynomial Chaos expansions and the efficient treatment of parametrized covariance functions for the prior field. We also discuss the possibility of making such an approach adaptive to further improve its efficiency.
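The acceleration idea can be sketched with a one-parameter toy problem: replace the expensive forward model by a cheap polynomial surrogate fitted from a few model runs, then run Metropolis sampling against the surrogate posterior. Everything here (the forward model, noise level, prior bounds, proposal width) is an invented stand-in for a PDE-based setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "expensive" forward model (in practice a PDE solve).
def forward(theta):
    return np.sin(theta) + 0.5 * theta

# Fit a cheap degree-3 polynomial surrogate from 9 model evaluations.
nodes = np.linspace(-2.0, 2.0, 9)
coeff = np.polyfit(nodes, forward(nodes), 3)

def surrogate(t):
    return np.polyval(coeff, t)

theta_true, sigma = 0.8, 0.05
y_obs = forward(theta_true)          # noise-free observation for the demo

def log_post(t):
    # Gaussian likelihood, flat prior on [-2, 2].
    if abs(t) > 2.0:
        return -np.inf
    return -0.5 * ((y_obs - surrogate(t)) / sigma) ** 2

# Metropolis sampling: every posterior evaluation now costs a polynomial
# evaluation instead of a full forward-model solve.
t, lp, samples = 0.0, log_post(0.0), []
for _ in range(20000):
    prop = t + 0.3 * rng.standard_normal()
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:
        t, lp = prop, lp_prop
    samples.append(t)

post_mean = float(np.mean(samples[5000:]))
print(round(post_mean, 2))           # close to theta_true = 0.8
```

The residual bias of the posterior relative to the exact model is controlled by the surrogate's approximation error, which motivates the adaptive refinement mentioned in the abstract.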

  12. Scattering amplitudes from multivariate polynomial division

    Mastrolia, Pierpaolo, E-mail: [Max-Planck-Institut fuer Physik, Foehringer Ring 6, 80805 Muenchen (Germany); Dipartimento di Fisica e Astronomia, Universita di Padova, Padova (Italy); INFN Sezione di Padova, via Marzolo 8, 35131 Padova (Italy); Mirabella, Edoardo, E-mail: [Max-Planck-Institut fuer Physik, Foehringer Ring 6, 80805 Muenchen (Germany); Ossola, Giovanni, E-mail: [New York City College of Technology, City University of New York, 300 Jay Street, Brooklyn, NY 11201 (United States); Graduate School and University Center, City University of New York, 365 Fifth Avenue, New York, NY 10016 (United States); Peraro, Tiziano, E-mail: [Max-Planck-Institut fuer Physik, Foehringer Ring 6, 80805 Muenchen (Germany)


    We show that the evaluation of scattering amplitudes can be formulated as a problem of multivariate polynomial division, with the components of the integration-momenta as indeterminates. We present a recurrence relation which, independently of the number of loops, leads to the multi-particle pole decomposition of the integrands of the scattering amplitudes. The recursive algorithm is based on the weak Nullstellensatz theorem and on the division modulo the Groebner basis associated to all possible multi-particle cuts. We apply it to dimensionally regulated one-loop amplitudes, recovering the well-known integrand-decomposition formula. Finally, we focus on the maximum-cut, defined as a system of on-shell conditions constraining the components of all the integration-momenta. By means of the Finiteness Theorem and of the Shape Lemma, we prove that the residue at the maximum-cut is parametrized by a number of coefficients equal to the number of solutions of the cut itself.
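The division step itself is readily illustrated with a computer algebra system: reduce an "integrand" modulo a Gröbner basis of the ideal generated by toy cut conditions. The polynomials below are invented examples, not actual amplitude integrands:

```python
import sympy as sp

x, y = sp.symbols("x y")

# Toy polynomials playing the role of on-shell (cut) conditions;
# invented for illustration, not actual propagator denominators.
cuts = [x**2 + y**2 - 1, x*y - 1]
f = x**3*y + x*y**3 + x + y          # the "integrand" to decompose

# Groebner basis of the ideal, then multivariate polynomial division.
G = sp.groebner(cuts, x, y, order="lex")
quotients, remainder = G.reduce(f)

# The division identity f = sum(q_i * g_i) + remainder holds exactly.
check = sp.expand(sum(q*g for q, g in zip(quotients, G.exprs)) + remainder - f)
print(remainder, check)
```

The remainder is the analogue of the residue left after subtracting the terms proportional to the cut conditions, which is the structural content of the integrand-decomposition formula.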

  13. q-Bernoulli numbers and q-Bernoulli polynomials revisited

    Kim Taekyun


    Full Text Available Abstract This paper carries out a further investigation of the q-Bernoulli numbers and q-Bernoulli polynomials given by Acikgöz et al. (Adv Differ Equ, Article ID 951764, 9, 2010), and some incorrect properties are revised. It is pointed out that the generating function for the q-Bernoulli numbers and polynomials is unreasonable. By using the theorem of Kim (Kyushu J Math 48, 73-86, 1994) (see Equation 9), some new generating functions for the q-Bernoulli numbers and polynomials are shown. Mathematics Subject Classification (2000): 11B68, 11S40, 11S80

  14. Generalized Freud's equation and level densities with polynomial potential

    Boobna, Akshat; Ghosh, Saugata


    We study orthogonal polynomials with weight $\exp[-NV(x)]$, where $V(x)=\sum_{k=1}^{d}a_{2k}x^{2k}/2k$ is a polynomial of order 2d. We derive the generalised Freud's equations for $d=3$, 4 and 5 and use these to obtain $R_{\mu}=h_{\mu}/h_{\mu -1}$, where $h_{\mu}$ is the normalization constant for the corresponding orthogonal polynomials. Moments of the density functions, expressed in terms of $R_{\mu}$, are obtained using Freud's equation, and from these, explicit results for the level densities as $N\rightarrow\infty$ are derived.

  15. Automorphisms of Algebras and Bochner's Property for Vector Orthogonal Polynomials

    Horozov, Emil


    We construct new families of vector orthogonal polynomials that have the property to be eigenfunctions of some differential operator. They are extensions of the Hermite and Laguerre polynomial systems. A third family, whose first member has been found by Y. Ben Cheikh and K. Douak is also constructed. The ideas behind our approach lie in the studies of bispectral operators. We exploit automorphisms of associative algebras which transform elementary vector orthogonal polynomial systems which are eigenfunctions of a differential operator into other systems of this type.

  16. Learning Read-constant Polynomials of Constant Degree modulo Composites

    Chattopadhyay, Arkadev; Gavaldá, Richard; Hansen, Kristoffer Arnsfelt


    Boolean functions that have constant degree polynomial representation over a fixed finite ring form a natural and strict subclass of the complexity class ACC^0. They are also precisely the functions computable efficiently by programs over fixed and finite nilpotent groups. This class is not known to be learnable in any reasonable learning model. In this paper, we provide a deterministic polynomial time algorithm for learning Boolean functions represented by polynomials of constant degree over arbitrary finite rings from membership queries, with the additional constraint that each variable...

  17. Diffusion Coefficient Calculations With Low Order Legendre Polynomial and Chebyshev Polynomial Approximation for the Transport Equation in Spherical Geometry

    Yasa, F.; Anli, F.; Guengoer, S.


    We present analytical calculations of spherically symmetric radiative transfer and neutron transport using a hypothesis of P1 and T1 low order polynomial approximation for the diffusion coefficient D. The transport equation in spherical geometry is treated as the pseudo slab equation. The validity of the polynomial expansion in transport theory is investigated through a comparison with classic diffusion theory. It is found that, for cases where the fluctuation of the scattering cross section dominates, the quantitative difference between the polynomial approximation and the diffusion results is physically acceptable in general

  18. Bivariate functional data clustering: grouping streams based on a varying coefficient model of the stream water and air temperature relationship

    H. Li; X. Deng; Andy Dolloff; E. P. Smith


    A novel clustering method for bivariate functional data is proposed to group streams based on their water–air temperature relationship. A distance measure is developed for bivariate curves by using a time-varying coefficient model and a weighting scheme. This distance is also adjusted by spatial correlation of streams via the variogram. Therefore, the proposed...

  19. A summation procedure for expansions in orthogonal polynomials

    Garibotti, C.R.; Grinstein, F.F.


    Approximants to functions defined by formal series expansions in orthogonal polynomials are introduced. They are shown to be convergent even out of the elliptical domain where the original expansion converges

  20. Classification of complex polynomial vector fields in one complex variable

    Branner, Bodil; Dias, Kealey


    This paper classifies the global structure of monic and centred one-variable complex polynomial vector fields. The classification is achieved by means of combinatorial and analytic data. More specifically, given a polynomial vector field, we construct a combinatorial invariant, describing the topology, and a set of analytic invariants, describing the geometry. Conversely, given admissible combinatorial and analytic data sets, we show using surgery the existence of a unique monic and centred polynomial vector field realizing the given invariants. This is the content of the Structure Theorem, the main result of the paper. This result is an extension and refinement of Douady et al. (Champs de vecteurs polynomiaux sur C. Unpublished manuscript) classification of the structurally stable polynomial vector fields. We further review some general concepts for completeness and show that vector fields...

  1. Skew-orthogonal polynomials and random matrix theory

    Ghosh, Saugata


    Orthogonal polynomials satisfy a three-term recursion relation irrespective of the weight function with respect to which they are defined. This gives a simple formula for the kernel function, known in the literature as the Christoffel-Darboux sum. The availability of asymptotic results of orthogonal polynomials and the simple structure of the Christoffel-Darboux sum make the study of unitary ensembles of random matrices relatively straightforward. In this book, the author develops the theory of skew-orthogonal polynomials and obtains recursion relations which, unlike orthogonal polynomials, depend on weight functions. After deriving reduced expressions, called the generalized Christoffel-Darboux formulas (GCD), he obtains universal correlation functions and non-universal level densities for a wide class of random matrix ensembles using the GCD. The author also shows that once questions about higher order effects are considered (questions that are relevant in different branches of physics and mathematics) the ...
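The Christoffel-Darboux sum mentioned above can be checked numerically. For orthonormal polynomials p_k with leading coefficients k_n, the identity reads sum_{k=0}^{n} p_k(x) p_k(y) = (k_n/k_{n+1}) (p_{n+1}(x) p_n(y) - p_n(x) p_{n+1}(y)) / (x - y). A small sketch using orthonormal Legendre polynomials (a standard example, not specific to this book):

```python
from math import factorial, sqrt

import numpy as np
from numpy.polynomial import legendre as leg

def p(k, x):
    # Orthonormal Legendre polynomial of degree k on [-1, 1].
    c = np.zeros(k + 1)
    c[k] = 1.0
    return sqrt((2 * k + 1) / 2) * leg.legval(x, c)

def lead(k):
    # Leading coefficient of the orthonormal Legendre polynomial.
    return sqrt((2 * k + 1) / 2) * factorial(2 * k) / (2**k * factorial(k)**2)

n, x, y = 5, 0.3, -0.7
lhs = sum(p(k, x) * p(k, y) for k in range(n + 1))
rhs = (lead(n) / lead(n + 1)) * (
    p(n + 1, x) * p(n, y) - p(n, x) * p(n + 1, y)
) / (x - y)
print(abs(lhs - rhs))  # agrees to machine precision
```

It is precisely this compact closed form for the kernel that the book generalizes to skew-orthogonal polynomials via the generalized Christoffel-Darboux formulas.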

  2. Numerical Simulation of Polynomial-Speed Convergence Phenomenon

    Li, Yao; Xu, Hui


    We provide a hybrid method that captures the polynomial speed of convergence and polynomial speed of mixing for Markov processes. The hybrid method that we introduce is based on the coupling technique and renewal theory. We propose to replace some estimates in classical results about the ergodicity of Markov processes by numerical simulations when the corresponding analytical proof is difficult. After that, all remaining conclusions can be derived from rigorous analysis. We then apply our results to seek numerical justification for the ergodicity of two 1D microscopic heat conduction models. The mixing rates of these two models are expected to be polynomial but are very difficult to prove. In both examples, our numerical results match the expected polynomial mixing rate well.

  3. Fast parallel computation of polynomials using few processors

    Valiant, Leslie; Skyum, Sven


    It is shown that any multivariate polynomial that can be computed sequentially in C steps and has degree d can be computed in parallel in O((log d)(log C + log d)) steps using only (Cd)^{O(1)} processors.

  4. Guts of surfaces and the colored Jones polynomial

    Futer, David; Purcell, Jessica


    This monograph derives direct and concrete relations between colored Jones polynomials and the topology of incompressible spanning surfaces in knot and link complements. Under mild diagrammatic hypotheses, we prove that the growth of the degree of the colored Jones polynomials is a boundary slope of an essential surface in the knot complement. We show that certain coefficients of the polynomial measure how far this surface is from being a fiber for the knot; in particular, the surface is a fiber if and only if a particular coefficient vanishes. We also relate hyperbolic volume to colored Jones polynomials. Our method is to generalize the checkerboard decompositions of alternating knots. Under mild diagrammatic hypotheses, we show that these surfaces are essential, and obtain an ideal polyhedral decomposition of their complement. We use normal surface theory to relate the pieces of the JSJ decomposition of the complement to the combinatorics of certain surface spines (state graphs). Since state graphs have p...

  5. Solving polynomial systems using no-root elimination blending schemes

    Barton, Michael


    Searching for the roots of (piecewise) polynomial systems of equations is a crucial problem in computer-aided design (CAD), and an efficient solution is in strong demand. Subdivision solvers are frequently used to achieve this goal; however

  6. Optimal stability polynomials for numerical integration of initial value problems

    Ketcheson, David I.; Ahmadia, Aron


    We consider the problem of finding optimally stable polynomial approximations to the exponential for application to one-step integration of initial value ordinary and partial differential equations. The objective is to find the largest stable step
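As a concrete instance of a stability polynomial, the classical fourth-order Runge-Kutta method has R(z) = 1 + z + z^2/2 + z^3/6 + z^4/24, and its real-axis stability limit (the largest step h with |R(-h)| <= 1) can be located by bisection. This is a generic illustration of the objects being optimized, not the optimization method of the paper:

```python
# Stability polynomial of the classical fourth-order Runge-Kutta method.
def R(z):
    return 1 + z + z**2 / 2 + z**3 / 6 + z**4 / 24

# Find the largest h > 0 with |R(-h)| <= 1 by bisection; the stability
# region boundary on the negative real axis is known to be near 2.785.
lo, hi = 0.0, 10.0
for _ in range(100):
    mid = (lo + hi) / 2
    if abs(R(-mid)) <= 1:
        lo = mid
    else:
        hi = mid
print(round(lo, 3))  # ≈ 2.785
```

Optimal stability polynomials generalize this: one searches over all degree-s polynomials consistent with the order conditions for the one maximizing the stable step size over a prescribed spectrum.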

  7. An algebraic approach to the non-symmetric Macdonald polynomial

    Nishino, Akinori; Ujino, Hideaki; Wadati, Miki


    In terms of the raising and lowering operators, we algebraically construct the non-symmetric Macdonald polynomials which are simultaneous eigenfunctions of the commuting Cherednik operators. We also calculate Cherednik's scalar product of them

  8. An Elementary Proof of the Polynomial Matrix Spectral Factorization Theorem

    Ephremidze, Lasha


    A very simple and short proof of the polynomial matrix spectral factorization theorem (on the unit circle as well as on the real line) is presented, which relies on elementary complex analysis and linear algebra.
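For the scalar case on the unit circle, spectral factorization can be sketched by root selection: factor the (palindromic) autocorrelation polynomial and keep the roots inside the unit disc to obtain the minimum-phase factor. The coefficients below are invented for illustration, and this root-based construction is a standard textbook device, not the proof technique of the paper:

```python
import numpy as np

# Known minimum-phase factor (invented), used to build a valid
# autocorrelation r; the task is to recover it from r alone.
q_true = np.array([2.0, 1.0, 0.5])
r = np.convolve(q_true, q_true[::-1])   # palindromic, length 5

# Roots of the ordinary polynomial with coefficients r come in (z, 1/z)
# pairs; the minimum-phase factor keeps those inside the unit circle.
roots = np.roots(r)
inside = roots[np.abs(roots) < 1]
q = np.real(np.poly(inside))            # monic polynomial from stable roots

# Fix the overall scale so that q * reversed(q) reproduces r.
mid = len(r) // 2
scale = np.sqrt(r[mid] / np.convolve(q, q[::-1])[mid])
q_hat = scale * q
print(np.round(q_hat, 6))               # recovers q_true
```

The matrix version treated in the paper is harder precisely because roots no longer suffice; the factorization must be organized at the level of matrix polynomials.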

  9. Force prediction in cold rolling mills by polynomial methods

    Nicu ROMAN


    Full Text Available A method for steel and aluminium strip thickness control is presented, including a new predictive rolling-force estimation technique based on a statistical polynomial model.

  10. Entanglement entropy and the colored Jones polynomial

    Balasubramanian, Vijay; DeCross, Matthew; Fliss, Jackson; Kar, Arjun; Leigh, Robert G.; Parrikar, Onkar


    We study the multi-party entanglement structure of states in Chern-Simons theory created by performing the path integral on 3-manifolds with linked torus boundaries, called link complements. For gauge group SU(2), the wavefunctions of these states (in a particular basis) are the colored Jones polynomials of the corresponding links. We first review the case of U(1) Chern-Simons theory where these are stabilizer states, a fact we use to re-derive an explicit formula for the entanglement entropy across a general link bipartition. We then present the following results for SU(2) Chern-Simons theory: (i) The entanglement entropy for a bipartition of a link gives a lower bound on the genus of surfaces in the ambient S^3 separating the two sublinks. (ii) All torus links (namely, links which can be drawn on the surface of a torus) have a GHZ-like entanglement structure, i.e., partial traces leave a separable state. By contrast, through explicit computation, we test in many examples that hyperbolic links (namely, links whose complements admit hyperbolic structures) have W-like entanglement, i.e., partial traces leave a non-separable state. (iii) Finally, we consider hyperbolic links in the complexified SL(2,C) Chern-Simons theory, which is closely related to 3d Einstein gravity with a negative cosmological constant. In the limit of small Newton constant, we discuss how the entanglement structure is controlled by the Neumann-Zagier potential on the moduli space of hyperbolic structures on the link complement.

  11. Quasi-topological Ricci polynomial gravities

    Li, Yue-Zhou; Liu, Hai-Shan; Lü, H.


    Quasi-topological terms in gravity can be viewed as those that give no contribution to the equations of motion for a special subclass of metric ansätze. They therefore play no rôle in constructing these solutions, but can affect the general perturbations. We consider Einstein gravity extended with Ricci tensor polynomial invariants, which admits Einstein metrics with appropriate effective cosmological constants as its vacuum solutions. We construct three types of quasi-topological gravities. The first type is for the most general static metrics with spherical, toroidal or hyperbolic isometries. The second type is for the special static metrics where g tt g rr is constant. The third type is the linearized quasi-topological gravities on the Einstein metrics. We construct and classify results that are either dependent on or independent of dimensions, up to the tenth order. We then consider a subset of these three types and obtain Lovelock-like quasi-topological gravities, that are independent of the dimensions. The linearized gravities on Einstein metrics in all dimensions are simply Einstein and hence ghost free. The theories become quasi-topological on static metrics in one specific dimension, but non-trivial in others. We also focus on the quasi-topological Ricci cubic invariant in four dimensions as a specific example to study its effect on holography, including shear viscosity, thermoelectric DC conductivities and butterfly velocity. In particular, we find that the holographic diffusivity bounds can be violated by the quasi-topological terms, which can induce an extra massive mode that yields a butterfly velocity unbounded above.

  12. Invariant hyperplanes and Darboux integrability of polynomial vector fields

    Zhang Xiang


    This paper is composed of two parts. In the first part, we provide an upper bound for the number of invariant hyperplanes of the polynomial vector fields in n variables. This result generalizes those given in Artes et al (1998 Pac. J. Math. 184 207-30) and Llibre and Rodriguez (2000 Bull. Sci. Math. 124 599-619). The second part gives an extension of the Darboux theory of integrability to polynomial vector fields on algebraic varieties

  13. Interpretation of stream programs: characterizing type 2 polynomial time complexity

    Férée , Hugo; Hainry , Emmanuel; Hoyrup , Mathieu; Péchoux , Romain


    We study polynomial time complexity of type 2 functionals. For that purpose, we introduce a first order functional stream language. We give criteria, named well-founded, on such programs relying on second order interpretation that characterize two variants of type 2 polynomial complexity including the Basic Feasible Functions (BFF). These characterizations provide a new insight on the complexity of stream programs. Finally, we adapt these results to functions over th...

  14. The Combinatorial Rigidity Conjecture is False for Cubic Polynomials

    Henriksen, Christian


    We show that there exist two cubic polynomials with connected Julia sets which are combinatorially equivalent but not topologically conjugate on their Julia sets. This disproves a conjecture by McMullen from 1995.

  15. Vanishing of Littlewood-Richardson polynomials is in P

    Adve, Anshul; Robichaux, Colleen; Yong, Alexander


    J. DeLoera-T. McAllister and K. D. Mulmuley-H. Narayanan-M. Sohoni independently proved that determining the vanishing of Littlewood-Richardson coefficients has strongly polynomial time computational complexity. Viewing these as Schubert calculus numbers, we prove the generalization to the Littlewood-Richardson polynomials that control equivariant cohomology of Grassmannians. We construct a polytope using the edge-labeled tableau rule of H. Thomas-A. Yong. Our proof then combines a saturation...

  16. Discrete-Time Filter Synthesis using Product of Gegenbauer Polynomials

    N. Stojanovic; N. Stamenkovic; I. Krstic


    A new approximation for designing continuous-time and discrete-time low-pass filters, presented in this paper and based on the product of Gegenbauer polynomials, provides the ability of more flexible adjustment of passband and stopband responses. The design is achieved taking into account a prescribed specification, leading to a better trade-off among the magnitude and group delay responses. Many well-known continuous-time and discrete-time transitional filters based on the classical polynomial approx...

  17. Non-existence criteria for Laurent polynomial first integrals

    Shaoyun Shi


    Full Text Available In this paper we derive some simple criteria for non-existence and partial non-existence of Laurent polynomial first integrals for a general nonlinear system of ordinary differential equations $\dot x = f(x)$, $x \in \mathbb{R}^n$ with $f(0) = 0$. We show that if the eigenvalues of the Jacobi matrix of the vector field $f(x)$ are $\mathbb{Z}$-independent, then the system has no nontrivial Laurent polynomial integrals.

  18. Raising and Lowering Operators for Askey-Wilson Polynomials

    Siddhartha Sahi


    Full Text Available In this paper we describe two pairs of raising/lowering operators for Askey-Wilson polynomials, which result from constructions involving very different techniques. The first technique is quite elementary, and depends only on the ''classical'' properties of these polynomials, viz. the q-difference equation and the three term recurrence. The second technique is less elementary, and involves the one-variable version of the double affine Hecke algebra.

  19. Bounds and asymptotics for orthogonal polynomials for varying weights

    Levin, Eli


    This book establishes bounds and asymptotics under almost minimal conditions on the varying weights, and applies them to universality limits and entropy integrals. Orthogonal polynomials associated with varying weights play a key role in analyzing random matrices and other topics. This book will be of use to a wide community of mathematicians, physicists, and statisticians dealing with techniques of potential theory, orthogonal polynomials, and approximation theory, as well as random matrices.

  20. Polynomial fuzzy observer designs: a sum-of-squares approach.

    Tanaka, Kazuo; Ohtake, Hiroshi; Seo, Toshiaki; Tanaka, Motoyasu; Wang, Hua O


    This paper presents a sum-of-squares (SOS) approach to polynomial fuzzy observer designs for three classes of polynomial fuzzy systems. The proposed SOS-based framework provides a number of innovations and improvements over the existing linear matrix inequality (LMI)-based approaches to Takagi-Sugeno (T-S) fuzzy controller and observer designs. First, we briefly summarize previous results on polynomial fuzzy systems, a more general representation of the well-known T-S fuzzy system. Next, we propose polynomial fuzzy observers to estimate states in three classes of polynomial fuzzy systems and derive SOS conditions to design polynomial fuzzy controllers and observers. A remarkable feature of the SOS design conditions for the first two classes (Classes I and II) is that they realize the so-called separation principle, i.e., the polynomial fuzzy controller and observer for each class can be designed separately without compromising the stability of the overall control system, while the state-estimation error (via the observer) still converges to zero. Although the separation principle does not hold for the last class (Class III), we propose an algorithm to design a polynomial fuzzy controller and observer that guarantee the stability of the overall control system and the convergence of the state-estimation error to zero. All the design conditions in the proposed approach can be represented in terms of SOS and are symbolically and numerically solved via the recently developed SOSTOOLS and a semidefinite-program solver, respectively. To illustrate the validity and applicability of the proposed approach, three design examples are provided. The examples demonstrate the advantages of the SOS-based approaches over the existing LMI approaches to T-S fuzzy observer designs.

  1. Ratio asymptotics of Hermite-Pade polynomials for Nikishin systems

    Aptekarev, A I; Lopez, Guillermo L; Rocha, I A


    The existence of ratio asymptotics is proved for a sequence of multiple orthogonal polynomials with orthogonality relations distributed among a system of m finite Borel measures with support on a bounded interval of the real line which form a so-called Nikishin system. For m=1 this result reduces to Rakhmanov's celebrated theorem on the ratio asymptotics for orthogonal polynomials on the real line.

  2. Families of superintegrable Hamiltonians constructed from exceptional polynomials

    Post, Sarah; Tsujimoto, Satoshi; Vinet, Luc


    We introduce a family of exactly-solvable two-dimensional Hamiltonians whose wave functions are given in terms of Laguerre and exceptional Jacobi polynomials. The Hamiltonians contain purely quantum terms which vanish in the classical limit leaving only a previously known family of superintegrable systems. Additional, higher-order integrals of motion are constructed from ladder operators for the considered orthogonal polynomials proving the quantum system to be superintegrable. (paper)

  3. Lower bounds for the circuit size of partially homogeneous polynomials

    Le, Hong-Van


    Vol. 225, No. 4 (2017), pp. 639-657. ISSN 1072-3374. Institutional support: RVO:67985840. Keywords: partially homogeneous polynomials * polynomials. Subject RIV: BA - General Mathematics. OBOR OECD: Computer sciences, information science, bioinformatics (hardware development to be 2.2, social aspect to be 5.8)

  4. Euler Polynomials and Identities for Non-Commutative Operators

    De Angelis, V.; Vignat, C.


    Three kinds of identities involving non-commuting operators and Euler and Bernoulli polynomials are studied. The first identity, as given by Bender and Bettencourt, expresses the nested commutator of the Hamiltonian and momentum operators as the commutator of the momentum and the shifted Euler polynomial of the Hamiltonian. The second one, due to J.-C. Pain, links the commutators and anti-commutators of the monomials of the position and momentum operators. The third appears in a work by Fig...

  5. Conference on Commutative rings, integer-valued polynomials and polynomial functions

    Frisch, Sophie; Glaz, Sarah; Commutative Algebra : Recent Advances in Commutative Rings, Integer-Valued Polynomials, and Polynomial Functions


    This volume presents a multi-dimensional collection of articles highlighting recent developments in commutative algebra. It also includes an extensive bibliography and lists a substantial number of open problems that point to future directions of research in the represented subfields. The contributions cover areas in commutative algebra that have flourished in the last few decades and are not yet well represented in book form. Highlighted topics and research methods include Noetherian and non- Noetherian ring theory as well as integer-valued polynomials and functions. Specific topics include: ·    Homological dimensions of Prüfer-like rings ·    Quasi complete rings ·    Total graphs of rings ·    Properties of prime ideals over various rings ·    Bases for integer-valued polynomials ·    Boolean subrings ·    The portable property of domains ·    Probabilistic topics in Intn(D) ·    Closure operations in Zariski-Riemann spaces of valuation domains ·    Stability of do...

  6. An overview on polynomial approximation of NP-hard problems

    Paschos Vangelis Th.


    Full Text Available The fact that a polynomial time algorithm is very unlikely to be devised for optimally solving NP-hard problems strongly motivates both researchers and practitioners to solve such problems heuristically, by making a trade-off between computational time and solution quality. In other words, heuristic computation consists of trying to find not the best solution but one solution which is 'close to' the optimal one in reasonable time. Among the classes of heuristic methods for NP-hard problems, polynomial approximation algorithms aim at solving a given NP-hard problem in polynomial time by computing feasible solutions that are, under some predefined criterion, as near to the optimal ones as possible. The polynomial approximation theory deals with the study of such algorithms. This survey first presents and analyzes polynomial approximation algorithms for some classical examples of NP-hard problems. Secondly, it shows how classical notions and tools of complexity theory, such as polynomial reductions, can be matched with polynomial approximation in order to devise structural results for NP-hard optimization problems. Finally, it presents a quick description of what is commonly called inapproximability results. Such results provide limits on the approximability of the problems tackled.
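
    As a concrete illustration of the kind of algorithm the survey analyzes (not an example taken from it), the classical maximal-matching heuristic for minimum vertex cover runs in polynomial time and always returns a cover at most twice the optimal size:

```python
def vertex_cover_2approx(edges):
    """Greedy 2-approximation for minimum vertex cover.

    Repeatedly pick an uncovered edge and add both endpoints; the chosen
    edges form a matching, so any optimal cover must contain at least one
    endpoint of each, which gives the factor-2 guarantee.
    """
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.update((u, v))  # take both endpoints of an uncovered edge
    return cover


# Path graph 1-2-3-4: the optimum is {2, 3}; the heuristic may return all
# four vertices, which is still within the factor-2 bound.
cover = vertex_cover_2approx([(1, 2), (2, 3), (3, 4)])
```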

  7. Imaging characteristics of Zernike and annular polynomial aberrations.

    Mahajan, Virendra N; Díaz, José Antonio


    The general equations for the point-spread function (PSF) and optical transfer function (OTF) are given for any pupil shape, and they are applied to optical imaging systems with circular and annular pupils. The symmetry properties of the PSF, the real and imaginary parts of the OTF, and the modulation transfer function (MTF) of a system with a circular pupil aberrated by a Zernike circle polynomial aberration are derived. The interferograms and PSFs are illustrated for some typical polynomial aberrations with a sigma value of one wave, and 3D PSFs and MTFs are shown for 0.1 wave. The Strehl ratio is also calculated for polynomial aberrations with a sigma value of 0.1 wave, and shown to be well estimated from the sigma value. The numerical results are compared with the corresponding results in the literature. Because of the same angular dependence of the corresponding annular and circle polynomial aberrations, the symmetry properties of systems with annular pupils aberrated by an annular polynomial aberration are the same as those for a circular pupil aberrated by a corresponding circle polynomial aberration. They are also illustrated with numerical examples.

  8. Polynomial asymptotic stability of damped stochastic differential equations

    John Appleby


    Full Text Available The paper studies the polynomial convergence of solutions of a scalar nonlinear It\\^{o} stochastic differential equation \\[dX(t) = -f(X(t))\\,dt + \\sigma(t)\\,dB(t)\\] where it is known, {\\it a priori}, that $\\lim_{t\\rightarrow\\infty} X(t)=0$, a.s. The intensity of the stochastic perturbation $\\sigma$ is a deterministic, continuous and square integrable function, which tends to zero more quickly than a polynomially decaying function. The function $f$ obeys $\\lim_{x\\rightarrow 0}\\mbox{sgn}(x)f(x)/|x|^\\beta = a$, for some $\\beta>1$ and $a>0$. We study two asymptotic regimes: when $\\sigma$ tends to zero sufficiently quickly, the polynomial decay rate of solutions is the same as for the deterministic equation (when $\\sigma\\equiv0$). When $\\sigma$ decays more slowly, a weaker almost sure polynomial upper bound on the decay rate of solutions is established. Results which establish the necessity for $\\sigma$ to decay polynomially in order to guarantee the almost sure polynomial decay of solutions are also proven.

  9. Global assessment of predictability of water availability: A bivariate probabilistic Budyko analysis

    Wang, Weiguang; Fu, Jianyu


    Estimating continental water availability is of great importance for water resources management, in terms of maintaining ecosystem integrity and sustaining societal development. To more accurately quantify the predictability of water availability, a bivariate probabilistic Budyko approach was developed on the basis of the univariate probabilistic Budyko framework, using a copula-based joint distribution model to account for the dependence between the parameter ω of Wang-Tang's equation and the Normalized Difference Vegetation Index (NDVI), and was applied globally. The results indicate that the predictive performance for global water availability depends on the climatic conditions. In comparison with the simple univariate distribution, the bivariate one produces a lower interquartile range under the same global dataset, especially in regions with higher NDVI values, highlighting the importance of developing the joint distribution by taking into account the dependence structure of the parameter ω and NDVI, which can provide a more accurate probabilistic evaluation of water availability.

  10. Smoothing of the bivariate LOD score for non-normal quantitative traits.

    Buil, Alfonso; Dyer, Thomas D; Almasy, Laura; Blangero, John


    Variance component analysis provides an efficient method for performing linkage analysis for quantitative traits. However, type I error of variance components-based likelihood ratio testing may be affected when phenotypic data are non-normally distributed (especially with high values of kurtosis). This results in inflated LOD scores when the normality assumption does not hold. Even though different solutions have been proposed to deal with this problem with univariate phenotypes, little work has been done in the multivariate case. We present an empirical approach to adjust the inflated LOD scores obtained from a bivariate phenotype that violates the assumption of normality. Using the Collaborative Study on the Genetics of Alcoholism data available for the Genetic Analysis Workshop 14, we show how bivariate linkage analysis with leptokurtotic traits gives an inflated type I error. We perform a novel correction that achieves acceptable levels of type I error.

  11. Testing independence of bivariate interval-censored data using modified Kendall's tau statistic.

    Kim, Yuneung; Lim, Johan; Park, DoHwan


    In this paper, we study a nonparametric procedure to test independence of bivariate interval-censored data, for both current status data (case 1 interval-censored data) and case 2 interval-censored data. To this end, we propose a score-based modification of the Kendall's tau statistic for bivariate interval-censored data. Our modification defines the Kendall's tau statistic with expected numbers of concordant and discordant pairs of data. The performance of the modified approach is illustrated by simulation studies and application to the AIDS study. We compare our method to alternative approaches such as the two-stage estimation method by Sun et al. (Scandinavian Journal of Statistics, 2006) and the multiple imputation method by Betensky and Finkelstein (Statistics in Medicine, 1999b). © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
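
    For reference, a minimal sketch of the classical (uncensored) Kendall's tau computed from concordant and discordant pair counts; the modification described above replaces these observed counts with their expected values under interval censoring:

```python
def kendall_tau(x, y):
    """Classical Kendall's tau from concordant/discordant pair counts."""
    n = len(x)
    concordant = discordant = 0
    for i in range(n):
        for j in range(i + 1, n):
            s = (x[i] - x[j]) * (y[i] - y[j])
            if s > 0:
                concordant += 1   # pair ordered the same way in x and y
            elif s < 0:
                discordant += 1   # pair ordered oppositely
    return (concordant - discordant) / (n * (n - 1) / 2)
```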

  12. Genetics of Obesity Traits: A Bivariate Genome-Wide Association Analysis

    Wu, Yili; Duan, Haiping; Tian, Xiaocao


    Previous genome-wide association studies on anthropometric measurements have identified more than 100 related loci, but only a small portion of the heritability of obesity was explained. Here we present a bivariate twin study to look for the genetic variants associated with body mass index and waist-hip ratio, and to explore the obesity-related pathways in Northern Han Chinese. A Cholesky decomposition model for 242 monozygotic and 140 dizygotic twin pairs indicated a moderate genetic correlation (r = 0.53, 95% CI: 0.42–0.64) between body mass index and waist-hip ratio. Bivariate genome-wide association.......05. Expression quantitative trait loci analysis identified rs2242044 as a significant cis-eQTL in both the normal adipose-subcutaneous (P = 1.7 × 10−9) and adipose-visceral (P = 4.4 × 10−15) tissue. These findings may provide an important entry point to unravel genetic pleiotropy in obesity traits.

  13. On minimum divergence adaptation of discrete bivariate distributions to given marginals

    Vajda, Igor; van der Meulen, E. C.


    Vol. 51, No. 1 (2005), pp. 313-320. ISSN 0018-9448. R&D Projects: GA ČR GA201/02/1391; GA MŠk 1M0572. Institutional research plan: CEZ:AV0Z10750506. Keywords: approximation of contingency tables * bivariate discrete distributions * minimization of divergences. Subject RIV: BD - Theory of Information. Impact factor: 2.183, year: 2005

  14. Evidence for bivariate linkage of obesity and HDL-C levels in the Framingham Heart Study.

    Arya, Rector; Lehman, Donna; Hunt, Kelly J; Schneider, Jennifer; Almasy, Laura; Blangero, John; Stern, Michael P; Duggirala, Ravindranath


    Epidemiological studies have indicated that obesity and low high-density lipoprotein (HDL) levels are strong cardiovascular risk factors, and that these traits are inversely correlated. Despite the belief that these traits are correlated in part due to pleiotropy, knowledge on specific genes commonly affecting obesity and dyslipidemia is very limited. To address this issue, we first conducted univariate multipoint linkage analysis for body mass index (BMI) and HDL-C to identify loci influencing variation in these phenotypes using Framingham Heart Study data relating to 1702 subjects distributed across 330 pedigrees. Subsequently, we performed bivariate multipoint linkage analysis to detect common loci influencing covariation between these two traits. We scanned the genome and identified a major locus near marker D6S1009 influencing variation in BMI (LOD = 3.9) using the program SOLAR. We also identified a major locus for HDL-C near marker D2S1334 on chromosome 2 (LOD = 3.5) and another region near marker D6S1009 on chromosome 6 with suggestive evidence for linkage (LOD = 2.7). Since these two phenotypes have been independently mapped to the same region on chromosome 6q, we used the bivariate multipoint linkage approach using SOLAR. The bivariate linkage analysis of BMI and HDL-C implicated the genetic region near marker D6S1009 as harboring a major gene commonly influencing these phenotypes (bivariate LOD = 6.2; LODeq = 5.5) and appears to improve the power to map the correlated traits precisely to a region. We found substantial evidence for a quantitative trait locus with pleiotropic effects, which appears to influence both BMI and HDL-C phenotypes in the Framingham data.

  15. The bivariate probit model of uncomplicated control of tumor: a heuristic exposition of the methodology

    Herbert, Donald


    Purpose: To describe the concept, models, and methods for the construction of estimates of joint probability of uncomplicated control of tumors in radiation oncology. Interpolations using this model can lead to the identification of more efficient treatment regimens for an individual patient. The requirement to find the treatment regimen that will maximize the joint probability of uncomplicated control of tumors suggests a new class of evolutionary experimental designs--Response Surface Methods--for clinical trials in radiation oncology. Methods and Materials: The software developed by Lesaffre and Molenberghs is used to construct bivariate probit models of the joint probability of uncomplicated control of cancer of the oropharynx from a set of 45 patients for each of whom the presence/absence of recurrent tumor (the binary event Ē1/E1) and the presence/absence of necrosis (the binary event E2/Ē2) of the normal tissues of the target volume is recorded, together with the treatment variables dose, time, and fractionation. Results: The bivariate probit model can be used to select a treatment regimen that will give a specified probability, say P(S) = 0.60, of uncomplicated control of tumor by interpolation within a set of treatment regimens with known outcomes of recurrence and necrosis. The bivariate probit model can be used to guide a sequence of clinical trials to find the maximum probability of uncomplicated control of tumor for patients in a given prognostic stratum using Response Surface methods by extrapolation from an initial set of treatment regimens. Conclusions: The design of treatments for individual patients and the design of clinical trials might be improved by use of a bivariate probit model and Response Surface Methods

  16. Comparison of Six Methods for the Detection of Causality in a Bivariate Time Series

    Krakovská, A.; Jakubík, J.; Chvosteková, M.; Coufal, David; Jajcay, Nikola; Paluš, Milan


    Vol. 97, No. 4 (2018), Article No. 042207. ISSN 2470-0045. R&D Projects: GA MZd(CZ) NV15-33250A. Institutional support: RVO:67985807. Keywords: comparative study * causality detection * bivariate models * Granger causality * transfer entropy * convergent cross mappings. Impact factor: 2.366, year: 2016

  17. Can the bivariate Hurst exponent be higher than an average of the separate Hurst exponents?

    Krištoufek, Ladislav


    Vol. 431, No. 1 (2015), pp. 124-127. ISSN 0378-4371. R&D Projects: GA ČR(CZ) GP14-11402P. Institutional support: RVO:67985556. Keywords: Correlations * Power-law cross-correlations * Bivariate Hurst exponent * Spectrum coherence. Subject RIV: AH - Economics. Impact factor: 1.785, year: 2015

  18. Bivariate return periods of temperature and precipitation explain a large fraction of European crop yields

    Zscheischler, Jakob; Orth, Rene; Seneviratne, Sonia I.


    Crops are vital for human society. Crop yields vary with climate and it is important to understand how climate and crop yields are linked to ensure future food security. Temperature and precipitation are among the key driving factors of crop yield variability. Previous studies have investigated mostly linear relationships between temperature and precipitation and crop yield variability. Other research has highlighted the adverse impacts of climate extremes, such as drought and heat waves, on crop yields. Impacts are, however, often non-linearly related to multivariate climate conditions. Here we derive bivariate return periods of climate conditions as indicators for climate variability along different temperature-precipitation gradients. We show that in Europe, linear models based on bivariate return periods of specific climate conditions explain on average significantly more crop yield variability (42 %) than models relying directly on temperature and precipitation as predictors (36 %). Our results demonstrate that most often crop yields increase along a gradient from hot and dry to cold and wet conditions, with lower yields associated with hot and dry periods. The majority of crops are most sensitive to climate conditions in summer and to maximum temperatures. The use of bivariate return periods allows the integration of non-linear impacts into climate-crop yield analysis. This offers new avenues to study the link between climate and crop yield variability and suggests that they are possibly more strongly related than what is inferred from conventional linear models.

  19. Robust bivariate error detection in skewed data with application to historical radiosonde winds

    Sun, Ying; Hering, Amanda S.; Browning, Joshua M.


    The global historical radiosonde archives date back to the 1920s and contain the only directly observed measurements of temperature, wind, and moisture in the upper atmosphere, but they contain many random errors. Most of the focus on cleaning these large datasets has been on temperatures, but winds are important inputs to climate models and in studies of wind climatology. The bivariate distribution of the wind vector does not have elliptical contours but is skewed and heavy-tailed, so we develop two methods for outlier detection based on the bivariate skew-t (BST) distribution, using either distance-based or contour-based approaches to flag observations as potential outliers. We develop a framework to robustly estimate the parameters of the BST and then show how the tuning parameter to get these estimates is chosen. In simulation, we compare our methods with one based on a bivariate normal distribution and a nonparametric approach based on the bagplot. We then apply all four methods to the winds observed for over 35,000 radiosonde launches at a single station and demonstrate differences in the number of observations flagged across eight pressure levels and through time. In this pilot study, the method based on the BST contours performs very well.

  1. Okounkov's BC-Type Interpolation Macdonald Polynomials and Their q=1 Limit

    Koornwinder, T.H.


    This paper surveys eight classes of polynomials associated with A-type and BC-type root systems: Jack, Jacobi, Macdonald and Koornwinder polynomials and interpolation (or shifted) Jack and Macdonald polynomials and their BC-type extensions. Among these, the BC-type interpolation Jack polynomials were...

  2. An Improved STARFM with Help of an Unmixing-Based Method to Generate High Spatial and Temporal Resolution Remote Sensing Data in Complex Heterogeneous Regions.

    Xie, Dengfeng; Zhang, Jinshui; Zhu, Xiufang; Pan, Yaozhong; Liu, Hongli; Yuan, Zhoumiqi; Yun, Ya


    Remote sensing technology plays an important role in monitoring rapid changes of the Earth's surface. However, sensors that can simultaneously provide satellite images with both high temporal and high spatial resolution have not yet been designed. This paper proposes an improved spatial and temporal adaptive reflectance fusion model (STARFM) with the help of an unmixing-based method (USTARFM) to generate the high spatial and temporal resolution data needed for the study of heterogeneous areas. The results showed that the USTARFM had higher accuracy than the STARFM method in two aspects of analysis: individual bands and heterogeneity. Taking the predicted NIR band as an example, the correlation coefficients (r) for the USTARFM, STARFM and unmixing methods were 0.96, 0.95, and 0.90, respectively (p-value ... data fusion problems faced when using STARFM. Additionally, the USTARFM method could help researchers achieve better performance than STARFM at a smaller window size, owing to its quantitative representation of the heterogeneous land surface.

  3. Evaluating the Performance of Polynomial Regression Method with Different Parameters during Color Characterization

    Bangyong Sun


    Full Text Available The polynomial regression method is employed to calculate the relationship between device color space and CIE color space for color characterization, and the performance of different expressions with specific parameters is evaluated. First, the polynomial equation for color conversion is established and the computation of the polynomial coefficients is analysed. Then, different forms of polynomial equations are used to calculate the CIE color values of RGB and CMYK, and the corresponding color errors are compared. Finally, an optimal polynomial expression is obtained by analysing several related parameters during color conversion, including the number of polynomial terms, the degree of the polynomial terms, the selection of CIE visual spaces, and the linearization.
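
    A minimal sketch of the least-squares step described above, assuming a hypothetical 10-term second-degree polynomial expansion of RGB (the paper evaluates several such forms; the training data here are placeholders, not measured values):

```python
import numpy as np

def poly_terms(rgb):
    r, g, b = rgb
    # 10-term expansion: 1, r, g, b, rg, rb, gb, r^2, g^2, b^2
    return np.array([1.0, r, g, b, r*g, r*b, g*b, r*r, g*g, b*b])

def fit_characterization(rgb_samples, xyz_samples):
    # One least-squares solve yields coefficients for all three CIE channels
    A = np.array([poly_terms(p) for p in rgb_samples])
    coeffs, *_ = np.linalg.lstsq(A, xyz_samples, rcond=None)
    return coeffs  # shape (10, 3)

def rgb_to_xyz(rgb, coeffs):
    return poly_terms(rgb) @ coeffs
```

The color error of a candidate expression would then be measured as the distance between predicted and measured CIE values over a test chart.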

  4. Discriminants and functional equations for polynomials orthogonal on the unit circle

    Ismail, M.E.H.; Witte, N.S.


    We derive raising and lowering operators for orthogonal polynomials on the unit circle and find second order differential and q-difference equations for these polynomials. A general functional equation is found which allows one to relate the zeros of the orthogonal polynomials to the stationary values of an explicit quasi-energy and implies recurrences on the orthogonal polynomial coefficients. We also evaluate the discriminants and quantized discriminants of polynomials orthogonal on the unit circle

  5. PLOTNFIT.4TH, Data Plotting and Curve Fitting by Polynomials

    Schiffgens, J.O.


    1 - Description of program or function: PLOTnFIT is used for plotting and analyzing data by fitting nth-degree polynomials of basis functions to the data interactively and printing graphs of the data and the polynomial functions. It can be used to generate linear, semi-log, and log-log graphs and can automatically scale the coordinate axes to suit the data. Multiple data sets may be plotted on a single graph. An auxiliary program, READ1ST, is included which produces an on-line summary of the information contained in the PLOTnFIT reference report. 2 - Method of solution: PLOTnFIT uses the least squares method to calculate the coefficients of nth-degree (up to 10th-degree) polynomials of 11 selected basis functions such that each polynomial fits the data in a least squares sense. The procedure incorporated in the code uses a linear combination of orthogonal polynomials to avoid 'ill-conditioning' and to perform the curve fitting task with single-precision arithmetic. 3 - Restrictions on the complexity of the problem - Maxima of: 225 data points per job (or graph), including all data sets; 8 data sets (or tasks) per job (or graph)
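
    The fitting step can be sketched in a few lines of NumPy (an illustration of the method, not PLOTnFIT itself); `Polynomial.fit` likewise works in a shifted and scaled basis internally to avoid the ill-conditioning mentioned above:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 5.0, 50)
# Synthetic data from a known cubic, plus a tiny noise term
y = 2.0 - 1.5 * x + 0.5 * x**3 + rng.normal(0.0, 1e-6, x.size)

# Least-squares cubic fit, computed in a well-conditioned internal basis
fit = np.polynomial.Polynomial.fit(x, y, deg=3)
coefs = fit.convert().coef  # convert back to the plain power basis
```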

  6. Polynomial algebra of discrete models in systems biology.

    Veliz-Cuba, Alan; Jarrah, Abdul Salam; Laubenbacher, Reinhard


    An increasing number of discrete mathematical models are being published in Systems Biology, ranging from Boolean network models to logical models and Petri nets. They are used to model a variety of biochemical networks, such as metabolic networks, gene regulatory networks and signal transduction networks. There is increasing evidence that such models can capture key dynamic features of biological networks and can be used successfully for hypothesis generation. This article provides a unified framework that can aid the mathematical analysis of Boolean network models, logical models and Petri nets. They can be represented as polynomial dynamical systems, which allows the use of a variety of mathematical tools from computer algebra for their analysis. Algorithms are presented for the translation into polynomial dynamical systems. Examples are given of how polynomial algebra can be used for the model analysis. Supplementary data are available at Bioinformatics online.
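
    As a toy illustration of the translation (a hypothetical two-node network, not an example from the article): over GF(2), AND becomes multiplication, NOT becomes 1 + x, and OR becomes x + y + xy, so a Boolean network is a polynomial dynamical system whose fixed points can be found algebraically or by enumeration:

```python
def step(state):
    # Rules of the toy network: x1' = x1 AND x2, x2' = NOT x1,
    # written as polynomials over GF(2)
    x1, x2 = state
    return ((x1 * x2) % 2, (1 + x1) % 2)

# Steady states of the network = fixed points of the polynomial system
states = [(a, b) for a in (0, 1) for b in (0, 1)]
fixed_points = [s for s in states if step(s) == s]
```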

  7. Nuclear-magnetic-resonance quantum calculations of the Jones polynomial

    Marx, Raimund; Spoerl, Andreas; Pomplun, Nikolas; Schulte-Herbrueggen, Thomas; Glaser, Steffen J.; Fahmy, Amr; Kauffman, Louis; Lomonaco, Samuel; Myers, John M.


    The repertoire of problems theoretically solvable by a quantum computer recently expanded to include the approximate evaluation of knot invariants, specifically the Jones polynomial. The experimental implementation of this evaluation, however, involves many known experimental challenges. Here we present experimental results for a small-scale approximate evaluation of the Jones polynomial by nuclear magnetic resonance (NMR); in addition, we show how to escape from the limitations of NMR approaches that employ pseudopure states. Specifically, we use two spin-1/2 nuclei of natural abundance chloroform and apply a sequence of unitary transforms representing the trefoil knot, the figure-eight knot, and the Borromean rings. After measuring the nuclear spin state of the molecule in each case, we are able to estimate the value of the Jones polynomial for each of the knots.

  8. A Formally Verified Conflict Detection Algorithm for Polynomial Trajectories

    Narkawicz, Anthony; Munoz, Cesar


    In air traffic management, conflict detection algorithms are used to determine whether or not aircraft are predicted to lose horizontal and vertical separation minima within a time interval assuming a trajectory model. In the case of linear trajectories, conflict detection algorithms have been proposed that are both sound, i.e., they detect all conflicts, and complete, i.e., they do not present false alarms. In general, for arbitrary nonlinear trajectory models, it is possible to define detection algorithms that are either sound or complete, but not both. This paper considers the case of nonlinear aircraft trajectory models based on polynomial functions. In particular, it proposes a conflict detection algorithm that precisely determines whether, given a lookahead time, two aircraft flying polynomial trajectories are in conflict. That is, it has been formally verified that, assuming that the aircraft trajectories are modeled as polynomial functions, the proposed algorithm is both sound and complete.
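
    A much-simplified sketch of the underlying idea (one spatial dimension and floating-point arithmetic, so not the formally verified sound-and-complete algorithm of the paper): minimize the squared separation polynomial over the lookahead interval via the roots of its derivative:

```python
import numpy as np

def in_conflict(p1, p2, D, T):
    """p1, p2: position polynomial coefficients (highest degree first).

    Reports whether the separation drops below D on the interval [0, T].
    """
    diff = np.polysub(p1, p2)
    dist2 = np.polymul(diff, diff)  # squared separation polynomial
    # The minimum over [0, T] occurs at an endpoint or a critical point
    candidates = [0.0, T]
    for r in np.roots(np.polyder(dist2)):
        if abs(r.imag) < 1e-9 and 0.0 <= r.real <= T:
            candidates.append(r.real)
    return bool(min(np.polyval(dist2, t) for t in candidates) < D * D)
```

For head-on trajectories t and 10 - t the minimum separation is 0 at t = 5, so a conflict is reported for any positive separation minimum D.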

  9. A probabilistic approach of sum rules for heat polynomials

    Vignat, C; Lévêque, O


    In this paper, we show that the sum rules for generalized Hermite polynomials derived by Daboul and Mizrahi (2005 J. Phys. A: Math. Gen.) and by Graczyk and Nowak (2004 C. R. Acad. Sci., Ser. 1 338 849) can be interpreted and easily recovered using a probabilistic moment representation of these polynomials. The covariance property of the raising operator of the harmonic oscillator, which is at the origin of the identities proved in Daboul and Mizrahi, and the dimension reduction effect expressed in the main result of Graczyk and Nowak, are both interpreted in terms of the rotational invariance of the Gaussian distributions. As an application of these results, we uncover a probabilistic moment interpretation of two classical integrals of the Wigner function that involve the associated Laguerre polynomials. (paper)

  10. Local polynomial Whittle estimation of perturbed fractional processes

    Frederiksen, Per; Nielsen, Frank; Nielsen, Morten Ørregaard

    We propose a semiparametric local polynomial Whittle with noise (LPWN) estimator of the memory parameter in long memory time series perturbed by a noise term which may be serially correlated. The estimator approximates the spectrum of the perturbation as well as that of the short-memory component...... of the signal by two separate polynomials. Including these polynomials, we obtain a reduction in the order of magnitude of the bias, but also inflate the asymptotic variance of the long memory estimate by a multiplicative constant. We show that the estimator is consistent for d ∈ (0, 1), asymptotically normal...... for d ∈ (0, 3/4), and if the spectral density is infinitely smooth near frequency zero, the rate of convergence can become arbitrarily close to the parametric rate, √n. A Monte Carlo study reveals that the LPWN estimator performs well in the presence of a serially correlated perturbation term...

  11. Fractional order differentiation by integration with Jacobi polynomials

    Liu, Dayan


    The differentiation by integration method with Jacobi polynomials was originally introduced by Mboup, Join and Fliess [22], [23]. This paper generalizes this method from the integer order to the fractional order for estimating the fractional order derivatives of noisy signals. The proposed fractional order differentiator is deduced from the Jacobi orthogonal polynomial filter and the Riemann-Liouville fractional order derivative definition. An exact and simple formula for this differentiator is given, in which an integral formula involving Jacobi polynomials and the noisy signal is used without complex mathematical deduction. Hence, it can be used both for continuous-time and discrete-time models. A comparison between our differentiator and the recently introduced digital fractional order Savitzky-Golay differentiator is given in numerical simulations to show its accuracy and robustness with respect to corrupting noises. © 2012 IEEE.

  12. Synchronization of generalized Henon map using polynomial controller

    Lam, H.K.


    This Letter presents the chaos synchronization of two discrete-time generalized Henon maps, namely the drive and response systems. A polynomial controller is proposed to drive the system states of the response system to follow those of the drive system. The stability of the error system formed by the drive and response systems and the synthesis of the polynomial controller are investigated using the sum-of-squares (SOS) technique. Based on the Lyapunov stability theory, stability conditions in terms of SOS are derived to guarantee the system stability and facilitate the controller synthesis. By satisfying the SOS-based stability conditions, chaotic synchronization is achieved. The solution of the SOS-based stability conditions can be found numerically using the third-party Matlab toolbox SOSTOOLS. A simulation example is given to illustrate the merits of the proposed polynomial control approach.

  13. The Kauffman bracket and the Jones polynomial in quantum gravity

    Griego, J.


    In the loop representation the quantum states of gravity are given by knot invariants. From general arguments concerning the loop transform of the exponential of the Chern-Simons form, a certain expansion of the Kauffman bracket knot polynomial can be formally viewed as a solution of the Hamiltonian constraint with a cosmological constant in the loop representation. The Kauffman bracket is closely related to the Jones polynomial. In this paper the operation of the Hamiltonian on the power expansions of the Kauffman bracket and Jones polynomials is analyzed. It is explicitly shown that the Kauffman bracket is a formal solution of the Hamiltonian constraint to third order in the cosmological constant. We make use of the extended loop representation of quantum gravity where the analytic calculation can be thoroughly accomplished. Some peculiarities of the extended loop calculus are considered and the significance of the results to the case of the conventional loop representation is discussed. (orig.)

  14. Polynomial chaos expansion with random and fuzzy variables

    Jacquelin, E.; Friswell, M. I.; Adhikari, S.; Dessombz, O.; Sinou, J.-J.


    A dynamical uncertain system is studied in this paper. Two kinds of uncertainties are addressed, where the uncertain parameters are described through random variables and/or fuzzy variables. A general framework is proposed to deal with both kinds of uncertainty using a polynomial chaos expansion (PCE). It is shown that fuzzy variables may be expanded in terms of polynomial chaos when Legendre polynomials are used. The components of the PCE are a solution of an equation that does not depend on the nature of uncertainty. Once this equation is solved, the post-processing of the data gives the moments of the random response when the uncertainties are random or gives the response interval when the variables are fuzzy. With the PCE approach, it is also possible to deal with mixed uncertainty, when some parameters are random and others are fuzzy. The results provide a fuzzy description of the response statistical moments.
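
    As a minimal illustration of the Legendre-based expansion mentioned above (for a single uniform variable on [-1, 1]; the function f and the degree are arbitrary choices here, not the paper's dynamical system), the PCE coefficients can be computed by Gauss-Legendre quadrature, after which the mean and variance of the response follow from orthogonality:

```python
import numpy as np

def legendre_pce(f, degree):
    """Coefficients c_k of the expansion f(u) ≈ sum_k c_k P_k(u) for a
    uniform variable u on [-1, 1], computed by Gauss-Legendre quadrature.
    Orthogonality: E[P_j P_k] = delta_jk / (2k + 1) under the weight 1/2."""
    x, w = np.polynomial.legendre.leggauss(degree + 1)
    c = []
    for k in range(degree + 1):
        Pk = np.polynomial.legendre.Legendre.basis(k)
        # c_k = E[f(u) P_k(u)] * (2k + 1), with E[.] = (1/2) * Gauss sum
        c.append(0.5 * np.sum(w * f(x) * Pk(x)) * (2 * k + 1))
    return np.array(c)

# Post-processing: the mean is c_0, the variance follows from orthogonality.
c = legendre_pce(lambda u: u ** 2, 4)
mean = c[0]
var = sum(c[k] ** 2 / (2 * k + 1) for k in range(1, len(c)))
```

    For f(u) = u² this recovers mean 1/3 and variance 4/45, matching the exact moments of u² under the uniform distribution.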

  15. Fractional order differentiation by integration with Jacobi polynomials

    Liu, Dayan; Gibaru, O.; Perruquetti, Wilfrid; Laleg-Kirati, Taous-Meriem


    The differentiation by integration method with Jacobi polynomials was originally introduced by Mboup, Join and Fliess [22], [23]. This paper generalizes this method from the integer order to the fractional order for estimating the fractional order derivatives of noisy signals. The proposed fractional order differentiator is deduced from the Jacobi orthogonal polynomial filter and the Riemann-Liouville fractional order derivative definition. An exact and simple formula for this differentiator is given, in which an integral formula involving Jacobi polynomials and the noisy signal is used without complex mathematical deduction. Hence, it can be used both for continuous-time and discrete-time models. A comparison between our differentiator and the recently introduced digital fractional order Savitzky-Golay differentiator is given in numerical simulations to show its accuracy and robustness with respect to corrupting noises. © 2012 IEEE.

  16. Real zeros of classes of random algebraic polynomials

    K. Farahmand


    Full Text Available There are many known asymptotic estimates for the expected number of real zeros of an algebraic polynomial a₀ + a₁x + a₂x² + ⋯ + aₙ₋₁xⁿ⁻¹ with identically distributed random coefficients. Under different assumptions on the distribution of the coefficients {aⱼ}, j = 0, …, n−1, it is shown that the above expected number is asymptotic to O(log n). This order for the expected number of zeros remains valid when the coefficients are grouped into two groups, each with a different variance. However, it was recently shown that if the coefficients are non-identically distributed such that the variance of the jth term is the binomial coefficient C(n, j), the expected number of zeros of the polynomial increases to O(√n). The present paper provides the value of this asymptotic formula for polynomials with the latter variances when the coefficients are grouped into three, with different patterns for their variances.
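
    The contrast between the identically distributed and binomial-variance cases can be checked by simulation. The sketch below is an illustrative Monte Carlo (the degree and trial count are arbitrary choices): with i.i.d. N(0, 1) coefficients the mean number of real zeros grows like (2/π) log n, while with variances C(n−1, j) it grows like √(n−1):

```python
import numpy as np
from math import comb

rng = np.random.default_rng(0)

def mean_real_zeros(sds, trials=400):
    """Monte Carlo mean count of real zeros of a_0 + a_1 x + ... with
    independent coefficients a_j ~ N(0, sds[j]^2)."""
    total = 0
    for _ in range(trials):
        a = rng.normal(0.0, sds)
        roots = np.roots(a[::-1])   # np.roots expects highest degree first
        total += int(np.sum(np.abs(roots.imag) < 1e-9))
    return total / trials

n = 30
kac = mean_real_zeros(np.ones(n))                                   # iid case
kss = mean_real_zeros(np.sqrt([comb(n - 1, j) for j in range(n)]))  # binomial variances
```

    For n = 30 the i.i.d. average sits near (2/π) log 30 ≈ 2–3 real zeros, while the binomial-variance average sits near √29 ≈ 5.4.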

  17. a Unified Matrix Polynomial Approach to Modal Identification

    Allemang, R. J.; Brown, D. L.


    One important current focus of modal identification is a reformulation of modal parameter estimation algorithms into a single, consistent mathematical formulation with a corresponding set of definitions and unifying concepts. In particular, a matrix polynomial approach is used to unify the presentation with respect to current algorithms such as the least-squares complex exponential (LSCE), the polyreference time domain (PTD), Ibrahim time domain (ITD), eigensystem realization algorithm (ERA), rational fraction polynomial (RFP), polyreference frequency domain (PFD) and the complex mode indication function (CMIF) methods. Using this unified matrix polynomial approach (UMPA) allows a discussion of the similarities and differences of the commonly used methods. The use of least squares (LS), total least squares (TLS), double least squares (DLS) and singular value decomposition (SVD) methods is discussed in order to take advantage of redundant measurement data. Eigenvalue and SVD transformation methods are utilized to reduce the effective size of the resulting eigenvalue-eigenvector problem as well.

  18. Euler polynomials and identities for non-commutative operators

    De Angelis, Valerio; Vignat, Christophe


    Three kinds of identities involving non-commuting operators and Euler and Bernoulli polynomials are studied. The first identity, as given by Bender and Bettencourt [Phys. Rev. D 54(12), 7710-7723 (1996)], expresses the nested commutator of the Hamiltonian and momentum operators as the commutator of the momentum and the shifted Euler polynomial of the Hamiltonian. The second one, by Pain [J. Phys. A: Math. Theor. 46, 035304 (2013)], links the commutators and anti-commutators of the monomials of the position and momentum operators. The third appears in a work by Figueira de Morisson Faria and Fring [J. Phys. A: Math. Gen. 39, 9269 (2006)] in the context of non-Hermitian Hamiltonian systems. In each case, we provide several proofs and extensions of these identities that highlight the role of Euler and Bernoulli polynomials.


    Moustafa Omar Ahmed Abu - Shawiesh


    Full Text Available This paper proposes and considers some bivariate control charts to monitor individual observations in statistical process control. Usual control charts based on mean and variance-covariance estimators are sensitive to outliers. We consider the following robust alternatives to the classical Hotelling's T²: T²MedMAD, T²MCD, and T²MVE. A simulation study has been conducted to compare the performance of these control charts. Two real-life datasets are analyzed to illustrate the application of these robust alternatives.

  20. An efficient algorithm for generating random number pairs drawn from a bivariate normal distribution

    Campbell, C. W.


    An efficient algorithm for generating random number pairs from a bivariate normal distribution was developed. Any desired values of the two means, two standard deviations, and correlation coefficient can be selected. Theoretically the technique is exact, and in practice its accuracy is limited only by the quality of the uniform-distribution random number generator and by inaccuracies in computer function evaluation and arithmetic. A FORTRAN routine was written to check the algorithm, and good accuracy was obtained. Some small errors in the correlation coefficient were observed to vary in a surprisingly regular manner. A simple model was developed which explained the qualitative aspects of the errors.
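
    The paper's FORTRAN routine is not reproduced in the abstract; the following is a minimal sketch of one standard construction consistent with its description: generate two independent standard normals by the Box-Muller transform and mix them to impose the chosen means, standard deviations and correlation coefficient.

```python
import math
import random

def bivariate_normal(mx, my, sx, sy, rho, rng=random):
    """One (X, Y) draw from a bivariate normal with means mx, my, standard
    deviations sx, sy and correlation rho, built from two independent
    standard normals (Box-Muller) mixed to induce the correlation."""
    u1 = 1.0 - rng.random()              # avoid log(0)
    u2 = rng.random()
    r = math.sqrt(-2.0 * math.log(u1))
    z1 = r * math.cos(2.0 * math.pi * u2)
    z2 = r * math.sin(2.0 * math.pi * u2)
    x = mx + sx * z1
    y = my + sy * (rho * z1 + math.sqrt(1.0 - rho * rho) * z2)
    return x, y
```

    The transform is exact in theory; as the abstract notes, accuracy in practice is limited by the uniform generator and floating-point evaluation.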

  1. A comparison between multivariate and bivariate analysis used in marketing research

    Constantin, C.


    Full Text Available This paper is about an instrumental research conducted in order to compare the information given by two multivariate data analyses with that of the usual bivariate analysis. The outcomes of the research reveal that sometimes the multivariate methods use more information from a certain variable, but sometimes they use only the part of the information considered most important for certain associations. For this reason, a researcher should use both categories of data analysis in order to obtain fully useful information.

  2. Bivariate Drought Analysis Using Streamflow Reconstruction with Tree Ring Indices in the Sacramento Basin, California, USA

    Jaewon Kwak


    Full Text Available Long-term streamflow data are vital for analysis of hydrological droughts. Using an artificial neural network (ANN) model and nine tree-ring indices, this study reconstructed the annual streamflow of the Sacramento River for the period from 1560 to 1871. Using the reconstructed streamflow data, the copula method was used for bivariate drought analysis, deriving a hydrological drought return period plot for the Sacramento River basin. Results showed strong correlation among drought characteristics, and the drought with a 20-year return period (17.2 million acre-feet (MAF) per year) in the Sacramento River basin could be considered a critical level of drought for water shortages.
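
    The abstract does not state which copula family was fitted; as an illustrative sketch (assuming a Gumbel-Hougaard copula with a hypothetical parameter θ), a joint "AND" return period, i.e. the mean interarrival time divided by the probability that both drought variables exceed their respective quantiles, can be computed directly from the copula:

```python
import math

def gumbel_copula(u, v, theta):
    """Gumbel-Hougaard copula C(u, v); theta = 1 is independence,
    larger theta means stronger upper-tail dependence."""
    s = (-math.log(u)) ** theta + (-math.log(v)) ** theta
    return math.exp(-s ** (1.0 / theta))

def and_return_period(u, v, theta, mu=1.0):
    """Joint 'AND' return period: mean interarrival time mu divided by
    P(U > u, V > v) = 1 - u - v + C(u, v)."""
    return mu / (1.0 - u - v + gumbel_copula(u, v, theta))
```

    At θ = 1 the copula reduces to independence, C(u, v) = uv, so for u = v = 0.95 the joint return period is 1/0.0025 = 400 years; positive dependence (θ > 1) shortens it.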

  3. On the construction of bivariate exponential distributions with an arbitrary correlation coefficient

    Bladt, Mogens; Nielsen, Bo Friis

    In this paper we use a concept of multivariate phase-type distributions to define a class of bivariate exponential distributions. This class has the following three appealing properties. Firstly, we may construct a pair of exponentially distributed random variables with any feasible correlation...... coefficient (also negative). Secondly, the class has the property that any linear combination (projection) of the marginal random variables is a phase-type distribution. The latter property is potentially important for the development of hypothesis testing in linear models. Thirdly, it is very easy to simulate...

  4. Local polynomial Whittle estimation covering non-stationary fractional processes

    Nielsen, Frank

    to the non-stationary region. By approximating the short-run component of the spectrum by a polynomial, instead of a constant, in a shrinking neighborhood of zero, we alleviate some of the bias that the classical local Whittle estimator is prone to. This bias reduction comes at a cost as the variance is in...... study illustrates the performance of the proposed estimator compared to the classical local Whittle estimator and the local polynomial Whittle estimator. The empirical justification of the proposed estimator is shown through an analysis of credit spreads....

  5. The algebra of Weyl symmetrised polynomials and its quantum extension

    Gelfand, I.M.; Fairlie, D.B.


    The Algebra of Weyl symmetrised polynomials in powers of Hamiltonian operators P and Q which satisfy canonical commutation relations is constructed. This algebra is shown to encompass all recent infinite dimensional algebras acting on two-dimensional phase space. In particular the Moyal bracket algebra and the Poisson bracket algebra, of which the Moyal is the unique one parameter deformation are shown to be different aspects of this infinite algebra. We propose the introduction of a second deformation, by the replacement of the Heisenberg algebra for P, Q with a q-deformed commutator, and construct algebras of q-symmetrised Polynomials. (orig.)

  6. Skew-orthogonal polynomials, differential systems and random matrix theory

    Ghosh, S.


    We study skew-orthogonal polynomials with respect to the weight function exp[−2V(x)], with V(x) = Σ_{K=1}^{2d} (u_K/K)x^K, u_{2d} > 0, d > 0. A finite subsequence of such skew-orthogonal polynomials, arising in the study of orthogonal and symplectic ensembles of random matrices, satisfies a system of differential-difference-deformation equations. The vectors formed by such a subsequence have rank equal to the degree of the potential in the quaternion sense. These solutions satisfy a certain compatibility condition and hence admit a simultaneous fundamental system of solutions. (author)

  7. Orthogonal polynomials, Laguerre Fock space, and quasi-classical asymptotics

    Engliš, Miroslav; Ali, S. Twareque


    Continuing our earlier investigation of the Hermite case [S. T. Ali and M. Engliš, J. Math. Phys. 55, 042102 (2014)], we study an unorthodox variant of the Berezin-Toeplitz quantization scheme associated with Laguerre polynomials. In particular, we describe a "Laguerre analogue" of the classical Fock (Segal-Bargmann) space and the relevant semi-classical asymptotics of its Toeplitz operators; the former actually turns out to coincide with the Hilbert space appearing in the construction of the well-known Barut-Girardello coherent states. Further extension to the case of Legendre polynomials is likewise discussed.

  8. Discrete-Time Filter Synthesis using Product of Gegenbauer Polynomials

    N. Stojanovic


    Full Text Available A new approximation for designing continuous-time and discrete-time low-pass filters, presented in this paper and based on the product of Gegenbauer polynomials, provides more flexible adjustment of the passband and stopband responses. The design is achieved taking into account a prescribed specification, leading to a better trade-off between the magnitude and group delay responses. Many well-known continuous-time and discrete-time transitional filters based on the classical polynomial approximations (Chebyshev, Legendre, Butterworth) are shown to be special cases of the proposed approximation method.

  9. Weierstrass method for quaternionic polynomial root-finding

    Falcão, M. Irene; Miranda, Fernando; Severino, Ricardo; Soares, M. Joana


    Quaternions, introduced by Hamilton in 1843 as a generalization of complex numbers, have found, in more recent years, a wealth of applications in a number of different areas which motivated the design of efficient methods for numerically approximating the zeros of quaternionic polynomials. In fact, one can find in the literature recent contributions to this subject based on the use of complex techniques, but numerical methods relying on quaternion arithmetic remain scarce. In this paper we propose a Weierstrass-like method for finding simultaneously all the zeros of unilateral quaternionic polynomials. The convergence analysis and several numerical examples illustrating the performance of the method are also presented.
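
    The quaternionic iteration itself requires quaternion arithmetic; the sketch below shows only the underlying Weierstrass (Durand-Kerner) update over the complex numbers, which the paper generalizes: each approximate root z_i is corrected by p(z_i) divided by the product of its distances to the other approximants.

```python
def durand_kerner(coeffs, iters=200):
    """Weierstrass (Durand-Kerner) iteration for all roots of the monic
    polynomial p(z) = z^n + coeffs[n-1] z^(n-1) + ... + coeffs[0],
    with complex coefficients given lowest degree first."""
    n = len(coeffs)

    def p(z):
        acc = 1.0 + 0.0j                 # leading (monic) coefficient
        for c in reversed(coeffs):
            acc = acc * z + c
        return acc

    z = [(0.4 + 0.9j) ** k for k in range(n)]   # customary starting points
    for _ in range(iters):
        new = []
        for i in range(n):
            denom = 1.0 + 0.0j
            for j in range(n):
                if j != i:
                    denom *= z[i] - z[j]
            new.append(z[i] - p(z[i]) / denom)  # Weierstrass correction
        z = new
    return z

roots = durand_kerner([-5.0, 3.0, -3.0])        # p(z) = z^3 - 3z^2 + 3z - 5
```

    The test polynomial z³ − 3z² + 3z − 5 = (z − 1)³ − 4 has one real root, 1 + 4^(1/3), and a complex conjugate pair; all three are found simultaneously.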

  10. Orthogonal polynomials on the unit circle part 2 spectral theory

    Simon, Barry


    This two-part book is a comprehensive overview of the theory of probability measures on the unit circle, viewed especially in terms of the orthogonal polynomials defined by those measures. A major theme involves the connections between the Verblunsky coefficients (the coefficients of the recurrence equation for the orthogonal polynomials) and the measures, an analog of the spectral theory of one-dimensional Schrödinger operators. Among the topics discussed along the way are the asymptotics of Toeplitz determinants (Szegő's theorems) and limit theorems for the density of the zeros of orthogonal polynomials...

  11. Orthogonal polynomials on the unit circle part 1 classical theory


    This two-part book is a comprehensive overview of the theory of probability measures on the unit circle, viewed especially in terms of the orthogonal polynomials defined by those measures. A major theme involves the connections between the Verblunsky coefficients (the coefficients of the recurrence equation for the orthogonal polynomials) and the measures, an analog of the spectral theory of one-dimensional Schrödinger operators. Among the topics discussed along the way are the asymptotics of Toeplitz determinants (Szegő's theorems) and limit theorems for the density of the zeros of orthogonal polynomials...

  12. The neighbourhood polynomial of some families of dendrimers

    Nazri Husin, Mohamad; Hasni, Roslan


    The neighbourhood polynomial N(G, x) is the generating function for the number of faces of each cardinality in the neighbourhood complex of a graph; it is defined as N(G, x) = Σ_{U ∈ N(G)} x^{|U|}, where N(G) is the neighbourhood complex of the graph, whose vertices are the vertices of the graph and whose faces are the subsets of vertices that have a common neighbour. A dendrimer is an artificially manufactured or synthesized molecule built up from branched units called monomers. In this paper, we compute this polynomial for some families of dendrimers.
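
    For small graphs the definition can be evaluated by brute force. The sketch below (using a star graph as a hypothetical example, and omitting the empty face) enumerates the vertex subsets that have a common neighbour and tallies them by cardinality, giving the coefficients of N(G, x):

```python
from itertools import combinations

def neighbourhood_polynomial(adj):
    """Coefficients of N(G, x) as {k: number of k-vertex subsets U that
    have a common neighbour}; adj maps each vertex to its neighbour set.
    The empty face is omitted."""
    vertices = list(adj)
    coeff = {}
    for k in range(1, len(vertices) + 1):
        for U in combinations(vertices, k):
            # U is a face iff some vertex w is adjacent to every u in U
            if any(all(u in adj[w] for u in U) for w in vertices):
                coeff[k] = coeff.get(k, 0) + 1
    return coeff

# Hypothetical example: the star K_{1,3} (centre 0, leaves 1, 2, 3).
star = {0: {1, 2, 3}, 1: {0}, 2: {0}, 3: {0}}
```

    For the star K_{1,3} this gives N(G, x) = 4x + 3x² + x³: every non-empty set of leaves shares the centre as a common neighbour, and the centre alone has each leaf as a neighbour.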

  13. Gaussian polynomials and content ideal in trivial extensions

    Bakkari, C.; Mahdou, N.


    The goal of this paper is to exhibit a class of Gaussian non-coherent rings R (with zero-divisors) such that wdim(R) = ∞ and fPdim(R) is always at most one, and also to exhibit a new class of rings (with zero-divisors) which are neither locally Noetherian nor locally domains, in which Gaussian polynomials have a locally principal content. For this purpose, we study the possible transfer of the 'Gaussian' property and the property 'the content ideal of a Gaussian polynomial is locally principal' to various trivial extension contexts. This article includes a brief discussion of the scope and limits of our results. (author)

  14. M-Polynomial and Related Topological Indices of Nanostar Dendrimers

    Mobeen Munir


    Full Text Available Dendrimers are highly branched organic macromolecules with successive layers of branch units surrounding a central core. The M-polynomial of nanotubes has been vastly investigated as it produces many degree-based topological indices. These indices are invariants of the topology of graphs associated with molecular structure of nanomaterials to correlate certain physicochemical properties like boiling point, stability, strain energy, etc. of chemical compounds. In this paper, we first determine M-polynomials of some nanostar dendrimers and then recover many degree-based topological indices.

  15. On the Lojasiewicz exponent at infinity of real polynomials

    Ha Huy Vui; Pham Tien Son


    Let f : Rⁿ → R be a nonconstant polynomial function. In this paper, using the information from 'the curve of tangency' of f, we provide a method to determine the Lojasiewicz exponent at infinity of f. As a corollary, we give a computational criterion to decide if the Lojasiewicz exponent at infinity is finite or not. Then, we obtain a formula to calculate the set of points at which the polynomial f is not proper. Moreover, a relation between the Lojasiewicz exponent at infinity of f and the problem of computing the global optimum of f is also established. (author)

  16. Regression analysis for bivariate gap time with missing first gap time data.

    Huang, Chia-Hui; Chen, Yi-Hau


    We consider ordered bivariate gap time while data on the first gap time are unobservable. This study is motivated by the HIV infection and AIDS study, where the initial HIV contracting time is unavailable, but the diagnosis times for HIV and AIDS are available. We are interested in studying the risk factors for the gap time between initial HIV contraction and HIV diagnosis, and gap time between HIV and AIDS diagnoses. Besides, the association between the two gap times is also of interest. Accordingly, in the data analysis we are faced with two-fold complexity, namely data on the first gap time is completely missing, and the second gap time is subject to induced informative censoring due to dependence between the two gap times. We propose a modeling framework for regression analysis of bivariate gap time under the complexity of the data. The estimating equations for the covariate effects on, as well as the association between, the two gap times are derived through maximum likelihood and suitable counting processes. Large sample properties of the resulting estimators are developed by martingale theory. Simulations are performed to examine the performance of the proposed analysis procedure. An application of data from the HIV and AIDS study mentioned above is reported for illustration.

  17. Geovisualization of land use and land cover using bivariate maps and Sankey flow diagrams

    Strode, Georgianna; Mesev, Victor; Thornton, Benjamin; Jerez, Marjorie; Tricarico, Thomas; McAlear, Tyler


    The terms `land use' and `land cover' typically describe categories that convey information about the landscape. Despite the major difference of land use implying some degree of anthropogenic disturbance, the two terms are commonly used interchangeably, especially when anthropogenic disturbance is ambiguous, say managed forestland or abandoned agricultural fields. Cartographically, land use and land cover are also sometimes represented interchangeably within common legends, giving the impression that the landscape is a seamless continuum of land use parcels spatially adjacent to land cover tracts. We believe this is misleading, and feel we need to reiterate the well-established symbiosis of land uses as amalgams of land covers; in other words, land covers are subsets of land use. Our paper addresses this spatially complex, and frequently ambiguous, relationship, and posits that bivariate cartographic techniques are an ideal vehicle for representing both land use and land cover simultaneously. In more specific terms, we explore the use of nested symbology as a way to represent land use and land cover graphically, where land covers are circles nested within land use squares. We also investigate bivariate legends for representing statistical covariance as a means of visualizing the combinations of land use and cover. Lastly, we apply Sankey flow diagrams to further illustrate the complex, multifaceted relationships between land use and land cover. Our work is demonstrated on data representing land use and cover for the US state of Florida.

  18. Bivariate pointing movements on large touch screens: investigating the validity of a refined Fitts' Law.

    Bützler, Jennifer; Vetter, Sebastian; Jochems, Nicole; Schlick, Christopher M


    On the basis of three empirical studies Fitts' Law was refined for bivariate pointing tasks on large touch screens. In the first study different target width parameters were investigated. The second study considered the effect of the motion angle. Based on the results of the two studies a refined model for movement time in human-computer interaction was formulated. A third study, which is described here in detail, concerns the validation of the refined model. For the validation study 20 subjects had to execute a bivariate pointing task on a large touch screen. In the experimental task 250 rectangular target objects were displayed at a randomly chosen position on the screen covering a broad range of ID values (ID= [1.01; 4.88]). Compared to existing refinements of Fitts' Law, the new model shows highest predictive validity. A promising field of application of the model is the ergonomic design and evaluation of project management software. By using the refined model, software designers can calculate a priori the appropriate angular position and the size of buttons, menus or icons.
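
    For context, the classical univariate model that the refined bivariate version extends is the Shannon formulation of Fitts' Law. The sketch below uses hypothetical regression constants a and b; the paper's refined model additionally accounts for target height and motion angle, with coefficients not given in the abstract.

```python
from math import log2

def fitts_movement_time(d, w, a=0.2, b=0.15):
    """Predicted movement time (s) under the Shannon form of Fitts' Law,
    MT = a + b * ID, with index of difficulty ID = log2(d / w + 1);
    d is the distance to the target and w its width along the motion axis.
    The constants a and b here are hypothetical regression estimates."""
    return a + b * log2(d / w + 1.0)
```

    A target 700 px away and 100 px wide has ID = log2(8) = 3 bits, in the middle of the ID range [1.01; 4.88] covered by the validation study; halving the width increases the predicted movement time.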

  19. Probabilistic modeling using bivariate normal distributions for identification of flow and displacement intervals in longwall overburden

    Karacan, C.O.; Goodman, G.V.R. [NIOSH, Pittsburgh, PA (United States). Off Mine Safety & Health Research


    Gob gas ventholes (GGV) are used to control methane emissions in longwall mines by capturing it within the overlying fractured strata before it enters the work environment. In order for GGVs to effectively capture more methane and less mine air, the length of the slotted sections and their proximity to top of the coal bed should be designed based on the potential gas sources and their locations, as well as the displacements in the overburden that will create potential flow paths for the gas. In this paper, an approach to determine the conditional probabilities of depth-displacement, depth-flow percentage, depth-formation and depth-gas content of the formations was developed using bivariate normal distributions. The flow percentage, displacement and formation data as a function of distance from coal bed used in this study were obtained from a series of borehole experiments contracted by the former US Bureau of Mines as part of a research project. Each of these parameters was tested for normality and was modeled using bivariate normal distributions to determine all tail probabilities. In addition, the probability of coal bed gas content as a function of depth was determined using the same techniques. The tail probabilities at various depths were used to calculate conditional probabilities for each of the parameters. The conditional probabilities predicted for various values of the critical parameters can be used with the measurements of flow and methane percentage at gob gas ventholes to optimize their performance.
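
    The conditional-probability machinery described above rests on a standard property of the bivariate normal: conditioned on one variable, the other is again normal with a shifted mean and reduced variance. A minimal sketch of that computation (with illustrative arguments, not the borehole data):

```python
from math import erf, sqrt

def normal_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def conditional_tail(y, x, mx, my, sx, sy, rho):
    """P(Y > y | X = x) for a bivariate normal with means mx, my, standard
    deviations sx, sy and correlation rho: Y | X = x is normal with mean
    my + rho * (sy / sx) * (x - mx) and sd sy * sqrt(1 - rho^2)."""
    mean = my + rho * (sy / sx) * (x - mx)
    sd = sy * sqrt(1.0 - rho * rho)
    return 1.0 - normal_cdf((y - mean) / sd)
```

    With rho = 0 the conditional tail reduces to the marginal one; with positive correlation, observing X above its mean raises the conditional probability that Y exceeds its mean.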

  20. A non-stationary cost-benefit based bivariate extreme flood estimation approach

    Qi, Wei; Liu, Junguo


    Cost-benefit analysis and flood frequency analysis have been integrated into a comprehensive framework to estimate cost-effective design values. However, previous cost-benefit based extreme flood estimation relies on stationary assumptions and analyzes dependent flood variables separately. A Non-Stationary Cost-Benefit based bivariate design flood estimation (NSCOBE) approach is developed in this study to investigate the influence of non-stationarities in both the dependence of flood variables and the marginal distributions on extreme flood estimation. The dependence is modeled utilizing copula functions. Previous design flood selection criteria are not suitable for NSCOBE since they ignore the time-changing dependence of flood variables. Therefore, a risk calculation approach is proposed based on non-stationarities in both marginal probability distributions and copula functions. A case study with 54 years of observed data is utilized to illustrate the application of NSCOBE. Results show NSCOBE can effectively integrate non-stationarities in both copula functions and marginal distributions into cost-benefit based design flood estimation. It is also found that there is a trade-off between the maximum probability of exceedance calculated from copula functions and from marginal distributions. This study for the first time provides a new approach towards a better understanding of the influence of non-stationarities in both copula functions and marginal distributions on extreme flood estimation, and could be beneficial to cost-benefit based non-stationary bivariate design flood estimation across the world.

  1. A bivariate measurement error model for semicontinuous and continuous variables: Application to nutritional epidemiology.

    Kipnis, Victor; Freedman, Laurence S; Carroll, Raymond J; Midthune, Douglas


    Semicontinuous data in the form of a mixture of a large portion of zero values and continuously distributed positive values frequently arise in many areas of biostatistics. This article is motivated by the analysis of relationships between disease outcomes and intakes of episodically consumed dietary components. An important aspect of studies in nutritional epidemiology is that true diet is unobservable and commonly evaluated by food frequency questionnaires with substantial measurement error. Following the regression calibration approach for measurement error correction, unknown individual intakes in the risk model are replaced by their conditional expectations given mismeasured intakes and other model covariates. Those regression calibration predictors are estimated using short-term unbiased reference measurements in a calibration substudy. Since dietary intakes are often "energy-adjusted," e.g., by using ratios of the intake of interest to total energy intake, the correct estimation of the regression calibration predictor for each energy-adjusted episodically consumed dietary component requires modeling short-term reference measurements of the component (a semicontinuous variable), and energy (a continuous variable) simultaneously in a bivariate model. In this article, we develop such a bivariate model, together with its application to regression calibration. We illustrate the new methodology using data from the NIH-AARP Diet and Health Study (Schatzkin et al., 2001, American Journal of Epidemiology 154, 1119-1125), and also evaluate its performance in a simulation study. © 2015, The International Biometric Society.

  2. A bivariate space-time downscaler under space and time misalignment.

    Berrocal, Veronica J; Gelfand, Alan E; Holland, David M


    Ozone and particulate matter PM(2.5) are co-pollutants that have long been associated with increased public health risks. Information on concentration levels for both pollutants comes from two sources: monitoring sites and output from complex numerical models that produce concentration surfaces over large spatial regions. In this paper, we offer a fully model-based approach for fusing these two sources of information for the pair of co-pollutants which is computationally feasible over large spatial regions and long periods of time. Due to the association between concentration levels of the two environmental contaminants, it is expected that information regarding one will help to improve prediction of the other. Misalignment is an obvious issue since the monitoring networks for the two contaminants only partly intersect and because the collection rate for PM(2.5) is typically less frequent than that for ozone. Extending previous work in Berrocal et al. (2009), we introduce a bivariate downscaler that provides a flexible class of bivariate space-time assimilation models. We discuss computational issues for model fitting and analyze a dataset for ozone and PM(2.5) for the ozone season during year 2002. We show a modest improvement in predictive performance, not surprising in a setting where we can anticipate only a small gain.

  3. A method of moments to estimate bivariate survival functions: the copula approach

    Silvia Angela Osmetti


    Full Text Available In this paper we discuss the problem of parametric and non-parametric estimation of the distributions generated by the Marshall-Olkin copula. This copula comes from the Marshall-Olkin bivariate exponential distribution used in reliability analysis. We generalize this model by combining the copula with different marginal distributions to construct several bivariate survival functions. The cumulative distribution functions are not absolutely continuous, and their unknown parameters often cannot be obtained in explicit form. In order to estimate the parameters we propose an easy procedure based on the moments. This method consists of two steps: in the first step we estimate only the parameters of the marginal distributions, and in the second step we estimate only the copula parameter. This procedure can be used to estimate the parameters of complex survival functions for which it is difficult to find an explicit expression of the mixed moments. Moreover, it is preferable to maximum likelihood because of its simpler mathematical form, in particular for distributions whose maximum likelihood parameter estimators cannot be obtained in explicit form.
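
    The two-step structure of such moment-based procedures can be sketched as follows. For illustration the copula parameter is recovered from the empirical Kendall's tau of a Clayton copula, whose relation tau = theta/(theta+2) inverts in closed form; the paper itself works with the Marshall-Olkin copula, whose moment equations differ:

```python
def fit_two_step(x, y):
    """Two-step moment estimation (illustrative sketch with a Clayton copula,
    not the paper's Marshall-Olkin model).
    Step 1: exponential marginal rates from the sample means.
    Step 2: copula parameter from the empirical Kendall's tau."""
    n = len(x)
    # Step 1: method-of-moments estimates of the exponential rates.
    rate_x = n / sum(x)
    rate_y = n / sum(y)
    # Step 2: empirical Kendall's tau via an O(n^2) concordance count.
    conc = 0
    for i in range(n):
        for j in range(i + 1, n):
            s = (x[i] - x[j]) * (y[i] - y[j])
            conc += 1 if s > 0 else (-1 if s < 0 else 0)
    tau = 2.0 * conc / (n * (n - 1))
    theta = 2.0 * tau / (1.0 - tau)  # invert tau = theta / (theta + 2)
    return rate_x, rate_y, theta

rate_x, rate_y, theta = fit_two_step([1.0, 2.0, 3.0, 4.0], [1.0, 2.0, 4.0, 3.0])
```

    Estimating the marginals first and the dependence parameter second keeps each step a one-dimensional problem, which is the practical appeal of the method.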

  4. Xp21 contiguous gene syndromes: Deletion quantitation with bivariate flow karyotyping allows mapping of patient breakpoints

    McCabe, E.R.B.; Towbin, J.A. (Baylor College of Medicine, Houston, TX (United States)); Engh, G. van den; Trask, B.J. (Lawrence Livermore National Lab., CA (United States))


    Bivariate flow karyotyping was used to estimate the deletion sizes for a series of patients with Xp21 contiguous gene syndromes. The deletion estimates were used to develop an approximate scale for the genomic map in Xp21. The bivariate flow karyotype results were compared with clinical and molecular genetic information on the extent of the patients' deletions, and these various types of data were consistent. The resulting map spans >15 Mb, from the telomeric interval between DXS41 (99-6) and DXS68 (1-4) to a position centromeric to the ornithine transcarbamylase locus. The deletion sizing was considered to be accurate to ±1 Mb. The map provides information on the relative localization of genes and markers within this region. For example, the map suggests that the adrenal hypoplasia congenita and glycerol kinase genes are physically close to each other, are within 1-2 Mb of the telomeric end of the Duchenne muscular dystrophy (DMD) gene, and are nearer to the DMD locus than to the more distal marker DXS28 (C7). Information of this type is useful in developing genomic strategies for positional cloning in Xp21. These investigations demonstrate that the DNA from patients with Xp21 contiguous gene syndromes can be valuable reagents, not only for ordering loci and markers but also for providing an approximate scale to the map of the Xp21 region surrounding DMD. 44 refs., 3 figs.

  5. An integrated user-friendly ArcMAP tool for bivariate statistical modeling in geoscience applications

    Jebur, M. N.; Pradhan, B.; Shafri, H. Z. M.; Yusof, Z.; Tehrany, M. S.


    Modeling and classification difficulties are fundamental issues in natural hazard assessment. A geographic information system (GIS) is a domain that requires users to use various tools to perform different types of spatial modeling. Bivariate statistical analysis (BSA) assists in hazard modeling. To perform this analysis, several calculations are required and the user has to transfer data from one format to another. Most researchers perform these calculations manually by using Microsoft Excel or other programs. This process is time consuming and carries a degree of uncertainty. The lack of proper tools to implement BSA in a GIS environment prompted this study. In this paper, a user-friendly tool for the BSA technique, BSM (bivariate statistical modeler), is proposed. Three popular BSA techniques, namely the frequency ratio, weights-of-evidence, and evidential belief function models, are implemented in the newly proposed ArcMAP tool. This tool is programmed in Python and provided with a simple graphical user interface, which facilitates the improvement of model performance. The proposed tool implements BSA automatically, thus allowing numerous variables to be examined. To validate the capability and accuracy of this program, a pilot test area in Malaysia is selected and all three models are tested by using the proposed program. The area under the curve is used to measure the success rate and prediction rate. Results demonstrate that the proposed program executes BSA with reasonable accuracy. The proposed BSA tool can be used in numerous applications, such as natural hazard, mineral potential, hydrological, and other engineering and environmental applications.
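
    Of the three techniques the tool automates, the frequency ratio model is the simplest to sketch: each class scores the ratio of its share of hazard occurrences to its share of the study area. A minimal pixel-count version (class labels and counts hypothetical):

```python
def frequency_ratio(class_pixels, hazard_pixels):
    """Frequency ratio per class: (share of hazard pixels in the class) /
    (share of all pixels in the class). FR > 1 means the class is positively
    associated with the hazard.
    class_pixels, hazard_pixels: dicts mapping class label -> pixel count."""
    total = sum(class_pixels.values())
    total_hazard = sum(hazard_pixels.values())
    fr = {}
    for c, npix in class_pixels.items():
        class_share = npix / total
        hazard_share = hazard_pixels.get(c, 0) / total_hazard
        fr[c] = hazard_share / class_share
    return fr

# Hypothetical slope classes: 'steep' covers 20% of the area but 50% of hazard pixels.
fr = frequency_ratio({"gentle": 800, "steep": 200}, {"gentle": 50, "steep": 50})
```

    This per-class ratio is exactly the kind of repetitive spreadsheet calculation the abstract describes automating for many input variables.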

  6. Recurrent major depression and right hippocampal volume: A bivariate linkage and association study.

    Mathias, Samuel R; Knowles, Emma E M; Kent, Jack W; McKay, D Reese; Curran, Joanne E; de Almeida, Marcio A A; Dyer, Thomas D; Göring, Harald H H; Olvera, Rene L; Duggirala, Ravi; Fox, Peter T; Almasy, Laura; Blangero, John; Glahn, David C


    Previous work has shown that the hippocampus is smaller in the brains of individuals suffering from major depressive disorder (MDD) than those of healthy controls. Moreover, right hippocampal volume specifically has been found to predict the probability of subsequent depressive episodes. This study explored the utility of right hippocampal volume as an endophenotype of recurrent MDD (rMDD). We observed a significant genetic correlation between the two traits in a large sample of Mexican American individuals from extended pedigrees (ρg = -0.34, p = 0.013). A bivariate linkage scan revealed a significant pleiotropic quantitative trait locus on chromosome 18p11.31-32 (LOD = 3.61). Bivariate association analysis conducted under the linkage peak revealed a variant (rs574972) within an intron of the gene SMCHD1 meeting the corrected significance level (χ² = 19.0, p = 7.4 × 10⁻⁵). Univariate association analyses of each phenotype separately revealed that the same variant was significant for right hippocampal volume alone, and also revealed a suggestively significant variant (rs12455524) within the gene DLGAP1 for rMDD alone. The results implicate right-hemisphere hippocampal volume as a possible endophenotype of rMDD, and in so doing highlight a potential gene of interest for rMDD risk. © 2015 Wiley Periodicals, Inc.

  7. A Spectral Unmixing Model for the Integration of Multi-Sensor Imagery: A Tool to Generate Consistent Time Series Data

    Georgia Doxani


    Full Text Available The Sentinel missions have been designed to support the operational services of the Copernicus program, ensuring long-term availability of data for a wide range of spectral, spatial and temporal resolutions. In particular, Sentinel-2 (S-2) data with improved high spatial resolution and higher revisit frequency (five days with the pair of satellites in operation) will play a fundamental role in recording land cover types and monitoring land cover changes at regular intervals. Nevertheless, cloud coverage usually hinders the time series availability and consequently the continuous land surface monitoring. In an attempt to alleviate this limitation, the synergistic use of instruments with different features is investigated, aiming at the future synergy of the S-2 MultiSpectral Instrument (MSI) and Sentinel-3 (S-3) Ocean and Land Colour Instrument (OLCI). To that end, an unmixing model is proposed with the intention of integrating the benefits of the two Sentinel missions, when both in orbit, in one composite image. The main goal is to fill the data gaps in the S-2 record, based on the more frequent information of the S-3 time series. The proposed fusion model has been applied on MODIS (MOD09GA L2G) and SPOT4 (Take 5) data and the experimental results have demonstrated that the approach has high potential. However, the different acquisition characteristics of the sensors, i.e., illumination and viewing geometry, should be taken into consideration and bidirectional effects correction has to be performed in order to reduce noise in the reflectance time series.
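
    The core of such a fusion approach is linear spectral unmixing: each coarse pixel's reflectance is modeled as a fraction-weighted sum of class signatures, which can then be recovered by least squares. A minimal two-class sketch (fractions and reflectances synthetic, not from the paper's data):

```python
def unmix_class_signatures(fractions, reflectance):
    """Solve the linear unmixing system r_i = f_i1*s1 + f_i2*s2 for the two
    class signatures (s1, s2) by least squares via the 2x2 normal equations.
    fractions: list of (f1, f2) per coarse pixel; reflectance: observed values."""
    a11 = sum(f1 * f1 for f1, _ in fractions)
    a12 = sum(f1 * f2 for f1, f2 in fractions)
    a22 = sum(f2 * f2 for _, f2 in fractions)
    b1 = sum(f1 * r for (f1, _), r in zip(fractions, reflectance))
    b2 = sum(f2 * r for (_, f2), r in zip(fractions, reflectance))
    det = a11 * a22 - a12 * a12
    return (a22 * b1 - a12 * b2) / det, (a11 * b2 - a12 * b1) / det

# Synthetic coarse pixels mixing vegetation (signature 0.4) and soil (0.1).
fracs = [(0.8, 0.2), (0.5, 0.5), (0.2, 0.8)]
obs = [0.8 * 0.4 + 0.2 * 0.1, 0.5 * 0.4 + 0.5 * 0.1, 0.2 * 0.4 + 0.8 * 0.1]
s_veg, s_soil = unmix_class_signatures(fracs, obs)
```

    In a fusion setting the fractions come from the fine-resolution classification (e.g. S-2) and the observations from the frequent coarse sensor (e.g. S-3), so the recovered signatures can fill fine-scale gaps.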

  8. Remote sensing of particle dynamics: a two-component unmixing model in a western UK shelf sea.

    Mitchell, Catherine; Cunningham, Alex


    The relationship between the backscattering and absorption coefficients, in particular the backscattering to absorption ratio, is mediated by the type of particles present in the water column. By considering the optical signals to be driven by phytoplankton and suspended minerals, with a relatively constant influence from CDOM, radiative transfer modelling is used to propose a method for retrieving the optical contribution of phytoplankton and suspended minerals to the total absorption coefficient with mean percentage errors of below 5% for both components. These contributions can be converted to constituent concentrations if the appropriate specific inherent optical properties are known or can be determined from the maximum and minimum backscattering to absorption ratios of the data. Remotely sensed absorption and backscattering coefficients from eight years of MODIS data for the Irish Sea reveal maximum backscattering to absorption coefficient ratios over the winter (with an average for the region of 0.27), which then decrease to a minimum over the summer months (with an average of 0.06) before increasing again through to winter, indicating a change in the particles present in the water column. Application of the two-component unmixing model to this data showed seasonal cycles of both phytoplankton and suspended mineral concentrations which vary in both amplitude and periodicity depending on their location. For example, in the Bristol Channel the amplitude of the suspended mineral concentration throughout one cycle is approximately 75% greater than a yearly cycle in the eastern Irish Sea. These seasonal cycles give an insight into the complex dynamics of particles in the water column, indicating the suspension of sediment throughout the winter months and the loss of sediments from the surface layer over the summer during stratification. The relationship between the timing of the phytoplankton spring bloom and changes in the availability of light in the water
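
    A simplified version of the two-component idea can be written down directly: if each component is assumed to have a fixed backscattering-to-absorption ratio, the observed totals give two linear equations in the two absorption contributions. The default ratios below are the seasonal extremes quoted in the abstract, used here only as illustrative endmembers:

```python
def unmix_absorption(a_total, bb_total, r_ph=0.06, r_min=0.27):
    """Partition total absorption between phytoplankton and suspended minerals,
    assuming a fixed backscattering-to-absorption ratio per component:
        a_ph + a_min            = a_total
        r_ph*a_ph + r_min*a_min = bb_total
    The default ratios are the summer/winter extremes quoted in the abstract,
    used only as illustrative endmembers."""
    a_min = (bb_total - r_ph * a_total) / (r_min - r_ph)
    return a_total - a_min, a_min

# A pixel with known composition round-trips exactly.
a_ph_true, a_min_true = 0.3, 0.2
bb = 0.06 * a_ph_true + 0.27 * a_min_true
a_ph, a_min = unmix_absorption(a_ph_true + a_min_true, bb)
```

    The 2x2 system is invertible whenever the two endmember ratios differ, which is why the spread between the winter and summer ratios is what makes the unmixing possible.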

  9. Application of grafted polynomial function in forecasting cotton ...

    A study was conducted to forecast cotton production trend with the application of a grafted polynomial function in Nigeria from 1985 through 2013. Grafted models are used in econometrics to embark on economic analysis involving time series. In economic time series, the paucity of data and their availability has always ...

  10. A Polynomial Optimization Approach to Constant Rebalanced Portfolio Selection

    Takano, Y.; Sotirov, R.


    We address the multi-period portfolio optimization problem with the constant rebalancing strategy. This problem is formulated as a polynomial optimization problem (POP) by using a mean-variance criterion. In order to solve the POPs of high degree, we develop a cutting-plane algorithm based on

  11. On Dual Gabor Frame Pairs Generated by Polynomials

    Christensen, Ole; Rae Young, Kim


    We provide explicit constructions of particularly convenient dual pairs of Gabor frames. We prove that arbitrary polynomials restricted to sufficiently large intervals will generate Gabor frames, at least for small modulation parameters. Unfortunately, no similar function can generate a dual Gabo...

  12. Learning Mixtures of Polynomials of Conditional Densities from Data

    L. López-Cruz, Pedro; Nielsen, Thomas Dyhre; Bielza, Concha


    Mixtures of polynomials (MoPs) are a non-parametric density estimation technique for hybrid Bayesian networks with continuous and discrete variables. We propose two methods for learning MoP approximations of conditional densities from data. Both approaches are based on learning MoP approximatio...

  13. Root and critical point behaviors of certain sums of polynomials

    Seon-Hong Kim


    Apr 24, 2018 ... Root and critical point behaviors of certain sums of polynomials. Seon-Hong Kim¹, Sung Yoon Kim², Tae Hyung Kim² and Sangheon Lee². ¹Department of Mathematics, Sookmyung Women's University, Seoul 140-742, Korea. ²Gyeonggi Science High School, Suwon 440-800, Korea.

  14. Computational Technique for Teaching Mathematics (CTTM): Visualizing the Polynomial's Resultant

    Alves, Francisco Regis Vieira


    We find several applications of the Dynamic System Geogebra--DSG related predominantly to basic mathematical concepts in the context of learning and teaching in Brazil. However, all these works were developed at the basic level of mathematics. On the other hand, we discuss and explore, with DSG's help, some applications of the polynomial's…

  15. Polynomial modal analysis of lamellar diffraction gratings in conical mounting.

    Randriamihaja, Manjakavola Honore; Granet, Gérard; Edee, Kofi; Raniriharinosy, Karyl


    An efficient numerical modal method for modeling a lamellar grating in conical mounting is presented. Within each region of the grating, the electromagnetic field is expanded onto Legendre polynomials, which allows us to enforce in an exact manner the boundary conditions that determine the eigensolutions. Our code is successfully validated by comparison with results obtained with the analytical modal method.
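
    The Legendre expansion underlying such modal methods rests on the standard Bonnet three-term recurrence, which is also the numerically stable way to evaluate the polynomials:

```python
def legendre(n, x):
    """Evaluate the Legendre polynomial P_n(x) via the Bonnet recurrence
    (k+1) P_{k+1}(x) = (2k+1) x P_k(x) - k P_{k-1}(x), with P_0 = 1, P_1 = x."""
    p_prev, p = 1.0, x
    if n == 0:
        return p_prev
    for k in range(1, n):
        p_prev, p = p, ((2 * k + 1) * x * p - k * p_prev) / (k + 1)
    return p
```

    Expanding the field on such a basis within each region is what lets the boundary conditions be enforced exactly in the eigenvalue problem.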

  16. QCD analysis of structure functions in terms of Jacobi polynomials

    Krivokhizhin, V.G.; Kurlovich, S.P.; Savin, I.A.; Sidorov, A.V.; Skachkov, N.B.; Sanadze, V.V.


    A new method of QCD analysis of singlet and nonsinglet structure functions, based on their expansion in orthogonal Jacobi polynomials, is proposed. The accuracy of the method is studied and its application is demonstrated using the structure function F₂(x, Q²) obtained by the EMC Collaboration from measurements with an iron target. (orig.)

  17. Representations for the extreme zeros of orthogonal polynomials

    van Doorn, Erik A.; van Foreest, Nicky D.; Zeifman, Alexander I.


    We establish some representations for the smallest and largest zeros of orthogonal polynomials in terms of the parameters in the three-terms recurrence relation. As a corollary we obtain representations for the endpoints of the true interval of orthogonality. Implications of these results for the

  18. Superiority of Bessel function over Zernicke polynomial as base ...

    Abstract. Here we describe the superiority of Bessel function as base function for radial expansion over Zernicke polynomial in the tomographic reconstruction technique. The causes for the superiority have been described in detail. The superiority has been shown both with simulated data for Kadomtsev's model for ...

  19. Simplified polynomial representation of cross sections for reactor calculation

    Dias, A.M.; Sakai, M.


    A simplified representation of a cross-section library generated by transport theory, using the Wigner-Seitz cell model for typical PWR fuel elements, is shown. The effect of burnup evolution, through tables of reference cross sections, and the effect of variations of the reactor operating parameters, accounted for by adjusted polynomials, are presented. (M.C.K.) [pt

  20. A fast numerical test of multivariate polynomial positiveness with applications

    Augusta, Petr; Augustová, Petra


    Roč. 54, č. 2 (2018), s. 289-303 ISSN 0023-5954 Institutional support: RVO:67985556 Keywords : stability * multidimensional systems * positive polynomials * fast Fourier transforms * numerical algorithm Subject RIV: BC - Control Systems Theory OBOR OECD: Automation and control systems Impact factor: 0.379, year: 2016

  1. Computing Tutte polynomials of contact networks in classrooms

    Hincapié, Doracelly; Ospina, Juan


    Objective: The topological complexity of contact networks in classrooms and the potential transmission of an infectious disease were analyzed by sex and age. Methods: The Tutte polynomials, some topological properties and the number of spanning trees were used to algebraically compute the topological complexity. Computations were made with the Maple package GraphTheory. Published data of mutually reported social contacts within a classroom taken from primary school, consisting of children in the age ranges of 4-5, 7-8 and 10-11, were used. Results: The algebraic complexity of the Tutte polynomial and the probability of disease transmission increase with age. The contact networks are not bipartite graphs; gender segregation was observed, especially in younger children. Conclusion: Tutte polynomials are tools to understand the topology of contact networks and to derive numerical indexes of such topologies. It is possible to establish relationships between the Tutte polynomial of a given contact network and the potential transmission of an infectious disease within such a network
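
    The Tutte polynomial computation (done in the paper with Maple's GraphTheory package) can be sketched with the textbook deletion-contraction recursion. This illustrative pure-Python version runs in exponential time but is adequate for classroom-sized contact networks:

```python
def connected(u, v, edges):
    """BFS reachability of v from u over an edge list."""
    adj = {}
    for a, b in edges:
        adj.setdefault(a, set()).add(b)
        adj.setdefault(b, set()).add(a)
    seen, stack = {u}, [u]
    while stack:
        w = stack.pop()
        if w == v:
            return True
        for nb in adj.get(w, ()):
            if nb not in seen:
                seen.add(nb)
                stack.append(nb)
    return False

def contract(edges, u, v):
    """Merge vertex v into u (contraction of an already-removed edge u-v)."""
    return [(u if a == v else a, u if b == v else b) for a, b in edges]

def tutte(edges):
    """Tutte polynomial by deletion-contraction, as a dict {(i, j): coeff}
    for the monomial x^i y^j."""
    if not edges:
        return {(0, 0): 1}
    (a, b), rest = edges[0], edges[1:]
    if a == b:                     # loop: factor y
        return {(i, j + 1): c for (i, j), c in tutte(rest).items()}
    if not connected(a, b, rest):  # bridge: factor x, then contract
        return {(i + 1, j): c for (i, j), c in tutte(contract(rest, a, b)).items()}
    out = dict(tutte(rest))        # delete the edge
    for key, c in tutte(contract(rest, a, b)).items():  # contract the edge
        out[key] = out.get(key, 0) + c
    return out

# Triangle graph: T(x, y) = x^2 + x + y; T(1, 1) counts spanning trees.
triangle = tutte([(0, 1), (1, 2), (2, 0)])
```

    Evaluating the polynomial at (1, 1) recovers the number of spanning trees that the paper uses as one of its numerical indexes.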

  2. Fast Parallel Computation of Polynomials Using Few Processors

    Valiant, Leslie G.; Skyum, Sven; Berkowitz, S.


    It is shown that any multivariate polynomial of degree $d$ that can be computed sequentially in $C$ steps can be computed in parallel in $O((\log d)(\log C + \log d))$ steps using only $(Cd)^{O(1)}$ processors....

  3. Mirror symmetry, toric branes and topological string amplitudes as polynomials

    Alim, Murad


    The central theme of this thesis is the extension and application of mirror symmetry of topological string theory. The contribution of this work on the mathematical side is given by interpreting the calculated partition functions as generating functions for mathematical invariants which are extracted in various examples. Furthermore the extension of the variation of the vacuum bundle to include D-branes on compact geometries is studied. Based on previous work for non-compact geometries a system of differential equations is derived which allows us to extend the mirror map to the deformation spaces of the D-branes. Furthermore, these equations allow the computation of the full quantum corrected superpotentials which are induced by the D-branes. Based on the holomorphic anomaly equation, which describes the background dependence of topological string theory relating recursively loop amplitudes, this work generalizes a polynomial construction of the loop amplitudes, which was found for manifolds with a one-dimensional space of deformations, to arbitrary target manifolds with arbitrary dimension of the deformation space. The polynomial generators are determined and it is proven that the higher loop amplitudes are polynomials of a certain degree in the generators. Furthermore, the polynomial construction is generalized to solve the extension of the holomorphic anomaly equation to D-branes without deformation space. This method is applied to calculate higher loop amplitudes in numerous examples and the mathematical invariants are extracted. (orig.)

  4. Riesz transforms and Lie groups of polynomial growth

    Elst, ter A.F.M.; Robinson, D.W.; Sikora, A.


    Let G be a Lie group of polynomial growth. We prove that the second-order Riesz transforms on L²(G; dg) are bounded if, and only if, the group is a direct product of a compact group and a nilpotent group, in which case the transforms of all orders are bounded.

  5. Explicitly solvable complex Chebyshev approximation problems related to sine polynomials

    Freund, Roland


    Explicitly solvable real Chebyshev approximation problems on the unit interval are typically characterized by simple error curves. A similar principle is presented for complex approximation problems with error curves induced by sine polynomials. As an application, some new explicit formulae for complex best approximations are derived.

  6. Polynomial constitutive model for shape memory and pseudo elasticity

    Savi, M.A.; Kouzak, Z.


    This paper reports a one-dimensional phenomenological constitutive model for shape memory and pseudo elasticity using a polynomial expression for the free energy, which is based on the classical Devonshire theory. This study identifies the main characteristics of the classical theory and introduces a simple modification to obtain better results. (author). 9 refs., 6 figs
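
    A Devonshire-type polynomial free energy yields a closed-form stress-strain law by differentiation with respect to strain. A minimal sketch with placeholder coefficients (the paper's calibrated values are not reproduced here):

```python
def stress(strain, temp, a=1.0e3, b=40.0e6, e=5.0e9, t_m=287.0):
    """Polynomial (Devonshire-type) constitutive law
        sigma = a (T - T_M) eps - b eps^3 + e eps^5,
    obtained by differentiating a sixth-degree free energy in strain. The cubic
    term destabilizes the parent phase below T_M and the quintic term
    restabilizes large strains. All coefficient values here are illustrative
    placeholders, not the paper's."""
    return a * (temp - t_m) * strain - b * strain ** 3 + e * strain ** 5
```

    The law is odd in strain, so tension and compression respond symmetrically, one of the characteristics of the classical theory the abstract refers to.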

  7. Weighted Polynomial Approximation for Automated Detection of Inspiratory Flow Limitation

    Sheng-Cheng Huang


    Full Text Available Inspiratory flow limitation (IFL) is a critical symptom of sleep breathing disorders. A characteristic flattened flow-time curve indicates the presence of highest resistance flow limitation. This study involved investigating a real-time algorithm for detecting IFL during sleep. Three categories of inspiratory flow shape were collected from previous studies for use as a development set. Of these, 16 cases were labeled as non-IFL and 78 as IFL, which were further categorized into minor level (20 cases) and severe level (58 cases) of obstruction. In this study, algorithms using polynomial functions were proposed for extracting the features of IFL. Methods using first- to third-order polynomial approximations were applied to calculate the fitting curve to obtain the mean absolute error. The proposed algorithm is described by the weighted third-order (w.3rd-order) polynomial function. For validation, a total of 1,093 inspiratory breaths were acquired as a test set. The accuracy levels of the classifications produced by the presented feature detection methods were analyzed, and the performance levels were compared using a misclassification cobweb. According to the results, the algorithm using the w.3rd-order polynomial approximation achieved an accuracy of 94.14% for IFL classification. We concluded that this algorithm achieved effective automatic IFL detection during sleep.
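
    The feature-extraction step can be sketched as a weighted least-squares polynomial fit followed by the mean absolute fitting error; the paper's specific weighting scheme and decision thresholds are not reproduced here:

```python
def polyfit_weighted(t, y, w, degree=3):
    """Weighted least-squares polynomial fit via the normal equations,
    solved by Gaussian elimination with partial pivoting.
    Returns coefficients c[0..degree] of c0 + c1*t + ... + c_degree*t^degree."""
    m = degree + 1
    A = [[sum(wi * ti ** (j + k) for wi, ti in zip(w, t)) for k in range(m)]
         for j in range(m)]
    b = [sum(wi * ti ** j * yi for wi, ti, yi in zip(w, t, y)) for j in range(m)]
    for col in range(m):
        piv = max(range(col, m), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, m):
            f = A[r][col] / A[col][col]
            for k in range(col, m):
                A[r][k] -= f * A[col][k]
            b[r] -= f * b[col]
    c = [0.0] * m
    for r in range(m - 1, -1, -1):
        c[r] = (b[r] - sum(A[r][k] * c[k] for k in range(r + 1, m))) / A[r][r]
    return c

def mean_abs_error(t, y, c):
    """Mean absolute error between the samples and the fitted polynomial."""
    fit = [sum(cj * ti ** j for j, cj in enumerate(c)) for ti in t]
    return sum(abs(fi - yi) for fi, yi in zip(fit, y)) / len(y)

# Synthetic flow-like samples from an exact cubic; uniform weights recover it.
t = [-1.0, -0.5, 0.0, 0.5, 1.0, 1.5, 2.0]
y = [1.0 + 2.0 * ti - ti ** 3 for ti in t]
coeffs = polyfit_weighted(t, y, [1.0] * len(t))
mae = mean_abs_error(t, y, coeffs)
```

    A flattened (flow-limited) breath fits a cubic poorly, so a large mean absolute error serves as the IFL feature.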

  8. A Genetic algorithm for evaluating the zeros (roots) of polynomial ...

    This paper presents a Genetic Algorithm software (which is a computational, search technique) for finding the zeros (roots) of any given polynomial function, and optimizing and solving N-dimensional systems of equations. The software is particularly useful since most of the classic schemes are not all embracing.
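
    A toy version of the idea can be written as a genetic algorithm minimizing |p(x)| over a real interval, with truncation selection, blend crossover, and Gaussian mutation (a sketch of the concept, not the paper's software):

```python
import random

def ga_root(poly, lo=-10.0, hi=10.0, pop_size=60, generations=200, seed=1):
    """Toy genetic algorithm locating a real root of a polynomial by
    minimizing |p(x)|. Individuals are real numbers; the best half survives,
    offspring are blends of two parents plus Gaussian mutation.
    poly: coefficients in ascending order, p(x) = sum(poly[i] * x**i)."""
    rng = random.Random(seed)
    def p(x):
        return sum(c * x ** i for i, c in enumerate(poly))
    pop = [rng.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda x: abs(p(x)))        # fitness = closeness to a root
        survivors = pop[: pop_size // 2]         # elitist truncation selection
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = rng.sample(survivors, 2)
            t = rng.random()
            children.append(t * a + (1 - t) * b + rng.gauss(0.0, 0.1))
        pop = survivors + children
    return min(pop, key=lambda x: abs(p(x)))

# p(x) = x^2 - 2; the GA should land near one of the roots ±sqrt(2).
root = ga_root([-2.0, 0.0, 1.0])
```

    Unlike Newton-type schemes, nothing here requires derivatives or a good starting point, which is the "all-embracing" appeal the abstract mentions, at the cost of far more function evaluations.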

  9. Global sensitivity analysis using sparse grid interpolation and polynomial chaos

    Buzzard, Gregery T.


    Sparse grid interpolation is widely used to provide good approximations to smooth functions in high dimensions based on relatively few function evaluations. By using an efficient conversion from the interpolating polynomial provided by evaluations on a sparse grid to a representation in terms of orthogonal polynomials (gPC representation), we show how to use these relatively few function evaluations to estimate several types of sensitivity coefficients and to provide estimates on local minima and maxima. First, we provide a good estimate of the variance-based sensitivity coefficients of Sobol' (1990) [1] and then use the gradient of the gPC representation to give good approximations to the derivative-based sensitivity coefficients described by Kucherenko and Sobol' (2009) [2]. Finally, we use the package HOM4PS-2.0 given in Lee et al. (2008) [3] to determine the critical points of the interpolating polynomial and use these to determine the local minima and maxima of this polynomial. - Highlights: ► Efficient estimation of variance-based sensitivity coefficients. ► Efficient estimation of derivative-based sensitivity coefficients. ► Use of homotopy methods for approximation of local maxima and minima.
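
    The step from a gPC representation to variance-based sensitivity coefficients is a bookkeeping exercise over the expansion coefficients. A sketch for a two-variable Legendre expansion (uniform inputs on [-1, 1]; the example model is hypothetical):

```python
def sobol_from_gpc(coeffs):
    """First-order Sobol indices from a 2-variable Legendre gPC expansion.
    coeffs: dict mapping multi-index (i, j) to the coefficient of
    P_i(x1) P_j(x2). For U uniform on [-1, 1], E[P_n(U)^2] = 1/(2n+1), so the
    term (i, j) contributes c_ij^2 / ((2i+1)(2j+1)) to the variance."""
    def norm2(idx):
        i, j = idx
        return 1.0 / ((2 * i + 1) * (2 * j + 1))
    total = sum(c * c * norm2(idx) for idx, c in coeffs.items() if idx != (0, 0))
    s1 = sum(c * c * norm2(idx) for idx, c in coeffs.items()
             if idx[0] > 0 and idx[1] == 0) / total
    s2 = sum(c * c * norm2(idx) for idx, c in coeffs.items()
             if idx[1] > 0 and idx[0] == 0) / total
    return s1, s2

# Hypothetical model f = 1 + 3 P1(x1) + 2 P1(x2) + P1(x1) P1(x2).
s1, s2 = sobol_from_gpc({(0, 0): 1.0, (1, 0): 3.0, (0, 1): 2.0, (1, 1): 1.0})
```

    The shortfall of s1 + s2 below 1 is the interaction contribution, which is exactly what the variance decomposition isolates.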

  10. Simplified polynomial digital predistortion for multimode software defined radios

    Kardaras, Georgios; Soler, José; Dittmann, Lars


    a simplified approach using polynomial digital predistortion in the intermediate frequency (IF) domain. It is fully implementable in software and no hardware changes are required on the digital or analog platform. The adaptation algorithm selected was Least Mean Squares because of its relevant simplicity...

  11. Polynomial kernels for deletion to classes of acyclic digraphs

    Mnich, Matthias; van Leeuwen, E.J.


    We consider the problem of finding a set X of vertices (or arcs) with |X| ≤ k in a given digraph G such that D = G − X is an acyclic digraph. In its generality, this is Directed Feedback Vertex Set (or Directed Feedback Arc Set); the existence of a polynomial kernel for these problems is a notorious

  12. Lie-theoretic generating relations of two variable Laguerre polynomials

    Khan, Subuhi; Yasmin, Ghazala


    Generating relations involving two-variable Laguerre polynomials Lₙ(x, y) are derived. The process involves the construction of a three-dimensional Lie algebra isomorphic to the special linear algebra sl(2) with the help of Weisner's method, by giving suitable interpretations to the index n of the polynomials Lₙ(x, y). (author)

  13. Differentiation by integration using orthogonal polynomials, a survey

    Diekema, E.; Koornwinder, T.H.


    This survey paper discusses the history of approximation formulas for n-th order derivatives by integrals involving orthogonal polynomials. There is a large but rather disconnected corpus of literature on such formulas. We give some results in greater generality than in the literature. Notably we

  14. Tsallis p, q-deformed Touchard polynomials and Stirling numbers

    Herscovici, O.; Mansour, T.


    In this paper, we develop and investigate a new two-parametrized deformation of the Touchard polynomials, based on the definition of the NEXT q-exponential function of Tsallis. We obtain new generalizations of the Stirling numbers of the second kind and of the binomial coefficients and represent two new statistics for the set partitions.
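
    The objects being deformed are the classical Touchard polynomials, whose coefficients are the Stirling numbers of the second kind; a sketch of the undeformed case:

```python
def stirling2(n, k):
    """Stirling number of the second kind via S(n,k) = k*S(n-1,k) + S(n-1,k-1),
    with S(0,0) = 1 and S(n,0) = S(0,k) = 0 otherwise."""
    if n == 0:
        return 1 if k == 0 else 0
    if k == 0:
        return 0
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

def touchard(n, x):
    """Classical (undeformed) Touchard polynomial T_n(x) = sum_k S(n,k) x^k."""
    return sum(stirling2(n, k) * x ** k for k in range(n + 1))
```

    At x = 1 the Touchard polynomial reduces to the Bell number B_n, the count of set partitions, which is the combinatorial statistic the q-deformation generalizes.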

  15. Optimum short-time polynomial regression for signal analysis

    A Sreenivasa Murthy

    Proceedings of the European Signal Processing Conference (EUSIPCO) 2008. ... In a seminal paper, Savitzky and Golay [4] showed that short-time polynomial modeling is ... We next consider a linearly frequency-modulated chirp with an exponentially ...

  16. on the performance of Autoregressive Moving Average Polynomial

    Timothy Ademakinwa

    estimated using least squares and Newton-Raphson iterative methods. To determine the order of the ... r is the degree of the polynomial while j is the number of lags of the ... use a real time series dataset, monthly rainfall and temperature series ...

  17. Chemical Equilibrium and Polynomial Equations: Beware of Roots.

    Smith, William R.; Missen, Ronald W.


    Describes two easily applied mathematical theorems, Budan's rule and Rolle's theorem, that in addition to Descartes's rule of signs and intermediate-value theorem, are useful in chemical equilibrium. Provides examples that illustrate the use of all four theorems. Discusses limitations of the polynomial equation representation of chemical…

  18. Explicit formulae for the generalized Hermite polynomials in superspace

    Desrosiers, Patrick; Lapointe, Luc; Mathieu, Pierre


    We provide explicit formulae for the orthogonal eigenfunctions of the supersymmetric extension of the rational Calogero-Moser-Sutherland model with harmonic confinement, i.e., the generalized Hermite (or Hi-Jack) polynomials in superspace. The construction relies on the triangular action of the Hamiltonian on the supermonomial basis. This translates into determinantal expressions for the Hamiltonian's eigenfunctions

  19. Szegö Kernels and Asymptotic Expansions for Legendre Polynomials

    Roberto Paoletti


    Full Text Available We present a geometric approach to the asymptotics of the Legendre polynomials P_{k,n+1}, based on the Szegö kernel of the Fermat quadric hypersurface, leading to complete asymptotic expansions holding on expanding subintervals of [-1,1].

  20. Computation of rectangular source integral by rational parameter polynomial method

    Prabha, Hem


    Hubbell et al. (J. Res. Nat. Bureau Standards 64C (1960) 121) obtained a series expansion for the calculation of the radiation field generated by a plane isotropic rectangular source (plaque), in which the leading term is the integral H(a,b). In this paper another integral I(a,b), which is related to the integral H(a,b), is solved by the rational parameter polynomial method. From I(a,b), we compute H(a,b). Using this method, the integral I(a,b) is expressed in the form of a polynomial in a rational parameter. Generally, a function f(x) is expressed in terms of x; in this method it is expressed in terms of x/(1+x). In this way, the accuracy of the expression is good over a wide range of x as compared to the earlier approach. The results for I(a,b) and H(a,b) are given for a sixth-degree polynomial and are found to be in good agreement with the results obtained by numerically integrating the integral. Accuracy could be increased either by increasing the degree of the polynomial or by dividing the range of integration. The results of H(a,b) and I(a,b) are given for values of b and a up to 2.0 and 20.0, respectively
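
    The gain from the rational parameter t = x/(1+x) is easy to demonstrate on a function with a known closed form: the series for ln(1+x) in t converges for all x >= 0, whereas the Maclaurin series in x diverges beyond x = 1. (An illustration of the substitution only; the paper's integral I(a,b) is not reproduced.)

```python
import math

def log1p_rational(x, terms=500):
    """Evaluate ln(1+x) as a series in the rational parameter t = x/(1+x):
    ln(1+x) = -ln(1-t) = sum_{n>=1} t^n / n. Since 0 <= t < 1 for all x >= 0,
    the series converges over the whole positive axis."""
    t = x / (1.0 + x)
    return sum(t ** n / n for n in range(1, terms + 1))
```

    Even at x = 10, well outside the radius of convergence of the plain power series, the rational-parameter series converges, which mirrors the wide-range accuracy claimed for the polynomial in x/(1+x).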