Covariance of maximum likelihood evolutionary distances between sequences aligned pairwise.
Dessimoz, Christophe; Gil, Manuel
2008-06-23
The estimation of a distance between two biological sequences is a fundamental process in molecular evolution. It is usually performed by maximum likelihood (ML) on characters aligned either pairwise or jointly in a multiple sequence alignment (MSA). Estimators for the covariance of pairs from an MSA are known, but we are not aware of any solution for cases of pairs aligned independently. In large-scale analyses, it may be too costly to compute MSAs every time distances must be compared, and therefore a covariance estimator for distances estimated from pairs aligned independently is desirable. Knowledge of covariances improves any process that compares or combines distances, such as in generalized least-squares phylogenetic tree building, orthology inference, or lateral gene transfer detection. In this paper, we introduce an estimator for the covariance of distances from sequences aligned pairwise. Its performance is analyzed through extensive Monte Carlo simulations, and compared to the well-known variance estimator of ML distances. Our covariance estimator can be used together with the ML variance estimator to form covariance matrices. The estimator performs similarly to the ML variance estimator. In particular, it shows no sign of bias when sequence divergence is below 150 PAM units (i.e. above ~29% expected sequence identity). Above that distance, the covariances tend to be underestimated, but then ML variances are also underestimated.
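The record does not reproduce the authors' covariance estimator, but the kind of ML distance variance it is compared against can be illustrated with a minimal, self-contained sketch. Under the simple Jukes-Cantor model (chosen here only for illustration; the paper works with PAM-style protein models), the delta-method variance of the ML distance can be checked against Monte Carlo resampling of a pairwise alignment of synthetic length:

```python
import numpy as np

rng = np.random.default_rng(0)

def jc_distance(p):
    # Jukes-Cantor ML distance from the observed fraction p of differing sites
    return -0.75 * np.log(1.0 - 4.0 * p / 3.0)

def jc_variance(p, L):
    # Delta-method (asymptotic ML) variance: Var(d) ~ p(1-p)/L * (dd/dp)^2,
    # with dd/dp = 1/(1 - 4p/3)
    return p * (1.0 - p) / (L * (1.0 - 4.0 * p / 3.0) ** 2)

L_sites, p_true = 1000, 0.3          # synthetic alignment length and divergence
# Monte Carlo: resample the number of differing sites, re-estimate the distance
p_hat = rng.binomial(L_sites, p_true, size=200_000) / L_sites
d_hat = jc_distance(p_hat)
print(np.var(d_hat), jc_variance(p_true, L_sites))  # the two should agree closely
```

The empirical variance of the resampled distances matches the asymptotic ML formula; the paper's contribution is the analogous off-diagonal (covariance) term for two distances sharing a sequence.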
Maximum covariance analysis to identify intraseasonal oscillations over tropical Brazil
Barreto, Naurinete J. C.; Mesquita, Michel d. S.; Mendes, David; Spyrides, Maria H. C.; Pedra, George U.; Lucio, Paulo S.
2017-09-01
A reliable prognosis of extreme precipitation events in the tropics is arguably challenging to obtain due to the interaction of meteorological systems at various time scales. A pivotal component of the global climate variability is the so-called intraseasonal oscillations, phenomena that occur between 20 and 100 days. The Madden-Julian Oscillation (MJO), which is directly related to the modulation of convective precipitation in the equatorial belt, is considered the primary oscillation in the tropical region. The aim of this study is to diagnose the connection between the MJO signal and the regional intraseasonal rainfall variability over tropical Brazil. This is achieved through the development of an index called the Multivariate Intraseasonal Index for Tropical Brazil (MITB). This index is based on Maximum Covariance Analysis (MCA) applied to the filtered daily anomalies of rainfall data over tropical Brazil against a group of covariates consisting of outgoing longwave radiation and the zonal component u of the wind at 850 and 200 hPa. The first two MCA modes, which were used to create the MITB_1 and MITB_2 indices, represent 65 and 16% of the explained variance, respectively. The combined multivariate index was able to satisfactorily represent the pattern of intraseasonal variability over tropical Brazil, showing that there are periods of activation and inhibition of precipitation connected with the pattern of MJO propagation. The MITB index could potentially be used as a diagnostic tool for intraseasonal forecasting.
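MCA as used for the MITB index amounts to a singular value decomposition of the cross-covariance matrix between two anomaly fields; the leading singular vectors give the coupled spatial patterns and their time series give the index. A minimal sketch on synthetic data (the 45-day signal, field sizes, and noise level below are invented for illustration, not the paper's values):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic anomaly fields, time x space (e.g. filtered rainfall vs. OLR/winds)
T, Nx, Ny = 500, 30, 40
t = np.arange(T)
common = np.sin(2 * np.pi * t / 45.0)            # shared intraseasonal signal
X = np.outer(common, rng.normal(size=Nx)) + 0.5 * rng.normal(size=(T, Nx))
Y = np.outer(common, rng.normal(size=Ny)) + 0.5 * rng.normal(size=(T, Ny))

# Maximum Covariance Analysis: SVD of the cross-covariance matrix
Xa = X - X.mean(axis=0)
Ya = Y - Y.mean(axis=0)
C = Xa.T @ Ya / (T - 1)                          # Nx x Ny cross-covariance
U, s, Vt = np.linalg.svd(C, full_matrices=False)

# Squared-covariance fraction of each mode (the "explained variance" quoted)
scf = s**2 / np.sum(s**2)
print(scf[:2])                                   # the leading mode dominates

# Expansion coefficients of mode 1: time series analogous to an MITB index
a1 = Xa @ U[:, 0]
b1 = Ya @ Vt[0, :]
```

Because both fields share the oscillatory signal, the leading mode captures most of the squared covariance and its two expansion-coefficient series are strongly correlated, which is exactly the property that makes such series usable as a coupled-variability index.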
Maximum Likelihood Inference for the Cox Regression Model with Applications to Missing Covariates.
Chen, Ming-Hui; Ibrahim, Joseph G; Shao, Qi-Man
2009-10-01
In this paper, we carry out an in-depth theoretical investigation for existence of maximum likelihood estimates for the Cox model (Cox, 1972, 1975) both in the full data setting as well as in the presence of missing covariate data. The main motivation for this work arises from missing data problems, where models can easily become difficult to estimate with certain missing data configurations or large missing data fractions. We establish necessary and sufficient conditions for existence of the maximum partial likelihood estimate (MPLE) for completely observed data (i.e., no missing data) settings as well as sufficient conditions for existence of the maximum likelihood estimate (MLE) for survival data with missing covariates via a profile likelihood method. Several theorems are given to establish these conditions. A real dataset from a cancer clinical trial is presented to further illustrate the proposed methodology.
Performance of penalized maximum likelihood in estimation of genetic covariance matrices
Meyer Karin
2011-11-01
Background: Estimation of genetic covariance matrices for multivariate problems comprising more than a few traits is inherently problematic, since sampling variation increases dramatically with the number of traits. This paper investigates the efficacy of regularized estimation of covariance components in a maximum likelihood framework, imposing a penalty on the likelihood designed to reduce sampling variation. In particular, penalties that "borrow strength" from the phenotypic covariance matrix are considered. Methods: An extensive simulation study was carried out to investigate the reduction in average 'loss', i.e. the deviation of estimated matrices from the population values, and the accompanying bias for a range of parameter values and sample sizes. A number of penalties are examined, penalizing either the canonical eigenvalues or the genetic covariance or correlation matrices. In addition, several strategies to determine the amount of penalization to be applied, i.e. to estimate the appropriate tuning factor, are explored. Results: It is shown that substantial reductions in loss for estimates of genetic covariance can be achieved for small to moderate sample sizes. While no penalty performed best overall, penalizing the variance among the estimated canonical eigenvalues on the logarithmic scale or shrinking the genetic towards the phenotypic correlation matrix appeared most advantageous. Estimating the tuning factor using cross-validation resulted in a loss reduction 10 to 15% less than that obtained if population values were known. Applying a mild penalty, chosen so that the deviation in likelihood from the maximum was non-significant, performed as well if not better than cross-validation and can be recommended as a pragmatic strategy. Conclusions: Penalized maximum likelihood estimation provides the means to 'make the most' of limited and precious data and facilitates more stable estimation for multi-dimensional analyses. It should...
An Efficient Algorithm for Maximum-Entropy Extension of Block-Circulant Covariance Matrices
Carli, Francesca P; Pavon, Michele; Picci, Giorgio
2011-01-01
This paper deals with maximum entropy completion of partially specified block-circulant matrices. Since positive definite symmetric circulants happen to be covariance matrices of stationary periodic processes, in particular of stationary reciprocal processes, this problem has applications in signal processing, in particular to image modeling. Maximum entropy completion is strictly related to maximum likelihood estimation subject to certain conditional independence constraints. The maximum entropy completion problem for block-circulant matrices is a nonlinear problem which has recently been solved by the authors, although leaving open the problem of an efficient computation of the solution. The main contribution of this paper is to provide an efficient algorithm for computing the solution. Simulation shows that our iterative scheme outperforms various existing approaches, especially for large dimensional problems. A necessary and sufficient condition for the existence of a positive definite circulant completio...
Bai, Xian-Zong; Ma, Chao-Wei; Chen, Lei; Tang, Guo-Jin
2016-09-01
When engaging in maximum collision probability (Pcmax) analysis for short-term conjunctions between two orbiting objects, it is important to clarify and understand the assumptions under which Pcmax is obtained. Based on Chan's analytical formulae and an analysis of the covariance ellipse's variation in orientation, shape, and size in the two-dimensional conjunction plane, this paper proposes a clear and comprehensive analysis of maximum collision probability when considering these variables. Eight situations are considered when calculating Pcmax according to the varied orientation, shape, and size of the covariance ellipse. Three of the situations are not practical or meaningful; the remaining ones were completely or partially discussed in some of the previous works. These situations are discussed with uniform definitions and symbols and are derived independently in this paper. The results are compared with and validated against previous works. Finally, a practical conjunction event is presented as a test case to demonstrate the effectiveness of the methodology. Comparison of the Pcmax presented in this paper with the empirical results from the curve or surface calculated by numerical methods indicates that the relative error of Pcmax is less than 0.0039%.
Houle, D; Meyer, K
2015-08-01
We explore the estimation of uncertainty in evolutionary parameters using a recently devised approach for resampling entire additive genetic variance-covariance matrices (G). Large-sample theory shows that maximum-likelihood estimates (including restricted maximum likelihood, REML) asymptotically have a multivariate normal distribution, with covariance matrix derived from the inverse of the information matrix, and mean equal to the estimated G. This suggests that sampling estimates of G from this distribution can be used to assess the variability of estimates of G, and of functions of G. We refer to this as the REML-MVN method. This has been implemented in the mixed-model program WOMBAT. Estimates of sampling variances from REML-MVN were compared to those from the parametric bootstrap and from a Bayesian Markov chain Monte Carlo (MCMC) approach (implemented in the R package MCMCglmm). We apply each approach to evolvability statistics previously estimated for a large, 20-dimensional data set for Drosophila wings. REML-MVN and MCMC sampling variances are close to those estimated with the parametric bootstrap. Both slightly underestimate the error in the best-estimated aspects of the G matrix. REML analysis supports the previous conclusion that the G matrix for this population is full rank. REML-MVN is computationally very efficient, making it an attractive alternative to both data resampling and MCMC approaches to assessing confidence in parameters of evolutionary interest. © 2015 European Society For Evolutionary Biology.
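The REML-MVN idea is simple to sketch: treat the distinct elements of the estimated G as a multivariate normal with mean at the REML estimate and covariance from the inverse information matrix, draw many parameter vectors, rebuild G from each, and propagate every draw through the statistic of interest. The G matrix and inverse-information matrix below are invented placeholders, not values from the Drosophila analysis:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical REML output: estimated G and the sampling covariance of its
# k(k+1)/2 distinct elements (inverse of the information matrix)
G_hat = np.array([[2.0, 0.8],
                  [0.8, 1.5]])
k = G_hat.shape[0]
iu = np.triu_indices(k)                  # index the distinct (upper) elements
theta_hat = G_hat[iu]
V_theta = 0.01 * np.eye(len(theta_hat))  # placeholder inverse information

def rebuild(theta):
    # Rebuild a symmetric G matrix from its upper-triangle parameter vector
    G = np.zeros((k, k))
    G[iu] = theta
    return G + np.triu(G, 1).T

# REML-MVN: sample parameter vectors, propagate through a function of G
draws = rng.multivariate_normal(theta_hat, V_theta, size=10_000)
evolvability = np.array([np.trace(rebuild(th)) / k for th in draws])
print(evolvability.mean(), evolvability.std())  # sampling uncertainty of tr(G)/k
```

Any function of G (eigenvalues, evolvability statistics, conditional variances) can replace the trace here; the spread of the resampled values is the REML-MVN estimate of its sampling error.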
Castrillon, Julio
2015-11-10
We develop a multi-level restricted Gaussian maximum likelihood method for estimating the covariance function parameters and computing the best unbiased predictor. Our approach produces a new set of multi-level contrasts where the deterministic parameters of the model are filtered out, thus enabling the estimation of the covariance parameters to be decoupled from the deterministic component. Moreover, the multi-level covariance matrix of the contrasts exhibits fast decay that is dependent on the smoothness of the covariance function. Due to the fast decay of the multi-level covariance matrix coefficients, only a small set is computed with a level-dependent criterion. We demonstrate our approach on problems of up to 512,000 observations with a Matérn covariance function and highly irregular placements of the observations. In addition, these problems are numerically unstable and hard to solve with traditional methods.
Kirkpatrick Mark
2005-01-01
Principal component analysis is a widely used 'dimension reduction' technique, albeit generally at a phenotypic level. It is shown that we can estimate genetic principal components directly through a simple reparameterisation of the usual linear, mixed model. This is applicable to any analysis fitting multiple, correlated genetic effects, whether effects for individual traits or sets of random regression coefficients to model trajectories. Depending on the magnitude of genetic correlation, a subset of the principal components generally suffices to capture the bulk of genetic variation. Corresponding estimates of genetic covariance matrices are more parsimonious, have reduced rank and are smoothed, with the number of parameters required to model the dispersion structure reduced from k(k + 1)/2 to m(2k - m + 1)/2 for k effects and m principal components. Estimation of these parameters, the largest eigenvalues and pertaining eigenvectors of the genetic covariance matrix, via restricted maximum likelihood using derivatives of the likelihood, is described. It is shown that reduced rank estimation can reduce computational requirements of multivariate analyses substantially. An application to the analysis of eight traits recorded via live ultrasound scanning of beef cattle is given.
Lee, Wonyul; Liu, Yufeng
2012-10-01
Multivariate regression is a common statistical tool for practical problems. Many multivariate regression techniques are designed for univariate response cases. For problems with multiple response variables available, one common approach is to apply the univariate response regression technique separately on each response variable. Although it is simple and popular, the univariate response approach ignores the joint information among response variables. In this paper, we propose three new methods for utilizing joint information among response variables. All methods are in a penalized likelihood framework with weighted L1 regularization. The proposed methods provide sparse estimators of the conditional inverse covariance matrix of the response vector given explanatory variables as well as sparse estimators of regression parameters. Our first approach is to estimate the regression coefficients with plug-in estimated inverse covariance matrices, and our second approach is to estimate the inverse covariance matrix with plug-in estimated regression parameters. Our third approach is to estimate both simultaneously. Asymptotic properties of these methods are explored. Our numerical examples demonstrate that the proposed methods perform competitively in terms of prediction, variable selection, as well as inverse covariance matrix estimation.
Maximum a posteriori covariance estimation using a power inverse wishart prior
Nielsen, Søren Feodor; Sporring, Jon
2012-01-01
The estimation of the covariance matrix is an initial step in many multivariate statistical methods such as principal components analysis and factor analysis, but in many practical applications the dimensionality of the sample space is large compared to the number of samples, and the usual maximu...... class of prior distributions generalizing the inverse Wishart prior, discuss its properties, and demonstrate the estimator on simulated and real data....
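The motivating failure mode and the standard inverse-Wishart remedy can be sketched directly (the power-generalized prior of the paper is not reproduced here; the sketch below uses the plain inverse-Wishart MAP formula, with placeholder prior parameters):

```python
import numpy as np

rng = np.random.default_rng(3)

p, n = 10, 5                       # more dimensions than samples
X = rng.normal(size=(n, p))
S = X.T @ X                        # scatter matrix (mean assumed known, zero)

# The ML estimate S/n is singular when n < p. An inverse-Wishart prior
# IW(Psi, nu) yields the MAP estimate (Psi + S) / (nu + n + p + 1), which
# shrinks toward Psi and is always positive definite.
nu = p + 2                         # placeholder prior strength
Psi = np.eye(p)                    # placeholder prior scale
sigma_map = (Psi + S) / (nu + n + p + 1)

print(np.linalg.matrix_rank(S / n),
      bool(np.all(np.linalg.eigvalsh(sigma_map) > 0)))
```

The ML estimate has rank at most n, while the MAP estimate is full rank and invertible, which is what downstream methods such as PCA and factor analysis require.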
Maximum Likelihood Analysis of a Two-Level Nonlinear Structural Equation Model with Fixed Covariates
Lee, Sik-Yum; Song, Xin-Yuan
2005-01-01
In this article, a maximum likelihood (ML) approach for analyzing a rather general two-level structural equation model is developed for hierarchically structured data that are very common in educational and/or behavioral research. The proposed two-level model can accommodate nonlinear causal relations among latent variables as well as effects…
Constrained Maximum Likelihood Estimation for Two-Level Mean and Covariance Structure Models
Bentler, Peter M.; Liang, Jiajuan; Tang, Man-Lai; Yuan, Ke-Hai
2011-01-01
Maximum likelihood is commonly used for the estimation of model parameters in the analysis of two-level structural equation models. Constraints on model parameters could be encountered in some situations such as equal factor loadings for different factors. Linear constraints are the most common ones and they are relatively easy to handle in…
Li, J.; Carlson, B. E.; Lacis, A. A.
2014-04-01
The development of remote sensing techniques has greatly advanced our knowledge of atmospheric aerosols. Various satellite sensors and the associated retrieval algorithms all add to the information of global aerosol variability, while well-designed surface networks provide time series of highly accurate measurements at specific locations. In studying the variability of aerosol properties, aerosol climate effects, and constraining aerosol fields in climate models, it is essential to make the best use of all of the available information. In the previous three parts of this series, we demonstrated the usefulness of several spectral decomposition techniques in the analysis and comparison of temporal and spatial variability of aerosol optical depth using satellite and ground-based measurements. Specifically, Principal Component Analysis (PCA) successfully captures and isolates seasonal and interannual variability from different aerosol source regions, Maximum Covariance Analysis (MCA) provides a means to verify the variability in one satellite dataset against Aerosol Robotic Network (AERONET) data, and Combined Principal Component Analysis (CPCA) enables parallel comparison among multi-satellite, multi-sensor datasets. As the final part of the study, this paper introduces a novel technique that integrates both multi-sensor datasets and ground observations, and thus effectively bridges the gap between these two types of measurements. The Combined Maximum Covariance Analysis (CMCA) decomposes the cross covariance matrix between the combined multi-sensor satellite data field and AERONET station data. We show that this new method not only confirms the seasonal and interannual variability of aerosol optical depth, aerosol source regions and events represented by different satellite datasets, but also identifies the strengths and weaknesses of each dataset in capturing the variability associated with sources, events or aerosol types. Furthermore, by examining the spread of
Estimating Cosmological Parameter Covariance
Taylor, Andy
2014-01-01
We investigate the bias and error in estimates of the cosmological parameter covariance matrix, due to sampling or modelling the data covariance matrix, for likelihood width and peak scatter estimators. We show that these estimators do not coincide unless the data covariance is exactly known. For sampled data covariances, with Gaussian distributed data and parameters, the parameter covariance matrix estimated from the width of the likelihood has a Wishart distribution, from which we derive the mean and covariance. This mean is biased and we propose an unbiased estimator of the parameter covariance matrix. Comparing our analytic results to a numerical Wishart sampler of the data covariance matrix we find excellent agreement. An accurate ansatz for the mean parameter covariance for the peak scatter estimator is found, and we fit its covariance to our numerical analysis. The mean is again biased and we propose an unbiased estimator for the peak parameter covariance. For sampled data covariances the width estimat...
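The bias the record refers to has a well-known Wishart form that is easy to demonstrate numerically: the inverse of a sample data covariance over-estimates the true inverse covariance (and hence makes the likelihood-width parameter covariance too small) by a factor that depends only on the number of samples and the data dimension. A small Monte Carlo check on synthetic Gaussian data:

```python
import numpy as np

rng = np.random.default_rng(4)

p, n_s, trials = 5, 40, 2000       # data dimension, samples per covariance
C_true = np.eye(p)

# Average the inverse of many sampled data covariances
inv_sum = np.zeros((p, p))
for _ in range(trials):
    X = rng.normal(size=(n_s, p))
    C_hat = np.cov(X, rowvar=False)
    inv_sum += np.linalg.inv(C_hat)
inv_mean = inv_sum / trials

# Known Wishart result: E[C_hat^{-1}] = (n_s - 1) / (n_s - p - 2) * C_true^{-1};
# dividing by this factor gives an unbiased estimator of the inverse covariance.
factor = (n_s - 1) / (n_s - p - 2)
print(np.diag(inv_mean).mean(), factor)   # both close to 1.18
```

This is the simplest instance of the kind of analytic debiasing the paper develops for the full parameter covariance matrix.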
Manifestly covariant electromagnetism
Hillion, P. [Institut Henri Poincaré, Le Vesinet (France)]
1999-03-01
The conventional relativistic formulation of electromagnetism is covariant under the full Lorentz group. But relativity requires covariance only under the proper Lorentz group and the authors present here the formalism covariant under the complex rotation group isomorphic to the proper Lorentz group. The authors discuss successively Maxwell's equations, constitutive relations and potential functions. A comparison is made with the usual formulation.
Condition Number Regularized Covariance Estimation.
Won, Joong-Ho; Lim, Johan; Kim, Seung-Jean; Rajaratnam, Bala
2013-06-01
Estimation of high-dimensional covariance matrices is known to be a difficult problem, has many applications, and is of current interest to the larger statistics community. In many applications, including the so-called "large p small n" setting, the estimate of the covariance matrix is required to be not only invertible, but also well-conditioned. Although many regularization schemes attempt to do this, none of them address the ill-conditioning problem directly. In this paper, we propose a maximum likelihood approach, with the direct goal of obtaining a well-conditioned estimator. No sparsity assumption on either the covariance matrix or its inverse is imposed, thus making our procedure more widely applicable. We demonstrate that the proposed regularization scheme is computationally efficient, yields a type of Steinian shrinkage estimator, and has a natural Bayesian interpretation. We investigate the theoretical properties of the regularized covariance estimator comprehensively, including its regularization path, and proceed to develop an approach that adaptively determines the level of regularization that is required. Finally, we demonstrate the performance of the regularized estimator in decision-theoretic comparisons and in the financial portfolio optimization setting. The proposed approach has desirable properties, and can serve as a competitive procedure, especially when the sample size is small and when a well-conditioned estimator is required.
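The core operation can be sketched in a few lines. The sketch below is a simplified stand-in for the paper's ML solution: it just clips the spectrum of the sample covariance so the condition number meets a target bound, whereas the paper derives the optimal eigenvalue truncation from the likelihood itself:

```python
import numpy as np

rng = np.random.default_rng(5)

def clip_condition(S, kappa_max):
    """Simplified condition-number regularization (not the paper's exact ML
    solution): raise small eigenvalues so that lmax/lmin <= kappa_max."""
    eigval, Q = np.linalg.eigh(S)
    floor = eigval.max() / kappa_max
    eigval_reg = np.clip(eigval, floor, None)
    return Q @ np.diag(eigval_reg) @ Q.T

# "Large p small n": the sample covariance is singular (infinite condition number)
p, n = 30, 20
X = rng.normal(size=(n, p))
S = np.cov(X, rowvar=False)

S_reg = clip_condition(S, kappa_max=100.0)
lam = np.linalg.eigvalsh(S_reg)
print(lam.max() / lam.min())     # bounded by 100, and S_reg is invertible
```

Unlike sparsity-based schemes, nothing here assumes zeros in the covariance or its inverse; the regularization acts only on the spectrum, which is what "addressing ill-conditioning directly" means.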
Covariant Hamiltonian field theory
Giachetta, G; Sardanashvily, G
1999-01-01
We study the relationship between the equations of first order Lagrangian field theory on fiber bundles and the covariant Hamilton equations on the finite-dimensional polysymplectic phase space of covariant Hamiltonian field theory. The main peculiarity of these Hamilton equations lies in the fact that, for degenerate systems, they contain additional gauge fixing conditions. We develop the BRST extension of the covariant Hamiltonian formalism, characterized by a Lie superalgebra of BRST and anti-BRST symmetries.
Bergshoeff, E.; Pope, C.N.; Stelle, K.S.
1990-01-01
We discuss the notion of higher-spin covariance in w∞ gravity. We show how a recently proposed covariant w∞ gravity action can be obtained from non-chiral w∞ gravity by making field redefinitions that introduce new gauge-field components with corresponding new gauge transformations.
Frasinski, Leszek J.
2016-08-01
Recent technological advances in the generation of intense femtosecond pulses have made covariance mapping an attractive analytical technique. The laser pulses available are so intense that often thousands of ionisation and Coulomb explosion events will occur within each pulse. To understand the physics of these processes the photoelectrons and photoions need to be correlated, and covariance mapping is well suited for operating at the high counting rates of these laser sources. Partial covariance is particularly useful in experiments with x-ray free electron lasers, because it is capable of suppressing pulse fluctuation effects. A variety of covariance mapping methods is described: simple, partial (single- and multi-parameter), sliced, contingent and multi-dimensional. The relationship to coincidence techniques is discussed. Covariance mapping has been used in many areas of science and technology: inner-shell excitation and Auger decay, multiphoton and multielectron ionisation, time-of-flight and angle-resolved spectrometry, infrared spectroscopy, nuclear magnetic resonance imaging, stimulated Raman scattering, directional gamma ray sensing, welding diagnostics and brain connectivity studies (connectomics). This review gives practical advice for implementing the technique and interpreting the results, including its limitations and instrumental constraints. It also summarises recent theoretical studies, highlights unsolved problems and outlines a personal view on the most promising research directions.
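At its simplest, a covariance map is computed shot by shot: average the outer product of the spectra and subtract the outer product of the averages, so that fragments produced in the same event show up as off-diagonal peaks. A self-contained toy version with synthetic time-of-flight spectra (channel numbers, rates, and counts are invented):

```python
import numpy as np

rng = np.random.default_rng(6)

# Simulate spectra over many laser shots: one correlated ion pair plus
# uncorrelated background counts in every channel
shots, bins = 20_000, 64
i, j = 10, 40                        # channels of the correlated fragment pair
spectra = rng.poisson(1.0, size=(shots, bins)).astype(float)
pair = rng.poisson(2.0, size=shots)  # each explosion fills both channels
spectra[:, i] += pair
spectra[:, j] += pair

# Simple covariance map: C(x, y) = <X(x) X(y)> - <X(x)> <X(y)> over shots
mean = spectra.mean(axis=0)
cov_map = spectra.T @ spectra / shots - np.outer(mean, mean)

print(cov_map[i, j])   # off-diagonal peak near Var(pair) = 2
```

Uncorrelated channels average to zero, so the map isolates genuine correlations even at counting rates far too high for coincidence techniques; partial covariance extends this by additionally regressing out a fluctuating parameter such as pulse energy.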
Covariant Bardeen perturbation formalism
Vitenti, S. D. P.; Falciano, F. T.; Pinto-Neto, N.
2014-05-01
In a previous work we obtained a set of necessary conditions for the linear approximation in cosmology. Here we discuss the relations of this approach with the so-called covariant perturbations. It is often argued in the literature that one of the main advantages of the covariant approach to describe cosmological perturbations is that the Bardeen formalism is coordinate dependent. In this paper we will reformulate the Bardeen approach in a completely covariant manner. For that, we introduce the notion of pure and mixed tensors, which yields an adequate language to treat both perturbative approaches in a common framework. We then stress that in the referred covariant approach, one necessarily introduces an additional hypersurface choice to the problem. Using our mixed and pure tensors approach, we are able to construct a one-to-one map relating the usual gauge dependence of the Bardeen formalism with the hypersurface dependence inherent to the covariant approach. Finally, through the use of this map, we define full nonlinear tensors that at first order correspond to the three known gauge invariant variables Φ, Ψ and Ξ, which are simultaneously foliation and gauge invariant. We then stress that the use of the proposed mixed tensors allows one to construct simultaneously gauge and hypersurface invariant variables at any order.
Covariant canonical quantization
Hippel, G.M. von [University of Regina, Department of Physics, Regina, Saskatchewan (Canada); Wohlfarth, M.N.R. [Universitaet Hamburg, Institut fuer Theoretische Physik, Hamburg (Germany)
2006-09-15
We present a manifestly covariant quantization procedure based on the de Donder-Weyl Hamiltonian formulation of classical field theory. This procedure agrees with conventional canonical quantization only if the parameter space is d=1 dimensional time. In d>1 quantization requires a fundamental length scale, and any bosonic field generates a spinorial wave function, leading to the purely quantum-theoretical emergence of spinors as a byproduct. We provide a probabilistic interpretation of the wave functions for the fields, and we apply the formalism to a number of simple examples. These show that covariant canonical quantization produces both the Klein-Gordon and the Dirac equation, while also predicting the existence of discrete towers of identically charged fermions with different masses. Covariant canonical quantization can thus be understood as a "first" or pre-quantization within the framework of conventional QFT.
Covariance Applications with Kiwi
Mattoon, C. M.; Brown, D.; Elliott, J. B.
2012-05-01
The Computational Nuclear Physics group at Lawrence Livermore National Laboratory (LLNL) is developing a new tool, named `Kiwi', that is intended as an interface between the covariance data increasingly available in major nuclear reaction libraries (including ENDF and ENDL) and large-scale Uncertainty Quantification (UQ) studies. Kiwi is designed to integrate smoothly into large UQ studies, using the covariance matrix to generate multiple variations of nuclear data. The code has been tested using critical assemblies as a test case, and is being integrated into LLNL's quality assurance and benchmarking for nuclear data.
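The central step Kiwi automates, generating correlated variations of nuclear data from a library covariance matrix, is essentially Cholesky sampling of a multivariate normal. A minimal sketch (the cross-section values, uncertainties, and correlation structure below are hypothetical, not from ENDF or ENDL):

```python
import numpy as np

rng = np.random.default_rng(7)

# Nominal cross-section values on a coarse energy grid (hypothetical numbers)
sigma0 = np.array([1.20, 0.95, 0.70, 0.40])

# A covariance matrix of the kind read from a library: 5% relative
# uncertainty with correlation decaying between energy groups
rel_unc = 0.05
corr = np.array([[1.0, 0.8, 0.5, 0.2],
                 [0.8, 1.0, 0.8, 0.5],
                 [0.5, 0.8, 1.0, 0.8],
                 [0.2, 0.5, 0.8, 1.0]])
std = rel_unc * sigma0
cov = corr * np.outer(std, std)

# UQ-style sampling: draw many correlated variations of the nominal data,
# each of which would be fed to a downstream transport/criticality run
L = np.linalg.cholesky(cov)
variations = sigma0 + rng.normal(size=(1000, len(sigma0))) @ L.T
print(variations.mean(axis=0))   # close to sigma0
```

Each row of `variations` is one self-consistent perturbed data set; the spread of the downstream results over all rows is the propagated nuclear-data uncertainty.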
Generalized Linear Covariance Analysis
Carpenter, James R.; Markley, F. Landis
2014-01-01
This talk presents a comprehensive approach to filter modeling for generalized covariance analysis of both batch least-squares and sequential estimators. We review and extend in two directions the results of prior work that allowed for partitioning of the state space into "solve-for" and "consider" parameters, accounted for differences between the formal values and the true values of the measurement noise, process noise, and a priori solve-for and consider covariances, and explicitly partitioned the errors into subspaces containing only the influence of the measurement noise, process noise, and solve-for and consider covariances. In this work, we explicitly add sensitivity analysis to this prior work, and relax an implicit assumption that the batch estimator's epoch time occurs prior to the definitive span. We also apply the method to an integrated orbit and attitude problem, in which gyro and accelerometer errors, though not estimated, influence the orbit determination performance. We illustrate our results using two graphical presentations, which we call the "variance sandpile" and the "sensitivity mosaic", and we compare the linear covariance results to confidence intervals associated with ensemble statistics from a Monte Carlo analysis.
Cross-covariance functions for multivariate geostatistics
Genton, Marc G.
2015-05-01
Continuously indexed datasets with multiple variables have become ubiquitous in the geophysical, ecological, environmental and climate sciences, and pose substantial analysis challenges to scientists and statisticians. For many years, scientists developed models that aimed at capturing the spatial behavior for an individual process; only within the last few decades has it become commonplace to model multiple processes jointly. The key difficulty is in specifying the cross-covariance function, that is, the function responsible for the relationship between distinct variables. Indeed, these cross-covariance functions must be chosen to be consistent with marginal covariance functions in such a way that the second-order structure always yields a nonnegative definite covariance matrix. We review the main approaches to building cross-covariance models, including the linear model of coregionalization, convolution methods, the multivariate Matérn and nonstationary and space-time extensions of these among others. We additionally cover specialized constructions, including those designed for asymmetry, compact support and spherical domains, with a review of physics-constrained models. We illustrate select models on a bivariate regional climate model output example for temperature and pressure, along with a bivariate minimum and maximum temperature observational dataset; we compare models by likelihood value as well as via cross-validation co-kriging studies. The article closes with a discussion of unsolved problems. © Institute of Mathematical Statistics, 2015.
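The simplest of the constructions reviewed, the linear model of coregionalization (LMC), makes the nonnegative-definiteness requirement concrete: a cross-covariance built as a sum of valid univariate correlations weighted by positive semi-definite coregionalization matrices is automatically valid. A small numerical check (correlation ranges and coregionalization vectors are invented for illustration):

```python
import numpy as np

# Linear model of coregionalization: C(h) = sum_k rho_k(h) * A_k A_k^T,
# with each rho_k a valid univariate correlation function and A_k A_k^T
# a positive semi-definite coregionalization matrix
def rho_exp(h, range_):                      # exponential correlation
    return np.exp(-np.abs(h) / range_)

A1 = np.array([[1.0], [0.6]])                # e.g. temperature & pressure loadings
A2 = np.array([[0.2], [0.9]])

def cross_cov(h):
    return rho_exp(h, 1.0) * (A1 @ A1.T) + rho_exp(h, 3.0) * (A2 @ A2.T)

# Assemble the joint covariance of both variables at several sites and check
# that the construction is indeed nonnegative definite
sites = np.linspace(0.0, 5.0, 8)
n, p = len(sites), 2
big = np.zeros((n * p, n * p))
for a in range(n):
    for b in range(n):
        big[a*p:(a+1)*p, b*p:(b+1)*p] = cross_cov(sites[a] - sites[b])

print(bool(np.linalg.eigvalsh(big).min() >= -1e-10))   # valid joint covariance
```

Choosing the marginal and cross terms independently would not guarantee this property; the LMC buys validity at the cost of flexibility, which is what motivates the multivariate Matérn and the other constructions the review covers.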
Saltas, Ippocratis D
2016-01-01
We derive the 1-loop effective action of the cubic Galileon coupled to quantum-gravitational fluctuations in a background and gauge-independent manner, employing the covariant framework of DeWitt and Vilkovisky. Although the bare action respects shift symmetry, the coupling to gravity induces an effective mass to the scalar, of the order of the cosmological constant, as a direct result of the non-flat field-space metric, the latter ensuring the field-reparametrization invariance of the formalism. Within a gauge-invariant regularization scheme, we discover novel, gravitationally induced non-Galileon higher-derivative interactions in the effective action. These terms, previously unnoticed within standard, non-covariant frameworks, are not Planck suppressed. Unless tuned to be sub-dominant, their presence could have important implications for the classical and quantum phenomenology of the theory.
Covariant approximation averaging
Shintani, Eigo; Blum, Thomas; Izubuchi, Taku; Jung, Chulwoo; Lehner, Christoph
2014-01-01
We present a new class of statistical error reduction techniques for Monte-Carlo simulations. Using covariant symmetries, we show that correlation functions can be constructed from inexpensive approximations without introducing any systematic bias in the final result. We introduce a new class of covariant approximation averaging techniques, known as all-mode averaging (AMA), in which the approximation takes account of contributions of all eigenmodes through the inverse of the Dirac operator computed from the conjugate gradient method with a relaxed stopping condition. In this paper we compare the performance and computational cost of our new method with traditional methods using correlation functions and masses of the pion, nucleon, and vector meson in $N_f=2+1$ lattice QCD using domain-wall fermions. This comparison indicates that AMA significantly reduces statistical errors in Monte-Carlo calculations over conventional methods for the same cost.
Using Analysis of Covariance (ANCOVA) with Fallible Covariates
Culpepper, Steven Andrew; Aguinis, Herman
2011-01-01
Analysis of covariance (ANCOVA) is used widely in psychological research implementing nonexperimental designs. However, when covariates are fallible (i.e., measured with error), which is the norm, researchers must choose from among 3 inadequate courses of action: (a) know that the assumption that covariates are perfectly reliable is violated but…
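The fallible-covariate problem the abstract describes can be seen in a few lines: regressing on a covariate measured with error attenuates its slope by the covariate's reliability. This is a generic errors-in-variables demonstration with made-up numbers, not the authors' proposed correction.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

# True covariate T drives the outcome; the observed covariate X is fallible.
T = rng.normal(size=n)
y = 2.0 * T + rng.normal(scale=0.5, size=n)        # true slope beta = 2
err_var = 0.25
X = T + rng.normal(scale=np.sqrt(err_var), size=n) # reliability = 1/(1+0.25) = 0.8

# OLS slope of y on the fallible covariate is attenuated toward zero.
slope = np.cov(X, y, bias=True)[0, 1] / X.var()
expected = 2.0 * (1.0 / (1.0 + err_var))           # classical attenuation: 1.6
```

With perfectly reliable covariates (`err_var = 0`) the slope recovers the true value of 2; any measurement error biases the ANCOVA adjustment.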
Kinkhabwala, Ali
2013-01-01
The most fundamental problem in statistics is the inference of an unknown probability distribution from a finite number of samples. For a specific observed data set, answers to the following questions would be desirable: (1) Estimation: Which candidate distribution provides the best fit to the observed data?, (2) Goodness-of-fit: How concordant is this distribution with the observed data?, and (3) Uncertainty: How concordant are other candidate distributions with the observed data? A simple unified approach for univariate data that addresses these traditionally distinct statistical notions is presented called "maximum fidelity". Maximum fidelity is a strict frequentist approach that is fundamentally based on model concordance with the observed data. The fidelity statistic is a general information measure based on the coordinate-independent cumulative distribution and critical yet previously neglected symmetry considerations. An approximation for the null distribution of the fidelity allows its direct conversi...
Covariant Magnetic Connection Hypersurfaces
Pegoraro, F
2016-01-01
In the single fluid, nonrelativistic, ideal-Magnetohydrodynamic (MHD) plasma description magnetic field lines play a fundamental role by defining dynamically preserved "magnetic connections" between plasma elements. Here we show how the concept of magnetic connection needs to be generalized in the case of a relativistic MHD description where we require covariance under arbitrary Lorentz transformations. This is performed by defining 2-D {\\it magnetic connection hypersurfaces} in the 4-D Minkowski space. This generalization accounts for the loss of simultaneity between spatially separated events in different frames and is expected to provide a powerful insight into the 4-D geometry of electromagnetic fields when ${\\bf E} \\cdot {\\bf B} = 0$.
Universality of Covariance Matrices
Pillai, Natesh S
2011-01-01
We prove the universality of covariance matrices of the form $H_{N \\times N} = \\frac{1}{N} X^T X$, where $[X]_{M \\times N}$ is a rectangular matrix with independent real-valued entries $[x_{ij}]$ satisfying $\\mathbb{E}\\, x_{ij} = 0$ and $\\mathbb{E}\\, x^2_{ij} = \\frac{1}{M}$, with $N, M \\to \\infty$. Furthermore, it is assumed that these entries have sub-exponential tails. We will study the asymptotics in the regime $N/M = d_N \\in (0,\\infty)$, $\\lim_{N\\to \\infty} d_N$ …
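The bulk spectrum of such sample covariance matrices is described by the Marchenko-Pastur law. The sketch below uses the standard normalization (iid entries of variance $1/M$, $H = X^T X$, extra prefactors dropped since the abstract's formula was garbled in extraction) and checks that the empirical eigenvalues stay inside the predicted support edges; the dimensions are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(2)
M, N = 4000, 1000                                  # aspect ratio d = N/M = 0.25

# iid entries with mean 0 and variance 1/M, as in the universality setup.
X = rng.normal(scale=1.0 / np.sqrt(M), size=(M, N))
H = X.T @ X                                        # N x N sample covariance matrix

eigs = np.linalg.eigvalsh(H)

# Marchenko-Pastur support for this normalization: [(1-sqrt(d))^2, (1+sqrt(d))^2].
d = N / M
edge_lo, edge_hi = (1 - d ** 0.5) ** 2, (1 + d ** 0.5) ** 2
```

Universality results of the kind proved in the paper say these edge statistics do not depend on the Gaussian choice above, only on the moment and tail conditions.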
Covariant Projective Extensions
许天周; 梁洁
2003-01-01
The theory of crossed products of C*-algebras by groups of automorphisms is a well-developed area of the theory of operator algebras. Given the importance and the success of that theory, it is natural to attempt to extend it to a more general situation by, for example, developing a theory of crossed products of C*-algebras by semigroups of automorphisms, or even of endomorphisms. Indeed, in recent years a number of papers have appeared that are concerned with such non-classical theories of covariance algebras; see, for instance, [1-3].
Earth Observing System Covariance Realism
Zaidi, Waqar H.; Hejduk, Matthew D.
2016-01-01
The purpose of covariance realism is to properly size a primary object's covariance in order to add validity to the calculation of the probability of collision. The covariance realism technique in this paper consists of three parts: collection/calculation of definitive state estimates through orbit determination, calculation of covariance realism test statistics at each covariance propagation point, and proper assessment of those test statistics. An empirical cumulative distribution function (ECDF) Goodness-of-Fit (GOF) method is employed to determine if a covariance is properly sized by comparing the empirical distribution of Mahalanobis distance calculations to the hypothesized parent 3-DoF chi-squared distribution. To realistically size a covariance for collision probability calculations, this study uses a state noise compensation algorithm that adds process noise to the definitive epoch covariance to account for uncertainty in the force model. Process noise is added until the GOF tests pass a group significance level threshold. The results of this study indicate that when outliers attributed to persistently high or extreme levels of solar activity are removed, the aforementioned covariance realism compensation method produces a tuned covariance with up to 80 to 90% of the covariance propagation timespan passing the GOF tests (against a 60% minimum passing threshold), a quite satisfactory and useful result.
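The core test statistic in this abstract is easy to reproduce: if state errors really are distributed according to the claimed covariance, their squared Mahalanobis distances follow a 3-DoF chi-squared distribution. A minimal numpy sketch, with a hypothetical covariance matrix standing in for a real orbit-determination product:

```python
import numpy as np

rng = np.random.default_rng(3)

# A hypothetical 3x3 "realistic" covariance, and errors drawn from it.
P = np.array([[4.0, 1.0, 0.0],
              [1.0, 2.0, 0.5],
              [0.0, 0.5, 1.0]])
L = np.linalg.cholesky(P)
errors = (L @ rng.normal(size=(3, 50_000))).T      # 50k simulated state errors

# Squared Mahalanobis distances; for a properly sized P they are chi-squared(3).
Pinv = np.linalg.inv(P)
m2 = np.einsum('ij,jk,ik->i', errors, Pinv, errors)

mean_m2, var_m2 = m2.mean(), m2.var()              # parent moments: mean 3, var 6
```

An undersized covariance (e.g. testing against `0.5 * P`) would inflate these moments, which is the signal the paper's ECDF goodness-of-fit assessment is designed to catch.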
Land, M C
2001-01-01
This paper examines the Stark effect, as a first order perturbation of manifestly covariant hydrogen-like bound states. These bound states are solutions to a relativistic Schr\"odinger equation with invariant evolution parameter, and represent mass eigenstates whose eigenvalues correspond to the well-known energy spectrum of the non-relativistic theory. In analogy to the nonrelativistic case, the off-diagonal perturbation leads to a lifting of the degeneracy in the mass spectrum. In the covariant case, not only do the spectral lines split, but they acquire an imaginary part which is linear in the applied electric field, thus revealing induced bound state decay in first order perturbation theory. This imaginary part results from the coupling of the external field to the non-compact boost generator. In order to recover the conventional first order Stark splitting, we must include a scalar potential term. This term may be understood as a fifth gauge potential, which compensates for dependence of gauge transformat...
Optimal covariate designs theory and applications
Das, Premadhis; Mandal, Nripes Kumar; Sinha, Bikas Kumar
2015-01-01
This book primarily addresses the optimality aspects of covariate designs. A covariate model is a combination of ANOVA and regression models. Optimal estimation of the parameters of the model using a suitable choice of designs is of great importance; as such choices allow experimenters to extract maximum information for the unknown model parameters. The main emphasis of this monograph is to start with an assumed covariate model in combination with some standard ANOVA set-ups such as CRD, RBD, BIBD, GDD, BTIBD, BPEBD, cross-over, multi-factor, split-plot and strip-plot designs, treatment control designs, etc. and discuss the nature and availability of optimal covariate designs. In some situations, optimal estimations of both ANOVA and the regression parameters are provided. Global optimality and D-optimality criteria are mainly used in selecting the design. The standard optimality results of both discrete and continuous set-ups have been adapted, and several novel combinatorial techniques have been applied for...
Hubeny, Veronika E
2014-01-01
A recently explored interesting quantity in AdS/CFT, dubbed 'residual entropy', characterizes the amount of collective ignorance associated with either boundary observers restricted to finite time duration, or bulk observers who lack access to a certain spacetime region. However, the previously-proposed expression for this quantity involving variation of boundary entanglement entropy (subsequently renamed to 'differential entropy') works only in a severely restrictive context. We explain the key limitations, arguing that in general, differential entropy does not correspond to residual entropy. Given that the concept of residual entropy as collective ignorance transcends these limitations, we identify two correspondingly robust, covariantly-defined constructs: a 'strip wedge' associated with boundary observers and a 'rim wedge' associated with bulk observers. These causal sets are well-defined in arbitrary time-dependent asymptotically AdS spacetimes in any number of dimensions. We discuss their relation, spec...
Deriving covariant holographic entanglement
Dong, Xi; Lewkowycz, Aitor; Rangamani, Mukund
2016-11-01
We provide a gravitational argument in favour of the covariant holographic entanglement entropy proposal. In general time-dependent states, the proposal asserts that the entanglement entropy of a region in the boundary field theory is given by a quarter of the area of a bulk extremal surface in Planck units. The main element of our discussion is an implementation of an appropriate Schwinger-Keldysh contour to obtain the reduced density matrix (and its powers) of a given region, as is relevant for the replica construction. We map this contour into the bulk gravitational theory, and argue that the saddle point solutions of these replica geometries lead to a consistent prescription for computing the field theory Rényi entropies. In the limiting case where the replica index is taken to unity, a local analysis suffices to show that these saddles lead to the extremal surfaces of interest. We also comment on various properties of holographic entanglement that follow from this construction.
Covariant Macroscopic Quantum Geometry
Hogan, Craig J
2012-01-01
A covariant noncommutative algebra of position operators is presented, and interpreted as the macroscopic limit of a geometry that describes a collective quantum behavior of the positions of massive bodies in a flat emergent space-time. The commutator defines a quantum-geometrical relationship between world lines that depends on their separation and relative velocity, but on no other property of the bodies, and leads to a transverse uncertainty of the geometrical wave function that increases with separation. The number of geometrical degrees of freedom in a space-time volume scales holographically, as the surface area in Planck units. Ongoing branching of the wave function causes fluctuations in transverse position, shared coherently among bodies with similar trajectories. The theory can be tested using appropriately configured Michelson interferometers.
Saltas, Ippocratis D.; Vitagliano, Vincenzo
2017-05-01
We derive the 1-loop effective action of the cubic Galileon coupled to quantum-gravitational fluctuations in a background and gauge-independent manner, employing the covariant framework of DeWitt and Vilkovisky. Although the bare action respects shift symmetry, the coupling to gravity induces an effective mass to the scalar, of the order of the cosmological constant, as a direct result of the nonflat field-space metric, the latter ensuring the field-reparametrization invariance of the formalism. Within a gauge-invariant regularization scheme, we discover novel, gravitationally induced non-Galileon higher-derivative interactions in the effective action. These terms, previously unnoticed within standard, noncovariant frameworks, are not Planck suppressed. Unless tuned to be subdominant, their presence could have important implications for the classical and quantum phenomenology of the theory.
Covariant holographic entanglement negativity
Chaturvedi, Pankaj; Sengupta, Gautam
2016-01-01
We conjecture a holographic prescription for the covariant entanglement negativity of $d$-dimensional conformal field theories dual to non-static bulk $AdS_{d+1}$ gravitational configurations in the framework of the $AdS/CFT$ correspondence. Application of our conjecture to an $AdS_3/CFT_2$ scenario involving bulk rotating BTZ black holes exactly reproduces the entanglement negativity of the corresponding $(1+1)$-dimensional conformal field theories and precisely captures the distillable quantum entanglement. Interestingly, our conjecture for the scenario involving dual bulk extremal rotating BTZ black holes also accurately leads to the entanglement negativity for the chiral half of the corresponding $(1+1)$-dimensional conformal field theory at zero temperature.
Bayes linear covariance matrix adjustment
Wilkinson, Darren J
1995-01-01
In this thesis, a Bayes linear methodology for the adjustment of covariance matrices is presented and discussed. A geometric framework for quantifying uncertainties about covariance matrices is set up, and an inner-product for spaces of random matrices is motivated and constructed. The inner-product on this space captures aspects of our beliefs about the relationship between covariance matrices of interest to us, providing a structure rich enough for us to adjust beliefs about unknown matrices in the light of data such as sample covariance matrices, exploiting second-order exchangeability and related specifications to obtain representations allowing analysis. Adjustment is associated with orthogonal projection, and illustrated with examples of adjustments for some common problems. The problem of adjusting the covariance matrices underlying exchangeable random vectors is tackled and discussed. Learning about the covariance matrices associated with multivariate time series dynamic linear models is shown to be a...
Covariant electromagnetic field lines
Hadad, Y.; Cohen, E.; Kaminer, I.; Elitzur, A. C.
2017-08-01
Faraday introduced electric field lines as a powerful tool for understanding the electric force, and these field lines are still used today in classrooms and textbooks teaching the basics of electromagnetism within the electrostatic limit. However, despite attempts at generalizing this concept beyond the electrostatic limit, such a fully relativistic field line theory still appears to be missing. In this work, we propose such a theory and define covariant electromagnetic field lines that naturally extend electric field lines to relativistic systems and general electromagnetic fields. We derive a closed-form formula for the field lines curvature in the vicinity of a charge, and show that it is related to the world line of the charge. This demonstrates how the kinematics of a charge can be derived from the geometry of the electromagnetic field lines. Such a theory may also provide new tools in modeling and analyzing electromagnetic phenomena, and may entail new insights regarding long-standing problems such as radiation-reaction and self-force. In particular, the electromagnetic field lines curvature has the attractive property of being non-singular everywhere, thus eliminating all self-field singularities without using renormalization techniques.
The Performance Analysis Based on SAR Sample Covariance Matrix
Esra Erten
2012-03-01
Multi-channel systems appear in several fields of application in science. In the Synthetic Aperture Radar (SAR) context, multi-channel systems may refer to different domains, such as multi-polarization, multi-interferometric or multi-temporal data, or even a combination of them. Due to the inherent speckle phenomenon present in SAR images, the statistical description of the data is almost mandatory for its utilization. The complex images acquired over natural media present in general zero-mean circular Gaussian characteristics. In this case, second-order statistics such as the multi-channel covariance matrix fully describe the data. For practical situations, however, the covariance matrix has to be estimated using a limited number of samples, and this sample covariance matrix follows the complex Wishart distribution. In this context, the eigendecomposition of the multi-channel covariance matrix has been shown to be of high relevance in different areas regarding the physical properties of the imaged scene. Specifically, the maximum eigenvalue of the covariance matrix has been frequently used in applications such as target or change detection, estimation of the dominant scattering mechanism in polarimetric data, moving target indication, etc. In this paper, the statistical behavior of the maximum eigenvalue derived from the eigendecomposition of the sample multi-channel covariance matrix of multi-channel SAR images is presented in a form simplified for the SAR community. Validation is performed against simulated data, and examples of estimation and detection problems using the analytical expressions are given as well.
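The quantity studied above, the maximum eigenvalue of a complex Wishart-distributed sample covariance matrix, can be simulated directly. The sketch below draws zero-mean circular complex Gaussian channel vectors with a made-up 3-channel covariance and forms the sample estimate; the channel count, look count, and covariance values are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(4)
p, n = 3, 512                                      # 3 channels, 512 looks

# Zero-mean circular complex Gaussian channel vectors with a known covariance.
C = np.array([[2.0, 0.8, 0.0],
              [0.8, 1.0, 0.3],
              [0.0, 0.3, 0.5]], dtype=complex)
L = np.linalg.cholesky(C)
z = L @ (rng.normal(size=(p, n)) + 1j * rng.normal(size=(p, n))) / np.sqrt(2)

# Sample covariance matrix; n * S follows a complex Wishart distribution.
S = z @ z.conj().T / n

# Maximum eigenvalue: the detection statistic highlighted in the abstract.
lam_max = np.linalg.eigvalsh(S).max()
lam_true = np.linalg.eigvalsh(C).max()
```

Repeating the draw many times would trace out the sampling distribution of `lam_max` whose analytical form the paper simplifies for practitioners.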
Quantification of Covariance in Tropical Cyclone Activity across Teleconnected Basins
Tolwinski-Ward, S. E.; Wang, D.
2015-12-01
Rigorous statistical quantification of natural hazard covariance across regions has important implications for risk management, and is also of fundamental scientific interest. We present a multivariate Bayesian Poisson regression model for inferring the covariance in tropical cyclone (TC) counts across multiple ocean basins and across Saffir-Simpson intensity categories. Such covariability results from the influence of large-scale modes of climate variability on local environments that can alternately suppress or enhance TC genesis and intensification, and our model also simultaneously quantifies the covariance of TC counts with various climatic modes in order to deduce the source of inter-basin TC covariability. The model explicitly treats the time-dependent uncertainty in observed maximum sustained wind data, and hence the nominal intensity category of each TC. Differences in annual TC counts as measured by different agencies are also formally addressed. The probabilistic output of the model can be probed for probabilistic answers to such questions as:
- Does the relationship between different categories of TCs differ statistically by basin?
- Which climatic predictors have significant relationships with TC activity in each basin?
- Are the relationships between counts in different basins conditionally independent given the climatic predictors, or are there other factors at play affecting inter-basin covariability?
- How can a portfolio of insured property be optimized across space to minimize risk?
Although we present results of our model applied to TCs, the framework is generalizable to covariance estimation between multivariate counts of natural hazards across regions and/or across peril types.
Covariant representations of subproduct systems
Viselter, Ami
2010-01-01
A celebrated theorem of Pimsner states that a covariant representation $T$ of a $C^*$-correspondence $E$ extends to a $C^*$-representation of the Toeplitz algebra of $E$ if and only if $T$ is isometric. This paper is mainly concerned with finding conditions for a covariant representation of a \\emph{subproduct system} to extend to a $C^*$-representation of the Toeplitz algebra. This framework is much more general than the former. We are able to find sufficient conditions, and show that in important special cases, they are also necessary. Further results include the universality of the tensor algebra, dilations of completely contractive covariant representations, Wold decompositions and von Neumann inequalities.
Asymptotic behavior of the likelihood function of covariance matrices of spatial Gaussian processes
Zimmermann, Ralf
2010-01-01
The covariance structure of spatial Gaussian predictors (aka Kriging predictors) is generally modeled by parameterized covariance functions; the associated hyperparameters in turn are estimated via the method of maximum likelihood. In this work, the asymptotic behavior of the maximum likelihood …: optimally trained nondegenerate spatial Gaussian processes cannot feature arbitrarily ill-conditioned correlation matrices. The implication of this theorem for Kriging hyperparameter optimization is exposed. A nonartificial example is presented, where maximum likelihood-based Kriging model training …
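The ill-conditioning phenomenon at issue is easy to reproduce: as the correlation length scale of a Gaussian (squared-exponential) kernel grows relative to the point spacing, the correlation matrix approaches rank deficiency. A small numpy sketch with hypothetical grid and length-scale values:

```python
import numpy as np

# 30 evenly spaced 1-D design points and squared pairwise distances.
x = np.linspace(0.0, 1.0, 30)
h2 = (x[:, None] - x[None, :]) ** 2

# Gaussian correlation model R_ij = exp(-h^2 / theta^2); conditioning
# degrades dramatically as the length scale theta grows.
conds = [np.linalg.cond(np.exp(-h2 / theta ** 2)) for theta in (0.05, 0.5, 5.0)]
```

The theorem summarized above says the maximum likelihood criterion itself steers hyperparameter training away from the near-singular end of this spectrum.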
General covariance in computational electrodynamics
Shyroki, Dzmitry; Lægsgaard, Jesper; Bang, Ole;
2007-01-01
We advocate the generally covariant formulation of Maxwell equations as underpinning some recent advances in computational electrodynamics—in the dimensionality reduction for separable structures; in mesh truncation for finite-difference computations; and in adaptive coordinate mapping as opposed...
Calcul Stochastique Covariant à Sauts & Calcul Stochastique à Sauts Covariants [Covariant Stochastic Calculus with Jumps & Stochastic Calculus with Covariant Jumps]
Maillard-Teyssier, Laurence
2003-01-01
We propose a stochastic covariant calculus for càdlàg semimartingales in the tangent bundle $TM$ over a manifold $M$. A connection on $M$ allows us to define an intrinsic derivative of a $C^1$ curve $(Y_t)$ in $TM$, the covariant derivative. More precisely, it is the derivative of $(Y_t)$ seen in a frame moving parallelly along its projection curve $(x_t)$ on $M$. With the transfer principle, Norris defined the stochastic covariant integration along a continuous semimartingale in $TM$. We describe t...
Covariate-free and Covariate-dependent Reliability.
Bentler, Peter M
2016-12-01
Classical test theory reliability coefficients are said to be population specific. Reliability generalization, a meta-analysis method, is the main procedure for evaluating the stability of reliability coefficients across populations. A new approach is developed to evaluate the degree of invariance of reliability coefficients to population characteristics. Factor or common variance of a reliability measure is partitioned into parts that are, and are not, influenced by control variables, resulting in a partition of reliability into a covariate-dependent and a covariate-free part. The approach can be implemented in a single sample and can be applied to a variety of reliability coefficients.
Levy Matrices and Financial Covariances
Burda, Zdzislaw; Jurkiewicz, Jerzy; Nowak, Maciej A.; Papp, Gabor; Zahed, Ismail
2003-10-01
In a given market, financial covariances capture the intra-stock correlations and can be used to address statistically the bulk nature of the market as a complex system. We provide a statistical analysis of three SP500 covariances with evidence for raw tail distributions. We study the stability of these tails against reshuffling for the SP500 data and show that the covariance with the strongest tails is robust, with a spectral density in remarkable agreement with random Lévy matrix theory. We study the inverse participation ratio for the three covariances. The strong localization observed at both ends of the spectral density is analogous to the localization exhibited in the random Lévy matrix ensemble. We discuss two competitive mechanisms responsible for the occurrence of an extensive and delocalized eigenvalue at the edge of the spectrum: (a) the Lévy character of the entries of the correlation matrix and (b) a sort of off-diagonal order induced by underlying inter-stock correlations. (b) can be destroyed by reshuffling, while (a) cannot. We show that the stocks with the largest scattering are the least susceptible to correlations, and likely candidates for the localized states. We introduce a simple model for price fluctuations which captures behavior of the SP500 covariances. It may be of importance for assets diversification.
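The localization diagnostic used in this abstract, the inverse participation ratio (IPR) of correlation-matrix eigenvectors, can be sketched on synthetic data: a single "market mode" shared by all stocks yields a large, delocalized top eigenvalue, exactly the extensive edge state discussed above. The stock count, sample length, and mode strength below are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(5)
N, T = 100, 500

# Synthetic "returns": one market mode shared by all stocks plus iid noise.
market = rng.normal(size=T)
returns = 0.5 * market[None, :] + rng.normal(size=(N, T))

corr = np.corrcoef(returns)
eigvals, eigvecs = np.linalg.eigh(corr)            # ascending eigenvalues

# Inverse participation ratio of each eigenvector; ~1/N when delocalized,
# O(1) when the eigenvector is localized on a few stocks.
ipr = (eigvecs ** 4).sum(axis=0)
ipr_market = ipr[-1]                               # top (market) eigenvector
```

Reshuffling the time series (permuting each row of `returns` independently) destroys the shared mode and collapses the top eigenvalue back into the bulk, mirroring the paper's distinction between mechanisms (a) and (b).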
Szekeres models: a covariant approach
Apostolopoulos, Pantelis S
2016-01-01
We exploit the 1+1+2 formalism to covariantly describe the inhomogeneous and anisotropic Szekeres models. It is shown that an \\emph{average scale length} can be defined \\emph{covariantly} which satisfies a 2d equation of motion driven from the \\emph{effective gravitational mass} (EGM) contained in the dust cloud. The contributions to the EGM are encoded to the energy density of the dust fluid and the free gravitational field $E_{ab}$. In addition the notions of the Apparent and Absolute Apparent Horizons are briefly discussed and we give an alternative gauge-invariant form to define them in terms of the kinematical variables of the spacelike congruences. We argue that the proposed program can be used in order to express the Sachs optical equations in a covariant form and analyze the confrontation of a spatially inhomogeneous irrotational overdense fluid model with the observational data.
Multivariate covariance generalized linear models
Bonat, W. H.; Jørgensen, Bent
2016-01-01
We propose a general framework for non-normal multivariate data analysis called multivariate covariance generalized linear models, designed to handle multivariate response variables, along with a wide range of temporal and spatial correlation structures defined in terms of a covariance link function combined with a matrix linear predictor involving known matrices. The method is motivated by three data examples that are not easily handled by existing methods. The first example concerns multivariate count data, the second involves response variables of mixed types, combined with repeated … are fitted by using an efficient Newton scoring algorithm based on quasi-likelihood and Pearson estimating functions, using only second-moment assumptions. This provides a unified approach to a wide variety of types of response variables and covariance structures, including multivariate extensions …
Covariance evaluation work at LANL
Kawano, Toshihiko [Los Alamos National Laboratory; Talou, Patrick [Los Alamos National Laboratory; Young, Phillip [Los Alamos National Laboratory; Hale, Gerald [Los Alamos National Laboratory; Chadwick, M B [Los Alamos National Laboratory; Little, R C [Los Alamos National Laboratory
2008-01-01
Los Alamos evaluates covariances for nuclear data libraries, mainly for actinides above the resonance regions and light elements in the entire energy range. We also develop techniques to evaluate the covariance data, such as Bayesian and least-squares fitting methods, which are important for exploring the uncertainty information on different types of physical quantities such as elastic scattering angular distributions, or prompt neutron fission spectra. This paper summarizes our current covariance evaluation activities at LANL, including the actinide and light element data mainly for criticality safety studies and transmutation technology. The Bayesian method based on the Kalman filter technique, which combines uncertainties in the theoretical model and experimental data, is discussed.
Penalized maximum likelihood estimation and variable selection in geostatistics
Chu, Tingjin; Wang, Haonan; 10.1214/11-AOS919
2012-01-01
We consider the problem of selecting covariates in spatial linear models with Gaussian process errors. Penalized maximum likelihood estimation (PMLE) that enables simultaneous variable selection and parameter estimation is developed and, for ease of computation, PMLE is approximated by one-step sparse estimation (OSE). To further improve computational efficiency, particularly with large sample sizes, we propose penalized maximum covariance-tapered likelihood estimation (PMLE$_{\\mathrm{T}}$) and its one-step sparse estimation (OSE$_{\\mathrm{T}}$). General forms of penalty functions with an emphasis on smoothly clipped absolute deviation are used for penalized maximum likelihood. Theoretical properties of PMLE and OSE, as well as their approximations PMLE$_{\\mathrm{T}}$ and OSE$_{\\mathrm{T}}$ using covariance tapering, are derived, including consistency, sparsity, asymptotic normality and the oracle properties. For covariance tapering, a by-product of our theoretical results is consistency and asymptotic normal...
Cosmic Censorship Conjecture revisited: Covariantly
Hamid, Aymen I M; Maharaj, Sunil D
2014-01-01
In this paper we study the dynamics of the trapped region using a frame independent semi-tetrad covariant formalism for general Locally Rotationally Symmetric (LRS) class II spacetimes. We covariantly prove some important geometrical results for the apparent horizon, and state the necessary and sufficient conditions for a singularity to be locally naked. These conditions bring out, for the first time in a quantitative and transparent manner, the importance of the Weyl curvature in deforming and delaying the trapped region during continual gravitational collapse, making the central singularity locally visible.
Stochastic precipitation generator with hidden state covariates
Kim, Yongku; Lee, GyuWon
2017-08-01
Time series of daily weather such as precipitation, minimum temperature and maximum temperature are commonly required for various fields. Stochastic weather generators constitute one of the techniques to produce synthetic daily weather. The recently introduced approach for stochastic weather generators is based on generalized linear modeling (GLM) with covariates to account for seasonality and teleconnections (e.g., with the El Niño). In general, stochastic weather generators tend to underestimate the observed interannual variance of seasonally aggregated variables. To reduce this overdispersion, we incorporated time series of seasonal dry/wet indicators in the GLM weather generator as covariates. These seasonal time series were local (or global) decodings obtained by a hidden Markov model of seasonal total precipitation and implemented in the weather generator. The proposed method is applied to time series of daily weather from Seoul, Korea and Pergamino, Argentina. This method provides a straightforward translation of the uncertainty of the seasonal forecast to the corresponding conditional daily weather statistics.
Covariant description of isothermic surfaces
Tafel, Jacek
2014-01-01
We present a covariant formulation of the Gauss-Weingarten equations and the Gauss-Mainardi-Codazzi equations for surfaces in 3-dimensional curved spaces. We derive a coordinate invariant condition on the first and second fundamental form which is necessary and sufficient for the surface to be isothermic.
Covariation Neglect among Novice Investors
Hedesstrom, Ted Martin; Svedsater, Henrik; Garling, Tommy
2006-01-01
In 4 experiments, undergraduates made hypothetical investment choices. In Experiment 1, participants paid more attention to the volatility of individual assets than to the volatility of aggregated portfolios. The results of Experiment 2 show that most participants diversified even when this increased risk because of covariation between the returns…
Covariant Formulations of Superstring Theories.
Mikovic, Aleksandar Radomir
1990-01-01
Chapter 1 contains a brief introduction to the subject of string theory, and tries to motivate the study of superstrings and covariant formulations. Chapter 2 describes the Green-Schwarz formulation of the superstrings. The Hamiltonian and BRST structure of the theory is analysed in the case of the superparticle. Implications for the superstring case are discussed. Chapter 3 describes Siegel's formulation of the superstring, which contains only the first class constraints. It is shown that the physical spectrum coincides with that of the Green-Schwarz formulation. In chapter 4 we analyse the BRST structure of Siegel's formulation. We show that the BRST charge has the wrong cohomology, and propose a modification, called first ilk, which gives the right cohomology. We also propose another superparticle model, called second ilk, which has infinitely many coordinates and constraints. We construct the complete BRST charge for it, and show that it gives the correct cohomology. In chapter 5 we analyse the properties of the covariant vertex operators and the corresponding S-matrix elements by using Siegel's formulation. We conclude that knowledge of the ghosts is necessary, even at the tree level, in order to obtain the correct S-matrix. In chapter 6 we attempt to calculate the superstring loops in a covariant gauge. We calculate the vacuum-to-vacuum amplitude, which is also the cosmological constant. We show that it vanishes to all loop orders, under the assumption that the free covariant gauge-fixed action exists. In chapter 7 we present our conclusions, and briefly discuss the random lattice approach to string theory as a possible way of resolving the problem of covariant quantization and the nonperturbative definition of the superstrings.
A cautionary note on generalized linear models for covariance of unbalanced longitudinal data
Huang, Jianhua Z.
2012-03-01
Missing data in longitudinal studies can create enormous challenges in data analysis when coupled with the positive-definiteness constraint on a covariance matrix. For complete balanced data, the Cholesky decomposition of a covariance matrix makes it possible to remove the positive-definiteness constraint and use a generalized linear model setup to jointly model the mean and covariance using covariates (Pourahmadi, 2000). However, this approach may not be directly applicable when the longitudinal data are unbalanced, as coherent regression models for the dependence across all times and subjects may not exist. Within the existing generalized linear model framework, we show how to overcome this and other challenges by embedding the covariance matrix of the observed data for each subject in a larger covariance matrix and employing the familiar EM algorithm to compute the maximum likelihood estimates of the parameters and their standard errors. We illustrate and assess the methodology using real data sets and simulations. © 2011 Elsevier B.V.
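The Cholesky-based reparametrisation referred to above (Pourahmadi, 2000) is easy to illustrate directly. The sketch below shows only the decomposition that removes the positive-definiteness constraint, not the paper's EM machinery for unbalanced data; the example matrix is arbitrary.

```python
import numpy as np

def modified_cholesky(sigma):
    """Pourahmadi's decomposition T @ sigma @ T.T = D, with T unit
    lower-triangular. The below-diagonal entries of T (negated) are the
    generalized autoregressive parameters and diag(D) holds the
    innovation variances; both are unconstrained, so they can be
    modelled with covariates in a GLM setup."""
    L = np.linalg.cholesky(sigma)                 # sigma = L @ L.T
    Dhalf = np.diag(np.diag(L))
    T = np.linalg.inv(L @ np.linalg.inv(Dhalf))   # unit lower-triangular
    D = Dhalf @ Dhalf
    return T, D

sigma = np.array([[4.0, 1.0, 0.5],
                  [1.0, 3.0, 1.2],
                  [0.5, 1.2, 2.0]])
T, D = modified_cholesky(sigma)
```

Because the entries of T and log diag(D) are unconstrained, any values regressed on covariates map back to a valid (positive-definite) covariance matrix, which is the point of the reparametrisation.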
Fast Component Pursuit for Large-Scale Inverse Covariance Estimation.
Han, Lei; Zhang, Yu; Zhang, Tong
2016-08-01
The maximum likelihood estimation (MLE) for the Gaussian graphical model, which is also known as the inverse covariance estimation problem, has gained increasing interest recently. Most existing works assume that inverse covariance estimators contain sparse structure and then construct models with the ℓ1 regularization. In this paper, different from existing works, we study the inverse covariance estimation problem from another perspective, by efficiently modeling the low-rank structure in the inverse covariance, which is assumed to be a combination of a low-rank part and a diagonal matrix. One motivation for this assumption is that the low-rank structure is common in many applications, including climate and financial analysis; another is that such an assumption can reduce the computational complexity when computing its inverse. Specifically, we propose an efficient COmponent Pursuit (COP) method to obtain the low-rank part, where each component can be sparse. For optimization, the COP method greedily learns a rank-one component in each iteration by maximizing the log-likelihood. Moreover, the COP algorithm enjoys several appealing properties, including the existence of an efficient solution in each iteration and a theoretical guarantee on the convergence of this greedy approach. Experiments on large-scale synthetic and real-world datasets with thousands of millions of variables show that the COP method is faster than the state-of-the-art techniques for the inverse covariance estimation problem while achieving comparable log-likelihood on test data.
Discrete Symmetries in Covariant LQG
Rovelli, Carlo
2012-01-01
We study time-reversal and parity ---on the physical manifold and in internal space--- in covariant loop gravity. We consider a minor modification of the Holst action which makes it transform coherently under such transformations. The classical theory is not affected but the quantum theory is slightly different. In particular, the simplicity constraints are slightly modified and this restricts orientation flips in a spinfoam to occur only across degenerate regions, thus reducing the sources of potential divergences.
Phenotypic covariance at species’ borders
2013-01-01
Background Understanding the evolution of species limits is important in ecology, evolution, and conservation biology. Despite its likely importance in the evolution of these limits, little is known about phenotypic covariance in geographically marginal populations, and the degree to which it constrains, or facilitates, responses to selection. We investigated phenotypic covariance in morphological traits at species’ borders by comparing phenotypic covariance matrices (P), including the degree of shared structure, the distribution of strengths of pair-wise correlations between traits, the degree of morphological integration of traits, and the ranks of matrices, between central and marginal populations of three species-pairs of coral reef fishes. Results Greater structural differences in P were observed between populations close to range margins and conspecific populations toward range centres, than between pairs of conspecific populations that were both more centrally located within their ranges. Approximately 80% of all pair-wise trait correlations within populations were greater in the north, but these differences were unrelated to the position of the sampled population with respect to the geographic range of the species. Conclusions Neither the degree of morphological integration, nor ranks of P, indicated greater evolutionary constraint at range edges. Characteristics of P observed here provide no support for constraint contributing to the formation of these species’ borders, but may instead reflect structural change in P caused by selection or drift, and their potential to evolve in the future. PMID:23714580
Competing risks and time-dependent covariates
Cortese, Giuliana; Andersen, Per K
2010-01-01
Time-dependent covariates are frequently encountered in regression analysis for event history data and competing risks. They are often essential predictors, which cannot be substituted by time-fixed covariates. This study briefly recalls the different types of time-dependent covariates...
Covariance Evaluation Methodology for Neutron Cross Sections
Herman, M.; Arcilla, R.; Mattoon, C.M.; Mughabghab, S.F.; Oblozinsky, P.; Pigni, M.; Pritychenko, B.; Sonzogni, A.A.
2008-09-01
We present the NNDC-BNL methodology for estimating neutron cross section covariances in the thermal, resolved resonance, unresolved resonance and fast neutron regions. The three key elements of the methodology are the Atlas of Neutron Resonances, the nuclear reaction code EMPIRE, and a Bayesian code implementing the Kalman filter concept. The covariance data processing, visualization and distribution capabilities are integral components of the NNDC methodology. We illustrate its application with examples, including a relatively detailed evaluation of covariances for two individual nuclei and the massive production of simple covariance estimates for 307 materials. Certain peculiarities regarding the evaluation of covariances for resolved resonances and the consistency between resonance parameter uncertainties and thermal cross section uncertainties are also discussed.
Maximum Autocorrelation Factorial Kriging
Nielsen, Allan Aasbjerg; Conradsen, Knut; Pedersen, John L.
2000-01-01
This paper describes maximum autocorrelation factor (MAF) analysis, maximum autocorrelation factorial kriging, and its application to irregularly sampled stream sediment geochemical data from South Greenland. Kriged MAF images are compared with kriged images of varimax rotated factors from...
Covariant diagrams for one-loop matching
Zhang, Zhengkang
2016-01-01
We present a diagrammatic formulation of recently-revived covariant functional approaches to one-loop matching from an ultraviolet (UV) theory to a low-energy effective field theory. Various terms following from a covariant derivative expansion (CDE) are represented by diagrams which, unlike conventional Feynman diagrams, involve gauge-covariant quantities and are thus dubbed "covariant diagrams." The use of covariant diagrams helps organize and simplify one-loop matching calculations, which we illustrate with examples. Of particular interest is the derivation of UV model-independent universal results, which reduce matching calculations of specific UV models to applications of master formulas. We show how such derivation can be done in a more concise manner than the previous literature, and discuss how additional structures that are not directly captured by existing universal results, including mixed heavy-light loops, open covariant derivatives, and mixed statistics, can be easily accounted for.
ISSUES IN NEUTRON CROSS SECTION COVARIANCES
Mattoon, C.M.; Oblozinsky,P.
2010-04-30
We review neutron cross section covariances in both the resonance and fast neutron regions with the goal to identify existing issues in evaluation methods and their impact on covariances. We also outline ideas for suitable covariance quality assurance procedures. We show that the topic of covariance data remains controversial, the evaluation methodologies are not fully established and covariances produced by different approaches have unacceptable spread. The main controversy is in very low uncertainties generated by rigorous evaluation methods and much larger uncertainties based on simple estimates from experimental data. Since the evaluators tend to trust the former, while the users tend to trust the latter, this controversy has considerable practical implications. Dedicated effort is needed to arrive at covariance evaluation methods that would resolve this issue and produce results accepted internationally both by evaluators and users.
Parameter inference with estimated covariance matrices
Sellentin, Elena
2015-01-01
When inferring parameters from a Gaussian-distributed data set by computing a likelihood, a covariance matrix is needed that describes the data errors and their correlations. If the covariance matrix is not known a priori, it may be estimated and thereby becomes a random object with some intrinsic uncertainty itself. We show how to infer parameters in the presence of such an estimated covariance matrix, by marginalising over the true covariance matrix, conditioned on its estimated value. This leads to a likelihood function that is no longer Gaussian, but rather an adapted version of a multivariate $t$-distribution, which has the same numerical complexity as the multivariate Gaussian. As expected, marginalisation over the true covariance matrix improves inference when compared with Hartlap et al.'s method, which uses an unbiased estimate of the inverse covariance matrix but still assumes that the likelihood is Gaussian.
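The replacement likelihood described above has a simple closed form: up to normalisation, the Gaussian −χ²/2 becomes −(n/2) log(1 + χ²/(n−1)) when the covariance was estimated from n simulations. The sketch below checks its limiting behaviour; the χ² value and simulation counts are arbitrary.

```python
import numpy as np

def log_gaussian(chi2):
    # Standard Gaussian log-likelihood, up to an additive constant
    return -0.5 * chi2

def log_marginal(chi2, n_sims):
    """Log-likelihood (up to normalisation) after marginalising over
    the true covariance, when chi2 uses a covariance matrix estimated
    from n_sims independent simulations: the adapted t-distribution."""
    return -0.5 * n_sims * np.log1p(chi2 / (n_sims - 1.0))

chi2 = 9.0                                   # arbitrary example value
few = log_marginal(chi2, n_sims=50)          # noisy covariance estimate
many = log_marginal(chi2, n_sims=50_000)     # nearly noiseless estimate
```

With many simulations the marginal form converges to the Gaussian, while with few simulations its heavier tails penalise large χ² less severely, which is exactly the extra covariance uncertainty being propagated.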
Treatment Effects with Many Covariates and Heteroskedasticity
Cattaneo, Matias D.; Jansson, Michael; Newey, Whitney K.
The linear regression model is widely used in empirical work in Economics. Researchers often include many covariates in their linear model specification in an attempt to control for confounders. We give inference methods that allow for many covariates and heteroskedasticity. Our results … We then propose a new heteroskedasticity consistent standard error formula that is fully automatic and robust to both (conditional) heteroskedasticity of unknown form and the inclusion of possibly many covariates. We apply our findings to three settings: (i) parametric linear models with many covariates, (ii) …
Eigenvalue variance bounds for covariance matrices
Dallaporta, Sandrine
2013-01-01
This work is concerned with finite range bounds on the variance of individual eigenvalues of random covariance matrices, both in the bulk and at the edge of the spectrum. In a preceding paper, the author established analogous results for Wigner matrices and stated the results for covariance matrices. They are proved in the present paper. Relying on the LUE example, which needs to be investigated first, the main bounds are extended to complex covariance matrices by means of the Tao, Vu and Wan...
Sparse Inverse Covariance Selection via Alternating Linearization Methods
Scheinberg, Katya; Goldfarb, Donald
2010-01-01
Gaussian graphical models are of great interest in statistical learning. Because the conditional independencies between different nodes correspond to zero entries in the inverse covariance matrix of the Gaussian distribution, one can learn the structure of the graph by estimating a sparse inverse covariance matrix from sample data, by solving a convex maximum likelihood problem with an $\\ell_1$-regularization term. In this paper, we propose a first-order method based on an alternating linearization technique that exploits the problem's special structure; in particular, the subproblems solved in each iteration have closed-form solutions. Moreover, our algorithm obtains an $\\epsilon$-optimal solution in $O(1/\\epsilon)$ iterations. Numerical experiments on both synthetic and real data from gene association networks show that a practical version of this algorithm outperforms other competitive algorithms.
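The convex objective being maximised here can be written down compactly. The sketch below only evaluates that objective for a toy chain graph; the tridiagonal precision matrix and penalty levels are chosen arbitrarily, and the alternating-linearization solver itself is not reproduced.

```python
import numpy as np

def penalized_loglik(theta, S, rho):
    """Objective of sparse inverse covariance selection:
    log det(Theta) - tr(S @ Theta) - rho * ||off-diag(Theta)||_1,
    where S is the sample covariance and rho the l1 penalty weight."""
    sign, logdet = np.linalg.slogdet(theta)
    assert sign > 0, "Theta must be positive definite"
    l1_off = np.abs(theta).sum() - np.abs(np.diag(theta)).sum()
    return logdet - np.trace(S @ theta) - rho * l1_off

# Tridiagonal (sparse) precision = Gaussian chain graphical model
theta_true = (np.diag([2.0, 2.0, 2.0])
              + np.diag([-0.8, -0.8], 1)
              + np.diag([-0.8, -0.8], -1))
S = np.linalg.inv(theta_true)        # population covariance of that model
val = penalized_loglik(theta_true, S, rho=0.1)
```

The zero pattern of Theta encodes the conditional independencies, so driving small off-diagonal entries to zero via the penalty is equivalent to deleting edges from the graph.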
Markov modulated Poisson process models incorporating covariates for rainfall intensity.
Thayakaran, R; Ramesh, N I
2013-01-01
Time series of rainfall bucket tip times at the Beaufort Park station, Bracknell, in the UK are modelled by a class of Markov modulated Poisson processes (MMPP) which may be thought of as a generalization of the Poisson process. Our main focus in this paper is to investigate the effects of including covariate information into the MMPP model framework on statistical properties. In particular, we look at three types of time-varying covariates namely temperature, sea level pressure, and relative humidity that are thought to be affecting the rainfall arrival process. Maximum likelihood estimation is used to obtain the parameter estimates, and likelihood ratio tests are employed in model comparison. Simulated data from the fitted model are used to make statistical inferences about the accumulated rainfall in the discrete time interval. Variability of the daily Poisson arrival rates is studied.
Covariance structure models of expectancy.
Henderson, M J; Goldman, M S; Coovert, M D; Carnevalla, N
1994-05-01
Antecedent variables under the broad categories of genetic, environmental and cultural influences have been linked to the risk for alcohol abuse. Such risk factors have not been shown to result in high correlations with alcohol consumption and leave unclear an understanding of the mechanism by which these variables lead to increased risk. This study employed covariance structure modeling to examine the mediational influence of stored information in memory about alcohol, alcohol expectancies in relation to two biologically and environmentally driven antecedent variables, family history of alcohol abuse and a sensation-seeking temperament in a college population. We also examined the effect of criterion contamination on the relationship between sensation-seeking and alcohol consumption. Results indicated that alcohol expectancy acts as a significant, partial mediator of the relationship between sensation-seeking and consumption, that family history of alcohol abuse is not related to drinking outcome and that overlap in items on sensation-seeking and alcohol consumption measures may falsely inflate their relationship.
On the Origin of Gravitational Lorentz Covariance
Khoury, Justin; Tolley, Andrew J
2013-01-01
We provide evidence that general relativity is the unique spatially covariant effective field theory of the transverse, traceless graviton degrees of freedom. The Lorentz covariance of general relativity, having not been assumed in our analysis, is thus plausibly interpreted as an accidental or emergent symmetry of the gravitational sector.
COVARIATION BIAS AND THE RETURN OF FEAR
de Jong, Peter; VANDENHOUT, MA; MERCKELBACH, H
1995-01-01
Several studies have indicated that phobic fear is accompanied by a covariation bias, i.e. that phobic Ss tend to overassociate fear relevant stimuli and aversive outcomes. Such a covariation bias seems to be a fairly direct and powerful way to confirm danger expectations and enhance fear. Therefore
Covariant derivative of fermions and all that
Shapiro, Ilya L
2016-01-01
We present detailed pedagogical derivation of covariant derivative of fermions and some related expressions, including commutator of covariant derivatives and energy-momentum tensor of a free Dirac field. The text represents a part of the initial chapter of a one-semester course on semiclassical gravity.
Covariant diagrams for one-loop matching
Zhang, Zhengkang [Michigan Univ., Ann Arbor, MI (United States). Michigan Center for Theoretical Physics; Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany)
2016-10-15
We present a diagrammatic formulation of recently-revived covariant functional approaches to one-loop matching from an ultraviolet (UV) theory to a low-energy effective field theory. Various terms following from a covariant derivative expansion (CDE) are represented by diagrams which, unlike conventional Feynman diagrams, involve gauge-covariant quantities and are thus dubbed "covariant diagrams." The use of covariant diagrams helps organize and simplify one-loop matching calculations, which we illustrate with examples. Of particular interest is the derivation of UV model-independent universal results, which reduce matching calculations of specific UV models to applications of master formulas. We show how such derivation can be done in a more concise manner than the previous literature, and discuss how additional structures that are not directly captured by existing universal results, including mixed heavy-light loops, open covariant derivatives, and mixed statistics, can be easily accounted for.
On the regularity of the covariance matrix of a discretized scalar field on the sphere
Bilbao-Ahedo, J. D.; Barreiro, R. B.; Herranz, D.; Vielva, P.; Martínez-González, E.
2017-02-01
We present a comprehensive study of the regularity of the covariance matrix of a discretized field on the sphere. In a particular situation, the rank of the matrix depends on the number of pixels, the number of spherical harmonics, the symmetries of the pixelization scheme and the presence of a mask. Taking into account the above mentioned components, we provide analytical expressions that constrain the rank of the matrix. They are obtained by expanding the determinant of the covariance matrix as a sum of determinants of matrices made up of spherical harmonics. We investigate these constraints for five different pixelizations that have been used in the context of Cosmic Microwave Background (CMB) data analysis: Cube, Icosahedron, Igloo, GLESP and HEALPix, finding that, at least in the considered cases, the HEALPix pixelization tends to provide a covariance matrix with a rank closer to the maximum expected theoretical value than the other pixelizations. The effect of the propagation of numerical errors in the regularity of the covariance matrix is also studied for different computational precisions, as well as the effect of adding a certain level of noise in order to regularize the matrix. In addition, we investigate the application of the previous results to a particular example that requires the inversion of the covariance matrix: the estimation of the CMB temperature power spectrum through the Quadratic Maximum Likelihood algorithm. Finally, some general considerations in order to achieve a regular covariance matrix are also presented.
Maximum Autocorrelation Factorial Kriging
Nielsen, Allan Aasbjerg; Conradsen, Knut; Pedersen, John L.; Steenfelt, Agnete
2000-01-01
This paper describes maximum autocorrelation factor (MAF) analysis, maximum autocorrelation factorial kriging, and its application to irregularly sampled stream sediment geochemical data from South Greenland. Kriged MAF images are compared with kriged images of varimax rotated factors from an ordinary non-spatial factor analysis, and they are interpreted in a geological context. It is demonstrated that MAF analysis contrary to ordinary non-spatial factor analysis gives an objective discrimina...
Enveloping Spectral Surfaces: Covariate Dependent Spectral Analysis of Categorical Time Series.
Krafty, Robert T; Xiong, Shuangyan; Stoffer, David S; Buysse, Daniel J; Hall, Martica
2012-09-01
Motivated by problems in Sleep Medicine and Circadian Biology, we present a method for the analysis of cross-sectional categorical time series collected from multiple subjects where the effect of static continuous-valued covariates is of interest. Toward this goal, we extend the spectral envelope methodology for the frequency domain analysis of a single categorical process to cross-sectional categorical processes that are possibly covariate dependent. The analysis introduces an enveloping spectral surface for describing the association between the frequency domain properties of qualitative time series and covariates. The resulting surface offers an intuitively interpretable measure of association between covariates and a qualitative time series by finding the maximum possible conditional power at a given frequency from scalings of the qualitative time series conditional on the covariates. The optimal scalings that maximize the power provide scientific insight by identifying the aspects of the qualitative series which have the most pronounced periodic features at a given frequency conditional on the value of the covariates. To facilitate the assessment of the dependence of the enveloping spectral surface on the covariates, we include a theory for analyzing the partial derivatives of the surface. Our approach is entirely nonparametric, and we present estimation and asymptotics in the setting of local polynomial smoothing.
The covariate-adjusted frequency plot.
Holling, Heinz; Böhning, Walailuck; Böhning, Dankmar; Formann, Anton K
2016-04-01
Count data arise in numerous fields of interest. Analysis of these data frequently requires distributional assumptions. Although the graphical display of a fitted model is straightforward in the univariate scenario, this becomes more complex if covariate information needs to be included into the model. Stratification is one way to proceed, but has its limitations if the covariate has many levels or the number of covariates is large. The article suggests a marginal method which works even in the case that all possible covariate combinations are different (i.e. no covariate combination occurs more than once). For each covariate combination the fitted model value is computed and then summed over the entire data set. The technique is quite general and works with all count distributional models as well as with all forms of covariate modelling. The article provides illustrations of the method for various situations and also shows that the proposed estimator as well as the empirical count frequency are consistent with respect to the same parameter.
Cox regression with missing covariate data using a modified partial likelihood method
Martinussen, Torben; Holst, Klaus K.; Scheike, Thomas H.
2016-01-01
Missing covariate values is a common problem in survival analysis. In this paper we propose a novel method for the Cox regression model that is close to maximum likelihood but avoids the use of the EM-algorithm. It exploits that the observed hazard function is multiplicative in the baseline hazard function, with the idea being to profile out this function before carrying out the estimation of the parameter of interest. In this step one uses a Breslow type estimator to estimate the cumulative baseline hazard function. We focus on the situation where the observed covariates are categorical, which allows us to calculate estimators without having to assume anything about the distribution of the covariates. We show that the proposed estimator is consistent and asymptotically normal, and derive a consistent estimator of the variance-covariance matrix that does not involve any choice of a perturbation …
Forecasting Covariance Matrices: A Mixed Frequency Approach
Halbleib, Roxana; Voev, Valeri
This paper proposes a new method for forecasting covariance matrices of financial returns. The model mixes volatility forecasts from a dynamic model of daily realized volatilities estimated with high-frequency data with correlation forecasts based on daily data. This new approach allows for flexible dependence patterns for volatilities and correlations, and can be applied to covariance matrices of large dimensions. The separate modeling of volatility and correlation forecasts considerably reduces the estimation and measurement error implied by the joint estimation and modeling of covariance matrix dynamics. Our empirical results show that the new mixing approach provides superior forecasts compared to multivariate volatility specifications using single sources of information.
Estimation of Low-Rank Covariance Function
Koltchinskii, Vladimir; Lounici, Karim; Tsybakov, Alexander B.
2015-01-01
We consider the problem of estimating a low rank covariance function $K(t,u)$ of a Gaussian process $S(t), t\\in [0,1]$ based on $n$ i.i.d. copies of $S$ observed in a white noise. We suggest a new estimation procedure adapting simultaneously to the low rank structure and the smoothness of the covariance function. The new procedure is based on nuclear norm penalization and exhibits superior performances as compared to the sample covariance function by a polynomial factor in the sample size $n$...
Covariance NMR spectroscopy by singular value decomposition.
Trbovic, Nikola; Smirnov, Serge; Zhang, Fengli; Brüschweiler, Rafael
2004-12-01
Covariance NMR is demonstrated for homonuclear 2D NMR data collected using the hypercomplex and TPPI methods. Absorption mode 2D spectra are obtained by application of the square-root operation to the covariance matrices. The resulting spectra closely resemble the 2D Fourier transformation spectra, except that they are fully symmetric with the spectral resolution along both dimensions determined by the favorable resolution achievable along ω2. An efficient method is introduced for the calculation of the square root of the covariance spectrum by applying a singular value decomposition (SVD) directly to the mixed time-frequency domain data matrix. Applications are shown for 2D NOESY and 2QF-COSY data sets and computational benchmarks are given for data matrix dimensions typically encountered in practice. The SVD implementation makes covariance NMR amenable to routine applications.
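The linear-algebra fact underlying the speed-up, that the square root of the covariance matrix comes directly from an SVD of the data matrix, can be checked with random data (the matrix sizes below are arbitrary, not typical NMR dimensions):

```python
import numpy as np

rng = np.random.default_rng(1)
D = rng.standard_normal((32, 8))   # stand-in mixed time-frequency data matrix
C = D.T @ D                        # covariance spectrum (8 x 8)

# If D = U S V^T, then C = V S^2 V^T, so sqrt(C) = V S V^T:
# the square root follows from one SVD of D, without forming C.
U, s, Vt = np.linalg.svd(D, full_matrices=False)
sqrtC = (Vt.T * s) @ Vt            # scale columns of V by singular values
```

Working on D directly is cheaper and better conditioned than diagonalising C, since the singular values of D are the square roots of the eigenvalues of C.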
Earth Observing System Covariance Realism Updates
Ojeda Romero, Juan A.; Miguel, Fred
2017-01-01
This presentation will be given at the International Earth Science Constellation Mission Operations Working Group meetings June 13-15, 2017 to discuss the Earth Observing System Covariance Realism updates.
Covariant Quantization with Extended BRST Symmetry
Geyer, B; Lavrov, P M
1999-01-01
A short review of covariant quantization methods based on BRST-antiBRST symmetry is given. In particular, problems of the correct definition of the Sp(2)-symmetric quantization scheme known as triplectic quantization are considered.
Conformally covariant parametrizations for relativistic initial data
Delay, Erwann
2017-01-01
We revisit the Lichnerowicz-York method, and an alternative method of York, in order to obtain some conformally covariant systems. This type of parametrization is certainly more natural for non-constant mean curvature initial data.
Hierarchical matrix approximation of large covariance matrices
Litvinenko, Alexander
2015-01-07
We approximate large non-structured covariance matrices in the H-matrix format with a log-linear computational cost and storage O(n log n). We compute inverse, Cholesky decomposition and determinant in H-format. As an example we consider the class of Matern covariance functions, which are very popular in spatial statistics, geostatistics, machine learning and image analysis. Applications are: kriging and optimal design
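As a reference point for the Matérn family mentioned above, the half-integer smoothness cases have simple closed forms. This sketch implements two of them; parameter names are illustrative and the H-matrix compression itself is not shown.

```python
import numpy as np

def matern(r, length=1.0, nu=1.5, sigma2=1.0):
    """Matern covariance at distance r, for the half-integer smoothness
    values that have closed forms (nu = 0.5 is the exponential kernel;
    larger nu gives smoother sample paths)."""
    a = np.abs(r) / length
    if nu == 0.5:
        return sigma2 * np.exp(-a)
    if nu == 1.5:
        c = np.sqrt(3.0) * a
        return sigma2 * (1.0 + c) * np.exp(-c)
    raise NotImplementedError("only nu in {0.5, 1.5} in this sketch")

r = np.linspace(0.0, 5.0, 51)
k_exp = matern(r, nu=0.5)       # rough (exponential) covariance
k_smooth = matern(r, nu=1.5)    # once-differentiable covariance
```

A dense n-by-n covariance built from such a kernel costs O(n^2) storage, which is exactly what the H-matrix format reduces to O(n log n) for kriging and optimal design.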
Hierarchical matrix approximation of large covariance matrices
Litvinenko, Alexander
2015-01-05
We approximate large non-structured covariance matrices in the H-matrix format with a log-linear computational cost and storage O(n log n). We compute inverse, Cholesky decomposition and determinant in H-format. As an example we consider the class of Matern covariance functions, which are very popular in spatial statistics, geostatistics, machine learning and image analysis. Applications are: kriging and optimal design.
Covariate analysis of bivariate survival data
Bennett, L.E.
1992-01-01
The methods developed are used to analyze the effects of covariates on bivariate survival data when censoring and ties are present. The proposed method provides models for bivariate survival data that include differential covariate effects and censored observations. The proposed models are based on an extension of the univariate Buckley-James estimators which replace censored data points by their expected values, conditional on the censoring time and the covariates. For the bivariate situation, it is necessary to determine the expectation of the failure times for one component conditional on the failure or censoring time of the other component. Two different methods have been developed to estimate these expectations. In the semiparametric approach these expectations are determined from a modification of Burke's estimate of the bivariate empirical survival function. In the parametric approach censored data points are also replaced by their conditional expected values where the expected values are determined from a specified parametric distribution. The model estimation will be based on the revised data set, comprised of uncensored components and expected values for the censored components. The variance-covariance matrix for the estimated covariate parameters has also been derived for both the semiparametric and parametric methods. Data from the Demographic and Health Survey was analyzed by these methods. The two outcome variables are post-partum amenorrhea and breastfeeding; education and parity were used as the covariates. Both the covariate parameter estimates and the variance-covariance estimates for the semiparametric and parametric models will be compared. In addition, a multivariate test statistic was used in the semiparametric model to examine contrasts. The significance of the statistic was determined from a bootstrap distribution of the test statistic.
Covariant action for type IIB supergravity
Sen, Ashoke
2016-07-01
Taking clues from the recent construction of the covariant action for type II and heterotic string field theories, we construct a manifestly Lorentz covariant action for type IIB supergravity, and discuss its gauge fixing maintaining manifest Lorentz invariance. The action contains a (non-gravitating) free 4-form field besides the usual fields of type IIB supergravity. This free field, being completely decoupled from the interacting sector, has no physical consequence.
Functional CLT for sample covariance matrices
Bai, Zhidong; Zhou, Wang; 10.3150/10-BEJ250
2010-01-01
Using Bernstein polynomial approximations, we prove the central limit theorem for linear spectral statistics of sample covariance matrices, indexed by a set of functions with continuous fourth order derivatives on an open interval including $[(1-\sqrt{y})^2,(1+\sqrt{y})^2]$, the support of the Marčenko–Pastur law. We also derive the explicit expressions for asymptotic mean and covariance functions.
On the covariance of residual lives
N. Unnikrishnan Nair
2007-10-01
Various properties of residual life, such as the mean, median, percentiles, and variance, have been discussed in the literature on reliability and survival analysis. However, a detailed study of the covariance between residual lives in a two-component system does not seem to have been undertaken. The present paper discusses various properties of the product moment and covariance of residual lives. The relationships of the product moment with the mean residual life and failure rate are studied, and some characterizations are established.
Covariant Hamilton equations for field theory
Giachetta, Giovanni [Department of Mathematics and Physics, University of Camerino, Camerino (Italy); Mangiarotti, Luigi [Department of Mathematics and Physics, University of Camerino, Camerino (Italy)]. E-mail: mangiaro@camserv.unicam.it; Sardanashvily, Gennadi [Department of Theoretical Physics, Physics Faculty, Moscow State University, Moscow (Russian Federation)]. E-mail: sard@grav.phys.msu.su
1999-09-24
We study the relations between the equations of first-order Lagrangian field theory on fibre bundles and the covariant Hamilton equations on the finite-dimensional polysymplectic phase space of covariant Hamiltonian field theory. If a Lagrangian is hyperregular, these equations are equivalent. A degenerate Lagrangian requires a set of associated Hamiltonian forms in order to exhaust all solutions of the Euler-Lagrange equations. The case of quadratic degenerate Lagrangians is studied in detail. (author)
Economical Phase-Covariant Cloning of Qudits
Buscemi, F; Macchiavello, C; Buscemi, Francesco; Ariano, Giacomo Mauro D'; Macchiavello, Chiara
2004-01-01
We derive the optimal $N\to M$ phase-covariant quantum cloning for equatorial states in dimension $d$ with $M=kd+N$, $k$ integer. The cloning maps are optimal for both global and single-qudit fidelity. The map is achieved by an "economical" cloning machine, which works without ancilla. The connection between optimal phase-covariant cloning and optimal multi-phase estimation is finally established.
Representations of Inverse Covariances by Differential Operators
Qin XU
2005-01-01
In the cost function of three- or four-dimensional variational data assimilation, each term is weighted by the inverse of its associated error covariance matrix, and the background error covariance matrix is usually much larger than the other covariance matrices. Although the background error covariances are traditionally normalized and parameterized by simple smooth homogeneous correlation functions, the covariance matrices constructed from these correlation functions are often too large to be inverted or even manipulated. It is thus desirable to find direct representations of the inverses of background error correlations. This problem is studied in this paper. In particular, it is shown that the background term can be written as ∫ dx |Dv(x)|², that is, a squared L2 norm of a vector differential operator D, called the D-operator, applied to the field of analysis increment v(x). For autoregressive correlation functions, the D-operators are of finite order. For Gaussian correlation functions, the D-operators are of infinite order. For practical applications, the Gaussian D-operators must be truncated to finite orders. The truncation errors are found to be small even when the Gaussian D-operators are truncated to low orders. With a truncated D-operator, the background term can be easily constructed with neither inversion nor direct calculation of the covariance matrix. D-operators are also derived for non-Gaussian correlations and transformed into non-isotropic forms.
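The finite-order claim for autoregressive correlations can be checked numerically. A sketch under simple assumptions (a 1-D grid, first-order autoregressive correlation B_ij = r^|i−j|): the inverse of such a correlation matrix is exactly tridiagonal, which is the matrix counterpart of a D-operator involving at most first derivatives.

```python
import numpy as np

# First-order autoregressive correlation matrix on a 1-D grid.
n, r = 50, 0.8
idx = np.arange(n)
B = r ** np.abs(idx[:, None] - idx[None, :])

# Its inverse should be tridiagonal: zero out the band and check
# that nothing remains outside it.
B_inv = np.linalg.inv(B)
off_band = B_inv.copy()
for k in (-1, 0, 1):
    off_band -= np.diag(np.diag(B_inv, k), k)

print(np.max(np.abs(off_band)))             # ~0: inverse is tridiagonal
```

For a Gaussian correlation the analogous experiment shows a full (infinite-order) inverse, which is why truncation is needed in practice.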
Maximum likely scale estimation
Loog, Marco; Pedersen, Kim Steenstrup; Markussen, Bo
2005-01-01
A maximum likelihood local scale estimation principle is presented. An actual implementation of the estimation principle uses second order moments of multiple measurements at a fixed location in the image. These measurements consist of Gaussian derivatives possibly taken at several scales and/or having different derivative orders. Although the principle is applicable to a wide variety of image models, the main focus here is on the Brownian model and its use for scale selection in natural images. Furthermore, in the examples provided, the simplifying assumption is made that the behavior of the measurements is completely characterized by all moments up to second order.
Maximum information photoelectron metrology
Hockett, P; Wollenhaupt, M; Baumert, T
2015-01-01
Photoelectron interferograms, manifested in photoelectron angular distributions (PADs), are a high-information, coherent observable. In order to obtain the maximum information from angle-resolved photoionization experiments it is desirable to record the full, 3D, photoelectron momentum distribution. Here we apply tomographic reconstruction techniques to obtain such 3D distributions from multiphoton ionization of potassium atoms, and fully analyse the energy and angular content of the 3D data. The PADs obtained as a function of energy indicate good agreement with previous 2D data and detailed analysis [Hockett et. al., Phys. Rev. Lett. 112, 223001 (2014)] over the main spectral features, but also indicate unexpected symmetry-breaking in certain regions of momentum space, thus revealing additional continuum interferences which cannot otherwise be observed. These observations reflect the presence of additional ionization pathways and, most generally, illustrate the power of maximum information measurements of th...
Wishart distributions for decomposable covariance graph models
Khare, Kshitij; 10.1214/10-AOS841
2011-01-01
Gaussian covariance graph models encode marginal independence among the components of a multivariate random vector by means of a graph $G$. These models are distinctly different from the traditional concentration graph models (often also referred to as Gaussian graphical models or covariance selection models) since the zeros in the parameter are now reflected in the covariance matrix $\Sigma$, as compared to the concentration matrix $\Omega=\Sigma^{-1}$. The parameter space of interest for covariance graph models is the cone $P_G$ of positive definite matrices with fixed zeros corresponding to the missing edges of $G$. As in Letac and Massam [Ann. Statist. 35 (2007) 1278--1323], we consider the case where $G$ is decomposable. In this paper, we construct on the cone $P_G$ a family of Wishart distributions which serve a similar purpose in the covariance graph setting as those constructed by Letac and Massam [Ann. Statist. 35 (2007) 1278--1323] and Dawid and Lauritzen [Ann. Statist. 21 (1993) 1272--1317] do in ...
Lorentz covariance of loop quantum gravity
Rovelli, Carlo
2010-01-01
The kinematics of loop gravity can be given a manifestly Lorentz-covariant formulation: the conventional SU(2)-spin-network Hilbert space can be mapped to a space K of SL(2,C) functions, where Lorentz covariance is manifest. K can be described in terms of a certain subset of the "projected" spin networks studied by Livine, Alexandrov and Dupuis. It is formed by SL(2,C) functions completely determined by their restriction on SU(2). These are square-integrable in the SU(2) scalar product, but not in the SL(2,C) one. Thus, SU(2)-spin-network states can be represented by Lorentz-covariant SL(2,C) functions, as two-component photons can be described in the Lorentz-covariant Gupta-Bleuler formalism. As shown by Wolfgang Wieland in a related paper, this manifestly Lorentz-covariant formulation can also be directly obtained from canonical quantization. We show that the spinfoam dynamics of loop quantum gravity is locally SL(2,C)-invariant in the bulk, and yields states that are precisely in K on the boundary. This c...
Progress on Nuclear Data Covariances: AFCI-1.2 Covariance Library
Oblozinsky,P.; Oblozinsky,P.; Mattoon,C.M.; Herman,M.; Mughabghab,S.F.; Pigni,M.T.; Talou,P.; Hale,G.M.; Kahler,A.C.; Kawano,T.; Little,R.C.; Young,P.G
2009-09-28
Improved neutron cross section covariances were produced for 110 materials including 12 light nuclei (coolants and moderators), 78 structural materials and fission products, and 20 actinides. Improved covariances were organized into the AFCI-1.2 covariance library in 33 energy groups, from 10⁻⁵ eV to 19.6 MeV. BNL contributed improved covariance data for the following materials: ²³Na and ⁵⁵Mn, where more detailed evaluation was done; improvements in major structural materials ⁵²Cr, ⁵⁶Fe and ⁵⁸Ni; improved estimates for remaining structural materials and fission products; improved covariances for 14 minor actinides; and estimates of mubar covariances for ²³Na and ⁵⁶Fe. LANL contributed improved covariance data for ²³⁵U and ²³⁹Pu including prompt neutron fission spectra and a completely new evaluation for ²⁴⁰Pu. A new R-matrix evaluation for ¹⁶O including mubar covariances is under completion. BNL assembled the library and performed basic testing using improved procedures including inspection of uncertainty and correlation plots for each material. The AFCI-1.2 library was released to ANL and INL in August 2009.
Accurate covariance estimation of galaxy-galaxy weak lensing: limitations of jackknife covariance
Shirasaki, Masato; Miyatake, Hironao; Takahashi, Ryuichi; Hamana, Takashi; Nishimichi, Takahiro; Murata, Ryoma
2016-01-01
We develop a method to simulate galaxy-galaxy weak lensing by utilizing all-sky, light-cone simulations. We populate a real catalog of source galaxies into a light-cone simulation realization, simulate the lensing effect on each galaxy, and then identify lensing halos that are considered to host galaxies or clusters of interest. We use the mock catalog to study the error covariance matrix of galaxy-galaxy weak lensing and find that the super-sample covariance (SSC), which arises from density fluctuations with length scales comparable with or greater than a size of survey area, gives a dominant source of the sample variance. We then compare the full covariance with the jackknife (JK) covariance, the method that estimates the covariance from the resamples of the data itself. We show that, although the JK method gives an unbiased estimator of the covariance in the shot noise or Gaussian regime, it always over-estimates the true covariance in the sample variance regime, because the JK covariance turns out to be a...
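The jackknife estimator discussed in this abstract can be written down generically. A minimal sketch, not the survey-specific pipeline above: the delete-one jackknife covariance of a statistic, with the sample mean as a case where the exact answer (S/n) is known and serves as a check.

```python
import numpy as np

# Delete-one jackknife covariance of an arbitrary vector statistic.
def jackknife_cov(data, statistic):
    n = len(data)
    reps = np.array([statistic(np.delete(data, i, axis=0)) for i in range(n)])
    dev = reps - reps.mean(axis=0)
    return (n - 1) / n * dev.T @ dev

rng = np.random.default_rng(2)
x = rng.standard_normal((100, 3))

jk = jackknife_cov(x, lambda d: d.mean(axis=0))
exact = np.cov(x, rowvar=False) / len(x)    # known covariance of the sample mean
assert np.allclose(jk, exact)
```

The abstract's point is that this unbiasedness in the shot-noise regime does not carry over to the sample-variance (super-sample) regime, where the jackknife systematically over-estimates.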
Covariant Lyapunov vectors for rigid disk systems.
Bosetti, Hadrien; Posch, Harald A
2010-10-05
We carry out extensive computer simulations to study the Lyapunov instability of a two-dimensional hard-disk system in a rectangular box with periodic boundary conditions. The system is large enough to allow the formation of Lyapunov modes parallel to the x-axis of the box. The Oseledec splitting into covariant subspaces of the tangent space is considered by computing the full set of covariant perturbation vectors co-moving with the flow in tangent space. These vectors are shown to be transversal, but generally not orthogonal to each other. Only the angle between covariant vectors associated with immediate adjacent Lyapunov exponents in the Lyapunov spectrum may become small, but the probability of this angle to vanish approaches zero. The stable and unstable manifolds are transverse to each other and the system is hyperbolic.
Manifest Covariant Hamiltonian Theory of General Relativity
Cremaschini, Claudio
2016-01-01
The problem of formulating a manifest covariant Hamiltonian theory of General Relativity in the presence of source fields is addressed, by extending the so-called "DeDonder-Weyl" formalism to the treatment of classical fields in curved space-time. The theory is based on a synchronous variational principle for the Einstein equation, formulated in terms of superabundant variables. The technique permits one to determine the continuum covariant Hamiltonian structure associated with the Einstein equation. The corresponding continuum Poisson bracket representation is also determined. The theory relies on first-principles, in the sense that the conclusions are reached in the framework of a non-perturbative covariant approach, which allows one to preserve both the 4-scalar nature of Lagrangian and Hamiltonian densities as well as the gauge invariance property of the theory.
Hui, Yi; Law, Siu Seong; Ku, Chiu Jen
2017-02-01
The covariance of the auto/cross-covariance matrix based method is studied for the damage identification of a structure, with illustrations of its advantages and limitations. The original method is extended for structures under direct white noise excitations. The auto/cross-covariance function of the measured acceleration and its corresponding derivatives are formulated analytically, and the method is modified in two new strategies to enable successful identification with much fewer sensors. Numerical examples are adopted to illustrate the improved method, and the effects of sampling frequency and sampling duration are discussed. Results show that the covariance of covariance calculated from responses of higher order modes of a structure plays an important role in the accurate identification of local damage in a structure.
Activities on covariance estimation in Japanese Nuclear Data Committee
Shibata, Keiichi [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment
1997-03-01
Described are activities on covariance estimation in the Japanese Nuclear Data Committee. Covariances are obtained from measurements by using the least-squares methods. A simultaneous evaluation was performed to deduce covariances of fission cross sections of U and Pu isotopes. A code system, KALMAN, is used to estimate covariances of nuclear model calculations from uncertainties in model parameters. (author)
Maximum Likelihood Associative Memories
Gripon, Vincent; Rabbat, Michael
2013-01-01
Associative memories are structures that store data in such a way that it can later be retrieved given only a part of its content -- a sort-of error/erasure-resilience property. They are used in applications ranging from caches and memory management in CPUs to database engines. In this work we study associative memories built on the maximum likelihood principle. We derive minimum residual error rates when the data stored comes from a uniform binary source. Second, we determine the minimum amo...
Luo, Xiaodong
2013-10-01
This article examines the influence of covariance inflation on the distance between the measured observation and the simulated (or predicted) observation with respect to the state estimate. In order for the aforementioned distance to be bounded in a certain interval, some sufficient conditions are derived, indicating that the covariance inflation factor should be bounded in a certain interval, and that the inflation bounds are related to the maximum and minimum eigenvalues of certain matrices. Implications of these analytic results are discussed, and a numerical experiment is presented to verify the validity of the analysis conducted.
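The quantity constrained by these bounds is multiplicative covariance inflation, which is easy to state concretely. A generic sketch (the inflation factor lam is a hypothetical choice; in practice it must respect bounds of the kind derived in the article): scaling ensemble anomalies by √λ scales the sample covariance by exactly λ.

```python
import numpy as np

# Multiplicative covariance inflation applied to an ensemble:
# the mean is preserved, the anomalies are scaled by sqrt(lam),
# so the sample covariance is scaled by lam.
def inflate(ensemble, lam):
    mean = ensemble.mean(axis=0)
    return mean + np.sqrt(lam) * (ensemble - mean)

rng = np.random.default_rng(3)
ens = rng.standard_normal((20, 5))          # 20 members, 5 state variables
lam = 1.1                                   # hypothetical inflation factor

inflated = inflate(ens, lam)
P0 = np.cov(ens, rowvar=False)
P1 = np.cov(inflated, rowvar=False)
assert np.allclose(P1, lam * P0)            # covariance scaled by exactly lam
```

Choosing λ too large or too small is precisely what pushes the observation-minus-prediction distance outside the interval the article analyzes.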
Notes on Cosmic Censorship Conjecture revisited: Covariantly
Hamid, Aymen I M; Maharaj, Sunil D
2016-01-01
In this paper we study the dynamics of the trapped region using a frame independent semi-tetrad covariant formalism for general Locally Rotationally Symmetric (LRS) class II spacetimes. We covariantly prove some important geometrical results for the apparent horizon, and state the necessary and sufficient conditions for a singularity to be locally naked. These conditions bring out, for the first time in a quantitative and transparent manner, the importance of the Weyl curvature in deforming and delaying the trapped region during continual gravitational collapse, making the central singularity locally visible.
A covariant formulation of classical spinning particle
Cho, J H; Kim, J K; Jin-Ho Cho; Seungjoon Hyun; Jae-Kwan Kim
1994-01-01
Covariantly we reformulate the description of a spinning particle in terms of the Lorentz group element, which entails all possible constraints explicitly; all constraints can be obtained just from the Lagrangian. Furthermore, in this covariant reformulation, the Lorentz element is considered to evolve the momentum or spin component from an arbitrary fixed frame and not just from the particle rest frame. In distinction to the usual formulation, our system is directly comparable with the pseudo-classical formulation. We get a peculiar symmetry which resembles the supersymmetry of the pseudo-classical formulation.
A violation of the covariant entropy bound?
Masoumi, Ali
2014-01-01
Several arguments suggest that the entropy density at high energy density $\rho$ should be given by the expression $s=K\sqrt{\rho/G}$, where $K$ is a constant of order unity. On the other hand the covariant entropy bound requires that the entropy on a light sheet be bounded by $A/4G$, where $A$ is the area of the boundary of the sheet. We find that in a suitably chosen cosmological geometry, the above expression for $s$ violates the covariant entropy bound. We consider different possible explanations for this fact; in particular the possibility that entropy bounds should be defined in terms of volumes of regions rather than areas of surfaces.
Maximum Correntropy Unscented Kalman Filter for Spacecraft Relative State Estimation.
Liu, Xi; Qu, Hua; Zhao, Jihong; Yue, Pengcheng; Wang, Meng
2016-09-20
A new algorithm called maximum correntropy unscented Kalman filter (MCUKF) is proposed and applied to relative state estimation in space communication networks. As is well known, the unscented Kalman filter (UKF) provides an efficient tool to solve the non-linear state estimate problem. However, the UKF usually plays well in Gaussian noises. Its performance may deteriorate substantially in the presence of non-Gaussian noises, especially when the measurements are disturbed by some heavy-tailed impulsive noises. By making use of the maximum correntropy criterion (MCC), the proposed algorithm can enhance the robustness of UKF against impulsive noises. In the MCUKF, the unscented transformation (UT) is applied to obtain a predicted state estimation and covariance matrix, and a nonlinear regression method with the MCC cost is then used to reformulate the measurement information. Finally, the UT is adopted to the measurement equation to obtain the filter state and covariance matrix. Illustrative examples demonstrate the superior performance of the new algorithm.
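Why the maximum correntropy criterion resists impulsive noise can be shown in miniature. A toy illustration, not the MCUKF itself: maximizing the correntropy Σ exp(−(x_i − m)²/(2s²)) over a location m leads to a fixed-point iteration in which heavy-tailed outliers receive exponentially small weights.

```python
import numpy as np

# Fixed-point iteration for the MCC location estimate: each step is a
# weighted mean whose Gaussian-kernel weights suppress outliers.
def mcc_location(x, s=1.0, iters=50):
    m = np.median(x)                        # robust starting point
    for _ in range(iters):
        w = np.exp(-((x - m) ** 2) / (2.0 * s ** 2))
        m = np.sum(w * x) / np.sum(w)
    return m

x = np.array([0.9, 1.0, 1.1, 100.0])        # one heavy-tailed outlier
print(np.mean(x))                           # 25.75 -- ruined by the outlier
print(mcc_location(x))                      # close to 1.0 -- outlier down-weighted
```

The MCUKF embeds this same reweighting idea in the measurement-update step of the unscented Kalman filter.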
F. Topsøe
2001-09-01
In its modern formulation, the Maximum Entropy Principle was promoted by E.T. Jaynes, starting in the mid-fifties. The principle dictates that one should look for a distribution, consistent with available information, which maximizes the entropy. However, this principle focuses only on distributions, and it appears advantageous to bring information theoretical thinking more prominently into play by also focusing on the "observer" and on coding. This view was brought forward by the second named author in the late seventies and is the view we will follow up on here. It leads to the consideration of a certain game, the Code Length Game, and, via standard game theoretical thinking, to a principle of Game Theoretical Equilibrium. This principle is more basic than the Maximum Entropy Principle in the sense that the search for one type of optimal strategies in the Code Length Game translates directly into the search for distributions with maximum entropy. In the present paper we offer a self-contained and comprehensive treatment of the fundamentals of both principles mentioned, based on a study of the Code Length Game. Though new concepts and results are presented, the reading should be instructional and accessible to a rather wide audience, at least if certain mathematical details are left aside at a first reading. The most frequently studied instance of entropy maximization pertains to the Mean Energy Model, which involves a moment constraint related to a given function, here taken to represent "energy". This type of application is very well known from the literature, with hundreds of applications pertaining to several different fields, and will also here serve as an important illustration of the theory. But our approach reaches further, especially regarding the study of continuity properties of the entropy function, and this leads to new results which allow a discussion of models with so-called entropy loss. These results have tempted us to speculate over
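The Mean Energy Model mentioned in this abstract can be solved directly. A sketch with made-up energy levels: the maximum entropy distribution under a mean-energy constraint is the Gibbs form p_i ∝ exp(−β E_i), with β chosen so that the constraint holds, here found by bisection.

```python
import numpy as np

# Maximum entropy under a mean-energy constraint: solve for beta in
# p_i proportional to exp(-beta * E_i) so that sum_i p_i E_i = target.
E = np.array([0.0, 1.0, 2.0, 3.0])          # hypothetical energy levels
target = 1.0                                # required mean energy

def mean_energy(beta):
    p = np.exp(-beta * E)
    p /= p.sum()
    return p @ E

# mean_energy is decreasing in beta, so bisection applies.
lo, hi = -50.0, 50.0
for _ in range(200):
    mid = 0.5 * (lo + hi)
    lo, hi = (lo, mid) if mean_energy(mid) < target else (mid, hi)

beta = 0.5 * (lo + hi)
p = np.exp(-beta * E); p /= p.sum()
print(p @ E)                                # ~1.0: constraint satisfied
```

In the game-theoretic reading, this Gibbs distribution is simultaneously the optimal strategy in the Code Length Game for the same constraint.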
Regularized maximum correntropy machine
Wang, Jim Jing-Yan
2015-02-12
In this paper we investigate the usage of regularized correntropy framework for learning of classifiers from noisy labels. The class label predictors learned by minimizing transitional loss functions are sensitive to the noisy and outlying labels of training samples, because the transitional loss functions are equally applied to all the samples. To solve this problem, we propose to learn the class label predictors by maximizing the correntropy between the predicted labels and the true labels of the training samples, under the regularized Maximum Correntropy Criteria (MCC) framework. Moreover, we regularize the predictor parameter to control the complexity of the predictor. The learning problem is formulated by an objective function considering the parameter regularization and MCC simultaneously. By optimizing the objective function alternately, we develop a novel predictor learning algorithm. The experiments on two challenging pattern classification tasks show that it significantly outperforms the machines with transitional loss functions.
The Massless Spectrum of Covariant Superstrings
Grassi, P A; van Nieuwenhuizen, P
2002-01-01
We obtain the correct cohomology at any ghost number for the open and closed covariant superstring, quantized by an approach which we recently developed. We define physical states by the usual condition of BRST invariance and a new condition involving a new current which is related to a grading of the underlying affine Lie algebra.
EQUIVALENT MODELS IN COVARIANCE STRUCTURE-ANALYSIS
LUIJBEN, TCW
1991-01-01
Defining equivalent models as those that reproduce the same set of covariance matrices, necessary and sufficient conditions are stated for the local equivalence of two expanded identified models M1 and M2 when fitting the more restricted model M0. Assuming several regularity conditions, the rank def
Covariant formulation of pion-nucleon scattering
Lahiff, A. D.; Afnan, I. R.
A covariant model of elastic pion-nucleon scattering based on the Bethe-Salpeter equation is presented. The kernel consists of s- and u-channel nucleon and delta poles, along with rho and sigma exchange in the t-channel. A good fit is obtained to the s- and p-wave phase shifts up to the two-pion production threshold.
Approximate methods for derivation of covariance data
Tagesen, S. [Vienna Univ. (Austria). Inst. fuer Radiumforschung und Kernphysik; Larson, D.C. [Oak Ridge National Lab., TN (United States)
1992-12-31
Several approaches for the derivation of covariance information for evaluated nuclear data files (EFF2 and ENDF/B-VI) have been developed and used at IRK and ORNL respectively. Considerations, governing the choice of a distinct method depending on the quantity and quality of available data are presented, advantages/disadvantages are discussed and examples of results are given.
Covariance of noncommutative Grassmann star product
Daoud, M.
2004-01-01
Using the coherent states of many fermionic degrees of freedom labeled by Grassmann variables, we introduce the noncommutative (precisely, non-anticommutative) Grassmann star product. The covariance of the star product under unitary transformations, particularly canonical ones, is studied. The super star product, based on supercoherent states of the supersymmetric harmonic oscillator, is also considered.
Covariance of the selfdual vector model
2004-01-01
The Poisson algebra between the fields involved in the vectorial selfdual action is obtained by means of the reduced action. The conserved charges associated with the invariance under the inhomogeneous Lorentz group are obtained and its action on the fields. The covariance of the theory is proved using the Schwinger-Dirac algebra. The spin of the excitations is discussed.
Hierarchical matrix approximation of large covariance matrices
Litvinenko, Alexander
2015-11-30
We approximate large non-structured Matérn covariance matrices of size n×n in the H-matrix format with a log-linear computational cost and storage O(kn log n), where rank k ≪ n is a small integer. Applications are: spatial statistics, machine learning and image analysis, kriging and optimal design.
Linear transformations of variance/covariance matrices
Parois, P.J.A.; Lutz, M.
2011-01-01
Many applications in crystallography require the use of linear transformations on parameters and their standard uncertainties. While the transformation of the parameters is textbook knowledge, the transformation of the standard uncertainties is more complicated and needs the full variance/covariance matrix.
Covariant Photon Quantization in the SME
Colladay, Don
2013-01-01
The Gupta-Bleuler quantization procedure is applied to the SME photon sector. A direct application of the method to the massless case fails due to an unavoidable incompleteness in the polarization states. A mass term can be included in the photon Lagrangian to rescue the quantization procedure and maintain covariance.
Covariant derivative expansion of the heat kernel
Salcedo, L.L. [Universidad de Granada, Departamento de Fisica Moderna, Granada (Spain)
2004-11-01
Using the technique of labeled operators, compact explicit expressions are given for all traced heat kernel coefficients containing zero, two, four and six covariant derivatives, and for diagonal coefficients with zero, two and four derivatives. The results apply to boundaryless flat space-times and arbitrary non-Abelian scalar and gauge background fields. (orig.)
Barndorff-Nielsen, Ole Eiler; Shephard, N.
2004-01-01
This paper analyses multivariate high frequency financial data using realized covariation. We provide a new asymptotic distribution theory for standard methods such as regression, correlation analysis, and covariance. It will be based on a fixed interval of time (e.g., a day or week), allowing the number of high frequency returns during this period to go to infinity. Our analysis allows us to study how high frequency correlations, regressions, and covariances change through time. In particular we provide confidence intervals for each of these quantities.
Unravelling Lorentz Covariance and the Spacetime Formalism
Cahill R. T.
2008-10-01
We report the discovery of an exact mapping from Galilean time and space coordinates to Minkowski spacetime coordinates, showing that Lorentz covariance and the spacetime construct are consistent with the existence of a dynamical 3-space, and absolute motion. We illustrate this mapping first with the standard theory of sound, as vibrations of a medium, which itself may be undergoing fluid motion, and which is covariant under Galilean coordinate transformations. By introducing a different non-physical class of space and time coordinates it may be cast into a form that is covariant under Lorentz transformations, wherein the speed of sound is now the invariant speed. If this latter formalism were taken as fundamental and complete we would be led to the introduction of a pseudo-Riemannian spacetime description of sound, with a metric characterised by an invariant speed of sound. This analysis is an allegory for the development of 20th century physics, but where the Lorentz covariant Maxwell equations were constructed first, and the Galilean form was later constructed by Hertz, but ignored. It is shown that the Lorentz covariance of the Maxwell equations only occurs because of the use of non-physical space and time coordinates. The use of this class of coordinates has confounded 20th century physics, and resulted in the existence of a "flowing" dynamical 3-space being overlooked. The discovery of the dynamics of this 3-space has led to the derivation of an extended gravity theory as a quantum effect, confirmed by numerous experiments and observations.
Asymptotic Normality of Quasi Maximum Likelihood Estimate in Generalized Linear Models
Yue Li; Chen Xiru
2005-01-01
For the Generalized Linear Model (GLM), under some conditions including that the specification of the expectation is correct, it is shown that the Quasi Maximum Likelihood Estimate (QMLE) of the parameter vector is asymptotically normal. It is also shown that the asymptotic covariance matrix of the QMLE reaches its minimum (in the positive-definite sense) when the specification of the covariance matrix is correct.
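The distinction drawn in this abstract can be sketched numerically. The toy example below (our own construction with made-up names and data, not the authors' code) fits a Poisson quasi-likelihood GLM with a log link and forms both the robust "sandwich" covariance A⁻¹BA⁻¹ and the model-based covariance A⁻¹; the two coincide asymptotically exactly when the variance specification is also correct, which is the minimum referred to above.

```python
import numpy as np

# Simulate Poisson responses from a log-link GLM (hypothetical setup).
rng = np.random.default_rng(0)
n, p = 2000, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
beta_true = np.array([0.5, 0.3, -0.2])
y = rng.poisson(np.exp(X @ beta_true))

# Fisher scoring for the Poisson QMLE (solves the quasi-score equations).
beta = np.zeros(p)
for _ in range(50):
    mu_hat = np.exp(X @ beta)
    A = X.T @ (mu_hat[:, None] * X)          # "bread": expected neg. Hessian
    step = np.linalg.solve(A, X.T @ (y - mu_hat))
    beta = beta + step
    if np.max(np.abs(step)) < 1e-10:
        break

mu_hat = np.exp(X @ beta)
A = X.T @ (mu_hat[:, None] * X)
B = X.T @ (((y - mu_hat) ** 2)[:, None] * X)  # "meat": outer score products
cov_sandwich = np.linalg.solve(A, B) @ np.linalg.inv(A)  # valid under misspec.
cov_model = np.linalg.inv(A)                  # attains the minimum if variance correct
```

Since the data really are Poisson here, the two covariance estimates should be close; under variance misspecification (e.g. overdispersed counts) only the sandwich form remains valid.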
Equalized near maximum likelihood detector
2012-01-01
This paper presents a new detector that mitigates the intersymbol interference introduced by bandlimited channels. The detector, named the equalized near maximum likelihood detector, combines a nonlinear equalizer with a near maximum likelihood detector. Simulation results show that the equalized near maximum likelihood detector performs better than the nonlinear equalizer but worse than the near maximum likelihood detector.
Cheeseman, Peter; Stutz, John
2005-01-01
A long-standing mystery in using Maximum Entropy (MaxEnt) is how to deal with constraints whose values are uncertain. This situation arises when constraint values are estimated from data, because of finite sample sizes. One approach to this problem, advocated by E.T. Jaynes [1], is to ignore this uncertainty, and treat the empirically observed values as exact. We refer to this as the classic MaxEnt approach. Classic MaxEnt gives point probabilities (subject to the given constraints), rather than probability densities. We develop an alternative approach that assumes that the uncertain constraint values are represented by a probability density (e.g. a Gaussian), and this uncertainty yields a MaxEnt posterior probability density. That is, the classic MaxEnt point probabilities are regarded as a multidimensional function of the given constraint values, and uncertainty on these values is transmitted through the MaxEnt function to give uncertainty over the MaxEnt probabilities. We illustrate this approach by explicitly calculating the generalized MaxEnt density for a simple but common case, then show how this can be extended numerically to the general case. This paper expands the generalized MaxEnt concept introduced in a previous paper [3].
Janes, Holly; Pepe, Margaret S
2009-06-01
Recent scientific and technological innovations have produced an abundance of potential markers that are being investigated for their use in disease screening and diagnosis. In evaluating these markers, it is often necessary to account for covariates associated with the marker of interest. Covariates may include subject characteristics, expertise of the test operator, test procedures or aspects of specimen handling. In this paper, we propose the covariate-adjusted receiver operating characteristic curve, a measure of covariate-adjusted classification accuracy. Nonparametric and semiparametric estimators are proposed, asymptotic distribution theory is provided and finite sample performance is investigated. For illustration we characterize the age-adjusted discriminatory accuracy of prostate-specific antigen as a biomarker for prostate cancer.
Chi, Zhiyi
2010-01-01
Two extensions of generalized linear models are considered. In the first one, response variables depend on multiple linear combinations of covariates. In the second one, only response variables are observed while the linear covariates are missing. We derive stochastic Lipschitz continuity results for the loss functions involved in the regression problems and apply them to get bounds on estimation error for Lasso. Multivariate comparison results on Rademacher complexity are obtained as tools to establish the stochastic Lipschitz continuity results.
Application of an Error Statistics Estimation Method to the PSAS Forecast Error Covariance Model
(none listed)
2006-01-01
In atmospheric data assimilation systems, the forecast error covariance model is an important component. However, the parameters required by a forecast error covariance model are difficult to obtain due to the absence of the truth. This study applies an error statistics estimation method to the Physical-space Statistical Analysis System (PSAS) height-wind forecast error covariance model. This method consists of two components: the first component computes the error statistics by using the National Meteorological Center (NMC) method, which is a lagged-forecast difference approach, within the framework of the PSAS height-wind forecast error covariance model; the second obtains a calibration formula to rescale the error standard deviations provided by the NMC method. The calibration is against the error statistics estimated by using a maximum-likelihood estimation (MLE) with rawinsonde height observed-minus-forecast residuals. A complete set of formulas for estimating the error statistics and for the calibration is applied to a one-month-long dataset generated by a general circulation model of the Global Model and Assimilation Office (GMAO), NASA. There is a clear constant relationship between the error statistics estimates of the NMC method and the MLE. The final product provides a full set of 6-hour error statistics required by the PSAS height-wind forecast error covariance model over the globe. The features of these error statistics are examined and discussed.
A full scale approximation of covariance functions for large spatial data sets
Sang, Huiyan
2011-10-10
Gaussian process models have been widely used in spatial statistics but face tremendous computational challenges for very large data sets. The model fitting and spatial prediction of such models typically require O(n³) operations for a data set of size n. Various approximations of the covariance functions have been introduced to reduce the computational cost. However, most existing approximations cannot simultaneously capture both the large- and the small-scale spatial dependence. A new approximation scheme is developed to provide a high quality approximation to the covariance function at both the large and the small spatial scales. The new approximation is the summation of two parts: a reduced rank covariance and a compactly supported covariance obtained by tapering the covariance of the residual of the reduced rank approximation. Whereas the former part mainly captures the large-scale spatial variation, the latter part captures the small-scale, local variation that is unexplained by the former part. By combining the reduced rank representation and sparse matrix techniques, our approach allows for efficient computation for maximum likelihood estimation, spatial prediction and Bayesian inference. We illustrate the new approach with simulated and real data sets. © 2011 Royal Statistical Society.
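The two-part summation described in this abstract can be sketched in a few lines. The toy example below is our own construction (hypothetical kernel, knot count, and taper range, not the authors' code): an exponential covariance on [0, 1] is approximated by a predictive-process-style reduced-rank part built from a small set of knots, plus the residual covariance tapered by a compactly supported correlation function.

```python
import numpy as np

def expcov(x, y, rng_=0.3):
    """Exponential covariance between 1-D location vectors (assumed kernel)."""
    return np.exp(-np.abs(x[:, None] - y[None, :]) / rng_)

def taper(x, y, gamma=0.15):
    """Spherical taper: compactly supported, zero beyond distance gamma."""
    d = np.abs(x[:, None] - y[None, :]) / gamma
    return np.where(d < 1.0, (1.0 - d) ** 2 * (1.0 + d / 2.0), 0.0)

n, m = 200, 15
s = np.linspace(0.0, 1.0, n)        # observation locations
knots = np.linspace(0.0, 1.0, m)    # knots for the reduced-rank part

C = expcov(s, s)
C_sk = expcov(s, knots)
C_kk = expcov(knots, knots)
C_lowrank = C_sk @ np.linalg.solve(C_kk, C_sk.T)   # large-scale, rank m
C_resid = C - C_lowrank                            # small-scale remainder
C_fullscale = C_lowrank + C_resid * taper(s, s)    # elementwise taper -> sparse part

err_fullscale = np.linalg.norm(C - C_fullscale) / np.linalg.norm(C)
err_lowrank = np.linalg.norm(C - C_lowrank) / np.linalg.norm(C)
```

Because the taper only discards residual covariance at moderate lags, the full-scale approximation is always at least as accurate (entrywise) as the reduced-rank part alone, while the tapered term stays sparse for large n.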
Covariant Quantization of CPT-violating Photons
Colladay, D; Noordmans, J P; Potting, R
2016-01-01
We perform the covariant canonical quantization of the CPT- and Lorentz-symmetry-violating photon sector of the minimal Standard-Model Extension, which contains a general (timelike, lightlike, or spacelike) fixed background tensor $k_{AF}^\mu$. Well-known stability issues, arising from complex-valued energy states, are solved by introducing a small photon mass, orders of magnitude below current experimental bounds. We explicitly construct a covariant basis of polarization vectors, in which the photon field can be expanded. We proceed to derive the Feynman propagator and show that the theory is microcausal. Despite the occurrence of negative energies and vacuum-Cherenkov radiation, we do not find any runaway stability issues, because the energy remains bounded from below. An important observation is that the ordering of the roots of the dispersion relations is the same in any observer frame, which allows for a frame-independent condition that selects the correct branch of the dispersion relation. This turns ou...
On covariance structure in noisy, big data
Paffenroth, Randy C.; Nong, Ryan; Du Toit, Philip C.
2013-09-01
Herein we describe theory and algorithms for detecting covariance structures in large, noisy data sets. Our work uses ideas from matrix completion and robust principal component analysis to detect the presence of low-rank covariance matrices, even when the data is noisy, distorted by large corruptions, and only partially observed. In fact, the ability to handle partial observations combined with ideas from randomized algorithms for matrix decomposition enables us to produce asymptotically fast algorithms. Herein we will provide numerical demonstrations of the methods and their convergence properties. While such methods have applicability to many problems, including mathematical finance, crime analysis, and other large-scale sensor fusion problems, our inspiration arises from applying these methods in the context of cyber network intrusion detection.
Covariant holography of a tachyonic accelerating universe
Rozas-Fernández, Alberto
2014-01-01
We apply the holographic principle to a flat dark energy dominated Friedmann-Robertson-Walker spacetime filled with a tachyon scalar field with constant equation of state $w=p/\rho$, both for $w>-1$ and $w<-1$. By using a geometrical covariant procedure, which allows the construction of holographic hypersurfaces, we have obtained for each case the position of the preferred screen and have then compared these with those obtained by using the holographic dark energy model with the future event horizon as the infrared cutoff. In the phantom scenario, one of the two obtained holographic screens is placed on the big rip hypersurface, both for the covariant holographic formalism and the holographic phantom model. It is also analysed whether the existence of these preferred screens allows a mathematically consistent formulation of fundamental theories based on the existence of an S-matrix at infinite distances.
Model selection for Poisson processes with covariates
Sart, Mathieu
2011-01-01
We observe $n$ inhomogeneous Poisson processes with covariates and aim at estimating their intensities. To handle this problem, we assume that the intensity of each Poisson process is of the form $s (\cdot, x)$ where $x$ is the covariate and where $s$ is an unknown function. We propose a model selection approach where the models are used to approximate the multivariate function $s$. We show that our estimator satisfies an oracle-type inequality under very weak assumptions both on the intensities and the models. By using a Hellinger-type loss, we establish non-asymptotic risk bounds and specify them under various kinds of assumptions on the target function $s$ such as being smooth or composite. Besides, we show that our estimation procedure is robust with respect to these assumptions.
Errors on errors - Estimating cosmological parameter covariance
Joachimi, Benjamin
2014-01-01
Current and forthcoming cosmological data analyses share the challenge of huge datasets alongside increasingly tight requirements on the precision and accuracy of extracted cosmological parameters. The community is becoming increasingly aware that these requirements not only apply to the central values of parameters but, equally important, also to the error bars. Due to non-linear effects in the astrophysics, the instrument, and the analysis pipeline, data covariance matrices are usually not well known a priori and need to be estimated from the data itself, or from suites of large simulations. In either case, the finite number of realisations available to determine data covariances introduces significant biases and additional variance in the errors on cosmological parameters in a standard likelihood analysis. Here, we review recent work on quantifying these biases and additional variances and discuss approaches to remedy these effects.
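The finite-realisation bias mentioned in this abstract has a well-known closed form for Gaussian data (the Hartlap et al. 2007 factor): the inverse of a sample covariance estimated from n_s realisations of a p-dimensional data vector overestimates the precision matrix by (n_s - 1)/(n_s - p - 2) on average. The sketch below (our own toy numbers, not this review's code) demonstrates the bias and its removal by simulation.

```python
import numpy as np

rng = np.random.default_rng(3)
p, n_s, trials = 5, 20, 2000       # data dimension, realisations, MC repeats

# True covariance is the identity, so the true precision matrix has trace p.
trace_raw = 0.0
for _ in range(trials):
    d = rng.normal(size=(n_s, p))          # n_s realisations, identity covariance
    C_hat = np.cov(d, rowvar=False)        # unbiased sample covariance (n_s - 1 divisor)
    trace_raw += np.trace(np.linalg.inv(C_hat))
trace_raw /= trials                        # biased high: factor (n_s-1)/(n_s-p-2)

# Hartlap debiasing factor for the inverse covariance (Gaussian data, n_s > p + 2).
trace_debiased = trace_raw * (n_s - p - 2) / (n_s - 1)
```

With p = 5 and n_s = 20 the raw trace overshoots the true value of 5 by roughly 46%, while the debiased estimate recovers it; the additional variance discussed in the review is not removed by this rescaling.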
Linear transformations of variance/covariance matrices.
Parois, Pascal; Lutz, Martin
2011-07-01
Many applications in crystallography require the use of linear transformations on parameters and their standard uncertainties. While the transformation of the parameters is textbook knowledge, the transformation of the standard uncertainties is more complicated and needs the full variance/covariance matrix. For the transformation of second-rank tensors it is suggested that the 3 × 3 matrix is re-written into a 9 × 1 vector. The transformation of the corresponding variance/covariance matrix is then straightforward and easily implemented into computer software. This method is applied in the transformation of anisotropic displacement parameters, the calculation of equivalent isotropic displacement parameters, the comparison of refinements in different space-group settings and the calculation of standard uncertainties of eigenvalues.
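The rule underlying this recipe can be illustrated directly (a sketch with arbitrary matrices of our own choosing, not the paper's software): for a linear map y = Tx the variance/covariance matrix transforms as C_y = T C_x Tᵀ, and for a second-rank tensor transformed as U' = R U Rᵀ, flattening U into a 9 × 1 vector turns that congruence into a single 9 × 9 linear map R ⊗ R, so the same rule applies to the tensor's full variance/covariance matrix.

```python
import numpy as np

rng = np.random.default_rng(1)

# Plain parameter case: y = T x  =>  C_y = T C_x T^T.
T = rng.normal(size=(2, 3))
A = rng.normal(size=(3, 3))
C_x = A @ A.T                      # an arbitrary symmetric positive-definite covariance
C_y = T @ C_x @ T.T

# Second-rank tensor case via the 9-vector: vec(R U R^T) = (R kron R) vec(U),
# which holds for row-major (C-order) flattening as used by NumPy's reshape.
R = rng.normal(size=(3, 3))
U = rng.normal(size=(3, 3))
K = np.kron(R, R)                  # the 9x9 map acting on the flattened tensor
lhs = (R @ U @ R.T).reshape(-1)
rhs = K @ U.reshape(-1)
# A 9x9 covariance of vec(U) would then transform as  K @ C_vecU @ K.T.
```

The 9 × 9 map K is what makes the standard-uncertainty propagation "straightforward and easily implemented" once the tensor is written as a vector.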
Chiral Four-Dimensional Heterotic Covariant Lattices
Beye, Florian
2014-01-01
In the covariant lattice formalism, chiral four-dimensional heterotic string vacua are obtained from certain even self-dual lattices which completely decompose into a left-mover and a right-mover lattice. The main purpose of this work is to classify all right-mover lattices that can appear in such a chiral model, and to study the corresponding left-mover lattices using the theory of lattice genera. In particular, the Smith-Minkowski-Siegel mass formula is employed to calculate a lower bound on the number of left-mover lattices. Also, the known relationship between asymmetric orbifolds and covariant lattices is considered in the context of our classification.
Twisted Covariant Noncommutative Self-dual Gravity
Estrada-Jimenez, S; Obregón, O; Ramírez, C
2008-01-01
A twisted covariant formulation of noncommutative self-dual gravity is presented. The recent formulation introduced by J. Wess and coworkers for constructing twisted Yang-Mills fields is used. It is shown that the noncommutative torsion is solved at any order of the $\theta$-expansion in terms of the tetrad and the extra fields of the theory. In the process the first order expansion in $\theta$ for the Plebański action is explicitly obtained.
Covariant quantization of the CBS superparticle
Grassi, P. A.; Policastro, G.; Porrati, M.
2001-07-09
The quantization of the Casalbuoni-Brink-Schwarz superparticle is performed in an explicitly covariant way using the antibracket formalism. Since an infinite number of ghost fields are required, within a suitable off-shell twistor-like formalism, we are able to fix the gauge of each ghost sector without modifying the physical content of the theory. The computation reveals that the antibracket cohomology contains only the physical degrees of freedom.
Adaptive Covariance Estimation with model selection
Biscay, Rolando; Loubes, Jean-Michel
2012-01-01
We provide in this paper a fully adaptive penalized procedure to select a covariance among a collection of models, observing i.i.d. replications of the process at fixed observation points. For this we generalize previous results of Bigot et al. and propose to use a data-driven penalty to obtain an oracle inequality for the estimator. We prove that this method is an extension to the matricial regression model of the work by Baraud.
Economical phase-covariant cloning with multiclones
Zhang Wen-Hai; Ye Liu
2009-01-01
This paper presents a very simple method to derive the explicit transformations of the optimal economical 1 → M phase-covariant cloning. The fidelity of the clones reaches the theoretical bound [D'Ariano G M and Macchiavello C 2003 Phys. Rev. A 67 042306]. The derived transformations cover the previous contributions [Delgado Y, Lamata L et al. 2007 Phys. Rev. Lett. 98 150502], in which M must be odd.
Unbiased risk estimation method for covariance estimation
Lescornel, Hélène; Chabriac, Claudie
2011-01-01
We consider a model selection estimator of the covariance of a random process. Using the Unbiased Risk Estimation (URE) method, we build an estimator of the risk which allows us to select an estimator from a collection of models. We then present an oracle inequality which ensures that the risk of the selected estimator is close to the risk of the oracle. Simulations show the efficiency of this methodology.
Risk evaluation with enhanced covariance matrix
Urbanowicz, K; Richmond, P; Holyst, Janusz A.; Richmond, Peter; Urbanowicz, Krzysztof
2006-01-01
We propose a route for the evaluation of risk based on a transformation of the covariance matrix. The approach uses a `potential' or `objective' function. This allows us to rescale data from different assets (or sources) such that each set then has similar statistical properties in terms of their probability distributions. The method is tested using historical data from both the New York and Warsaw Stock Exchanges.
Superfield quantization in Sp(2) covariant formalism
Lavrov, P M
2001-01-01
The rules of superfield Sp(2) covariant quantization of arbitrary gauge theories are generalized to the case where the gauge is introduced through derivative equations for the gauge functional. The possible realizations of the extended antibrackets are considered, and it is shown that only one of them is compatible with the transformations of extended BRST symmetry in the form of supertranslations along the Grassmann superspace coordinates.
Torsion and geometrostasis in covariant superstrings
Zachos, C.
1985-01-01
The covariant action for freely propagating heterotic superstrings consists of a metric and a torsion term with a special relative strength. It is shown that the strength for which torsion flattens the underlying 10-dimensional superspace geometry is precisely that which yields free oscillators on the light cone. This is in complete analogy with the geometrostasis of two-dimensional sigma-models with Wess-Zumino interactions.
Linear Covariance Analysis for a Lunar Lander
Jang, Jiann-Woei; Bhatt, Sagar; Fritz, Matthew; Woffinden, David; May, Darryl; Braden, Ellen; Hannan, Michael
2017-01-01
A next-generation lunar lander Guidance, Navigation, and Control (GNC) system, which includes a state-of-the-art optical sensor suite, is proposed in a concept design cycle. The design goal is to allow the lander to softly land within the prescribed landing precision. The achievement of this precision landing requirement depends on proper selection of the sensor suite. In this paper, a robust sensor selection procedure is demonstrated using a Linear Covariance (LinCov) analysis tool developed by Draper.
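The core operation behind a LinCov analysis can be sketched without any Monte Carlo runs: propagate a state covariance through linearised dynamics, P → F P Fᵀ + Q, and apply the Kalman measurement update at each sensor reading, so the achievable navigation precision falls out of the covariance alone. The numbers below are entirely made up for illustration (a 1-D position/velocity model with a hypothetical position sensor, not the Draper tool).

```python
import numpy as np

F = np.array([[1.0, 1.0],          # assumed position/velocity dynamics (unit step)
              [0.0, 1.0]])
Q = np.diag([1e-4, 1e-4])          # assumed process noise covariance
Hm = np.array([[1.0, 0.0]])        # position-only measurement model
R = np.array([[0.01]])             # assumed sensor noise variance

P = np.diag([1.0, 1.0])            # initial state uncertainty
for _ in range(20):
    P = F @ P @ F.T + Q            # time update: propagate covariance
    S = Hm @ P @ Hm.T + R          # innovation covariance
    K = P @ Hm.T @ np.linalg.inv(S)
    P = (np.eye(2) - K @ Hm) @ P   # measurement update: shrink covariance

position_sigma = float(np.sqrt(P[0, 0]))   # predicted 1-sigma landing precision
```

Swapping in candidate sensor models (different Hm and R) and comparing the resulting position_sigma against the precision requirement is, in miniature, the sensor selection trade the abstract describes.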
ANL Critical Assembly Covariance Matrix Generation
McKnight, Richard D. [Argonne National Lab. (ANL), Argonne, IL (United States); Grimm, Karl N. [Argonne National Lab. (ANL), Argonne, IL (United States)
2014-01-15
This report discusses the generation of a covariance matrix for selected critical assembly experiments that were carried out by Argonne National Laboratory (ANL) using four critical facilities, all of which are now decommissioned. The four ANL critical facilities are: ZPR-3, located at ANL-West (now Idaho National Laboratory, INL); ZPR-6 and ZPR-9, located at ANL-East (Illinois); and ZPPR, located at ANL-West.
Covariant Calculus for Effective String Theories
Dass, N. D. Hari; Matlock, Peter
2007-01-01
A covariant calculus for the construction of effective string theories is developed. Effective string theory, describing quantum string-like excitations in arbitrary dimension, has in the past been constructed using the principles of conformal field theory, but not in a systematic way. Using the freedom of choice of field definition, a particular field definition is made in a systematic way to allow an explicit construction of effective string theories with manifest exact conformal symmetry. ...
Covariates of Craving in Actively Drinking Alcoholics
Chakravorty, Subhajit; Kuna, Samuel T.; Zaharakis, Nikola; O’Brien, Charles P.; Kampman, Kyle M.; Oslin, David
2010-01-01
The goal of this cross-sectional study was to assess the relationship of alcohol craving with biopsychosocial and addiction factors that are clinically pertinent to alcoholism treatment. Alcohol craving was assessed in 315 treatment-seeking, alcohol dependent subjects using the PACS questionnaire. Standard validated questionnaires were used to evaluate a variety of biological, addiction, psychological, psychiatric, and social factors. Individual covariates of craving included age, race, probl...
How covariant is the galaxy luminosity function?
Smith, Robert E
2012-01-01
We investigate the error properties of certain galaxy luminosity function (GLF) estimators. Using a cluster expansion of the density field, we show how, for both volume and flux limited samples, the GLF estimates are covariant. The covariance matrix can be decomposed into three pieces: a diagonal term arising from Poisson noise; a sample variance term arising from large-scale structure in the survey volume; an occupancy covariance term arising due to galaxies of different luminosities inhabiting the same cluster. To evaluate the theory one needs: the mass function and bias of clusters, and the conditional luminosity function (CLF). We use a semi-analytic model (SAM) galaxy catalogue from the Millennium run N-body simulation and the CLF of Yang et al. (2003) to explore these effects. The GLF estimates from the SAM and the CLF qualitatively reproduce results from the 2dFGRS. We also measure the luminosity dependence of clustering in the SAM and find reasonable agreement with 2dFGRS results for bright galaxies. ...
Development of covariance capabilities in EMPIRE code
Herman,M.; Pigni, M.T.; Oblozinsky, P.; Mughabghab, S.F.; Mattoon, C.M.; Capote, R.; Cho, Young-Sik; Trkov, A.
2008-06-24
The nuclear reaction code EMPIRE has been extended to provide evaluation capabilities for neutron cross section covariances in the thermal, resolved resonance, unresolved resonance and fast neutron regions. The Atlas of Neutron Resonances by Mughabghab is used as a primary source of information on uncertainties at low energies. Care is taken to ensure consistency among the resonance parameter uncertainties and those for thermal cross sections. The resulting resonance parameter covariances are formatted in the ENDF-6 File 32. In the fast neutron range our methodology is based on model calculations with the code EMPIRE combined with experimental data through several available approaches. The model-based covariances can be obtained using deterministic (Kalman) or stochastic (Monte Carlo) propagation of model parameter uncertainties. We show that these two procedures yield comparable results. The Kalman filter and/or the generalized least square fitting procedures are employed to incorporate experimental information. We compare the two approaches analyzing results for the major reaction channels on ⁸⁹Y. We also discuss a long-standing issue of unreasonably low uncertainties and link it to the rigidity of the model.
Performance evaluation of sensor allocation algorithm based on covariance control
(none listed)
2005-01-01
The covariance control capability of sensor allocation algorithms based on a covariance control strategy is an important index for evaluating the performance of these algorithms. Owing to the lack of standard performance metric indices for evaluating covariance control capability, sensor allocation ratio, etc., there are no guides to follow in the design of sensor allocation algorithms for practical applications. To meet these demands, three quantified performance metric indices are presented: average covariance misadjustment quantity (ACMQ), average sensor allocation ratio (ASAR) and matrix metric influence factor (MMIF), which quantify the covariance control capability, the usage of sensor resources and the robustness of the sensor allocation algorithm, respectively. Meanwhile, a covariance adaptive sensor allocation algorithm based on a new objective function is proposed to improve the covariance control capability of the algorithm based on information gain. The experimental results show that the proposed algorithm has the advantage over preceding sensor allocation algorithms in covariance control capability and robustness.
Covariant equations for the NN-πNN system
Phillips, D. R.; Afnan, I. R.
1995-05-01
We explain the deficiencies of the current NN-πNN equations, sketch the derivation of a set of covariant NN-πNN equations and describe the ways in which these equations differ from previous sets of covariant equations.
Symmetry and Covariance of Non-relativistic Quantum Mechanics
Omote, Minoru; Kamefuchi, Susumu
2000-01-01
On the basis of a 5-dimensional form of space-time transformations non-relativistic quantum mechanics is reformulated in a manifestly covariant manner. The resulting covariance resembles that of the conventional relativistic quantum mechanics.
Yan, Yuan
2017-07-13
Gaussian likelihood inference has been studied and used extensively in both statistical theory and applications due to its simplicity. However, in practice, the assumption of Gaussianity is rarely met in the analysis of spatial data. In this paper, we study the effect of non-Gaussianity on Gaussian likelihood inference for the parameters of the Matérn covariance model. By using Monte Carlo simulations, we generate spatial data from a Tukey g-and-h random field, a flexible trans-Gaussian random field, with the Matérn covariance function, where g controls skewness and h controls tail heaviness. We use maximum likelihood based on the multivariate Gaussian distribution to estimate the parameters of the Matérn covariance function. We illustrate the effects of non-Gaussianity of the data on the estimated covariance function by means of functional boxplots. Thanks to our tailored simulation design, a comparison of the maximum likelihood estimator under both the increasing and fixed domain asymptotics for spatial data is performed. We find that the maximum likelihood estimator based on Gaussian likelihood is overall satisfactory and preferable to the non-distribution-based weighted least squares estimator for data from the Tukey g-and-h random field. We also present results for Gaussian kriging based on Matérn covariance estimates with data from the Tukey g-and-h random field and observe an overall satisfactory performance.
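The marginal transform behind the Tukey g-and-h random field is simple enough to sketch (a minimal version of our own, with illustrative g and h values; the abstract's simulation design is not reproduced here): a standard normal z is mapped through τ(z) = ((e^{gz} − 1)/g) · e^{hz²/2}, where g > 0 induces right skew and h > 0 fattens both tails, with g = h = 0 recovering z itself.

```python
import numpy as np

def tukey_gh(z, g=0.5, h=0.1):
    """Tukey g-and-h transform of standard normal draws (g: skew, h: tails)."""
    gz = (np.exp(g * z) - 1.0) / g if g != 0 else z
    return gz * np.exp(h * z ** 2 / 2.0)

rng = np.random.default_rng(2)
z = rng.normal(size=200_000)
x = tukey_gh(z)                     # trans-Gaussian marginal sample

# Positive g should produce clearly positive sample skewness.
skew = np.mean((x - x.mean()) ** 3) / np.std(x) ** 3
```

In the spatial setting, z would be a Gaussian random field with Matérn covariance and τ would be applied pointwise, which is what makes a Gaussian-likelihood fit of the covariance parameters a misspecified (though, per the abstract, often satisfactory) procedure.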
High-dimensional covariance estimation with high-dimensional data
Pourahmadi, Mohsen
2013-01-01
Methods for estimating sparse and large covariance matrices. Covariance and correlation matrices play fundamental roles in every aspect of the analysis of multivariate data collected from a variety of fields including business and economics, health care, engineering, and environmental and physical sciences. High-Dimensional Covariance Estimation provides accessible and comprehensive coverage of the classical and modern approaches for estimating covariance matrices as well as their applications to the rapidly developing areas lying at the intersection of statistics and machine learning.
Adaptive automatic sleep stage classification under covariate shift.
Khalighi, Sirvan; Sousa, Teresa; Nunes, Urbano
2012-01-01
Current automatic sleep stage classification (ASSC) methods that rely on polysomnographic (PSG) signals suffer from inter-subject differences that make them unreliable when facing new and different subjects. A novel adaptive sleep scoring method based on unsupervised domain adaptation, aiming to be robust to inter-subject variability, is proposed. We assume that the sleep quality variants follow a covariate shift model, where only the distribution of the sleep features changes between the training and test phases. The maximum overlap discrete wavelet transform (MODWT) is applied to extract relevant features from EEG, EOG and EMG signals. A set of significant features is selected by minimum-redundancy maximum-relevance (mRMR), a powerful feature selection method. Finally, an instance-weighting method, namely importance weighted kernel logistic regression (IWKLR), is applied to obtain adaptation in classification. The classification results using leave-one-out cross-validation (LOOCV) show that the proposed method performs at the state of the art in the field of ASSC.
Earth Observation System Flight Dynamics System Covariance Realism
Zaidi, Waqar H.; Tracewell, David
2016-01-01
This presentation applies a covariance realism technique to the National Aeronautics and Space Administration (NASA) Earth Observation System (EOS) Aqua and Aura spacecraft based on inferential statistics. The technique consists of three parts: collection and calculation of definitive state estimates through orbit determination, calculation of covariance realism test statistics at each covariance propagation point, and proper assessment of those test statistics.
Covariant Quantum Gravity with Continuous Quantum Geometry I: Covariant Hamiltonian Framework
Pilc, Marián
2016-01-01
The first part of the series is devoted to the formulation of the Einstein-Cartan theory within the covariant Hamiltonian framework. In the first section the general multisymplectic approach is reviewed and the notion of d-jet bundles is introduced. Since the whole Standard Model Lagrangian (including gravity) can be written as a functional of forms, the structure of d-jet bundles is more appropriate for covariant Hamiltonian analysis than the standard jet bundle approach. The definition of the local covariant Poisson bracket on the space of covariant observables is recalled. The main goal of the work is to show that the gauge group of the Einstein-Cartan theory is given by the semidirect product of the local Lorentz group and the group of spacetime diffeomorphisms. Vanishing of the integral generators of the gauge group is equivalent to the equations of motion of the Einstein-Cartan theory, and the local covariant algebra generated by Noether's currents is a closed Lie algebra.
Maximum MIMO System Mutual Information with Antenna Selection and Interference
Rick S. Blum
2004-05-01
Maximum system mutual information is considered for a group of interfering users employing single user detection and antenna selection of multiple transmit and receive antennas for flat Rayleigh fading channels with independent fading coefficients for each path. In the case considered, the only feedback of channel state information to the transmitter is that required for antenna selection, but channel state information is assumed at the receiver. The focus is on extreme cases with very weak interference or very strong interference. It is shown that the optimum signaling covariance matrix is sometimes different from the standard scaled identity matrix. In fact, this is true even for cases without interference if SNR is sufficiently weak. Further, the scaled identity matrix is actually that covariance matrix that yields worst performance if the interference is sufficiently strong.
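The claim that the scaled identity is not always the optimal signaling covariance can be checked with a toy interference-free example at low SNR (all numbers below are our own assumptions, not the paper's model): for Gaussian signaling the mutual information is log₂ det(I + (1/σ²) H Q Hᵀ), and with an unequal-gain channel and strong noise, concentrating all power on the strong eigenmode ("beamforming") beats equal power split.

```python
import numpy as np

H = np.array([[1.0, 0.0],          # assumed 2x2 real channel with unequal gains
              [0.0, 0.3]])
P, sigma2 = 1.0, 10.0              # total transmit power; strong noise => low SNR

def mutual_info(Q):
    """Gaussian mutual information in bits for signaling covariance Q."""
    return np.log2(np.linalg.det(np.eye(2) + (H @ Q @ H.T) / sigma2))

Q_identity = (P / 2.0) * np.eye(2)     # standard scaled identity: equal power split
Q_beamform = np.diag([P, 0.0])         # all power on the strong channel direction

mi_id = mutual_info(Q_identity)
mi_bf = mutual_info(Q_beamform)
```

At high SNR the comparison reverses for full-rank channels, which is why the optimal covariance depends on the operating regime, as the abstract emphasizes.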
Quantum energy inequalities and local covariance II: categorical formulation
Fewster, Christopher J.
2007-11-01
We formulate quantum energy inequalities (QEIs) in the framework of locally covariant quantum field theory developed by Brunetti, Fredenhagen and Verch, which is based on notions taken from category theory. This leads to a new viewpoint on the QEIs, and also to the identification of a new structural property of locally covariant quantum field theory, which we call local physical equivalence. Covariant formulations of the numerical range and spectrum of locally covariant fields are given and investigated, and a new algebra of fields is identified, in which fields are treated independently of their realisation on particular spacetimes and manifestly covariant versions of the functional calculus may be formulated.
Galilei covariant quantum mechanics in electromagnetic fields
H. E. Wilhelm
1985-01-01
A formulation of the quantum mechanics of charged particles in time-dependent electromagnetic fields is presented in which both the Schroedinger equation and the wave equations for the electromagnetic potentials are Galilei covariant. It is shown that the Galilean relativity principle leads to the introduction of the electromagnetic substratum in which the matter and electromagnetic waves propagate. The electromagnetic substratum effects are quantitatively significant for quantum mechanics in reference frames in which the substratum velocity w is comparable in magnitude with the velocity of light c. The electromagnetic substratum velocity w occurs explicitly in the wave equations for the electromagnetic potentials but not in the Schroedinger equation.
Inferring Meta-covariates in Classification
Harris, Keith; McMillan, Lisa; Girolami, Mark
This paper develops an alternative method for gene selection that combines model based clustering and binary classification. By averaging the covariates within the clusters obtained from model based clustering, we define “meta-covariates” and use them to build a probit regression model, thereby selecting clusters of similarly behaving genes, aiding interpretation. This simultaneous learning task is accomplished by an EM algorithm that optimises a single likelihood function which rewards good performance at both classification and clustering. We explore the performance of our methodology on a well known leukaemia dataset and use the Gene Ontology to interpret our results.
Minimal covariant observables identifying all pure states
Carmeli, Claudio, E-mail: claudio.carmeli@gmail.com [D.I.M.E., Università di Genova, Via Cadorna 2, I-17100 Savona (Italy); I.N.F.N., Sezione di Genova, Via Dodecaneso 33, I-16146 Genova (Italy); Heinosaari, Teiko, E-mail: teiko.heinosaari@utu.fi [Turku Centre for Quantum Physics, Department of Physics and Astronomy, University of Turku (Finland); Toigo, Alessandro, E-mail: alessandro.toigo@polimi.it [Dipartimento di Matematica, Politecnico di Milano, Piazza Leonardo da Vinci 32, I-20133 Milano (Italy); I.N.F.N., Sezione di Milano, Via Celoria 16, I-20133 Milano (Italy)
2013-09-02
It has been recently shown by Heinosaari, Mazzarella and Wolf (2013) [1] that an observable identifying all pure states of a d-dimensional quantum system needs at least 4d−4 outcomes, or slightly fewer (the exact number depends on d). However, no simple construction of this type of minimal observable is known. We investigate covariant observables that identify all pure states and have the minimal number of outcomes. It is shown that the existence of this kind of observable depends on the dimension of the Hilbert space.
Radiative Transfer in Special Relativity: Covariance
Duque, Mauricio; Duque, Carlos
2007-01-01
The purpose is to introduce students of undergraduate courses in physics and/or astronomy to the subject of radiative transfer in a clear and direct way. A pedagogical revision is made in order to obtain the radiative transfer equation, its restrictions and the different types of interactions present between radiation and matter. Because covariance is not fully developed in the classical literature on radiative transfer, we show detailed calculations in an explicit manner and then discuss the relativistic effects.
Covariant harmonic oscillators and coupled harmonic oscillators
Han, Daesoo; Kim, Young S.; Noz, Marilyn E.
1995-01-01
It is shown that the system of two coupled harmonic oscillators shares the basic symmetry properties with the covariant harmonic oscillator formalism, which provides a concise description of the basic features of relativistic hadrons observed in high-energy laboratories. It is shown also that the coupled oscillator system has the SL(4,r) symmetry in classical mechanics, while the present formulation of quantum mechanics can accommodate only the Sp(4,r) portion of the SL(4,r) symmetry. The possible role of the SL(4,r) symmetry in quantum mechanics is discussed.
Asymptotic Theory for the QMLE in GARCH-X Models with Stationary and Non-Stationary Covariates
Han, Heejoon; Kristensen, Dennis
This paper investigates the asymptotic properties of the Gaussian quasi-maximum-likelihood estimators (QMLE’s) of the GARCH model augmented by including an additional explanatory variable - the so-called GARCH-X model. The additional covariate is allowed to exhibit any degree of persistence as ca...
Spatiotemporal noise covariance estimation from limited empirical magnetoencephalographic data
Jun, Sung C [MS-D454, Applied Modern Physics Group, Los Alamos National Laboratory, Los Alamos, NM 87545 (United States); Plis, Sergey M [MS-D454, Applied Modern Physics Group, Los Alamos National Laboratory, Los Alamos, NM 87545 (United States); Ranken, Doug M [MS-D454, Applied Modern Physics Group, Los Alamos National Laboratory, Los Alamos, NM 87545 (United States); Schmidt, David M [MS-D454, Applied Modern Physics Group, Los Alamos National Laboratory, Los Alamos, NM 87545 (United States)
2006-11-07
The performance of parametric magnetoencephalography (MEG) and electroencephalography (EEG) source localization approaches can be degraded by the use of poor background noise covariance estimates. In general, estimation of the noise covariance for spatiotemporal analysis is difficult mainly due to the limited noise information available. Furthermore, its estimation requires a large amount of storage and a one-time but very large (and sometimes intractable) calculation of its inverse. To overcome these difficulties, noise covariance models consisting of one pair or a sum of multi-pairs of Kronecker products of spatial covariance and temporal covariance have been proposed. However, these approaches cannot be applied when the noise information is very limited, i.e., the amount of noise information is less than the degrees of freedom of the noise covariance models. A common example of this is when only averaged noise data are available for a limited prestimulus region (typically at most a few hundred milliseconds duration). For such cases, a diagonal spatiotemporal noise covariance model consisting of sensor variances with no spatial or temporal correlation has been the common choice for spatiotemporal analysis. In this work, we propose a different noise covariance model which consists of diagonal spatial noise covariance and Toeplitz temporal noise covariance. It can easily be estimated from limited noise information, and no time-consuming optimization and data-processing are required. Thus, it can be used as an alternative choice when one-pair or multi-pair noise covariance models cannot be estimated due to lack of noise information. To verify its capability we used Bayesian inference dipole analysis and a number of simulated and empirical datasets. We compared this covariance model with other existing covariance models such as conventional diagonal covariance, one-pair and multi-pair noise covariance models, when noise information is sufficient to estimate them. We
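A minimal sketch of the model family discussed above, assuming NumPy and purely synthetic "noise" data: a diagonal spatial factor (per-sensor variances), a Toeplitz temporal factor (lag-averaged autocovariance pooled over sensors), and their Kronecker product as the full spatiotemporal covariance. The sensor counts and estimators here are illustrative, not the paper's exact procedure:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical averaged prestimulus noise: n_sensors x n_times
n_sensors, n_times = 5, 40
noise = rng.standard_normal((n_sensors, n_times))

# Diagonal spatial factor: per-sensor variances, no spatial correlation
C_s = np.diag(noise.var(axis=1, ddof=1))

def toeplitz_temporal(x):
    """Toeplitz temporal factor from the lag-averaged autocovariance,
    pooled over all sensors."""
    x = x - x.mean(axis=1, keepdims=True)
    n_t = x.shape[1]
    acov = np.array([(x[:, :n_t - l] * x[:, l:]).mean() for l in range(n_t)])
    idx = np.abs(np.arange(n_t)[:, None] - np.arange(n_t)[None, :])
    return acov[idx]                       # C[i, j] = acov[|i - j|]

C_t = toeplitz_temporal(noise)

# Spatiotemporal noise covariance as a single Kronecker pair
C = np.kron(C_s, C_t)
```

The point of the structure is economy: only n_sensors + n_times numbers parametrize a (n_sensors * n_times)-dimensional covariance.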
Shrinkage covariance matrix approach for microarray data
Karjanto, Suryaefiza; Aripin, Rasimah
2013-04-01
Microarray technology was developed for the purpose of monitoring the expression levels of thousands of genes. A microarray data set typically consists of tens of thousands of genes (variables) from just dozens of samples, due to various constraints including the high cost of producing microarray chips. As a result, the widely used standard covariance estimator is not appropriate for this purpose. One commonly used multivariate technique is Hotelling's T2 statistic, a test statistic for comparing means between two groups; it requires that the number of observations (n) exceeds the number of genes (p), but in microarray studies it is common that n < p. In this study, Hotelling's T2 statistic combined with the shrinkage approach is proposed to estimate the covariance matrix for testing differential gene expression. The performance of this approach is then compared with other commonly used multivariate tests using a widely analysed diabetes data set as an illustration. The results across the methods are consistent, implying that this approach provides an alternative to existing techniques.
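The shrinkage idea above can be sketched in a few lines: blend the (singular, when n < p) sample covariance with a well-conditioned target such as a scaled identity. This is a generic Ledoit-Wolf-style sketch with a hand-picked shrinkage intensity, not the paper's exact estimator:

```python
import numpy as np

def shrink_covariance(X, lam):
    """Convex shrinkage of the sample covariance toward a scaled
    identity target: (1 - lam) * S + lam * mu * I.
    Yields a non-singular estimate even when n < p."""
    n, p = X.shape
    S = np.cov(X, rowvar=False)        # p x p sample covariance
    mu = np.trace(S) / p               # average variance as target scale
    return (1.0 - lam) * S + lam * mu * np.eye(p)

rng = np.random.default_rng(0)
X = rng.standard_normal((10, 50))      # n = 10 samples, p = 50 "genes"
S = np.cov(X, rowvar=False)            # rank-deficient: rank <= n - 1
Sigma = shrink_covariance(X, lam=0.5)  # full rank, invertible
```

With an invertible estimate in hand, a T2-type statistic involving the inverse covariance becomes computable even in the n < p regime.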
Hierarchical multivariate covariance analysis of metabolic connectivity.
Carbonell, Felix; Charil, Arnaud; Zijdenbos, Alex P; Evans, Alan C; Bedell, Barry J
2014-12-01
Conventional brain connectivity analysis is typically based on the assessment of interregional correlations. Given that correlation coefficients are derived from both covariance and variance, group differences in covariance may be obscured by differences in the variance terms. To facilitate a comprehensive assessment of connectivity, we propose a unified statistical framework that interrogates the individual terms of the correlation coefficient. We have evaluated the utility of this method for metabolic connectivity analysis using [18F]2-fluoro-2-deoxyglucose (FDG) positron emission tomography (PET) data from the Alzheimer's Disease Neuroimaging Initiative (ADNI) study. As an illustrative example of the utility of this approach, we examined metabolic connectivity in angular gyrus and precuneus seed regions of mild cognitive impairment (MCI) subjects with low and high β-amyloid burdens. This new multivariate method allowed us to identify alterations in the metabolic connectome, which would not have been detected using classic seed-based correlation analysis. Ultimately, this novel approach should be extensible to brain network analysis and broadly applicable to other imaging modalities, such as functional magnetic resonance imaging (MRI).
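The core observation above, that correlation mixes covariance and variance terms, can be demonstrated with two toy "groups" whose covariance differs while their correlation is identical. The data below are synthetic, purely to show why interrogating the terms separately matters:

```python
import numpy as np

rng = np.random.default_rng(5)

def corr_parts(x, y):
    """Return the pieces of the correlation coefficient:
    r = cov(x, y) / sqrt(var(x) * var(y))."""
    c = np.cov(x, y)
    return c[0, 1], c[0, 0], c[1, 1]

# Group B is group A with both signals scaled by 2: the covariance
# quadruples, but the correlation is unchanged, so a correlation-only
# analysis would miss the group difference entirely.
n = 1000
shared = rng.standard_normal(n)
xa = shared + 0.5 * rng.standard_normal(n)
ya = shared + 0.5 * rng.standard_normal(n)
xb, yb = 2 * xa, 2 * ya

cov_a, vx_a, vy_a = corr_parts(xa, ya)
cov_b, vx_b, vy_b = corr_parts(xb, yb)
r_a = cov_a / np.sqrt(vx_a * vy_a)
r_b = cov_b / np.sqrt(vx_b * vy_b)
```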
Covariant Entropy Bound and Padmanabhan's Emergent Paradigm
Hadi, H; Darabi, F
2016-01-01
The covariant entropy conjecture is invariant under time reversal and consequently its origin must be statistical rather than thermodynamical. This may impose a fundamental constraint on the number of degrees of freedom in nature. Indeed, the covariant entropy bound imposes an upper entropy bound for any physical system. Considering a cosmological system, we show that Padmanabhan's emergent paradigm, which indicates that the emergence of cosmic space is due to the discrepancy between the surface and bulk degrees of freedom, leads to a lower entropy bound. The lower and upper entropy bounds may coincide on the apparent horizon for the radiation field and dark energy with the equations of state $\omega=\frac{1}{3}$ and $\omega=-1$, respectively. Moreover, the maximal entropy inside the apparent horizon occurs when it is filled completely by the radiation field or dark energy. It turns out that for the dark energy case (pure de Sitter space) the holographic principle is satisfied in the sense that the number of deg...
TRANSPOSABLE REGULARIZED COVARIANCE MODELS WITH AN APPLICATION TO MISSING DATA IMPUTATION.
Allen, Genevera I; Tibshirani, Robert
2010-06-01
Missing data estimation is an important challenge with high-dimensional data arranged in the form of a matrix. Typically this data matrix is transposable, meaning that either the rows, the columns or both can be treated as features. To model transposable data, we present a modification of the matrix-variate normal, the mean-restricted matrix-variate normal, in which the rows and columns each have a separate mean vector and covariance matrix. By placing additive penalties on the inverse covariance matrices of the rows and columns, these so-called transposable regularized covariance models allow for maximum likelihood estimation of the mean and non-singular covariance matrices. Using these models, we formulate EM-type algorithms for missing data imputation in both the multivariate and transposable frameworks. We present theoretical results exploiting the structure of our transposable models that allow these models and imputation methods to be applied to high-dimensional data. Simulations and results on microarray data and the Netflix data show that these imputation techniques often outperform existing methods and offer a greater degree of flexibility.
Estimating the power spectrum covariance matrix with fewer mock samples
Pearson, David W
2015-01-01
The covariance matrices of power-spectrum (P(k)) measurements from galaxy surveys are difficult to compute theoretically. The current best practice is to estimate covariance matrices by computing a sample covariance of a large number of mock catalogues. The next generation of galaxy surveys will require thousands of large volume mocks to determine the covariance matrices to desired accuracy. The errors in the inverse covariance matrix are larger and scale with the number of P(k) bins, making the problem even more acute. We develop a method of estimating covariance matrices using a theoretically justified, few-parameter model, calibrated with mock catalogues. Using a set of 600 BOSS DR11 mock catalogues, we show that a seven parameter model is sufficient to fit the covariance matrix of BOSS DR11 P(k) measurements. The covariance computed with this method is better than the sample covariance at any number of mocks and only ~100 mocks are required for it to fully converge and the inverse covariance matrix conver...
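The baseline the paper improves on, a sample covariance across mock catalogues, is simple to write down. The sketch below uses stand-in Gaussian "mocks" rather than real P(k) measurements, and also applies the standard Hartlap factor that debiases the inverse of a sample covariance (the factor is well known in this literature, though not part of this paper's method):

```python
import numpy as np

rng = np.random.default_rng(2)

n_mocks, n_bins = 600, 30
# Stand-in "mock" P(k) vectors: one row per mock, one column per k-bin
pk_mocks = 100.0 + 5.0 * rng.standard_normal((n_mocks, n_bins))

# Sample covariance across mocks (bins are the variables)
C_hat = np.cov(pk_mocks, rowvar=False)

# An unbiased inverse-covariance estimate needs the Hartlap factor
# (n_mocks - n_bins - 2) / (n_mocks - 1); the correction worsens as
# the number of bins approaches the number of mocks.
hartlap = (n_mocks - n_bins - 2) / (n_mocks - 1)
C_inv = hartlap * np.linalg.inv(C_hat)
```

The scaling of the inverse-covariance errors with the number of bins is exactly why the paper's few-parameter model, which converges with only ~100 mocks, is attractive.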
Holographic bound in covariant loop quantum gravity
Tamaki, Takashi
2016-01-01
We investigate puncture statistics based on the covariant area spectrum in loop quantum gravity. First, we consider Maxwell-Boltzmann statistics with a Gibbs factor for punctures. We establish formulae which relate physical quantities such as the horizon area to the parameter characterizing the holographic degrees of freedom. We also perform numerical calculations and obtain consistency with these formulae. These results tell us that the holographic bound is satisfied in the large area limit and that the correction term of the entropy-area law can be proportional to the logarithm of the horizon area. Second, we also consider Bose-Einstein statistics and show that the above formulae are also useful in this case. By applying the formulae, we can understand intrinsic features of the Bose-Einstein condensate, which corresponds to the case when the horizon area almost entirely consists of punctures in the ground state. When this phenomenon occurs, the area is approximately constant against the parameter characterizing the temperature. When this ...
EMPIRE ULTIMATE EXPANSION: RESONANCES AND COVARIANCES.
HERMAN,M.; MUGHABGHAB, S.F.; OBLOZINSKY, P.; ROCHMAN, D.; PIGNI, M.T.; KAWANO, T.; CAPOTE, R.; ZERKIN, V.; TRKOV, A.; SIN, M.; CARSON, B.V.; WIENKE, H. CHO, Y.-S.
2007-04-22
The EMPIRE code system is being extended to cover the resolved and unresolved resonance region, employing proven methodology used for the production of new evaluations in the recent Atlas of Neutron Resonances. Other directions of EMPIRE expansion are uncertainties and the correlations among them. These include covariances for cross sections as well as for model parameters. In this presentation we concentrate on the KALMAN method, which has been applied in EMPIRE to the fast neutron range as well as to the resonance region. We also summarize the role of the EMPIRE code in the ENDF/B-VII.0 development. Finally, large-scale calculations and their impact on nuclear model parameters are discussed, along with the exciting perspectives offered by parallel supercomputing.
The covariance of GPS coordinates and frames
Lachièze-Rey, M
2006-01-01
We explore, in the general relativistic context, the properties of the recently introduced GPS coordinates, as well as those of the associated frames and coframes. We show that they are covariant and completely independent of any observer. We show that standard spectroscopic and astrometric observations allow any observer to measure (i) the values of the GPS coordinates at his position, (ii) the components of his four-velocity and (iii) the components of the metric in the GPS frame. This gives the system a unique value both for conceptual discussion (no frame dependence) and for practical use (the involved quantities are directly measurable): localisation, motion monitoring, astrometry, cosmography, and tests of gravitation theories. We show explicitly, in the general relativistic context, how an observer may estimate his position and motion, and reconstruct the components of the metric. This arises from two main results: the extension of the velocity fields of the probes to the whole (curved) spacetime; a...
Covariant Hyperbolization of Force-free Electrodynamics
Carrasco, Federico
2016-01-01
Force-Free Electrodynamics (FFE) is a non-linear system of equations modeling the evolution of the electromagnetic field in the presence of a magnetically dominated relativistic plasma. This configuration arises in several astrophysical scenarios, which represent exciting laboratories to understand physics in extreme regimes. We show that this system, when restricted to the correct constraint submanifold, is symmetric hyperbolic. In numerical applications it is not feasible to keep the system in that submanifold, and so it is necessary to analyze its structure first in the tangent space of that submanifold and then in a whole neighborhood of it. As already shown by Pfeiffer, a direct (or naive) formulation of this system (in the whole tangent space) results in a weakly hyperbolic system of evolution equations for which well-posedness of the initial value formulation does not follow. Using the generalized symmetric hyperbolic formalism due to Geroch, we introduce here a covariant hyperbolization for the FFE s...
Supergeometry in locally covariant quantum field theory
Hack, Thomas-Paul; Schenkel, Alexander
2015-01-01
In this paper we analyze supergeometric locally covariant quantum field theories. We develop suitable categories SLoc of super-Cartan supermanifolds, which generalize Lorentz manifolds in ordinary quantum field theory, and show that, starting from a few representation theoretic and geometric data, one can construct a functor A : SLoc --> S*Alg to the category of super-*-algebras which can be interpreted as a non-interacting super-quantum field theory. This construction turns out to disregard supersymmetry transformations as the morphism sets in the above categories are too small. We then solve this problem by using techniques from enriched category theory, which allows us to replace the morphism sets by suitable morphism supersets that contain supersymmetry transformations as their higher superpoints. We construct super-quantum field theories in terms of enriched functors eA : eSLoc --> eS*Alg between the enriched categories and show that supersymmetry transformations are appropriately described within the en...
Covariant non-commutative space–time
Jonathan J. Heckman
2015-05-01
We introduce a covariant non-commutative deformation of 3+1-dimensional conformal field theory. The deformation introduces a short-distance scale ℓp, and thus breaks scale invariance, but preserves all space–time isometries. The non-commutative algebra is defined on space–times with non-zero constant curvature, i.e. dS4 or AdS4. The construction makes essential use of the representation of CFT tensor operators as polynomials in an auxiliary polarization tensor. The polarization tensor takes an active part in the non-commutative algebra, which for dS4 takes the form of so(5,1), while for AdS4 it assembles into so(4,2). The structure of the non-commutative correlation functions hints that the deformed theory contains gravitational interactions and a Regge-like trajectory of higher spin excitations.
Variance and covariance of accumulated displacement estimates.
Bayer, Matthew; Hall, Timothy J
2013-04-01
Tracking large deformations in tissue using ultrasound can enable the reconstruction of nonlinear elastic parameters, but poses a challenge to displacement estimation algorithms. Such large deformations have to be broken up into steps, each of which contributes an estimation error to the final accumulated displacement map. The work reported here measured the error variance for single-step and accumulated displacement estimates using one-dimensional numerical simulations of ultrasound echo signals, subjected to tissue strain and electronic noise. The covariance between accumulation steps was also computed. These simulations show that errors due to electronic noise are negatively correlated between steps, and therefore accumulate slowly, whereas errors due to tissue deformation are positively correlated and accumulate quickly. For reasonably low electronic noise levels, the error variance in the accumulated displacement estimates is remarkably constant as a function of step size, but increases with the length of the tracking kernel.
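The accumulation behaviour described above follows directly from the variance of a sum: Var(Σ eᵢ) = Σ Var(eᵢ) + 2 Σᵢ<ⱼ Cov(eᵢ, eⱼ), i.e. 1ᵀC1 for a step-error covariance matrix C. The toy matrices below (nearest-neighbour correlation only, made-up magnitudes) show negative correlation slowing accumulation and positive correlation speeding it up:

```python
import numpy as np

def accumulated_error_variance(C):
    """Variance of the accumulated displacement error given the
    covariance matrix C of the per-step errors: 1^T C 1."""
    ones = np.ones(C.shape[0])
    return ones @ C @ ones

n, var = 10, 1.0

# Negatively correlated neighbouring steps (cf. electronic-noise errors)
C_neg = var * np.eye(n)
for i in range(n - 1):
    C_neg[i, i + 1] = C_neg[i + 1, i] = -0.4 * var

# Positively correlated neighbouring steps (cf. deformation-induced errors)
C_pos = var * np.eye(n)
for i in range(n - 1):
    C_pos[i, i + 1] = C_pos[i + 1, i] = +0.4 * var

v_uncorr = n * var   # independent steps, for comparison
v_neg = accumulated_error_variance(C_neg)   # 10 - 2*9*0.4 = 2.8
v_pos = accumulated_error_variance(C_pos)   # 10 + 2*9*0.4 = 17.2
```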
Baryon Spectrum Analysis using Covariant Constraint Dynamics
Whitney, Joshua; Crater, Horace
2012-03-01
The energy spectrum of the baryons is determined by treating each of them as a three-body system, with the interacting forces coming from a set of two-body potentials that depend on both the distance between the quarks and the spin and orbital angular momentum coupling terms. The Two-Body Dirac equations of constraint dynamics derived by Crater and Van Alstine, matched with the quasipotential formalism of Todorov as the underlying two-body formalism, are used, together with the three-body constraint formalism of Sazdjian, to integrate the three two-body equations into a single relativistically covariant three-body equation for the bound-state energies. The results are analyzed and compared to experiment using a best-fit method and several different algorithms, including a gradient approach and a Monte Carlo method. Results for all well-known baryons are presented and compared to experiment, with good accuracy.
Noncommutative Spacetime Symmetries from Covariant Quantum Mechanics
Alessandro Moia
2017-01-01
In the last decades, noncommutative spacetimes and their deformed relativistic symmetries have usually been studied in the context of field theory, replacing the ordinary Minkowski background with an algebra of noncommutative coordinates. However, spacetime noncommutativity can also be introduced into single-particle covariant quantum mechanics, replacing the commuting operators representing the particle’s spacetime coordinates with noncommuting ones. In this paper, we provide a full characterization of a wide class of physically sensible single-particle noncommutative spacetime models and the associated deformed relativistic symmetries. In particular, we prove that they can all be obtained from the standard Minkowski model and the usual Poincaré transformations via a suitable change of variables. Contrary to previous studies, we find that spacetime noncommutativity does not affect the dispersion relation of a relativistic quantum particle, but only the transformation properties of its spacetime coordinates under translations and Lorentz transformations.
Multisymplectic formalism and the covariant phase space
Hélein, Frédéric
2011-01-01
The formulation of a relativistic dynamical problem as a system of Hamilton equations while respecting the principles of Relativity is a delicate task, because in their classical form the Hamilton equations require the use of a time coordinate, which of course contradicts Relativity. Two interesting solutions have been proposed during the last century: the covariant phase space and the multisymplectic formalism. These two approaches were inspired at the beginning by different points of view. However, as shown in works by Kijowski-Szczyrba, Forger-Romero and Vitagliano, a synthetic vision of the two theories probably leads to the most satisfactory answer to the basic question of understanding the Hamiltonian structure of relativistic field theory.
Universal Gravitation as Lorentz-covariant Dynamics
Kauffmann, Steven Kenneth
2014-01-01
Einstein's equivalence principle implies that the acceleration of a particle in a "specified" gravitational field is independent of its mass. While this is certainly true to great accuracy for bodies we observe in the Earth's gravitational field, a hypothetical body of mass comparable to the Earth's would perceptibly cause the Earth to fall toward it, which would feed back into the strength as a function of time of the Earth's gravitational field affecting that body. In short, Einstein's equivalence principle isn't exact, but is an approximation that ignores recoil of the "specified" gravitational field, which sheds light on why general relativity has no clearly delineated native embodiment of conserved four-momentum. Einstein's 1905 relativity of course doesn't have the inexactitudes he unwittingly built into GR, so it is natural to explore a Lorentz-covariant gravitational theory patterned directly on electromagnetism, wherein a system's zero-divergence overall stress-energy, including all gravitational fee...
Flavour Covariant Formalism for Resonant Leptogenesis
Dev, P S Bhupal; Pilaftsis, Apostolos; Teresi, Daniele
2014-01-01
We present a fully flavour-covariant formalism for transport phenomena and apply it to study the flavour-dynamics of Resonant Leptogenesis (RL). We show that this formalism provides a complete and unified description of RL, consistently accounting for three distinct physical phenomena: (i) resonant mixing and (ii) coherent oscillations between different heavy-neutrino flavours, as well as (iii) quantum decoherence effects in the charged-lepton sector. We describe the necessary emergence of higher-rank tensors in flavour space, arising from the unitarity cuts of partial self-energies. Finally, we illustrate the importance of this formalism within a minimal Resonant $\\tau$-Genesis model by showing that, with the inclusion of all flavour effects in a consistent way, the final lepton asymmetry can be enhanced by up to an order of magnitude, when compared to previous partially flavour-dependent treatments.
Jarmołowski, Wojciech
2017-07-01
Maximum likelihood (ML) and restricted maximum likelihood (REML) are nowadays very popular in geophysics, geodesy and many other fields. There is also a growing number of investigations into how to calculate covariance parameters by ML/REML accurately and fast, and how to assure the convergence of the iteration steps in derivative-based approaches. The latter condition is not satisfied in many solutions, as it requires composite procedures or takes an unacceptable amount of time. This article implements efficient Fisher scoring (FS) for covariance parameter estimation in least-squares collocation (LSC). FS is optimized through Levenberg-Marquardt (LM) optimization, which provides stability of convergence when estimating the two covariance parameters necessary for LSC. The motivation for this work was the very large number of non-optimized FS implementations in the literature, as well as a deficiency of scientific and engineering applications. The present work adds to the usefulness of maximum likelihood estimation (ML) and FS and shows a new application: an alternative approach to LSC, a parametrization with no empirical covariance estimation. The results of LM damping applied to FS (FSLM) require some additional research related to the optimal LM parameter. However, the method appears to be a milestone in relation to non-optimized FS in terms of convergence. FS with LM provides reliable convergence, whose speed can be adjusted by manipulating the LM parameter.
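The damped update at the heart of FSLM can be sketched generically: replace the Fisher scoring step F⁻¹s with (F + λ·diag(F))⁻¹s, where λ is the LM parameter. The toy problem below (ML estimation of a single variance from i.i.d. data, not the paper's two-parameter LSC model) shows the update converging; λ → 0 recovers plain Fisher scoring, while larger λ gives shorter, more stable steps:

```python
import numpy as np

def fs_lm_step(theta, score, fisher, lm):
    """One Levenberg-Marquardt-damped Fisher scoring step:
    theta_new = theta + (F + lm * diag(F))^{-1} score."""
    F_damped = fisher + lm * np.diag(np.diag(fisher))
    return theta + np.linalg.solve(F_damped, score)

rng = np.random.default_rng(3)
y = 2.0 * rng.standard_normal(200)   # i.i.d. N(0, sigma^2), true sigma^2 = 4
ss = np.sum(y ** 2)
n = y.size

theta = np.array([1.0])              # initial variance guess
for _ in range(50):
    s2 = theta[0]
    # Score and Fisher information for sigma^2 under N(0, sigma^2):
    # dl/d(s2) = -n/(2 s2) + ss/(2 s2^2),  F = n/(2 s2^2)
    score = np.array([-n / (2 * s2) + ss / (2 * s2 ** 2)])
    fisher = np.array([[n / (2 * s2 ** 2)]])
    theta = fs_lm_step(theta, score, fisher, lm=0.1)
```

For this toy model the ML estimate is ss/n, and the damped iteration contracts toward it geometrically with rate controlled by the LM parameter.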
Curvature and Quantum Mechanics on Covariant Causal Sets
Gudder, Stanley
2015-01-01
This article begins by reviewing the causal set approach in discrete quantum gravity. In our version of this approach a special role is played by covariant causal sets which we call $c$-causets. The importance of $c$-causets is that they support the concepts of a natural distance function, geodesics and curvature in a discrete setting. We then discuss curvature in more detail. By considering $c$-causets with a maximum and minimum number of paths, we are able to find $c$-causets with large and small average curvature. We then briefly discuss our previous work on the inflationary period when the curvature was essentially zero. Quantum mechanics on $c$-causets is considered next. We first introduce a free wave equation for $c$-causets. We then show how the state of a particle with a specified mass (or energy) can be derived from the wave equation. It is demonstrated for small examples that quantum mechanics predicts that particles tend to move toward vertices with larger curvature.
Baryon Wave Functions in Covariant Relativistic Quark Models
Dillig, M
2002-01-01
We derive covariant baryon wave functions for arbitrary Lorentz boosts. Modeling baryons as quark-diquark systems, we reduce their manifestly covariant Bethe-Salpeter equation to a covariant 3-dimensional form by projecting on the relative quark-diquark energy. Guided by a phenomenological multigluon exchange representation of a covariant confining kernel, we derive for practical applications explicit solutions for harmonic confinement and for the MIT Bag Model. We briefly comment on the interplay of boosts and center-of-mass corrections in relativistic quark models.
Kriging approach for the experimental cross-section covariances estimation
Garlaud A.
2013-03-01
In the classical use of a generalized χ2 to determine the evaluated cross-section uncertainty, we need the covariance matrix of the experimental cross sections. The usual error propagation method to estimate the covariances is hardly usable, and the lack of data prevents use of the direct empirical estimator. We propose in this paper to apply the kriging method, which allows us to estimate the covariances via the distances between the points and with some assumptions on the covariance matrix structure. All the results are illustrated with measurements of the $^{55}_{25}$Mn nucleus.
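The kriging idea above, covariance as a function of distance between measurement points, can be sketched with a squared-exponential kernel, one common kriging choice (the kernel family, measurement energies, and hyperparameters below are hypothetical, not taken from the paper):

```python
import numpy as np

def kriging_covariance(points, sigma2, length):
    """Covariance matrix built from pairwise distances between
    measurement points using a squared-exponential kernel:
    C_ij = sigma2 * exp(-d_ij^2 / (2 length^2))."""
    d = np.abs(points[:, None] - points[None, :])
    return sigma2 * np.exp(-(d / length) ** 2 / 2.0)

# Hypothetical incident energies of four cross-section measurements
E = np.array([1.0, 1.2, 2.0, 5.0])
C = kriging_covariance(E, sigma2=0.04, length=0.5)
```

By construction the matrix is symmetric, has the assumed variance on its diagonal, and correlates nearby measurements more strongly than distant ones, which is exactly the structural assumption that substitutes for the missing empirical data.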
On the Validity of Covariate Adjustment for Estimating Causal Effects
Shpitser, Ilya; Robins, James M
2012-01-01
Identifying effects of actions (treatments) on outcome variables from observational data and causal assumptions is a fundamental problem in causal inference. This identification is made difficult by the presence of confounders, which can be related to both treatment and outcome variables. Confounders are often handled, both in theory and in practice, by adjusting for covariates, in other words considering outcomes conditioned on treatment and covariate values, weighted by the probability of observing those covariate values. In this paper, we give a complete graphical criterion for covariate adjustment, which we term the adjustment criterion, and derive some interesting corollaries of the completeness of this criterion.
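The adjustment formula the paper's criterion licenses is P(y | do(t)) = Σ_z P(z) P(y | t, z). A minimal worked instance with binary variables and a made-up joint distribution (chosen so that Z confounds T and Y) shows the adjusted quantity differing from the naive conditional P(y | t):

```python
import numpy as np

# Hypothetical joint distribution over (Z, T, Y), each binary: p[z, t, y]
p = np.array([[[0.20, 0.05], [0.05, 0.10]],
              [[0.05, 0.10], [0.10, 0.35]]])

def adjusted(p, t, y):
    """Back-door adjustment: P(y | do(t)) = sum_z P(z) * P(y | t, z)."""
    total = 0.0
    for z in (0, 1):
        pz = p[z].sum()                       # P(z)
        p_y_given_tz = p[z, t, y] / p[z, t].sum()
        total += pz * p_y_given_tz
    return total

# Naive conditional P(Y=1 | T=1), ignoring the confounder
naive = p[:, 1, 1].sum() / p[:, 1].sum()
causal = adjusted(p, 1, 1)
```

Here the adjusted and naive quantities disagree, which is precisely the confounding bias that covariate adjustment removes when the adjustment criterion holds.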
Relativistic Covariance and Quark-Diquark Wave Functions
Dillig, M
2006-01-01
We derive covariant wave functions for hadrons composed of two constituents for arbitrary Lorentz boosts. Focussing explicitly on baryons as quark-diquark systems, we reduce their manifestly covariant Bethe-Salpeter equation to covariant 3-dimensional forms by projecting on the relative quark-diquark energy. Guided by a phenomenological multi gluon exchange representation of covariant confining kernels, we derive explicit solutions for harmonic confinement and for the MIT Bag Model. We briefly sketch implications of breaking the spherical symmetry of the ground state and the transition from the instant form to the light cone via the infinite momentum frame.
Cognitive Radio Spectrum Sensing Algorithms based on Eigenvalue and Covariance methods
K.SESHU KUMAR
2013-04-01
Spectrum sensing is a fundamental task in cognitive radio systems: the secondary user must decide whether a primary user is present in the licensed spectrum. This paper proposes a sensing scheme based on the eigenvalues of the covariance matrix of the signals received by the secondary users. Two detection algorithms are suggested: one based on the ratio of the maximum to the minimum eigenvalue, the other on the ratio of the average to the minimum eigenvalue. Using results from random matrix theory (RMT), we derive the distributions of these ratios, obtain the probability of detection (Pd) and the probability of false alarm (Pfa), and compute the detection thresholds for a given Pfa. This approach alleviates the noise-uncertainty problem and outperforms energy detection when the received signal is highly correlated. The paper also considers covariance-based methods: a statistical covariance test exploits the fact that the covariance structure of the received signal differs from that of noise alone, allowing the presence of a primary user to be inferred. All algorithms are implemented using a small number of received signal samples, from which the sample covariance matrix and two test statistics are computed; signal presence is decided by comparing these statistics. These methods require no prior information about the signal, the noise power, or the channel, and apply to many signal detection problems. Simulations are carried out in two ways: with randomly generated signals, and with captured ATSC DTV broadcast signals; both confirm and verify the efficiency of the proposed
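The maximum-to-minimum eigenvalue test can be sketched as follows. The threshold here is an arbitrary placeholder; the paper derives it from random matrix theory for a target Pfa.

```python
import numpy as np

def max_min_eigen_test(samples, threshold):
    """Maximum/minimum-eigenvalue detector: declare 'signal present' when
    the eigenvalue spread of the sample covariance matrix exceeds a
    threshold. `samples` is (receivers x time)."""
    x = np.asarray(samples, dtype=float)
    r = x @ x.T / x.shape[1]            # sample covariance matrix
    eig = np.linalg.eigvalsh(r)         # ascending eigenvalues
    ratio = eig[-1] / eig[0]            # lambda_max / lambda_min
    return ratio > threshold, ratio

# White noise only: eigenvalues cluster, so the ratio stays near 1.
rng = np.random.default_rng(0)
noise = rng.standard_normal((4, 1000))
present, ratio = max_min_eigen_test(noise, threshold=2.0)
```

The appeal of the ratio statistic is that the (unknown) noise power cancels, which is exactly why the detector sidesteps the noise-uncertainty problem that plagues energy detection.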
Alfred Stadler, Franz Gross
2010-10-01
We provide a short overview of the Covariant Spectator Theory and its applications. The basic ideas are introduced through the example of a {phi}{sup 4}-type theory. High-precision models of the two-nucleon interaction are presented and the results of their use in calculations of properties of the two- and three-nucleon systems are discussed. A short summary of applications of this framework to other few-body systems is also presented.
Bryan, M.F.; Piepel, G.F.; Simpson, D.B.
1996-03-01
The high-level waste (HLW) vitrification plant at the Hanford Site was being designed to convert transuranic and high-level radioactive waste into borosilicate glass. Each batch of plant feed material must meet certain requirements related to plant performance, and the resulting glass must meet requirements imposed by the Waste Acceptance Product Specifications. Properties of a process batch and the resulting glass are largely determined by the composition of the feed material. Empirical models are being developed to estimate some property values from data on feed composition. Methods for checking and documenting compliance with feed and glass requirements must account for various types of uncertainty. This document focuses on the estimation, manipulation, and consequences of composition uncertainty, i.e., the uncertainty inherent in estimates of feed or glass composition. Three components of composition uncertainty play a role in estimating and checking feed and glass properties: batch-to-batch variability, within-batch uncertainty, and analytical uncertainty. In this document, composition uncertainty and its components are treated in terms of variances and variance components for univariate situations, and covariance matrices and covariance components for multivariate situations. The importance of variance and covariance components stems from their crucial role in properly estimating the uncertainty in values calculated from a set of observations on a process batch. Two general types of methods for estimating uncertainty are discussed: (1) methods based on data, and (2) methods based on knowledge, assumptions, and opinions about the vitrification process. Data-based methods for estimating variances and covariance matrices are well known. Several types of data-based methods exist for estimating variance components; those based on the statistical method of analysis of variance are discussed, as are the strengths and weaknesses of this approach.
OECD Maximum Residue Limit Calculator
With the goal of harmonizing the calculation of maximum residue limits (MRLs) across the Organisation for Economic Cooperation and Development, the OECD has developed an MRL Calculator. View the calculator.
Covariant and quasi-covariant quantum dynamics in Robertson-Walker space-times
Buchholz, D; Summers, S J; Buchholz, Detlev; Mund, Jens; Summers, Stephen J.
2002-01-01
We propose a canonical description of the dynamics of quantum systems on a class of Robertson-Walker space-times. We show that the worldline of an observer in such space-times determines a unique orbit in the local conformal group SO(4,1) of the space-time and that this orbit determines a unique transport on the space-time. For a quantum system on the space-time modeled by a net of local algebras, the associated dynamics is expressed via a suitable family of ``propagators''. In the best of situations, this dynamics is covariant, but more typically the dynamics will be ``quasi-covariant'' in a sense we make precise. We then show by using our technique of ``transplanting'' states and nets of local algebras from de Sitter space to Robertson-Walker space that there exist quantum systems on Robertson-Walker spaces with quasi-covariant dynamics. The transplanted state is locally passive, in an appropriate sense, with respect to this dynamics.
Khoury, Justin; Tolley, Andrew J
2014-01-01
Traditional derivations of general relativity from the graviton degrees of freedom assume space-time Lorentz covariance as an axiom. In this essay, we survey recent evidence that general relativity is the unique spatially-covariant effective field theory of the transverse, traceless graviton degrees of freedom. The Lorentz covariance of general relativity, having not been assumed in our analysis, is thus plausibly interpreted as an accidental or emergent symmetry of the gravitational sector. From this point of view, Lorentz covariance is a necessary feature of low-energy graviton dynamics, not a property of space-time. This result has revolutionary implications for fundamental physics.
Probabilistic maximum-value wind prediction for offshore environments
Staid, Andrea; Pinson, Pierre; Guikema, Seth D.
2015-01-01
We take a detailed look at the performance of statistical models for predicting the full distribution of the maximum-value wind speed in a 3 h interval. We compare linear models, generalized additive models and multivariate adaptive regression splines models using meteorological covariates such as gust speed, wind speed, convective available potential energy, Charnock, mean sea-level pressure and temperature, as given by the European Center for Medium-Range Weather Forecasts forecasts. The models are trained to predict the mean value of maximum wind speed, and the residuals from training the models are used to develop probabilistic forecasts, which result in greater value to the end-user. The models outperform traditional baseline forecast methods and achieve low predictive errors on the order of 1–2 m s−1. We show the results of their predictive accuracy for different lead times and different training methodologies.
Kampen, van D.
1999-01-01
Postulating that the predisposition to illness in Claridge's disease model of schizophrenia can be equated with the personality dimensions S or Insensitivity, (low) E or Extraversion, and N or Neuroticism, as measured by Van Kampen's 3DPT, and assuming that the mode of transmission of schizophrenia
Maximum Correntropy Unscented Kalman Filter for Spacecraft Relative State Estimation
Xi Liu
2016-09-01
A new algorithm called the maximum correntropy unscented Kalman filter (MCUKF) is proposed and applied to relative state estimation in space communication networks. As is well known, the unscented Kalman filter (UKF) provides an efficient tool for the nonlinear state estimation problem. However, the UKF performs well only under Gaussian noise; its performance may deteriorate substantially in the presence of non-Gaussian noise, especially when the measurements are disturbed by heavy-tailed impulsive noise. By making use of the maximum correntropy criterion (MCC), the proposed algorithm enhances the robustness of the UKF against impulsive noise. In the MCUKF, the unscented transformation (UT) is applied to obtain a predicted state estimate and covariance matrix, and a nonlinear regression method with the MCC cost is then used to reformulate the measurement information. Finally, the UT is applied to the measurement equation to obtain the filter state and covariance matrix. Illustrative examples demonstrate the superior performance of the new algorithm.
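The role of the correntropy cost can be illustrated with a scalar analogue. This is not the MCUKF itself (no unscented transformation), just a sketch of how an MCC-style Gaussian kernel down-weights impulsive measurement noise; the weighting of the gain and the kernel bandwidth `sigma` are illustrative assumptions.

```python
import numpy as np

def correntropy_weight(residual, sigma):
    """Gaussian-kernel weight used by the maximum correntropy criterion:
    small residuals get weights near 1, gross outliers near 0."""
    return np.exp(-residual**2 / (2 * sigma**2))

def robust_update(x_pred, p_pred, z, r, sigma=2.0):
    """Scalar Kalman-style update with the measurement noise inflated by
    the inverse MCC weight, so an outlier barely moves the estimate."""
    w = correntropy_weight(z - x_pred, sigma)
    k = p_pred / (p_pred + r / max(w, 1e-12))   # weighted gain (illustrative)
    return x_pred + k * (z - x_pred), (1 - k) * p_pred

# A gross outlier (z = 100 against a prediction of 0) is almost ignored,
# while a plausible measurement still pulls the estimate as usual.
x_out, p_out = robust_update(0.0, 1.0, 100.0, 1.0)
x_ok, p_ok = robust_update(0.0, 1.0, 0.5, 1.0)
```

A standard Kalman update would split the difference with the outlier; the kernel weight makes the effective measurement variance explode for implausible innovations, which is the qualitative behavior the MCUKF exploits.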
Into the Bulk: A Covariant Approach
Engelhardt, Netta
2016-01-01
I propose a general, covariant way of defining when one region is "deeper in the bulk" than another. This definition is formulated outside of an event horizon (or in the absence thereof) in generic geometries; it may be applied to both points and surfaces, and may be used to compare the depth of bulk points or surfaces relative to a particular boundary subregion or relative to the entire boundary. Using the recently proposed "lightcone cut" formalism, the comparative depth between two bulk points can be determined from the singularity structure of Lorentzian correlators in the dual field theory. I prove that, by this definition, causal wedges of progressively larger regions probe monotonically deeper in the bulk. The definition furthermore matches expectations in pure AdS and in static AdS black holes with isotropic spatial slices, where a well-defined holographic coordinate exists. In terms of holographic RG flow, this new definition of bulk depth makes contact with coarse-graining over both large distances ...
A fully covariant description of CMB anisotropies
Dunsby, P K S
1997-01-01
Starting from the exact non-linear description of matter and radiation, a fully covariant and gauge-invariant formula for the observed temperature anisotropy of the cosmic microwave background (CBR) radiation, expressed in terms of the electric ($E_{ab}$) and magnetic ($H_{ab}$) parts of the Weyl tensor, is obtained by integrating photon geodesics from last scattering to the point of observation today. This improves and extends earlier work by Russ et al where a similar formula was obtained by taking first order variations of the redshift. In the case of scalar (density) perturbations, $E_{ab}$ is related to the harmonic components of the gravitational potential $\\Phi_k$ and the usual dominant Sachs-Wolfe contribution $\\delta T_R/\\bar{T}_R\\sim\\Phi_k$ to the temperature anisotropy is recovered, together with contributions due to the time variation of the potential (Rees-Sciama effect), entropy and velocity perturbations at last scattering and a pressure suppression term important in low density universes. We a...
General Covariance from the Quantum Renormalization Group
Shyam, Vasudev
2016-01-01
The Quantum renormalization group (QRG) is a realisation of holography through a coarse graining prescription that maps the beta functions of a quantum field theory thought to live on the `boundary' of some space to holographic actions in the `bulk' of this space. A consistency condition will be proposed that translates into general covariance of the gravitational theory in the $D + 1$ dimensional bulk. This emerges from the application of the QRG on a planar matrix field theory living on the $D$ dimensional boundary. This will be a particular form of the Wess--Zumino consistency condition that the generating functional of the boundary theory needs to satisfy. In the bulk, this condition forces the Poisson bracket algebra of the scalar and vector constraints of the dual gravitational theory to close in a very specific manner, namely, the manner in which the corresponding constraints of general relativity do. A number of features of the gravitational theory will be fixed as a consequence of this form of the Po...
New covariant Lagrange formulation for field theories
Ootsuka, T
2012-01-01
A novel approach for Lagrange formulation for field theories is proposed in terms of Kawaguchi geometry (areal metric space). On the extended configuration space M for classical field theory composed of spacetime and field configuration space, one can define a geometrical structure called Kawaguchi areal metric K from the field Lagrangian and (M,K) can be regarded as Kawaguchi manifold. The geometrical action functional is given by K and the dynamics of field is determined by covariant Euler-Lagrange equation derived from the variational principle of the action. The solution to the equation becomes a minimal hypersurface on (M,K) which has the same dimension as spacetime. We propose that this hypersurface is what we should regard as our real spacetime manifold, while the usual way to understand spacetime is to consider it as the parameter spacetime (base manifold) of a fibre bundle. In this way, the dynamics of field and spacetime structure is unified by Kawaguchi geometry. The theory has the property of stro...
Historical Hamiltonian Dynamics: symplectic and covariant
Lachieze-Rey, M
2016-01-01
This paper presents a "historical" formalism for dynamical systems, in its Hamiltonian version (the Lagrangian version was presented in a previous paper). It is universal, in the sense that it applies equally well to time dynamics and to field theories on space-time. It is based on the notion of (Hamiltonian) histories, which are sections of the (extended) phase space bundle. It is developed in the space of sections, in contradistinction with the usual formalism, which works in the bundle manifold. In field theories, the formalism remains covariant and does not require a splitting of space-time. It treats space-time in exactly the same manner as time in usual dynamics, both being particular cases of the evolution domain. It applies without modification when the histories (the fields) are forms rather than scalar functions, as in electromagnetism or in tetrad general relativity. We develop a differential calculus in the infinite-dimensional space of histories. It admits a (generalized) symplectic form which d...
The covariance of GPS coordinates and frames
Lachieze-Rey, Marc [CNRS APC, UMR 7164 Service d' Astrophysique, CE Saclay, 91191 Gif sur Yvette Cedex (France)
2006-05-21
We explore, in the general relativistic context, the properties of the recently introduced global positioning system (GPS) coordinates, as well as those of the associated frames and coframes that they define. We show that they are covariant and completely independent of any observer. We show that standard spectroscopic and astrometric observations allow any observer to measure (i) the values of the GPS coordinates at his position (ii) the components of his 4-velocity and (iii) the components of the metric in the GPS frame. This provides this system with a unique value both for conceptual discussion (no frame dependence) and for practical use (involved quantities are directly measurable): localization, motion monitoring, astrometry, cosmography and tests of gravitation theories. We show explicitly, in the general relativistic context, how an observer may estimate his position and motion, and reconstruct the components of the metric. This arises from two main results: the extension of the velocity fields of the probes to the whole (curved) spacetime, and the identification of the components of the observer's velocity in the GPS frame with the (inversed) observed redshifts of the probes. Specific cases (non-relativistic velocities, Minkowski and Friedmann-Lemaitre spacetimes, geodesic motions) are studied in detail.
CMB lens sample covariance and consistency relations
Motloch, Pavel; Hu, Wayne; Benoit-Lévy, Aurélien
2017-02-01
Gravitational lensing information from the two and higher point statistics of the cosmic microwave background (CMB) temperature and polarization fields are intrinsically correlated because they are lensed by the same realization of structure between last scattering and observation. Using an analytic model for lens sample covariance, we show that there is one mode, separately measurable in the lensed CMB power spectra and lensing reconstruction, that carries most of this correlation. Once these measurements become lens sample variance dominated, this mode should provide a useful consistency check between the observables that is largely free of sampling and cosmological parameter errors. Violations of consistency could indicate systematic errors in the data and lens reconstruction or new physics at last scattering, any of which could bias cosmological inferences and delensing for gravitational waves. A second mode provides a weaker consistency check for a spatially flat universe. Our analysis isolates the additional information supplied by lensing in a model-independent manner but is also useful for understanding and forecasting CMB cosmological parameter errors in the extended Λ cold dark matter parameter space of dark energy, curvature, and massive neutrinos. We introduce and test a simple but accurate forecasting technique for this purpose that neither double counts lensing information nor neglects lensing in the observables.
Schwinger mechanism in linear covariant gauges
Aguilar, A C; Papavassiliou, J
2016-01-01
In this work we explore the applicability of a special gluon mass generating mechanism in the context of the linear covariant gauges. In particular, the implementation of the Schwinger mechanism in pure Yang-Mills theories hinges crucially on the inclusion of massless bound-state excitations in the fundamental nonperturbative vertices of the theory. The dynamical formation of such excitations is controlled by a homogeneous linear Bethe-Salpeter equation, whose nontrivial solutions have been studied only in the Landau gauge. Here, the form of this integral equation is derived for general values of the gauge-fixing parameter, under a number of simplifying assumptions that reduce the degree of technical complexity. The kernel of this equation consists of fully-dressed gluon propagators, for which recent lattice data are used as input, and of three-gluon vertices dressed by a single form factor, which is modelled by means of certain physically motivated Ans\\"atze. The gauge-dependent terms contributing to this ke...
Comparison between covariant and orthogonal Lyapunov vectors.
Yang, Hong-liu; Radons, Günter
2010-10-01
Two sets of vectors, covariant Lyapunov vectors (CLVs) and orthogonal Lyapunov vectors (OLVs), are currently used to characterize the linear stability of chaotic systems. A comparison is made to show their similarity and difference, especially with respect to the influence on hydrodynamic Lyapunov modes (HLMs). Our numerical simulations show that in both Hamiltonian and dissipative systems HLMs formerly detected via OLVs survive if CLVs are used instead. Moreover, the previous classification of two universality classes works for CLVs as well, i.e., the dispersion relation is linear for Hamiltonian systems and quadratic for dissipative systems, respectively. The significance of HLMs changes in different ways for Hamiltonian and dissipative systems with the replacement of OLVs with CLVs. For general dissipative systems with nonhyperbolic dynamics the long-wavelength structure in Lyapunov vectors corresponding to near-zero Lyapunov exponents is strongly reduced if CLVs are used instead, whereas for highly hyperbolic dissipative systems the significance of HLMs is nearly identical for CLVs and OLVs. In contrast the HLM significance of Hamiltonian systems is always comparable for CLVs and OLVs irrespective of hyperbolicity. We also find that in Hamiltonian systems different symmetry relations between conjugate pairs are observed for CLVs and OLVs. Especially, CLVs in a conjugate pair are statistically indistinguishable in consequence of the microreversibility of Hamiltonian systems. Transformation properties of Lyapunov exponents, CLVs, and hyperbolicity under changes of coordinate are discussed in appendices.
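The orthogonal Lyapunov vectors compared here are the by-product of the standard Benettin-style QR iteration, sketched below on a toy linear map with known exponents. Computing covariant vectors additionally requires a backward pass over the stored R factors (as in the Ginelli et al. algorithm), which is omitted.

```python
import numpy as np

def lyapunov_spectrum(jacobian, step, x0, n_iter):
    """Benettin-style QR iteration: push an orthonormal frame through the
    tangent dynamics and re-orthogonalize each step. The averaged logs of
    the R diagonal give the Lyapunov exponents; the columns of Q are the
    orthogonal Lyapunov vectors (OLVs) at the final point."""
    x = np.array(x0, dtype=float)
    q = np.eye(len(x))
    sums = np.zeros(len(x))
    for _ in range(n_iter):
        q, r = np.linalg.qr(jacobian(x) @ q)
        sums += np.log(np.abs(np.diag(r)))
        x = step(x)
    return sums / n_iter, q

# Toy check: a constant linear map with exponents log 2 and log 0.5.
A = np.diag([2.0, 0.5])
exps, olv = lyapunov_spectrum(lambda x: A, lambda x: A @ x, np.zeros(2), 50)
```

The Gram-Schmidt step is what makes OLVs orthogonal by construction, and hence generally not covariant under the dynamics, which is the distinction the study turns on.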
Covariance and objectivity in mechanics and turbulence
Frewer, Michael
2016-01-01
Form-invariance (covariance) and frame-indifference (objectivity) are two notions in classical continuum mechanics which have attracted much attention and controversy over the past decades. Particularly in turbulence modelling it seems that there still is a need for clarification. The aim and purpose of this study is fourfold: (i) To achieve consensus in general on definitions and principles when trying to establish an invariant theory for modelling constitutive structures and dynamic processes in mechanics, where special focus is put on the principle of Material Frame-Indifference (MFI). (ii) To show that in constitutive modelling MFI can only be regarded as an approximation that needs to be reduced to a weaker statement when trying to advance it to an axiom of nature. (iii) To convince that in dynamical modelling, as in turbulence, MFI may not be utilized as a modelling guideline, not even in an approximative sense. Instead, its reduced form has to be supplemented by a second, independent axiom that include...
Frame Indifferent (Truly Covariant) Formulation of Electrodynamics
Christov, Christo
2010-10-01
The electromagnetic field is considered from the point of view of continuum mechanics. It is shown that Maxwell's equations are mathematically strict corollaries of the equations of motion of an elastic incompressible liquid. If the concept of frame-indifference (material invariance) is applied to the model of elastic liquid, then the partial time derivatives have to be replaced by the convective time derivative in the momentum equations, and by the Oldroyd upper-convected derivative in the constitutive relation. The convective/convected terms involve the velocity at a point of the field, and as a result, when deriving the Maxwell form of the equations, one arrives at equations which contain both the terms of Maxwell's equations and the so-called laws of motional EMF: Faraday's, Oersted-Ampere's, and the Lorentz-force law. Thus a unification of electromagnetism is achieved. Since the new model is frame-indifferent, it is truly covariant in the sense that the governing system is invariant when changing to a coordinate frame that can accelerate or even deform in time.
IMPROVED COVARIANCE DRIVEN BLIND SUBSPACE IDENTIFICATION METHOD
ZHANG Zhiyi; FAN Jiangling; HUA Hongxing
2006-01-01
An improved covariance-driven subspace identification method is presented to identify weakly excited modes. In this method, the traditional Hankel matrix is replaced by a reformed one to enhance the identifiability of weak characteristics. The robustness of eigenparameter estimation to noise contamination is reinforced by the improved Hankel matrix. In combination with the component energy index (CEI), which indicates the vibration intensity of signal components, an alternative stabilization diagram is adopted to effectively separate spurious and physical modes. Simulation of a multiple-degree-of-freedom vibration system and an experiment on a frame structure subject to wind excitation are presented to demonstrate the improvement of the proposed blind method. The performance of this blind method is assessed in terms of its capability in extracting the weak modes as well as the accuracy of the estimated parameters. The results show that the proposed blind method gives a better estimation of the weak modes from response signals of small signal-to-noise ratio (SNR) and gives a reliable separation of spurious and physical estimates.
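The covariance-driven part of such methods starts from a block Hankel matrix of output covariances whose SVD reveals the model order. A minimal sketch on assumed toy data follows; the paper's reformed Hankel matrix and CEI-based stabilization diagram are not reproduced here, and the lag convention below is one common choice.

```python
import numpy as np

def covariance_hankel(y, n_lags):
    """Block Hankel matrix of output covariances R_i = E[y_{t+i} y_t^T],
    the starting point of covariance-driven stochastic subspace
    identification. `y` is (channels x samples)."""
    y = np.atleast_2d(y)
    n = y.shape[1]
    R = [y[:, i:] @ y[:, : n - i].T / (n - i) for i in range(2 * n_lags)]
    rows = [np.hstack(R[i : i + n_lags]) for i in range(1, n_lags + 1)]
    return np.vstack(rows)

# Two noisy channels carrying a single 1.5 Hz mode (toy data).
rng = np.random.default_rng(1)
t = np.arange(2000) * 0.01
y = np.vstack([np.sin(2 * np.pi * 1.5 * t), np.cos(2 * np.pi * 1.5 * t)])
y += 0.1 * rng.standard_normal(y.shape)

# The number of dominant singular values estimates the model order;
# U and s then feed the observability-matrix / modal-parameter step.
H = covariance_hankel(y, n_lags=10)
U, s, Vt = np.linalg.svd(H, full_matrices=False)
```

Weakly excited modes show up as small singular values barely above the noise floor, which is why the paper reshapes the Hankel matrix to boost their identifiability.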
On the bilinear covariants associated to mass dimension one spinors
Silva, J.M.H. da; Villalobos, C.H.C.; Rogerio, R.J.B. [DFQ, UNESP, Guaratingueta, SP (Brazil); Scatena, E. [Universidade Federal de Santa Catarina-CEE, Blumenau, SC (Brazil)
2016-10-15
In this paper we approach the issue of Clifford algebra basis deformation, allowing for bilinear covariants associated to Elko spinors which satisfy the Fierz-Pauli-Kofink identities. We present a complete analysis of covariance, taking into account the involved dual structure associated to Elko spinors. Moreover, the possible generalizations to the recently presented new dual structure are performed. (orig.)
Validity of covariance models for the analysis of geographical variation
Guillot, Gilles; Schilling, Rene L.; Porcu, Emilio
2014-01-01
1. Due to the availability of large molecular data-sets, covariance models are increasingly used to describe the structure of genetic variation as an alternative to more heavily parametrised biological models. 2. We focus here on a class of parametric covariance models that received sustained...
Perturbative approach to covariance matrix of the matter power spectrum
Mohammed, Irshad [Fermilab; Seljak, Uros [UC, Berkeley, Astron. Dept.; Vlah, Zvonimir [Stanford U., ITP
2016-06-30
We evaluate the covariance matrix of the matter power spectrum using perturbation theory up to dominant terms at 1-loop order and compare it to numerical simulations. We decompose the covariance matrix into the disconnected (Gaussian) part, trispectrum from the modes outside the survey (beat coupling or super-sample variance), and trispectrum from the modes inside the survey, and show how the different components contribute to the overall covariance matrix. We find the agreement with the simulations is at a 10\\% level up to $k \\sim 1 h {\\rm Mpc^{-1}}$. We show that all the connected components are dominated by the large-scale modes ($k<0.1 h {\\rm Mpc^{-1}}$), regardless of the value of the wavevectors $k,\\, k'$ of the covariance matrix, suggesting that one must be careful in applying the jackknife or bootstrap methods to the covariance matrix. We perform an eigenmode decomposition of the connected part of the covariance matrix, showing that at higher $k$ it is dominated by a single eigenmode. The full covariance matrix can be approximated as the disconnected part only, with the connected part being treated as an external nuisance parameter with a known scale dependence, and a known prior on its variance for a given survey volume. Finally, we provide a prescription for how to evaluate the covariance matrix from small box simulations without the need to simulate large volumes.
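The disconnected (Gaussian) part of this decomposition is diagonal in the band powers, Cov_ij = 2 P(k_i)^2 / N_i δ_ij with N_i the number of modes per bin. A minimal sketch with invented numbers (a real analysis computes N_i from the survey volume and bin widths):

```python
import numpy as np

def gaussian_covariance(pk, n_modes):
    """Disconnected (Gaussian) part of the power spectrum covariance:
    Cov_ij = 2 P(k_i)^2 / N_i on the diagonal, zero elsewhere. The
    connected (trispectrum) terms would add off-diagonal structure."""
    pk = np.asarray(pk, dtype=float)
    return np.diag(2.0 * pk**2 / np.asarray(n_modes, dtype=float))

# Toy band powers and mode counts for three k bins.
cov = gaussian_covariance([1e4, 5e3, 2e3], [100, 400, 1600])
```

This is the "disconnected part only" approximation the abstract proposes, with the connected contribution folded into a nuisance parameter of known scale dependence.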
Validity of covariance models for the analysis of geographical variation
Guillot, Gilles; Schilling, Rene L.; Porcu, Emilio
2014-01-01
attention lately and show that the conditions under which they are valid mathematical models have been overlooked so far. 3. We provide rigorous results for the construction of valid covariance models in this family. 4. We also outline how to construct alternative covariance models for the analysis...
Covariation Is a Poor Measure of Molecular Coevolution.
Talavera, David; Lovell, Simon C; Whelan, Simon
2015-09-01
Recent developments in the analysis of amino acid covariation are leading to breakthroughs in protein structure prediction, protein design, and prediction of the interactome. It is assumed that observed patterns of covariation are caused by molecular coevolution, where substitutions at one site affect the evolutionary forces acting at neighboring sites. Our theoretical and empirical results cast doubt on this assumption. We demonstrate that the strongest coevolutionary signal is a decrease in evolutionary rate and that unfeasibly long times are required to produce coordinated substitutions. We find that covarying substitutions are mostly found on different branches of the phylogenetic tree, indicating that they are independent events that may or may not be attributable to coevolution. These observations undermine the hypothesis that molecular coevolution is the primary cause of the covariation signal. In contrast, we find that the pairs of residues with the strongest covariation signal tend to have low evolutionary rates, and that it is this low rate that gives rise to the covariation signal. Slowly evolving residue pairs are disproportionately located in the protein's core, which explains covariation methods' ability to detect pairs of residues that are close in three dimensions. These observations lead us to propose the "coevolution paradox": The strength of coevolution required to cause coordinated changes means the evolutionary rate is so low that such changes are highly unlikely to occur. As modern covariation methods may lead to breakthroughs in structural genomics, it is critical to recognize their biases and limitations.
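A common covariation score is the mutual information between two alignment columns; the toy columns below are invented for illustration. Note that, per the abstract's caution, a high MI says only that the columns covary, not that coevolution caused it.

```python
from collections import Counter
from math import log

def column_mi(col_a, col_b):
    """Mutual information between two alignment columns,
    MI = sum_ab p(a,b) log[ p(a,b) / (p(a) p(b)) ],
    a standard covariation score for residue pairs."""
    n = len(col_a)
    pa, pb = Counter(col_a), Counter(col_b)
    pab = Counter(zip(col_a, col_b))
    return sum((c / n) * log((c / n) / (pa[a] / n * pb[b] / n))
               for (a, b), c in pab.items())

# Perfectly covarying columns versus independent ones (4 sequences).
mi_high = column_mi("AAGG", "LLVV")   # substitutions track each other
mi_low = column_mi("ALAL", "GGVV")    # substitutions are unrelated
```

Scores like this are exactly what the authors show can be inflated by shared low evolutionary rates, since slowly evolving (conserved) pairs also produce coordinated-looking columns.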
A pure S-wave covariant model for the nucleon
Gross, F; Peña, M T; Gross, Franz
2006-01-01
Using the manifestly covariant spectator theory, and modeling the nucleon as a system of three constituent quarks with their own electromagnetic structure, we show that all four nucleon electromagnetic form factors can be very well described by a manifestly covariant nucleon wave function with zero orbital angular momentum.
On the bilinear covariants associated to mass dimension one spinors
da Silva, J M Hoff; Rogerio, R J Bueno; Scatena, E
2016-01-01
In this paper we approach the issue of Clifford algebra basis deformation, allowing for bilinear covariants associated to Elko spinors which satisfy the Fierz-Pauli-Kofink identities. We present a complete analysis of covariance, taking into account the involved dual structure associated to Elko. Moreover, the possible generalizations to the recently presented new dual structure are performed.
Perturbative approach to covariance matrix of the matter power spectrum
Mohammed, Irshad; Seljak, Uroš; Vlah, Zvonimir
2017-04-01
We evaluate the covariance matrix of the matter power spectrum using perturbation theory up to dominant terms at 1-loop order and compare it to numerical simulations. We decompose the covariance matrix into the disconnected (Gaussian) part, trispectrum from the modes outside the survey (supersample variance) and trispectrum from the modes inside the survey, and show how the different components contribute to the overall covariance matrix. We find the agreement with the simulations is at a 10 per cent level up to k ˜ 1 h Mpc-1. We show that all the connected components are dominated by the large-scale modes (k < 0.1 h Mpc-1), regardless of the value of the wavevectors k, k' of the covariance matrix, suggesting that one must be careful in applying the jackknife or bootstrap methods to the covariance matrix. We perform an eigenmode decomposition of the connected part of the covariance matrix, showing that at higher k, it is dominated by a single eigenmode. The full covariance matrix can be approximated as the disconnected part only, with the connected part being treated as an external nuisance parameter with a known scale dependence, and a known prior on its variance for a given survey volume. Finally, we provide a prescription for how to evaluate the covariance matrix from small box simulations without the need to simulate large volumes.
Theory of Covariance Equivalent ARMAV Models of Civil Engineering Structures
Andersen, P.; Brincker, Rune; Kirkegaard, Poul Henning
1996-01-01
In this paper the theoretical background for using covariance equivalent ARMAV models in modal analysis is discussed. It is shown how to obtain a covariance equivalent ARMA model for a univariate linear second order continuous-time system excited by Gaussian white noise. This result is generalized...
Maximum margin Bayesian network classifiers.
Pernkopf, Franz; Wohlmayr, Michael; Tschiatschek, Sebastian
2012-03-01
We present a maximum margin parameter learning algorithm for Bayesian network classifiers using a conjugate gradient (CG) method for optimization. In contrast to previous approaches, we maintain the normalization constraints on the parameters of the Bayesian network during optimization, i.e., the probabilistic interpretation of the model is not lost. This enables us to handle missing features in discriminatively optimized Bayesian networks. In experiments, we compare the classification performance of maximum margin parameter learning to conditional likelihood and maximum likelihood learning approaches. Discriminative parameter learning significantly outperforms generative maximum likelihood estimation for naive Bayes and tree augmented naive Bayes structures on all considered data sets. Furthermore, maximizing the margin dominates the conditional likelihood approach in terms of classification performance in most cases. We provide results for a recently proposed maximum margin optimization approach based on convex relaxation. While the classification results are highly similar, our CG-based optimization is computationally up to orders of magnitude faster. Margin-optimized Bayesian network classifiers achieve classification performance comparable to support vector machines (SVMs) using fewer parameters. Moreover, we show that unanticipated missing feature values during classification can be easily processed by discriminatively optimized Bayesian network classifiers, a case where discriminative classifiers usually require mechanisms to complete unknown feature values in the data first.
Maximum Entropy in Drug Discovery
Chih-Yuan Tseng
2014-07-01
Drug discovery applies multidisciplinary approaches, either experimentally, computationally or both, to identify lead compounds to treat various diseases. While conventional approaches have yielded many US Food and Drug Administration (FDA)-approved drugs, researchers continue investigating and designing better approaches to increase the success rate in the discovery process. In this article, we provide an overview of the current strategies and point out where and how the method of maximum entropy has been introduced in this area. The maximum entropy principle has its roots in thermodynamics, yet since Jaynes' pioneering work in the 1950s, the maximum entropy principle has not only been used as a physics law, but also as a reasoning tool that allows us to process information in hand with the least bias. Its applicability in various disciplines has been abundantly demonstrated. We give several examples of applications of maximum entropy in different stages of drug discovery. Finally, we discuss a promising new direction in drug discovery that is likely to hinge on the ways of utilizing maximum entropy.
Gaussian covariance matrices for anisotropic galaxy clustering measurements
Grieb, Jan Niklas; Salazar-Albornoz, Salvador; Vecchia, Claudio dalla
2015-01-01
Measurements of the redshift-space galaxy clustering have been a prolific source of cosmological information in recent years. In the era of precision cosmology, accurate covariance estimates are an essential step for the validation of galaxy clustering models of the redshift-space two-point statistics. For cases where only a limited set of simulations is available, assessing the data covariance is not possible or only leads to a noisy estimate. Also, relying on simulated realisations of the survey data means that tests of the cosmology dependence of the covariance are expensive. With these two points in mind, this work aims at presenting a simple theoretical model for the linear covariance of anisotropic galaxy clustering observations with synthetic catalogues. Considering the Legendre moments (`multipoles') of the two-point statistics and projections into wide bins of the line-of-sight parameter (`clustering wedges'), we describe the modelling of the covariance for these anisotropic clustering measurements f...
HIGH DIMENSIONAL COVARIANCE MATRIX ESTIMATION IN APPROXIMATE FACTOR MODELS.
Fan, Jianqing; Liao, Yuan; Mincheva, Martina
2011-01-01
The variance covariance matrix plays a central role in the inferential theories of high dimensional factor models in finance and economics. Popular regularization methods of directly exploiting sparsity are not directly applicable to many financial problems. Classical methods of estimating the covariance matrices are based on the strict factor models, assuming independent idiosyncratic components. This assumption, however, is restrictive in practical applications. By assuming sparse error covariance matrix, we allow the presence of the cross-sectional correlation even after taking out common factors, and it enables us to combine the merits of both methods. We estimate the sparse covariance using the adaptive thresholding technique as in Cai and Liu (2011), taking into account the fact that direct observations of the idiosyncratic components are unavailable. The impact of high dimensionality on the covariance matrix estimation based on the factor structure is then studied.
High-dimensional covariance matrix estimation in approximate factor models
Fan, Jianqing; Mincheva, Martina (DOI: 10.1214/11-AOS944)
2012-01-01
The variance-covariance matrix plays a central role in the inferential theories of high-dimensional factor models in finance and economics. Popular regularization methods of directly exploiting sparsity are not directly applicable to many financial problems. Classical methods of estimating the covariance matrices are based on the strict factor models, assuming independent idiosyncratic components. This assumption, however, is restrictive in practical applications. By assuming sparse error covariance matrix, we allow the presence of the cross-sectional correlation even after taking out common factors, and it enables us to combine the merits of both methods. We estimate the sparse covariance using the adaptive thresholding technique as in Cai and Liu [J. Amer. Statist. Assoc. 106 (2011) 672-684], taking into account the fact that direct observations of the idiosyncratic components are unavailable. The impact of high dimensionality on the covariance matrix estimation based on the factor structure is then studi...
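The two-stage idea described in these abstracts can be sketched in a few lines: extract common factors by principal components, then threshold the off-diagonal entries of the residual covariance. This is only a rough sketch in the spirit of the approach; the simple universal threshold below stands in for the adaptive entry-wise rule of Cai and Liu used in the paper:

```python
import numpy as np

def factor_thresholded_covariance(X, n_factors, c=0.5):
    """Factor-based covariance estimate with a thresholded residual.

    X is a (T, p) data matrix. The leading principal components supply
    the low-rank (common-factor) part; the idiosyncratic part is the
    residual covariance with small off-diagonal entries zeroed out.
    The threshold c * sqrt(log(p) / T) is a simplified stand-in for
    the adaptive threshold in the paper.
    """
    T, p = X.shape
    Xc = X - X.mean(axis=0)
    S = Xc.T @ Xc / T                        # sample covariance
    vals, vecs = np.linalg.eigh(S)           # ascending eigenvalues
    lead = vecs[:, -n_factors:]              # leading eigenvectors
    low_rank = lead @ np.diag(vals[-n_factors:]) @ lead.T
    R = S - low_rank                         # residual (idiosyncratic) part
    tau = c * np.sqrt(np.log(p) / T)
    R_thr = np.where(np.abs(R) >= tau, R, 0.0)
    np.fill_diagonal(R_thr, np.diag(R))      # never threshold variances
    return low_rank + R_thr
```

The resulting estimate keeps the cross-sectional correlation that survives thresholding, rather than forcing strict diagonality as in classical strict factor models.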
Newton law in covariant unimodular $F(R)$ gravity
Nojiri, S; Oikonomou, V K
2016-01-01
We propose a covariant ghost-free unimodular $F(R)$ gravity theory, which contains a three-form field, and study its structure using the analogy of the proposed theory with a quantum system describing a charged particle in a uniform magnetic field. Newton's law in non-covariant unimodular $F(R)$ gravity as well as in unimodular Einstein gravity is derived and shown to be just the same as in General Relativity. The derivation of Newton's law in covariant unimodular $F(R)$ gravity shows that it is modified precisely in the same way as in the ordinary $F(R)$ theory. We also demonstrate that the cosmology of a Friedmann-Robertson-Walker background is equivalent in the non-covariant and covariant formulations of unimodular $F(R)$ theory.
Truccolo, Wilson; Eden, Uri T; Fellows, Matthew R; Donoghue, John P; Brown, Emery N
2005-02-01
Multiple factors simultaneously affect the spiking activity of individual neurons. Determining the effects and relative importance of these factors is a challenging problem in neurophysiology. We propose a statistical framework based on the point process likelihood function to relate a neuron's spiking probability to three typical covariates: the neuron's own spiking history, concurrent ensemble activity, and extrinsic covariates such as stimuli or behavior. The framework uses parametric models of the conditional intensity function to define a neuron's spiking probability in terms of the covariates. The discrete time likelihood function for point processes is used to carry out model fitting and model analysis. We show that, by modeling the logarithm of the conditional intensity function as a linear combination of functions of the covariates, the discrete time point process likelihood function is readily analyzed in the generalized linear model (GLM) framework. We illustrate our approach for both GLM and non-GLM likelihood functions using simulated data and multivariate single-unit activity data simultaneously recorded from the motor cortex of a monkey performing a visuomotor pursuit-tracking task. The point process framework provides a flexible, computationally efficient approach for maximum likelihood estimation, goodness-of-fit assessment, residual analysis, model selection, and neural decoding. The framework thus allows for the formulation and analysis of point process models of neural spiking activity that readily capture the simultaneous effects of multiple covariates and enables the assessment of their relative importance.
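The GLM reduction described above is concrete: with a log-linear conditional intensity, the discrete-time point-process log-likelihood is Σ_t [y_t log(λ_t Δ) − λ_t Δ], and its score is the familiar Poisson-regression gradient. A minimal sketch with simulated data (plain gradient ascent for transparency; standard GLM software would use IRLS):

```python
import numpy as np

def fit_spiking_glm(X, y, dt, lr=1.0, n_iter=2000):
    """ML fit of a log-linear conditional intensity lambda_t = exp(X_t . beta)
    by gradient ascent on the discrete-time point-process log-likelihood
    sum_t [y_t * log(lambda_t * dt) - lambda_t * dt].
    y holds 0/1 spike indicators per time bin of width dt."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        lam_dt = np.exp(X @ beta) * dt
        beta += lr * X.T @ (y - lam_dt) / len(y)  # score-function step
    return beta

# Simulated example: 20 Hz baseline rate modulated by one covariate.
rng = np.random.default_rng(0)
n, dt = 50_000, 0.001
X = np.column_stack([np.ones(n), rng.standard_normal(n)])
true_beta = np.array([np.log(20.0), 0.3])
y = (rng.random(n) < np.exp(X @ true_beta) * dt).astype(float)
beta_hat = fit_spiking_glm(X, y, dt)
```

In practice the design matrix X would contain the covariate classes named in the abstract: lagged spike-history terms, ensemble activity, and extrinsic stimulus or behavioral signals.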
Fibre-optics implementation of asymmetric phase-covariant quantum cloner
Bartuskova, L; Dusek, M; Fiurasek, J; Soubusta, J; Bartuskova, Lucie; Cernoch, Antonin; Dusek, Miloslav; Fiurasek, Jaromir; Soubusta, Jan
2006-01-01
We present the experimental realization of optimal symmetric and asymmetric phase-covariant 1->2 cloning of qubit states using fiber optics. The state of each qubit is encoded into a single photon which can propagate through two optical fibers. The operation of our device is based on one- and two-photon interference. We have demonstrated the creation of two copies of any state of a qubit from the equator of the Bloch sphere. The measured fidelities of both copies are close to the theoretical values and they surpass the theoretical maximum obtainable with the universal cloner.
Fiber-optics implementation of an asymmetric phase-covariant quantum cloner.
Bartůsková, Lucie; Dusek, Miloslav; Cernoch, Antonín; Soubusta, Jan; Fiurásek, Jaromír
2007-09-21
We present the experimental realization of optimal symmetric and asymmetric phase-covariant 1-->2 cloning of qubit states using fiber optics. The state of each qubit is encoded into a single photon which can propagate through two optical fibers. The operation of our device is based on one- and two-photon interference. We have demonstrated the creation of two copies for a wide range of qubit states from the equator of the Bloch sphere. The measured fidelities of both copies are close to the theoretical values and they surpass the theoretical maximum obtainable with the universal cloner.
Recurrence Analysis of Eddy Covariance Fluxes
Lange, Holger; Flach, Milan; Foken, Thomas; Hauhs, Michael
2015-04-01
The eddy covariance (EC) method is one key method to quantify fluxes in biogeochemical cycles in general, and carbon and energy transport across the vegetation-atmosphere boundary layer in particular. EC data from the worldwide net of flux towers (Fluxnet) have also been used to validate biogeochemical models. The high resolution data are usually obtained at 20 Hz sampling rate but are affected by missing values and other restrictions. In this contribution, we investigate the nonlinear dynamics of EC fluxes using Recurrence Analysis (RA). High resolution data from the site DE-Bay (Waldstein-Weidenbrunnen) and fluxes calculated at half-hourly resolution from eight locations (part of the La Thuile dataset) provide a set of very long time series to analyze. After careful quality assessment and Fluxnet standard gapfilling pretreatment, we calculate properties and indicators of the recurrent structure based both on Recurrence Plots as well as Recurrence Networks. Time series of RA measures obtained from windows moving along the time axis are presented. Their interpretation is guided by five different questions: (1) Is RA able to discern periods where the (atmospheric) conditions are particularly suitable to obtain reliable EC fluxes? (2) Is RA capable of detecting dynamical transitions (different behavior) beyond those obvious from visual inspection? (3) Does RA contribute to an understanding of the nonlinear synchronization between EC fluxes and atmospheric parameters, which is crucial both for improving carbon flux models and for reliable interpolation of gaps? (4) Is RA able to recommend an optimal time resolution for measuring EC data and for analyzing EC fluxes? (5) Is it possible to detect non-trivial periodicities with a global RA? We will demonstrate that the answers to all five questions are affirmative, and that RA provides insights into EC dynamics not easily obtained otherwise.
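The object underlying both Recurrence Plots and Recurrence Networks is the binary recurrence matrix, R_ij = 1 iff the embedded states at times i and j are closer than a threshold ε. A minimal sketch; ε, embedding dimension and delay are analysis choices, not values from the study:

```python
import numpy as np

def recurrence_matrix(x, eps, dim=2, tau=1):
    """Binary recurrence matrix R_ij = 1 iff ||v_i - v_j|| < eps,
    where v_i are time-delay embedded state vectors built from the
    scalar series x with embedding dimension dim and delay tau."""
    n = len(x) - (dim - 1) * tau
    emb = np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])
    dists = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
    return (dists < eps).astype(int)

def recurrence_rate(R):
    """Simplest RA indicator: the fraction of recurrent point pairs."""
    return R.mean()
```

Windowed RA measures like those in the abstract come from evaluating such indicators (recurrence rate, determinism, and network statistics of R viewed as an adjacency matrix) on sliding segments of the series.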
Schwinger mechanism in linear covariant gauges
Aguilar, A. C.; Binosi, D.; Papavassiliou, J.
2017-02-01
In this work we explore the applicability of a special gluon mass generating mechanism in the context of the linear covariant gauges. In particular, the implementation of the Schwinger mechanism in pure Yang-Mills theories hinges crucially on the inclusion of massless bound-state excitations in the fundamental nonperturbative vertices of the theory. The dynamical formation of such excitations is controlled by a homogeneous linear Bethe-Salpeter equation, whose nontrivial solutions have been studied only in the Landau gauge. Here, the form of this integral equation is derived for general values of the gauge-fixing parameter, under a number of simplifying assumptions that reduce the degree of technical complexity. The kernel of this equation consists of fully dressed gluon propagators, for which recent lattice data are used as input, and of three-gluon vertices dressed by a single form factor, which is modeled by means of certain physically motivated Ansätze. The gauge-dependent terms contributing to this kernel impose considerable restrictions on the infrared behavior of the vertex form factor; specifically, only infrared finite Ansätze are compatible with the existence of nontrivial solutions. When such Ansätze are employed, the numerical study of the integral equation reveals a continuity in the type of solutions as one varies the gauge-fixing parameter, indicating a smooth departure from the Landau gauge. Instead, the logarithmically divergent form factor displaying the characteristic "zero crossing," while perfectly consistent in the Landau gauge, has to undergo a dramatic qualitative transformation away from it, in order to yield acceptable solutions. The possible implications of these results are briefly discussed.
Greenslade, Thomas B., Jr.
1985-01-01
Discusses a series of experiments performed by Thomas Hope in 1805 which show the temperature at which water has its maximum density. Early data cast into a modern form as well as guidelines and recent data collected from the author provide background for duplicating Hope's experiments in the classroom. (JN)
Abolishing the maximum tension principle
Dabrowski, Mariusz P
2015-01-01
We find a series of example theories for which the relativistic limit of maximum tension $F_{max} = c^4/4G$ represented by the entropic force can be abolished. Among them are the varying constants theories, some generalized entropy models applied both for cosmological and black hole horizons, as well as some generalized uncertainty principle models.
Abolishing the maximum tension principle
Mariusz P. Da̧browski
2015-09-01
We find a series of example theories for which the relativistic limit of maximum tension $F_{max} = c^4/4G$ represented by the entropic force can be abolished. Among them are the varying constants theories, some generalized entropy models applied both for cosmological and black hole horizons, as well as some generalized uncertainty principle models.
Gabriel Marcos Vieira Oliveira
2015-09-01
The aim of this study was to establish hypsometric equations for unmanaged Eucalyptus spp. at old age. For this purpose we measured the diameter and height of 513 stems distributed over 11 species, and the hypsometric relationship was fitted with six regression models; the best model was selected by the Akaike Information Criterion (AIC), the standard error of the estimate (Syx), the maximum likelihood ratio test, and graphical residual analysis. Subsequently, the best model was extended to include the covariates stem quality (Qf) and species (Sp) through decomposition of its parameters. Under these conditions, the Chapman-Richards model showed the best performance in both modeling approaches. Comparing the two models, we observed a reduction of 71 AIC units and of 7.4% in Syx, and a significant improvement in all aspects of the residual distribution for the model with covariates. The results show that it is possible to provide hypsometric equations suitable for unmanaged Eucalyptus at old age, with and without the addition of covariates, with the latter technique providing a significant improvement in the quality of fit of the models.
Generalized linear models with coarsened covariates: a practical Bayesian approach.
Johnson, Timothy R; Wiest, Michelle M
2014-06-01
Coarsened covariates are a common and sometimes unavoidable phenomenon encountered in statistical modeling. Covariates are coarsened when their values or categories have been grouped. This may be done to protect privacy or to simplify data collection or analysis. When researchers are not aware of their drawbacks, analyses with coarsened covariates based on ad hoc methods can compromise the validity of inferences. One valid method for accounting for a coarsened covariate is to use a marginal likelihood derived by summing or integrating over the unknown realizations of the covariate. However, algorithms for estimation based on this approach can be tedious to program and can be computationally expensive. These are significant obstacles to their use in practice. To overcome these limitations, we show that when expressed as a Bayesian probability model, a generalized linear model with a coarsened covariate can be posed as a tractable missing data problem where the missing data are due to censoring. We also show that this model is amenable to widely available general-purpose software for simulation-based inference for Bayesian probability models, providing researchers a very practical approach for dealing with coarsened covariates.
Philipsen, Kirsten Riber; Christiansen, Lasse Engbo; Mandsberg, Lotte Frigaard
2008-01-01
The specific growth rate for P. aeruginosa and four mutator strains mutT, mutY, mutM and mutY–mutM is estimated by a suggested Maximum Likelihood (ML) method which takes the autocorrelation of the observations into account. For each bacteria strain, six wells of optical density (OD) measurements... The model that best describes the data is one taking the full covariance structure into account. An inference study is made in order to determine whether the growth rate of the five bacteria strains is the same. After applying a likelihood-ratio test to models with a full covariance structure, it is concluded that the specific growth rate is the same for all bacteria strains. This study highlights the importance of carrying out an explorative examination of residuals in order to make a correct parametrization of a model including the covariance structure. The ML method is shown to be a strong tool as it enables...
Methane fluxes above the Hainich forest by True Eddy Accumulation and Eddy Covariance
Siebicke, Lukas; Gentsch, Lydia; Knohl, Alexander
2016-04-01
Understanding the role of forests for the global methane cycle requires quantifying vegetation-atmosphere exchange of methane, however observations of turbulent methane fluxes remain scarce. Here we measured turbulent fluxes of methane (CH4) above a beech-dominated old-growth forest in the Hainich National Park, Germany, and validated three different measurement approaches: True Eddy Accumulation (TEA, closed-path laser spectroscopy), and eddy covariance (EC, open-path and closed-path laser spectroscopy, respectively). The Hainich flux tower is a long-term Fluxnet and ICOS site with turbulent fluxes and ecosystem observations spanning more than 15 years. The current study is likely the first application of True Eddy Accumulation (TEA) for the measurement of turbulent exchange of methane and one of the very few studies comparing open-path and closed-path eddy covariance (EC) setups side-by-side. We observed uptake of methane by the forest during the day (a methane sink with a maximum rate of 0.03 μmol m-2 s-1 at noon) and no or small fluxes of methane from the forest to the atmosphere at night (a methane source of typically less than 0.01 μmol m-2 s-1) based on continuous True Eddy Accumulation measurements in September 2015. First results comparing TEA to EC CO2 fluxes suggest that True Eddy Accumulation is a valid option for turbulent flux quantifications using slow response gas analysers (here CRDS laser spectroscopy, other potential techniques include mass spectroscopy). The TEA system was one order of magnitude more energy efficient compared to closed-path eddy covariance. The open-path eddy covariance setup required the least amount of user interaction but is often constrained by low signal-to-noise ratios obtained when measuring methane fluxes over forests. Closed-path eddy covariance showed good signal-to-noise ratios in the lab, however in the field it required significant amounts of user intervention in addition to a high power consumption. We conclude
Reality conditions for Ashtekar gravity from Lorentz-covariant formulation
Alexandrov, Sergei [Institute for Theoretical Physics and Spinoza Institute, Utrecht University, Postbus 80.195, 3508 TD Utrecht (Netherlands)
2006-03-21
We study the limit of the Lorentz-covariant canonical formulation where the Immirzi parameter approaches β = i. We show that, formulated in terms of a shifted spacetime connection, which also plays a crucial role in the covariant quantization, the limit is smooth and reproduces the canonical structure of the self-dual Ashtekar gravity. The reality conditions of Ashtekar gravity can be incorporated by means of the Dirac brackets derived from the covariant formulation and defined on an extended phase space which involves, besides the self-dual variables, also their anti-self-dual counterparts.
Poincaré covariance of relativistic quantum position
Farkas, S; Weiner, M D; Farkas, Sz.
2002-01-01
A great number of problems of relativistic position in quantum mechanics are due to the use of coordinates which are not inherent objects of spacetime, cause unnecessary complications and can lead to misconceptions. We apply a coordinate-free approach to rule out such problems. Thus it will be clear, for example, that the Lorentz covariance of position, required usually on the analogy of Lorentz covariance of spacetime coordinates, is not well posed, and we show that in the right setting the Newton-Wigner position is Poincaré covariant, in contradiction with the usual assertions.
Bayes linear covariance matrix adjustment for multivariate dynamic linear models
Wilkinson, Darren J
2008-01-01
A methodology is developed for the adjustment of the covariance matrices underlying a multivariate constant time series dynamic linear model. The covariance matrices are embedded in a distribution-free inner-product space of matrix objects which facilitates such adjustment. This approach helps to make the analysis simple, tractable and robust. To illustrate the methods, a simple model is developed for a time series representing sales of certain brands of a product from a cash-and-carry depot. The covariance structure underlying the model is revised, and the benefits of this revision on first order inferences are then examined.
Covariate-adjusted measures of discrimination for survival data
White, Ian R; Rapsomaniki, Eleni; Frikke-Schmidt, Ruth
2015-01-01
MOTIVATION: Discrimination statistics describe the ability of a survival model to assign higher risks to individuals who experience earlier events: examples are Harrell's C-index and Royston and Sauerbrei's D, which we call the D-index. Prognostic covariates whose distributions are controlled by the study design (e.g. age and sex) influence discrimination and can make it difficult to compare model discrimination between studies. Although covariate adjustment is a standard procedure for quantifying disease-risk factor associations, there are no covariate adjustment methods for discrimination...
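Harrell's C, the first of the discrimination statistics named above, has a simple pairwise definition: among usable pairs (the earlier time is an observed event), it is the fraction where the higher predicted risk fails first. A minimal O(n^2) sketch of the unadjusted statistic (the function name is mine, and the covariate adjustment that is the abstract's actual contribution is not attempted here):

```python
def harrell_c_index(time, event, risk):
    """Harrell's C-index for right-censored survival data.

    A pair (i, j) is usable when time[i] < time[j] and the event at
    time[i] was observed (event[i] == 1); it is concordant when the
    earlier-failing subject i has the higher predicted risk.
    Ties in predicted risk count as 1/2."""
    concordant = 0.0
    usable = 0
    n = len(time)
    for i in range(n):
        for j in range(n):
            if time[i] < time[j] and event[i] == 1:
                usable += 1
                if risk[i] > risk[j]:
                    concordant += 1.0
                elif risk[i] == risk[j]:
                    concordant += 0.5
    return concordant / usable
```

A value of 1.0 means perfect risk ordering, 0.5 is no better than chance; production code (e.g. in survival analysis libraries) uses faster pair-counting, but the definition is the same.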
Maximum Genus of Strong Embeddings
Er-ling Wei; Yan-pei Liu; Han Ren
2003-01-01
The strong embedding conjecture states that any 2-connected graph has a strong embedding on some surface. It implies the circuit double cover conjecture: any 2-connected graph has a circuit double cover. The converse is not true. But for a 3-regular graph, the two conjectures are equivalent. In this paper, a characterization of graphs having a strong embedding with exactly 3 faces, which is the strong embedding of maximum genus, is given. In addition, some graphs with this property are provided. More generally, an upper bound on the maximum genus of strong embeddings of a graph is presented too. Lastly, it is shown that the interpolation theorem holds for planar Halin graphs.
Waldo, S.; Beaulieu, J. J.; Walker, J. T.
2016-12-01
Reservoirs are a globally important source of carbon to the atmosphere. Several recent studies have found that both carbon dioxide (CO2) and methane (CH4) emissions from reservoirs are currently being underestimated by up to 50%. This underestimation is due to inadequate characterization of both spatial variability (e.g. ebullition and CO2 surface water concentration hot spots) and temporal variability (e.g. diurnal patterns, seasonal differences, and pulses driven by weather events or other disturbances). Use of the eddy covariance technique to measure CO2 and CH4 fluxes over reservoirs can help address the issues of spatial and temporal coverage. Here we present results from two eddy covariance measurement campaigns monitoring CO2 and CH4 fluxes over reservoirs in southwestern Ohio, US. The first campaign examined the effects of water level drawdown on reservoir CH4 ebullition. The eddy covariance results showed a clear response of CH4 emissions to the change in water level, increasing from a baseline of 3440 mg CH4 m-2 d-1 to a maximum of 6740 mg CH4 m-2 d-1 during the drawdown. These results agreed well with the emission rates measured via bubble samplers deployed in the vicinity of the tower. Conversely, the CO2 fluxes did not show a strong response to the drawdown. The eddy covariance system was deployed for a longer period of time during a second campaign at a mid-sized (2.4 km2) lake. Analyses of diurnal patterns in CO2 and CH4 emissions as well as emission response to synoptic events will be presented. Our results contribute to the ongoing effort to better interpret and scale-up CH4 and CO2 emissions from reservoirs.
Remizov, Ivan D
2009-01-01
In this note, we represent the subdifferential of a maximum functional defined on the space of all real-valued continuous functions on a given metric compact set. For a given argument $f$, it coincides with the set of all probability measures on the set of points maximizing $f$ on the initial compact set. This complete characterization lies at the heart of several important identities in microeconomics, such as Roy's identity and Shephard's lemma, as well as duality theory in production and linear programming.
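In symbols, the characterization described in this abstract can be written as follows (the notation is mine, not the note's):

```latex
% For the maximum functional M(f) = \max_{x \in K} f(x) on C(K),
% with K a metric compact set, the subdifferential at f is
\partial M(f) \;=\; \Bigl\{\, \mu \in \mathcal{P}(K) \;:\;
  \operatorname{supp}\mu \subseteq \operatorname*{arg\,max}_{x \in K} f(x) \,\Bigr\},
% i.e. exactly the probability measures concentrated on the maximizers
% of f, each acting on C(K) as the linear functional g \mapsto \int_K g \, d\mu.
```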
The Testability of Maximum Magnitude
Clements, R.; Schorlemmer, D.; Gonzalez, A.; Zoeller, G.; Schneider, M.
2012-12-01
Recent disasters caused by earthquakes of unexpectedly large magnitude (such as Tohoku) illustrate the need for reliable assessments of the seismic hazard. Estimates of the maximum possible magnitude M at a given fault or in a particular zone are essential parameters in probabilistic seismic hazard assessment (PSHA), but their accuracy remains untested. In this study, we discuss the testability of long-term and short-term M estimates and the limitations that arise from testing such rare events. Of considerable importance is whether or not those limitations imply a lack of testability of a useful maximum magnitude estimate, and whether this should have any influence on current PSHA methodology. We use a simple extreme value theory approach to derive a probability distribution for the expected maximum magnitude in a future time interval, and we perform a sensitivity analysis on this distribution to determine if there is a reasonable avenue available for testing M estimates as they are commonly reported today: devoid of an appropriate probability distribution of their own and estimated only for infinite time (or relatively large untestable periods). Our results imply that any attempt at testing such estimates is futile, and that the distribution is highly sensitive to M estimates only under certain optimal conditions that are rarely observed in practice. In the future we suggest that PSHA modelers be brutally honest about the uncertainty of M estimates, or must find a way to decrease its influence on the estimated hazard.
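The extreme-value step mentioned above is, in its simplest toy form, just this: the CDF of the largest magnitude among n future events is the n-th power of the per-event CDF. A sketch under an unbounded Gutenberg-Richter law (all parameter values below are illustrative, not estimates from the study):

```python
def prob_max_exceeds(m, b, m_min, n_events):
    """P(max magnitude > m) among n_events future earthquakes, assuming
    i.i.d. magnitudes from an unbounded Gutenberg-Richter law with
    per-event CDF F(m) = 1 - 10^(-b * (m - m_min)) for m >= m_min.
    A bounded law with maximum magnitude M would instead truncate F at M,
    which is what makes M estimates so hard to test."""
    F = 1.0 - 10.0 ** (-b * (m - m_min))
    return 1.0 - F ** n_events

# Illustrative numbers: b = 1, completeness magnitude 4, 1000 events.
p = prob_max_exceeds(7.0, b=1.0, m_min=4.0, n_events=1000)
```

The sensitivity problem discussed in the abstract shows up directly here: for rare, large m the exceedance probability is tiny, so enormous observation periods are needed before data can discriminate between competing M estimates.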
Alternative Multiview Maximum Entropy Discrimination.
Chao, Guoqing; Sun, Shiliang
2016-07-01
Maximum entropy discrimination (MED) is a general framework for discriminative estimation based on maximum entropy and maximum margin principles, and can produce hard-margin support vector machines under some assumptions. Recently, a multiview version of MED, multiview MED (MVMED), was proposed. In this paper, we try to explore a more natural MVMED framework by assuming two separate distributions p1(Θ1) over the first-view classifier parameter Θ1 and p2(Θ2) over the second-view classifier parameter Θ2. We name the new MVMED framework alternative MVMED (AMVMED), which enforces the posteriors of the two view margins to be equal. The proposed AMVMED is more flexible than the existing MVMED because, compared with MVMED, which optimizes one relative entropy, AMVMED assigns one relative entropy term to each of the two views, thus incorporating a tradeoff between the two views. We give the detailed solving procedure, which can be divided into two steps. The first step is solving our optimization problem without considering the equal margin posteriors from the two views; then, in the second step, we consider the equal posteriors. Experimental results on multiple real-world data sets verify the effectiveness of AMVMED, and comparisons with MVMED are also reported.
AFCI-2.0 Neutron Cross Section Covariance Library
Herman, M.; Oblozinsky, P.; Mattoon, C.M.; Pigni, M.; Hoblit, S.; Mughabghab, S.F.; Sonzogni, A.; Talou, P.; Chadwick, M.B.; Hale, G.M.; Kahler, A.C.; Kawano, T.; Little, R.C.; Young, P.G.
2011-03-01
The cross section covariance library has been under development by a BNL-LANL collaborative effort over the last three years. The project builds on two covariance libraries developed earlier, with considerable input from BNL and LANL. In 2006, an international effort under WPEC Subgroup 26 produced the BOLNA covariance library by putting together data, often preliminary, from various sources for the most important materials for nuclear reactor technology. This was followed in 2007 by a collaborative effort of four US national laboratories to produce covariances, often of modest quality - hence the name low-fidelity - for a virtually complete set of materials included in ENDF/B-VII.0. The present project focuses on covariances of 4-5 major reaction channels for 110 materials of importance for power reactors. The work started under the Global Nuclear Energy Partnership (GNEP) in 2008, which changed to the Advanced Fuel Cycle Initiative (AFCI) in 2009. With the 2011 release the name has changed to the Covariance Multigroup Matrix for Advanced Reactor Applications (COMMARA) version 2.0. The primary purpose of the library is to provide covariances for the AFCI data adjustment project, which is focusing on the needs of fast advanced burner reactors. The responsibility of BNL was defined as developing covariances for structural materials and fission products, management of the library, and coordination of the work; LANL responsibility was defined as covariances for light nuclei and actinides. The COMMARA-2.0 covariance library has been developed by the BNL-LANL collaboration for Advanced Fuel Cycle Initiative applications over the period of three years, 2008-2010. It contains covariances for 110 materials relevant to fast reactor R&D. The library is to be used together with the ENDF/B-VII.0 central values of the latest official release of US files of evaluated neutron cross sections. The COMMARA-2.0 library contains neutron cross section covariances for 12 light nuclei (coolants and moderators), 78 structural
The bispectrum covariance beyond Gaussianity: A log-normal approach
Martin, Sandra; Simon, Patrick
2011-01-01
To investigate and specify the statistical properties of cosmological fields with particular attention to possible non-Gaussian features, accurate formulae for the bispectrum and the bispectrum covariance are required. The bispectrum is the lowest-order statistic providing an estimate for non-Gaussianities of a distribution, and the bispectrum covariance depicts the errors of the bispectrum measurement and their correlation on different scales. Currently, there do exist fitting formulae for the bispectrum and an analytical expression for the bispectrum covariance, but the former is not very accurate and the latter contains several intricate terms and only one of them can be readily evaluated from the power spectrum of the studied field. Neglecting all higher-order terms results in the Gaussian approximation of the bispectrum covariance. We study the range of validity of this Gaussian approximation for two-dimensional non-Gaussian random fields. For this purpose, we simulate Gaussian and non-Gaussian random fi...
Electron localization functions and local measures of the covariance
Paul W Ayers
2005-09-01
The electron localization measure proposed by Becke and Edgecombe is shown to be related to the covariance of the electron pair distribution. Just as with the electron localization function, the local covariance does not seem to be, in and of itself, a useful quantity for elucidating shell structure. A function of the local covariance, however, is useful for this purpose. A different function, based on the hyperbolic tangent, is proposed to elucidate the shell structure encapsulated by the local covariance; this function also seems to work better for the electron localization measure of Becke and Edgecombe. In addition, we propose a different measure for the electron localization that incorporates both the electron localization measure of Becke and Edgecombe and the Laplacian of the electron density; preliminary indications are that this measure is especially good at elucidating the shell structure in valence regions. Methods for evaluating electron localization functions directly from the electron density, without recourse to the Kohn-Sham orbitals, are discussed.
AFCI-2.0 Neutron Cross Section Covariance Library
Herman, M.; Herman, M; Oblozinsky, P.; Mattoon, C.M.; Pigni, M.; Hoblit, S.; Mughabghab, S.F.; Sonzogni, A.; Talou, P.; Chadwick, M.B.; Hale, G.M.; Kahler, A.C.; Kawano, T.; Little, R.C.; Yount, P.G.
2011-03-01
The cross section covariance library has been under development through a BNL-LANL collaborative effort over the last three years. The project builds on two covariance libraries developed earlier, with considerable input from BNL and LANL. In 2006, an international effort under WPEC Subgroup 26 produced the BOLNA covariance library by putting together data, often preliminary, from various sources for the most important materials for nuclear reactor technology. This was followed in 2007 by a collaborative effort of four US national laboratories to produce covariances, often of modest quality (hence the name low-fidelity), for a virtually complete set of materials included in ENDF/B-VII.0. The present project focuses on covariances of 4-5 major reaction channels for 110 materials of importance for power reactors. The work started under the Global Nuclear Energy Partnership (GNEP) in 2008, which changed to the Advanced Fuel Cycle Initiative (AFCI) in 2009. With the 2011 release, the name has changed to the Covariance Multigroup Matrix for Advanced Reactor Applications (COMMARA), version 2.0. The primary purpose of the library is to provide covariances for the AFCI data adjustment project, which focuses on the needs of fast advanced burner reactors. BNL's responsibility was defined as developing covariances for structural materials and fission products, managing the library, and coordinating the work; LANL's responsibility was defined as covariances for light nuclei and actinides. The COMMARA-2.0 covariance library has been developed by the BNL-LANL collaboration for Advanced Fuel Cycle Initiative applications over a period of three years, 2008-2010. It contains covariances for 110 materials relevant to fast reactor R&D. The library is to be used together with the ENDF/B-VII.0 central values of the latest official release of US files of evaluated neutron cross sections. The COMMARA-2.0 library contains neutron cross section covariances for 12 light nuclei (coolants and moderators), 78 structural
Progress of Covariance Evaluation at the China Nuclear Data Center
Xu, R., E-mail: xuruirui@ciae.ac.cn [China Nuclear Data Center, P.O. Box, 275(41), Beijing 102413 (China); Zhang, Q. [China Nuclear Data Center, P.O. Box, 275(41), Beijing 102413 (China); Shanxi Normal University, Linfen, Shanxi Province 041004 (China); Zhang, Y.; Liu, T.; Ge, Z.; Lu, H.; Sun, Z.; Yu, B. [China Nuclear Data Center, P.O. Box, 275(41), Beijing 102413 (China); Tang, G. [Peking University, Beijing 100871 (China)
2015-01-15
Covariance evaluations at the China Nuclear Data Center focus on the cross sections of structural materials and actinides in the fast neutron energy range. In addition to the well-known least-squares approach, a method based on the analysis of the sources of experimental uncertainties is introduced to generate a covariance matrix for a particular reaction for which multiple measurements are available. The scheme of the covariance evaluation flow is presented, and the example of n+{sup 90}Zr is given to illustrate the whole procedure. It is shown that the accuracy of measurements can be properly incorporated into the covariance and that the long-standing small-uncertainty problem can be avoided.
Lorentz Covariant Canonical Symplectic Algorithms for Dynamics of Charged Particles
Wang, Yulei; Qin, Hong
2016-01-01
In this paper, the Lorentz covariance of algorithms is introduced. Under Lorentz transformation, both the form and performance of a Lorentz covariant algorithm are invariant. To acquire the advantages of symplectic algorithms and Lorentz covariance, a general procedure for constructing Lorentz covariant canonical symplectic algorithms (LCCSA) is provided, based on which an explicit LCCSA for dynamics of relativistic charged particles is built. LCCSA possesses Lorentz invariance as well as long-term numerical accuracy and stability, due to the preservation of discrete symplectic structure and Lorentz symmetry of the system. For situations with time-dependent electromagnetic fields, which is difficult to handle in traditional construction procedures of symplectic algorithms, LCCSA provides a perfect explicit canonical symplectic solution by implementing the discretization in 4-spacetime. We also show that LCCSA has built-in energy-based adaptive time steps, which can optimize the computation performance when th...
Group Lasso estimation of high-dimensional covariance matrices
Bigot, Jérémie; Loubes, Jean-Michel; Alvarez, Lilian Muniz
2010-01-01
In this paper, we consider the Group Lasso estimator of the covariance matrix of a stochastic process corrupted by an additive noise. We propose to estimate the covariance matrix in a high-dimensional setting under the assumption that the process has a sparse representation in a large dictionary of basis functions. Using a matrix regression model, we propose a new methodology for high-dimensional covariance matrix estimation based on empirical contrast regularization by a group Lasso penalty. Using such a penalty, the method selects a sparse set of basis functions in the dictionary used to approximate the process, leading to an approximation of the covariance matrix in a low-dimensional space. Consistency of the estimator is studied in Frobenius and operator norms, and an application to sparse PCA is proposed.
Trouble shooting for covariance fitting in highly correlated data
Yoon, Boram; Lee, Weonjong; Jung, Chulwoo
2011-01-01
We report a possible solution to the trouble that the covariance fitting fails when the data is highly correlated and the covariance matrix has small eigenvalues. As an example, we choose the data analysis of highly correlated $B_K$ data on the basis of the SU(2) staggered chiral perturbation theory. Basically, the essence of the problem is that we do not have an accurate fitting function so that we cannot fit the highly correlated and precise data. When some eigenvalues of the covariance matrix are small, even a tiny error of fitting function can produce large chi-square and spoil the fitting procedure. We have applied a number of prescriptions available in the market such as diagonal approximation and cutoff method. In addition, we present a new method, the eigenmode shift method which fine-tunes the fitting function while keeping the covariance matrix untouched.
Covariance fitting of highly correlated $B_K$ data
Yoon, Boram; Jung, Chulwoo; Lee, Weonjong
2011-01-01
We present the reason why we use the diagonal approximation (uncorrelated fitting) when we perform the data analysis of highly correlated $B_K$ data on the basis of the SU(2) staggered chiral perturbation theory. Basically, the essence of the problem is that we do not have enough statistics to determine the small eigenvalues of the covariance matrix with a high precision. As a result, we have the smallest eigenvalue, which is smaller than the statistical error of the covariance matrix, corresponding to an unphysical eigenmode. We have applied a number of prescriptions available in the market such as the cutoff method and modified covariance matrix method. It turns out that the cutoff method is not a good prescription and the modified covariance matrix method is an even worse one. The diagonal approximation turns out to be a good prescription if the data points are somehow correlated and the statistics are relatively poor.
Comparison Between Bayesian and Maximum Entropy Analyses of Flow Networks†
Steven H. Waldrip
2017-02-01
We compare the application of Bayesian inference and the maximum entropy (MaxEnt) method for the analysis of flow networks, such as water, electrical and transport networks. The two methods have the advantage of allowing a probabilistic prediction of flow rates and other variables, when there is insufficient information to obtain a deterministic solution, and also allow the effects of uncertainty to be included. Both methods of inference update a prior to a posterior probability density function (pdf) by the inclusion of new information, in the form of data or constraints. The MaxEnt method maximises an entropy function subject to constraints, using the method of Lagrange multipliers, to give the posterior, while the Bayesian method finds its posterior by multiplying the prior with likelihood functions incorporating the measured data. In this study, we examine MaxEnt using soft constraints, either included in the prior or as probabilistic constraints, in addition to standard moment constraints. We show that when the prior is Gaussian, both Bayesian inference and the MaxEnt method with soft prior constraints give the same posterior means, but their covariances are different. In the Bayesian method, the interactions between variables are applied through the likelihood function, using second or higher-order cross-terms within the posterior pdf. In contrast, the MaxEnt method incorporates interactions between variables using Lagrange multipliers, avoiding second-order correlation terms in the posterior covariance. The MaxEnt method with soft prior constraints, therefore, has a numerical advantage over Bayesian inference, in that the covariance terms are avoided in its integrations. The second MaxEnt method with soft probabilistic constraints is shown to give posterior means of similar, but not identical, structure to the other two methods, due to its different formulation.
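The Gaussian-prior result in this abstract can be illustrated with a small worked example (a hypothetical three-branch network with invented numbers, not taken from the paper): with a Gaussian prior and an exact linear constraint Ax = b, the constrained update of the mean has the closed form x* = mu + Sigma A^T (A Sigma A^T)^{-1} (b - A mu). A minimal numpy sketch:

```python
import numpy as np

# Hypothetical 3-branch flow network (invented numbers).
mu = np.array([1.0, 2.0, 0.5])      # prior mean flow rates
Sigma = np.diag([0.1, 0.2, 0.1])    # Gaussian prior covariance

# One exact linear constraint, e.g. flow conservation: f0 + f1 - f2 = 3.
A = np.array([[1.0, 1.0, -1.0]])
b = np.array([3.0])

# Constrained update of the mean:
#   x* = mu + Sigma A^T (A Sigma A^T)^{-1} (b - A mu)
K = Sigma @ A.T @ np.linalg.inv(A @ Sigma @ A.T)
x_star = mu + K @ (b - A @ mu)

# The posterior covariance loses all variance along the constraint:
Sigma_star = Sigma - K @ A @ Sigma
```

Here x_star works out to [1.125, 2.25, 0.375], which satisfies the constraint exactly, and A @ Sigma_star @ A.T vanishes: the constrained direction is no longer uncertain, while the remaining covariance terms differ from what a likelihood-based Bayesian update would give, as the abstract notes.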
Covariance of metabolic and hemostatic risk indicators in men and women
Riese, H; Vrijkotte, TGM; Meijer, P; Kluft, C; de Geus, Eco J.
2001-01-01
Background and objective: Multivariate analyses on clusters of metabolic and hemostatic risk indicators implicitly assume good test-retest reliability of these variables, substantial covariance among the various indicators, stability of covariance structure over time, and comparable covariance struc
Some covariance models based on normal scale mixtures
Schlather, Martin
2011-01-01
Modelling spatio-temporal processes has become an important issue in current research. Since Gaussian processes are essentially determined by their second order structure, broad classes of covariance functions are of interest. Here, a new class is described that merges and generalizes various models presented in the literature, in particular models in Gneiting (J. Amer. Statist. Assoc. 97 (2002) 590--600) and Stein (Nonstationary spatial covariance functions (2005) Univ. Chicago). Furthermore, new models and a multivariate extension are introduced.
Web Tool for Constructing a Covariance Matrix from EXFOR Uncertainties
Zerkin V.
2012-05-01
The experimental nuclear reaction database EXFOR contains almost no covariance data because most experimentalists provide experimental data only with uncertainties. With the tool described here, a user can construct an experimental covariance matrix from uncertainties using general assumptions when the uncertainty information given in EXFOR is poor (or even absent). The tool is publicly available in the IAEA EXFOR Web retrieval system [1].
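The kind of construction this abstract describes can be sketched in a few lines (a generic illustration of the idea, not the IAEA tool's actual algorithm): statistical uncertainties, assumed uncorrelated, contribute a diagonal term, while a common systematic uncertainty, assumed fully correlated across points, contributes a rank-one term.

```python
import numpy as np

def covariance_from_uncertainties(stat, syst):
    """Covariance matrix from per-point statistical uncertainties
    (assumed uncorrelated) plus one systematic uncertainty component
    (assumed fully correlated across all points)."""
    stat = np.asarray(stat, dtype=float)
    syst = np.asarray(syst, dtype=float)
    cov = np.diag(stat**2)        # uncorrelated part on the diagonal
    cov += np.outer(syst, syst)   # fully correlated rank-one part
    return cov

# Three hypothetical data points: 2-3% statistical errors plus a
# 1% common systematic (e.g. a normalization uncertainty).
C = covariance_from_uncertainties([0.02, 0.03, 0.025], [0.01, 0.01, 0.01])
```

Every off-diagonal element is the product of the two points' systematic uncertainties; with several independent systematic components, one rank-one term per component would be added.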
A Generalized Autocovariance Least-Squares Method for Covariance Estimation
Åkesson, Bernt Magnus; Jørgensen, John Bagterp; Poulsen, Niels Kjølstad;
2007-01-01
A generalization of the autocovariance least-squares method for estimating noise covariances is presented. The method can estimate mutually correlated system and sensor noise and can be used with both the predicting and the filtering form of the Kalman filter.
Comparison of Methods for Handling Missing Covariate Data
Johansson, Åsa M.; Karlsson, Mats O
2013-01-01
Missing covariate data is a common problem in nonlinear mixed effects modelling of clinical data. The aim of this study was to implement and compare methods for handling missing covariate data in nonlinear mixed effects modelling under different missing data mechanisms. Simulations generated data for 200 individuals with a 50% difference in clearance between males and females. Three different types of missing data mechanisms were simulated and information about sex was missing for 50% of the ...
High-dimensional covariance matrix estimation with missing observations
Lounici, Karim
2014-01-01
In this paper, we study the problem of high-dimensional covariance matrix estimation with missing observations. We propose a simple procedure computationally tractable in high-dimension and that does not require imputation of the missing data. We establish non-asymptotic sparsity oracle inequalities for the estimation of the covariance matrix involving the Frobenius and the spectral norms which are valid for any setting of the sample size, probability of a missing observation and the dimensio...
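A standard device in this setting can be sketched directly (a generic inverse-probability debiasing under the missing-completely-at-random assumption with known observation probability delta; the paper's contribution is the non-asymptotic theory, not this rescaling itself): zero-fill the missing entries, then correct the naive covariance, whose off-diagonal entries are shrunk by delta squared and diagonal entries by delta.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, delta = 20000, 4, 0.7   # sample size, dimension, observation prob.

# True covariance (invented, zero-mean data) and complete sample.
Sigma = np.array([[1.0, 0.5, 0.2, 0.0],
                  [0.5, 1.0, 0.3, 0.1],
                  [0.2, 0.3, 1.0, 0.4],
                  [0.0, 0.1, 0.4, 1.0]])
X = rng.multivariate_normal(np.zeros(p), Sigma, size=n)

# Each entry is observed independently with probability delta (MCAR);
# missing entries are filled with zeros.
mask = rng.random((n, p)) < delta
Y = X * mask

# Naive covariance of the zero-filled data is biased:
# off-diagonal entries shrink by delta**2, diagonal entries by delta.
S = Y.T @ Y / n

# Inverse-probability debiasing (the mean is known to be zero here,
# so no centering step is needed).
Sigma_hat = S / delta**2
np.fill_diagonal(Sigma_hat, np.diag(S) / delta)

err_naive = np.linalg.norm(S - Sigma)
err_debiased = np.linalg.norm(Sigma_hat - Sigma)
```

With these settings the debiased estimate is far closer to the truth than the naive one, and no imputation of the missing entries is required, in line with the procedure the abstract describes.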
Perturbative approach to covariance matrix of the matter power spectrum
Mohammed, Irshad; Vlah, Zvonimir
2016-01-01
We evaluate the covariance matrix of the matter power spectrum using perturbation theory up to dominant terms at 1-loop order and compare it to numerical simulations. We decompose the covariance matrix into the disconnected (Gaussian) part, trispectrum from the modes outside the survey (beat coupling or super-sample variance), and trispectrum from the modes inside the survey, and show how the different components contribute to the overall covariance matrix. We find that the agreement with the simulations is at a 10% level up to $k \sim 1 h {\rm Mpc^{-1}}$. We show that all the connected components are dominated by the large-scale modes ($k<0.1 h {\rm Mpc^{-1}}$), regardless of the value of the wavevectors $k,\, k'$ of the covariance matrix, suggesting that one must be careful in applying the jackknife or bootstrap methods to the covariance matrix. We perform an eigenmode decomposition of the connected part of the covariance matrix, showing that at higher $k$ it is dominated by a single eigenmode. The full cova...
[Clinical research XIX. From clinical judgment to analysis of covariance].
Pérez-Rodríguez, Marcela; Palacios-Cruz, Lino; Moreno, Jorge; Rivas-Ruiz, Rodolfo; Talavera, Juan O
2014-01-01
The analysis of covariance (ANCOVA) is based on general linear models. This technique involves a regression model, often multiple, in which the outcome is a continuous variable, the independent variables are qualitative or are introduced into the model as dummy or dichotomous variables, and the factors for which adjustment is required (covariates) can be at any measurement level (i.e. nominal, ordinal or continuous). The maneuvers can be entered into the model as 1) fixed effects or 2) random effects; the difference between the two depends on the type of information we want from the analysis of the effects. ANCOVA separates the effect of the independent variables from that of the covariates, i.e., it corrects the dependent variable by eliminating the influence of the covariates, given that these variables change in conjunction with the maneuvers or treatments and affect the outcome variable. ANCOVA should be done only if three assumptions are met: 1) the relationship between the covariate and the outcome is linear, 2) there is homogeneity of slopes, and 3) the covariate and the independent variable are independent of each other.
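A minimal numerical illustration of ANCOVA as a linear model (simulated data with invented effect sizes, fitted with plain least squares rather than a statistics package): the treatment enters as a dummy variable, and the covariate's contribution to the outcome is adjusted for under the common-slope assumption.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500

# Simulated data: continuous covariate, binary treatment, outcome.
x = rng.normal(size=n)                        # covariate, e.g. age
g = rng.integers(0, 2, size=n).astype(float)  # treatment dummy variable
y = 1.0 + 2.0 * x + 3.0 * g + rng.normal(scale=0.5, size=n)

# ANCOVA as a general linear model: intercept + covariate + dummy.
design = np.column_stack([np.ones(n), x, g])
coef, *_ = np.linalg.lstsq(design, y, rcond=None)
intercept, slope, adjusted_effect = coef
# 'adjusted_effect' estimates the treatment effect with the
# covariate's influence removed, assuming homogeneity of slopes.
```

The fitted coefficients recover the simulated values (intercept near 1, slope near 2, adjusted treatment effect near 3); checking the homogeneity-of-slopes assumption would amount to adding an x-by-g interaction term and testing whether its coefficient is negligible.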
Computational protein design quantifies structural constraints on amino acid covariation.
Noah Ollikainen
Amino acid covariation, where the identities of amino acids at different sequence positions are correlated, is a hallmark of naturally occurring proteins. This covariation can arise from multiple factors, including selective pressures for maintaining protein structure, requirements imposed by a specific function, or from phylogenetic sampling bias. Here we employed flexible backbone computational protein design to quantify the extent to which protein structure has constrained amino acid covariation for 40 diverse protein domains. We find significant similarities between the amino acid covariation in alignments of natural protein sequences and sequences optimized for their structures by computational protein design methods. These results indicate that the structural constraints imposed by protein architecture play a dominant role in shaping amino acid covariation and that computational protein design methods can capture these effects. We also find that the similarity between natural and designed covariation is sensitive to the magnitude and mechanism of backbone flexibility used in computational protein design. Our results thus highlight the necessity of including backbone flexibility to correctly model precise details of correlated amino acid changes and give insights into the pressures underlying these correlations.
Gaussian covariance matrices for anisotropic galaxy clustering measurements
Grieb, Jan Niklas; Sánchez, Ariel G.; Salazar-Albornoz, Salvador; Dalla Vecchia, Claudio
2016-04-01
Measurements of the redshift-space galaxy clustering have been a prolific source of cosmological information in recent years. Accurate covariance estimates are an essential step for the validation of galaxy clustering models of the redshift-space two-point statistics. Usually, only a limited set of accurate N-body simulations is available. Thus, assessing the data covariance is not possible or only leads to a noisy estimate. Further, relying on simulated realizations of the survey data means that tests of the cosmology dependence of the covariance are expensive. With these points in mind, this work presents a simple theoretical model for the linear covariance of anisotropic galaxy clustering observations with synthetic catalogues. Considering the Legendre moments (`multipoles') of the two-point statistics and projections into wide bins of the line-of-sight parameter (`clustering wedges'), we describe the modelling of the covariance for these anisotropic clustering measurements for galaxy samples with a trivial geometry in the case of a Gaussian approximation of the clustering likelihood. As main result of this paper, we give the explicit formulae for Fourier and configuration space covariance matrices. To validate our model, we create synthetic halo occupation distribution galaxy catalogues by populating the haloes of an ensemble of large-volume N-body simulations. Using linear and non-linear input power spectra, we find very good agreement between the model predictions and the measurements on the synthetic catalogues in the quasi-linear regime.
Covariance fitting of highly-correlated data in lattice QCD
Yoon, Boram; Jang, Yong-Chull; Jung, Chulwoo; Lee, Weonjong
2013-07-01
We address a frequently asked question on the covariance fitting of highly correlated data such as our $B_K$ data based on the SU(2) staggered chiral perturbation theory. Basically, the essence of the problem is that we do not have a fitting function accurate enough to fit extremely precise data. When eigenvalues of the covariance matrix are small, even a tiny error in the fitting function yields a large chi-square value and spoils the fitting procedure. We have applied a number of prescriptions available in the market, such as the cut-off method, the modified covariance matrix method, and the Bayesian method. We also propose a brand new method, the eigenmode shift (ES) method, which allows a full covariance fitting without modifying the covariance matrix at all. We provide a pedagogical example of data analysis in which the cut-off method manifestly fails in fitting, but the rest work well. In our case of the $B_K$ fitting, the diagonal approximation, the cut-off method, the ES method, and the Bayesian method all work reasonably well in an engineering sense. However, the meaning of $\chi^2$ is easier to interpret in the case of the ES method and the Bayesian method. Hence, the ES method can be a useful alternative tool to check the systematic error caused by the covariance fitting procedure.
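The cut-off prescription mentioned in this abstract can be sketched as follows (a toy illustration with an invented 3x3 covariance, not the authors' lattice data): eigen-decompose the covariance matrix, keep only the largest eigenmodes when forming the inverse, and compare the resulting chi-square with the fully correlated one.

```python
import numpy as np

def chi2_full(residual, cov):
    """Correlated chi-square with the full inverse covariance."""
    return residual @ np.linalg.solve(cov, residual)

def chi2_cutoff(residual, cov, n_keep):
    """Cut-off prescription: keep only the n_keep largest eigenmodes
    of the covariance matrix when forming the inverse."""
    w, v = np.linalg.eigh(cov)          # eigenvalues in ascending order
    w, v = w[::-1], v[:, ::-1]          # reorder to descending
    proj = v[:, :n_keep].T @ residual   # residual in the kept eigenmodes
    return np.sum(proj**2 / w[:n_keep])

# Invented, highly correlated 3x3 covariance: eigenvalues are
# 1 + 2*rho (large) and 1 - rho (tiny, with multiplicity two).
rho = 0.999
cov = np.array([[1.0, rho, rho],
                [rho, 1.0, rho],
                [rho, rho, 1.0]])
r = np.array([0.010, 0.012, 0.011])    # fit residuals (invented)

full = chi2_full(r, cov)
trunc = chi2_cutoff(r, cov, n_keep=1)
# The tiny eigenmodes dominate 'full': any small fitting-function
# error projected onto them is divided by 1 - rho = 0.001.
```

Keeping all modes reproduces the full correlated chi-square exactly, which makes the truncation level n_keep an explicit knob for the systematic effect the abstract discusses; the ES and Bayesian prescriptions avoid modifying the covariance matrix altogether.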
Cacti with maximum Kirchhoff index
Wang, Wen-Rui; Pan, Xiang-Feng
2015-01-01
The concept of resistance distance was first proposed by Klein and Randić. The Kirchhoff index $Kf(G)$ of a graph $G$ is the sum of the resistance distances between all pairs of vertices in $G$. A connected graph $G$ is called a cactus if each block of $G$ is either an edge or a cycle. Let $Cat(n;t)$ be the set of connected cacti possessing $n$ vertices and $t$ cycles, where $0\leq t \leq \lfloor\frac{n-1}{2}\rfloor$. In this paper, the cacti with maximum Kirchhoff index are characterized, as well...
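The Kirchhoff index is convenient to compute from the graph Laplacian via the standard identity Kf(G) = n * sum(1/mu) over the nonzero Laplacian eigenvalues mu of a connected graph. A short sketch (the identity is standard; the example graphs are not from the paper):

```python
import numpy as np

def kirchhoff_index(adj):
    """Kirchhoff index of a connected graph from its adjacency matrix,
    via Kf(G) = n * sum(1/mu) over nonzero Laplacian eigenvalues mu."""
    adj = np.asarray(adj, dtype=float)
    n = adj.shape[0]
    laplacian = np.diag(adj.sum(axis=1)) - adj
    mu = np.linalg.eigvalsh(laplacian)
    return n * np.sum(1.0 / mu[mu > 1e-9])

# Triangle C3, the smallest one-cycle cactus: each pairwise resistance
# is 2/3 (edges of length 1 and 2 in parallel), so Kf = 3 * 2/3 = 2.
triangle = [[0, 1, 1],
            [1, 0, 1],
            [1, 1, 0]]
```

Here kirchhoff_index(triangle) evaluates to 2 up to rounding, and the path on three vertices gives 4, matching its pairwise resistances of 1, 1 and 2.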
Generic maximum likely scale selection
Pedersen, Kim Steenstrup; Loog, Marco; Markussen, Bo
2007-01-01
The fundamental problem of local scale selection is addressed by means of a novel principle, which is based on maximum likelihood estimation. The principle is generally applicable to a broad variety of image models and descriptors, and provides a generic scale estimation methodology. The focus ... on second order moments of multiple measurement outputs at a fixed location. These measurements, which reflect local image structure, consist in the cases considered here of Gaussian derivatives taken at several scales and/or having different derivative orders.
Maximum likelihood estimation for semiparametric density ratio model.
Diao, Guoqing; Ning, Jing; Qin, Jing
2012-06-27
In the statistical literature, the conditional density model specification is commonly used to study regression effects. One attractive model is the semiparametric density ratio model, under which the conditional density function is the product of an unknown baseline density function and a known parametric function containing the covariate information. This model has a natural connection with generalized linear models and is closely related to biased sampling problems. Despite the attractive features and importance of this model, most existing methods are too restrictive since they are based on multi-sample data or conditional likelihood functions. The conditional likelihood approach can eliminate the unknown baseline density but cannot estimate it. We propose efficient estimation procedures based on the nonparametric likelihood. The nonparametric likelihood approach allows for general forms of covariates and estimates the regression parameters and the baseline density simultaneously. Therefore, the nonparametric likelihood approach is more versatile than the conditional likelihood approach especially when estimation of the conditional mean or other quantities of the outcome is of interest. We show that the nonparametric maximum likelihood estimators are consistent, asymptotically normal, and asymptotically efficient. Simulation studies demonstrate that the proposed methods perform well in practical settings. A real example is used for illustration.
Economics and Maximum Entropy Production
Lorenz, R. D.
2003-04-01
Price differentials, sales volume and profit can be seen as analogues of temperature difference, heat flow and work or entropy production in the climate system. One aspect in which economic systems exhibit more clarity than the climate is that the empirical and/or statistical mechanical tendency for systems to seek a maximum in production is very evident in economics, in that the profit motive is very clear. Noting the common link between 1/f noise, power laws and Self-Organized Criticality with Maximum Entropy Production, the power law fluctuations in security and commodity prices is not inconsistent with the analogy. There is an additional thermodynamic analogy, in that scarcity is valued. A commodity concentrated among a few traders is valued highly by the many who do not have it. The market therefore encourages via prices the spreading of those goods among a wider group, just as heat tends to diffuse, increasing entropy. I explore some empirical price-volume relationships of metals and meteorites in this context.
Structural covariance of the neostriatum with regional gray matter volumes.
Soriano-Mas, C; Harrison, B J; Pujol, J; López-Solà, M; Hernández-Ribas, R; Alonso, P; Contreras-Rodríguez, O; Giménez, M; Blanco-Hinojo, L; Ortiz, H; Deus, J; Menchón, J M; Cardoner, N
2013-05-01
The caudate and putamen nuclei have been traditionally divided into dorsal and ventral territories based on their segregated patterns of functional and anatomical connectivity with distributed cortical regions. Activity-dependent structural plasticity may potentially lead to the development of regional volume correlations, or structural covariance, between the different components of each cortico-striatal circuit. Here, we studied the whole-brain structural covariance patterns of four neostriatal regions belonging to distinct cortico-striatal circuits. We also assessed the potential modulating influence of laterality, age and gender. T1-weighted three-dimensional magnetic resonance images were obtained from ninety healthy participants (50 females). Following data pre-processing, the mean signal value per hemisphere was calculated for the 'seed' regions of interest, located in the dorsal and ventral caudate and the dorsal-caudal and ventral-rostral putamen. Statistical parametric mapping was used to estimate whole-brain voxel-wise structural covariance patterns for each striatal region, controlling for the shared anatomical variance between regions in order to obtain maximally specific structural covariance patterns. As predicted, segregated covariance patterns were observed. Age was found to be a relevant modulator of the covariance patterns of the right caudate regions, while laterality effects were observed for the dorsal-caudal putamen. Gender effects were only observed via an interaction with age. The different patterns of structural covariance are discussed in detail, as well as their similarities with the functional and anatomical connectivity patterns reported for the same striatal regions in other studies. Finally, the potential mechanisms underpinning the phenomenon of volume correlations between distant cortico-striatal structures are also discussed.
Bayesian adjustment for covariate measurement errors: a flexible parametric approach.
Hossain, Shahadut; Gustafson, Paul
2009-05-15
In most epidemiological investigations, the study units are people, the outcome variable (or the response) is a health-related event, and the explanatory variables are usually environmental and/or socio-demographic factors. The fundamental task in such investigations is to quantify the association between the explanatory variables (covariates/exposures) and the outcome variable through a suitable regression model. The accuracy of such quantification depends on how precisely the relevant covariates are measured. In many instances, we cannot measure some of the covariates accurately. Rather, we can measure noisy (mismeasured) versions of them. In statistical terminology, mismeasurement in continuous covariates is known as measurement errors or errors-in-variables. Regression analyses based on mismeasured covariates lead to biased inference about the true underlying response-covariate associations. In this paper, we suggest a flexible parametric approach for avoiding this bias when estimating the response-covariate relationship through a logistic regression model. More specifically, we consider the flexible generalized skew-normal and the flexible generalized skew-t distributions for modeling the unobserved true exposure. For inference and computational purposes, we use Bayesian Markov chain Monte Carlo techniques. We investigate the performance of the proposed flexible parametric approach in comparison with a common flexible parametric approach through extensive simulation studies. We also compare the proposed method with the competing flexible parametric method on a real-life data set. Though emphasis is put on the logistic regression model, the proposed method is unified and is applicable to the other generalized linear models, and to other types of non-linear regression models as well. (c) 2009 John Wiley & Sons, Ltd.
Boiroux, Dimitri; Juhl, Rune; Madsen, Henrik
2016-01-01
... This algorithm uses UD decomposition of symmetric matrices and the array algorithm for covariance update and gradient computation. We test our algorithm on the Lotka-Volterra equations. Compared to the maximum likelihood estimation based on finite difference gradient computation, we get a significant speedup...
Objects of maximum electromagnetic chirality
Fernandez-Corbaton, Ivan
2015-01-01
We introduce a definition of the electromagnetic chirality of an object and show that it has an upper bound. The upper bound is attained if and only if the object is transparent for fields of one handedness (helicity). Additionally, electromagnetic duality symmetry, i.e. helicity preservation upon scattering, turns out to be a necessary condition for reciprocal scatterers to attain the upper bound. We use these results to provide requirements for the design of such extremal scatterers. The requirements can be formulated as constraints on the polarizability tensors for dipolar scatterers or as material constitutive relations. We also outline two applications for objects of maximum electromagnetic chirality: A twofold resonantly enhanced and background free circular dichroism measurement setup, and angle independent helicity filtering glasses.
Maximum mutual information regularized classification
Wang, Jim Jing-Yan
2014-09-07
In this paper, a novel pattern classification approach is proposed by regularizing the classifier learning to maximize mutual information between the classification response and the true class label. We argue that, with the learned classifier, the uncertainty of the true class label of a data sample should be reduced by knowing its classification response as much as possible. The reduced uncertainty is measured by the mutual information between the classification response and the true class label. To this end, when learning a linear classifier, we propose to maximize the mutual information between classification responses and true class labels of training samples, besides minimizing the classification error and reducing the classifier complexity. An objective function is constructed by modeling mutual information with entropy estimation, and it is optimized by a gradient descent method in an iterative algorithm. Experiments on two real-world pattern classification problems show the significant improvements achieved by maximum mutual information regularization.
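As an illustration of the mutual-information term, the helper below computes a plug-in (histogram) estimate of I(response; label). It is a hypothetical sketch of the quantity being maximized, not the paper's entropy estimator or its gradient-based optimizer.

```python
import numpy as np

def mutual_information(responses, labels, bins=10):
    """Plug-in estimate (in nats) of I(R; Y) = sum p(r,y) log[p(r,y)/(p(r)p(y))]
    from histogram-discretized responses and binary labels."""
    edges = np.histogram_bin_edges(responses, bins=bins)
    r = np.digitize(responses, edges[1:-1])           # bin index in 0..bins-1
    joint = np.zeros((bins, 2))
    for ri, yi in zip(r, labels):
        joint[ri, int(yi)] += 1.0
    joint /= joint.sum()
    indep = joint.sum(axis=1, keepdims=True) * joint.sum(axis=0, keepdims=True)
    nz = joint > 0
    return float(np.sum(joint[nz] * np.log(joint[nz] / indep[nz])))

rng = np.random.default_rng(0)
# responses that fully determine the label carry I = H(Y) = log 2 nats
mi_perfect = mutual_information(np.repeat([0.0, 1.0], 500), np.repeat([0, 1], 500))
# independent responses carry (nearly) zero information
mi_indep = mutual_information(rng.normal(size=1000), rng.integers(0, 2, size=1000))
```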
The strong maximum principle revisited
Pucci, Patrizia; Serrin, James
In this paper we first present the classical maximum principle due to E. Hopf, together with an extended commentary and discussion of Hopf's paper. We emphasize the comparison technique invented by Hopf to prove this principle, which has since become a main mathematical tool for the study of second order elliptic partial differential equations and has generated an enormous number of important applications. While Hopf's principle is generally understood to apply to linear equations, it is in fact also crucial in nonlinear theories, such as those under consideration here. In particular, we shall treat and discuss recent generalizations of the strong maximum principle, and also the compact support principle, for the case of singular quasilinear elliptic differential inequalities, under generally weak assumptions on the quasilinear operators and the nonlinearities involved. Our principal interest is in necessary and sufficient conditions for the validity of both principles; in exposing and simplifying earlier proofs of corresponding results; and in extending the conclusions to wider classes of singular operators than previously considered. The results have unexpected ramifications for other problems, as will develop from the exposition, e.g. two point boundary value problems for singular quasilinear ordinary differential equations (Sections 3 and 4); the exterior Dirichlet boundary value problem (Section 5); the existence of dead cores and compact support solutions, i.e. dead cores at infinity (Section 7); Euler-Lagrange inequalities on a Riemannian manifold (Section 9); comparison and uniqueness theorems for solutions of singular quasilinear differential inequalities (Section 10). The case of p-regular elliptic inequalities is briefly considered in Section 11.
An Empirical State Error Covariance Matrix for Batch State Estimation
Frisbee, Joseph H., Jr.
2011-01-01
State estimation techniques serve effectively to provide mean state estimates. However, the state error covariance matrices provided as part of these techniques inspire only limited confidence in their ability to adequately describe the uncertainty in the estimated states. A specific problem with the traditional form of state error covariance matrices is that they represent only a mapping of the assumed observation error characteristics into the state space. Any errors that arise from other sources (environment modeling, precision, etc.) are not directly represented in a traditional, theoretical state error covariance matrix. Consider that an actual observation contains only measurement error and that an estimated observation contains all other errors, known and unknown. It then follows that a measurement residual (the difference between expected and observed measurements) contains all errors for that measurement. Therefore, a direct and appropriate inclusion of the actual measurement residuals in the state error covariance matrix will result in an empirical state error covariance matrix. This empirical state error covariance matrix will fully account for the error in the state estimate. By way of a literal reinterpretation of the equations involved in the weighted least squares estimation algorithm, it is possible to arrive at an appropriate, and formally correct, empirical state error covariance matrix. The first specific step of the method is to use the average form of the weighted measurement residual variance performance index rather than its usual total weighted residual form. Next it is helpful to interpret the solution to the normal equations as the average of a collection of sample vectors drawn from a hypothetical parent population. From here, using a standard statistical analysis approach, it directly follows how to determine the standard empirical state error covariance matrix. This matrix will contain the total uncertainty in the
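In the same spirit, a simplified residual-based rescaling of the weighted least-squares covariance can be sketched as follows. The unit weights, the scalar rescaling by the average-form residual variance, and all names are illustrative assumptions; the paper's full construction differs.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 200, 3
H = rng.normal(size=(m, n))                 # measurement partials
x_true = np.array([1.0, -2.0, 0.5])
y = H @ x_true + rng.normal(scale=0.3, size=m)   # actual noise sigma = 0.3

W = np.eye(m)                               # *assumed* unit measurement weights
N_mat = H.T @ W @ H                         # normal matrix
x_hat = np.linalg.solve(N_mat, H.T @ W @ y)
P_theory = np.linalg.inv(N_mat)             # maps assumed noise into state space

r = y - H @ x_hat                           # residuals carry all actual errors
s2 = (r @ W @ r) / (m - n)                  # average-form residual variance
P_emp = s2 * P_theory                       # residual-rescaled empirical covariance
```

Here the theoretical covariance reflects only the assumed weights, while the empirical version is scaled by what the measurement residuals actually show.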
Covariate-adjusted confidence interval for the intraclass correlation coefficient.
Shoukri, Mohamed M; Donner, Allan; El-Dali, Abdelmoneim
2013-09-01
A crucial step in designing a new study is to estimate the required sample size. For a design involving cluster sampling, the appropriate sample size depends on the so-called design effect, which is a function of the average cluster size and the intracluster correlation coefficient (ICC). It is well-known that under the framework of hierarchical and generalized linear models, a reduction in residual error may be achieved by including risk factors as covariates. In this paper we show that the covariate design, indicating whether the covariates are measured at the cluster level or at the within-cluster subject level affects the estimation of the ICC, and hence the design effect. Therefore, the distinction between these two types of covariates should be made at the design stage. In this paper we use the nested-bootstrap method to assess the accuracy of the estimated ICC for continuous and binary response variables under different covariate structures. The codes of two SAS macros are made available by the authors for interested readers to facilitate the construction of confidence intervals for the ICC. Moreover, using Monte Carlo simulations we evaluate the relative efficiency of the estimators and evaluate the accuracy of the coverage probabilities of a 95% confidence interval on the population ICC. The methodology is illustrated using a published data set of blood pressure measurements taken on family members.
Structural and Maturational Covariance in Early Childhood Brain Development.
Geng, Xiujuan; Li, Gang; Lu, Zhaohua; Gao, Wei; Wang, Li; Shen, Dinggang; Zhu, Hongtu; Gilmore, John H
2017-03-01
Brain structural covariance networks (SCNs) composed of regions with correlated variation are altered in neuropsychiatric disease and change with age. Little is known about the development of SCNs in early childhood, a period of rapid cortical growth. We investigated the development of structural and maturational covariance networks, including default, dorsal attention, primary visual and sensorimotor networks, in a longitudinal population of 118 children from birth to 2 years of age and compared them with intrinsic functional connectivity networks. We found that the structural covariance of all networks exhibits strong correlations mostly limited to their seed regions. By age 2, default and dorsal attention structural networks are much less distributed compared with their functional maps. The maturational covariance maps, however, revealed significant couplings in rates of change between distributed regions, which partially recapitulate their functional networks. The structural and maturational covariance of the primary visual and sensorimotor networks shows similar patterns to the corresponding functional networks. Results indicate that functional networks are in place prior to structural networks, that correlated structural patterns in adults may arise in part from coordinated cortical maturation, and that regional co-activation in functional networks may guide and refine the maturation of SCNs over childhood development. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
Newton law in covariant unimodular F(R) gravity
Nojiri, S.; Odintsov, S. D.; Oikonomou, V. K.
2016-09-01
We investigate the Newton law in unimodular F(R) gravity. In standard F(R) gravity, the extra scalar mode often generates large corrections to the Newton law, and such models are excluded by experiments and/or observations. In unimodular F(R) gravity, however, the extra scalar mode ceases to be dynamical due to the unimodular constraint, and there is no correction to the Newton law. Even in unimodular Einstein gravity the Newton law is reproduced, but the mechanism is slightly different from that in unimodular F(R) gravity. We also investigate unimodular F(R) gravity in the covariant formulation, where we include the three-form field. We show that the three-form field does not have any unwanted properties, such as ghosts or corrections to the Newton law. In the covariant formulation, however, the above extra scalar mode becomes dynamical and could give a correction to the Newton law. We also show that there is no difference in the Friedmann-Robertson-Walker (FRW) dynamics between the non-covariant and covariant formulations.
Covariance and correlation estimation in electron-density maps.
Altomare, Angela; Cuocci, Corrado; Giacovazzo, Carmelo; Moliterni, Anna; Rizzi, Rosanna
2012-03-01
Quite recently two papers have been published [Giacovazzo & Mazzone (2011). Acta Cryst. A67, 210-218; Giacovazzo et al. (2011). Acta Cryst. A67, 368-382] which calculate the variance in any point of an electron-density map at any stage of the phasing process. The main aim of the papers was to associate a standard deviation to each pixel of the map, in order to obtain a better estimate of the map reliability. This paper deals with the covariance estimate between points of an electron-density map in any space group, centrosymmetric or non-centrosymmetric, regardless of the correlation between the model and target structures. The aim is to verify whether the electron density in one point of the map is amplified or depressed as an effect of the electron density in one or more other points of the map. High values of the covariances are usually connected with undesired features of the map. The phases are the primitive random variables of our probabilistic model; the covariance changes with the quality of the model and therefore with the quality of the phases. The conclusive formulas show that the covariance is also influenced by the Patterson map. Uncertainty on measurements may influence the covariance, particularly in the final stages of the structure refinement; a general formula is obtained taking into account both phase and measurement uncertainty, valid at any stage of the crystal structure solution.
A simple procedure for the comparison of covariance matrices.
Garcia, Carlos
2012-11-21
Comparing the covariation patterns of populations or species is a basic step in the evolutionary analysis of quantitative traits. Here I propose a new, simple method to make this comparison in two population samples that is based on comparing the variance explained in each sample by the eigenvectors of its own covariance matrix with that explained by the covariance matrix eigenvectors of the other sample. The rationale of this procedure is that the matrix eigenvectors of two similar samples would explain similar amounts of variance in the two samples. I use computer simulation and morphological covariance matrices from the two morphs in a marine snail hybrid zone to show how the proposed procedure can be used to measure the contribution of the matrices' orientation and shape to the overall differentiation. I show how this procedure can detect even modest differences between matrices calculated with moderately sized samples, and how it can be used as the basis for more detailed analyses of the nature of these differences. The new procedure constitutes a useful resource for the comparison of covariance matrices. It could fill the gap between procedures resulting in a single, overall measure of differentiation, and analytical methods based on multiple model comparison not providing such a measure.
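A minimal sketch of the core computation, under the assumption of two samples drawn from the same covariance structure: the variance that one sample's eigenvectors explain in the other sample is compared with the eigenvalues themselves. Function and variable names are hypothetical.

```python
import numpy as np

def explained_by(S_target, S_source):
    """Variance in S_target explained by each eigenvector of S_source,
    ordered by decreasing S_source eigenvalue."""
    w, V = np.linalg.eigh(S_source)
    V = V[:, np.argsort(w)[::-1]]
    return np.array([v @ S_target @ v for v in V.T])

rng = np.random.default_rng(1)
scales = np.diag([3.0, 2.0, 1.0, 0.5])
X1 = rng.normal(size=(500, 4)) @ scales      # two samples sharing the
X2 = rng.normal(size=(500, 4)) @ scales      # same covariance structure
S1, S2 = np.cov(X1, rowvar=False), np.cov(X2, rowvar=False)

own = np.sort(np.linalg.eigvalsh(S1))[::-1]  # variance along S1's own eigenvectors
cross = explained_by(S1, S2)                 # variance along S2's eigenvectors
```

For similar matrices, `cross` tracks `own` closely; for matrices differing in orientation or shape, the two profiles diverge, which is the signal the proposed procedure measures.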
Covariant Lyapunov vectors of chaotic Rayleigh-Bénard convection.
Xu, M; Paul, M R
2016-06-01
We explore numerically the high-dimensional spatiotemporal chaos of Rayleigh-Bénard convection using covariant Lyapunov vectors. We integrate the three-dimensional and time-dependent Boussinesq equations for a convection layer in a shallow square box geometry with an aspect ratio of 16 for very long times and for a range of Rayleigh numbers. We simultaneously integrate many copies of the tangent space equations in order to compute the covariant Lyapunov vectors. The dynamics explored has fractal dimensions of 20≲D_{λ}≲50, and we compute on the order of 150 covariant Lyapunov vectors. We use the covariant Lyapunov vectors to quantify the degree of hyperbolicity of the dynamics and the degree of Oseledets splitting and to explore the temporal and spatial dynamics of the Lyapunov vectors. Our results indicate that the chaotic dynamics of Rayleigh-Bénard convection is nonhyperbolic for all of the Rayleigh numbers we have explored. The entire spectrum of covariant Lyapunov vectors that we have computed is tangled, as indicated by near tangencies with neighboring vectors. A closer look at the spatiotemporal features of the Lyapunov vectors suggests contributions from structures at two different length scales with differing amounts of localization.
Maximum entropy production in daisyworld
Maunu, Haley A.; Knuth, Kevin H.
2012-05-01
Daisyworld was first introduced in 1983 by Watson and Lovelock as a model that illustrates how life can influence a planet's climate. These models typically involve modeling a planetary surface on which black and white daisies can grow thus influencing the local surface albedo and therefore also the temperature distribution. Since then, variations of daisyworld have been applied to study problems ranging from ecological systems to global climate. Much of the interest in daisyworld models is due to the fact that they enable one to study self-regulating systems. These models are nonlinear, and as such they exhibit sensitive dependence on initial conditions, and depending on the specifics of the model they can also exhibit feedback loops, oscillations, and chaotic behavior. Many daisyworld models are thermodynamic in nature in that they rely on heat flux and temperature gradients. However, what is not well-known is whether, or even why, a daisyworld model might settle into a maximum entropy production (MEP) state. With the aim to better understand these systems, this paper will discuss what is known about the role of MEP in daisyworld models.
Maximum stellar iron core mass
F W Giacobbe
2003-03-01
An analytical method of estimating the mass of a stellar iron core, just prior to core collapse, is described in this paper. The method employed depends, in part, upon an estimate of the true relativistic mass increase experienced by electrons within a highly compressed iron core, just prior to core collapse, and is significantly different from a more typical Chandrasekhar mass limit approach. This technique produced a maximum stellar iron core mass value of 2.69 × 10³⁰ kg (1.35 solar masses). This mass value is very near to the typical mass values found for neutron stars in a recent survey of actual neutron star masses. Although slightly lower and higher neutron star masses may also be found, lower mass neutron stars are believed to be formed as a result of enhanced iron core compression due to the weight of non-ferrous matter overlying the iron cores within large stars. And, higher mass neutron stars are likely to be formed as a result of fallback or accretion of additional matter after an initial collapse event involving an iron core having a mass no greater than 2.69 × 10³⁰ kg.
Maximum Matchings via Glauber Dynamics
Jindal, Anant; Pal, Manjish
2011-01-01
In this paper we study the classic problem of computing a maximum cardinality matching in general graphs $G = (V, E)$. The best known algorithm for this problem till date runs in $O(m \\sqrt{n})$ time due to Micali and Vazirani \\cite{MV80}. Even for general bipartite graphs this is the best known running time (the algorithm of Karp and Hopcroft \\cite{HK73} also achieves this bound). For regular bipartite graphs one can achieve an $O(m)$ time algorithm which, following a series of papers, has been recently improved to $O(n \\log n)$ by Goel, Kapralov and Khanna (STOC 2010) \\cite{GKK10}. In this paper we present a randomized algorithm based on the Markov Chain Monte Carlo paradigm which runs in $O(m \\log^2 n)$ time, thereby obtaining a significant improvement over \\cite{MV80}. We use a Markov chain similar to the \\emph{hard-core model} for Glauber Dynamics with \\emph{fugacity} parameter $\\lambda$, which is used to sample independent sets in a graph from the Gibbs Distribution \\cite{V99}, to design a faster algori...
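A toy version of Glauber dynamics on matchings (the monomer-dimer chain with fugacity λ) can be sketched as follows. This only illustrates the kind of chain being sampled; it is not the paper's O(m log² n) algorithm, and all names and parameters are illustrative.

```python
import random

def glauber_matching(edges, n_vertices, lam=10.0, steps=20000, seed=0):
    """Toy Glauber dynamics on the set of matchings: propose a uniformly
    random edge; add it with prob lam/(1+lam) if both endpoints are free,
    drop it with prob 1/(1+lam) if it is currently matched.
    Returns the largest matching encountered along the chain."""
    rng = random.Random(seed)
    partner = [None] * n_vertices
    current, best = set(), set()
    for _ in range(steps):
        u, v = edges[rng.randrange(len(edges))]
        if (u, v) in current:
            if rng.random() < 1.0 / (1.0 + lam):
                current.remove((u, v))
                partner[u] = partner[v] = None
        elif partner[u] is None and partner[v] is None:
            if rng.random() < lam / (1.0 + lam):
                current.add((u, v))
                partner[u], partner[v] = v, u
        if len(current) > len(best):
            best = set(current)
    return best

# the 6-cycle has a perfect matching of size 3
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0)]
best = glauber_matching(edges, 6)
```

A high fugacity biases the stationary distribution toward large matchings, which is the intuition behind using such chains for maximum matching.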
2011-01-10
...: Establishing Maximum Allowable Operating Pressure or Maximum Operating Pressure Using Record Evidence, and... facilities of their responsibilities, under Federal integrity management (IM) regulations, to perform... system, especially when calculating Maximum Allowable Operating Pressure (MAOP) or Maximum Operating...
Sparse reduced-rank regression with covariance estimation
Chen, Lisha
2014-12-08
Improving the predictive performance of multiple response regression compared with separate linear regressions is a challenging question. On the one hand, it is desirable to seek model parsimony when facing a large number of parameters. On the other hand, for certain applications it is necessary to take into account the general covariance structure for the errors of the regression model. We assume a reduced-rank regression model and work with the likelihood function with general error covariance to achieve both objectives. In addition we propose to select relevant variables for reduced-rank regression by using a sparsity-inducing penalty, and to estimate the error covariance matrix simultaneously by using a similar penalty on the precision matrix. We develop a numerical algorithm to solve the penalized regression problem. In a simulation study and real data analysis, the new method is compared with two recent methods for multivariate regression and exhibits competitive performance in prediction and variable selection.
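As a backdrop to the penalized method, classical unpenalized reduced-rank regression with identity error covariance has a closed form: an OLS fit followed by projection onto the leading principal directions of the fitted values. The sketch below shows that baseline only, not the paper's sparse, covariance-estimating procedure.

```python
import numpy as np

def reduced_rank_regression(X, Y, rank):
    """Rank-constrained least squares (identity error covariance):
    OLS followed by projection onto the top principal directions
    of the fitted values."""
    B_ols, *_ = np.linalg.lstsq(X, Y, rcond=None)
    _, _, Vt = np.linalg.svd(X @ B_ols, full_matrices=False)
    P = Vt[:rank].T @ Vt[:rank]            # projector onto leading directions
    return B_ols @ P

rng = np.random.default_rng(2)
n, p, q, r = 300, 6, 5, 2
B_true = rng.normal(size=(p, r)) @ rng.normal(size=(r, q))   # rank-2 coefficients
X = rng.normal(size=(n, p))
Y = X @ B_true + 0.1 * rng.normal(size=(n, q))

B_hat = reduced_rank_regression(X, Y, rank=r)
```

The rank constraint alone already reduces the parameter count from p·q to rank·(p+q-rank); the paper adds sparsity and a general error covariance on top of this structure.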
Full covariance of CMB and lensing reconstruction power spectra
Peloton, Julien; Lewis, Antony; Carron, Julien; Zahn, Oliver
2016-01-01
CMB and lensing reconstruction power spectra are powerful probes of cosmology. However they are correlated, since the CMB power spectra are lensed and the lensing reconstruction is constructed using CMB multipoles. We perform a full analysis of the auto- and cross-covariances, including polarization power spectra and minimum variance lensing estimators, and compare with simulations of idealized future CMB-S4 observations. Covariances sourced by fluctuations in the unlensed CMB and instrumental noise can largely be removed by using a realization-dependent subtraction of lensing reconstruction noise, leaving a relatively simple covariance model that is dominated by lensing-induced terms and well described by a small number of principal components. The correlations between the CMB and lensing power spectra will be detectable at the level of $\\sim 5\\sigma$ for a CMB-S4 mission, and neglecting those could underestimate some parameter error bars by several tens of percent. However we found that the inclusion of ext...
Femtosecond Studies Of Coulomb Explosion Utilizing Covariance Mapping
Card, D A
2000-01-01
The studies presented herein elucidate details of the Coulomb explosion event initiated through the interaction of molecular clusters with an intense femtosecond laser beam (≥1 PW/cm²). Clusters studied include ammonia, titanium-hydrocarbon, pyridine, and 7-azaindole. Covariance analysis is presented as a general technique to study the dynamical processes in clusters and to discern whether the fragmentation channels are competitive. Positive covariance determinations identify concerted processes such as the concomitant explosion of protonated cluster ions of asymmetrical size. Anti-covariance mapping is exploited to distinguish competitive reaction channels such as the production of highly charged nitrogen atoms formed at the expense of the protonated members of a cluster ion ensemble. This technique is exemplified in each cluster system studied. Kinetic energy analyses, from experiment and simulation, are presented to fully understand the Coulomb explosion event. A cutoff study strongly suggests that...
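The core of covariance mapping is a shot-to-shot covariance of spectra: peaks fed by the same fragmentation channel co-vary positively across laser shots. A synthetic sketch (entirely hypothetical data; two bins fed by one common channel) is:

```python
import numpy as np

rng = np.random.default_rng(3)
n_shots, n_bins = 2000, 50
# synthetic time-of-flight spectra: bins 10 and 30 are fed by the same
# (hypothetical) fragmentation channel, so their shot-to-shot yields co-vary
spectra = rng.poisson(1.0, size=(n_shots, n_bins)).astype(float)
channel_yield = rng.poisson(5.0, size=n_shots).astype(float)
spectra[:, 10] += channel_yield
spectra[:, 30] += channel_yield

cov_map = np.cov(spectra, rowvar=False)   # C(x,y) = <I(x)I(y)> - <I(x)><I(y)>
```

A positive off-diagonal island in `cov_map` flags a concerted process; anti-covariance (a negative island) would flag competing channels that deplete one another.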
Extreme eigenvalues of sample covariance and correlation matrices
Heiny, Johannes
This thesis is concerned with asymptotic properties of the eigenvalues of high-dimensional sample covariance and correlation matrices under an infinite fourth moment of the entries. In the first part, we study the joint distributional convergence of the largest eigenvalues of the sample covariance... of the problem at hand. We develop a theory for the point process of the normalized eigenvalues of the sample covariance matrix in the case where rows and columns of the data are linearly dependent. Based on the weak convergence of this point process we derive the limit laws of various functionals... of the eigenvalues. In the second part, we show that the largest and smallest eigenvalues of a high-dimensional sample correlation matrix possess almost sure non-random limits if the truncated variance of the entry distribution is “almost slowly varying”, a condition we describe via moment properties of self...
Data Covariances from R-Matrix Analyses of Light Nuclei
Hale, G.M., E-mail: ghale@lanl.gov; Paris, M.W.
2015-01-15
After first reviewing the parametric description of light-element reactions in multichannel systems using R-matrix theory and features of the general LANL R-matrix analysis code EDA, we describe how its chi-square minimization procedure gives parameter covariances. This information is used, together with analytically calculated sensitivity derivatives, to obtain cross section covariances for all reactions included in the analysis by first-order error propagation. Examples are given of the covariances obtained for systems with few resonances (⁵He) and with many resonances (¹³C). We discuss the prevalent problem of this method leading to cross section uncertainty estimates that are unreasonably small for large data sets. The answer to this problem appears to be using parameter confidence intervals in place of standard errors.
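First-order (sandwich) propagation of parameter covariances to cross-section covariances can be sketched with a hypothetical two-parameter model: the sensitivity matrix S holds the analytic derivatives ∂σ/∂p, and the cross-section covariance is S C_p Sᵀ. The model, parameter values, and covariance below are invented for illustration only.

```python
import numpy as np

# hypothetical two-parameter cross-section model: sigma(E) = p0 * exp(-p1 * E)
p = np.array([5.0, 0.8])
C_p = np.array([[0.04, 0.005],            # parameter covariance from the fit
                [0.005, 0.0025]])
E = np.array([0.5, 1.0, 2.0])             # evaluation energies

sigma = p[0] * np.exp(-p[1] * E)
# analytic sensitivity derivatives: d(sigma)/d(p0) and d(sigma)/d(p1)
S = np.column_stack([np.exp(-p[1] * E), -p[0] * E * np.exp(-p[1] * E)])
C_sigma = S @ C_p @ S.T                   # first-order cross-section covariance
```

The resulting matrix is symmetric and carries the full energy-energy correlation structure implied by the fitted parameters, which is what the evaluation files store.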
Residual noise covariance for Planck low-resolution data analysis
Keskitalo, R; Cabella, P; Kisner, T; Poutanen, T; Stompor, R; Bartlett, J G; Borrill, J; Cantalupo, C; De Gasperis, G; De Rosa, A; de Troia, G; Eriksen, H K; Finelli, F; Górski, K M; Gruppuso, A; Hivon, E; Jaffe, A; Keihanen, E; Kurki-Suonio, H; Lawrence, C R; Natoli, P; Paci, F; Polenta, G; Rocha, G
2009-01-01
Aims: Develop and validate tools to estimate residual noise covariance in Planck frequency maps. Quantify signal error effects and compare different techniques to produce low-resolution maps. Methods: We derive analytical estimates of covariance of the residual noise contained in low-resolution maps produced using a number of map-making approaches. We test these analytical predictions using Monte Carlo simulations and their impact on angular power spectrum estimation. We use simulations to quantify the level of signal errors incurred in different resolution downgrading schemes considered in this work. Results: We find an excellent agreement between the optimal residual noise covariance matrices and Monte Carlo noise maps. For destriping map-makers, the extent of agreement is dictated by the knee frequency of the correlated noise component and the chosen baseline offset length. Signal striping is shown to be insignificant when properly dealt with. In map resolution downgrading, we find that...
Adaptive Covariance Inflation in a Multi-Resolution Assimilation Scheme
Hickmann, K. S.; Godinez, H. C.
2015-12-01
When forecasts are performed using modern data assimilation methods, observation and model error can be scale-dependent. During data assimilation, the blending of error across scales can result in model divergence, since large errors at one scale can be propagated across scales during the analysis step. Wavelet-based multi-resolution analysis can be used to separate scales in model and observations during the application of an ensemble Kalman filter. However, this separation is done at the cost of implementing an ensemble Kalman filter at each scale. This presents problems when tuning the covariance inflation parameter at each scale. We present a method to adaptively tune a scale-dependent covariance inflation vector based on balancing the covariance of the innovation and the covariance of observations of the ensemble. Our methods are demonstrated on a one-dimensional Kuramoto-Sivashinsky (K-S) model known to demonstrate non-linear interactions between scales.
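A simplified scalar version of innovation-based inflation tuning (the paper tunes a scale-dependent inflation vector; this collapses it to one number) balances the observed innovation magnitude against the ensemble-predicted observation covariance. All names and the synthetic setup are illustrative.

```python
import numpy as np

def innovation_inflation(d, HPHt, R):
    """Scalar inflation factor lam solving E[d d^T] ~ lam*H P H^T + R in trace:
    lam = (d.d - tr R) / tr(H P H^T), floored at 1 so the ensemble spread
    is never deflated."""
    lam = (d @ d - np.trace(R)) / np.trace(HPHt)
    return max(lam, 1.0)

rng = np.random.default_rng(4)
m = 1000
HPHt = 0.5 * np.eye(m)                  # ensemble-predicted obs covariance
R = 0.25 * np.eye(m)                    # observation-error covariance
# innovations drawn as if the true forecast spread were twice the ensemble's
d = rng.normal(scale=np.sqrt(2 * 0.5 + 0.25), size=m)
lam = innovation_inflation(d, HPHt, R)  # recovers a factor near 2
```

Applied per wavelet scale, the same consistency check yields one inflation factor per scale, which is the shape of the adaptive vector the abstract describes.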
Galaxy-galaxy lensing estimators and their covariance properties
Singh, Sukhdeep; Seljak, Uroš; Slosar, Anže; Gonzalez, Jose Vazquez
2016-01-01
We study the covariance properties of real space correlation function estimators -- primarily galaxy-shear correlations, or galaxy-galaxy lensing -- using SDSS data for both shear catalogs and lenses (specifically the BOSS LOWZ sample). Using mock catalogs of lenses and sources, we disentangle the various contributions to the covariance matrix and compare them with a simple analytical model. We show that not subtracting the lensing measurement around random points from the measurement around the lens sample is equivalent to performing the measurement using the density field instead of the over-density field, and that this leads to a significant error increase due to an additional term in the covariance. Therefore, this subtraction should be performed regardless of its beneficial effects on systematics. Comparing the error estimates from data and mocks for estimators that involve the over-density, we find that the errors are dominated by the shape noise and lens clustering, that empirically estimated covarianc...
Flavour Covariant Transport Equations: an Application to Resonant Leptogenesis
Dev, P S Bhupal; Pilaftsis, Apostolos; Teresi, Daniele
2014-01-01
We present a fully flavour-covariant formalism for transport phenomena, by deriving Markovian master equations that describe the time-evolution of particle number densities in a statistical ensemble with arbitrary flavour content. As an application of this general formalism, we study flavour effects in a scenario of resonant leptogenesis (RL) and obtain the flavour-covariant evolution equations for heavy-neutrino and lepton number densities. This provides a complete and unified description of RL, capturing three relevant physical phenomena: (i) the resonant mixing between the heavy-neutrino states, (ii) coherent oscillations between different heavy-neutrino flavours, and (iii) quantum decoherence effects in the charged-lepton sector. To illustrate the importance of this formalism, we numerically solve the flavour-covariant rate equations for a minimal RL model and show that the total lepton asymmetry can be enhanced up to one order of magnitude, as compared to that obtained from flavour-diagonal or partially ...
Covariance measurement in the presence of non-synchronous trading and market microstructure noise
Griffin, J.E.; Oomen, R.C.A.
2011-01-01
This paper studies the problem of covariance estimation when prices are observed non-synchronously and contaminated by i.i.d. microstructure noise. We derive closed form expressions for the bias and variance of three popular covariance estimators, namely realised covariance, realised covariance plus
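Ignoring microstructure noise for brevity, the non-synchronicity problem and the previous-tick-synchronized realised covariance estimator can be sketched as follows. The simulation setup, grid sizes, and names are all illustrative assumptions, not the paper's exact framework.

```python
import numpy as np

rng = np.random.default_rng(5)
N = 5000                                    # fine time grid on [0, 1]
Sigma = np.array([[1.0, 0.5], [0.5, 1.0]])  # integrated covariance per unit time
L = np.linalg.cholesky(Sigma)
# latent synchronous log-prices: correlated Brownian increments
price = np.cumsum(rng.normal(size=(N, 2)) @ L.T / np.sqrt(N), axis=0)

# each asset is observed at its own random (non-synchronous) times
obs1 = np.sort(rng.choice(N, size=N // 2, replace=False))
obs2 = np.sort(rng.choice(N, size=N // 2, replace=False))

def previous_tick(obs_idx, series, grid_idx):
    """Last available observation at or before each grid time
    (the first grid points fall back to the earliest observation)."""
    pos = np.searchsorted(obs_idx, grid_idx, side="right") - 1
    return series[obs_idx[np.clip(pos, 0, None)]]

grid = np.arange(0, N, 10)                  # 500-point sampling grid
p1 = previous_tick(obs1, price[:, 0], grid)
p2 = previous_tick(obs2, price[:, 1], grid)
rc = float(np.sum(np.diff(p1) * np.diff(p2)))   # realised covariance
```

The estimate lands near the true value of 0.5 but is biased slightly toward zero by the staleness of the previous-tick prices (the Epps effect), which is the kind of bias the paper characterizes in closed form.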
Neutron Cross Section Covariances for Structural Materials and Fission Products
Hoblit, S.; Cho, Y.-S.; Herman, M.; Mattoon, C. M.; Mughabghab, S. F.; Obložinský, P.; Pigni, M. T.; Sonzogni, A. A.
2011-12-01
We describe neutron cross section covariances for 78 structural materials and fission products produced for the new US evaluated nuclear reaction library ENDF/B-VII.1. Neutron incident energies cover the full range from 10⁻⁵ eV to 20 MeV and covariances are primarily provided for capture, elastic and inelastic scattering as well as (n,2n). The list of materials follows priorities defined by the Advanced Fuel Cycle Initiative, the major application being data adjustment for advanced fast reactor systems. Thus, in addition to 28 structural materials and 49 fission products, the list also includes ²³Na, which is an important fast-reactor coolant. Due to the extensive amount of materials, we adopted a variety of methodologies depending on the priority of a specific material. In the resolved resonance region we primarily used resonance parameter uncertainties given in the Atlas of Neutron Resonances and either applied the kernel approximation to propagate these uncertainties into cross section uncertainties or resorted to simplified estimates based on integral quantities. For several priority materials we adopted MF32 covariances produced by SAMMY at ORNL, modified by us by adding MF33 covariances to account for systematic uncertainties. In the fast neutron region we resorted to three methods. The most sophisticated was the EMPIRE-KALMAN method, which combines experimental data from the EXFOR library with nuclear reaction modeling and least-squares fitting. The two other methods used simplified estimates, either based on the propagation of nuclear reaction model parameter uncertainties or on a dispersion analysis of central cross section values in recent evaluated data files. All covariances were subject to quality assurance procedures adopted recently by CSEWG. In addition, tools were developed to allow inspection of processed covariances and computed integral quantities, and for comparing these values to data from the Atlas and the astrophysics database KADoNiS.
Positive semidefinite integrated covariance estimation, factorizations and asynchronicity
Sauri, Orimar; Lunde, Asger; Laurent, Sébastien;
2017-01-01
An estimator of the ex-post covariation of log-prices under asynchronicity and microstructure noise is proposed. It uses the Cholesky factorization of the covariance matrix in order to exploit the heterogeneity in trading intensities to estimate the different parameters sequentially with as many observations as possible. The estimator is positive semidefinite by construction. We derive asymptotic results and confirm their good finite sample properties by means of a Monte Carlo simulation. In the application we forecast portfolio Value-at-Risk and sector risk exposures for a portfolio of 52 stocks. We...
The covariant electromagnetic Casimir effect for real conducting spherical shells
Razmi, H
2016-01-01
Using the covariant electromagnetic Casimir effect (previously introduced for real conducting cylindrical shells [1]), the Casimir force experienced by a spherical shell, under the Dirichlet boundary condition, is calculated. The renormalization procedure is based on the plasma cut-off frequency for real conductors. The real case of a gold (silver) sphere is considered and the corresponding electromagnetic Casimir force is computed. In the covariant approach, there is no decomposition of the fields into TE and TM modes; thus, we do not need to consider the Neumann boundary condition in parallel with the Dirichlet problem and then add the corresponding results.
Some Algorithms for the Conditional Mean Vector and Covariance Matrix
John F. Monahan
2006-08-01
We consider here the problem of computing the mean vector and covariance matrix for a conditional normal distribution, considering especially a sequence of problems where the conditioning variables are changing. The sweep operator provides one simple general approach that is easy to implement and update. A second, more goal-oriented general method avoids explicit computation of the vector and matrix, while enabling easy evaluation of the conditional density for likelihood computation or easy generation from the conditional distribution. The covariance structure that arises from the special case of an ARMA(p, q) time series can be exploited for substantial improvements in computational efficiency.
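The conditional moments this abstract refers to follow directly from the partitioned covariance matrix (the sweep operator computes the same quantities incrementally). A minimal NumPy sketch of the direct Schur-complement computation; the function name and interface are illustrative, not the paper's:

```python
import numpy as np

def conditional_normal(mu, sigma, idx_cond, x_cond):
    """Mean and covariance of the free block given the conditioning block,
    for a joint normal N(mu, sigma). idx_cond indexes the conditioned variables."""
    n = len(mu)
    idx_free = np.setdiff1d(np.arange(n), idx_cond)
    s_aa = sigma[np.ix_(idx_free, idx_free)]
    s_ab = sigma[np.ix_(idx_free, idx_cond)]
    s_bb = sigma[np.ix_(idx_cond, idx_cond)]
    # Solve instead of forming an explicit inverse, for numerical stability.
    k = np.linalg.solve(s_bb, s_ab.T).T          # S_ab S_bb^{-1}
    mu_cond = mu[idx_free] + k @ (x_cond - mu[idx_cond])
    sigma_cond = s_aa - k @ s_ab.T               # Schur complement
    return mu_cond, sigma_cond
```

When the conditioning set changes repeatedly, recomputing the Schur complement from scratch is wasteful; that is the situation where the sweep operator's update property pays off.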
A Blind Detection Algorithm Utilizing Statistical Covariance in Cognitive Radio
Yingxue Li
2012-11-01
Because the expressions for the performance parameters are obtained by asymptotic methods in most blind covariance detection algorithms, this paper presents a new blind detection algorithm using Cholesky factorization. Utilizing random matrix theory, we derived the performance parameters using a non-asymptotic method. The proposed method overcomes the noise uncertainty problem and performs well without any information about the channel, primary user, or noise. Numerical simulation results demonstrate that the performance parameter expressions are correct and that the new detector outperforms the other blind covariance detectors.
On spectral distribution of high dimensional covariation matrices
Heinrich, Claudio; Podolskij, Mark
In this paper we present the asymptotic theory for spectral distributions of high dimensional covariation matrices of Brownian diffusions. More specifically, we consider N-dimensional Itô integrals with time varying matrix-valued integrands. We observe n equidistant high frequency data points of the underlying Brownian diffusion and we assume that N/n -> c in (0,∞). We show that under a certain mixed spectral moment condition the spectral distribution of the empirical covariation matrix converges in distribution almost surely. Our proof relies on the method of moments and applications of graph theory.
A scale invariant covariance structure on jet space
Pedersen, Kim Steenstrup; Loog, Marco; Markussen, Bo
2005-01-01
This paper considers scale invariance of statistical image models. We study statistical scale invariance of the covariance structure of jet space under scale space blurring and derive the necessary structure and conditions of the jet covariance matrix in order for it to be scale invariant. As part of the derivation, we introduce a blurring operator At that acts on jet space, contrary to doing spatial filtering, and a scaling operator Ss. The stochastic Brownian image model is an example of a class of functions which are scale invariant with respect to the operators At and Ss. This paper also includes empirical...
Covariant Description of Transformation Optics in Linear and Nonlinear Media
Paul, Oliver
2011-01-01
The technique of transformation optics (TO) is an elegant method for the design of electromagnetic media with tailored optical properties. In this paper, we focus on the formal structure of TO theory. By using a complete covariant formalism, we present a general transformation law that holds for arbitrary materials including bianisotropic, magneto-optical, nonlinear and moving media. Due to the principle of general covariance, the formalism is applicable to arbitrary space-time coordinate transformations and automatically accounts for magneto-electric coupling terms. The formalism is demonstrated for the calculation of the second harmonic generation in a twisted TO concentrator.
Fission yield covariances for JEFF: A Bayesian Monte Carlo method
Leray Olivier
2017-01-01
The JEFF library does not contain fission yield covariances, but simply best estimates and uncertainties. This situation is not unique, as all libraries face this deficiency, firstly due to the lack of a defined format. An alternative approach is to provide a set of random fission yields, themselves reflecting covariance information. In this work, these random files are obtained by combining the information from the JEFF library (fission yields and uncertainties) with the theoretical knowledge from the GEF code. Examples of this method are presented for the main actinides together with their impacts on simple burn-up and decay heat calculations.
Neutron Resonance Parameters and Covariance Matrix of 239Pu
Derrien, Herve [ORNL]; Leal, Luiz C. [ORNL]; Larson, Nancy M. [ORNL]
2008-08-01
In order to obtain the resonance parameters in a single energy range and the corresponding covariance matrix, a reevaluation of 239Pu was performed with the code SAMMY. The most recent experimental data were analyzed or reanalyzed in the energy range thermal to 2.5 keV. The normalization of the fission cross section data was reconsidered by taking into account the most recent measurements of Weston et al. and Wagemans et al. A full resonance parameter covariance matrix was generated. The method used to obtain realistic uncertainties on the average cross section calculated by SAMMY or other processing codes was examined.
High-dimensional covariance matrix estimation with missing observations
Lounici, Karim
2012-01-01
In this paper, we study the problem of high-dimensional approximately low-rank covariance matrix estimation with missing observations. We propose a simple procedure computationally tractable in high-dimension and that does not require imputation of the missing data. We establish non-asymptotic sparsity oracle inequalities for the estimation of the covariance matrix with the Frobenius and spectral norms, valid for any setting of the sample size and the dimension of the observations. We further establish minimax lower bounds showing that our rates are minimax optimal up to a logarithmic factor.
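The debiasing idea behind this kind of missing-data covariance estimation can be illustrated for zero-mean data whose entries are observed independently with a known probability δ: off-diagonal entries of the masked second-moment matrix are attenuated by δ², diagonal entries by δ. A simplified NumPy sketch (the paper's estimator and guarantees are more general; this is my illustration, not the authors' code):

```python
import numpy as np

def cov_with_missing(x, mask, delta):
    """Debiased covariance estimate for zero-mean data x (n x d), where
    mask marks observed entries and each entry is observed with prob. delta.
    Missing entries of x are treated as zero."""
    n, d = x.shape
    y = x * mask                      # ensure unobserved entries contribute zero
    s = (y.T @ y) / n                 # second moment of the masked data
    est = s / delta**2                # undo the delta^2 attenuation off-diagonal
    ii = np.diag_indices(d)
    est[ii] = np.diag(s) / delta      # diagonal is attenuated by delta only
    return est
```

With δ = 1 (no missingness) the estimator reduces to the usual empirical second-moment matrix.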
Estimating surface fluxes using eddy covariance and numerical ogive optimization
Sievers, J.; Papakyriakou, T.; Larsen, Søren Ejling;
2015-01-01
Estimating representative surface fluxes using eddy covariance leads invariably to questions concerning inclusion or exclusion of low-frequency flux contributions. For studies where fluxes are linked to local physical parameters and up-scaled through numerical modelling efforts, low-frequency con...
On Variance and Covariance for Bounded Linear Operators
Chia Shiang LIN
2001-01-01
In this paper we initiate a study of covariance and variance for two operators on a Hilbert space, proving that the c-v (covariance-variance) inequality holds, which is equivalent to the Cauchy-Schwarz inequality. As applications of the c-v inequality, we prove uniformly the Bernstein-type inequalities and equalities, and show the generalized Heinz-Kato-Furuta-type inequalities and equalities, from which a generalization and sharpening of Reid's inequality is obtained. We show that every operator can be expressed as a p-hyponormal-type and a hyponormal-type operator. Finally, some new characterizations of the Furuta inequality are given.
A comparison of covariance structure in wild and laboratory muroid crania.
Jamniczky, Heather A; Hallgrímsson, Benedikt
2009-06-01
Mutations have the ability to produce dramatic changes to covariance structure by altering the variance of covariance-generating developmental processes. Several evolutionary mechanisms exist that may be acting interdependently to stabilize covariance structure, despite this developmental potential for variation within species. We explore covariance structure in the crania of laboratory mouse mutants exhibiting mild-to-significant developmental perturbations of the cranium, and contrast it with covariance structure in related wild muroid taxa. Phenotypic covariance structure is conserved among wild Muroidea, but highly variable and mutation-dependent within the laboratory group. We show that covariance structures in natural populations of related species occupy a more restricted portion of covariance structure space than do the covariance structures resulting from single mutations of significant effect or the almost nonexistent genetic differences that separate inbred mouse strains. Our results suggest that developmental constraint is not the primary mechanism acting to stabilize covariance structure, and imply a more important role for other mechanisms.
The Sherpa Maximum Likelihood Estimator
Nguyen, D.; Doe, S.; Evans, I.; Hain, R.; Primini, F.
2011-07-01
A primary goal for the second release of the Chandra Source Catalog (CSC) is to include X-ray sources with as few as 5 photon counts detected in stacked observations of the same field, while maintaining acceptable detection efficiency and false source rates. Aggressive source detection methods will result in detection of many false positive source candidates. Candidate detections will then be sent to a new tool, the Maximum Likelihood Estimator (MLE), to evaluate the likelihood that a detection is a real source. MLE uses the Sherpa modeling and fitting engine to fit a model of a background and source to multiple overlapping candidate source regions. A background model is calculated by simultaneously fitting the observed photon flux in multiple background regions. This model is used to determine the quality of the fit statistic for a background-only hypothesis in the potential source region. The statistic for a background-plus-source hypothesis is calculated by adding a Gaussian source model convolved with the appropriate Chandra point spread function (PSF) and simultaneously fitting the observed photon flux in each observation in the stack. Since a candidate source may be located anywhere in the field of view of each stacked observation, a different PSF must be used for each observation because of the strong spatial dependence of the Chandra PSF. The likelihood of a valid source being detected is a function of the two statistics (for background alone, and for background-plus-source). The MLE tool is an extensible Python module with potential for use by the general Chandra user.
Vestige: Maximum likelihood phylogenetic footprinting
Maxwell Peter
2005-05-01
Background: Phylogenetic footprinting is the identification of functional regions of DNA by their evolutionary conservation. This is achieved by comparing orthologous regions from multiple species and identifying the DNA regions that have diverged less than neutral DNA. Vestige is a phylogenetic footprinting package built on the PyEvolve toolkit that uses probabilistic molecular evolutionary modelling to represent aspects of sequence evolution, including the conventional divergence measure employed by other footprinting approaches. In addition to measuring the divergence, Vestige allows the expansion of the definition of a phylogenetic footprint to include variation in the distribution of any molecular evolutionary processes. This is achieved by displaying the distribution of model parameters that represent partitions of molecular evolutionary substitutions. Examination of the spatial incidence of these effects across regions of the genome can identify DNA segments that differ in the nature of the evolutionary process. Results: Vestige was applied to a reference dataset of the SCL locus from four species and provided clear identification of the known conserved regions in this dataset. To demonstrate the flexibility to use diverse models of molecular evolution and dissect the nature of the evolutionary process, Vestige was used to footprint the Ka/Ks ratio in primate BRCA1 with a codon model of evolution. Two regions of putative adaptive evolution were identified, illustrating the ability of Vestige to represent the spatial distribution of distinct molecular evolutionary processes. Conclusion: Vestige provides a flexible, open platform for phylogenetic footprinting. Underpinned by the PyEvolve toolkit, Vestige provides a framework for visualising the signatures of evolutionary processes across the genome of numerous organisms simultaneously. By exploiting the maximum-likelihood statistical framework, the complex interplay between mutational
Maximum likelihood estimation of fractionally cointegrated systems
Lasak, Katarzyna
...to the equilibrium parameters and the variance-covariance matrix of the error term. We show that using ML principles to estimate jointly all parameters of the fractionally cointegrated system, we obtain consistent estimates and provide their asymptotic distributions. The cointegration matrix is asymptotically mixed...
A Semi-parametric Multivariate Gap-filling Model for Eddy Covariance Latent Heat Flux
Li, M.; Chen, Y.
2010-12-01
Quantitative descriptions of latent heat fluxes are important to study the water and energy exchanges between terrestrial ecosystems and the atmosphere. The eddy covariance approaches have been recognized as the most reliable technique for measuring surface fluxes over time scales ranging from hours to years. However, unfavorable micrometeorological conditions, instrument failures, and applicable measurement limitations may cause inevitable flux gaps in time series data. Development and application of suitable gap-filling techniques are crucial to estimate long term fluxes. In this study, a semi-parametric multivariate gap-filling model was developed to fill latent heat flux gaps for eddy covariance measurements. Our approach combines the advantages of a multivariate statistical analysis (principal component analysis, PCA) and a nonlinear interpolation technique (K-nearest-neighbors, KNN). The PCA method was first used to resolve the multicollinearity relationships among various hydrometeorological factors, such as radiation, soil moisture deficit, LAI, and wind speed. The KNN method was then applied as a nonlinear interpolation tool to estimate the flux gaps as the weighted sum of the latent heat fluxes with the K nearest distances in the PCs' domain. Two years, 2008 and 2009, of eddy covariance and hydrometeorological data from a subtropical mixed evergreen forest (the Lien-Hua-Chih site) were collected to calibrate and validate the proposed approach with artificial gaps after standard QC/QA procedures. The optimal K values and weighting factors were determined by the maximum likelihood test. The results of gap-filled latent heat fluxes show that the developed model successfully preserves energy balance at daily, monthly, and yearly time scales. Annual amounts of evapotranspiration from this study forest were 747 mm and 708 mm for 2008 and 2009, respectively. Nocturnal evapotranspiration was estimated with filled gaps and the results are comparable with other studies
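The PCA-then-KNN pipeline described above can be sketched in a few lines of NumPy: project the standardized drivers onto leading principal components, then fill each gap with an inverse-distance-weighted average of the K nearest valid observations in PC space. This is a generic illustration under assumed interfaces, not the authors' implementation:

```python
import numpy as np

def pca_knn_fill(drivers, flux, gap_idx, k=5, n_pc=2):
    """Fill flux gaps by KNN averaging in a PCA-reduced driver space.

    drivers: (n, p) hydrometeorological covariates (no gaps, all columns varying)
    flux:    (n,) flux series with NaNs at the positions listed in gap_idx
    """
    z = (drivers - drivers.mean(0)) / drivers.std(0)   # standardize drivers
    _, _, vt = np.linalg.svd(z, full_matrices=False)   # PCA via SVD
    pcs = z @ vt[:n_pc].T                              # leading components
    filled = flux.copy()
    valid = np.setdiff1d(np.arange(len(flux)), gap_idx)
    for i in gap_idx:
        d = np.linalg.norm(pcs[valid] - pcs[i], axis=1)
        order = np.argsort(d)[:k]                      # K nearest valid points
        w = 1.0 / (d[order] + 1e-9)                    # inverse-distance weights
        filled[i] = np.sum(w * flux[valid[order]]) / np.sum(w)
    return filled
```

Working in the PC domain rather than on the raw drivers is what handles the multicollinearity the abstract mentions: correlated drivers collapse onto a few orthogonal axes before distances are computed.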
An Empirical State Error Covariance Matrix Orbit Determination Example
Frisbee, Joseph H., Jr.
2015-01-01
State estimation techniques serve effectively to provide mean state estimates. However, the state error covariance matrices provided as part of these techniques suffer from some degree of lack of confidence in their ability to adequately describe the uncertainty in the estimated states. A specific problem with the traditional form of state error covariance matrices is that they represent only a mapping of the assumed observation error characteristics into the state space. Any errors that arise from other sources (environment modeling, precision, etc.) are not directly represented in a traditional, theoretical state error covariance matrix. First, consider that an actual observation contains only measurement error and that an estimated observation contains all other errors, known and unknown. Then it follows that a measurement residual (the difference between expected and observed measurements) contains all errors for that measurement. Therefore, a direct and appropriate inclusion of the actual measurement residuals in the state error covariance matrix of the estimate will result in an empirical state error covariance matrix. This empirical state error covariance matrix will fully include all of the errors in the state estimate. The empirical error covariance matrix is determined from a literal reinterpretation of the equations involved in the weighted least squares estimation algorithm. It is a formally correct, empirical state error covariance matrix obtained through use of the average form of the weighted measurement residual variance performance index rather than the usual total weighted residual form. Based on its formulation, this matrix will contain the total uncertainty in the state estimate, regardless of the source of the uncertainty and whether or not that source is anticipated. It is expected that the empirical error covariance matrix will give a better, statistical representation of the state error in poorly modeled systems or when sensor performance
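The core idea, scaling the theoretical weighted-least-squares covariance by the average weighted residual variance so that actual residuals inform the uncertainty, can be sketched as follows. This is a simplified illustration of the residual-scaling concept, not the paper's exact formulation:

```python
import numpy as np

def wls_with_empirical_cov(h, y, w):
    """Weighted least squares with a residual-scaled state error covariance.

    h: (m, n) measurement matrix; y: (m,) observations; w: (m, m) weights.
    The theoretical covariance (H^T W H)^{-1} is scaled by the average
    weighted residual variance, so unmodeled errors inflate the covariance.
    """
    p_theory = np.linalg.inv(h.T @ w @ h)
    x = p_theory @ h.T @ w @ y                 # WLS state estimate
    r = y - h @ x                              # measurement residuals
    m, n = h.shape
    scale = (r @ w @ r) / (m - n)              # average weighted residual variance
    return x, scale * p_theory                 # empirical covariance
```

When the measurement model and weights are consistent with the actual errors, the scale factor is near one and the empirical covariance matches the theoretical one; mismodeling drives it above one.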
Leitão, Sofia; Stadler, Alfred; Peña, M. T.; Biernat, Elmar P.
2017-01-01
The Covariant Spectator Theory (CST) is used to calculate the mass spectrum and vertex functions of heavy-light and heavy mesons in Minkowski space. The covariant kernel contains Lorentz scalar, pseudoscalar, and vector contributions. The numerical calculations are performed in momentum space, where special care is taken to treat the strong singularities present in the confining kernel. The observed meson spectrum is very well reproduced after fitting a small number of model parameters. Remarkably, a fit to a few pseudoscalar meson states only, which are insensitive to spin-orbit and tensor forces and do not allow to separate the spin-spin from the central interaction, leads to essentially the same model parameters as a more general fit. This demonstrates that the covariance of the chosen interaction kernel is responsible for the very accurate prediction of the spin-dependent quark-antiquark interactions.
Robust Ensemble Filtering and Its Relation to Covariance Inflation in the Ensemble Kalman Filter
Luo, Xiaodong
2011-12-01
A robust ensemble filtering scheme based on the H∞ filtering theory is proposed. The optimal H∞ filter is derived by minimizing the supremum (or maximum) of a predefined cost function, a criterion different from the minimum variance used in the Kalman filter. By design, the H∞ filter is more robust than the Kalman filter, in the sense that the estimation error in the H∞ filter in general has a finite growth rate with respect to the uncertainties in assimilation, except for a special case that corresponds to the Kalman filter. The original form of the H∞ filter contains global constraints in time, which may be inconvenient for sequential data assimilation problems. Therefore a variant is introduced that solves some time-local constraints instead, and hence it is called the time-local H∞ filter (TLHF). By analogy to the ensemble Kalman filter (EnKF), the concept of ensemble time-local H∞ filter (EnTLHF) is also proposed. The general form of the EnTLHF is outlined, and some of its special cases are discussed. In particular, it is shown that an EnKF with certain covariance inflation is essentially an EnTLHF. In this sense, the EnTLHF provides a general framework for conducting covariance inflation in the EnKF-based methods. Some numerical examples are used to assess the relative robustness of the TLHF–EnTLHF in comparison with the corresponding KF–EnKF method.
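The multiplicative covariance inflation that the abstract relates to the EnTLHF has a very simple ensemble-space form: rescale member deviations about the ensemble mean so the sample covariance grows by a chosen factor. A generic sketch of that standard operation (not the paper's EnTLHF construction itself):

```python
import numpy as np

def inflate_ensemble(ens, lam):
    """Multiplicative covariance inflation for an (n_members, n_state) ensemble.

    Deviations from the ensemble mean are scaled by sqrt(lam), so the
    sample covariance of the returned ensemble is lam times the original.
    """
    mean = ens.mean(axis=0)
    return mean + np.sqrt(lam) * (ens - mean)
```

In EnKF practice this step is applied to the forecast ensemble before the analysis update to counteract sampling error and model error underestimation; the paper's contribution is to show that such inflation corresponds to a time-local H∞ filtering criterion.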
IDENTIFICATION OF IDEOTYPES BY CANONICAL ANALYSIS IN Panicum maximum
Janaina Azevedo Martuscello
2015-04-01
Grouping of genotypes by canonical variable analysis is an important tool in breeding. It allows the grouping of individuals with similar characteristics that are associated with superior agronomic performance and may indicate the ideal profile of a plant for the region. The objective of the present study was to define, by canonical analysis, the agronomic profile of Panicum maximum plants adapted to the Agreste region. The experiment was conducted in a completely randomized design with 28 treatments, 22 genotypes of Panicum maximum, and cultivars Mombasa, Tanzania, Massai, Milenio, BRS Zuri, and BRS Tamani, in triplicate in 4-m² plots. Plots were harvested five times and the following traits were evaluated: plant height; total, leaf, stem, and dead dry matter yields; leaf:stem ratio; leaf percentage; and volumetric density of forage. The analysis of canonical variables was performed based on the phenotypic means of the evaluated traits and on the residual variance and covariance matrix. Genotype PM34 showed higher mean leaf dry matter yield under the conditions of the Agreste of Alagoas (on average 53% higher than cultivars Mombasa, Tanzania, Milenio, and Massai). It was possible to summarize the variation observed in eight agronomic characteristics in only two canonical variables accounting for 81.44% of the data variation. The ideotype plant adapted to the conditions of the Agreste should be tall and present high leaf yield, leaf percentage, and leaf:stem ratio, and intermediate values of volumetric density of forage.
Treatment decisions based on scalar and functional baseline covariates.
Ciarleglio, Adam; Petkova, Eva; Ogden, R Todd; Tarpey, Thaddeus
2015-12-01
The amount and complexity of patient-level data being collected in randomized-controlled trials offer both opportunities and challenges for developing personalized rules for assigning treatment for a given disease or ailment. For example, trials examining treatments for major depressive disorder are not only collecting typical baseline data such as age, gender, or scores on various tests, but also data that measure the structure and function of the brain such as images from magnetic resonance imaging (MRI), functional MRI (fMRI), or electroencephalography (EEG). These latter types of data have an inherent structure and may be considered as functional data. We propose an approach that uses baseline covariates, both scalars and functions, to aid in the selection of an optimal treatment. In addition to providing information on which treatment should be selected for a new patient, the estimated regime has the potential to provide insight into the relationship between treatment response and the set of baseline covariates. Our approach can be viewed as an extension of "advantage learning" to include both scalar and functional covariates. We describe our method and how to implement it using existing software. Empirical performance of our method is evaluated with simulated data in a variety of settings and also applied to data arising from a study of patients with major depressive disorder from whom baseline scalar covariates as well as functional data from EEG are available.
Improved forecasting with leading indicators: the principal covariate index
C. Heij (Christiaan)
2007-01-01
We propose a new method of leading index construction that combines the need for data compression with the objective of forecasting. This so-called principal covariate index is constructed to forecast growth rates of the Composite Coincident Index. The forecast performance is compared
Analysis of inadvertent microprocessor lag time on eddy covariance results
Karl Zeller; Gary Zimmerman; Ted Hehn; Evgeny Donev; Diane Denny; Jeff Welker
2001-01-01
Researchers using the eddy covariance approach to measuring trace gas fluxes are often hoping to measure carbon dioxide and energy fluxes for ecosystem intercomparisons. This paper demonstrates a systematic microprocessor-caused lag of 0.1 to 0.2 s in a commercial sonic anemometer-analog-to-digital datapacker system operated at 10 Hz. The result of the inadvertent...
Experimental Uncertainty and Covariance Information in EXFOR Library
Schillebeeckx P.
2012-05-01
Compilation of experimental uncertainty and covariance information in the EXFOR Library is discussed. Following a brief history of the information provided in the EXFOR Library, the current EXFOR formats and their limitations are reviewed. Proposed extensions for neutron-induced reaction cross sections in the fast neutron region and the resonance region are also presented.
Modeling corporate defaults: Poisson autoregressions with exogenous covariates (PARX)
Agosto, Arianna; Cavaliere, Giuseppe; Kristensen, Dennis
We develop a class of Poisson autoregressive models with additional covariates (PARX) that can be used to model and forecast time series of counts. We establish the time series properties of the models, including conditions for stationarity and existence of moments. These results are in turn used...
How many longitudinal covariate measurements are needed for risk prediction?
Reinikainen, Jaakko; Karvanen, Juha; Tolonen, Hanna
2016-01-01
In epidemiologic follow-up studies, many key covariates, such as smoking, use of medication, blood pressure, and cholesterol, are time varying. Because of practical and financial limitations, time-varying covariates cannot be measured continuously, but only at certain prespecified time points. We study how the number of these longitudinal measurements can be chosen cost-efficiently by evaluating the usefulness of the measurements for risk prediction. The usefulness is addressed by measuring the improvement in model discrimination between models using different amounts of longitudinal information. We use simulated follow-up data and the data from the Finnish East-West study, a follow-up study, with eight longitudinal covariate measurements carried out between 1959 and 1999. In a simulation study, we show how the variability and the hazard ratio of a time-varying covariate are connected to the importance of remeasurements. In the East-West study, it is seen that for older people, the risk predictions obtained using only every other measurement are almost equivalent to the predictions obtained using all eight measurements. Decisions about the study design have significant effects on the costs. The cost-efficiency can be improved by applying the measures of model discrimination to data from previous studies and simulations. Copyright © 2016 Elsevier Inc. All rights reserved.
Nonlinear wave mechanics from classical dynamics and scale covariance
Hammad, F. [Departement TC-SETI, Universite A.Mira de Bejaia, Route Targa Ouzemmour, 06000 Bejaia (Algeria)], E-mail: fayhammad@yahoo.fr
2007-10-29
Nonlinear Schroedinger equations proposed by Kostin and by Doebner and Goldin are rederived from Nottale's prescription for obtaining quantum mechanics from classical mechanics in nondifferentiable spaces; i.e., from hydrodynamical concepts and scale covariance. Some soliton and plane wave solutions are discussed.
Covariation of spectral and nonlinear EEG measures with alpha biofeedback.
Fell, J.; Elfadil, H.; Klaver, P.; Roschke, J.; Elger, C.E.; Fernandez, G.S.E.
2002-01-01
This study investigated how different spectral and nonlinear EEG measures covaried with alpha power during auditory alpha biofeedback training, performed by 13 healthy subjects. We found a significant positive correlation of alpha power with the largest Lyapunov-exponent, pointing to an increased
Leading order covariant chiral nucleon-nucleon interaction
Ren, Xiu-Lei; Geng, Li-Sheng; Long, Bing-Wei; Ring, Peter; Meng, Jie
2016-01-01
Motivated by the successes of relativistic theories in studies of atomic/molecular and nuclear systems and the strong need for a covariant chiral force in relativistic nuclear structure studies, we develop a new covariant scheme to construct the nucleon-nucleon interaction in the framework of chiral effective field theory. The chiral interaction is formulated up to leading order with a covariant power counting and a Lorentz invariant chiral Lagrangian. We find that the covariant scheme induces all the six invariant spin operators needed to describe the nuclear force, which are also helpful to achieve cutoff independence for certain partial waves. A detailed investigation of the partial wave potentials shows a better description of the scattering phase shifts with low angular momenta than the leading order Weinberg approach. Particularly, the description of the $^1S_0$, $^3P_0$, and $^1P_1$ partial waves is similar to that of the next-to-leading order Weinberg approach. Our study shows that the relativistic fr...
Hawking Radiation from Plane Symmetric Black Hole Covariant Anomaly
ZENG Xiao-Xiong; HAN Yi-Wen; YANG Shu-Zheng
2009-01-01
Based on the covariant anomaly cancellation method, which is believed to be more refined than the initial approach of Robinson and Wilczek, we discuss Hawking radiation from the plane symmetric black hole. The result shows that Hawking radiation from non-spherically symmetric black holes can also be derived from the viewpoint of anomaly cancellation.
On a new normalization for tractor covariant derivatives
Hammerl, Matthias; Soucek, Vladimir; Silhan, Josef
2010-01-01
A regular normal parabolic geometry of type $G/P$ on a manifold $M$ gives rise to sequences $D_i$ of invariant differential operators, known as the curved version of the BGG resolution. These sequences are constructed from the normal covariant derivative $\
Spectral Density of Sample Covariance Matrices of Colored Noise
Dolezal, Emil
2008-01-01
We study the dependence of the spectral density of the covariance matrix ensemble on the power spectrum of the underlying multivariate signal. The white noise signal leads to the celebrated Marchenko-Pastur formula. We demonstrate results for some colored noise signals.
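The Marchenko-Pastur prediction mentioned in this abstract is easy to verify numerically. Below is a minimal sketch, not from the paper: the matrix sizes, seed, and tolerance are illustrative choices, and the white-noise case is the one with the closed-form support edges.

```python
import numpy as np

def mp_bounds(q):
    """Edges of the Marchenko-Pastur support for unit-variance white noise,
    where q = n_variables / n_samples (q <= 1)."""
    return (1 - np.sqrt(q)) ** 2, (1 + np.sqrt(q)) ** 2

rng = np.random.default_rng(0)
n_var, n_samp = 100, 1000                  # aspect ratio q = 0.1
X = rng.standard_normal((n_var, n_samp))   # white-noise multivariate signal
C = X @ X.T / n_samp                       # sample covariance matrix
eigvals = np.linalg.eigvalsh(C)

lo, hi = mp_bounds(n_var / n_samp)
# Nearly all eigenvalues should lie inside the predicted support
inside = np.mean((eigvals > lo - 0.1) & (eigvals < hi + 0.1))
```

A colored-noise signal (e.g. an AR filter applied to the rows of `X`) deforms this spectrum away from the white-noise formula, which is the dependence the paper studies.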
Negative refraction and positive refraction are not Lorentz covariant
Mackay, Tom G., E-mail: T.Mackay@ed.ac.u [School of Mathematics and Maxwell Institute for Mathematical Sciences, University of Edinburgh, Edinburgh EH9 3JZ (United Kingdom)] [NanoMM - Nanoengineered Metamaterials Group, Department of Engineering Science and Mechanics, Pennsylvania State University, University Park, PA 16802-6812 (United States); Lakhtakia, Akhlesh, E-mail: akhlesh@psu.ed [NanoMM - Nanoengineered Metamaterials Group, Department of Engineering Science and Mechanics, Pennsylvania State University, University Park, PA 16802-6812 (United States)
2009-12-28
Refraction into a half-space occupied by a pseudochiral omega material moving at constant velocity was studied by directly implementing the Lorentz transformations of the electric and magnetic fields. Numerical studies revealed that negative refraction, negative phase velocity, and counterposition are not Lorentz-covariant phenomena in general.
Equivalence between the Covariant and Bardeen Perturbation Formalisms
Vitenti, S D P; Pinto-Neto, N
2013-01-01
In a previous work we obtained a set of necessary conditions for the linear approximation in cosmology. Here we discuss the relations of this approach with the so-called covariant perturbations. It is often argued in the literature that one of the main advantages of the covariant approach to describing cosmological perturbations is that the Bardeen formalism is coordinate dependent. In this paper we will reformulate the Bardeen approach in a completely covariant manner. For that, we introduce the notion of pure and mixed tensors, which yields an adequate language to treat both perturbative approaches in a common framework. Additionally, we define fully non-linear tensors that at first order correspond to the three known gauge-invariant variables $\Phi$, $\Psi$ and $\Xi$. We also stress that in the referred covariant approach one necessarily introduces an additional hyper-surface choice to the problem, and the same tensor combinations above at first order are also hyper-surface invariant making the gauge invari...
A Superfield Formalism of osp(1,2) Covariant Quantization
Lavrov, P M
2001-01-01
We propose a superfield description of osp(1,2) covariant quantization by extending the set of admissibility conditions for the quantum action. We realize a superfield form of the generating equations, specify the vacuum functional and obtain the corresponding transformations of extended BRST symmetry.
Modeling the Conditional Covariance between Stock and Bond Returns
P. de Goeij (Peter); W.A. Marquering (Wessel)
2002-01-01
To analyze the intertemporal interaction between the stock and bond market returns, we allow the conditional covariance matrix to vary over time according to a multivariate GARCH model similar to Bollerslev, Engle and Wooldridge (1988). We extend the model such that it allows for asymmet
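A much simpler relative of the multivariate GARCH recursion referenced in this abstract is an exponentially weighted (EWMA, RiskMetrics-style) covariance update. The sketch below is illustrative only: the returns are simulated i.i.d. draws, and the smoothing constant and initialization window are arbitrary assumptions, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(5)
T = 500
# Toy "stock" and "bond" returns (i.i.d. here; real data are serially dependent)
true_cov = np.array([[1.0, 0.3],
                     [0.3, 0.5]])
returns = rng.multivariate_normal([0.0, 0.0], true_cov, size=T)

lam = 0.94                                    # assumed smoothing constant
H = np.cov(returns[:50], rowvar=False)        # initialize from an early window
for t in range(50, T):
    r = returns[t]
    # EWMA update: today's conditional covariance is a blend of yesterday's
    # matrix and the outer product of today's return vector
    H = lam * H + (1 - lam) * np.outer(r, r)
cond_cov = H[0, 1]                            # conditional stock-bond covariance
```

A full multivariate GARCH adds estimated persistence parameters per element (or via BEKK/DCC restrictions) instead of a single fixed `lam`.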
Covariate-adjusted measures of discrimination for survival data
White, Ian R.; Rapsomaniki, Eleni; Wannamethee, S. G.; Morris, R. W.; Willeit, J.; Willeit, P.; Santer, P.; Kiechl, S.; Wald, N.; Ebrahim, S.; Lawlor, D. A.; Gallacher, J.; Yarnell, J. W G; Ben-Shlomo, Y.; Casiglia, E.; Tikhonoff, V.; Sutherland, S. E.; Nietert, P. J.; Keil, J. E.; Bachman, D. L.; Psaty, B. M.; Cushman, M.; Nordestgaard, B. G.; Tybjærg-Hansen, A.; Frikke-Schmidt, R.; Giampaoli, S.; Palmieri, L.; Panico, S.; Pilotto, L.; Vanuzzo, D.; Simons, L. A.; Friedlander, Y.; McCallum, J.; Price, J. F.; McLachlan, S.; Taylor, J. O.; Guralnik, J. M.; Wallace, R. B.; Kohout, F. J.; Cornoni-Huntley, J. C.; Guralnik, J. M.; Blazer, D. G.; Guralnik, J. M.; Phillips, C. L.; Phillips, C. L.; Guralnik, J. M.; Wareham, N. J.; Khaw, K. T.; Brenner, H.; Schöttker, B.; Müller, H. T.; Rothenbacher, D.; Nissinen, A.; Donfrancesco, C.; Giampaoli, S.; Harald, K.; Jousilahti, P. R.; Vartiainen, E.; Salomaa, V.; D'Agostino, R. B.; Wolf, P. A.; Vasan, R. S.; Daimon, M.; Oizumi, T.; Kayama, T.; Kato, T.; Chetrit, A.; Dankner, R.; Lubin, F.; Welin, L.; Svärdsudd, K.; Eriksson, H.; Lappas, G.; Lissner, L.; Mehlig, K.; Björkelund, C.; Nagel, D.; Kiyohara, Y.; Arima, H.; Ninomiya, T.; Hata, J.; Rodriguez, B.; Dekker, J. M.; Nijpels, G.; Stehouwer, C. D A; Iso, H.; Kitamura, A.; Yamagishi, K.; Noda, H.; Goldbourt, U.; Kauhanen, J.; Salonen, J. T.; Tuomainen, T. P.; Meade, T. W.; DeStavola, B. L.; Blokstra, A.; Verschuren, W. M M; Cushman, M.; de Boer, I. H.; Folsom, A. R.; Psaty, B. M.; Koenig, W.; Meisinger, C.; Peters, A.; Verschuren, W. M M; Bueno-de-Mesquita, H. B.; Blokstra, A.; Rosengren, A.; Wilhelmsen, L.; Lappas, G.; Kuller, L. H.; Grandits, G.; Cooper, J. A.; Bauer, K. A.; Davidson, K. W.; Kirkland, S.; Shaffer, J. A.; Shimbo, D.; Kitamura, A.; Iso, H.; Sato, S.; Dullaart, R. P F; Bakker, S. J L; Gansevoort, R. T.; Ducimetiere, P.; Amouyel, P.; Arveiler, D.; Evans, A.; Ferrières, J.; Schulte, H.; Assmann, G.; Jukema, J. W.; Westendorp, R. 
G J; Sattar, N.; Cantin, B.; Lamarche, B.; Després, J. P.; Wingard, D. L.; Daniels, L. B.; Gudnason, V.; Aspelund, T.; Trevisan, M.; Hofman, A.; Franco, O. H.; Tunstall-Pedoe, H.; Tavendale, R.; Lowe, G. D O; Woodward, M.; Howard, W. J.; Howard, B. V.; Zhang, Y.; Best, L. G.; Umans, J.; Ben-Shlomo, Y.; Davey-Smith, G.; Onat, A.; Nakagawa, H.; Sakurai, M.; Nakamura, K.; Morikawa, Y.; Njølstad, I.; Mathiesen, E. B.; Wilsgaard, T.; Sundström, J.; Gaziano, J. M.; Ridker, P. M.; Marmot, M.; Clarke, R.; Collins, R.; Fletcher, A.; Brunner, E.; Shipley, M.; Kivimaki, M.; Ridker, P. M.; Buring, J.; Rifai, N.; Cook, N.; Ford, I.; Robertson, M.; Marín Ibañez, A.; Feskens, E. J M; Geleijnse, J. M.
2015-01-01
Motivation: Discrimination statistics describe the ability of a survival model to assign higher risks to individuals who experience earlier events: examples are Harrell's C-index and Royston and Sauerbrei's D, which we call the D-index. Prognostic covariates whose distributions are controlled by the
Globally covering a-priori regional gravity covariance models
D. Arabelos
2003-01-01
Gravity anomaly data generated using Wenzel's GPM98A model complete to degree 1800, from which OSU91A has been subtracted, have been used to estimate covariance functions for a set of globally covering equal-area blocks of size 22.5° × 22.5° at the Equator, having a 2.5° overlap. For each block an analytic covariance function model was determined. The models are based on 4 parameters: the depth to the Bjerhammar sphere (which determines the correlation), the free-air gravity anomaly variance, a scale factor of the OSU91A error degree-variances, and a maximal summation index, N, of the error degree-variances. The depth of the Bjerhammar sphere varies from -134 km to nearly zero, N varies from 360 to 40, the scale factor from 0.03 to 38.0, and the gravity variance from 1081 to 24 (10 µm s⁻²)². The parameters are interpreted in terms of the quality of the data used to construct OSU91A and GPM98A and general conditions such as the occurrence of mountain chains. The variation of the parameters shows that it is necessary to use regional covariance models in order to obtain a realistic signal-to-noise ratio in global applications. Key words: GOCE mission, covariance function, spacewise approach
Genomic variance estimates: With or without disequilibrium covariances?
Lehermeier, C; de Los Campos, G; Wimmer, V; Schön, C-C
2017-06-01
Whole-genome regression methods are often used for estimating genomic heritability: the proportion of phenotypic variance that can be explained by regression on marker genotypes. Recently, there has been an intensive debate on whether and how to account for the contribution of linkage disequilibrium (LD) to genomic variance. Here, we investigate two different methods for genomic variance estimation that differ in their ability to account for LD. By analysing flowering time in a data set on 1,057 fully sequenced Arabidopsis lines with strong evidence for diversifying selection, we observed a large contribution of covariances between quantitative trait loci (QTL) to the genomic variance. The classical estimate of genomic variance that ignores covariances underestimated the genomic variance in the data. The second method accounts for LD explicitly and leads to genomic variance estimates that when added to error variance estimates match the sample variance of phenotypes. This method also allows estimating the covariance between sets of markers when partitioning the genome into subunits. Large covariance estimates between the five Arabidopsis chromosomes indicated that the population structure in the data led to strong LD also between physically unlinked QTL. By consecutively removing population structure from the phenotypic variance using principal component analysis, we show how population structure affects the magnitude of LD contribution and the genomic variance estimates obtained with the two methods. © 2017 Blackwell Verlag GmbH.
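The decomposition debated in this abstract, per-locus variances versus the full quadratic form that includes disequilibrium covariances, can be written down directly. In the sketch below the genotype matrix, the induced LD between the first two markers, and the marker effects are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)
n, m = 200, 5
# Genotype counts (0/1/2); make marker 1 strongly correlated with marker 0
Z = rng.binomial(2, 0.5, size=(n, m)).astype(float)
Z[:, 1] = np.where(rng.random(n) < 0.8, Z[:, 0], Z[:, 1])

b = np.array([0.5, 0.4, -0.3, 0.2, 0.1])   # hypothetical marker effects
g = Z @ b                                  # genomic values

C = np.cov(Z, rowvar=False)                # genotype covariance matrix
var_no_ld = np.sum(np.diag(C) * b**2)      # sums per-locus variances only
var_with_ld = b @ C @ b                    # adds 2*b_i*b_j*cov_ij LD terms
```

With same-sign effects at loci in positive LD, the classical sum of per-locus variances (`var_no_ld`) undershoots the variance of the genomic values, while the quadratic form matches it exactly: the contrast the paper analyses.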
Efficient retrieval of landscape Hessian: forced optimal covariance adaptive learning.
Shir, Ofer M; Roslund, Jonathan; Whitley, Darrell; Rabitz, Herschel
2014-06-01
Knowledge of the Hessian matrix at the landscape optimum of a controlled physical observable offers valuable information about the system robustness to control noise. The Hessian can also assist in physical landscape characterization, which is of particular interest in quantum system control experiments. The recently developed landscape theoretical analysis motivated the compilation of an automated method to learn the Hessian matrix about the global optimum without derivative measurements from noisy data. The current study introduces the forced optimal covariance adaptive learning (FOCAL) technique for this purpose. FOCAL relies on the covariance matrix adaptation evolution strategy (CMA-ES) that exploits covariance information amongst the control variables by means of principal component analysis. The FOCAL technique is designed to operate with experimental optimization, generally involving continuous high-dimensional search landscapes (≳30) with large Hessian condition numbers (≳10^{4}). This paper introduces the theoretical foundations of the inverse relationship between the covariance learned by the evolution strategy and the actual Hessian matrix of the landscape. FOCAL is presented and demonstrated to retrieve the Hessian matrix with high fidelity on both model landscapes and quantum control experiments, which are observed to possess nonseparable, nonquadratic search landscapes. The recovered Hessian forms were corroborated by physical knowledge of the systems. The implications of FOCAL extend beyond the investigated studies to potentially cover other physically motivated multivariate landscapes.
Covariate-adjusted measures of discrimination for survival data
White, Ian R.; Rapsomaniki, Eleni; Wannamethee, S. G.; Morris, R. W.; Willeit, J.; Willeit, P.; Santer, P.; Kiechl, S.; Wald, N.; Ebrahim, S.; Lawlor, D. A.; Gallacher, J.; Yarnell, J. W G; Ben-Shlomo, Y.; Casiglia, E.; Tikhonoff, V.; Sutherland, S. E.; Nietert, P. J.; Keil, J. E.; Bachman, D. L.; Psaty, B. M.; Cushman, M.; Nordestgaard, B. G.; Tybjærg-Hansen, A.; Frikke-Schmidt, R.; Giampaoli, S.; Palmieri, L.; Panico, S.; Pilotto, L.; Vanuzzo, D.; Simons, L. A.; Friedlander, Y.; McCallum, J.; Price, J. F.; McLachlan, S.; Taylor, J. O.; Guralnik, J. M.; Wallace, R. B.; Kohout, F. J.; Cornoni-Huntley, J. C.; Guralnik, J. M.; Blazer, D. G.; Guralnik, J. M.; Phillips, C. L.; Phillips, C. L.; Guralnik, J. M.; Wareham, N. J.; Khaw, K. T.; Brenner, H.; Schöttker, B.; Müller, H. T.; Rothenbacher, D.; Nissinen, A.; Donfrancesco, C.; Giampaoli, S.; Harald, K.; Jousilahti, P. R.; Vartiainen, E.; Salomaa, V.; D'Agostino, R. B.; Wolf, P. A.; Vasan, R. S.; Daimon, M.; Oizumi, T.; Kayama, T.; Kato, T.; Chetrit, A.; Dankner, R.; Lubin, F.; Welin, L.; Svärdsudd, K.; Eriksson, H.; Lappas, G.; Lissner, L.; Mehlig, K.; Björkelund, C.; Nagel, D.; Kiyohara, Y.; Arima, H.; Ninomiya, T.; Hata, J.; Rodriguez, B.; Dekker, J. M.; Nijpels, G.; Stehouwer, C. D A; Iso, H.; Kitamura, A.; Yamagishi, K.; Noda, H.; Goldbourt, U.; Kauhanen, J.; Salonen, J. T.; Tuomainen, T. P.; Meade, T. W.; DeStavola, B. L.; Blokstra, A.; Verschuren, W. M M; Cushman, M.; de Boer, I. H.; Folsom, A. R.; Psaty, B. M.; Koenig, W.; Meisinger, C.; Peters, A.; Verschuren, W. M M; Bueno-de-Mesquita, H. B.; Blokstra, A.; Rosengren, A.; Wilhelmsen, L.; Lappas, G.; Kuller, L. H.; Grandits, G.; Cooper, J. A.; Bauer, K. A.; Davidson, K. W.; Kirkland, S.; Shaffer, J. A.; Shimbo, D.; Kitamura, A.; Iso, H.; Sato, S.; Dullaart, R. P F; Bakker, S. J L; Gansevoort, R. T.; Ducimetiere, P.; Amouyel, P.; Arveiler, D.; Evans, A.; Ferrières, J.; Schulte, H.; Assmann, G.; Jukema, J. W.; Westendorp, R. 
G J; Sattar, N.; Cantin, B.; Lamarche, B.; Després, J. P.; Wingard, D. L.; Daniels, L. B.; Gudnason, V.; Aspelund, T.; Trevisan, M.; Hofman, A.; Franco, O. H.; Tunstall-Pedoe, H.; Tavendale, R.; Lowe, G. D O; Woodward, M.; Howard, W. J.; Howard, B. V.; Zhang, Y.; Best, L. G.; Umans, J.; Ben-Shlomo, Y.; Davey-Smith, G.; Onat, A.; Nakagawa, H.; Sakurai, M.; Nakamura, K.; Morikawa, Y.; Njølstad, I.; Mathiesen, E. B.; Wilsgaard, T.; Sundström, J.; Gaziano, J. M.; Ridker, P. M.; Marmot, M.; Clarke, R.; Collins, R.; Fletcher, A.; Brunner, E.; Shipley, M.; Kivimaki, M.; Ridker, P. M.; Buring, J.; Rifai, N.; Cook, N.; Ford, I.; Robertson, M.; Marín Ibañez, A.; Feskens, E. J M; Geleijnse, J. M.
2015-01-01
Motivation: Discrimination statistics describe the ability of a survival model to assign higher risks to individuals who experience earlier events: examples are Harrell's C-index and Royston and Sauerbrei's D, which we call the D-index. Prognostic covariates whose distributions are controlled by the
Covariance, correlation matrix, and the multiscale community structure of networks.
Shen, Hua-Wei; Cheng, Xue-Qi; Fang, Bin-Xing
2010-07-01
Empirical studies show that real-world networks often exhibit multiple scales of topological description. However, how to identify the intrinsic multiple scales of a network remains an open problem. In this paper, we consider detecting the multiscale community structure of a network from the perspective of dimension reduction. From this perspective, a covariance matrix of the network is defined to uncover the multiscale community structure through translation and rotation transformations. It is proved that this covariance matrix is the unbiased version of the well-known modularity matrix. We then point out that the translation and rotation transformations fail to deal with heterogeneous networks, which are very common in nature and society. To address this problem, a correlation matrix is proposed by introducing a rescaling transformation into the covariance matrix. Extensive tests on real-world and artificial networks demonstrate that the correlation matrix significantly outperforms the covariance matrix (equivalently, the modularity matrix) at identifying the multiscale community structure of a network. This work provides a novel perspective on the identification of community structure, suggesting that various dimension reduction methods might be applied to it, and shows that the rescaling transformation, together with the translation and rotation transformations, is crucial to identifying the multiscale community structure of a network.
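The modularity matrix that the paper's covariance matrix is said to de-bias has a simple closed form, B = A - kkᵀ/2m, and its leading eigenvector performs a two-way community split. A toy sketch (the graph, two triangles joined by one edge, is chosen purely for illustration):

```python
import numpy as np

# Adjacency matrix: two triangles {0,1,2} and {3,4,5} joined by edge (2,3)
A = np.array([
    [0, 1, 1, 0, 0, 0],
    [1, 0, 1, 0, 0, 0],
    [1, 1, 0, 1, 0, 0],
    [0, 0, 1, 0, 1, 1],
    [0, 0, 0, 1, 0, 1],
    [0, 0, 0, 1, 1, 0],
], dtype=float)

k = A.sum(axis=1)                    # degree vector
two_m = k.sum()                      # twice the number of edges
B = A - np.outer(k, k) / two_m       # modularity matrix (rows sum to zero)

# Sign pattern of the leading eigenvector gives a two-community split
w, v = np.linalg.eigh(B)
leading = v[:, np.argmax(w)]
labels = (leading > 0).astype(int)
```

The paper's point is that `B` can be read as a (biased) covariance matrix of the network, and that a rescaled, correlation-like version behaves better on degree-heterogeneous graphs.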
High-dimensional Sparse Inverse Covariance Estimation using Greedy Methods
Johnson, Christopher C; Ravikumar, Pradeep
2011-01-01
In this paper we consider the task of estimating the non-zero pattern of the sparse inverse covariance matrix of a zero-mean Gaussian random vector from a set of iid samples. Note that this is also equivalent to recovering the underlying graph structure of a sparse Gaussian Markov Random Field (GMRF). We present two novel greedy approaches to solving this problem. The first estimates the non-zero covariates of the overall inverse covariance matrix using a series of global forward and backward greedy steps. The second estimates the neighborhood of each node in the graph separately, again using greedy forward and backward steps, and combines the intermediate neighborhoods to form an overall estimate. The principal contribution of this paper is a rigorous analysis of the sparsistency, or consistency in recovering the sparsity pattern of the inverse covariance matrix. Surprisingly, we show that both the local and global greedy methods learn the full structure of the model with high probability given just $O(d\\log...
Covariance matrices for use in criticality safety predictability studies
Derrien, H.; Larson, N.M.; Leal, L.C.
1997-09-01
Criticality predictability applications require as input the best available information on fissile and other nuclides. In recent years important work has been performed in the analysis of neutron transmission and cross-section data for fissile nuclei in the resonance region by using the computer code SAMMY. The code uses Bayes method (a form of generalized least squares) for sequential analyses of several sets of experimental data. Values for Reich-Moore resonance parameters, their covariances, and the derivatives with respect to the adjusted parameters (data sensitivities) are obtained. In general, the parameter file contains several thousand values and the dimension of the covariance matrices is correspondingly large. These matrices are not reported in the current evaluated data files due to their large dimensions and to the inadequacy of the file formats. The present work has two goals: the first is to calculate the covariances of group-averaged cross sections from the covariance files generated by SAMMY, because these can be more readily utilized in criticality predictability calculations. The second goal is to propose a more practical interface between SAMMY and the evaluated files. Examples are given for ²³⁵U in the popular 199- and 238-group structures, using the latest ORNL evaluation of the ²³⁵U resonance parameters.
Eddy covariance based methane flux in Sundarbans mangroves, India
Chandra Shekhar Jha; Suraj Reddy Rodda; Kiran Chand Thumaty; A K Raha; V K Dadhwal
2014-07-01
We report initial results of the methane flux measured using the eddy covariance method during summer months from the world's largest mangrove ecosystem, the Sundarbans of India. Mangrove ecosystems are known sources of methane (CH4), which has a very high global warming potential. In order to quantify the methane flux in mangroves, an eddy covariance flux tower was recently erected in the largest unpolluted and undisturbed mangrove ecosystem in the Sundarbans (India). The tower is equipped with eddy covariance instruments to continuously measure methane fluxes in addition to the mass and energy fluxes. This paper presents preliminary results of methane flux variations during the summer months (April and May 2012) in the Sundarbans mangrove ecosystem. The mean CH4 concentration over the study period was 1682 ± 956 ppb. The CH4 fluxes computed with the eddy covariance technique showed that the study area acts as a net source of CH4, with a daily mean flux of 150.22 ± 248.87 mg m−2 day−1. Both the methane concentration and its flux showed very high diurnal variability. Though the environmental conditions controlling methane emission are not yet fully understood, an attempt has been made in the present study to analyse the relationship of methane efflux with tidal activity. This study is part of the Indian Space Research Organisation–Geosphere Biosphere Programme (ISRO–GBP) initiative under the 'National Carbon Project'.
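The core eddy covariance computation behind fluxes like these is the covariance of vertical wind speed and scalar concentration over an averaging period (Reynolds decomposition). The sketch below uses fully synthetic numbers, not the Sundarbans data; the sampling rate, averaging window, and coupling strength are invented:

```python
import numpy as np

# Synthetic 30-minute record at 10 Hz: vertical wind w (m/s) and
# scalar concentration c (mg/m^3), with a built-in upward flux
rng = np.random.default_rng(42)
n = 30 * 60 * 10
w = rng.normal(0.0, 0.3, n)                      # fluctuating vertical wind
c = 1.2 + 0.05 * w + rng.normal(0.0, 0.02, n)    # concentration tracks updrafts

# Eddy covariance flux: mean product of the fluctuations about the means
w_prime = w - w.mean()
c_prime = c - c.mean()
flux = np.mean(w_prime * c_prime)                # units: mg m^-2 s^-1
```

A positive `flux` means the surface is a net source for the scalar, which is how the tower data support the "net source for CH4" conclusion (real processing adds coordinate rotation, despiking, and density corrections).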
Effect on Prediction when Modeling Covariates in Bayesian Nonparametric Models.
Cruz-Marcelo, Alejandro; Rosner, Gary L; Müller, Peter; Stewart, Clinton F
2013-04-01
In biomedical research, it is often of interest to characterize biologic processes giving rise to observations and to make predictions of future observations. Bayesian nonparametric methods provide a means for carrying out Bayesian inference making as few assumptions about restrictive parametric models as possible. There are several proposals in the literature for extending Bayesian nonparametric models to include dependence on covariates. Limited attention, however, has been directed to the following two aspects. In this article, we examine the effect on fitting and predictive performance of incorporating covariates in a class of Bayesian nonparametric models by one of two primary ways: either in the weights or in the locations of a discrete random probability measure. We show that different strategies for incorporating continuous covariates in Bayesian nonparametric models can result in big differences when used for prediction, even though they lead to otherwise similar posterior inferences. When one needs the predictive density, as in optimal design, and this density is a mixture, it is better to make the weights depend on the covariates. We demonstrate these points via a simulated data example and in an application in which one wants to determine the optimal dose of an anticancer drug used in pediatric oncology.
A New Test for a Normal Covariance Matrix
禹建奇
2015-01-01
The problem of testing whether a normal covariance matrix is equal to a specified matrix is considered. A new chi-square test statistic is derived for a multivariate normal population. Unlike the likelihood ratio test, the new test is exact.
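For contrast with the exact test this abstract proposes, the classical likelihood-ratio statistic for H0: Σ = Σ0 is straightforward to compute. The sketch below implements that standard asymptotic test, not the paper's statistic; Σ0, the sample size, and the seed are arbitrary choices:

```python
import numpy as np

def lrt_cov(X, Sigma0):
    """Likelihood-ratio statistic for H0: Sigma = Sigma0 (normal data).
    Asymptotically chi-square with p(p+1)/2 degrees of freedom."""
    n, p = X.shape
    S = np.cov(X, rowvar=False, bias=True)   # ML estimate of Sigma
    M = np.linalg.solve(Sigma0, S)           # Sigma0^{-1} S
    # tr(M) - log det(M) - p >= 0, with equality iff S == Sigma0
    return n * (np.trace(M) - np.log(np.linalg.det(M)) - p)

rng = np.random.default_rng(1)
Sigma0 = np.array([[1.0, 0.3],
                   [0.3, 2.0]])
X = rng.multivariate_normal([0.0, 0.0], Sigma0, size=500)
stat = lrt_cov(X, Sigma0)                    # df = 3 here under H0
```

Under H0 this statistic is only approximately chi-square in finite samples, which is the gap an exact test closes.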
Covariation of Color and Luminance Facilitate Object Individuation in Infancy
Woods, Rebecca J.; Wilcox, Teresa
2010-01-01
The ability to individuate objects is one of our most fundamental cognitive capacities. Recent research has revealed that when objects vary in color or luminance alone, infants fail to individuate those objects until 11.5 months. However, color and luminance frequently covary in the natural environment, thus providing a more salient and reliable…
Unified Approach to Universal Cloning and Phase-Covariant Cloning
Hu, Jia-Zhong; Yu, Zong-Wen; Wang, Xiang-Bin
2008-01-01
We analyze the problem of approximate quantum cloning when the quantum state lies between two latitudes on the Bloch sphere. We present an analytical formula for the optimized 1-to-2 cloning. The formula unifies universal quantum cloning (UQCM) and phase-covariant quantum cloning.
Meyer Karin
2001-11-01
A random regression model for the analysis of "repeated" records in animal breeding is described which combines a random regression approach for additive genetic and other random effects with the assumption of a parametric correlation structure for within-animal covariances. Both stationary and non-stationary correlation models involving a small number of parameters are considered. Heterogeneity in within-animal variances is modelled through polynomial variance functions. Estimation of the parameters describing the dispersion structure of such a model by restricted maximum likelihood via an "average information" algorithm is outlined. An application to mature weight records of beef cows is given, and results are contrasted with those from analyses fitting sets of random regression coefficients for permanent environmental effects.
Assessing spatial covariance among time series of abundance.
Jorgensen, Jeffrey C; Ward, Eric J; Scheuerell, Mark D; Zabel, Richard W
2016-04-01
For species of conservation concern, an essential part of the recovery planning process is identifying discrete population units and their location with respect to one another. A common feature among geographically proximate populations is that the number of organisms tends to covary through time as a consequence of similar responses to exogenous influences. In turn, high covariation among populations can threaten the persistence of the larger metapopulation. Historically, explorations of the covariance in population size of species with many (>10) time series have been computationally difficult. Here, we illustrate how dynamic factor analysis (DFA) can be used to characterize diversity among time series of population abundances and the degree to which all populations can be represented by a few common signals. Our application focuses on anadromous Chinook salmon (Oncorhynchus tshawytscha), a species listed under the US Endangered Species Act, that is impacted by a variety of natural and anthropogenic factors. Specifically, we fit DFA models to 24 time series of population abundance and used model selection to identify the minimum number of latent variables that explained the most temporal variation after accounting for the effects of environmental covariates. We found support for grouping the time series according to 5 common latent variables. The top model included two covariates: the Pacific Decadal Oscillation in spring and summer. The assignment of populations to the latent variables matched the currently established population structure at a broad spatial scale. At a finer scale, there was more population grouping complexity. Some relatively distant populations were grouped together, and some relatively close populations - considered to be more aligned with each other - were more associated with populations further away. These coarse- and fine-grained examinations of spatial structure are important because they reveal different structural patterns not evident
Residual noise covariance for Planck low-resolution data analysis
Keskitalo, R.; Ashdown, M. A. J.; Cabella, P.; Kisner, T.; Poutanen, T.; Stompor, R.; Bartlett, J. G.; Borrill, J.; Cantalupo, C.; de Gasperis, G.; de Rosa, A.; de Troia, G.; Eriksen, H. K.; Finelli, F.; Górski, K. M.; Gruppuso, A.; Hivon, E.; Jaffe, A.; Keihänen, E.; Kurki-Suonio, H.; Lawrence, C. R.; Natoli, P.; Paci, F.; Polenta, G.; Rocha, G.
2010-11-01
Aims: We develop and validate tools for estimating residual noise covariance in Planck frequency maps, we also quantify signal error effects and compare different techniques to produce low-resolution maps. Methods: We derived analytical estimates of covariance of the residual noise contained in low-resolution maps produced using a number of mapmaking approaches. We tested these analytical predictions using both Monte Carlo simulations and by applying them to angular power spectrum estimation. We used simulations to quantify the level of signal errors incurred in the different resolution downgrading schemes considered in this work. Results: We find excellent agreement between the optimal residual noise covariance matrices and Monte Carlo noise maps. For destriping mapmakers, the extent of agreement is dictated by the knee frequency of the correlated noise component and the chosen baseline offset length. Signal striping is shown to be insignificant when properly dealt with. In map resolution downgrading, we find that a carefully selected window function is required to reduce aliasing to the subpercent level at multipoles, ℓ > 2Nside, where Nside is the HEALPix resolution parameter. We show that, for a polarization measurement, reliable characterization of the residual noise is required to draw reliable constraints on large-scale anisotropy. Conclusions: Methods presented and tested in this paper allow for production of low-resolution maps with both controlled sky signal error level and a reliable estimate of covariance of the residual noise. We have also presented a method for smoothing the residual noise covariance matrices to describe the noise correlations in smoothed, bandwidth-limited maps.
Simple Penalties on Maximum-Likelihood Estimates of Genetic Parameters to Reduce Sampling Variation.
Meyer, Karin
2016-08-01
Multivariate estimates of genetic parameters are subject to substantial sampling variation, especially for smaller data sets and more than a few traits. A simple modification of standard, maximum-likelihood procedures for multivariate analyses to estimate genetic covariances is described, which can improve estimates by substantially reducing their sampling variances. This is achieved by maximizing the likelihood subject to a penalty. Borrowing from Bayesian principles, we propose a mild default penalty, derived by assuming a Beta distribution of scale-free functions of the covariance components to be estimated, rather than laboriously attempting to determine the stringency of penalization from the data. An extensive simulation study is presented, demonstrating that such penalties can yield very worthwhile reductions in loss, i.e., the difference from population values, for a wide range of scenarios and without distorting estimates of phenotypic covariances. Moreover, mild default penalties tend not to increase loss in difficult cases and, on average, achieve reductions in loss of similar magnitude to computationally demanding schemes to optimize the degree of penalization. Pertinent details required for the adaptation of standard algorithms to locate the maximum of the likelihood function are outlined.
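The effect of penalizing a noisy covariance estimate can be illustrated with simple linear shrinkage toward a diagonal target. This is a crude stand-in for the Beta-distribution penalty described above: the target, the shrinkage weight, and the equicorrelated "true" matrix are all invented for the sketch.

```python
import numpy as np

rng = np.random.default_rng(11)
p, n = 10, 15                                 # many traits, few records
Sigma = 0.5 * np.eye(p) + 0.5                 # true covariance: equicorrelated
X = rng.multivariate_normal(np.zeros(p), Sigma, size=n)
S = np.cov(X, rowvar=False)                   # noisy unpenalized estimate

lam = 0.3                                     # penalty strength (arbitrary here)
target = np.diag(np.diag(S))                  # shrink toward the diagonal
S_pen = (1 - lam) * S + lam * target          # penalized estimate

loss = np.linalg.norm(S - Sigma)              # distance from the truth
loss_pen = np.linalg.norm(S_pen - Sigma)      # often, though not always, smaller
```

Shrinkage trades a little bias on the off-diagonals for a large reduction in sampling variance, which is the loss reduction the simulation study quantifies.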
Some asymptotic properties of kriging when the covariance function is misspecified
Stein, M.L.; Handcock, M.S.
1989-02-01
The impact of using an incorrect covariance function of kriging predictors is investigated. Results of Stein (1988) show that the impact on the kriging predictor from not using the correct covariance function is asymptotically negligible as the number of observations increases if the covariance function used is compatible with the actual covariance function on the region of interest R. The definition and some properties of compatibility of covariance functions are given. The compatibility of generalized covariances also is defined. Compatibility supports the intuitively sensible concept that usually only the behavior near the origin of the covariance function is critical for purposes of kriging. However, the commonly used spherical covariance function is an exception: observations at a distance near the range of a spherical covariance function can have a nonnegligible effect on kriging predictors for three-dimensional processes. Finally, a comparison is made with the perturbation approach of Diamond and Armstrong (1984) and some observations of Warnes (1986) are clarified.
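A minimal simple-kriging predictor makes the abstract's point concrete: the weights come entirely from the assumed covariance function, so misspecifying it changes the predictor. The exponential model, observation sites, and values below are arbitrary illustrative choices:

```python
import numpy as np

def simple_kriging(x_obs, y_obs, x_new, cov):
    """Simple-kriging (zero-mean) predictor: weights w = C^{-1} c."""
    C = cov(np.abs(x_obs[:, None] - x_obs[None, :]))  # covariances among obs
    c = cov(np.abs(x_obs - x_new))                    # covariances obs-to-target
    w = np.linalg.solve(C, c)                         # kriging weights
    return w @ y_obs

def cov_exp(h, range_=0.5):
    # Exponential covariance; its behavior near the origin drives the predictor
    return np.exp(-h / range_)

x_obs = np.array([0.0, 1.0, 2.0, 3.0])
y_obs = np.array([0.5, 1.0, 0.2, -0.4])
pred_at_obs = simple_kriging(x_obs, y_obs, 1.0, cov_exp)   # at a data site
pred_mid = simple_kriging(x_obs, y_obs, 1.5, cov_exp)      # between data sites
```

Two covariance functions that are compatible in the paper's sense give asymptotically indistinguishable predictors; swapping in a spherical model here would illustrate the exception the abstract flags.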
Davies, Christopher E; Glonek, Gary Fv; Giles, Lynne C
2017-08-01
One purpose of a longitudinal study is to gain a better understanding of how an outcome of interest changes among a given population over time. In what follows, a trajectory will be taken to mean the series of measurements of the outcome variable for an individual. Group-based trajectory modelling methods seek to identify subgroups of trajectories within a population, such that trajectories that are grouped together are more similar to each other than to trajectories in distinct groups. Group-based trajectory models generally assume a certain structure in the covariances between measurements, for example conditional independence, homogeneous variance between groups or stationary variance over time. Violations of these assumptions could be expected to result in poor model performance. We used simulation to investigate the effect of covariance misspecification on misclassification of trajectories in commonly used models under a range of scenarios. To do this we defined a measure of performance relative to the ideal Bayesian correct classification rate. We found that the more complex models generally performed better over a range of scenarios. In particular, incorrectly specified covariance matrices could significantly bias the results, but using models with a correct, albeit more complicated than necessary, covariance matrix incurred little cost.
Receiver function estimated by maximum entropy deconvolution
吴庆举; 田小波; 张乃铃; 李卫平; 曾融生
2003-01-01
Maximum entropy deconvolution is presented as a method to estimate the receiver function, with maximum entropy as the rule for determining the auto-correlation and cross-correlation functions. The Toeplitz equation and the Levinson algorithm are used to calculate the iterative formula of the error-predicting filter, from which the receiver function is then estimated. During extrapolation, the reflection coefficient is always less than 1, which keeps maximum entropy deconvolution stable. The maximum entropy of the data outside the window increases the resolution of the receiver function. Both synthetic and real seismograms show that maximum entropy deconvolution is an effective method for measuring receiver functions in the time domain.
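As a rough illustration of the recursion this abstract refers to (not the authors' implementation), the Levinson-Durbin algorithm solves the Toeplitz normal equations for a prediction-error filter; the reflection coefficients it produces staying below 1 in magnitude is the stability property mentioned above:

```python
def levinson_durbin(r, order):
    """Solve the Toeplitz normal equations by Levinson-Durbin recursion.

    r: autocorrelation sequence r[0..order].
    Returns (prediction-error filter coefficients a, reflection
    coefficients k, final prediction-error power).
    """
    a = [1.0] + [0.0] * order
    err = r[0]
    ks = []
    for m in range(1, order + 1):
        # inner product of current filter with shifted autocorrelations
        acc = sum(a[j] * r[m - j] for j in range(m))
        k = -acc / err                  # reflection coefficient, |k| < 1 for stability
        ks.append(k)
        a_new = a[:]
        for j in range(1, m):           # symmetric coefficient update
            a_new[j] = a[j] + k * a[m - j]
        a_new[m] = k
        a = a_new
        err *= (1.0 - k * k)            # prediction error shrinks each step
    return a, ks, err
```

For an AR(1)-like autocorrelation r = (1, 0.5, 0.25), the recursion recovers a single nonzero coefficient, as expected.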
Meyer, Karin
2007-11-01
WOMBAT is a software package for quantitative genetic analyses of continuous traits, fitting a linear, mixed model; estimates of covariance components and the resulting genetic parameters are obtained by restricted maximum likelihood. A wide range of models, comprising numerous traits, multiple fixed and random effects, selected genetic covariance structures, random regression models and reduced rank estimation are accommodated. WOMBAT employs up-to-date numerical and computational methods. Together with the use of efficient compilers, this generates fast executable programs, suitable for large scale analyses. Use of WOMBAT is illustrated for a bivariate analysis. The package consists of the executable program, available for LINUX and WINDOWS environments, a manual and a set of worked examples, and can be downloaded free of charge from http://agbu.une.edu.au/~kmeyer/wombat.html.
Emura, Takeshi; Konno, Yoshihiko; Michimae, Hirofumi
2015-07-01
Doubly truncated data consist of samples whose observed values fall between the right- and left-truncation limits. With such samples, the distribution function of interest is estimated using the nonparametric maximum likelihood estimator (NPMLE) that is obtained through a self-consistency algorithm. Owing to the complicated asymptotic distribution of the NPMLE, the bootstrap method has been suggested for statistical inference. This paper proposes a closed-form estimator for the asymptotic covariance function of the NPMLE, which is a computationally attractive alternative to bootstrapping. Furthermore, we develop various statistical inference procedures, such as confidence intervals, goodness-of-fit tests, and confidence bands, to demonstrate the usefulness of the proposed covariance estimator. Simulations are performed to compare the proposed method with both the bootstrap and jackknife methods. The methods are illustrated using the childhood cancer dataset.
Construction and use of gene expression covariation matrix
Bellis Michel
2009-07-01
Background: One essential step in the massive analysis of transcriptomic profiles is the calculation of the correlation coefficient, a value used to select pairs of genes with similar or inverse transcriptional profiles across a large fraction of the biological conditions examined. Until now, the choice between the two available methods for calculating the coefficient has been dictated mainly by technological considerations. Specifically, in analyses based on double-channel techniques, researchers have been required to use covariation correlation, i.e. the correlation between gene expression changes measured between several pairs of biological conditions, expressed for example as fold-change. In contrast, in analyses of single-channel techniques scientists have been restricted to the use of coexpression correlation, i.e. correlation between gene expression levels. To our knowledge, nobody has ever examined the possible benefits of using covariation instead of coexpression in massive analyses of single-channel microarray results. Results: We describe here how single-channel techniques can be treated like double-channel techniques and used to generate both gene expression changes and covariation measures. We also present a new method that allows the calculation of both positive and negative correlation coefficients between genes. First, we perform systematic comparisons between two given biological conditions and classify, for each comparison, genes as increased (I), decreased (D), or not changed (N). As a result, the original series of n gene expression level measures assigned to each gene is replaced by an ordered string of n(n-1)/2 symbols, e.g. IDDNNIDID....DNNNNNNID, with the length of the string corresponding to the number of comparisons. In a second step, positive and negative covariation matrices (CVM) are constructed by calculating statistically significant positive or negative correlation scores for any pair of genes by comparing their
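The comparison-string construction described above can be sketched as follows; this is a simplified illustration in which the change threshold and the scoring rule are placeholders, not the authors' statistical test:

```python
import itertools

def idn_string(levels, threshold=0.5):
    """Replace a series of expression levels by an ordered string of
    I/D/N symbols, one symbol per pairwise comparison of biological
    conditions (string length n(n-1)/2). The threshold is a hypothetical
    significance cutoff standing in for a proper statistical call."""
    symbols = []
    for i, j in itertools.combinations(range(len(levels)), 2):
        diff = levels[j] - levels[i]    # change from condition i to condition j
        if diff > threshold:
            symbols.append('I')         # increased
        elif diff < -threshold:
            symbols.append('D')         # decreased
        else:
            symbols.append('N')         # not changed
    return ''.join(symbols)

def covariation_score(s1, s2):
    """Crude correlation score between two I/D/N strings: fraction of
    comparisons where both genes change in the same direction minus the
    fraction where they change in opposite directions."""
    same = sum(a == b != 'N' for a, b in zip(s1, s2))
    opposite = sum({a, b} == {'I', 'D'} for a, b in zip(s1, s2))
    return (same - opposite) / len(s1)
```

Two genes with parallel profiles score +1, two with mirrored profiles score -1, under this toy rule.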
Maximum Power from a Solar Panel
Michael Miller
2010-01-01
Solar energy has become a promising alternative to conventional fossil fuel sources. Solar panels are used to collect solar radiation and convert it into electricity. One of the techniques used to maximize the effectiveness of this energy alternative is to maximize the power output of the solar collector. In this project the maximum power is calculated by determining the voltage and the current of maximum power. These quantities are determined by finding the maximum value of the equation for power using differentiation. After the maximum values are found for each time of day, the voltage of maximum power, the current of maximum power, and the maximum power are each plotted as a function of the time of day.
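The differentiation step can be illustrated with a toy single-diode panel model; all parameter values here are hypothetical and not taken from the project:

```python
import math

# Hypothetical panel parameters: short-circuit current (A),
# diode saturation current (A), and an effective thermal voltage (V).
ISC, I0, VT = 3.0, 1e-8, 0.5

def current(v):
    # Assumed single-diode I-V characteristic: I(V) = Isc - I0*(exp(V/Vt) - 1)
    return ISC - I0 * (math.exp(v / VT) - 1.0)

def dP_dV(v):
    # P = V * I(V)  =>  dP/dV = I(V) + V * dI/dV
    return current(v) + v * (-I0 / VT * math.exp(v / VT))

def max_power_point(lo=0.0, hi=12.0, tol=1e-9):
    """Bisection on dP/dV = 0; P is concave between 0 and the open-circuit
    voltage, so the root is the maximum-power voltage."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if dP_dV(mid) > 0:
            lo = mid
        else:
            hi = mid
    v = 0.5 * (lo + hi)
    return v, current(v), v * current(v)
```

Evaluating `max_power_point()` for each time of day (by varying `ISC` with irradiance) would reproduce the kind of daily curves the abstract describes.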
Multigroup covariance matrices for fast-reactor studies
Smith, J.D. III; Broadhead, B.L.
1981-04-01
This report presents the multigroup covariance matrices based on the ENDF/B-V nuclear data evaluations. The materials and reactions have been chosen according to the specifications of ORNL-5517. Several cross-section covariances other than those specified by that report are included, due to the derived nature of the uncertainty files in ENDF/B-V. The materials represented are Ni, Cr, ¹⁶O, ¹²C, Fe, Na, ²³⁵U, ²³⁸U, ²³⁹Pu, ²⁴⁰Pu, ²⁴¹Pu, and ¹⁰B (present due to its correlation to ²³⁸U). The data were originally processed into a 52-group energy structure by PUFF-II and subsequently collapsed to smaller subgroup structures. The results are illustrated in 52-group correlation matrix plots and tabulated into thirteen groups for convenience.
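The group-collapse step can be sketched generically; this is an illustrative stand-in for the PUFF-II processing, using a hypothetical weighted aggregation matrix:

```python
import numpy as np

def collapse_covariance(cov_fine, group_map, weights):
    """Collapse a fine-group covariance matrix to a coarser structure.

    cov_fine: (n, n) fine-group covariance matrix.
    group_map: group_map[i] = index of the coarse group containing fine group i.
    weights: fine-group weights (e.g. a flux spectrum), normalized here
             within each coarse group.

    The coarse covariance is S @ cov_fine @ S.T, where S holds the
    normalized weights (one row per coarse group).
    """
    n = cov_fine.shape[0]
    n_coarse = max(group_map) + 1
    S = np.zeros((n_coarse, n))
    for i, g in enumerate(group_map):
        S[g, i] = weights[i]
    S /= S.sum(axis=1, keepdims=True)   # normalize weights per coarse group
    return S @ cov_fine @ S.T
```

Collapsing an uncorrelated (identity) fine-group matrix with equal weights halves the variances, as the averaging would suggest.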
Transformation rule for covariance matrices under Bell-like detections
Spedalieri, Gaetana; Pirandola, Stefano
2013-01-01
Starting from the transformation rule of a covariance matrix under homodyne detections, we can easily derive a formula for the transformation of a covariance matrix of (n+2) bosonic modes under Bell-like detections, where the last two modes are combined in an arbitrary beam splitter (i.e., with arbitrary transmissivity) and then homodyned. This formula can be specialized to describe the standard Bell detection and the heterodyne measurement, which are exploited in many contexts, including protocols of quantum teleportation, entanglement swapping and quantum cryptography. Our general formula can be adopted to study these protocols in the presence of experimental imperfections or asymmetric setups, e.g., deriving from the use of unbalanced beam splitters.
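For a single homodyned mode, the standard Gaussian-state update that this formula generalizes can be sketched as follows; this is the textbook special case, not the paper's full Bell-detection rule:

```python
import numpy as np

def homodyne_update(V, n_keep):
    """Covariance matrix of the kept modes after homodyning the x
    quadrature of the last mode of a Gaussian state.

    V: covariance matrix of n_keep + 1 modes, (x, p) ordering with two
    rows per mode. Partitioning V = [[A, C], [C^T, B]], the conditional
    covariance is A - C (Pi B Pi)^+ C^T, with Pi projecting onto the
    measured quadrature and ^+ the Moore-Penrose pseudoinverse.
    """
    k = 2 * n_keep
    A, C, B = V[:k, :k], V[:k, k:], V[k:, k:]
    Pi = np.diag([1.0, 0.0])            # projector onto the measured x quadrature
    return A - C @ np.linalg.pinv(Pi @ B @ Pi) @ C.T
```

With no correlations between the kept and measured modes, the update leaves the kept covariance unchanged; with x-x correlations, it reduces only the x variance.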
Noise Covariance Properties in Dual-Tree Wavelet Decompositions
Chaux, Caroline; Duval, Laurent; 10.1109/TIT.2007.909104
2011-01-01
Dual-tree wavelet decompositions have recently gained much popularity, mainly due to their ability to provide an accurate directional analysis of images combined with a reduced redundancy. When the decomposition of a random process is performed -- which occurs in particular when an additive noise is corrupting the signal to be analyzed -- it is useful to characterize the statistical properties of the dual-tree wavelet coefficients of this process. As dual-tree decompositions constitute overcomplete frame expansions, correlation structures are introduced among the coefficients, even when a white noise is analyzed. In this paper, we show that it is possible to provide an accurate description of the covariance properties of the dual-tree coefficients of a wide-sense stationary process. The expressions of the (cross-)covariance sequences of the coefficients are derived in the one- and two-dimensional cases. Asymptotic results are also provided, allowing one to predict the behaviour of the second-order moments for larg...
Model Order Selection Rules for Covariance Structure Classification in Radar
Carotenuto, Vincenzo; De Maio, Antonio; Orlando, Danilo; Stoica, Petre
2017-10-01
The adaptive classification of the interference covariance matrix structure for radar signal processing applications is addressed in this paper. This represents a key issue because many detection architectures are synthesized assuming a specific covariance structure which may not necessarily coincide with the actual one due to the joint action of the system and environment uncertainties. The considered classification problem is cast in terms of a multiple hypotheses test with some nested alternatives and the theory of Model Order Selection (MOS) is exploited to devise suitable decision rules. Several MOS techniques, such as the Akaike, Takeuchi, and Bayesian information criteria are adopted and the corresponding merits and drawbacks are discussed. At the analysis stage, illustrating examples for the probability of correct model selection are presented showing the effectiveness of the proposed rules.
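The model-order-selection step can be illustrated generically; the log-likelihood values below are toy numbers, and the radar-specific statistics are in the paper:

```python
import math

def aic(loglik, k):
    # Akaike information criterion: complexity penalty of 2 per parameter
    return -2.0 * loglik + 2.0 * k

def bic(loglik, k, n):
    # Bayesian information criterion: penalty grows with the sample size n
    return -2.0 * loglik + k * math.log(n)

def select_model(candidates, n, criterion="bic"):
    """candidates: dict name -> (maximized log-likelihood, n_free_params).
    Returns the candidate name with the smallest criterion value."""
    if criterion == "bic":
        score = lambda ll, k: bic(ll, k, n)
    else:
        score = lambda ll, k: aic(ll, k)
    return min(candidates, key=lambda m: score(*candidates[m]))
```

As the abstract notes for the nested covariance structures, the criteria can disagree: with few samples, BIC's heavier penalty tends to favor the simpler structure where AIC picks the richer one.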
Global recoverable reserve estimation by covariance matching constrained kriging
Tercan, A.E. [Hacettepe University, Ankara (Turkey). Dept. of Mining Engineering
2004-10-01
A central problem in mining practice is the estimation of global recoverable reserves, i.e., recovered tonnage and mean quality varying with cut-off value over the whole deposit. This article describes the application of covariance matching constrained kriging to the estimation of the global recoverable reserves in a lignite deposit in Turkey. Thickness and calorific value are the variables used in this study. The deposit is divided into 180 panels of 200 m x 200 m, and the mean calorific value of the panels is estimated by covariance matching constrained kriging. A quality tonnage curve is constructed based on the estimated mean values. For comparison, the quality tonnage curve from ordinary kriging is also provided.
The Shape of Covariantly Smeared Sources in Lattice QCD
von Hippel, Georg M; Rae, Thomas D; Wittig, Hartmut
2013-01-01
Covariantly smeared sources are commonly used in lattice QCD to enhance the projection onto the ground state. Here we investigate the dependence of their shape on the gauge field background and find that the presence of localized concentrations of magnetic field can lead to strong distortions which reduce the smearing radii achievable by iterative smearing prescriptions. In particular, as $a \to 0$, iterative procedures like Jacobi smearing require increasingly large iteration counts in order to reach physically-sized smearing radii $r_{\mathrm{sm}} \sim 0.5$ fm, and the resulting sources are strongly distorted. To bypass this issue, we propose a covariant smearing procedure ("free-form smearing") that allows us to create arbitrarily shaped sources, including in particular Gaussians of arbitrary radius.
Covariance in models of loop quantum gravity: Spherical symmetry
Bojowald, Martin; Reyes, Juan D
2015-01-01
Spherically symmetric models of loop quantum gravity have been studied recently by different methods that aim to deal with structure functions in the usual constraint algebra of gravitational systems. As noticed by Gambini and Pullin, a linear redefinition of the constraints (with phase-space dependent coefficients) can be used to eliminate structure functions, even Abelianizing the more difficult part of the constraint algebra. The Abelianized constraints can then easily be quantized or modified by putative quantum effects. As pointed out here, however, the method does not automatically provide a covariant quantization, defined as an anomaly-free quantum theory with a classical limit in which the usual (off-shell) gauge structure of hypersurface deformations in space-time appears. The holonomy-modified vacuum theory based on Abelianization is covariant in this sense, but matter theories with local degrees of freedom are not. Detailed demonstrations of these statements show complete agreement with results of ...
Quantum corrections for the cubic Galileon in the covariant language
Saltas, Ippocratis D.; Vitagliano, Vincenzo
2017-05-01
We present for the first time an explicit exposition of quantum corrections within the cubic Galileon theory, including the effect of quantum gravity, in a background- and gauge-invariant manner, employing the field-reparametrisation approach of the covariant effective action at 1-loop. We show that the consideration of gravitational effects in combination with the non-linear derivative structure of the theory reveals new interactions at the perturbative level, which manifest themselves as higher-order operators in the associated effective action, whose relevance is controlled by appropriate ratios of the cosmological vacuum and the Galileon mass scale. The significance and concept of the covariant approach in this context are discussed, and all calculations are explicitly presented.
Estimating surface fluxes using eddy covariance and numerical ogive optimization
Sievers, J.; Papakyriakou, T.; Larsen, Søren Ejling
2015-01-01
Estimating representative surface fluxes using eddy covariance leads invariably to questions concerning inclusion or exclusion of low-frequency flux contributions. For studies where fluxes are linked to local physical parameters and up-scaled through numerical modelling efforts, low-frequency contributions interfere with our ability to isolate the local biogeochemical processes of interest, as represented by turbulent fluxes. No method currently exists to disentangle low-frequency contributions on flux estimates. Here, we present a novel comprehensive numerical scheme to identify and separate out low-frequency contributions...
Batalin-Vilkovisky formalism in locally covariant field theory
Rejzner, Katarzyna
2011-01-01
The present work contains a complete formulation of the Batalin-Vilkovisky (BV) formalism in the framework of locally covariant field theory. In the first part of the thesis the classical theory is investigated with a particular focus on the infinite dimensional character of the underlying structures. It is shown that the use of infinite dimensional differential geometry allows for a conceptually clear and elegant formulation. The construction of the BV complex is performed in a fully covariant way and we also generalize the BV framework to a more abstract level, using functors and natural transformations. In this setting we construct the BV complex for classical gravity. This allows us to give a homological interpretation to the notion of diffeomorphism invariant physical quantities in general relativity. The second part of the thesis concerns the quantum theory. We provide a framework for the BV quantization that doesn't rely on the path integral formalism, but is completely formulated within perturbative a...
Deformed Covariant Quantum Phase Spaces as Hopf Algebroids
Lukierski, Jerzy
2015-01-01
We consider the general D=4 (10+10)-dimensional kappa-deformed quantum phase space given by the Heisenberg double \mathcal{H} of the D=4 kappa-deformed Poincare-Hopf algebra H. The standard (4+4)-dimensional kappa-deformed covariant quantum phase space, spanned by the kappa-deformed Minkowski coordinates and commuting momenta generators (x_mu, p_mu), is obtained as a subalgebra of \mathcal{H}. We further study the property that the Heisenberg double defines particular quantum spaces with a Hopf algebroid structure. Using purely algebraic methods, we calculate the explicit Hopf algebroid structure of the standard kappa-deformed covariant quantum phase space in the Majid-Ruegg bicrossproduct basis. The coproducts for Hopf algebroids are not unique, being determined modulo the coproduct gauge freedom. Finally, we consider the interpretation of the algebraic description of quantum phase spaces as Hopf bialgebroids.
The Photon Wavefunction: a covariant formulation and equivalence with QED
Tamburini, Fabrizio; Vicino, Denise
2008-01-01
We discuss the limits of the photon wavefunction (PWF) formalism, which is experiencing a revival these days owing to new practical applications in photonics and quantum optics. We build a Dirac-like equation for the PWF written in a manifestly covariant form and show that, in the presence of charged matter fields, it reproduces the standard formulation of (classical) Electrodynamics. This shows the inconsistency of the attempts to construct a quantum theory of interacting photons, based on th...
Threshold regression for survival data with time-varying covariates.
Lee, Mei-Ling Ting; Whitmore, G A; Rosner, Bernard A
2010-03-30
Time-to-event data with time-varying covariates pose an interesting challenge for statistical modeling and inference, especially where the data require a regression structure but are not consistent with the proportional hazard assumption. Threshold regression (TR) is a relatively new methodology based on the concept that degradation or deterioration of a subject's health follows a stochastic process and failure occurs when the process first reaches a failure state or threshold (a first-hitting-time). Survival data with time-varying covariates consist of sequential observations on the level of degradation and/or on covariates of the subject, prior to the occurrence of the failure event. Encounters with this type of data structure abound in practical settings for survival analysis and there is a pressing need for simple regression methods to handle the longitudinal aspect of the data. Using a Markov property to decompose a longitudinal record into a series of single records is one strategy for dealing with this type of data. This study looks at the theoretical conditions for which this Markov approach is valid. The approach is called threshold regression with Markov decomposition or Markov TR for short. A number of important special cases, such as data with unevenly spaced time points and competing risks as stopping modes, are discussed. We show that a proportional hazards regression model with time-varying covariates is consistent with the Markov TR model. The Markov TR procedure is illustrated by a case application to a study of lung cancer risk. The procedure is also shown to be consistent with the use of an alternative time scale. Finally, we present the connection of the procedure to the concept of a collapsible survival model.
Testing the Equality of Covariance Operators in Functional Samples
Fremdt, Stefan; Kokoszka, Piotr; Steinebach, Josef G
2011-01-01
We propose a robust test for the equality of the covariance structures in two functional samples. The test statistic has a chi-square asymptotic distribution with a known number of degrees of freedom, which depends on the level of dimension reduction needed to represent the data. Detailed analysis of the asymptotic properties is developed. Finite sample performance is examined by a simulation study and an application to egg-laying curves of fruit flies.
Spin Structure Functions in a Covariant Spectator Quark Model
Ramalho, G.; Gross, Franz; Peña, M. T.
2010-12-01
We apply the covariant spectator quark–diquark model, already probed in the description of the nucleon elastic form factors, to the calculation of the deep inelastic scattering (DIS) spin-independent and spin-dependent structure functions of the nucleon. The nucleon wave function is given by a combination of quark–diquark orbital states, corresponding to S, D and P-waves. A simple form for the quark distribution function associated to the P and D waves is tested.
Treatment of Nuclear Data Covariance Information in Sample Generation.
Swiler, Laura Painton; Adams, Brian M.; Wieselquist, William (ORNL)
2017-10-01
This report summarizes a NEAMS (Nuclear Energy Advanced Modeling and Simulation) project focused on developing a sampling capability that can handle the challenges of generating samples from nuclear cross-section data. The covariance information between energy groups tends to be very ill-conditioned and thus poses a problem for traditional methods of generating correlated samples. This report outlines a method that addresses sample generation from cross-section matrices.
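One common workaround for ill-conditioned covariance matrices, clipping negative eigenvalues before factorization, can be sketched as follows; this is an illustrative technique, not necessarily the method the report develops:

```python
import numpy as np

def sample_correlated(cov, n_samples, rng=None, eps=1e-10):
    """Draw correlated Gaussian samples from a possibly ill-conditioned
    covariance matrix by clipping tiny or negative eigenvalues (which can
    arise from round-off or inconsistent evaluations) before factoring."""
    rng = np.random.default_rng(rng)
    sym = 0.5 * (cov + cov.T)                 # enforce exact symmetry
    vals, vecs = np.linalg.eigh(sym)
    vals = np.clip(vals, eps, None)           # repair non-positive eigenvalues
    L = vecs * np.sqrt(vals)                  # factor so that sym ≈ L @ L.T
    z = rng.standard_normal((n_samples, cov.shape[0]))
    return z @ L.T                            # samples with covariance ≈ sym
```

The empirical covariance of the samples reproduces the (repaired) target covariance up to Monte Carlo noise.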
Hydrodynamic Covariant Symplectic Structure from Bilinear Hamiltonian Functions
Capozziello S.
2005-07-01
Starting from generic bilinear Hamiltonians, constructed from covariant vector, bivector or tensor fields, it is possible to derive a general symplectic structure which leads to holonomic and anholonomic formulations of the Hamilton equations of motion directly related to a hydrodynamic picture. This feature is gauge-free and seems to be a deep link common to all interactions, electromagnetism and gravity included. This scheme could lead toward a full canonical quantization.
Covariant GNS Representation for C*-Dynamical Systems
Pandiscia, Carlo
2012-01-01
We extend the covariant GNS representation of Niculescu, Ströh and Zsidó for C*-dynamical systems, with the time-evolution of the system (dynamics) a homomorphism of C*-algebras, to any dynamical systems where the dynamics is a unital completely positive map. We also give an overview of its application to reversible dilation theory as formulated by B. Kümmerer.
Do Time-Varying Covariances, Volatility Comovement and Spillover Matter?
Lakshmi Balasubramanyan
2005-01-01
Financial markets and their respective assets are so intertwined that analyzing any single market in isolation ignores important information. We investigate whether time-varying volatility comovement and spillover impact the true variance-covariance matrix under a time-varying correlation setup. Statistically significant volatility spillover and comovement between the US, UK and Japan is found. To demonstrate the importance of modelling volatility comovement and spillover, we look at a simple portfo...
Flavour covariant transport equations: An application to resonant leptogenesis
P.S. Bhupal Dev
2014-09-01
We present a fully flavour-covariant formalism for transport phenomena, by deriving Markovian master equations that describe the time-evolution of particle number densities in a statistical ensemble with arbitrary flavour content. As an application of this general formalism, we study flavour effects in a scenario of resonant leptogenesis (RL) and obtain the flavour-covariant evolution equations for heavy-neutrino and lepton number densities. This provides a complete and unified description of RL, capturing three distinct physical phenomena: (i) the resonant mixing between the heavy-neutrino states, (ii) coherent oscillations between different heavy-neutrino flavours, and (iii) quantum decoherence effects in the charged-lepton sector. To illustrate the importance of this formalism, we numerically solve the flavour-covariant rate equations for a minimal RL model and show that the total lepton asymmetry can be enhanced by up to one order of magnitude, as compared to that obtained from flavour-diagonal or partially flavour off-diagonal rate equations. Thus, the viable RL model parameter space is enlarged, thereby further enhancing the prospects of probing a common origin of neutrino masses and the baryon asymmetry in the Universe at the LHC, as well as in low-energy experiments searching for lepton flavour and number violation. The key new ingredients in our flavour-covariant formalism are rank-4 rate tensors, which are required for the consistency of our flavour-mixing treatment, as shown by an explicit calculation of the relevant transition amplitudes by generalizing the optical theorem. We also provide a geometric and physical interpretation of the heavy-neutrino degeneracy limits in the minimal RL scenario. Finally, we comment on the consistency of various suggested forms for the heavy-neutrino self-energy regulator in the lepton-number conserving limit.
Covariant Quantization of the Brink-Schwarz Superparticle
Grassi, P A; Porrati, Massimo
2001-01-01
The quantization of the Brink-Schwarz-Casalbuoni superparticle is performed in an explicitly covariant way using the antibracket formalism. Since an infinite number of ghost fields are required, within a suitable off-shell twistor-like formalism, we are able to fix the gauge of each ghost sector without modifying the physical content of the theory. The computation reveals that the antibracket cohomology contains only the physical degrees of freedom.
Collective Flow of A Hyperons within Covariant Kaon Dynamics
XING Yong-Zhong; ZHU Yu-Lan; WANG Yan-Yan; ZHENG Yu-Ming
2011-01-01
The collective flow of Λ hyperons produced in association with positively charged kaon mesons in nuclear reactions at SIS energies is studied using the quantum molecular dynamics (QMD) model within covariant kaon dynamics. Our calculation indicates that both the directed and differential directed flows of Λs are almost in agreement with the experimental data. This suggests that the covariant kaon dynamics based on the chiral mean field approximation can not only explain the collective flow of kaon mesons, but also give reasonable results for the collective flow of Λ hyperons at SIS energies. The final-state interaction of Λ hyperons with dense nuclear matter enhances their directed flow and improves the agreement of their differential directed flow with the experimental data. The influence of the interaction on the Λ collective flow is more appreciable in the large rapidity or transverse momentum region.
Covariance in models of loop quantum gravity: Gowdy systems
Bojowald, Martin
2015-01-01
Recent results in the construction of anomaly-free models of loop quantum gravity have shown obstacles when local physical degrees of freedom are present. Here, a set of no-go properties is derived in polarized Gowdy models, raising the question whether these systems can be covariant beyond a background treatment. As a side product, it is shown that normal deformations in classical polarized Gowdy models can be Abelianized.
Dunkl Operators as Covariant Derivatives in a Quantum Principal Bundle
Micho Đurđevich; Stephen Bruce Sontz
2011-01-01
A quantum principal bundle is constructed for every Coxeter group acting on a finite-dimensional Euclidean space $E$, and then a connection is also defined on this bundle. The covariant derivatives associated to this connection are the Dunkl operators, originally introduced as part of a program to generalize harmonic analysis in Euclidean spaces. This gives us a new, geometric way of viewing the Dunkl operators. In particular, we present a new proof of the commutativity of these operators amo...
Lillian Pascoa
2013-06-01
We used actual and adjusted weights at 120 d and 210 d of age of 72,731 male and female Nellore calves born in 40 PMGRN - Nellore Brazil herds from 1985 to 2005, aiming to compare the effect of different definitions of contemporary groups on estimates of (co)variance and genetic parameters. Four models, each one with a different structure of contemporary group (CG), were compared using the Akaike Information Criterion (AIC), the Bayesian Information Criterion (BIC), and the Consistent Akaike Information Criterion (CAIC). (Co)variance estimates were obtained using a derivative-free restricted maximum likelihood procedure. Estimates of (co)variances and genetic parameters were similar for the four models considered. However, the BIC and CAIC indicated that the most appropriate model for this Nellore population was the one that considered CG to be random, and sex of calf to be fixed and separate from CG, in which CG was defined as the group of calves born in the same herd, year, and season of birth (trimester), and that underwent the same management.
Bilinear covariants and spinor fields duality in quantum Clifford algebras
Abłamowicz, Rafał, E-mail: rablamowicz@tntech.edu [Department of Mathematics, Box 5054, Tennessee Technological University, Cookeville, Tennessee 38505 (United States); Gonçalves, Icaro, E-mail: icaro.goncalves@ufabc.edu.br [Instituto de Matemática e Estatística, Universidade de São Paulo, Rua do Matão, 1010, 05508-090, São Paulo, SP (Brazil); Centro de Matemática, Computação e Cognição, Universidade Federal do ABC, 09210-170 Santo André, SP (Brazil); Rocha, Roldão da, E-mail: roldao.rocha@ufabc.edu.br [Centro de Matemática, Computação e Cognição, Universidade Federal do ABC, 09210-170 Santo André, SP (Brazil); International School for Advanced Studies (SISSA), Via Bonomea 265, 34136 Trieste (Italy)
2014-10-15
Classification of quantum spinor fields according to quantum bilinear covariants is introduced in a context of quantum Clifford algebras on Minkowski spacetime. Once the bilinear covariants are expressed in terms of algebraic spinor fields, the duality between spinor and quantum spinor fields can be discussed. Thus, by endowing the underlying spacetime with an arbitrary bilinear form with an antisymmetric part in addition to a symmetric spacetime metric, quantum algebraic spinor fields and deformed bilinear covariants can be constructed. They are thus compared to the classical (non-quantum) ones. Classes of quantum spinor fields are introduced and compared with Lounesto's spinor field classification. A physical interpretation of the deformed parts and the underlying Z-grading is proposed. The existence of an arbitrary bilinear form endowing the spacetime has already been explored in the literature in the context of quantum gravity [S. W. Hawking, “The unpredictability of quantum gravity,” Commun. Math. Phys. 87, 395 (1982)]. Here, it is shown further to play a prominent role in the structure of Dirac, Weyl, and Majorana spinor fields, besides the most general flagpoles and flag-dipoles. We introduce a new duality between the standard and the quantum spinor fields, by showing that when Clifford algebras over vector spaces endowed with an arbitrary bilinear form are taken into account, a mixture among the classes does occur. Consequently, novel features regarding the spinor fields can be derived.
Spatial implications of covariate adjustment on patterns of risk
Sabel, Clive Eric; Wilson, Jeff Gaines; Kingham, Simon
2007-01-01
Epidemiological studies that examine the relationship between environmental exposures and health often address other determinants of health that may influence the relationship being studied by adjusting for these factors as covariates. While disease surveillance methods routinely control for covariates ...), then for a deprivation index, and finally for both PM10 and deprivation. Spatial patterns of risk, disease clusters and cold and hot spots were generated using a spatial scan statistic and a Getis-Ord Gi* statistic. In all disease groups tested (except the control disease), adjustment for chronic PM10 exposure ... area to a mixed residential/industrial area, possibly introducing new environmental exposures. Researchers should be aware of the potential spatial effects inherent in adjusting for covariates when considering study design and interpreting results. © 2007 Elsevier Ltd. All rights reserved.
Full covariance of CMB and lensing reconstruction power spectra
Peloton, Julien; Schmittfull, Marcel; Lewis, Antony; Carron, Julien; Zahn, Oliver
2017-02-01
CMB and lensing reconstruction power spectra are powerful probes of cosmology. However, they are correlated, since the CMB power spectra are lensed, and the lensing reconstruction is constructed using CMB multipoles. We perform a full analysis of the auto- and cross-covariances, including polarization power spectra and minimum-variance lensing estimators, and compare with simulations of idealized future CMB-S4 observations. Covariances sourced by fluctuations in the unlensed CMB and instrumental noise can largely be removed by using a realization-dependent subtraction of lensing reconstruction noise, leaving a relatively simple covariance model that is dominated by lensing-induced terms and well described by a small number of principal components. The correlations between the CMB and lensing power spectra will be detectable at the level of ~5σ for a CMB-S4 mission, and neglecting them could underestimate some parameter error bars by several tens of percent. However, we found that the inclusion of external priors or data sets to estimate parameter error bars can make the impact of the correlations almost negligible.
Resting-state brain organization revealed by functional covariance networks.
Zhiqiang Zhang
BACKGROUND: Brain network studies using techniques of intrinsic connectivity networks based on fMRI time series (TS-ICN) and structural covariance networks (SCN) have mapped out the functional and structural organization of the human brain at their respective time scales. However, a meso-time-scale network is lacking to bridge the ICN and SCN and provide insight into brain functional organization. METHODOLOGY AND PRINCIPAL FINDINGS: We proposed a functional covariance network (FCN) method by measuring the covariance of the amplitude of low-frequency fluctuations (ALFF) in BOLD signals across subjects, and compared the patterns of ALFF-FCNs with the TS-ICNs and SCNs by mapping the brain networks of the default network, task-positive network and sensory networks. Using conjunction analysis, we demonstrated large overlap among FCNs, ICNs and SCNs and a modular nature in FCNs and ICNs. Most interestingly, FCN analysis showed a network dichotomy consisting of an anti-correlated high-level cognitive system and a low-level perceptive system, which is a novel finding different from the ICN dichotomy consisting of the default-mode network and the task-positive network. CONCLUSION: The current study proposed an ALFF-FCN approach to measure the interregional correlation of brain activity responding to short periods of state, and revealed novel organization patterns of resting-state brain activity at an intermediate time scale.
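The core FCN computation described above, correlating regional ALFF values across subjects rather than across time, can be sketched in a few lines; the subject count, region count and random data below are purely illustrative assumptions, not the authors' pipeline:

```python
import numpy as np

# Core of the FCN idea: correlation of regional ALFF values computed
# ACROSS SUBJECTS (a TS-ICN would instead correlate across time points).
# Shapes and data are illustrative assumptions.
rng = np.random.default_rng(0)
n_subjects, n_regions = 50, 6
alff = rng.normal(size=(n_subjects, n_regions))  # alff[s, r] = ALFF of region r in subject s

fcn = np.corrcoef(alff, rowvar=False)  # (n_regions, n_regions) network matrix
assert fcn.shape == (n_regions, n_regions)
```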
Fractal Video Coding Using Fast Normalized Covariance Based Similarity Measure
Ravindra E. Chaudhari
2016-01-01
Fast normalized covariance based similarity measure for fractal video compression with quadtree partitioning is proposed in this paper. To increase the speed of fractal encoding, a simplified expression of the covariance between range and overlapped domain blocks within a search window is implemented in the frequency domain. All the covariance coefficients are normalized by using the standard deviation of overlapped domain blocks, and these are efficiently calculated in one computation by using two different approaches, namely, FFT based and sum table based. Results of these two approaches are compared and they are almost equal to each other in all aspects, except the memory requirement. Based on the proposed simplified similarity measure, gray level transformation parameters are computationally modified and isometry transformations are performed using rotation/reflection properties of the IFFT. Quadtree decompositions are used for the partition of larger range blocks, that is, 16 × 16, based on the target level of motion compensated prediction error. Experimental results show that the proposed method can increase the encoding speed and compression ratio by 66.49% and 9.58%, respectively, as compared to the NHEXS method, with an increase in PSNR of 0.41 dB. Compared to H.264, the proposed method can save 20% of compression time with marginal variation in PSNR and compression ratio.
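The normalized covariance similarity at the heart of this method can be sketched directly, without the FFT or sum-table acceleration the paper proposes; the block contents and sizes below are hypothetical:

```python
import numpy as np

def normalized_covariance(range_block, domain_block):
    """Normalized covariance between a range block and a domain block.
    (The paper computes this in the frequency domain / via sum tables;
    this direct version is for clarity only.)"""
    r = range_block - range_block.mean()
    d = domain_block - domain_block.mean()
    denom = r.std() * d.std() * r.size
    return float((r * d).sum() / denom) if denom else 0.0

rng = np.random.default_rng(1)
R = rng.random((8, 8))  # hypothetical 8x8 range block
# An affinely related domain block is a perfect match (score ~1), which is
# exactly what the gray-level transformation parameters can absorb.
assert abs(normalized_covariance(R, 2.0 * R + 3.0) - 1.0) < 1e-9
```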
A Covariant model for the nucleon and the $\\Delta$
Ramalho, G; Gross, Franz
2008-01-01
The covariant spectator formalism is used to model the nucleon and the $\\Delta$(1232) as a system of three constituent quarks with their own electromagnetic structure. The definition of the ``fixed-axis'' polarization states for the diquark emitted from the initial state vertex and absorbed into the final state vertex is discussed. The helicity sum over those states is evaluated and seen to be covariant. Using this approach, all four electromagnetic form factors of the nucleon, together with the {\\it magnetic} form factor, $G_M^*$, for the $\\gamma N \\to \\Delta$ transition, can be described using manifestly covariant nucleon and $\\Delta$ wave functions with {\\it zero} orbital angular momentum $L$, but a successful description of $G_M^*$ near $Q^2=0$ requires the addition of a pion cloud term not included in the class of valence quark models considered here. We also show that the pure $S$-wave model gives electric, $G_E^*$, and coulomb, $G^*_C$, transition form factors that are identically zero, showing that th...
Covariant density functional theory: Reexamining the structure of superheavy nuclei
Agbemava, S E; Nakatsukasa, T; Ring, P
2015-01-01
A systematic investigation of even-even superheavy elements in the region of proton numbers $100 \leq Z \leq 130$ and in the region of neutron numbers from the proton-drip line up to neutron number $N=196$ is presented. For this study we use five of the most up-to-date covariant energy density functionals of different types: with a non-linear meson coupling, with density-dependent meson couplings, and with density-dependent zero-range interactions. Pairing correlations are treated within relativistic Hartree-Bogoliubov (RHB) theory based on an effective separable particle-particle interaction of finite range, and deformation effects are taken into account. This allows us to assess the spread of theoretical predictions within the present covariant models for the binding energies, deformation parameters, shell structures and $\alpha$-decay half-lives. Contrary to previous studies in covariant density functional theory, it was found that the impact of the $N=172$ spherical shell gap on the structure of superheavy elemen...
A model selection approach to analysis of variance and covariance.
Alber, Susan A; Weiss, Robert E
2009-06-15
An alternative to analysis of variance is a model selection approach where every partition of the treatment means into clusters with equal value is treated as a separate model. The null hypothesis that all treatments are equal corresponds to the partition with all means in a single cluster. The alternative hypothesis corresponds to the set of all other partitions of treatment means. A model selection approach can also be used for a treatment-by-covariate interaction, where the null hypothesis and each alternative correspond to a partition of treatments into clusters with equal covariate effects. We extend the partition-as-model approach to simultaneous inference for both the treatment main effect and the treatment interaction with a continuous covariate, with separate partitions for the intercepts and treatment-specific slopes. The model space is the Cartesian product of the intercept partition and the slope partition, and we develop five joint priors for this model space. In four of these priors the intercept and slope partitions are dependent. We advise on setting priors over models, and we use the model to analyze an orthodontic data set that compares the frictional resistance created by orthodontic fixtures. Copyright (c) 2009 John Wiley & Sons, Ltd.
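The partition-as-model idea can be illustrated by enumerating all partitions of a small set of treatments; this is a generic sketch of the model space, not the authors' prior construction, and the treatment labels are hypothetical:

```python
def set_partitions(items):
    """Yield every partition of `items` as a list of clusters; each
    partition corresponds to one model in the partition-as-model approach."""
    if len(items) == 1:
        yield [items]
        return
    first, rest = items[0], items[1:]
    for partition in set_partitions(rest):
        # place `first` into each existing cluster of equal means ...
        for i in range(len(partition)):
            yield partition[:i] + [[first] + partition[i]] + partition[i + 1:]
        # ... or into a new cluster of its own
        yield [[first]] + partition

treatments = ["A", "B", "C", "D"]          # hypothetical treatment labels
models = list(set_partitions(treatments))
# Bell number B(4) = 15 candidate models; the single-cluster partition
# is the null hypothesis that all treatment means are equal.
assert len(models) == 15
```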
Allometric covariation: a hallmark behavior of plants and leaves.
Price, Charles A; Weitz, Joshua S
2012-03-01
Size is one of the most important axes of variation among plants. As such, plant biologists have long searched for unifying principles that can explain how matter and energy flux and organ partitioning scale with plant size. Several recent models have proposed a universal biophysical basis for numerous scaling phenomena in plants based on vascular network geometry. Here, we review statistical analyses of several large-scale plant datasets that demonstrate that a true hallmark of plant form variability is systematic covariation among traits. This covariation is constrained by allometries that combine and trade off with one another, rather than any single universal allometric scaling exponent for a trait or suite of traits. Further, we show that covariation can be successfully modeled using network approaches that allow for species-specific designs in plants and geometric approaches that constrain relationships among economic traits in leaves. Finally, we report large-scale efforts utilizing semi-automated software tools that quantify physical networks and can inform our attempts to link vascular network structure to plant form and function. Collectively, this work highlights how the linking of morphology, biomass partitioning and the structure of physical distribution networks can improve our empirical and theoretical understanding of important drivers of plant functional diversity.
MIMO Radar Transmit Beampattern Design Without Synthesising the Covariance Matrix
Ahmed, Sajid
2013-10-28
Compared to phased-array radars, multiple-input multiple-output (MIMO) radars provide more degrees of freedom (DOF) that can be exploited for improved spatial resolution, better parametric identifiability, lower side-lobe levels at the transmitter/receiver, and a greater variety of transmit beampatterns. The design of the transmit beampattern generally requires the waveforms to have arbitrary auto- and cross-correlation properties. The generation of such waveforms is a complicated two-step process. In the first step a waveform covariance matrix is synthesised, which is a constrained optimisation problem. In the second step, actual waveforms are designed to realise this covariance matrix, which is also a constrained optimisation problem. Our proposed scheme converts this two-step constrained optimisation problem into a one-step unconstrained optimisation problem. In the proposed scheme, in contrast to synthesising the covariance matrix for the desired beampattern, nT independent finite-alphabet constant-envelope waveforms are generated and pre-processed, with a weight matrix W, before transmitting from the antennas. In this work, two weight matrices are proposed that can be easily optimised for the desired symmetric and non-symmetric beampatterns and guarantee equal average power transmission from each antenna. Simulation results validate our claims.
D. M. D. Hendriks
2007-08-01
A DLT-100 Fast Methane Analyser (FMA) from Los Gatos Research (LGR) Ltd. is assessed for its applicability in a closed-path eddy covariance field set-up. The FMA uses off-axis integrated cavity output spectroscopy (ICOS) combined with a highly specific narrow-band laser for the detection of CH_{4}, and strongly reflective mirrors to obtain a laser path length of 2×10³ to 20×10³ m. Statistical testing, a calibration experiment and comparison with high tower data showed high precision and very good stability of the instrument. The measurement cell response time was tested to be 0.10 s. In the field set-up, the FMA is attached to a scroll pump and combined with a Gill Windmaster Pro 3-axis ultrasonic anemometer and a Licor 7500 open-path infrared gas analyzer. The power-spectra and co-spectra of the instrument are satisfactory for 10 Hz sampling rates. The correspondence with CH_{4} flux chamber measurements is good and the observed CH_{4} emissions are comparable with (eddy covariance) CH_{4} measurements in other peat areas.
CH_{4} emissions are rather variable over time and show a diurnal pattern. The average CH_{4} emission is 50±12.5 nmol m^{−2} s^{−1}, while the typical maximum CH_{4} emission is 120±30 nmol m^{−2} s^{−1} (during daytime) and the typical minimum flux is –20±2.5 nmol m^{−2} s^{−1} (uptake, during night time).
Additionally, the set-up was tested for three measurement techniques with slower measurement rates, which could be used in the future to make the scroll pump superfluous and save energy. Both disjunct eddy covariance and slow 1 Hz eddy covariance showed results very similar to normal 10 Hz eddy covariance. Relaxed eddy accumulation (REA) only matched normal 10 Hz eddy covariance over an averaging period of at least several weeks.
The inverse maximum dynamic flow problem
Bagherian, Mehri
2010-01-01
We consider the inverse maximum dynamic flow (IMDF) problem. The IMDF problem can be described as follows: how to change the capacity vector of a dynamic network as little as possible so that a given feasible dynamic flow becomes a maximum dynamic flow. After discussing some characteristics of this problem, it is converted to a constrained minimum dynamic cut problem. Then an efficient algorithm, which uses two maximum dynamic flow algorithms, is proposed to solve the problem.
Covariate selection in multivariate spatial analysis of ovine parasitic infection.
Musella, V; Catelan, D; Rinaldi, L; Lagazio, C; Cringoli, G; Biggeri, A
2011-05-01
Gastrointestinal (GI) strongyle and fluke infections remain one of the main constraints on health and productivity in sheep dairy production. A cross-sectional survey was conducted in 2004-2005 on ovine farms in the Campania region of southern Italy in order to evaluate the prevalence of Haemonchus contortus, Fasciola hepatica, Dicrocoelium dendriticum and Calicophoron daubneyi, among other parasitic infections. In the present work, we focused on the role of the ecological characteristics of the pasture environment while accounting for the underlying long-range geographical risk pattern. Bayesian multivariate spatial statistical analysis was used. A systematic grid (10 km×10 km) sampling approach was used. Laboratory procedures were based on the FLOTAC technique to detect and count eggs of helminths. A Geographical Information System (GIS) was constructed by using environmental data layers. Data on each of these layers were then extracted for pasturing areas that had previously been digitalized from aerial images of the ovine farms. Bayesian multivariate statistical analyses, including improper multivariate conditional autoregressive models, were used to select covariates on a multivariate spatially structured risk surface. Out of the 121 tested farms, 109 were positive for H. contortus, 81 for D. dendriticum, 17 for C. daubneyi and 15 for F. hepatica. The statistical analysis highlighted a north-south long-range spatially structured pattern. This geographical pattern is treated here as a confounder, because the main interest was in the causal role of ecological covariates at the level of each pasturing area. A high percentage of pasture and impermeable soil were strong predictors of F. hepatica risk and a high percentage of wood was a strong predictor of C. daubneyi. A high percentage of wood, rocks and arable soil with sparse trees explained the spatial distribution of D. dendriticum. Sparse vegetation, river, mixed soil and permeable soil explained the spatial
A hierarchical nest survival model integrating incomplete temporally varying covariates
Converse, Sarah J.; Royle, J. Andrew; Adler, Peter H.; Urbanek, Richard P.; Barzan, Jeb A.
2013-01-01
Nest success is a critical determinant of the dynamics of avian populations, and nest survival modeling has played a key role in advancing avian ecology and management. Beginning with the development of daily nest survival models, and proceeding through subsequent extensions, the capacity for modeling the effects of hypothesized factors on nest survival has expanded greatly. We extend nest survival models further by introducing an approach to deal with incompletely observed, temporally varying covariates using a hierarchical model. Hierarchical modeling offers a way to separate process and observational components of demographic models to obtain estimates of the parameters of primary interest, and to evaluate structural effects of ecological and management interest. We built a hierarchical model for daily nest survival to analyze nest data from reintroduced whooping cranes (Grus americana) in the Eastern Migratory Population. This reintroduction effort has been beset by poor reproduction, apparently due primarily to nest abandonment by breeding birds. We used the model to assess support for the hypothesis that nest abandonment is caused by harassment from biting insects. We obtained indices of blood-feeding insect populations based on the spatially interpolated counts of insects captured in carbon dioxide traps. However, insect trapping was not conducted daily, and so we had incomplete information on a temporally variable covariate of interest. We therefore supplemented our nest survival model with a parallel model for estimating the values of the missing insect covariates. We used Bayesian model selection to identify the best predictors of daily nest survival. Our results suggest that the black fly Simulium annulus may be negatively affecting nest survival of reintroduced whooping cranes, with decreasing nest survival as abundance of S. annulus increases. The modeling framework we have developed will be applied in the future to a larger data set to evaluate the
Maximum permissible voltage of YBCO coated conductors
Wen, J.; Lin, B.; Sheng, J.; Xu, J.; Jin, Z. [Department of Electrical Engineering, Shanghai Jiao Tong University, Shanghai (China); Hong, Z., E-mail: zhiyong.hong@sjtu.edu.cn [Department of Electrical Engineering, Shanghai Jiao Tong University, Shanghai (China); Wang, D.; Zhou, H.; Shen, X.; Shen, C. [Qingpu Power Supply Company, State Grid Shanghai Municipal Electric Power Company, Shanghai (China)
2014-06-15
Highlights: • We examine the maximum permissible voltage of three kinds of tapes. • We examine the relationship between quenching duration and maximum permissible voltage. • Continuous I_c degradation occurs under repetitive quenching when tapes reach the maximum permissible voltage. • We examine the relationship between maximum permissible voltage, resistance and temperature. - Abstract: A superconducting fault current limiter (SFCL) can reduce short-circuit currents in an electrical power system. One of the most important things in developing an SFCL is to find the maximum permissible voltage of each limiting element. The maximum permissible voltage is defined as the maximum voltage per unit length at which the YBCO coated conductors (CC) do not suffer from critical current (I_c) degradation or burnout. In this research, the duration of the quenching process is varied and the voltage is raised until I_c degradation or burnout happens. The YBCO coated conductors tested in the experiment are from American Superconductor (AMSC) and Shanghai Jiao Tong University (SJTU). As the quenching duration increases, the maximum permissible voltage of the CC decreases. When the quenching duration is 100 ms, the maximum permissible voltages of the SJTU CC, the 12 mm AMSC CC and the 4 mm AMSC CC are 0.72 V/cm, 0.52 V/cm and 1.2 V/cm, respectively. Based on the results for these samples, the whole length of CC used in the design of an SFCL can be determined.
2015-03-01
ALGORITHM: Eigenvalue Estimation of Hyperspectral Wishart Covariance Matrices from a Limited Number of Samples (ECBC-TN-067). Avishai Ben-David... covariance matrices and to recompute a revised covariance matrix from the eigenvalues. The MATLAB function is an implementation of the procedure developed
Large-scale portfolios using realized covariance matrix: evidence from the Japanese stock market
Masato Ubukata
2009-01-01
The objective of this paper is to examine effects of realized covariance matrix estimators based on intraday returns on large-scale minimum-variance equity portfolio optimization. We empirically assess out-of-sample performance of portfolios with different covariance matrix estimators: the realized covariance matrix estimators and Bayesian shrinkage estimators based on the past monthly and daily returns. The main results are: (1) the realized covariance matrix estimators using the past intrad...
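A minimal sketch of the pipeline this abstract describes: a realized covariance matrix built from intraday returns, then plugged into a global minimum-variance portfolio. The dimensions and simulated returns are synthetic assumptions (e.g. 78 five-minute returns over one day, 4 assets):

```python
import numpy as np

# Synthetic intraday returns (illustrative only).
rng = np.random.default_rng(2)
n_intraday, n_assets = 78, 4
r = rng.normal(scale=1e-3, size=(n_intraday, n_assets))

# Realized covariance matrix: sum of outer products of return vectors.
rcov = r.T @ r

# Global minimum-variance weights: w = Sigma^{-1} 1 / (1' Sigma^{-1} 1).
ones = np.ones(n_assets)
siginv_ones = np.linalg.solve(rcov, ones)
w = siginv_ones / (ones @ siginv_ones)

# The resulting portfolio variance attains the minimum 1 / (1' Sigma^{-1} 1).
assert np.isclose(w @ rcov @ w, 1.0 / (ones @ siginv_ones))
```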
B. Langford
2015-03-01
All eddy-covariance flux measurements are associated with random uncertainties which are a combination of sampling error due to natural variability in turbulence and sensor noise. The former is the principal error for systems where the signal-to-noise ratio of the analyser is high, as is usually the case when measuring fluxes of heat, CO2 or H2O. Where signal is limited, which is often the case for measurements of other trace gases and aerosols, instrument uncertainties dominate. Here we apply a consistent approach based on auto- and cross-covariance functions to quantify the total random flux error and the random error due to instrument noise separately. As with previous approaches, the random error quantification assumes that the time-lag between wind and concentration measurement is known. However, if combined with commonly used automated methods that identify the individual time-lag by looking for the maximum in the cross-covariance function of the two entities, analyser noise additionally leads to a systematic bias in the fluxes. Combining datasets from several analysers and using simulations, we show that the method of time-lag determination becomes increasingly important as the magnitude of the instrument error approaches that of the sampling error. The flux bias can be particularly significant for disjunct data, whereas using a prescribed time-lag eliminates these effects (provided the time-lag does not fluctuate unduly over time). We also demonstrate that when sampling at higher elevations, where low frequency turbulence dominates and covariance peaks are broader, both the probability and magnitude of bias are magnified. We show that the statistical significance of noisy flux data can be increased (the limit of detection can be decreased) by appropriate averaging of individual fluxes, but only if systematic biases are avoided by using a prescribed time-lag. Finally, we make recommendations for the analysis and reporting of data with
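The automated time-lag determination discussed here, picking the maximum of the cross-covariance function between wind and concentration, can be sketched as follows; the synthetic signals, noise level and lag are illustrative assumptions, not instrument data:

```python
import numpy as np

def cross_cov(w, c, lag):
    """Sample cross-covariance between w(t) and c(t + lag), integer lag."""
    if lag < 0:
        return cross_cov(c, w, -lag)
    w = np.asarray(w, float) - np.mean(w)
    c = np.asarray(c, float) - np.mean(c)
    return float(np.mean(w[: len(w) - lag] * c[lag:]))

# Synthetic example: concentration c lags vertical wind w by `true_lag` samples.
rng = np.random.default_rng(3)
n, true_lag = 4000, 12
x = rng.normal(size=n + true_lag)
w = x[true_lag:]                             # wind signal
c = 0.5 * x[:n] + 0.1 * rng.normal(size=n)   # delayed, noisy concentration

# Automated time-lag determination: maximise |cross-covariance| over a window.
best = max(range(-30, 31), key=lambda k: abs(cross_cov(w, c, k)))
flux = cross_cov(w, c, best)   # flux estimate at the detected lag
assert best == true_lag
```

With noisy analysers, searching for this maximum is exactly what introduces the systematic flux bias the abstract describes, which is why a prescribed lag is preferred.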
Conditioning of the stationary kriging matrices for some well-known covariance models
Posa, D. (IRMA-CNR, Bari (Italy))
1989-10-01
In this paper, the condition number of the stationary kriging matrix is studied for some well-known covariance models. Indeed, the robustness of the kriging weights is strongly affected by this measure. Such an analysis can justify the choice of a covariance function among other admissible models which could fit a given experimental covariance equally well.
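The kind of comparison this abstract describes can be sketched numerically: two admissible covariance models over the same sample configuration can yield very differently conditioned kriging matrices. The locations and range parameters below are hypothetical:

```python
import numpy as np

# Condition number of the stationary kriging (covariance) matrix for two
# admissible models on the same 1-D sample configuration.
x = np.linspace(0.0, 10.0, 30)           # sample locations (hypothetical)
h = np.abs(x[:, None] - x[None, :])      # pairwise distance matrix

K_exp = np.exp(-h / 2.0)                 # exponential model, range parameter 2
K_gauss = np.exp(-((h / 2.0) ** 2))      # Gaussian model, range parameter 2

# Both models may fit an experimental covariance equally well, but the
# Gaussian model yields a far worse-conditioned kriging matrix, so the
# kriging weights are much less robust.
assert np.linalg.cond(K_gauss) > np.linalg.cond(K_exp)
```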
Lorentz covariant field theory on noncommutative spacetime based on DFR algebra
Okumura, Y
2003-01-01
Lorentz covariance is the fundamental principle of every relativistic field theory, which ensures consistent physical descriptions. Even if spacetime is noncommutative, field theories on it should preserve Lorentz covariance. In this letter, it is shown that field theory on noncommutative spacetime is Lorentz covariant if the noncommutativity emerges from the algebra of spacetime operators described by Doplicher, Fredenhagen and Roberts.
Improving on the empirical covariance matrix using truncated PCA with white noise residuals
Jewson, S
2005-01-01
The empirical covariance matrix is not necessarily the best estimator for the population covariance matrix: we describe a simple method which gives better estimates in two examples. The method models the covariance matrix using truncated PCA with white noise residuals. Jack-knife cross-validation is used to find the truncation that maximises the out-of-sample likelihood score.
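A minimal sketch of such an estimator, assuming "white noise residuals" means replacing the trailing eigenvalues of the sample covariance with their average (the jack-knife choice of truncation is omitted, and the function and data names are illustrative):

```python
import numpy as np

def truncated_pca_covariance(sample_cov, k):
    """Model covariance as the leading k PCA modes plus white noise residuals."""
    vals, vecs = np.linalg.eigh(sample_cov)   # eigenvalues in ascending order
    vals, vecs = vals[::-1], vecs[:, ::-1]    # reorder to descending
    noise = vals[k:].mean()                   # average residual variance
    proj = vecs[:, :k] @ vecs[:, :k].T        # projector onto leading modes
    out = (vecs[:, :k] * vals[:k]) @ vecs[:, :k].T
    return out + noise * (np.eye(len(sample_cov)) - proj)

rng = np.random.default_rng(4)
X = rng.normal(size=(40, 10))                 # hypothetical data matrix
S = np.cov(X, rowvar=False)                   # empirical covariance
S_trunc = truncated_pca_covariance(S, k=3)
# Replacing the tail by its mean preserves the total variance (trace).
assert abs(np.trace(S_trunc) - np.trace(S)) < 1e-9
```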
Klacka, J
2001-01-01
The relativistically covariant form of the equation of motion for a real particle (body) under the action of electromagnetic radiation is derived. The equation of motion in the proper frame of the particle uses the 3 $\times$ 3 radiation pressure cross-section matrix. The obtained covariant equation of motion is compared with another covariant equation of motion presented more than a year ago.
tmle : An R Package for Targeted Maximum Likelihood Estimation
Susan Gruber
2012-11-01
Targeted maximum likelihood estimation (TMLE) is a general approach for constructing an efficient double-robust semi-parametric substitution estimator of a causal effect parameter or statistical association measure. tmle is a recently developed R package that implements TMLE of the effect of a binary treatment at a single point in time on an outcome of interest, controlling for user-supplied covariates; the parameters include an additive treatment effect, relative risk, odds ratio, and the controlled direct effect of a binary treatment controlling for a binary intermediate variable on the pathway from treatment to the outcome. Estimation of the parameters of a marginal structural model is also available. The package allows outcome data with missingness, and experimental units that contribute repeated records of the point-treatment data structure, thereby allowing the analysis of longitudinal data structures. Relevant factors of the likelihood may be modeled or fit data-adaptively according to user specifications, or passed in from an external estimation procedure. Effect estimates, variances, p values, and 95% confidence intervals are provided by the software.
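Since the tmle package itself is in R, here is only a language-neutral sketch of the initial plug-in (G-computation) substitution step that TMLE starts from and then targets; the fluctuation/targeting step is omitted, and the data-generating process and linear outcome model are assumptions made for illustration:

```python
import numpy as np

# Plug-in (G-computation) estimate of an additive treatment effect:
# fit E[Y | A, W], then average predictions under A=1 versus A=0.
rng = np.random.default_rng(5)
n = 2000
W = rng.normal(size=n)                          # covariate
A = rng.integers(0, 2, size=n).astype(float)    # binary treatment
Y = 1.0 + 2.0 * A + 0.5 * W + rng.normal(size=n)

# Outcome regression by ordinary least squares (assumed model).
X = np.column_stack([np.ones(n), A, W])
beta, *_ = np.linalg.lstsq(X, Y, rcond=None)

# Substitution estimator: mean predicted outcome under A=1 minus A=0.
X1 = np.column_stack([np.ones(n), np.ones(n), W])
X0 = np.column_stack([np.ones(n), np.zeros(n), W])
ate = float(np.mean(X1 @ beta - X0 @ beta))
assert abs(ate - 2.0) < 0.25   # true additive effect is 2
```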
Generalised maximum entropy and heterogeneous technologies
Oude Lansink, A.G.J.M.
1999-01-01
Generalised maximum entropy methods are used to estimate a dual model of production on panel data of Dutch cash crop farms over the period 1970-1992. The generalised maximum entropy approach allows a coherent system of input demand and output supply equations to be estimated for each farm in the sam
20 CFR 229.48 - Family maximum.
2010-04-01
... month on one person's earnings record is limited. This limited amount is called the family maximum. The family maximum used to adjust the social security overall minimum rate is based on the employee's Overall..., when any of the persons entitled to benefits on the insured individual's compensation would, except...
The maximum rotation of a galactic disc
Bottema, R
1997-01-01
The observed stellar velocity dispersions of galactic discs show that the maximum rotation of a disc is on average 63% of the observed maximum rotation. This criterion cannot, however, be applied to small or low surface brightness (LSB) galaxies because such systems show, in general, a continuously
Crimi, Alessandro; Lillholm, Martin; Nielsen, Mads
2011-01-01
..., and may lead to unreliable results. In this paper, we discuss regularization by prior knowledge using maximum a posteriori (MAP) estimates. We compare ML to MAP using a number of priors and to Tikhonov regularization. We evaluate the covariance estimates on both synthetic and real data, and we analyze the estimates' influence on a missing-data reconstruction task, where high resolution vertebra and cartilage models are reconstructed from incomplete and lower dimensional representations. Our results demonstrate that our methods outperform the traditional ML method and Tikhonov regularization...
Duality of Maximum Entropy and Minimum Divergence
Shinto Eguchi
2014-06-01
We discuss a special class of generalized divergence measures by the use of generator functions. Any divergence measure in the class is separated into the difference between cross and diagonal entropy. The diagonal entropy measure in the class is associated with a model of maximum entropy distributions; the divergence measure leads to statistical estimation via minimization, for an arbitrarily given statistical model. The dualistic relationship between the maximum entropy model and the minimum divergence estimation is explored in the framework of information geometry. The model of maximum entropy distributions is characterized as totally geodesic with respect to the linear connection associated with the divergence. A natural extension of the classical theory of the maximum likelihood method under the maximum entropy model, in terms of the Boltzmann-Gibbs-Shannon entropy, is given. We discuss the duality in detail for Tsallis entropy as a typical example.
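The classical special case of this duality, under the Boltzmann-Gibbs-Shannon entropy, can be written out explicitly (a sketch of the standard result, not the paper's generalized-divergence formulation):

```latex
% Maximum entropy under moment constraints yields an exponential family:
\max_{p}\; H(p) = -\int p(x)\log p(x)\,dx
\quad\text{s.t.}\quad \mathbb{E}_{p}[t(X)] = \mu
\;\Longrightarrow\;
p_{\theta}(x) \propto \exp\{\theta^{\top} t(x)\},
% and maximum likelihood within that family minimizes the Kullback-Leibler
% divergence from the empirical distribution \tilde{p}:
\hat{\theta} = \arg\min_{\theta} D(\tilde{p}\,\|\,p_{\theta}),
\qquad
D(p\,\|\,q) = \int p(x)\log\frac{p(x)}{q(x)}\,dx .
```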
Lu, Dan; Ye, Ming; Meyer, Philip D.; Curtis, Gary P.; Shi, Xiaoqing; Niu, Xu-Feng; Yabusaki, Steve B.
2013-01-01
When conducting model averaging for assessing groundwater conceptual model uncertainty, the averaging weights are often evaluated using model selection criteria such as AIC, AICc, BIC, and KIC (Akaike Information Criterion, Corrected Akaike Information Criterion, Bayesian Information Criterion, and Kashyap Information Criterion, respectively). However, this method often leads to an unrealistic situation in which the best model receives overwhelmingly large averaging weight (close to 100%), which cannot be justified by available data and knowledge. It was found in this study that this problem was caused by using the covariance matrix, CE, of measurement errors for estimating the negative log likelihood function common to all the model selection criteria. This problem can be resolved by using the covariance matrix, Cek, of total errors (including model errors and measurement errors) to account for the correlation between the total errors. An iterative two-stage method was developed in the context of maximum likelihood inverse modeling to iteratively infer the unknown Cek from the residuals during model calibration. The inferred Cek was then used in the evaluation of model selection criteria and model averaging weights. While this method was limited to serial data using time series techniques in this study, it can be extended to spatial data using geostatistical techniques. The method was first evaluated in a synthetic study and then applied to an experimental study, in which alternative surface complexation models were developed to simulate column experiments of uranium reactive transport. It was found that the total errors of the alternative models were temporally correlated due to the model errors. The iterative two-stage method using Cek resolved the problem that the best model receives 100% model averaging weight, and the resulting model averaging weights were supported by the calibration results and physical understanding of the alternative models. Using Cek
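The "close to 100%" weight pathology can be reproduced with the standard information-criterion weighting formula; the IC values below are made up purely for illustration:

```python
import numpy as np

def averaging_weights(ic_values):
    """Model-averaging weights from information criteria (AIC, BIC, KIC, ...):
    w_i proportional to exp(-0.5 * (IC_i - IC_min))."""
    ic = np.asarray(ic_values, dtype=float)
    w = np.exp(-0.5 * (ic - ic.min()))
    return w / w.sum()

# Hypothetical IC values: modest differences already give the best model
# essentially all of the weight, which is the pathology the study traces
# to ignoring model errors in the likelihood's error covariance.
w = averaging_weights([100.0, 120.0, 135.0])
assert w[0] > 0.99
```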
Lu, Dan; Ye, Ming; Meyer, Philip D.; Curtis, Gary P.; Shi, Xiaoqing; Niu, Xu-Feng; Yabusaki, Steven B.
2013-07-23
When conducting model averaging for assessing groundwater conceptual model uncertainty, the averaging weights are often evaluated using model selection criteria such as AIC, AICc, BIC, and KIC (Akaike Information Criterion, Corrected Akaike Information Criterion, Bayesian Information Criterion, and Kashyap Information Criterion, respectively). However, this method often leads to an unrealistic situation in which the best model receives an overwhelmingly large averaging weight (close to 100%), which cannot be justified by available data and knowledge. It was found in this study that this problem was caused by using the covariance matrix, Cɛ, of measurement errors to estimate the negative log-likelihood function common to all the model selection criteria. The problem can be resolved by instead using the covariance matrix, Cek, of total errors (including model errors and measurement errors) to account for the correlation between the total errors. An iterative two-stage method was developed in the context of maximum likelihood inverse modeling to infer the unknown Cek from the residuals during model calibration. The inferred Cek was then used in the evaluation of the model selection criteria and model averaging weights. While the method was limited to serial data using time series techniques in this study, it can be extended to spatial data using geostatistical techniques. The method was first evaluated in a synthetic study and then applied to an experimental study in which alternative surface complexation models were developed to simulate column experiments of uranium reactive transport. The total errors of the alternative models were found to be temporally correlated due to the model errors. The iterative two-stage method using Cek resolved the problem of the best model receiving 100% model averaging weight, and the resulting model averaging weights were supported by the calibration results and physical understanding of the alternative models.
Zhao, W.; Cella, M.; Pasqua, O. Della; Burger, D.M.; Jacqz-Aigrain, E.
2012-01-01
WHAT IS ALREADY KNOWN ABOUT THIS SUBJECT: Abacavir is used to treat HIV infection in both adults and children. The recommended paediatric dose is 8 mg kg(-1) twice daily up to a maximum of 300 mg twice daily. Weight was identified as the central covariate influencing pharmacokinetics of abacavir in
On the Problem of Permissible Covariance and Variogram Models
Christakos, George
1984-02-01
The covariance and variogram models (ordinary or generalized) are important statistical tools used in various estimation and simulation techniques which have been recently applied to diverse hydrologic problems. For example, the efficacy of kriging, a method for interpolating, filtering, or averaging spatial phenomena, depends, to a large extent, on the covariance or variogram model chosen. The aim of this article is to provide the users of these techniques with convenient criteria that may help them to judge whether a function which arises in a particular problem, and is not included among the known covariance or variogram models, is permissible as such a model. This is done by investigating the properties of the candidate model in both the space and frequency domains. In the present article this investigation covers stationary random functions as well as intrinsic random functions (i.e., nonstationary functions for which increments of some order are stationary). Then, based on the theoretical results obtained, a procedure is outlined and successfully applied to a number of candidate models. In order to give to this procedure a more practical context, we employ "stereological" equations that essentially transfer the investigations to one-dimensional space, together with approximations in terms of polygonal functions and Fourier-Bessel series expansions. There are many benefits and applications of such a procedure. Polygonal models can be fit arbitrarily closely to the data. Also, the approximation of a particular model in the frequency domain by a Fourier-Bessel series expansion can be very effective. This is shown by theory and by example.
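The frequency-domain part of this procedure can be illustrated with a small numerical sketch. By Bochner's theorem, a 1-D stationary candidate covariance model is permissible only if its spectral density is nonnegative; the FFT probe below is a toy version of that check under assumed simplifications, not the paper's stereological procedure, and the grid size and candidate models (`tri`, `rect`) are illustrative choices.

```python
import numpy as np

# Toy permissibility probe: sample a symmetric covariance candidate on a
# grid and inspect the sign of its (real) FFT spectrum. A markedly
# negative spectral value rules the candidate out as a covariance model.
def spectrum_min_ratio(cov, L=50.0, n=4096):
    h = np.linspace(-L, L, n, endpoint=False)
    spec = np.fft.fft(np.fft.ifftshift(cov(h))).real
    return spec.min() / spec.max()

def tri(h):   # triangle model: permissible in 1-D (spectrum ~ sinc^2 >= 0)
    return np.maximum(1.0 - np.abs(h), 0.0)

def rect(h):  # rectangle candidate: not permissible (spectrum ~ sinc, goes negative)
    return (np.abs(h) < 1.0).astype(float)

print(spectrum_min_ratio(tri))   # essentially nonnegative (up to float error)
print(spectrum_min_ratio(rect))  # clearly negative
```

The rectangle function looks like a plausible correlation model but fails the spectral test, which is exactly the kind of candidate the criteria in the article are meant to screen out.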
Conservative Sample Size Determination for Repeated Measures Analysis of Covariance.
Morgan, Timothy M; Case, L Douglas
2013-07-05
In the design of a randomized clinical trial with one pre and multiple post randomized assessments of the outcome variable, one needs to account for the repeated measures in determining the appropriate sample size. Unfortunately, one seldom has a good estimate of the variance of the outcome measure, let alone the correlations among the measurements over time. We show how sample sizes can be calculated by making conservative assumptions regarding the correlations for a variety of covariance structures. The most conservative choice for the correlation depends on the covariance structure and the number of repeated measures. In the absence of good estimates of the correlations, the sample size is often based on a two-sample t-test, making the 'ultra' conservative and unrealistic assumption that there are zero correlations between the baseline and follow-up measures while at the same time assuming there are perfect correlations between the follow-up measures. Compared to the case of taking a single measurement, substantial savings in sample size can be realized by accounting for the repeated measures, even with very conservative assumptions regarding the parameters of the assumed correlation matrix. Assuming compound symmetry, the sample size from the two-sample t-test calculation can be reduced at least 44%, 56%, and 61% for repeated measures analysis of covariance by taking 2, 3, and 4 follow-up measures, respectively. The results offer a rational basis for determining a fairly conservative, yet efficient, sample size for clinical trials with repeated measures and a baseline value.
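The headline figures above can be reproduced from one standard variance formula. The sketch below is a hedged reconstruction consistent with the abstract's numbers, not a quotation of the paper's derivation; it assumes the baseline and the k follow-up measures share a common variance and a common correlation rho (compound symmetry), with the analysis of covariance adjusting the mean of the follow-ups for baseline.

```python
# Worst-case variance factor for a baseline-adjusted mean of k follow-up
# measures under compound symmetry (common correlation rho), relative to
# a single post measure analyzed by a two-sample t-test.
def variance_factor(rho, k):
    return (1 + (k - 1) * rho - k * rho**2) / k

def worst_case_reduction(k):
    # the factor is maximized at rho* = (k - 1) / (2k), so this is the
    # sample-size reduction guaranteed for any rho
    rho_star = (k - 1) / (2 * k)
    return 1 - variance_factor(rho_star, k)

for k in (2, 3, 4):
    print(k, round(100 * worst_case_reduction(k)))  # prints 44, 56, 61
```

Maximizing over rho gives the most conservative (smallest guaranteed) savings, matching the "at least 44%, 56%, and 61%" claim for 2, 3, and 4 follow-up measures.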
Optimized Large-Scale CMB Likelihood And Quadratic Maximum Likelihood Power Spectrum Estimation
Gjerløw, E; Eriksen, H K; Górski, K M; Gruppuso, A; Jewell, J B; Plaszczynski, S; Wehus, I K
2015-01-01
We revisit the problem of exact CMB likelihood and power spectrum estimation with the goal of minimizing computational cost through linear compression. This idea was originally proposed for CMB purposes by Tegmark et al. (1997), and here we develop it into a fully working computational framework for large-scale polarization analysis, adopting WMAP as a worked example. We compare five different linear bases (pixel space, harmonic space, noise covariance eigenvectors, signal-to-noise covariance eigenvectors and signal-plus-noise covariance eigenvectors) in terms of compression efficiency, and find that the computationally most efficient basis is the signal-to-noise eigenvector basis, which is closely related to the Karhunen-Loeve and Principal Component transforms, in agreement with previous suggestions. For this basis, the information in 6836 unmasked WMAP sky map pixels can be compressed into a smaller set of 3102 modes, with a maximum error increase of any single multipole of 3.8% at $\ell \le 32$, and a...
Manifestly Gauge Covariant Formulation of Lattice Chiral Fermions
Okuyama, K; Okuyama, Kiyoshi; Suzuki, Hiroshi
1997-01-01
We propose a new formulation of chiral fermions on a lattice, on the basis of a lattice extension of the covariant regularization scheme in continuum field theory. The species doublers do not emerge. The real part of the effective action is just one half of that of Dirac-Wilson fermion and is always gauge invariant even with a finite lattice spacing. The gauge invariance of the imaginary part, on the other hand, sets a severe constraint which is a lattice analogue of the gauge anomaly free condition. For real gauge representations, the imaginary part identically vanishes and the gauge invariance becomes exact.
Parametric methods for estimating covariate-dependent reference limits.
Virtanen, Arja; Kairisto, Veli; Uusipaikka, Esa
2004-01-01
Age-specific reference limits are required for many clinical laboratory measurements. Statistical assessment of calculated intervals must be performed to obtain reliable reference limits. When parametric, covariate-dependent limits are derived, normal distribution theory usually is applied due to its mathematical simplicity and relative ease of fitting. However, it is not always possible to transform data and achieve a normal distribution. Therefore, models other than those based on normal distribution theory are needed. Generalized linear model theory offers one such alternative. Regardless of the statistical model used, the assumptions behind the model should always be examined.
Covariant effective action for a Galilean invariant quantum Hall system
Geracie, Michael; Prabhu, Kartik; Roberts, Matthew M.
2016-09-01
We construct effective field theories for gapped quantum Hall systems coupled to background geometries with local Galilean invariance, i.e., Bargmann spacetimes. Along with an electromagnetic field, these backgrounds include the effects of curved Galilean spacetimes, including torsion and a gravitational field, allowing us to study charge, energy, stress and mass currents within a unified framework. A shift symmetry specific to single-constituent theories constrains the effective action to couple to an effective background gauge field and spin connection that is solved for by a self-consistent equation, providing a manifestly covariant extension of Hoyos and Son's improvement terms to arbitrary order in m.
Neutron Multiplicity: LANL W Covariance Matrix for Curve Fitting
Wendelberger, James G. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)]
2016-12-08
In neutron multiplicity counting one may fit a curve by minimizing an objective function, $\chi^2_n$. The objective function includes the inverse of an n by n matrix of covariances, W. The inverse of the W matrix has a closed-form solution. In addition, $W^{-1}$ is a tridiagonal matrix. The closed form and tridiagonal structure allow for a simpler expression of the objective function $\chi^2_n$. Minimization of this simpler expression will provide the optimal parameters for the fitted curve.
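The computational point can be sketched in a few lines: when $W^{-1}$ is symmetric tridiagonal, the quadratic objective collapses to a banded sum instead of a full matrix product. The diagonal and off-diagonal values below are placeholders, not the LANL W matrix.

```python
import numpy as np

# Hedged sketch of evaluating a chi-square objective r^T W^{-1} r when
# W^{-1} is symmetric tridiagonal. Entries here are illustrative only.
def chi2_tridiagonal(residuals, diag, off):
    r = np.asarray(residuals, dtype=float)
    # banded quadratic form: sum_i d_i r_i^2 + 2 sum_i e_i r_i r_{i+1}
    return float(np.sum(diag * r**2) + 2.0 * np.sum(off * r[:-1] * r[1:]))

r = np.array([0.5, -0.2, 0.1])   # residuals (placeholder data)
d = np.array([2.0, 3.0, 2.0])    # main diagonal of W^{-1} (placeholder)
e = np.array([-1.0, -1.0])       # first off-diagonal of W^{-1} (placeholder)

# cross-check against the dense quadratic form
W_inv = np.diag(d) + np.diag(e, 1) + np.diag(e, -1)
assert np.isclose(chi2_tridiagonal(r, d, e), r @ W_inv @ r)
```

The banded form costs O(n) per evaluation versus O(n^2) for the dense quadratic form, which is the practical benefit of the closed-form tridiagonal inverse.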
Partially linear varying coefficient models stratified by a functional covariate
Maity, Arnab
2012-10-01
We consider the problem of estimation in semiparametric varying coefficient models where the covariate modifying the varying coefficients is functional and is modeled nonparametrically. We develop a kernel-based estimator of the nonparametric component and a profiling estimator of the parametric component of the model and derive their asymptotic properties. Specifically, we show the consistency of the nonparametric functional estimates and derive the asymptotic expansion of the estimates of the parametric component. We illustrate the performance of our methodology using a simulation study and a real data application.
Second central extension in Galilean covariant field theory
Hagen, C R
2002-01-01
The second central extension of the planar Galilei group has been alleged to have its origin in the spin variable. This idea is explored here by considering local Galilean covariant field theory for free fields of arbitrary spin. It is shown that such systems generally display only a trivial realization of the second central extension. While it is possible to realize any desired value of the extension parameter by suitable redefinition of the boost operator, such an approach has no necessary connection to the spin of the basic underlying field.
General Covariance in Gravity at a Lifshitz Point
Horava, Petr
2011-01-01
This paper is based on the invited talks delivered by the author at GR 19: the 19th International Conference on General Relativity and Gravitation, Ciudad de México, México, July 2010. In Part 1, we briefly review some of the main features of quantum gravity with anisotropic scaling, and comment on its possible relation to the causal dynamical triangulations (CDT) approach to lattice quantum gravity. Part 2 explains the construction of gravity with anisotropic scaling with an extended gauge symmetry -- essentially a nonrelativistic version of general covariance. This extra symmetry eliminates the scalar graviton polarization, and thus brings the theory closer to general relativity at long distances.
(1)-covariant gauge for the two-Higgs doublet model
C G Honorato; J J Toscano
2009-12-01
A (1)-covariant gauge for the two-Higgs doublet model based on BRST (Becchi–Rouet–Stora–Tyutin) symmetry is introduced. This gauge allows one to remove a significant number of nonphysical vertices appearing in conventional linear gauges, which greatly simplifies the loop calculations, since the resultant theory satisfies QED-like Ward identities. The presence of four ghost interactions in these types of gauges and their connection with the BRST symmetry are stressed. The Feynman rules for those new vertices that arise in this gauge, as well as for those couplings already present in the linear gauge but that are modified by this gauge-fixing procedure, are presented.
Pairs in the light-front and covariance
Pacheco-Bicudo-Cabral de Melo, J; Frederico, T; Sauer, P U
1998-01-01
The electromagnetic current of bound systems in the light-front is constructed in the Breit-Frame, in the limit of momentum transfer $q^+=(q^0+q^3)$ vanishing. In this limit, the pair creation term survives and it is responsible for the covariance of the current. The pair creation term is computed for the $j^+$ current of a spin one composite particle in the Breit-frame. The rotational symmetry of $j^+$ is violated if the pair term is not considered.
Covariance of Light-Front Models Pair Current
Pacheco-Bicudo-Cabral de Melo, J; Naus, H W L; Sauer, P U
1999-01-01
We compute the "+" component of the electromagnetic current of a composite spin-one two-fermion system for vanishing momentum transfer component $q^+=q^0+q^3$. In particular, we extract the nonvanishing pair production amplitude on the light-front. It is a consequence of the longitudinal zero momentum mode, contributing to the light-front current in the Breit-frame. The covariance of the current is violated, if such pair terms are not included in its matrix elements. We illustrate our discussion with some numerical examples.
Statistical mechanics of covariant systems with multi-fingered time
Chirco, Goffredo
2016-01-01
Recently, in [Class. Quantum Grav. 33 (2016) 045005], the authors proposed a new approach extending the framework of statistical mechanics to reparametrization-invariant systems with no additional gauges. In this work, the approach is generalized to systems defined by more than one Hamiltonian constraint (multi-fingered time). We show how well-known features such as the Ehrenfest-Tolman effect and the Jüttner distribution for the relativistic gas can be consistently recovered from a covariant approach in the multi-fingered framework. Finally, the crucial role played by the interaction in the definition of a global notion of equilibrium is discussed.
Representation of Gaussian semimartingales with applications to the covariance function
Basse-O'Connor, Andreas
2010-01-01
The present paper is concerned with various aspects of Gaussian semimartingales. Firstly, generalizing a result of Stricker, we provide a convenient representation of Gaussian semimartingales as a semimartingale plus a process of bounded variation which is independent of M. Secondly, we study stationary Gaussian semimartingales and their canonical decomposition. Thirdly, we give a new characterization of the covariance function of Gaussian semimartingales, which enables us to characterize the class of martingales and the processes of bounded variation among the Gaussian semimartingales.
Using time-varying covariates in multilevel growth models
D. Betsy McCoach
2010-06-01
This article provides an illustration of growth curve modeling within a multilevel framework. Specifically, we demonstrate coding schemes that allow the researcher to model discontinuous longitudinal data using a linear growth model in conjunction with time-varying covariates. Our focus is on developing a level-1 model that accurately reflects the shape of the growth trajectory. We demonstrate the importance of adequately modeling the shape of the level-1 growth trajectory in order to make inferences about the importance of both level-1 and level-2 predictors.
Gaussian Fluctuations for Sample Covariance Matrices with Dependent Data
Friesen, Olga; Stolz, Michael
2012-01-01
It is known (Hofmann-Credner and Stolz (2008)) that the convergence of the mean empirical spectral distribution of a sample covariance matrix W_n = (1/n) Y_n Y_n^t to the Marčenko-Pastur law remains unaffected if the rows and columns of Y_n exhibit some dependence, where only the growth of the number of dependent entries, but not the joint distribution of the dependent entries, needs to be controlled. In this paper we show that the well-known CLT for traces of powers of W_n also extends to the dependent case.
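The Marchenko-Pastur limit referenced here is easy to visualize numerically. The sketch below uses independent standard Gaussian entries, the simplest case without the paper's dependence structures, and checks that the sample-covariance eigenvalues fall inside the MP bulk for aspect ratio c = p/n; the matrix sizes are arbitrary choices.

```python
import numpy as np

# Eigenvalues of W_n = (1/n) Y_n Y_n^T for i.i.d. standard Gaussian
# entries, compared with the Marchenko-Pastur support
# [(1 - sqrt(c))^2, (1 + sqrt(c))^2] where c = p/n.
rng = np.random.default_rng(1)
p, n = 200, 400
Y = rng.standard_normal((p, n))
W = Y @ Y.T / n
eigs = np.linalg.eigvalsh(W)
c = p / n
lo, hi = (1 - np.sqrt(c))**2, (1 + np.sqrt(c))**2
inside = np.mean((eigs > lo - 0.1) & (eigs < hi + 0.1))
print(inside)  # the vast majority of eigenvalues lie in the MP bulk
```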
Covariance biplot analysis of trace element concentrations in urinary stones.
Wandt, M A; Underhill, L G
1988-06-01
The covariance biplot, a relatively new technique for displaying multivariate data, was applied to trace element contents and compound concentrations of urinary stones. The biplot is demonstrated to give a compact graphical representation of the multivariate data with interpretations in terms of familiar statistical concepts such as correlations and standard deviations. It displays strong correlations between various trace elements like Zn and Sr, and Sr and Na. The biplot also suggests concentration relationships which could play a hitherto unknown role in the genesis of calculi. It is shown to help in the interpretation of analytical results as well as in exposing erroneous or incomplete analyses.
Poisson process Fock space representation, chaos expansion and covariance inequalities
Last, Guenter
2009-01-01
We consider a Poisson process $\\eta$ on an arbitrary measurable space with an arbitrary sigma-finite intensity measure. We establish an explicit Fock space representation of square integrable functions of $\\eta$. As a consequence we identify explicitly, in terms of iterated difference operators, the integrands in the Wiener-Ito chaos expansion. We apply these results to extend well-known variance inequalities for homogeneous Poisson processes on the line to the general Poisson case. The Poincare inequality is a special case. Further applications are covariance identities for Poisson processes on (strictly) ordered spaces and Harris-FKG-inequalities for monotone functions of $\\eta$.
GARCH modelling of covariance in dynamical estimation of inverse solutions
Galka, Andreas [Institute of Experimental and Applied Physics, University of Kiel, 24098 Kiel (Germany) and Institute of Statistical Mathematics (ISM), Minami-Azabu 4-6-7, Tokyo 106-8569 (Japan)]. E-mail: galka@physik.uni-kiel.de; Yamashita, Okito [ATR Computational Neuroscience Laboratories, Hikaridai 2-2-2, Kyoto 619-0288 (Japan); Ozaki, Tohru [Institute of Statistical Mathematics (ISM), Minami-Azabu 4-6-7, Tokyo 106-8569 (Japan)
2004-12-06
The problem of estimating unobserved states of spatially extended dynamical systems poses an inverse problem, which can be solved approximately by a recently developed variant of Kalman filtering; in order to provide the model of the dynamics with more flexibility with respect to space and time, we suggest combining the concept of GARCH modelling of covariance, well known in econometrics, with Kalman filtering. We formulate this algorithm for spatiotemporal systems governed by stochastic diffusion equations and demonstrate its feasibility by presenting a numerical simulation designed to imitate the generation of electroencephalographic recordings by the human cortex.
The covariant approach to LRS perfect fluid spacetime geometries
Van Elst, H; van Elst, Henk; Ellis, George F R
1995-01-01
The dynamics of perfect fluid spacetime geometries which exhibit Local Rotational Symmetry (LRS) are reformulated in the language of a 1+3 "threading" decomposition of the spacetime manifold, where covariant fluid and curvature variables are used. This approach presents a neat alternative to the orthonormal frame formalism. The dynamical equations reduce to a set of differential relations between purely scalar quantities. The consistency conditions are worked out in a transparent way. We discuss their various subcases in detail and focus in particular on models with higher symmetries within the class of expanding spatially inhomogeneous LRS models, via a consideration of functional dependencies between the dynamical variables.
Small area estimation with covariates perturbed for disclosure limitation
Silvia Polettini
2015-03-01
We exploit the connections between measurement error and data perturbation for disclosure limitation in the context of small area estimation. Our starting point is the model in Ybarra and Lohr (2008), where some of the covariates (all continuous) are measured with error. Using a fully Bayesian approach, we extend the aforementioned model to include continuous and categorical auxiliary variables, both possibly perturbed by disclosure limitation methods, with masking distributions fixed according to the assumed protection mechanism. In order to investigate the feasibility of the proposed method, we conduct a simulation study exploring the effect of different post-randomization scenarios on the small area model.
The Koszul-Tate Cohomology in Covariant Hamiltonian Formalism
Mangiarotti, L
1999-01-01
We show that, in the framework of covariant Hamiltonian field theory, a degenerate almost regular quadratic Lagrangian $L$ admits a complete set of non-degenerate Hamiltonian forms such that solutions of the corresponding Hamilton equations, which live in the Lagrangian constraint space, exhaust solutions of the Euler--Lagrange equations for $L$. We obtain the characteristic splittings of the configuration and momentum phase bundles. Due to the corresponding projection operators, the Koszul-Tate resolution of the Lagrangian constraints for a generic almost regular quadratic Lagrangian is constructed in an explicit form.
FBST for covariance structures of generalized Gompertz models
Maranhão, Viviane Teles de Lucca; Lauretto, Marcelo De Souza; Stern, Julio Michael
2012-10-01
The Gompertz distribution is commonly used in biology for modeling fatigue and mortality. This paper studies a class of models proposed by Adham and Walker, featuring a Gompertz type distribution where the dependence structure is modeled by a lognormal distribution, and develops a new multivariate formulation that facilitates several numerical and computational aspects. This paper also implements the FBST, the Full Bayesian Significance Test, for pertinent sharp (precise) hypotheses on the lognormal covariance structure. The FBST's e-value, ev(H), gives the epistemic value of the hypothesis H, or the value of the evidence observed in support of H.
Dunkl Operators as Covariant Derivatives in a Quantum Principal Bundle
Durdevich, Micho; Sontz, Stephen Bruce
2013-05-01
A quantum principal bundle is constructed for every Coxeter group acting on a finite-dimensional Euclidean space E, and then a connection is also defined on this bundle. The covariant derivatives associated to this connection are the Dunkl operators, originally introduced as part of a program to generalize harmonic analysis in Euclidean spaces. This gives us a new, geometric way of viewing the Dunkl operators. In particular, we present a new proof of the commutativity of these operators among themselves as a consequence of a geometric property, namely, that the connection has curvature zero.
Problems and Progress in Covariant High Spin Description
Kirchbach, Mariana
2016-01-01
A universal description of particles with spins j ≥ 1, transforming in (j, 0) ⊕ (0, j), is developed by means of representation-specific second order differential wave equations without auxiliary conditions, and in covariant bases such as Lorentz tensors for bosons, Lorentz tensors with Dirac spinor components for fermions, or the basis of the more fundamental Weyl-Van-der-Waerden sl(2,C) spinor-tensors. At the root of the method, which is free from the pathologies suffered by the traditional approaches, are projectors constructed from the Casimir invariants of the spin-Lorentz group and the group of translations in Minkowski spacetime.
A covariance matrix test for high-dimensional data
Saowapha Chaipitak
2016-10-01
For multivariate normally distributed data with dimension larger than or equal to the number of observations (the sample size), called high-dimensional normal data, we propose a test of the null hypothesis that the covariance matrix of a normal population is proportional to a given matrix, under some conditions as the dimension goes to infinity. We show that the test statistic is consistent. The asymptotic null and non-null distributions of the test statistic are also given. The performance of the proposed test is evaluated via a simulation study and an application.
Accuracy of Pseudo-Inverse Covariance Learning--A Random Matrix Theory Analysis.
Hoyle, David C
2011-07-01
For many learning problems, estimates of the inverse population covariance are required and often obtained by inverting the sample covariance matrix. Increasingly for modern scientific data sets, the number of sample points is less than the number of features and so the sample covariance is not invertible. In such circumstances, the Moore-Penrose pseudo-inverse sample covariance matrix, constructed from the eigenvectors corresponding to nonzero sample covariance eigenvalues, is often used as an approximation to the inverse population covariance matrix. The reconstruction error of the pseudo-inverse sample covariance matrix in estimating the true inverse covariance can be quantified via the Frobenius norm of the difference between the two. The reconstruction error is dominated by the smallest nonzero sample covariance eigenvalues and diverges as the sample size becomes comparable to the number of features. For high-dimensional data, we use random matrix theory techniques and results to study the reconstruction error for a wide class of population covariance matrices. We also show how bagging and random subspace methods can result in a reduction in the reconstruction error and can be combined to improve the accuracy of classifiers that utilize the pseudo-inverse sample covariance matrix. We test our analysis on both simulated and benchmark data sets.
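The singular-sample-covariance setting described above is easy to reproduce. The sketch below is a hedged illustration with an identity population covariance and arbitrarily chosen dimensions, not the paper's random matrix analysis.

```python
import numpy as np

# With fewer samples (n) than features (p) the sample covariance is
# singular; its Moore-Penrose pseudo-inverse, built from the nonzero
# eigenpairs only, approximates the population inverse (here I_p) poorly.
rng = np.random.default_rng(0)
n, p = 30, 50
X = rng.standard_normal((n, p))    # population covariance = I_p
S = np.cov(X, rowvar=False)        # p x p, rank <= n - 1 < p, singular
S_pinv = np.linalg.pinv(S)         # pseudo-inverse via nonzero eigenvalues
err = np.linalg.norm(S_pinv - np.eye(p), ord="fro")
print(err)  # Frobenius reconstruction error; large when n is close to p
```

The error has two sources visible in the eigenbasis of S: inverted small nonzero eigenvalues (which can blow up) and the null-space directions, where the pseudo-inverse is zero while the true inverse is not.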
Agilan, V.; Umamahesh, N. V.
2017-03-01
Present infrastructure design is primarily based on rainfall Intensity-Duration-Frequency (IDF) curves under the so-called stationary assumption. However, in recent years extreme precipitation events have been increasing due to global climate change, creating non-stationarity in the series. Based on recent theoretical developments in Extreme Value Theory (EVT), recent studies proposed a methodology for developing non-stationary rainfall IDF curves by incorporating a trend in the parameters of the Generalized Extreme Value (GEV) distribution using time as a covariate. However, time may not be the best covariate, and it is important to analyze all possible covariates and find the best one for modeling the non-stationarity. In this study, five physical processes, namely urbanization, local temperature changes, global warming, the El Niño-Southern Oscillation (ENSO) cycle, and the Indian Ocean Dipole (IOD), are used as covariates. Based on these five covariates and their possible combinations, sixty-two non-stationary GEV models are constructed. In addition, two non-stationary GEV models based on the time covariate and one stationary GEV model are also constructed. The best model for each duration's rainfall series is chosen based on the corrected Akaike Information Criterion (AICc). From the findings of this study, it is observed that local processes (i.e., urbanization and local temperature changes) are the best covariates for short-duration rainfall, and global processes (i.e., global warming, the ENSO cycle, and the IOD) are the best covariates for long-duration rainfall of Hyderabad city, India. Furthermore, the time covariate never qualifies as the best covariate. The identified best covariates are further used to develop non-stationary rainfall IDF curves for Hyderabad city. The proposed methodology can be applied to other situations to develop non-stationary IDF curves based on the best covariate.
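The model-selection step described above, fitting GEV models with and without a covariate in the location parameter and ranking them by AICc, can be sketched as follows. The covariate, sample size, and parameter values are illustrative, not those of the Hyderabad study:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import genextreme

rng = np.random.default_rng(1)

# Synthetic annual-maximum series whose location drifts with a covariate
# (standing in for e.g. a local temperature or ENSO index).
n = 200
z = np.linspace(-1.0, 1.0, n)                 # hypothetical covariate
x = genextreme.rvs(c=-0.1, loc=20 + 5 * z, scale=3, size=n, random_state=rng)

def nll(theta, covariate=None):
    """Negative log-likelihood of a GEV with optional linear location trend."""
    if covariate is None:
        mu, log_sig, c = theta
    else:
        b0, b1, log_sig, c = theta
        mu = b0 + b1 * covariate
    return -genextreme.logpdf(x, c=c, loc=mu, scale=np.exp(log_sig)).sum()

def aicc(nll_min, k, n_obs):
    """Corrected Akaike Information Criterion for k fitted parameters."""
    return 2 * k + 2 * nll_min + 2 * k * (k + 1) / (n_obs - k - 1)

opts = {"maxiter": 5000, "maxfev": 5000}
stat = minimize(nll, x0=[x.mean(), np.log(x.std()), -0.1],
                method="Nelder-Mead", options=opts)
nonstat = minimize(nll, x0=[x.mean(), 1.0, np.log(x.std()), -0.1],
                   args=(z,), method="Nelder-Mead", options=opts)

aicc_stat = aicc(stat.fun, 3, n)
aicc_nonstat = aicc(nonstat.fun, 4, n)
print(aicc_stat, aicc_nonstat)   # the lower AICc wins
```

With a strong trend in the location parameter, the covariate model should attain the lower AICc despite its extra parameter, which is the comparison the study runs across its sixty-five candidate models.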
One-loop Matching and Running with Covariant Derivative Expansion
Henning, Brian; Murayama, Hitoshi
2016-01-01
We develop tools for performing effective field theory (EFT) calculations in a manifestly gauge-covariant fashion. We clarify how functional methods account for one-loop diagrams resulting from the exchange of both heavy and light fields, as some confusion has recently arisen in the literature. To efficiently evaluate functional traces containing these "mixed" one-loop terms, we develop a new covariant derivative expansion (CDE) technique that is capable of evaluating a much wider class of traces than previous methods. The technique is detailed in an appendix, so that it can be read independently from the rest of this work. We review the well-known matching procedure to one-loop order with functional methods. What we add to this story is showing how to isolate one-loop terms coming from diagrams involving only heavy propagators from diagrams with mixed heavy and light propagators. This is done using a non-local effective action, which physically connects to the notion of "integrating out" heavy fields. Lastly...
Covariance analysis of differential drag-based satellite cluster flight
Ben-Yaacov, Ohad; Ivantsov, Anatoly; Gurfil, Pini
2016-06-01
One possibility for satellite cluster flight is to control relative distances using differential drag. The idea is to increase or decrease the drag acceleration on each satellite by changing its attitude, and use the resulting small differential acceleration as a controller. The most significant advantage of the differential drag concept is that it enables cluster flight without consuming fuel. However, any drag-based control algorithm must cope with significant aerodynamical and mechanical uncertainties. The goal of the current paper is to develop a method for examination of differential drag-based cluster flight performance in the presence of noise and uncertainties. In particular, the differential drag control law is examined under measurement noise, drag uncertainties, and initial condition-related uncertainties. The method used for uncertainty quantification is Linear Covariance Analysis, which enables us to propagate the augmented state and filter covariance without propagating the state itself. Validation using a Monte-Carlo simulation is provided. The results show that all uncertainties have a relatively small effect on the inter-satellite distance, even in the long term, which validates the robustness of the differential drag controller used.
Eigenvectors of some large sample covariance matrix ensembles
Ledoit, Olivier
2009-01-01
We consider sample covariance matrices $S_N=\frac{1}{p}\Sigma_N^{1/2}X_NX_N^* \Sigma_N^{1/2}$ where $X_N$ is a $N \times p$ real or complex matrix with i.i.d. entries with finite $12^{\rm th}$ moment and $\Sigma_N$ is a $N \times N$ positive definite matrix. In addition we assume that the spectral measure of $\Sigma_N$ almost surely converges to some limiting probability distribution as $N \to \infty$ and $p/N \to \gamma >0.$ We quantify the relationship between sample and population eigenvectors by studying the asymptotics of functionals of the type $\frac{1}{N} \text{Tr} (g(\Sigma_N) (S_N-zI)^{-1}),$ where $I$ is the identity matrix, $g$ is a bounded function and $z$ is a complex number. This is then used to compute the asymptotically optimal bias correction for sample eigenvalues, paving the way for a new generation of improved estimators of the covariance matrix and its inverse.
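The gap between sample and population eigenvalues that motivates this bias correction is easy to see numerically. A minimal sketch with an identity population covariance (so every population eigenvalue equals 1), using the Marchenko-Pastur support as the reference; sizes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

N, p = 200, 1000                       # dimension N, sample size p
c = N / p                              # aspect ratio
X = rng.standard_normal((N, p))
S = X @ X.T / p                        # sample covariance; population is I

evals = np.linalg.eigvalsh(S)
# Marchenko-Pastur support for an identity population:
# [(1 - sqrt(c))^2, (1 + sqrt(c))^2]
lo, hi = (1 - np.sqrt(c)) ** 2, (1 + np.sqrt(c)) ** 2
print(evals.min(), evals.max(), lo, hi)
```

Although every population eigenvalue is 1, the sample eigenvalues spread over the whole support interval; an asymptotically optimal bias correction of the kind derived in the paper shrinks them back toward the population values.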
Covariant quantization of CPT-violating photons
Colladay, D.; McDonald, P.; Noordmans, J. P.; Potting, R.
2017-01-01
We perform the covariant canonical quantization of the CPT- and Lorentz-symmetry-violating photon sector of the minimal Standard-Model Extension, which contains a general (timelike, lightlike, or spacelike) fixed background tensor $(k_{AF})^\mu$. Well-known stability issues, arising from complex-valued energy states, are solved by introducing a small photon mass, orders of magnitude below current experimental bounds. We explicitly construct a covariant basis of polarization vectors, in which the photon field can be expanded. We proceed to derive the Feynman propagator and show that the theory is microcausal. Despite the occurrence of negative energies and vacuum-Cherenkov radiation, we do not find any runaway stability issues, because the energy remains bounded from below. An important observation is that the ordering of the roots of the dispersion relations is the same in any observer frame, which allows for a frame-independent condition that selects the correct branch of the dispersion relation. This turns out to be critical for the consistency of the quantization. To our knowledge, this is the first system for which quantization has consistently been performed, in spite of the fact that the theory contains negative energies in some observer frames.
Anisotropic k-Nearest Neighbor Search Using Covariance Quadtree
Marinho, Eraldo Pereira
2011-01-01
We present a variant of the hyper-quadtree that divides a multidimensional space according to the hyperplanes associated to the principal components of the data in each hyperquadrant. Each of the $2^\lambda$ hyper-quadrants is a data partition in a $\lambda$-dimension subspace, whose intrinsic dimensionality $\lambda\leq d$ is reduced from the root dimensionality $d$ by the principal components analysis, which discards the irrelevant eigenvalues of the local covariance matrix. In the present method a component is irrelevant if its length is smaller than, or comparable to, the local inter-data spacing. Thus, the covariance hyper-quadtree is fully adaptive to the local dimensionality. The proposed data-structure is used to compute the anisotropic K nearest neighbors (kNN), supported by the Mahalanobis metric. As an application, we used the present k nearest neighbors method to perform density estimation over a noisy data distribution. Such estimation method can be further incorporated to the smoothed particle h...
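Setting the tree data-structure aside, the anisotropic neighbour query itself reduces to sorting Mahalanobis distances under a local covariance estimate. A minimal flat (non-tree) sketch with illustrative data:

```python
import numpy as np

rng = np.random.default_rng(3)

# Anisotropic cloud: much more spread along x than along y.
data = rng.standard_normal((500, 2)) * np.array([10.0, 0.5])

cov = np.cov(data, rowvar=False)
prec = np.linalg.inv(cov)              # Mahalanobis metric tensor

def knn_mahalanobis(query, points, metric, k):
    """Indices of the k nearest neighbours of `query` under the Mahalanobis metric."""
    diff = points - query
    d2 = np.einsum("ij,jk,ik->i", diff, metric, diff)  # diff_i^T P diff_i
    return np.argsort(d2)[:k]

idx = knn_mahalanobis(np.zeros(2), data, prec, k=5)
print(idx)
```

Under this metric, points far away along the high-variance axis can still rank as near neighbours, which is exactly the anisotropy the covariance quadtree exploits per hyper-quadrant.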
Hydrographic responses to regional covariates across the Kara Sea
Mäkinen, Jussi; Vanhatalo, Jarno
2016-12-01
The Kara Sea is a shelf sea in the Arctic Ocean which has a strong spatiotemporal hydrographic variation driven by river discharge, air pressure, and sea ice. There is a lack of information about the effects of environmental variables on surface hydrography in different regions of the Kara Sea. We use a hierarchical spatially varying coefficient model to study the variation of sea surface temperature (SST) and salinity (SSS) in the Kara Sea between years 1980 and 2000. The model allows us to study the effects of climatic (Arctic Oscillation index (AO)) and seasonal (river discharge and ice concentration) environmental covariates on hydrography. The hydrographic responses to covariates vary considerably between different regions of the Kara Sea. River discharge decreases SSS in the shallow shelf area and has a neutral effect in the northern Kara Sea. The responses of SST and SSS to AO show the effects of different wind and air pressure conditions on water circulation and hence on hydrography. Ice concentration has a constant effect across the Kara Sea. We estimated the average SST and SSS in the Kara Sea in 1980-2000. The average August SST over the Kara Sea in 1995-2000 was higher than the respective average in 1980-1984 with 99.9% probability, and August SSS decreased with 77% probability between these time periods. We found support that the winter-season AO has an impact on the summer-season hydrography, and that the temporal trends may be related to the varying level of the winter-season AO index.
Adaptive error covariances estimation methods for ensemble Kalman filters
Zhen, Yicun, E-mail: zhen@math.psu.edu [Department of Mathematics, The Pennsylvania State University, University Park, PA 16802 (United States); Harlim, John, E-mail: jharlim@psu.edu [Department of Mathematics and Department of Meteorology, The Pennsylvania State University, University Park, PA 16802 (United States)
2015-08-01
This paper presents a computationally fast algorithm for estimating both the system and observation noise covariances of nonlinear dynamics, which can be used in an ensemble Kalman filtering framework. The new method is a modification of Belanger's recursive method that avoids the expensive computational cost of inverting error covariance matrices of products of innovation processes of different lags when the number of observations becomes large. When we use only products of innovation processes up to one lag, the computational cost is indeed comparable to the recently proposed method of Berry and Sauer. However, our method is more flexible since it allows for using information from products of innovation processes of more than one lag. Extensive numerical comparisons between the proposed method and both the original Belanger scheme and the Berry-Sauer scheme are shown in various examples, ranging from low-dimensional linear and nonlinear systems of SDEs to the 40-dimensional stochastically forced Lorenz-96 model. Our numerical results suggest that the proposed scheme is as accurate as the original Belanger scheme on low-dimensional problems and has a wider range of more accurate estimates compared to the Berry-Sauer method on the L-96 example.
Abnormalities in Structural Covariance of Cortical Gyrification in Parkinson's Disease
Xu, Jinping; Zhang, Jiuquan; Zhang, Jinlei; Wang, Yue; Zhang, Yanling; Wang, Jian; Li, Guanglin; Hu, Qingmao; Zhang, Yuanchao
2017-01-01
Although abnormal cortical morphology and connectivity between brain regions (structural covariance) have been reported in Parkinson's disease (PD), the topological organizations of large-scale structural brain networks are still poorly understood. In this study, we investigated large-scale structural brain networks in a sample of 37 PD patients and 34 healthy controls (HC) by assessing the structural covariance of cortical gyrification with the local gyrification index (lGI). We demonstrated prominent small-world properties of the structural brain networks for both groups. Compared with the HC group, PD patients showed significantly increased integrated characteristic path length and integrated clustering coefficient, as well as decreased integrated global efficiency in structural brain networks. Distinct distributions of hub regions were identified between the two groups, showing more hub regions in the frontal cortex in PD patients. Moreover, the modular analyses revealed significantly decreased integrated regional efficiency in the lateral Fronto-Insula-Temporal module, and increased integrated regional efficiency in the Parieto-Temporal module in the PD group as compared to the HC group. In summary, our study demonstrated altered topological properties of structural networks at a global, regional and modular level in PD patients. These findings suggest that the structural networks of PD patients have a suboptimal topological organization, resulting in less effective integration of information between brain regions.
Implementing phase-covariant cloning in circuit quantum electrodynamics
Zhu, Meng-Zheng [School of Physics and Material Science, Anhui University, Hefei 230039 (China); School of Physics and Electronic Information, Huaibei Normal University, Huaibei 235000 (China); Ye, Liu, E-mail: yeliu@ahu.edu.cn [School of Physics and Material Science, Anhui University, Hefei 230039 (China)
2016-10-15
An efficient scheme is proposed to implement phase-covariant quantum cloning by using a superconducting transmon qubit coupled to a microwave cavity resonator in the strong dispersive limit of circuit quantum electrodynamics (QED). By solving the master equation numerically, we plot the Wigner function and Poisson distribution of the cavity mode after each operation in the cloning transformation sequence according to the two logic circuits proposed. The visualizations of the quasi-probability distribution in phase space for the cavity mode and the occupation probability distribution in the Fock basis enable us to follow the evolution of the cavity mode during the phase-covariant cloning (PCC) transformation. With the help of numerical simulation, we find that the present cloning machine is not isotropic, because its output fidelity depends on the polar and azimuthal angles of the initial input state on the Bloch sphere. The fidelity of the actual output clone in the present scheme is slightly smaller than in the theoretical case. The simulation results are consistent with the theoretical ones. This further corroborates that our circuit-QED scheme can efficiently implement the PCC transformation.
REFINING GENETICALLY INFERRED RELATIONSHIPS USING TREELET COVARIANCE SMOOTHING
Crossett, Andrew; Lee, Ann B.; Klei, Lambertus; Devlin, Bernie; Roeder, Kathryn
2013-01-01
Recent technological advances coupled with large sample sets have uncovered many factors underlying the genetic basis of traits and the predisposition to complex disease, but much is left to discover. A common thread to most genetic investigations is familial relationships. Close relatives can be identified from family records, and more distant relatives can be inferred from large panels of genetic markers. Unfortunately these empirical estimates can be noisy, especially regarding distant relatives. We propose a new method for denoising genetically-inferred relationship matrices by exploiting the underlying structure due to hierarchical groupings of correlated individuals. The approach, which we call Treelet Covariance Smoothing, employs a multiscale decomposition of covariance matrices to improve estimates of pairwise relationships. On both simulated and real data, we show that smoothing leads to better estimates of the relatedness amongst distantly related individuals. We illustrate our method with a large genome-wide association study and estimate the "heritability" of body mass index quite accurately. Traditionally heritability, defined as the fraction of the total trait variance attributable to additive genetic effects, is estimated from samples of closely related individuals using random effects models. We show that by using smoothed relationship matrices we can estimate heritability using population-based samples. Finally, while our methods have been developed for refining genetic relationship matrices and improving estimates of heritability, they have much broader potential application in statistics, most notably for error-in-variables random effects models and settings that require regularization of matrices with block or hierarchical structure. PMID:24587841
USING COVARIANCE MATRIX FOR CHANGE DETECTION OF POLARIMETRIC SAR DATA
M. Esmaeilzade
2017-09-01
Nowadays change detection plays an important role in civil and military fields. Synthetic Aperture Radar (SAR) images, due to their independence from atmospheric conditions and cloud cover, have attracted much attention in change detection applications. When SAR data are used, one appropriate way to represent the backscattered signal is the covariance matrix, which follows the Wishart distribution. Based on this distribution, a statistical test for the equality of two complex variance-covariance matrices can be used. In this study, two full-polarization L-band datasets from UAVSAR are used for change detection in agricultural fields and urban areas in the United States; the first image is from 2014 and the second from 2017. To investigate the effect of polarization on the detected change, full-polarization and dual-polarization data were used and the results compared. According to the results, full polarization reveals more changes than dual polarization.
Spike Triggered Covariance in Strongly Correlated Gaussian Stimuli
Aljadeff, Johnatan; Segev, Ronen; Berry, Michael J.; Sharpee, Tatyana O.
2013-01-01
Many biological systems perform computations on inputs that have very large dimensionality. Determining the relevant input combinations for a particular computation is often key to understanding its function. A common way to find the relevant input dimensions is to examine the difference in variance between the input distribution and the distribution of inputs associated with certain outputs. In systems neuroscience, the corresponding method is known as spike-triggered covariance (STC). This method has been highly successful in characterizing relevant input dimensions for neurons in a variety of sensory systems. So far, most studies used the STC method with weakly correlated Gaussian inputs. However, it is also important to use this method with inputs that have long range correlations typical of the natural sensory environment. In such cases, the stimulus covariance matrix has one (or more) outstanding eigenvalues that cannot be easily equalized because of sampling variability. Such outstanding modes interfere with analyses of statistical significance of candidate input dimensions that modulate neuronal outputs. In many cases, these modes obscure the significant dimensions. We show that the sensitivity of the STC method in the regime of strongly correlated inputs can be improved by an order of magnitude or more. This can be done by evaluating the significance of dimensions in the subspace orthogonal to the outstanding mode(s). Analyzing the responses of retinal ganglion cells probed with Gaussian noise, we find that taking into account outstanding modes is crucial for recovering relevant input dimensions for these neurons. PMID:24039563
Characterizing the evolution of genetic variance using genetic covariance tensors.
Hine, Emma; Chenoweth, Stephen F; Rundle, Howard D; Blows, Mark W
2009-06-12
Determining how genetic variance changes under selection in natural populations has proved to be a very resilient problem in evolutionary genetics. In the same way that understanding the availability of genetic variance within populations requires the simultaneous consideration of genetic variance in sets of functionally related traits, determining how genetic variance changes under selection in natural populations will require ascertaining how genetic variance-covariance (G) matrices evolve. Here, we develop a geometric framework using higher-order tensors, which enables the empirical characterization of how G matrices have diverged among populations. We then show how divergence among populations in genetic covariance structure can be associated with divergence in selection acting on those traits using key equations from evolutionary theory. Using estimates of G matrices of eight male sexually selected traits from nine geographical populations of Drosophila serrata, we show that much of the divergence in genetic variance occurred in a single trait combination, a conclusion that could not have been reached by examining variation among the individual elements of the nine G matrices. Divergence in G was primarily in the direction of the major axes of genetic variance within populations, suggesting that genetic drift may be a major cause of divergence in genetic variance among these populations.
Batalin-Vilkovisky formalism in locally covariant field theory
Rejzner, Katarzyna Anna
2011-12-15
The present work contains a complete formulation of the Batalin-Vilkovisky (BV) formalism in the framework of locally covariant field theory. In the first part of the thesis the classical theory is investigated with a particular focus on the infinite dimensional character of the underlying structures. It is shown that the use of infinite dimensional differential geometry allows for a conceptually clear and elegant formulation. The construction of the BV complex is performed in a fully covariant way and we also generalize the BV framework to a more abstract level, using functors and natural transformations. In this setting we construct the BV complex for classical gravity. This allows us to give a homological interpretation to the notion of diffeomorphism invariant physical quantities in general relativity. The second part of the thesis concerns the quantum theory. We provide a framework for the BV quantization that doesn't rely on the path integral formalism, but is completely formulated within perturbative algebraic quantum field theory. To make such a formulation possible we first prove that the renormalized time-ordered product can be understood as a binary operation on a suitable domain. Using this result we prove the associativity of this product and provide a consistent framework for the renormalized BV structures. In particular the renormalized quantum master equation and the renormalized quantum BV operator are defined. To give a precise meaning to these objects we make use of the master Ward identity, which is an important structure in causal perturbation theory. (orig.)
Some observations on interpolating gauges and non-covariant gauges
Satish D Joglekar
2003-11-01
We discuss the viability of using interpolating gauges to define the non-covariant gauges starting from the covariant ones. We draw attention to the need for a very careful treatment of the boundary-condition defining term. We show that the boundary condition needed to maintain gauge-invariance as the interpolating parameter varies depends very sensitively on the parameter variation. We do this with a gauge used by Doust. We also consider the Lagrangian path-integrals in Minkowski space for gauges with a residual gauge-invariance. We point out the necessity of inclusion of an ε-term (even) in the formal treatments, without which one may reach incorrect conclusions. We, further, point out that the ε-term can contribute to the BRST WT-identities in a non-trivial way (even as ε → 0). We point out that these contributions lead to additional constraints on Green's functions that are not normally taken into account in the BRST formalism that ignores the ε-term, and that they are characteristic of the way the singularities in propagators are handled. We argue that a prescription, in general, will require renormalization; if at all it is to be viable.
Inverse probability weighting for covariate adjustment in randomized studies.
Shen, Changyu; Li, Xiaochun; Li, Lingling
2014-02-20
Covariate adjustment in randomized clinical trials has the potential benefit of precision gain. It also has the potential pitfall of reduced objectivity, as it opens the possibility of selecting a 'favorable' model that yields a strong treatment benefit estimate. Although there is a large volume of statistical literature targeting the first aspect, realistic solutions to enforce objective inference and improve precision are rare. As a typical randomized trial needs to accommodate many implementation issues beyond statistical considerations, maintaining objectivity is at least as important as precision gain, if not more so, particularly from the perspective of the regulatory agencies. In this article, we propose a two-stage estimation procedure based on inverse probability weighting to achieve better precision without compromising objectivity. The procedure is designed in a way such that the covariate adjustment is performed before seeing the outcome, effectively reducing the possibility of selecting a 'favorable' model that yields a strong intervention effect. Both theoretical and numerical properties of the estimation procedure are presented. Application of the proposed method to a real data example is presented.
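A minimal sketch of the two-stage idea on synthetic data (variable names, the propensity model, and the effect size are illustrative, not from the paper): the weighting model is fit using only treatment assignments and covariates, and outcomes enter only in the final weighted contrast.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)

# Hypothetical randomized trial: covariate x, random treatment t, outcome y.
n = 5000
x = rng.standard_normal(n)
t = rng.random(n) < 0.5
y = 1.0 * t + 2.0 * x + rng.standard_normal(n)   # true treatment effect = 1

# Stage 1 (performed before looking at outcomes): logistic model of
# treatment on covariates, fit by maximum likelihood.
def logistic_nll(beta):
    eta = beta[0] + beta[1] * x
    return np.sum(np.log1p(np.exp(eta)) - t * eta)

beta = minimize(logistic_nll, x0=np.zeros(2), method="BFGS").x
p = 1 / (1 + np.exp(-(beta[0] + beta[1] * x)))   # estimated propensity

# Stage 2: inverse-probability-weighted difference in means.
effect = np.mean(t * y / p) - np.mean((~t) * y / (1 - p))
print(effect)
```

Because the weights are fixed before outcomes are examined, there is no room to shop for a 'favorable' outcome model, which is the objectivity argument the paper makes.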
A dual method for maximum entropy restoration
Smith, C. B.
1979-01-01
A simple iterative dual algorithm for maximum entropy image restoration is presented. The dual algorithm involves fewer parameters than conventional minimization in the image space. Minicomputer test results for Fourier synthesis with inadequate phantom data are given.
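The dual approach can be illustrated on a toy discrete problem: instead of optimizing all probabilities directly, one optimizes a single Lagrange multiplier in the dual, mirroring the paper's point that the dual involves far fewer parameters than minimization in the image space. The problem and numbers are illustrative, not the Fourier-synthesis setting of the paper:

```python
import numpy as np
from scipy.optimize import minimize

# Maximum-entropy distribution on {0,...,9} with a prescribed mean of 6.5.
# Primal: 10 probabilities. Dual: a single Lagrange multiplier.
values = np.arange(10)
target_mean = 6.5

def dual(lam):
    """Dual objective: log-partition function minus the constraint term."""
    lam = lam[0]
    return np.log(np.exp(lam * values).sum()) - lam * target_mean

lam = minimize(dual, x0=[0.0], method="BFGS").x[0]
p = np.exp(lam * values)
p /= p.sum()                       # recover the primal solution
print(p @ values)                  # the moment constraint at the dual optimum
```

The recovered distribution has the exponential-family form `p_i ∝ exp(λ v_i)` that maximum entropy dictates, with the multiplier chosen so the constraint holds.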
Maximum Throughput in Multiple-Antenna Systems
Zamani, Mahdi
2012-01-01
The point-to-point multiple-antenna channel is investigated in an uncorrelated block-fading environment with Rayleigh distribution. The maximum throughput and maximum expected-rate of this channel are derived under the assumption that the transmitter is oblivious to the channel state information (CSI), while the receiver has perfect CSI. First, we prove that in multiple-input single-output (MISO) channels, the optimum transmission strategy maximizing the throughput is to use all available antennas and perform equal power allocation with uncorrelated signals. Furthermore, to increase the expected-rate, multi-layer coding is applied. Analogously, we establish that sending uncorrelated signals and performing equal power allocation across all available antennas at each layer is optimum. A closed form expression for the maximum continuous-layer expected-rate of MISO channels is also obtained. Moreover, we investigate multiple-input multiple-output (MIMO) channels, and formulate the maximum throughput in the asympt...
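The MISO equal-power setting can be checked with a quick Monte Carlo estimate of the ergodic rate; the SNR and antenna counts below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(6)

# Open-loop MISO link (no CSI at the transmitter): equal power across
# nt antennas with uncorrelated signals gives C = log2(1 + (snr/nt)*||h||^2).
snr = 10.0          # linear scale
trials = 100_000

def ergodic_rate(nt):
    # i.i.d. complex Rayleigh fading, unit average power per antenna.
    h = (rng.standard_normal((trials, nt)) +
         1j * rng.standard_normal((trials, nt))) / np.sqrt(2)
    h2 = np.sum(np.abs(h) ** 2, axis=1)
    return np.mean(np.log2(1 + snr / nt * h2))

r1, r4 = ergodic_rate(1), ergodic_rate(4)
print(r1, r4)
```

Spreading power over more antennas keeps the average received SNR fixed but reduces its fluctuation (channel hardening), which raises the mean of the concave rate function.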
Photoemission spectromicroscopy with MAXIMUM at Wisconsin
Ng, W.; Ray-Chaudhuri, A.K.; Cole, R.K.; Wallace, J.; Crossley, S.; Crossley, D.; Chen, G.; Green, M.; Guo, J.; Hansen, R.W.C.; Cerrina, F.; Margaritondo, G. (Dept. of Electrical Engineering, Dept. of Physics and Synchrotron Radiation Center, Univ. of Wisconsin, Madison (USA)); Underwood, J.H.; Korthright, J.; Perera, R.C.C. (Center for X-ray Optics, Accelerator and Fusion Research Div., Lawrence Berkeley Lab., CA (USA))
1990-06-01
We describe the development of the scanning photoemission spectromicroscope MAXIMUM at the Wisconsin Synchrotron Radiation Center, which uses radiation from a 30-period undulator. The article includes a discussion of the first tests after the initial commissioning.
Maximum-likelihood method in quantum estimation
Paris, M G A; Sacchi, M F
2001-01-01
The maximum-likelihood method for quantum estimation is reviewed and applied to the reconstruction of the density matrix of spin and radiation systems, as well as to the determination of several parameters of interest in quantum optics.
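For the simplest case, density-matrix reconstruction of a single qubit, the maximum-likelihood idea can be sketched directly. This is a hedged illustration, not the review's general formalism: for independent ±1 outcomes of the three Pauli observables, the ML estimate of each expectation value is the empirical mean, and the resulting Bloch vector is truncated to the unit ball so the reconstructed state stays physical.

```python
import random

# Illustrative single-qubit ML reconstruction from simulated Pauli
# measurement outcomes. The state and shot count are hypothetical.
random.seed(3)
true_r = (0.6, 0.0, 0.8)          # Bloch vector of the state being measured
shots = 5000

def estimate(exp_val):
    """ML (empirical-mean) estimate of a Pauli expectation from +/-1 outcomes."""
    p_plus = (1 + exp_val) / 2
    ups = sum(1 for _ in range(shots) if random.random() < p_plus)
    return (2 * ups - shots) / shots

r = [estimate(c) for c in true_r]
norm = sum(c * c for c in r) ** 0.5
if norm > 1:                       # project back to a positive (physical) state
    r = [c / norm for c in r]

# Density matrix rho = (I + r . sigma) / 2 as an explicit 2x2 complex array.
rho = [[(1 + r[2]) / 2, (r[0] - 1j * r[1]) / 2],
       [(r[0] + 1j * r[1]) / 2, (1 - r[2]) / 2]]
print(rho)   # unit trace and Hermitian by construction
```

The truncation step is the simplest way to impose positivity; full ML tomography enforces it within the likelihood maximization itself.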
The maximum entropy technique. System's statistical description
Belashev, B Z
2002-01-01
The maximum entropy technique (MENT) is applied to search for the distribution functions of physical quantities. MENT naturally incorporates the maximum-entropy requirement, the characteristics of the system, and the constraint conditions. This makes MENT applicable to the statistical description of both closed and open systems. Examples are considered in which MENT is used to describe equilibrium and nonequilibrium states, as well as states far from thermodynamic equilibrium.
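The core of the technique can be shown on a toy system. Among all distributions over a set of states consistent with a constraint (here, a fixed mean "energy" on hypothetical levels), maximum entropy selects the exponential (Boltzmann) family; the single multiplier β is then found by bisection so that the constraint is met.

```python
import math

# Maximum-entropy distribution over discrete states subject to a mean
# constraint: p(e) ∝ exp(-beta * e), with beta fixed by the constraint.
levels = [0.0, 1.0, 2.0, 3.0]     # hypothetical state energies
target = 1.2                      # required mean energy (between min and max level)

def mean_energy(beta):
    w = [math.exp(-beta * e) for e in levels]
    z = sum(w)
    return sum(e * wi for e, wi in zip(levels, w)) / z

# mean_energy is strictly decreasing in beta, so bisection applies.
lo, hi = -50.0, 50.0
for _ in range(100):
    mid = (lo + hi) / 2
    if mean_energy(mid) > target:
        lo = mid                   # beta too small: mean still too high
    else:
        hi = mid

beta = (lo + hi) / 2
print(mean_energy(beta))   # matches the target mean 1.2
```

With more constraints (e.g. fixed mean and variance), the same construction yields one multiplier per constraint, which is how MENT accommodates "the characteristics of the system and the constraint conditions" simultaneously.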
19 CFR 114.23 - Maximum period.
2010-04-01
... 19 Customs Duties 1 2010-04-01 2010-04-01 false Maximum period. 114.23 Section 114.23 Customs... CARNETS Processing of Carnets § 114.23 Maximum period. (a) A.T.A. carnet. No A.T.A. carnet with a period of validity exceeding 1 year from date of issue shall be accepted. This period of validity cannot be...
Maximum-Likelihood Detection Of Noncoherent CPM
Divsalar, Dariush; Simon, Marvin K.
1993-01-01
Simplified detectors are proposed for maximum-likelihood sequence detection of symbols, from an alphabet of size M, transmitted by uncoded full-response continuous phase modulation over a radio channel with additive white Gaussian noise. The receiver structures are derived from a particular interpretation of the maximum-likelihood metrics. The receivers include front ends whose structure depends only on M, analogous to those in coherent CPM receivers. The parts of the receivers following the front ends have structures whose complexity depends on N.
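At its simplest, the maximum-likelihood metric in additive white Gaussian noise reduces to a nearest-point decision. The sketch below shows only that core idea for a single M-ary symbol (here a hypothetical 4-PSK constellation); the noncoherent CPM receivers in the abstract build far more elaborate front ends on the same principle.

```python
import cmath
import math
import random

# Core ML decision in AWGN: pick the constellation point nearest to the
# received sample. Constellation and noise level are illustrative only.
random.seed(7)
M = 4
constellation = [cmath.exp(2j * math.pi * k / M) for k in range(M)]

def ml_detect(r):
    """Index of the maximum-likelihood symbol for received sample r."""
    return min(range(M), key=lambda k: abs(r - constellation[k]) ** 2)

sent = 2
noise = complex(random.gauss(0, 0.1), random.gauss(0, 0.1))
received = constellation[sent] + noise
print(ml_detect(received))   # recovers the transmitted index at this noise level
```

For sequences, the same squared-distance metric is accumulated over N symbol intervals, which is why the post-front-end complexity grows with N.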
SEXUAL DIMORPHISM OF MAXIMUM FEMORAL LENGTH
Pandya A M
2011-04-01
Sexual identification from skeletal parts has medico-legal and anthropological importance. The present study aims to obtain values of maximum femoral length and to evaluate its possible usefulness in determining correct sexual identification. The study sample consisted of 184 dry, normal, adult human femora (136 male and 48 female) from the skeletal collections of the Anatomy department, M. P. Shah Medical College, Jamnagar, Gujarat. Maximum length of the femur was taken as the maximum vertical distance between the upper end of the head of the femur and the lowest point on the femoral condyle, measured with an osteometric board. Mean values obtained were 451.81 mm and 417.48 mm for right male and female femora, and 453.35 mm and 420.44 mm for left male and female femora, respectively. The higher value in males was statistically highly significant (P < 0.001) on both sides. Demarking point (D.P.) analysis of the data showed that right femora with maximum length more than 476.70 mm were definitely male and less than 379.99 mm definitely female, while for left bones, femora with maximum length more than 484.49 mm were definitely male and less than 385.73 mm definitely female. Maximum length identified 13.43% of right male femora, 4.35% of right female femora, 7.25% of left male femora, and 8% of left female femora. [National J of Med Res 2011; 1(2): 67-70]
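The demarking-point rule reported in the abstract translates directly into a classification function: lengths beyond a demarking point identify sex definitively, and lengths between the two points remain indeterminate. The thresholds below (in mm) are those given in the study.

```python
# Demarking-point (D.P.) classification of a femur by maximum length,
# using the thresholds reported in the abstract (lengths in mm):
# (female D.P., male D.P.) per side.
DP = {"right": (379.99, 476.70), "left": (385.73, 484.49)}

def classify_femur(max_length_mm, side):
    """Return 'male', 'female', or 'indeterminate' for a given side."""
    female_dp, male_dp = DP[side]
    if max_length_mm > male_dp:
        return "male"
    if max_length_mm < female_dp:
        return "female"
    return "indeterminate"

print(classify_femur(480.0, "right"))   # male
print(classify_femur(380.5, "right"))   # indeterminate
```

Only a minority of bones fall outside the overlap zone, which matches the low identification percentages (4–13%) the study reports.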