Hierarchical matrix approximation of large covariance matrices
Litvinenko, Alexander
2015-01-05
We approximate large non-structured covariance matrices in the H-matrix format with a log-linear computational cost and storage O(n log n). We compute the inverse, Cholesky decomposition and determinant in H-format. As an example we consider the class of Matérn covariance functions, which are very popular in spatial statistics, geostatistics, machine learning and image analysis. Applications are: kriging and optimal design.
Hierarchical matrix approximation of large covariance matrices
Litvinenko, Alexander
2015-11-30
We approximate large non-structured Matérn covariance matrices of size n×n in the H-matrix format with a log-linear computational cost and storage O(kn log n), where rank k ≪ n is a small integer. Applications are: spatial statistics, machine learning and image analysis, kriging and optimal design.
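As a rough illustration of why hierarchical-matrix compression of covariance matrices works, the sketch below builds a smooth kernel matrix on a 1D grid and shows that an off-diagonal block (interactions between two separated point clusters) admits an accurate low-rank factorization. A squared-exponential kernel and rank k = 10 are used here purely for simplicity; Matérn kernels behave similarly.

```python
import numpy as np

# Squared-exponential kernel (stand-in for a Matérn kernel) between point sets
def se_cov(x, y, ell=0.3):
    return np.exp(-((x[:, None] - y[None, :]) ** 2) / (2 * ell**2))

n = 256
pts = np.linspace(0.0, 1.0, n)

# Off-diagonal block: covariances between the left and right halves of the domain
block = se_cov(pts[: n // 2], pts[n // 2 :])

U, s, Vt = np.linalg.svd(block, full_matrices=False)
k = 10                                    # small rank, k << n
block_k = (U[:, :k] * s[:k]) @ Vt[:k, :]  # rank-k approximation of the block
rel_err = np.linalg.norm(block - block_k) / np.linalg.norm(block)
```

Storing such blocks as rank-k factors instead of dense arrays is what yields the O(kn log n) storage quoted in the abstract.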
ANL Critical Assembly Covariance Matrix Generation
Energy Technology Data Exchange (ETDEWEB)
McKnight, Richard D. [Argonne National Lab. (ANL), Argonne, IL (United States); Grimm, Karl N. [Argonne National Lab. (ANL), Argonne, IL (United States)
2014-01-15
This report discusses the generation of a covariance matrix for selected critical assembly experiments that were carried out by Argonne National Laboratory (ANL) using four critical facilities, all of which are now decommissioned. The four ANL critical facilities are: ZPR-3, located at ANL-West (now Idaho National Laboratory, INL); ZPR-6 and ZPR-9, located at ANL-East (Illinois); and ZPPR, located at ANL-West.
The Performance Analysis Based on SAR Sample Covariance Matrix
Directory of Open Access Journals (Sweden)
Esra Erten
2012-03-01
Full Text Available Multi-channel systems appear in several fields of application in science. In the Synthetic Aperture Radar (SAR) context, multi-channel systems may refer to different domains, such as multi-polarization, multi-interferometric or multi-temporal data, or even a combination of them. Due to the inherent speckle phenomenon present in SAR images, a statistical description of the data is almost mandatory for its utilization. The complex images acquired over natural media present in general zero-mean circular Gaussian characteristics. In this case, second-order statistics such as the multi-channel covariance matrix fully describe the data. For practical situations, however, the covariance matrix has to be estimated using a limited number of samples, and this sample covariance matrix follows the complex Wishart distribution. In this context, the eigendecomposition of the multi-channel covariance matrix has been shown to be of high relevance in different areas regarding the physical properties of the imaged scene. Specifically, the maximum eigenvalue of the covariance matrix has been frequently used in applications such as target or change detection, estimation of the dominant scattering mechanism in polarimetric data, moving target indication, etc. In this paper, the statistical behavior of the maximum eigenvalue derived from the eigendecomposition of the sample multi-channel covariance matrix of multi-channel SAR images is simplified for the SAR community. Validation is performed against simulated data, and examples of estimation and detection problems using the analytical expressions are given as well.
Empirical State Error Covariance Matrix for Batch Estimation
Frisbee, Joe
2015-01-01
State estimation techniques effectively provide mean state estimates. However, the theoretical state error covariance matrices provided as part of these techniques often suffer from a lack of confidence in their ability to describe the uncertainty in the estimated states. By a reinterpretation of the equations involved in the weighted batch least squares algorithm, it is possible to directly arrive at an empirical state error covariance matrix. The proposed empirical state error covariance matrix will contain the effect of all error sources, known or not. This empirical error covariance matrix may be calculated as a side computation for each unique batch solution. Results based on the proposed technique will be presented for a simple, two-observer, measurement-error-only problem.
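A minimal sketch of the idea behind an empirical error covariance for batch weighted least squares: solve the normal equations, then let the average weighted residual variance rescale the theoretical covariance, so that unmodeled error sources inflate the reported uncertainty. This is a simplified stand-in for the paper's derivation; the linear model, weights, and noise level below are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

m, p = 200, 2                       # observations, state dimension
H = np.column_stack([np.ones(m), np.linspace(0, 1, m)])   # design matrix
x_true = np.array([1.0, -2.0])
sigma = 0.3                         # actual noise level, unknown to the estimator
y = H @ x_true + sigma * rng.normal(size=m)

W = np.eye(m)                       # assumed (possibly wrong) observation weights
N = H.T @ W @ H                     # normal matrix
x_hat = np.linalg.solve(N, H.T @ W @ y)

r = y - H @ x_hat                   # measurement residuals
# Empirical scaling: the average weighted residual variance replaces the
# assumed unit variance, so all actual error sources enter the covariance.
s2 = (r @ W @ r) / (m - p)
P_emp = s2 * np.linalg.inv(N)       # empirical state error covariance
```

With unit weights but true noise variance 0.09, the scaling factor s2 recovers roughly the actual variance, which the purely theoretical covariance inv(N) would miss.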
An Empirical State Error Covariance Matrix for Batch State Estimation
Frisbee, Joseph H., Jr.
2011-01-01
State estimation techniques serve effectively to provide mean state estimates. However, the state error covariance matrices provided as part of these techniques suffer from some degree of lack of confidence in their ability to adequately describe the uncertainty in the estimated states. A specific problem with the traditional form of state error covariance matrices is that they represent only a mapping of the assumed observation error characteristics into the state space. Any errors that arise from other sources (environment modeling, precision, etc.) are not directly represented in a traditional, theoretical state error covariance matrix. Consider that an actual observation contains only measurement error and that an estimated observation contains all other errors, known and unknown. It then follows that a measurement residual (the difference between expected and observed measurements) contains all errors for that measurement. Therefore, a direct and appropriate inclusion of the actual measurement residuals in the state error covariance matrix will result in an empirical state error covariance matrix. This empirical state error covariance matrix will fully account for the error in the state estimate. By way of a literal reinterpretation of the equations involved in the weighted least squares estimation algorithm, it is possible to arrive at an appropriate, and formally correct, empirical state error covariance matrix. The first specific step of the method is to use the average form of the weighted measurement residual variance performance index rather than its usual total weighted residual form. Next it is helpful to interpret the solution to the normal equations as the average of a collection of sample vectors drawn from a hypothetical parent population. From here, using a standard statistical analysis approach, it directly follows as to how to determine the standard empirical state error covariance matrix. This matrix will contain the total uncertainty in the
Neutron Resonance Parameters and Covariance Matrix of 239Pu
Energy Technology Data Exchange (ETDEWEB)
Derrien, Herve [ORNL; Leal, Luiz C [ORNL; Larson, Nancy M [ORNL
2008-08-01
In order to obtain the resonance parameters in a single energy range and the corresponding covariance matrix, a reevaluation of 239Pu was performed with the code SAMMY. The most recent experimental data were analyzed or reanalyzed in the energy range thermal to 2.5 keV. The normalization of the fission cross section data was reconsidered by taking into account the most recent measurements of Weston et al. and Wagemans et al. A full resonance parameter covariance matrix was generated. The method used to obtain realistic uncertainties on the average cross section calculated by SAMMY or other processing codes was examined.
An Empirical State Error Covariance Matrix Orbit Determination Example
Frisbee, Joseph H., Jr.
2015-01-01
State estimation techniques serve effectively to provide mean state estimates. However, the state error covariance matrices provided as part of these techniques suffer from some degree of lack of confidence in their ability to adequately describe the uncertainty in the estimated states. A specific problem with the traditional form of state error covariance matrices is that they represent only a mapping of the assumed observation error characteristics into the state space. Any errors that arise from other sources (environment modeling, precision, etc.) are not directly represented in a traditional, theoretical state error covariance matrix. First, consider that an actual observation contains only measurement error and that an estimated observation contains all other errors, known and unknown. Then it follows that a measurement residual (the difference between expected and observed measurements) contains all errors for that measurement. Therefore, a direct and appropriate inclusion of the actual measurement residuals in the state error covariance matrix of the estimate will result in an empirical state error covariance matrix. This empirical state error covariance matrix will fully include all of the errors in the state estimate. The empirical error covariance matrix is determined from a literal reinterpretation of the equations involved in the weighted least squares estimation algorithm. It is a formally correct, empirical state error covariance matrix obtained through use of the average form of the weighted measurement residual variance performance index rather than the usual total weighted residual form. Based on its formulation, this matrix will contain the total uncertainty in the state estimate, regardless as to the source of the uncertainty and whether the source is anticipated or not. It is expected that the empirical error covariance matrix will give a better, statistical representation of the state error in poorly modeled systems or when sensor performance
Neutron Multiplicity: LANL W Covariance Matrix for Curve Fitting
Energy Technology Data Exchange (ETDEWEB)
Wendelberger, James G. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2016-12-08
In neutron multiplicity counting one may fit a curve by minimizing an objective function, χ²ₙ. The objective function includes the inverse of an n × n matrix of covariances, W. The inverse of the W matrix has a closed-form solution. In addition, W⁻¹ is a tri-diagonal matrix. The closed form and tridiagonal nature allow for a simpler expression of the objective function χ²ₙ. Minimization of this simpler expression will provide the optimal parameters for the fitted curve.
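The computational payoff of a tridiagonal W⁻¹ can be sketched directly: the quadratic form rᵀW⁻¹r collapses to a sum over the diagonal and one off-diagonal, an O(n) expression instead of an O(n²) matrix product. The diagonal values below are hypothetical, chosen only to make W⁻¹ well defined.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50

# Hypothetical tridiagonal inverse covariance (values for illustration only)
d = 2.0 + rng.random(n)        # main diagonal of W^{-1}
e = -0.5 * np.ones(n - 1)      # sub/super-diagonal of W^{-1}

Winv = np.diag(d) + np.diag(e, 1) + np.diag(e, -1)
r = rng.normal(size=n)         # residuals between data and fitted curve

chi2_full = r @ Winv @ r                                         # dense, O(n^2)
chi2_band = np.sum(d * r**2) + 2.0 * np.sum(e * r[:-1] * r[1:])  # banded, O(n)
```

Both evaluations agree to machine precision; only the banded one scales linearly with the number of multiplicity gates.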
Gini covariance matrix and its affine equivariant version
Weatherall, Lauren Anne
Gini's mean difference (GMD) and its derivatives such as the Gini index have been widely used as alternative measures of variability for over a century in many research fields, especially in finance, economics and social welfare. In this dissertation, we generalize the univariate GMD to the multivariate case and propose a new covariance matrix, called the Gini covariance matrix (GCM). The extension is natural, being based on the covariance representation of GMD with the notion of the multivariate spatial rank function. In order to gain the affine equivariance property for GCM, we utilize the transformation-retransformation (TR) technique and obtain a TR version of GCM that turns out to be a symmetrized M-functional. Indeed, both GCMs are symmetrized approaches based on the difference of two independent variables without reference to a location, hence avoiding some arbitrary definition of location for non-symmetric distributions. We study the properties of both GCMs. They possess the so-called independence property, which is highly important, for example, in independent component analysis. Influence functions of the two GCMs are derived to assess their robustness. They are found to be more robust than the regular covariance matrix but less robust than the Tyler and Dümbgen M-functionals. Under elliptical distributions, the relationship between the scatter parameter and the two GCMs is obtained. With this relationship, principal component analysis (PCA) based on GCM is possible. Estimation of the two GCMs is presented. We study the asymptotic behavior of the estimators. √n-consistency and asymptotic normality of the estimators are established. The asymptotic relative efficiency (ARE) of the TR-GCM estimator with respect to the sample covariance matrix is compared to that of the Tyler and Dümbgen M-estimators. With little loss on efficiency (UCI machine learning repository). Relying on some graphical and numerical summaries, Gini-based PCA demonstrates its competitive performance.
MIMO Radar Transmit Beampattern Design Without Synthesising the Covariance Matrix
Ahmed, Sajid
2013-10-28
Compared to phased-array, multiple-input multiple-output (MIMO) radars provide more degrees-of-freedom (DOF) that can be exploited for improved spatial resolution, better parametric identifiability, lower side-lobe levels at the transmitter/receiver, and a variety of transmit beampattern designs. The design of the transmit beampattern generally requires the waveforms to have arbitrary auto- and cross-correlation properties. The generation of such waveforms is a complicated two-step process. In the first step a waveform covariance matrix is synthesised, which is a constrained optimisation problem. In the second step, actual waveforms are designed to realise this covariance matrix, which is also a constrained optimisation problem. Our proposed scheme converts this two-step constrained optimisation problem into a one-step unconstrained optimisation problem. In the proposed scheme, in contrast to synthesising the covariance matrix for the desired beampattern, nT independent finite-alphabet constant-envelope waveforms are generated and pre-processed, with weight matrix W, before transmitting from the antennas. In this work, two weight matrices are proposed that can be easily optimised for the desired symmetric and non-symmetric beampatterns and guarantee equal average power transmission from each antenna. Simulation results validate our claims.
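The link between the weight matrix W and the transmit beampattern can be sketched as follows: with unit-variance waveforms, the transmitted covariance is R = WWᴴ and the beampattern is P(θ) = a(θ)ᴴ R a(θ) for steering vector a(θ). The weight matrix below (two scaled steering vectors for a half-wavelength uniform linear array) is a hypothetical choice, not the paper's optimized W, but it already gives equal per-antenna power and beams at the intended angles.

```python
import numpy as np

nT = 10                                   # transmit antennas, half-wavelength ULA

def steer(theta_deg):
    # steering vector of the ULA towards angle theta
    k = np.pi * np.sin(np.deg2rad(theta_deg))
    return np.exp(1j * k * np.arange(nT))

# Hypothetical weight matrix: two beams, at -20 and +30 degrees,
# scaled so that each antenna transmits equal average power.
W = np.column_stack([steer(-20), steer(30)]) / np.sqrt(2 * nT)
R = W @ W.conj().T                        # resulting waveform covariance matrix

def beampattern(theta_deg):
    a = steer(theta_deg)
    return np.real(a.conj() @ R @ a)

angles = np.arange(-90, 91)
bp = np.array([beampattern(t) for t in angles])
```

The diagonal of R is constant (1/nT per antenna), which is the equal-average-power property the abstract requires, and the pattern peaks at the two chosen directions.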
Using Covariance Matrix for Change Detection of Polarimetric SAR Data
Esmaeilzade, M.; Jahani, F.; Amini, J.
2017-09-01
Nowadays change detection plays an important role in civil and military fields. Synthetic Aperture Radar (SAR) images, due to their independence of atmospheric conditions and cloud cover, have attracted much attention in change detection applications. When SAR data are used, one of the appropriate ways to represent the backscattered signal is the covariance matrix, which follows the Wishart distribution. Based on this distribution, a statistical test for the equality of two complex variance-covariance matrices can be used. In this study, two full-polarization L-band data sets from UAVSAR are used for change detection in agricultural fields and urban areas in a region of the United States, where the first image is from 2014 and the second from 2017. To investigate the effect of polarization on the detected rate of change, full-polarization and dual-polarization data were used and the results were compared. According to the results, full polarization shows more changes than dual polarization.
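One common form of the Wishart equality test (following the likelihood-ratio statistic of Conradsen et al., reproduced here from memory as a hedged sketch) compares two n- and m-look sample covariance matrices X and Y via ln Q; values near 0 indicate equal covariances (no change), strongly negative values indicate change. The 3 × 3 diagonal matrices below are toy stand-ins for polarimetric covariance matrices.

```python
import numpy as np

def wishart_lr_stat(X, Y, n, m):
    """ln Q for H0: Sigma_X == Sigma_Y, where X and Y are n- and m-look
    sample covariance matrices (p x p, Hermitian positive definite)."""
    p = X.shape[0]
    ld = lambda A: np.linalg.slogdet(A)[1]   # log-determinant
    return (p * ((n + m) * np.log(n + m) - n * np.log(n) - m * np.log(m))
            + n * ld(X) + m * ld(Y) - (n + m) * ld(X + Y))

# Toy 3x3 covariance matrices standing in for full-pol SAR data
A = np.diag([1.0, 2.0, 3.0])
B = np.diag([1.0, 2.0, 3.0])   # identical to A: "no change"
C = np.diag([5.0, 0.1, 3.0])   # very different from A: "change"

lnq_same = wishart_lr_stat(A, B, 16, 16)
lnq_diff = wishart_lr_stat(A, C, 16, 16)
```

For equal look counts, identical matrices give ln Q = 0 exactly, while strongly differing matrices push ln Q far below zero, which is what a thresholded change map exploits pixel by pixel.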
Analysis of gene set using shrinkage covariance matrix approach
Karjanto, Suryaefiza; Aripin, Rasimah
2013-09-01
Microarray methodology has been exploited for different applications such as gene discovery and disease diagnosis. This technology is also used for quantitative and highly parallel measurements of gene expression. Recently, microarrays have been one of the main interests of statisticians because they provide a perfect example of the paradigms of modern statistics. In this study, an alternative approach to estimating the covariance matrix is proposed to solve the high-dimensionality problem in microarrays. An extension of the traditional Hotelling's T² statistic is constructed for determining significant gene sets across experimental conditions using a shrinkage approach. Real data sets were used as illustrations to compare the performance of the proposed methods with other methods. The results across the methods are consistent, implying that this approach provides an alternative to existing techniques.
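The high-dimensionality problem and the shrinkage fix can be sketched in a few lines: with fewer samples than genes the sample covariance is singular, so a Hotelling-type T² cannot be formed; shrinking toward a scaled identity restores invertibility. The fixed shrinkage intensity below is a hypothetical choice (in practice it would be chosen in a data-driven way), and the zero reference profile is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 10, 40                      # fewer samples than genes: S is singular

X = rng.normal(size=(n, p))        # toy expression matrix (samples x genes)
S = np.cov(X, rowvar=False)        # rank-deficient sample covariance

lam = 0.2                          # shrinkage intensity (hypothetical, fixed)
target = (np.trace(S) / p) * np.eye(p)
S_shrunk = (1 - lam) * S + lam * target   # shrinkage estimator: PD by construction

# The shrunken estimate is invertible, so a Hotelling-type T^2 can be formed.
diff = X.mean(axis=0)              # mean difference vs. a zero reference profile
T2 = n * diff @ np.linalg.solve(S_shrunk, diff)
```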
A Comparison of Methods for Estimating the Determinant of High-Dimensional Covariance Matrix.
Hu, Zongliang; Dong, Kai; Dai, Wenlin; Tong, Tiejun
2017-09-21
The determinant of the covariance matrix for high-dimensional data plays an important role in statistical inference and decision making. It has many real applications, including statistical tests and information theory. Due to the statistical and computational challenges of high dimensionality, little work has been proposed in the literature for estimating the determinant of a high-dimensional covariance matrix. In this paper, we estimate the determinant of the covariance matrix using some recent proposals for estimating the high-dimensional covariance matrix. Specifically, we consider a total of eight covariance matrix estimation methods for comparison. Through extensive simulation studies, we explore and summarize some interesting comparison results among all compared methods. We also provide practical guidelines, based on the sample size, the dimension, and the correlation of the data set, for estimating the determinant of a high-dimensional covariance matrix. Finally, from the perspective of the loss function, the comparison study in this paper may also serve as a proxy to assess the performance of covariance matrix estimation.
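Why the plug-in sample covariance is a poor determinant estimator in high dimensions can be seen in a small simulation: even when it is invertible, its log-determinant is strongly biased downward, while a regularized estimator lands much closer to the truth. The ridge-type estimator and its shrinkage weight below are one hypothetical choice among the many estimators the paper compares.

```python
import numpy as np

rng = np.random.default_rng(3)
n, p = 50, 20
X = rng.normal(size=(n, p))        # true covariance = I, so true log-det = 0

S = np.cov(X, rowvar=False)
sign, logdet_sample = np.linalg.slogdet(S)

# Simple ridge-type regularization (one of many possible choices)
lam = 0.3
S_reg = (1 - lam) * S + lam * np.eye(p)
_, logdet_reg = np.linalg.slogdet(S_reg)

err_sample = abs(logdet_sample - 0.0)  # plug-in error
err_reg = abs(logdet_reg - 0.0)        # regularized error
```

The downward bias of the plug-in log-determinant grows with p/n, which is exactly the regime the paper's guidelines address.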
DEFF Research Database (Denmark)
Yang, Yukay
I consider multivariate (vector) time series models in which the error covariance matrix may be time-varying. I derive a test of constancy of the error covariance matrix against the alternative that the covariance matrix changes over time. I design a new family of Lagrange-multiplier tests against...
CSIR Research Space (South Africa)
Herselman, PL
2008-09-01
Full Text Available Asymptotically optimal coherent detection techniques yield sub-clutter visibility in heavy-tailed sea clutter. The adaptive linear quadratic detector inherently assumes spectral homogeneity for the reference window of the covariance matrix estimator...
Energy Technology Data Exchange (ETDEWEB)
Theiler, James P [Los Alamos National Laboratory; Cao, Guangzhi [PURDUE UNIV; Bouman, Charles A [PURDUE UNIV
2009-01-01
Many detection algorithms in hyperspectral image analysis, from well-characterized gaseous and solid targets to deliberately uncharacterized anomalies and anomalous changes, depend on accurately estimating the covariance matrix of the background. In practice, the background covariance is estimated from samples in the image, and imprecision in this estimate can lead to a loss of detection power. In this paper, we describe the sparse matrix transform (SMT) and investigate its utility for estimating the covariance matrix from a limited number of samples. The SMT is formed by a product of pairwise coordinate (Givens) rotations, which can be efficiently estimated using greedy optimization. Experiments on hyperspectral data show that the estimate accurately reproduces even small eigenvalues and eigenvectors. In particular, we find that using the SMT to estimate the covariance matrix used in the adaptive matched filter leads to consistently higher signal-to-noise ratios.
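The product-of-Givens-rotations idea can be sketched with a greedy Jacobi-style loop: repeatedly pick the largest off-diagonal entry, apply the pairwise rotation that zeroes it, and rebuild the covariance from the rotated diagonal. This is only a toy sketch of the mechanism; the actual SMT estimates its rotations from samples with a model-selection criterion, not from the exact matrix as done here.

```python
import numpy as np

def smt_covariance(S, n_rot):
    """Greedy Givens-rotation sketch: approximately diagonalize S with
    n_rot pairwise rotations, then rebuild from the rotated diagonal."""
    p = S.shape[0]
    A = S.copy()
    E = np.eye(p)                     # accumulated product of rotations
    for _ in range(n_rot):
        off = np.abs(A - np.diag(np.diag(A)))
        i, j = np.unravel_index(np.argmax(off), off.shape)
        if off[i, j] < 1e-12:
            break
        # Jacobi rotation angle that zeroes A[i, j]
        theta = 0.5 * np.arctan2(2 * A[i, j], A[j, j] - A[i, i])
        c, s = np.cos(theta), np.sin(theta)
        G = np.eye(p)
        G[i, i] = c
        G[j, j] = c
        G[i, j] = s
        G[j, i] = -s
        A = G.T @ A @ G
        E = E @ G
    return E @ np.diag(np.diag(A)) @ E.T

rng = np.random.default_rng(4)
p = 8
L = rng.normal(size=(p, p))
S = L @ L.T                            # a dense SPD "covariance"
S_hat = smt_covariance(S, 120)
rel_err = np.linalg.norm(S - S_hat) / np.linalg.norm(S)
```

Each rotation touches only two coordinates, which is what makes the transform sparse and cheap to apply inside a matched filter.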
ℋ-matrix techniques for approximating large covariance matrices and estimating its parameters
Litvinenko, Alexander
2016-10-25
In this work the task is to use the available measurements to estimate the unknown hyper-parameters (variance, smoothness parameter and covariance length) of the covariance function. We do this by maximizing the joint log-likelihood function. This is a non-convex and non-linear problem. To overcome the cubic complexity in linear algebra, we approximate the discretised covariance function in the hierarchical (ℋ-) matrix format. The ℋ-matrix format has a log-linear computational cost and storage O(kn log n), where the rank k is a small integer. On each iteration step of the optimization procedure the covariance matrix itself, its determinant and its Cholesky decomposition are recomputed within the ℋ-matrix format. (© 2016 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim)
Construction and use of gene expression covariation matrix
Directory of Open Access Journals (Sweden)
Bellis Michel
2009-07-01
Full Text Available Abstract Background One essential step in the massive analysis of transcriptomic profiles is the calculation of the correlation coefficient, a value used to select pairs of genes with similar or inverse transcriptional profiles across a large fraction of the biological conditions examined. Until now, the choice between the two available methods for calculating the coefficient has been dictated mainly by technological considerations. Specifically, in analyses based on double-channel techniques, researchers have been required to use covariation correlation, i.e. the correlation between gene expression changes measured between several pairs of biological conditions, expressed for example as fold-change. In contrast, in analyses of single-channel techniques scientists have been restricted to the use of coexpression correlation, i.e. correlation between gene expression levels. To our knowledge, nobody has ever examined the possible benefits of using covariation instead of coexpression in massive analyses of single-channel microarray results. Results We describe here how single-channel techniques can be treated like double-channel techniques and used to generate both gene expression changes and covariation measures. We also present a new method that allows the calculation of both positive and negative correlation coefficients between genes. First, we perform systematic comparisons between two given biological conditions and classify, for each comparison, genes as increased (I), decreased (D), or not changed (N). As a result, the original series of n gene expression level measures assigned to each gene is replaced by an ordered string of n(n−1)/2 symbols, e.g. IDDNNIDID....DNNNNNNID, with the length of the string corresponding to the number of comparisons. In a second step, positive and negative covariation matrices (CVM) are constructed by calculating statistically significant positive or negative correlation scores for any pair of genes by comparing their
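The string-building step described above can be sketched directly: every ordered pair of conditions yields one I/D/N symbol per gene, and two genes can then be compared symbol by symbol. The fixed change threshold and the simple agreement fraction below are hypothetical stand-ins for the paper's statistically calibrated classification and correlation scores.

```python
import numpy as np

def change_string(levels, thresh=0.5):
    """Classify each pairwise comparison of conditions as
    Increased / Decreased / Not changed (threshold is illustrative)."""
    n = len(levels)
    out = []
    for a in range(n):
        for b in range(a + 1, n):       # n(n-1)/2 ordered comparisons
            d = levels[b] - levels[a]
            out.append("I" if d > thresh else "D" if d < -thresh else "N")
    return "".join(out)

def covariation_score(s1, s2):
    """Plain symbol agreement between two I/D/N strings (a simple
    stand-in for the paper's significance-based scores)."""
    return sum(a == b for a, b in zip(s1, s2)) / len(s1)

g1 = np.array([1.0, 2.0, 3.0, 1.0])     # toy expression profiles over 4 conditions
g2 = np.array([2.0, 3.1, 4.2, 2.1])     # co-varies with g1
g3 = np.array([3.0, 2.0, 1.0, 3.0])     # inverse profile

s1, s2, s3 = change_string(g1), change_string(g2), change_string(g3)
```

With 4 conditions each string has 4·3/2 = 6 symbols; the co-varying pair agrees on all of them, the inverse pair on almost none.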
Tyler's Covariance Matrix Estimator in Elliptical Models With Convex Structure
Soloveychik, Ilya; Wiesel, Ami
2014-10-01
We address structured covariance estimation in elliptical distributions by assuming that the covariance is a priori known to belong to a given convex set, e.g., the set of Toeplitz or banded matrices. We consider the General Method of Moments (GMM) optimization applied to robust Tyler's scatter M-estimator subject to these convex constraints. Unfortunately, GMM turns out to be non-convex due to the objective. Instead, we propose a new COCA estimator - a convex relaxation which can be efficiently solved. We prove that the relaxation is tight in the unconstrained case for a finite number of samples, and in the constrained case asymptotically. We then illustrate the advantages of COCA in synthetic simulations with structured compound Gaussian distributions. In these examples, COCA outperforms competing methods such as Tyler's estimator and its projection onto the structure set.
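The unconstrained Tyler estimator that COCA builds on can be sketched with its classical fixed-point iteration: reweight each sample by the inverse of its Mahalanobis-type norm and renormalize the scale. The elliptical test distribution below (Gaussian directions with Cauchy radial scaling) and the trace normalization are standard choices, used here only to show that the shape matrix is recovered despite very heavy tails.

```python
import numpy as np

def tyler_estimator(X, n_iter=100, tol=1e-8):
    """Tyler's M-estimator of scatter via fixed-point iteration
    (unconstrained version; COCA adds convex structure on top)."""
    n, p = X.shape
    S = np.eye(p)
    for _ in range(n_iter):
        Sinv = np.linalg.inv(S)
        # w_i = 1 / (x_i^T S^{-1} x_i)
        w = 1.0 / np.einsum("ij,jk,ik->i", X, Sinv, X)
        S_new = (p / n) * (X * w[:, None]).T @ X
        S_new *= p / np.trace(S_new)      # fix the (arbitrary) scale
        if np.linalg.norm(S_new - S) < tol:
            S = S_new
            break
        S = S_new
    return S

rng = np.random.default_rng(6)
p = 3
Sigma = np.array([[2.0, 0.5, 0.0], [0.5, 1.0, 0.3], [0.0, 0.3, 0.5]])
L = np.linalg.cholesky(Sigma)
# elliptical heavy-tailed samples: Gaussian directions, Cauchy radial scaling
g = rng.normal(size=(500, p)) @ L.T
tau = np.abs(rng.standard_cauchy(size=(500, 1)))
X = g * tau

S_hat = tyler_estimator(X)
Sigma_shape = p * Sigma / np.trace(Sigma)   # true shape, same normalization
rel_err = np.linalg.norm(S_hat - Sigma_shape) / np.linalg.norm(Sigma_shape)
```

Because the weights cancel the radial part of each sample exactly, the iteration is insensitive to the Cauchy scaling, which is the robustness property the abstract refers to.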
An Empirical State Error Covariance Matrix for the Weighted Least Squares Estimation Method
Frisbee, Joseph H., Jr.
2011-01-01
State estimation techniques effectively provide mean state estimates. However, the theoretical state error covariance matrices provided as part of these techniques often suffer from a lack of confidence in their ability to describe the uncertainty in the estimated states. By a reinterpretation of the equations involved in the weighted least squares algorithm, it is possible to directly arrive at an empirical state error covariance matrix. This proposed empirical state error covariance matrix will contain the effect of all error sources, known or not. Results based on the proposed technique will be presented for a simple, two-observer, measurement-error-only problem.
An Alternative Method for Computing Mean and Covariance Matrix of Some Multivariate Distributions
Radhakrishnan, R.; Choudhury, Askar
2009-01-01
Computing the mean and covariance matrix of some multivariate distributions, in particular, multivariate normal distribution and Wishart distribution are considered in this article. It involves a matrix transformation of the normal random vector into a random vector whose components are independent normal random variables, and then integrating…
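The transformation idea for the multivariate normal case can be sketched concretely: write X = μ + LZ with Z a vector of independent standard normals and L the Cholesky factor of Σ, so that E[X] = μ and Cov(X) = LLᵀ = Σ. The sketch below verifies this by Monte Carlo with invented values of μ and Σ.

```python
import numpy as np

rng = np.random.default_rng(7)

mu = np.array([1.0, -1.0, 0.5])
Sigma = np.array([[1.0, 0.6, 0.2],
                  [0.6, 2.0, 0.0],
                  [0.2, 0.0, 0.5]])
L = np.linalg.cholesky(Sigma)          # the matrix transformation

# Independent standard normal components, transformed into correlated draws
Z = rng.normal(size=(200_000, 3))
X = mu + Z @ L.T

mean_err = np.max(np.abs(X.mean(axis=0) - mu))
cov_err = np.max(np.abs(np.cov(X, rowvar=False) - Sigma))
```

The same decomposition into independent components is what makes the mean and covariance integrals tractable analytically.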
Asymptotic theory for the sample covariance matrix of a heavy-tailed multivariate time series
DEFF Research Database (Denmark)
Davis, Richard A.; Mikosch, Thomas Valentin; Pfaffel, Olivier
2016-01-01
In this paper we give an asymptotic theory for the eigenvalues of the sample covariance matrix of a multivariate time series. The time series constitutes a linear process across time and between components. The input noise of the linear process has regularly varying tails with index α ∈ (0,4); in particular, the time series has infinite fourth moment. We derive the limiting behavior for the largest eigenvalues of the sample covariance matrix and show point process convergence of the normalized eigenvalues. The limiting process has an explicit form involving points of a Poisson process and eigenvalues of a non-negative definite matrix. Based on this convergence we derive limit theory for a host of other continuous functionals of the eigenvalues, including the joint convergence of the largest eigenvalues, the joint convergence of the largest eigenvalue and the trace of the sample covariance matrix …
Spatio-Temporal Audio Enhancement Based on IAA Noise Covariance Matrix Estimates
DEFF Research Database (Denmark)
Nørholm, Sidsel Marie; Jensen, Jesper Rindom; Christensen, Mads Græsbøll
2014-01-01
A method for estimating the noise covariance matrix in a multichannel setup is proposed. The method is based on the iterative adaptive approach (IAA), which only needs short segments of data to estimate the covariance matrix. Therefore, the method can be used for fast varying signals … to an amplitude and phase estimation (APES) based filter. For a fixed number of samples, the performance in terms of signal-to-noise ratio can be increased by using the IAA method, whereas if the filter size is fixed and the number of samples in the APES based filter is increased, the APES based filter performs …
Positive semidefinite integrated covariance estimation, factorizations and asynchronicity
DEFF Research Database (Denmark)
Sauri, Orimar; Lunde, Asger; Laurent, Sébastien
2017-01-01
An estimator of the ex-post covariation of log-prices under asynchronicity and microstructure noise is proposed. It uses the Cholesky factorization of the covariance matrix in order to exploit the heterogeneity in trading intensities to estimate the different parameters sequentially with as many observations as possible. The estimator is positive semidefinite by construction. We derive asymptotic results and confirm their good finite sample properties by means of a Monte Carlo simulation. In the application we forecast portfolio Value-at-Risk and sector risk exposures for a portfolio of 52 stocks. We …
Litvinenko, Alexander
2017-09-26
The main goal of this article is to introduce the parallel hierarchical matrix library HLIBpro to the statistical community. We describe the HLIBCov package, which is an extension of the HLIBpro library for approximating large covariance matrices and maximizing likelihood functions. We show that an approximate Cholesky factorization of a dense matrix of size 2M × 2M can be computed on a modern multi-core desktop in a few minutes. Further, HLIBCov is used for estimating unknown parameters such as the covariance length, variance and smoothness parameter of a Matérn covariance function by maximizing the joint Gaussian log-likelihood function. The computational bottleneck here is the expensive linear algebra arithmetic due to large and dense covariance matrices. Therefore covariance matrices are approximated in the hierarchical (ℋ-) matrix format with computational cost O(k²n log² n / p) and storage O(kn log n), where the rank k is a small integer (typically k < 25), p is the number of cores and n is the number of locations on a fairly general mesh. We demonstrate a synthetic example where the true values of the parameters are known. For reproducibility we provide the C++ code, the documentation, and the synthetic data.
DEFF Research Database (Denmark)
Janssen, Anja; Mikosch, Thomas Valentin; Rezapour, Mohsen
2017-01-01
of the sample covariance matrix. While we show that in the case of heavy-tailed innovations the limiting behavior resembles that of completely independent observations, we also derive that in the case of a heavy-tailed volatility sequence the possible limiting behavior is more diverse, i.e. allowing...
CSIR Research Space (South Africa)
Salmon, BP
2012-07-01
...for each incoming observation vector by predicting and updating the vector. In the prediction step, the state vector x(k|k-1),b and internal covariance matrix B(k|k-1),b are predicted. The predicted state vector estimate is computed as x^(k|k-1),b = f(x^(k-1|k-1),b) (5), and the predicted internal covariance matrix is computed as B(k|k-1),b = Q(k-1),b + F B(k-1|k-1),b F^T (6). The matrix F is the local linearization of the non-linear transition function f. In the updating...
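The prediction step described in this abstract is the standard extended-Kalman prediction; a minimal sketch, with an illustrative transition model and Jacobian (not the paper's):

```python
import numpy as np

def ekf_predict(x, B, f, F, Q):
    """Extended Kalman prediction step: propagate the state estimate through
    the (possibly non-linear) transition f, and the internal covariance
    through its local linearization F, adding the process noise Q."""
    x_pred = f(x)                 # x^(k|k-1) = f(x^(k-1|k-1)), eq. (5)
    B_pred = Q + F @ B @ F.T      # B(k|k-1) = Q(k-1) + F B(k-1|k-1) F^T, eq. (6)
    return x_pred, B_pred

# illustrative linear-ish transition and its Jacobian
x = np.array([1.0, 0.5])
B = 0.1 * np.eye(2)
Q = 0.01 * np.eye(2)
f = lambda s: np.array([s[0] + 0.1 * s[1], 0.9 * s[1]])
F = np.array([[1.0, 0.1],
              [0.0, 0.9]])
x_pred, B_pred = ekf_predict(x, B, f, F, Q)
```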
Ex-post evaluation of tax legislation in the Netherlands
S.J.C. Hemels (Sigrid)
2011-01-01
Introduction: Since the end of the 20th century, ex-post evaluation of tax legislation has consistently been part of the agenda of the Dutch government. In 2005, the 2001 Income Tax Act was evaluated. In addition, several tax expenditures are evaluated each year. Tax expenditures can be a
Directory of Open Access Journals (Sweden)
K. Karthikeyan
2012-10-01
This paper describes the application of an evolutionary algorithm, the Restart Covariance Matrix Adaptation Evolution Strategy (RCMA-ES), to the Generation Expansion Planning (GEP) problem. RCMA-ES is a class of continuous Evolutionary Algorithm (EA) derived from the concept of self-adaptation in evolution strategies, which adapts the covariance matrix of a multivariate normal search distribution. The original GEP problem is modified by incorporating the Virtual Mapping Procedure (VMP). The GEP problem of synthetic test systems for 6-year, 14-year and 24-year planning horizons with five types of candidate units is considered. Two different constraint-handling methods are incorporated and the impact of each method is compared. In addition, comparison and validation have also been made with the dynamic programming method.
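The covariance-adaptation idea at the core of CMA-ES can be sketched in a greatly simplified form. This is not the full algorithm (evolution paths and path-based step-size control are replaced by a simple geometric decay), and the objective is an illustrative sphere function, not a GEP cost:

```python
import numpy as np

def simple_cma_es(f, x0, sigma=0.5, lam=20, iters=150, seed=1):
    """Simplified CMA-ES sketch: sample offspring from a multivariate normal,
    recombine the best mu of them, and adapt the covariance matrix of the
    search distribution with a rank-mu update."""
    rng = np.random.default_rng(seed)
    n, mu = len(x0), lam // 2
    w = np.log(mu + 0.5) - np.log(np.arange(1, mu + 1))
    w /= w.sum()                                   # recombination weights
    m, C = np.array(x0, dtype=float), np.eye(n)
    for _ in range(iters):
        A = np.linalg.cholesky(C + 1e-10 * np.eye(n))
        X = m + sigma * rng.standard_normal((lam, n)) @ A.T
        best = np.argsort([f(x) for x in X])[:mu]  # truncation selection
        y = (X[best] - m) / sigma                  # selected steps
        m = m + sigma * (w @ y)                    # move mean to weighted best
        C = 0.7 * C + 0.3 * (y.T * w) @ y          # rank-mu covariance update
        sigma *= 0.98       # geometric decay in place of path-based control
    return m

xopt = simple_cma_es(lambda x: np.sum((x - 2.0) ** 2), np.zeros(4))
```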
The Masked Sample Covariance Estimator: An Analysis via the Matrix Laplace Transform
2012-02-01
A common assumption is that the covariance matrix is nearly sparse, so that one can focus on estimating only the significant entries. To analyze this approach, Levina and Vershynin (2011) introduce a formalism called masked covariance estimation. ... in contrast to the sample complexity n = O(B log^5 p) obtained by Levina and Vershynin.
Cloud-Based DDoS HTTP Attack Detection Using Covariance Matrix Approach
Directory of Open Access Journals (Sweden)
Abdulaziz Aborujilah
2017-01-01
In this era of technology, cloud computing has become an essential part of the IT services used in daily life. In this regard, website hosting services are gradually moving to the cloud. This adds new valued features to cloud-based websites and at the same time introduces new threats to such services. A DDoS attack is one such serious threat. A covariance matrix approach is used in this article to detect such attacks. The results were encouraging, according to the confusion matrix and ROC descriptors.
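One simple variant of covariance-based anomaly detection can be sketched as follows. The features, thresholds and the Frobenius-norm deviation test are illustrative assumptions, not the paper's exact statistic:

```python
import numpy as np

def covariance_signature(window):
    """Covariance matrix of per-request features over a traffic window."""
    return np.cov(window, rowvar=False)

def is_attack(window, baseline_cov, threshold):
    """Flag a window whose covariance signature deviates from the
    normal-traffic baseline (Frobenius norm of the difference)."""
    return np.linalg.norm(covariance_signature(window) - baseline_cov) > threshold

# synthetic traffic with three per-window features, e.g. request rate,
# average header count, inter-arrival variance (purely illustrative)
rng = np.random.default_rng(2)
normal = rng.multivariate_normal([5.0, 2.0, 1.0],
                                 [[1.0, 0.5, 0.0],
                                  [0.5, 1.0, 0.0],
                                  [0.0, 0.0, 0.2]], size=500)
baseline = covariance_signature(normal)
flood = rng.multivariate_normal([50.0, 2.0, 1.0],
                                np.diag([25.0, 1.0, 0.2]), size=500)
```

A flooding attack changes not just feature means but the joint second-order structure, which is what the covariance signature captures.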
Kenya; Ex Post Assessment of Longer-Term Program Engagement
International Monetary Fund
2008-01-01
This paper discusses key findings of the Ex Post Assessment (EPA) of Longer-Term Program Engagement paper for Kenya. This EPA focuses on 1993–2007, when Kenya was engaged in four successive IMF arrangements. Macroeconomic policy design was broadly appropriate, and implementation was generally sound. Growth slowed in the 1990s, but picked up after the 2002 elections, reflecting buoyant global conditions, structural reforms, and a surge of private capital inflows. Monetary policies were complic...
Covariances of nuclear matrix elements for 0νββ decay
Energy Technology Data Exchange (ETDEWEB)
Fogli, G L; Rotunno, A M [Dipartimento Interateneo di Fisica 'Michelangelo Merlin', Via Orabona 4, 70126 Bari (Italy); Lisi, E, E-mail: annamaria.rotunno@ba.infn.i [Istituto Nazionale di Fisica Nucleare, Sezione di Bari, Via Orabona 4, 70126 Bari (Italy)
2010-01-01
Estimates of nuclear matrix elements (NME) for neutrinoless double beta decay (0νββ) based on the quasiparticle random phase approximation (QRPA) are affected by theoretical uncertainties, which may play a dominant role in comparison with projected experimental errors of future 0νββ experiments. We discuss the estimated variances and covariances of the NME of several candidate nuclei within the QRPA, focusing on the following aspects: (1) the comparison of 0νββ signals, or limits, in different nuclei; (2) the prospects for testing nonstandard 0ν2β mechanisms in future experiments.
Montier, L.; Plaszczynski, S.; Levrier, F.; Tristram, M.; Alina, D.; Ristorcelli, I.; Bernard, J.-P.
2015-02-01
With the forthcoming release of high precision polarization measurements, such as from the Planck satellite, the metrology of polarization needs to be improved. In particular, it is important to have full knowledge of the noise properties when estimating polarization fraction and polarization angle, which suffer from well-known biases. While strong simplifying assumptions have usually been made in polarization analysis, we present a method for including the full covariance matrix of the Stokes parameters in estimates of the distributions of the polarization fraction and angle. We thereby quantified the impact of the noise properties on the biases in the observational quantities and derived analytical expressions for the probability density functions of these quantities that take the full complexity of the covariance matrix into account, including the Stokes I intensity components. We performed Monte Carlo simulations to explore the impact of the noise properties on the statistical variance and bias of the polarization fraction and angle. We show that for low variations (< 10%) of the effective ellipticity between the Q and U components around the symmetric case, the covariance matrix may be simplified as is usually done, with a negligible impact on the bias. For intensity signal-to-noise ratios lower than 10, the uncertainty on the total intensity is shown to drastically increase the uncertainty of the polarization fraction, but not its relative bias, while a 10% correlation between the intensity and the polarized components does not significantly affect the bias of the polarization fraction. We compare estimates of the uncertainties that affect polarization measurements, addressing limitations of the estimates of the S/N, and we show how to build conservative confidence intervals for polarization fraction and angle simultaneously. This study, which is the first in a set of papers dedicated to analysing polarization measurements, focuses on the
Ex-post evaluations of demand forecast accuracy
DEFF Research Database (Denmark)
Nicolaisen, Morten Skou; Driscoll, Patrick Arthur
2014-01-01
of the largest ex-post studies of demand forecast accuracy for transport infrastructure projects. The focus is twofold: to provide an overview of observed levels of demand forecast inaccuracy and to explore the primary explanations offered for the observed inaccuracy. Inaccuracy in the form of both bias… Travel demand forecasts play a crucial role in the preparation of decision support to policy makers in the field of transport planning. The results feed directly into impact appraisals such as cost benefit analyses and environmental impact assessments, which are mandatory for large public works…
Directory of Open Access Journals (Sweden)
Clarence C. Y. Kwan
2010-07-01
This study considers, from a pedagogic perspective, a crucial requirement for the covariance matrix of security returns in mean-variance portfolio analysis. Although the requirement that the covariance matrix be positive definite is fundamental in modern finance, it has not received any attention in standard investment textbooks. Being unaware of the requirement could cause confusion for students over some strange portfolio results that are based on seemingly reasonable input parameters. This study considers the requirement both informally and analytically. Electronic spreadsheet tools for constrained optimization and basic matrix operations are utilized to illustrate the various concepts involved.
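The positive-definiteness requirement can be illustrated with a small numerical check: the three pairwise correlations below each look plausible on their own, yet they are jointly impossible, and the "covariance matrix" they imply breaks portfolio arithmetic:

```python
import numpy as np

# Asset 1 strongly positively correlated with assets 2 and 3, while 2 and 3
# are strongly negatively correlated with each other: jointly impossible.
R = np.array([[1.0,  0.9,  0.9],
              [0.9,  1.0, -0.9],
              [0.9, -0.9,  1.0]])

eigvals = np.linalg.eigvalsh(R)       # a negative eigenvalue means indefinite

try:
    np.linalg.cholesky(R)             # Cholesky fails for an indefinite matrix
    positive_definite = True
except np.linalg.LinAlgError:
    positive_definite = False
```

With such an input, a "minimum-variance" portfolio can appear to have negative variance, which is the kind of strange result the abstract warns about.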
Directory of Open Access Journals (Sweden)
Sivananaithaperumal Sudalaiandi
2014-06-01
This paper presents automatic tuning of multivariable Fractional-Order Proportional, Integral and Derivative (FO-PID) controller parameters using the Covariance Matrix Adaptation Evolution Strategy (CMAES) algorithm. Decoupled multivariable FO-PI and FO-PID controller structures are considered. The Oustaloup integer-order approximation is used for the fractional integrals and derivatives. For validation, two Multi-Input Multi-Output (MIMO) distillation columns described by Wood and Berry and by Ogunnaike and Ray are considered for the design of the multivariable FO-PID controller. The optimal FO-PID controller is designed by minimizing the Integral Absolute Error (IAE) as the objective function. The results of previously reported PI/PID controllers are considered for comparison purposes. Simulation results reveal that the performance of the FO-PI and FO-PID controllers is better than that of integer-order PI/PID controllers in terms of IAE. Also, the CMAES algorithm is suitable for the design of FO-PI/FO-PID controllers.
Covariance Matrix Adaptation Evolutionary Strategy for Drift Correction of Electronic Nose Data
Di Carlo, S.; Falasconi, M.; Sanchez, E.; Sberveglieri, G.; Scionti, A.; Squillero, G.; Tonda, A.
2011-09-01
Electronic Noses (ENs) might represent a simple, fast, high-sample-throughput and economic alternative to conventional analytical instruments [1]. However, gas sensor drift still limits EN adoption in real industrial setups due to high recalibration effort and cost [2]. In fact, pattern recognition (PaRC) models built in the training phase become useless after a period of time, in some cases a few weeks. Although algorithms to mitigate drift date back to the early 1990s, this is still a challenging issue for the chemical sensor community [3]. Among other approaches, adaptive drift correction methods adjust the PaRC model in parallel with data acquisition, without the need for periodic calibration. Self-Organizing Maps (SOMs) [4] and Adaptive Resonance Theory (ART) networks [5] have already been tested in the past with fair success. This paper presents and discusses an original methodology based on the Covariance Matrix Adaptation Evolution Strategy (CMA-ES) [6], suited for stochastic optimization of complex problems.
Delegation Principle for Multi-agency Games under Ex Post Equilibrium
Yu Chen; Zhenhua Wu
2012-01-01
We explore the strategic equivalence of the delegated menu contracting procedure in pure-strategy multi-agency games under ex post equilibrium. Our model setup permits "full-blown interdependence," including information externality, contract externality, correlated types, and primitive constraints across the contracts for different agents. Our delegation principle identifies that (optimal) ex post menu design is strategically equivalent to (optimal) bilateral ex post mechanism design, which s...
Pre-processing ambient noise cross-correlations with equalizing the covariance matrix eigenspectrum
Seydoux, Léonard; de Rosny, Julien; Shapiro, Nikolai M.
2017-09-01
Passive imaging techniques from ambient seismic noise require a nearly isotropic distribution of the noise sources in order to ensure reliable traveltime measurements between seismic stations. However, real ambient seismic noise often only partially fulfils this condition. It is generated in preferential areas (in the deep ocean or near continental shores), and some highly coherent pulse-like signals may be present in the data, such as those generated by earthquakes. Several pre-processing techniques have been developed to attenuate the directional and deterministic behaviour of this real ambient noise. Most of them are applied to individual seismograms before cross-correlation computation. The most widely used techniques are spectral whitening and temporal smoothing of the individual seismic traces. We here propose an additional pre-processing step to be used together with the classical ones, based on a spatial analysis of the seismic wavefield. We compute the cross-spectra between all available station pairs in the spectral domain, leading to the data covariance matrix. We apply a one-bit normalization to the covariance matrix eigenspectrum before extracting the cross-correlations in the time domain. The efficiency of the method is shown with several numerical tests. We apply the method to the data collected by the USArray when the M8.8 Maule earthquake occurred on 2010 February 27. The method shows a clear improvement over the classical equalization in attenuating the highly energetic and coherent waves incoming from the earthquake, and allows reliable traveltime measurements even in the presence of the earthquake.
Pre-Processing Noise Cross-Correlations with Equalizing the Network Covariance Matrix Eigen-Spectrum
Seydoux, L.; de Rosny, J.; Shapiro, N.
2016-12-01
Theoretically, the extraction of Green's functions from noise cross-correlation requires the ambient seismic wavefield to be generated by uncorrelated sources evenly distributed in the medium. Yet this condition is often not verified. Strong events such as earthquakes often produce highly coherent transient signals. Also, the microseismic noise is generated at specific places on the Earth's surface, with source regions often very localized in space. Different localized and persistent seismic sources may contaminate the cross-correlations of continuous records, resulting in spurious arrivals or asymmetry and, finally, in biased travel-time measurements. Pre-processing techniques must therefore be applied to the seismic data in order to reduce the effect of noise anisotropy and the influence of strong localized events. Here we describe a pre-processing approach that uses the covariance matrix computed from signals recorded by a network of seismographs. We extend the widely used time and spectral equalization pre-processing to the equalization of the covariance matrix spectrum (i.e., its ordered eigenvalues). This approach can be considered as a spatial equalization. This method allows us to correct for the wavefield anisotropy in two ways: (1) the influence of strong directive sources is substantially attenuated, and (2) the weakly excited modes are reinforced, allowing us to partially recover the conditions required for Green's function retrieval. We also present an eigenvector-based spatial filter used to distinguish between surface and body waves. This last filter is used together with the equalization of the eigenvalue spectrum. We simulate a two-dimensional wavefield in a heterogeneous medium with a strongly dominating source. We show that our method greatly improves the travel-time measurements obtained from the inter-station cross-correlation functions. Also, we apply the developed method to the USArray data and pre-process the continuous records strongly influenced
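A minimal sketch of the spatial-equalization idea: keep the eigenvectors of the network covariance matrix but set its significant eigenvalues to one, so a dominating source no longer outweighs the weakly excited modes. The synthetic snapshots and the tolerance cut for numerically zero eigenvalues are implementation assumptions:

```python
import numpy as np

def equalize_eigenspectrum(C, tol=1e-8):
    """Equalize the covariance-matrix eigenspectrum: keep the eigenvectors
    (the spatial structure of the wavefield) but set every significant
    eigenvalue to one. The tolerance drops numerically zero eigenvalues of
    a rank-deficient estimate."""
    w, V = np.linalg.eigh(C)
    keep = w > tol * w.max()
    return V[:, keep] @ V[:, keep].conj().T

# network covariance from a few snapshots dominated by one directive source
rng = np.random.default_rng(3)
steering = np.exp(2j * np.pi * rng.random(8))            # 8 stations
snapshots = (np.outer(steering, rng.standard_normal(5))
             + 0.05 * (rng.standard_normal((8, 5))
                       + 1j * rng.standard_normal((8, 5))))
C = snapshots @ snapshots.conj().T / 5
C_eq = equalize_eigenspectrum(C)
```

Before equalization, one eigenvalue (the directive source) dominates the spectrum; after it, all retained modes carry equal weight.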
Qian, Junhui; He, Zishu; Xie, Julan; Zhang, Yile
2017-12-01
In this paper, a procedure for null broadening algorithm design with respect to nonstationary interference is proposed. In contrast to previous works, we first impose nulls toward the regions of the nonstationary interference based on the reconstruction of the interference-plus-noise covariance matrix. Additionally, in order to restrict the shape of the beam pattern, a similarity constraint is enforced at the design stage. Then, the adaptive weight vector can be computed by maximizing a new signal-to-interference-plus-noise ratio (SINR) criterion subject to the similarity constraint. Mathematically, the original design problem is expressed as a nonconvex fractional quadratically constrained quadratic programming (QCQP) problem with an additional constraint, which can be converted into a convex optimisation problem by semidefinite programming (SDP) techniques. Finally, an optimal solution can be found by using the Charnes-Cooper transformation and the rank-one matrix decomposition theorem. Several numerical examples are performed to validate the performance of the proposed algorithm.
Energy Technology Data Exchange (ETDEWEB)
Ribes, Aurelien; Planton, Serge [CNRM-GAME, Meteo France-CNRS, Toulouse (France); Azais, Jean-Marc [Universite de Toulouse, UPS, IMT, LSP, Toulouse (France)
2009-10-15
The "optimal fingerprint" method, usually used for detection and attribution studies, requires knowing or, in practice, estimating the covariance matrix of the internal climate variability. In this work, a new adaptation of the "optimal fingerprints" method is presented. The main goal is to allow the use of a covariance matrix estimate based on an observation dataset in which the number of years used for covariance estimation is close to the number of observed time series. Our adaptation is based on the use of a regularized estimate of the covariance matrix that is well-conditioned and asymptotically more precise, in the sense of the mean square error. This method is shown to be more powerful than the basic "guess pattern fingerprint" and than the classical use of a pseudo-inverted truncation of the empirical covariance matrix. The construction of the detection test is achieved by using a bootstrap technique particularly well suited to estimating the internal climate variability in real-world observations. In order to validate the efficiency of the detection algorithm with climate data, the methodology presented here is first applied to pseudo-observations derived from transient regional climate change scenarios covering the 1960-2099 period. It is then used to perform a formal detection study of climate change over France, analyzing homogenized observed temperature series from 1900 to 2006. In this case, the estimation of the covariance matrix is only based on a part of the observation dataset. This new approach allows the confirmation and extension of previous results regarding the detection of an anthropogenic climate change signal over the country. (orig.)
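A common regularization of this kind is shrinkage toward a scaled identity, which yields a well-conditioned estimate even when the number of years is close to the number of series. This sketch uses a fixed illustrative shrinkage weight; Ledoit-Wolf-type rules, closer in spirit to the paper, choose the weight to minimize the mean square error:

```python
import numpy as np

def shrinkage_covariance(X, alpha=0.2):
    """Regularized covariance estimate: shrink the empirical covariance
    toward a scaled identity. The resulting matrix is always positive
    definite and far better conditioned than the raw sample covariance."""
    S = np.cov(X, rowvar=False)
    p = S.shape[0]
    target = (np.trace(S) / p) * np.eye(p)
    return (1 - alpha) * S + alpha * target

rng = np.random.default_rng(4)
X = rng.standard_normal((45, 40))     # barely more "years" than "series"
S = np.cov(X, rowvar=False)           # nearly singular sample covariance
S_reg = shrinkage_covariance(X)
```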
Ahmed, Sajid
2016-11-24
Various examples of methods and systems are provided for direct closed-form finite alphabet constant-envelope waveforms for planar array beampatterns. In one example, a method includes defining a waveform covariance matrix based at least in part upon a two-dimensional fast Fourier transform (2D-FFT) analysis of a frequency domain matrix Hf associated with a planar array of antennas. Symbols can be encoded based upon the waveform covariance matrix and the encoded symbols can be transmitted via the planar array of antennas. In another embodiment, a system comprises an N x M planar array of antennas and transmission circuitry configured to transmit symbols via a two-dimensional waveform beampattern defined based at least in part upon a 2D-FFT analysis of a frequency domain matrix Hf associated with the planar array of antennas.
Bouchoucha, Taha
2017-01-23
In multiple-input multiple-output (MIMO) radar, appropriate correlated waveforms are designed to achieve desired transmit beampatterns. To design such waveforms, conventional MIMO radar methods use two steps. In the first step, the waveform covariance matrix, R, is synthesized to achieve the desired beampattern. In the second step, actual waveforms are designed to realize the synthesized covariance matrix. Most of the existing methods use iterative algorithms to solve these constrained optimization problems. The computational complexity of these algorithms is very high, which makes them difficult to use in practice. In this paper, to achieve the desired beampattern, a low-complexity discrete-Fourier-transform-based closed-form covariance matrix design technique is introduced for MIMO radar. The designed covariance matrix is then exploited to derive a novel closed-form algorithm to directly design the finite-alphabet constant-envelope waveforms for the desired beampattern. The proposed technique can be used to design waveforms for large antenna arrays and to change the beampattern in real time. It is also shown that the number of transmitted symbols from each antenna depends on the beampattern and is less than the total number of transmit antenna elements.
Ex post socio-economic assessment of the Oresund Bridge
DEFF Research Database (Denmark)
Knudsen, M.Aa.; Rich, Jeppe
2013-01-01
The paper presents an ex post socio-economic assessment of the Oresund Bridge conducted ten years after the opening in July 2000. The study applies historical micro data to reconstruct the travel pattern with no bridge in place and compares this to the current situation. To complete the socio-economic assessment, the consumer benefits, including all freight and passenger modes, are compared with the cost profile of the bridge. The monetary contributions are extrapolated to a complete 50-year period. It is revealed that from 2000-2010 the bridge generated a consumer surplus of €2 billion in 2000 prices discounted at 3.5% p.a., which should be compared with a total construction cost of approximately €4 billion. Seen over the 50-year period and assuming a medium growth scenario, the bridge is expected to generate an internal rate of return in the magnitude of 9%, corresponding to a benefit-cost rate of 2…
The potential of more accurate InSAR covariance matrix estimation for land cover mapping
Jiang, Mi; Yong, Bin; Tian, Xin; Malhotra, Rakesh; Hu, Rui; Li, Zhiwei; Yu, Zhongbo; Zhang, Xinxin
2017-04-01
Synthetic aperture radar (SAR) and interferometric SAR (InSAR) provide both structural and electromagnetic information for the ground surface and have therefore been widely used for land cover classification. However, relatively few studies have investigated SAR datasets over richly textured areas where heterogeneous land covers exist and intermingle over short distances. One of the main difficulties is that the shapes of the structures in a SAR image cannot be represented in detail, as mixed pixels are likely to occur when conventional InSAR parameter estimation methods are used. To solve this problem and further extend previous research into remote monitoring of urban environments, we address the use of accurate InSAR covariance matrix estimation to improve the accuracy of land cover mapping. The standard and updated methods were tested using an HH-polarization TerraSAR-X dataset and compared with each other using the random forest classifier. A detailed accuracy assessment compiled for six types of surfaces shows that the updated method outperforms the standard approach by around 9%, with an overall accuracy of 82.46% over areas with rich texture in Zhuhai, China. This paper demonstrates that the accuracy of land cover mapping can benefit from the enhancement of the quality of the observations, in addition to the classifier selection and multi-source data integration reported in previous studies.
Directory of Open Access Journals (Sweden)
Yasuhiro Nakamura
2012-07-01
The present study introduces the four-component scattering power decomposition (4-CSPD) algorithm with rotation of the covariance matrix, and presents an experimental proof of the equivalence between the 4-CSPD algorithms based on rotation of the covariance matrix and of the coherency matrix. From a theoretical point of view, the 4-CSPD algorithms with rotation of the two matrices are identical. Although this seems obvious, no experimental evidence has yet been presented. In this paper, using polarimetric synthetic aperture radar (POLSAR) data acquired by the Phased Array L-band SAR (PALSAR) on board the Advanced Land Observing Satellite (ALOS), an experimental proof is presented to show that both algorithms indeed produce identical results.
Gao, H; Wu, Y; Zhang, T; Wu, Y; Jiang, L; Zhan, J; Li, J; Yang, R
2014-12-01
Given the drawbacks of implementing multivariate analysis for mapping multiple traits in genome-wide association studies (GWAS), principal component analysis (PCA) has been widely used to generate independent 'super traits' from the original multivariate phenotypic traits for univariate analysis. However, parameter estimates in this framework may not be the same as those from the joint analysis of all traits, leading to spurious linkage results. In this paper, we propose to perform PCA on the residual covariance matrix instead of the phenotypic covariance matrix, based on which multiple traits are transformed to a group of pseudo principal components. The PCA of the residual covariance matrix allows each pseudo principal component to be analyzed separately. In addition, all parameter estimates are equivalent to those obtained from the joint multivariate analysis under a linear transformation. A fast least absolute shrinkage and selection operator (LASSO) for estimating the sparse oversaturated genetic model greatly reduces the computational costs of this procedure. Extensive simulations show the statistical and computational efficiency of the proposed method. We illustrate this method in a GWAS for 20 slaughtering and meat quality traits in beef cattle.
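The key transformation can be sketched on synthetic data: regress the traits on the fixed effects, eigendecompose the residual covariance matrix, and rotate the traits by its eigenvectors so the residuals of the resulting pseudo principal components are uncorrelated. The design and trait dimensions are illustrative:

```python
import numpy as np

rng = np.random.default_rng(5)
n, p = 300, 4
Z = np.column_stack([np.ones(n), rng.standard_normal(n)])   # fixed-effect design
Btrue = rng.standard_normal((2, p))
A = rng.standard_normal((p, p))
E = rng.multivariate_normal(np.zeros(p), A @ A.T + np.eye(p), size=n)
Y = Z @ Btrue + E                                  # p correlated traits

Bhat, *_ = np.linalg.lstsq(Z, Y, rcond=None)       # multivariate regression
Resid = Y - Z @ Bhat
S_res = Resid.T @ Resid / (n - Z.shape[1])         # residual covariance matrix
_, V = np.linalg.eigh(S_res)
Ystar = Y @ V   # pseudo principal components: residuals now uncorrelated
```

Each column of `Ystar` can then be analyzed with a univariate model, while the rotation is invertible, so estimates map back to the joint multivariate analysis.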
Westgate, Philip M
2013-07-20
Generalized estimating equations (GEEs) are routinely used for the marginal analysis of correlated data. The efficiency of GEE depends on how closely the working covariance structure resembles the true structure, and therefore accurate modeling of the working correlation of the data is important. A popular approach is the use of an unstructured working correlation matrix, as it is not as restrictive as simpler structures such as exchangeable and AR-1 and thus can theoretically improve efficiency. However, because of the potential for having to estimate a large number of correlation parameters, variances of regression parameter estimates can be larger than theoretically expected when utilizing the unstructured working correlation matrix. Therefore, standard error estimates can be negatively biased. To account for this additional finite-sample variability, we derive a bias correction that can be applied to typical estimators of the covariance matrix of parameter estimates. Via simulation and in application to a longitudinal study, we show that our proposed correction improves standard error estimation and statistical inference. Copyright © 2012 John Wiley & Sons, Ltd.
Yao, Hui; Kim, Sungduk; Chen, Ming-Hui; Ibrahim, Joseph G.; Shah, Arvind K.; Lin, Jianxin
2015-01-01
Summary: Multivariate meta-regression models are commonly used in settings where the response variable is naturally multi-dimensional. Such settings are common in cardiovascular and diabetes studies, where the goal is to study cholesterol levels once a certain medication is given. In this setting, the natural multivariate endpoint is (LDL-C, HDL-C, TG): Low Density Lipoprotein Cholesterol, High Density Lipoprotein Cholesterol, and Triglycerides. In this paper, we examine study-level (aggregate) multivariate meta-data from 26 Merck-sponsored double-blind, randomized, active- or placebo-controlled clinical trials on adult patients with primary hypercholesterolemia. Our goal is to develop a methodology for carrying out Bayesian inference for multivariate meta-regression models with study-level data when the within-study sample covariance matrix S for the multivariate response data is partially observed. Specifically, the proposed methodology is based on postulating a multivariate random effects regression model with an unknown within-study covariance matrix Σ, in which we treat the within-study sample correlations as missing data, the standard deviations of the within-study sample covariance matrix S are assumed observed, and, given Σ, S follows a Wishart distribution. Thus, we treat the off-diagonal elements of S as missing data, and these missing elements are sampled from the appropriate full conditional distribution in a Markov chain Monte Carlo (MCMC) sampling scheme via a novel transformation based on partial correlations. We further propose several structures (models) for Σ, which allow for borrowing strength across different treatment arms and trials. The proposed methodology is assessed using simulated as well as real data, and the results are shown to be quite promising. PMID:26257452
Towards a covariance matrix of CAB model parameters for H(H2O)
Directory of Open Access Journals (Sweden)
Scotta Juan Pablo
2017-01-01
Preliminary results on the uncertainties of the thermal scattering law of hydrogen in light water from the CAB model are presented. This was done through a coupling between the nuclear data code CONRAD and the molecular dynamics code GROMACS. The Generalized Least Squares method was used to adjust the model parameters to evaluated data and to generate covariance matrices between the CAB model parameters.
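A generic Generalized Least Squares parameter adjustment of this kind can be sketched as a linear Bayesian update of the parameters and their covariance matrix. The toy linear model is illustrative; CONRAD's actual implementation and the CAB model sensitivities are not reproduced here:

```python
import numpy as np

def gls_update(theta, M, G, y, V):
    """Generalized Least Squares adjustment: update parameters theta with
    prior covariance M against data y with covariance V, where G is the
    sensitivity (Jacobian) of the model observables to theta. Returns the
    adjusted parameters and their posterior covariance matrix."""
    K = M @ G.T @ np.linalg.inv(G @ M @ G.T + V)   # gain matrix
    theta_new = theta + K @ (y - G @ theta)
    M_new = M - K @ G @ M                          # reduced uncertainty
    return theta_new, M_new

# toy linear model: two parameters observed through three responses
G = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [0.0, 2.0]])
theta0 = np.zeros(2)
M0 = 10.0 * np.eye(2)                 # weak prior on the parameters
y = G @ np.array([1.0, 2.0])          # noise-free synthetic "evaluated data"
V = 0.01 * np.eye(3)                  # data covariance
theta1, M1 = gls_update(theta0, M0, G, y, V)
```

`M1` is the covariance matrix between the adjusted parameters, the quantity the abstract aims at for the CAB model.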
Careau, Vincent; Wolak, Matthew E.; Carter, Patrick A.; Garland, Theodore
2015-01-01
Given the pace at which human-induced environmental changes occur, a pressing challenge is to determine the speed with which selection can drive evolutionary change. A key determinant of adaptive response to multivariate phenotypic selection is the additive genetic variance–covariance matrix (G). Yet knowledge of G in a population experiencing new or altered selection is not sufficient to predict selection response because G itself evolves in ways that are poorly understood. We experimentally evaluated changes in G when closely related behavioural traits experience continuous directional selection. We applied the genetic covariance tensor approach to a large dataset (n = 17 328 individuals) from a replicated, 31-generation artificial selection experiment that bred mice for voluntary wheel running on days 5 and 6 of a 6-day test. Selection on this subset of G induced proportional changes across the matrix for all 6 days of running behaviour within the first four generations. The changes in G induced by selection resulted in a fourfold slower-than-predicted rate of response to selection. Thus, selection exacerbated constraints within G and limited future adaptive response, a phenomenon that could have profound consequences for populations facing rapid environmental change. PMID:26582016
Careau, Vincent; Wolak, Matthew E; Carter, Patrick A; Garland, Theodore
2015-11-22
Given the pace at which human-induced environmental changes occur, a pressing challenge is to determine the speed with which selection can drive evolutionary change. A key determinant of adaptive response to multivariate phenotypic selection is the additive genetic variance-covariance matrix (G). Yet knowledge of G in a population experiencing new or altered selection is not sufficient to predict selection response because G itself evolves in ways that are poorly understood. We experimentally evaluated changes in G when closely related behavioural traits experience continuous directional selection. We applied the genetic covariance tensor approach to a large dataset (n = 17 328 individuals) from a replicated, 31-generation artificial selection experiment that bred mice for voluntary wheel running on days 5 and 6 of a 6-day test. Selection on this subset of G induced proportional changes across the matrix for all 6 days of running behaviour within the first four generations. The changes in G induced by selection resulted in a fourfold slower-than-predicted rate of response to selection. Thus, selection exacerbated constraints within G and limited future adaptive response, a phenomenon that could have profound consequences for populations facing rapid environmental change. © 2015 The Author(s).
A Fast Algorithm for the Computation of HAC Covariance Matrix Estimators
Directory of Open Access Journals (Sweden)
Jochen Heberle
2017-01-01
This paper considers the algorithmic implementation of the heteroskedasticity and autocorrelation consistent (HAC) estimation problem for covariance matrices of parameter estimators. We introduce a new algorithm, mainly based on the fast Fourier transform, and show via computer simulation that our algorithm is up to 20 times faster than well-established alternative algorithms. The cumulative effect is substantial if the HAC estimation problem has to be solved repeatedly. Moreover, the bandwidth parameter has no impact on this performance. We provide a general description of the new algorithm as well as code for a reference implementation in R.
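The FFT idea behind such an algorithm can be sketched in a few lines: the cross-spectra of the demeaned score vectors yield every sample autocovariance in a single inverse transform, which a kernel then weights into the HAC estimate. This is a generic numpy illustration of the approach, not the authors' reference R implementation; the Bartlett kernel and the zero-padding scheme are assumptions.

```python
import numpy as np

def hac_fft(u, bandwidth):
    """Bartlett-kernel HAC estimate of the long-run covariance of the
    score vectors u (n x k), with all sample autocovariances computed
    in one pass via the FFT. A sketch of the FFT idea only."""
    n, k = u.shape
    nfft = 1 << (2 * n - 1).bit_length()     # zero-pad to avoid circular wrap
    U = np.fft.rfft(u - u.mean(0), nfft, axis=0)
    # cross-spectra -> all sample autocovariances Gamma_j in one inverse FFT
    acov = np.fft.irfft(U[:, :, None] * U.conj()[:, None, :], nfft, axis=0)[:n] / n
    w = 1.0 - np.arange(1, bandwidth + 1) / (bandwidth + 1)  # Bartlett weights
    S = acov[0].copy()
    for j in range(1, bandwidth + 1):
        G = acov[j]
        S += w[j - 1] * (G + G.T)
    return S
```

A direct O(n·b·k²) loop gives the same matrix; the FFT route pays off when the bandwidth is large or the estimate must be recomputed repeatedly.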
Jones, Jeff A; Waller, Niels G
2015-06-01
Yuan and Chan (Psychometrika, 76, 670-690, 2011) recently showed how to compute the covariance matrix of standardized regression coefficients from covariances. In this paper, we describe a method for computing this covariance matrix from correlations. Next, we describe an asymptotic distribution-free (ADF; Browne in British Journal of Mathematical and Statistical Psychology, 37, 62-83, 1984) method for computing the covariance matrix of standardized regression coefficients. We show that the ADF method works well with nonnormal data in moderate-to-large samples using both simulated and real-data examples. R code (R Development Core Team, 2012) is available from the authors or through the Psychometrika online repository for supplementary materials.
Preconditioning of the background error covariance matrix in data assimilation for the Caspian Sea
Arcucci, Rossella; D'Amore, Luisa; Toumi, Ralf
2017-06-01
Data Assimilation (DA) is an uncertainty quantification technique used to improve numerical forecasts by incorporating observed data into prediction models. Because a crucial issue in DA models is the ill-conditioning of the covariance matrices involved, it is essential to introduce preconditioning methods into DA software. Here we present first studies concerning the introduction of two different preconditioning methods into DA software we are developing (named S3DVAR), which implements a Scalable Three-Dimensional Variational Data Assimilation model for assimilating sea surface temperature (SST) values collected in the Caspian Sea, using the Regional Ocean Modeling System (ROMS) with observations provided by the Group for High Resolution Sea Surface Temperature (GHRSST). We also present the algorithmic strategies we employ.
Ex-post Analysis of Mobile Telecom Mergers: The Case of Austria and The Netherlands
Aguzzoni, L. (Luca); Buehler, B. (Benno); Di Martile, L. (Luca); R.G.M. Kemp (Ron G.M.); Schwarz, A. (Anton)
2017-01-01
Recently there has been increased attention to the ex-post evaluation of competition policy enforcement decisions, and in particular merger decisions. In this paper we study the effects of two mobile telecommunication mergers on prices. We apply a standard
Better Safe than Sorry? Ex Ante and Ex Post Moral Hazard in Dynamic Insurance Data
Abbring, J.H.; Chiappori, P.A.; Zavadil, T.
2008-01-01
This paper empirically analyzes moral hazard in car insurance using a theory of an insuree's dynamic risk (ex ante moral hazard) and claim (ex post moral hazard) choices and Dutch longitudinal micro data. We use the theory to characterize the heterogeneous dynamic changes in incentives to
Orbit covariance propagation via quadratic-order state transition matrix in curvilinear coordinates
Hernando-Ayuso, Javier; Bombardelli, Claudio
2017-09-01
In this paper, an analytical second-order state transition matrix (STM) for relative motion in curvilinear coordinates is presented and applied to the problem of orbit uncertainty propagation in nearly circular orbits (eccentricity smaller than 0.1). The matrix is obtained by linearization around a second-order analytical approximation of the relative motion recently proposed by one of the authors and can be seen as a second-order extension of the curvilinear Clohessy-Wiltshire (C-W) solution. The accuracy of the uncertainty propagation is assessed by comparison with numerical results based on Monte Carlo propagation of a high-fidelity model including geopotential and third-body perturbations. Results show that the proposed STM can greatly improve the accuracy of the predicted relative state: the average error is found to be at least one order of magnitude smaller compared to the curvilinear C-W solution. In addition, the effect of environmental perturbations on the uncertainty propagation is shown to be negligible up to several revolutions in the geostationary region and for a few revolutions in low Earth orbit in the worst case.
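The covariance mapping underlying such STM-based propagation can be illustrated with the classical first-order in-plane Clohessy-Wiltshire matrix; the paper's contribution is the second-order extension, which is not reproduced here. The state ordering and the textbook STM below are assumptions of this sketch.

```python
import numpy as np

def cw_stm(n, t):
    """In-plane Clohessy-Wiltshire state transition matrix for the state
    [x, y, vx, vy] (x radial, y along-track), mean motion n, elapsed time t.
    Textbook first-order solution for nearly circular orbits."""
    s, c = np.sin(n * t), np.cos(n * t)
    return np.array([
        [4 - 3 * c,       0, s / n,           2 * (1 - c) / n],
        [6 * (s - n * t), 1, 2 * (c - 1) / n, (4 * s - 3 * n * t) / n],
        [3 * n * s,       0, c,               2 * s],
        [6 * n * (c - 1), 0, -2 * s,          4 * c - 3],
    ])

def propagate_covariance(phi, P0):
    # First-order covariance mapping P(t) = Phi P0 Phi^T; the paper's
    # second-order STM adds quadratic corrections not sketched here.
    return phi @ P0 @ phi.T
```

At t = 0 the STM is the identity, and the mapping preserves symmetry and positive definiteness of the covariance.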
Vachálek, Ján
2011-12-01
The paper compares the abilities of forgetting methods to track time-varying parameters of two different simulated models with different types of excitation. The observed quantities in the simulations are the integral sum of the Euclidean norm of the deviation of the parameter estimates from their true values and a selected-band prediction error count. As supplementary information, we observe the eigenvalues of the covariance matrix. In the paper we used a modified method of Regularized Exponential Forgetting with Alternative Covariance Matrix (REFACM) along with Directional Forgetting (DF) and three standard regularized methods.
“It Was Raining All the Time!”: Ex Post Tourist Weather Perceptions
Directory of Open Access Journals (Sweden)
Stefan Gössling
2016-01-01
The importance of weather for tourism is now widely recognized. However, no research has so far addressed weather events from retrospective viewpoints, and, in particular, the role of “extreme” events in longer-term holiday memories. To better understand the character of ex post weather experiences and their importance in destination image perceptions and future travel planning behavior, this exploratory study addressed a sample of 50 tourists from three globally important source markets: Austria, Germany and Switzerland. Results indicate that weather events do not dominate long-term memories of tourist experiences. Yet, weather events are important in shaping destination image, with “rain” being the single most important weather variable negatively influencing perceptions. Results also suggest that weather events perceived as extreme can involve considerable emotions. The study of ex post traveler memories consequently makes a valuable contribution to the understanding of the complexity of “extreme weather” events for tourist demand responses.
Social preferences under risk: Minimizing collective risk vs. reducing ex-post inequality
Gaudeul, Alexia
2016-01-01
We refine the understanding of individual preferences across social lotteries, whereby the payoffs of a pair of subjects are exposed to random shocks. We find that aggregate behavior is ex-post and ex-ante inequality averse, but also that there is a wide variety of individual preferences and that the majority of subjects are indifferent to social concerns under risk. Furthermore, we determine whether subjects are averse to collective risk - the variability in the sum of payoffs of the pair. W...
Using causal maps to support ex-post assessment of social impacts of dams
Energy Technology Data Exchange (ETDEWEB)
Aledo, Antonio, E-mail: Antonio.Aledo@ua.es [Departamento de Sociología 1, Universidad de Alicante, Alicante 03080 (Spain); García-Andreu, Hugo, E-mail: Hugo.Andreu@ua.es [Departamento de Sociología 1, Universidad de Alicante, Alicante 03080 (Spain); Pinese, José, E-mail: pinese@uel.br [Centro de Ciências Exatas, UEL, Rodovia Celso Cid, Km 380, Campus Universitário, Londrina, PR 86057-970 (Brazil)
2015-11-15
Highlights: • We defend the usefulness of causal maps (CM) for ex-post impact assessment of dams. • Political decisions are presented as unavoidable technical measures. • CM enable the identification of multiple causes involved in the dam impacts. • An alternative management of the dams is shown from the precise tracking of the causes. • Participatory CM improve the quality of information and the governance of the research. This paper presents the results of an ex-post assessment of two important dams in Brazil. The study follows the principles of Social Impact Management, which offer a suitable framework for analyzing the complex social transformations triggered by hydroelectric dams. In the implementation of this approach, participative causal maps were used to identify the ex-post social impacts of the Porto Primavera and Rosana dams on the community of Porto Rico, located along the High Paraná River. We found that in the operation of dams there are intermediate causes of a political nature, stemming from decisions based on values and interests not determined by neutral, exclusively technical reasons; and this insight opens up an area of action for managing the negative impacts of dams.
Guo, Rongyan; Wang, Hongyan
2016-07-01
In this work, the issue of robust waveform optimization is addressed in the presence of clutter to improve the worst-case estimation accuracy for collocated multiple-input multiple-output (MIMO) radar. Robust design is necessary due to the fact that waveform design may be sensitive to uncertainties in the initial parameter estimates. Following the min-max approach, the robust waveform covariance matrix design is formulated here on the basis of Cramér-Rao Bound to ease this sensitivity systematically for improving the worst-case accuracy. To tackle the resultant complicated and nonlinear problem, a new diagonal loading (DL)-based iterative approach is developed, in which the inner optimization problem can first be decomposed to some independent subproblems by using the Hadamard's inequality, and then these subproblems can be reformulated into convex issues by using DL method, as well as the outer optimization problem can also be relaxed to a convex issue by translating the nonlinear function into a linear one, and, hence, both of them can be solved very effectively. An optimal solution to the original problem can be obtained via the least-squares fitting of the solution acquired by the iterative approach. Numerical simulations show the efficiency of the proposed method.
Pitchers, W R; Brooks, R; Jennions, M D; Tregenza, T; Dworkin, I; Hunt, J
2013-05-01
Phenotypic integration and plasticity are central to our understanding of how complex phenotypic traits evolve. Evolutionary change in complex quantitative traits can be predicted using the multivariate breeders' equation, but such predictions are only accurate if the matrices involved are stable over evolutionary time. Recent study, however, suggests that these matrices are temporally plastic, spatially variable and themselves evolvable. The data available on phenotypic variance-covariance matrix (P) stability are sparse, and largely focused on morphological traits. Here, we compared P for the structure of the complex sexual advertisement call of six divergent allopatric populations of the Australian black field cricket, Teleogryllus commodus. We measured a subset of calls from wild-caught crickets from each of the populations and then a second subset after rearing crickets under common-garden conditions for three generations. In a second experiment, crickets from each population were reared in the laboratory on high- and low-nutrient diets and their calls recorded. In both experiments, we estimated P for call traits and used multiple methods to compare them statistically (Flury hierarchy, geometric subspace comparisons and random skewers). Despite considerable variation in means and variances of individual call traits, the structure of P was largely conserved among populations, across generations and between our rearing diets. Our finding that P remains largely stable, among populations and between environmental conditions, suggests that selection has preserved the structure of call traits in order that they can function as an integrated unit. © 2013 The Authors. Journal of Evolutionary Biology © 2013 European Society For Evolutionary Biology.
Soubestre, Jean; Shapiro, Nikolai M.; Seydoux, Léonard; de Rosny, Julien; Droznin, Dimitry V.; Droznina, Svetlana Ya.; Senyukov, Sergey L.; Gordeev, Evgeny I.
2017-04-01
Volcanic tremors may be caused by magma moving through narrow fractures, by fragmentation and pulsation of pressurized fluids within the volcano, or by escape of pressurized steam and gases from fumaroles. They present an important attribute of the volcanic unrest and their detection and characterization is used in volcano monitoring systems. The tremors might be generated within different parts of volcanoes and might characterize different types of volcanic activity. The main goal of the present study is to develop a method of automatic classification of different types (sources) of tremors based on analysis of continuous records of a network of seismographs. The proposed method is based on the analysis of eigenvalues and eigenvectors of the seismic array covariance matrix. First, we followed an approach developed by Seydoux et al. (2016) and analyzed the width of the covariance matrix eigenvalues distribution to detect time periods with strong volcanic tremors. In a next step, we analyzed the frequency-dependent eigenvectors of the covariance matrix. The eigenvectors corresponding to strongest eigenvalues can be used as fingerprints of dominating seismic sources during the period over which the covariance matrix was calculated. We applied the method to the data recorded by the permanent seismic monitoring network composed of 19 stations operated in the vicinity of the Klyuchevskoy group of volcanoes (KVG) located in Kamchatka, Russia. The KVG is composed of 13 stratovolcanoes with 3 of them (Klyuchevskoy, Bezymianny, and Tolbachik) being very active during last decades. In addition, two other active volcanoes, Shiveluch and Kizimen, are located immediately north and south of KVG. This exceptional concentration of active volcanoes provides us with a multiplicity of seismic tremor sources required to validate the method. We used 4.5 years of vertical component records by 19 stations and computed network covariance matrices from day-long windows. We then analyzed
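The detection step borrowed from Seydoux et al. (2016) can be sketched as follows: average the cross-spectral covariance matrix of the array over time windows and measure the width of its eigenvalue distribution; a single dominant eigenvalue (width near 1) signals a coherent source such as a tremor. Window length, taper and averaging scheme below are illustrative assumptions, not the study's parameters.

```python
import numpy as np

def covariance_spectral_width(traces, nperseg=256):
    """Width of the eigenvalue distribution of the seismic-array covariance
    matrix, per frequency. `traces` is (n_stations, n_samples). A narrow
    distribution (width ~ 1) indicates one coherent dominating source."""
    n_sta, n_samp = traces.shape
    nwin = n_samp // nperseg
    segs = traces[:, :nwin * nperseg].reshape(n_sta, nwin, nperseg)
    spec = np.fft.rfft(segs * np.hanning(nperseg), axis=2)
    widths = []
    for f in range(spec.shape[2]):
        X = spec[:, :, f]                  # station vectors over time windows
        C = X @ X.conj().T / nwin          # averaged covariance matrix
        lam = np.sort(np.linalg.eigvalsh(C))[::-1]
        lam = lam / lam.sum()              # normalized eigenvalue distribution
        i = np.arange(1, n_sta + 1)
        widths.append((i * lam).sum())     # width of the distribution
    return np.asarray(widths)              # one width per frequency bin
```

For perfectly coherent records the covariance is rank one and the width is 1; for incoherent noise the eigenvalues spread out and the width grows toward the number of stations.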
Ex post and ex ante willingness to pay (WTP) for the ICT Malaria Pf/Pv test kit in Myanmar.
Cho-Min-Naing; Lertmaharit, S; Kamol-Ratanakul, P; Saul, A J
2000-03-01
Willingness to pay (WTP) for the ICT Malaria Pf/Pv test kit was assessed by the contingent valuation method using a bidding game approach in two villages in Myanmar. Kankone (KK) village has a rural health center (RHC) and Yae-Aye-Sann (YAS) is serviced by community health worker (CHW). The objectives were to assess WTP for the ICT Malaria Pf/Pv test kit and to determine factors affecting the WTP. In both villages WTP was assessed in two different conditions, ex post and ex ante. The ex post WTP was assessed at an RHC in the KK village and at the residence of a CHW in the YAS village on patients immediately following diagnosis of malaria. The ex ante WTP was assessed by household interviews in both villages on people with a prior history of malaria. Ordinary least squares (OLS) multiple regression analysis was used to analyze factors affecting WTP. The WTP was higher in ex post conditions than ex ante in both villages. WTP was significantly positively associated with the average monthly income of the respondents and severity of illness in both ex post and ex ante conditions (p < 0.001). Distance between the residence of the respondents and the health center was significantly positively associated (p < 0.05) in the ex ante condition in a household survey of YAS village. Traveling time to RHC had a negative relationship with WTP (p < 0.05) in the ex post condition in the RHC survey in KK village.
Tarai, Madhumita; Kumar, Keshav; Divya, O.; Bairi, Partha; Mishra, Kishor Kumar; Mishra, Ashok Kumar
2017-09-01
The present work compares the dissimilarity and covariance based unsupervised chemometric classification approaches by taking the total synchronous fluorescence spectroscopy data sets acquired for the cumin and non-cumin based herbal preparations. The conventional decomposition method involves eigenvalue-eigenvector analysis of the covariance of the data set and finds the factors that can explain the overall major sources of variation present in the data set. The conventional approach does this irrespective of the fact that the samples belong to intrinsically different groups and hence leads to poor class separation. The present work shows that classification of such samples can be optimized by performing the eigenvalue-eigenvector decomposition on the pair-wise dissimilarity matrix.
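The two decompositions being compared can be sketched side by side: the conventional route diagonalizes the covariance of the data (PCA), while the dissimilarity route performs the eigenvalue-eigenvector decomposition on a double-centred pairwise dissimilarity matrix (classical MDS). Euclidean distance is used below as one possible dissimilarity; the paper's actual dissimilarity measure for the fluorescence data may differ.

```python
import numpy as np

def pca_scores(X, k=2):
    """Conventional approach: eigen-decomposition of the covariance matrix."""
    Xc = X - X.mean(0)
    lam, V = np.linalg.eigh(np.cov(Xc, rowvar=False))
    order = np.argsort(lam)[::-1][:k]
    return Xc @ V[:, order]

def mds_scores(X, k=2):
    """Dissimilarity-based approach: classical MDS on the pairwise
    (here Euclidean) distance matrix."""
    D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    n = len(X)
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ D2 @ J                  # double-centred Gram matrix
    lam, V = np.linalg.eigh(B)
    order = np.argsort(lam)[::-1][:k]
    return V[:, order] * np.sqrt(np.maximum(lam[order], 0))
```

With Euclidean dissimilarities the two embeddings agree up to rotation; class separation differs once a group-aware dissimilarity replaces the plain distance.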
Directory of Open Access Journals (Sweden)
Leif E. Peterson
1997-11-01
A computer program for multifactor relative risks, confidence limits, and tests of hypotheses using regression coefficients and a variance-covariance matrix obtained from a previous additive or multiplicative regression analysis is described in detail. Data used by the program can be stored and input from an external disk-file or entered via the keyboard. The output contains a list of the input data, point estimates of single or joint effects, confidence intervals and tests of hypotheses based on a minimum modified chi-square statistic. Availability of the program is also discussed.
Beetsma, R.; Bluhm, B.; Giuliodori, M.; Wierts, P.
2013-01-01
This paper splits the ex post error in the budget balance, defined as the final budget figure minus the planned figure, into implementation and revision errors, and investigates the determinants of these errors. The implementation error is the difference between the nowcast, published toward the end
DEFF Research Database (Denmark)
Barndorff-Nielsen, Ole Eiler; Hansen, P.R.; Lunde, Asger
2008-01-01
This paper shows how to use realized kernels to carry out efficient feasible inference on the ex post variation of underlying equity prices in the presence of simple models of market frictions. The weights can be chosen to achieve the best possible rate of convergence and to have an asymptotic...
From first-release to ex post fiscal data: exploring the sources of revision errors in the EU
Beetsma, R.; Bluhm, B.; Giuliodori, M.; Wierts, P.
2012-01-01
This paper explores the determinants of deviations of ex post budget outcomes from first-release outcomes published towards the end of the year of budget implementation. The predictive content of the first-release outcomes is important, because these figures are an input for the next budget and the
Moraes Rêgo, Patrícia Helena; Viana da Fonseca Neto, João; Ferreira, Ernesto M.
2015-08-01
The main focus of this article is to present a proposal to solve, via UDU^T factorisation, the convergence and numerical stability problems that are related to the covariance matrix ill-conditioning of the recursive least squares (RLS) approach for online approximations of the algebraic Riccati equation (ARE) solution associated with the discrete linear quadratic regulator (DLQR) problem formulated in the actor-critic reinforcement learning and approximate dynamic programming context. The parameterisations of the Bellman equation, utility function and dynamic system as well as the algebra of Kronecker product assemble a framework for the solution of the DLQR problem. The condition number and the positivity parameter of the covariance matrix are associated with statistical metrics for evaluating the approximation performance of the ARE solution via RLS-based estimators. The performance of RLS approximators is also evaluated in terms of consistence and polarisation when associated with reinforcement learning methods. The used methodology contemplates realisations of online designs for DLQR controllers that is evaluated in a multivariable dynamic system model.
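The UDU^T remedy mentioned above can be sketched as follows: factor the covariance as P = U D Uᵀ with U unit upper-triangular and D diagonal, then propagate the factors instead of P itself, which preserves symmetry and positivity under ill-conditioning. The routine below is a textbook factorisation step, not the authors' full RLS recursion.

```python
import numpy as np

def udu_factor(P):
    """UDU^T factorisation of a symmetric positive-definite matrix:
    P = U @ D @ U.T with U unit upper-triangular and D diagonal.
    This is the square-root form used to stabilise RLS/Kalman updates."""
    n = P.shape[0]
    U = np.eye(n)
    d = np.zeros(n)
    P = P.copy()
    for j in range(n - 1, -1, -1):
        d[j] = P[j, j]
        U[:j, j] = P[:j, j] / d[j]
        # deflate the leading block by the rank-one contribution of column j
        P[:j, :j] -= d[j] * np.outer(U[:j, j], U[:j, j])
    return U, np.diag(d)
```

In a square-root filter the measurement and time updates are then carried out directly on (U, D), so the reconstructed P can never lose symmetry or positive definiteness to round-off.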
Overcoming Ex-Post Development Stagnation: Interventions with Continuity and Scaling in Mind
Directory of Open Access Journals (Sweden)
Bradley T. Hiller
2016-02-01
Project interventions are important vehicles for development globally. However, while resources are often allocated to new and innovative (pilot) projects—with varying levels of success—there is seemingly less focus on consolidating and/or scaling the positive impacts of successful larger interventions. Assuming an overarching development goal of long-lasting impact at scale, this approach seems somewhat contradictory. Scaling is often not integrated into project planning, design and implementation, and is rarely pursued genuinely ex post. However, where demand for further development remains outstanding beyond project completion, opportunities may exist to build upon project platforms and extend benefits in a cost-effective manner. This paper examines existing scaling typologies before introducing “scaling-within” as a concept to promote greater continuity of development to a wider range of stakeholders. Scaling-within offers the opportunity to “in-fill” intervention principles and practices to both project and non-project communities within a broader strategic framework, to address disparities and to promote sustainable development. The authors draw on research from case studies of large-scale integrated watershed rehabilitation projects and assess scaling-within against a contemporary scaling framework drawn from the literature. While the concept is tested with watersheds as the administrative unit, the authors anticipate applications for other project management units.
Analysis of international content of ranked nursing journals in 2005 using ex post facto design.
Dougherty, Molly C; Lin, Shu-Yuan; McKenna, Hugh P; Seers, Kate; Keeney, Sinead
2011-06-01
The purpose of this study was to examine articles in ISI-ranked nursing journals and to analyse the articles and journals, using definitions of international and article content. Growing emphasis on global health includes attention on international nursing literature. Contributions from Latin America and Africa have been reported. Attention to ranked nursing journals to support scholarship in global health is needed. Using an ex post facto design, characteristics of 2827 articles, authors and journals of 32 ranked nursing journals for the year 2005 were analysed between June 2006 and June 2007. Using definitions of international and of article content, research questions were analysed statistically. (a) 928 (32·8%) articles were international; (b) 2016 (71·3%) articles were empirical or scholarly; (c) 826 (89·3%) articles reflecting international content were scholarly or empirical; (d) among international articles more were empirical (66·3% vs. 32·8%; χ²(1) = 283·6, P journals were led by an international editorial team; and (g) international journals had more international articles (3·6% vs. 29·2%; χ²(1) = 175·75, P journals (t = -14·43, P journals. Results indicate the need to examine the international relevance of the nursing literature. © 2011 Blackwell Publishing Ltd.
Directory of Open Access Journals (Sweden)
Cathy Suykens
2016-12-01
There is a wealth of literature on the design of ex post compensation mechanisms for natural disasters. However, more research needs to be done on the manner in which these mechanisms could steer citizens toward adopting individual-level preventive and protection measures in the face of flood risks. We have provided a comparative legal analysis of the financial compensation mechanisms following floods, be it through insurance, public funds, or a combination of both, with an empirical focus on Belgium, the Netherlands, England, and France. Similarities and differences between the methods in which these compensation mechanisms for flood damages enhance resilience were analyzed. The comparative analysis especially focused on the link between the recovery strategy on the one hand and prevention and mitigation strategies on the other. There is great potential within the recovery strategy for promoting preventive action, for example in terms of discouraging citizens from living in high-risk areas, or encouraging the uptake of mitigation measures, such as adaptive building. However, this large potential has yet to be realized, in part because of insufficient consideration and promotion of these connections within existing legal frameworks. We have made recommendations about how the linkages between strategies can be further improved. These recommendations relate to, among others, the promotion of resilient reinstatement through recovery mechanisms and the removal of legal barriers preventing the establishment of link-inducing measures.
DEFF Research Database (Denmark)
Kinnebrock, Silja; Podolskij, Mark
This paper introduces a new estimator to measure the ex-post covariation between high-frequency financial time series under market microstructure noise. We provide an asymptotic limit theory (including feasible central limit theorems) for standard methods such as regression, correlation analysis ...
DEFF Research Database (Denmark)
Cherchi, Elisabetta; Guevara, Cristian Angelo
2012-01-01
When the dimension of the vector of estimated parameters increases, simulation based methods become impractical, because the number of draws required for estimation grows exponentially with the number of parameters. In simulation methods, the lack of empirical identification when the number of parameters increases is usually known as the “curse of dimensionality”. We investigate this problem in the case of the random coefficients Logit model. We compare the traditional Maximum Simulated Likelihood (MSL) method with two alternative estimation methods: the Expectation... ...–covariance matrix. Results show that indeed MSL suffers from lack of empirical identification as the dimensionality grows while EM deals much better with this estimation problem. On the other hand, the HH method, although not being simulation-based, showed poor performance with large dimensions, principally because...
Kunieda, Satoshi
2017-09-01
We report the status of the R-matrix code AMUR toward consistent cross-section evaluation and covariance analysis for light-mass nuclei. The applicability limit of the code is extended by adding computational capability for charged-particle elastic scattering cross-sections and neutron capture cross-sections; example results are shown in the main text. A simultaneous analysis is performed on the 17O compound system including the 16O(n,tot) and 13C(α,n)16O reactions together with the 16O(n,n) and 13C(α,α) scattering cross-sections. It is found that a large theoretical background is required for each reaction process to obtain a simultaneous fit with all the experimental cross-sections we analyzed. Also, the hard-sphere radii should be assumed to be different from the channel radii. Although these are technical approaches, we could learn the roles and sources of the theoretical background in the standard R-matrix.
Pigni, Marco T.; Gauld, Ian C.; Croft, Stephen
2017-09-01
The SAMMY code system is mainly used in nuclear data evaluations for incident neutrons in the resolved resonance region (RRR), however, built-in capabilities also allow the code to describe the resonance structure produced by other incident particles, including charged particles. (α,n) data provide fundamental information that underpins nuclear modeling and simulation software, such as ORIGEN and SOURCES4C, used for the analysis of neutron emission and definition of source emission processes. The goal of this work is to carry out evaluations of charged-particle-induced reaction cross sections in the RRR. The SAMMY code was recently used in this regard to generate a Reich-Moore parameterization of the available 17,18O(α,n) experimental cross sections in order to estimate the uncertainty in the neutron generation rates for uranium oxide fuel types. This paper provides a brief description of the SAMMY evaluation procedure for the treatment of 17,18O(α,n) reaction cross sections. The results are used to generate neutron source rates for a plutonium oxide matrix.
Ex ante and ex post control of the public interest in public-private partnership agreements
Directory of Open Access Journals (Sweden)
Ćirić Aleksandar
2016-01-01
implementing the agreement as well as an effective control of such implementation (ex post methodological aspect). PPP agreements should provide a mechanism for adjusting their contents to changed circumstances, i.e. the social, legal and economic context which pervades the preparation, implementation and realization of the specific PPP project. Among other factors, this flexibility rests on mutual trust and cooperation of the contracting parties. Ultimately, in the context of control over exercising the public interest, the methodological approach of the PPP agreement essentially lies in preventing the public partner to succumb to the temptation of adopting the simplest available solution. Instead, it is necessary to clearly define the expectations which the public body as a title-holder of public interest has in regard of specific PPP projects, and to limit the responsibility of the public actor. The success of a PPP agreement in each particular case depends on the extent to which the agreement provides for adequate treatment of these presumptions.
Ex-ante and ex-post measurement of equality of opportunity in health: a normative decomposition.
Donni, Paolo Li; Peragine, Vito; Pignataro, Giuseppe
2014-02-01
This paper proposes and discusses two different approaches to the definition of inequality in health: the ex-ante and the ex-post approach. It proposes strategies for measuring inequality of opportunity in health based on the path-independent Atkinson equality index. The proposed methodology is illustrated using data from the British Household Panel Survey; the results suggest that in the period 2000-2005, at least one-third of the observed health inequalities in the UK were inequalities of opportunity. Copyright © 2013 John Wiley & Sons, Ltd.
Parameter inference with estimated covariance matrices
Sellentin, Elena; Heavens, Alan F.
2016-02-01
When inferring parameters from a Gaussian-distributed data set by computing a likelihood, a covariance matrix is needed that describes the data errors and their correlations. If the covariance matrix is not known a priori, it may be estimated and thereby becomes a random object with some intrinsic uncertainty itself. We show how to infer parameters in the presence of such an estimated covariance matrix, by marginalizing over the true covariance matrix, conditioned on its estimated value. This leads to a likelihood function that is no longer Gaussian, but rather an adapted version of a multivariate t-distribution, which has the same numerical complexity as the multivariate Gaussian. As expected, marginalization over the true covariance matrix improves inference when compared with Hartlap et al.'s method, which uses an unbiased estimate of the inverse covariance matrix but still assumes that the likelihood is Gaussian.
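The resulting likelihood can be written down in a few lines: up to normalisation, the Gaussian term exp(-χ²/2) is replaced by (1 + χ²/(N-1))^(-N/2), where N is the number of independent simulations used to estimate the covariance, and the Gaussian case is recovered as N → ∞. A minimal sketch (normalisation constants omitted):

```python
import numpy as np

def log_likelihood_sh(x, mu, s_hat, n_sims):
    """Sellentin-Heavens log-likelihood (up to a constant) for data x,
    model mean mu, and covariance s_hat estimated from n_sims simulations.
    Marginalising over the true covariance yields a multivariate-t-like
    form with the same numerical cost as the Gaussian."""
    r = x - mu
    chi2 = r @ np.linalg.solve(s_hat, r)
    return -0.5 * n_sims * np.log1p(chi2 / (n_sims - 1))

def log_likelihood_gauss(x, mu, s):
    """Standard Gaussian log-likelihood (up to a constant), for comparison."""
    r = x - mu
    return -0.5 * r @ np.linalg.solve(s, r)
```

For finite N the modified likelihood has heavier tails than the Gaussian, which is what broadens the inferred parameter posteriors relative to naively plugging in the estimated covariance.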
Covariance Applications with Kiwi
Mattoon, C. M.; Brown, D.; Elliott, J. B.
2012-05-01
The Computational Nuclear Physics group at Lawrence Livermore National Laboratory (LLNL) is developing a new tool, named `Kiwi', that is intended as an interface between the covariance data increasingly available in major nuclear reaction libraries (including ENDF and ENDL) and large-scale Uncertainty Quantification (UQ) studies. Kiwi is designed to integrate smoothly into large UQ studies, using the covariance matrix to generate multiple variations of nuclear data. The code has been tested using critical assemblies as a test case, and is being integrated into LLNL's quality assurance and benchmarking for nuclear data.
Directory of Open Access Journals (Sweden)
Javier Andrés Gómez-Díaz
2016-04-01
Full Text Available Purchase decision-making through the internet was retrospectively studied (ex-post facto) with a sample of 340 people who had (n=187) and who had not (n=153) purchased online. The questionnaire used includes statements for each of the stages involved in the choice (problem identification, information search, alternatives evaluation, and purchase behavior). Some scales were designed while others were adapted from the available research literature. Results show that, through the internet, it is more common to perform unplanned purchases, and that the information available on the network usually has a significant value in online decision-making. Online purchasers and non-purchasers differ in risk perception. Some recommendations for designing web pages for commercial use are suggested, and a discussion of the evolution of online shopping in Colombia is presented.
Explorando la relación entre políticas crediticias y resultados de la banca española ex-post.
Directory of Open Access Journals (Sweden)
Francisco Jaime Ibáñez Hernández
2008-05-01
Full Text Available This paper analyzes the relationship between bank credit policies and their ex-post performance. The literature review reveals that, at times, credit markets can be affected by a stronger endogenous component than usually assumed. We propose that the growth speed of the bank credit portfolio in expansive cycles is related to its ex-post performance once the recession cycle begins. The analysis outcomes reflect a strong relation between the speed of credit expansion and a poorer behaviour of benefits, returns and insolvencies.
Beckmann, V.; Soregaroli, C.; Wesseler, J.H.H.
2006-01-01
The future institutional environment for the co-existence of genetically modified (GM) crops, conventional crops and organic crops in Europe combines measures of ex-ante regulation and ex-post liability rules. Against this background we ask the following two questions: How does ex-ante regulation
Westhoek, H.; Berg, van der R.; Hoop, de D.W.; Kamp, van der A.
2004-01-01
This paper summarises the results of both an ex-post evaluation of the Dutch Mineral Accounting System (MINAS) and an ex-ante evaluation of the effect of different levy-free surplus values. The MINAS system has been introduced in 1998 in order to reduce nitrate and phosphate leaching from
Directory of Open Access Journals (Sweden)
Amir F. N. Abdul-Manan
2015-03-01
Full Text Available Ex-post evaluations of energy policies in Malaysia between 1970 and 2010 were conducted. The developments of energy policies in Malaysia were traced from the early 1970s with the introduction of the country's first energy-related policy all the way to 2010 with the country's first endeavour towards a biobased energy system. Analyses revealed that many of the policies were either: (1) directly responding to changes in global/domestic socioeconomic and political events, or (2) provided visions to guide developments of the energy sector in alignment with the country's growth agenda. Critical examinations of the country's actual energy consumption during these 40 years were also conducted to evaluate the efficacy of these energy-related policies. Three noteworthy successes in Malaysia's energy landscape are: (1) the formation of PETRONAS as the national oil and gas company; (2) reduction of the country's over-reliance on oil as a single source of energy by significantly growing the production and use of natural gas in a short span of time; and (3) creation of a thriving oil and gas value chain and ecosystem in the country. However, the country is still critically dependent on scarce petroleum resources, despite having an abundance of renewable reserves. Progress towards renewable energy has been too little and too slow.
Aydin, Alev Dilek; Caliskan Cavdar, Seyma
2015-01-01
The ANN method has been applied by means of multilayered feedforward neural networks (MLFNs) by using different macroeconomic variables such as the exchange rate of USD/TRY, gold prices, and the Borsa Istanbul (BIST) 100 index based on monthly data over the period of January 2000 and September 2014 for Turkey. Vector autoregressive (VAR) method has also been applied with the same variables for the same period of time. In this study, different from other studies conducted up to the present, ENCOG machine learning framework has been used along with JAVA programming language in order to constitute the ANN. The training of network has been done by resilient propagation method. The ex post and ex ante estimates obtained by the ANN method have been compared with the results obtained by the econometric forecasting method of VAR. Strikingly, our findings based on the ANN method reveal that there is a possibility of financial distress or a financial crisis in Turkey starting from October 2017. The results which were obtained with the method of VAR also support the results of ANN method. Additionally, our results indicate that the ANN approach has more superior prediction performance than the VAR method. PMID:26550010
Directory of Open Access Journals (Sweden)
MK Hasan
2014-06-01
Full Text Available The study estimated the benefits of and rates of return to investment in turmeric research and development in Bangladesh. The economic surplus model with ex-post analysis was used to determine the returns to investment and their distribution between production and consumption. Several discounting techniques were also used to assess the efficiency of turmeric research. The adoption rate showed an increasing trend over the period. The yield of BARI-developed modern varieties of turmeric was 41 to 73% higher than that of the local variety. Society obtained a net benefit of Tk. 9333.88 million from the investment in turmeric research and extension. The net present value (NPV) and present value of research cost (PVRC) were estimated at Tk. 1200.84 and 157.88, respectively. The internal rate of return (IRR) and benefit-cost ratio (BCR) were estimated to be 68% and 10.45, respectively, indicating that investment in turmeric research and development was a good and profitable investment. A seed production programme for turmeric should be undertaken at scale to increase production by expanding area adoption.
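The discounting measures named in the abstract (NPV, IRR, BCR) can be sketched generically as below; the cashflows in the test are made-up illustrative numbers, not the study's turmeric data:

```python
def npv(rate, cashflows):
    """Net present value of yearly cashflows, year 0 first."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cashflows))

def irr(cashflows, lo=0.0, hi=10.0, tol=1e-9):
    """Internal rate of return by bisection.  Assumes a conventional
    pattern (outlays first, benefits later) so NPV crosses zero once."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if npv(mid, cashflows) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

def bcr(rate, benefits, costs):
    """Benefit-cost ratio: present value of benefits over present
    value of costs, at the chosen discount rate."""
    return npv(rate, benefits) / npv(rate, costs)
```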
Directory of Open Access Journals (Sweden)
Frieder Kleefeld
2013-01-01
Full Text Available According to some generalized correspondence principle the classical limit of a non-Hermitian quantum theory describing quantum degrees of freedom is expected to be the well known classical mechanics of classical degrees of freedom in the complex phase space, i.e., some phase space spanned by complex-valued space and momentum coordinates. As special relativity was developed by Einstein merely for real-valued space-time and four-momentum, we will try to understand how special relativity and covariance can be extended to complex-valued space-time and four-momentum. Our considerations will lead us not only to some unconventional derivation of Lorentz transformations for complex-valued velocities, but also to the non-Hermitian Klein-Gordon and Dirac equations, which are to lay the foundations of a non-Hermitian quantum theory.
Data Selection for Within-Class Covariance Estimation
2016-09-08
covariance matrix training data collection in real-world applications. Index Terms: channel compensation, i-vectors, within-class covariance. [...] normalized to have zero mean and unit variance. Finally, the utterance feature vectors were converted to i-vectors using a 2048-order Universal Background Model (UBM) and a rank-600 total variability (T) matrix. The estimated within-class covariance matrix was computed via [1].
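The within-class covariance estimation referred to above can be sketched as the standard pooled per-class estimator; this is my own generic illustration, not the report's actual pipeline:

```python
import numpy as np

def within_class_covariance(vectors, labels):
    """Pooled within-class covariance of i-vectors: average the outer
    products of each vector's deviation from its own class (e.g.
    speaker) mean, pooled over all classes."""
    vectors = np.asarray(vectors, dtype=float)
    labels = np.asarray(labels)
    d = vectors.shape[1]
    W = np.zeros((d, d))
    n = 0
    for c in np.unique(labels):
        X = vectors[labels == c]
        Xc = X - X.mean(axis=0)      # deviations from the class mean
        W += Xc.T @ Xc
        n += len(X)
    return W / n
```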
Deriving covariant holographic entanglement
Energy Technology Data Exchange (ETDEWEB)
Dong, Xi [School of Natural Sciences, Institute for Advanced Study, Princeton, NJ 08540 (United States); Lewkowycz, Aitor [Jadwin Hall, Princeton University, Princeton, NJ 08544 (United States); Rangamani, Mukund [Center for Quantum Mathematics and Physics (QMAP), Department of Physics, University of California, Davis, CA 95616 (United States)
2016-11-07
We provide a gravitational argument in favour of the covariant holographic entanglement entropy proposal. In general time-dependent states, the proposal asserts that the entanglement entropy of a region in the boundary field theory is given by a quarter of the area of a bulk extremal surface in Planck units. The main element of our discussion is an implementation of an appropriate Schwinger-Keldysh contour to obtain the reduced density matrix (and its powers) of a given region, as is relevant for the replica construction. We map this contour into the bulk gravitational theory, and argue that the saddle point solutions of these replica geometries lead to a consistent prescription for computing the field theory Rényi entropies. In the limiting case where the replica index is taken to unity, a local analysis suffices to show that these saddles lead to the extremal surfaces of interest. We also comment on various properties of holographic entanglement that follow from this construction.
Directory of Open Access Journals (Sweden)
Xiangdong Chen
2017-09-01
Full Text Available The Second Board Market is typical stock market for high tech companies in China. This paper discusses the relationship between trading volume and price changes in the case of high-tech listed companies in the Chinese Second-Board Stock Market. By using the basic concepts proposed by Kim and Verrecchia, and Kandel and Pearson, and contrasting them with ex-post information from earnings releases, the paper provides findings on the speculative behavior of informed traders with a volume shock premium. The paper suggests that these methods may be further applied to investigating investors’ behavior in speculation, especially for the high-tech-company-based Second-Board Stock Market during announcement periods.
Covariant quantum Markovian evolutions
Holevo, A. S.
1996-04-01
Quantum Markovian master equations with generally unbounded generators, having physically relevant symmetries, such as Weyl, Galilean or boost covariance, are characterized. It is proven in particular that a fully Galilean covariant zero spin Markovian evolution reduces to the free motion perturbed by a covariant stochastic process with independent stationary increments in the classical phase space. A general form of the boost covariant Markovian master equation is discussed and a formal dilation to the Langevin equation driven by quantum Boson noises is described.
Carreño, M.L.
2006-01-01
The objectives of this thesis are: the ex-ante seismic risk evaluation for urban centers, the disaster risk management evaluation, and the ex-post risk evaluation of damaged buildings after an earthquake. A complete review of the basic concepts and of the most important recent works in these fields is presented. These aspects are basic for the development of the new ex-ante and ex-post seismic risk evaluation approaches proposed in this thesis and for the evaluation of the effecti...
A covariance NMR toolbox for MATLAB and OCTAVE.
Short, Timothy; Alzapiedi, Leigh; Brüschweiler, Rafael; Snyder, David
2011-03-01
The Covariance NMR Toolbox is a new software suite that provides a streamlined implementation of covariance-based analysis of multi-dimensional NMR data. The Covariance NMR Toolbox uses the MATLAB or, alternatively, the freely available GNU OCTAVE computer language, providing a user-friendly environment in which to apply and explore covariance techniques. Covariance methods implemented in the toolbox described here include direct and indirect covariance processing, 4D covariance, generalized indirect covariance (GIC), and Z-matrix transform. In order to provide compatibility with a wide variety of spectrometer and spectral analysis platforms, the Covariance NMR Toolbox uses the NMRPipe format for both input and output files. Additionally, datasets small enough to fit in memory are stored as arrays that can be displayed and further manipulated in a versatile manner within MATLAB or OCTAVE. Copyright © 2010 Elsevier Inc. All rights reserved.
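The direct covariance processing mentioned in the abstract can be sketched in numpy; this is a minimal illustration of the core matrix-square-root step only (the toolbox itself handles NMRPipe I/O, indirect covariance, GIC and the Z-matrix transform):

```python
import numpy as np

def direct_covariance(F):
    """Direct covariance processing of a 2D data matrix F: the
    covariance spectrum is the matrix square root of F^T F, which
    yields a symmetric spectrum over the direct dimension."""
    C2 = np.asarray(F, dtype=float).T @ F
    # Matrix square root via eigendecomposition (C2 is symmetric PSD);
    # clip tiny negative eigenvalues arising from round-off.
    w, V = np.linalg.eigh(C2)
    return (V * np.sqrt(np.clip(w, 0.0, None))) @ V.T
```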
Multivariate covariance generalized linear models
DEFF Research Database (Denmark)
Bonat, W. H.; Jørgensen, Bent
2016-01-01
We propose a general framework for non-normal multivariate data analysis called multivariate covariance generalized linear models, designed to handle multivariate response variables, along with a wide range of temporal and spatial correlation structures defined in terms of a covariance link function combined with a matrix linear predictor involving known matrices. The method is motivated by three data examples that are not easily handled by existing methods. The first example concerns multivariate count data, the second involves response variables of mixed types, combined with repeated measures and longitudinal structures, and the third involves a spatiotemporal analysis of rainfall data. The models take non-normality into account in the conventional way by means of a variance function, and the mean structure is modelled by means of a link function and a linear predictor. The models...
Energy Technology Data Exchange (ETDEWEB)
Kawano, Toshihiko [Kyushu Univ., Fukuoka (Japan); Shibata, Keiichi
1997-09-01
A covariance evaluation system for the evaluated nuclear data library was established. The parameter estimation method and the least squares method with a spline function are used to generate the covariance data. Uncertainties of nuclear reaction model parameters are estimated from experimental data uncertainties, then the covariance of the evaluated cross sections is calculated by means of error propagation. Computer programs ELIESE-3, EGNASH4, ECIS, and CASTHY are used. Covariances of {sup 238}U reaction cross sections were calculated with this system. (author)
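The error-propagation step described above can be sketched generically with the sandwich rule; this is only the textbook formula, not the cited codes (ELIESE-3, EGNASH4, ECIS, CASTHY):

```python
import numpy as np

def propagate_covariance(J, param_cov):
    """Sandwich rule for error propagation: if the evaluated cross
    sections are sigma = f(p) with parameter covariance C_p, and J is
    the sensitivity matrix d sigma_i / d p_j, then the cross-section
    covariance is J C_p J^T."""
    J = np.asarray(J, dtype=float)
    return J @ np.asarray(param_cov, dtype=float) @ J.T
```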
Bergshoeff, E.; Pope, C.N.; Stelle, K.S.
1990-01-01
We discuss the notion of higher-spin covariance in w∞ gravity. We show how a recently proposed covariant w∞ gravity action can be obtained from non-chiral w∞ gravity by making field redefinitions that introduce new gauge-field components with corresponding new gauge transformations.
Covariant perturbation theory and chiral superpropagators
Ecker, G
1972-01-01
The authors use a covariant formulation of perturbation theory for the non-linear chiral invariant pion model to define chiral superpropagators leading to S-matrix elements which are independent of the choice of the pion field coordinates. The relation to the standard definition of chiral superpropagators is discussed. (11 refs).
Imposed quasi-normality in covariance structure analysis
Koning, Ruud H.; Neudecker, H.; Wansbeek, T.
1993-01-01
In the analysis of covariance structures, the distance between an observed covariance matrix S of order k x k and C(θ) ≡ E(S) is minimized by searching over the θ-space. The criterion leading to a best asymptotically normal (BAN) estimator of θ is found by minimizing the difference between vec S and
A three domain covariance framework for EEG/MEG data
Ros, B.P.; Bijma, F.; de Gunst, M.C.M.; de Munck, J.C.
2015-01-01
In this paper we introduce a covariance framework for the analysis of single subject EEG and MEG data that takes into account observed temporal stationarity on small time scales and trial-to-trial variations. We formulate a model for the covariance matrix, which is a Kronecker product of three
Progress on Nuclear Data Covariances: AFCI-1.2 Covariance Library
Energy Technology Data Exchange (ETDEWEB)
Oblozinsky,P.; Oblozinsky,P.; Mattoon,C.M.; Herman,M.; Mughabghab,S.F.; Pigni,M.T.; Talou,P.; Hale,G.M.; Kahler,A.C.; Kawano,T.; Little,R.C.; Young,P.G
2009-09-28
Improved neutron cross section covariances were produced for 110 materials including 12 light nuclei (coolants and moderators), 78 structural materials and fission products, and 20 actinides. Improved covariances were organized into AFCI-1.2 covariance library in 33-energy groups, from 10{sup -5} eV to 19.6 MeV. BNL contributed improved covariance data for the following materials: {sup 23}Na and {sup 55}Mn where more detailed evaluation was done; improvements in major structural materials {sup 52}Cr, {sup 56}Fe and {sup 58}Ni; improved estimates for remaining structural materials and fission products; improved covariances for 14 minor actinides, and estimates of mubar covariances for {sup 23}Na and {sup 56}Fe. LANL contributed improved covariance data for {sup 235}U and {sup 239}Pu including prompt neutron fission spectra and completely new evaluation for {sup 240}Pu. New R-matrix evaluation for {sup 16}O including mubar covariances is under completion. BNL assembled the library and performed basic testing using improved procedures including inspection of uncertainty and correlation plots for each material. The AFCI-1.2 library was released to ANL and INL in August 2009.
Forecasting Covariance Matrices: A Mixed Frequency Approach
DEFF Research Database (Denmark)
Halbleib, Roxana; Voev, Valeri
This paper proposes a new method for forecasting covariance matrices of financial returns. The model mixes volatility forecasts from a dynamic model of daily realized volatilities estimated with high-frequency data with correlation forecasts based on daily data. This new approach allows for flexible covariance matrix dynamics. Our empirical results show that the new mixing approach provides superior forecasts compared to multivariate volatility specifications using single sources of information.
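The decomposition underlying the mixing approach can be sketched as follows. This is only the generic volatility-correlation split Sigma = D R D, not the paper's specific realized-volatility and correlation forecasting models:

```python
import numpy as np

def mixed_covariance_forecast(vol_forecasts, corr_forecast):
    """Combine per-asset volatility forecasts (e.g. from a dynamic
    model of daily realized volatilities) with a correlation matrix
    forecast from daily data: Sigma = D R D, D = diag(vol_forecasts)."""
    D = np.diag(np.asarray(vol_forecasts, dtype=float))
    return D @ np.asarray(corr_forecast, dtype=float) @ D
```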
Covariate analysis of bivariate survival data
Energy Technology Data Exchange (ETDEWEB)
Bennett, L.E.
1992-01-01
The methods developed are used to analyze the effects of covariates on bivariate survival data when censoring and ties are present. The proposed method provides models for bivariate survival data that include differential covariate effects and censored observations. The proposed models are based on an extension of the univariate Buckley-James estimators which replace censored data points by their expected values, conditional on the censoring time and the covariates. For the bivariate situation, it is necessary to determine the expectation of the failure times for one component conditional on the failure or censoring time of the other component. Two different methods have been developed to estimate these expectations. In the semiparametric approach these expectations are determined from a modification of Burke's estimate of the bivariate empirical survival function. In the parametric approach censored data points are also replaced by their conditional expected values where the expected values are determined from a specified parametric distribution. The model estimation will be based on the revised data set, comprised of uncensored components and expected values for the censored components. The variance-covariance matrix for the estimated covariate parameters has also been derived for both the semiparametric and parametric methods. Data from the Demographic and Health Survey was analyzed by these methods. The two outcome variables are post-partum amenorrhea and breastfeeding; education and parity were used as the covariates. Both the covariate parameter estimates and the variance-covariance estimates for the semiparametric and parametric models will be compared. In addition, a multivariate test statistic was used in the semiparametric model to examine contrasts. The significance of the statistic was determined from a bootstrap distribution of the test statistic.
Cross-covariance functions for multivariate geostatistics
Genton, Marc G.
2015-05-01
Continuously indexed datasets with multiple variables have become ubiquitous in the geophysical, ecological, environmental and climate sciences, and pose substantial analysis challenges to scientists and statisticians. For many years, scientists developed models that aimed at capturing the spatial behavior for an individual process; only within the last few decades has it become commonplace to model multiple processes jointly. The key difficulty is in specifying the cross-covariance function, that is, the function responsible for the relationship between distinct variables. Indeed, these cross-covariance functions must be chosen to be consistent with marginal covariance functions in such a way that the second-order structure always yields a nonnegative definite covariance matrix. We review the main approaches to building cross-covariance models, including the linear model of coregionalization, convolution methods, the multivariate Matérn and nonstationary and space-time extensions of these among others. We additionally cover specialized constructions, including those designed for asymmetry, compact support and spherical domains, with a review of physics-constrained models. We illustrate select models on a bivariate regional climate model output example for temperature and pressure, along with a bivariate minimum and maximum temperature observational dataset; we compare models by likelihood value as well as via cross-validation co-kriging studies. The article closes with a discussion of unsolved problems. © Institute of Mathematical Statistics, 2015.
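One of the constructions reviewed above, the linear model of coregionalization, can be sketched with the two Matérn smoothnesses that have closed forms; a hedged illustration of why the construction is nonnegative definite by design, not code from the article:

```python
import numpy as np

def matern_half(h, ell):        # Matern correlation, smoothness 1/2
    return np.exp(-h / ell)

def matern_three_half(h, ell):  # Matern correlation, smoothness 3/2
    z = np.sqrt(3.0) * h / ell
    return (1.0 + z) * np.exp(-z)

def lmc_covariance(sites, A, kernels, ells):
    """Linear model of coregionalization for p variables at n sites:
    C = sum_k (a_k a_k^T) kron R_k, with each R_k a Matern correlation
    matrix -- a nonnegative definite cross-covariance by construction."""
    sites = np.asarray(sites, dtype=float)
    H = np.linalg.norm(sites[:, None, :] - sites[None, :, :], axis=-1)
    p, n = A.shape[0], len(sites)
    C = np.zeros((p * n, p * n))
    for k, (kern, ell) in enumerate(zip(kernels, ells)):
        C += np.kron(np.outer(A[:, k], A[:, k]), kern(H, ell))
    return C
```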
Covariant approximation averaging
Shintani, Eigo; Blum, Thomas; Izubuchi, Taku; Jung, Chulwoo; Lehner, Christoph
2014-01-01
We present a new class of statistical error reduction techniques for Monte-Carlo simulations. Using covariant symmetries, we show that correlation functions can be constructed from inexpensive approximations without introducing any systematic bias in the final result. We introduce a new class of covariant approximation averaging techniques, known as all-mode averaging (AMA), in which the approximation takes account of contributions of all eigenmodes through the inverse of the Dirac operator computed from the conjugate gradient method with a relaxed stopping condition. In this paper we compare the performance and computational cost of our new method with traditional methods using correlation functions and masses of the pion, nucleon, and vector meson in $N_f=2+1$ lattice QCD using domain-wall fermions. This comparison indicates that AMA significantly reduces statistical errors in Monte-Carlo calculations over conventional methods for the same cost.
Covariant approximation averaging
Shintani, Eigo; Arthur, Rudy; Blum, Thomas; Izubuchi, Taku; Jung, Chulwoo; Lehner, Christoph
2015-06-01
We present a new class of statistical error reduction techniques for Monte Carlo simulations. Using covariant symmetries, we show that correlation functions can be constructed from inexpensive approximations without introducing any systematic bias in the final result. We introduce a new class of covariant approximation averaging techniques, known as all-mode averaging (AMA), in which the approximation takes account of contributions of all eigenmodes through the inverse of the Dirac operator computed from the conjugate gradient method with a relaxed stopping condition. In this paper we compare the performance and computational cost of our new method with traditional methods using correlation functions and masses of the pion, nucleon, and vector meson in Nf=2 +1 lattice QCD using domain-wall fermions. This comparison indicates that AMA significantly reduces statistical errors in Monte Carlo calculations over conventional methods for the same cost.
A cautionary note on generalized linear models for covariance of unbalanced longitudinal data
Huang, Jianhua Z.
2012-03-01
Missing data in longitudinal studies can create enormous challenges in data analysis when coupled with the positive-definiteness constraint on a covariance matrix. For complete balanced data, the Cholesky decomposition of a covariance matrix makes it possible to remove the positive-definiteness constraint and use a generalized linear model setup to jointly model the mean and covariance using covariates (Pourahmadi, 2000). However, this approach may not be directly applicable when the longitudinal data are unbalanced, as coherent regression models for the dependence across all times and subjects may not exist. Within the existing generalized linear model framework, we show how to overcome this and other challenges by embedding the covariance matrix of the observed data for each subject in a larger covariance matrix and employing the familiar EM algorithm to compute the maximum likelihood estimates of the parameters and their standard errors. We illustrate and assess the methodology using real data sets and simulations. © 2011 Elsevier B.V.
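The Cholesky-based reparameterization cited above (Pourahmadi, 2000) can be sketched for a single complete covariance matrix; the paper's actual contribution, embedding unbalanced data in a larger covariance matrix with an EM algorithm, is not reproduced here:

```python
import numpy as np

def modified_cholesky(sigma):
    """Decompose sigma = L diag(d) L^T with L unit lower triangular.
    In the GLM setup, the strictly-lower entries of L (equivalently
    of its inverse) act as autoregressive coefficients and d holds
    positive innovation variances -- both free of the
    positive-definiteness constraint, hence modellable with covariates."""
    C = np.linalg.cholesky(sigma)   # sigma = C C^T
    diag = np.diag(C)
    return C / diag, diag ** 2      # (L, d)
```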
Covariant Magnetic Connection Hypersurfaces
Pegoraro, F
2016-01-01
In the single-fluid, nonrelativistic, ideal-magnetohydrodynamic (MHD) plasma description, magnetic field lines play a fundamental role by defining dynamically preserved "magnetic connections" between plasma elements. Here we show how the concept of magnetic connection needs to be generalized in the case of a relativistic MHD description where we require covariance under arbitrary Lorentz transformations. This is performed by defining 2-D magnetic connection hypersurfaces in the 4-D Minkowski space. This generalization accounts for the loss of simultaneity between spatially separated events in different frames and is expected to provide a powerful insight into the 4-D geometry of electromagnetic fields when E · B = 0.
Directory of Open Access Journals (Sweden)
M. Elvira Mendez-Pinedo
2016-10-01
Full Text Available Indexation of credit to inflation (ex-post) is a unique legal practice in Iceland based on the valorism theory of money vs. nominalism. Two rulings issued in 2014 by the EFTA Court try to clarify the legality and fairness of this particular price-variation clause under the European Economic Area consumer credit acquis. The study summarizes the rulings and critically analyses the interpretation provided by the court. It argues that the judgements defy the logic of non-contradiction, since indexation of credit proves to be an impossible oxymoron under EU/EEA law. The results are confusing. On one hand, cost of credit and usury practices tend to fall outside the scope of European harmonisation (provided the disclosure obligation of cost of credit and transparency ex-ante are respected). A fairness control is thus dependent on national and case circumstances to be assessed by domestic courts. On the other hand, European rules also impose, with no derogations, that the cost of indexation of credit to inflation is disclosed in a transparent way and calculated ex-ante. The paradox is there: since indexation of credit operates ex-post on the basis of real inflation, it is impossible to disclose it ex-ante in a transparent way. The findings of the study help to understand the situation of impasse in Iceland. Without a clear interpretation from the EFTA Court, the saga has continued at the national level and will probably head for a second round of assessment at the European level.
Earth Observing System Covariance Realism
Zaidi, Waqar H.; Hejduk, Matthew D.
2016-01-01
The purpose of covariance realism is to properly size a primary object's covariance in order to add validity to the calculation of the probability of collision. The covariance realism technique in this paper consists of three parts: collection/calculation of definitive state estimates through orbit determination, calculation of covariance realism test statistics at each covariance propagation point, and proper assessment of those test statistics. An empirical cumulative distribution function (ECDF) Goodness-of-Fit (GOF) method is employed to determine if a covariance is properly sized by comparing the empirical distribution of Mahalanobis distance calculations to the hypothesized parent 3-DoF chi-squared distribution. To realistically size a covariance for collision probability calculations, this study uses a state noise compensation algorithm that adds process noise to the definitive epoch covariance to account for uncertainty in the force model. Process noise is added until the GOF tests pass a group significance level threshold. The results of this study indicate that when outliers attributed to persistently high or extreme levels of solar activity are removed, the aforementioned covariance realism compensation method produces a tuned covariance with up to 80 to 90% of the covariance propagation timespan passing the GOF tests (against a 60% minimum passing threshold), a quite satisfactory and useful result.
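The test-statistic machinery described above can be sketched generically. This is only an illustration of the Mahalanobis/chi-squared (3-DoF) ECDF comparison with a Kolmogorov-style gap, not the study's actual orbit-determination implementation:

```python
import numpy as np
from math import erf

def mahalanobis_sq(residuals, covariances):
    """Squared Mahalanobis distance of each 3-D residual against its
    propagated covariance."""
    return np.array([r @ np.linalg.solve(P, r)
                     for r, P in zip(residuals, covariances)])

def ecdf_chi2_3dof_gap(m2):
    """Largest gap between the ECDF of the squared Mahalanobis
    distances and the chi-squared 3-DoF CDF, which for k = 3 has the
    closed form F(x) = erf(sqrt(x/2)) - sqrt(2x/pi) * exp(-x/2).
    A realistically sized covariance keeps this gap small."""
    m2 = np.sort(np.asarray(m2, dtype=float))
    ecdf = np.arange(1, len(m2) + 1) / len(m2)
    cdf = np.array([erf(np.sqrt(x / 2.0))
                    - np.sqrt(2.0 * x / np.pi) * np.exp(-x / 2.0)
                    for x in m2])
    return float(np.abs(ecdf - cdf).max())
```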
Energy Technology Data Exchange (ETDEWEB)
Broc, J.S.
2006-12-15
Energy end-use efficiency (EE) is a priority for energy policies, to face resource exhaustion and to reduce pollutant emissions. At the same time, in France, the local level is increasingly involved in the implementation of EE activities, whose framework is changing (energy market liberalization, new policy instruments). Needs for ex-post evaluation of local EE activities are thus increasing, both for regulatory requirements and to support a necessary change of scale. Our thesis focuses on the original issue of the ex-post evaluation of local EE operations in France. The state of the art, through the analysis of the American and European experiences and of the reference guidebooks, provides substantial methodological material and emphasises the key evaluation issues. Concurrently, local EE operations in France are characterized by an analysis of their environment and a work on their segmentation criteria. The combination of these criteria with the key evaluation issues provides an analysis framework used as the basis for the composition of evaluation methods. This also highlights the specific evaluation needs of local operations. A methodology is then developed to complete and adapt the existing material to design evaluation methods for local operations that stakeholders can easily appropriate. Evaluation results thus feed a know-how building process with experience feedback. These methods are to meet two main goals: to determine the operation results, and to detect the success/failure factors. The methodology was validated on concrete cases, where these objectives were reached. (author)
Covariant holography of a tachyonic accelerating universe
Energy Technology Data Exchange (ETDEWEB)
Rozas-Fernandez, Alberto [Consejo Superior de Investigaciones Cientificas, Instituto de Fisica Fundamental, Madrid (Spain); University of Portsmouth, Institute of Cosmology and Gravitation, Portsmouth (United Kingdom)
2014-08-15
We apply the holographic principle to a flat dark energy dominated Friedmann-Robertson-Walker spacetime filled with a tachyon scalar field with constant equation of state w = p/ρ, both for w > -1 and w < -1. By using a geometrical covariant procedure, which allows the construction of holographic hypersurfaces, we have obtained for each case the position of the preferred screen and have then compared these with those obtained by using the holographic dark energy model with the future event horizon as the infrared cutoff. In the phantom scenario, one of the two obtained holographic screens is placed on the big rip hypersurface, both for the covariant holographic formalism and the holographic phantom model. It is also analyzed whether the existence of these preferred screens allows a mathematically consistent formulation of fundamental theories based on the existence of an S-matrix at infinite distances. (orig.)
On spectral distribution of high dimensional covariation matrices
DEFF Research Database (Denmark)
Heinrich, Claudio; Podolskij, Mark
In this paper we present the asymptotic theory for spectral distributions of high dimensional covariation matrices of Brownian diffusions. More specifically, we consider N-dimensional Itô integrals with time varying matrix-valued integrands. We observe n equidistant high frequency data points...... of the underlying Brownian diffusion and we assume that N/n → c ∈ (0, ∞). We show that under a certain mixed spectral moment condition the spectral distribution of the empirical covariation matrix converges in distribution almost surely. Our proof relies on the method of moments and applications of graph theory....
General Galilei Covariant Gaussian Maps
Gasbarri, Giulio; Toroš, Marko; Bassi, Angelo
2017-09-01
We characterize general non-Markovian Gaussian maps which are covariant under Galilean transformations. In particular, we consider translational and Galilean covariant maps and show that they reduce to the known Holevo result in the Markovian limit. We apply the results to discuss measures of macroscopicity based on classicalization maps, specifically addressing dissipation, Galilean covariance and non-Markovianity. We further suggest a possible generalization of the macroscopicity measure defined by Nimmrichter and Hornberger [Phys. Rev. Lett. 110, 16 (2013)].
Directory of Open Access Journals (Sweden)
FAİK BİLGİLİ
2013-06-01
Full Text Available The aim of this study is to compare the accuracy of the ex post forecasts of the VAR, ARIMA, ES, Combining and Add-factor methods. In this comparison, the ex post forecasts for 2000:1-2000:4 are obtained using data on Turkish private consumption for the period 1987:1-1999:4. Besides private consumption, for the VAR method, Turkish GDP data is employed for the same periods. Seasonality and stationarity analyses are then run for these two series. The series are seasonally adjusted by the additive decomposition method and found to be I(1). In the following steps, the ex post forecast models of these methods are established. Forecast outputs are evaluated by the criteria of MAE, MAPE, MSE, RMSE and Theil U. In conclusion, the combining model of VAR-ES is found to be the best among the others.
Bayesian source term determination with unknown covariance of measurements
Belal, Alkomiet; Tichý, Ondřej; Šmídl, Václav
2017-04-01
Determination of a source term of release of a hazardous material into the atmosphere is a very important task for emergency response. We are concerned with the problem of estimation of the source term in the conventional linear inverse problem, y = Mx, where the relationship between the vector of observations y and the unknown source term x is described using the source-receptor-sensitivity (SRS) matrix M. Since the system is typically ill-conditioned, the problem is recast as an optimization problem, min_{R,B} (y - Mx)^T R^{-1} (y - Mx) + x^T B^{-1} x. The first term minimizes the error of the measurements with covariance matrix R, and the second term is a regularization of the source term. Different types of regularization arise for different choices of the matrices R and B; for example, Tikhonov regularization assumes the covariance matrix B to be the identity matrix multiplied by a scalar parameter. In this contribution, we adopt a Bayesian approach to make inference on the unknown source term x as well as the unknown R and B. We assume the prior on x to be Gaussian with zero mean and unknown diagonal covariance matrix B. The covariance matrix of the likelihood, R, is also unknown. We consider two potential choices of the structure of the matrix R. The first is a diagonal matrix and the second is a locally correlated structure using information on the topology of the measuring network. Since the inference of the model is intractable, an iterative variational Bayes algorithm is used for simultaneous estimation of all model parameters. The practical usefulness of our contribution is demonstrated on an application of the resulting algorithm to real data from the European Tracer Experiment (ETEX). This research is supported by EEA/Norwegian Financial Mechanism under project MSMT-28477/2014 Source-Term Determination of Radionuclide Releases by Inverse Atmospheric Dispersion Modelling (STRADI).
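For fixed R and B, the inner minimization over x in the objective above has a well-known closed form. A minimal numpy sketch (sizes and data below are illustrative, not from the ETEX application):

```python
import numpy as np

def regularized_source_term(M, y, R, B):
    """Minimizer over x of (y - Mx)^T R^{-1} (y - Mx) + x^T B^{-1} x
    for fixed R and B: x* = (M^T R^{-1} M + B^{-1})^{-1} M^T R^{-1} y."""
    Rinv = np.linalg.inv(R)
    Binv = np.linalg.inv(B)
    return np.linalg.solve(M.T @ Rinv @ M + Binv, M.T @ Rinv @ y)

# Illustrative data; B = (large scalar) * I mimics a weak, nearly flat prior on x.
rng = np.random.default_rng(0)
M = rng.standard_normal((20, 5))       # stand-in for an SRS matrix
x_true = np.array([1.0, 0.0, 2.0, 0.0, 0.5])
y = M @ x_true                         # noiseless observations
x_hat = regularized_source_term(M, y, np.eye(20), 1e6 * np.eye(5))
```

With a weak prior the estimate reduces to ordinary least squares; shrinking B toward zero pulls x_hat toward zero, which is the Tikhonov-type behavior the abstract describes.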
A three domain covariance framework for EEG/MEG data.
Roś, Beata P; Bijma, Fetsje; de Gunst, Mathisca C M; de Munck, Jan C
2015-10-01
In this paper we introduce a covariance framework for the analysis of single subject EEG and MEG data that takes into account observed temporal stationarity on small time scales and trial-to-trial variations. We formulate a model for the covariance matrix, which is a Kronecker product of three components that correspond to space, time and epochs/trials, and consider maximum likelihood estimation of the unknown parameter values. An iterative algorithm that finds approximations of the maximum likelihood estimates is proposed. Our covariance model is applicable in a variety of cases where spontaneous EEG or MEG acts as a source of noise and realistic noise covariance estimates are needed, such as in evoked activity studies, or where the properties of spontaneous EEG or MEG are themselves the topic of interest, like in combined EEG-fMRI experiments in which the correlation between EEG and fMRI signals is investigated. We use a simulation study to assess the performance of the estimator and investigate the influence of different assumptions about the covariance factors on the estimated covariance matrix and on its components. We apply our method to real EEG and MEG data sets. Copyright © 2015 Elsevier Inc. All rights reserved.
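The three-domain Kronecker structure described above can be sketched in a few lines of numpy; the small factor matrices below are illustrative assumptions, not estimates from EEG/MEG data:

```python
import numpy as np

def lag_matrix(n):
    """Matrix of absolute index differences |i - j|, used to build the factors."""
    idx = np.arange(n)
    return np.abs(idx[:, None] - idx[None, :])

# Hypothetical factors: 3 sensors (space), 4 samples (time), 2 epochs/trials.
S = np.exp(-0.7 * lag_matrix(3))             # spatial factor: exponential decay
T = 0.8 ** lag_matrix(4)                     # temporal factor: AR(1)-type
E = 0.8 * np.eye(2) + 0.2 * np.ones((2, 2))  # epoch factor: compound symmetry

# Covariance of the vectorized data is the Kronecker product of the three factors.
C = np.kron(E, np.kron(T, S))
```

Because each factor is positive definite, the Kronecker product is too, so C is a valid covariance matrix of size (3 x 4 x 2) squared.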
A simple procedure for the comparison of covariance matrices
2012-01-01
Background Comparing the covariation patterns of populations or species is a basic step in the evolutionary analysis of quantitative traits. Here I propose a new, simple method to make this comparison in two population samples that is based on comparing the variance explained in each sample by the eigenvectors of its own covariance matrix with that explained by the covariance matrix eigenvectors of the other sample. The rationale of this procedure is that the matrix eigenvectors of two similar samples would explain similar amounts of variance in the two samples. I use computer simulation and morphological covariance matrices from the two morphs in a marine snail hybrid zone to show how the proposed procedure can be used to measure the contribution of the matrices' orientation and shape to the overall differentiation. Results I show how this procedure can detect even modest differences between matrices calculated with moderately sized samples, and how it can be used as the basis for more detailed analyses of the nature of these differences. Conclusions The new procedure constitutes a useful resource for the comparison of covariance matrices. It could fill the gap between procedures resulting in a single, overall measure of differentiation, and analytical methods based on multiple model comparison not providing such a measure. PMID:23171139
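The core comparison, variance explained in each sample by its own versus the other sample's covariance eigenvectors, might be sketched as follows (a simplified reading of the procedure; function and key names are ours):

```python
import numpy as np

def variance_explained(cov, vectors, k):
    """Fraction of total variance in `cov` captured by the first k column `vectors`."""
    v = vectors[:, :k]
    return np.trace(v.T @ cov @ v) / np.trace(cov)

def cross_compare(c1, c2, k):
    """Variance each sample's leading eigenvectors explain in its own
    covariance matrix versus in the other sample's matrix."""
    e1 = np.linalg.eigh(c1)[1][:, ::-1]  # columns sorted by descending eigenvalue
    e2 = np.linalg.eigh(c2)[1][:, ::-1]
    return {"own1": variance_explained(c1, e1, k),
            "cross1": variance_explained(c1, e2, k),
            "own2": variance_explained(c2, e2, k),
            "cross2": variance_explained(c2, e1, k)}

# Identical matrices: own and cross values coincide, as the rationale predicts.
res = cross_compare(np.diag([3.0, 2.0, 1.0]), np.diag([3.0, 2.0, 1.0]), k=2)
```

For dissimilar matrices the cross values drop below the own values, and the size of the drop measures the differentiation.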
Extreme eigenvalues of sample covariance and correlation matrices
DEFF Research Database (Denmark)
Heiny, Johannes
of the problem at hand. We develop a theory for the point process of the normalized eigenvalues of the sample covariance matrix in the case where rows and columns of the data are linearly dependent. Based on the weak convergence of this point process we derive the limit laws of various functionals......This thesis is concerned with asymptotic properties of the eigenvalues of high-dimensional sample covariance and correlation matrices under an infinite fourth moment of the entries. In the first part, we study the joint distributional convergence of the largest eigenvalues of the sample covariance...... matrix of a p-dimensional heavy-tailed time series when p converges to infinity together with the sample size n. We generalize the growth rates of p existing in the literature. Assuming a regular variation condition with tail index
Extreme eigenvalues of sample covariance and correlation matrices
DEFF Research Database (Denmark)
Heiny, Johannes
2017-01-01
dimension of the problem at hand. We develop a theory for the point process of the normalized eigenvalues of the sample covariance matrix in the case where rows and columns of the data are linearly dependent. Based on the weak convergence of this point process we derive the limit laws of various functionals......This thesis is concerned with asymptotic properties of the eigenvalues of high-dimensional sample covariance and correlation matrices under an infinite fourth moment of the entries. In the first part, we study the joint distributional convergence of the largest eigenvalues of the sample covariance...... matrix of a $p$-dimensional heavy-tailed time series when $p$ converges to infinity together with the sample size $n$. We generalize the growth rates of $p$ existing in the literature. Assuming a regular variation condition with tail index $\alpha$
Sparse reduced-rank regression with covariance estimation
Chen, Lisha
2014-12-08
Improving the prediction performance of multiple response regression compared with separate linear regressions is a challenging question. On the one hand, it is desirable to seek model parsimony when facing a large number of parameters. On the other hand, for certain applications it is necessary to take into account the general covariance structure for the errors of the regression model. We assume a reduced-rank regression model and work with the likelihood function with general error covariance to achieve both objectives. In addition, we propose to select relevant variables for reduced-rank regression by using a sparsity-inducing penalty, and to estimate the error covariance matrix simultaneously by using a similar penalty on the precision matrix. We develop a numerical algorithm to solve the penalized regression problem. In a simulation study and real data analysis, the new method is compared with two recent methods for multivariate regression and exhibits competitive performance in prediction and variable selection.
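As background for the model above, the classical unpenalized reduced-rank regression step (identity error covariance) can be sketched as follows; the paper's method adds sparsity penalties and a general error covariance on top of this basic structure:

```python
import numpy as np

def reduced_rank_regression(X, Y, rank):
    """Classical reduced-rank solution: project the OLS coefficient matrix
    onto the top right singular directions of the fitted values.
    (A sketch for identity error covariance only.)"""
    B_ols, *_ = np.linalg.lstsq(X, Y, rcond=None)
    _, _, Vt = np.linalg.svd(X @ B_ols, full_matrices=False)
    P = Vt[:rank].T @ Vt[:rank]     # rank-constrained projector in response space
    return B_ols @ P

# Noiseless rank-1 example: the rank-1 fit recovers the true coefficients.
rng = np.random.default_rng(1)
X = rng.standard_normal((30, 4))
B_true = np.outer([1.0, 2.0, 0.0, 1.0], [1.0, -1.0])  # rank-1 coefficient matrix
Y = X @ B_true
B_hat = reduced_rank_regression(X, Y, rank=1)
```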
Modeling the Conditional Covariance between Stock and Bond Returns
P. de Goeij (Peter); W.A. Marquering (Wessel)
2002-01-01
To analyze the intertemporal interaction between the stock and bond market returns, we allow the conditional covariance matrix to vary over time according to a multivariate GARCH model similar to Bollerslev, Engle and Wooldridge (1988). We extend the model such that it allows for
Detection of fungal damaged popcorn using image property covariance features
Covariance-matrix-based features were applied to the detection of popcorn infected by a fungus that causes a symptom called “blue-eye.” This infection of popcorn kernels causes economic losses because of their poor appearance and the frequently disagreeable flavor of the popped kernels. Images of ker...
Galaxy-galaxy lensing estimators and their covariance properties
Singh, Sukhdeep; Mandelbaum, Rachel; Seljak, Uroš; Slosar, Anže; Vazquez Gonzalez, Jose
2017-11-01
We study the covariance properties of real space correlation function estimators - primarily galaxy-shear correlations, or galaxy-galaxy lensing - using SDSS data for both shear catalogues and lenses (specifically the BOSS LOWZ sample). Using mock catalogues of lenses and sources, we disentangle the various contributions to the covariance matrix and compare them with a simple analytical model. We show that not subtracting the lensing measurement around random points from the measurement around the lens sample is equivalent to performing the measurement using the lens density field instead of the lens overdensity field. While the measurement using the lens density field is unbiased (in the absence of systematics), its error is significantly larger due to an additional term in the covariance. Therefore, this subtraction should be performed regardless of its beneficial effects on systematics. Comparing the error estimates from data and mocks for estimators that involve the overdensity, we find that the errors are dominated by the shape noise and lens clustering, that empirically estimated covariances (jackknife and standard deviation across mocks) are consistent with theoretical estimates, and that both the connected parts of the four-point function and the supersample covariance can be neglected for the current levels of noise. While the trade-off between different terms in the covariance depends on the survey configuration (area, source number density), the diagnostics that we use in this work should be useful for future works to test their empirically determined covariances.
Covariance Models for Hydrological Applications
Hristopulos, Dionissios
2014-05-01
This methodological contribution aims to present some new covariance models with applications in the stochastic analysis of hydrological processes. More specifically, we present explicit expressions for radially symmetric, non-differentiable, Spartan covariance functions in one, two, and three dimensions. The Spartan covariance parameters include a characteristic length, an amplitude coefficient, and a rigidity coefficient which determines the shape of the covariance function. Different expressions are obtained depending on the value of the rigidity coefficient and the dimensionality. If the value of the rigidity coefficient is much larger than one, the Spartan covariance function exhibits multiscaling. Spartan covariance models are more flexible than the classical geostatistical models (e.g., spherical, exponential). Their non-differentiability makes them suitable for modelling the properties of geological media. We also present a family of radially symmetric, infinitely differentiable Bessel-Lommel covariance functions which are valid in any dimension. These models involve combinations of Bessel and Lommel functions. They provide a generalization of the J-Bessel covariance function, and they can be used to model smooth processes with an oscillatory decay of correlations. We discuss the dependence of the integral range of the Spartan and Bessel-Lommel covariance functions on the parameters. We point out that the dependence is not uniquely specified by the characteristic length, unlike the classical geostatistical models. Finally, we define and discuss the use of the generalized spectrum for characterizing different correlation length scales; the spectrum is defined in terms of an exponent α. We show that the spectrum values obtained for exponent values less than one can be used to discriminate between mean-square continuous but non-differentiable random fields. References [1] D. T. Hristopulos and S. Elogne, 2007. Analytic properties and covariance functions of
Tian, Wei; Cai, Li; Thissen, David; Xin, Tao
2013-01-01
In item response theory (IRT) modeling, the item parameter error covariance matrix plays a critical role in statistical inference procedures. When item parameters are estimated using the EM algorithm, the parameter error covariance matrix is not an automatic by-product of item calibration. Cai proposed the use of Supplemented EM algorithm for…
Covariance Estimation in Terms of Stokes Parameters with Application to Vector Sensor Imaging
2016-12-15
Volz, Ryan; Knapp, Mary; Lind, Frank D.; Robey, Frank C. (Lincoln Laboratory, Lexington, MA). Vector sensor imaging presents a challenging problem in covariance estimation when allowing arbitrarily... The vector sensor imaging problem: estimating the magnitude, polarization, and direction of plane wave sources from a sample covariance matrix of vector measurements...
Asymptotic behavior of the likelihood function of covariance matrices of spatial Gaussian processes
DEFF Research Database (Denmark)
Zimmermann, Ralf
2010-01-01
The covariance structure of spatial Gaussian predictors (aka Kriging predictors) is generally modeled by parameterized covariance functions; the associated hyperparameters in turn are estimated via the method of maximum likelihood. In this work, the asymptotic behavior of the maximum likelihood...... of spatial Gaussian predictor models as a function of its hyperparameters is investigated theoretically. Asymptotic sandwich bounds for the maximum likelihood function in terms of the condition number of the associated covariance matrix are established. As a consequence, the main result is obtained...
Dummy covariates in CUB models
Directory of Open Access Journals (Sweden)
Maria Iannario
2013-05-01
Full Text Available In this paper we discuss the use of dummy variables as sensible covariates in a class of statistical models which aim at explaining the subjects’ preferences with respect to several items. After a brief introduction to CUB models, the work considers statistical interpretations of dummy covariates. Then, a simulation study is performed to evaluate the power discrimination of an asymptotic test among sub-populations. Some empirical evidences and concluding remarks end the paper.
Beamforming using subspace estimation from a diagonally averaged sample covariance.
Quijano, Jorge E; Zurk, Lisa M
2017-08-01
The potential benefit of a large-aperture sonar array for high resolution target localization is often challenged by the lack of sufficient data required for adaptive beamforming. This paper introduces a Toeplitz-constrained estimator of the clairvoyant signal covariance matrix corresponding to multiple far-field targets embedded in background isotropic noise. The estimator is obtained by averaging along subdiagonals of the sample covariance matrix, followed by covariance extrapolation using the method of maximum entropy. The sample covariance is computed from limited data snapshots, a situation commonly encountered with large-aperture arrays in environments characterized by short periods of local stationarity. Eigenvectors computed from the Toeplitz-constrained covariance are used to construct signal-subspace projector matrices, which are shown to reduce background noise and improve detection of closely spaced targets when applied to subspace beamforming. Monte Carlo simulations corresponding to increasing array aperture suggest convergence of the proposed projector to the clairvoyant signal projector, thereby outperforming the classic projector obtained from the sample eigenvectors. Beamforming performance of the proposed method is analyzed using simulated data, as well as experimental data from the Shallow Water Array Performance experiment.
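The Toeplitz-constrained averaging step described above might look like this in numpy (a sketch of the subdiagonal averaging only; the maximum-entropy extrapolation and subspace beamforming stages are omitted, and the real-valued symmetric case is assumed):

```python
import numpy as np

def toeplitz_average(sample_cov):
    """Average the sample covariance along its subdiagonals, enforcing the
    Toeplitz structure expected for far-field signals in isotropic noise."""
    n = sample_cov.shape[0]
    # Mean of each subdiagonal, lag k = 0 .. n-1.
    col = np.array([np.diagonal(sample_cov, -k).mean() for k in range(n)])
    i, j = np.indices((n, n))
    return col[np.abs(i - j)]   # real/symmetric case; Hermitian data needs conjugation

# Small symmetric (non-Toeplitz) sample covariance for illustration.
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 4.0, 1.0],
              [0.0, 1.0, 2.0]])
T = toeplitz_average(A)
```

The operation is idempotent: applying it to an already-Toeplitz matrix leaves it unchanged.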
Disintegrating the fly: A mutational perspective on phenotypic integration and covariation.
Haber, Annat; Dworkin, Ian
2017-01-01
The structure of environmentally induced phenotypic covariation can influence the effective strength and magnitude of natural selection. Yet our understanding of the factors that contribute to and influence the evolutionary lability of such covariation is poor. Most studies have either examined environmental variation without accounting for covariation, or examined phenotypic and genetic covariation without distinguishing the environmental component. In this study, we examined the effect of mutational perturbations on different properties of environmental covariation, as well as mean shape. We use strains of Drosophila melanogaster bearing well-characterized mutations known to influence wing shape, as well as naturally derived strains, all reared under carefully controlled conditions and with the same genetic background. We find that mean shape changes more freely than the covariance structure, and that different properties of the covariance matrix change independently from each other. The perturbations affect matrix orientation more than they affect matrix eccentricity or total variance. Yet, mutational effects on matrix orientation do not cluster according to the developmental pathway that they target. These results suggest that it might be useful to consider a more general concept of "decanalization," involving all aspects of variation and covariation. © 2016 The Author(s). Evolution © 2016 The Society for the Study of Evolution.
Visualization and assessment of spatio-temporal covariance properties
Huang, Huang
2017-11-23
Spatio-temporal covariances are important for describing the spatio-temporal variability of underlying random fields in geostatistical data. For second-order stationary random fields, there exist subclasses of covariance functions that assume a simpler spatio-temporal dependence structure with separability and full symmetry. However, it is challenging to visualize and assess separability and full symmetry from spatio-temporal observations. In this work, we propose a functional data analysis approach that constructs test functions using the cross-covariances from time series observed at each pair of spatial locations. These test functions of temporal lags summarize the properties of separability or symmetry for the given spatial pairs. We use functional boxplots to visualize the functional median and the variability of the test functions, where the extent of departure from zero at all temporal lags indicates the degree of non-separability or asymmetry. We also develop a rank-based nonparametric testing procedure for assessing the significance of the non-separability or asymmetry. Essentially, the proposed methods only require the analysis of temporal covariance functions. Thus, a major advantage over existing approaches is that there is no need to estimate any covariance matrix for selected spatio-temporal lags. The performances of the proposed methods are examined by simulations with various commonly used spatio-temporal covariance models. To illustrate our methods in practical applications, we apply them to real datasets, including weather station data and climate model outputs.
Eigenvalues of Covariance Matrix for Two-Source Array Processing
1990-11-01
the need to perform eigen-decompositions. The works by Munier and Delisle [33], Reilly, Chen, and Wong [43], and Friedlander [19] all appeared in 1988...Architectures for Signal Processing III, SPIE vol. 975, Aug 1988, pp. 1 0 1 -1 0 7 . [33] Munier , J. G.Y. Delisle. "A New Algorithm for the
Ocean Spectral Data Assimilation Without Background Error Covariance Matrix
2016-01-01
Chu PC, Wang GH, Chen YC (2002) Japan/East Sea (JES) circulation and thermohaline structure, Part 3, Autocorrelation functions. J Phys Oceanogr 32:3596-3615. Chu PC, Wang GH (2003) Seasonal variability of thermohaline front in the central South China Sea. J Oceanogr 59…
Hipp, J. R.; Ballard, S.; Begnaud, M. L.; Encarnacao, A. V.; Young, C. J.; Phillips, W. S.
2015-12-01
Recently our combined SNL-LANL research team has succeeded in developing a global, seamless 3D tomographic P- and S-velocity model (SALSA3D) that provides superior first P and first S travel time predictions at both regional and teleseismic distances. However, given the variable data quality and uneven data sampling associated with this type of model, it is essential that there be a means to calculate high-quality estimates of the path-dependent variance and covariance associated with the predicted travel times of ray paths through the model. In this paper, we describe a methodology for accomplishing this by exploiting the full model covariance matrix and show examples of path-dependent travel time prediction uncertainty computed from our latest tomographic model. Typical global 3D SALSA3D models have on the order of half a million nodes, so the challenge in calculating the covariance matrix is formidable: 0.9 TB of storage for half of a symmetric matrix, necessitating an out-of-core (OOC) blocked matrix solution technique. With our approach the tomography matrix (G, which includes a prior model covariance constraint) is multiplied by its transpose (G^T G) and written in a blocked sub-matrix fashion. We employ a distributed parallel solution paradigm that solves for (G^T G)^{-1} by assigning blocks to individual processing nodes for matrix decomposition, update and scaling operations. We first find the Cholesky decomposition of G^T G, which is subsequently inverted. Next, we employ OOC matrix multiplication methods to calculate the model covariance matrix from (G^T G)^{-1} and an assumed data covariance matrix. Given the model covariance matrix, we solve for the travel-time covariance associated with arbitrary ray paths by summing the model covariance along both ray paths. Setting the paths equal and taking the square root yields the travel-time prediction uncertainty for the single path.
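The final step, summing the model covariance along two ray paths, amounts to a bilinear form in path sensitivity vectors over the model nodes; a minimal sketch (the weight-vector representation of a ray path is our assumption):

```python
import numpy as np

def travel_time_cov(C, w1, w2):
    """Covariance between two predicted travel times: the bilinear form
    w1^T C w2, where C is the model covariance matrix and w1, w2 hold
    each path's sensitivity to the model nodes it crosses."""
    return float(w1 @ C @ w2)

def travel_time_sigma(C, w):
    """Set the two paths equal and take the square root to get the
    prediction uncertainty for a single path."""
    return np.sqrt(travel_time_cov(C, w, w))

# Toy model covariance (identity) and a path touching two of three nodes.
C = np.eye(3)
sigma = travel_time_sigma(C, np.array([1.0, 1.0, 0.0]))
```

Under an identity model covariance, paths sharing no nodes have zero travel-time covariance, which is the intuition behind the path-pair summation.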
Matrix algebra for higher order moments
Meijer, Erik
2005-01-01
A large part of statistics is devoted to the estimation of models from the sample covariance matrix. The development of the statistical theory and estimators has been greatly facilitated by the introduction of special matrices, such as the commutation matrix and the duplication matrix, and the
Extreme eigenvalues of sample covariance and correlation matrices
DEFF Research Database (Denmark)
Heiny, Johannes
2017-01-01
This thesis is concerned with asymptotic properties of the eigenvalues of high-dimensional sample covariance and correlation matrices under an infinite fourth moment of the entries. In the first part, we study the joint distributional convergence of the largest eigenvalues of the sample covariance...... that the extreme eigenvalues are essentially determined by the extreme order statistics from an array of iid random variables. The asymptotic behavior of the extreme eigenvalues is then derived routinely from classical extreme value theory. The resulting approximations are strikingly simple considering the high...... dimension of the problem at hand. We develop a theory for the point process of the normalized eigenvalues of the sample covariance matrix in the case where rows and columns of the data are linearly dependent. Based on the weak convergence of this point process we derive the limit laws of various functionals...
Extreme eigenvalues of sample covariance and correlation matrices
DEFF Research Database (Denmark)
Heiny, Johannes
This thesis is concerned with asymptotic properties of the eigenvalues of high-dimensional sample covariance and correlation matrices under an infinite fourth moment of the entries. In the first part, we study the joint distributional convergence of the largest eigenvalues of the sample covariance...... eigenvalues are essentially determined by the extreme order statistics from an array of iid random variables. The asymptotic behavior of the extreme eigenvalues is then derived routinely from classical extreme value theory. The resulting approximations are strikingly simple considering the high dimension...... of the problem at hand. We develop a theory for the point process of the normalized eigenvalues of the sample covariance matrix in the case where rows and columns of the data are linearly dependent. Based on the weak convergence of this point process we derive the limit laws of various functionals...
Directory of Open Access Journals (Sweden)
I PUTU EKA IRAWAN
2014-01-01
Full Text Available Principal Component Regression is a method to overcome multicollinearity by combining principal component analysis with regression analysis. The calculation of classical principal component analysis is based on the regular covariance matrix. The covariance matrix is optimal if the data originate from a multivariate normal distribution, but it is very sensitive to the presence of outliers. An alternative used to overcome this problem is the method of Least Median Square-Minimum Covariance Determinant (LMS-MCD). The purpose of this research is to compare Principal Component Regression (RKU) and the Least Median Square-Minimum Covariance Determinant (LMS-MCD) method in dealing with outliers. In this study, the LMS-MCD method has a bias and mean square error (MSE) smaller than those of RKU. Based on the difference of parameter estimators, there is still a test in which the difference of parameter estimators for the LMS-MCD method is greater than for the RKU method.
Directory of Open Access Journals (Sweden)
I PUTU EKA IRAWAN
2013-11-01
Full Text Available Principal Component Regression is a method to overcome multicollinearity by combining principal component analysis with regression analysis. The calculation of classical principal component analysis is based on the regular covariance matrix. The covariance matrix is optimal if the data originate from a multivariate normal distribution, but it is very sensitive to the presence of outliers. An alternative used to overcome this problem is the method of Least Median Square-Minimum Covariance Determinant (LMS-MCD). The purpose of this research is to compare Principal Component Regression (RKU) and the Least Median Square-Minimum Covariance Determinant (LMS-MCD) method in dealing with outliers. In this study, the LMS-MCD method has a bias and mean square error (MSE) smaller than those of RKU. Based on the difference of parameter estimators, there is still a test in which the difference of parameter estimators for the LMS-MCD method is greater than for the RKU method.
Covariant Description of Isothermic Surfaces
Tafel, J.
2016-12-01
We present a covariant formulation of the Gauss-Weingarten equations and the Gauss-Mainardi-Codazzi equations for surfaces in 3-dimensional curved spaces. We derive a coordinate invariant condition on the first and second fundamental form which is locally necessary and sufficient for the surface to be isothermic. We show how to construct isothermic coordinates.
MIMO-radar Waveform Covariance Matrices for High SINR and Low Side-lobe Levels
Ahmed, Sajid
2012-12-29
MIMO-radar has better parametric identifiability, but compared to phased-array radar it shows a loss in signal-to-noise ratio due to non-coherent processing. To exploit the benefits of both MIMO-radar and phased-array radar, two transmit covariance matrices are found. Both covariance matrices yield a gain in signal-to-interference-plus-noise ratio (SINR) compared to MIMO-radar and have lower side-lobe levels (SLLs) compared to phased-array and MIMO-radar. Moreover, in contrast to the recently introduced phased-MIMO scheme, where each antenna transmits a different power, our proposed schemes allow the same power transmission from each antenna. The SLLs of the first proposed covariance matrix are higher than those of the phased-MIMO scheme, while the SLLs of the second proposed covariance matrix are lower than those of the phased-MIMO scheme. The first covariance matrix is generated using an auto-regressive process, which allows us to change the SINR and side-lobe levels by changing the auto-regressive parameter, while to generate the second covariance matrix the values of the sine function between 0 and $\pi$ with step size $\pi/n_T$ are used to form a positive-semidefinite Toeplitz matrix, where $n_T$ is the number of transmit antennas. Simulation results validate our analytical results.
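The first, auto-regressive covariance might be sketched as an AR(1)-type Toeplitz matrix; this is an assumption about the construction (the paper's auto-regressive process may differ in detail), but it illustrates the equal per-antenna power property the abstract emphasizes:

```python
import numpy as np

def ar1_transmit_covariance(n_t, rho):
    """Toeplitz transmit covariance generated by an AR(1)-type process,
    R[m, n] = rho**|m - n| (a sketch). The unit diagonal means each of the
    n_t antennas transmits the same power; the parameter rho plays the role
    of the auto-regressive parameter trading off SINR against SLLs."""
    idx = np.arange(n_t)
    return rho ** np.abs(idx[:, None] - idx[None, :])

R = ar1_transmit_covariance(8, 0.6)
```

For |rho| < 1 this matrix is symmetric, Toeplitz, and positive definite, so it is a valid transmit covariance.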
AFCI-2.0 Neutron Cross Section Covariance Library
Energy Technology Data Exchange (ETDEWEB)
Herman, M.; Herman, M; Oblozinsky, P.; Mattoon, C.M.; Pigni, M.; Hoblit, S.; Mughabghab, S.F.; Sonzogni, A.; Talou, P.; Chadwick, M.B.; Hale, G.M.; Kahler, A.C.; Kawano, T.; Little, R.C.; Yount, P.G.
2011-03-01
The cross section covariance library has been under development by a BNL-LANL collaborative effort over the last three years. The project builds on two covariance libraries developed earlier, with considerable input from BNL and LANL. In 2006, an international effort under WPEC Subgroup 26 produced the BOLNA covariance library by putting together data, often preliminary, from various sources for the most important materials for nuclear reactor technology. This was followed in 2007 by a collaborative effort of four US national laboratories to produce covariances, often of modest quality (hence the name low-fidelity), for a virtually complete set of materials included in ENDF/B-VII.0. The present project is focusing on covariances of 4-5 major reaction channels for 110 materials of importance for power reactors. The work started under the Global Nuclear Energy Partnership (GNEP) in 2008, which changed to the Advanced Fuel Cycle Initiative (AFCI) in 2009. With the 2011 release the name has changed to the Covariance Multigroup Matrix for Advanced Reactor Applications (COMMARA) version 2.0. The primary purpose of the library is to provide covariances for the AFCI data adjustment project, which is focusing on the needs of fast advanced burner reactors. The responsibility of BNL was defined as developing covariances for structural materials and fission products, management of the library and coordination of the work; the responsibility of LANL was defined as covariances for light nuclei and actinides. The COMMARA-2.0 covariance library has been developed by the BNL-LANL collaboration for Advanced Fuel Cycle Initiative applications over the period of three years, 2008-2010. It contains covariances for 110 materials relevant to fast reactor R&D. The library is to be used together with the ENDF/B-VII.0 central values of the latest official release of US files of evaluated neutron cross sections. The COMMARA-2.0 library contains neutron cross section covariances for 12 light nuclei (coolants and moderators), 78 structural
Emergent gravity on covariant quantum spaces in the IKKT model
Energy Technology Data Exchange (ETDEWEB)
Steinacker, Harold C. [Faculty of Physics, University of Vienna,Boltzmanngasse 5, A-1090 Vienna (Austria)
2016-12-30
We study perturbations of 4-dimensional fuzzy spheres as backgrounds in the IKKT or IIB matrix model. Gauge fields and metric fluctuations are identified among the excitation modes with lowest spin, supplemented by a tower of higher-spin fields. They arise from an internal structure which can be viewed as a twisted bundle over S^4, leading to a covariant noncommutative geometry. The linearized 4-dimensional Einstein equations are obtained from the classical matrix model action under certain conditions, modified by an IR cutoff. Some one-loop contributions to the effective action are computed using the formalism of string states.
Frame covariant nonminimal multifield inflation
Karamitsos, Sotirios; Pilaftsis, Apostolos
2018-02-01
We introduce a frame-covariant formalism for inflation of scalar-curvature theories by adopting a differential geometric approach which treats the scalar fields as coordinates living on a field-space manifold. This ensures that our description of inflation is both conformally and reparameterization covariant. Our formulation gives rise to extensions of the usual Hubble and potential slow-roll parameters to generalized fully frame-covariant forms, which allow us to provide manifestly frame-invariant predictions for cosmological observables, such as the tensor-to-scalar ratio r, the spectral indices nR and nT, their runnings αR and αT, the non-Gaussianity parameter fNL, and the isocurvature fraction βiso. We examine the role of the field space curvature in the generation and transfer of isocurvature modes, and we investigate the effect of boundary conditions for the scalar fields at the end of inflation on the observable inflationary quantities. We explore the stability of the trajectories with respect to the boundary conditions by using a suitable sensitivity parameter. To illustrate our approach, we first analyze a simple minimal two-field scenario before studying a more realistic nonminimal model inspired by Higgs inflation. We find that isocurvature effects are greatly enhanced in the latter scenario and must be taken into account for certain values in the parameter space such that the model is properly normalized to the observed scalar power spectrum PR. Finally, we outline how our frame-covariant approach may be extended beyond the tree-level approximation through the Vilkovisky-De Witt formalism, which we generalize to take into account conformal transformations, thereby leading to a fully frame-invariant effective action at the one-loop level.
Szekeres models: a covariant approach
Apostolopoulos, Pantelis S.
2017-05-01
We exploit the 1 + 1 + 2 formalism to covariantly describe the inhomogeneous and anisotropic Szekeres models. It is shown that an average scale length can be defined covariantly which satisfies a 2d equation of motion driven by the effective gravitational mass (EGM) contained in the dust cloud. The contributions to the EGM are encoded in the energy density of the dust fluid and the free gravitational field E_ab. We show that the quasi-symmetric property of the Szekeres models is justified through the existence of 3 independent intrinsic Killing vector fields (IKVFs). In addition, the notions of the apparent and absolute apparent horizons are briefly discussed and we give an alternative gauge-invariant form to define them in terms of the kinematical variables of the spacelike congruences. We argue that the proposed program can be used to express Sachs' optical equations in a covariant form and to analyze the confrontation of a spatially inhomogeneous irrotational overdense fluid model with the observational data.
Pu239 Cross-Section Variations Based on Experimental Uncertainties and Covariances
Energy Technology Data Exchange (ETDEWEB)
Sigeti, David Edward [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Williams, Brian J. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Parsons, D. Kent [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2016-10-18
Algorithms and software have been developed for producing variations in plutonium-239 neutron cross sections based on experimental uncertainties and covariances. The varied cross-section sets may be produced as random samples from the multivariate normal distribution defined by an experimental mean vector and covariance matrix, or they may be produced as Latin-Hypercube/Orthogonal-Array samples (based on the same means and covariances) for use in parametrized studies. The variations obey two classes of constraints that are obligatory for cross-section sets and which put related constraints on the mean vector and covariance matrix that determine the sampling. Because the experimental means and covariances do not obey some of these constraints to sufficient precision, imposing the constraints requires modifying the experimental mean vector and covariance matrix. Modification is done with an algorithm based on linear algebra that minimizes changes to the means and covariances while ensuring that the operations that impose the different constraints do not conflict with each other.
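As a concrete illustration of the first sampling mode the abstract describes (random samples from a multivariate normal defined by an experimental mean vector and covariance matrix), here is a minimal NumPy sketch; the 4-group mean, covariance, and all numbers are invented for illustration and are not taken from any Pu-239 evaluation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical experimental data: a 4-group mean cross-section vector
# (barns) and a positive-definite covariance matrix; the numbers are
# invented, not taken from the Pu-239 evaluation.
mean = np.array([1.90, 1.75, 1.60, 1.80])
cov = np.array([
    [0.010, 0.004, 0.002, 0.001],
    [0.004, 0.012, 0.005, 0.002],
    [0.002, 0.005, 0.011, 0.004],
    [0.001, 0.002, 0.004, 0.013],
])

# Random cross-section sets sampled from the multivariate normal
# distribution defined by (mean, cov).
samples = rng.multivariate_normal(mean, cov, size=20000)

# Sample statistics approach the inputs as the sample count grows.
emp_mean = samples.mean(axis=0)
emp_cov = np.cov(samples, rowvar=False)
```

The Latin-Hypercube variant mentioned in the abstract would replace the `multivariate_normal` draw with stratified uniform samples pushed through the normal quantile function and the Cholesky factor of `cov`.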
Noisy covariance matrices and portfolio optimization II
Pafka, Szilárd; Kondor, Imre
2003-03-01
Recent studies inspired by results from random matrix theory (Galluccio et al.: Physica A 259 (1998) 449; Laloux et al.: Phys. Rev. Lett. 83 (1999) 1467; Risk 12 (3) (1999) 69; Plerou et al.: Phys. Rev. Lett. 83 (1999) 1471) found that covariance matrices determined from empirical financial time series appear to contain such a high amount of noise that their structure can essentially be regarded as random. This seems, however, to be in contradiction with the fundamental role played by covariance matrices in finance, which constitute the pillars of modern investment theory and have also gained industry-wide applications in risk management. Our paper is an attempt to resolve this embarrassing paradox. The key observation is that the effect of noise strongly depends on the ratio r= n/ T, where n is the size of the portfolio and T the length of the available time series. On the basis of numerical experiments and analytic results for some toy portfolio models we show that for relatively large values of r (e.g. 0.6) noise does, indeed, have the pronounced effect suggested by Galluccio et al. (1998), Laloux et al. (1999) and Plerou et al. (1999) and illustrated later by Laloux et al. (Int. J. Theor. Appl. Finance 3 (2000) 391), Plerou et al. (Phys. Rev. E, e-print cond-mat/0108023) and Rosenow et al. (Europhys. Lett., e-print cond-mat/0111537) in a portfolio optimization context, while for smaller r (around 0.2 or below), the error due to noise drops to acceptable levels. Since the length of available time series is for obvious reasons limited in any practical application, any bound imposed on the noise-induced error translates into a bound on the size of the portfolio. In a related set of experiments we find that the effect of noise depends also on whether the problem arises in asset allocation or in a risk measurement context: if covariance matrices are used simply for measuring the risk of portfolios with a fixed composition rather than as inputs to optimization, the
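The dependence on the ratio r = n/T can be reproduced in a few lines. The following NumPy sketch (not from the paper) compares the eigenvalue spread of sample covariance matrices for uncorrelated unit-variance assets at r = 0.6 and r = 0.1; since the true covariance is the identity, any spread beyond 1 is pure estimation noise:

```python
import numpy as np

rng = np.random.default_rng(1)

def eigenvalue_spread(n, T):
    """Ratio of largest to smallest eigenvalue of the sample covariance
    of T observations of n uncorrelated unit-variance assets (the true
    covariance is the identity, so all true eigenvalues equal 1)."""
    X = rng.standard_normal((T, n))
    sample_cov = X.T @ X / T
    eig = np.linalg.eigvalsh(sample_cov)
    return eig[-1] / eig[0]

# Same portfolio size n, different history lengths T: r = n/T.
spread_noisy = eigenvalue_spread(n=120, T=200)   # r = 0.6
spread_clean = eigenvalue_spread(n=120, T=1200)  # r = 0.1
```

For r = 0.6 the spread is dramatically larger than for r = 0.1, matching the paper's observation that noise effects become pronounced at large r.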
Castrillon, Julio
2015-11-10
We develop a multi-level restricted Gaussian maximum likelihood method for estimating the covariance function parameters and computing the best unbiased predictor. Our approach produces a new set of multi-level contrasts where the deterministic parameters of the model are filtered out, thus enabling the estimation of the covariance parameters to be decoupled from the deterministic component. Moreover, the multi-level covariance matrix of the contrasts exhibits fast decay that is dependent on the smoothness of the covariance function. Due to the fast decay of the multi-level covariance matrix coefficients, only a small set is computed with a level-dependent criterion. We demonstrate our approach on problems of up to 512,000 observations with a Matérn covariance function and highly irregular placements of the observations. In addition, these problems are numerically unstable and hard to solve with traditional methods.
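For reference, a minimal sketch of the Matérn covariance family used above, with the closed forms for smoothness ν = 1/2 (exponential) and ν = 3/2; the parameter names `sigma2` and `ell` are illustrative, and the general ν case would need the modified Bessel function:

```python
import numpy as np

def matern(d, sigma2=1.0, ell=1.0, nu=1.5):
    """Matern covariance at distances d, using the closed forms for
    smoothness nu = 0.5 (exponential) and nu = 1.5; sigma2 is the
    variance and ell the correlation length (illustrative names)."""
    d = np.asarray(d, dtype=float)
    s = np.sqrt(2.0 * nu) * d / ell
    if nu == 0.5:
        return sigma2 * np.exp(-s)
    if nu == 1.5:
        return sigma2 * (1.0 + s) * np.exp(-s)
    raise NotImplementedError("general nu requires the Bessel K_nu")

# Covariance matrix for 50 irregularly placed 1-d observation sites.
sites = np.sort(np.random.default_rng(2).uniform(0.0, 10.0, size=50))
D = np.abs(sites[:, None] - sites[None, :])
C = matern(D, nu=1.5)
```

The resulting matrix is symmetric positive definite with unit diagonal, which is what makes dense covariance solvers expensive at large n and motivates multi-level or hierarchical approximations.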
A note on covariant dynamical semigroups
Holevo, A. S.
1993-04-01
It is shown that in the standard representation of the generator of a norm continuous dynamical semigroup, which is covariant with respect to a unitary representation of an amenable group, the completely positive part can always be chosen covariant and the Hamiltonian commuting with the representation. The structure of the generator of a translation covariant dynamical semigroup is described.
Covariant gauges at finite temperature
Landshoff, P V; Rebhan, A
1992-01-01
A prescription is presented for real-time finite-temperature perturbation theory in covariant gauges, in which only the two physical degrees of freedom of the gauge-field propagator acquire thermal parts. The propagators for the unphysical degrees of freedom of the gauge field, and for the Faddeev-Popov ghost field, are independent of temperature. This prescription is applied to the calculation of the one-loop gluon self-energy and the two-loop interaction pressure, and is found to be simpler...
Energy Technology Data Exchange (ETDEWEB)
Smith, D.L.
1988-01-01
The last decade has been a period of rapid development in the implementation of covariance-matrix methodology in nuclear data research. This paper offers some perspective on the progress which has been made, on some of the unresolved problems, and on the potential yet to be realized. These discussions address a variety of issues related to the development of nuclear data. Topics examined are: the importance of designing and conducting experiments so that error information is conveniently generated; the procedures for identifying error sources and quantifying their magnitudes and correlations; the combination of errors; the importance of consistent and well-characterized measurement standards; the role of covariances in data parameterization (fitting); the estimation of covariances for values calculated from mathematical models; the identification of abnormalities in covariance matrices and the analysis of their consequences; the problems encountered in representing covariance information in evaluated files; the role of covariances in the weighting of diverse data sets; the comparison of various evaluations; the influence of primary-data covariance in the analysis of covariances for derived quantities (sensitivity); and the role of covariances in the merging of the diverse nuclear data information. 226 refs., 2 tabs.
Flexible Bayesian Dynamic Modeling of Covariance and Correlation Matrices
Lan, Shiwei
2017-11-08
Modeling covariance (and correlation) matrices is a challenging problem due to the large dimensionality and positive-definiteness constraint. In this paper, we propose a novel Bayesian framework based on decomposing the covariance matrix into variance and correlation matrices. The highlight is that the correlations are represented as products of vectors on unit spheres. We propose a variety of distributions on spheres (e.g. the squared-Dirichlet distribution) to induce flexible prior distributions for covariance matrices that go beyond the commonly used inverse-Wishart prior. To handle the intractability of the resulting posterior, we introduce the adaptive $\Delta$-Spherical Hamiltonian Monte Carlo. We also extend our structured framework to dynamic cases and introduce unit-vector Gaussian process priors for modeling the evolution of correlation among multiple time series. Using an example of a Normal-Inverse-Wishart problem, a simulated periodic process, and an analysis of local field potential data (collected from the hippocampus of rats performing a complex sequence memory task), we demonstrate the validity and effectiveness of our proposed framework for (dynamic) modeling of covariance and correlation matrices.
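The sphere-product construction can be sketched as follows (a simplified NumPy illustration, not the authors' sampler): rows of a lower-triangular matrix are projected onto unit spheres, their Gram matrix is then a valid correlation matrix, and scaling by standard deviations recovers a covariance matrix:

```python
import numpy as np

rng = np.random.default_rng(3)
p = 4  # dimension (illustrative)

# Rows of a random lower-triangular matrix, normalized to unit length:
# each row lives on a unit sphere, and the Gram matrix L @ L.T is then
# a valid correlation matrix (unit diagonal, positive semi-definite).
L = np.tril(rng.standard_normal((p, p)))
L /= np.linalg.norm(L, axis=1, keepdims=True)
R = L @ L.T

# Recombine with separate variances: cov = D R D, D = diag(std devs).
stds = np.array([1.0, 0.5, 2.0, 1.5])
D = np.diag(stds)
cov = D @ R @ D
```

Putting priors on the unit vectors (rows of L) and on the standard deviations separately is what lets the framework go beyond the inverse-Wishart prior while keeping positive definiteness automatic.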
Covariance Evaluation Methodology for Neutron Cross Sections
Energy Technology Data Exchange (ETDEWEB)
Herman, M.; Arcilla, R.; Mattoon, C.M.; Mughabghab, S.F.; Oblozinsky, P.; Pigni, M.; Pritychenko, B.; Sonzogni, A.A.
2008-09-01
We present the NNDC-BNL methodology for estimating neutron cross section covariances in the thermal, resolved resonance, unresolved resonance and fast neutron regions. The three key elements of the methodology are the Atlas of Neutron Resonances, the nuclear reaction code EMPIRE, and the Bayesian code implementing the Kalman filter concept. The covariance data processing, visualization and distribution capabilities are integral components of the NNDC methodology. We illustrate its application on examples including a relatively detailed evaluation of covariances for two individual nuclei and massive production of simple covariance estimates for 307 materials. Certain peculiarities regarding the evaluation of covariances for resolved resonances and the consistency between resonance parameter uncertainties and thermal cross section uncertainties are also discussed.
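The Kalman-filter-style Bayesian update at the heart of such evaluation methodologies can be sketched generically (all numbers are a hypothetical two-parameter toy: x is the prior parameter vector, P its covariance, S a sensitivity matrix, and y experimental data with covariance V):

```python
import numpy as np

# Hypothetical toy evaluation: prior parameter vector x with covariance
# P, sensitivity matrix S (d model / d parameter), and experimental
# data y with covariance V. All numbers are invented for illustration.
x = np.array([1.0, 2.0])
P = np.diag([0.25, 0.25])
S = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [0.0, 1.0]])
y = np.array([1.1, 3.3, 2.1])
V = 0.04 * np.eye(3)

# Kalman (Bayesian least-squares) update: the data pull the parameters
# toward agreement and shrink the parameter covariance.
K = P @ S.T @ np.linalg.inv(S @ P @ S.T + V)
x_post = x + K @ (y - S @ x)
P_post = (np.eye(2) - K @ S) @ P
```

The posterior covariance `P_post`, propagated through the model sensitivities, is what populates the evaluated covariance files.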
Covariant constraints for generic massive gravity and analysis of its characteristics
DEFF Research Database (Denmark)
Deser, S.; Sandora, M.; Waldron, A.
2014-01-01
We perform a covariant constraint analysis of massive gravity valid for its entire parameter space, demonstrating that the model generically propagates 5 degrees of freedom; this is also verified by a new and streamlined Hamiltonian description. The constraint's covariant expression permits computation of the model's caustics. Although new features such as the dynamical Riemann tensor appear in the characteristic matrix, the model still exhibits the pathologies uncovered in earlier work: superluminality and likely acausalities.
General Covariance from the Quantum Renormalization Group
Shyam, Vasudev
2016-01-01
The Quantum renormalization group (QRG) is a realisation of holography through a coarse graining prescription that maps the beta functions of a quantum field theory thought to live on the `boundary' of some space to holographic actions in the `bulk' of this space. A consistency condition will be proposed that translates into general covariance of the gravitational theory in the $D + 1$ dimensional bulk. This emerges from the application of the QRG on a planar matrix field theory living on the $D$ dimensional boundary. This will be a particular form of the Wess--Zumino consistency condition that the generating functional of the boundary theory needs to satisfy. In the bulk, this condition forces the Poisson bracket algebra of the scalar and vector constraints of the dual gravitational theory to close in a very specific manner, namely, the manner in which the corresponding constraints of general relativity do. A number of features of the gravitational theory will be fixed as a consequence of this form of the Po...
Covariance specification and estimation to improve top-down Greenhouse Gas emission estimates
Ghosh, S.; Lopez-Coto, I.; Prasad, K.; Whetstone, J. R.
2015-12-01
The National Institute of Standards and Technology (NIST) operates the North-East Corridor (NEC) project and the Indianapolis Flux Experiment (INFLUX) in order to develop measurement methods to quantify sources of Greenhouse Gas (GHG) emissions, as well as their uncertainties, in urban domains using a top-down inversion method. Top-down inversion updates prior knowledge using observations in a Bayesian way. One primary consideration in a Bayesian inversion framework is the covariance structure of (1) the emission prior residuals and (2) the observation residuals (i.e. the difference between observations and model-predicted observations). These covariance matrices are referred to as the prior covariance matrix and the model-data mismatch covariance matrix, respectively. It is known that the choice of these covariances can have a large effect on estimates. The main objective of this work is to determine the impact of different covariance models on inversion estimates and their associated uncertainties in urban domains. We use a pseudo-data Bayesian inversion framework using footprints (i.e. sensitivities of tower measurements of GHGs to surface emissions) and emission priors (based on the Hestia project to quantify fossil-fuel emissions) to estimate posterior emissions using different covariance schemes. The posterior emission estimates and uncertainties are compared to the hypothetical truth. We find that, if we correctly specify spatial variability and spatio-temporal variability in the prior and model-data mismatch covariances respectively, then we can compute more accurate posterior estimates. We discuss a few covariance models to introduce space-time interacting mismatches, along with estimation of the involved parameters. We then compare several candidate prior spatial covariance models from the Matérn covariance class and estimate their parameters with specified mismatches. We find that best-fitted prior covariances are not always best in recovering the truth. To achieve
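A toy version of the Bayesian inversion ingredients described here (footprints H, prior covariance B, model-data mismatch covariance R) might look as follows; the Gaussian tower footprints and the exponential prior correlation are invented for illustration, not taken from NEC/INFLUX:

```python
import numpy as np

# Hypothetical 1-d domain of 40 emission cells and 3 towers whose
# footprints H (sensitivity of each tower to each cell) are Gaussian.
x = np.linspace(0.0, 10.0, 40)
towers = np.array([[2.0], [5.0], [8.0]])
H = np.exp(-0.5 * (x[None, :] - towers) ** 2)

# Prior covariance B: exponential (Matern nu = 1/2) spatial correlation.
corr_len = 2.0
B = np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)

# Model-data mismatch covariance R: independent observation errors.
R = 0.1 * np.eye(3)

# Posterior covariance of the Bayesian inversion; its diagonal shows
# how much the observations reduce the prior uncertainty per cell.
A_post = np.linalg.inv(H.T @ np.linalg.inv(R) @ H + np.linalg.inv(B))
```

Changing the correlation length or the structure of R in this sketch changes the posterior diagonal, which is the sensitivity the abstract investigates.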
Directory of Open Access Journals (Sweden)
Tania Dehesh
2015-01-01
Background. The univariate meta-analysis (UM) procedure, as a technique that provides a single overall result, has become increasingly popular. Neglecting the existence of other concomitant covariates in the models leads to loss of treatment efficiency. Our aim was to propose four new approximation approaches for the covariance matrix of the coefficients, which is not readily available for the multivariate generalized least squares (MGLS) method as a multivariate meta-analysis approach. Methods. We evaluated the efficiency of four new approaches, including zero correlation (ZC), common correlation (CC), estimated correlation (EC), and multivariate multilevel correlation (MMC), on the estimation bias, mean square error (MSE), and 95% probability coverage of the confidence interval (CI) in the synthesis of Cox proportional hazards model coefficients in a simulation study. Result. Comparing the results of the simulation study on the MSE, bias, and CI of the estimated coefficients indicated that the MMC approach was the most accurate procedure compared to the EC, CC, and ZC procedures. The precision ranking of the four approaches according to all the above settings was MMC ≥ EC ≥ CC ≥ ZC. Conclusion. This study highlights the advantages of MGLS meta-analysis over the UM approach. The results suggest the use of the MMC procedure to overcome the lack of information for having a complete covariance matrix of the coefficients.
Directory of Open Access Journals (Sweden)
Y. Zhu
2017-04-01
High Frequency (HF) radio waves propagating in ionospheric random inhomogeneous media exhibit a spatially nonlinear wavefront which may limit the performance of conventional high-resolution methods for HF sky-wave radar systems. In this paper, the spatial correlation function of the wavefront is derived theoretically on the condition that the radio waves propagate through an ionospheric structure containing irregularities. With this function, the influence of wavefront distortions on the array covariance matrix can be quantitatively described by the spatial coherence matrix, which is characterized by the coherence loss parameter. The problem of wavefront correction is therefore recast as the determination of the coherence loss parameter, and this is solved by the covariance matching (CM) technique. The effectiveness of the proposed method is evaluated on both simulated and real radar data. It is shown numerically that improved direction-of-arrival (DOA) estimation performance can be achieved with the corrected array covariance matrix.
Menga, G.
1975-01-01
An approach is proposed for the design of approximate, fixed-order, discrete-time realizations of stochastic processes from the output covariance over a finite time interval. No restrictive assumptions are imposed on the process; it can be nonstationary and lead to a high-dimension realization. Classes of fixed-order models are defined, having the joint covariance matrix of the combined vector of the outputs in the interval of definition greater than or equal to the process covariance (the difference matrix is nonnegative definite). The design is achieved by minimizing, in one of these classes, a measure of the approximation between the model and the process, evaluated by the trace of the difference of the respective covariance matrices. Models belonging to these classes have the notable property that, under the same measurement system and estimator structure, the output estimation error covariance matrix computed on the model is an upper bound of the corresponding covariance on the real process. An application of the approach is illustrated by the modeling of random meteorological wind profiles from the statistical analysis of historical data.
Fast covariance estimation for innovations computed from a spatial Gibbs point process
DEFF Research Database (Denmark)
Coeurjolly, Jean-Francois; Rubak, Ege
In this paper, we derive an exact formula for the covariance of two innovations computed from a spatial Gibbs point process and suggest a fast method for estimating this covariance. We show how this methodology can be used to estimate the asymptotic covariance matrix of the maximum pseudo-likelihood estimate of the parameters of a spatial Gibbs point process model. This allows us to construct asymptotic confidence intervals for the parameters. We illustrate the efficiency of our procedure in a simulation study for several classical parametric models. The procedure is implemented in the statistical...
Multigroup covariance matrices for fast-reactor studies
Energy Technology Data Exchange (ETDEWEB)
Smith, J.D. III; Broadhead, B.L.
1981-04-01
This report presents the multigroup covariance matrices based on the ENDF/B-V nuclear data evaluations. The materials and reactions have been chosen according to the specifications of ORNL-5517. Several cross section covariances, other than those specified by that report, are included due to the derived nature of the uncertainty files in ENDF/B-V. The materials represented are Ni, Cr, ^16O, ^12C, Fe, Na, ^235U, ^238U, ^239Pu, ^240Pu, ^241Pu, and ^10B (present due to its correlation to ^238U). The data were originally processed into a 52-group energy structure by PUFF-II and subsequently collapsed to smaller subgroup structures. The results are illustrated in 52-group correlation matrix plots and tabulated into thirteen groups for convenience.
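A flux-weighted group collapse of a covariance matrix, C_coarse = W C_fine W^T, can be sketched as follows (toy 6-group data, not the ENDF/B-V values; W is a collapsing matrix whose rows hold normalized flux weights):

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy 6-group covariance matrix, built as A @ A.T so it is positive
# semi-definite (not actual ENDF/B-V data).
A = rng.standard_normal((6, 6))
C_fine = A @ A.T

# Collapse 6 fine groups into 3 coarse groups (2 fine per coarse).
# Each row of W holds the normalized flux weights of one coarse group.
flux = np.array([1.0, 2.0, 2.0, 3.0, 1.0, 1.0])
groups = [[0, 1], [2, 3], [4, 5]]
W = np.zeros((3, 6))
for i, g in enumerate(groups):
    W[i, g] = flux[g] / flux[g].sum()

# The collapsed covariance keeps symmetry and positive semi-definiteness.
C_coarse = W @ C_fine @ W.T
```

The same sandwich form applies when collapsing 52 groups to 13; only the group boundaries and weighting spectrum change.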
Covariance-enhanced discriminant analysis.
Xu, Peirong; Zhu, Ji; Zhu, Lixing; Li, Yi
Linear discriminant analysis has been widely used to characterize or separate multiple classes via linear combinations of features. However, the high dimensionality of features from modern biological experiments defies traditional discriminant analysis techniques. Possible interfeature correlations present additional challenges and are often underused in modelling. In this paper, by incorporating possible interfeature correlations, we propose a covariance-enhanced discriminant analysis method that simultaneously and consistently selects informative features and identifies the corresponding discriminable classes. Under mild regularity conditions, we show that the method can achieve consistent parameter estimation and model selection, and can attain an asymptotically optimal misclassification rate. Extensive simulations have verified the utility of the method, which we apply to a renal transplantation trial.
TRANSPOSABLE REGULARIZED COVARIANCE MODELS WITH AN APPLICATION TO MISSING DATA IMPUTATION.
Allen, Genevera I; Tibshirani, Robert
2010-06-01
Missing data estimation is an important challenge with high-dimensional data arranged in the form of a matrix. Typically this data matrix is transposable, meaning that either the rows, columns or both can be treated as features. To model transposable data, we present a modification of the matrix-variate normal, the mean-restricted matrix-variate normal, in which the rows and columns each have a separate mean vector and covariance matrix. By placing additive penalties on the inverse covariance matrices of the rows and columns, these so-called transposable regularized covariance models allow for maximum likelihood estimation of the mean and non-singular covariance matrices. Using these models, we formulate EM-type algorithms for missing data imputation in both the multivariate and transposable frameworks. We present theoretical results exploiting the structure of our transposable models that allow these models and imputation methods to be applied to high-dimensional data. Simulations and results on microarray data and the Netflix data show that these imputation techniques often outperform existing methods and offer a greater degree of flexibility.
Treatment Effects with Many Covariates and Heteroskedasticity
DEFF Research Database (Denmark)
Cattaneo, Matias D.; Jansson, Michael; Newey, Whitney K.
The linear regression model is widely used in empirical work in Economics. Researchers often include many covariates in their linear model specification in an attempt to control for confounders. We give inference methods that allow for many covariates and heteroskedasticity. Our results ... then propose a new heteroskedasticity-consistent standard error formula that is fully automatic and robust to both (conditional) heteroskedasticity of unknown form and the inclusion of possibly many covariates. We apply our findings to three settings: (i) parametric linear models with many covariates, (ii) semiparametric semi-linear models with many technical regressors, and (iii) linear panel models with many fixed effects...
Franklin, Joel N
2003-01-01
Mathematically rigorous introduction covers vector and matrix norms, the condition-number of a matrix, positive and irreducible matrices, much more. Only elementary algebra and calculus required. Includes problem-solving exercises. 1968 edition.
Competing risks and time-dependent covariates
DEFF Research Database (Denmark)
Cortese, Giuliana; Andersen, Per K
2010-01-01
Time-dependent covariates are frequently encountered in regression analysis for event history data and competing risks. They are often essential predictors, which cannot be substituted by time-fixed covariates. This study briefly recalls the different types of time-dependent covariates, as classified by Kalbfleisch and Prentice [The Statistical Analysis of Failure Time Data, Wiley, New York, 2002], with the intent of clarifying their role and emphasizing the limitations in standard survival models and in the competing risks setting. If random (internal) time-dependent covariates ... In a multi-state framework, a first approach uses internal covariates to define additional (intermediate) transient states in the competing risks model. Another approach is to apply the landmark analysis as described by van Houwelingen [Scandinavian Journal of Statistics 2007, 34, 70-85] in order to study...
On the Possibility of Ill-Conditioned Covariance Matrices in the First-Order Two-Step Estimator
Garrison, James L.; Axelrod, Penina; Kasdin, N. Jeremy
1997-01-01
The first-order two-step nonlinear estimator, when applied to a problem of orbital navigation, is found to occasionally produce first step covariance matrices with very low eigenvalues at certain trajectory points. This anomaly is the result of the linear approximation to the first step covariance propagation. The study of this anomaly begins with expressing the propagation of the first and second step covariance matrices in terms of a single matrix. This matrix is shown to have a rank equal to the difference between the number of first step states and the number of second step states. Furthermore, under some simplifying assumptions, it is found that the basis of the column space of this matrix remains fixed once the filter has removed the large initial state error. A test matrix containing the basis of this column space and the partial derivative matrix relating first and second step states is derived. This square test matrix, which has dimensions equal to the number of first step states, numerically drops rank at the same locations that the first step covariance does. It is formulated in terms of a set of constant vectors (the basis) and a matrix which can be computed from a reference trajectory (the partial derivative matrix). A simple example problem, involving dynamics described by two states and a range measurement, illustrates the cause of this anomaly and the application of the aforementioned numerical test in more detail.
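The numerical rank test described above can be mimicked with a singular-value threshold; here is a small sketch with an exactly rank-deficient 3×3 matrix (the tolerance and the matrix are illustrative, not from the paper):

```python
import numpy as np

def numerical_rank(M, tol=1e-10):
    """Count singular values above a relative threshold; this mimics
    the idea of a matrix 'numerically dropping rank'."""
    s = np.linalg.svd(M, compute_uv=False)
    return int(np.sum(s > tol * s[0]))

# Illustrative test matrix: the third column is the sum of the first
# two, so the matrix is exactly rank 2.
M = np.array([
    [1.0, 2.0, 3.0],
    [4.0, 5.0, 9.0],
    [7.0, 8.0, 15.0],
])
```

A relative threshold (tol times the largest singular value) is preferred over an absolute one because it is invariant to the overall scaling of the covariance or test matrix.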
Realized Matrix-Exponential Stochastic Volatility with Asymmetry, Long Memory and Spillovers
M. Asai (Manabu); C-L. Chang (Chia-Lin); M.J. McAleer (Michael)
2016-01-01
The paper develops a novel realized matrix-exponential stochastic volatility model of multivariate returns and realized covariances that incorporates asymmetry and long memory (hereafter the RMESV-ALM model). The matrix exponential transformation guarantees the…
Covariant diagrams for one-loop matching
Energy Technology Data Exchange (ETDEWEB)
Zhang, Zhengkang [Michigan Univ., Ann Arbor, MI (United States). Michigan Center for Theoretical Physics; Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany)
2016-10-15
We present a diagrammatic formulation of recently-revived covariant functional approaches to one-loop matching from an ultraviolet (UV) theory to a low-energy effective field theory. Various terms following from a covariant derivative expansion (CDE) are represented by diagrams which, unlike conventional Feynman diagrams, involve gauge-covariant quantities and are thus dubbed "covariant diagrams." The use of covariant diagrams helps organize and simplify one-loop matching calculations, which we illustrate with examples. Of particular interest is the derivation of UV model-independent universal results, which reduce matching calculations of specific UV models to applications of master formulas. We show how such derivation can be done in a more concise manner than in the previous literature, and discuss how additional structures that are not directly captured by existing universal results, including mixed heavy-light loops, open covariant derivatives, and mixed statistics, can be easily accounted for.
Directory of Open Access Journals (Sweden)
Peng Wang
2014-01-01
The problem of stable motion for quadruped search-rescue robots is described as a variance-constrained uncertainty in discrete systems. According to the model structure of the quadruped search-rescue robot, the kinematics of the robot is analyzed on the basis of the D-H parameters. The angular velocity of each robot joint is planned using the Jacobian matrix, because the angular velocity is directly related to the stability of walking, based on an ADAMS simulation. A non-fragile control method with a covariance constraint is proposed for the gait motion control of the quadruped search-rescue robot. The motion state feedback controller and the covariance upper bounds can be given by the solutions of linear matrix inequalities (LMIs), which makes the system satisfy covariance constraint theory. The results given by the LMIs indicate that the proposed control method is correct and effective.
Covariant quantizations in plane and curved spaces
Energy Technology Data Exchange (ETDEWEB)
Assirati, J.L.M. [University of Sao Paulo, Institute of Physics, Sao Paulo (Brazil); Gitman, D.M. [Tomsk State University, Department of Physics, Tomsk (Russian Federation); P.N. Lebedev Physical Institute, Moscow (Russian Federation); University of Sao Paulo, Institute of Physics, Sao Paulo (Brazil)
2017-07-15
We present covariant quantization rules for nonsingular finite-dimensional classical theories with flat and curved configuration spaces. First, we construct a family of covariant quantizations in flat spaces and Cartesian coordinates. This family is parametrized by a function ω(θ), θ ∈ (0,1), which describes an ambiguity of the quantization. We generalize this construction, presenting covariant quantizations of theories with flat configuration spaces but with arbitrary curvilinear coordinates. Then we construct a so-called minimal family of covariant quantizations for theories with curved configuration spaces. This family of quantizations is parametrized by the same function ω(θ). Finally, we describe a wider family of covariant quantizations in curved spaces. This family is parametrized by two functions, the previous ω(θ) and an additional function Θ(x,ξ); the above-mentioned minimal family is the Θ = 1 part of this wider family. We study the constructed quantizations in detail, proving their consistency and covariance. As a physical application, we consider the quantization of a non-relativistic particle moving in a curved space, discussing the problem of a quantum potential. Applying the covariant quantizations in flat spaces to the old problem of constructing a quantum Hamiltonian in polar coordinates, we directly obtain a correct result. (orig.)
Covariate Balancing through Naturally Occurring Strata.
Alemi, Farrokh; ElRafey, Amr; Avramovic, Ivan
2016-12-14
To provide an alternative to propensity scoring (PS) for the common situation where there are interacting covariates. We used 1.3 million assessments of residents of the United States Veterans Affairs nursing homes, collected from January 1, 2000, through October 9, 2012. In stratified covariate balancing (SCB), data are divided into naturally occurring strata, where each stratum is an observed combination of the covariates. Within each stratum, cases with, and controls without, the target event are counted; controls are weighted to be as frequent as cases. This weighting procedure guarantees that covariates, or combination of covariates, are balanced, meaning they occur at the same rate among cases and controls. Finally, impact of the target event is calculated in the weighted data. We compare the performance of SCB, logistic regression (LR), and propensity scoring (PS) in simulated and real data. We examined the calibration of SCB and PS in predicting 6-month mortality from inability to eat, controlling for age, gender, and nine other disabilities for 296,051 residents in Veterans Affairs nursing homes. We also performed a simulation study, where outcomes were randomly generated from treatment, 10 covariates, and increasing number of covariate interactions. The accuracy of SCB, PS, and LR in recovering the simulated treatment effect was reported. In simulated environment, as the number of interactions among the covariates increased, SCB and properly specified LR remained accurate but pairwise LR and pairwise PS, the most common applications of these tools, performed poorly. In real data, application of SCB was practical. SCB was better calibrated than linear PS, the most common method of PS. In environments where covariates interact, SCB is practical and more accurate than common methods of applying LR and PS. © Health Research and Educational Trust.
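The stratum-weighting step of SCB described above can be sketched as follows. This is a minimal illustration, not the authors' implementation; the toy data and column names are invented for the example:

```python
import pandas as pd

# Toy data: each row is a subject; the `covs` columns define the strata,
# `exposed` marks cases vs. controls, `died` is the outcome of interest.
df = pd.DataFrame({
    "age_over_80": [1, 1, 1, 0, 0, 0, 0, 1],
    "cannot_eat":  [1, 1, 0, 0, 1, 1, 0, 0],
    "exposed":     [1, 0, 1, 0, 1, 0, 1, 0],
    "died":        [1, 0, 1, 0, 1, 1, 0, 0],
})
covs = ["age_over_80", "cannot_eat"]

def weights(g):
    # each observed covariate combination is one stratum; weight controls
    # so they are as frequent as cases within the stratum
    n_case, n_ctrl = (g.exposed == 1).sum(), (g.exposed == 0).sum()
    return g.exposed.map({1: 1.0, 0: n_case / n_ctrl if n_ctrl else 0.0})

df["w"] = df.groupby(covs, group_keys=False).apply(weights)

# by construction, every covariate combination is now balanced:
balance = df.groupby(covs).apply(lambda g: g.loc[g.exposed == 1, "w"].sum()
                                 - g.loc[g.exposed == 0, "w"].sum())

# impact of the exposure, computed in the weighted data
effect = (df.loc[df.exposed == 1, "died"].mean()
          - (df.loc[df.exposed == 0, "died"] * df.loc[df.exposed == 0, "w"]).sum()
          / df.loc[df.exposed == 0, "w"].sum())
```

Because weighting happens within each stratum, interactions between the covariates are balanced automatically, which is the property the abstract contrasts with pairwise LR and PS.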
Earth Observing System Covariance Realism Updates
Ojeda Romero, Juan A.; Miguel, Fred
2017-01-01
This presentation will be given at the International Earth Science Constellation Mission Operations Working Group meetings June 13-15, 2017 to discuss the Earth Observing System Covariance Realism updates.
Weir, Kent A.; Wells, Eugene M.
1990-01-01
The design and operation of a Strapdown Navigation Analysis Program (SNAP) developed to perform covariance analysis on spacecraft inertial-measurement-unit (IMU) navigation errors are described and demonstrated. Consideration is given to the IMU modeling subroutine (with user-specified sensor characteristics), the data input procedures, state updates and the simulation of instrument failures, the determination of the nominal trajectory, the mapping-matrix and Monte Carlo covariance-matrix propagation methods, and aided-navigation simulation. Numerical results are presented in tables for sample applications involving (1) the Galileo/IUS spacecraft from its deployment from the Space Shuttle to a point 10^8 ft from the center of the earth and (2) the TDRS-C/IUS spacecraft from Space Shuttle liftoff to a point about 2 h before IUS deployment. SNAP is shown to give reliable results for both cases, with good general agreement between the mapping-matrix and Monte Carlo predictions.
Attenuation caused by infrequently updated covariates in survival analysis
DEFF Research Database (Denmark)
Andersen, Per Kragh; Liestøl, Knut
2003-01-01
Attenuation; Cox regression model; Measurement errors; Survival analysis; Time-dependent covariates
Directory of Open Access Journals (Sweden)
Daniel Bartz
Robust and reliable covariance estimates play a decisive role in financial and many other applications. An important class of estimators is based on factor models. Here, we show by extensive Monte Carlo simulations that covariance matrices derived from the statistical Factor Analysis model exhibit a systematic error, which is similar to the well-known systematic error of the spectrum of the sample covariance matrix. Moreover, we introduce the Directional Variance Adjustment (DVA) algorithm, which diminishes the systematic error. In a thorough empirical study for the US, European, and Hong Kong stock markets we show that our proposed method leads to improved portfolio allocation.
Bartz, Daniel; Hatrick, Kerr; Hesse, Christian W.; Müller, Klaus-Robert; Lemm, Steven
2013-01-01
Robust and reliable covariance estimates play a decisive role in financial and many other applications. An important class of estimators is based on factor models. Here, we show by extensive Monte Carlo simulations that covariance matrices derived from the statistical Factor Analysis model exhibit a systematic error, which is similar to the well-known systematic error of the spectrum of the sample covariance matrix. Moreover, we introduce the Directional Variance Adjustment (DVA) algorithm, which diminishes the systematic error. In a thorough empirical study for the US, European, and Hong Kong stock market we show that our proposed method leads to improved portfolio allocation. PMID:23844016
A sparse Ising model with covariates.
Cheng, Jie; Levina, Elizaveta; Wang, Pei; Zhu, Ji
2014-12-01
There has been a lot of work fitting Ising models to multivariate binary data in order to understand the conditional dependency relationships between the variables. However, additional covariates are frequently recorded together with the binary data, and may influence the dependence relationships. Motivated by such a dataset on genomic instability collected from tumor samples of several types, we propose a sparse covariate dependent Ising model to study both the conditional dependency within the binary data and its relationship with the additional covariates. This results in subject-specific Ising models, where the subject's covariates influence the strength of association between the genes. As in all exploratory data analysis, interpretability of results is important, and we use ℓ1 penalties to induce sparsity in the fitted graphs and in the number of selected covariates. Two algorithms to fit the model are proposed and compared on a set of simulated data, and asymptotic results are established. The results on the tumor dataset and their biological significance are discussed in detail. © 2014, The International Biometric Society.
Conservative Sample Size Determination for Repeated Measures Analysis of Covariance.
Morgan, Timothy M; Case, L Douglas
2013-07-05
In the design of a randomized clinical trial with one pre and multiple post randomized assessments of the outcome variable, one needs to account for the repeated measures in determining the appropriate sample size. Unfortunately, one seldom has a good estimate of the variance of the outcome measure, let alone the correlations among the measurements over time. We show how sample sizes can be calculated by making conservative assumptions regarding the correlations for a variety of covariance structures. The most conservative choice for the correlation depends on the covariance structure and the number of repeated measures. In the absence of good estimates of the correlations, the sample size is often based on a two-sample t-test, making the 'ultra' conservative and unrealistic assumption that there are zero correlations between the baseline and follow-up measures while at the same time assuming there are perfect correlations between the follow-up measures. Compared to the case of taking a single measurement, substantial savings in sample size can be realized by accounting for the repeated measures, even with very conservative assumptions regarding the parameters of the assumed correlation matrix. Assuming compound symmetry, the sample size from the two-sample t-test calculation can be reduced at least 44%, 56%, and 61% for repeated measures analysis of covariance by taking 2, 3, and 4 follow-up measures, respectively. The results offer a rational basis for determining a fairly conservative, yet efficient, sample size for clinical trials with repeated measures and a baseline value.
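The quoted 44%, 56%, and 61% savings can be reproduced from the variance of the baseline-adjusted mean of k follow-up measures under compound symmetry. The sketch below is a reconstruction under that assumption (a common correlation ρ between all measures, taking the worst case over ρ), not necessarily the authors' exact calculation:

```python
import numpy as np

def var_ratio(rho, k):
    """Variance of the baseline-adjusted mean of k follow-up measures,
    relative to a single unadjusted measure (two-sample t-test),
    under compound symmetry with common correlation rho:
    Var(mean of k follow-ups) - Cov(mean, baseline)^2 / Var(baseline)."""
    return (1 + (k - 1) * rho) / k - rho**2

for k in (2, 3, 4):
    rhos = np.linspace(0.0, 1.0, 100001)
    worst = var_ratio(rhos, k).max()          # conservative: worst case over rho
    # the maximiser is rho* = (k-1)/(2k), giving the closed form below
    closed = 1 / k + (k - 1) ** 2 / (4 * k**2)
    saving = 100 * (1 - worst)                 # guaranteed sample-size reduction
    print(k, round(saving))                    # 2 -> 44, 3 -> 56, 4 -> 61
```

The worst-case ratios 0.5625, 0.4444, and 0.3906 for k = 2, 3, 4 correspond exactly to the 44%, 56%, and 61% reductions stated in the abstract.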
Bodewig, E
1959-01-01
Matrix Calculus, Second Revised and Enlarged Edition focuses on systematic calculation with the building blocks of a matrix and rows and columns, shunning the use of individual elements. The publication first offers information on vectors, matrices, further applications, measures of the magnitude of a matrix, and forms. The text then examines eigenvalues and exact solutions, including the characteristic equation, eigenrows, extremum properties of the eigenvalues, bounds for the eigenvalues, elementary divisors, and bounds for the determinant. The text ponders on approximate solutions as well…
Variance and covariance estimates for weaning weight of Senepol cattle.
Wright, D W; Johnson, Z B; Brown, C J; Wildeus, S
1991-10-01
Variance and covariance components were estimated for weaning weight from Senepol field data for use in the reduced animal model for a maternally influenced trait. The 4,634 weaning records were used to evaluate 113 sires and 1,406 dams on the island of St. Croix. Estimates of direct additive genetic variance (σ²A), maternal additive genetic variance (σ²M), covariance between direct and maternal additive genetic effects (σAM), permanent maternal environmental variance (σ²PE), and residual variance (σ²ε) were calculated by equating variances estimated from a sire-dam model and a sire-maternal grandsire model, with and without the inverse of the numerator relationship matrix (A⁻¹), to their expectations. Estimates were σ²A, 139.05 and 138.14 kg²; σ²M, 307.04 and 288.90 kg²; σAM, -117.57 and -103.76 kg²; σ²PE, -258.35 and -243.40 kg²; and σ²ε, 588.18 and 577.72 kg², with and without A⁻¹, respectively. Heritability estimates for direct additive effects (h²A) were 0.211 and 0.210 with and without A⁻¹, respectively. Heritability estimates for maternal additive effects (h²M) were 0.47 and 0.44 with and without A⁻¹, respectively. Correlations between direct and maternal effects (rAM) were -0.57 and -0.52 with and without A⁻¹, respectively.
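The reported heritabilities and the direct-maternal correlation follow from the listed components. A quick arithmetic check using the with-relationship-matrix estimates, assuming the phenotypic variance is the plain sum of all five components (an assumption; the paper's exact denominator is not spelled out in the abstract):

```python
import math

# variance components with A^{-1} (kg^2), as reported in the abstract
var_A, var_M, cov_AM, var_PE, var_E = 139.05, 307.04, -117.57, -258.35, 588.18

# assumed total phenotypic variance: sum of all components
var_P = var_A + var_M + cov_AM + var_PE + var_E

h2_A = var_A / var_P                      # direct heritability,   ~0.211
h2_M = var_M / var_P                      # maternal heritability, ~0.47
r_AM = cov_AM / math.sqrt(var_A * var_M)  # direct-maternal correlation, ~-0.57
```

All three values match the abstract's figures, which supports the assumed denominator.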
The phenotypic and genetic covariance structure of drosophilid wings.
McGuigan, Katrina; Blows, Mark W
2007-04-01
Evolutionary constraint results from the interaction between the distribution of available genetic variation and the position of selective optima. The availability of genetic variance in multitrait systems, as described by the additive genetic variance-covariance matrix (G), has been the subject of recent attempts to assess the prevalence of genetic constraints. However, evolutionary constraints have not yet been considered from the perspective of the phenotypes available to multivariate selection, and whether genetic variance is present in all phenotypes potentially under selection. Determining the rank of the phenotypic variance-covariance matrix (P) to characterize the phenotypes available to selection, and contrasting it with the rank of G, may provide a general approach to determining the prevalence of genetic constraints. In a study of a laboratory population of Drosophila bunnanda from northern Australia we applied factor-analytic modeling to repeated measures of individual wing phenotypes to determine the dimensionality of the phenotypic space described by P. The phenotypic space spanned by the 10 wing traits had 10 statistically supported dimensions. In contrast, factor-analytic modeling of G estimated for the same 10 traits from a paternal half-sibling breeding design suggested G had fewer dimensions than traits. Statistical support was found for only five and two genetic dimensions, describing a total of 99% and 72% of genetic variance in wing morphology in females and males, respectively. The observed mismatch in dimensionality between P and G suggests that although selection might act to shift the intragenerational population mean toward any trait combination, evolution may be restricted to fewer dimensions.
Supergauge Field Theory of Covariant Heterotic Strings
Michio, KAKU; Physics Department, Osaka University : Physics Department, City College of the City University of New York
1986-01-01
We present the gauge-covariant second-quantized field theory for free heterotic strings, which is a leading candidate for a unified theory of all known particles. Our action is invariant under the semi-direct product of the super-Virasoro and the Kac-Moody E_8×E_8 or Spin(32)/Z_2 group. We derive the covariant action by path integrals in the same way that Feynman originally derived the Schrödinger equation. By adding an infinite number of auxiliary fields, we can also make the action explicitly…
Activities on covariance estimation in Japanese Nuclear Data Committee
Energy Technology Data Exchange (ETDEWEB)
Shibata, Keiichi [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment
1997-03-01
Described are activities on covariance estimation in the Japanese Nuclear Data Committee. Covariances are obtained from measurements by using the least-squares methods. A simultaneous evaluation was performed to deduce covariances of fission cross sections of U and Pu isotopes. A code system, KALMAN, is used to estimate covariances of nuclear model calculations from uncertainties in model parameters. (author)
Covariant Deformation Quantization of Free Fields
Harrivel, Dikanaina
2006-01-01
We covariantly define a deformation of a given algebra and show how it can be related to a deformation quantization of a class of observables in quantum field theory. We then investigate the operator ordering related to this deformation quantization.
Observed Score Linear Equating with Covariates
Branberg, Kenny; Wiberg, Marie
2011-01-01
This paper examined observed score linear equating in two different data collection designs, the equivalent groups design and the nonequivalent groups design, when information from covariates (i.e., background variables correlated with the test scores) was included. The main purpose of the study was to examine the effect (i.e., bias, variance, and…
Optimal covariate designs theory and applications
Das, Premadhis; Mandal, Nripes Kumar; Sinha, Bikas Kumar
2015-01-01
This book primarily addresses the optimality aspects of covariate designs. A covariate model is a combination of ANOVA and regression models. Optimal estimation of the parameters of the model using a suitable choice of designs is of great importance; as such choices allow experimenters to extract maximum information for the unknown model parameters. The main emphasis of this monograph is to start with an assumed covariate model in combination with some standard ANOVA set-ups such as CRD, RBD, BIBD, GDD, BTIBD, BPEBD, cross-over, multi-factor, split-plot and strip-plot designs, treatment control designs, etc. and discuss the nature and availability of optimal covariate designs. In some situations, optimal estimations of both ANOVA and the regression parameters are provided. Global optimality and D-optimality criteria are mainly used in selecting the design. The standard optimality results of both discrete and continuous set-ups have been adapted, and several novel combinatorial techniques have been applied for...
(co)variances for growth and efficiency
African Journals Online (AJOL)
…42, 295. CANTET, R.J.C., KRESS, D.D., ANDERSON, D.C., DOORNBOS, D.E., BURFENING, P.J. & BLACKWELL, R.L., 1988. Direct and maternal variances and covariances and maternal phenotypic effects on preweaning growth of beef cattle. J. Anim. Sci. 66, 648. CUNNINGHAM, E.P., MOON, R.A. & GJEDREN, T., 1970. …
Galilean Covariance and the Gravitational Field
Ulhoa, S. C.; Khanna, F. C.; Santana, A.E.
2009-01-01
The paper is concerned with the development of a gravitational field theory having locally a covariant version of the Galilei group. We show that this Galilean gravity can be used to study the advance of the perihelion of a planet, paralleling the (relativistic) result of general relativity in the post-Newtonian approximation.
On translation-covariant quantum Markov equations
Holevo, A. S.
1995-04-01
The structure of quantum Markov control equations with unbounded generators, covariant with respect to (1) an irreducible representation of the Weyl CCR on R^d and (2) a representation of the group R^d, is completely described via non-commutative Lévy-Khinchin-type formulae. The existence and uniqueness of solutions for such equations is briefly discussed.
Unravelling Lorentz Covariance and the Spacetime Formalism
Directory of Open Access Journals (Sweden)
Cahill R. T.
2008-10-01
We report the discovery of an exact mapping from Galilean time and space coordinates to Minkowski spacetime coordinates, showing that Lorentz covariance and the spacetime construct are consistent with the existence of a dynamical 3-space and absolute motion. We illustrate this mapping first with the standard theory of sound, as vibrations of a medium which itself may be undergoing fluid motion, and which is covariant under Galilean coordinate transformations. By introducing a different, non-physical class of space and time coordinates it may be cast into a form that is covariant under Lorentz transformations, wherein the speed of sound is now the invariant speed. If this latter formalism were taken as fundamental and complete, we would be led to the introduction of a pseudo-Riemannian spacetime description of sound, with a metric characterised by an invariant speed of sound. This analysis is an allegory for the development of 20th century physics, but where the Lorentz covariant Maxwell equations were constructed first, and the Galilean form was later constructed by Hertz, but ignored. It is shown that the Lorentz covariance of the Maxwell equations only occurs because of the use of non-physical space and time coordinates. The use of this class of coordinates has confounded 20th century physics, and resulted in the existence of a flowing dynamical 3-space being overlooked. The discovery of the dynamics of this 3-space has led to the derivation of an extended gravity theory as a quantum effect, confirmed by numerous experiments and observations.
Maximum a posteriori covariance estimation using a power inverse wishart prior
DEFF Research Database (Denmark)
Nielsen, Søren Feodor; Sporring, Jon
2012-01-01
The estimation of the covariance matrix is an initial step in many multivariate statistical methods such as principal components analysis and factor analysis, but in many practical applications the dimensionality of the sample space is large compared to the number of samples, and the usual maximum … class of prior distributions generalizing the inverse Wishart prior, discuss its properties, and demonstrate the estimator on simulated and real data.
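For orientation, with a standard (non-power) inverse Wishart prior the MAP covariance estimate has a simple closed form; the sketch below uses that classical formula, not the paper's generalized power prior:

```python
import numpy as np

def map_covariance(X, Psi, nu):
    """MAP estimate of the covariance of zero-mean Gaussian rows of X
    under an inverse Wishart prior IW(Psi, nu): the posterior is
    IW(Psi + S, nu + n), whose mode is (Psi + S) / (nu + n + p + 1)."""
    n, p = X.shape
    S = X.T @ X                      # scatter matrix of the data
    return (Psi + S) / (nu + n + p + 1)

rng = np.random.default_rng(1)
p, n = 50, 10                        # many dimensions, few samples
X = rng.standard_normal((n, p))
Psi = np.eye(p)                      # shrink toward the identity
Sigma_map = map_covariance(X, Psi, nu=p + 2)

# unlike the sample covariance (rank <= n < p), the MAP estimate is
# positive definite even when there are fewer samples than dimensions
eigs = np.linalg.eigvalsh(Sigma_map)
```

The prior acts as a regularizer in exactly the n < p regime the abstract describes; the paper's power inverse Wishart prior generalizes this construction.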
Covariances of the few-group homogenized cross-sections for diffusion calculation
Energy Technology Data Exchange (ETDEWEB)
Sánchez-Cervera, S.; Castro, S.; García-Herranz, N.
2015-07-01
In the context of the NEA/OECD benchmark for Uncertainty Analysis in Modelling (UAM), Exercise I-3 consists of neutronic calculations to propagate uncertainties to core parameters such as k-effective or the power distribution. In core simulators, the input uncertainties arise, among others, from few-group lattice-averaged cross-section uncertainties. In this paper, an analysis of those uncertainties due to nuclear data is performed. The core analyzed in Exercise I-3 is the initial loading of the PWR TMI-1, composed of 11 different types of fuel assemblies. By statistically sampling the nuclear data input, the SAMPLER sequence of the SCALE system (using its NEWT lattice code) obtains the few-group homogenized cross-sections and, via a statistical analysis, generates the covariance matrices. The correlations among different reactions and energy groups of the covariance matrices are analyzed. The impact of burnable poisons, control rods, and the environment of the assembly is also assessed. The importance of the correlation between different assembly types is shown. The global covariance matrix will permit computing the uncertainties in k-eff in a core simulator, once sensitivity coefficients are known. Only if the complete covariance matrix is considered are uncertainties similar to those provided by other methodologies obtained. (Author)
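The sampling-based construction of covariance and correlation matrices from perturbed calculations can be sketched generically; the toy "cross-section" samples below stand in for SAMPLER/NEWT outputs, and the 0.6 correlation is an invented value for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_groups = 300, 4

# toy stand-in for sampled few-group cross-sections (one row per perturbed
# calculation): unit means, 5% spread, common correlation 0.6 across groups
true_corr = 0.6
L = np.linalg.cholesky(true_corr * np.ones((n_groups, n_groups))
                       + (1 - true_corr) * np.eye(n_groups))
xs = 1.0 + 0.05 * rng.standard_normal((n_samples, n_groups)) @ L.T

cov = np.cov(xs, rowvar=False)           # sample covariance matrix
sd = np.sqrt(np.diag(cov))
corr = cov / np.outer(sd, sd)            # correlation matrix across groups
```

With enough samples, `corr` recovers the imposed cross-group correlations, which is the statistical-analysis step the abstract attributes to SAMPLER.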
DEFF Research Database (Denmark)
Barndorff-Nielsen, Ole Eiler; Shephard, N.
2004-01-01
This paper analyses multivariate high frequency financial data using realized covariation. We provide a new asymptotic distribution theory for standard methods such as regression, correlation analysis, and covariance. It will be based on a fixed interval of time (e.g., a day or week), allowing the number of high frequency returns during this period to go to infinity. Our analysis allows us to study how high frequency correlations, regressions, and covariances change through time. In particular we provide confidence intervals for each of these quantities.
Covariance of dynamic strain responses for structural damage detection
Li, X. Y.; Wang, L. X.; Law, S. S.; Nie, Z. H.
2017-10-01
A new approach to address practical problems in the condition evaluation/damage detection of structures is proposed, based on the distinct features of a new damage index. The covariance of the strain response function (CoS) is a function of the modal parameters of the structure. A local stiffness reduction in the structure causes a monotonic increase in the CoS. Its sensitivity matrix with respect to local damage is negative and narrow-banded. The damage extent can be estimated with an approximation to the sensitivity matrix to decouple the identification equations. The CoS sensitivity can be calibrated in practice from two previous states of measurements to estimate approximately the damage extent of a structure. A seven-storey plane frame structure is numerically studied to illustrate the features of the CoS index and the proposed method. A steel circular arch is tested in the laboratory. Natural frequencies changed due to damage in the arch, so the occurrence of damage can be judged. The proposed CoS method, however, can identify not only the occurrence of damage but also its location and even its extent, without the need for an analytical model. It is promising for the structural condition evaluation of selected components.
A Path Following Algorithm for Sparse Pseudo-Likelihood Inverse Covariance Estimation (SPLICE)
2008-07-24
the performance of covariance matrix estimates used by classifiers based on linear discriminant analysis (Bickel and Levina, 2004) and in Kalman … selection method is presented in Bilmes (2000). More recently, Bickel and Levina (2008) have obtained conditions ensuring consistency in the operator … lattice systems. Journal of the Royal Statistical Society, Series B 36, 2, 192-236. Bickel, P. and Levina, E. 2004. Some theory for Fisher's linear…
A full scale approximation of covariance functions for large spatial data sets
Sang, Huiyan
2011-10-10
Gaussian process models have been widely used in spatial statistics but face tremendous computational challenges for very large data sets. The model fitting and spatial prediction of such models typically require O(n³) operations for a data set of size n. Various approximations of the covariance functions have been introduced to reduce the computational cost. However, most existing approximations cannot simultaneously capture both the large- and the small-scale spatial dependence. A new approximation scheme is developed to provide a high quality approximation to the covariance function at both the large and the small spatial scales. The new approximation is the summation of two parts: a reduced rank covariance and a compactly supported covariance obtained by tapering the covariance of the residual of the reduced rank approximation. Whereas the former part mainly captures the large-scale spatial variation, the latter part captures the small-scale, local variation that is unexplained by the former part. By combining the reduced rank representation and sparse matrix techniques, our approach allows for efficient computation for maximum likelihood estimation, spatial prediction and Bayesian inference. We illustrate the new approach with simulated and real data sets. © 2011 Royal Statistical Society.
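The two-part approximation described above (a reduced-rank term plus a tapered residual) can be sketched in a few lines. The exponential covariance, knot count, and taper range below are illustrative choices, not the paper's settings:

```python
import numpy as np

def exp_cov(x1, x2, ell=0.3):
    # exponential covariance function on 1-D locations
    return np.exp(-np.abs(x1[:, None] - x2[None, :]) / ell)

def spherical_taper(d, gamma):
    # compactly supported spherical correlation, zero beyond range gamma
    t = np.clip(d / gamma, 0.0, 1.0)
    return (1 - 1.5 * t + 0.5 * t**3) * (d < gamma)

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0.0, 1.0, 200))   # observation locations
knots = np.linspace(0.0, 1.0, 20)         # knots for the reduced-rank part

C = exp_cov(x, x)                          # exact covariance, for comparison
C_xk = exp_cov(x, knots)
C_kk = exp_cov(knots, knots)
low_rank = C_xk @ np.linalg.solve(C_kk, C_xk.T)   # large-scale part

# taper the residual so the small-scale part is sparse (and PSD, since the
# Schur product of two PSD matrices is PSD)
resid = C - low_rank
taper = spherical_taper(np.abs(x[:, None] - x[None, :]), gamma=0.1)
C_fsa = low_rank + resid * taper           # full-scale approximation

err_fsa = np.linalg.norm(C - C_fsa) / np.linalg.norm(C)
err_lr = np.linalg.norm(C - low_rank) / np.linalg.norm(C)  # low rank only
```

The tapered residual recovers the near-diagonal (small-scale) covariance that the reduced-rank part misses, so the combined approximation is strictly more accurate than the reduced-rank part alone.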
Sang, Huiyan
2011-12-01
This paper investigates the cross-correlations across multiple climate model errors. We build a Bayesian hierarchical model that accounts for the spatial dependence of individual models as well as cross-covariances across different climate models. Our method allows for a nonseparable and nonstationary cross-covariance structure. We also present a covariance approximation approach to facilitate the computation in the modeling and analysis of very large multivariate spatial data sets. The covariance approximation consists of two parts: a reduced-rank part to capture the large-scale spatial dependence, and a sparse covariance matrix to correct the small-scale dependence error induced by the reduced rank approximation. We pay special attention to the case that the second part of the approximation has a block-diagonal structure. Simulation results of model fitting and prediction show substantial improvement of the proposed approximation over the predictive process approximation and the independent blocks analysis. We then apply our computational approach to the joint statistical modeling of multiple climate model errors. © 2012 Institute of Mathematical Statistics.
Directory of Open Access Journals (Sweden)
Berge Léonie
2016-01-01
As the need for precise handling of nuclear data covariances grows ever stronger, no information about covariances of prompt fission neutron spectra (PFNS) is available in the evaluated library JEFF-3.2, although it is present in the ENDF/B-VII.1 and JENDL-4.0 libraries for the main fissile isotopes. The aim of this work is to provide an estimation of covariance matrices related to PFNS, in the frame of some commonly used models for the evaluated files, such as the Maxwellian spectrum, the Watt spectrum, or the Madland-Nix spectrum. The evaluation of PFNS through these models involves an adjustment of model parameters to available experimental data, and the calculation of the spectrum variance-covariance matrix arising from experimental uncertainties. We present the results for thermal neutron induced fission of 235U. The systematic experimental uncertainties are propagated via the marginalization technique available in the CONRAD code. They have a great influence on the final covariance matrix and, therefore, on the width of the spectrum uncertainty band. In addition to this covariance estimation work, we have also investigated the impact of the fission spectrum model choice on a reactor calculation. A study of the vessel fluence depending on the PFNS model is presented. This is done through the propagation of neutrons emitted from a fission source in a simplified PWR using the TRIPOLI-4® code. This last study includes thermal fission spectra from the FIFRELIN Monte-Carlo code dedicated to the simulation of prompt particle emission during fission.
Linear covariance analysis for gimbaled pointing systems
Christensen, Randall S.
Linear covariance analysis has been utilized in a wide variety of applications. Historically, the theory has made significant contributions to navigation system design and analysis. More recently, the theory has been extended to capture the combined effect of navigation errors and closed-loop control on the performance of the system. These advancements have made possible rapid analysis and comprehensive trade studies of complicated systems ranging from autonomous rendezvous to vehicle ascent trajectory analysis. Comprehensive trade studies are also needed in the area of gimbaled pointing systems where the information needs are different from previous applications. It is therefore the objective of this research to extend the capabilities of linear covariance theory to analyze the closed-loop navigation and control of a gimbaled pointing system. The extensions developed in this research include modifying the linear covariance equations to accommodate a wider variety of controllers. This enables the analysis of controllers common to gimbaled pointing systems, with internal states and associated dynamics as well as actuator command filtering and auxiliary controller measurements. The second extension is the extraction of power spectral density estimates from information available in linear covariance analysis. This information is especially important to gimbaled pointing systems where not just the variance but also the spectrum of the pointing error impacts the performance. The extended theory is applied to a model of a gimbaled pointing system which includes both flexible and rigid body elements as well as input disturbances, sensor errors, and actuator errors. The results of the analysis are validated by direct comparison to a Monte Carlo-based analysis approach. Once the developed linear covariance theory is validated, analysis techniques that are often prohibitive with Monte Carlo analysis are used to gain further insight into the system. These include the creation
Covariance and the hierarchy of frame bundles
Estabrook, Frank B.
1987-01-01
This is an essay on the general concept of covariance, and its connection with the structure of the nested set of higher frame bundles over a differentiable manifold. Examples of covariant geometric objects include not only linear tensor fields, densities and forms, but affinity fields, sectors and sector forms, higher order frame fields, etc., often having nonlinear transformation rules and Lie derivatives. The intrinsic, or invariant, sets of forms that arise on frame bundles satisfy the graded Cartan-Maurer structure equations of an infinite Lie algebra. Reduction of these gives invariant structure equations for Lie pseudogroups, and for G-structures of various orders. Some new results are introduced for prolongation of structure equations, and for treatment of Riemannian geometry with higher-order moving frames. The use of invariant form equations for nonlinear field physics is implicitly advocated.
Torsion and geometrostasis in covariant superstrings
Energy Technology Data Exchange (ETDEWEB)
Zachos, C.
1985-01-01
The covariant action for freely propagating heterotic superstrings consists of a metric and a torsion term with a special relative strength. It is shown that the strength for which torsion flattens the underlying 10-dimensional superspace geometry is precisely that which yields free oscillators on the light cone. This is in complete analogy with the geometrostasis of two-dimensional sigma-models with Wess-Zumino interactions. 13 refs.
Batch Covariance Relaxation (BCR) Adaptive Processing.
1981-08-01
techniques dictates the need for processing flexibility which may be met most easily by a digital mechanization. The effort conducted addresses the...essential aspects of Batch Covariance Relaxation (BCR) adaptive processing applied to a digital adaptive array processing. In contrast to dynamic... library, RADAR:LIB. An extensive explanation as to how to use these programs is given. It is shown how the output of each is used as part of the input for
Covariance Kernels from Bayesian Generative Models
Seeger, Matthias
2002-01-01
We propose the framework of mutual information kernels for learning covariance kernels, as used in Support Vector machines and Gaussian process classifiers, from unlabeled task data using Bayesian techniques. We describe an implementation of this framework which uses variational Bayesian mixtures of factor analyzers in order to attack classification problems in high-dimensional spaces where labeled data is sparse, but unlabeled data is abundant.
Linear Covariance Analysis for a Lunar Lander
Jang, Jiann-Woei; Bhatt, Sagar; Fritz, Matthew; Woffinden, David; May, Darryl; Braden, Ellen; Hannan, Michael
2017-01-01
A next-generation lunar lander Guidance, Navigation, and Control (GNC) system, which includes a state-of-the-art optical sensor suite, is proposed in a concept design cycle. The design goal is to allow the lander to softly land within the prescribed landing precision. The achievement of this precision landing requirement depends on proper selection of the sensor suite. In this paper, a robust sensor selection procedure is demonstrated using a Linear Covariance (LinCov) analysis tool developed by Draper.
On conservativity of covariant dynamical semigroups
Holevo, A. S.
1993-10-01
The notion of a form-generator of a dynamical semigroup is introduced and used to give a criterion for the conservativity (preservation of the identity) of covariant dynamical semigroups. It allows one to reduce the problem of constructing conservative dynamical semigroups to the familiar problems of non-explosion for Markov processes and of constructing a contraction semigroup in a Hilbert space. Some new classes of unbounded generators, related to the Lévy-Khinchin formula, are described.
Covariance tracking: architecture optimizations for embedded systems
Romero, Andrés; Lacassagne, Lionel; Gouiffès, Michèle; Zahraee, Ali Hassan
2014-12-01
Covariance matching techniques have recently grown in interest due to their good performances for object retrieval, detection, and tracking. By mixing color and texture information in a compact representation, it can be applied to various kinds of objects (textured or not, rigid or not). Unfortunately, the original version requires heavy computations and is difficult to execute in real time on embedded systems. This article presents a review on different versions of the algorithm and its various applications; our aim is to describe the most crucial challenges and particularities that appeared when implementing and optimizing the covariance matching algorithm on a variety of desktop processors and on low-power processors suitable for embedded systems. An application of texture classification is used to compare different versions of the region descriptor. Then a comprehensive study is made to reach a higher level of performance on multi-core CPU architectures by comparing different ways to structure the information, using single instruction, multiple data (SIMD) instructions and advanced loop transformations. The execution time is reduced significantly on two dual-core CPU architectures for embedded computing: ARM Cortex-A9 and Cortex-A15 and Intel Penryn-M U9300 and Haswell-M 4650U. According to our experiments on covariance tracking, it is possible to reach a speedup greater than ×2 on both ARM and Intel architectures, when compared to the original algorithm, leading to real-time execution.
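The compact representation at the heart of the algorithm is the covariance region descriptor: the covariance matrix of a per-pixel feature vector over a region. The sketch below uses a common feature choice (coordinates, intensity, absolute gradients); the article compares several descriptor variants, so take this exact feature set as an assumption for illustration.

```python
import numpy as np

def covariance_descriptor(patch):
    """Covariance region descriptor of a grayscale patch.
    Per-pixel features: x, y, intensity, |dI/dx|, |dI/dy|;
    the descriptor is their 5x5 covariance over the patch."""
    H, W = patch.shape
    ys, xs = np.mgrid[0:H, 0:W].astype(float)
    gy, gx = np.gradient(patch.astype(float))   # gradients along rows, cols
    F = np.stack([xs.ravel(), ys.ravel(), patch.ravel().astype(float),
                  np.abs(gx).ravel(), np.abs(gy).ravel()], axis=1)
    return np.cov(F, rowvar=False)
```

Because the descriptor is a small symmetric positive semidefinite matrix, matching is typically done with a metric on SPD matrices rather than a plain Euclidean distance, which is part of why the original version is computationally heavy on embedded targets.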
Development of covariance capabilities in EMPIRE code
Energy Technology Data Exchange (ETDEWEB)
Herman,M.; Pigni, M.T.; Oblozinsky, P.; Mughabghab, S.F.; Mattoon, C.M.; Capote, R.; Cho, Young-Sik; Trkov, A.
2008-06-24
The nuclear reaction code EMPIRE has been extended to provide evaluation capabilities for neutron cross section covariances in the thermal, resolved resonance, unresolved resonance and fast neutron regions. The Atlas of Neutron Resonances by Mughabghab is used as a primary source of information on uncertainties at low energies. Care is taken to ensure consistency among the resonance parameter uncertainties and those for thermal cross sections. The resulting resonance parameter covariances are formatted in the ENDF-6 File 32. In the fast neutron range our methodology is based on model calculations with the code EMPIRE combined with experimental data through several available approaches. The model-based covariances can be obtained using deterministic (Kalman) or stochastic (Monte Carlo) propagation of model parameter uncertainties. We show that these two procedures yield comparable results. The Kalman filter and/or the generalized least square fitting procedures are employed to incorporate experimental information. We compare the two approaches analyzing results for the major reaction channels on ⁸⁹Y. We also discuss a long-standing issue of unreasonably low uncertainties and link it to the rigidity of the model.
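The stochastic (Monte Carlo) propagation route mentioned in the abstract — sampling model parameters and taking the covariance of the resulting outputs — can be sketched generically. This is a toy illustration, not EMPIRE code; the linear `model` used in the check is a made-up stand-in for a reaction model.

```python
import numpy as np

def monte_carlo_covariance(model, p_mean, p_cov, n=5000, seed=0):
    """Propagate parameter uncertainties through `model` by sampling:
    draw parameters from N(p_mean, p_cov), evaluate the model on each
    draw, and return the sample covariance of the outputs."""
    rng = np.random.default_rng(seed)
    params = rng.multivariate_normal(p_mean, p_cov, size=n)
    outputs = np.array([model(p) for p in params])
    return np.cov(outputs, rowvar=False)

# For a linear model y = A p the result must approach A Cov(p) A^T,
# which is exactly what a deterministic (sandwich-rule) propagation gives;
# this is the sense in which the two procedures yield comparable results.
A = np.array([[1.0, 0.0], [1.0, 1.0]])
C_mc = monte_carlo_covariance(lambda p: A @ p, np.zeros(2), np.eye(2))
```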
Energy Technology Data Exchange (ETDEWEB)
Putter, Roland de; Wagner, Christian; Verde, Licia [ICC, University of Barcelona (IEEC-UB), Marti i Franques 1, Barcelona 08028 (Spain); Mena, Olga [Instituto de Física Corpuscular, Universidad de Valencia-CSIC, C/ Catedrático José Beltrán 2, Paterna (Spain); Percival, Will J., E-mail: rdeputter@berkeley.edu, E-mail: cwagner@icc.ub.edu, E-mail: omena@ific.uv.es, E-mail: liciaverde@icc.ub.edu, E-mail: will.percival@port.ac.uk [Institute of Cosmology and Gravitation, University of Portsmouth, Dennis Sciama Bldg., Portsmouth, PO1 3FX (United Kingdom)
2012-04-01
Accurate power spectrum (or correlation function) covariance matrices are a crucial requirement for cosmological parameter estimation from large scale structure surveys. In order to minimize reliance on computationally expensive mock catalogs, it is important to have a solid analytic understanding of the different components that make up a covariance matrix. Considering the matter power spectrum covariance matrix, it has recently been found that there is a potentially dominant effect on mildly non-linear scales due to power in modes of size equal to and larger than the survey volume. This beat coupling effect has been derived analytically in perturbation theory and while it has been tested with simulations, some questions remain unanswered. Moreover, there is an additional effect of these large modes, which has so far not been included in analytic studies, namely the effect on the estimated average density which enters the power spectrum estimate. In this article, we work out analytic, perturbation theory based expressions including both the beat coupling and this local average effect and we show that while, when isolated, beat coupling indeed causes large excess covariance in agreement with the literature, in a realistic scenario this is compensated almost entirely by the local average effect, leaving only ∼ 10% of the excess. We test our analytic expressions by comparison to a suite of large N-body simulations, using both full simulation boxes and subboxes thereof to study cases without beat coupling, with beat coupling and with both beat coupling and the local average effect. For the variances, we find excellent agreement with the analytic expressions for k < 0.2 h Mpc⁻¹ at z = 0.5, while the correlation coefficients agree to beyond k = 0.4 h Mpc⁻¹. As expected, the range of agreement increases towards higher redshift and decreases slightly towards z = 0. We finish by including the large-mode effects in a full covariance matrix description for
Tucker, Bram
2007-06-01
This paper begins with the hypothesis that Mikea, participants in a mixed foraging-fishing-farming-herding economy of southwestern Madagascar, may attempt to reduce interannual variance in food supply caused by unpredictable rainfall by following a simple rule-of-thumb: Practice an even mix of activities that covary positively with rainfall and activities that covary negatively with rainfall. Results from a historical matrix participatory exercise confirm that Mikea perceive that foraging and farming outcomes covary positively or negatively with rainfall. This paper further considers whether Mikea learn about covariation through personal observation and memory recall (individual learning) or through socially transmitted ethnotheory (social learning). Dual inheritance theory models by Boyd and Richerson (1988) predict that individual learning is more effective in spatially and temporally variable environments such as the Mikea Forest. In contrast, the psychological literature suggests that individuals judge covariation poorly when memory of past events is required, unless they share a socially learned theory that a covariation should exist (Nisbett and Ross 1980). Results suggest that Mikea rely heavily on shared ethnotheory when judging covariation, but individuals continually strive to improve their judgment through individual observation.
Performance of penalized maximum likelihood in estimation of genetic covariance matrices
Directory of Open Access Journals (Sweden)
Meyer Karin
2011-11-01
Background: Estimation of genetic covariance matrices for multivariate problems comprising more than a few traits is inherently problematic, since sampling variation increases dramatically with the number of traits. This paper investigates the efficacy of regularized estimation of covariance components in a maximum likelihood framework, imposing a penalty on the likelihood designed to reduce sampling variation. In particular, penalties that "borrow strength" from the phenotypic covariance matrix are considered. Methods: An extensive simulation study was carried out to investigate the reduction in average 'loss', i.e. the deviation in estimated matrices from the population values, and the accompanying bias for a range of parameter values and sample sizes. A number of penalties are examined, penalizing either the canonical eigenvalues or the genetic covariance or correlation matrices. In addition, several strategies to determine the amount of penalization to be applied, i.e. to estimate the appropriate tuning factor, are explored. Results: It is shown that substantial reductions in loss for estimates of genetic covariance can be achieved for small to moderate sample sizes. While no penalty performed best overall, penalizing the variance among the estimated canonical eigenvalues on the logarithmic scale or shrinking the genetic towards the phenotypic correlation matrix appeared most advantageous. Estimating the tuning factor using cross-validation resulted in a loss reduction 10 to 15% less than that obtained if population values were known. Applying a mild penalty, chosen so that the deviation in likelihood from the maximum was non-significant, performed as well if not better than cross-validation and can be recommended as a pragmatic strategy. Conclusions: Penalized maximum likelihood estimation provides the means to 'make the most' of limited and precious data and facilitates more stable estimation for multi-dimensional analyses. It should
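One of the penalty families this abstract finds advantageous — shrinking the genetic correlation matrix towards the phenotypic one — has a simple closed form for a fixed tuning factor. The sketch below is a hypothetical illustration; the paper estimates the tuning factor by cross-validation or a likelihood criterion, which is omitted here.

```python
import numpy as np

def shrink_genetic_correlation(G, P, lam):
    """Shrink the genetic correlation matrix towards the phenotypic one
    with tuning factor lam in [0, 1], keeping the genetic variances."""
    def to_corr(C):
        s = np.sqrt(np.diag(C))
        return C / np.outer(s, s)
    R = (1.0 - lam) * to_corr(G) + lam * to_corr(P)   # blended correlation
    s = np.sqrt(np.diag(G))                           # genetic std devs
    return R * np.outer(s, s)
```

With lam = 0 the genetic estimate is returned unchanged; with lam = 1 the genetic correlations are replaced by the (better-estimated) phenotypic ones, while the genetic variances are preserved in both cases.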
Block-diagonal representations for covariance-based anomalous change detectors
Energy Technology Data Exchange (ETDEWEB)
Matsekh, Anna M [Los Alamos National Laboratory; Theiler, James P [Los Alamos National Laboratory
2010-01-01
We use singular vectors of the whitened cross-covariance matrix of two hyper-spectral images and the Golub-Kahan permutations in order to obtain equivalent tridiagonal representations of the coefficient matrices for a family of covariance-based quadratic Anomalous Change Detection (ACD) algorithms. Due to the nature of the problem these tridiagonal matrices have block-diagonal structure, which we exploit to derive analytical expressions for the eigenvalues of the coefficient matrices in terms of the singular values of the whitened cross-covariance matrix. The block-diagonal structure of the matrices of the RX, Chronochrome, symmetrized Chronochrome, Whitened Total Least Squares, Hyperbolic and Subpixel Hyperbolic Anomalous Change Detectors is revealed by the white singular value decomposition and the Golub-Kahan transformations. Similarities and differences in the properties of these change detectors are illuminated by their eigenvalue spectra. We present a methodology that provides the eigenvalue spectrum for a wide range of quadratic anomalous change detectors. Table I summarizes these results, and Fig. I illustrates them. Although their eigenvalues differ, we find that RX, HACD, Subpixel HACD, symmetrized Chronochrome, and WTLSQ share the same eigenvectors. The eigenvectors for the two variants of Chronochrome defined in (18) differ from these, and from each other, even though they share many (but not all, unless d_x = d_y) eigenvalues. We demonstrate that it is sufficient to compute the SVD of the whitened cross-covariance matrix of the data in order to almost immediately obtain the highly structured sparse matrices (and their eigenvalue spectra) of the coefficient matrices of these ACD algorithms in the white SVD-transformed coordinates. Converting to the original non-white coordinates, these eigenvalues will be modified in magnitude but not in sign. That is, the number of positive, zero-valued, and negative eigenvalues will be conserved.
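The common starting point of this analysis — the SVD of the whitened cross-covariance of the two images — can be sketched as follows. This is a generic illustration on zero-mean data matrices (rows = pixels, columns = spectral bands), not the authors' code.

```python
import numpy as np

def whitened_cross_covariance_svd(X, Y):
    """Whiten each data set and return the SVD of their cross-covariance.
    The singular values are the canonical correlations between the two
    images, so they lie in [0, 1]."""
    def whiten(Z):
        Zc = Z - Z.mean(axis=0)
        w, V = np.linalg.eigh(np.cov(Zc, rowvar=False))
        return Zc @ (V * w ** -0.5) @ V.T   # multiply by C^{-1/2}
    Xw, Yw = whiten(X), whiten(Y)
    Cxy = Xw.T @ Yw / (len(X) - 1)
    return np.linalg.svd(Cxy)
```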
Inverse modeling of the terrestrial carbon flux in China with flux covariance among inverted regions
Wang, H.; Jiang, F.; Chen, J. M.; Ju, W.; Wang, H.
2011-12-01
Quantitative understanding of the role of ocean and terrestrial biosphere in the global carbon cycle, their response and feedback to climate change is required for the future projection of the global climate. China has the largest amount of anthropogenic CO2 emission, diverse terrestrial ecosystems and an unprecedented rate of urbanization. Thus information on spatial and temporal distributions of the terrestrial carbon flux in China is of great importance in understanding the global carbon cycle. We developed a nested inversion with a focus on China. Based on Transcom 22 regions for the globe, we divide China and its neighboring countries into 17 regions, making 39 regions in total for the globe. A Bayesian synthesis inversion is made to estimate the terrestrial carbon flux based on GlobalView CO2 data. In the inversion, GEOS-Chem is used as the transport model to develop the transport matrix. A terrestrial ecosystem model named BEPS is used to produce the prior surface flux to constrain the inversion. However, the sparseness of available observation stations in Asia poses a challenge to the inversion for the 17 small regions. To obtain additional constraint on the inversion, a prior flux covariance matrix is constructed using the BEPS model through analyzing the correlation in the net carbon flux among regions under variable climate conditions. The use of the covariance among different regions in the inversion effectively extends the information content of CO2 observations to more regions. The carbon flux over the 39 land and ocean regions is inverted for the period from 2004 to 2009. In order to investigate the impact of introducing the covariance matrix with non-zero off-diagonal values to the inversion, the inverted terrestrial carbon flux over China is evaluated against ChinaFlux eddy-covariance observations after applying an upscaling methodology.
Covariance differences of linearly representable sequences in Hilbert ...
African Journals Online (AJOL)
The paper introduces the concept of covariance differences of a sequence and establishes its relationship with the covariance function. One of the main results of the paper is a criterion for the linear representability of sequences in Hilbert spaces.
High-dimensional covariance estimation with high-dimensional data
Pourahmadi, Mohsen
2013-01-01
Methods for estimating sparse and large covariance matrices. Covariance and correlation matrices play fundamental roles in every aspect of the analysis of multivariate data collected from a variety of fields including business and economics, health care, engineering, and environmental and physical sciences. High-Dimensional Covariance Estimation provides accessible and comprehensive coverage of the classical and modern approaches for estimating covariance matrices as well as their applications to the rapidly developing areas lying at the intersection of statistics and mac
Eddy covariance based methane flux in Sundarbans mangroves, India
Indian Academy of Sciences (India)
Keywords: eddy covariance; mangrove forests; methane flux; Sundarbans. In order to quantify the methane flux in mangroves, an eddy covariance flux tower was recently erected in the largest unpolluted and undisturbed mangrove ecosystem in Sundarbans ...
Earth Observation System Flight Dynamics System Covariance Realism
Zaidi, Waqar H.; Tracewell, David
2016-01-01
This presentation applies a covariance realism technique to the National Aeronautics and Space Administration (NASA) Earth Observation System (EOS) Aqua and Aura spacecraft based on inferential statistics. The technique consists of three parts: calculation of definitive state estimates through orbit determination, calculation of covariance realism test statistics at each covariance propagation point, and proper assessment of those test statistics.
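A standard covariance realism test statistic of the kind referred to here is the Mahalanobis distance of the state error under the propagated covariance: if the covariance is realistic, the statistic is chi-square distributed with dim(x) degrees of freedom. The sketch below is a generic illustration of that test, not the presentation's actual procedure.

```python
import numpy as np

def realism_statistics(errors, covariances):
    """Mahalanobis statistic e^T P^{-1} e at each propagation point.
    For a realistic covariance P, each value follows a chi-square
    distribution with len(e) degrees of freedom (mean = len(e))."""
    return np.array([e @ np.linalg.solve(P, e)
                     for e, P in zip(errors, covariances)])

# Simulated check: errors actually drawn from N(0, P) should give a
# sample mean close to the state dimension (here 3).
rng = np.random.default_rng(0)
P = np.diag([4.0, 1.0, 0.25])
errs = rng.multivariate_normal(np.zeros(3), P, size=5000)
stats = realism_statistics(errs, [P] * 5000)
```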
Inferring Meta-covariates in Classification
Harris, Keith; McMillan, Lisa; Girolami, Mark
This paper develops an alternative method for gene selection that combines model based clustering and binary classification. By averaging the covariates within the clusters obtained from model based clustering, we define “meta-covariates” and use them to build a probit regression model, thereby selecting clusters of similarly behaving genes, aiding interpretation. This simultaneous learning task is accomplished by an EM algorithm that optimises a single likelihood function which rewards good performance at both classification and clustering. We explore the performance of our methodology on a well known leukaemia dataset and use the Gene Ontology to interpret our results.
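The core construction of the paper — averaging the covariates within each cluster to form "meta-covariates" — is easy to sketch. Here the cluster labels are assumed given; in the paper they are learned jointly with the probit classifier by the EM algorithm, which this toy omits.

```python
import numpy as np

def meta_covariates(X, labels):
    """Collapse the columns of X (samples x covariates) into one averaged
    'meta-covariate' per cluster; labels[j] is the cluster of covariate j."""
    labels = np.asarray(labels)
    return np.column_stack([X[:, labels == c].mean(axis=1)
                            for c in np.unique(labels)])
```

The reduced matrix can then feed any classifier (a probit regression in the paper), and a cluster receiving a large coefficient is read as a group of similarly behaving genes, which is what aids interpretation.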
Cosmology of a covariant Galilean field.
De Felice, Antonio; Tsujikawa, Shinji
2010-09-10
We study the cosmology of a covariant scalar field respecting a Galilean symmetry in flat space-time. We show the existence of a tracker solution that finally approaches a de Sitter fixed point responsible for cosmic acceleration today. The viable region of model parameters is clarified by deriving conditions under which ghosts and Laplacian instabilities of scalar and tensor perturbations are absent. The field equation of state exhibits a peculiar phantomlike behavior along the tracker, which allows a possibility to observationally distinguish the Galileon gravity from the cold dark matter model with a cosmological constant.
Anomalies in covariant W-gravity
Ceresole, Anna T.; Frau, Marialuisa; McCarthy, Jim; Lerda, Alberto
1991-08-01
We consider free scalar matter covariantly coupled to background W-gravity. Expanding to second order in the W-gravity fields, we study the appropriate anomalous Ward-Takahashi identities and find the counterterms which maintain diffeomorphism invariance and its W-analogue. We see that a redefinition of the vielbein transformation rule under W-diffeomorphism is required in order to cancel nonlocal contributions to the anomaly. Moreover, we explicitly write all gauge invariances at this order. Some consequences of these results for the chiral gauge quantization are discussed.
Galilei covariant quantum mechanics in electromagnetic fields
Directory of Open Access Journals (Sweden)
H. E. Wilhelm
1985-01-01
A formulation of the quantum mechanics of charged particles in time-dependent electromagnetic fields is presented, in which both the Schroedinger equation and the wave equations for the electromagnetic potentials are Galilei covariant. It is shown that the Galilean relativity principle leads to the introduction of an electromagnetic substratum in which the matter and electromagnetic waves propagate. The electromagnetic substratum effects are quantitatively significant for quantum mechanics in reference frames in which the substratum velocity w is comparable in magnitude with the velocity of light c. The electromagnetic substratum velocity w occurs explicitly in the wave equations for the electromagnetic potentials but not in the Schroedinger equation.
Minimal covariant observables identifying all pure states
Energy Technology Data Exchange (ETDEWEB)
Carmeli, Claudio, E-mail: claudio.carmeli@gmail.com [D.I.M.E., Università di Genova, Via Cadorna 2, I-17100 Savona (Italy); I.N.F.N., Sezione di Genova, Via Dodecaneso 33, I-16146 Genova (Italy); Heinosaari, Teiko, E-mail: teiko.heinosaari@utu.fi [Turku Centre for Quantum Physics, Department of Physics and Astronomy, University of Turku (Finland); Toigo, Alessandro, E-mail: alessandro.toigo@polimi.it [Dipartimento di Matematica, Politecnico di Milano, Piazza Leonardo da Vinci 32, I-20133 Milano (Italy); I.N.F.N., Sezione di Milano, Via Celoria 16, I-20133 Milano (Italy)
2013-09-02
It has been recently shown by Heinosaari, Mazzarella and Wolf (2013) [1] that an observable that identifies all pure states of a d-dimensional quantum system has at least 4d−4 outcomes, or slightly fewer (the exact number depending on d). However, no simple construction of this type of minimal observable is known. We investigate covariant observables that identify all pure states and have the minimal number of outcomes. It is shown that the existence of this kind of observable depends on the dimension of the Hilbert space.
Bhatia, Rajendra
1997-01-01
A good part of matrix theory is functional analytic in spirit. This statement can be turned around. There are many problems in operator theory, where most of the complexities and subtleties are present in the finite-dimensional case. My purpose in writing this book is to present a systematic treatment of methods that are useful in the study of such problems. This book is intended for use as a text for upper division and graduate courses. Courses based on parts of the material have been given by me at the Indian Statistical Institute and at the University of Toronto (in collaboration with Chandler Davis). The book should also be useful as a reference for research workers in linear algebra, operator theory, mathematical physics and numerical analysis. A possible subtitle of this book could be Matrix Inequalities. A reader who works through the book should expect to become proficient in the art of deriving such inequalities. Other authors have compared this art to that of cutting diamonds. One first has to...
Covariance analysis for evaluating head trackers
Kang, Donghoon
2017-10-01
Existing methods for evaluating the performance of head trackers usually rely on publicly available face databases, which contain facial images and the ground truths of their corresponding head orientations. However, most of the existing publicly available face databases are constructed by assuming that a frontal head orientation can be determined by compelling the person under examination to look straight ahead at the camera on the first video frame. Since nobody can accurately direct one's head toward the camera, this assumption may be unrealistic. Rather than obtaining estimation errors, we present a method for computing the covariance of estimation error rotations to evaluate the reliability of head trackers. As an uncertainty measure of estimators, the Schatten 2-norm of a square root of error covariance (or the algebraic average of relative error angles) can be used. The merit of the proposed method is that it does not disturb the person under examination by asking him to direct his head toward certain directions. Experimental results using real data validate the usefulness of our method.
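The scalar uncertainty measure proposed here — the Schatten 2-norm of a square root of the error covariance — reduces to a one-liner: for a symmetric positive semidefinite Σ, the squared singular values of Σ^{1/2} are the eigenvalues of Σ, so the norm equals sqrt(trace(Σ)). The helper name below is hypothetical; only the formula comes from the abstract.

```python
import numpy as np

def rotation_uncertainty(cov):
    """Schatten 2-norm (Frobenius norm) of the square root of an error
    covariance: ||cov^{1/2}||_S2 = sqrt(trace(cov)), so no explicit
    matrix square root is needed."""
    return float(np.sqrt(np.trace(cov)))
```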
Stochastic precipitation generator with hidden state covariates
Kim, Yongku; Lee, GyuWon
2017-08-01
Time series of daily weather such as precipitation, minimum temperature and maximum temperature are commonly required for various fields. Stochastic weather generators constitute one of the techniques to produce synthetic daily weather. The recently introduced approach for stochastic weather generators is based on generalized linear modeling (GLM) with covariates to account for seasonality and teleconnections (e.g., with the El Niño). In general, stochastic weather generators tend to underestimate the observed interannual variance of seasonally aggregated variables. To reduce this overdispersion, we incorporated time series of seasonal dry/wet indicators in the GLM weather generator as covariates. These seasonal time series were local (or global) decodings obtained by a hidden Markov model of seasonal total precipitation and implemented in the weather generator. The proposed method is applied to time series of daily weather from Seoul, Korea and Pergamino, Argentina. This method provides a straightforward translation of the uncertainty of the seasonal forecast to the corresponding conditional daily weather statistics.
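A minimal version of the occurrence part of such a GLM generator, with a seasonal harmonic and the hidden dry/wet-season indicator entering as covariates, might look as follows. The coefficients are made up for illustration; fitting them, and decoding the hidden states with an HMM, is the substance of the paper and is omitted here.

```python
import numpy as np

def rain_probability(day_of_year, wet_season, beta=(-1.0, 0.8, 1.2)):
    """Logistic GLM for daily precipitation occurrence:
    logit P(rain) = b0 + b1*cos(2*pi*t/365.25) + b2*wet_season,
    where wet_season is the decoded hidden-state indicator (0 or 1)."""
    b0, b1, b2 = beta
    eta = b0 + b1 * np.cos(2 * np.pi * day_of_year / 365.25) + b2 * wet_season
    return 1.0 / (1.0 + np.exp(-eta))
```

Conditioning on the decoded seasonal state shifts the whole season's occurrence probabilities up or down, which is how the extra covariate restores the interannual variance that plain GLM generators underestimate.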
Covariates of alcohol consumption among career firefighters.
Piazza-Gardner, A K; Barry, A E; Chaney, E; Dodd, V; Weiler, R; Delisle, A
2014-12-01
Little is known about rates of alcohol consumption in career firefighters. To assess the quantity and frequency of alcohol consumption among career firefighters and the covariates that influence consumption levels. A convenience sample of career firefighters completed an online, self-administered, health assessment survey. Hierarchical binary logistic regression assessed the ability of several covariates to predict binge drinking status. The majority of the sample (n = 160) consumed alcohol (89%), with approximately one-third (34%) having a drinking binge in the past 30 days. The regression model explained 13-18% of the variance in binge drinking status and correctly classified 71% of cases. Race (P firefighters were 1.08 times less likely to binge drink (95% CI: 0.87-0.97). Drinking levels observed in this study exceed those of the general adult population, including college students. Thus, it appears that firefighters represent an at-risk drinking group. Further investigations addressing reasons for alcohol use and abuse among firefighters are warranted. This study and subsequent research will provide information necessary for the development and testing of tailored interventions aimed at reducing firefighter alcohol consumption. © The Author 2014.
Meier, Timothy B; Wildenberg, Joseph C; Liu, Jingyu; Chen, Jiayu; Calhoun, Vince D; Biswal, Bharat B; Meyerand, Mary E; Birn, Rasmus M; Prabhakaran, Vivek
2012-01-01
Parallel Independent Component Analysis (para-ICA) is a multivariate method that can identify complex relationships between different data modalities by simultaneously performing Independent Component Analysis on each data set while finding mutual information between the two data sets. We use para-ICA to test the hypothesis that spatial sub-components of common resting state networks (RSNs) covary with specific behavioral measures. Resting state scans and a battery of behavioral indices were collected from 24 younger adults. Group ICA was performed and common RSNs were identified by spatial correlation to publically available templates. Nine RSNs were identified and para-ICA was run on each network with a matrix of behavioral measures serving as the second data type. Five networks had spatial sub-components that significantly correlated with behavioral components. These included a sub-component of the temporo-parietal attention network that differentially covaried with different trial-types of a sustained attention task, sub-components of default mode networks that covaried with attention and working memory tasks, and a sub-component of the bilateral frontal network that split the left inferior frontal gyrus into three clusters according to its cytoarchitecture that differentially covaried with working memory performance. Additionally, we demonstrate the validity of para-ICA in cases with unbalanced dimensions using simulated data.
Bertram, Susan M; Fitzsimmons, Lauren P; McAuley, Emily M; Rundle, Howard D; Gorelick, Root
2012-01-01
The phenotypic variance–covariance matrix (P) describes the multivariate distribution of a population in phenotypic space, providing direct insight into the appropriateness of measured traits within the context of multicollinearity (i.e., do they describe any significant variance that is independent of other traits), and whether trait covariances restrict the combinations of phenotypes available to selection. Given the importance of P, it is therefore surprising that phenotypic covariances are seldom jointly analyzed and that the dimensionality of P has rarely been investigated in a rigorous statistical framework. Here, we used a repeated measures approach to quantify P separately for populations of four cricket species using seven acoustic signaling traits thought to enhance mate attraction. P was of full or almost full dimensionality in all four species, indicating that all traits conveyed some information that was independent of the other traits, and that phenotypic trait covariances do not constrain the combinations of signaling traits available to selection. P also differed significantly among species, although the dominant axis of phenotypic variation (pmax) was largely shared among three of the species (Acheta domesticus, Gryllus assimilis, G. texensis), but different in the fourth (G. veletis). In G. veletis and A. domesticus, but not G. assimilis and G. texensis, pmax was correlated with body size, while pmax was not correlated with residual mass (a condition measure) in any of the species. This study reveals the importance of jointly analyzing phenotypic traits. PMID:22408735
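The quantities discussed above (P, its dimensionality, and pmax) can be illustrated on synthetic data. The trait values, sample size, and induced covariance below are invented, and the sketch ignores the paper's repeated-measures structure:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for 7 signalling traits measured on 200 individuals.
n_individuals, n_traits = 200, 7
X = rng.normal(size=(n_individuals, n_traits))
X[:, 1] += 0.8 * X[:, 0]            # induce one trait covariance

# Phenotypic variance-covariance matrix P.
P = np.cov(X, rowvar=False)

# Eigendecomposition: eigenvalues give the variance along each axis;
# the eigenvector with the largest eigenvalue is pmax.
evals, evecs = np.linalg.eigh(P)
pmax = evecs[:, -1]

# Effective dimensionality: eigenvalues clearly above zero.
rank = int(np.sum(evals > 1e-8))
```

Full rank here (rank equal to the number of traits) corresponds to the paper's finding that every trait carries some independent variance.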
Exact Covariance Thresholding into Connected Components for Large-Scale Graphical Lasso.
Mazumder, Rahul; Hastie, Trevor
2012-03-01
We consider the sparse inverse covariance regularization problem or graphical lasso with regularization parameter λ. Suppose the sample covariance graph formed by thresholding the entries of the sample covariance matrix at λ is decomposed into connected components. We show that the vertex-partition induced by the connected components of the thresholded sample covariance graph (at λ) is exactly equal to that induced by the connected components of the estimated concentration graph, obtained by solving the graphical lasso problem for the same λ. This characterizes a very interesting property of a path of graphical lasso solutions. Furthermore, this simple rule, when used as a wrapper around existing algorithms for the graphical lasso, leads to enormous performance gains. For a range of values of λ, our proposal splits a large graphical lasso problem into smaller tractable problems, making it possible to solve an otherwise infeasible large-scale problem. We illustrate the graceful scalability of our proposal via synthetic and real-life microarray examples.
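The thresholding rule itself is easy to sketch: build the graph whose edges are the off-diagonal entries of |S| exceeding λ and read off its connected components. The covariance matrix and λ below are made up for illustration:

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

# A small hand-made sample covariance with two variable blocks that are
# only weakly cross-correlated (values invented for illustration).
S = np.array([
    [1.00, 0.60, 0.50, 0.10, 0.05],
    [0.60, 1.00, 0.40, 0.00, 0.10],
    [0.50, 0.40, 1.00, 0.05, 0.00],
    [0.10, 0.00, 0.05, 1.00, 0.70],
    [0.05, 0.10, 0.00, 0.70, 1.00],
])
lam = 0.2  # graphical lasso regularization parameter

# Threshold the off-diagonal entries of |S| at lam to obtain the
# sample covariance graph ...
adj = (np.abs(S) > lam).astype(int)
np.fill_diagonal(adj, 0)

# ... whose connected components, by the exact-thresholding result,
# coincide with the blocks of the graphical lasso solution at lam,
# so each component can be solved as an independent subproblem.
n_comp, labels = connected_components(csr_matrix(adj), directed=False)
```

Here variables {0, 1, 2} and {3, 4} split into two components, so a 5-variable problem reduces to a 3-variable and a 2-variable graphical lasso.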
Covariant differential calculus on the quantum exterior vector space
Parashar, Preeti; Soni, S. K.
1992-12-01
We formulate a differential calculus on the quantum exterior vector space spanned by the generators θ^i of a non-anticommutative algebra satisfying r^{ij} ≡ θ^i θ^j + B^{ij}_{kl} θ^k θ^l = 0, i, j = 1, 2, ..., n, and (θ^1)^2 = (θ^2)^2 = ... = (θ^n)^2 = 0, where B^{ij}_{kl} is the most general matrix defined in terms of complex deformation parameters. Following considerations analogous to those of Wess and Zumino, we are able to exhibit covariance of our calculus under an (n/2)+1 parameter deformation of GL(n) and explicitly check that the non-anticommutative differential calculus satisfies the general constraints given by them, such as the "linear" condition dr^{ij} ≃ 0 and the "quadratic" condition r^{ij} x^n ≃ 0, where x^n = dλ^n are the differentials of the variables.
Thermal NDT applying Candid Covariance-Free Incremental Principal Component Thermography (CCIPCT)
Yousefi, Bardia; Sfarra, Stefano; Ibarra Castanedo, Clemente; Maldague, Xavier P. V.
2017-05-01
Thermal and infrared imagery have led to considerable developments in the Non-Destructive Testing (NDT) area. We address thermal NDT inspection with a new technique for computing an eigen-decomposition (factor analysis) similar to Principal Component Thermography (PCT), referred to as Candid Covariance-Free Incremental Principal Component Thermography (CCIPCT). The proposed approach uses a computational shortcut to estimate the covariance matrix and its Singular Value Decomposition (SVD), yielding PCT-like results faster even as the dimension of the data increases. The computational cost of high-dimensional thermal image acquisition is also investigated. Three types of specimens (CFRP, plexiglass and aluminum) were used for comparative benchmarking, and a clustering algorithm then segments the defects at the surface of the specimens. The results indicate promising performance and confirm the outlined properties.
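A minimal sketch of the underlying PCT-style decomposition, assuming synthetic frame data and skipping CCIPCT's incremental update: the left singular vectors of the mean-centered (pixels × time) matrix are the component thermograms, obtained without ever forming the spatial covariance explicitly.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical thermal sequence: 100 frames of a 16x16 pixel grid,
# with a slow global heating trend plus noise (invented data).
n_frames, h, w = 100, 16, 16
frames = rng.normal(size=(n_frames, h, w)) \
    + np.linspace(0, 1, n_frames)[:, None, None]

# Flatten to a (pixels x time) matrix and remove the mean response.
X = frames.reshape(n_frames, h * w).T          # shape (256, 100)
X = X - X.mean(axis=1, keepdims=True)

# Covariance-free shortcut: SVD of X directly; the left singular
# vectors are the principal component thermograms (EOFs), identical
# to the eigenvectors of the 256x256 spatial covariance.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 5
eofs = U[:, :k].T.reshape(k, h, w)             # first 5 component images
```

Because the SVD is taken on the (pixels × time) matrix, the cost scales with the number of frames rather than the square of the number of pixels.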
Assessment of the Gaussian Covariance Approximation over an Earth-Asteroid Encounter Period
Mattern, Daniel W.
2017-01-01
In assessing the risk an asteroid may pose to the Earth, the asteroid's state is often predicted for many years, often decades. Only by accounting for the asteroid's initial state uncertainty can a measure of the risk be calculated. With the state uncertainty growing as a function of the initial velocity uncertainty, the orbit velocity at the last state update, and the time from the last update to the epoch of interest, the asteroid's position uncertainties can grow to many times the size of the Earth when propagated to the encounter risk corridor. This paper examines the merits of propagating the asteroid's state covariance as an analytical matrix. The results of this study help to bound the efficacy of applying different metrics for assessing the risk an asteroid poses to the Earth. Additionally, this work identifies a criterion for when different covariance propagation methods are needed to continue predictions after an Earth-encounter period.
Covariant Hyperbolization of Force-free Electrodynamics
Carrasco, Federico
2016-01-01
Force-Free Electrodynamics (FFE) is a non-linear system of equations modeling the evolution of the electromagnetic field in the presence of a magnetically dominated relativistic plasma. This configuration arises in several astrophysical scenarios, which represent exciting laboratories for understanding physics in extreme regimes. We show that this system, when restricted to the correct constraint submanifold, is symmetric hyperbolic. In numerical applications it is not feasible to keep the system on that submanifold, and so it is necessary to analyze its structure first in the tangent space of that submanifold and then in a whole neighborhood of it. As already shown by Pfeiffer, a direct (or naive) formulation of this system (in the whole tangent space) results in a weakly hyperbolic system of evolution equations, for which well-posedness of the initial value formulation does not follow. Using the generalized symmetric hyperbolic formalism due to Geroch, we introduce here a covariant hyperbolization for the FFE s...
Covariant perturbations in the gonihedric string model
Rojas, Efraín
2017-11-01
We provide a covariant framework for classically studying the stability of small perturbations of the so-called gonihedric string model, making precise use of variational techniques. The local action depends on the square root of the quadratic mean extrinsic curvature of the worldsheet swept out by the string, and is reparametrization invariant. A general expression for the worldsheet perturbations, guided by Jacobi equations without any early gauge fixing, is obtained. This takes the form of a set of highly coupled nonlinear partial differential equations in which the perturbations are described by scalar fields, Φi, living on the worldsheet. This model contains, as a special limit, the model linear in the mean extrinsic curvature. In that case the Jacobi equations specialize to a single wave-like equation for Φ.
EMPIRE ULTIMATE EXPANSION: RESONANCES AND COVARIANCES.
Energy Technology Data Exchange (ETDEWEB)
HERMAN, M.; MUGHABGHAB, S.F.; OBLOZINSKY, P.; ROCHMAN, D.; PIGNI, M.T.; KAWANO, T.; CAPOTE, R.; ZERKIN, V.; TRKOV, A.; SIN, M.; CARSON, B.V.; WIENKE, H.; CHO, Y.-S.
2007-04-22
The EMPIRE code system is being extended to cover the resolved and unresolved resonance region, employing proven methodology used for the production of new evaluations in the recent Atlas of Neutron Resonances. Other directions of EMPIRE expansion are uncertainties and the correlations among them. These include covariances for cross sections as well as for model parameters. In this presentation we concentrate on the KALMAN method, which has been applied in EMPIRE to the fast neutron range as well as to the resonance region. We also summarize the role of the EMPIRE code in the ENDF/B-VII.0 development. Finally, large scale calculations and their impact on nuclear model parameters are discussed, along with the exciting perspectives offered by parallel supercomputing.
Covariant non-commutative space–time
Directory of Open Access Journals (Sweden)
Jonathan J. Heckman
2015-05-01
We introduce a covariant non-commutative deformation of 3+1-dimensional conformal field theory. The deformation introduces a short-distance scale ℓp, and thus breaks scale invariance, but preserves all space–time isometries. The non-commutative algebra is defined on space–times with non-zero constant curvature, i.e. dS4 or AdS4. The construction makes essential use of the representation of CFT tensor operators as polynomials in an auxiliary polarization tensor. The polarization tensor takes an active part in the non-commutative algebra, which for dS4 takes the form of so(5,1), while for AdS4 it assembles into so(4,2). The structure of the non-commutative correlation functions hints that the deformed theory contains gravitational interactions and a Regge-like trajectory of higher spin excitations.
A covariant approach to entropic dynamics
Ipek, Selman; Abedi, Mohammad; Caticha, Ariel
2017-06-01
Entropic Dynamics (ED) is a framework for constructing dynamical theories of inference using the tools of inductive reasoning. A central feature of the ED framework is the special focus placed on time. In [2] a global entropic time was used to derive a quantum theory of relativistic scalar fields. This theory, however, suffered from a lack of explicit or manifest Lorentz symmetry. In this paper we explore an alternative formulation in which the relativistic aspects of the theory are manifest. The approach we pursue here is inspired by the methods of Dirac, Kuchař, and Teitelboim in their development of covariant Hamiltonian approaches. The key ingredient here is the adoption of a local notion of entropic time, which allows compatibility with an arbitrary notion of simultaneity. However, in order to ensure that the evolution does not depend on the particular sequence of hypersurfaces, we must impose a set of constraints that guarantee a consistent evolution.
Spatial Pyramid Covariance based Compact Video Code for Robust Face Retrieval in TV-series.
Li, Yan; Wang, Ruiping; Cui, Zhen; Shan, Shiguang; Chen, Xilin
2016-10-10
We address the problem of face video retrieval in TV-series, which searches video clips based on the presence of a specific character, given one face track of that character. This is tremendously challenging because on one hand, faces in TV-series are captured in largely uncontrolled conditions with complex appearance variations, and on the other hand the retrieval task typically needs an efficient representation with low time and space complexity. To handle this problem, we propose a compact and discriminative representation for the huge body of video data, named Compact Video Code (CVC). Our method first models the face track by its sample (i.e., frame) covariance matrix to capture the video data variations in a statistical manner. To incorporate discriminative information and obtain a more compact video signature suitable for retrieval, the high-dimensional covariance representation is further encoded as a much lower-dimensional binary vector, which finally yields the proposed CVC. Specifically, each bit of the code, i.e., each dimension of the binary vector, is produced via supervised learning in a max margin framework, which aims to strike a balance between the discriminability and stability of the code. Besides, we further extend the descriptive granularity of the covariance matrix from the traditional pixel level to the more general patch level, and proceed to propose a novel hierarchical video representation named Spatial Pyramid Covariance (SPC) along with a fast calculation method. Face retrieval experiments on two challenging TV-series video databases, i.e., the Big Bang Theory and Prison Break, demonstrate the competitiveness of the proposed CVC over state-of-the-art retrieval methods. In addition, as a general video matching algorithm, CVC is also evaluated in the traditional video face recognition task on a standard Internet database, i.e., YouTube Celebrities, showing quite promising performance with an extremely compact code of only 128 bits.
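The first step described above, representing a face track by its frame covariance matrix, can be sketched as follows. The feature dimension, track lengths, and the log-Euclidean comparison are illustrative choices; they stand in for, and are not, the paper's binary-code learning.

```python
import numpy as np

def track_covariance(frames, eps=1e-6):
    """Represent a face track (n_frames x d feature matrix) by its
    sample covariance, regularized to be positive definite."""
    C = np.cov(frames, rowvar=False)
    return C + eps * np.eye(C.shape[0])

def logm_spd(C):
    """Matrix logarithm of a symmetric positive-definite matrix."""
    w, V = np.linalg.eigh(C)
    return (V * np.log(w)) @ V.T

def log_euclidean_dist(C1, C2):
    """A common way to compare covariance descriptors on the SPD
    manifold (illustrative stand-in for the learned binary code)."""
    return np.linalg.norm(logm_spd(C1) - logm_spd(C2), 'fro')

rng = np.random.default_rng(3)
track_a = rng.normal(size=(40, 8))        # two hypothetical face tracks
track_b = 2.0 * rng.normal(size=(60, 8))  # with 8-dim frame features
Ca, Cb = track_covariance(track_a), track_covariance(track_b)
d_ab = log_euclidean_dist(Ca, Cb)
d_aa = log_euclidean_dist(Ca, Ca)
```

Note the descriptor is fixed-size (d × d) regardless of track length, which is what makes it attractive for retrieval over tracks of varying duration.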
Ex post facto authorisation in South African environmental ...
African Journals Online (AJOL)
Environmental impact assessment (EIA) is therefore a systematic and integrative process for considering possible impacts prior to a decision being taken on whether or not a proposal should be given approval to proceed. This article argues that the current legislative basis for environmental assessment in South Africa, ...
Ex-post Liability Rules in Modern Patent Law
R.J.F.M. Castro Bernieri (Rosa)
2010-01-01
textabstractThis book examines alternative ways of protecting patent rights using the law and economics framework of property and liability rules. Traditional compulsory licenses are compared with the most recent discussions on the choice between granting or denying injunctive relief
Enforcing Margin Squeeze Ex Post Across Converging Telecommunications Markets
DEFF Research Database (Denmark)
Bergqvist, Christian; Townsend, John
, delay profitability or limit their ability to remain or expand on markets. However, traditional market definitions are being challenged by (1) the technological convergence of services and (2) innovative product offerings taking advantage of this convergence. Consumers now routinely purchase a bundle...... and innovation present both theoretical and practical difficulties for assessing “muddled margins” on telecoms markets. New and different enforcement approaches to exclusion will have to be formulated within the Article 102 framework and tested in the Courts. This may even require abstaining from applying...
Niger; Ex Post Assessment of Longer-Term Program Engagement
International Monetary Fund
2011-01-01
IMF engagement with Niger since 2005 has remained constructive. IMF-supported programs have contributed to the authorities’ goals of macroeconomic stability, growth, and human development progress. Development of Niger’s uranium and petroleum resources provides an important opportunity to raise the living standards of Niger’s citizens. Institutional reforms aimed at enhancing the efficient use of resource revenues and transparency of public finances will remain critical to maximize benefits a...
Performance Analysis of Tyler's Covariance Estimator
Soloveychik, Ilya; Wiesel, Ami
2015-01-01
This paper analyzes the performance of Tyler's M-estimator of the scatter matrix in elliptical populations. We focus on the non-asymptotic setting and derive the estimation error bounds depending on the number of samples n and the dimension p. We show that under quite mild conditions the squared Frobenius norm of the error of the inverse estimator decays like p^2/n with high probability.
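Tyler's estimator is defined by a fixed-point equation, and a standard iteration for it can be sketched as follows. The elliptical test data, seed, and tolerances are invented; this is not the paper's analysis code.

```python
import numpy as np

def tyler_estimator(X, n_iter=100, tol=1e-8):
    """Fixed-point iteration for Tyler's M-estimator of scatter.
    X: (n, p) centered samples. Returns a (p, p) scatter matrix
    normalized to trace p."""
    n, p = X.shape
    S = np.eye(p)
    for _ in range(n_iter):
        Sinv = np.linalg.inv(S)
        # Weight each sample by p over its squared Mahalanobis norm,
        # which makes the estimator invariant to per-sample scaling.
        w = p / np.einsum('ij,jk,ik->i', X, Sinv, X)
        S_new = (X * w[:, None]).T @ X / n
        S_new *= p / np.trace(S_new)
        if np.linalg.norm(S_new - S, 'fro') < tol:
            S = S_new
            break
        S = S_new
    return S

rng = np.random.default_rng(4)
true_scatter = np.array([[2.0, 0.5], [0.5, 1.0]])
L = np.linalg.cholesky(true_scatter)
# Heavy-tailed elliptical samples: Gaussians scaled by random radii.
Z = rng.normal(size=(2000, 2)) @ L.T
X = Z * (rng.chisquare(3, size=2000) ** -0.5)[:, None]
S_hat = tyler_estimator(X)
```

With n = 2000 and p = 2, the error bound quoted in the abstract (squared Frobenius error decaying like p²/n) suggests S_hat should be close to the trace-normalized true scatter, which the sketch reproduces.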
Kawano, T
2003-01-01
Evaluation of covariances for resolved resonance parameters of ²³⁵U, ²³⁸U, and ²³⁹Pu was carried out. Although a large number of resolved resonances are observed for major actinides, uncertainties in averaged cross sections are more important than those in resonance parameters in reactor calculations. We developed a simple method which derives a covariance matrix for the resolved resonance parameters from uncertainties in the averaged cross sections. The method was adopted to evaluate the covariance data for some important actinides, and the results were compiled in the JENDL-3.2 covariance file.
Estimating the background covariance error for the Global Data Assimilation System of CPTEC/INPE
Bastarz, C. F.; Goncalves, L.
2013-05-01
The global data assimilation system at CPTEC/INPE, named G3Dvar, is based on the Gridpoint Statistical Interpolation (GSI/NCEP/GMAO) and on the general circulation model from that same center (GCM/CPTEC/INPE). G3Dvar is a three-dimensional variational data assimilation system that uses a fixed Background Error Covariance Matrix (BE); in its current implementation, it uses the matrix from the Global Forecast System (GFS/NCEP). The goal of this work is to present preliminary results of the calculation of a new BE based on the GCM/CPTEC/INPE, using a methodology similar to the one used for GSI/WRFDA, called gen_be. The calculation is done in 5 distinct steps in the analysis increment space: (a) stream function and velocity potential are determined from the wind fields; (b) the means of the stream function and velocity potential are calculated in order to obtain the perturbation fields for the remaining variables (stream function, velocity potential, temperature, relative humidity and surface pressure); (c) the covariances of the perturbation fields, the regression coefficients, and the balance between stream function, temperature and surface pressure are estimated. For this particular system, i.e. GCM/CPTEC/INPE, the necessity of constraints enforcing statistical balance between stream function and velocity potential, temperature and surface pressure will be evaluated, as well as how this affects the BE matrix calculation. Hence, this work investigates the procedures necessary for calculating BE, shows how they differ from the standard calculation, and describes how the result is calibrated/adjusted for the GCM/CPTEC/INPE. Results from a comparison of the main differences between the GFS BE and the newly calculated GCM/CPTEC/INPE BE are discussed, in addition to an impact study using the different background error covariance matrices.
AFCI-2.0 Library of Neutron Cross Section Covariances
Energy Technology Data Exchange (ETDEWEB)
Herman, M.; Herman,M.; Oblozinsky,P.; Mattoon,C.; Pigni,M.; Hoblit,S.; Mughabghab,S.F.; Sonzogni,A.; Talou,P.; Chadwick,M.B.; Hale.G.M.; Kahler,A.C.; Kawano,T.; Little,R.C.; Young,P.G.
2011-06-26
A neutron cross section covariance library has been under development through a BNL-LANL collaborative effort over the last three years. The primary purpose of the library is to provide covariances for the Advanced Fuel Cycle Initiative (AFCI) data adjustment project, which is focusing on the needs of fast advanced burner reactors. The covariances refer to central values given in the 2006 release of the U.S. neutron evaluated library ENDF/B-VII. The preliminary version (AFCI-2.0beta) was completed in October 2010 and made available to users for comments. In the final 2.0 release, covariances for a few materials were updated; in particular, new LANL evaluations for ²³⁸,²⁴⁰Pu and ²⁴¹Am were adopted. BNL was responsible for covariances for structural materials and fission products, management of the library and coordination of the work, while LANL was in charge of covariances for light nuclei and for actinides.
Energy Technology Data Exchange (ETDEWEB)
Alfred Stadler, Franz Gross
2010-10-01
We provide a short overview of the Covariant Spectator Theory and its applications. The basic ideas are introduced through the example of a φ⁴-type theory. High-precision models of the two-nucleon interaction are presented and the results of their use in calculations of properties of the two- and three-nucleon systems are discussed. A short summary of applications of this framework to other few-body systems is also presented.
COVARIANCE ESTIMATION USING CONJUGATE GRADIENT FOR 3D CLASSIFICATION IN CRYO-EM.
Andén, Joakim; Katsevich, Eugene; Singer, Amit
2015-04-01
Classifying structural variability in noisy projections of biological macromolecules is a central problem in Cryo-EM. In this work, we build on a previous method for estimating the covariance matrix of the three-dimensional structure present in the molecules being imaged. Our proposed method allows for incorporation of contrast transfer function and non-uniform distribution of viewing angles, making it more suitable for real-world data. We evaluate its performance on a synthetic dataset and an experimental dataset obtained by imaging a 70S ribosome complex.
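The conjugate gradient ingredient named in the title can be sketched generically: CG solves a symmetric positive definite system given only matrix-vector products, which is what makes it usable when the covariance normal equations are too large to form explicitly. The test matrix below is synthetic, not a cryo-EM operator.

```python
import numpy as np

def conjugate_gradient(apply_A, b, n_iter=200, tol=1e-10):
    """Textbook conjugate gradient for A x = b, with A symmetric
    positive definite and available only through apply_A(v) = A v."""
    x = np.zeros_like(b)
    r = b - apply_A(x)
    p = r.copy()
    rs = r @ r
    for _ in range(n_iter):
        Ap = apply_A(p)
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

rng = np.random.default_rng(5)
M = rng.normal(size=(30, 30))
A = M @ M.T + 30 * np.eye(30)     # synthetic SPD system
b = rng.normal(size=30)
x = conjugate_gradient(lambda v: A @ v, b)
```

In the covariance estimation setting, `apply_A` would encapsulate projection, CTF, and back-projection operators applied to a candidate covariance, so the full system matrix never needs to be stored.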
Selection and evolution of causally covarying traits.
Morrissey, Michael B
2014-06-01
When traits cause variation in fitness, the distribution of phenotype, weighted by fitness, necessarily changes. The degree to which traits cause fitness variation is therefore of central importance to evolutionary biology. Multivariate selection gradients are the main quantity used to describe components of trait-fitness covariation, but they quantify the direct effects of traits on (relative) fitness, which are not necessarily the total effects of traits on fitness. Despite considerable use in evolutionary ecology, path analytic characterizations of the total effects of traits on fitness have not been formally incorporated into quantitative genetic theory. By formally defining "extended" selection gradients, which are the total effects of traits on fitness, as opposed to the existing definition of selection gradients, a more intuitive scheme for characterizing selection is obtained. Extended selection gradients are distinct quantities, differing from the standard definition of selection gradients not only in the statistical means by which they may be assessed and the assumptions required for their estimation from observational data, but also in their fundamental biological meaning. Like direct selection gradients, extended selection gradients can be combined with genetic inference of multivariate phenotypic variation to provide quantitative prediction of microevolutionary trajectories.
Optimum allocation in multivariate stratified random sampling: Stochastic matrix optimisation
Diaz-Garcia, Jose A.; Ramos-Quiroga, Rogelio
2011-01-01
The allocation problem for multivariate stratified random sampling as a problem of stochastic matrix integer mathematical programming is considered. With these aims the asymptotic normality of sample covariance matrices for each strata is established. Some alternative approaches are suggested for its solution. An example is solved by applying the proposed techniques.
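For orientation, the classical univariate Neyman allocation, which the multivariate stochastic-matrix formulation generalizes, can be sketched as follows. The stratum sizes, standard deviations, and total sample size are invented.

```python
import numpy as np

# Neyman allocation: allocate a total sample n across strata in
# proportion to N_h * S_h (stratum size times stratum std deviation).
N = np.array([4000, 2500, 1500])     # hypothetical stratum sizes
S = np.array([12.0, 7.0, 20.0])      # hypothetical stratum std devs
n_total = 400

weights = N * S
n_h = n_total * weights / weights.sum()

# Rounding down gives a feasible integer allocation; the paper treats
# the harder problem of integer allocation when the covariance
# matrices themselves are random estimates.
n_h_int = np.floor(n_h).astype(int)
```

The multivariate version replaces the scalar S_h by estimated stratum covariance matrices, which is what turns the problem into stochastic matrix integer programming.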
Deducing Cosmological Observables from the S-matrix
Miao, S. P.|info:eu-repo/dai/nl/314122389; Prokopec, T.|info:eu-repo/dai/nl/326113398; Woodard, R.P.
2017-01-01
We study one loop quantum gravitational corrections to the long range force induced by the exchange of a massless scalar between two massive scalars. The various diagrams contributing to the flat space S-matrix are evaluated in a general covariant gauge and we show that dependence on the gauge
DEFF Research Database (Denmark)
Hounyo, Ulrich
to a general class of estimators of integrated covolatility. We then show the first-order asymptotic validity of this method in the multivariate context with a potential presence of jumps, dependent microstructure noise, irregularly spaced and non-synchronous data. Due to our focus on non...... the finite sample properties of the existing first-order asymptotic theory. We illustrate its practical use on high-frequency equity data...
Estimation of the Mean of a Normal Distribution with Singular Covariance Matrix.
1978-11-01
Improving the ensemble optimization method through covariance matrix adaptation (CMA-EnOpt)
Fonseca, R.M.; Leeuwenburgh, O.; Hof, P.M.J. van den; Jansen, J.D.
2013-01-01
Ensemble Optimization (EnOpt) is a rapidly emerging method for reservoir model based production optimization. EnOpt uses an ensemble of controls to approximate the gradient of the objective function with respect to the controls. Current implementations of EnOpt use a Gaussian ensemble with a
Evolution Strategies with Optimal Covariance Matrix Update Applied to Sustainable Wave Energy
DEFF Research Database (Denmark)
Rodríguez Arbonès, Dídac
Modern society depends heavily on fossil fuels. We rely on this source of energy for everything, from food and clothing production to daily transportation. Even the Internet is mostly powered by these sources of energy. This reliance has led us to a high-risk situation where all that we take...
Statistical meaning of the differential Mueller matrix of depolarizing homogeneous media.
Ossikovski, Razvigor; Arteaga, Oriol
2014-08-01
By applying the statistical definition of a depolarizing Mueller matrix, we formally derive and physically interpret the differential matrix of a depolarizing homogeneous medium. Since the depolarization phenomenon is a direct consequence of the fluctuations of the six elementary polarization properties of the medium, the differential matrix contains the mean values and the variances of the properties, thus fully describing them from a statistical viewpoint. Similarly, the reduced coherency matrix associated with the G-symmetric component of the differential matrix has an immediate physical interpretation as the covariance matrix of the three basic groups of polarization properties. The formal developments are illustrated with experimental examples.
Comparative test on several forms of background error covariance in 3DVar
Shao, Aimei
2013-04-01
The background error covariance matrix (hereinafter referred to as the B matrix) plays an important role in the three-dimensional variational (3DVar) data assimilation method. However, it is difficult to obtain the B matrix accurately because the true atmospheric state is unknown. Therefore, several methods have been developed to estimate it (e.g. the NMC method, the innovation analysis method, recursive filters, and ensemble methods such as EnKF). Prior to further development and application of these methods, the behavior in 3DVar of the B matrices they produce is worth studying and evaluating. For this reason, NCEP reanalysis data and forecast data are used to test the effectiveness of several B matrices with the VAF method (Huang, 1999). Here the NCEP analysis is treated as the truth, in which case the forecast error is known. Data from 2006 to 2007 are used as the samples to estimate the B matrix, and data from 2008 are used to verify the assimilation effects. The 48 h and 24 h forecasts valid at the same time are used to estimate the B matrix with the NMC method. The B matrix can be represented by a correlation part (a non-diagonal matrix) and a variance part (a diagonal matrix of variances). A Gaussian filter function is used in numerous 3DVar systems as an approximation to the variation of correlation coefficients with distance. On the basis of this assumption, the following forms of the B matrix are designed and tested with VAF in comparative experiments: (1) the error variances and characteristic lengths are fixed, set to their mean values averaged over the analysis domain; (2) as in (1), but the mean characteristic lengths are reduced to 50 percent (for height) and 60 percent (for temperature) of the original values; (3) as in (2), but the error variance, calculated directly from the historical data, is space-dependent; (4) the error variances and characteristic lengths are all calculated directly from the historical data; (5) the B matrix is estimated directly by the
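The NMC step mentioned above can be sketched with invented forecast data: B is estimated from the sample covariance of 48 h minus 24 h forecast differences, then split into its variance and correlation parts, with a Gaussian correlation model as the approximation being tested. State size, error magnitudes, and the length scale are all made up.

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical archive of 48 h and 24 h forecasts valid at the same
# times, on a state vector of 50 grid points.
n_pairs, n_state = 365, 50
truth = rng.normal(size=(n_pairs, n_state))
f24 = truth + 0.5 * rng.normal(size=(n_pairs, n_state))
f48 = truth + 0.8 * rng.normal(size=(n_pairs, n_state))

# NMC method: forecast differences serve as proxy background errors.
d = f48 - f24
d = d - d.mean(axis=0)
B = d.T @ d / (n_pairs - 1)

# Split B into a variance part (diagonal) and a correlation part.
sigma = np.sqrt(np.diag(B))
corr = B / np.outer(sigma, sigma)

# Gaussian correlation model with characteristic length L_len
# (in grid units), the kind of approximation tested above.
L_len = 5.0
dist = np.abs(np.subtract.outer(np.arange(n_state), np.arange(n_state)))
corr_model = np.exp(-0.5 * (dist / L_len) ** 2)
```

Comparing `corr` against `corr_model` for different length scales mirrors experiments (1)-(2), while replacing the fixed `sigma` by spatially varying values mirrors experiments (3)-(4).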
Recurrence Analysis of Eddy Covariance Fluxes
Lange, Holger; Flach, Milan; Foken, Thomas; Hauhs, Michael
2015-04-01
The eddy covariance (EC) method is one key method to quantify fluxes in biogeochemical cycles in general, and carbon and energy transport across the vegetation-atmosphere boundary layer in particular. EC data from the worldwide net of flux towers (Fluxnet) have also been used to validate biogeochemical models. The high resolution data are usually obtained at 20 Hz sampling rate but are affected by missing values and other restrictions. In this contribution, we investigate the nonlinear dynamics of EC fluxes using Recurrence Analysis (RA). High resolution data from the site DE-Bay (Waldstein-Weidenbrunnen) and fluxes calculated at half-hourly resolution from eight locations (part of the La Thuile dataset) provide a set of very long time series to analyze. After careful quality assessment and Fluxnet-standard gapfilling pretreatment, we calculate properties and indicators of the recurrent structure based both on Recurrence Plots and on Recurrence Networks. Time series of RA measures obtained from windows moving along the time axis are presented. Their interpretation is guided by five questions: (1) Is RA able to discern periods where the (atmospheric) conditions are particularly suitable for obtaining reliable EC fluxes? (2) Is RA capable of detecting dynamical transitions (different behavior) beyond those obvious from visual inspection? (3) Does RA contribute to an understanding of the nonlinear synchronization between EC fluxes and atmospheric parameters, which is crucial both for improving carbon flux models and for reliable interpolation of gaps? (4) Is RA able to recommend an optimal time resolution for measuring EC data and for analyzing EC fluxes? (5) Is it possible to detect non-trivial periodicities with a global RA? We will demonstrate that the answer to all five questions is affirmative, and that RA provides insights into EC dynamics not easily obtained otherwise.
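A minimal recurrence-plot computation, the object underlying the RA measures above, can be sketched on a toy flux series. The signal, threshold, and the recurrence-rate measure are illustrative, not the study's pipeline.

```python
import numpy as np

def recurrence_matrix(x, eps):
    """Binary recurrence plot of a scalar series: R[i, j] = 1 when
    the states at times i and j are closer than eps."""
    D = np.abs(x[:, None] - x[None, :])
    return (D < eps).astype(int)

def recurrence_rate(R):
    """Fraction of recurrent pairs -- one of the simplest RA measures."""
    return R.mean()

# Toy half-hourly "flux" series: a daily cycle plus noise, 30 days
# at 48 samples per day (invented data).
rng = np.random.default_rng(7)
t = np.arange(48 * 30)
flux = np.sin(2 * np.pi * t / 48) + 0.1 * rng.normal(size=t.size)
R = recurrence_matrix(flux, eps=0.2)
rr = recurrence_rate(R)
```

Windowed RA, as used in the study, amounts to evaluating such measures on successive segments of the series; periodicities show up as diagonal line structures in R.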
Empirical Performance of Covariates in Education Observational Studies
Wong, Vivian C.; Valentine, Jeffrey C.; Miller-Bains, Kate
2017-01-01
This article summarizes results from 12 empirical evaluations of observational methods in education contexts. We look at the performance of three common covariate-types in observational studies where the outcome is a standardized reading or math test. They are: pretest measures, local geographic matching, and rich covariate sets with a strong…
Electron localization functions and local measures of the covariance
Indian Academy of Sciences (India)
The electron localization measure proposed by Becke and Edgecombe is shown to be related to the covariance of the electron pair distribution. Just as with the electron localization function, the local covariance does not seem to be, in and of itself, a useful quantity for elucidating shell structure. A function of the local ...
Using transformation algorithms to estimate (co)variance ...
African Journals Online (AJOL)
... to multiple traits by the use of canonical transformations. A computing strategy is developed for use on large data sets employing two different REML algorithms for the estimation of (co)variance components. Results from a simulation study indicate that (co)variance components can be estimated efficiently at a low cost on ...
Validity of covariance models for the analysis of geographical variation
DEFF Research Database (Denmark)
Guillot, Gilles; Schilling, Rene L.; Porcu, Emilio
2014-01-01
1. Due to the availability of large molecular data-sets, covariance models are increasingly used to describe the structure of genetic variation as an alternative to more heavily parametrised biological models. 2. We focus here on a class of parametric covariance models that received sustained...
Propensity score matching and unmeasured covariate imbalance: A simulation study
Ali, M. Sanni|info:eu-repo/dai/nl/345709497; Groenwold, Rolf H.H.; Belitser, Svetlana V.; Hoes, Arno W.; De Boer, A.|info:eu-repo/dai/nl/075097346; Klungel, Olaf H.|info:eu-repo/dai/nl/181447649
2014-01-01
Background: Selecting covariates for adjustment or inclusion in propensity score (PS) analysis is a trade-off between reducing confounding bias and a risk of amplifying residual bias by unmeasured confounders. Objectives: To assess the covariate balancing properties of PS matching with respect to
Considering Horn’s Parallel Analysis from a Random Matrix Theory Point of View
Saccenti, Edoardo; Timmerman, Marieke E.
2017-01-01
Horn’s parallel analysis is a widely used method for assessing the number of principal components and common factors. We discuss the theoretical foundations of parallel analysis for principal components based on a covariance matrix by making use of arguments from random matrix theory. In particular,
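A minimal sketch of Horn's parallel analysis for principal components on a covariance matrix (toy data; the number of simulations and the 95% quantile are illustrative choices, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def parallel_analysis(X, n_sim=200, quantile=0.95):
    """Retain components whose sample-covariance eigenvalues exceed the
    chosen quantile of eigenvalues obtained from uncorrelated data."""
    n, p = X.shape
    obs = np.linalg.eigvalsh(np.cov(X, rowvar=False))[::-1]   # descending
    scale = X.std(axis=0, ddof=1)
    sims = np.empty((n_sim, p))
    for i in range(n_sim):
        Z = rng.standard_normal((n, p)) * scale               # no correlation
        sims[i] = np.linalg.eigvalsh(np.cov(Z, rowvar=False))[::-1]
    thresh = np.quantile(sims, quantile, axis=0)
    return int(np.sum(obs > thresh))

# Two correlated 3-variable blocks buried in noise: 2 real components.
n = 500
f1, f2 = rng.standard_normal((2, n))
X = np.column_stack([f1, f1, f1, f2, f2, f2]) + 0.3 * rng.standard_normal((n, 6))
k = parallel_analysis(X)
```

The random-matrix-theory viewpoint discussed in the paper concerns exactly the null eigenvalue distribution that the simulated `thresh` approximates here.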
Gaussian covariance graph models accounting for correlated marker effects in genome-wide prediction.
Martínez, C A; Khare, K; Rahman, S; Elzo, M A
2017-10-01
Several statistical models used in genome-wide prediction assume uncorrelated marker allele substitution effects, but it is known that these effects may be correlated. In statistics, graphical models have been identified as a useful tool for covariance estimation in high-dimensional problems, an area that has recently expanded greatly. In Gaussian covariance graph models (GCovGM), the joint distribution of a set of random variables is assumed to be Gaussian and the pattern of zeros of the covariance matrix is encoded in terms of an undirected graph G. In this study, methods adapting the theory of GCovGM to genome-wide prediction were developed (Bayes GCov, Bayes GCov-KR and Bayes GCov-H). In simulated data sets, improvements in correlation between phenotypes and predicted breeding values and accuracies of predicted breeding values were found. Our models account for correlation of marker effects and can accommodate general covariance structures, as opposed to models proposed in previous studies, which consider spatial correlation only. In addition, they allow incorporation of biological information in the prediction process through its use when constructing graph G, and their extension to the multi-allelic loci case is straightforward. © 2017 Blackwell Verlag GmbH.
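The zero pattern that graph G encodes can be shown with a toy covariance matrix (an assumed 4-marker example, not from the study):

```python
import numpy as np

# Undirected graph G on 4 markers with edges (0,1) and (2,3) only. In a
# Gaussian covariance graph model a missing edge forces a zero marginal
# covariance (unlike concentration-graph models, which zero the inverse).
Sigma = np.array([[1.0, 0.6, 0.0, 0.0],
                  [0.6, 1.0, 0.0, 0.0],
                  [0.0, 0.0, 1.0, 0.4],
                  [0.0, 0.0, 0.4, 1.0]])
assert np.all(np.linalg.eigvalsh(Sigma) > 0)   # a valid covariance matrix

rng = np.random.default_rng(1)
effects = rng.multivariate_normal(np.zeros(4), Sigma, size=50_000)
S = np.cov(effects, rowvar=False)              # empirical covariance
```

The empirical covariance `S` reproduces the edge strengths and the structural zeros up to sampling noise.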
An Adaptive Approach to Mitigate Background Covariance Limitations in the Ensemble Kalman Filter
Song, Hajoon
2010-07-01
A new approach is proposed to address the background covariance limitations arising from undersampled ensembles and unaccounted model errors in the ensemble Kalman filter (EnKF). The method enhances the representativeness of the EnKF ensemble by augmenting it with new members chosen adaptively to add missing information that prevents the EnKF from fully fitting the data to the ensemble. The vectors to be added are obtained by back projecting the residuals of the observation misfits from the EnKF analysis step onto the state space. The back projection is done using an optimal interpolation (OI) scheme based on an estimated covariance of the subspace missing from the ensemble. In the experiments reported here, the OI uses a preselected stationary background covariance matrix, as in the hybrid EnKF–three-dimensional variational data assimilation (3DVAR) approach, but the resulting correction is included as a new ensemble member instead of being added to all existing ensemble members. The adaptive approach is tested with the Lorenz-96 model. The hybrid EnKF–3DVAR is used as a benchmark to evaluate the performance of the adaptive approach. Assimilation experiments suggest that the new adaptive scheme significantly improves the EnKF behavior when it suffers from small size ensembles and neglected model errors. It was further found to be competitive with the hybrid EnKF–3DVAR approach, depending on ensemble size and data coverage.
Newton law in covariant unimodular $F(R)$ gravity
Nojiri, S; Oikonomou, V K
2016-01-01
We propose a covariant ghost-free unimodular $F(R)$ gravity theory, which contains a three-form field and study its structure using the analogy of the proposed theory with a quantum system which describes a charged particle in uniform magnetic field. Newton's law in non-covariant unimodular $F(R)$ gravity as well as in unimodular Einstein gravity is derived and it is shown to be just the same as in General Relativity. The derivation of Newton's law in covariant unimodular $F(R)$ gravity shows that it is modified precisely in the same way as in the ordinary $F(R)$ theory. We also demonstrate that the cosmology of a Friedmann-Robertson-Walker background, is equivalent in the non-covariant and covariant formulations of unimodular $F(R)$ theory.
Parametric Covariance Model for Horizon-Based Optical Navigation
Hikes, Jacob; Liounis, Andrew J.; Christian, John A.
2016-01-01
This Note presents an entirely parametric version of the covariance for horizon-based optical navigation measurements. The covariance can be written as a function of only the spacecraft position, two sensor design parameters, the illumination direction, the size of the observed planet, the size of the lit arc to be used, and the total number of observed horizon points. As a result, one may now more clearly understand the sensitivity of horizon-based optical navigation performance as a function of these key design parameters, which is insight that was obscured in previous (and nonparametric) versions of the covariance. Finally, the new parametric covariance is shown to agree with both the nonparametric analytic covariance and results from a Monte Carlo analysis.
Graphing survival curve estimates for time-dependent covariates.
Schultz, Lonni R; Peterson, Edward L; Breslau, Naomi
2002-01-01
Graphical representation of statistical results is often used to assist readers in the interpretation of the findings. This is especially true for survival analysis, where there is an interest in explaining the patterns of survival over time for specific covariates. For fixed categorical covariates, such as a group membership indicator, Kaplan-Meier estimates (1958) can be used to display the curves. For time-dependent covariates this method may not be adequate. Simon and Makuch (1984) proposed a technique that evaluates the covariate status of the individuals remaining at risk at each event time. The method takes into account the change in an individual's covariate status over time. The survival computations are the same as the Kaplan-Meier method, in that the conditional survival estimates are a function of the ratio of the number of events to the number at risk at each event time. The difference between the two methods is that the individuals at risk within each level defined by the covariate are not fixed at time 0 in the Simon and Makuch method as they are with the Kaplan-Meier method. Examples of how the two methods can differ for time-dependent covariates in Cox proportional hazards regression analysis are presented.
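The computation shared by both methods, conditional survival as the ratio of events to numbers at risk at each event time, can be sketched as a plain Kaplan-Meier estimator for a fixed group (toy data; the Simon-Makuch variant would additionally re-evaluate group membership at each event time):

```python
import numpy as np

def kaplan_meier(times, events):
    """Kaplan-Meier estimate for a fixed group: at each event time the
    conditional survival factor is 1 - (events / number at risk)."""
    times = np.asarray(times, dtype=float)
    events = np.asarray(events, dtype=int)
    curve, s = [], 1.0
    for t in np.unique(times[events == 1]):          # sorted event times
        at_risk = np.sum(times >= t)                 # at risk just before t
        d = np.sum((times == t) & (events == 1))     # events at t
        s *= 1.0 - d / at_risk
        curve.append((t, s))
    return curve

# 6 subjects: events at t = 2, 4, 4; censored at t = 3, 5, 6
curve = kaplan_meier([2, 3, 4, 4, 5, 6], [1, 0, 1, 1, 0, 0])
```

Here the survival drops to 5/6 at t = 2 and to 5/12 at t = 4; censored subjects leave the risk set without triggering a drop.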
Graphical representation of covariant-contravariant modal formulae
Directory of Open Access Journals (Sweden)
Miguel Palomino
2011-08-01
Covariant-contravariant simulation is a combination of standard (covariant) simulation, its contravariant counterpart and bisimulation. We have previously studied its logical characterization by means of the covariant-contravariant modal logic. Moreover, we have investigated the relationships between this model and that of modal transition systems, where two kinds of transitions (the so-called may and must transitions) were combined in order to obtain a simple framework to express a notion of refinement over state-transition models. In a classic paper, Boudol and Larsen established a precise connection between the graphical approach, by means of modal transition systems, and the logical approach, based on Hennessy-Milner logic without negation, to system specification. They obtained a (graphical) representation theorem proving that a formula can be represented by a term if, and only if, it is consistent and prime. We show in this paper that the formulae from the covariant-contravariant modal logic that admit a "graphical" representation by means of processes, modulo the covariant-contravariant simulation preorder, are also the consistent and prime ones. In order to obtain the desired graphical representation result, we first restrict ourselves to the case of covariant-contravariant systems without bivariant actions. Bivariant actions can be incorporated later by means of an encoding that splits each bivariant action into its covariant and its contravariant parts.
Covariance of maximum likelihood evolutionary distances between sequences aligned pairwise
Directory of Open Access Journals (Sweden)
Dessimoz Christophe
2008-06-01
Background: The estimation of a distance between two biological sequences is a fundamental process in molecular evolution. It is usually performed by maximum likelihood (ML) on characters aligned either pairwise or jointly in a multiple sequence alignment (MSA). Estimators for the covariance of pairs from an MSA are known, but we are not aware of any solution for cases of pairs aligned independently. In large-scale analyses, it may be too costly to compute MSAs every time distances must be compared, and therefore a covariance estimator for distances estimated from pairs aligned independently is desirable. Knowledge of covariances improves any process that compares or combines distances, such as in generalized least-squares phylogenetic tree building, orthology inference, or lateral gene transfer detection. Results: In this paper, we introduce an estimator for the covariance of distances from sequences aligned pairwise. Its performance is analyzed through extensive Monte Carlo simulations, and compared to the well-known variance estimator of ML distances. Our covariance estimator can be used together with the ML variance estimator to form covariance matrices. Conclusion: The estimator performs similarly to the ML variance estimator. In particular, it shows no sign of bias when sequence divergence is below 150 PAM units (i.e. above ~29% expected sequence identity). Above that distance, the covariances tend to be underestimated, but then ML variances are also underestimated.
Parsimonious covariate selection for a multicategory ordered response.
Hsu, Wan-Hsiang; DiRienzo, A Gregory
2017-12-01
We propose a flexible continuation ratio (CR) model for an ordinal categorical response with potentially ultrahigh dimensional data that characterizes the unique covariate effects at each response level. The CR model is the logit of the conditional discrete hazard function for each response level given covariates. We propose two modeling strategies, one that keeps the same covariate set for each hazard function but allows regression coefficients to arbitrarily change with response level, and one that allows both the set of covariates and their regression coefficients to arbitrarily change with response. Evaluating a covariate set is accomplished by using the nonparametric bootstrap to estimate prediction error and their robust standard errors that do not rely on proper model specification. To help with interpretation of the selected covariate set, we flexibly estimate the conditional cumulative distribution function given the covariates using the separate hazard function models. The goodness-of-fit of our flexible CR model is assessed with graphical and numerical methods based on the cumulative sum of residuals. Simulation results indicate the methods perform well in finite samples. An application to B-cell acute lymphocytic leukemia data is provided.
Covariate-adjusted measures of discrimination for survival data
DEFF Research Database (Denmark)
White, Ian R; Rapsomaniki, Eleni; Frikke-Schmidt, Ruth
2015-01-01
statistics in censored survival data. OBJECTIVE: To develop extensions of the C-index and D-index that describe the prognostic ability of a model adjusted for one or more covariate(s). METHOD: We define a covariate-adjusted C-index and D-index for censored survival data, propose several estimators......, and investigate their performance in simulation studies and in data from a large individual participant data meta-analysis, the Emerging Risk Factors Collaboration. RESULTS: The proposed methods perform well in simulations. In the Emerging Risk Factors Collaboration data, the age-adjusted C-index and D-index were...
Theory of Covariance Equivalent ARMAV Models of Civil Engineering Structures
DEFF Research Database (Denmark)
Andersen, P.; Brincker, Rune; Kirkegaard, Poul Henning
1996-01-01
In this paper the theoretical background for using covariance equivalent ARMAV models in modal analysis is discussed. It is shown how to obtain a covariance equivalent ARMA model for a univariate linear second order continuous-time system excited by Gaussian white noise. This result is generalized...... for multi-variate systems to an ARMAV model. The covariance equivalent model structure is also considered when the number of channels is different from the number of degrees of freedom to be modelled. Finally, it is reviewed how to estimate an ARMAV model from sampled data....
Nonrelativistic fluids on scale covariant Newton-Cartan backgrounds
Mitra, Arpita
2017-12-01
The nonrelativistic covariant framework for fields is extended to investigate fields and fluids on scale covariant curved backgrounds. The scale covariant Newton-Cartan background is constructed using the localization of space-time symmetries of nonrelativistic fields in flat space. Following this, we provide a Weyl covariant formalism which can be used to study scale invariant fluids. By considering ideal fluids as an example, we describe its thermodynamic and hydrodynamic properties and explicitly demonstrate that it satisfies the local second law of thermodynamics. As a further application, we consider the low energy description of Hall fluids. Specifically, we find that the gauge fields for scale transformations lead to corrections of the Wen-Zee and Berry phase terms contained in the effective action.
On the chiral covariant approach to ρρ scattering
Geng, Li-Sheng; Molina, Raquel; Oset, Eulogio
2017-12-01
We examine in detail a recent work (D. Gülmez, U. G. Meißner and J. A. Oller, Eur. Phys. J. C, 77: 460 (2017)), where improvements to make ρρ scattering relativistically covariant are made. The paper has the remarkable conclusion that the J=2 state disappears with a potential which is much more attractive than for J=0, where a bound state is found. We trace this abnormal conclusion to the fact that an “on-shell” factorization of the potential is done in a region where this potential is singular and develops a large discontinuous and unphysical imaginary part. A method is developed, evaluating the loops with full ρ propagators, and we show that they do not develop singularities and do not have an imaginary part below threshold. With this result for the loops we define an effective potential, which when used with the Bethe-Salpeter equation provides a state with J=2 around the energy of the f 2(1270). In addition, the coupling of the state to ρρ is evaluated and we find that this coupling and the T matrix around the energy of the bound state are remarkably similar to those obtained with a drastic approximation used previously, in which the q 2 terms of the propagators of the exchanged ρ mesons are dropped, once the cut-off in the ρρ loop function is tuned to reproduce the bound state at the same energy. Supported by National Natural Science Foundation of China (11375024, 11522539), the Spanish Ministerio de Economia y Competitividad and European FEDER funds (FIS2011-28853-C02-01, FIS2011- 28853-C02-02, FIS2014-57026-REDT, FIS2014-51948-C2- 1-P, FIS2014-51948-C2-2-P), the Generalitat Valenciana in the program Prometeo II-2014/068, We acknowledge the support of the European Community-Research Infrastructure Integrating Activity Study of Strongly Interacting Matter (acronym HadronPhysics3, Grant Agreement n. 283286) under the Seventh Framework Programme of the EU
Data Selection for Within-Class Covariance Estimation
2016-09-08
Singer, Elliot [Massachusetts Institute of Technology Lincoln Laboratory]; Campbell, Tyler [Rensselaer Polytechnic Institute]; Reynolds, Douglas [Massachusetts Institute of Technology Lincoln Laboratory]
Methods for performing... NIST evaluations to train the within-class and across-class covariance matrices required by these techniques, little attention has been paid to the...
Covariant Noether charge for higher dimensional Chern-Simons terms
Energy Technology Data Exchange (ETDEWEB)
Azeyanagi, Tatsuo [Département de Physique, Ecole Normale Supérieure, CNRS,24 rue Lhomond, 75005 Paris (France); Loganayagam, R. [School of Natural Sciences, Institute for Advanced Study,1 Einstein Drive, Princeton, NJ 08540 (United States); Ng, Gim Seng [Center for the Fundamental Laws of Nature, Harvard University,17 Oxford St, Cambridge, MA 02138 (United States); Rodriguez, Maria J. [Center for the Fundamental Laws of Nature, Harvard University,17 Oxford St, Cambridge, MA 02138 (United States); Institut de Physique Théorique, Orme des Merisiers batiment 774,Point courrier 136, CEA/DSM/IPhT, CEA/Saclay,F-91191 Gif-sur-Yvette Cedex (France)
2015-05-07
We construct a manifestly covariant differential Noether charge for theories with Chern-Simons terms in higher dimensional spacetimes. This is in contrast to Tachikawa’s extension of the standard Lee-Iyer-Wald formalism which results in a non-covariant differential Noether charge for Chern-Simons terms. On a bifurcation surface, our differential Noether charge integrates to the Wald-like entropy formula proposed by Tachikawa in (arXiv:hep-th/0611141v2).
Empirical Likelihood in Nonignorable Covariate-Missing Data Problems.
Xie, Yanmei; Zhang, Biao
2017-04-20
Missing covariate data occurs often in regression analysis, which frequently arises in the health and social sciences as well as in survey sampling. We study methods for the analysis of a nonignorable covariate-missing data problem in an assumed conditional mean function when some covariates are completely observed but other covariates are missing for some subjects. We adopt the semiparametric perspective of Bartlett et al. (Improving upon the efficiency of complete case analysis when covariates are MNAR. Biostatistics 2014;15:719-30) on regression analyses with nonignorable missing covariates, in which they have introduced the use of two working models, the working probability model of missingness and the working conditional score model. In this paper, we study an empirical likelihood approach to nonignorable covariate-missing data problems with the objective of effectively utilizing the two working models in the analysis of covariate-missing data. We propose a unified approach to constructing a system of unbiased estimating equations, where there are more equations than unknown parameters of interest. One useful feature of these unbiased estimating equations is that they naturally incorporate the incomplete data into the data analysis, making it possible to seek efficient estimation of the parameter of interest even when the working regression function is not specified to be the optimal regression function. We apply the general methodology of empirical likelihood to optimally combine these unbiased estimating equations. We propose three maximum empirical likelihood estimators of the underlying regression parameters and compare their efficiencies with other existing competitors. We present a simulation study to compare the finite-sample performance of various methods with respect to bias, efficiency, and robustness to model misspecification. The proposed empirical likelihood method is also illustrated by an analysis of a data set from the US National Health and
Extreme Covariant Observables for Type I Symmetry Groups
Holevo, Alexander S.; Pellonpää, Juha-Pekka
2009-06-01
The structure of covariant observables—normalized positive operator measures (POMs)—is studied in the case of a type I symmetry group. Such measures are completely determined by kernels which are measurable fields of positive semidefinite sesquilinear forms. We produce the minimal Kolmogorov decompositions for the kernels and determine those which correspond to the extreme covariant observables. Illustrative examples of the extremals in the case of the Abelian symmetry group are given.
Modeling Portfolio Defaults using Hidden Markov Models with Covariates
Banachewicz, Konrad; van der Vaart, Aad; Lucas, André
2006-01-01
We extend the Hidden Markov Model for defaults of Crowder, Davis, and Giampieri (2005) to include covariates. The covariates enhance the prediction of transition probabilities from high to low default regimes. To estimate the model, we extend the EM estimating equations to account for the time varying nature of the conditional likelihoods due to sample attrition and extension. Using empirical U.S. default data, we find that GDP growth, the term structure of interest rates and stock market ret...
Matrix completion by deep matrix factorization.
Fan, Jicong; Cheng, Jieyu
2017-11-03
Conventional methods of matrix completion are linear methods that are not effective in handling data of nonlinear structures. Recently a few researchers attempted to incorporate nonlinear techniques into matrix completion, but considerable limitations still exist. In this paper, a novel method called deep matrix factorization (DMF) is proposed for nonlinear matrix completion. Different from conventional matrix completion methods that are based on linear latent variable models, DMF is based on a nonlinear latent variable model. DMF is formulated as a deep-structure neural network, in which the inputs are the low-dimensional unknown latent variables and the outputs are the partially observed variables. In DMF, the inputs and the parameters of the multilayer neural network are simultaneously optimized to minimize the reconstruction errors for the observed entries. Then the missing entries can be readily recovered by propagating the latent variables to the output layer. DMF is compared with state-of-the-art methods of linear and nonlinear matrix completion in the tasks of toy matrix completion, image inpainting and collaborative filtering. The experimental results verify that DMF is able to provide higher matrix completion accuracy than existing methods and that DMF is applicable to large matrices. Copyright © 2017 Elsevier Ltd. All rights reserved.
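A minimal sketch of the DMF idea, jointly optimizing latent inputs and network weights against observed entries only (a one-hidden-layer NumPy toy with assumed sizes and learning rate, not the authors' implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: an 8-column matrix generated by a 1-D nonlinear latent model,
# with only ~70% of its entries observed (mask M).
n, p, h, d = 200, 8, 16, 1
z_true = rng.uniform(-2, 2, size=(n, d))
X = np.cos(z_true @ rng.normal(size=(d, p)) + rng.normal(size=p))
M = rng.random((n, p)) < 0.7

# DMF: latent inputs Z and a one-hidden-layer network are optimized jointly
# so the network output matches X on the observed entries only.
Z = 0.1 * rng.normal(size=(n, d))
W1 = 0.5 * rng.normal(size=(d, h)); b1 = np.zeros(h)
W2 = 0.5 * rng.normal(size=(h, p)); b2 = np.zeros(p)
lr = 0.1

def forward(Z):
    H = np.tanh(Z @ W1 + b1)
    return H, H @ W2 + b2

loss = lambda Xhat: np.mean((M * (Xhat - X)) ** 2)
loss0 = loss(forward(Z)[1])
for _ in range(2000):
    H, Xhat = forward(Z)
    E = M * (Xhat - X) / M.sum()        # gradient of the observed-entry loss
    dW2 = H.T @ E; db2 = E.sum(0)
    dA = (E @ W2.T) * (1 - H ** 2)      # backprop through tanh
    dW1 = Z.T @ dA; db1 = dA.sum(0)
    dZ = dA @ W1.T
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1
    Z -= lr * dZ
loss1 = loss(forward(Z)[1])
```

After fitting, missing entries are recovered simply by reading the network output at the masked positions, exactly as the abstract describes.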
Computational protein design quantifies structural constraints on amino acid covariation.
Directory of Open Access Journals (Sweden)
Noah Ollikainen
Amino acid covariation, where the identities of amino acids at different sequence positions are correlated, is a hallmark of naturally occurring proteins. This covariation can arise from multiple factors, including selective pressures for maintaining protein structure, requirements imposed by a specific function, or from phylogenetic sampling bias. Here we employed flexible backbone computational protein design to quantify the extent to which protein structure has constrained amino acid covariation for 40 diverse protein domains. We find significant similarities between the amino acid covariation in alignments of natural protein sequences and sequences optimized for their structures by computational protein design methods. These results indicate that the structural constraints imposed by protein architecture play a dominant role in shaping amino acid covariation and that computational protein design methods can capture these effects. We also find that the similarity between natural and designed covariation is sensitive to the magnitude and mechanism of backbone flexibility used in computational protein design. Our results thus highlight the necessity of including backbone flexibility to correctly model precise details of correlated amino acid changes and give insights into the pressures underlying these correlations.
Random sampling and validation of covariance matrices of resonance parameters
Plevnik, Lucijan; Zerovnik, Gašper
2017-09-01
Analytically exact methods for random sampling of arbitrary correlated parameters are presented. Emphasis is given on one hand on the possible inconsistencies in the covariance data, concentrating on the positive semi-definiteness and consistent sampling of correlated inherently positive parameters, and on the other hand on optimization of the implementation of the methods itself. The methods have been applied in the program ENDSAM, written in the Fortran language, which from a file from a nuclear data library of a chosen isotope in ENDF-6 format produces an arbitrary number of new files in ENDF-6 format which contain values of random samples of resonance parameters (in accordance with corresponding covariance matrices) in places of original values. The source code for the program ENDSAM is available from the OECD/NEA Data Bank. The program works in the following steps: reads resonance parameters and their covariance data from nuclear data library, checks whether the covariance data is consistent, and produces random samples of resonance parameters. The code has been validated with both realistic and artificial data to show that the produced samples are statistically consistent. Additionally, the code was used to validate covariance data in existing nuclear data libraries. A list of inconsistencies, observed in covariance data of resonance parameters in ENDF-VII.1, JEFF-3.2 and JENDL-4.0 is presented. For now, the work has been limited to resonance parameters, however the methods presented are general and can in principle be extended to sampling and validation of any nuclear data.
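The core sampling step, drawing correlated parameters consistently with a covariance matrix after checking symmetry and positive semi-definiteness, might look like this (an eigendecomposition-based sketch with illustrative two-parameter data, not the ENDSAM code):

```python
import numpy as np

def sample_correlated(mean, cov, n_samples, rng):
    """Sample parameters consistently with a covariance matrix, rejecting
    inconsistent (non-symmetric or indefinite) covariance data first."""
    cov = np.asarray(cov, dtype=float)
    if not np.allclose(cov, cov.T):
        raise ValueError("covariance matrix is not symmetric")
    w, V = np.linalg.eigh(cov)
    if w.min() < -1e-10 * max(1.0, w.max()):
        raise ValueError("covariance matrix is not positive semi-definite")
    L = V * np.sqrt(np.clip(w, 0.0, None))   # cov = L @ L.T, valid for PSD
    z = rng.standard_normal((n_samples, cov.shape[0]))
    return np.asarray(mean, dtype=float) + z @ L.T

# Two toy parameters with standard deviations 0.2 and 0.1, correlation 0.8.
rng = np.random.default_rng(42)
cov = [[0.2 ** 2, 0.8 * 0.2 * 0.1],
       [0.8 * 0.2 * 0.1, 0.1 ** 2]]
samples = sample_correlated([1.0, 2.0], cov, 100_000, rng)
```

Using the eigendecomposition rather than a plain Cholesky factor lets the sampler handle exactly semi-definite matrices, which the abstract notes occur in real covariance data.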
Richert, Bertrand; Caucanas, Marie; André, Josette
2015-04-01
Diagnosing nail matrix diseases requires knowledge of the nail matrix function and anatomy. This allows recognition of the clinical manifestations and assessment of potential surgical risk. Nail signs depend on the location within the matrix (proximal or distal) and the intensity, duration, and extent of the insult. Proximal matrix involvement includes nail surface irregularities (longitudinal lines, transverse lines, roughness of the nail surface, pitting, and superficial brittleness), whereas distal matrix insult induces longitudinal or transverse chromonychia. Clinical signs are described and their main causes are listed to enable readers to diagnose matrix disease from the nail's clinical features. Copyright © 2015 Elsevier Inc. All rights reserved.
DEFF Research Database (Denmark)
Petersen, Kaare Brandt; Pedersen, Michael Syskind
Matrix identities, relations and approximations. A desktop reference for quick overview of mathematics of matrices.
Bayesian adjustment for covariate measurement errors: a flexible parametric approach.
Hossain, Shahadut; Gustafson, Paul
2009-05-15
In most epidemiological investigations, the study units are people, the outcome variable (or the response) is a health-related event, and the explanatory variables are usually environmental and/or socio-demographic factors. The fundamental task in such investigations is to quantify the association between the explanatory variables (covariates/exposures) and the outcome variable through a suitable regression model. The accuracy of such quantification depends on how precisely the relevant covariates are measured. In many instances, we cannot measure some of the covariates accurately. Rather, we can measure noisy (mismeasured) versions of them. In statistical terminology, mismeasurement in continuous covariates is known as measurement errors or errors-in-variables. Regression analyses based on mismeasured covariates lead to biased inference about the true underlying response-covariate associations. In this paper, we suggest a flexible parametric approach for avoiding this bias when estimating the response-covariate relationship through a logistic regression model. More specifically, we consider the flexible generalized skew-normal and the flexible generalized skew-t distributions for modeling the unobserved true exposure. For inference and computational purposes, we use Bayesian Markov chain Monte Carlo techniques. We investigate the performance of the proposed flexible parametric approach in comparison with a common flexible parametric approach through extensive simulation studies. We also compare the proposed method with the competing flexible parametric method on a real-life data set. Though emphasis is put on the logistic regression model, the proposed method is unified and is applicable to the other generalized linear models, and to other types of non-linear regression models as well. (c) 2009 John Wiley & Sons, Ltd.
Matrix differentiation formulas
Usikov, D. A.; Tkhabisimov, D. K.
1983-01-01
A compact differentiation technique (without using indexes) is developed for scalar functions that depend on complex matrix arguments which are combined by operations of complex conjugation, transposition, addition, multiplication, matrix inversion and taking the direct product. The differentiation apparatus is developed in order to simplify the solution of extremum problems of scalar functions of matrix arguments.
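One of the simplest rules covered by such differentiation techniques can be verified numerically. The sketch below checks the real-matrix identity d/dX tr(AX) = Aᵀ by central finite differences; the helper names are illustrative.

```python
import numpy as np

# Numerical check of the matrix differentiation rule d/dX tr(A X) = A^T
# for real matrices (a sketch; helper names are ours).
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))

def f(X):
    return np.trace(A @ X)

def numerical_gradient(f, X, h=1e-6):
    # central finite differences, one entry of X at a time
    G = np.zeros_like(X)
    for i in range(X.shape[0]):
        for j in range(X.shape[1]):
            E = np.zeros_like(X)
            E[i, j] = h
            G[i, j] = (f(X + E) - f(X - E)) / (2 * h)
    return G

X0 = rng.standard_normal((3, 3))
G = numerical_gradient(f, X0)   # should match A.T entrywise
```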
Matrix with Prescribed Eigenvectors
Ahmad, Faiz
2011-01-01
It is a routine matter for undergraduates to find eigenvalues and eigenvectors of a given matrix. But the converse problem of finding a matrix with prescribed eigenvalues and eigenvectors is rarely discussed in elementary texts on linear algebra. This problem is related to the "spectral" decomposition of a matrix and has important technical…
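The converse construction the abstract refers to is the spectral decomposition read backwards: given eigenvectors as columns of V and eigenvalues λ, form A = V diag(λ) V⁻¹. A minimal sketch with illustrative values:

```python
import numpy as np

# Build a matrix with prescribed eigenvalues and eigenvectors via
# A = V diag(lambda) V^{-1} (values chosen for illustration only).
V = np.array([[1.0,  1.0],
              [1.0, -1.0]])          # prescribed eigenvectors (columns)
lam = np.array([3.0, 1.0])           # prescribed eigenvalues
A = V @ np.diag(lam) @ np.linalg.inv(V)
# A now satisfies A v_i = lambda_i v_i for each column v_i of V
```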
Yousefi, Bardia; Sfarra, Stefano; Ibarra Castanedo, Clemente; Maldague, Xavier P. V.
2017-09-01
Thermal and infrared imagery have led to considerable developments in the Non-Destructive Testing (NDT) area. Here, a thermography method for NDT specimen inspection is addressed by applying a technique for computing the eigen-decomposition, referred to as Candid Covariance-Free Incremental Principal Component Thermography (CCIPCT). The proposed approach uses a computationally cheaper alternative for estimating the covariance matrix and the Singular Value Decomposition (SVD) to obtain the result of Principal Component Thermography (PCT), and ultimately segments the defects in the specimens using a color-based K-medoids clustering approach. The problem of computational expense for high-dimensional thermal image acquisition is also investigated. Three types of specimens (CFRP, Plexiglas and Aluminium) have been used for comparative benchmarking. The results conclusively indicate promising performance and confirm the outlined properties.
Structural and Maturational Covariance in Early Childhood Brain Development.
Geng, Xiujuan; Li, Gang; Lu, Zhaohua; Gao, Wei; Wang, Li; Shen, Dinggang; Zhu, Hongtu; Gilmore, John H
2017-03-01
Brain structural covariance networks (SCNs) composed of regions with correlated variation are altered in neuropsychiatric disease and change with age. Little is known about the development of SCNs in early childhood, a period of rapid cortical growth. We investigated the development of structural and maturational covariance networks, including default, dorsal attention, primary visual and sensorimotor networks, in a longitudinal population of 118 children from birth to 2 years of age and compared them with intrinsic functional connectivity networks. We found that the structural covariance of all networks exhibits strong correlations mostly limited to their seed regions. By age 2, default and dorsal attention structural networks are much less distributed compared with their functional maps. The maturational covariance maps, however, revealed significant couplings in rates of change between distributed regions, which partially recapitulate their functional networks. The structural and maturational covariance of the primary visual and sensorimotor networks shows similar patterns to the corresponding functional networks. Results indicate that functional networks are in place prior to structural networks, that correlated structural patterns in adults may arise in part from coordinated cortical maturation, and that regional co-activation in functional networks may guide and refine the maturation of SCNs over childhood development. © The Author 2016. Published by Oxford University Press. All rights reserved.
Network-level structural covariance in the developing brain.
Zielinski, Brandon A; Gennatas, Efstathios D; Zhou, Juan; Seeley, William W
2010-10-19
Intrinsic or resting state functional connectivity MRI and structural covariance MRI have begun to reveal the adult human brain's multiple network architectures. How and when these networks emerge during development remains unclear, but understanding ontogeny could shed light on network function and dysfunction. In this study, we applied structural covariance MRI techniques to 300 children in four age categories (early childhood, 5-8 y; late childhood, 8.5-11 y; early adolescence, 12-14 y; late adolescence, 16-18 y) to characterize gray matter structural relationships between cortical nodes that make up large-scale functional networks. Network nodes identified from eight widely replicated functional intrinsic connectivity networks served as seed regions to map whole-brain structural covariance patterns in each age group. In general, structural covariance in the youngest age group was limited to seed and contralateral homologous regions. Networks derived using primary sensory and motor cortex seeds were already well-developed in early childhood but expanded in early adolescence before pruning to a more restricted topology resembling adult intrinsic connectivity network patterns. In contrast, language, social-emotional, and other cognitive networks were relatively undeveloped in younger age groups and showed increasingly distributed topology in older children. The so-called default-mode network provided a notable exception, following a developmental trajectory more similar to the primary sensorimotor systems. Relationships between functional maturation and structural covariance networks topology warrant future exploration.
Directory of Open Access Journals (Sweden)
Marcela Aparecida Guerreiro Machado
2008-01-01
Full Text Available In this paper a control chart based on the sample variances of two quality characteristics is proposed for monitoring bivariate normal processes. The points plotted on the chart correspond to the maximum of the two sample variances. The proposed chart, called the VMAX chart, detects process disturbances faster than the generalized variance |S| chart and has a better diagnostic feature, that is, with the VMAX chart it is easier to relate an out-of-control signal to the variable whose variability has moved away from its in-control state. When the double sampling scheme is in use the proposed chart also performs better than the |S| chart, except in a few cases in which the size of the second-stage sample is very large.
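The plotted statistic is straightforward to compute. A minimal sketch of one VMAX chart point, using a synthetic subgroup of five bivariate observations:

```python
import numpy as np

# One point on the VMAX chart: the larger of the two sample variances of a
# subgroup (subgroup data here are synthetic, subgroup size n = 5).
rng = np.random.default_rng(0)
subgroup = rng.standard_normal((5, 2))   # 5 observations of 2 quality characteristics
s2 = subgroup.var(axis=0, ddof=1)        # the two sample variances
vmax = s2.max()                          # statistic plotted on the VMAX chart
```

In practice `vmax` would be compared against a control limit calibrated to the desired in-control average run length.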
Parce, J. Wallace; Bernatis, Paul; Dubrow, Robert; Freeman, William P.; Gamoras, Joel; Kan, Shihai; Meisel, Andreas; Qian, Baixin; Whiteford, Jeffery A.; Ziebarth, Jonathan
2010-01-12
Matrixes doped with semiconductor nanocrystals are provided. In certain embodiments, the semiconductor nanocrystals have a size and composition such that they absorb or emit light at particular wavelengths. The nanocrystals can comprise ligands that allow for mixing with various matrix materials, including polymers, such that a minimal portion of light is scattered by the matrixes. The matrixes of the present invention can also be utilized in refractive index matching applications. In other embodiments, semiconductor nanocrystals are embedded within matrixes to form a nanocrystal density gradient, thereby creating an effective refractive index gradient. The matrixes of the present invention can also be used as filters and antireflective coatings on optical devices and as down-converting layers. Processes for producing matrixes comprising semiconductor nanocrystals are also provided. Nanostructures having high quantum efficiency, small size, and/or a narrow size distribution are also described, as are methods of producing indium phosphide nanostructures and core-shell nanostructures with Group II-VI shells.
Directory of Open Access Journals (Sweden)
Janiga-Ćmiel Anna
2016-12-01
Full Text Available The paper looks at issues related to the research on and assessment of the contagion effect. Based on several examinations of pairs of selected EU countries, Poland paired with one other EU member state, it presents the interaction between their economic development. A DCC-GARCH model constructed for the purpose of the study was used to generate covariance matrices Ht, which enabled the calculation of correlation matrices Rt. The resulting variance vectors were used to construct a linear correlation model on which a further analysis of the contagion effect was based. The aim of the study was to test for a contagion effect among selected EU countries in the years 2000–2014. The transmission channel under study was the GDP of a selected country. The empirical studies confirmed the existence of a contagion effect between the economic development of the Polish economy and the selected EU economies.
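The step from the conditional covariance matrices Ht to the correlation matrices Rt used in the study is the standard normalization Rt = D⁻¹ Ht D⁻¹ with D = diag(√hᵢᵢ). A sketch with an illustrative (not estimated) covariance matrix:

```python
import numpy as np

# From a conditional covariance matrix H_t (as produced by DCC-GARCH) to the
# corresponding correlation matrix R_t. The matrix below is illustrative.
H = np.array([[4.0, 1.0],
              [1.0, 9.0]])
d = np.sqrt(np.diag(H))          # conditional standard deviations
R = H / np.outer(d, d)           # unit diagonal; off-diagonal = conditional correlation
```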
Optimal solution error covariance in highly nonlinear problems of variational data assimilation
Directory of Open Access Journals (Sweden)
V. Shutyaev
2012-03-01
Full Text Available The problem of variational data assimilation (DA for a nonlinear evolution model is formulated as an optimal control problem to find the initial condition, boundary conditions and/or model parameters. The input data contain observation and background errors, hence there is an error in the optimal solution. For mildly nonlinear dynamics, the covariance matrix of the optimal solution error can be approximated by the inverse Hessian of the cost function. For problems with strongly nonlinear dynamics, a new statistical method based on the computation of a sample of inverse Hessians is suggested. This method relies on the efficient computation of the inverse Hessian by means of iterative methods (Lanczos and quasi-Newton BFGS with preconditioning. Numerical examples are presented for the model governed by the Burgers equation with a nonlinear viscous term.
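In the mildly nonlinear (here: linear-Gaussian) case the abstract mentions, the optimal-solution error covariance is exactly the inverse Hessian of the cost function. A minimal sketch with illustrative matrices:

```python
import numpy as np

# For linear-Gaussian variational DA, the optimal-solution error covariance is
# the inverse Hessian of J(x) = 0.5 (x-xb)^T B^{-1} (x-xb) + 0.5 (Gx-y)^T R^{-1} (Gx-y).
# All matrices below are illustrative, not from the paper.
B = np.diag([1.0, 2.0])                   # background error covariance
R = np.diag([0.5])                        # observation error covariance
G = np.array([[1.0, 1.0]])                # linear observation operator
H = np.linalg.inv(B) + G.T @ np.linalg.inv(R) @ G   # Hessian of J
P = np.linalg.inv(H)                      # optimal-solution error covariance
```

Assimilating an observation can only reduce the variance, so the diagonal of P never exceeds that of B; the strongly nonlinear case in the paper replaces this single inverse Hessian with a sample of inverse Hessians.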
Femtosecond Studies Of Coulomb Explosion Utilizing Covariance Mapping
Card, D A
2000-01-01
The studies presented herein elucidate details of the Coulomb explosion event initiated through the interaction of molecular clusters with an intense femtosecond laser beam (≥1 PW/cm²). Clusters studied include ammonia, titanium-hydrocarbon, pyridine, and 7-azaindole. Covariance analysis is presented as a general technique to study the dynamical processes in clusters and to discern whether the fragmentation channels are competitive. Positive covariance determinations identify concerted processes such as the concomitant explosion of protonated cluster ions of asymmetrical size. Anti-covariance mapping is exploited to distinguish competitive reaction channels such as the production of highly charged nitrogen atoms formed at the expense of the protonated members of a cluster ion ensemble. This technique is exemplified in each cluster system studied. Kinetic energy analyses, from experiment and simulation, are presented to fully understand the Coulomb explosion event. A cutoff study strongly suggests that...
Disjunct eddy covariance technique for trace gas flux measurements
Rinne, H. J. I.; Guenther, A. B.; Warneke, C.; de Gouw, J. A.; Luxembourg, S. L.
A new approach for eddy covariance flux measurements is developed and applied for trace gas fluxes in the atmospheric surface layer. In the disjunct eddy covariance technique, quick samples with a relatively long time interval between them are taken instead of continuously sampling air. This subset of the time series, together with vertical wind velocity data at the corresponding sampling times, can be correlated to give a flux. The disjunct eddy sampling gives more time to analyze the trace gas concentrations and thus makes eddy covariance measurements possible using slower sensors. In this study a proton-transfer-reaction mass spectrometer with a response time of about 1 second was used with a disjunct eddy sampler to measure fluxes of volatile organic compounds from an alfalfa field. The measured day-time maximum methanol fluxes ranged from 1 mg m-2 h-1 from uncut alfalfa to 8 mg m-2 h-1 from freshly cut alfalfa. Night-time fluxes were around zero.
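The idea above can be sketched on synthetic data: the flux is the covariance of vertical wind w and scalar concentration c, and a disjunct subset of sampling instants gives an unbiased but noisier estimate of the same covariance. The series, sampling rates, and the true flux of 0.5 are illustrative assumptions.

```python
import numpy as np

# Disjunct vs conventional eddy covariance on synthetic data.
rng = np.random.default_rng(0)
n = 36_000                                   # one hour of 10 Hz data
w = rng.standard_normal(n)                   # vertical wind (synthetic)
c = 0.5 * w + rng.standard_normal(n)         # scalar correlated with w; true flux = 0.5

# conventional eddy covariance: mean of the product of fluctuations
flux_full = np.mean((w - w.mean()) * (c - c.mean()))

# disjunct sampling: one grab sample every 10 s (every 100th point)
idx = np.arange(0, n, 100)
wd, cd = w[idx], c[idx]
flux_disjunct = np.mean((wd - wd.mean()) * (cd - cd.mean()))
# flux_disjunct estimates the same flux from 360 samples instead of 36,000
```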
Structure of irreducibly covariant quantum channels for finite groups
Mozrzymas, Marek; Studziński, Michał; Datta, Nilanjana
2017-05-01
We obtain an explicit characterization of linear maps, in particular, quantum channels, which are covariant with respect to an irreducible representation (U) of a finite group (G), whenever U ⊗ Uc is simply reducible (with Uc being the contragredient representation). Using the theory of group representations, we obtain the spectral decomposition of any such linear map. The eigenvalues and orthogonal projections arising in this decomposition are expressed entirely in terms of representation characteristics of the group G. This in turn yields necessary and sufficient conditions on the eigenvalues of any such linear map for it to be a quantum channel. We also obtain a wide class of quantum channels which are irreducibly covariant by construction. For two-dimensional irreducible representations of the symmetric group S(3), and the quaternion group Q, we also characterize quantum channels which are both irreducibly covariant and entanglement breaking.
Piecewise exponential survival trees with time-dependent covariates.
Huang, X; Chen, S; Soong, S J
1998-12-01
Survival trees methods are nonparametric alternatives to the semiparametric Cox regression in survival analysis. In this paper, a tree-based method for censored survival data with time-dependent covariates is proposed. The proposed method assumes a very general model for the hazard function and is fully nonparametric. The recursive partitioning algorithm uses the likelihood estimation procedure to grow trees under a piecewise exponential structure that handles time-dependent covariates in a parallel way to time-independent covariates. In general, the estimated hazard at a node gives the risk for a group of individuals during a specific time period. Both cross-validation and bootstrap resampling techniques are implemented in the tree selection procedure. The performance of the proposed survival trees method is shown to be good through simulation and application to real data.
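The node-level estimate the abstract describes is simple: under a piecewise exponential model, the hazard within a node and time interval is the number of events divided by the total person-time at risk there. The numbers below are illustrative.

```python
# Piecewise-exponential hazard estimate within one tree node and time interval:
# lambda_hat = events / total person-time (illustrative numbers).
events = 7
person_time = 140.0              # e.g. 140 person-years observed in this node/period
hazard = events / person_time    # 0.05 events per person-year
```

The recursive partitioning then chooses splits that maximize the resulting piecewise exponential likelihood across daughter nodes.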
Covariate selection for the semiparametric additive risk model
DEFF Research Database (Denmark)
Martinussen, Torben; Scheike, Thomas
2009-01-01
This paper considers covariate selection for the additive hazards model. This model is particularly simple to study theoretically and its practical implementation has several major advantages to the similar methodology for the proportional hazards model. One complication compared ... of observations. We do this by studying the properties of the so-called Dantzig selector in the setting of the additive risk model. Specifically, we establish a bound on how close the solution is to a true sparse signal in the case where the number of covariates is large. In a simulation study, we also compare ...
A Lorentz-Covariant Connection for Canonical Gravity
Directory of Open Access Journals (Sweden)
Marc Geiller
2011-08-01
Full Text Available We construct a Lorentz-covariant connection in the context of first order canonical gravity with non-vanishing Barbero-Immirzi parameter. To do so, we start with the phase space formulation derived from the canonical analysis of the Holst action in which the second class constraints have been solved explicitly. This allows us to avoid the use of Dirac brackets. In this context, we show that there is a ''unique'' Lorentz-covariant connection which is commutative in the sense of the Poisson bracket, and which furthermore agrees with the connection found by Alexandrov using the Dirac bracket. This result opens a new way toward the understanding of Lorentz-covariant loop quantum gravity.
Lu, Dan; Ye, Ming; Meyer, Philip D.; Curtis, Gary P.; Shi, Xiaoqing; Niu, Xu-Feng; Yabusaki, Steve B.
2013-01-01
When conducting model averaging for assessing groundwater conceptual model uncertainty, the averaging weights are often evaluated using model selection criteria such as AIC, AICc, BIC, and KIC (Akaike Information Criterion, Corrected Akaike Information Criterion, Bayesian Information Criterion, and Kashyap Information Criterion, respectively). However, this method often leads to an unrealistic situation in which the best model receives overwhelmingly large averaging weight (close to 100%), which cannot be justified by available data and knowledge. It was found in this study that this problem was caused by using the covariance matrix, CE, of measurement errors for estimating the negative log likelihood function common to all the model selection criteria. This problem can be resolved by using the covariance matrix, Cek, of total errors (including model errors and measurement errors) to account for the correlation between the total errors. An iterative two-stage method was developed in the context of maximum likelihood inverse modeling to iteratively infer the unknown Cek from the residuals during model calibration. The inferred Cek was then used in the evaluation of model selection criteria and model averaging weights. While this method was limited to serial data using time series techniques in this study, it can be extended to spatial data using geostatistical techniques. The method was first evaluated in a synthetic study and then applied to an experimental study, in which alternative surface complexation models were developed to simulate column experiments of uranium reactive transport. It was found that the total errors of the alternative models were temporally correlated due to the model errors. The iterative two-stage method using Cek resolved the problem that the best model receives 100% model averaging weight, and the resulting model averaging weights were supported by the calibration results and physical understanding of the alternative models. Using Cek
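The quantity at the heart of this discussion is the Gaussian negative log likelihood evaluated with a full error covariance, where using CE (measurement errors only, typically diagonal) versus Cek (total errors, correlated) changes the criterion values and hence the averaging weights. A minimal sketch; the implementation details are ours, not the paper's:

```python
import numpy as np

# Gaussian negative log likelihood of a residual vector r under error covariance C:
# NLL = 0.5 * (r^T C^{-1} r + log det C + n log 2*pi)
def neg_log_likelihood(r, C):
    n = len(r)
    _, logdet = np.linalg.slogdet(C)      # stable log-determinant
    return 0.5 * (r @ np.linalg.solve(C, r) + logdet + n * np.log(2.0 * np.pi))
```

Evaluating this with a diagonal CE when the residuals are actually correlated misstates the likelihood differences between models, which is what inflates the weight of the "best" model.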
Random sampling and validation of covariance matrices of resonance parameters
Directory of Open Access Journals (Sweden)
Plevnik Lucijan
2017-01-01
Full Text Available Analytically exact methods for random sampling of arbitrary correlated parameters are presented. Emphasis is given on one hand to possible inconsistencies in the covariance data, concentrating on positive semi-definiteness and consistent sampling of correlated inherently positive parameters, and on the other hand to optimization of the implementation of the methods themselves. The methods have been applied in the program ENDSAM, written in Fortran, which from a file of a nuclear data library of a chosen isotope in ENDF-6 format produces an arbitrary number of new files in ENDF-6 format that contain random samples of resonance parameters (in accordance with the corresponding covariance matrices) in place of the original values. The source code for the program ENDSAM is available from the OECD/NEA Data Bank. The program works in the following steps: it reads resonance parameters and their covariance data from a nuclear data library, checks whether the covariance data are consistent, and produces random samples of the resonance parameters. The code has been validated with both realistic and artificial data to show that the produced samples are statistically consistent. Additionally, the code was used to validate covariance data in existing nuclear data libraries. A list of inconsistencies observed in covariance data of resonance parameters in ENDF/B-VII.1, JEFF-3.2 and JENDL-4.0 is presented. For now, the work has been limited to resonance parameters; however, the methods presented are general and can in principle be extended to sampling and validation of any nuclear data.
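The two ingredients the abstract describes, repairing covariance data that fail positive semi-definiteness and drawing consistent correlated samples, can be sketched as follows. This is a generic eigenvalue-clipping plus Cholesky approach, not the ENDSAM implementation; function names are ours.

```python
import numpy as np

# Repair an inconsistent (indefinite) covariance matrix by clipping negative
# eigenvalues, then sample correlated parameters via a Cholesky factor.
def nearest_psd(C, eps=1e-10):
    C = 0.5 * (C + C.T)                               # symmetrize
    w, V = np.linalg.eigh(C)
    return V @ np.diag(np.clip(w, eps, None)) @ V.T   # clip negative eigenvalues

def sample_correlated(mean, C, n, rng):
    L = np.linalg.cholesky(nearest_psd(C))
    return mean + rng.standard_normal((n, len(mean))) @ L.T

rng = np.random.default_rng(0)
C = np.array([[1.0, 0.6],
              [0.6, 1.0]])
samples = sample_correlated(np.zeros(2), C, 200_000, rng)
# the sample covariance of `samples` reproduces C
```

For inherently positive parameters (such as widths), a lognormal transform of the sampled values would be needed on top of this; that step is omitted here.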
Non-negative matrix factorization with Gaussian process priors
DEFF Research Database (Denmark)
Schmidt, Mikkel Nørgaard; Laurberg, Hans
2008-01-01
We present a general method for including prior knowledge in a nonnegative matrix factorization (NMF), based on Gaussian process priors. We assume that the nonnegative factors in the NMF are linked by a strictly increasing function to an underlying Gaussian process specified by its covariance function. This allows us to find NMF decompositions that agree with our prior knowledge of the distribution of the factors, such as sparseness, smoothness, and symmetries. The method is demonstrated with an example from chemical shift brain imaging.
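As a baseline for what the Gaussian process prior is added to, here is plain multiplicative-update NMF (Lee-Seung). The paper's method additionally ties the factors to a GP through a link function, which is not reproduced in this sketch; all data are synthetic.

```python
import numpy as np

# Baseline multiplicative-update NMF: V ≈ W H with W, H >= 0.
rng = np.random.default_rng(0)
V = rng.random((20, 30))                 # nonnegative data matrix
k = 4                                    # number of factors
W = rng.random((20, k)) + 0.1
H = rng.random((k, 30)) + 0.1
err0 = np.linalg.norm(V - W @ H)         # initial reconstruction error

for _ in range(200):
    # Lee-Seung updates; small constant avoids division by zero
    H *= (W.T @ V) / (W.T @ W @ H + 1e-9)
    W *= (V @ H.T) / (W @ H @ H.T + 1e-9)

err = np.linalg.norm(V - W @ H)          # error decreases monotonically
```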
The Geometry of Statistical Efficiency and Matrix Statistics
Directory of Open Access Journals (Sweden)
K. Gustafson
2007-01-01
Full Text Available We will place certain parts of the theory of statistical efficiency into the author's operator trigonometry (1967), thereby providing new geometrical understanding of statistical efficiency. Important earlier results of Bloomfield and Watson, Durbin and Kendall, and Rao and Rao will be so interpreted. For example, worst-case relative least squares efficiency corresponds to and is achieved by the maximal turning antieigenvectors of the covariance matrix. Some little-known historical perspectives will also be exposed. The overall view will be emphasized.
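The worst-case efficiency mentioned above is governed by the extreme eigenvalues of the covariance matrix: the first antieigenvalue is μ = 2√(λ₁λₙ)/(λ₁+λₙ), and μ² is the Kantorovich lower bound on relative least squares efficiency. A sketch with an illustrative covariance matrix:

```python
import numpy as np

# First antieigenvalue of a covariance matrix and the Kantorovich efficiency bound.
Sigma = np.diag([4.0, 2.0, 1.0])          # illustrative covariance matrix
lam = np.linalg.eigvalsh(Sigma)
l1, ln = lam.max(), lam.min()
mu = 2.0 * np.sqrt(l1 * ln) / (l1 + ln)   # first antieigenvalue: 2*2/5 = 0.8 here
efficiency_bound = mu ** 2                # Kantorovich bound: 0.64 here
```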
Buckley-James Estimator of AFT Models with Auxiliary Covariates
Granville, Kevin; Fan, Zhaozhi
2014-01-01
In this paper we study the Buckley-James estimator of accelerated failure time models with auxiliary covariates. Instead of postulating distributional assumptions on the auxiliary covariates, we use a local polynomial approximation method to accommodate them into the Buckley-James estimating equations. The regression parameters are obtained iteratively by minimizing a consecutive distance of the estimates. Asymptotic properties of the proposed estimator are investigated. Simulation studies show that the efficiency gain of using auxiliary information is remarkable when compared to just using the validation sample. The method is applied to the PBC data from the Mayo Clinic trial in primary biliary cirrhosis as an illustration. PMID:25127479
Spatial implications of covariate adjustment on patterns of risk
DEFF Research Database (Denmark)
Sabel, Clive Eric; Wilson, Jeff Gaines; Kingham, Simon
2007-01-01
... localised factors that influence the exposure-response relationship. This paper examines the spatial patterns of relative risk and clusters of hospitalisations based on an illustrative small-area example from Christchurch, New Zealand. A four-stage test of the spatial relocation effects of covariate ... area to a mixed residential/industrial area, possibly introducing new environmental exposures. Researchers should be aware of the potential spatial effects inherent in adjusting for covariates when considering study design and interpreting results. © 2007 Elsevier Ltd. All rights reserved.
Fission yield covariances for JEFF: A Bayesian Monte Carlo method
Leray, Olivier; Rochman, Dimitri; Fleming, Michael; Sublet, Jean-Christophe; Koning, Arjan; Vasiliev, Alexander; Ferroukhi, Hakim
2017-09-01
The JEFF library does not contain fission yield covariances, but simply best estimates and uncertainties. This situation is not unique as all libraries are facing this deficiency, firstly due to the lack of a defined format. An alternative approach is to provide a set of random fission yields, themselves reflecting covariance information. In this work, these random files are obtained combining the information from the JEFF library (fission yields and uncertainties) and the theoretical knowledge from the GEF code. Examples of this method are presented for the main actinides together with their impacts on simple burn-up and decay heat calculations.
Fission yield covariances for JEFF: A Bayesian Monte Carlo method
Directory of Open Access Journals (Sweden)
Leray Olivier
2017-01-01
Full Text Available The JEFF library does not contain fission yield covariances, but simply best estimates and uncertainties. This situation is not unique as all libraries are facing this deficiency, firstly due to the lack of a defined format. An alternative approach is to provide a set of random fission yields, themselves reflecting covariance information. In this work, these random files are obtained combining the information from the JEFF library (fission yields and uncertainties) and the theoretical knowledge from the GEF code. Examples of this method are presented for the main actinides together with their impacts on simple burn-up and decay heat calculations.
Portfolio management using realized covariances: Evidence from Brazil
Directory of Open Access Journals (Sweden)
João F. Caldeira
2017-09-01
Full Text Available It is often argued that intraday returns can be used to construct covariance estimates that are more accurate than those based on daily returns. However, it is still unclear whether high frequency data provide more precise covariance estimates in markets more contaminated from microstructure noise such as higher bid-ask spreads and lower liquidity. We address this question by investigating the benefits of using high frequency data in the Brazilian equities market to construct optimal minimum variance portfolios. We implement alternative realized covariance estimators based on intraday returns sampled at alternative frequencies and obtain their dynamic versions using a multivariate GARCH framework. Our evidence based on a high-dimensional data set suggests that realized covariance estimators performed significantly better from an economic point of view in comparison to standard estimators based on low-frequency (close-to-close) data as they delivered less risky portfolios.
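The basic estimator behind this line of work is simple: the daily realized covariance is the sum of outer products of intraday return vectors. A sketch on simulated five-minute returns (the bar count, assets, and per-bar covariance are illustrative):

```python
import numpy as np

# Daily realized covariance from intraday returns: RC = sum_t r_t r_t^T.
rng = np.random.default_rng(0)
S_true = np.array([[1.0, 0.3],
                   [0.3, 0.8]]) / 78.0            # per-bar covariance (illustrative)
L = np.linalg.cholesky(S_true)
r = rng.standard_normal((78, 2)) @ L.T            # 78 five-minute returns, 2 assets
RC = r.T @ r                                      # realized covariance for the day
```

Dynamic versions, as in the paper, then feed a time series of such daily RC matrices into a multivariate GARCH-type model.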
Covariant description of transformation optics in nonlinear media.
Paul, Oliver; Rahm, Marco
2012-04-09
The technique of transformation optics (TO) is an elegant method for the design of electromagnetic media with tailored optical properties. In this paper, we focus on the formal structure of TO theory. By using a complete covariant formalism, we present a general transformation law that holds for arbitrary materials including bianisotropic, magneto-optical, nonlinear and moving media. Due to the principle of general covariance, the formalism is applicable to arbitrary space-time coordinate transformations and automatically accounts for magneto-electric coupling terms. The formalism is demonstrated for the calculation of the second harmonic wave generation in a twisted TO concentrator.
Dirac oscillator in a Galilean covariant non-commutative space
Energy Technology Data Exchange (ETDEWEB)
Melo, G.R. de [Universidade Federal do Reconcavo da Bahia, BA (Brazil); Montigny, M. [University of Alberta (Canada); Pompeia, P.J. [Instituto de Fomento e Coordecacao Industrial, Sao Jose dos Campos, SP (Brazil); Santos, Esdras S. [Universidade Federal da Bahia, Salvador (Brazil)
2013-07-01
Full text: Even though Galilean kinematics is only an approximation of the relativistic kinematics, the structure of Galilean kinematics is more intricate than relativistic kinematics. For instance, the Galilean algebra admits a nontrivial central extension and projective representations, whereas the Poincare algebra does not. It is possible to construct representations of the Galilei algebra with three possible methods: (1) directly from the Galilei algebra, (2) from contractions of the Poincare algebra with the same space-time dimension, or (3) from the Poincare algebra in a space-time with one additional dimension. In this paper, we follow the third approach, which we refer to as 'Galilean covariance' because the equations are Lorentz covariant in the extended manifold. These equations become Galilean invariant after projection to the lower dimension. Our motivation is that this covariant approach provides one more unifying feature of field theory models. Indeed, particle physics (with Poincare kinematics) and condensed matter physics (with Galilean kinematics) share many tools of quantum field theory (e.g. gauge invariance, spontaneous symmetry breaking, Goldstone bosons), but the Galilean kinematics does not admit a metric structure. However, since the Galilean Lie algebra is a subalgebra of the Poincare Lie algebra if one more space-like dimension is added, we can achieve 'Galilean covariance' with a metric in an extended manifold; that makes non-relativistic models look similar to Lorentz-covariant relativistic models. In this context we study the Galilei covariant five-dimensional formulation applied to the Galilean Dirac oscillator in a non-commutative situation, with space-space and momentum-momentum non-commutativity. The wave equation is obtained via a 'Galilean covariant' approach, which consists in projecting the covariant motion equations from a (4, 1)-dimensional manifold with light-cone coordinates, to a (3, 1
Random matrix theory for heavy-tailed time series
DEFF Research Database (Denmark)
Heiny, Johannes
2017-01-01
This paper is a review of recent results for large random matrices with heavy-tailed entries. First, we outline the development of and some classical results in random matrix theory. We focus on large sample covariance matrices, their limiting spectral distributions, the asymptotic behavior of their largest and smallest eigenvalues and their eigenvectors. The limits significantly depend on the finiteness or infiniteness of the fourth moment of the entries of the random matrix. We compare the results for these two regimes which give rise to completely different asymptotic theories. Finally, the limits...
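The light-tailed side of the dichotomy can be illustrated in a few lines: with standard normal entries, the largest eigenvalue of the sample covariance matrix concentrates at the Marchenko-Pastur edge (1 + √(p/n))²; heavy-tailed entries behave completely differently, which is the contrast the review develops. Dimensions below are illustrative.

```python
import numpy as np

# Largest sample-covariance eigenvalue vs the Marchenko-Pastur edge (light tails).
rng = np.random.default_rng(0)
n, p = 2000, 200
X = rng.standard_normal((n, p))          # i.i.d. light-tailed entries
S = X.T @ X / n                          # sample covariance (identity population cov.)
lam_max = np.linalg.eigvalsh(S).max()
edge = (1 + np.sqrt(p / n)) ** 2         # Marchenko-Pastur edge, about 1.73 here
```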
Energy Technology Data Exchange (ETDEWEB)
Leitão, Sofia, E-mail: sofia.leitao@tecnico.ulisboa.pt [CFTP, Instituto Superior Técnico, Universidade de Lisboa, Av. Rovisco Pais 1, 1049-001 Lisboa (Portugal); Stadler, Alfred, E-mail: stadler@uevora.pt [Departamento de Física, Universidade de Évora, 7000-671 Évora (Portugal); CFTP, Instituto Superior Técnico, Universidade de Lisboa, Av. Rovisco Pais 1, 1049-001 Lisboa (Portugal); Peña, M.T., E-mail: teresa.pena@tecnico.ulisboa.pt [Departamento de Física, Instituto Superior Técnico, Universidade de Lisboa, Av. Rovisco Pais 1, 1049-001 Lisboa (Portugal); CFTP, Instituto Superior Técnico, Universidade de Lisboa, Av. Rovisco Pais 1, 1049-001 Lisboa (Portugal); Biernat, Elmar P., E-mail: elmar.biernat@tecnico.ulisboa.pt [CFTP, Instituto Superior Técnico, Universidade de Lisboa, Av. Rovisco Pais 1, 1049-001 Lisboa (Portugal)
2017-01-10
The Covariant Spectator Theory (CST) is used to calculate the mass spectrum and vertex functions of heavy–light and heavy mesons in Minkowski space. The covariant kernel contains Lorentz scalar, pseudoscalar, and vector contributions. The numerical calculations are performed in momentum space, where special care is taken to treat the strong singularities present in the confining kernel. The observed meson spectrum is very well reproduced after fitting a small number of model parameters. Remarkably, a fit to only a few pseudoscalar meson states, which are insensitive to spin–orbit and tensor forces and do not allow one to separate the spin–spin from the central interaction, leads to essentially the same model parameters as a more general fit. This demonstrates that the covariance of the chosen interaction kernel is responsible for the very accurate prediction of the spin-dependent quark–antiquark interactions.
Friedrich, Oliver; Eifler, Tim
2018-01-01
Computing the inverse covariance matrix (or precision matrix) of large data vectors is crucial in weak lensing (and multiprobe) analyses of the large-scale structure of the Universe. Analytically computed covariances are noise-free and hence straightforward to invert; however, the model approximations might be insufficient for the statistical precision of future cosmological data. Estimating covariances from numerical simulations improves on these approximations, but the sample covariance estimator is inherently noisy, which introduces uncertainties in the error bars on cosmological parameters and also additional scatter in their best-fitting values. For future surveys, reducing both effects to an acceptable level requires an unfeasibly large number of simulations. In this paper we describe a way to expand the precision matrix around a covariance model and show how to estimate the leading order terms of this expansion from simulations. This is especially powerful if the covariance matrix is the sum of two contributions, C = A+B, where A is well understood analytically and can be turned off in simulations (e.g. shape noise for cosmic shear) to yield a direct estimate of B. We test our method in mock experiments resembling tomographic weak lensing data vectors from the Dark Energy Survey (DES) and the Large Synoptic Survey Telescope (LSST). For DES we find that 400 N-body simulations are sufficient to achieve negligible statistical uncertainties on parameter constraints. For LSST this is achieved with 2400 simulations. The standard covariance estimator would require >10^5 simulations to reach a similar precision. We extend our analysis to a DES multiprobe case finding a similar performance.
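The expansion idea described above can be sketched in a few lines of numpy. This is an illustrative toy (a 5-dimensional data vector and made-up matrices, not the DES/LSST setup): when C = A + B with A known analytically, simulations run with A "turned off" estimate only B, and the hybrid C = A + B-hat is then inverted for the precision matrix.

```python
import numpy as np

rng = np.random.default_rng(0)
p, n_sims = 5, 2000  # toy data-vector length and number of mock simulations

# Analytic, noise-free part A (e.g. shape noise) and a "true" extra part B.
A = np.diag(np.full(p, 2.0))
L = rng.normal(size=(p, p)) / np.sqrt(p)
B = L @ L.T + 0.5 * np.eye(p)
C_true = A + B

# Simulations with A switched off produce draws whose covariance is B only,
# giving a direct sample estimate of B.
draws_B = rng.multivariate_normal(np.zeros(p), B, size=n_sims)
B_hat = np.cov(draws_B, rowvar=False)

# Hybrid estimate: exact analytic A plus simulated B, then invert.
C_hat = A + B_hat
precision_hat = np.linalg.inv(C_hat)

rel_err = np.linalg.norm(C_hat - C_true) / np.linalg.norm(C_true)
```

Only B carries sampling noise here, which is why fewer simulations are needed than for a full sample-covariance estimate of C.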
Berrier, Allison L; Yamada, Kenneth M
2007-12-01
The complex interactions of cells with extracellular matrix (ECM) play crucial roles in mediating and regulating many processes, including cell adhesion, migration, and signaling during morphogenesis, tissue homeostasis, wound healing, and tumorigenesis. Many of these interactions involve transmembrane integrin receptors. Integrins cluster in specific cell-matrix adhesions to provide dynamic links between extracellular and intracellular environments by bi-directional signaling and by organizing the ECM and intracellular cytoskeletal and signaling molecules. This mini review discusses these interconnections, including the roles of matrix properties such as composition, three-dimensionality, and porosity, the bi-directional functions of cellular contractility and matrix rigidity, and cell signaling. The review concludes by speculating on the application of this knowledge of cell-matrix interactions in the formation of cell adhesions, assembly of matrix, migration, and tumorigenesis to potential future therapeutic approaches. 2007 Wiley-Liss, Inc.
Parallelism in matrix computations
Gallopoulos, Efstratios; Sameh, Ahmed H
2016-01-01
This book is primarily intended as a research monograph that could also be used in graduate courses for the design of parallel algorithms in matrix computations. It assumes general but not extensive knowledge of numerical linear algebra, parallel architectures, and parallel programming paradigms. The book consists of four parts: (I) Basics; (II) Dense and Special Matrix Computations; (III) Sparse Matrix Computations; and (IV) Matrix functions and characteristics. Part I deals with parallel programming paradigms and fundamental kernels, including reordering schemes for sparse matrices. Part II is devoted to dense matrix computations such as parallel algorithms for solving linear systems, linear least squares, the symmetric algebraic eigenvalue problem, and the singular-value decomposition. It also deals with the development of parallel algorithms for special linear systems such as banded, Vandermonde, Toeplitz, and block Toeplitz systems. Part III addresses sparse matrix computations: (a) the development of pa...
Quasiclassical Random Matrix Theory
Prange, R. E.
1996-01-01
We directly combine ideas of the quasiclassical approximation with random matrix theory and apply them to the study of the spectrum, in particular to the two-level correlator. Bogomolny's transfer operator T, quasiclassically an N×N unitary matrix, is considered to be a random matrix. Rather than rejecting all knowledge of the system except for its symmetry (as with Dyson's circular unitary ensemble), we choose an ensemble which incorporates the knowledge of the shortest periodic orbits, th...
Type-Safe Compilation of Covariant Specialization: A Practical Case
1995-11-01
modify the semantics of languages that use covariant specialization in order to improve their type safety. We demonstrate our technique using O2, a...not affect the semantics of those computations without type errors. Furthermore, the new semantics of the previously ill-typed computations is defined
Pseudo-observations for competing risks with covariate dependent censoring
DEFF Research Database (Denmark)
Binder, Nadine; Gerds, Thomas A; Andersen, Per Kragh
2014-01-01
that the probability of not being lost to follow-up (un-censored) is independent of the covariates. Modified pseudo-values are proposed which rely on a correctly specified regression model for the censoring times. Bias and efficiency of these methods are compared in a simulation study. Further illustration...
Modeling corporate defaults: Poisson autoregressions with exogenous covariates (PARX)
DEFF Research Database (Denmark)
Agosto, Arianna; Cavaliere, Guiseppe; Kristensen, Dennis
We develop a class of Poisson autoregressive models with additional covariates (PARX) that can be used to model and forecast time series of counts. We establish the time series properties of the models, including conditions for stationarity and existence of moments. These results are in turn used...
Application of covariance analysis to feed/ ration experimental data ...
African Journals Online (AJOL)
Correlation and regression analyses were used to adjust for the covariate, the initial weight of the experimental birds. Fisher's F statistic for the straightforward analysis of variance (ANOVA) showed significant differences among the rations. With the ANOVA, the calculated F statistic was 4.025, with a probability of 0.0149.
Globally covering a-priori regional gravity covariance models
Directory of Open Access Journals (Sweden)
D. Arabelos
2003-01-01
Full Text Available Gravity anomaly data generated using Wenzel's GPM98A model complete to degree 1800, from which OSU91A has been subtracted, have been used to estimate covariance functions for a set of globally covering equal-area blocks of size 22.5° × 22.5° at the Equator, having a 2.5° overlap. For each block an analytic covariance function model was determined. The models are based on 4 parameters: the depth to the Bjerhammar sphere (which determines the correlation), the free-air gravity anomaly variance, a scale factor of the OSU91A error degree-variances, and a maximal summation index, N, of the error degree-variances. The depth of the Bjerhammar sphere varies from -134 km to nearly zero, N varies from 360 to 40, the scale factor from 0.03 to 38.0, and the gravity variance from 1081 to 24 (10 µm s⁻²)². The parameters are interpreted in terms of the quality of the data used to construct OSU91A and GPM98A and general conditions such as the occurrence of mountain chains. The variation of the parameters shows that it is necessary to use regional covariance models in order to obtain a realistic signal-to-noise ratio in global applications. Key words: GOCE mission, Covariance function, Spacewise approach
Eddy covariance based methane flux in Sundarbans mangroves, India
Indian Academy of Sciences (India)
West Bengal Forest Department, Salt Lake, Kolkata 700 098, India. Corresponding author e-mail: surajking123@gmail.com. Keywords: Eddy covariance; mangrove forests; methane flux; Sundarbans. J. Earth Syst. Sci. 123(5), July 2014, pp. 1089–1096. ... terrestrial biomes in India. The main objective of this paper is to present...
Do more detailed environmental covariates deliver more accurate soil maps?
Samuel Rosa, A.; Heuvelink, G.B.M.; Vasques, G.M.; Anjos, L.H.C.
2015-01-01
In this study we evaluated whether investing in more spatially detailed environmental covariates improves the accuracy of digital soil maps. We used a case study from Southern Brazil to map clay content (CLAY), organic carbon content (SOC), and effective cation exchange capacity (ECEC) of the
Different Approaches to Covariate Inclusion in the Mixture Rasch Model
Li, Tongyun; Jiao, Hong; Macready, George B.
2016-01-01
The present study investigates different approaches to adding covariates and the impact in fitting mixture item response theory models. Mixture item response theory models serve as an important methodology for tackling several psychometric issues in test development, including the detection of latent differential item functioning. A Monte Carlo…
Reproducibility of regional metabolic covariance patterns : Comparison of four populations
Moeller; Nakamura, T; Mentis, MJ; Dhawan; Spetsieres, P; Antonini, A; Missimer, J; Leenders, KL; Eidelberg, D
In a previous [F-18]fluorodeoxyglucose (FDG) PET study we analyzed regional metabolic data from a combined group of Parkinson's disease (PD) patients and healthy volunteers (N), using network analysis. By this method, we identified a unique pattern of regional metabolic covariation with an
Covariation of spectral and nonlinear EEG measures with alpha biofeedback.
Fell, J.; Elfadil, H.; Klaver, P.; Roschke, J.; Elger, C.E.; Fernandez, G.S.E.
2002-01-01
This study investigated how different spectral and nonlinear EEG measures covaried with alpha power during auditory alpha biofeedback training, performed by 13 healthy subjects. We found a significant positive correlation of alpha power with the largest Lyapunov-exponent, pointing to an increased
RNA search with decision trees and partial covariance models.
Smith, Jennifer A
2009-01-01
The use of partial covariance models to search for RNA family members in genomic sequence databases is explored. The partial models are formed from contiguous subranges of the overall RNA family multiple alignment columns. A binary decision-tree framework is presented for choosing the order to apply the partial models and the score thresholds on which to make the decisions. The decision trees are chosen to minimize computation time subject to the constraint that all of the training sequences are passed to the full covariance model for final evaluation. Computational intelligence methods are suggested to select the decision tree since the tree can be quite complex and there is no obvious method to build the tree in these cases. Experimental results from seven RNA families show execution times of 0.066-0.268 relative to using the full covariance model alone. Tests on the full sets of known sequences for each family show that at least 95 percent of these sequences are found for two families and 100 percent for five others. Since the full covariance model is run on all sequences accepted by the partial model decision tree, the false alarm rate is at least as low as that of the full model alone.
Unified Approach to Universal Cloning and Phase-Covariant Cloning
Hu, Jia-Zhong; Yu, Zong-Wen; Wang, Xiang-Bin
2008-01-01
We analyze the problem of approximate quantum cloning when the quantum state is between two latitudes on the Bloch sphere. We present an analytical formula for the optimized 1-to-2 cloning. The formula unifies universal quantum cloning (UQCM) and phase-covariant quantum cloning.
Proton-proton virtual bremsstrahlung in a relativistic covariant model
Martinus, GH; Scholten, O; Tjon, J
1999-01-01
Lepton-pair production (virtual bremsstrahlung) in proton-proton scattering is investigated using a relativistic covariant model. The effects of negative-energy states and two-body currents are studied. These are shown to have large effects in some particular structure functions, even at the
Covariant Structure of Models of Geophysical Fluid Motion
Dubos, Thomas
2018-01-01
Geophysical models approximate classical fluid motion in rotating frames. Even accurate approximations can have profound consequences, such as the loss of inertial frames. If geophysical fluid dynamics are not strictly equivalent to Newtonian hydrodynamics observed in a rotating frame, what kind of dynamics are they? We aim to clarify fundamental similarities and differences between relativistic, Newtonian, and geophysical hydrodynamics, using variational and covariant formulations as tools to shed the necessary light. A space-time variational principle for the motion of a perfect fluid is introduced. The geophysical action is interpreted as a synchronous limit of the relativistic action. The relativistic Levi-Civita connection also has a finite synchronous limit, which provides a connection with which to endow geophysical space-time, generalizing Cartan (1923). A covariant mass-momentum budget is obtained using covariance of the action and metric-preserving properties of the connection. Ultimately, geophysical models are found to differ from the standard compressible Euler model only by a specific choice of a metric-Coriolis-geopotential tensor akin to the relativistic space-time metric. Once this choice is made, the same covariant mass-momentum budget applies to Newtonian and all geophysical hydrodynamics, including those models lacking an inertial frame. Hence, it is argued that this mass-momentum budget provides an appropriate, common fundamental principle of dynamics. The postulate that Euclidean, inertial frames exist can then be regarded as part of the Newtonian theory of gravitation, which some models of geophysical hydrodynamics slightly violate.
Eddy Covariance Measurements of the Sea-Spray Aerosol Flux
Brooks, I. M.; Norris, S. J.; Yelland, M. J.; Pascal, R. W.; Prytherch, J.
2015-12-01
Historically, almost all estimates of the sea-spray aerosol source flux have been inferred through various indirect methods. Direct estimates via eddy covariance have been attempted by only a handful of studies, most of which measured only the total number flux, or achieved rather coarse size segregation. Applying eddy covariance to the measurement of sea-spray fluxes is challenging: most instrumentation must be located in a laboratory space requiring long sample lines to an inlet collocated with a sonic anemometer; however, larger particles are easily lost to the walls of the sample line. Marine particle concentrations are generally low, requiring a high sample volume to achieve adequate statistics. The highly hygroscopic nature of sea salt means particles change size rapidly with fluctuations in relative humidity; this introduces an apparent bias in flux measurements if particles are sized at ambient humidity. The Compact Lightweight Aerosol Spectrometer Probe (CLASP) was developed specifically to make high rate measurements of aerosol size distributions for use in eddy covariance measurements, and the instrument and data processing and analysis techniques have been refined over the course of several projects. Here we will review some of the issues and limitations related to making eddy covariance measurements of the sea spray source flux over the open ocean, summarise some key results from the last decade, and present new results from a 3-year long ship-based measurement campaign as part of the WAGES project. Finally we will consider requirements for future progress.
Covariance Structure Models for Gene Expression Microarray Data
Xie, Jun; Bentler, Peter M.
2003-01-01
Covariance structure models are applied to gene expression data using a factor model, a path model, and their combination. The factor model is based on a few factors that capture most of the expression information. A common factor of a group of genes may represent a common protein factor for the transcript of the co-expressed genes, and hence, it…
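The factor-model covariance structure described above can be sketched numerically. This is an illustrative toy with made-up dimensions (8 genes, 2 factors), showing the implied covariance Sigma = Lambda Lambda' + Psi and checking that data simulated from the factor model reproduce it:

```python
import numpy as np

rng = np.random.default_rng(1)
n_genes, n_factors, n_samples = 8, 2, 5000

# Factor loadings Lambda and unique (noise) variances Psi imply the
# structured covariance Sigma = Lambda Lambda^T + Psi.
Lam = rng.normal(size=(n_genes, n_factors))
Psi = np.diag(rng.uniform(0.2, 0.5, size=n_genes))
Sigma = Lam @ Lam.T + Psi

# Simulate expression data from the factor model: common factors F plus
# gene-specific noise E, then compare the sample covariance to Sigma.
F = rng.normal(size=(n_samples, n_factors))
E = rng.normal(size=(n_samples, n_genes)) @ np.sqrt(Psi)
X = F @ Lam.T + E
S = np.cov(X, rowvar=False)

rel_err = np.linalg.norm(S - Sigma) / np.linalg.norm(Sigma)
```

In the abstract's interpretation, each column of Lambda would correspond to a common protein factor shared by a group of co-expressed genes.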
From covariant to canonical formulations of discrete gravity
Dittrich, B.; Höhn, P.A.
2010-01-01
Starting from an action for discretized gravity, we derive a canonical formalism that exactly reproduces the dynamics and (broken) symmetries of the covariant formalism. For linearized Regge calculus on a flat background—which exhibits exact gauge symmetries—we derive local and first-class
Proportional Hazards Model with Covariate Measurement Error and Instrumental Variables.
Song, Xiao; Wang, Ching-Yun
2014-12-01
In biomedical studies, covariates with measurement error may occur in survival data. Existing approaches mostly require certain replications on the error-contaminated covariates, which may not be available in the data. In this paper, we develop a simple nonparametric correction approach for estimation of the regression parameters in the proportional hazards model using a subset of the sample where instrumental variables are observed. The instrumental variables are related to the covariates through a general nonparametric model, and no distributional assumptions are placed on the error and the underlying true covariates. We further propose a novel generalized method of moments nonparametric correction estimator to improve the efficiency over the simple correction approach. The efficiency gain can be substantial when the calibration subsample is small compared to the whole sample. The estimators are shown to be consistent and asymptotically normal. Performance of the estimators is evaluated via simulation studies and by an application to data from an HIV clinical trial. Estimation of the baseline hazard function is not addressed.
Covariation of Color and Luminance Facilitate Object Individuation in Infancy
Woods, Rebecca J.; Wilcox, Teresa
2010-01-01
The ability to individuate objects is one of our most fundamental cognitive capacities. Recent research has revealed that when objects vary in color or luminance alone, infants fail to individuate those objects until 11.5 months. However, color and luminance frequently covary in the natural environment, thus providing a more salient and reliable…
Analysis of Covariance and Randomized Block Design with Heterogeneous Slopes.
Klockars, Alan J.; Beretvas, S. Natasha
2001-01-01
Compared the Type I error rate and the power to detect differences in slopes and additive treatment effects of analysis of covariance (ANCOVA) and randomized block designs through a Monte Carlo simulation. Results show that the more powerful option in almost all simulations for tests of both slope and means was ANCOVA. (SLD)
Scale-dependent background-error covariance localisation
Directory of Open Access Journals (Sweden)
Mark Buehner
2015-12-01
Full Text Available A new approach is presented and evaluated for efficiently applying scale-dependent spatial localisation to ensemble background-error covariances within an ensemble-variational data assimilation system. The approach is primarily motivated by the requirements of future data assimilation systems for global numerical weather prediction that will be capable of resolving the convective scale. Such systems must estimate the global and synoptic scales at least as well as current global systems while also effectively making use of information from frequent and spatially dense observation networks to constrain convective-scale features. Scale-dependent covariance localisation allows a wider range of scales to be efficiently estimated while simultaneously assimilating all available observations. In the context of an idealised numerical experiment, it is shown that using scale-dependent localisation produces an improved ensemble-based estimate of spatially varying covariances as compared with standard spatial localisation. When applied to an ensemble of Arctic sea-ice concentration, it is demonstrated that strong spatial gradients in the relative contribution of different spatial scales in the ensemble covariances result in strong spatial variations in the overall amount of spatial localisation. This feature is qualitatively similar to what might be expected when applying an adaptive localisation approach that estimates a spatially varying localisation function from the ensemble itself. When compared with standard spatial localisation, scale-dependent localisation also results in a lower analysis error for sea-ice concentration over all spatial scales.
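Scale-dependent localisation as described above splits the ensemble into scale bands; the standard spatial localisation it builds on can be sketched compactly. This is a toy 1-D illustration with made-up parameters (a simple quadratic taper, not the operational localisation function): a small ensemble produces spurious long-range covariances, which a Schur (element-wise) product with a compactly supported taper suppresses.

```python
import numpy as np

rng = np.random.default_rng(2)
n_grid, n_ens = 40, 10  # 1-D grid points and a deliberately small ensemble

# "True" background-error covariance: correlation decaying with distance.
x = np.arange(n_grid)
dist = np.abs(x[:, None] - x[None, :])
C_true = np.exp(-(dist / 5.0) ** 2)

# Small ensemble -> noisy sample covariance with spurious long-range terms.
ens = rng.multivariate_normal(np.zeros(n_grid), C_true, size=n_ens)
C_ens = np.cov(ens, rowvar=False)

# Spatial localisation: element-wise product with a taper that is zero
# beyond a cutoff radius (illustrative taper, not Gaspari-Cohn).
radius = 10.0
taper = np.clip(1.0 - dist / radius, 0.0, None) ** 2
C_loc = C_ens * taper

far = dist > radius
err_raw = np.abs(C_ens[far]).mean()   # spurious far-field covariance
err_loc = np.abs(C_loc[far]).mean()   # zero after localisation
```

The scale-dependent variant applies different tapers to different spatial-scale bands of the ensemble, so the effective localisation radius varies with the local mix of scales.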
An alternative covariance estimator to investigate genetic heterogeneity in populations
Genomic predictions and GWAS have used mixed models for identification of associations and trait predictions. In both cases, the covariance between individuals for performance is estimated using molecular markers. Mixed model properties indicate that the use of the data for prediction is optimal if ...
Some observations on interpolating gauges and non-covariant gauges
Indian Academy of Sciences (India)
tion that are not normally taken into account in the BRST formalism that ignores the ε-term, and that they are characteristic of the way the singularities in propagators are handled. We argue that a prescription, in general, will require renormalization if it is to be viable. Keywords. Non-covariant gauges; interpolating ...
Vector and matrix states for Mueller matrices of nondepolarizing optical media.
Kuntman, Ertan; Ali Kuntman, M; Arteaga, Oriol
2017-01-01
Nondepolarizing Mueller matrices contain up to seven independent parameters. However, these seven parameters typically do not appear explicitly among the measured 16 parameters of a Mueller matrix, so that they are not directly accessible for physical interpretation. This work shows that all the information contained in a nondepolarizing Mueller matrix can be conveniently expressed in terms of a four-component covariance vector state or a generating 4×4 matrix, which can be understood as a matrix state. The generating matrix, besides being directly related to the nondepolarizing Mueller matrix, mimics all properties of the Jones matrix and provides a powerful mathematical tool for formulating all properties of nondepolarizing systems, including the Mueller symmetries and the anisotropy coefficients.
Resonance parameter and covariance evaluation for 16O up to 6 MeV
Directory of Open Access Journals (Sweden)
Leal Luiz
2016-01-01
Full Text Available A resolved resonance evaluation was performed for 16O in the energy range 0 eV to 6 MeV using the computer code SAMMY, resulting in a set of resonance parameters (RPs) that describes well the experimental data used in the evaluation. A RP covariance matrix (RPC) was also generated. The RPs were converted to the evaluated nuclear data file format using the R-Matrix Limited format, and the compact format was used to represent the RPC. In contrast to the customary use of RPs, which are frequently intended for the generation of total, capture, and scattering cross sections only, the present RP evaluation permits the computation of angle-dependent cross sections. Furthermore, the RPs are capable of representing the (n, α) cross section from the energy threshold (2.354 MeV) of the (n, α) reaction to 6 MeV. The intent of this paper is to describe the procedures used in the evaluation of the RPs and RPC, the use of the RPC in benchmark calculations, and to assess the impact of the 16O nuclear data uncertainties on the calculated keff for critical benchmark experiments.
DEFF Research Database (Denmark)
Hansen, Kristoffer Arnsfelt; Ibsen-Jensen, Rasmus; Podolskii, Vladimir V.
2013-01-01
For matrix games we study how small nonzero probability must be used in optimal strategies. We show that for [formula] win–lose–draw games (i.e. [formula] matrix games) nonzero probabilities smaller than [formula] are never needed. We also construct an explicit [formula] win–lose game such that the unique optimal...
Covariance Functions and Random Regression Models in the ...
African Journals Online (AJOL)
ARC-IRENE
Since its inception the application of genetic principles to selective breeding of farm animals has led ... animal increases in size or weight continuously over time until reaching a plateau at maturity. Such a process ... where A and I are the numerator relationship matrix and an identity matrix, respectively; KG and KC are the...
Optimization of Data Requests Timing by Working with Matrixes under MSAccess Environment
Directory of Open Access Journals (Sweden)
Alexandru ATOMEI
2010-09-01
Full Text Available This paper presents optimised code for managing matrix calculus under MSAccess. The economic impact of using such a method is an optimal cost-benefit solution and optimised timing for data management. Matrix calculus is also the basis of the variance-covariance method used by financial corporations as an advanced method for estimating market risk movements, with direct impact on the capital required by prudential bodies.
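The variance-covariance (delta-normal) market-risk calculation mentioned above reduces to a small amount of matrix algebra. A minimal sketch with purely illustrative portfolio numbers (weights, volatilities, and correlations are made up):

```python
import numpy as np

# Illustrative inputs: portfolio weights, daily return volatilities,
# and a correlation matrix for three assets.
weights = np.array([0.5, 0.3, 0.2])
vols = np.array([0.02, 0.015, 0.03])
corr = np.array([[1.0, 0.3, 0.1],
                 [0.3, 1.0, 0.2],
                 [0.1, 0.2, 1.0]])

# Covariance matrix: Sigma = D R D, with D = diag(vols).
cov = np.outer(vols, vols) * corr

# Portfolio variance w' Sigma w, then scale by a normal quantile
# (~2.326 for one-sided 99%) for a 1-day value-at-risk estimate.
port_sigma = np.sqrt(weights @ cov @ weights)
var_99 = 2.326 * port_sigma
```

In a database setting such as MSAccess, the covariance matrix would be stored as a table and the quadratic form computed by joins or in code; the algebra itself is unchanged.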
Markowski, Adam S; Mannan, M Sam
2008-11-15
A risk matrix is a mechanism to characterize and rank process risks that are typically identified through one or more multifunctional reviews (e.g., process hazard analysis, audits, or incident investigation). This paper describes a procedure for developing a fuzzy risk matrix that may be used for emerging fuzzy logic applications in different safety analyses (e.g., LOPA). The fuzzification of the frequency and severity of the consequences of the incident scenario, which are the basic inputs for the fuzzy risk matrix, is described. Subsequently, using different designs of the risk matrix, fuzzy rules are established, enabling the development of fuzzy risk matrices. Three types of fuzzy risk matrix have been developed (low-cost, standard, and high-cost), and using a distillation column case study, the effect of the design on the final defuzzified risk index is demonstrated.
Directory of Open Access Journals (Sweden)
Mauricio Valenzuela
2017-10-01
Full Text Available We propose a hybrid class of theories for higher spin gravity and matrix models, i.e., which handle simultaneously higher spin gravity fields and matrix models. The construction is similar to Vasiliev's higher spin gravity, but part of the equations of motion are provided by the action principle of a matrix model. In particular, we construct a higher spin (gravity) matrix model related to type IIB matrix models/string theory that has a well defined classical limit, and which is compatible with higher spin gravity in AdS space. As it has been suggested that higher spin gravity should be related to string theory in a high energy (tensionless) regime, and therefore to M-theory, we expect that our construction will be useful to explore concrete connections.
An alternative covariance estimator to investigate genetic heterogeneity in populations.
Heslot, Nicolas; Jannink, Jean-Luc
2015-11-26
For genomic prediction and genome-wide association studies (GWAS) using mixed models, covariance between individuals is estimated using molecular markers. Based on the properties of mixed models, using available molecular data for prediction is optimal if this covariance is known. Under this assumption, adding individuals to the analysis should never be detrimental. However, some empirical studies showed that increasing training population size decreased prediction accuracy. Recently, results from theoretical models indicated that even if marker density is high and the genetic architecture of traits is controlled by many loci with small additive effects, the covariance between individuals, which depends on relationships at causal loci, is not always well estimated by the whole-genome kinship. We propose an alternative covariance estimator named K-kernel, to account for potential genetic heterogeneity between populations that is characterized by a lack of genetic correlation, and to limit the information flow between a priori unknown populations in a trait-specific manner. This is similar to a multi-trait model and parameters are estimated by REML and, in extreme cases, it can allow for an independent genetic architecture between populations. As such, K-kernel is useful to study the problem of the design of training populations. K-kernel was compared to other covariance estimators or kernels to examine its fit to the data, cross-validated accuracy and suitability for GWAS on several datasets. It provides a significantly better fit to the data than the genomic best linear unbiased prediction model and, in some cases it performs better than other kernels such as the Gaussian kernel, as shown by an empirical null distribution. In GWAS simulations, alternative kernels control type I errors as well as or better than the classical whole-genome kinship and increase statistical power. No or small gains were observed in cross-validated prediction accuracy. This alternative
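The whole-genome kinship that the K-kernel estimator above generalises can be sketched from simulated genotypes. This follows the commonly used frequency-centred form (VanRaden's first method) purely for illustration; the dimensions and allele frequencies are made up:

```python
import numpy as np

rng = np.random.default_rng(3)
n_ind, n_markers = 20, 500

# Biallelic marker genotypes coded 0/1/2, drawn at illustrative allele
# frequencies p_j.
p = rng.uniform(0.1, 0.9, size=n_markers)
M = rng.binomial(2, p, size=(n_ind, n_markers)).astype(float)

# Whole-genome kinship: centre genotypes by 2*p_j, then
# K = Z Z' / (2 * sum_j p_j (1 - p_j)).
Z = M - 2 * p
denom = 2 * np.sum(p * (1 - p))
K = Z @ Z.T / denom
```

K then serves as the covariance of random genetic effects in the mixed model; the K-kernel idea modifies this estimator to damp covariance between a priori unknown subpopulations in a trait-specific way.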
Quantum corrections for the cubic Galileon in the covariant language
Saltas, Ippocratis D.; Vitagliano, Vincenzo
2017-05-01
We present for the first time an explicit exposition of quantum corrections within the cubic Galileon theory including the effect of quantum gravity, in a background- and gauge-invariant manner, employing the field-reparametrisation approach of the covariant effective action at 1-loop. We show that the consideration of gravitational effects in combination with the non-linear derivative structure of the theory reveals new interactions at the perturbative level, which manifest themselves as higher-order operators in the associated effective action, whose relevance is controlled by appropriate ratios of the cosmological vacuum and the Galileon mass scale. The significance and concept of the covariant approach in this context is discussed, while all calculations are explicitly presented.
Visual Representations Of Non-Separable Spatiotemporal Covariance Models
Kolovos, A.; Christakos, G.; Hristopulos, D. T.; Serre, M. L.
2003-12-01
Natural processes that relate to climatic variability (such as air circulation, air-water and air-soil energy exchanges) contain inherently stochastic components. Spatiotemporal random fields are frequently employed to model such processes and deal with the uncertainty involved. Covariance functions are statistical tools that are used to express correlations between process values across space and time. This work focuses on a review and visual representation of a series of useful covariance models that have been introduced in the Modern Spatiotemporal Geostatistics literature. Some of their important features are examined and their application can significantly improve the interpretation of space/time correlations that affect the long-term climatic evolution both on a local or a global scale.
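One family of non-separable space/time covariance models of the kind reviewed above can be evaluated directly. The sketch below uses a simplified Gneiting-type form (parameters and the exact functional form are illustrative, not a specific model from the literature); setting the interaction parameter beta to zero recovers a separable model:

```python
import numpy as np

def gneiting_cov(h, u, sigma2=1.0, a=1.0, c=1.0, beta=1.0):
    """Simplified non-separable space-time covariance:
    C(h, u) = sigma2 / psi(u) * exp(-c * h / psi(u)**(beta / 2)),
    with psi(u) = a * u**2 + 1. beta = 0 gives a separable model."""
    psi = a * u ** 2 + 1.0
    return sigma2 / psi * np.exp(-c * h / psi ** (beta / 2.0))

# Evaluate on a grid of spatial lags h and temporal lags u, as one would
# for the visual representations discussed above.
h = np.linspace(0.0, 3.0, 50)
u = np.linspace(0.0, 3.0, 50)
H, U = np.meshgrid(h, u)
C_nonsep = gneiting_cov(H, U, beta=1.0)
C_sep = gneiting_cov(H, U, beta=0.0)
```

Plotting C as a surface over (h, u) makes the space/time interaction visible: for beta > 0 the spatial correlation range stretches with increasing temporal lag, which a separable model cannot reproduce.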
Scale-covariant theory of gravitation and astrophysical applications
Canuto, V.; Adams, P. J.; Hsieh, S.-H.; Tsiang, E.
1977-01-01
A scale-covariant theory of gravitation is presented which is characterized by a set of equations that are complete only after a choice of the scale function is made. Special attention is given to gauge conditions and units which allow gravitational phenomena to be described in atomic units. The generalized gravitational-field equations are derived by performing a direct scale transformation, by extending Riemannian geometry to Weyl geometry through the introduction of the notion of cotensors, and from a variation principle. Modified conservation laws are provided, a set of dynamical equations is obtained, and astrophysical consequences are considered. The theory is applied to examine certain homogeneous cosmological solutions, perihelion shifts, light deflections, secular variations of planetary orbital elements, stellar structure equations for a star in quasi-static equilibrium, and the past thermal history of earth. The possible relation of the scale-covariant theory to gauge field theories and their predictions of cosmological constants is discussed.
Flavour covariant transport equations: An application to resonant leptogenesis
Directory of Open Access Journals (Sweden)
P.S. Bhupal Dev
2014-09-01
We present a fully flavour-covariant formalism for transport phenomena, by deriving Markovian master equations that describe the time-evolution of particle number densities in a statistical ensemble with arbitrary flavour content. As an application of this general formalism, we study flavour effects in a scenario of resonant leptogenesis (RL) and obtain the flavour-covariant evolution equations for heavy-neutrino and lepton number densities. This provides a complete and unified description of RL, capturing three distinct physical phenomena: (i) the resonant mixing between the heavy-neutrino states, (ii) coherent oscillations between different heavy-neutrino flavours, and (iii) quantum decoherence effects in the charged-lepton sector. To illustrate the importance of this formalism, we numerically solve the flavour-covariant rate equations for a minimal RL model and show that the total lepton asymmetry can be enhanced by up to one order of magnitude, as compared to that obtained from flavour-diagonal or partially flavour off-diagonal rate equations. Thus, the viable RL model parameter space is enlarged, thereby enhancing further the prospects of probing a common origin of neutrino masses and the baryon asymmetry in the Universe at the LHC, as well as in low-energy experiments searching for lepton flavour and number violation. The key new ingredients in our flavour-covariant formalism are rank-4 rate tensors, which are required for the consistency of our flavour-mixing treatment, as shown by an explicit calculation of the relevant transition amplitudes by generalizing the optical theorem. We also provide a geometric and physical interpretation of the heavy-neutrino degeneracy limits in the minimal RL scenario. Finally, we comment on the consistency of various suggested forms for the heavy-neutrino self-energy regulator in the lepton-number conserving limit.
Treatment of Nuclear Data Covariance Information in Sample Generation
Energy Technology Data Exchange (ETDEWEB)
Swiler, Laura Painton [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Adams, Brian M. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Wieselquist, William [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Reactor and Nuclear Systems Division
2017-10-01
This report summarizes a NEAMS (Nuclear Energy Advanced Modeling and Simulation) project focused on developing a sampling capability that can handle the challenges of generating samples from nuclear cross-section data. The covariance information between energy groups tends to be very ill-conditioned and thus poses a problem for traditional methods of generating correlated samples. This report outlines a method that addresses sample generation from cross-section matrices.
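The report develops its own method; as a generic, hypothetical sketch of one standard workaround for ill-conditioned covariance matrices (not the report's algorithm), correlated samples can be drawn via an eigendecomposition with small or negative eigenvalues clipped, where a Cholesky factorization would fail:

```python
import numpy as np

def sample_from_covariance(mean, cov, n_samples, rng, eig_floor=0.0):
    """Draw correlated Gaussian samples when Cholesky fails due to
    ill-conditioning: eigenvalues below eig_floor (e.g. tiny negatives
    from round-off) are clipped, giving the nearest PSD factorization."""
    w, v = np.linalg.eigh(cov)           # cov = V diag(w) V^T
    w = np.clip(w, eig_floor, None)      # clip small/negative eigenvalues
    factor = v * np.sqrt(w)              # factor @ factor.T == clipped cov
    z = rng.standard_normal((n_samples, len(mean)))
    return mean + z @ factor.T

rng = np.random.default_rng(0)
# A nearly singular 3x3 covariance (rank 2 up to round-off):
a = np.array([[1.0, 0.0], [1.0, 1e-8], [0.5, 1.0]])
cov = a @ a.T
samples = sample_from_covariance(np.zeros(3), cov, 50_000, rng)
```

The sample covariance of the draws then reproduces the clipped target matrix to within Monte Carlo error.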
Parkinson's disease spatial covariance pattern: noninvasive quantification with perfusion MRI
Ma, Yilong; Huang, Chaorui; Dyke, Jonathan P.; Pan, Hong; Alsop, David; Feigin, Andrew; Eidelberg, David
2010-01-01
Parkinson's disease (PD) is associated with elevated expression of a specific disease-related spatial covariance pattern (PDRP) in radiotracer scans of cerebral blood flow and metabolism. In this study, we scanned nine early-stage patients with PD and nine healthy controls using continuous arterial spin labeling (CASL) perfusion magnetic resonance imaging (pMRI). Parkinson's disease-related metabolic pattern expression in CASL pMRI scans was compared with the corresponding 18F-fluorodeoxygluc...
Hydrodynamic Covariant Symplectic Structure from Bilinear Hamiltonian Functions
Directory of Open Access Journals (Sweden)
Capozziello S.
2005-07-01
Starting from generic bilinear Hamiltonians, constructed from covariant vector, bivector or tensor fields, it is possible to derive a general symplectic structure which leads to holonomic and anholonomic formulations of the Hamilton equations of motion directly related to a hydrodynamic picture. This feature is gauge-free and seems to be a deep link common to all interactions, electromagnetism and gravity included. This scheme could lead toward a full canonical quantization.
Propagation of nuclear data uncertainty: Exact or with covariances
Directory of Open Access Journals (Sweden)
van Veen D.
2010-10-01
Two distinct methods of propagating basic nuclear data uncertainties to large-scale systems will be presented and compared. The "Total Monte Carlo" method uses a statistical ensemble of nuclear data libraries randomly generated by means of a Monte Carlo approach with the TALYS system. These libraries are then directly used in a large number of reactor calculations (for instance with MCNP), after which the exact probability distribution for the reactor parameter is obtained. The second method makes use of available covariance files and can be done in a single reactor calculation (by using the perturbation method). In this exercise, both methods use consistent sets of data files, which implies that the covariance files used in the second method are directly obtained from the randomly generated nuclear data libraries of the first method. This is a unique and straightforward comparison, allowing one to directly assess the advantages and drawbacks of each method. Comparisons for different reactions and criticality-safety benchmarks from 19F to actinides will be presented. We can thus conclude whether current methods for using covariance data are good enough or not.
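The two routes can be illustrated in miniature (a toy linear model with made-up numbers, not the TALYS/MCNP machinery): first-order "sandwich" propagation of an input covariance versus brute-force Monte Carlo sampling of the inputs, which agree when the model is linear:

```python
import numpy as np

# Toy model: an output depends linearly on two "nuclear data" parameters.
def model(x):
    return 1.0 + 0.3 * x[..., 0] - 0.2 * x[..., 1]

x0 = np.array([1.0, 2.0])
cov_x = np.array([[0.04, 0.01],
                  [0.01, 0.09]])           # input covariance "file"

# Route 1: covariance propagation (sandwich rule) with sensitivities S.
S = np.array([0.3, -0.2])                   # d(output)/dx, exact here
var_sandwich = S @ cov_x @ S

# Route 2: "Total Monte Carlo": sample inputs, rerun the model each time.
rng = np.random.default_rng(1)
xs = rng.multivariate_normal(x0, cov_x, size=200_000)
var_mc = model(xs).var()
```

For a nonlinear model the Monte Carlo route also delivers the full output distribution, which the sandwich rule cannot.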
A geometric rationale for invariance, covariance and constitutive relations
Romano, Giovanni; Barretta, Raffaele; Diaco, Marina
2017-09-01
There are, in each branch of science, statements which, expressed in ambiguous or even incorrect but seemingly friendly manner, were repeated for a long time and eventually became diffusely accepted. Objectivity of physical fields and of their time rates and frame indifference of constitutive relations are among such notions. A geometric reflection on the description of frame changes as spacetime automorphisms, on induced push-pull transformations and on proper physico-mathematical definitions of material, spatial and spacetime tensor fields and of their time-derivatives along the motion, is here carried out with the aim of pointing out essential notions and of unveiling false claims. Theoretical and computational aspects of nonlinear continuum mechanics, and especially those pertaining to constitutive relations, involving material fields and their time rates, gain decisive conceptual and operative improvement from a proper geometric treatment. Outcomes of the geometric analysis are frame covariance of spacetime velocity, material stretching and material spin. A univocal and frame-covariant tool for evaluation of time rates of material fields is provided by the Lie derivative along the motion. The postulate of frame covariance of material fields is assessed to be a natural physical requirement which cannot interfere with the formulation of constitutive laws, with claims of the contrary stemming from an improper imposition of equality in place of equivalence.
Modeling conditional covariance between meteorological and hydrological drought
Modarres, R.
2012-12-01
This study introduces a bivariate Generalized Autoregressive Conditional Heteroscedasticity (GARCH) approach to model the time-varying second-order moment, or conditional variance-covariance structure, of hydrologic and meteorological drought. The standardized streamflow and rainfall time series are selected as drought indices, and the bivariate diagonal BEKK model is applied to estimate the conditional variance-covariance structure between hydrologic and meteorological drought. Results of the diagonal BEKK(1,1) model indicate that the conditional variance of meteorological drought is weak and much smaller than that of hydrological drought, which shows a strong volatility effect. However, both drought indices show a weak memory in the conditional variance. It is also observed that the conditional covariance between the two drought indices is weak and shows only a slight short-run volatility effect. This may reflect the effect of basin features, such as groundwater storage and physical characteristics, which attenuate and modify the effect of meteorological drought on hydrologic drought at the basin scale. (Figure captions: conditional correlation time series between meteorological and hydrologic drought at two selected stations; monthly variation of conditional correlation between meteorological and hydrologic drought at two selected stations.)
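The diagonal BEKK(1,1) recursion used in such studies can be sketched generically (fixed illustrative parameters and simulated shocks, not estimates from the drought series):

```python
import numpy as np

def diagonal_bekk_11(eps, C, a, b):
    """Conditional covariance recursion of a bivariate diagonal BEKK(1,1):
        H_t = C C' + A e_{t-1} e'_{t-1} A + B H_{t-1} B,
    with A = diag(a), B = diag(b); eps has shape (T, 2)."""
    A, B = np.diag(a), np.diag(b)
    H = np.empty((len(eps), 2, 2))
    H[0] = np.cov(eps.T)                   # initialize at sample covariance
    for t in range(1, len(eps)):
        e = eps[t - 1][:, None]            # previous shock as column vector
        H[t] = C @ C.T + A @ (e @ e.T) @ A + B @ H[t - 1] @ B
    return H

rng = np.random.default_rng(2)
eps = rng.standard_normal((500, 2))        # stand-in for drought residuals
C = np.array([[0.3, 0.0], [0.1, 0.3]])     # lower-triangular constant term
H = diagonal_bekk_11(eps, C, a=[0.2, 0.3], b=[0.7, 0.6])
corr = H[:, 0, 1] / np.sqrt(H[:, 0, 0] * H[:, 1, 1])  # conditional correlation
```

Because every term in the recursion is positive semidefinite (and C C' is positive definite), each H_t stays a valid covariance matrix and the conditional correlation stays inside (-1, 1).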
Hyperbolic Covariant Coherent Structures in Two Dimensional Flows
Directory of Open Access Journals (Sweden)
Giovanni Conti
2017-09-01
A new method to describe hyperbolic patterns in two-dimensional flows is proposed. The method is based on the Covariant Lyapunov Vectors (CLVs), which have the properties of being covariant with the dynamics (and thus being mapped by the tangent linear operator into another CLV basis); they are norm-independent, invariant under time reversal, and cannot be orthonormal. CLVs can thus give more detailed information about the expansion and contraction directions of the flow than the Lyapunov vector bases, which are instead always orthogonal. We suggest a definition of Hyperbolic Covariant Coherent Structures (HCCSs), which can be defined on the scalar field representing the angle between the CLVs. HCCSs can be defined for every time instant and could be useful to understand the long-term behavior of particle tracers. We consider three examples: a simple autonomous Hamiltonian system, as well as the non-autonomous "double gyre" and Bickley jet, to see how well the angle is able to describe particular patterns and barriers. We compare the results from the HCCSs with other coherent patterns defined on finite time by the Finite-Time Lyapunov Exponents (FTLEs), to see how the behaviors of these structures change asymptotically.
Estimation of Fuzzy Measures Using Covariance Matrices in Gaussian Mixtures
Directory of Open Access Journals (Sweden)
Nishchal K. Verma
2012-01-01
This paper presents a novel computational approach for estimating fuzzy measures directly from a Gaussian mixture model (GMM). The mixture components of the GMM provide the membership functions for the input-output fuzzy sets. By treating the consequent part as a function of fuzzy measures, we derived its coefficients from the covariance matrices found directly in the GMM, and the defuzzified output was constructed from both the premise and consequent parts of the nonadditive fuzzy rules, taking the form of a Choquet integral. The computational burden involved in the solution of the λ-measure is minimized using the Q-measure. The fuzzy model whose fuzzy measures were computed using covariance matrices found in the GMM has been successfully applied to two benchmark problems and one real-time electric load dataset from an Indian utility. The performance of the resulting model in many experimental studies, including the above-mentioned application, is found to be better than or comparable to recently available fuzzy models. The main contribution of this paper is the efficient estimation of fuzzy measures directly from the covariance matrices found in the GMM, greatly reducing the computational burden of learning them iteratively and of solving polynomial equations of order equal to the number of input-output variables.
Bilinear covariants and spinor fields duality in quantum Clifford algebras
Energy Technology Data Exchange (ETDEWEB)
Abłamowicz, Rafał, E-mail: rablamowicz@tntech.edu [Department of Mathematics, Box 5054, Tennessee Technological University, Cookeville, Tennessee 38505 (United States); Gonçalves, Icaro, E-mail: icaro.goncalves@ufabc.edu.br [Instituto de Matemática e Estatística, Universidade de São Paulo, Rua do Matão, 1010, 05508-090, São Paulo, SP (Brazil); Centro de Matemática, Computação e Cognição, Universidade Federal do ABC, 09210-170 Santo André, SP (Brazil); Rocha, Roldão da, E-mail: roldao.rocha@ufabc.edu.br [Centro de Matemática, Computação e Cognição, Universidade Federal do ABC, 09210-170 Santo André, SP (Brazil); International School for Advanced Studies (SISSA), Via Bonomea 265, 34136 Trieste (Italy)
2014-10-15
Classification of quantum spinor fields according to quantum bilinear covariants is introduced in a context of quantum Clifford algebras on Minkowski spacetime. Once the bilinear covariants are expressed in terms of algebraic spinor fields, the duality between spinor and quantum spinor fields can be discussed. Thus, by endowing the underlying spacetime with an arbitrary bilinear form with an antisymmetric part in addition to a symmetric spacetime metric, quantum algebraic spinor fields and deformed bilinear covariants can be constructed. They are thus compared to the classical (non-quantum) ones. Classes of quantum spinor fields are introduced and compared with Lounesto's spinor field classification. A physical interpretation of the deformed parts and the underlying Z-grading is proposed. The existence of an arbitrary bilinear form endowing the spacetime has already been explored in the literature in the context of quantum gravity [S. W. Hawking, "The unpredictability of quantum gravity," Commun. Math. Phys. 87, 395 (1982)]. Here, it is shown further to play a prominent role in the structure of Dirac, Weyl, and Majorana spinor fields, besides the most general flagpoles and flag-dipoles. We introduce a new duality between the standard and the quantum spinor fields, by showing that when Clifford algebras over vector spaces endowed with an arbitrary bilinear form are taken into account, a mixture among the classes does occur. Consequently, novel features regarding the spinor fields can be derived.
Resting-state brain organization revealed by functional covariance networks.
Directory of Open Access Journals (Sweden)
Zhiqiang Zhang
BACKGROUND: Brain network studies using techniques of intrinsic connectivity networks based on fMRI time series (TS-ICN) and structural covariance networks (SCN) have mapped out the functional and structural organization of the human brain at their respective time scales. However, a meso-time-scale network is lacking to bridge the ICN and SCN and gain insight into brain functional organization. METHODOLOGY AND PRINCIPAL FINDINGS: We proposed a functional covariance network (FCN) method by measuring the covariance of the amplitude of low-frequency fluctuations (ALFF) in BOLD signals across subjects, and compared the patterns of ALFF-FCNs with TS-ICNs and SCNs by mapping the brain networks of the default network, task-positive network and sensory networks. We demonstrated large overlap among FCNs, ICNs and SCNs, and a modular nature in FCNs and ICNs, by using conjunction analysis. Most interestingly, FCN analysis showed a network dichotomy consisting of an anti-correlated high-level cognitive system and a low-level perceptive system, which is a novel finding different from the ICN dichotomy consisting of the default-mode network and the task-positive network. CONCLUSION: The current study proposed an ALFF-FCN approach to measure the interregional correlation of brain activity responding to short periods of state, and revealed novel organization patterns of resting-state brain activity at an intermediate time scale.
Analysis of covariance with incomplete data via semiparametric model transformations.
Grigoletto, M; Akritas, M G
1999-12-01
We propose a method for fitting semiparametric models such as the proportional hazards (PH), additive risks (AR), and proportional odds (PO) models. Each of these semiparametric models implies that some transformation of the conditional cumulative hazard function (at each t) depends linearly on the covariates. The proposed method is based on nonparametric estimation of the conditional cumulative hazard function, forming a weighted average over a range of t-values, and subsequent use of least squares to estimate the parameters suggested by each model. An approximation to the optimal weight function is given. This allows semiparametric models to be fitted even in incomplete data cases where the partial likelihood fails (e.g., left censoring, right truncation). However, the main advantage of this method rests in the fact that neither the interpretation of the parameters nor the validity of the analysis depend on the appropriateness of the PH or any of the other semiparametric models. In fact, we propose an integrated method for data analysis where the role of the various semiparametric models is to suggest the best fitting transformation. A single continuous covariate and several categorical covariates (factors) are allowed. Simulation studies indicate that the test statistics and confidence intervals have good small-sample performance. A real data set is analyzed.
A geometric rationale for invariance, covariance and constitutive relations
Romano, Giovanni; Barretta, Raffaele; Diaco, Marina
2018-01-01
There are, in each branch of science, statements which, expressed in ambiguous or even incorrect but seemingly friendly manner, were repeated for a long time and eventually became diffusely accepted. Objectivity of physical fields and of their time rates and frame indifference of constitutive relations are among such notions. A geometric reflection on the description of frame changes as spacetime automorphisms, on induced push-pull transformations and on proper physico-mathematical definitions of material, spatial and spacetime tensor fields and of their time-derivatives along the motion, is here carried out with the aim of pointing out essential notions and of unveiling false claims. Theoretical and computational aspects of nonlinear continuum mechanics, and especially those pertaining to constitutive relations, involving material fields and their time rates, gain decisive conceptual and operative improvement from a proper geometric treatment. Outcomes of the geometric analysis are frame covariance of spacetime velocity, material stretching and material spin. A univocal and frame-covariant tool for evaluation of time rates of material fields is provided by the Lie derivative along the motion. The postulate of frame covariance of material fields is assessed to be a natural physical requirement which cannot interfere with the formulation of constitutive laws, with claims of the contrary stemming from an improper imposition of equality in place of equivalence.
Eves, Howard
1980-01-01
The usefulness of matrix theory as a tool in disciplines ranging from quantum mechanics to psychometrics is widely recognized, and courses in matrix theory are increasingly a standard part of the undergraduate curriculum.This outstanding text offers an unusual introduction to matrix theory at the undergraduate level. Unlike most texts dealing with the topic, which tend to remain on an abstract level, Dr. Eves' book employs a concrete elementary approach, avoiding abstraction until the final chapter. This practical method renders the text especially accessible to students of physics, engineeri
The correlation matrix of Higgs rates at the LHC
Energy Technology Data Exchange (ETDEWEB)
Arbey, Alexandre [Univ Lyon, Univ Lyon 1, ENS de Lyon, CNRS,Centre de Recherche Astrophysique de Lyon UMR5574,F-69230 Saint-Genis-Laval (France); Theoretical Physics Department, CERN,CH-1211 Geneva 23 (Switzerland); Fichet, Sylvain [ICTP-SAIFR & IFT-UNESP,Rua Dr. Bento Teobaldo Ferraz 271, Sao Paulo (Brazil); Mahmoudi, Farvah [Univ Lyon, Univ Lyon 1, ENS de Lyon, CNRS,Centre de Recherche Astrophysique de Lyon UMR5574,F-69230 Saint-Genis-Laval (France); Theoretical Physics Department, CERN,CH-1211 Geneva 23 (Switzerland); Moreau, Grégory [Laboratoire de Physique Théorique, CNRS, Université Paris-Sud 11, Bât. 210, F-91405 Orsay Cedex (France)
2016-11-17
The imperfect knowledge of the Higgs boson decay rates and cross sections at the LHC constitutes a critical systematic uncertainty in the study of the Higgs boson properties. We show that the full covariance matrix between the Higgs rates can be determined from the most elementary sources of uncertainty by a direct application of probability theory. We evaluate the error magnitudes and full correlation matrix on the set of Higgs cross sections and branching ratios at √s=7, 8, 13 and 14 TeV, which are provided in ancillary files. The impact of this correlation matrix on the global fits is illustrated with the latest 7+8 TeV Higgs dataset.
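The construction can be sketched in miniature (hypothetical numbers, not the paper's actual uncertainty budget): if each rate responds linearly to a few independent elementary error sources, the full covariance of the rates is J J^T, from which the correlation matrix follows:

```python
import numpy as np

# Rows: observables (e.g. two cross sections and a branching ratio);
# columns: independent elementary error sources (e.g. PDF, scale, alpha_s).
# J[i, s] = absolute shift of observable i under a 1-sigma move of source s.
# All numbers below are illustrative only.
J = np.array([[0.05, 0.03, 0.01],
              [0.04, 0.03, 0.00],
              [0.00, 0.01, 0.02]])

cov = J @ J.T                          # full covariance of the rates
sig = np.sqrt(np.diag(cov))
corr = cov / np.outer(sig, sig)        # full correlation matrix
```

Observables sharing the same dominant sources (the first two rows) come out strongly correlated, while the third, driven by a different source, is only weakly correlated with them.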
The correlation matrix of Higgs rates at the LHC
Arbey, Alexandre; Mahmoudi, Farvah; Moreau, Grégory
2016-11-17
The imperfect knowledge of the Higgs boson LHC cross sections and decay rates constitutes a critical systematic uncertainty in the study of the Higgs boson properties. We show that the full covariance matrix between the Higgs rates can be determined from the most elementary sources of uncertainty by a direct application of probability theory. We evaluate the error magnitudes and full correlation matrix on the set of Higgs cross sections and partial decay widths at √s = 7, 8, 13 and 14 TeV, which are provided in ancillary files. The impact of this correlation matrix on the global fits is illustrated with the latest 7+8 TeV Higgs dataset.
A hierarchical nest survival model integrating incomplete temporally varying covariates
Converse, Sarah J.; Royle, J. Andrew; Adler, Peter H.; Urbanek, Richard P.; Barzan, Jeb A.
2013-01-01
Nest success is a critical determinant of the dynamics of avian populations, and nest survival modeling has played a key role in advancing avian ecology and management. Beginning with the development of daily nest survival models, and proceeding through subsequent extensions, the capacity for modeling the effects of hypothesized factors on nest survival has expanded greatly. We extend nest survival models further by introducing an approach to deal with incompletely observed, temporally varying covariates using a hierarchical model. Hierarchical modeling offers a way to separate process and observational components of demographic models to obtain estimates of the parameters of primary interest, and to evaluate structural effects of ecological and management interest. We built a hierarchical model for daily nest survival to analyze nest data from reintroduced whooping cranes (Grus americana) in the Eastern Migratory Population. This reintroduction effort has been beset by poor reproduction, apparently due primarily to nest abandonment by breeding birds. We used the model to assess support for the hypothesis that nest abandonment is caused by harassment from biting insects. We obtained indices of blood-feeding insect populations based on the spatially interpolated counts of insects captured in carbon dioxide traps. However, insect trapping was not conducted daily, and so we had incomplete information on a temporally variable covariate of interest. We therefore supplemented our nest survival model with a parallel model for estimating the values of the missing insect covariates. We used Bayesian model selection to identify the best predictors of daily nest survival. Our results suggest that the black fly Simulium annulus may be negatively affecting nest survival of reintroduced whooping cranes, with decreasing nest survival as abundance of S. annulus increases. The modeling framework we have developed will be applied in the future to a larger data set to evaluate the
The "Pesticide-exposure Matrix" was developed to help epidemiologists and other researchers identify the active ingredients to which people were likely exposed when their homes and gardens were treated for pests in past years.
Tendon functional extracellular matrix.
Screen, Hazel R C; Berk, David E; Kadler, Karl E; Ramirez, Francesco; Young, Marian F
2015-06-01
This article is one of a series, summarizing views expressed at the Orthopaedic Research Society New Frontiers in Tendon Research Conference. This particular article reviews the three workshops held under the "Functional Extracellular Matrix" stream. The workshops focused on the roles of the tendon extracellular matrix, such as performing the mechanical functions of tendon, creating the local cell environment, and providing cellular cues. Tendon is a complex network of matrix and cells, and its biological functions are influenced by widely varying extrinsic and intrinsic factors such as age, nutrition, exercise levels, and biomechanics. Consequently, tendon adapts dynamically during development, aging, and injury. The workshop discussions identified research directions associated with understanding cell-matrix interactions to be of prime importance for developing novel strategies to target tendon healing or repair. © 2015 Orthopaedic Research Society. Published by Wiley Periodicals, Inc.
Bedford, J; Papageorgakis, C.; Rodriguez-Gomez, D.; Ward, J.
2007-01-01
Following the holographic description of linear dilaton null Cosmologies with a Big Bang in terms of Matrix String Theory put forward by Craps, Sethi and Verlinde, we propose an extended background describing a Universe including both Big Bang and Big Crunch singularities. This belongs to a class of exact string backgrounds and is perturbative in the string coupling far away from the singularities, both of which can be resolved using Matrix String Theory. We provide a simple theory capable of...
The Matrix Organization Revisited
DEFF Research Database (Denmark)
Gattiker, Urs E.; Ulhøi, John Parm
1999-01-01
This paper gives a short overview of matrix structure and technology management. It outlines some of the characteristics and also points out that many organizations may actually be hybrids (i.e., they mix several ways of organizing to allocate resources effectively).
DEFF Research Database (Denmark)
Schneider, Jesper Wiborg; Borlund, Pia
2007-01-01
The present two-part article introduces matrix comparison as a formal means for evaluation purposes in informetric studies such as cocitation analysis. In the first part, the motivation behind introducing matrix comparison to informetric studies is presented, as well as two important issues influencing such comparisons ... and Procrustes analysis can be used as statistical validation tools in informetric studies and thus help in choosing suitable proximity measures.
Czerwinski, Michael; Spence, Jason R
2017-01-05
Recently in Nature, Gjorevski et al. (2016) describe a fully defined synthetic hydrogel that mimics the extracellular matrix to support in vitro growth of intestinal stem cells and organoids. The hydrogel allows exquisite control over the chemical and physical in vitro niche and enables identification of regulatory properties of the matrix. Copyright © 2017 Elsevier Inc. All rights reserved.
Energy Technology Data Exchange (ETDEWEB)
Lepretre, A.; Herault, N. [CEA Saclay, Dept. d'Astrophysique, de Physique des Particules, de Physique Nucleaire et de l'Instrumentation Associee, 91 - Gif-sur-Yvette (France); Brusegan, A.; Noguere, G.; Siegler, P. [Institut des Materiaux et des Metrologies - IRMM, Joint Research Centre, Geel (Belgium)
2002-12-01
This report is a follow-up of report CEA DAPNIA/SPHN-99-04T by Vincent Gressier. In the framework of a collaboration between the Commissariat à l'Énergie Atomique (CEA) and the Institute for Reference Materials and Measurements (IRMM, Geel, Belgium), the resonance parameters of neptunium-237 have been determined in the energy interval between 0 and 500 eV. These parameters have been obtained by using the Refit code to analyse three transmission experiments simultaneously. The covariance matrix of statistical origin is provided. A new method, based on various sensitivity studies, is proposed for also determining the covariance matrix of systematic origin relating the resonance parameters. From an experimental viewpoint, the study indicated that, with large probability, the background spectrum has structure. A two-dimensional profiler for the neutron density has been shown to be feasible. Such a profiler could, among other things, demonstrate the existence of the structured background. (authors)
Radial Covariance Functions Motivated by Spatial Random Field Models with Local Interactions
Hristopulos, Dionissios T.
2014-01-01
We derive explicit expressions for a family of radially symmetric, non-differentiable, Spartan covariance functions in $\mathbb{R}^2$ that involve the modified Bessel function of the second kind. In addition to the characteristic length and the amplitude coefficient, the Spartan covariance parameters include the rigidity coefficient $\eta_{1}$, which determines the shape of the covariance function. If $\eta_{1} \gg 1$, Spartan covariance functions exhibit multiscaling. We also derive a family o...
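Covariances built on the modified Bessel function of the second kind can be evaluated directly; as a related, standard example (the Matérn family highlighted elsewhere in this collection, not the Spartan family's exact expressions):

```python
import numpy as np
from scipy.special import kv, gamma  # kv: modified Bessel fn of 2nd kind

def matern_covariance(r, length, sigma2=1.0, nu=1.5):
    """Matérn covariance: sigma2 * 2^{1-nu}/Gamma(nu) * s^nu * K_nu(s),
    with s = sqrt(2 nu) r / length."""
    r = np.asarray(r, dtype=float)
    s = np.sqrt(2.0 * nu) * r / length
    out = np.full_like(s, sigma2)          # limit r -> 0 is sigma2
    pos = s > 0
    out[pos] = (sigma2 * 2.0 ** (1.0 - nu) / gamma(nu)
                * s[pos] ** nu * kv(nu, s[pos]))
    return out

r = np.linspace(0.0, 3.0, 7)
c = matern_covariance(r, length=1.0)       # decays monotonically from sigma2
```

For nu = 1.5 this reduces to the closed form (1 + s) exp(-s), a handy sanity check on the Bessel-function evaluation.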
Combined Use of Integral Experiments and Covariance Data
Palmiotti, G.; Salvatores, M.; Aliberti, G.; Herman, M.; Hoblit, S. D.; McKnight, R. D.; Obložinský, P.; Talou, P.; Hale, G. M.; Hiruta, H.; Kawano, T.; Mattoon, C. M.; Nobre, G. P. A.; Palumbo, A.; Pigni, M.; Rising, M. E.; Yang, W.-S.; Kahler, A. C.
2014-04-01
In the frame of a US-DOE sponsored project, ANL, BNL, INL and LANL have performed a joint multidisciplinary research activity in order to explore the combined use of integral experiments and covariance data with the objective to both give quantitative indications on possible improvements of the ENDF evaluated data files and to reduce at the same time crucial reactor design parameter uncertainties. Methods that have been developed in the last four decades for the purposes indicated above have been improved by some new developments that benefited also by continuous exchanges with international groups working in similar areas. The major new developments that allowed significant progress are to be found in several specific domains: a) new science-based covariance data; b) integral experiment covariance data assessment and improved experiment analysis, e.g., of sample irradiation experiments; c) sensitivity analysis, where several improvements were necessary despite the generally good understanding of these techniques, e.g., to account for fission spectrum sensitivity; d) a critical approach to the analysis of statistical adjustments performance, both a priori and a posteriori; e) generalization of the assimilation method, now applied for the first time not only to multigroup cross sections data but also to nuclear model parameters (the "consistent" method). This article describes the major results obtained in each of these areas; a large scale nuclear data adjustment, based on the use of approximately one hundred high-accuracy integral experiments, will be reported along with a significant example of the application of the new "consistent" method of data assimilation.
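The statistical adjustment at the heart of such work can be sketched with a textbook generalized-least-squares update (tiny illustrative numbers, not the project's hundred-experiment adjustment):

```python
import numpy as np

# Prior nuclear data x with covariance C; integral experiments E measured
# with covariance V; S holds sensitivities of calculated values to the data.
x = np.array([1.00, 2.00, 0.50])            # illustrative prior data
C = np.diag([0.04, 0.09, 0.01])             # prior covariance
S = np.array([[0.6, 0.2, 0.1],
              [0.1, 0.5, 0.3]])             # sensitivities (2 experiments)
calc = S @ x                                # calculated integral values
E = calc + np.array([0.05, -0.02])          # "measured" values (C/E != 1)
V = np.diag([0.02**2, 0.02**2])             # experimental covariance

# Generalized least-squares adjustment and posterior covariance:
G = C @ S.T @ np.linalg.inv(S @ C @ S.T + V)   # gain matrix
x_post = x + G @ (E - calc)                    # adjusted data
C_post = C - G @ S @ C                         # reduced uncertainties
```

The adjustment pulls the calculated values toward the measurements while shrinking the posterior variances, which is the mechanism behind the uncertainty reductions the article reports.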
Moderating the covariance between family member's substance use behavior.
Verhulst, Brad; Eaves, Lindon J; Neale, Michael C
2014-07-01
Twin and family studies implicitly assume that the covariation between family members remains constant across differences in age between the members of the family. However, age-specificity in gene expression for shared environmental factors could generate higher correlations between family members who are more similar in age. Cohort effects (cohort × genotype or cohort × common environment) could have the same effects, and both potentially reduce effect sizes estimated in genome-wide association studies where the subjects are heterogeneous in age. In this paper we describe a model in which the covariance between twins and non-twin siblings is moderated as a function of age difference. We describe the details of the model and simulate data using a variety of different parameter values to demonstrate that model fitting returns unbiased parameter estimates. Power analyses are then conducted to estimate the sample sizes required to detect the effects of moderation in a design of twins and siblings. Finally, the model is applied to data on cigarette smoking. We find that (1) the model effectively recovers the simulated parameters, (2) the power is relatively low and therefore requires large sample sizes before small to moderate effect sizes can be found reliably, and (3) the genetic covariance between siblings for smoking behavior decays very rapidly. Result 3 implies that, e.g., genome-wide studies of smoking behavior that use individuals assessed at different ages, or belonging to different birth-year cohorts may have had substantially reduced power to detect effects of genotype on cigarette use. It also implies that significant special twin environmental effects can be explained by age-moderation in some cases. This effect likely contributes to the missing heritability paradox.
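The moderation idea can be sketched minimally (an illustrative exponential-decay form with made-up parameter values, not the paper's fitted model):

```python
import numpy as np

def sibling_covariance(age_diff, a2=0.5, c2=0.2, decay=0.3):
    """Expected trait covariance between two siblings as a function of
    their age difference: an age-moderated component (a2) decays with
    |age difference| while a stable component (c2) does not.
    Illustrative functional form and values only."""
    return a2 * np.exp(-decay * np.abs(age_diff)) + c2

diffs = np.arange(0, 11)           # age differences of 0..10 years
cv = sibling_covariance(diffs)     # 0.7 for twins, decaying toward 0.2
```

Under rapid decay, as found for smoking, pairs far apart in age retain almost none of the age-moderated covariance, which is why age-heterogeneous samples lose power.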
Kiehn, R. M.
1976-01-01
With respect to irreversible, non-homeomorphic maps, contravariant and covariant tensor fields have distinctly natural covariance and transformational behavior. For thermodynamic processes which are non-adiabatic, the fact that the process cannot be represented by a homeomorphic map emphasizes the logical arrow of time, an idea which encompasses a principle of retrodictive determinism for covariant tensor fields.
GARCH modelling of covariance in dynamical estimation of inverse solutions
Energy Technology Data Exchange (ETDEWEB)
Galka, Andreas [Institute of Experimental and Applied Physics, University of Kiel, 24098 Kiel (Germany) and Institute of Statistical Mathematics (ISM), Minami-Azabu 4-6-7, Tokyo 106-8569 (Japan)]. E-mail: galka@physik.uni-kiel.de; Yamashita, Okito [ATR Computational Neuroscience Laboratories, Hikaridai 2-2-2, Kyoto 619-0288 (Japan); Ozaki, Tohru [Institute of Statistical Mathematics (ISM), Minami-Azabu 4-6-7, Tokyo 106-8569 (Japan)
2004-12-06
The problem of estimating unobserved states of spatially extended dynamical systems poses an inverse problem, which can be solved approximately by a recently developed variant of Kalman filtering; in order to provide the model of the dynamics with more flexibility with respect to space and time, we suggest combining the concept of GARCH modelling of covariance, well known in econometrics, with Kalman filtering. We formulate this algorithm for spatiotemporal systems governed by stochastic diffusion equations and demonstrate its feasibility by presenting a numerical simulation designed to imitate the situation of the generation of electroencephalographic recordings by the human cortex.
Conformal invariant cosmological perturbations via the covariant approach
Li, Mingzhe
2015-01-01
It is known that some cosmological perturbations are conformal invariant. This facilitates the studies of perturbations within some gravitational theories alternative to general relativity, for example the scalar-tensor theory, because it is possible to do equivalent analysis in a certain frame in which the perturbation equations are simpler. In this paper we revisit the problem of conformal invariances of cosmological perturbations in terms of the covariant approach in which the perturbation variables have clear geometric and physical meanings. We show that with this approach the conformal invariant perturbations are easily identified.
Problems and Progress in Covariant High Spin Description
Kirchbach, Mariana; Banda Guzmán, Víctor Miguel
2016-10-01
A universal description of particles with spins j > 1, transforming in (j, 0) ⊕ (0, j), is developed by means of representation-specific second-order differential wave equations without auxiliary conditions and in covariant bases such as Lorentz tensors for bosons, Lorentz tensors with Dirac spinor components for fermions, or, within the basis of the more fundamental Weyl-Van-der-Waerden sl(2,C) spinor-tensors. At the root of the method, which is free from the pathologies suffered by the traditional approaches, are projectors constructed from the Casimir invariants of the spin-Lorentz group and of the group of translations in Minkowski spacetime.
Covariance-based maneuver optimization for NEO threats
Peterson, G.
The Near Earth Object (NEO) conjunction analysis and mitigation problem is fundamentally the same as Earth-centered space traffic control, albeit on a larger scale and in different temporal and spatial frames. The Aerospace Corporation has been conducting conjunction detection and collision avoidance analysis for a variety of satellite systems in the Earth environment for over 3 years. As part of this process, techniques have been developed that are applicable to analyzing the NEO threat. In space traffic control operations in the Earth orbiting environment, dangerous conjunctions between satellites are determined using collision probability models, realistic covariances, and accurate trajectories in the software suite Collision Vision. Once a potentially dangerous conjunction (or series of conjunctions) is found, a maneuver solution is developed through the program DVOPT (DeltaV OPTimization) that will reduce the risk to a pre-defined acceptable level. DVOPT works by taking the primary's state vector at conjunction, back-propagating it to the time of the proposed burn, then applying the burn to the state vector, and forward-propagating back to the time of the original conjunction. The probability of collision is then re-computed based upon the new state vector and original covariances. This backwards-forwards propagation is coupled with a search algorithm to find the optimal burn solution as a function of time. Since the burns are small (typically cm/sec for Earth-centered space traffic control), Kepler's Equation was assumed for the backwards-forwards propagation with little loss in accuracy. The covariance-based DVOPT process can be easily expanded to cover heliocentric orbits and conjunctions between the Earth and an approaching object. It is shown that minimizing the burn to increase the miss distance between the conjuncting objects does not correspond to a burn solution that minimizes the probability of impact between the same two objects. Since a
Covariances for the 56Fe radiation damage cross sections
Simakov, Stanislav P.; Koning, Arjan; Konobeyev, Alexander Yu.
2017-09-01
The energy-energy and reaction-reaction covariance matrices were calculated for the n + 56Fe damage cross sections by the Total Monte Carlo method using the TENDL-2013 random files. They were represented in the ENDF-6 format and added to the unperturbed evaluation file. The uncertainties for the spectrum-averaged radiation quantities in representative fission, fusion and spallation facilities were assessed for the first time as 5-25%. An additional 5 to 20% has to be added to the atom displacement rate uncertainties to account for the accuracy of the primary-defect simulation in materials. The reaction-reaction correlations were shown to be 1% or less.
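The ensemble step of a Total Monte Carlo covariance calculation can be sketched in a few lines (an illustrative sketch only; reading TENDL random files and writing ENDF-6 output are out of scope, and the function name is an assumption):

```python
import numpy as np

def tmc_covariance(samples):
    """Estimate an energy-energy covariance matrix from an ensemble of
    sampled cross-section curves, one row per random file: the sample
    covariance across the ensemble for each pair of energy points."""
    samples = np.asarray(samples, dtype=float)
    return np.cov(samples, rowvar=False)
```

With `rowvar=False` the columns (energy points) are treated as the variables and the rows (random files) as observations, so the result is an n_energy × n_energy matrix.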
Lorentz-like covariant equations of non-relativistic fluids
Montigny, M D; Santana, A E
2003-01-01
We use a geometrical formalism of Galilean invariance to build various hydrodynamics models. It consists of embedding the Newtonian spacetime into a non-Euclidean 4 + 1 space and thereby provides a procedure that unifies models otherwise apparently unrelated. After expressing the Navier-Stokes equation within this framework, we show that slight modifications of its Lagrangian allow us to recover the Chaplygin equation of state as well as models of superfluids for liquid helium (with both its irrotational and rotational components). Other fluid equations are also expressed in a covariant form.
Partially linear varying coefficient models stratified by a functional covariate
Maity, Arnab
2012-10-01
We consider the problem of estimation in semiparametric varying coefficient models where the covariate modifying the varying coefficients is functional and is modeled nonparametrically. We develop a kernel-based estimator of the nonparametric component and a profiling estimator of the parametric component of the model and derive their asymptotic properties. Specifically, we show the consistency of the nonparametric functional estimates and derive the asymptotic expansion of the estimates of the parametric component. We illustrate the performance of our methodology using a simulation study and a real data application.
Locality in the gauge-covariant field theory of strings
Energy Technology Data Exchange (ETDEWEB)
Kaku, Michio
1985-11-07
Recently, we wrote down the gauge-covariant field theory of the free bosonic, super, and heterotic strings. These second quantized actions were derived from path integrals in the same way as Feynman derived the Schroedinger equation. These actions possess all the local gauge invariance of the super Virasoro algebra. These actions, however, are non-local. It has been conjectured that these actions can be made local by adding auxiliary fields. In this paper, we prove this conjecture to all orders, making our action explicitly local. (orig.).
Local Lorentz covariance in finite-dimensional local quantum physics
Raasakka, Matti
2017-10-01
We show that local Lorentz covariance arises canonically as the group of transformations between local thermal states in the framework of local quantum physics, given the following three postulates: (i) Local observable algebras are finite-dimensional. (ii) Minimal local observable algebras are isomorphic to M2(C ) , the observable algebra of a single qubit. (iii) The vacuum restricted to any minimal local observable algebra is a nonmaximally mixed thermal state. The derivation reveals a new and surprising relation between spacetime structure and local quantum states. In particular, we show how local restrictions of the vacuum can determine the connection between different local inertial reference frames.
Galilean covariance and non-relativistic Bhabha equations
Energy Technology Data Exchange (ETDEWEB)
Montigny, M. de [Faculte Saint-Jean, University of Alberta, Edmonton, AB (Canada) and Theoretical Physics Institute, University of Alberta, Edmonton, AB (Canada)]. E-mail: montigny@phys.ualberta.ca; Khanna, F.C. [Theoretical Physics Institute, University of Alberta, Edmonton, AB (CA) and TRIUMF, Vancouver, BC (Canada)]. E-mail: khanna@phys.ualberta.ca; Santana, A.E. [Theoretical Physics Institute, University of Alberta, Edmonton, AB (CA) and Instituto de Fisica, Universidade Federal da Bahia, Salvador, Bahia (Brazil)]. E-mail: santana@fis.ufba.br; Santos, E.S. [Instituto de Fisica Teorica, Universidade Estadual Paulista, Sao Paulo, SP (Brazil)]. E-mail: esdras@ift.unesp.br
2001-10-26
We apply a five-dimensional formulation of Galilean covariance to construct non-relativistic Bhabha first-order wave equations which, depending on the representation, correspond either to the well known Dirac equation (for particles with spin 1/2) or the Duffin-Kemmer-Petiau equation (for spinless and spin 1 particles). Here the irreducible representations belong to the Lie algebra of the 'de Sitter group' in 4+1 dimensions, SO(5,1). Using this approach, the non-relativistic limits of the corresponding equations are obtained directly, without taking any low-velocity approximation. As a simple illustration, we discuss the harmonic oscillator. (author)
Bhatia, Rajendra
2013-01-01
This book is an outcome of the Indo-French Workshop on Matrix Information Geometries (MIG): Applications in Sensor and Cognitive Systems Engineering, which was held at Ecole Polytechnique and the Thales Research and Technology Center, Palaiseau, France, on February 23-25, 2011. The workshop was generously funded by the Indo-French Centre for the Promotion of Advanced Research (IFCPAR). During the event, 22 renowned invited French or Indian speakers gave lectures on their areas of expertise within the field of matrix analysis or processing. From these talks, a total of 17 original contributions or state-of-the-art chapters have been assembled in this volume. All articles were thoroughly peer-reviewed and improved according to the suggestions of the international referees. The 17 contributions presented are organized in three parts: (1) State-of-the-art surveys & original matrix theory work, (2) Advanced matrix theory for radar processing, and (3) Matrix-based signal processing applications.
Pérez López, César
2014-01-01
MATLAB is a high-level language and environment for numerical computation, visualization, and programming. Using MATLAB, you can analyze data, develop algorithms, and create models and applications. The language, tools, and built-in math functions enable you to explore multiple approaches and reach a solution faster than with spreadsheets or traditional programming languages, such as C/C++ or Java. MATLAB Matrix Algebra introduces you to the MATLAB language with practical hands-on instructions and results, allowing you to quickly achieve your goals. Starting with a look at symbolic and numeric variables, with an emphasis on vector and matrix variables, you will go on to examine functions and operations that support vectors and matrices as arguments, including those based on analytic parent functions. Computational methods for finding eigenvalues and eigenvectors of matrices are detailed, leading to various matrix decompositions. Applications such as change of bases, the classification of quadratic forms and ...
DEFF Research Database (Denmark)
Frandsen, Gudmund Skovbjerg; Frandsen, Peter Frands
2009-01-01
We consider maintaining information about the rank of a matrix under changes of the entries. For n×n matrices, we show an upper bound of O(n^1.575) arithmetic operations and a lower bound of Ω(n) arithmetic operations per element change. The upper bound is valid when changing up to O(n^0.575) entries in a single column of the matrix. We also give an algorithm that maintains the rank using O(n^2) arithmetic operations per rank-one update. These bounds appear to be the first nontrivial bounds for the problem. The upper bounds are valid for arbitrary fields, whereas the lower bound is valid for algebraically closed fields. The upper bound for element updates uses fast rectangular matrix multiplication, and the lower bound involves further development of an earlier technique for proving lower bounds for dynamic computation of rational functions.
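As a point of reference for the bounds above, a naive baseline simply recomputes the rank from scratch after each element change. This is an illustrative sketch (not the paper's data structure), costing O(n^3) per update where the paper achieves O(n^1.575):

```python
import numpy as np

class NaiveDynamicRank:
    """Naive baseline for dynamic matrix rank: apply the element change,
    then recompute the rank of the whole matrix from scratch."""

    def __init__(self, matrix):
        self.m = np.array(matrix, dtype=float)

    def update(self, i, j, value):
        # One element change, followed by a full O(n^3) rank computation.
        self.m[i, j] = value
        return np.linalg.matrix_rank(self.m)
```

Any sub-cubic dynamic scheme must beat this trivial recomputation, which is what makes the O(n^1.575) upper bound (via fast rectangular matrix multiplication) nontrivial.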
Energy Technology Data Exchange (ETDEWEB)
Pan, Feng [Los Alamos National Laboratory; Kasiviswanathan, Shiva [Los Alamos National Laboratory
2010-01-01
In the matrix interdiction problem, a real-valued matrix and an integer k are given. The objective is to remove k columns such that the sum over all rows of the maximum entry in each row is minimized. This combinatorial problem is closely related to the bipartite network interdiction problem, which can be applied to prioritize border checkpoints in order to minimize the probability that an adversary can successfully cross the border. After introducing the matrix interdiction problem, we prove that the problem is NP-hard, and even NP-hard to approximate to within an additive n^γ factor for a fixed constant γ. We also present an algorithm for this problem that achieves a multiplicative approximation ratio of (n-k).
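A minimal brute-force sketch of the problem as stated above (exact but exponential, consistent with the NP-hardness result; the function names are illustrative, not from the report):

```python
from itertools import combinations

def interdiction_cost(matrix, removed):
    # Objective after removing a set of columns: sum over all rows of the
    # maximum entry among the surviving columns.
    keep = [j for j in range(len(matrix[0])) if j not in removed]
    return sum(max(row[j] for j in keep) for row in matrix)

def brute_force_interdiction(matrix, k):
    # Try every set of k columns to remove; only feasible on small inputs.
    n_cols = len(matrix[0])
    return min(combinations(range(n_cols), k),
               key=lambda removed: interdiction_cost(matrix, set(removed)))
```

For example, on the matrix [[9, 1], [9, 1]] with k = 1, removing column 0 leaves cost 1 + 1 = 2, whereas removing column 1 leaves cost 9 + 9 = 18, so column 0 is the optimal interdiction.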
Matrixed business support comparison study.
Energy Technology Data Exchange (ETDEWEB)
Parsons, Josh D.
2004-11-01
The Matrixed Business Support Comparison Study reviewed the current matrixed Chief Financial Officer (CFO) division staff models at Sandia National Laboratories. There were two primary drivers of this analysis: (1) the increasing number of financial staff matrixed to mission customers and (2) the desire to further understand the matrix process and the opportunities and challenges it creates.
A Covariance Generation Methodology for Fission Product Yields
Directory of Open Access Journals (Sweden)
Terranova N.
2016-01-01
Recent safety and economic concerns for modern nuclear reactor applications have fed an outstanding interest in basic nuclear data evaluation improvement and completion. It has been immediately clear that the accuracy of our predictive simulation models was strongly affected by our knowledge of input data. Therefore strong efforts have been made to improve nuclear data and to generate complete and reliable uncertainty information able to yield proper uncertainty propagation on integral reactor parameters. Since in modern nuclear data banks (such as JEFF-3.1.1 and ENDF/B-VII.1) no correlations for fission yields are given, in the present work we propose a covariance generation methodology for fission product yields. The main goal is to reproduce the existing European library and to add covariance information to allow proper uncertainty propagation in depletion and decay heat calculations. To do so, we adopted the Generalized Least Square Method (GLSM) implemented in CONRAD (COde for Nuclear Reaction Analysis and Data assimilation), developed at CEA-Cadarache. Theoretical values employed in the Bayesian parameter adjustment are delivered thanks to a convolution of different models, representing several quantities in fission yield calculations: the Brosa fission modes for pre-neutron mass distribution, a simplified Gaussian model for prompt neutron emission probability, the Wahl systematics for charge distribution and the Madland-England model for the isomeric ratio. Some results will be presented for the thermal fission of U-235, Pu-239 and Pu-241.
A Covariance Generation Methodology for Fission Product Yields
Terranova, N.; Serot, O.; Archier, P.; Vallet, V.; De Saint Jean, C.; Sumini, M.
2016-03-01
Recent safety and economic concerns for modern nuclear reactor applications have fed an outstanding interest in basic nuclear data evaluation improvement and completion. It has been immediately clear that the accuracy of our predictive simulation models was strongly affected by our knowledge of input data. Therefore strong efforts have been made to improve nuclear data and to generate complete and reliable uncertainty information able to yield proper uncertainty propagation on integral reactor parameters. Since in modern nuclear data banks (such as JEFF-3.1.1 and ENDF/B-VII.1) no correlations for fission yields are given, in the present work we propose a covariance generation methodology for fission product yields. The main goal is to reproduce the existing European library and to add covariance information to allow proper uncertainty propagation in depletion and decay heat calculations. To do so, we adopted the Generalized Least Square Method (GLSM) implemented in CONRAD (COde for Nuclear Reaction Analysis and Data assimilation), developed at CEA-Cadarache. Theoretical values employed in the Bayesian parameter adjustment are delivered thanks to a convolution of different models, representing several quantities in fission yield calculations: the Brosa fission modes for pre-neutron mass distribution, a simplified Gaussian model for prompt neutron emission probability, the Wahl systematics for charge distribution and the Madland-England model for the isomeric ratio. Some results will be presented for the thermal fission of U-235, Pu-239 and Pu-241.
Implementing phase-covariant cloning in circuit quantum electrodynamics
Energy Technology Data Exchange (ETDEWEB)
Zhu, Meng-Zheng [School of Physics and Material Science, Anhui University, Hefei 230039 (China); School of Physics and Electronic Information, Huaibei Normal University, Huaibei 235000 (China); Ye, Liu, E-mail: yeliu@ahu.edu.cn [School of Physics and Material Science, Anhui University, Hefei 230039 (China)
2016-10-15
An efficient scheme is proposed to implement phase-covariant quantum cloning by using a superconducting transmon qubit coupled to a microwave cavity resonator in the strong dispersive limit of circuit quantum electrodynamics (QED). By solving the master equation numerically, we plot the Wigner function and Poisson distribution of the cavity mode after each operation in the cloning transformation sequence according to the two logic circuits proposed. The visualizations of the quasi-probability distribution in phase space for the cavity mode and of the occupation probability distribution in the Fock basis enable us to follow the evolution of the cavity mode during the phase-covariant cloning (PCC) transformation. With the help of numerical simulation, we find that the present cloning machine is not an isotropic model, because its output fidelity depends on the polar angle and the azimuthal angle of the initial input state on the Bloch sphere. The fidelity for the actual output clone of the present scheme is slightly smaller than that in the theoretical case. The simulation results are consistent with the theoretical ones. This further corroborates that our scheme based on circuit QED can efficiently implement the PCC transformation.
Batalin-Vilkovisky formalism in locally covariant field theory
Energy Technology Data Exchange (ETDEWEB)
Rejzner, Katarzyna Anna
2011-12-15
The present work contains a complete formulation of the Batalin-Vilkovisky (BV) formalism in the framework of locally covariant field theory. In the first part of the thesis the classical theory is investigated with a particular focus on the infinite-dimensional character of the underlying structures. It is shown that the use of infinite-dimensional differential geometry allows for a conceptually clear and elegant formulation. The construction of the BV complex is performed in a fully covariant way and we also generalize the BV framework to a more abstract level, using functors and natural transformations. In this setting we construct the BV complex for classical gravity. This allows us to give a homological interpretation to the notion of diffeomorphism-invariant physical quantities in general relativity. The second part of the thesis concerns the quantum theory. We provide a framework for BV quantization that does not rely on the path integral formalism, but is completely formulated within perturbative algebraic quantum field theory. To make such a formulation possible we first prove that the renormalized time-ordered product can be understood as a binary operation on a suitable domain. Using this result we prove the associativity of this product and provide a consistent framework for the renormalized BV structures. In particular the renormalized quantum master equation and the renormalized quantum BV operator are defined. To give a precise meaning to these objects we make use of the master Ward identity, which is an important structure in causal perturbation theory. (orig.)
Covariant phase space, constraints, gauge and the Peierls formula
Khavkine, Igor
2014-02-01
It is well known that both the symplectic structure and the Poisson brackets of classical field theory can be constructed directly from the Lagrangian in a covariant way, without passing through the noncovariant canonical Hamiltonian formalism. This is true even in the presence of constraints and gauge symmetries. These constructions go under the names of the covariant phase space formalism and the Peierls bracket. We review both of them, paying more careful attention than usual to the precise mathematical hypotheses that they require, and illustrating them in examples. An extensive historical overview of the development of these constructions is also provided. The novel aspect of our presentation is a significant expansion and generalization of an elegant and quite recent argument by Forger and Romero showing the equivalence between the resulting symplectic and Poisson structures without passing through the canonical Hamiltonian formalism as an intermediary. We generalize it to cover theories with constraints and gauge symmetries and formulate precise sufficient conditions under which the argument holds. These conditions include a local condition on the equations of motion that we call hyperbolizability, and some global conditions of cohomological nature. The details of our presentation may shed some light on subtle questions related to the Poisson structure of gauge theories and their quantization.
Isoprene flux measurements using eddy covariance and disjunct eddy accumulation
Pressley, S. N.; Lamb, B.; Westberg, H.; Allwine, G.; Turnipseed, A.; Guenther, A.
2005-12-01
Quantifying biogenic hydrocarbon (BHC) emissions is important for understanding the role they play in tropospheric chemistry. Isoprene is a very reactive compound that affects the oxidative capacity of the atmosphere, which in turn determines the lifetime of numerous atmospheric constituents such as methane (CH4) and CO. The oxidation of isoprene leads to the production of peroxy radicals (RO2), which may lead to the formation of organic acids or, depending on the level of nitric oxides present, to either production or consumption of tropospheric O3. BHC emissions, in particular isoprene, are predominantly driven by increases in temperature and solar radiation, and there can be significant variations in emissions from one hour to the next, and between days. To better understand the natural variability of isoprene emissions, eddy covariance isoprene flux measurements are being collected on a long-term basis. This long-term dataset, spanning 1999-2005, provides a unique tool for validating biogenic emission inventories that are used as input into regional photochemical models. The dataset will be presented and compared to biogenic emission inventory system (BEIS3) model estimates. Using isoprene as a compound of interest, the micrometeorological technique of disjunct eddy accumulation (DEA) was tested side-by-side with the direct eddy covariance (EC) technique. One week of DEA and EC hourly flux measurements will be presented, supporting the use of DEA to measure fluxes of other atmospheric compounds whose measurement has, to date, not been attainable.
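The core eddy covariance computation is simple enough to sketch: the flux is the mean product of the fluctuations of vertical wind speed and scalar concentration over an averaging interval. This is an illustrative outline only (not the instrumentation or correction pipeline used in the study, and the function name is an assumption):

```python
import numpy as np

def eddy_covariance_flux(w, c):
    """Eddy covariance flux: mean of w' * c', where w' and c' are the
    deviations of vertical wind speed w and scalar concentration c
    from their block means over the averaging interval."""
    w = np.asarray(w, dtype=float)
    c = np.asarray(c, dtype=float)
    return np.mean((w - w.mean()) * (c - c.mean()))
```

Disjunct eddy accumulation approximates the same covariance from a subsample of the time series, which is what makes it attractive for compounds whose sensors are too slow for full-rate EC.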
Stress contagion: physiological covariation between mothers and infants.
Waters, Sara F; West, Tessa V; Mendes, Wendy Berry
2014-04-01
Emotions are not simply concepts that live privately in the mind, but rather affective states that emanate from the individual and may influence others. We explored affect contagion in the context of one of the closest dyadic units, mother and infant. We initially separated mothers and infants; randomly assigned the mothers to experience a stressful positive-evaluation task, a stressful negative-evaluation task, or a nonstressful control task; and then reunited the mothers and infants. Three notable findings were obtained: First, infants' physiological reactivity mirrored mothers' reactivity engendered by the stress manipulation. Second, infants whose mothers experienced social evaluation showed more avoidance toward strangers compared with infants whose mothers were in the control condition. Third, the negative-evaluation condition, compared with the other conditions, generated greater physiological covariation in the dyads, and this covariation increased over time. These findings suggest that mothers' stressful experiences are contagious to their infants and that members of close pairs, like mothers and infants, can reciprocally influence each other's dynamic physiological reactivity.
Hohn, Franz E
2012-01-01
This complete and coherent exposition, complemented by numerous illustrative examples, offers readers a text that can teach by itself. Fully rigorous in its treatment, it offers a mathematically sound sequencing of topics. The work starts with the most basic laws of matrix algebra and progresses to the sweep-out process for obtaining the complete solution of any given system of linear equations - homogeneous or nonhomogeneous - and the role of matrix algebra in the presentation of useful geometric ideas, techniques, and terminology. Other subjects include the complete treatment of the structur
Energy Technology Data Exchange (ETDEWEB)
Brown, T.W.
2010-11-15
The same complex matrix model calculates both tachyon scattering for the c=1 non-critical string at the self-dual radius and certain correlation functions of half-BPS operators in N=4 super-Yang-Mills. It is dual to another complex matrix model where the couplings of the first model are encoded in the Kontsevich-like variables of the second. The duality between the theories is mirrored by the duality of their Feynman diagrams. Analogously to the Hermitian Kontsevich-Penner model, the correlation functions of the second model can be written as sums over discrete points in subspaces of the moduli space of punctured Riemann surfaces. (orig.)
Randomization-based adjustment of multiple treatment hazard ratios for covariates with missing data.
Lam, Diana; Koch, Gary G; Preisser, John S; Saville, Benjamin R; Hussey, Michael A
2017-01-01
Clinical trials are designed to compare treatment effects when applied to samples from the same population. Randomization is used so that the samples are not biased with respect to baseline covariates that may influence the efficacy of the treatment. We develop randomization-based covariance adjustment methodology to estimate the log hazard ratios and their confidence intervals of multiple treatments in a randomized clinical trial with time-to-event outcomes and missingness among the baseline covariates. The randomization-based covariance adjustment method is a computationally straightforward method for handling missing baseline covariate values.
Covariance-Based Measurement Selection Criterion for Gaussian-Based Algorithms
Directory of Open Access Journals (Sweden)
Fernando A. Auat Cheein
2013-01-01
Process modeling by means of Gaussian-based algorithms often suffers from redundant information, which usually increases the estimation computational complexity without significantly improving the estimation performance. In this article, a non-arbitrary measurement selection criterion for Gaussian-based algorithms is proposed. The measurement selection criterion is based on the determination of the most significant measurement from both an estimation convergence perspective and the covariance matrix associated with the measurement. The selection criterion is independent of the nature of the measured variable. This criterion is used in conjunction with three Gaussian-based algorithms: the EIF (Extended Information Filter), the EKF (Extended Kalman Filter) and the UKF (Unscented Kalman Filter). Nevertheless, the measurement selection criterion shown herein can also be applied to other Gaussian-based algorithms. Although this work is focused on environment modeling, the results shown herein can be applied to other Gaussian-based algorithm implementations. Mathematical descriptions and implementation results that validate the proposal are also included in this work.
Wang, Ming; Kong, Lan; Li, Zheng; Zhang, Lijun
2016-05-10
Generalized estimating equations (GEE) is a general statistical method to fit marginal models for longitudinal data in biomedical studies. The variance-covariance matrix of the regression parameter coefficients is usually estimated by a robust "sandwich" variance estimator, which does not perform satisfactorily when the sample size is small. To reduce the downward bias and improve the efficiency, several modified variance estimators have been proposed for bias-correction or efficiency improvement. In this paper, we provide a comprehensive review on recent developments of modified variance estimators and compare their small-sample performance theoretically and numerically through simulation and real data examples. In particular, Wald tests and t-tests based on different variance estimators are used for hypothesis testing, and the guideline on appropriate sample sizes for each estimator is provided for preserving type I error in general cases based on numerical results. Moreover, we develop a user-friendly R package "geesmv" incorporating all of these variance estimators for public usage in practice. Copyright © 2015 John Wiley & Sons, Ltd.
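For the simplest case of an independence working correlation in a linear marginal model, the "sandwich" estimator mentioned above reduces to the familiar bread-meat-bread form. The following is an illustrative sketch of that special case, not the geesmv package's implementation:

```python
import numpy as np

def sandwich_variance(X, y, beta):
    """Robust 'sandwich' variance sketch for a linear marginal model with
    an independence working correlation:
        bread = (X'X)^{-1},  meat = X' diag(r_i^2) X,  r = y - X beta,
        V = bread @ meat @ bread.
    This estimator is known to be biased downward in small samples, which
    motivates the modified estimators reviewed in the paper."""
    r = y - X @ beta
    bread = np.linalg.inv(X.T @ X)
    meat = X.T @ (r[:, None] ** 2 * X)   # scale each row of X by r_i^2
    return bread @ meat @ bread
```

The modified estimators surveyed in the paper adjust the meat (e.g., by inflating the residuals) to correct the small-sample downward bias before forming Wald or t statistics.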
Directory of Open Access Journals (Sweden)
Chocat Rudy
2015-01-01
The design of complex systems often induces a constrained optimization problem under uncertainty. An adaptation of the CMA-ES(λ, μ) optimization algorithm is proposed in order to efficiently handle the constraints in the presence of noise. The update mechanisms of the parametrized distribution used to generate the candidate solutions are modified. The constraint handling method makes it possible to reduce the semi-principal axes of the probable research ellipsoid in the directions violating the constraints. The proposed approach is compared to existing approaches on three analytic optimization problems to highlight the efficiency and the robustness of the algorithm. The proposed method is used to design a two-stage solid propulsion launch vehicle.
CSIR Research Space (South Africa)
Salmon, Brian P
2013-06-01
In this paper, the internal operations of an Extended Kalman Filter are investigated to observe whether information can be derived to detect land cover change in a MODerate-resolution Imaging Spectroradiometer (MODIS) time series. The concept is based...
M.D. de Pooter (Michiel); M.P.E. Martens (Martin); D.J.C. van Dijk (Dick)
2005-01-01
This paper investigates the merits of high-frequency intraday data when forming minimum variance portfolios and minimum tracking error portfolios with daily rebalancing from the individual constituents of the S&P 100 index. We focus on the issue of determining the optimal sampling
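The minimum variance portfolio that such comparisons rest on has a closed form, w = Σ⁻¹1 / (1'Σ⁻¹1), where Σ is the (estimated) covariance matrix of asset returns. A minimal NumPy sketch of that formula, not the authors' estimation pipeline:

```python
import numpy as np

def min_variance_weights(cov):
    """Global minimum variance portfolio weights for a covariance
    matrix cov: w = inv(cov) @ 1, normalized to sum to one."""
    ones = np.ones(cov.shape[0])
    w = np.linalg.solve(cov, ones)  # solve instead of explicit inverse
    return w / w.sum()
```

The portfolios compared in the paper differ only in how `cov` is estimated, e.g. from daily returns versus realized covariances built from intraday data.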
Thompson, Bruce; Borrello, Gloria M.
Attitude measures frequently produce distributions of item scores that attenuate interitem correlations and thus also distort findings regarding the factor structure underlying the items. An actual data set involving 260 adult subjects' responses to 55 items on the Love Relationships Scale is employed to illustrate empirical methods for…
el Bachraoui, M.; van de Vel, M.L.J.
2002-01-01
Square matrices over a relation algebra are relation algebras in a natural way. We show that for fixed n, these algebras can be characterized as reducts of some richer kind of algebra. Hence for fixed n, the class of n × n matrix relation algebras has a first-order characterization. As a
Kernelized Bayesian Matrix Factorization.
Gönen, Mehmet; Kaski, Samuel
2014-10-01
We extend kernelized matrix factorization with a full-Bayesian treatment and with an ability to work with multiple side information sources expressed as different kernels. Kernels have been introduced to integrate side information about the rows and columns, which is necessary for making out-of-matrix predictions. We discuss specifically binary output matrices but extensions to real-valued matrices are straightforward. We extend the state of the art in two key aspects: (i) A full-conjugate probabilistic formulation of the kernelized matrix factorization enables an efficient variational approximation, whereas full-Bayesian treatments are not computationally feasible in the earlier approaches. (ii) Multiple side information sources are included, treated as different kernels in multiple kernel learning, which additionally reveals which side sources are informative. We then show that the framework can also be used for supervised and semi-supervised multilabel classification and multi-output regression, by considering samples and outputs as the domains where matrix factorization operates. Our method outperforms alternatives in predicting drug-protein interactions on two data sets. On multilabel classification, our algorithm obtains the lowest Hamming losses on 10 out of 14 data sets compared to five state-of-the-art multilabel classification algorithms. We finally show that the proposed approach outperforms alternatives in multi-output regression experiments on a yeast cell cycle data set.
Indian Academy of Sciences (India)
chaos to galaxies. We demonstrate the applicability of random matrix theory for networks by providing a new dimension to complex systems research. We show that in spite of huge differences ... as mentioned earlier, different types of networks can be constructed based on the nature of connections. For example, ...
Elliott, John
2012-09-01
As part of our 'toolkit' for analysing an extraterrestrial signal, the facility for calculating structural affinity to known phenomena must be part of our core capabilities. Without such a resource, we risk compromising our potential for detection and decipherment or at least causing significant delay in the process. To create such a repository for assessing structural affinity, all known systems (language parameters) need to be structurally analysed to 'place' their 'system' within a relational communication matrix. This will need to include all known variants of language structure, whether 'living' (in current use) or ancient; this must also include endeavours to incorporate yet undeciphered scripts and non-human communication, to provide as complete a picture as possible. In creating such a relational matrix, post-detection decipherment will be assisted by a structural 'map' that will have the potential for 'placing' an alien communication with its nearest known 'neighbour', to assist subsequent categorisation of basic parameters as a precursor to decipherment. 'Universal' attributes and behavioural characteristics of known communication structure will form a range of templates (Elliott, 2001 [1] and Elliott et al., 2002 [2]), to support and optimise our attempt at categorising and deciphering the content of an extraterrestrial signal. Detection of the hierarchical layers, which comprise intelligent, complex communication, will then form a matrix of calculations that will ultimately score affinity through a relational matrix of structural comparison. In this paper we develop the rationales and demonstrate functionality with initial test results.
Agilan, V.; Umamahesh, N. V.
2017-03-01
Present infrastructure design is primarily based on rainfall Intensity-Duration-Frequency (IDF) curves with a so-called stationarity assumption. However, in recent years extreme precipitation events have been increasing due to global climate change, creating non-stationarity in the series. Based on recent theoretical developments in Extreme Value Theory (EVT), recent studies proposed a methodology for developing non-stationary rainfall IDF curves by incorporating a trend in the parameters of the Generalized Extreme Value (GEV) distribution using Time as a covariate. But Time may not be the best covariate, and it is important to analyze all possible covariates and find the best one to model non-stationarity. In this study, five physical processes, namely urbanization, local temperature changes, global warming, the El Niño-Southern Oscillation (ENSO) cycle and the Indian Ocean Dipole (IOD), are used as covariates. Based on these five covariates and their possible combinations, sixty-two non-stationary GEV models are constructed. In addition, two non-stationary GEV models based on the Time covariate and one stationary GEV model are also constructed. The best model for each duration's rainfall series is chosen based on the corrected Akaike Information Criterion (AICc). From the findings of this study, it is observed that the local processes (i.e., urbanization and local temperature changes) are the best covariates for short-duration rainfall and the global processes (i.e., global warming, the ENSO cycle and the IOD) are the best covariates for the long-duration rainfall of Hyderabad city, India. Furthermore, the covariate Time never qualified as the best covariate. In addition, the identified best covariates are further used to develop non-stationary rainfall IDF curves for Hyderabad. The proposed methodology can be applied in other situations to develop non-stationary IDF curves based on the best covariate.
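The model-ranking machinery can be sketched as follows: a GEV negative log-likelihood whose location parameter may be an array computed from a physical covariate (mu = mu0 + mu1 * z), and the AICc used to rank the 65 candidate models (62 covariate-based, 2 Time-based, 1 stationary). A hedged NumPy sketch, not the study's code; only the ξ ≠ 0 branch is shown and the function names are invented.

```python
import numpy as np

def gev_nll(x, mu, sigma, xi):
    """Negative log-likelihood of the GEV distribution (xi != 0 branch).
    mu may be a scalar (stationary model) or an array mu0 + mu1 * z built
    from a covariate z (non-stationary location)."""
    if sigma <= 0:
        return np.inf
    t = 1 + xi * (x - mu) / sigma
    if np.any(t <= 0):                       # outside the GEV support
        return np.inf
    return np.sum(np.log(sigma) + (1 + 1 / xi) * np.log(t) + t ** (-1 / xi))

def aicc(nll, k, n):
    """Corrected Akaike Information Criterion for a model with k
    parameters fitted to n observations at minimized NLL."""
    return 2 * nll + 2 * k + 2 * k * (k + 1) / (n - k - 1)
```

Each candidate model is fitted by minimizing `gev_nll` over its parameters and the fits are then compared by `aicc`, which penalizes the extra parameters that covariate-dependent locations introduce.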
Analytical techniques for instrument design - matrix methods
Energy Technology Data Exchange (ETDEWEB)
Robinson, R.A. [Los Alamos National Lab., NM (United States)
1997-09-01
We take the traditional Cooper-Nathans approach, as has been applied for many years for steady-state triple-axis spectrometers, and consider its generalisation to other inelastic scattering spectrometers. This involves a number of simple manipulations of exponentials of quadratic forms. In particular, we discuss a toolbox of matrix manipulations that can be performed on the 6-dimensional Cooper-Nathans matrix: diagonalisation (Moller-Nielsen method), coordinate changes (e.g. from (Δk_I, Δk_F) to (ΔE, ΔQ and 2 dummy variables)), integration of one or more variables (e.g. over such dummy variables), integration subject to linear constraints (e.g. Bragg's law for analysers), inversion to give the variance-covariance matrix, and so on. We show how these tools can be combined to solve a number of important problems, within the narrow-band limit and the Gaussian approximation. We will argue that a generalised program that can handle multiple different spectrometers could (and should) be written in parallel to the Monte-Carlo packages that are becoming available. We will also discuss the complementarity between detailed Monte-Carlo calculations and the approach presented here. In particular, Monte-Carlo methods traditionally simulate the real experiment as performed in practice, given a model scattering law, while the Cooper-Nathans method asks the inverse question: given that a neutron turns up in a particular spectrometer configuration (e.g. angle and time of flight), what is the probability distribution of possible scattering events at the sample? The Monte-Carlo approach could be applied in the same spirit to this question.
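One entry in such a toolbox, integrating a Gaussian over a subset of its variables, reduces in precision (inverse-covariance) form to a Schur complement. A hedged NumPy sketch of that single manipulation, with invented function names:

```python
import numpy as np

def integrate_out(precision, idx_keep, idx_out):
    """Integrate a multivariate Gaussian over the variables in idx_out.
    With the precision matrix partitioned as [[A, B], [B', D]] over
    (kept, integrated) variables, the marginal precision of the kept
    variables is the Schur complement A - B D^{-1} B'."""
    A = precision[np.ix_(idx_keep, idx_keep)]
    B = precision[np.ix_(idx_keep, idx_out)]
    D = precision[np.ix_(idx_out, idx_out)]
    return A - B @ np.linalg.solve(D, B.T)
```

The same identity underlies "inversion to give the variance-covariance matrix": the inverse of the Schur complement equals the corresponding block of the full covariance matrix.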
Managing distance and covariate information with point-based clustering
Directory of Open Access Journals (Sweden)
Peter A. Whigham
2016-09-01
Background Geographic perspectives of disease and the human condition often involve point-based observations and questions of clustering or dispersion within a spatial context. These problems involve a finite set of point observations and are constrained by a larger, but finite, set of locations where the observations could occur. Developing a rigorous method for pattern analysis in this context requires handling spatial covariates, a method for constrained finite spatial clustering, and addressing bias in geographic distance measures. An approach, based on Ripley's K and applied to the problem of clustering with deliberate self-harm (DSH), is presented. Methods Point-based Monte-Carlo simulation of Ripley's K, accounting for socio-economic deprivation and sources of distance measurement bias, was developed to estimate clustering of DSH at a range of spatial scales. A rotated Minkowski L1 distance metric allowed variation in physical distance and clustering to be assessed. Self-harm data were derived from an audit of 2 years' emergency hospital presentations (n = 136) in a New Zealand town (population ~50,000). The study area was defined by residential (housing) land parcels representing a finite set of possible point addresses. Results Area-based deprivation was spatially correlated. Accounting for deprivation and distance bias showed evidence for clustering of DSH at spatial scales up to 500 m with a one-sided 95% CI, suggesting that social contagion may be present for this urban cohort. Conclusions Many problems involve finite locations in geographic space that require estimates of distance-based clustering at many scales. A Monte-Carlo approach to Ripley's K, incorporating covariates and models for distance bias, is crucial when assessing health-related clustering. The case study showed that social network structure defined at the neighbourhood level may account for aspects of neighbourhood clustering of DSH. Accounting for
Parallel evolution controlled by adaptation and covariation in ammonoid cephalopods
Directory of Open Access Journals (Sweden)
Klug Christian
2011-04-01
Full Text Available Abstract Background A major goal in evolutionary biology is to understand the processes that shape the evolutionary trajectory of clades. The repeated and similar large-scale morphological evolutionary trends of distinct lineages suggest that adaptation by means of natural selection (functional constraints is the major cause of parallel evolution, a very common phenomenon in extinct and extant lineages. However, parallel evolution can result from other processes, which are usually ignored or difficult to identify, such as developmental constraints. Hence, understanding the underlying processes of parallel evolution still requires further research. Results Herein, we present a possible case of parallel evolution between two ammonoid lineages (Auguritidae and Pinacitidae of Early-Middle Devonian age (405-395 Ma, which are extinct cephalopods with an external, chambered shell. In time and through phylogenetic order of appearance, both lineages display a morphological shift toward more involute coiling (i.e. more tightly coiled whorls, larger adult body size, more complex suture line (the folded walls separating the gas-filled buoyancy-chambers, and the development of an umbilical lid (a very peculiar extension of the lateral shell wall covering the umbilicus in the most derived taxa. Increased involution toward shells with closed umbilicus has been demonstrated to reflect improved hydrodynamic properties of the shell and thus likely results from similar natural selection pressures. The peculiar umbilical lid might have also added to the improvement of the hydrodynamic properties of the shell. Finally, increasing complexity of suture lines likely results from covariation induced by trends of increasing adult size and whorl overlap given the morphogenetic properties of the suture. Conclusions The morphological evolution of these two Devonian ammonoid lineages follows a near parallel evolutionary path for some important shell characters during several
Parallel evolution controlled by adaptation and covariation in ammonoid cephalopods.
Monnet, Claude; De Baets, Kenneth; Klug, Christian
2011-04-29
A major goal in evolutionary biology is to understand the processes that shape the evolutionary trajectory of clades. The repeated and similar large-scale morphological evolutionary trends of distinct lineages suggest that adaptation by means of natural selection (functional constraints) is the major cause of parallel evolution, a very common phenomenon in extinct and extant lineages. However, parallel evolution can result from other processes, which are usually ignored or difficult to identify, such as developmental constraints. Hence, understanding the underlying processes of parallel evolution still requires further research. Herein, we present a possible case of parallel evolution between two ammonoid lineages (Auguritidae and Pinacitidae) of Early-Middle Devonian age (405-395 Ma), which are extinct cephalopods with an external, chambered shell. In time and through phylogenetic order of appearance, both lineages display a morphological shift toward more involute coiling (i.e. more tightly coiled whorls), larger adult body size, more complex suture line (the folded walls separating the gas-filled buoyancy-chambers), and the development of an umbilical lid (a very peculiar extension of the lateral shell wall covering the umbilicus) in the most derived taxa. Increased involution toward shells with closed umbilicus has been demonstrated to reflect improved hydrodynamic properties of the shell and thus likely results from similar natural selection pressures. The peculiar umbilical lid might have also added to the improvement of the hydrodynamic properties of the shell. Finally, increasing complexity of suture lines likely results from covariation induced by trends of increasing adult size and whorl overlap given the morphogenetic properties of the suture. The morphological evolution of these two Devonian ammonoid lineages follows a near parallel evolutionary path for some important shell characters during several million years and through their phylogeny. Evolution
Covariant generalized holographic dark energy and accelerating universe
Energy Technology Data Exchange (ETDEWEB)
Nojiri, Shin' ichi [Nagoya University, Department of Physics, Nagoya (Japan); Nagoya University, Kobayashi-Maskawa Institute for the Origin of Particles and the Universe, Nagoya (Japan); Odintsov, S.D. [ICREA, Barcelona (Spain); Institute of Space Sciences (IEEC-CSIC), Barcelona (Spain); National Research Tomsk State University, Tomsk (Russian Federation); Tomsk State Pedagogical University, Tomsk (Russian Federation)
2017-08-15
We propose the generalized holographic dark energy model where the infrared cutoff is identified with the combination of the FRW universe parameters: the Hubble rate, particle and future horizons, cosmological constant, the universe lifetime (if finite) and their derivatives. It is demonstrated that with the corresponding choice of the cutoff one can map such holographic dark energy to modified gravity or gravity with a general fluid. Explicitly, F(R) gravity and the general perfect fluid are worked out in detail and the corresponding infrared cutoff is found. Using this correspondence, we get realistic inflation or viable dark energy or a unified inflationary-dark energy universe in terms of covariant holographic dark energy. (orig.)
Time and Fermions: General Covariance vs. Ockham's Razor for Spinors
Pitts, J Brian
2015-01-01
It is a commonplace attributed to Kretschmann that any local physical theory can be represented in arbitrary coordinates using tensor calculus. But the literature also claims that spinors _as such_ cannot be represented in coordinates in a curved space-time. These commonplaces are inconsistent, so what is general covariance for fermions? In fact both commonplaces are wrong. Ogievetsky and Polubarinov (OP) constructed spinors in coordinates in 1965, enhancing the unity of physics and helping to spawn nonlinear group representations. Roughly and locally, OP spinors resemble the orthonormal basis or tetrad formalism in the symmetric gauge, but they are conceptually self-sufficient and more economical. The tetrad formalism is thus de-Ockhamized, with six extra components and six compensating gauge symmetries. As developed nonperturbatively by Bilyalov, OP spinors admit any coordinates at a point, but 'time' must be listed first; 'time' is defined by an eigenvalue problem involving the metric and diag(-1,1,1,1), t...
Generally covariant dynamical reduction models and the Hadamard condition
Juárez-Aubry, Benito A.; Kay, Bernard S.; Sudarsky, Daniel
2018-01-01
We provide general guidelines for generalizing dynamical reduction models to curved spacetimes and propose a class of generally covariant relativistic versions of the Ghirardi-Rimini-Weber model. We anticipate that the collapse operators of our class of models may play a role in a yet-to-be-formulated theory of semiclassical gravity with collapses. We show explicitly that the collapse operators map a dense domain of states that are initially Hadamard to final Hadamard states—a property that we expect will be needed for the construction of such a semiclassical theory. Finally, we provide a simple example in which we explicitly compute the violations in energy-momentum due to the state reduction process and conclude that this violation is of the order of a parameter of the model—supposed to be small.
On the covariant quantization of type-II superstrings
Energy Technology Data Exchange (ETDEWEB)
Guttenberg, Sebastian [Institut fuer Theoretische Physik, Technische Universitaet Wien, Wiedner Hauptstrasse 8-10, A-1040 Vienna (Austria)]. E-mail: basti@hep.itp.tuwien.ac.at; Knapp, Johanna; Kreuzer, Maximilian [Institut fuer Theoretische Physik, Technische Universitaet Wien, Wiedner Hauptstrasse 8-10, A-1040 Vienna (Austria)
2004-06-01
In a series of papers Grassi, Policastro, Porrati and van Nieuwenhuizen have introduced a new method to covariantly quantize the GS-superstring by constructing a resolution of the pure spinor constraint of Berkovits' approach. Their latest version is based on a gauged WZNW model and a definition of physical states in terms of relative cohomology groups. We first put the off-shell formulation of the type-II version of their ideas into a chirally split form and directly construct the free action of the gauged WZNW model, thus circumventing some complications of the super group manifold approach to type-II. Then we discuss the BRST charges that define the relative cohomology and the N=2 superconformal algebra. A surprising result is that nil potency of the BRST charge requires the introduction of another quartet of ghosts. (author)
Effective action for non-geometric fluxes duality covariant actions
Lee, Kanghoon; Rey, Soo-Jong; Sakatani, Yuho
2017-07-01
The (heterotic) double field theories and the exceptional field theories are manifestly duality covariant formulations, describing low-energy limit of various super-string and M-theory compactifications. These field theories are known to be reduced to the standard descriptions by introducing appropriately parameterized generalized metric and by applying suitably chosen section conditions. In this paper, we apply these formulations to non-geometric backgrounds. We introduce different parameterizations for the generalized metric in terms of the dual fields which are pertinent to non-geometric fluxes. Under certain simplifying assumptions, we construct new effective action for non-geometric backgrounds. We then study the non-geometric backgrounds sourced by exotic branes and find their U -duality monodromy matrices. The charge of exotic branes obtained from these monodromy matrices agrees with the charge obtained from the non-geometric flux integral.
An introduction to covariant quantum gravity and asymptotic safety
Percacci, Roberto
2017-01-01
This book covers recent developments in the covariant formulation of quantum gravity. Developed in the 1960s by Feynman and DeWitt, by the 1980s this approach seemed to lead nowhere due to perturbative non-renormalizability. The possibility of non-perturbative renormalizability or "asymptotic safety," originally suggested by Weinberg but largely ignored for two decades, was revived towards the end of the century by technical progress in the field of the renormalization group. It is now a very active field of research, providing an alternative to other approaches to quantum gravity. Written by one of the early contributors to this subject, this book provides a gentle introduction to the relevant ideas and calculational techniques. Several explicit calculations gradually bring the reader close to the current frontier of research. The main difficulties and present lines of development are also outlined.
A New Bias Corrected Version of Heteroscedasticity Consistent Covariance Estimator
Directory of Open Access Journals (Sweden)
Munir Ahmed
2016-06-01
Full Text Available In the presence of heteroscedasticity, different available flavours of the heteroscedasticity consistent covariance estimator (HCCME are used. However, the available literature shows that these estimators can be considerably biased in small samples. Cribari–Neto et al. (2000 introduce a bias adjustment mechanism and give the modified White estimator that becomes almost bias-free even in small samples. Extending these results, Cribari-Neto and Galvão (2003 present a similar bias adjustment mechanism that can be applied to a wide class of HCCMEs’. In the present article, we follow the same mechanism as proposed by Cribari-Neto and Galvão to give bias-correction version of HCCME but we use adaptive HCCME rather than the conventional HCCME. The Monte Carlo study is used to evaluate the performance of our proposed estimators.
A chiral covariant approach to ρρ scattering
Energy Technology Data Exchange (ETDEWEB)
Guelmez, D. [Universitaet Bonn, Helmholtz-Institut fuer Strahlen- und Kernphysik and Bethe Center for Theoretical Physics, Bonn (Germany); Meissner, U.G. [Universitaet Bonn, Helmholtz-Institut fuer Strahlen- und Kernphysik and Bethe Center for Theoretical Physics, Bonn (Germany); Institut fuer Kernphysik and Juelich Center for Hadron Physics, Institute for Advanced Simulation, Juelich (Germany); Oller, J.A. [Universidad de Murcia, Departamento de Fisica, Murcia (Spain)
2017-07-15
We analyze vector meson-vector meson scattering in a unitarized chiral theory based on a chiral covariant framework restricted to ρρ intermediate states. We show that a pole assigned to the scalar meson f{sub 0}(1370) can be dynamically generated from the ρρ interaction, while this is not the case for the tensor meson f{sub 2}(1270) as found in earlier work. We show that the generation of the tensor state is untenable due to the extreme non-relativistic kinematics used before. We further consider the effects arising from the coupling of channels with different orbital angular momenta which are also important. We suggest to use the formalism outlined here to obtain more reliable results for the dynamical generation of resonances in the vector-vector interaction. (orig.)
Lagrangian analysis, data covariance, and the impulse time integral
Energy Technology Data Exchange (ETDEWEB)
Forest, C.A.
1991-01-01
Lagrangian analysis is mathematical analysis of data derived from flow experiments in which embedded gauges move with the material motion (constant Lagrangian mass-point coordinate). With sufficient data, the conservation laws of mass, momentum, and energy are applied to the data in order to construct flow-variable fields, of particle velocity, stress, density, et cetera. Toward this end, a new Lagrangian analysis method has been constructed, that is centered upon a function, {alpha}, that incorporates conservation of mass and momentum into its definition. Further, the existence of {alpha} allows simultaneous, consistent, least-squares fitting of surfaces to all of the flow data. The method also incorporates a novel treatment of the data covariance effects resulting from gauge-to-gauge calibration uncertainty. Analysis of a synthetic data set illustrates the method. 8 refs., 8 figs.
Selecting groups of covariates in the elastic net
DEFF Research Database (Denmark)
Clemmensen, Line Katrine Harder
This paper introduces a novel method to select groups of variables in sparse regression and classication settings. The groups are formed based on the correlations between covariates and ensure that for example spatial or spectral relations are preserved without explicitly coding for these....... The preservation of relations gives increased interpretability. The method is based on the elastic net and adaptively selects highly correlated groups of variables and does therefore not waste time in grouping irrelevant variables for the problem at hand. The method is illustrated on a simulated data set...... and on regression of moisture content in multispectral images of sand. In both cases, the predictions were better or similar to existing regression and classication algorithms and the interpretation was enhanced using the grouping method. On top of that, the grouping method more consistently selects the important...
Eekhout, Iris; van de Wiel, Mark A; Heymans, Martijn W
2017-08-22
Multiple imputation is a recommended method to handle missing data. For significance testing after multiple imputation, Rubin's Rules (RR) are easily applied to pool parameter estimates. In a logistic regression model, to consider whether a categorical covariate with more than two levels significantly contributes to the model, different methods are available. For example pooling chi-square tests with multiple degrees of freedom, pooling likelihood ratio test statistics, and pooling based on the covariance matrix of the regression model. These methods are more complex than RR and are not available in all mainstream statistical software packages. In addition, they do not always obtain optimal power levels. We argue that the median of the p-values from the overall significance tests from the analyses on the imputed datasets can be used as an alternative pooling rule for categorical variables. The aim of the current study is to compare different methods to test a categorical variable for significance after multiple imputation on applicability and power. In a large simulation study, we demonstrated the control of the type I error and power levels of different pooling methods for categorical variables. This simulation study showed that for non-significant categorical covariates the type I error is controlled and the statistical power of the median pooling rule was at least equal to current multiple parameter tests. An empirical data example showed similar results. It can therefore be concluded that using the median of the p-values from the imputed data analyses is an attractive and easy to use alternative method for significance testing of categorical variables.
Toward a Mexican eddy covariance network for carbon cycle science
Vargas, Rodrigo; Yépez, Enrico A.
2011-09-01
First Annual MexFlux Principal Investigators Meeting; Hermosillo, Sonora, Mexico, 4-8 May 2011; The carbon cycle science community has organized a global network, called FLUXNET, to measure the exchange of energy, water, and carbon dioxide (CO2) between the ecosystems and the atmosphere using the eddy covariance technique. This network has provided unprecedented information for carbon cycle science and global climate change but is mostly represented by study sites in the United States and Europe. Thus, there is an important gap in measurements and understanding of ecosystem dynamics in other regions of the world that are seeing a rapid change in land use. Researchers met under the sponsorship of Red Temática de Ecosistemas and Consejo Nacional de Ciencia y Tecnologia (CONACYT) to discuss strategies to establish a Mexican eddy covariance network (MexFlux) by identifying researchers, study sites, and scientific goals. During the meeting, attendees noted that 10 study sites have been established in Mexico with more than 30 combined years of information. Study sites span from new sites installed during 2011 to others with 9 to 6 years of measurements. Sites with the longest span measurements are located in Baja California Sur (established by Walter Oechel in 2002) and Sonora (established by Christopher Watts in 2005); both are semiarid ecosystems. MexFlux sites represent a variety of ecosystem types, including Mediterranean and sarcocaulescent shrublands in Baja California; oak woodland, subtropical shrubland, tropical dry forest, and a grassland in Sonora; tropical dry forests in Jalisco and Yucatan; a managed grassland in San Luis Potosi; and a managed pine forest in Hidalgo. Sites are maintained with an individual researcher's funds from Mexican government agencies (e.g., CONACYT) and international collaborations, but no coordinated funding exists for a long-term program.
Covariation among glucocorticoid regulatory elements varies seasonally in house sparrows.
Liebl, Andrea L; Shimizu, Toru; Martin, Lynn B
2013-03-01
Glucocorticoids (GCs) help individuals cope with changes throughout life; one such change is the seasonal transition through life-history stages. Previous research shows that many animals exhibit seasonal variation in baseline GCs and GC responses to stressors, but the effects of season on other aspects of GC regulation have been less studied. Moreover, whether elements of GC regulation covary within individuals and whether covariation changes seasonally has been not been investigated. Evolutionarily, strong linkages among GC regulatory elements is predicted to enhance system efficiency and regulation, however may reduce the plasticity necessary to ensure appropriate responses under varying conditions. Here, we measured corticosterone (CORT), the major avian GC, at baseline, after exposure to a restraint stressor, and in response to dexamethasone (to assess negative feedback capacity) in wild house sparrows (Passer domesticus) during the breeding and molting seasons. We also measured hippocampal mRNA expression of the two receptors primarily responsible for CORT regulation: the mineralocorticoid and glucocorticoid receptors (MR and GR, respectively). Consistent with previous studies, restraint-induced CORT was lower during molt than breeding, but negative-feedback was not influenced by season. Receptor gene expression was affected by season, however, as during breeding, the ratio of MR to GR expression was significantly lower than during molt. Furthermore, MR expression was negatively correlated with CORT released in response to a stressor, but only during molt. We found that individuals that most strongly up-regulated CORT in response to restraint were also most effective at reducing CORT via negative feedback; although these relationships were independent of season, they were stronger during molt. Copyright © 2013 Elsevier Inc. All rights reserved.
Energy Technology Data Exchange (ETDEWEB)
Pigni, Marco T [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Leal, Luiz C [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)
2015-01-01
Oak Ridge National Laboratory (ORNL) has recently completed the resonance parameter evaluation of four tungsten isotopes, i.e., ^{182,183,184,186}W, in the neutron energy range from thermal up to several keV. This nuclear data work was performed with support from the US Nuclear Criticality Safety Program (NCSP) in an effort to provide improved tungsten cross section and covariance data for criticality safety analyses. The evaluation methodology uses the Reich-Moore approximation of the R-matrix formalism of the code SAMMY to fit high-resolution measurements performed in 2010 and 2012 at the Geel linear accelerator facility (GELINA), as well as other experimental data sets on natural tungsten available in the EXFOR library. In the analyzed energy range, this work nearly doubles the resolved resonance region (RRR) present in the latest US nuclear data library, ENDF/B-VII.1. In view of the interest in tungsten for distinct types of nuclear applications and the relatively homogeneous distribution of the tungsten isotopes, namely ^{182}W (26.5%), ^{183}W (14.31%), ^{184}W (30.64%), and ^{186}W (28.43%), the completion of these four evaluations represents a significant contribution to the improvement of the ENDF library. This paper presents an overview of the evaluated resonance parameters and related covariances for total and capture cross sections on the four tungsten isotopes.
An Improved DOA Estimation Approach Using Coarray Interpolation and Matrix Denoising.
Guo, Muran; Chen, Tao; Wang, Ben
2017-05-16
Co-prime arrays can estimate the directions of arrival (DOAs) of O(MN) sources with O(M+N) sensors, and are convenient to analyze due to the closed-form expression for the locations of their virtual lags. However, the number of degrees of freedom is limited due to the existence of holes in the difference coarray if subspace-based algorithms such as the spatial smoothing multiple signal classification (MUSIC) algorithm are utilized. To address this issue, techniques such as positive definite Toeplitz completion and array interpolation have been proposed in the literature. Another factor that compromises the accuracy of DOA estimation is the limited number of snapshots. Coarray-based processing is particularly sensitive to the discrepancy between the sample covariance matrix and the ideal covariance matrix caused by the finite number of snapshots. In this paper, coarray interpolation based on matrix completion (MC), followed by a denoising operation, is proposed to detect more sources with higher accuracy. The effectiveness of the proposed method rests on the capability of MC to fill in the holes among the virtual sensors and on that of the denoising operation to reduce the perturbation in the sample covariance matrix. The results of numerical simulations verify the superiority of the proposed approach.
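The closed-form lag structure mentioned above is easy to make concrete. A minimal sketch (assuming the common coprime layout with N sensors spaced M units apart and 2M sensors spaced N units apart, here M = 3, N = 5) enumerates the difference coarray and locates its holes:

```python
import numpy as np

# Assumed layout: coprime pair (M, N) = (3, 5); one subarray of N sensors
# spaced M half-wavelengths apart, one of 2M sensors spaced N apart.
M, N = 3, 5
sub1 = M * np.arange(N)        # 0, 3, 6, 9, 12
sub2 = N * np.arange(2 * M)    # 0, 5, 10, 15, 20, 25
positions = np.union1d(sub1, sub2)

# Difference coarray: every pairwise difference acts as a virtual sensor lag.
diffs = np.unique([p - q for p in positions for q in positions])

# Holes: lags missing inside the span of the coarray.
lo, hi = int(diffs.min()), int(diffs.max())
holes = sorted(set(range(lo, hi + 1)) - set(diffs.tolist()))
print("physical sensors:", len(positions))   # O(M + N)
print("unique virtual lags:", len(diffs))    # O(MN)
print("holes:", holes)
```

For this pair the coarray is contiguous from -17 to 17, with holes at ±18, ±21, ±23 and ±24: exactly the gaps that interpolation or Toeplitz-completion methods aim to fill.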
Optimization of MIMO Systems Capacity Using Large Random Matrix Methods
Directory of Open Access Journals (Sweden)
Philippe Loubaton
2012-11-01
This paper provides a comprehensive introduction to large random matrix methods for input covariance matrix optimization of the mutual information of MIMO systems. It is first recalled informally how large system approximations of the mutual information can be derived. Then, the optimization of the approximations is discussed, and important methodological points that are not necessarily covered by the existing literature are addressed, including the strict concavity of the approximation, the structure of the argument of its maximum, the accuracy of the large system approach with regard to the number of antennas, and the justification of iterative water-filling optimization algorithms. While existing papers have developed methods adapted to a specific model, this contribution tries to provide a unified view of the large system approximation approach.
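As a concrete companion to the water-filling discussion, here is a minimal sketch of the classical water-filling allocation over parallel channels (the channel gains and power budget are illustrative assumptions, not values from the paper):

```python
import numpy as np

def waterfill(gains, total_power, iters=60):
    """Water-filling power allocation over parallel channels.

    gains: effective channel gains (e.g., eigenvalues of H^H H over noise);
    returns powers p_i = max(0, mu - 1/g_i) with sum(p_i) = total_power,
    where the water level mu is found by bisection.
    """
    gains = np.asarray(gains, dtype=float)
    lo, hi = 0.0, total_power + 1.0 / gains.min()
    for _ in range(iters):                       # bisection on mu
        mu = 0.5 * (lo + hi)
        p = np.maximum(0.0, mu - 1.0 / gains)
        if p.sum() > total_power:
            hi = mu
        else:
            lo = mu
    return np.maximum(0.0, 0.5 * (lo + hi) - 1.0 / gains)

gains = np.array([2.0, 1.0, 0.25])               # hypothetical eigenmodes
p = waterfill(gains, total_power=3.0)
capacity = np.sum(np.log2(1.0 + gains * p))      # in bits per channel use
```

The weakest mode receives zero power here, and the resulting capacity exceeds that of an equal-power split, which is the behavior the large-system optimization exploits.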
Matrix groups for undergraduates
Tapp, Kristopher
2016-01-01
Matrix groups touch an enormous spectrum of the mathematical arena. This textbook brings them into the undergraduate curriculum. It makes an excellent one-semester course for students familiar with linear and abstract algebra and prepares them for a graduate course on Lie groups. Matrix Groups for Undergraduates is concrete and example-driven, with geometric motivation and rigorous proofs. The story begins and ends with the rotations of a globe. In between, the author combines rigor and intuition to describe the basic objects of Lie theory: Lie algebras, matrix exponentiation, Lie brackets, maximal tori, homogeneous spaces, and roots. This second edition includes two new chapters that allow for an easier transition to the general theory of Lie groups. From reviews of the First Edition: This book could be used as an excellent textbook for a one semester course at university and it will prepare students for a graduate course on Lie groups, Lie algebras, etc. … The book combines an intuitive style of writing w...
Extracellular matrix structure.
Theocharis, Achilleas D; Skandalis, Spyros S; Gialeli, Chrysostomi; Karamanos, Nikos K
2016-02-01
Extracellular matrix (ECM) is a non-cellular three-dimensional macromolecular network composed of collagens, proteoglycans/glycosaminoglycans, elastin, fibronectin, laminins, and several other glycoproteins. Matrix components bind each other as well as cell adhesion receptors, forming a complex network into which cells reside in all tissues and organs. Cell surface receptors transduce signals into cells from the ECM, which regulate diverse cellular functions, such as survival, growth, migration, and differentiation, and are vital for maintaining normal homeostasis. ECM is a highly dynamic structural network that continuously undergoes remodeling mediated by several matrix-degrading enzymes during normal and pathological conditions. Deregulation of ECM composition and structure is associated with the development and progression of several pathologic conditions. This article emphasizes the complex ECM structure so as to provide a better understanding of its dynamic structural and functional multipotency. Where relevant, the implication of the various families of ECM macromolecules in health and disease is also presented. Copyright © 2015 Elsevier B.V. All rights reserved.
Cross-covariance functions for multivariate random fields based on latent dimensions
Apanasovich, T. V.
2010-02-16
The problem of constructing valid parametric cross-covariance functions is challenging. We propose a simple methodology, based on latent dimensions and existing covariance models for univariate random fields, to develop flexible, interpretable and computationally feasible classes of cross-covariance functions in closed form. We focus on spatio-temporal cross-covariance functions that can be nonseparable, asymmetric and can have different covariance structures, for instance different smoothness parameters, in each component. We discuss estimation of these models and perform a small simulation study to demonstrate our approach. We illustrate our methodology on a trivariate spatio-temporal pollution dataset from California and demonstrate that our cross-covariance performs better than other competing models. © 2010 Biometrika Trust.
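The latent-dimension idea can be sketched concretely. In the minimal form assumed below, each of the p components is assigned a coordinate on one extra latent axis, and the cross-covariance between components i and j at spatial lag h is a single valid covariance on the augmented space evaluated at (h, ξ_i − ξ_j); validity of the full multivariate covariance matrix then follows because it is an ordinary covariance matrix on the augmented space:

```python
import numpy as np

# Assumed minimal construction: exponential covariance (Matern, nu = 1/2)
# on the augmented lag (h, xi_i - xi_j), with separate spatial and latent
# range parameters. All parameter values here are illustrative.
def cross_cov(h, xi_i, xi_j, range_s=1.0, range_l=1.0, sigma2=1.0):
    lag = np.sqrt((h / range_s) ** 2 + ((xi_i - xi_j) / range_l) ** 2)
    return sigma2 * np.exp(-lag)

# Full covariance matrix for p = 2 components on a 1-D grid of sites.
sites = np.linspace(0.0, 4.0, 9)
xi = np.array([0.0, 0.7])            # hypothetical latent coordinates
n, p = len(sites), len(xi)
K = np.empty((n * p, n * p))
for a in range(n * p):
    for b in range(n * p):
        i, s = divmod(a, n)
        j, t = divmod(b, n)
        K[a, b] = cross_cov(sites[s] - sites[t], xi[i], xi[j])

# Positive semidefiniteness certifies that the construction is valid.
eigmin = np.linalg.eigvalsh(K).min()
```

The latent distance |ξ_i − ξ_j| controls the cross-correlation strength between components, which is what makes the class flexible yet automatically valid.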
A Simple Test for the Absence of Covariate Dependence in Hazard Regression Models
Bhattacharjee, Arnab
2004-01-01
This paper extends commonly used tests for equality of hazard rates in a two-sample or k-sample setup to a situation where the covariate under study is continuous. In other words, we test the hypothesis that the conditional hazard rate is the same for all covariate values, against the omnibus alternative as well as more specific alternatives, when the covariate is continuous. The tests developed are particularly useful for detecting trend in the underlying conditional hazard rates or chang...
A note on matrix differentiation
Kowal, Pawel
2006-01-01
This paper presents a set of rules for matrix differentiation with respect to a vector of parameters, using the flattened representation of derivatives, i.e., in the form of a matrix. We also introduce a new set of Kronecker tensor products of matrices. Finally, we consider the problem of differentiating the matrix determinant, trace and inverse.
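As an example of the determinant rule such notes cover, Jacobi's formula states d det(A)/dA = det(A) · A^{-T}; the sketch below checks it against central finite differences:

```python
import numpy as np

# Numerical check of Jacobi's formula, d det(A)/dA = det(A) * inv(A).T.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4)) + 4.0 * np.eye(4)   # well-conditioned matrix

analytic = np.linalg.det(A) * np.linalg.inv(A).T

eps = 1e-6
numeric = np.empty_like(A)
for i in range(4):
    for j in range(4):
        E = np.zeros_like(A)
        E[i, j] = eps
        # central difference in the (i, j) entry of A
        numeric[i, j] = (np.linalg.det(A + E) - np.linalg.det(A - E)) / (2 * eps)
```

The entrywise agreement of `analytic` and `numeric` illustrates why the flattened (matrix-shaped) representation of the derivative is convenient: the whole gradient is a single matrix of the same shape as A.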
Soil Respiration in Eddy Covariance Footprints using Forced Diffusion
Nickerson, N.; Gabriel, C. E.; Creelman, C.
2016-12-01
Eddy covariance (EC) has been widely used across the globe for more than 20 years, offering researchers invaluable measurements of parameters including Net Ecosystem Exchange and ecosystem respiration. However, research suggests that EC assumptions and technical obstacles can cause biased gas exchange estimates. Measurements of soil respiration (RS) at the ground level may help alleviate these biases; for example, by allowing researchers to reconcile nocturnal EC flux data with RS or by providing a means to inform gap-filling models. RS measurements have been used sparingly alongside EC towers because of the large cost required to scale chamber systems to the EC footprint and the data integration and processing burdens. Here we present the Forced Diffusion (FD) method for the measurement of RS at EC sites. The FD method allows for inexpensive and autonomous measurements, providing a scalable approach to matching the EC footprint compared to other RS systems. A pilot study at the Howland Forest AmeriFlux site was carried out from July 15, 2016 to December 2016 using EC, custom-made automated chambers, and FD chambers in tandem. These results emphasize how RS measurements, like those from the eosFD, can identify decoupling of above and below canopy air masses and assist in informing and parameterizing gap-filling techniques. Uncertainty in nocturnal EC fluxes has been extensively characterized at Howland Forest with EC measurements spanning more than 20 years. Similarly, long term automated measurements of RS are also made at Howland, and have already been used to inform EC gap-filling models, making Howland the ideal site for such a study. This study has been designed to reproduce previous findings from Howland using the FD approach, aiming to demonstrate that the measurements taken using the eosFD correlate well with the existing chamber systems and can be used with equal efficacy to inform gap-filling models or for other eddy covariance QA/QC procedures, including
National Research Council Canada - National Science Library
Gwowen Shieh
2017-01-01
.... Methodologically, the detection of interaction between categorical treatment levels and continuous covariate variables is analogous to the homogeneity of regression slopes test in the context of ANCOVA...
The cellulose resource matrix.
Keijsers, Edwin R P; Yılmaz, Gülden; van Dam, Jan E G
2013-03-01
The emerging biobased economy is causing shifts from mineral fossil oil based resources towards renewable resources. Because of market mechanisms, current and new industries utilising renewable commodities will attempt to secure their supply of resources. Cellulose is among these commodities, where large scale competition can be expected and already is observed for the traditional industries such as the paper industry. Cellulose and lignocellulosic raw materials (like wood and non-wood fibre crops) are being utilised in many industrial sectors. Due to the initiated transition towards a biobased economy, these raw materials are intensively investigated also for new applications such as 2nd generation biofuels and 'green' chemicals and materials production (Clark, 2007; Lange, 2007; Petrus & Noordermeer, 2006; Ragauskas et al., 2006; Regalbuto, 2009). As lignocellulosic raw materials are available in variable quantities and qualities, unnecessary competition can be avoided via the choice of suitable raw materials for a target application. For example, utilisation of cellulose as a carbohydrate source for ethanol production (Kabir Kazi et al., 2010) avoids the discussed competition with more easily digestible carbohydrates (sugars, starch) derived from the food supply chain. Also for cellulose use as a biopolymer several different competing markets can be distinguished. It is clear that these applications and markets will be influenced by large volume shifts. The world will have to reckon with the increase of competition and feedstock shortage (land use/biodiversity) (van Dam, de Klerk-Engels, Struik, & Rabbinge, 2005). It is of interest - in the context of sustainable development of the bioeconomy - to categorize the already available and emerging lignocellulosic resources in a matrix structure. When composing such a "cellulose resource matrix", attention should be given to the quality aspects as well as to the available quantities and practical possibilities of processing the
Covariant Schrödinger semigroups on Riemannian manifolds
Güneysu, Batu
2017-01-01
This monograph discusses covariant Schrödinger operators and their heat semigroups on noncompact Riemannian manifolds and aims to fill a gap in the literature, given the fact that the existing literature on Schrödinger operators has mainly focused on scalar Schrödinger operators on Euclidean spaces so far. In particular, the book studies operators that act on sections of vector bundles. In addition, these operators are allowed to have unbounded potential terms, possibly with strong local singularities. The results presented here provide the first systematic study of such operators that is sufficiently general to simultaneously treat the natural operators from quantum mechanics, such as magnetic Schrödinger operators with singular electric potentials, and those from geometry, such as squares of Dirac operators that have smooth but endomorphism-valued and possibly unbounded potentials. The book is largely self-contained, making it accessible for graduate and postgraduate students alike. Since it also inc...
Electron scattering disintegration processes on light nuclei in covariant approach
Directory of Open Access Journals (Sweden)
Kuznietsov P.E.
2016-01-01
We provide a general analysis of the electro-breakup process of a compound scalar system. We use a covariant approach with a conserved EM current, which makes it possible to include the strong interaction into QED. This allows us to describe disintegration processes on nonlocal matter fields applying the standard Feynman rules of QED. The inclusion of a phase exponent into the wave function acquires a physical sense when the strong interaction dominates the process. We apply the Green's function (GF) formalism to describe disintegration processes. A generalized gauge-invariant electro-breakup amplitude is considered; it is a sum of the traditional pole series and a regular part. We explore the contributions of the regular part of the amplitude and their physical sense. The transition from virtual to real photons is considered in the photon-point limit. A general analysis of the electro-breakup process of a compound scalar system is given. Precisely conserved nuclear electromagnetic currents at arbitrary squared momentum transfer are obtained. The only undefined quantity in the theory is the vertex function. We therefore have the possibility to describe electron scattering processes taking into account the minimal necessary set of parameters.
Electron scattering disintegration processes on light nuclei in covariant approach
Kuznietsov, P. E.; Kasatkin, Yu. A.; Klepikov, V. F.
2016-07-01
We provide a general analysis of the electro-breakup process of a compound scalar system. We use a covariant approach with a conserved EM current, which makes it possible to include the strong interaction into QED. This allows us to describe disintegration processes on nonlocal matter fields applying the standard Feynman rules of QED. The inclusion of a phase exponent into the wave function acquires a physical sense when the strong interaction dominates the process. We apply the Green's function (GF) formalism to describe disintegration processes. A generalized gauge-invariant electro-breakup amplitude is considered; it is a sum of the traditional pole series and a regular part. We explore the contributions of the regular part of the amplitude and their physical sense. The transition from virtual to real photons is considered in the photon-point limit. A general analysis of the electro-breakup process of a compound scalar system is given. Precisely conserved nuclear electromagnetic currents at arbitrary squared momentum transfer are obtained. The only undefined quantity in the theory is the vertex function. We therefore have the possibility to describe electron scattering processes taking into account the minimal necessary set of parameters.
Covariance among multiple health risk behaviors in adolescents.
Directory of Open Access Journals (Sweden)
Kayla de la Haye
In a diverse group of early adolescents, this study explores the co-occurrence of a broad range of health risk behaviors: alcohol, cigarette, and marijuana use; physical inactivity; sedentary computing/gaming; and the consumption of low-nutrient energy-dense food. We tested differences in the associations of unhealthy behaviors over time, and by gender, race/ethnicity, and socioeconomic status. Participants were 8360 students from 16 middle schools in California (50% female; 52% Hispanic, 17% Asian, 16% White, and 15% Black/multiethnic/other). Behaviors were measured with surveys in Spring 2010 and Spring 2011. Confirmatory factor analysis was used to assess whether an underlying factor accounted for the covariance of multiple behaviors, and composite reliability methods were used to determine the degree to which behaviors were related. The measured behaviors were explained by two moderately correlated factors: a 'substance use risk factor' and an 'unhealthy eating and sedentary factor'. Physical inactivity did not reflect the latent factors as expected. There were few differences in the associations among these behaviors over time or by demographic characteristics. Two distinct, yet related, groups of health-compromising behaviors were identified that could be jointly targeted in multiple health behavior change interventions among early adolescents of diverse backgrounds.
Weighted mean method for eddy covariance flux measurement
Kim, W.; Cho, J.; Seo, H.; Oki, T.
2013-12-01
Studies monitoring the exchange of energy, water vapor and carbon dioxide between the atmosphere and terrestrial ecosystems have been carried out with the eddy covariance method throughout the world. The monitored exchange quantity, the flux F, is conventionally determined as a mean over a 1 hr or 30 min interval, because no technique has been established to directly measure an instantaneous F at a single point in time. Posterior analyses involving spatial or temporal averaging or summation of such samples should therefore take the sampling uncertainty into account. In particular, averaging by the arithmetic mean Fa may be inappropriate, because the samples F entering the average are of nonidentical inherent quality, reflecting different micrometeorological and ecophysiological conditions even when observed with the same instruments. To overcome this issue, we propose the weighted mean Fw, which uses a relative sampling error estimated from each sample F and its error, and we present the performance of Fw tested with EC measurements taken over 3 years at a tangerine orchard.
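A minimal sketch of the weighting idea, assuming inverse-variance weights built from per-sample error estimates (the paper derives its weights from a relative sampling error; the flux values and errors below are illustrative):

```python
import numpy as np

def weighted_mean_flux(F, sigma):
    """Inverse-variance weighted mean: noisy samples are down-weighted."""
    w = 1.0 / np.asarray(sigma, dtype=float) ** 2
    return np.sum(w * F) / np.sum(w)

F = np.array([0.21, 0.25, 0.60])      # hypothetical 30-min flux samples
sigma = np.array([0.02, 0.02, 0.30])  # large error on the third sample

Fa = F.mean()                         # arithmetic mean, pulled up by the outlier
Fw = weighted_mean_flux(F, sigma)     # stays close to the reliable samples
```

Here Fa ≈ 0.353 while Fw ≈ 0.231, showing how a sample of poor inherent quality dominates the arithmetic mean but barely moves the weighted one.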
Photonic quantum simulator for unbiased phase covariant cloning
Knoll, Laura T.; López Grande, Ignacio H.; Larotonda, Miguel A.
2018-01-01
We present the results of a linear optics photonic implementation of a quantum circuit that simulates a phase covariant cloner, using two different degrees of freedom of a single photon. We experimentally simulate the action of two mirrored 1 → 2 cloners, each of them biasing the cloned states into opposite regions of the Bloch sphere. We show that by applying a random sequence of these two cloners, an eavesdropper can mitigate the amount of noise added to the original input state and therefore, prepare clones with no bias, but with the same individual fidelity, masking its presence in a quantum key distribution protocol. Input polarization qubit states are cloned into path qubit states of the same photon, which is identified as a potential eavesdropper in a quantum key distribution protocol. The device has the flexibility to produce mirrored versions that optimally clone states on either the northern or southern hemispheres of the Bloch sphere, as well as to simulate optimal and non-optimal cloning machines by tuning the asymmetry on each of the cloning machines.
Worldline construction of a covariant chiral kinetic theory
Mueller, Niklas; Venugopalan, Raju
2017-07-01
We discuss a novel worldline framework for computations of the chiral magnetic effect (CME) in ultrarelativistic heavy-ion collisions. Starting from the fermion determinant in the QCD effective action, we show explicitly how its real part can be expressed as a supersymmetric worldline action of spinning, colored, Grassmannian particles in background fields. Restricting ourselves for simplicity to spinning particles, we demonstrate how their constrained Hamiltonian dynamics arises for both massless and massive particles. In a semiclassical limit, this gives rise to the covariant generalization of the Bargmann-Michel-Telegdi equation; the derivation of the corresponding Wong equations for colored particles is straightforward. In a previous paper [N. Mueller and R. Venugopalan, arXiv:1701.03331.], we outlined how Berry's phase arises in a nonrelativistic adiabatic limit for massive particles. We extend the discussion here to systems with a finite chemical potential. We discuss a path integral formulation of the relative phase in the fermion determinant that places it on the same footing as the real part. We construct the corresponding anomalous worldline axial-vector current and show in detail how the chiral anomaly appears. Our work provides a systematic framework for a relativistic kinetic theory of chiral fermions in the fluctuating topological backgrounds that generate the CME in a deconfined quark-gluon plasma. We outline some further applications of this framework in many-body systems.
On covariant Poisson brackets in classical field theory
Energy Technology Data Exchange (ETDEWEB)
Forger, Michael [Instituto de Matemática e Estatística, Universidade de São Paulo, Caixa Postal 66281, BR–05315-970 São Paulo, SP (Brazil); Salles, Mário O. [Instituto de Matemática e Estatística, Universidade de São Paulo, Caixa Postal 66281, BR–05315-970 São Paulo, SP (Brazil); Centro de Ciências Exatas e da Terra, Universidade Federal do Rio Grande do Norte, Campus Universitário – Lagoa Nova, BR–59078-970 Natal, RN (Brazil)
2015-10-15
How to give a natural geometric definition of a covariant Poisson bracket in classical field theory has for a long time been an open problem—as testified by the extensive literature on “multisymplectic Poisson brackets,” together with the fact that all these proposals suffer from serious defects. On the other hand, the functional approach does provide a good candidate which has come to be known as the Peierls–De Witt bracket and whose construction in a geometrical setting is now well understood. Here, we show how the basic “multisymplectic Poisson bracket” already proposed in the 1970s can be derived from the Peierls–De Witt bracket, applied to a special class of functionals. This relation allows one to trace back most (if not all) of the problems encountered in the past to ambiguities (the relation between differential forms on multiphase space and the functionals they define is not one-to-one) and also to the fact that this class of functionals does not form a Poisson subalgebra.
Covariance adjustment for batch effect in gene expression data.
Lee, Jung Ae; Dobbin, Kevin K; Ahn, Jeongyoun
2014-07-10
Batch bias has been found in many microarray gene expression studies that involve multiple batches of samples. A serious batch effect can alter not only the distribution of individual genes but also the inter-gene relationships. Even though some efforts have been made to remove such bias, there has been relatively less development on a multivariate approach, mainly because of the analytical difficulty due to the high-dimensional nature of gene expression data. We propose a multivariate batch adjustment method that effectively eliminates inter-gene batch effects. The proposed method utilizes high-dimensional sparse covariance estimation based on a factor model and hard thresholding. Another important aspect of the proposed method is that if it is known that one of the batches is produced in a superior condition, the other batches can be adjusted so that they resemble the target batch. We study high-dimensional asymptotic properties of the proposed estimator and compare the performance of the proposed method with some popular existing methods using simulated data and gene expression data sets. Copyright © 2014 John Wiley & Sons, Ltd.
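A minimal sketch of the hard-thresholding ingredient of such sparse covariance estimators (the threshold value and the simulated data below are illustrative assumptions; the paper combines thresholding with a factor model and a data-driven threshold choice):

```python
import numpy as np

def hard_threshold_cov(X, thresh):
    """Sparse covariance estimate: zero out small off-diagonal entries."""
    S = np.cov(X, rowvar=False)          # samples in rows, variables in columns
    T = np.where(np.abs(S) >= thresh, S, 0.0)
    np.fill_diagonal(T, np.diag(S))      # variances are never thresholded
    return T

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 8))        # 200 samples, 8 hypothetical "genes"
X[:, 1] = X[:, 0] + 0.1 * rng.standard_normal(200)   # one strongly coupled pair
T = hard_threshold_cov(X, thresh=0.5)
```

The spurious small sample covariances between independent variables are set to zero, while the one genuine strong dependence survives the threshold.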
A general field-covariant formulation of quantum field theory
Energy Technology Data Exchange (ETDEWEB)
Anselmi, Damiano [Università di Pisa, Dipartimento di Fisica "Enrico Fermi", Pisa (Italy)]
2013-03-15
In all nontrivial cases renormalization, as it is usually formulated, is not a change of integration variables in the functional integral, plus parameter redefinitions, but a set of replacements, of actions and/or field variables and parameters. Because of this, we cannot write simple identities relating bare and renormalized generating functionals, or generating functionals before and after nonlinear changes of field variables. In this paper we investigate this issue and work out a general field-covariant approach to quantum field theory, which allows us to treat all perturbative changes of field variables, including the relation between bare and renormalized fields, as true changes of variables in the functional integral, under which the functionals Z and W=lnZ behave as scalars. We investigate the relation between composite fields and changes of field variables, and we show that, if J are the sources coupled to the elementary fields, all changes of field variables can be expressed as J-dependent redefinitions of the sources L coupled to the composite fields. We also work out the relation between the renormalization of variable-changes and the renormalization of composite fields. Using our transformation rules it is possible to derive the renormalization of a theory in a new variable frame from the renormalization in the old variable frame, without having to calculate it anew. We define several approaches, useful for different purposes, in particular a linear approach where all variable changes are described as linear source redefinitions. We include a number of explicit examples. (orig.)
Covariant Spectator Theory of np scattering: Isoscalar interaction currents
Energy Technology Data Exchange (ETDEWEB)
Gross, Franz L. [JLAB
2014-06-01
Using the Covariant Spectator Theory (CST), one boson exchange (OBE) models have been found that give precision fits to low energy $np$ scattering and the deuteron binding energy. The boson-nucleon vertices used in these models contain a momentum dependence that requires a new class of interaction currents for use with electromagnetic interactions. Current conservation requires that these new interaction currents satisfy a two-body Ward-Takahashi (WT) identity, and using principles of simplicity and picture independence, these currents can be uniquely determined. The results lead to general formulae for a two-body current that can be expressed in terms of relativistic $np$ wave functions, ${\it \Psi}$, and two convenient truncated wave functions, ${\it \Psi}^{(2)}$ and $\widehat {\it \Psi}$, which contain all of the information needed for the explicit evaluation of the contributions from the interaction current. These three wave functions can be calculated from the CST bound or scattering state equations (and their off-shell extrapolations). A companion paper uses this formalism to evaluate the deuteron magnetic moment.
Empirical likelihood for cumulative hazard ratio estimation with covariate adjustment.
Dong, Bin; Matthews, David E
2012-06-01
In medical studies, it is often of scientific interest to evaluate the treatment effect via the ratio of cumulative hazards, especially when those hazards may be nonproportional. To deal with nonproportionality in the Cox regression model, investigators usually assume that the treatment effect has some functional form. However, to do so may create a model misspecification problem because it is generally difficult to justify the specific parametric form chosen for the treatment effect. In this article, we employ empirical likelihood (EL) to develop a nonparametric estimator of the cumulative hazard ratio with covariate adjustment under two nonproportional hazard models, one that is stratified, as well as a less restrictive framework involving group-specific treatment adjustment. The asymptotic properties of the EL ratio statistic are derived in each situation and the finite-sample properties of EL-based estimators are assessed via simulation studies. Simultaneous confidence bands for all values of the adjusted cumulative hazard ratio in a fixed interval of interest are also developed. The proposed methods are illustrated using two different datasets concerning the survival experience of patients with non-Hodgkin's lymphoma or ovarian cancer. © 2011, The International Biometric Society.
Profiles of aggression among psychiatric patients. II. Covariates and predictors.
Kay, S R; Wolkenfeld, F; Murrill, L M
1988-09-01
An Aggression Risk Profile was developed as an objective multidimensional scale for characterizing aggressive psychiatric patients and predicting verbal, physical, and general manifestations of aggression. Based on earlier studies, the 39-item Aggression Risk Profile incorporated demographic, diagnostic, historical, and clinical parameters. Its reliability, discriminative validity, and predictive validity were supported in its application to a total of 208 inpatients. Aggressive patients were more often found to be men, to be diagnosed with organic mental syndrome or substance abuse disorder, and to be notable for a history of aggression. They tended to be angry and excitable but not more floridly ill than control subjects. The contemporaneous covariates of aggression, however, were not the same as the predictors, as determined by 3-month prospective follow-up. Twelve significant predictors were identified, and multiple regression analysis revealed different sets of measures that explain 45.0% to 52.5% of the variance for verbal, physical, and total aggression. The most reliable predictors were younger age, shorter length of illness, hostility, depression, anger, and difficulty in delaying gratification. We concluded that prediction is augmented by the combination of clinical and nonclinical predictors, and we discussed likely sources of disparity in previous research.
Covariant gaussian approximation in Ginzburg-Landau model
Wang, J. F.; Li, D. P.; Kao, H. C.; Rosenstein, B.
2017-05-01
Condensed matter systems undergoing a second order transition away from the critical fluctuation region are usually described sufficiently well by the mean field approximation. The critical fluctuation region, determined by the Ginzburg criterion, |T/Tc - 1| ≪ Gi, is narrow even in high Tc superconductors and has universal features well captured by the renormalization group method. However, recent experiments on magnetization, conductivity and the Nernst effect suggest that fluctuation effects are large in a wider region both above and below Tc. In particular some "pseudogap" phenomena and the strong renormalization of the mean field critical temperature Tmf can be interpreted as strong fluctuation effects that are nonperturbative (cannot be accounted for by "gaussian fluctuations"). The physics in a broader region therefore requires a more accurate approach. Self-consistent methods are generally "non-conserving" in the sense that the Ward identities are not obeyed. This is especially detrimental in the symmetry broken phase where, for example, Goldstone bosons become massive. The covariant gaussian approximation remedies these problems. The Green's functions obey all the Ward identities and describe the fluctuations much better. The results for the order parameter correlator and magnetic penetration depth of the Ginzburg-Landau model of superconductivity are compared with both Monte Carlo simulations and experiments in high Tc cuprates.
Deift, Percy
2009-01-01
This book features a unified derivation of the mathematical theory of the three classical types of invariant random matrix ensembles-orthogonal, unitary, and symplectic. The authors follow the approach of Tracy and Widom, but the exposition here contains a substantial amount of additional material, in particular, facts from functional analysis and the theory of Pfaffians. The main result in the book is a proof of universality for orthogonal and symplectic ensembles corresponding to generalized Gaussian type weights following the authors' prior work. New, quantitative error estimates are derived.
Directory of Open Access Journals (Sweden)
Abdelhakim Chillali
2017-05-01
Full Text Available In classical cryptography, the Hill cipher is a polygraphic substitution cipher based on linear algebra. In this work, we propose a new problem applicable to public key cryptography, based on matrices, called the “matrix discrete logarithm problem”; it uses certain elements formed by matrices whose coefficients are elements of a finite field. We construct an abelian group and, for the cryptographic part in this group, perform the computation corresponding to the algebraic equations, returning the encrypted result to a receiver. Upon receipt of the result, the receiver can retrieve the sender’s clear message by performing the inverse calculation.
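The Hill-cipher starting point of this abstract can be made concrete. The sketch below is a minimal classical Hill cipher over Z_26 with a 2×2 key (the key values are arbitrary illustrative choices); it does not reproduce the authors' matrix-discrete-logarithm construction, only the encrypt/invert pattern the abstract builds on.

```python
M = 26  # alphabet size, arithmetic in Z_26

def encrypt_block(K, v):
    """Multiply a 2-vector v by the 2x2 key matrix K over Z_26."""
    return [(K[0][0] * v[0] + K[0][1] * v[1]) % M,
            (K[1][0] * v[0] + K[1][1] * v[1]) % M]

def inverse_key(K):
    """2x2 inverse via the adjugate; requires gcd(det, 26) == 1."""
    det = (K[0][0] * K[1][1] - K[0][1] * K[1][0]) % M
    d = pow(det, -1, M)  # modular inverse of the determinant
    return [[( K[1][1] * d) % M, (-K[0][1] * d) % M],
            [(-K[1][0] * d) % M, ( K[0][0] * d) % M]]

K = [[3, 3], [2, 5]]                  # example invertible key
c = encrypt_block(K, [7, 8])          # plaintext block "HI" as [7, 8]
p = encrypt_block(inverse_key(K), c)  # receiver's "inverse calculation"
```

Decryption here is exactly the "inverse calculation" the abstract describes: the receiver applies the inverse key matrix to recover the plaintext block.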
Matrix string partition function
Kostov, Ivan K; Kostov, Ivan K.; Vanhove, Pierre
1998-01-01
We evaluate quasiclassically the Ramond partition function of Euclidean D=10 U(N) super Yang-Mills theory reduced to a two-dimensional torus. The result can be interpreted in terms of free strings wrapping the space-time torus, as expected from the point of view of Matrix string theory. We demonstrate that, when extrapolated to the ultraviolet limit (small area of the torus), the quasiclassical expressions reproduce exactly the recently obtained expression for the partition function of the completely reduced SYM theory, including the overall numerical factor. This is evidence that our quasiclassical calculation might be exact.
Eisenman, Richard L
2005-01-01
This outstanding text and reference applies matrix ideas to vector methods, using physical ideas to illustrate and motivate mathematical concepts but employing a mathematical continuity of development rather than a physical approach. The author, who taught at the U.S. Air Force Academy, dispenses with the artificial barrier between vectors and matrices--and more generally, between pure and applied mathematics. Motivated examples introduce each idea, with interpretations of physical, algebraic, and geometric contexts, in addition to generalizations to theorems that reflect the essential structure.
Matrix algebra for linear models
Gruber, Marvin H J
2013-01-01
Matrix methods have evolved from a tool for expressing statistical problems to an indispensable part of the development, understanding, and use of various types of complex statistical analyses. This evolution has made matrix methods a vital part of statistical education. Traditionally, matrix methods are taught in courses on everything from regression analysis to stochastic processes, thus creating a fractured view of the topic. Matrix Algebra for Linear Models offers readers a unique, unified view of matrix analysis theory (where and when necessary), methods, and their applications. Written f
A Computer Aided Statistical Covariance Program for Missile System Analysis
1974-04-01
[The abstract of this record is an OCR fragment of the program's Fortran source listing (matrix-element assignments and COMMON blocks); the original text is not recoverable.]
Exploring multicollinearity using a random matrix theory approach.
Feher, Kristen; Whelan, James; Müller, Samuel
2012-01-01
Clustering of gene expression data is often done with the latent aim of dimension reduction, by finding groups of genes that have a common response to potentially unknown stimuli. However, what is poorly understood to date is the behaviour of a low dimensional signal embedded in high dimensions. This paper introduces a multicollinear model which is based on random matrix theory results, and shows potential for the characterisation of a gene cluster's correlation matrix. This model projects a one dimensional signal into many dimensions and is based on the spiked covariance model, but rather characterises the behaviour of the corresponding correlation matrix. The eigenspectrum of the correlation matrix is empirically examined by simulation, under the addition of noise to the original signal. The simulation results are then used to propose a dimension estimation procedure of clusters from data. Moreover, the simulation results warn against considering pairwise correlations in isolation, as the model provides a mechanism whereby a pair of genes with 'low' correlation may simply be due to the interaction of high dimension and noise. Instead, collective information about all the variables is given by the eigenspectrum.
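The spiked-model mechanism described in this abstract can be sketched in a few lines of NumPy. The sample size, dimension, noise scale, and seed below are illustrative assumptions, not the paper's simulation settings: a one-dimensional signal is projected into 50 dimensions, and the eigenspectrum of the resulting correlation matrix shows a single dominant eigenvalue over a noise bulk.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, sigma = 500, 50, 0.5               # samples, dimensions, noise scale
s = rng.standard_normal(n)               # one-dimensional latent signal
b = rng.standard_normal(p)               # loadings projecting s into p dims
X = np.outer(s, b) + sigma * rng.standard_normal((n, p))

R = np.corrcoef(X, rowvar=False)         # p x p sample correlation matrix
eig = np.sort(np.linalg.eigvalsh(R))[::-1]
# eig[0] is a single large "spike"; eig[1:] form the noise bulk, even
# though every variable carries the same underlying signal
```

Inspecting R entrywise also illustrates the abstract's warning: a pair of variables with small loadings can show "low" pairwise correlation despite sharing the signal, which only the collective eigenspectrum reveals.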
Robust estimation of the correlation matrix of longitudinal data
Maadooliat, Mehdi
2011-09-23
We propose a double-robust procedure for modeling the correlation matrix of a longitudinal dataset. It is based on an alternative Cholesky decomposition of the form Σ = DLL⊤D, where D is a diagonal matrix proportional to the square roots of the diagonal entries of Σ and L is a unit lower-triangular matrix determining solely the correlation matrix. The first robustness is with respect to model misspecification for the innovation variances in D, and the second is robustness to outliers in the data. The latter is handled using heavy-tailed multivariate t-distributions with unknown degrees of freedom. We develop a Fisher scoring algorithm for computing the maximum likelihood estimator of the parameters when the nonredundant and unconstrained entries of (L, D) are modeled parsimoniously using covariates. We compare our results with those based on the modified Cholesky decomposition of the form LD²L⊤ using simulations and a real dataset. © 2011 Springer Science+Business Media, LLC.
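Both factorizations compared in this abstract can be obtained from an ordinary Cholesky factor by scaling with its diagonal on one side or the other. The toy covariance matrix below is an assumed example, and the sketch only shows the algebra of the two forms; the paper itself parameterizes the entries of (L, D) via covariates rather than computing them from a known Σ.

```python
import numpy as np

Sigma = np.array([[4.0, 1.2, 0.6],       # toy positive-definite covariance
                  [1.2, 9.0, 2.1],
                  [0.6, 2.1, 1.0]])

C = np.linalg.cholesky(Sigma)            # ordinary factor: Sigma = C C^T
d = np.diag(C)                           # innovation standard deviations
D = np.diag(d)

L_alt = C / d[:, None]   # rows scaled   -> unit lower-triangular L
L_mod = C / d[None, :]   # columns scaled -> unit lower-triangular L

# alternative decomposition of the abstract: Sigma = D L L^T D
assert np.allclose(D @ L_alt @ L_alt.T @ D, Sigma)
# modified decomposition it is compared against: Sigma = L D^2 L^T
assert np.allclose(L_mod @ D**2 @ L_mod.T, Sigma)
```

The contrast between the two is just which side of the triangular factor absorbs the scales D: row scaling gives the Σ = DLL⊤D form, column scaling the classical modified Cholesky LD²L⊤.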
Accounting for Sampling Error in Genetic Eigenvalues Using Random Matrix Theory.
Sztepanacz, Jacqueline L; Blows, Mark W
2017-07-01
The distribution of genetic variance in multivariate phenotypes is characterized by the empirical spectral distribution of the eigenvalues of the genetic covariance matrix. Empirical estimates of genetic eigenvalues from random effects linear models are known to be overdispersed by sampling error, where large eigenvalues are biased upward, and small eigenvalues are biased downward. The overdispersion of the leading eigenvalues of sample covariance matrices has been demonstrated to conform to the Tracy-Widom (TW) distribution. Here we show that genetic eigenvalues estimated using restricted maximum likelihood (REML) in a multivariate random effects model with an unconstrained genetic covariance structure will also conform to the TW distribution after empirical scaling and centering. However, where estimation procedures using either REML or MCMC impose boundary constraints, the resulting genetic eigenvalues tend not to be TW distributed. We show how using confidence intervals from sampling distributions of genetic eigenvalues without reference to the TW distribution is insufficient protection against mistaking sampling error as genetic variance, particularly when eigenvalues are small. By scaling such sampling distributions to the appropriate TW distribution, the critical value of the TW statistic can be used to determine if the magnitude of a genetic eigenvalue exceeds the sampling error for each eigenvalue in the spectral distribution of a given genetic covariance matrix. Copyright © 2017 by the Genetics Society of America.
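The eigenvalue overdispersion that motivates the TW correction in this abstract is easy to reproduce. In the sketch below, data are drawn with an identity covariance, so every true eigenvalue equals 1, yet the sample eigenvalues spread well above and below 1. The sizes and seed are illustrative, and this is plain multivariate sampling, not a REML genetic analysis.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 200, 20                      # individuals, traits
X = rng.standard_normal((n, p))     # true covariance is the identity
S = np.cov(X, rowvar=False)         # p x p sample covariance
eig = np.linalg.eigvalsh(S)
# purely by sampling error, the largest eigenvalues are biased upward
# and the smallest downward, even though all true eigenvalues are 1
spread = (eig.min(), eig.max())
```

This is exactly the pattern the abstract warns about: without a reference distribution such as TW for the leading eigenvalue, the excess of eig.max() over 1 could be mistaken for real (genetic) variance.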
R-matrix Analysis of the 239Pu Neutron Cross Sections
de Saussure, G.; Perez, R. B.; Macklin, R. L.
Pu-239 neutron cross-section data in the resolved resonance region were analyzed with the R-Matrix Bayesian Program SAMMY. Below 30 eV the cross sections computed with the multilevel parameters are consistent with recent fission and transmission measurements as well as with older capture and alpha measurements. Above 30 eV no suitable transmission data were available and only fission cross-section measurements were analyzed. However, since the analysis conserves the complete covariance matrix, the analysis can be updated by the Bayes method as transmission measurements become available. To date, the analysis of the fission measurements was completed up to 300 eV.
On the Question of the Scattering Matrix in Quantum Field Theory
Khokhlov, I. A.
2017-06-01
A new expression for the scattering matrix in local quantum field theory is obtained on the basis of a generalized formulation of the microcausality condition. This expression is relativistically covariant, unitary, and causal, and coincides with the generally accepted expression (the T-exponential) in the case when ultraviolet (UV) divergences are absent in the latter. Conditions are formulated under which the elements of the constructed matrix do not contain UV divergences. By way of an example, the probability amplitude of the particle-particle transition for a model with ℒ(x) = λ:φ³(x): is found in second-order perturbation theory.
A neural circuit covarying with social hierarchy in macaques.
Directory of Open Access Journals (Sweden)
MaryAnn P Noonan
2014-09-01
Full Text Available Despite widespread interest in social dominance, little is known of its neural correlates in primates. We hypothesized that social status in primates might be related to individual variation in subcortical brain regions implicated in other aspects of social and emotional behavior in other mammals. To examine this possibility we used magnetic resonance imaging (MRI), which affords the taking of quantitative measurements noninvasively, both of brain structure and of brain function, across many regions simultaneously. We carried out a series of tests of structural and functional MRI (fMRI) data in 25 group-living macaques. First, a deformation-based morphometric (DBM) approach was used to show that gray matter in the amygdala, brainstem in the vicinity of the raphe nucleus, and reticular formation, hypothalamus, and septum/striatum of the left hemisphere was correlated with social status. Second, similar correlations were found in the same areas in the other hemisphere. Third, similar correlations were found in a second data set acquired several months later from a subset of the same animals. Fourth, the strength of coupling between fMRI-measured activity in the same areas was correlated with social status. The network of subcortical areas, however, had no relationship with the sizes of individuals' social networks, suggesting the areas had a simple and direct relationship with social status. By contrast, a second circuit in cortex, comprising the mid-superior temporal sulcus and anterior and dorsal prefrontal cortex, covaried with both individuals' social statuses and the social network sizes they experienced. This cortical circuit may be linked to the social cognitive processes that are taxed by life in more complex social networks and that must also be used if an animal is to achieve a high social status.
Regional Scaling of Airborne Eddy Covariance Flux Observation
Sachs, T.; Serafimovich, A.; Metzger, S.; Kohnert, K.; Hartmann, J.
2014-12-01
The earth's surface is tightly coupled to the global climate system by the vertical exchange of energy and matter. Thus, to better understand and potentially predict changes to our climate system, it is critical to quantify the surface-atmosphere exchange of heat, water vapor, and greenhouse gases on climate-relevant spatial and temporal scales. Currently, most flux observations consist of ground-based, continuous but local measurements. These provide a good basis for temporal integration, but may not be representative of the larger regional context. This is particularly true for the Arctic, where site selection is additionally bound by logistical constraints, among others. Airborne measurements can overcome this limitation by covering distances of hundreds of kilometers over time periods of a few hours. The Airborne Measurements of Methane Fluxes (AIRMETH) campaigns are designed to quantitatively and spatially explicitly address this issue: The research aircraft POLAR 5 is used to acquire thousands of kilometers of eddy-covariance flux data. During the AIRMETH-2012 and AIRMETH-2013 campaigns we measured the turbulent exchange of energy, methane, and (in 2013) carbon dioxide over the North Slope of Alaska, USA, and the Mackenzie Delta, Canada. Here, we present the potential of environmental response functions (ERFs) for quantitatively linking flux observations to meteorological and biophysical drivers in the flux footprints. We use wavelet transforms of the original high-frequency data to improve spatial discretization of the flux observations. This also enables the quantification of continuous and biophysically relevant land cover properties in the flux footprint of each observation. A machine learning technique is then employed to extract and quantify the functional relationships between flux observations and the meteorological and biophysical drivers. The resulting ERFs are used to extrapolate fluxes over spatio-temporally explicit grids of the study area. The
Jump robust two time scale covariance estimation and realized volatility budgets
Boudt, K.M.R.; Zhang, J.
2015-01-01
We estimate the daily integrated variance and covariance of stock returns using high-frequency data in the presence of jumps, market microstructure noise and non-synchronous trading. For this we propose jump robust two time scale (co)variance estimators and verify their reduced bias and mean square
The Importance of Covariate Selection in Controlling for Selection Bias in Observational Studies
Steiner, Peter M.; Cook, Thomas D.; Shadish, William R.; Clark, M. H.
2010-01-01
The assumption of strongly ignorable treatment assignment is required for eliminating selection bias in observational studies. To meet this assumption, researchers often rely on a strategy of selecting covariates that they think will control for selection bias. Theory indicates that the most important covariates are those highly correlated with…
Integrated covariance estimation using high-frequency data in the presence of noise
DEFF Research Database (Denmark)
Voev, Valeri; Lunde, Asger
2007-01-01
We analyze the effects of nonsynchronicity and market microstructure noise on realized covariance type estimators. Hayashi and Yoshida (2005) propose a simple estimator that resolves the problem of nonsynchronicity and is unbiased and consistent for the integrated covariance in the absence of noise...
On Inclusion of Covariates for Class Enumeration of Growth Mixture Models
Li, Libo; Hser, Yih-Ing
2011-01-01
In this article, we directly question the common practice in growth mixture model (GMM) applications that exclusively rely on the fitting model without covariates for GMM class enumeration. We provide theoretical and simulation evidence to demonstrate that exclusion of covariates from GMM class enumeration could be problematic in many cases. Based…
Robustness studies in covariance structure modeling - An overview and a meta-analysis
Hoogland, Jeffrey J.; Boomsma, A
In covariance structure modeling, several estimation methods are available. The robustness of an estimator against specific violations of assumptions can be determined empirically by means of a Monte Carlo study. Many such studies in covariance structure analysis have been published, but the
Algal-bacterial co-variation in streams: a cross-stream comparison
Xueqing Gao; Ola A. Olapade; Mark W. Kershner; Laura G. Leff
2004-01-01
Algal-bacterial co-variation has been frequently observed in lentic and marine environments, but the existence of such relationships in lotic ecosystems is not well established. To examine possible co-variation, bacterial number and chlorophyll-a concentration in water and sediments of nine streams from different regions in the USA were examined. In the water, a strong...
Hamiltonian approach to GR - Part 1: covariant theory of classical gravity
Cremaschini, Claudio
2016-01-01
A challenging issue in General Relativity concerns the determination of the manifestly-covariant continuum Hamiltonian structure underlying the Einstein field equations and the related formulation of the corresponding covariant Hamilton-Jacobi theory. The task is achieved by adopting a synchronous variational principle requiring distinction between the prescribed deterministic metric tensor $\hat{g}(r)\equiv \left\{ \hat{g}_{\mu \nu }(r)\right\}$
Cortisol covariation within parents of young children: Moderation by relationship aggression.
Saxbe, Darby E; Adam, Emma K; Schetter, Christine Dunkel; Guardino, Christine M; Simon, Clarissa; McKinney, Chelsea O; Shalowitz, Madeleine U
2015-12-01
Covariation in diurnal cortisol has been observed in several studies of cohabiting couples. In two such studies (Liu et al., 2013; Saxbe and Repetti, 2010), relationship distress was associated with stronger within-couple correlations, suggesting that couples' physiological linkage with each other may indicate problematic dyadic functioning. Although intimate partner aggression has been associated with dysregulation in women's diurnal cortisol, it has not yet been tested as a moderator of within-couple covariation. This study reports on a diverse sample of 122 parents who sampled salivary cortisol on matched days for two years following the birth of an infant. Partners showed strong positive cortisol covariation. In couples with higher levels of partner-perpetrated aggression reported by women at one year postpartum, both women and men had a flatter diurnal decrease in cortisol and stronger correlations with partners' cortisol sampled at the same timepoints. In other words, relationship aggression was linked both with indices of suboptimal cortisol rhythms in both members of the couples and with stronger within-couple covariation coefficients. These results persisted when relationship satisfaction and demographic covariates were included in the model. During some of the sampling days, some women were pregnant with a subsequent child, but pregnancy did not significantly moderate cortisol levels or within-couple covariation. The findings suggest that couples experiencing relationship aggression have both suboptimal neuroendocrine profiles and stronger covariation. Cortisol covariation is an understudied phenomenon with potential implications for couples' relationship functioning and physical health. Copyright © 2015 Elsevier Ltd. All rights reserved.
Li, Ming; Harring, Jeffrey R.
2017-01-01
Researchers continue to be interested in efficient, accurate methods of estimating coefficients of covariates in mixture modeling. Including covariates related to the latent class analysis not only may improve the ability of the mixture model to clearly differentiate between subjects but also makes interpretation of latent group membership more…
Estimation of covariances of Cr and Ni neutron nuclear data in JENDL-3.2
Energy Technology Data Exchange (ETDEWEB)
Shibata, Keiichi [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment; Oh, Soo Youl [Korea Atomic Energy Research Institute, Taejon (Korea)
2000-02-01
Covariances of nuclear data have been estimated for two nuclides contained in JENDL-3.2. The nuclides considered are Cr and Ni, which are regarded as important for the nuclear design study of fast reactors. The physical quantities for which covariances are deduced are cross sections and the first order Legendre-polynomial coefficient for the angular distribution of elastically scattered neutrons. The covariances were estimated by using the same methodology that had been used in the JENDL-3.2 evaluation in order to keep consistency between mean values and their covariances. The least-squares fitting code GMA was used in estimating covariances for reactions of which JENDL-3.2 cross sections had been evaluated by taking account of measurements. Covariances of nuclear model calculations were deduced by using the KALMAN system. The covariance data obtained were compiled in the ENDF-6 format, and will be put into the JENDL-3.2 Covariance File, which is one of the JENDL special purpose files. (author)
Nimon, Kim; Henson, Robin K.
2015-01-01
The authors empirically examined whether the validity of a residualized dependent variable after covariance adjustment is comparable to that of the original variable of interest. When variance of a dependent variable is removed as a result of one or more covariates, the residual variance may not reflect the same meaning. Using the pretest-posttest…